This app is an all-in-one package that provides everything for SAP HANA lovers.
1. Courses on SAP HANA - Basics, Modeling and Administration
2. Multiple quizzes on Overview, Modeling, Architecture, and Administration
3. Most popular articles on SAP HANA
4. Series of interview questions to brush up your HANA skills
The SAP HANA database is developed in C++ and runs on SUSE Linux Enterprise Server. It consists of multiple servers, the most important of which is the Index Server; the full set comprises the Index Server, Name Server, Statistics Server, Preprocessor Server and XS Engine.
The Index Server is the main SAP HANA database component. It contains the actual data stores and the engines for processing the data.
The index server processes incoming SQL or MDX statements in the context of authenticated sessions and transactions.
The database persistence layer is responsible for durability and atomicity of transactions. It ensures that the database can be restored to the most recent committed state after a restart and that transactions are either completely executed or completely undone.
The index server uses the preprocessor server for analyzing text data and extracting the information on which the text search capabilities are based.
The name server owns the information about the topology of SAP HANA system. In a distributed system, the name server knows where the components are running and which data is located on which server.
The statistics server collects information about status, performance and resource consumption from the other servers in the system. The statistics server also provides a history of measurement data for further analysis.
Session and Transaction Manager:
The Transaction Manager coordinates database transactions and keeps track of running and closed transactions. When a transaction is committed or rolled back, the Transaction Manager informs the involved storage engines about this event so they can execute the necessary actions.
The XS Engine is an optional component. Using the XS Engine, clients can connect to the SAP HANA database to fetch data via HTTP.
The heart of SAP HANA – Index Server
The SAP HANA Index Server contains the majority of the magic behind SAP HANA.
Connection and Session Management
This component is responsible for creating and managing sessions and connections for the database clients.
Once a session is established, clients can communicate with the SAP HANA database using SQL statements.
For each session, a set of parameters is maintained, such as the auto-commit mode and the current transaction isolation level.
Users are authenticated either by the SAP HANA database itself (login with user and password), or authentication can be delegated to an external provider such as an LDAP directory.
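The per-session state described above can be pictured with a minimal sketch. The class and attribute names here are purely illustrative, not SAP HANA's actual internal API:

```python
# Illustrative sketch: each session tracks its own parameters,
# such as auto-commit mode and transaction isolation level.
# Names are hypothetical, not SAP HANA internals.

class Session:
    def __init__(self, user, autocommit=True, isolation_level="READ COMMITTED"):
        self.user = user                      # authenticated user
        self.autocommit = autocommit          # per-session auto-commit flag
        self.isolation_level = isolation_level

    def set_parameter(self, name, value):
        # Changing a parameter affects only this session.
        setattr(self, name, value)

s = Session("ALICE")
s.set_parameter("isolation_level", "SERIALIZABLE")
```

Because each client session holds its own parameter set, two sessions of the same user can run with different isolation levels at the same time.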
The Authorization Manager
This component is invoked by other SAP HANA database components to check whether the user has the required privileges to execute the requested operations.
SAP HANA allows granting of privileges to users or roles. A privilege grants the right to perform a specified operation (such as create, update, select, execute, and so on) on a specified object (for example a table, view, SQLScript function, and so on).
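The grant model above can be sketched as a lookup of (operation, object) pairs held by a user directly or via a role. All names and data here are made up for illustration:

```python
# Hypothetical sketch of privilege-based authorization: privileges are
# granted to users or roles, and each privilege pairs an operation with
# an object. Grantee and object names are illustrative only.

grants = {
    "ANALYST_ROLE": {("SELECT", "SALES_VIEW"), ("EXECUTE", "REPORT_PROC")},
    "ALICE": {("CREATE", "SCHEMA_A")},
}
user_roles = {"ALICE": ["ANALYST_ROLE"]}

def is_authorized(user, operation, obj):
    # A user may hold a privilege directly or through one of their roles.
    grantees = [user] + user_roles.get(user, [])
    return any((operation, obj) in grants.get(g, set()) for g in grantees)

print(is_authorized("ALICE", "SELECT", "SALES_VIEW"))  # True, via role
print(is_authorized("ALICE", "DROP", "SALES_VIEW"))    # False
```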
The SAP HANA database supports Analytic Privileges that represent filters or hierarchy drilldown limitations for analytic queries. Analytic privileges grant access to values with a certain combination of dimension attributes. This is used to restrict access to a cube with some values of the dimensional attributes.
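Conceptually, an analytic privilege acts as a row filter over dimension attributes. A hedged sketch, with entirely made-up data and attribute names:

```python
# Hedged sketch: an analytic privilege as a filter on dimension
# attributes, restricting which slices of a cube a user may see.

rows = [
    {"region": "EMEA", "year": 2011, "revenue": 100},
    {"region": "APJ",  "year": 2011, "revenue": 80},
    {"region": "EMEA", "year": 2012, "revenue": 120},
]

# This analytic privilege grants access only to EMEA data for 2011.
analytic_privilege = {"region": {"EMEA"}, "year": {2011}}

def apply_analytic_privilege(rows, priv):
    # Keep only rows whose attribute values all fall in the allowed sets.
    return [r for r in rows
            if all(r[attr] in allowed for attr, allowed in priv.items())]

visible = apply_analytic_privilege(rows, analytic_privilege)
```

The query result the user sees is restricted to the permitted combination of dimension values, regardless of what the underlying cube contains.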
Request Processing and Execution Control:
The client requests are analyzed and executed by the set of components summarized as Request Processing and Execution Control. The Request Parser analyses the client request and dispatches it to the responsible component. The Execution Layer acts as the controller that invokes the different engines and routes intermediate results to the next execution step.
Incoming SQL requests are received by the SQL Processor. Data manipulation statements are executed by the SQL Processor itself.
Other types of requests are delegated to other components. Data definition statements are dispatched to the Metadata Manager, transaction control statements are forwarded to the Transaction Manager, planning commands are routed to the Planning Engine and procedure calls are forwarded to the stored procedure processor.
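The routing described above can be sketched as a simple classifier. The keyword check below is a crude stand-in for the real Request Parser, purely for illustration:

```python
# Illustrative sketch of request dispatching: the parser classifies a
# statement and routes it to the responsible component. The keyword-based
# classification here is a simplification, not HANA's actual parser.

def dispatch(statement):
    verb = statement.strip().split()[0].upper()
    if verb in ("SELECT", "INSERT", "UPDATE", "DELETE"):
        return "SQL Processor"              # data manipulation
    if verb in ("CREATE", "ALTER", "DROP"):
        return "Metadata Manager"           # data definition
    if verb in ("COMMIT", "ROLLBACK"):
        return "Transaction Manager"        # transaction control
    if verb == "CALL":
        return "Stored Procedure Processor" # procedure calls
    return "Execution Layer"

print(dispatch("SELECT * FROM T"))          # SQL Processor
print(dispatch("CREATE TABLE T (A INT)"))   # Metadata Manager
```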
The SAP HANA database has its own scripting language named SQLScript that is designed to enable optimizations and parallelization. SQLScript is a collection of extensions to SQL.
SQLScript is based on side effect free functions that operate on tables using SQL queries for set processing. The motivation for SQLScript is to offload data-intensive application logic into the database.
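The key property above is that such functions take tables in and produce new tables out, with no side effects, which is what makes independent steps parallelizable. A rough Python analogy (the data and function are invented for illustration; real SQLScript is SQL-based, not Python):

```python
# Sketch of the SQLScript idea: a side-effect-free, set-oriented function
# that consumes a table and returns a new table without mutating its input.

def top_revenue(orders, threshold):
    # Operates on the whole table at once (set processing),
    # and never modifies the input table.
    return [row for row in orders if row["revenue"] > threshold]

orders = [{"id": 1, "revenue": 50}, {"id": 2, "revenue": 200}]
result = top_revenue(orders, 100)
# The input table is unchanged; the result is a new table.
```

Because the function has no side effects, the engine is free to reorder or parallelize calls like this without changing the outcome.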
Multidimensional Expressions (MDX):
MDX is a language for querying and manipulating the multidimensional data stored in OLAP cubes.
Incoming MDX requests are processed by the MDX engine and are also forwarded to the Calc Engine.
Planning Engine allows financial planning applications to execute basic planning operations in the database layer. One such basic operation is to create a new version of a data set as a copy of an existing one while applying filters and transformations. For example: planning data for a new year is created as a copy of the data from the previous year.
Another example for a planning operation is the disaggregation operation that distributes target values from higher to lower aggregation levels based on a distribution function.
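A hedged sketch of such a disaggregation, using a proportional distribution function over invented quarterly data:

```python
# Hedged sketch of disaggregation: a target value set at a higher
# aggregation level is distributed to lower levels using a distribution
# function -- here, proportional to a reference version such as last year.

def disaggregate(target_total, reference):
    # reference: leaf-level values from an existing data set version
    total = sum(reference.values())
    return {k: target_total * v / total for k, v in reference.items()}

last_year = {"Q1": 10.0, "Q2": 30.0, "Q3": 40.0, "Q4": 20.0}
plan = disaggregate(200.0, last_year)   # new year's annual target: 200
# Each quarter receives a share proportional to its last-year value,
# e.g. Q1 gets 200 * 10/100 = 20.0.
```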
SAP HANA database features such as SQLScript and planning operations are implemented using a common infrastructure called the Calc Engine.
SQLScript, MDX, the Planning Model and domain-specific models are converted into calculation models. The Calc Engine creates a logical execution plan for each calculation model and breaks the model up, for example some SQLScript, into operations that can be processed in parallel.
In the SAP HANA database, each SQL statement is processed in the context of a transaction. New sessions are implicitly assigned to a new transaction. The Transaction Manager coordinates database transactions, controls transactional isolation and keeps track of running and closed transactions. When a transaction is committed or rolled back, the Transaction Manager informs the involved engines about this event so they can execute the necessary actions.
The transaction manager also cooperates with the persistence layer to achieve atomic and durable transactions.
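The commit/rollback coordination described above can be pictured as a small notification protocol. Everything here is an illustrative sketch, not HANA's internal interface:

```python
# Minimal sketch: the transaction manager tracks running transactions and,
# on commit, informs the involved engines so they can execute the
# necessary actions. Class names are hypothetical.

class TransactionManager:
    def __init__(self):
        self.running = set()
        self.listeners = []           # storage engines to notify

    def begin(self, txn_id):
        self.running.add(txn_id)

    def commit(self, txn_id):
        self.running.discard(txn_id)
        for engine in self.listeners:
            engine.on_commit(txn_id)  # engine-specific post-commit work

class Engine:
    def __init__(self):
        self.committed = []
    def on_commit(self, txn_id):
        self.committed.append(txn_id)

tm = TransactionManager()
eng = Engine()
tm.listeners.append(eng)
tm.begin(42)
tm.commit(42)
```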
Metadata can be accessed via the Metadata Manager component. In the SAP HANA database, metadata comprises a variety of objects, such as definitions of relational tables, columns, views, indexes and procedures.
Metadata of all these types is stored in one common database catalog for all stores. The database catalog is stored in tables in the Row Store. The features of the SAP HANA database, such as transaction support and multi-version concurrency control, are also used for metadata management.
In the center of the figure you see the different data stores of the SAP HANA database. A store is a sub-system of the SAP HANA database that includes in-memory storage as well as the components that manage that storage.
The Row Store:
The Row Store is the SAP HANA database row-based in-memory relational data engine.
The Column Store:
The Column Store stores tables column-wise. It originates from the TREX (SAP NetWeaver Search and Classification) product.
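The difference between the two stores is simply how the same table is laid out in memory. A small illustrative contrast (data invented for the example):

```python
# Illustrative sketch: the same table held row-wise (complete records
# stored together) versus column-wise (each column stored contiguously).

table = [("Alice", 30), ("Bob", 25), ("Carol", 35)]

# Row Store layout: one entry per record -- good for fetching whole rows.
row_store = list(table)

# Column Store layout: one array per column -- good for scans/aggregates.
column_store = {
    "name": [r[0] for r in table],
    "age":  [r[1] for r in table],
}

# A columnar aggregate touches only one contiguous array:
avg_age = sum(column_store["age"]) / len(column_store["age"])
```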
The Persistence Layer is responsible for durability and atomicity of transactions. This layer ensures that the database is restored to the most recent committed state after a restart and that transactions are either completely executed or completely undone. To achieve this goal in an efficient way, the Persistence Layer uses a combination of write-ahead logs, shadow paging and savepoints.
The Persistence Layer offers interfaces for writing and reading persisted data. It also contains the Logger component that manages the transaction log. Transaction log entries are written explicitly by using a log interface or implicitly when using the virtual file abstraction.
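The write-ahead logging idea above can be sketched in a few lines: changes are appended to the log before being applied, and on restart only the writes of committed transactions are redone, so interrupted transactions are effectively undone. The structures below are illustrative only (real recovery also involves savepoints and shadow paging, which this sketch omits):

```python
# Hedged sketch of write-ahead logging and recovery. Log entries are
# written before data changes take effect; replay redoes only committed
# transactions, giving atomicity across a crash.

log = []                        # the persisted transaction log

def write(txn, key, value):
    log.append(("write", txn, key, value))   # log entry first

def commit(txn):
    log.append(("commit", txn))

def recover(log):
    # Redo only writes belonging to committed transactions.
    committed = {e[1] for e in log if e[0] == "commit"}
    db = {}
    for entry in log:
        if entry[0] == "write" and entry[1] in committed:
            _, _, key, value = entry
            db[key] = value
    return db

write("T1", "x", 1); commit("T1")
write("T2", "y", 2)             # T2 never commits (crash before commit)
state = recover(log)            # T2's write is discarded on recovery
```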