SAP HANA – SQL Expressions

An expression is used to evaluate a clause and return values. The following SQL expressions can be used in HANA −

  • Case Expressions
  • Function Expressions
  • Aggregate Expressions
  • Subqueries in Expressions

Case Expression

This is used to pass multiple conditions in a SQL expression. It allows the use of IF-THEN-ELSE logic without using procedures in SQL statements.


SELECT COUNT( CASE WHEN sal < 2000 THEN 1 ELSE NULL END ) count1,
       COUNT( CASE WHEN sal BETWEEN 2001 AND 4000 THEN 1 ELSE NULL END ) count2,
       COUNT( CASE WHEN sal > 4000 THEN 1 ELSE NULL END ) count3
FROM emp;

This statement will return count1, count2 and count3 with integer values as per the passed conditions.

Function Expressions

Function expressions involve using SQL built-in functions inside expressions.
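As a sketch, several built-in functions can be combined inside one expression. The table emp and its columns ename and sal below are assumed names, not taken from the text −

```sql
-- Sketch only: "emp", "ename" and "sal" are hypothetical names.
-- String and numeric built-in functions used inside a single expression:
SELECT UPPER(ename) || ' earns ' || TO_VARCHAR(ROUND(sal, 0)) AS summary
FROM emp;
```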

Aggregate Expressions

Aggregate functions are used to perform complex calculations like Sum, Percentage, Min, Max, Count, Mode, Median, etc. An Aggregate Expression uses aggregate functions to calculate a single value from multiple values.

Aggregate functions − Sum, Count, Minimum, Maximum. These are applied on measure values (facts) and are always associated with a dimension.

Common aggregate functions include −

  • Average ()
  • Count ()
  • Maximum ()
  • Median ()
  • Minimum ()
  • Mode ()
  • Sum ()
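A sketch of several aggregate functions collapsing many rows into one value per group; the emp table with deptno and sal columns is an assumed example, not from the text −

```sql
-- Sketch only: "emp", "deptno" and "sal" are hypothetical names.
SELECT deptno,
       COUNT(*) AS emp_count,
       SUM(sal) AS total_sal,
       MIN(sal) AS min_sal,
       MAX(sal) AS max_sal,
       AVG(sal) AS avg_sal
FROM emp
GROUP BY deptno;
```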

Subqueries in Expressions

A subquery used as an expression is a SELECT statement enclosed in parentheses. When it is used in an expression, it must return zero or one value.

A subquery is used to return data that will be used in the main query as a condition to further restrict the data to be retrieved.

Subqueries can be used with the SELECT, INSERT, UPDATE, and DELETE statements along with the operators like =, <, >, >=, <=, IN, BETWEEN etc.

There are a few rules that subqueries must follow −

  • Subqueries must be enclosed within parentheses.
  • A subquery can have only one column in the SELECT clause, unless multiple columns are in the main query for the subquery to compare its selected columns.
  • An ORDER BY cannot be used in a subquery, although the main query can use an ORDER BY. The GROUP BY can be used to perform the same function as the ORDER BY in a subquery.
  • Subqueries that return more than one row can only be used with multiple value operators, such as the IN operator.
  • The SELECT list cannot include any references to values that evaluate to a BLOB, ARRAY, CLOB, or NCLOB.
  • A subquery cannot be immediately enclosed in a set function.
  • The BETWEEN operator cannot be used with a subquery; however, the BETWEEN operator can be used within the subquery.

Subqueries with the SELECT Statement

Subqueries are most frequently used with the SELECT statement. An example is given below −


SELECT * FROM CUSTOMERS
   WHERE ID IN (SELECT ID
      FROM CUSTOMERS
      WHERE SALARY > 4500);

This would produce the following result −
| ID | NAME     | AGE | ADDRESS | SALARY   |
| 4  | Chaitali | 25  | Mumbai  | 6500.00  |
| 5  | Hardik   | 27  | Bhopal  | 8500.00  |
| 7  | Muffy    | 24  | Indore  | 10000.00 |
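Subqueries work the same way in DML statements. A sketch with the same CUSTOMERS table, restricting an UPDATE to the rows matched by the subquery −

```sql
-- Give a 10% raise only to customers selected by the subquery:
UPDATE CUSTOMERS
SET SALARY = SALARY * 1.10
WHERE ID IN (SELECT ID FROM CUSTOMERS WHERE AGE < 27);
```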

SAP HANA – SQL Functions

There are various SQL functions provided by SAP HANA database −

  • Numeric Functions
  • String Functions
  • Fulltext Functions
  • Datetime Functions
  • Aggregate Functions
  • Data Type Conversion Functions
  • Window Functions
  • Series Data Functions
  • Miscellaneous Functions

Numeric Functions

These are inbuilt numeric functions in SQL that can be used in scripting. They take numeric values, or strings with numeric characters, and return numeric values.

  • ABS − It returns the absolute value of a numeric argument.
Example − SELECT ABS(-1) "abs" FROM TEST;

  • ACOS, ASIN, ATAN, ATAN2 − These functions return the inverse trigonometric value of the argument.
  • BINTOHEX − It converts a binary value to a hexadecimal value.
  • BITAND − It performs an AND operation on the bits of the passed arguments.
  • BITCOUNT − It counts the number of set bits in an argument.
  • BITNOT − It performs a bitwise NOT operation on the bits of the argument.
  • BITOR − It performs an OR operation on the bits of the passed arguments.
  • BITSET − It is used to set bits to 1 in <target_num> from the <start_bit> position.
  • BITUNSET − It is used to set bits to 0 in <target_num> from the <start_bit> position.
  • BITXOR − It performs an XOR operation on the bits of the passed arguments.
  • CEIL − It returns the smallest integer that is greater than or equal to the passed value.
  • COS, COSH, COT − These functions return the trigonometric value of the argument.
  • EXP − It returns the result of the base of natural logarithms, e, raised to the power of the passed value.
  • FLOOR − It returns the largest integer not greater than the numeric argument.
  • HEXTOBIN − It converts a hexadecimal value to a binary value.
  • LN − It returns the natural logarithm of the argument.
  • LOG − It returns the logarithm of a passed positive value for a given base. Both the base and the log value must be positive.

Various other numeric functions can also be used − MOD, POWER, RAND, ROUND, SIGN, SIN, SINH, SQRT, TAN, TANH, UMINUS
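A few of these functions in action, using only literal values against the DUMMY system table −

```sql
SELECT ROUND(16.36, 1) AS "round",   -- 16.4
       FLOOR(14.7)     AS "floor",   -- 14
       CEIL(14.2)      AS "ceil",    -- 15
       MOD(15, 4)      AS "mod",     -- 3
       POWER(2, 10)    AS "power"    -- 1024
FROM DUMMY;
```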

String Functions

Various SQL string functions can be used in HANA with SQL scripting. Most common string functions are −

  • ASCII − It returns the integer ASCII value of the passed string.
  • CHAR − It returns the character associated with the passed ASCII value.
  • CONCAT − It is the concatenation operator and returns the combined passed strings.
  • LCASE − It converts all characters of a string to lowercase.
  • LEFT − It returns the first characters of a passed string, as per the mentioned value.
  • LENGTH − It returns the number of characters in the passed string.
  • LOCATE − It returns the position of a substring within the passed string.
  • LOWER − It converts all characters in a string to lowercase.
  • NCHAR − It returns the Unicode character with the passed integer value.
  • REPLACE − It searches the passed original string for all occurrences of the search string and replaces them with the replace string.
  • RIGHT − It returns the rightmost characters of the mentioned string, as per the passed value.
  • UPPER − It converts all characters in the passed string to uppercase.
  • UCASE − It is identical to the UPPER function. It converts all characters in the passed string to uppercase.
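Some of these string functions, again with literal inputs against the DUMMY system table −

```sql
SELECT CONCAT('SAP', ' HANA')              AS "concat",  -- SAP HANA
       UPPER('hana')                       AS "upper",   -- HANA
       LEFT('Hello World', 5)              AS "left",    -- Hello
       LENGTH('Hello')                     AS "length",  -- 5
       REPLACE('DOWNGRADE', 'DOWN', 'UP')  AS "replace"  -- UPGRADE
FROM DUMMY;
```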


Date Time functions

There are various Date Time functions that can be used in HANA in SQL scripts. Most common Date Time functions are −

  • CURRENT_DATE − It returns the current local system date.
  • CURRENT_TIME − It returns the current local system time.
  • CURRENT_TIMESTAMP − It returns the current local system timestamp (YYYY-MM-DD HH:MM:SS.FF).
  • CURRENT_UTCDATE − It returns the current UTC (Coordinated Universal Time) date.
  • CURRENT_UTCTIME − It returns the current UTC time.
  • DAYOFMONTH − It returns the integer value of the day in the date passed as argument.
  • HOUR − It returns the integer value of the hour in the time passed as argument.
  • YEAR − It returns the year value of the passed date.
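A short sketch combining several of these, run against the DUMMY system table −

```sql
SELECT CURRENT_DATE              AS "today",
       CURRENT_TIMESTAMP         AS "now",
       YEAR(CURRENT_DATE)        AS "year",
       DAYOFMONTH(CURRENT_DATE)  AS "day"
FROM DUMMY;
```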


Data Type Conversion Functions

These functions are used to convert one data type to another, or to check whether a conversion is possible.

Most common data type conversion functions used in HANA in SQL scripts −

  • CAST − It returns the value of an expression converted to a supplied data type.
  • TO_ALPHANUM − It converts a passed value to an ALPHANUM data type.
  • TO_REAL − It converts a value to a REAL data type.
  • TO_TIME − It converts a passed time string to the TIME data type.
  • TO_CLOB − It converts a value to a CLOB data type.
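A sketch with literal values, using the DUMMY system table; the date format string is one of the standard patterns −

```sql
SELECT CAST(7.5 AS INTEGER)                 AS "cast_int",
       TO_TIME('08:30:00')                  AS "to_time",
       TO_DATE('2017-01-15', 'YYYY-MM-DD')  AS "to_date"
FROM DUMMY;
```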


There are also various Window functions and other miscellaneous functions that can be used in HANA SQL scripts.

  • CURRENT_SCHEMA − It returns a string containing the current schema name.
  • SESSION_USER − It returns the user name of the current session.
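Both can be queried directly from the DUMMY system table −

```sql
SELECT CURRENT_SCHEMA AS "schema",
       SESSION_USER   AS "user"
FROM DUMMY;
```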

SAP HANA – SQL Operators

An operator is a special character used primarily in a SQL statement's WHERE clause to perform operations, such as comparisons and arithmetic operations. Operators are used to pass conditions in a SQL query.

Operator types given below can be used in SQL statements in HANA −

  • Arithmetic Operators
  • Comparison/Relational Operators
  • Logical Operators
  • Set Operators

Arithmetic Operators

Arithmetic operators are used to perform simple calculations like addition, subtraction, multiplication, division and modulus.

Operator   Description
+          Addition − Adds values on either side of the operator
-          Subtraction − Subtracts the right-hand operand from the left-hand operand
*          Multiplication − Multiplies values on either side of the operator
/          Division − Divides the left-hand operand by the right-hand operand
%          Modulus − Divides the left-hand operand by the right-hand operand and returns the remainder
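The arithmetic operators can be tried with literals against the DUMMY system table; note that HANA exposes the modulus through the MOD function −

```sql
SELECT 10 + 3     AS "add",  -- 13
       10 - 3     AS "sub",  -- 7
       10 * 3     AS "mul",  -- 30
       10 / 3     AS "div",  -- 3 (integer operands give integer division)
       MOD(10, 3) AS "mod"   -- 1 (modulus via the MOD function)
FROM DUMMY;
```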

Comparison Operators

Comparison operators are used to compare the values in SQL statement.

Operator Description
= Checks if the values of two operands are equal or not, if yes then condition becomes true.
!= Checks if the values of two operands are equal or not, if values are not equal then condition becomes true.
<> Checks if the values of two operands are equal or not, if values are not equal then condition becomes true.
> Checks if the value of left operand is greater than the value of right operand, if yes then condition becomes true.
< Checks if the value of left operand is less than the value of right operand, if yes then condition becomes true.
>= Checks if the value of left operand is greater than or equal to the value of right operand, if yes then condition becomes true.
<= Checks if the value of left operand is less than or equal to the value of right operand, if yes then condition becomes true.
!< Checks if the value of left operand is not less than the value of right operand, if yes then condition becomes true.
!> Checks if the value of left operand is not greater than the value of right operand, if yes then condition becomes true.
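A sketch combining comparison operators in one WHERE clause, reusing the CUSTOMERS table from the subquery example above −

```sql
-- Rows with SALARY at least 4500 but not exactly 8500:
SELECT * FROM CUSTOMERS
WHERE SALARY >= 4500
  AND SALARY <> 8500;
```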

Logical operators

Logical operators are used to pass multiple conditions in a SQL statement, or to manipulate the results of conditions.

Operator Description
ALL The ALL Operator is used to compare a value to all values in another value set.
AND The AND operator allows the existence of multiple conditions in an SQL statement’s WHERE clause.
ANY The ANY operator is used to compare a value to any applicable value in the list according to the condition.
BETWEEN The BETWEEN operator is used to search for values that are within a set of values, given the minimum value and the maximum value.
EXISTS The EXISTS operator is used to search for the presence of a row in a specified table that meets certain criteria.
IN The IN operator is used to compare a value to a list of literal values that have been specified.
LIKE The LIKE operator is used to compare a value to similar values using wildcard operators.
NOT The NOT operator reverses the meaning of the logical operator with which it is used. Eg − NOT EXISTS, NOT BETWEEN, NOT IN, etc. This is a negate operator.
OR The OR operator is used to compare multiple conditions in an SQL statement’s WHERE clause.
IS NULL The IS NULL operator is used to compare a value with a NULL value.
UNIQUE The UNIQUE operator searches every row of a specified table for uniqueness (no duplicates).
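Several of these logical operators combined in one WHERE clause; a sketch reusing the CUSTOMERS table from the subquery example −

```sql
SELECT * FROM CUSTOMERS
WHERE AGE BETWEEN 24 AND 27          -- range check
  AND ADDRESS IN ('Mumbai', 'Bhopal')  -- list membership
  AND NAME LIKE 'H%'                 -- wildcard match
  AND SALARY IS NOT NULL;            -- negated NULL check
```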

Set Operators

Set operators are used to combine the results of two queries into a single result. The number of columns and the data types of corresponding columns must be the same in both result sets.

  • UNION − It combines the results of two or more SELECT statements and eliminates duplicate rows.
  • UNION ALL − This operator is similar to UNION, but it also keeps the duplicate rows.
  • INTERSECT − The Intersect operation combines two SELECT statements and returns only the records that are common to both. The number of columns and the data types must be the same in both queries.
  • MINUS − The Minus operation combines the results of two SELECT statements and returns only those rows that belong to the first result set, eliminating from it the rows that also appear in the second.
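A minimal sketch; the two tables below are assumed names, and their column lists and data types must match for any set operator −

```sql
-- Distinct IDs appearing in either year (hypothetical tables):
SELECT ID FROM CUSTOMERS_2016
UNION            -- duplicates removed; use UNION ALL to keep them
SELECT ID FROM CUSTOMERS_2017;
```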

SAP HANA – High Availability

SAP HANA provides mechanisms for business continuity and disaster recovery in the case of system faults and software errors. High availability in a HANA system defines a set of practices that help achieve business continuity in case of disasters such as power failures in data centers, natural disasters like fire or flood, or any hardware failures.

SAP HANA high availability provides fault tolerance and the ability of the system to resume operations after an outage with minimum business loss.

The following illustration shows the phases of high availability in HANA system −

The first phase is being prepared for the fault. A fault can be detected automatically or by an administrative action. Data is backed up and standby systems take over the operations. A recovery process is then put in action; it includes repair of the faulty system and restoring the original system to its previous configuration.

High Availability

To achieve high availability in a HANA system, the key is the inclusion of extra components that are not necessary for normal operation but are used in case of failure of other components. This includes hardware redundancy, network redundancy and data center redundancy. SAP HANA provides several levels of hardware and software redundancy as below −

HANA System Hardware Redundancy

SAP HANA appliance vendors offer multiple layers of redundant hardware, software and network components, such as redundant power supplies and fans, error-correcting memories, fully redundant network switches and routers, and uninterrupted power supplies (UPS). The disk storage system guarantees writing even in the presence of a power failure and uses striping and mirroring features to provide redundancy for automatic recovery from disk failures.

SAP HANA Software Redundancy

SAP HANA is based on SUSE Linux Enterprise 11 for SAP and includes security pre-configurations.

SAP HANA system software includes a watchdog function, which automatically restarts configured services (index server, name server, and so on) in case a stoppage (killed or crashed process) is detected.

SAP HANA Persistence Redundancy

SAP HANA provides persistence of transaction logs, savepoints and snapshots to support system restart and recovery from failures, with minimal delay and without loss of data.

HANA System Standby and Failover

SAP HANA system involves separate standby hosts that are used for failover, in case of failure of the primary system. This improves the availability of HANA system by reducing the recovery time from an outage.

SAP HANA – Backup & Recovery

SAP HANA backup and recovery is used to perform backups of the HANA system and to recover the system in case of a database failure.

Overview Tab

It shows the status of the currently running data backup and the last successful data backup.


The Backup Now option can be used to run the data backup wizard.

Configuration Tab

It shows the backup interval settings, the file-based data backup settings and the log-based data backup settings.


Backint Settings

Backint settings give an option to use a third-party tool for data and log backup, with configuration of the backup agent.

Configure the connection to a third-party backup tool by specifying a parameter file for the Backint agent.


File and Log Based Data Backup Settings

The file-based data backup settings specify the folder where you want to save the data backup on the HANA system. You can change your backup folder.

You can also limit the size of data backup files. If a data backup exceeds the set file size, it will be split across multiple files.

Backup Settings

The log backup settings specify the destination folder where you want to save the log backup on an external server. You can choose a destination type for the log backup −

File − Ensures that there is sufficient space in the system to store the backups.

Backint − A special named pipe that exists on the file system but requires no disk space.

You can choose the backup interval from the drop down. It defines the longest amount of time that can pass before a new log backup is written. The backup interval can be in seconds, minutes or hours.

The Enable Automatic Log Backup option helps you to keep the log area vacant. If you disable it, the log area will continue to fill, and that can cause the database to hang.

Open the Backup Wizard to run the backup of the system.

The Backup wizard is used to specify backup settings. It shows the backup type, destination type, backup destination folder, backup prefix, size of backup, etc.

Backup HDB(HANA)

When you click on next → Review Backup settings → Finish

Backup of System HDB

It runs the system backup and shows the time taken to complete the backup for each server.
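The same complete data backup can also be triggered from the SQL console; the file-name prefix below is illustrative −

```sql
-- Runs a complete data backup into the configured (or given) destination.
-- 'COMPLETE_DATA_BACKUP' is an illustrative file-name prefix.
BACKUP DATA USING FILE ('COMPLETE_DATA_BACKUP');
```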

Recovery of HANA System

To recover SAP HANA database, the database needs to be shut down. Hence, during recovery, end users or SAP applications cannot access the database.

Recovery of SAP HANA database is required in the following situations −

  • A disk in the data area or in the log area is unusable.
  • As a consequence of a logical error, the database needs to be reset to its state at a particular point in time.
  • You want to create a copy of the database.

How to recover a HANA system?

Choose HANA system → Right click → Back and Recovery → Recover System

Recover of Hana System

Types of recovery in HANA system

Most Recent State − Used for recovering the database to a time as close as possible to the current time. For this recovery, the data backup, the log backups made since that data backup, and the log area are required.

Point in Time − Used for recovering the database to a specific point in time. For this recovery, the data backup, the log backups made since that data backup, and the log area are required.

Specific Data Backup − Used for recovering the database to a specified data backup. Only that data backup is required for this recovery option.

Specific Log Position − This recovery type is an advanced option that can be used in exceptional cases where a previous recovery failed.

Note − To run recovery wizard you should have administrator privileges on HANA system.

SAP HANA – Persistent Layer

The SAP HANA database persistence layer is responsible for managing logs for all the transactions, to provide standard data backup and system restore functions.

It ensures that the database can be restored to the most recent committed state after a restart or a system crash, and that transactions are either executed completely or completely undone. The SAP HANA persistence layer is part of the Index server, and it manages the data and transaction log volumes of the HANA system; in-memory data is regularly saved to these volumes. Services in the HANA system have their own persistence. The layer also provides savepoints and logs for all the database transactions since the last savepoint.

Why does SAP HANA database need a Persistent Layer?

  • Main memory is volatile; therefore, data is lost during a restart or power outage.
  • Data needs to be stored in a persistent medium.
  • Backup & Restore becomes available.
  • It ensures that the database is restored to the most recent committed state after a restart and that transactions are either completely executed or completely undone.

Data and Transaction Log Volumes

To ensure that the database can always be restored to its most recent state, changes to data in the database are regularly copied to disk. Log files containing data changes and certain transaction events are also saved regularly to disk. The data and logs of a system are stored in data and log volumes, respectively.

Data volumes store SQL data, undo log information and SAP HANA information modeling data. This information is stored in data pages, which are called blocks. These blocks are written to the data volumes at regular intervals, known as savepoints.

Log volumes store the information about data changes. Changes that are made between two savepoints are written to the log volumes as log entries. They are written to disk from the log buffer when a transaction is committed.


In the SAP HANA database, changed data is automatically saved from memory to disk at regular intervals. These intervals are called savepoints, and by default they are set to occur every five minutes. The persistence layer in the SAP HANA database performs these savepoints at regular intervals. During this operation, changed data is written to disk and redo logs are saved to disk as well.

The data belonging to a savepoint represents a consistent state of the data on disk and remains there until the next savepoint operation has completed. Redo log entries are written to the log volumes for all changes to persistent data. In the event of a database restart, data from the last completed savepoint can be read from the data volumes, and the redo log entries written to the log volumes since then are replayed.

The frequency of savepoints can be configured in the global.ini file. Savepoints are also initiated by other operations such as database shutdown or system restart. You can also trigger a savepoint manually by executing the command below −

ALTER SYSTEM SAVEPOINT;
To save data and redo logs to the volumes, you should ensure that there is enough disk space available to capture them; otherwise the system will issue a disk-full event and the database will stop working.

During the HANA system installation, the following default directories are created as the storage locations for data and log volumes −

  • /usr/sap/<SID>/SYS/global/hdb/data
  • /usr/sap/<SID>/SYS/global/hdb/log

These directories are defined in global.ini file and can be changed at later stage.

Note that savepoints do not affect the performance of transactions executed in the HANA system. During a savepoint operation, transactions continue to run as normal. With the HANA system running on proper hardware, the impact of savepoints on system performance is negligible.

SAP HANA – Attribute View

Attribute Views in SAP HANA Modeling are created on top of Dimension tables. They are used to join Dimension tables or other Attribute Views. You can also copy a new Attribute View from an already existing Attribute View inside another Package, but that does not let you change the View attributes.

Characteristics of Attribute View

  • Attribute Views in HANA are used to join Dimension tables or other Attribute Views.
  • Attribute Views are used in Analytical and Calculation Views for analysis to pass master data.
  • They are similar to Characteristics in BW and contain master data.
  • Attribute Views are used for performance optimization of large Dimension tables; you can limit the number of attributes in an Attribute View, which are then used for reporting and analysis.
  • Attribute Views are used to model master data to give some context.

How to Create an Attribute View?

Choose the Package name under which you want to create an Attribute View. Right Click on Package → Go to New → Attribute View

Creating an Attribute View

When you click on Attribute View, a new window will open. Enter the Attribute View name and description. From the drop-down list, choose the View type and subtype. Under subtype, there are three types of Attribute Views − Standard, Time, and Derived.

Choosing Type and Subtype

A Time-subtype Attribute View is a special type of Attribute View that adds a Time Dimension to the Data Foundation. When you enter the Attribute View name, type and subtype and click on Finish, it will open three work panes −

  • Scenario pane that has the Data Foundation and the Semantic Layer.
  • Details pane that shows the attributes of all tables added to the Data Foundation and the joins between them.
  • Output pane where we can add attributes from the Details pane to be used in the report.

You can add objects to the Data Foundation by clicking on the '+' sign next to Data Foundation. You can add multiple Dimension tables and Attribute Views in the Scenario pane and join them using a Primary Key.

When you click on Add Object in Data Foundation, you will get a search bar from where you can add Dimension tables and Attribute views to Scenario Pane. Once Tables or Attribute Views are added to Data Foundation, they can be joined using a Primary Key in Details Pane as shown below.

Adding Objects at Data Foundation

Once the joins are done, choose multiple attributes in the Details pane, right click and select Add to Output. All the columns will be added to the Output pane. Now click on the Activate option and you will get a confirmation message in the job log.

Now you can right click on the Attribute View and go for Data Preview.

Adding Attributes to Output Pane

Note − When a View is not activated, it has a diamond mark on it. Once you activate it, that diamond disappears, which confirms that the View has been activated successfully.

Once you click on Data Preview, it will show all the attributes that have been added to the Output pane under Available Objects.

These objects can be added to the Labels and Value axes by right clicking and adding, or by dragging the objects as shown below −

Adding Objects to Axis

SAP HANA – Packages

SAP HANA Packages are shown under Content tab in HANA studio. All HANA modeling is saved inside Packages.

You can create a new Package by Right Click on Content Tab → New → Package


You can also create a Sub Package under a Package by right clicking on the Package name. When we right click on the Package, we get several options − we can create HANA Views (Attribute Views, Analytical Views, and Calculation Views) under a Package.

Sub Package

You can also create Decision Table, Define Analytic Privilege and create Procedures in a Package.

When you right click on a Package and click on New, you can also create sub packages in a Package. You have to enter the Package name and description while creating a Package.

SAP HANA – Tables

Tables in HANA database can be accessed from HANA Studio in Catalogue tab under Schemas. New tables can be created using the two methods given below −

  • Using SQL editor
  • Using GUI option

SQL Editor in HANA Studio

The SQL Console can be opened by selecting the Schema name in which the new table has to be created, using the System View SQL Editor option, or by right clicking on the Schema name as shown below −


Once the SQL Editor is opened, the Schema name can be confirmed from the name written on the top of the SQL Editor. A new table can be created using the SQL CREATE TABLE statement −

CREATE COLUMN TABLE Test1 (
   ID INTEGER,
   NAME VARCHAR(10),
   PRIMARY KEY (ID)
);

In this SQL statement, we have created a column table "Test1", defined the data types of its columns and set a Primary Key.

Once you write the CREATE TABLE SQL query, click on the Execute option at the top right of the SQL editor. Once the statement is executed, we will get a confirmation message as shown in the snapshot given below −

Statement ‘Create column Table Test1 (ID INTEGER,NAME VARCHAR(10), PRIMARY KEY (ID))’

successfully executed in 13 ms 761 μs (server processing time: 12 ms 979 μs) − Rows Affected: 0

Execute SQL Statement

The execution message also shows the time taken to execute the statement. Once the statement is successfully executed, right click on the Table tab under the Schema name in the System View and refresh. The new table will be reflected in the list of tables under the Schema name.

The Insert statement is used to enter data into the table using the SQL editor.

Insert into TEST1 Values (1,'ABCD');
Insert into TEST1 Values (2,'EFGH');

Click on Execute.

You can right click on Table name and use Open Data Definition to see data type of the table. Open Data Preview/Open Content to see table contents.

Creating Table using GUI Option

Another way to create a table in HANA database is by using GUI option in HANA Studio.

Right Click on Table tab under Schema → Select ‘New Table’ option as shown in snapshot given below.

Once you click on New Table, it will open a window where you enter the Table name, choose the Schema name from the drop down, and define the Table type from the drop-down list: Column Store or Row Store.

Define the data types as shown below. Columns can be added by clicking on the + sign. The Primary Key can be chosen by clicking on the cell under Primary Key in front of the Column name; Not Null is active by default.

Once columns are added, click on Execute.

Creating Table

Once you Execute (F8), right click on the Table tab → Refresh. The new table will be reflected in the list of tables under the chosen Schema. The Insert option can be used to insert data into the table, and a Select statement to see its contents.


Inserting Data in a table using GUI in HANA studio

You can right click on Table name and use Open Data Definition to see data type of the table. Open Data Preview/Open Content to see table contents.

To use tables from one Schema to create views, we should provide access on the Schema to the default user (_SYS_REPO) that runs all the Views in HANA Modeling. This can be done by going to the SQL editor and running this query −

GRANT SELECT ON SCHEMA "<SCHEMA_NAME>" TO _SYS_REPO WITH GRANT OPTION;
SAP HANA – Schema in Data Warehouse

Schemas are logical descriptions of tables in a Data Warehouse. Schemas are created by joining multiple fact and Dimension tables to meet some business logic.

A database uses the relational model to store data, whereas a Data Warehouse uses Schemas that join dimension and fact tables to meet business logic. There are three types of Schemas used in a Data Warehouse −

  • Star Schema
  • Snowflake Schema
  • Galaxy Schema

Star Schema

In a Star Schema, each Dimension is joined to one single Fact table. Each Dimension is represented by a single Dimension table and is not further normalized.

A Dimension table contains a set of attributes that are used to analyze the data.

Example − In the example given below, we have a Fact table FactSales that has the keys of all the Dim tables and the measures units_sold and dollars_sold for analysis.

We have four Dimension tables − DimTime, DimItem, DimBranch, DimLocation

Star Schema

Each Dimension table is connected to the Fact table, as the Fact table holds the key of each Dimension table, which is used to join the two tables.

Facts/Measures in the Fact table are used for analysis along with the attributes in the Dimension tables.
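A star-schema query can be sketched as a join of the Fact table to its Dimension tables; the table names come from the example above, while the join-key and attribute column names are assumptions −

```sql
-- Sketch only: key and attribute column names are hypothetical.
SELECT t.year,
       l.city,
       SUM(f.units_sold)   AS total_units,
       SUM(f.dollars_sold) AS total_revenue
FROM FactSales f
JOIN DimTime     t ON f.time_key     = t.time_key
JOIN DimLocation l ON f.location_key = l.location_key
GROUP BY t.year, l.city;
```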

Snowflake Schema

In a Snowflake schema, some of the Dimension tables are further normalized, and the Dim tables are connected to a single Fact table. Normalization is used to organize the attributes and tables of a database to minimize data redundancy.

Normalization involves breaking a table into smaller, less redundant tables without losing any information, and the smaller tables are joined to the Dimension table.

Snowflakes Schema

In the above example, the DimItem and DimLocation Dimension tables are normalized without losing any information. This is called a Snowflake schema, where dimension tables are further normalized into smaller tables.

Galaxy Schema

In a Galaxy Schema, there are multiple Fact tables and Dimension tables. Each Fact table stores the keys of its Dimension tables and the measures/facts to be analyzed.

Galaxy Schema

In the above example, there are two Fact tables, FactSales and FactShipping, and multiple Dimension tables joined to the Fact tables. Each Fact table contains the keys of the joined Dim tables and the measures/facts to perform analysis.