SAP HANA – Reporting View

We know that with the Information Modeling feature in SAP HANA, we can create different information views − Attribute Views, Analytic Views, and Calculation Views. These views can be consumed by various reporting tools such as SAP BusinessObjects, SAP Lumira, Design Studio, Analysis for Office, and even third-party tools like MS Excel.

These reporting tools enable business managers, analysts, sales managers, and senior management to analyze historical information, create business scenarios, and decide the business strategy of the company.

This creates the need to consume HANA modeling views through different reporting tools and to generate reports and dashboards that are easy for end users to understand.

Reporting and Analysis

In most companies where SAP is implemented, reporting on HANA is done with BI platform tools that consume both SQL and MDX queries with the help of Relational and OLAP connections. There is a wide variety of BI tools, such as Web Intelligence, Crystal Reports, Dashboards, Explorer, Analysis for Office, and many more.

SAP HANA – Export & Import

The HANA Export and Import options allow tables, information models, and landscapes to be moved to a different or existing system. You do not need to recreate all tables and information models; you can simply export them to a new system or import them into an existing target system to reduce the effort.

This option can be accessed from the File menu at the top or by right-clicking on any table or information model in HANA Studio.

Export and Import

Exporting a table/Information model in HANA Studio

Go to the File menu → Export → You will see the options as shown below −

Export Destination

Export Options under SAP HANA Content

Delivery Unit

A delivery unit is a single entity that can be mapped to multiple packages and exported as one unit, so that all the packages assigned to it are treated as a single unit.

Users can use this option to export all the packages that make up a delivery unit, along with the relevant objects contained in them, to the HANA server or to a local client location.

The user should create Delivery Unit prior to using it.

This can be done through HANA Modeler → Delivery Unit → Select System and Next → Create → Fill in the details like Name, Version, etc. → OK → Add packages to the delivery unit → Finish.

Delivery Unit

Once the delivery unit is created and packages are assigned to it, the user can see the list of packages by using the Export option −

Go to File → Export → Delivery Unit →Select the Delivery Unit.

You can see the list of all packages assigned to the delivery unit. It gives an option to choose the export location −

  • Export to Server
  • Export to Client

Export Through Delivery Unit

You can export the Delivery Unit either to HANA Server location or to a Client location as shown.

The user can restrict the export through “Filter by time”, which means that only the information views updated within the mentioned time interval will be exported.

Select the delivery unit and the export location, then click Next → Finish. This exports the selected delivery unit to the specified location.

Developer Mode

This option can be used to export individual objects to a location on the local system. The user can select a single information view or a group of views and packages, choose the local client location for the export, and click Finish.

This is shown in the snapshot below.

Developer Mode

Support Mode

This can be used to export objects along with their data for SAP support purposes, when requested.

Example − A user creates an information view that throws an error he is unable to resolve. In that case, he can use this option to export the view along with its data and share it with SAP for debugging purposes.

Support Mode

Export Options under SAP HANA Studio

Landscape − To export the landscape from one system to another.

Tables − This option can be used to export tables along with their content.

Import Option under SAP HANA Content

Go to File → Import. You will see all the options shown below under Import.

Data from Local File

This is used to import data from a flat file such as a .xls or .csv file.

Import

Click on Next → Choose Target System → Define Import Properties.

Select the source file by browsing the local system. It also gives options to keep the header row, to create a new table under an existing schema, or to import data from the file into an existing table.

Data from Local File1

When you click Next, it gives options to define the primary key, change the data type of columns, define the storage type of the table, and change the proposed table structure.

Data from Local File2

When you click Finish, the table is populated under the list of tables in the mentioned schema. You can preview the data and check the data definition of the table; it will be the same as that of the .xls file.

Import Data from Local File
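Larger flat files can also be imported server-side with HANA's IMPORT FROM statement instead of the wizard. This is a sketch; the file path, schema, and table names are illustrative assumptions, and the file must already reside on the HANA server.

```sql
-- Server-side CSV import (path, schema, and table names are examples).
IMPORT FROM CSV FILE '/tmp/sales_data.csv'
INTO "MYSCHEMA"."SALES"
WITH RECORD DELIMITED BY '\n'
     FIELD DELIMITED BY ','
     SKIP FIRST 1 ROW;   -- skips the header row, like the wizard's header option
```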

Delivery Unit

Select the delivery unit by going to File → Import → Delivery Unit. You can choose from a server or the local client.

You can select “Overwrite inactive versions”, which allows you to overwrite any inactive versions of objects that exist. If the user selects “Activate objects”, then after the import all the imported objects are activated by default; the user need not trigger activation manually for the imported views.

Import Through Delivery Unit

Click Finish; once the import completes successfully, the content is populated in the target system.

Developer Mode

Browse for the local client location where the views were exported and select the views to be imported. The user can select individual views or a group of views and packages, then click Finish.

Mass Import of Metadata

Go to File → Import → Mass Import of Metadata → Next and select the source and target system.

Configure the System for Mass Import and click Finish.

Mass Import of Metadata

Selective Import of Metadata

It allows you to choose tables and a target schema to import metadata from SAP applications.

Go to File → Import → Selective Import of Metadata → Next

Choose a source connection of type “SAP Applications”. Remember that a Data Store of type SAP Applications should already have been created → Click Next.

Selective Import of Metadata1 Selective Import of Metadata2

Select tables you want to import and validate data if required. Click Finish after that.

SAP HANA – Information Composer

SAP HANA Information Composer is a self-service modeling environment for end users to analyze data sets. It allows you to import data in workbook formats (.xls, .csv) into the HANA database and create modeling views for analysis.

Information Composer is very different from HANA Modeler, and the two are designed for separate sets of users. Technically sound people with strong experience in data modeling use HANA Modeler, whereas a business user without technical knowledge uses Information Composer. It provides simple functionality with an easy-to-use interface.

Features of Information Composer

  • Data extraction − Information Composer helps to extract data, clean it, preview it, and automate the creation of the physical table in the HANA database.
  • Manipulating data − It helps us combine two objects (physical tables, Analytic Views, Attribute Views, and Calculation Views) and create an information view that can be consumed by SAP BO tools like SAP BusinessObjects Analysis, SAP BusinessObjects Explorer, and other tools like MS Excel.
  • It provides a centralized IT service in the form of a URL, which can be accessed from anywhere.

How to upload data using Information Composer?

It allows us to upload large amounts of data (up to 5 million cells). The link to access Information Composer is −

http://<server>:<port>/IC

Login to SAP HANA Information Composer. You can perform data loading or manipulation using this tool.

Data can be uploaded in two ways −

  • Upload a .xls or .csv file directly to the HANA database.
  • Copy data to the clipboard and paste it from there into the HANA database.

It allows data to be loaded along with the header.

On the left side in Information Composer, you have three options −

Select Source of data → Classify data → Publish

Upload Data

Once data is published to the HANA database, you cannot rename the table. To change it, you have to delete the table from the schema in the HANA database.

Details of the tables created using IC can be found in the “SAP_IC” schema, which contains tables such as IC_MODELS and IC_SPREADSHEETS.

Spreadsheet

Using Clipboard

Another way to upload data in IC is by using the clipboard − copy the data to the clipboard and upload it with the help of Information Composer. Information Composer also lets you preview the data, or even provides a summary of the data, in temporary storage. It has an inbuilt data cleansing capability that is used to remove inconsistencies in the data.

Once the data is cleansed, you need to classify it, i.e., mark each column as an attribute or a measure. IC has an inbuilt feature to check the data type of the uploaded data.

The final step is to publish the data to physical tables in the HANA database. Provide a technical name and description for the table, and it will be loaded inside the IC_Tables schema.

User Roles for using data published with Information Composer

Two sets of users can be defined to use data published from IC −

  • IC_MODELER is for creating physical tables, uploading data and creating information views.
  • IC_PUBLIC allows users to view information views created by other users. This role does not allow the user to upload or create any information views using IC.

System Requirement for Information Composer

Server Requirements −

  • At least 2GB of available RAM is required.
  • Java 6 (64-bit) must be installed on the server.
  • The Information Composer Server must be physically located next to the HANA server.

Client Requirements −

  • Internet Explorer with Silverlight 4 installed.

SAP HANA – Analytic View

An Analytic View is in the form of a star schema, wherein we join one fact table to multiple dimension tables. Analytic views use the real power of SAP HANA to perform complex calculations and aggregate functions, by joining tables in the form of a star schema and executing star schema queries.

Characteristics of Analytic View

Following are the properties of SAP HANA Analytic View −

  • Analytic views are used to perform complex calculations and aggregate functions like Sum, Count, Min, Max, etc.
  • Analytic views are designed to run star schema queries.
  • Each analytic view has one fact table surrounded by multiple dimension tables. The fact table contains a primary key for each dimension table, plus the measures.
  • Analytic views are similar to InfoObjects and InfoSets of SAP BW.

How to Create an Analytic View?

Choose the package under which you want to create the analytic view. Right-click on the package → Go to New → Analytic View. A new window will open. Enter the view name and description, choose the view type from the drop-down list, and click Finish.

Creating an Analytic View

When you click Finish, you can see an Analytic View with Data Foundation and Star Join option.

Click on Data Foundation to add Dimension and Fact tables. Click on Star Join to add Attribute Views.

Add dimension and fact tables to the Data Foundation using the “+” sign. In the example given below, three dimension tables have been added − DIM_CUSTOMER, DIM_PRODUCT, and DIM_REGION − along with one fact table, FCT_SALES, to the Details pane. Join the dimension tables to the fact table using the primary keys stored in the fact table.

Adding Tables in Analytic View
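Conceptually, the star join defined here is equivalent to a SQL aggregation query like the sketch below. Only the table names come from the example; the join keys and the measure column (SALES_AMOUNT) are assumed names for illustration.

```sql
-- Star schema query equivalent to the Analytic View join (column names assumed).
SELECT c.CUSTOMER_NAME,
       p.PRODUCT_NAME,
       r.REGION_NAME,
       SUM(f.SALES_AMOUNT) AS TOTAL_SALES   -- measure, aggregated by the engine
FROM   FCT_SALES    f
JOIN   DIM_CUSTOMER c ON f.CUSTOMER_ID = c.CUSTOMER_ID
JOIN   DIM_PRODUCT  p ON f.PRODUCT_ID  = p.PRODUCT_ID
JOIN   DIM_REGION   r ON f.REGION_ID   = r.REGION_ID
GROUP BY c.CUSTOMER_NAME, p.PRODUCT_NAME, r.REGION_NAME;
```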

Select attributes from the dimension and fact tables to add to the Output pane, as shown in the snapshot above. Now change the data type of the facts in the fact table to measures.

Click on the Semantics layer, choose the facts, and click on the measures sign as shown below to change their data type to measures. Then activate the view.

Defining Measures

Once you activate the view and click on Data Preview, all attributes and measures are added under the list of available objects. Add attributes to the Labels axis and measures to the Values axis for analysis.

There is an option to choose different types of chart and graphs.

Choosing Graphs

SAP HANA – Attribute View

Attribute views in SAP HANA modeling are created on top of dimension tables. They are used to join dimension tables or other attribute views. You can also copy a new attribute view from an existing attribute view inside another package, but that does not let you change the view attributes.

Characteristics of Attribute View

  • Attribute views in HANA are used to join dimension tables or other attribute views.
  • Attribute views are used in analytic and calculation views to pass master data for analysis.
  • They are similar to Characteristics in SAP BW and contain master data.
  • Attribute views are used for performance optimization with large dimension tables; you can limit the number of attributes in an attribute view, which are then used for reporting and analysis.
  • Attribute views are used to model master data to give it some context.

How to Create an Attribute View?

Choose the Package name under which you want to create an Attribute View. Right Click on Package → Go to New → Attribute View

Creating an Attribute View

When you click on Attribute View, a new window will open. Enter the attribute view name and description. From the drop-down list, choose the view type and subtype. There are three subtypes of attribute views − Standard, Time, and Derived.

Choosing Type and Subtype

The Time subtype is a special type of attribute view that adds a time dimension to the Data Foundation. When you enter the attribute view name, type, and subtype and click Finish, three work panes will open −

  • The Scenario pane, which has the Data Foundation and Semantic Layer.
  • The Details pane, which shows the attributes of all tables added to the Data Foundation and the joins between them.
  • The Output pane, where we can add attributes from the Details pane to filter in the report.

You can add objects to the Data Foundation by clicking on the ‘+’ sign next to it. You can add multiple dimension tables and attribute views in the Scenario pane and join them using a primary key.

When you click on Add Object in Data Foundation, you will get a search bar from where you can add Dimension tables and Attribute views to Scenario Pane. Once Tables or Attribute Views are added to Data Foundation, they can be joined using a Primary Key in Details Pane as shown below.

Adding Objects at Data Foundation
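An attribute view built this way corresponds to a simple join of dimension tables on their primary key. A minimal sketch, assuming two hypothetical dimension tables and key names:

```sql
-- Two dimension tables joined on a shared key (all names are illustrative).
SELECT c.CUSTOMER_ID,
       c.CUSTOMER_NAME,
       g.COUNTRY,
       g.CITY
FROM   DIM_CUSTOMER  c
JOIN   DIM_GEOGRAPHY g ON c.GEO_ID = g.GEO_ID;  -- join on the primary key
```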

Once the joins are done, select multiple attributes in the Details pane, right-click, and choose Add to Output. All the columns are added to the Output pane. Now click on the Activate option; you will get a confirmation message in the job log.

Now you can right click on the Attribute View and go for Data Preview.

Adding Attributes to Output Pane

Note − When a view is not activated, it has a diamond mark on it. Once you activate it, the diamond disappears, confirming that the view has been activated successfully.

Once you click on Data Preview, it shows all the attributes that have been added to the Output pane under Available Objects.

These Objects can be added to Labels and Value axis by right click and adding or by dragging the objects as shown below −

Adding Objects to Axis

SAP HANA – Packages

SAP HANA Packages are shown under Content tab in HANA studio. All HANA modeling is saved inside Packages.

You can create a new Package by Right Click on Content Tab → New → Package

Package

You can also create a sub-package under a package by right-clicking on the package name. When we right-click on a package, we get seven options. We can create HANA views − Attribute Views, Analytic Views, and Calculation Views − under a package.

Sub Package

You can also create Decision Table, Define Analytic Privilege and create Procedures in a Package.

When you right-click on a package and click New, you can also create sub-packages within it. You have to enter a package name and description while creating a package.

SAP HANA – Tables

Tables in the HANA database can be accessed from HANA Studio in the Catalog tab under Schemas. New tables can be created using the two methods given below −

  • Using SQL editor
  • Using GUI option

SQL Editor in HANA Studio

The SQL console can be opened by selecting the schema name in which the new table has to be created, using the System View SQL Editor option, or by right-clicking on the schema name as shown below −

SQL Editor

Once the SQL editor is opened, the schema name can be confirmed from the name written at the top of the editor. A new table can be created using the SQL CREATE TABLE statement −

Create column Table Test1 (
   ID INTEGER,
   NAME VARCHAR(10),
   PRIMARY KEY (ID)
);

In this SQL statement, we have created a Column table “Test1”, defined data types of table and Primary Key.

Once you write Create table SQL query, click on Execute option on top of SQL editor right side. Once the statement is executed, we will get a confirmation message as shown in snapshot given below −

Statement ‘Create column Table Test1 (ID INTEGER,NAME VARCHAR(10), PRIMARY KEY (ID))’

successfully executed in 13 ms 761 μs (server processing time: 12 ms 979 μs) − Rows Affected: 0

Execute SQL Statement

The execution message also shows the time taken to execute the statement. Once the statement is successfully executed, right-click on the Table tab under the schema name in the System view and refresh. The new table will be reflected in the list of tables under the schema name.

The Insert statement is used to enter data into the table using the SQL editor.

Insert into TEST1 Values (1,'ABCD');
Insert into TEST1 Values (2,'EFGH');

Click on Execute.

You can right-click on the table name and use Open Data Definition to see the data types of the table, and Open Data Preview/Open Content to see the table contents.
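The same checks can also be done from the SQL console. Data Preview corresponds to a simple SELECT, and the data definition can be read from HANA's TABLE_COLUMNS system view; this is a sketch against the Test1 table created above.

```sql
-- Equivalent of Open Data Preview:
SELECT * FROM Test1;

-- Equivalent of Open Data Definition: column names and data types
-- from the catalog (unquoted table names are stored in uppercase).
SELECT COLUMN_NAME, DATA_TYPE_NAME, LENGTH
FROM   SYS.TABLE_COLUMNS
WHERE  TABLE_NAME = 'TEST1';
```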

Creating Table using GUI Option

Another way to create a table in HANA database is by using GUI option in HANA Studio.

Right Click on Table tab under Schema → Select ‘New Table’ option as shown in snapshot given below.

Once you click on New Table → It will open a window to enter the Table name, Choose Schema name from drop down, Define Table type from drop down list: Column Store or Row Store.

Define data type as shown below. Columns can be added by clicking on + sign, Primary Key can be chosen by clicking on cell under Primary key in front of Column name, Not Null will be active by default.

Once columns are added, click on Execute.

Creating Table

Once you Execute (F8), Right Click on Table Tab → Refresh. New Table will be reflected in the list of tables under chosen Schema. Below Insert Option can be used to insert data in table. Select statement to see content of table.

Table

Inserting Data in a table using GUI in HANA studio

Right-click on the table name and choose the Insert option to enter data through the GUI; Open Data Preview/Open Content shows the table contents.

To use tables from one schema to create views, we should grant access on the schema to the default user _SYS_REPO, which runs all the views in HANA modeling. This can be done by going to the SQL editor and running this query −

GRANT SELECT ON SCHEMA "<SCHEMA_NAME>" TO _SYS_REPO WITH GRANT OPTION;

SAP HANA – Schema in Data Warehouse

Schemas are logical descriptions of tables in a Data Warehouse. They are created by joining multiple fact and dimension tables to meet some business logic.

A database uses the relational model to store data, whereas a Data Warehouse uses schemas that join dimension and fact tables to meet business logic. There are three types of schemas used in a Data Warehouse −

  • Star Schema
  • Snowflake Schema
  • Galaxy Schema

Star Schema

In a Star schema, each dimension is joined to one single fact table. Each dimension is represented by only one dimension table, which is not further normalized.

A dimension table contains a set of attributes that are used to analyze the data.

Example − In the example given below, we have a fact table FactSales that has primary keys for all the dimension tables and the measures units_sold and dollars_sold for analysis.

We have four dimension tables − DimTime, DimItem, DimBranch, and DimLocation.

Star Schema

Each dimension table is connected to the fact table, as the fact table holds the primary key of each dimension table, which is used to join the two tables.

Facts/measures in the fact table are used for analysis along with the attributes in the dimension tables.
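A typical star schema query against this example aggregates the measures by attributes of the joined dimensions. The table names and the measures units_sold and dollars_sold come from the example; the key and attribute column names are assumptions for illustration.

```sql
-- Units and dollars sold per branch and year (key/attribute names assumed).
SELECT b.BRANCH_NAME,
       t.YEAR,
       SUM(f.UNITS_SOLD)   AS TOTAL_UNITS,
       SUM(f.DOLLARS_SOLD) AS TOTAL_DOLLARS
FROM   FactSales f
JOIN   DimBranch b ON f.BRANCH_KEY = b.BRANCH_KEY
JOIN   DimTime   t ON f.TIME_KEY   = t.TIME_KEY
GROUP BY b.BRANCH_NAME, t.YEAR;
```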

Snowflake Schema

In a Snowflake schema, some of the dimension tables are further normalized, and the dimension tables are connected to a single fact table. Normalization is used to organize the attributes and tables of a database to minimize data redundancy.

Normalization involves breaking a table into smaller, less redundant tables without losing any information; the smaller tables are then joined to the dimension table.

Snowflake Schema

In the above example, the DimItem and DimLocation dimension tables are normalized without losing any information. This is called a Snowflake schema, where dimension tables are further normalized into smaller tables.
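Normalizing DimLocation can be sketched as splitting the city-level attributes into their own table; all column names below are assumptions used only to illustrate the idea.

```sql
-- Before: one denormalized dimension table, e.g.
-- DimLocation(LOCATION_KEY, STREET, CITY_NAME, PROVINCE, COUNTRY)

-- After normalization: city data moves to its own table, removing redundancy.
CREATE COLUMN TABLE DimCity (
   CITY_KEY  INTEGER PRIMARY KEY,
   CITY_NAME VARCHAR(50),
   PROVINCE  VARCHAR(50),
   COUNTRY   VARCHAR(50)
);

CREATE COLUMN TABLE DimLocation (
   LOCATION_KEY INTEGER PRIMARY KEY,
   STREET       VARCHAR(100),
   CITY_KEY     INTEGER   -- references DimCity, the snowflaked table
);
```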

Galaxy Schema

In a Galaxy schema, there are multiple fact tables and dimension tables. Each fact table stores the primary keys of a few dimension tables along with the measures/facts for analysis.

Galaxy Schema

In the above example, there are two fact tables, FactSales and FactShipping, and multiple dimension tables joined to them. Each fact table contains the primary keys of the joined dimension tables and the measures/facts to perform analysis.

SAP HANA – Modeling

The SAP HANA Modeler option is used to create information views on top of schemas and tables in the HANA database. These views are consumed by Java/HTML-based applications, SAP applications like SAP Lumira and Analysis for Office, or third-party software like MS Excel, for reporting, to meet business logic, and to perform analysis and extract information.

HANA modeling is done on top of the tables available in the Catalog tab under a schema in HANA Studio, and all views are saved under the Content tab under a package.

You can create new Package under Content tab in HANA studio using right click on Content and New.

All modeling views created inside one package come under the same package in HANA Studio and are categorized according to view type.

Each view has a different structure for dimension and fact tables. Dimension tables are defined with master data, and a fact table has primary keys for the dimension tables and measures like Number of Units Sold, Average Delay Time, Total Price, etc.

Fact and Dimension Table

Fact Table contains Primary Keys for Dimension table and measures. They are joined with Dimension tables in HANA Views to meet business logic.

Example of Measures − Number of unit sold, Total Price, Average Delay time, etc.

Dimension Table contains master data and is joined with one or more fact tables to make some business logic. Dimension tables are used to create schemas with fact tables and can be normalized.

Example of Dimension Table − Customer, Product, etc.

Suppose a company sells products to customers. Every sale is a fact that happens within the company and the fact table is used to record these facts.

Fact and Dimension Table

For example, row 3 in the fact table records the fact that customer 1 (Brian) bought one item on day 4. In a complete example, we would also have a product table and a time table so that we know what he bought and exactly when.

The fact table lists the events that happen in our company (or at least the events we want to analyze − number of units sold, margin, sales revenue). The dimension tables list the factors (customer, time, product) by which we want to analyze the data.
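The narrative above maps directly to a query: to see what customer 1 bought on day 4, join the fact table to its dimensions. Table and column names below are illustrative assumptions based on the example.

```sql
-- "Which items did customer 1 buy on day 4?" (names are assumptions).
SELECT cu.CUSTOMER_NAME,
       pr.PRODUCT_NAME,
       f.UNITS_SOLD
FROM   FactSales   f
JOIN   DimCustomer cu ON f.CUSTOMER_ID = cu.CUSTOMER_ID
JOIN   DimProduct  pr ON f.PRODUCT_ID  = pr.PRODUCT_ID
JOIN   DimTime     t  ON f.TIME_ID     = t.TIME_ID
WHERE  cu.CUSTOMER_ID = 1
AND    t.DAY = 4;
```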

SAP HANA – Core Architecture

SAP HANA was initially developed in Java and C++ and designed to run on only one operating system, SUSE Linux Enterprise Server 11. The SAP HANA system consists of multiple components that together provide the computing power of the HANA system.

  • The most important component of the SAP HANA system is the Index Server, which contains the SQL/MDX processor to handle query statements for the database.
  • The HANA system also contains the Name Server, Preprocessor Server, Statistics Server, and XS Engine, which are used for communication and to host small web applications, along with various other components.

SAP Hana Core Architecture

Index Server

The Index Server is the heart of the SAP HANA database system. It contains the actual data and the engines for processing that data. When SQL or MDX is fired at the SAP HANA system, the Index Server takes care of all these requests and processes them. All HANA processing takes place in the Index Server.

The Index Server contains data engines to handle all SQL/MDX statements that come to the HANA database system. It also has the Persistence Layer, which is responsible for the durability of the HANA system and ensures that the system is restored to the most recent state after a restart or system failure.

Index Server also has Session and Transaction Manager, which manage transactions and keep track of all running and closed transactions.

Index Server

Index Server − Architecture

SQL/MDX Processor

It is responsible for processing SQL/MDX transactions with the data engines that run the queries. It segments all query requests and directs them to the correct engine for performance optimization.

It also ensures that all SQL/MDX requests are authorized and provides error handling for efficient processing of these statements. It contains several engines and processors for query execution −

  • MDX (Multi-Dimensional Expressions) is the query language for OLAP systems, just as SQL is used for relational databases. The MDX Engine is responsible for handling queries and manipulating the multidimensional data stored in OLAP cubes.
  • The Planning Engine is responsible for running planning operations within the SAP HANA database.
  • The Calculation Engine converts data into calculation models to create a logical execution plan that supports parallel processing of statements.
  • The Stored Procedure Processor executes procedure calls for optimized processing; for example, it converts OLAP cubes into HANA-optimized cubes.

Transaction and Session Management

It is responsible to coordinate all database transactions and keep track of all running and closed transactions.

When a transaction is executed or failed, Transaction manager notifies relevant data engine to take necessary actions.

Session management component is responsible to initialize and manage sessions and connections for SAP HANA system using predefined session parameters.

Persistence Layer

It is responsible for the durability and atomicity of transactions in the HANA system. The persistence layer provides a built-in disaster recovery system for the HANA database.

It ensures database is restored to most recent state and ensures that all the transactions are completed or undone in case of a system failure or restart.

It is also responsible for managing data and transaction logs, and it contains the data backup, log backup, and configuration backup of the HANA system. Backups are stored as savepoints in the data volumes via a savepoint coordinator, which is normally set to take a savepoint every 5-10 minutes.

Preprocessor Server

Preprocessor Server in SAP HANA system is used for text data analysis.

Index Server uses preprocessor server for analyzing text data and extracting the information from text data when text search capabilities are used.

Name Server

The Name Server contains the system landscape information of the HANA system. In a distributed environment with multiple nodes, each having multiple CPUs, the Name Server holds the topology of the HANA system and has information about all the running components and how data is spread across them.

  • The topology of the SAP HANA system is recorded here.
  • It decreases re-indexing time, as it knows which data is on which server in a distributed environment.

Statistical Server

This server checks and analyzes the health of all components in HANA system. Statistical Server is responsible for collecting the data related to system resources, their allocation and consumption of the resources and overall performance of HANA system.

It also provides historical data related to system performance for analysis purposes, to check and fix performance-related issues in the HANA system.

XS Engine

The XS Engine helps external Java- and HTML-based applications access the HANA system with the help of the XS client, as the SAP HANA system contains a web server which can be used to host small Java/HTML-based applications.

XS Engine

XS Engine transforms the persistence model stored in database into consumption model for clients exposed via HTTP/HTTPS.

SAP Host Agent

The SAP Host Agent should be installed on all machines that are part of the SAP HANA system landscape. The SAP Host Agent is used by the Software Update Manager (SUM) for installing automatic updates to all components of the HANA system in a distributed environment.

LM Structure

The LM structure of the SAP HANA system contains information about the current installation details. This information is used by the Software Update Manager to install automatic updates on HANA system components.

SAP Solution Manager (SAP SOLMAN) diagnostic Agent

This diagnostic agent provides all the data SAP Solution Manager needs to monitor the SAP HANA system. It provides information about the HANA database, including its current state and general information.

It provides configuration details of HANA system when SAP SOLMAN is integrated with SAP HANA system.

SAP HANA Studio Repository

The SAP HANA Studio Repository helps HANA developers update the current version of HANA Studio to the latest version. The Studio Repository holds the code that performs this update.

Software Update Manager for SAP HANA

SAP Service Marketplace is used to install updates for SAP systems. The Software Update Manager for HANA helps update the HANA system from SAP Service Marketplace.

It is used for software downloads, customer messages, SAP Notes, and requesting license keys for the HANA system. It is also used to distribute HANA Studio to end users' systems.