
Wednesday, 27 December 2017

Data quality metrics for Performance Measurement

Introduction


Data quality can be measured through metrics, which in turn help to identify issues and help the performance engineer to create or modify data so that it adheres to the required quality. Data quality depends on the type of application, the type of tables/views used in the application, and so on. If the data quality metrics are not adhered to, the performance measurement is compromised.

SAP applications are used by many companies. With the availability of the SAP HANA platform, the business applications developed at SAP have undergone a paradigm shift: complex operations are pushed down to the database. The rule of thumb is therefore to get the best performance by doing as much as you can in the database. Applications developed on SAP HANA use a new data modeling infrastructure known as Core Data Services (CDS). With CDS views, data models are defined and consumed on the database rather than on the application server, and the application developer can now use various built-in functions, extensions, and so on.

Performance Test Process

The Fiori applications that are developed try to make the most of the SAP HANA platform. In S/4HANA applications, whenever the Fiori application makes a request to retrieve information, it hits the CDS views: an SQL query with the CDS view name in the FROM clause, along with the filters, is passed on to the HANA database. The query is executed in the HANA database and the result set is returned to the Fiori UI.

To measure the performance of a Fiori application for a single user, a performance test usually starts with a few dry runs, after which the performance of the application is measured. The measured performance is then compared with the defined thresholds and violations are identified.

Data Quality


For measuring the performance of an application, data quality plays a crucial role. The test system in which the performance measurement is taken should have adequate data quality. Some CDS views are used more frequently and carry a higher volume of transactions than others, so there is a need to distinguish the high-volume, frequently used CDS views. These views need to adhere to the performance standards; a lag in response is not acceptable. A lag in response may occur due to the following factors:

1. Filters are not pushed down correctly
2. Join conditions or cyclic joins degrade the performance
3. Redundant unions or joins exist in the CDS views
4. Currency conversion is not modeled appropriately
5. Execution of the CDS view generates many temporary tables, for example due to materializing fields with aggregation over a large set of rows

These factors need to be considered while designing CDS views. In addition, when the system holds very little data (fewer than 1,000 rows), performance issues are not identified at all. Inherent problems within a CDS view only become visible or detectable when the system holds at least the bare minimum amount of data implied by annotations such as Service Quality and Size.

Data Quality Metrics:


CDS views are often categorized into ‘S’, ‘M’, ‘L’, ‘XL’ and ‘XXL’. In the CDS view, the ObjectModel.usageType.sizeCategory annotation is used to define the size, based on the volume of data the view can expect.

The resource consumption on HANA is mainly driven by two factors:

◈ The set of data which has to be searched through, and
◈ The set of data which has to be materialized in order to compute the result set.

These metrics help to identify whether a sufficient number of rows exists in the HANA database. They are only an indicator of whether a performance measurement for a single-user test can be performed. If these bare-minimum criteria are not met, one will not be able to unearth the defects that may creep in over time as the data grows.

◈ Size category S should have fewer than 1,000 rows. Similarly, size category M should have fewer than 10^5 rows; for L, it is fewer than 10^7 rows.
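As a rough sketch of this check (the exact thresholds for the larger categories and the choice of minimum are assumptions based on the categories above), a row-count validation could look like:

```python
# Hypothetical helper: checks whether a table's row count is plausible for the
# declared ObjectModel.usageType.sizeCategory annotation.
# Thresholds follow the categories above: S < 10^3, M < 10^5, L < 10^7 rows.
SIZE_CATEGORY_MAX_ROWS = {"S": 10**3, "M": 10**5, "L": 10**7}

def minimum_rows_for(category: str) -> int:
    """Bare-minimum rows to make single-user measurements meaningful:
    here assumed to be the upper bound of the next-smaller category."""
    order = ["S", "M", "L"]
    idx = order.index(category)
    return SIZE_CATEGORY_MAX_ROWS[order[idx - 1]] if idx > 0 else 0

def data_quality_ok(category: str, row_count: int) -> bool:
    # Enough rows to expose issues, but still within the category's bound.
    return minimum_rows_for(category) <= row_count < SIZE_CATEGORY_MAX_ROWS[category]
```

For example, `data_quality_ok("L", 100)` returns False: 100 rows is far too little for an 'L' view, which matches the AENR example discussed later in this post.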

SQL statement to retrieve the table information:


A report program can be written that takes a CDS view name as user input. With the statement below, it is first determined whether the given input is a CDS view at all.

SELECT OBJECTNAME FROM DDLDEPENDENCY WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = '<CDS_NAME>' INTO TABLE @DATA(ENTITYTAB).

Here <CDS_NAME> is to be filled with the CDS view name.

For example:


SELECT OBJECTNAME FROM DDLDEPENDENCY WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = 'A_CHANGEMASTER' INTO TABLE @DATA(ENTITYTAB).

“API_CHANGEMASTER” is a whitelisted service listed in the SAP API Hub. When this OData service is invoked by any client-side application (Fiori or a custom application), it internally hits the “A_CHANGEMASTER” CDS view.

When we execute the above query, we retrieve the object name; in this case it is a valid and activated CDS view. Once we have the object name, we can get the number of tables used by the CDS view.

When we want to retrieve all CDS views whose names start with ‘A_C’, it can be done as follows:

SELECT DISTINCT SRC~DDLNAME, TADIR~DEVCLASS AS PACKAGE, TADIR~AUTHOR AS AUTHOR
  FROM TADIR INNER JOIN DDDDLSRC AS SRC ON TADIR~OBJ_NAME = SRC~DDLNAME
  WHERE TADIR~PGMID = 'R3TR' AND TADIR~OBJECT = 'DDLS'
    AND SRC~AS4LOCAL = 'A'
    AND SRC~DDLNAME LIKE 'A_C%'
  INTO TABLE @DATA(DDLS).

IF LINES( DDLS ) > 0.
  SELECT OBJECTNAME FROM DDLDEPENDENCY FOR ALL ENTRIES IN @DDLS
    WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = @DDLS-DDLNAME
    INTO TABLE @DATA(ENTITYTAB).
ENDIF.

Now loop through the tables to find the number of rows present in the database. For the CDS view's size category, this is the starting point for assessing the quality of the data. To be more stringent, depending on the type of application, we can also check the number of distinct entries present in the table. This helps to identify whether enough data is present. If there are too few entries, the quality engineer must create data before taking the performance measurement.

IF SY-SUBRC = 0.
  CREATE OBJECT LR_VISITOR TYPE CL_DD_DDL_META_NUM_COLLECTOR EXPORTING DESCEND = ABAP_TRUE.
  DATA L_NAME TYPE STRING.
  DATA TABLENAME TYPE STRING.
  DATA COUNT TYPE I.

  LOOP AT ENTITYTAB INTO DATA(LV_ENAME).
    L_NAME = LV_ENAME-OBJECTNAME.
    LR_VISITOR->VISITDDLSOURCE( IV_DSNAME = L_NAME ).
    DATA(LR_NUMBERMAP) = LR_VISITOR->GETNUMBERMAP( ).
    READ TABLE LR_NUMBERMAP ASSIGNING FIELD-SYMBOL(<CDS_VIEW>)
      WITH KEY ENTITY = LV_ENAME-OBJECTNAME.
    IF SY-SUBRC NE 0.
      CONTINUE.
    ENDIF.

    " Count the rows of every table used by the CDS view
    LOOP AT <CDS_VIEW>-NUMBERS-TAB_INFO-TABLE_TAB ASSIGNING FIELD-SYMBOL(<TAB_INFO>).
      COLLECT <TAB_INFO> INTO LR_TABS.
      TABLENAME = <TAB_INFO>-TABNAME.
      SELECT COUNT(*) FROM (TABLENAME) INTO COUNT.
    ENDLOOP.
  ENDLOOP.
ENDIF.

Here we get the number of entries present in each table. In the above example there is only one table, AENR. Querying the number of entries in AENR with COUNT(*) returns 100 rows. This is too little, as the size category annotation is specified as ‘L’; the bare minimum number of rows required before starting with the performance measurement lies between 10^5 and 10^7.

Tuesday, 5 September 2017

Unwanted Service in HANA Studio for SYSTEMDB

I ran into a situation where an index server was shown in HANA Studio under the HANA SYSTEMDB, although upon checking the services at OS level, no such service existed.

Yet in HANA Studio the database was always shown as a stopped database. It is good to get rid of such services, which can cause false alerts or mislead the technical team. It might happen for a tenant DB as well.

This blog will help you understand why such an incident happens and how to rectify it.

Context: when we reset credentials in the SAP HANA database without following the recommended procedure, another index server entry is added and a rogue service is created. Below are the steps to follow to reset credentials:


Usually we do not follow steps 1 to 3 and directly reset the credentials, which results in the creation of a rogue service. Below are the commands to identify a rogue service in the HANA system database.

Execute "select * from m_volumes" to check the available volumes.


Under the SystemDB, execute "select * from M_TOPOLOGY_TREE where path like '/host/<Host name>/indexserver'" to check the available index servers, including rogue ones, which you can see under the SystemDB.


Every index server is assigned a volume, which can be identified with the command "Select * from M_TOPOLOGY_TREE where Path LIKE '/volumes%'".


Under the Name attribute above, numbering like 4:2, 1, 5:2 and so on should match the available volumes. An entry such as -1:2, which has no matching volume, indicates a rogue volume.
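The matching of index server entries against available volumes can be sketched in plain Python. The service names and the -1 marker below follow the observation above; treat this as an illustration of the check, not an official tool:

```python
# Hypothetical sketch: an indexserver entry whose volume id is negative (or
# absent from the volume list) has no matching entry in m_volumes, i.e. it is
# a rogue service.
def find_rogue_services(indexserver_volumes, available_volume_ids):
    """indexserver_volumes: list of (service_name, volume_id) pairs taken
    from the topology tree; available_volume_ids: ids seen in m_volumes."""
    return [name for name, vol in indexserver_volumes
            if vol < 0 or vol not in available_volume_ids]

# Invented example data mirroring the situation described above
services = [("indexserver:30003", 1), ("indexserver:30040", -1)]
rogues = find_rogue_services(services, {1, 2, 3})
# rogues now contains only the entry with the unmatched -1 volume id
```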


So, if more than one service of a particular type is running on a host, you can remove the superfluous/rogue service(s) to streamline the use of database resources.

Prerequisites

◉ You are logged on to the system database and have the system privilege DATABASE ADMIN.
◉ The database is online.
◉ The service to be removed is running.
◉ Automatic log backup must be enabled.

Rogue Service Removal

With the help of the steps mentioned above, you can identify the rogue service running on the HANA database; in our case it is the one with volume -1:2. Use the query below to remove the rogue service from the system database.

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') UNSET ('/host/<Host Name>/indexserver', '<PORT>') WITH RECONFIGURE

After executing the above query, refresh the page and check that the rogue service is deleted.


Execute query to confirm rogue service removal.


Remove the volume ID with:

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini', 'system') UNSET ('/volumes', '<Volume Name>') WITH RECONFIGURE;

Execute "Select * from M_TOPOLOGY_TREE where Path LIKE '/volumes%'" again to ensure the removal is done.


After removal of the rogue service from the system database, make sure a database backup is taken successfully.

Saturday, 2 September 2017

HANA 2.0 Doc Store Service

Overview


HANA has so far been excellent at providing row and column stores. But, as they say, there is always scope for improvement, and SAP proves it again.

SAP HANA now also includes a DocStore. Not unlike MongoDB, the SAP HANA JSON Document Store enables the developer to store and manage JSON documents (artifacts) with support for native operations on JSON, including filtering, aggregation, and joining JSON documents with HANA relational tables. We are not talking about storing JSON as a CLOB in a regular database column, in other words the quick and dirty way.

Doc store service:


The Document Store is an optional feature of the SAP HANA database, and you have to enable the docstore operating system process per tenant database.
The DocStore does not have a pre-determined SQL port; all communication is routed through the regular indexserver (database engine) process.

Procedure to Setup


As of HANA 2.0 SPS01, systems are multi-container (MDC) only, so only the multi-database procedure is described here.

For multi-database instances, there can be 0 (none) or 1 (one) DocStore per tenant database; you can enable the DocStore in a database tenant by running the following command as administrator in the SQL console:

ALTER DATABASE <database> ADD 'docstore'


You can also add it at a particular hostname and port:
ALTER DATABASE <SID> ADD 'docstore' AT LOCATION '<hostname>:<available port>'

You can find the already used ports with the command below:

SELECT DATABASE_NAME, SERVICE_NAME, PORT, SQL_PORT, (PORT + 2) HTTP_PORT FROM SYS_DATABASES.M_SERVICES 


Removing Service


ALTER DATABASE <SID> REMOVE 'docstore' AT LOCATION '<hostname>:<port>'

Operations on JSON document


Log in to the tenant DB using HANA Studio, hdbsql, or HANA Cockpit; we just need an SQL editor, so use whichever of these is most comfortable.


Run queries to perform operations on the JSON format. If you are a developer or ABAPer, it will be very easy to run the queries. If you know JSON, it will be easy to work in HANA as well: select, update, delete and rename, to name a few.


Here is an example using hdbsql as well, running a delete query as below:

DELETE FROM CUSTOMERS


You can also rename JSON documents, similar to running SQL queries.
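The DocStore operations mentioned above (filtering, aggregation, deletion) can be illustrated with plain Python over JSON documents; the collection name and fields below are invented for the example and do not come from an actual DocStore:

```python
import json

# A tiny in-memory "collection" of JSON documents (hypothetical data)
customers = [json.loads(d) for d in (
    '{"name": "Ana", "city": "Berlin", "orders": 3}',
    '{"name": "Ben", "city": "Berlin", "orders": 1}',
    '{"name": "Cleo", "city": "Walldorf", "orders": 5}',
)]

# Filtering: analogous to SELECT * FROM customers WHERE "city" = 'Berlin'
berlin = [c for c in customers if c["city"] == "Berlin"]

# Aggregation: analogous to SELECT SUM("orders") FROM customers
total_orders = sum(c["orders"] for c in customers)

# Emptying the collection: analogous to the DELETE FROM CUSTOMERS above
customers.clear()
```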

In this way SAP has added another feature to HANA, and you can now store your documents directly in HANA using the DocStore service.

Thursday, 24 August 2017

Using Predictive Analytics and Python on SAP Cloud Platform HANA database - Part 2

This blog is a continuation of the previous blog, which focused on connecting on-premise Predictive Analytics software with a HANA database on the Cloud Platform. Python is sometimes used to implement data science in conjunction with Predictive Analytics.

In this blog, I will show you how to connect a Python program to a HANA database available on SAP Cloud Platform. The process to connect to the database is slightly different when it is on SAP Cloud Platform.

I am going to use the same “hcpta” HANA database used in the previous blog.


I have set up a Cloud Connector, which establishes a connection to the account in the Cloud Platform. I have also registered a service channel for the hcpta database on port 39815.


I have installed Python on my laptop.


To connect to the HANA database, I am using a Python client (pyhdb) which is available on GitHub.

The installation steps are documented there, and you can follow them to prepare Python.


The GitHub repository contains sample code which explains how to use each of the methods.

Below is a sample program which creates a table and inserts a record. I have used localhost as the Cloud Connector is also installed on the same laptop.

import pyhdb

# Connect through the Cloud Connector service channel (localhost:39815)
connection = pyhdb.connect(
    host="localhost",
    port=39815,
    user="<HANA_DB_USER>",
    password="<HANA_DB_PASSWORD>"
)
cursor = connection.cursor()
cursor.execute('CREATE TABLE PYHDB_TABLE("NAMES" VARCHAR (255) null)')
print("Table PYHDB_TABLE created")
cursor.execute("INSERT INTO PYHDB_TABLE VALUES('Hello Python')")
print("Record inserted into PYHDB_TABLE")
connection.commit()
connection.close()

After executing this script, I could see the table created in my HANA database in the Cloud Platform.

Below is the new table which is created in my schema along with the record.


This shows how you can connect Python to a HANA database on Cloud Platform and perform DDL/DML operations.

Wednesday, 23 August 2017

Using Predictive Analytics and Python on SAP Cloud Platform HANA database - Part 1

I was recently working with a customer who was interested in doing Predictive Analytics on top of the HANA database they had recently subscribed to on SAP Cloud Platform. They already have an on-premise server for the Predictive Suite and have been using its tools against an on-premise HANA database. In this blog, I want to share my experience to highlight how easy it is to do the same on a HANA database on the Cloud Platform.


Once you have a HANA database subscribed in your Cloud Platform account, it's important to note that you will also need an additional subscription to the Predictive services on the Cloud Platform. The Predictive service on the Cloud Platform offers REST-based APIs which can be used in custom applications that you build on the Cloud Platform.


In this scenario, we are not going to leverage the REST APIs as we are going to use the on-premise PA Suite to handle complex use cases.

Once you have subscribed to Predictive Service, you can navigate to your database and click on “Install Components”


The system will give you a self-service option to install the APL libraries you want. Note that it is recommended to have the version of APL match the version of your on-premise PA suite.


Once the APL libraries are installed on your HANA database, the next thing to do is to setup your Cloud Connector.

The next step would be select “On-premise to Cloud” option within the Cloud Connector to setup a service channel for your database connection.

In the example below, I have given the local instance number “98” for my HANA instance “hcpta”.


After you save the settings, you can now access the hcpta instance using the host name where the Cloud Connector is installed and Port = 39815. In my example, I have installed the Cloud Connector on my laptop and hence will refer to it as localhost.
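The mapping from instance number to port follows the common HANA convention of 3<NN>15 for the indexserver SQL port, which is an assumption on my part but is consistent with instance 98 and port 39815 in this example:

```python
def hana_sql_port(instance_number: int) -> int:
    """SQL port of the indexserver: 3<NN>15, where NN is the
    two-digit instance number (assumed naming convention)."""
    return int(f"3{instance_number:02d}15")

# Instance "98" from the service channel above
port = hana_sql_port(98)   # 39815, the port used throughout this blog
```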


Search for ODBC Data Sources in your programs and, under “System DSN”, maintain a new data source connection. In the screen below, I have maintained one with the name “HCP”.


In the connection details, I have provided localhost:39815. When I test the connection, it asks for the HANA DB user credentials and reports success if everything works fine. I have the Cloud Connector, ODBC and PA software all on the same laptop.


For demonstration purposes, I have created a schema ADM_DEMO which holds demo data on banking customers and their transactions. The transaction table has millions of records which we can use for predicting customer churn.


I am going to show how to get started with Predictive Analytics (on-premise) and connect to a HANA DB on SAP Cloud Platform. I am not an expert in PA; I am just showing a very basic scenario which creates a table back in HANA. I would recommend using Predictive Analytics version 3.2.

Launch the PA software on your laptop and click on “Create Data Manipulation” under Data Manager.


In the Data sources, select the one created in ODBC and provide the HANA DB login credentials.


Select the Table by browsing through the available schemas.


I have taken the Disposition table to begin with.


I can navigate to the Views tab and explore the data of this table.


Next, I am using the Merge option to connect with the Accounts table.


Account_ID is the key which links both tables. I repeat the same steps for adding the Client table.


Once I have added all the tables and linked the keys, I can view the SQL which the system has generated.


I can also view the contents of this Dynamic SQL within PA.


I can apply filters, for example to consider only Client Type with a value of “Owner”.


Once I have put the relevant filters in place and set the required aggregations, I can save the data back into HANA as a view. In the example below, I have given the table the name PA_CHURN.


After execution of this step, I can now view the processed table in the HANA database.


In the next blog, I will show you how to connect a Python program to perform data science. This has also been coming up as a frequent question, especially among customers who are implementing Predictive Analytics.

Tuesday, 22 August 2017

HANA Window Functions: Delivery Block Duration Example

Introduction


I have been working with databases for ages and always thought they had few limitations, except for the ability to calculate across rows. In basic SQL it is not possible to refer to values in other rows, which makes some calculations very hard or even impossible.


Working a lot with the SAP HANA database now, I learned about window functions, which really opened up a lot of new possibilities.

Window Functions


You can regard a window function as an in-line aggregation: you get the result of the aggregation function on each line. Some simple examples based on the table below show the idea and the syntax of a window function:


Let’s use a window function now to sum the total revenue of each customer. Here we use the well-known SUM() function and specify the aggregation level with the ‘over(partition by …)’ extension:

select "Customer", "Period", "Revenue",
sum("Revenue") over (partition by "Customer") as "TotalCustomerRevenue"
from "NSLTECH"."CustomerPeriodRevenue"

Which results in the following:


This would be possible without window functions by running a subquery which does the aggregation on customer level and join it to the original table.

If we add an ‘order by’ clause, we actually get a running sum over the periods:

select "Customer", "Period", "Revenue",
sum("Revenue") over (partition by "Customer" order by "Period") as "TotalCustomerRevenue"
from "NSLTECH"."CustomerPeriodRevenue"


Calculating Delivery Block Duration


A common question from the business is to analyze the time a delivery block (or any other block) has been active. This is a nice example we can solve with the window function LAG. The LAG function returns the value of a specific field from the previous row in the partition.

Let’s look at some example change documents regarding delivery blocks in the CDPOS/CDHDR table of SAP:


Here you see that one document has been blocked and unblocked twice with the same code (07). The records where VALUE_OLD has a value and VALUE_NEW is empty are the moments the blocks were removed. Taking these records as the basis, we would like to join the corresponding records at which the block was set.

However, this is not easily done with a subquery, as you can't just look at similar keys and block values because the document has been blocked twice. You actually need to find the set record closest to the unset. This is where the window function LAG comes in.

First we add a couple of helper columns to the raw data:

◉ ChangeDate: to_seconddate(concat(UDATE, UTIME), 'YYYYMMDDHH24MISS')

◉ BlockCode: case VALUE_OLD when '' then p.VALUE_NEW else p.VALUE_OLD end

◉ BlockChange: case VALUE_OLD when '' then 'Block' else 'Unblock' end

Based on this input we calculate the previous ChangeDate for all records using the LAG function:

LAG("ChangeDate") over (partition by TABKEY, TABNAME, FNAME, "BlockCode" order by "ChangeDate") As "PreviousDate"

The complete query:

select
  TABKEY, "BlockCode", "BlockChange", "ChangeDate",  VALUE_OLD, VALUE_NEW,
  LAG("ChangeDate") over (partition by TABKEY, TABNAME, FNAME, "BlockCode" order by "ChangeDate") As "PreviousDate"
from (
  select
     p.MANDANT, p.CHANGENR, p.TABKEY, p.TABNAME, p.FNAME, h.UDATE, h.UTIME,  p.VALUE_OLD, p.VALUE_NEW,
     case p.VALUE_OLD when '' then 'Block' else 'Unblock' end As "BlockChange",
     case  p.VALUE_OLD when '' then p.VALUE_NEW else p.VALUE_OLD end As "BlockCode",
     to_seconddate(concat(UDATE, UTIME), 'YYYYMMDDHH24MISS') As "ChangeDate"
  from SAP.CDPOS p
     inner join SAP.CDHDR h ON p.MANDANT = h.MANDANT and p.OBJECTCLAS = h.OBJECTCLAS and p.OBJECTID = h.OBJECTID and p.CHANGENR = h.CHANGENR
  where fname= 'LIFSK'
)
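LAG's behavior, returning the previous row's value within the partition ordered by date, can be mimicked in Python to sanity-check results. Field names mirror the query above; the change records are invented:

```python
from collections import defaultdict

# (TABKEY, BlockCode, BlockChange, ChangeDate) - invented change records
changes = [
    ("0001", "07", "Block",   "2017-01-01 08:00:00"),
    ("0001", "07", "Unblock", "2017-01-03 09:00:00"),
    ("0001", "07", "Block",   "2017-02-01 10:00:00"),
    ("0001", "07", "Unblock", "2017-02-02 10:00:00"),
]

def add_previous_date(changes):
    """Mimics LAG(ChangeDate) OVER (PARTITION BY TABKEY, BlockCode
    ORDER BY ChangeDate): append each row's predecessor timestamp."""
    parts = defaultdict(list)
    for rec in sorted(changes, key=lambda r: r[3]):
        parts[(rec[0], rec[1])].append(rec)
    out = []
    for recs in parts.values():
        prev = None  # first row in a partition has no predecessor
        for rec in recs:
            out.append(rec + (prev,))
            prev = rec[3]
    return out

rows = add_previous_date(changes)
# For each 'Unblock' row, the appended PreviousDate is the matching 'Block'
# timestamp, so the block duration is ChangeDate minus PreviousDate.
```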

Which now results in


If you select only the ‘Unblock’ records from this result and calculate the difference between the ChangeDate and the PreviousDate (the moment the block was set), you get the duration of the block.

Tuesday, 15 August 2017

5 Steps To Achieve a Successful Migration to SAP HANA

SAP HANA is the most important technology innovation from SAP in the last decade, and almost all SAP products released after SAP HANA have an extensive ability to integrate with this platform.


SAP HANA is not just a traditional database; it is an in-memory data platform, deployable on premise or in the cloud, that empowers you to accelerate business processes, deliver more business intelligence with advanced innovations and capabilities, run your business faster, and simplify your IT environment.

“HANA is attached to everything we have.” Bill McDermott, CEO, SAP.
When implemented properly, SAP HANA delivers outstanding results in terms of performance, integration capabilities, analytic intelligence and data processing, and improves the ROI of your SAP landscape with faster time to value. This article is designed to help you plan and execute a successful technical migration to the SAP HANA platform, and aims to provide a high-level overview of the most important, yet often overlooked, steps, with a focus on reducing the risk of your organization's journey to SAP HANA.

STEP 1: CORRECTLY SIZE YOUR HANA LANDSCAPE


Sizing your SAP HANA landscape is a vital and fundamental step when creating your technical project plan, and helps you realize the maximum benefit from your investment while reducing long-term total cost of ownership. Over-provisioned sizing leads to excess capacity and bloated hardware, while under-provisioning can cause unexpected delays that increase the cost of operational performance.

For the most part, sizing of the SAP HANA database is based on main memory, which is determined by the amount of actual data stored in memory. Since data is compressed in HANA and the compression result depends on the scenario, it is not easy to predict the amount of memory needed. Therefore, memory sizing for SAP HANA should be performed using the SAP Quick Sizer tool, the relevant SAP Notes, and SAP sizing reports.

Correct sizing of SAP HANA consists of three main steps:

◉ Memory sizing for static and dynamic data
◉ Disk sizing for persistence storage
◉ CPU sizing for transactions, queries and calculations
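As a back-of-the-envelope illustration only (real sizing must use the Quick Sizer, SAP Notes and sizing reports, as stated above, and both factors below are assumptions that vary strongly per scenario), the shape of a memory estimate might look like:

```python
def rough_hana_memory_gb(source_data_gb: float,
                         compression_factor: float = 4.0,
                         work_space_ratio: float = 1.0) -> float:
    """Toy estimate: compressed static data plus dynamic working space.
    compression_factor and work_space_ratio are illustrative assumptions,
    not SAP-published values."""
    static = source_data_gb / compression_factor   # compressed data at rest
    dynamic = static * work_space_ratio            # temporary/work memory
    return static + dynamic

# e.g. 2 TB of source data with the default assumptions
estimate = rough_hana_memory_gb(2048)   # 1024.0 GB
```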

STEP 2: CHOOSE THE RIGHT PLATFORM AND MIGRATION STRATEGY


You can migrate to SAP HANA quickly and seamlessly by choosing the right platform that best fits your business needs, budget, and resources. SAP HANA can be deployed on premise for maximum control and reduced risk, or in the cloud for increased flexibility, scalability and faster time to value.

With on-premise deployment, you can choose a certified SAP HANA appliance from one of SAP's hardware partners. The preconfigured appliance with preinstalled software (by the hardware vendor) will help you harness the real-time power of the SAP HANA in-memory platform behind your own firewall, and with the preconfigured approach you get a solution validated by both SAP and the hardware vendor. On the other hand, SAP HANA Tailored Data Center Integration (TDI) provides you more flexibility: you can significantly reduce infrastructure costs and simplify your SAP HANA integration by leveraging existing hardware and operations in your own data center.

There is a variety of cloud deployment scenarios. SAP has its own private cloud offering called SAP HANA Enterprise Cloud, which includes an SAP HANA software license, the underlying cloud infrastructure, and SAP-managed services. Public cloud IaaS offerings allow you to bring your own SAP HANA license to run on third-party public cloud providers: Amazon Web Services, Google Cloud Platform, IBM Bluemix Cloud Platform and Microsoft Azure.


After you decide on your HANA deployment scenario, you need to select the most effective migration strategy to reduce unforeseen problems and avoid long system downtime during the technical migration, so you can realize faster time to value.

Classical migration is the first and, so far, most widely used approach for OS/DB migration. It is essentially a heterogeneous system copy using the classical migration tools such as SWPM, R3load and Migration Monitor. If your system does not require a version update to run on SAP HANA and you only need to perform the actual migration (for instance, you are migrating your Business Suite or BW system to SAP HANA as-is, without any additional components), the classical migration approach is probably the best way.

Another option is the Database Migration Option (DMO) of SUM, which combines system update, technical migration and Unicode conversion (if required) in one optimized procedure that takes an ABAP-based SAP system from anyDB to SAP HANA. DMO of SUM offers simplified migration steps (and hence fewer opportunities for error), less manual effort than classical migration, and a single business downtime window that can be further optimized depending on the scenario. The source database remains consistent with this approach: it continues to run and is not modified, so it can be reactivated with moderate effort in case of a fallback. Even though it is a newer method, it is a safe option.
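Whichever strategy you pick, the length of the downtime window ultimately comes down to how much data you move and how fast you can move it. The following sketch shows the arithmetic; the migration rate is entirely environment-specific and must be measured in your own test runs, so the figures here are invented for illustration.

```python
# Hypothetical back-of-the-envelope estimate of the export/import phase.
# migration_rate_gb_per_hour must be measured in your own environment;
# the numbers below are invented for illustration only.

def downtime_hours(db_size_gb: float, migration_rate_gb_per_hour: float) -> float:
    """Time to move the database at a sustained throughput."""
    return db_size_gb / migration_rate_gb_per_hour

# Example: 2 TB database at a measured 256 GB/h throughput
print(downtime_hours(2048, 256))  # prints 8.0
```

This also makes the case for the next step: every gigabyte you cleanse before the migration shortens the downtime window directly.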

If you have many technical issues with your existing system and would like to follow a greenfield approach with selective data migration to SAP HANA, it might be better to perform a fresh installation on SAP HANA. This option can also be efficient for companies that have mostly standard SAP processes and plan to move to S/4HANA Cloud with a relatively small data footprint.

STEP 3: CLEANSE YOUR DATA


SAP HANA offers advanced analytics and the opportunity to optimize your business operations by analysing large amounts of data in real time. It runs faster than the traditional databases we have known so far, but you know what is better?

Run it even faster with reduced cost!

Data cleansing is one of the critical activities you must perform before bringing your SAP systems onto HANA, and unfortunately it is also the most overlooked step. Cleansing your data brings three important benefits:

◉ A reduced data footprint also reduces your infrastructure, hardware and SAP HANA licensing costs
◉ A reduced data size allows you to perform the technical migration with less business downtime
◉ By keeping only necessary, high-quality data in your system, SAP HANA performs even better after the technical migration
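A practical first move is to find out where the data actually sits, so you can target the biggest tables for deletion or archiving. The sketch below ranks candidate tables by size; the table names and sizes are invented for illustration, and in practice you would collect the real figures from your own system's database statistics.

```python
# Hypothetical example: rank the biggest tables as cleansing candidates.
# Names and sizes are invented; collect real figures from your own system.

table_sizes_gb = {
    "DBTABLOG": 210.0,  # table change logs (technical data: usually safe to remove)
    "BALDAT": 120.0,    # application log data
    "EDID4": 95.0,      # IDoc data records
    "VBAK": 40.0,       # sales documents (business data: archive, don't delete)
}

# Biggest tables first: these give the largest footprint reduction per effort
for name, size in sorted(table_sizes_gb.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {size} GB")
```

Note the distinction in the comments: technical logs can often simply be removed, while business data such as sales documents should be archived, never deleted.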

[Figure: recommended data cleansing process]

The figure above provides guidance for a proper data cleansing process. Keep in mind that the activities listed there may change depending on your business needs and internal processes, so make sure you apply the data cleansing method most suitable for your own business. It is generally safe to remove technical data, application logs and system logs, and you can always archive aged business data to reduce the size of your database.

STEP 4: APPLY HIGH IMPLEMENTATION STANDARDS


It is generally a bad idea to cut corners during a technical migration project; allocate the time to get it right. Do not take shortcuts during this phase of the project, and focus on keeping high standards in your activities. Be methodical and understand all required activities, from planning to cutover.

Your technical team should have plenty of experience with the technical migration guidelines, relevant SAP Notes and best practices. If things go wrong during the technical migration, you will then be better prepared and less likely to miss the solution.

Make sure your source systems are ready for their journey to SAP HANA. Even if your source system version is supported for the migration, it is better to be on the latest release: SAP delivers fixes and solutions to common problems with every release and support package. Make sure your system has these in place before starting a migration project.


Don’t take unnecessary risks, especially when migrating your database. You can never have too many backups; make sure you have full backups and archive logs so that you always have a recent restore point. Eliminating these common risks ensures a smooth technical migration.
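One simple safeguard is to check, before any risky step, that your last full backup is recent enough to serve as a restore point. Here is a minimal sketch of that check; it assumes you track backup completion timestamps yourself, and the dates used are invented for illustration.

```python
# Minimal sketch, assuming you record full-backup completion timestamps
# yourself; flags a database whose last full backup is older than a threshold.

from datetime import datetime, timedelta

def backup_is_recent(last_full_backup, max_age=timedelta(days=1), now=None):
    """Return True if the last full backup is within max_age of now."""
    now = now or datetime.now()
    return now - last_full_backup <= max_age

ref = datetime(2017, 12, 27, 12, 0)
print(backup_is_recent(datetime(2017, 12, 27, 2, 0), now=ref))  # prints True
print(backup_is_recent(datetime(2017, 12, 24, 2, 0), now=ref))  # prints False
```

A check like this belongs in your cutover runbook: if it fails, take a fresh full backup before proceeding with the migration step.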

As J.R.R. Tolkien said: “Short cuts make long delays.”

If you want your project to be a success, then do not take shortcuts and always keep high standards.

STEP 5: DO A PROOF OF CONCEPT


You need to perform a “Proof of Concept” in a sandpit environment first, so you can validate your migration process to the SAP HANA platform. Do this by copying your SAP production system to create a sandpit environment and performing the actual migration on that system first.

This is a crucial step for any technical migration process, and I can assure you that the cost of duplicating the environment is well worth it, because:

◉ It gives you an idea of how long it takes
◉ You can identify possible issues in the sandpit system and reduce the project risk
◉ It helps the business realize the power of SAP HANA
◉ It helps you make important decisions in your project plan and improve overall project productivity
◉ You will have more testing and validation time in a non-critical environment
◉ It provides a sense of momentum throughout the project
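The first benefit above, knowing how long it takes, is worth capturing in a structured way. A simple sketch: record the duration of each phase during the sandpit run, then sum them to project the production cutover window. The phase names and hours here are invented for illustration.

```python
# Hypothetical sketch: phase durations measured during the sandpit migration,
# used to project the production cutover window. Values are invented.

sandpit_phase_hours = {
    "export": 6.5,
    "import": 7.0,
    "post-processing": 2.5,
    "validation": 4.0,
}

total = sum(sandpit_phase_hours.values())
for phase, hours in sandpit_phase_hours.items():
    print(f"{phase}: {hours} h")
print(f"Projected cutover window: {total} hours")
```

Production runs rarely match the sandpit exactly, so add a contingency margin on top of the projected total when you plan the business downtime.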


As Aristotle once said: “For the things we have to learn before we can do them, we learn by doing them.”

WHAT NEXT?


If you have gotten this far, you are now ready to start planning your technical migration to the SAP HANA platform.

Implement these 5 steps and your technical migration should go smoothly, with far fewer unexpected issues.

I wish you a successful transition into the new SAP HANA platform.