Creating HANA Database

In my previous post we created the SAP HANA Cloud developer account. Now we will see how to create a HANA MDC (Multitenant Database Containers) database.

After logging in to the SAP HCP Cockpit, click “Databases & Schemas” in the navigation bar.

Click “New” to create a database and enter a “Database ID”; any name can be chosen here.


Give a password for the “SYSTEM” user; this is required to log in to the database in the next step.

Now press “Save”. Provisioning takes some 15–20 minutes.

We can see the progress in the “Events” tab in the navigation bar. Once the event “Database started successfully” is raised, go to the “Overview” tab.

The Overview tab will now show the status as “STARTED”.

Now we need to configure a schema in the database we created; for that we need a dedicated user.

Let’s create the database user first, since the SYSTEM user should not be used for regular database access.

Creating DB user

Click on the “Administration Tools: SAP HANA Cockpit” link, which opens a Fiori application (the HANA Cockpit) in a new window.

Then give the user name “SYSTEM” and the password set in the previous step.

It will say you don’t have permission; don’t worry, these errors are expected on a trial account. Click “OK”.

Now in the Fiori Launchpad click the “Manage Roles and Users” tile, which opens a new Fiori app (SAP HANA Web-based Development Workbench: Security).


In the above screen, right-click on “Users” and select “New User” from the context menu.

Enter the user name (the SAP standard is ALL CAPS), a temporary password (the user must change it on first login), and click Save in the main menu as in the previous screenshot.


Adding roles to the user

Click the “+” icon in the “Granted Roles” tab.


Search for “ide” as below, select all, and press “OK”.


Do the same to add one more role, “CONTENT_ADMIN”.

With that, we have created the HANA MDC database and added a user that has the roles to access and modify the database through the SAP HANA Web-based Development Workbench.

Confirm that the user and roles are correct.

Log out from the Security app (yes, we have been working as the SYSTEM user until now).

Log in with the newly created user and the temporary password given while creating it.

Once logged in, it will ask you to change the user’s password.



SAP HANA Cloud Developer Account Creation

SAP provides a free developer trial account for the cloud platform, which comes with a few limitations:

  • Only one Java application can be deployed.
  • The HANA DB is stopped every day.
  • After 7 days the HANA DB is removed.

If you are OK with these limits, go ahead and try it. Otherwise, buy a licence from SAP 😉

While creating the account, go through this link:

HCP Developer Page

Otherwise BETA features will not be enabled.


In my case I created the SAP account first and only then started using the SAP HANA Cloud account, so I cannot use Tomcat 8, which was BETA while writing this article.

On the HCP Developer Page, click “Sign up for free account” > “Try now”.

That will take you to the login page.

Click “Register” and enter your details.

An activation link will be sent to the mail address given while registering.

Open the mail and activate the account.

After that you can log in using the SAP HCP Cockpit link:

SAP HANA Cloud Platform Cockpit

Import SHP to HANA Spatial

To load a shapefile into HANA you need to perform the four simple steps below:

  1. Download PuTTY and PSCP.
  2. Zip the shapefile, then copy it from your local machine to the HANA host using PSCP:
    C:\<path to PSCP directory>> pscp.exe <source file> <OS_username>@<HANA_server_name>:<destination folder>
  3. Log in to your server using PuTTY and unzip the files.
  4. Import the shapefile into HANA by running the command below:

IMPORT "Schema_Name"."Table_Name" AS SHAPEFILE FROM 'path to shape file'
        Note: Don't give the extension of the shapefile in the path; just mention its name.



CSS Position for divs

/* full-page root (selector restored) */
html, body {
 height: 100%;
 width: 100%;
 overflow: hidden;
 margin: 0px;
}

/* Make the container div's position relative */
.container {
 position: relative;
 width: 100%;
 height: 100%;
 background: white;
}

/* The divs which need positioning should be absolute;
   give them top/left/right/bottom values to place them */
.widgetPlacer {
 position: absolute;
}

.top-left { top: 0%; left: 0%; }
.top-right { top: 0%; right: 0%; }
.top-middle { top: 0%; left: 50%; }
.mid-left { top: 50%; left: 0%; }
.mid-right { top: 50%; right: 0%; }
.bottom-left { bottom: 0%; left: 0%; }
.bottom-right { bottom: 0%; right: 0%; }
.bottom-mid { bottom: 0%; left: 50%; }

<!DOCTYPE html>
<div class="container">
 <div class="top-right widgetPlacer">Top right</div>
 <div class="top-middle widgetPlacer">Top Mid</div>
 <div class="top-left widgetPlacer">top left</div>
 <div class="mid-left widgetPlacer">Mid left</div>
 <div class="mid-right widgetPlacer">mid right</div>
 <div class="bottom-left widgetPlacer">bottom left</div>
 <div class="bottom-mid widgetPlacer">bottom middle</div>
 <div class="bottom-right widgetPlacer">bottom right</div>
</div>

JDBC Transaction and Locks

What anomalies can occur without proper transaction and locking?

When we deal with reading and modifying data, we may face some dilemmas regarding data integrity and validity. These dilemmas arise when database operations collide with each other; for example, two write operations, or a read and a write operation, on the same data.

These anomalies are listed below:

  1. Dirty Reads: Technically speaking, a dirty read happens when a transaction reads data that is being changed by another transaction which has not committed yet.
  2. Non-Repeatable Reads: A non-repeatable read happens when a transaction reads records that another transaction is modifying. If the reading transaction tries the same query again within the same transaction, the result will differ from the first time.
  3. Phantom Reads: Occur when we read a set of records with a specific WHERE clause and another transaction inserts new records matching that WHERE clause. If we run the same query again within our transaction, rows appear that the first read did not include.
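The non-repeatable-read anomaly in particular is easy to reproduce. The sketch below uses Python's built-in sqlite3 module rather than JDBC so it is fully self-contained; the table and values are made up for illustration. Session A reads a row twice without holding a transactional snapshot, and session B commits a change in between:

```python
import sqlite3, tempfile, os

# Two connections to one database file stand in for two concurrent sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)
a.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, score INTEGER)")
a.execute("INSERT INTO student VALUES (1, 50)")
a.commit()

# Session A reads the row without holding a transactional snapshot...
first = a.execute("SELECT score FROM student WHERE id = 1").fetchone()[0]

# ...session B changes the same row and commits in between...
b.execute("UPDATE student SET score = 90 WHERE id = 1")
b.commit()

# ...so A's second read of the very same row returns a different value.
second = a.execute("SELECT score FROM student WHERE id = 1").fetchone()[0]
print(first, second)  # 50 90 -> the read was not repeatable
```

Preventing this is exactly what the REPEATABLE_READ isolation level, discussed later, is for.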


What is database locking?

When we say we have a database lock over a record, a set of records, a database page, a table, a table-space, etc., we mean that the database prevents any changes to the locked values. When a set of records is locked, any transaction trying to change that data is queued until the lock is released.

The number of acquired locks and the proximity of the locked data determine whether a lock should escalate to an upper level, for example from record level to page level, or shrink, for example from a table-level lock to a page-level lock. When a lock escalates, it prevents changes to a larger number of records; for example, a row-level lock covering a few hundred records may escalate to a table-level lock that can cover millions.

A database manages these locks in the most resource-efficient way. For example, when we hold hundreds of locks on one table, the database may escalate them and lock the entire table for new transactions instead of keeping hundreds of separate locks.
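Lock queueing can be observed with a minimal sketch, again using Python's sqlite3 for a self-contained demo. Note that SQLite locks at the whole-database level, i.e. the coarsest granularity discussed above; a server database would lock rows or pages instead. The table and values are illustrative:

```python
import sqlite3, tempfile, os

# Two connections to one database file stand in for two concurrent sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO account VALUES (1, 100)")
writer.commit()

# The writer modifies a row inside an open transaction: the lock is now held.
writer.execute("UPDATE account SET balance = 80 WHERE id = 1")

# A second session (timeout=0: fail immediately instead of queueing) tries to write.
other = sqlite3.connect(path, timeout=0)
try:
    other.execute("UPDATE account SET balance = 90 WHERE id = 1")
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

# Once the writer commits, the lock is released and the other session succeeds.
writer.commit()
other.execute("UPDATE account SET balance = 90 WHERE id = 1")
other.commit()

print(blocked)  # True
balance = writer.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
print(balance)  # 90
```

With a non-zero timeout the second session would simply wait in the queue, which is the behavior described above.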

What is optimistic concurrency?

As its name implies, optimistic concurrency means assuming that no concurrent transactions will affect each other’s data, and therefore not locking the records at the database level. You may ask: what happens if, before we commit our update, another user updates the data we were working on? What happens to the changes the other user made?

The answer lies in the mechanisms used to detect changes in the original data we read. Comparing the records field by field, or using a version field as JPA does, are two common methods to detect changes in the original data and let the user decide whether to overwrite the other user’s changes or reconsider their own update.

In optimistic locking, it is the developer’s duty to check for changes in the data before updating it.
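The version-field mechanism mentioned above boils down to a single conditional UPDATE. The sketch below uses Python's sqlite3 so it runs standalone (the article's context is JDBC, but the SQL pattern is identical there); the table and column names are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
con.execute("INSERT INTO student VALUES (1, 'old name', 1)")
con.commit()

# Both users have read the row while it was at version 1.
read_version = 1

# User 1's update succeeds only if the version is still the one that was read,
# and bumps the version inside the same statement.
r1 = con.execute(
    "UPDATE student SET name = ?, version = version + 1 "
    "WHERE id = ? AND version = ?",
    ("user 1 name", 1, read_version)).rowcount
print(r1)  # 1 -> the update was applied

# User 2 tries the same with the now-stale version: zero rows match,
# which is exactly how the collision is detected.
r2 = con.execute(
    "UPDATE student SET name = ?, version = version + 1 "
    "WHERE id = ? AND version = ?",
    ("user 2 name", 1, read_version)).rowcount
print(r2)  # 0 -> someone else changed the row; re-read and decide what to do
```

A rowcount of zero is the developer's signal to re-read the row and either retry or abandon the update.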


What is pessimistic concurrency?

In contrast with optimistic locking, pessimistic locking assumes that transactions will certainly collide, and that locking the records is necessary to prevent any other transaction from accessing the data until the current transaction finishes.


Why should I consider optimistic versus pessimistic approaches to database locking?

The following facts can affect our decision on using optimistic and pessimistic locks:

When we know that the lock period is short, it is best to use pessimistic locking; if the lock time may be long, we should consider optimistic locking to improve overall performance.

When we have many transactions that might collide, it is better to use pessimistic locking, as the rollback cost can exceed the locking cost.


How are transactions managed in JDBC?

When using JDBC, we have two options: we can either let the JDBC driver implementation handle the transaction automatically by committing the transaction right after execution of each statement, or we can handle the transaction manually by starting the transaction and then committing it when we see fit. Note that autocommit is on by default according to the JDBC specification.

The following sample code shows how to manually start and commit a transaction:

con.setAutoCommit(false); // take over transaction handling from the driver
Statement stmt = con.createStatement();
stmt.executeUpdate("insert into student(field1) values (110)");
// doing some other operations like sending email, performing other db operations
// and finally committing the transaction.
con.commit(); // committing the transaction and persisting the insert operation.


For the automatic transaction management we do not need to do anything because after executing each operation the transaction will commit automatically.

Note that when dealing with batch operations (with autocommit disabled), all of the batch members form one single transaction: either all batch members affect the database or none of them do.
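The manual commit/rollback flow can also be sketched in a runnable form. This uses Python's sqlite3 (which, like JDBC with autocommit disabled, opens a transaction implicitly on the first data change); the student table mirrors the Java snippet above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (field1 INTEGER)")

# Manual transaction handling: the insert only persists after commit()...
con.execute("INSERT INTO student(field1) VALUES (110)")
con.commit()

# ...while an uncommitted insert disappears again on rollback().
con.execute("INSERT INTO student(field1) VALUES (111)")
con.rollback()

count = con.execute("SELECT COUNT(*) FROM student").fetchone()[0]
print(count)  # 1 -> only the committed row survived
```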

What are the standard isolation levels defined by JDBC?

There are five isolation-level constants defined by JDBC; the underlying database may or may not support each of them. The list below shows them from the least restrictive to the most restrictive one.

  • TRANSACTION_NONE: A constant indicating that transactions are not supported.
  • TRANSACTION_READ_UNCOMMITTED: A constant indicating that dirty reads, non-repeatable reads and phantom reads can occur.
  • TRANSACTION_READ_COMMITTED: A constant indicating that dirty reads are prevented; non-repeatable reads and phantom reads can occur.
  • TRANSACTION_REPEATABLE_READ: A constant indicating that dirty reads and non-repeatable reads are prevented; phantom reads can occur.
  • TRANSACTION_SERIALIZABLE: A constant indicating that dirty reads, non-repeatable reads and phantom reads are prevented.


All of these constants are of type int and are defined in the JDBC Connection interface.

We should set the isolation level on our JDBC Connection object prior to using it. Some application servers allow us to specify the default isolation level for connections acquired from a pool. Different database management systems support one or more of these levels, so we should check whether a level is supported before setting it.

Connection con = DriverManager.getConnection(url);
DatabaseMetaData dbmd = con.getMetaData();
if (dbmd.supportsTransactionIsolationLevel(Connection.TRANSACTION_SERIALIZABLE)) {
    con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
}
// doing the transactional tasks


The following table shows which anomalies each isolation level permits:

  Isolation level                 Dirty read   Non-repeatable read   Phantom read
  TRANSACTION_READ_UNCOMMITTED    possible     possible              possible
  TRANSACTION_READ_COMMITTED      prevented    possible              possible
  TRANSACTION_REPEATABLE_READ     prevented    prevented             possible
  TRANSACTION_SERIALIZABLE        prevented    prevented             prevented


What are Savepoints?

As you know, transactions can be rolled back, meaning that we can restore the database state to where it was before the transaction started. Sometimes the operations a transaction performs are so expensive and extended that we prefer not to roll back the entire transaction; instead we want to undo only part of the work, back to some specific point.

Savepoints allow us to mark a point in the transaction’s execution and later roll the transaction back to that specific marker if required.

The following sample code demonstrates using Savepoints:

Statement stmt = con.createStatement();
stmt.executeUpdate("insert into student(field1) values (110)");
// set a savepoint marking the state after the first insert
Savepoint svpt1 = con.setSavepoint("SAVEPOINT_1");
int rows = stmt.executeUpdate("insert into student(field2) values ('value')");
// rolling back the transaction to the savepoint, undoing only the second insert
con.rollback(svpt1);
// committing the transaction and persisting the first insert operation.
con.commit();


Note that some databases do not support nested savepoints, meaning that you can have only one savepoint per transaction. Check your DBMS documentation for any savepoint restrictions.
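The same idea can be run end to end with Python's sqlite3, which exposes savepoints through SQL-level SAVEPOINT / ROLLBACK TO statements (the names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (field1 TEXT)")

con.execute("INSERT INTO student VALUES ('kept')")
con.execute("SAVEPOINT SAVEPOINT_1")      # mark this point of the transaction
con.execute("INSERT INTO student VALUES ('discarded')")
con.execute("ROLLBACK TO SAVEPOINT_1")    # undo only the work after the marker
con.commit()                              # the first insert is persisted

rows = [r[0] for r in con.execute("SELECT field1 FROM student")]
print(rows)  # ['kept']
```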


What are the considerations for deciding on transaction boundaries?

Designing and deciding transaction boundaries is very domain specific, but there are some points we should always consider:

  • Use manual transaction processing when possible.
  • Bundle operations together, as much as the business domain allows, to increase the overall performance.
  • Use isolation levels carefully; a more restrictive isolation level means more transactions blocked while the current one runs. Decide which isolation level is really required by consulting the business analysis documents.
  • Consult the database and JDBC driver documentation to understand which isolation levels are supported and what the default isolation level is.
  • Fine-tune the database lock-escalation attributes according to the system characteristics.

Source : Java.Dzone

JDBC Connection in Java

Connecting to the Postgres Database

The original snippet is missing here; for PostgreSQL the JDBC URL typically looks like this (host, port, and database name are placeholders):

String url = "jdbc:postgresql://host:5432/mydb";

Connecting to the MySQL Database

For MySQL, typically:

String url = "jdbc:mysql://host:3306/mydb";

Connecting to the Oracle Database

For Oracle (thin driver), typically:

String url = "jdbc:oracle:thin:@host:1521:SID";

To connect, you need to get a Connection instance from JDBC. To do this, you use the DriverManager.getConnection() method:

Connection db = DriverManager.getConnection(url, username, password);