This is the third blog in a three-part blog post series on best practices in migrating SAP to Azure.
BWoH and BW/4HANA on Azure
For many SAP customers, the compelling event driving the migration of SAP HANA to the cloud has come from one of two factors:
- End-of-life first-generation SAP HANA appliances, prompting customers to re-evaluate their platform.
- The desire to take advantage of the early value proposition of SAP Business Warehouse (BW) on HANA in a flexible TDI model over traditional databases, and later of BW/4HANA.
As a result, numerous initial migrations of SAP HANA to Microsoft Azure have focused on SAP BW, to take advantage of SAP HANA’s in-memory capability for the BW workload. This means migrating the BW application to use SAP HANA at the database layer, and eventually the more involved migration from BW on HANA to BW/4HANA.
The SAP Database Migration Option (DMO) with System Move option of SUM, used as part of the migration, allows customers to perform the migration in a single step, from the source system on-premises to the target system residing in Microsoft Azure, minimizing overall downtime.
As with the S/4HANA example my colleague Marshal used in “Best practices in migrating SAP applications to Azure – part 2,” the move from traditional BW on AnyDB in a customer datacenter to BW on HANA, and eventually to BW/4HANA, is accomplished in two steps. The hyper-scale and global reach of Microsoft Azure provide the flexibility to complete the migration while minimizing your capital investments.
Figure 1: SAP BW on HANA and BW/4HANA migrations
Step one
The first step is a migration of SAP BW on AnyDB to SAP BW on HANA, running in Microsoft Azure. My colleague Bartosz Jarkowski has written an excellent blog describing the Database Migration Option (DMO) of SAP’s Software Update Manager (SUM).
This migration step can be accomplished using one of the HANA-certified SKUs listed in the certified and supported SAP HANA hardware directory (IaaS platforms) as the target server running SAP HANA.
Microsoft Azure today offers the widest choice of HANA-certified SKUs of any hyper-scale cloud provider, ranging from 192 GiB to 20 TiB of RAM, including 12 dedicated hardware options across OLTP and OLAP scenarios.
If customers opt for an SAP HANA OLAP scale-out configuration, Azure offers certified options up to 60 TiB with a TDIv4 certification and up to 120 TB with a TDIv5 certification.
Many of our SAP HANA customers on Azure have already taken step one of the journey to BW/4HANA by executing the DMO migration to BW on HANA in Azure, often in a scale-out scenario, sometimes on virtual machines and sometimes on HANA Large Instance dedicated hardware.
This initial step has allowed customers to ready their cloud infrastructure such as network, virtual machines, and storage. It also helps enable operational capabilities and support models on Microsoft Azure such as backup/restore, data tiering, high availability, and disaster recovery.
Step two
Once customers have SAP BW on HANA running in Azure, the real benefits of hyper-scale cloud flexibility start to be realized. The BW/4HANA migration exercise can be tested iteratively, and in parallel if needed, while paying only for what is used; the test infrastructure can be discarded once you have the migration recipe perfected and are ready to execute against production.
Run SAP HANA best on Microsoft Azure
The SAP BW on HANA and BW/4HANA scenarios, running on Microsoft Azure, are enabled through a combination of IaaS services, such as:
- High-performing Azure Virtual Machines
- Azure Premium Storage
- Azure Accelerated Networking
These highly flexible, performant virtual machines (VMs) allow customers to quickly spin up SAP HANA workloads and deploy infrastructure for N+0 scale-out configurations. Additionally, Azure offers customers the ability to scale OLAP workloads through the scale-out scenario on dedicated, purpose-built SAP HANA Large Instance hardware.
In these HANA Large Instance configurations, customers would install SAP HANA in an “N+1” configuration, which usually means:
- 1 master node
- N worker nodes
- 1 standby node (optional)
While there are scale-out configurations with 50-plus worker nodes, practically speaking the majority of scale-out configurations will be in the three- to 15-node range.
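To make the N+1 layout concrete, here is a minimal sketch (not an SAP tool; the hostnames are hypothetical) that models a scale-out topology of one master, N workers, and an optional standby node:

```python
# Minimal sketch: model an "N+1" HANA scale-out topology with one master,
# N worker nodes, and an optional standby node. Hostnames are illustrative.

def scale_out_topology(workers: int, standby: bool = True) -> list:
    """Return the host-role layout for an N+1 HANA scale-out system."""
    if workers < 1:
        raise ValueError("a scale-out system needs at least one worker node")
    hosts = [{"host": "hana01", "role": "master"}]
    hosts += [{"host": f"hana{i + 2:02d}", "role": "worker"}
              for i in range(workers)]
    if standby:
        hosts.append({"host": f"hana{workers + 2:02d}", "role": "standby"})
    return hosts

# A 1+6+1 layout (eight hosts in total):
topology = scale_out_topology(workers=6)
```

With six workers and a standby, the function returns eight hosts, matching the 1+6+1 shape of a mid-sized scale-out system.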
What does this look like on Microsoft Azure?
To adhere to both Microsoft’s and SAP’s guidance for a performant and available infrastructure, it is important to design the compute, network, and storage for SAP HANA to take advantage of the Azure infrastructure at our disposal in the VM scenario:
- Availability Sets
- Accelerated Networking
- Azure Virtual Networks
In the HANA Large Instance scenario, the use of dedicated, shared NFS storage and snapshots allows us to scale out with a standby node, ensuring availability for the BW system with minimal additional overhead.
Networking
SAP HANA scale-out configurations using NFS require three distinct network zones to isolate SAP HANA client traffic, SAP HANA intra-node communication, and SAP HANA storage communication (NFS), ensuring predictable performance. These zones are illustrated below in a diagram of a sample 8-node SAP HANA system with a standby node (1+6+1).
Figure 2: SAP HANA scale-out client, intranode and storage network zones
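The three-zone separation can be sketched as a simple address plan. The CIDR ranges below are illustrative assumptions, not Azure defaults; the point is that the zones must be distinct and non-overlapping:

```python
import ipaddress

# Hypothetical address plan: three distinct zones so that client,
# intra-node, and NFS storage traffic never share a network segment.
zones = {
    "client":    ipaddress.ip_network("10.0.1.0/24"),
    "internode": ipaddress.ip_network("10.0.2.0/24"),
    "storage":   ipaddress.ip_network("10.0.3.0/24"),
}

# Sanity check: no two zones may overlap.
names = list(zones)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not zones[a].overlaps(zones[b]), f"{a} overlaps {b}"
```

In a real deployment these would map to separate subnets (and, on HANA Large Instances, separate physical network interfaces), but the non-overlap invariant is the same.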
We need to follow SAP-recommended TDI practices when designing for SAP HANA:
- Adequate and performant storage in proportion to the allocated RAM.
- Correct configuration of NFS shared storage to ensure proper failover operations in an N+1 configuration with HANA Large Instances.
- Distinct network zones to isolate network traffic into client and intra-node zones.
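The first practice, storage in proportion to RAM, can be estimated with commonly cited rules of thumb from SAP’s TDI storage guidance (data roughly 1.2 × RAM; log 0.5 × RAM, capped at 512 GB). These factors are an assumption in this sketch and should be confirmed against the current SAP HANA storage requirements documentation before any real design:

```python
# Rough per-node volume sizing using commonly cited SAP TDI rules of thumb:
# data ~= 1.2 x RAM, log = min(0.5 x RAM, 512 GB). Confirm exact factors
# against SAP's current storage requirements guide before designing.

def tdi_volume_sizes_gb(ram_gb: float) -> dict:
    return {
        "data_gb": round(1.2 * ram_gb),
        "log_gb": round(min(0.5 * ram_gb, 512)),
    }

sizes = tdi_volume_sizes_gb(ram_gb=2048)  # e.g. a node with 2 TiB of RAM
```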
These pre-defined HANA Large Instance compute SKUs, including the correct storage configuration according to the SAP Tailored Data Center Integration v4 specification (TDIv4), have been architected in advance and are ready for customers to use as needed. These SKUs can also be adapted to specific needs resulting from customer-specific sizing.
Storage
SAP HANA Large Instance scale-out configurations in the TDI model use highly performant NFS shared storage to enable the standby node’s ability to take over from a failed node.
At a minimum, four types of shared filesystems are set up for SAP HANA installations in a TDI scale-out configuration on Azure HANA Large Instances:
| Shared Filesystem | Description |
| --- | --- |
| /usr/sap/SID | Low IO performance requirements |
| /hana/shared/SID | Low IO performance requirements |
| /hana/data/SID/mnt0000* | Medium/High IO performance requirements |
| /hana/log/SID/mnt0000* | Medium/High IO performance requirements |
The number and names of the HANA data and log filesystems will match the number of scale-out nodes in use, as shown in the diagram below.
While there are other filesystems, not depicted here, that are necessary for proper SAP HANA operations in a scale-out configuration, these four are the ones required to offer highly performant IO characteristics and the ability to be shared across all the nodes in a scale-out configuration.
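Enumerating the per-node mounts makes the naming pattern explicit. This is a sketch only; the SID `HN1` is a hypothetical system ID standing in for the `SID` placeholder used in the table above:

```python
# Sketch: enumerate the four shared-filesystem types for an N-node
# scale-out system. The mnt0000<n> suffix mirrors the
# /hana/data/<SID>/mnt0000* pattern; "HN1" is a hypothetical SID.

def hana_mounts(sid: str, nodes: int) -> list:
    mounts = [f"/usr/sap/{sid}", f"/hana/shared/{sid}"]
    for n in range(1, nodes + 1):
        mounts.append(f"/hana/data/{sid}/mnt{n:05d}")  # data per node
        mounts.append(f"/hana/log/{sid}/mnt{n:05d}")   # log per node
    return mounts

mounts = hana_mounts("HN1", nodes=3)
```

For three nodes this yields one data and one log mount per node (mnt00001 through mnt00003), plus the two shared low-IO filesystems.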
The example below illustrates eight Azure HANA Large Instance units in a HANA scale-out configuration, where each unit can see every other unit’s provisioned HANA filesystems, most critically the HANA data and log volumes. At any point in time, however, each unit has read/write control of its own data and log filesystems, enforced by NFS v4 locking.
Figure 3: SAP HANA scale-out filesystem on Microsoft Azure HANA Large Instances
In the event of a failure, the SAP HANA standby node takes over ownership of the failed node’s storage and assumes that node’s place in the operating scale-out HANA system.
Figure 4: SAP HANA standby node taking ownership of a failed node’s storage
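The takeover can be pictured as a change of ownership in a map of node-to-storage assignments. This is a toy simulation of the concept only; in reality ownership is enforced by NFS v4 locking, not application code, and the node names and SID `HN1` are hypothetical:

```python
# Toy simulation of standby takeover: six workers each own one data mount;
# on failure the standby node assumes the failed node's storage.
# (In reality this is enforced via NFS v4 locks, not application logic.)

ownership = {f"node{i}": f"/hana/data/HN1/mnt{i:05d}" for i in range(1, 7)}
standby = "node7"

def fail_over(failed: str) -> None:
    """Reassign the failed node's storage to the standby node."""
    ownership[standby] = ownership.pop(failed)

fail_over("node3")  # node7 now owns node3's data mount
```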
The SAP HANA standby node capability gives customers greater choice for performance paired with availability when choosing Microsoft Azure as the platform to run SAP HANA.
If you will be at SAP SAPPHIRE NOW 2019, I would encourage you to stop by the Microsoft booth #729 to learn more about these solutions and to see hands-on demos of our offerings. See you in Orlando!