Thursday, June 15, 2017

Very useful certification Questions -15



71) Which of the following is correct in regard to R/3 clients?

a. An R/3 client has its own customer data and programs, which are not
accessible to other clients within the same R/3 system.
b. An R/3 client shares all R/3 Repository objects and client-independent
customizing with all other clients in the R/3 system
c. An R/3 client shares customizing and application data with other clients in the
same R/3 system
d. An R/3 client enables you to separate application data from customizing data


72) Which of the following strategies enables R/3 customers to avoid making
modifications to SAP standard objects?

a. Using enhancement techniques such as program exits and menu exits
b. Modifying SAP-delivered programs
c. Changing SAP standard functionality using the IMG
d. Performing customizing to provide the required functionality

73) Which of the following statements are correct in regard to modifications?

a. A modification is a change to an SAP standard object
b. A modification must be registered using the SAP OSS (SSCR)
c. SAP recommends modification only if the customer's business needs cannot be
met by customizing, enhancement techniques, or customer development
d. All of the above


74) Which of the following strategies enables an enterprise to meet its business needs by
changing or enhancing R/3 functionality?

a. Maintaining application data using the various R/3 business transactions in
the SAP standard
b. Using the ABAP Workbench to create and read R/3 Repository objects
c. Using customizing to modify R/3 programs after obtaining an access key from
OSS
d. Using customer exits to enhance the functionality of existing SAP objects


75) Important statistics displayed in the Database Monitor (ST04) include:

a. Logical reads
b. Physical reads
c. Data buffer size and quality
d. Shared pool size and quality
e. Invalidations in the R/3 table buffers

Very useful certification Questions -14



66) Which of the following statements is correct in regard to multiple R/3 clients?

a. All clients in the same R/3 system share the same R/3 Repository and
client-independent customizing settings
b. No more than one client in the same R/3 system should allow changes to
client-independent customizing
c. If a client allows changes to client-dependent customizing, the client
should also allow changes to client-independent customizing objects
d. All of the above.


67) Which of the following activities should not be performed within a system landscape?

a. Customizing and development changes are promoted to a QAS client before being
delivered to PRD
b. The R/3 system is upgraded to a new R/3 release
c. Development changes are made directly in the PRD client
d. Changes are assigned to a specific role

68) Which of the following is correct in regard to the R/3 Repository?

a. Customers can develop new Repository objects using tools in the ABAP
Workbench.
b. Customer-developed Repository objects reside in the R/3 Repository alongside
standard Repository objects
c. Customers can create and assign new Repository objects to a development class
d. All of the above


69) Which of the following statements is correct in regard to the SAP client concept?

a. All customizing settings are client-independent
b. A client has a unique set of application data
c. A client has its own Repository objects
d. All customizing settings are client-dependent


70) Which of the following components indicate that R/3 is a client/server system?

a. Multiple databases
b. Database server
c. Three separate hardware servers (database, application, and presentation servers)
d. Database service, application service, and presentation service

Very useful certification Questions -13



61) List the transport requests that are generated during a client export and describe each request, giving the type of data exported.

Sr. No. | Request Name | Description


62) Transaction codes used for the Transport Management System / Transport Organizer are: (Multiple Choice)

a.     STMS / SE09
b.    SMTS / SE10
c.     STMS / SE10
d.    STSM / SE01


63) Which of the following are transport directories? (Multiple Choice)

a.     bin
b.    actlogs
c.    profile
d.   tmp
e.    log


64) Select the correct statements pertaining to TMS configuration (Multiple Choice)

a. Several transport groups can exist in a transport domain
b. Transports can take place between different transport groups
c. A transport layer is not required when defining transport routes
d. Only one transport layer can be defined in the system at any given time


65) Which of the following are types of client-specific data? (Multiple Choice)

a.    Dictionary Data
b.    User Data
c.    Master Data
d.    Transaction Data




Very useful certification Questions -12



56) Which log viewer can be used to display local log files?

a)     Integrated Log Viewer
b)     Central Log Viewer
c)     Log Viewer
d)     Command Log Viewer


57) Which development infrastructure is used by SAP Web AS Java?

a)      JDI
b)     IDE
c)      SAP Netweaver Studio
d)     None of the Above

58) All development objects are stored in a central repository called:

a)      DTR
b)     CBS
c)      CMS
d)     Software Logistics Process


59) What does Software Logistics provide?

a.     Procedure for Central User Administration
b.    Procedure for Development / Customizing
c.     Procedure for Post Installation Steps
d.    Process to help you monitor your system externally


60) An administrator wants to deactivate the import of all requests in an import queue for an SAP system. Which parameter should he set?

a.     IMPORT_ALL_NO
b.    IMPORT_ALL_STOP
c.     NO_ALL_IMPORT
d.    NO_IMPORT_ALL

Very useful certification Questions -11




51) State whether the following statements are true or false for the Note Assistant:

a. No SPDD modification adjustment is required during an upgrade or when importing Support Packages
b. An SSCR registration is required
c. Changes are recorded in the Modification Assistant (SE95)
d. Changes are made through transaction SNOTE



52) Which of the following are components of a database?

a.    Database Buffer
b.    Data Files
c.    SAP Kernel
d.    Special DB Processes


53) ICM (Internet Communication Manager) is a process. (True/False)



54) What are the two managers in the UME?

Ans: Persistence Manager
     Replication Manager



55) What are the different types of data partitioning in the UME?

a) Attribute-based
b) User-based
c) Client-based
d) Type-based

Wednesday, June 14, 2017

Very useful certification Questions -10



46. ALE customizing is done using:

a. the BAPI generator
b. transaction SALE
c. RSNAST00
d. an external warehouse management (WM) system
e. SAP's Business Framework


47. tRFCs:

a. are a part of SAP's functionality.
b. are used by ALE for communication between sender and receiver.
c. require FTP.
d. are called transactional because a number of function modules can be bundled into a single LUW.
e. are only used by ALE for customizing integrity checks.

48. IDocs can be created:

a. only via transaction SPRO.
b. by change pointers.
c. by message entry for deferred processing.
d. directly.
e. only via transaction BALE.


49. Which of the following are IDoc processing options?

a. Dispatch control.
b. FTP.
c. Processing mode.
d. Unit transfer type.
e. LUW.



50. Which of the following is not contained in the R/3 database?

a. The R/3 Repository
b. The R/3 kernel
c. Customer data
d. Transaction data
e. Customizing data
f. The ABAP Dictionary

Very useful certification Questions -9


41. Which of the following statements is correct in regard to the setup of a three-system landscape?

a. There is only one R/3 database for the system landscape
b. One client should allow for the automatic recording of client-dependent
customizing and client-independent changes.
c. All R/3 systems have the same system ID (SID)
d. All clients must have unique client numbers

42. Which of the following benefits does the three-system landscape recommended by SAP
have?

a. Customizing and development, testing, and production activities take place in
separate database environments and do not affect one another
b. Changes are tested in the QAS system and imported into the PRD system only
after verification
c. Client-independent changes can be made in the DEV system without immediately
affecting the PRD client
d. All of the above

43. Which of the following statements are correct in regard to the IMG?

a. The IMG consists of a series of customizing activities for defining a company's
business processes
b. The IMG is an online resource providing the necessary information and steps to
help you implement the R/3 application modules
c. The IMG is client-independent
d. All of the above

44. Which of the following statements is correct in regard to the SAP client concept?

a. All customizing settings are client-independent
b. A client has a unique set of application data
c. A client has its own Repository objects
d. All customizing settings are client-dependent


45. Which of the following are true?

a. For performance reasons, the tRFC queue should never be reorganized.
b. The tRFC queue consists of tables on both the sender and receiver of the communication.
c. The tRFC queue cannot be monitored in R/3 since it is an external process.
d. tRFC refers to transactional RFCs.
e. IDoc transfers between R/3 systems using tRFCs do not require a logon and password.

Very useful certification Questions -8



36) Which business logic component runs on the server?

a)      Applets
b)     Enterprise Java Beans
c)      Servlets
d)     JSP


37) Which tool can define password rules?

a)     ABAP User management
b)     CUA
c)     UME Console
d)     Visual Administrator

38) An SQL statement is given (screenshot of ST04 → Detail analysis → SQL request,
plus the ST04 main screen). Single choice only.

Determine whether the SQL statement is expensive:
a) Buffer gets > 5% of total reads, so the statement is expensive
b) Buffer gets > 5% of physical reads, so the statement is expensive
c) The statement is not expensive
d) Very high buffer gets, so the statement is expensive
e) High number of executions, so the statement is expensive

39) Which of the following files are required for transports into the QAS system?

a) transport.pfg
b) DOMAIN_DEV.cfg
c) TP_DOMAIN_<SID>.pfl
d) tpimport.pfg


40) Which is the sapdba & brarchive parameter file?

Very useful certification Questions -7



31) The connection between a dispatcher and a server is called:

a)      Message Server Communication
b)     Lazy Communication
c)      Session Communication
d)     Cluster Communication

32) What are the user stores provided by SAP Web AS Java?

a)      UME
b)     UDDI
c)      Database
d)     None of the Above

33) Which manager is started first in SAP Web AS Java?

a)      Cluster Manager
b)     Log Manager
c)      Application Manager
d)     Thread Manager

34)  Which trace works on Byte Code?

a) SAT
b) CUA
c) JARM
d) Application Trace


35) The severities of a log controller can be changed by:

a) Log Viewer
b) Config tool
c) Visual Administrator
d) Log Configurator Service

Very useful certification Questions -6



26. Parameter rdisp/mshost is found in which profile?

          a) Default profile
          b) Start profile
          c) Central instance – instance profile
          d) Application instance – instance profile

27. Parameter rdisp/vbname is found in which profile?

      a) Default profile
      b) Start profile
      c) Central instance – instance profile
      d) Application instance – instance profile

28. Program ZZSLOW accesses ZZTABLE, and you want to improve its performance:

a) Create an index on ZZTABLE; this is preferred over tuning the SQL statement
b) Creating an index would definitely improve the performance
c) Tuning the SQL statement would solve the performance problem and is preferred over index creation
d) The optimizer may choose the wrong index after creation of the index


29. The locking of SAP transactions is carried out by which work process?

a.    Enqueue
b.    Dialog
c.    Spool
d.    Background

30. Spool data is stored in which table?

a.    TSP01
b.    TSP03
c.    TST03
d.    TSP02






Very useful certification Questions -5



21. Which parameter keeps incoming HTTP requests in the store for SAP Web AS Java?

22. While installing Support Packages through SAPinst for SAP Web AS Java, the J2EE engine should be stopped. (True/False)

23.Which of the following is NOT a profile that can be used for client copies? 

a.     SAP_USER – User master records
b.    SAP_ACUST – Application & Customizing data
c.     SAP_EXBC – Client dependent/independent Customizing & Users
d.    SAP_MAST – Master data only

24. Which tool is used for carrying out R/3 Upgrade?
a.     PREPARE
b.    SPAU
c.     R3UP
d.    SPDD


25. The logs generated by the 'tp' tool are:

a.     tp system log
b.    Transport Step Monitor
c.     ULOG
d.    <Source SID><Import Step><6 digit>.<Target SID>




Very useful certification Questions -4


16. In a three-system landscape, from where can requests be imported into the QAS system?

a.     Only from DEV
b.    Only from QAS
c.     Only from PRD
d.    From all three systems


17. While importing Support Packages, you need to do the following. State true or false:

a.    Update SPAM/SAINT update with latest version 
b.    Client 000 is not to be used for importing 
c.    Import must always be performed during peak operations  
d.    Aborted packages must be kept untouched 

18. The CCMS monitoring infrastructure consists of:

a.    Data Migration
b.    Data Collection
c.    Data Storage
d.    Data Archiving

19. How many server socket listeners and new connections per second can be processed by default in SAP Web AS Java?

a) 5 listeners and 10 new connections/sec
b) 10 listeners and 350 new connections/sec
c) 5 listeners and 650 new connections/sec
d) 5 listeners and 1000 new connections/sec

20. The JRE consists of:

a) Class loader & byte code verifier
b) Byte code verifier & JVM
c) JVM, class loader & byte code verifier
d) J2SDK

Very useful certification Questions -3




11. Which process is used to start an external command/program from the SAP system?

a.    sapxpg
b.    sapevt
c.    disp+work.exe
d.    gwrd


12. Access Method is: (Single Choice)

a.    Connection between Dialog & Spool Work Process
b.    Connection between OS Spool and the Actual Printer
c.    Connection between Spool Work Process and OS Spool
d.    Connection between Spool Request & Output Request


13. An SAP system has an operation mode configured. There are performance issues in the system, and finally the BASIS administrator adds RAM (physical memory) to the server. He configures a few parameters and increases the number of work processes. The BASIS administrator needs to redefine the operation mode for the new processes to take effect. (True/False)

14. Printer names in SAP are not case sensitive. (True/False)


15. The standard system landscape recommended by SAP is:

a. DEV, QAS & PRD systems on one server
b. DEV & QAS on one server and PRD on another server
c. DEV, QAS & PRD on separate servers
d. DEV on one server and QAS & PRD on another server

Very useful certification Questions -2



6. What will happen if a background job is running and an operation mode switch is triggered in the meantime?

a. The job will be cancelled
b. The job will be put in a suspended state
c. The job will continue to run, and once it finishes, the operation mode switch will occur
d. The operation mode switch will happen the next day

7.What is the order in which the Profile files are read during the system start? 

a. DEFAULT, INSTANCE & START
b. INSTANCE, DEFAULT & START
c. START, INSTANCE, DEFAULT
d. START, DEFAULT, INSTANCE

8.What is the Profile Parameter name used to set the Trace Level in SAP system? 

a. rdisp/trace_level
b. rdisp/tracelevel
c. rdisp/LEVEL
d. rdisp/TRACE

9. State whether the following statements are true or false:

a. The W-Gate should always be on the Web server
b. The A-Gate should always be coupled with the W-Gate
c. The SAP system should be installed on the NT operating system
d. A dual-host installation in ITS means the A-Gate and W-Gate are on the same host



10. The ‘Event-Based Job Scheduler’ runs on every instance. (True/False)




Very useful certification Questions -1


These questions were gathered from people who have already taken the exam, and some were found through Google.

I am posting them here to help those who want to prepare for the certification exams. All the best.
1)   What is the name of the DEFAULT Profile in SAP System? (Single Choice)

a.     DEFAULT.imp
b.    DEFAULT.prof
c.     DEFAULT.pfl
d.    DEFAULT.ini


2) Is data written to the Oracle transaction logs (redo logs) before or after the information is written in
the database files?

A. Before
B. After
C. Simultaneously
D. The priority and sequence of database writes and transaction log writes is controlled by
the parameter "dblg_after" in the ORACLE initialization file.
E. Randomly, sometimes before, sometimes after.


3) What is an SAP instance? (Single Choice)

a. Group of databases that make a system runnable
b. Group of SAP instances that make a system runnable
c. Group of services that make a system runnable
d. Group of OS-level files that make a system runnable


4) Transaction Code used for Profile Maintenance / Job overview is: (Single Choice)
a.     PFCG / SM37
b.    RZ11 / SM36
c.     RZ10 / SM37
d.    RZ03 / SM37




5) What is the primary function of the $ORACLE_HOME/saparch directory?

A. The saparch directory is a work area for the R/3 database utilities.
B. The saparch directory is a work area for the R/3 Instance tasks.
C. The saparch directory is used as the holding area for online database backups instead of
writing them to tape.
D. The saparch directory holds the archived database log files.
E. The saparch directory contains the current database transaction log files.




Monday, June 12, 2017

Database migration option (DMO)


When migrating an existing SAP system to the SAP HANA database using SUM with the database migration option (DMO), there are many ways to optimize performance and reduce downtime.



Preparation steps:


The DMO uses tables from the nametab. Therefore, it is recommended to clean up the nametab before starting a DMO run. Proceed as follows:

1.) Start transaction DB02 (Tables and Indexes Monitor) and choose “Missing tables and Indexes”

2.) Resolve any detected inconsistencies




If you do not perform this step, the DMO run may stop with errors or warnings.



Before you start a complete DMO test run, we highly recommend using the benchmarking tool to evaluate the migration rate for your system, to find the optimal number of R3load processes and to optimize the table splitting.

Start the benchmarking mode with one of the following addresses:

http://<hostname>:1128/lmsl/migtool/<sid>/doc/sluigui

or

https://<hostname>:1129/lmsl/migtool/<sid>/doc/sluigui
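For scripting or bookmarking, the address can be assembled from the host name and system ID. A minimal sketch, where sumhost.example.com and the SID abc are placeholder values, not real ones:

```python
# Assemble the benchmarking-tool (SL UI) address; the hostname and
# SID below are placeholders for illustration only.
hostname = "sumhost.example.com"
sid = "abc"

url = f"https://{hostname}:1129/lmsl/migtool/{sid}/doc/sluigui"
print(url)
```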



This opens the following dialog box:



1.) Select the option “Benchmark migration”

2.) Select the option “Benchmark export (discarding data)”.
     This selection runs a benchmark of the data export and discards the data read
     from the source database (source DB).

     Note:

a) Always start with the benchmark of the export to test and optimize the performance of your source DB.

Since almost the entire content of the source DB needs to be migrated to the SAP HANA database, additional load is generated on the source DB, which differs from the usual database load of a productive SAP system.
This is essential for the performance of the DMO process: on the one hand, part of the data is already transferred during uptime while users are still active on the system; on the other hand, the largest part of the data is transferred during downtime. Therefore you have to optimize your source DB both for the concurrent read access during uptime, to minimize the effect on active business users, and for the massive data transfers during downtime, to minimize the migration time.



b) Always start with a small amount of data for your first benchmarking run.

This avoids extraordinarily long runtimes and allows you to perform several iterations.
The idea is that performance bottlenecks on the source DB can already be found with a short test run, while more iterations are useful to verify the positive effects of source DB configuration changes on the migration performance.

However, runtimes that are too short should also be avoided, since the R3load processes and the database need some time at the beginning to reach stable transfer rates.
We recommend about 100 GB, or less than 10% of the source database size, for the first run.
The ideal runtime of this export benchmark is about 1 hour.
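The sizing rule above can be sketched as a simple calculation (the 2 TB source database size is a made-up example value):

```python
# First-run sample size: about 100 GB, or less than 10% of the
# source database size, whichever is smaller.
source_db_gb = 2048  # hypothetical source DB size in GB

sample_gb = min(100, 0.10 * source_db_gb)
print(sample_gb)
```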





(Screenshot: benchmarking parameters)

3.) Select the option “Operate on all tables” and define the sample size as a percentage of the source database size, as well as the size of the largest table in the sample as a percentage of the source database size.

4.) Also select “Enable Migration Repetition Option”.
This option enables you to simply repeat the migration benchmark without changing the set of tables. This is especially useful for finding the optimal number of R3load processes for the migration.

5.) Define a high number of R3load processes in your first test iteration to get enough packages from the table splitting, so that you can experiment with the number of parallel running R3load processes later on. For detailed information on the table splitting mechanism, see the blog DMO: background on table split mechanism.

Use 10 times the number of CPU cores available on the SUM host (usually the Primary Application Server) as the number of R3load processes here.
The R3loads for “UPTIME” are used for the preparation (determining tables for export), and the R3loads for “DOWNTIME” are used for the export (and import, if selected), so “UPTIME” and “DOWNTIME” are not an indication of uptime or downtime as far as the configuration of R3load processes is concerned.






6.) Directly before starting the roadmap step “Execution”, in which the actual data migration takes place, reduce the number of R3load processes to 2 times the number of CPU cores available on the SUM host.
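The two rules of thumb (10 times the cores for the first benchmarking iteration, 2 times the cores before the actual migration) can be sketched as:

```python
import os

# CPU cores on the SUM host; fall back to 8 if undetectable.
cores = os.cpu_count() or 8

benchmark_r3loads = 10 * cores  # first test iteration (table splitting)
execution_r3loads = 2 * cores   # before the "Execution" roadmap step
print(benchmark_r3loads, execution_r3loads)
```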

You can change the SUM process parameters during the run by means of the DMO utilities:








7.) Start the roadmap step “Execution”.
While monitoring your network traffic and CPU load, raise the number of R3load processes step by step, always waiting 10 to 15 seconds until they have started.
When either the CPU load or the network traffic reaches 80% to 90%, you have found the optimal number of R3load processes for this system landscape.

8.) If you repeat the benchmarking run, avoid database caching.
This can be achieved either by flushing the cache or by restarting the database.

If you want to change the table set, finish the current benchmarking run and start the test from the beginning. To avoid database caching, you can also select bigger tables that exceed the database cache.

Benchmark Export + Import
Use this option when you want to simulate the export of data from the source system and the import of data into the target system.

After you have executed at least one export benchmark, you can continue with benchmarking the migration export and import in combination. In this way you can find out whether your target SAP HANA database is already running at peak performance or whether it needs to be optimized for the mass import of migrated data.
The behavior of this combined benchmark is very similar to a real migration run, since the exported data is really imported into the target HANA database. Only after a manual confirmation at the end of the migration benchmark is the temporarily created database schema dropped from the target HANA database.

Proceed as follows in the dialog box:

1.) Select the option “Benchmark migration”

2.) Select the option “Benchmark export and import”



Automatically optimize Table Splitting

1.) Perform a benchmark migration of the whole database to generate a durations file, which contains the migration runtimes of the most significant tables.



Set the percentage of the DB size as well as the size of the largest tables to 100% and enable the “Migration Repetition Option”.
On the process configuration screen, enter the optimal number of R3load processes identified beforehand.

  
2.) Repeat the migration phase to run the full migration benchmark again.
This time the benchmarking tool makes use of the durations file from the first full run to automatically optimize the table splitting, which should result in a shorter overall migration runtime.









 

Analysis
After a complete migration run, you can analyze the migrated data volume and the migration speed.
The SUM creates a summary at the end of the file ../SUM/abap/log/EUMIGRATERUN*.LOG:

Total import time: 234:30:20, maximum run time: 2:31:41.

Total export time: 222:31:49, maximum run time: 2:31:42.

Average exp/imp/total load: 82.0/87.0/168.9 of 220 processes.

Summary (export+import): time elapsed 2:41:40, total size 786155 MB, 81.05 MB/sec (291.77 GB/hour).

Date & Time: 20150803161808

Upgrade phase “EU_CLONE_RUN” completed successfully (“20150803161808”)

In this example
– 220 R3load processes have been used (110 Export, 110 Import)
– the downtime migration phase took 2 hours 41 minutes
– total migration data volume was: 786155 MB (786 GB)
– migration speed was: 81 MB/s (291 GB/h)
– the migration phase ended without issues: “completed successfully”

In general, a good migration speed is above 300 GB per hour.
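The summary figures can be cross-checked with a few lines of arithmetic (values taken from the log excerpt above; the SUM summary apparently uses decimal GB, i.e. 1 GB = 1000 MB):

```python
# Recompute the migration speed from the EU_CLONE_RUN summary:
# 786155 MB transferred in an elapsed time of 2:41:40.
total_mb = 786155
elapsed_s = 2 * 3600 + 41 * 60 + 40

mb_per_s = total_mb / elapsed_s
gb_per_h = mb_per_s * 3600 / 1000  # decimal GB, matching the log

print(f"{mb_per_s:.2f} MB/s, {gb_per_h:.2f} GB/hour")
# -> 81.05 MB/s, 291.77 GB/hour
```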

R3load Utilization
In the DMO Utilities, analyze the R3load utilization after a migration run.

1.) Open the DMO utilities and navigate to “DMO Migration Post Analysis -> Charts”.

2.) Select the file “MIGRATE_*PROC*”

3.) Check for a long tail at the end of the migration, in which only a small number of R3loads still process remaining tables.




 



For a definition of this tail and examples for a long and a short tail, see the blog

DMO: background on table split mechanism

If such a long tail is found, analyze the durations file to find out which tables cause it.

Durations file
1.) Open the file SUM/abap/htdoc/MIGRAT*_DUR.XML with a browser to get a graphical representation of the runtimes of the migrated tables.

2.) Look for long-running tables at the end of the migration phase.


In this example, the table RFBLG has a very long runtime: it runs from the beginning of the migration phase until the end.

R3load logs
Analyze the R3load logs to identify the origin of performance bottlenecks of long-running tables.

1.) Open the R3load log summary file SUM/abap/log/MIGRATE_RUN*.LOG

2.) Search for the problematic tables

3.) Analyze the R3load runtimes to identify the origin of the performance bottlenecks.

You will find R3load statistics for the time spent in total (wall time), in CPU user mode (usr), and in kernel system calls (sys).
Separate statistics are available for the database and memory pipe of the exporting R3load (_EXP) and the importing R3load (_IMP).

#!---- MASKING file "MIGRATE_00009_RFBLG_EXP.LOG"

(STAT) DATABASE times: 1162.329/4.248/0.992 93.6%/36.9%/47.6% real/usr/sys.

(STAT) PIPE    times: 79.490/7.252/1.092 6.4%/63.1%/52.4% real/usr/sys.

#!---- MASKING file "MIGRATE_00009_RFBLG_IMP.LOG"

(STAT) DATABASE times: 702.479/213.625/4.896 56.6%/96.6%/86.3% real/usr/sys.

(STAT) PIPE    times: 539.445/7.620/0.780 43.4%/3.4%/13.7% real/usr/sys.

In this example, the exporting R3load spent 1162 seconds reading data on the source DB.
79 seconds were required to copy the data to the memory pipe.
The importing R3load spent 702 seconds on the target SAP HANA DB writing the data, and it spent 539 seconds on the memory pipe waiting for data.

Conclusion: In this example the source DB was the bottleneck, because the importing R3load was waiting for data on the pipe most of the time.
In this case you should ask the administrator of the source DB to do a performance analysis of this table.
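This kind of check can be automated. The sketch below parses the (STAT) lines quoted above and applies the same reasoning; the 30% threshold is an arbitrary assumption for illustration, not an SAP rule:

```python
import re

# (STAT) lines from the R3load logs quoted above.
exp_log = """(STAT) DATABASE times: 1162.329/4.248/0.992 93.6%/36.9%/47.6% real/usr/sys.
(STAT) PIPE    times: 79.490/7.252/1.092 6.4%/63.1%/52.4% real/usr/sys."""
imp_log = """(STAT) DATABASE times: 702.479/213.625/4.896 56.6%/96.6%/86.3% real/usr/sys.
(STAT) PIPE    times: 539.445/7.620/0.780 43.4%/3.4%/13.7% real/usr/sys."""

def real_times(log):
    """Return {'DATABASE': seconds, 'PIPE': seconds} of wall (real) time."""
    return {name: float(times.split("/")[0])
            for name, times in re.findall(r"\(STAT\) (\w+)\s+times: (\S+)", log)}

exp, imp = real_times(exp_log), real_times(imp_log)

# If the importer spends a large share of its time waiting on the pipe,
# the exporter side (and thus the source DB) is the likely bottleneck.
if imp["PIPE"] > 0.3 * (imp["DATABASE"] + imp["PIPE"]):
    print("importer waits on the pipe -> source DB is the likely bottleneck")
```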

Extended Analysis
If you still experience low migration speeds, an extended analysis of the following factors during a migration run might help to find bottlenecks:

CPU Usage
As already mentioned in the R3load log analysis example, the R3loads usually wait for the database most of the time, while the actual processing of the data takes only a small amount of time.
Therefore, the R3load processes should not use more than 90% of the CPU time on the application server. If they do, either reduce the number of R3load processes or equip the server on which SUM is running (usually the application server) with more CPUs, if feasible.




Memory Usage:

Analogous to the CPU usage on the server where SUM is running, enough main memory should be available for the R3load processes.

Otherwise the operating system will apply paging mechanisms that significantly slow down the migration performance.
The minimum memory usage of a single R3load process during the migration of a standard table is about 60 MB.

Especially when declustering is necessary (for target releases 7.40 and higher), the memory required by R3load is highly content-dependent.
It therefore makes sense to monitor the actual memory usage during a complete test migration run to determine the optimal memory configuration.
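As a rough illustration of the 60 MB figure, using the 220 R3load processes from the analysis example earlier (declustering can need substantially more, so treat this as a floor, not a budget):

```python
# Lower bound on the memory the R3load processes need on the SUM host.
r3load_processes = 220   # e.g. 110 export + 110 import
min_mb_per_process = 60  # minimum for a standard table

min_total_gb = r3load_processes * min_mb_per_process / 1024
print(f"at least {min_total_gb:.1f} GB for R3load alone")
```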



Disk I/O:

The performance of export and import operations on the source and target DB depends on good disk input/output (I/O) performance. It might therefore be necessary to postpone activities that create heavy disk I/O (such as backup jobs) during the migration run.

It is sometimes not obvious which activities create disk I/O and have a negative impact on the DMO migration performance.
In this case it can be useful to actively monitor the disk I/O during a test migration to pinpoint the timeframe of problematic activities.


Network:
The network can also be a bottleneck, so it is recommended to monitor the throughput of the different network connections (from the PAS to the source DB, and from the PAS to the target SAP HANA DB) during a migration run.

In theory this should not be a major issue with modern LAN networks: the recommended 10 Gbit LAN would already deliver an expected transfer rate of ~3500 GB/hour. A low throughput can therefore be an indicator of an unfavorable setup for the migration (e.g. data flowing through two firewalls).
It also has to be considered whether parallel migrations of different systems, or other activities that use network bandwidth, are planned.
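The ~3500 GB/hour figure can be sanity-checked with simple arithmetic; the 80% efficiency factor below is an assumption for protocol and framing overhead:

```python
# Theoretical throughput of a 10 Gbit/s link, derated for overhead.
link_gbit_per_s = 10
efficiency = 0.8  # assumed usable fraction of raw bandwidth

gb_per_s = link_gbit_per_s / 8 * efficiency  # bits -> bytes
gb_per_h = gb_per_s * 3600
print(f"~{gb_per_h:.0f} GB/hour")
```

The result of roughly 3600 GB/hour is in the same ballpark as the figure quoted above.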



Remove the bottlenecks
Depending on the results of your analysis, there are various ways to deal with the bottlenecks found.
If a more powerful machine is required for the R3load processes, it might be an option to run SUM on a powerful Additional Application Server (AAS) instance with free resources.
In general, SUM and SUM with DMO may be executed not only on the Primary Application Server (PAS) but also on an Additional Application Server (AAS). However, running SUM with DMO on an AAS is only supported if your system has a separate ASCS instance.

It might even be possible to use an SAP HANA standby node for this purpose, especially if the network connection to the SAP HANA database is the bottleneck.

Housekeeping
Especially when performing an SAP BW migration, the positive impact of housekeeping tasks such as cleaning up the persistent staging area (PSA), deleting aggregation tables, and compressing InfoCubes should not be underestimated.

For details regarding the SAP BW migration using DMO see the document:
SAP First Guidance – Using the new DMO to Migrate BW on HANA

But even with a standard DMO, you should give some thought to housekeeping before starting the migration. For example, it might be an option to delete or archive old data that is no longer accessed frequently (analogous to moving BW data to near-line storage) before starting the DMO migration. This data does not need to be transferred, which reduces the migration runtime, and it does not need to be stored in memory on the target HANA database.

 

Table Comparison
After you have optimized the DMO migration using the benchmarking tool, you are ready for the first test migration run.
You now have the option to let SUM compare the contents of tables on the target database with their respective contents on the source database to make sure that everything has been migrated successfully.


We recommend switching on the table comparison for all tables in the first test run only.
The reason is that the full table comparison via checksums takes a lot of time, usually as long as the table export itself.
If no errors are found, keep the table comparison off (“Do not compare table contents”) or compare only single, business-critical tables in the productive DMO migration run.
This minimizes the downtime in the productive run.
In fact, even when “Do not compare table contents” is selected, SUM still compares the number of rows of each migrated table on the target database with the number of rows on the source database after its content has been migrated.
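To illustrate why the checksum comparison is so much more expensive than the row-count check, consider this simplified sketch with hypothetical in-memory tables (SUM's actual implementation differs; this only shows the principle):

```python
import hashlib

def row_count_check(source_rows, target_rows):
    # Cheap: only the cardinality of the two tables is compared.
    return len(source_rows) == len(target_rows)

def checksum_check(source_rows, target_rows):
    # Expensive: every row on both sides must be read and hashed,
    # which is why this takes roughly as long as the export itself.
    def table_checksum(rows):
        digest = hashlib.sha256()
        for row in sorted(rows):
            digest.update(repr(row).encode())
        return digest.hexdigest()
    return table_checksum(source_rows) == table_checksum(target_rows)

source = [("0001", "Material A"), ("0002", "Material B")]
target = [("0001", "Material A"), ("0002", "Material X")]  # corrupted row

print(row_count_check(source, target))  # True  - count alone misses the error
print(checksum_check(source, target))   # False - content comparison finds it
```

This is the trade-off behind the recommendation: the row-count check is almost free but only catches missing rows, while the checksum comparison also catches corrupted content at the cost of a full read of both databases.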

For further information regarding the DMO table comparison see DMO: table comparison and migration tools

 

Downtime Optimization
If the performance of the standard DMO is still not sufficient after all optimization potential has been utilized (usually a migration speed of up to ~500 GB/h can be reached) and the downtime needs to be significantly shorter, additional options to minimize the downtime are available.

 


Downtime optimized DMO:

The Downtime optimized DMO further reduces the downtime by enabling the migration of selected application tables during the DMO uptime.

The report RSDMODBSIZE (available with SAP Note 2153242) determines the size of the largest tables in an SAP system and provides an estimate of the transfer time required for these tables in the DMO downtime.
Tables transferred with Downtime optimized DMO during the DMO uptime effectively reduce the downtime.
The report facilitates the decision whether the usage of Downtime optimized DMO is suitable, and generates a list of tables as input for SLT.
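The idea behind such an estimate can be sketched as follows. The table names, sizes, and the 500 GB/h rate below are made-up illustrative values, not output of RSDMODBSIZE:

```python
# Hypothetical example: estimate the downtime transfer duration per table set
# and pick the largest tables as candidates for uptime migration.
# Table sizes (GB) and the 500 GB/h rate are illustrative assumptions.

tables_gb = {"EDI40": 800, "BALDAT": 300, "COEP": 250, "VBAP": 40}
migration_rate_gb_per_h = 500

def downtime_hours(tables):
    # Total downtime contribution if all listed tables move in downtime.
    return sum(tables.values()) / migration_rate_gb_per_h

def uptime_candidates(tables, top_n=2):
    # Moving the largest tables to the uptime saves the most downtime.
    return sorted(tables, key=tables.get, reverse=True)[:top_n]

print(round(downtime_hours(tables_gb), 2))   # 2.78 hours for all tables
print(uptime_candidates(tables_gb))          # ['EDI40', 'BALDAT']
```

A small number of very large tables typically dominates the total, which is why moving just the top candidates to the uptime migration can cut the downtime substantially.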




The following blog post describes this technology, its prerequisites, and how to register for pilot usage of the Downtime optimized DMO:

DMO: downtime optimization by migrating app tables during uptime (preview)

Note that the Downtime optimized DMO works for SAP Business Suite systems, but not for SAP BW.

 

BW Post Copy Automation including Delta Queue Cloning
To minimize the migration downtime of a productive SAP BW system, one of the recommended migration paths from SAP BW to SAP BW on SAP HANA comprises a system copy of your SAP BW system.
To keep things simple, SAP offers the Post-Copy Automation framework (PCA) as part of SAP Landscape Virtualization Management, which includes post-copy automation templates for SAP BW as well as an automated solution for delta queue cloning and synchronization, enabling the parallel operation of your existing production system.




In combination with SUM DMO, the production downtime of the migration from SAP BW to SAP BW on SAP HANA can be kept to a minimum. The usage of the delta queue cloning solution requires additional steps to be performed before the standard SUM DMO is started.







Goods and Services Tax (GST) Information



Goods and Services Tax (GST) will replace the existing indirect taxes in India.

The minimum support package level that the customer should be on is provided below:


SAP_APPL Release Support Pack

SAP ERP 6.0 (600) SP 26
EHP2 FOR SAP ERP 6.0 (602) SP 16
EHP3 FOR SAP ERP 6.0 (603) SP 15
EHP4 FOR SAP ERP 6.0 (604) SP 16
EHP5 FOR SAP ERP 6.0 (605) SP 13
EHP6 FOR SAP ERP 6.0 (606) SP 14
EHP6 FOR SAP ERP 6.0 for HANA (616- SAP HANA) SP 08
EHP7 FOR SAP ERP 6.0(617) SP 07
EHP8 FOR SAP ERP 6.0(618) SP 02
SAP S/4HANA ON-PREMISE 1511 SP 02


TAXINN is the default tax procedure. For questions related to the migration of the tax procedure, refer to the FAQ in SAP Note 2252781.


1. GST India - Changes to Master data (SAP Note 2385575)

2. GST India - Changes to Master data - DDIC activities (SAP Note 2405502)

3. GST India - Utilities (Utility Objects for GST) (SAP Note 2417506)

4. GST India: Removal of Country Constant from J_1B_LOCALIZATION_SRV Structure (SAP Note 2446083)

5. GST India - Changes to tax procedure and pricing procedure (SAP Note 2407980)

6. GST IN: BAdI definition for screen enhancement in Enjoy transactions of FI and MM (SAP Note 2376723)

7. GST IN: Line item wise tax calculation for sales invoices (SAP Note 2410917)

8. GST India - Changes to Transaction data - Data Dictionary activities (SAP Note 2415115)

9. GST IN: Defaulting "No excise entry" for GST material in the excise tab of transaction MIGO while doing goods receipt (SAP Note 2382903)

10. GST India - Disabling excise invoice creation in background in transaction VF01 for the GST scenario (SAP Note 2434518)

11. GST IN: Sales related changes for India GST for sales order creation and invoice posting (SAP Note 2410105)

12. GST IN: BAdI implementation in Enjoy transactions of FI and MM (SAP Note 2378678)

13. GST India: Stock Transfer (SAP Note 2416018)

14. GST IN: Changes to MIRO invoice item level screen (SAP Note 2419214)

15. GST IN: Changes for FI invoice item level screen (SAP Note 2419215)

16. GST India: External Service Management (SAP Note 2444868)

17. GST India: Import (SAP Note 2458404)

18. GST IN: Corrections to Note 2444868 (External service procurement) (SAP Note 2478039)

19. GST India: Changes to manual steps provided in the attachments of SAP Note 2415115 (SAP Note 2456927, version 4)

20. GST India - DDIC activity and configurations for GST compensation cess (SAP Note 2482200)

21. GST IN: Reverse Charge Configuration (SAP Note 2482349)

22. GST: Invoice Form (SAP Note 2482854)

23. GST IN: Data Dictionary Changes for Subcontracting/Jobwork (SAP Note 2484361)

24. GST IN: Subcontracting process/Jobwork (SAP Note 2483852)

25. GST India: Tax Deduction at Source (TDS) (SAP Note 2487373)

Minimum support package level for any subsequent legal change, for country version India


You are required to be at the following minimum support package level of software component SAP_APPL to obtain support for any subsequent legal change for country version India (CIN).

SAP_APPL Release Support Pack

SAP ERP 6.0 (600) SP 26
EHP2 FOR SAP ERP 6.0 (602) SP 16
EHP3 FOR SAP ERP 6.0 (603) SP 15
EHP4 FOR SAP ERP 6.0 (604) SP 16
EHP5 FOR SAP ERP 6.0 (605) SP 13
EHP6 FOR SAP ERP 6.0 (606) SP 14
EHP6 FOR SAP ERP 6.0 for HANA (616- SAP HANA) SP 08
EHP7 FOR SAP ERP 6.0(617) SP 07
EHP8 FOR SAP ERP 6.0(618) SP 02
SAP S/4HANA ON-PREMISE 1511 SP 02

STMS: Duplicate transport requests in the import queue


A transport request exported from the source system already exists in the target system's import queue.

Possible reasons for this issue:

- Two systems with the same SID exist, which is not supported.
- A recent system copy was performed with the SID on the transport request as the target system.

If the above reasons do not apply, the most likely cause is a manual modification of table E070L. Note that this possibility can only be investigated if table logging was active for E070L.

Take into account that the name range for customer transports starts with <SID>K9. All other names are reserved for SAP.
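This naming convention can be checked with a small illustrative script (the helper below is hypothetical, not an SAP tool; <SID> stands for the three-character system ID):

```python
import re

def is_customer_transport(name, sid):
    # Customer transport requests are named <SID>K9<nnnnn>
    # (e.g. DEVK900123); other prefixes are reserved for SAP.
    return re.fullmatch(rf"{re.escape(sid)}K9\d{{5}}", name) is not None

print(is_customer_transport("DEVK900123", "DEV"))  # True
print(is_customer_transport("DEVX900123", "DEV"))  # False - reserved for SAP
```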


Solution:

If table E070L was modified manually, the only way to correct the transport request number is to change the value of the TRKORR field in that table.

Use report RSWBO301 to determine the free number ranges available. New change request numbers will then continue to increment from the number entered in the TRKORR field.

STMS: Transport log overview missing




Transport log overview missing

You use the Transport Organizer (SE01, SE09, or SE10) or STMS to see transport logs in DEV, but you cannot see the logs for the import into QTY.


If you want to be able to read the log overview from any system within the transport domain,
then all systems must share one transport group and one transport directory.

You will then have only one cofile for all systems, as opposed to a separate cofile for each system.

This resolves the issue for new transports going forward, but not for old transports.
For those, you can try to copy the cofile from the last system in the transport route into all previous systems, as that cofile usually contains all of the required information.
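A minimal sketch of that copy step, assuming the standard transport directory layout with a 'cofiles' subdirectory (the directory paths in the example are made up):

```python
import shutil
from pathlib import Path

def cofile_name(request):
    # Cofiles are named K<number>.<SID>, e.g. K900123.DEV for DEVK900123.
    sid, number = request[:3], request[4:]
    return f"K{number}.{sid}"

def copy_cofile(request, last_trans_dir, previous_trans_dirs):
    # Copy the cofile from the last system's transport directory into the
    # 'cofiles' subdirectory of each previous system's transport directory.
    source = Path(last_trans_dir) / "cofiles" / cofile_name(request)
    for target in previous_trans_dirs:
        shutil.copy2(source, Path(target) / "cofiles")

# Example (assumed, non-shared transport directories):
# copy_cofile("DEVK900123", "/usr/sap/trans_prd",
#             ["/usr/sap/trans_dev", "/usr/sap/trans_qas"])
```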

STMS: Error "Cannot create user TMSADM" while configuring STMS



While configuring STMS, if you face the error "Cannot create user TMSADM":

Create the following entry in table TMSCROUTE:

System: ,ADMPWD
RFCROUTE: USER


This means that the new standard TMSADM password will be used, which is a mixture of uppercase and lowercase letters, special characters, and digits that suits most customers' password rules.


If the password rules are very strict (for example, they require 10 characters), you will have to adjust the profile parameters so that TMSADM can be created.





STMS: RFC communications error with system destinations


While using STMS, if you experience the error below:

RFC communications error with system/destination
%%ashost=ciSDS,sysnr=,lang=E,client=<nnn>,pcs=1,icce
RFC destination
%%ashost=ciSDS,sysnr=,lang=E,client=<nnn>,pcs=1,icce=1,cfit
=0x20 does not exist.







This can be caused by an incorrect hostname in the Communication tab of the TMS configuration. The Target Host in the Communication tab should match the host name in SM51.

Alternatively, it can be caused by a missing System No. in the Communication tab of the TMS configuration.








Solution:

Ensure that the server name and host name in SM51 are both uppercase or both lowercase (they cannot be mixed), and that the host name in SM51 matches the Target Host in STMS.

If needed, correct the host name in the Communication tab manually via the domain controller, but this should not be required if the server name and host name are correctly maintained in SM51.

If the issue is caused by a missing System No., add the system number (again, manually in the Communication tab via the domain controller), or delete the system from STMS and re-add it. Keep in mind that if the system in question is the domain controller, you will need to activate the backup domain controller first.

Sunday, June 11, 2017

SUM fails to start correctly on AIX, also unable to connect using the GUI


SUM fails to start correctly on AIX, and you are unable to connect using the GUI.

You will also sometimes see an HTTP 404 error.


When you start the SUM service, you are unable to connect using the remote GUI.
Checking the host control directory, you find errors, possibly including:
[Thr 515] *** ERROR => Received a corrupt CommunicationHeader [Communicatio 175]
[Thr 515] *** ERROR => Header could not be read [Communicatio 154]
[Thr 515] *** ERROR => ProcessWatcher::Loop: More that 100 messages corrupt. Exit from ProcessWatcher::Loop [ProcessWatch 252]


The issue is resolved as follows:

Run CALL R3SAP400/SAPINIT

If there are still issues connecting to SUM, update the SAPHOSTAGENT on your system,

or check the authorizations on the directories under /usr/sap/hostctrl/.

SUM: MC*SETUP entries found in extractor queue, please clean them up


While performing a stack upgrade, if you experience the message below:

SUM: MC*SETUP entries found in extractor queue, please clean them up

Solution:

Go to LBWG/LBWQ and delete the entries MCEX01, MCEX02, MCEX03, MCEX04, etc.

Also check whether there are any entries in SMQ1. If so, delete them.



SUM: Error in phases MAIN_SWITCH/JOB_RSVBCHCK2 or MAIN_SWITCH/JOB_RSVBCHCK_D



While performing the GST stack upgrade, if you face the error below:


Error in phases MAIN_SWITCH/JOB_RSVBCHCK2 or MAIN_SWITCH/JOB_RSVBCHCK_D; Checks after phase MAIN_INIT/JOB_RSVBCHCK* were


Batchjob RSVBCHCK failed.
Detected the following errors due to error summary in /exportdmp/software/SUM/abap/log/PSVBCHCK.ELG:

1 ETH010XRSVBCHCK: Check of open update requests
2EETG050 Update records still exist - Please process

In Log file: PSVBCHCK.ELG

1 ETH010XRSVBCHCK: Check of open update requests
2EETG050 Update records still exist - Please process



Possible reasons:

The update requests have been canceled, but entries in tables related to the update process are not empty, and these tables must be empty during the upgrade process.

Outstanding update requests (visible in transaction SM13) and queued RFCs (visible in transaction SMQ1) exist in the system.

Follow the steps below to fix the issue:

Go to transaction SM13 and delete the default values for the client, user, and time.

Select all the update requests and clear them.

Run report RSM13002 with "delerr" option to delete incomplete update records.

--------------------------------------------------------------------------------------------------------------------




Batchjob RSVBCHCK failed.

Detected the following errors due to error summary in ../SUM/abap/log/PSVBCHCK.ELG:
1 ETH010XRSVBCHCK: Check of open update requests


A2EEMCEX 151 Entries for application "xx" still exist in the extraction queue ->
Log file: PSVBCHCK.ELG

1 ETH010XRSVBCHCK: Check of open update requests
A2EEMCEX 151 Entries for application "xx" still exist in the extraction queue ->


Solution:

Make sure that you have cleaned up all outbound queue RFC calls

To find unresolved outbound queue RFC calls, proceed as follows:

Call transaction SMQ1.

Delete the default values for the client.

Make sure that the list of outbound queue RFC calls is empty.


To be able to log on to each listed client, unlock the system first, then log on to the client that has open update requests and perform the deletion.

Make sure that you lock the system again after you delete the entries, and then continue with the upgrade.
(see also the SAP KBA 1901463)

----------------------------------------------------------------------------------------------------------------------



Batchjob RSVBCHCK failed.

Detected the following errors due to error summary in
/exportdmp/software/SUM/abap/log/PSVBCHCK.ELG:

1 ETH010XRSVBCHCK: Check of open update requests
2EETG050 Update records still exist - Please process
A2EEMCEX 151 Entries for application "xx" still exist in the extraction queue ->
In Log file: PSVBCHCK.ELG

1 ETH010XRSVBCHCK: Check of open update requests
2EETG050 Update records still exist - Please process
A2EEMCEX 151 Entries for application "xx" still exist in the extraction queue ->



Execute both of the above steps as described to resolve the issue.

---------------------------------------------------------------------------------------------------------------------