Sunday, October 2, 2016

Difference between SAP_ALL and SAP_NEW profiles






SAP_NEW:-

SAP_NEW is an SAP standard profile that is usually assigned to system users temporarily during an upgrade, to ensure that the activities and operations of SAP users are not delayed during the upgrade. It contains all the necessary objects and transactions for the users to continue their work during the upgrade.

It should be withdrawn once all upgrade activities are completed, and replaced with the now modified roles, as it grants more extensive authorizations than required.


SAP_ALL:-

SAP_ALL is an SAP standard profile that is used on a need basis to resolve particular issues which may arise during the usage of SAP. It is used by administrators/developers only, applied on a need-to-use basis, and then withdrawn. It contains all SAP system objects and transactions. SAP_ALL is very critical, and only SAP* has SAP_ALL attached to it in the production system; no other dialog users have SAP_ALL attached to them.

SAP_NEW is used in the production environment during a version upgrade, whereas SAP_ALL should not be used in production except where necessary, in a controlled manner and with all proper approvals from the customer.



To check BRTOOLS version



# su - ora<sid>

Type this command:

> brtools -V

You will see output like the following:

host:oraids 1> brtools -V
BR0651I BRTOOLS 6.40 (40)

Patch Date Info

36 2006-01-11 Small functional enhancements in BR*Tools (note 914174)
38 2006-03-29 BR*Tools support for MDM databases (note 936665)
40 2006-08-30 Wrong message numbers in BR*Tools 6.40 (note 976755)

release note 680046
kernel release 640
patch date 2006-08-30
patch level 40
make platform hpia64
make mode OCI_920
make date Sep 5 2006



You can also see this information from within the interactive BRTOOLS menu.

Optimizer Statistics Update

brconnect -u / -c -f stats -t all -m +I -s P10 -f allsel,collect,method,precision -p 8
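A hedged breakdown of the options used above, based on standard brconnect usage (sample size and parallelism should be adapted to your system):

   -u /         connect to the database as the OPS$ operating system user
   -c           run unattended, without confirmation prompts
   -f stats     function: update optimizer statistics
   -t all       process all tables
   -m +I        also collect statistics for indexes
   -s P10       estimate statistics with a 10 percent sample
   -p 8         run with 8 parallel processes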

To analyze audit information during user logon

1  Incorrect logon data (client, user name, password)
User: check the logon data entered (enter data again)
Admin: check the logon data for the service users, for example in the ITS service file
(usually wrong client) or in the RFC destinations
(usually wrong password)

2  User is locked (by administrator or on account of failed logons)

User: Contact user administrator / helpdesk
Admin: Release lock(s) (transaction SU01)
3  Incorrect logon data; for SAPGUI: connection closed
see 1
4  Logon using emergency user SAP* (refer to Note 2383)
User: no error – logon successful
Admin: deactivate the automatic user SAP* if necessary
(Note 68048)
5  Error when constructing the user buffer (==> possibly a follow-on error!)
User: Contact user administrator / helpdesk
Admin: solve technical problem (refer to Note 10187)
6  User only exists in the central user administration (CUA)
User: check the logon data entered (enter data again)
Admin: check settings for the central user administration
(refer to Note 159885)
7  Invalid user type
User: check the logon data entered (enter data again)
Admin: change user type (transaction SU01)
8  User account outside validity period
User: Contact user administrator / helpdesk
Admin: change validity period (transaction SU01)
9  SNC name and specified user/client do not match
User: check the logon data entered (enter data again)
Admin: change SNC assignment if necessary (transaction SU01)
10  Logon requires SNC (Secure Network Communication)
User: Contact system administrator / helpdesk
Admin: check SNC settings (refer to “SNC User’s Guide”)
11  No SAP user with this SNC identification in the system
User: Contact system administrator / helpdesk
Admin: if necessary, enhance or correct SNC name mapping ==> R/3 account (table USRACL(EXT))
(transaction SU01)
(see: SAPnet – http://service.sap.com/security:
-> Security in Detail -> Infrastructure Security:
“SNC User’s Guide”)
12  ACL entry for SNC-secured server-server link is missing
User: Contact system administrator / helpdesk
Admin: if necessary, enhance or correct SNC name mapping ==> access types (table SNCSYSACL)
(transaction SNC0).
This setting is necessary for
X.509 certificate logons, external IDs or
SNC-secured system-system links (RFC)
(see: SAPnet – http://service.sap.com/security:
-> Security in Detail -> Secure User Access -> Authentication & Single Sign-On:
“SNC User’s Guide” or
“X.509 Certificate Logon via the ITS”)
13  No suitable SAP account found for the SNC name
User: Contact system administrator / helpdesk
Admin:    see Section 11  (=> Note 650347)
14 Ambiguous assignment of SNC names to SAP accounts
User: Contact system administrator / helpdesk
Admin:    see Section 11  (=> Note 650347)
20  Logon using logon ticket is deactivated
User: Contact system administrator / helpdesk
Admin: Set profile parameter login/accept_sso2_ticket = 1
(Refer to Note 177895 – Technical Prerequisites)
21  Syntax error in the received logon ticket
User: Contact system administrator / helpdesk
Admin: analyze the error by trace (Level 2, only “Security” component)
contact the SAP Hotline if necessary (BC-SEC)
22  Digital signature check for logon ticket fails
User: Contact system administrator / helpdesk
Admin: analyze the error by trace (Level 2, only “Security” component)
check settings using transaction SS02,
(configuration error, refer to Note 177895),
contact SAP Hotline if necessary (BC-SEC-SSF)
23  Logon ticket issuer is not in the ACL table
User: Contact system administrator / helpdesk
Admin: analyze the error by trace (Level 2, only “Security” component)
check settings using transaction SS02
(configuration error, ACL table: TWPSSO2ACL,
see Note 177895)
24  Logon ticket is no longer valid
User: log on to the Workplace server (ticket issuer) again
Admin: extend the ticket validity period if necessary
(profile parameter login/ticket_expiration_time)
30  Logon using X.509 certificate is generally deactivated
User: Contact system administrator / helpdesk
Admin: set profile parameter snc/extid_login_diag = 1 if necessary
(see: SAPnet – http://service.sap.com/security:
-> Security in Detail -> Secure User Access -> Authentication & Single Sign-On:
“X.509 Certificate Logon via the ITS”)
31  Syntax error in the received X.509 certificate
User: Contact system administrator / helpdesk
Admin: analyze the error by trace (Level 2, only “Security” component)
contact SAP Hotline if necessary (BC-SEC-SSF)
32  X.509 certificate does not originate from the Internet Transaction Server
User: Contact system administrator / helpdesk
Admin: Check the configuration – this error is very rare,
analyze the error by trace (Level 2, only “Security” component)
contact the SAP Hotline if necessary (BC-SEC)
34  No appropriate SAP account found for the X.509 certificate
User: Contact system administrator / helpdesk
Admin: Check the X.509 certificate mapping ==> R/3-Account
(Table USREXTID, TYPE=DN using view VUSREXTID, SM30),
analyze the error by trace (Level 2, only “Security” component)
(display X.509 certificate contents).
(see: SAPnet – http://service.sap.com/security:
-> Security in Detail -> Secure User Access -> Authentication & Single Sign-On:
“X.509 Certificate Logon via the ITS”)

35  Ambiguous assignment of X.509 certificate to SAP account

User: Contact system administrator / helpdesk
Admin: Check the X.509 certificate mapping ==> R/3-Account
(as for error code 34), alternatively you can enter
USER=* as part of the logon process (RFC) and thereby force the mapping onto the
“selected” entry (No. 000).
41  No suitable SAP account found for the external ID
— analogous to error code 34, difference: different TYPE assignment
42  Ambiguous assignment of external ID to SAP accounts
— analogous to error code 35, difference: different TYPE assignment
50  Password logon is deactivated
User: contact system administrator / helpdesk or
use other logon variant (=> Single Sign-On)
Admin: see note 379081: Profile parameters
- login/disable_password_logon
- login/password_logon_usergroup
51  Initial password has not been used for too long
User: Contact user administrator / helpdesk
Admin:  assign new password (transaction SU01)
see note 379081: Profile parameters
- login/password_max_new_valid
- login/password_max_reset_valid
- login/password_max_idle_initial (from 7.00)
52  User does not have a password
User: Contact user administrator / helpdesk
Admin: assign new password (transaction SU01)
53  Password lock active (too many failed logons)
User: Contact user administrator / helpdesk
Admin:  release lock and assign new password if necessary
see note 939017: Distinction between types of locks
54  Productive password has not been used for too long
User: Contact user administrator / helpdesk
Admin:  assign new password (transaction SU01)
see note 862989: Profile parameter
- login/password_max_idle_productive
100  Client does not exist
User: check the logon data entered (enter data again)
Admin: check the logon data for the service users, for example in the ITS service file
or in the RFC destinations (client specification)
101  Client is currently locked for logons (upgrade running)
User: contact system administrator / helpdesk or
carry out logon at a later stage
Admin: See Note 12946.
1001 Password has expired – interactive change required (RFC/ICF)
User: Contact system administrator / helpdesk
Admin: set profile parameter rfc/reject_expired_passwd = 0 or
profile parameter icf/reject_expired_passwd = 0
(see Notes 161146 and 454962)

Very useful SAP Notes for STMS issues



Note 71353 - Transports hang during background steps

See also Notes 12746, 818065, 302859, 556941, 556946, 49242, 323726, 56311, 24800, 486991, 506771, 138200, 26966, 449270.

Note 1114801 - STMS_QA: Error UNKNOWN_SYSTEM with authorization in QA
Note 1298927 - SYSTEM_ACCESS_DENIED when creating/deleting TMS domain link
Note 642464 - Profile file for the program sapevt
Note 821875 - Security settings in the message server
Note 802172 - sapevt (Version 4): Release-independent version
Note 1223360 - Composite SAP Note: Performance optimization during import
Note 1008058 - Performance of individual imports
Note 1127194 - R3trans import with parallel processes
Note 41732 - Deletion of data in transport directory
Note 168175 - tp check/clearold problems
Note 312843 - tp CHECK/CLEAROLD latest news
Note 556734 - FAQ Transport: Setup and further information
Note 1809122 - Termination in transaction STMS
Note 2207671 - RFC system error in STMS
Note 690449 - for any .LOB file that is locking tp

Landscape Management Database (LMDB)

The Landscape Management Database (LMDB) is a directory of the elements of a system landscape. The core task of the LMDB is to provide information about the entire system landscape in a central location. The Solution Manager System Landscape (SMSY) and the System Landscape Directory (SLD) already provide this function, with different technologies and for different applications. The aim of the LMDB is to unify the SLD and SMSY in SAP Solution Manager. The LMDB is supplied with data automatically, as far as possible.

                               

Integration with the System Landscape Directory
Data suppliers exist for most technical systems and automatically register these systems in the System Landscape Directory (SLD). The SLD, therefore, is the central data source for the LMDB. Changes to technical systems should be carried out in the SLD where possible.
Integration with the Solution Manager System Landscape


For information on the advantages of the LMDB with regards to the previous SMSY storage, see the SCN blog Evolution of landscape data management - What's better with LMDB?
SAP Note 1679673 explains the most important differences between LMDB and SMSY.

There is a clear separation of tasks in System Landscape Directory (SLD) and LMDB:
The SLD is the target for the self-registration of technical systems in the IT landscape. This technical system information is provided for non-productive and productive applications, such as SAP NetWeaver Process Integration and Web Dynpro (Java) based applications, that need it. Because of the productive use of the data it provides, customer landscapes often contain many SLD systems (see SLD Topology: How-To Gather and Distribute SLD-Data in Your IT-Landscape).
The LMDB retrieves technical system information from the SLD as a basis for landscape data, in order to manage technical systems in monitoring and maintenance processes. It acts as the single source of truth in SAP Solution Manager. It provides a centralized UI approach to the landscape verification functions and transaction SMSY.
As of SAP Solution Manager 7.1 SP0-4, the LMDB manages information about technical systems and hosts.
As of SP5, the LMDB also manages product system information.
Transaction SMSY is the "old" system information storage, being replaced by the LMDB bit by bit.
Up to SAP Solution Manager 7.1 SP4, SMSY is still used to manage product system information, but gets all technical system information from the LMDB.
As of SP5, SMSY is only used to manage project-related data (logical components and solutions); the required data (technical systems and product systems) are edited and provided by the LMDB (see New in SP05: Product System Editor in the Landscape Management Database of SAP Solution Manager 7.1).


                               

As of SAP Solution Manager 7.2, SMSY does not exist any longer. See Deactivation of Transaction SMSY.
The separation of roles is reflected in the architecture: technical systems can register only in the SLD. All data changes in the SLD are retrieved immediately by the LMDB, as are the CIM model and the CR content (the SAP software catalog). Both SLD and LMDB are therefore needed in one landscape: without a connected SLD the LMDB cannot work, and the data in SLD and LMDB must be consistent and must not overlap.

TP Clean buffer


Read the SAP Notes below, which will help you to fix buffer issues:

Note 41732 - Deletion of data in transport directory
Note 168175 - tp check/clearold problems
Note 312843 - tp CHECK/CLEAROLD latest news


Show buffer contents: tp showbuffer <SAPSID>

Delete one transport request from a buffer:
tp delfrombuffer <TR> <SAPSID>

tp cleanbuffer <SAPSID> [options ...]
Removes all completed transport requests from the buffer of system <SAPSID>.

  Options:
     pf=<TPPARAM>          specify a transport parameter file
     silent                redirect stdout to dev_tp in current directory
     -D"<entry>"           specify a parameter for tp
     -t<k>                 specify a trace level for tp
     tf=<filename>         specify the name of a trace file for tp
     shdwtadir=<tabname>   specify the name of a shadow TADIR

  Description of arguments:
     <TPPARAM>   (complete) path of a transport parameter file
     <entry>     a parameter description as in a tp parameter file
     <SAPSID>    R/3 system's name, (target or source whatever is appropriate)
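A minimal usage sketch (the SID "PRD", the request "DEVK900123", and the profile path are placeholders; run as <sid>adm):

   tp showbuffer PRD pf=/usr/sap/trans/bin/TP_DOMAIN_PRD.PFL
   tp delfrombuffer DEVK900123 PRD pf=/usr/sap/trans/bin/TP_DOMAIN_PRD.PFL
   tp cleanbuffer PRD pf=/usr/sap/trans/bin/TP_DOMAIN_PRD.PFL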

STMS: Clean-up STMS Transport Directory /usr/sap/trans


The directory /usr/sap/trans is filling up, and most of the files are generated in the cofiles, data, sapnames and log directories, which hold the SAP transport files. You plan to delete these SAP transport files to regain space in /usr/sap/trans.


The STMS is configured as DEV > QAS > PRD, with the development system set as the domain controller.

1. Log in to the operating system as <sid>adm and go to /usr/sap/trans/SID/bin.


2. First, execute the command tp CHECK ALL. The command generates a list of all deletable files in the cofiles, data and log directories.


3. The tp CHECK ALL command logs all the information to /usr/sap/trans/SID/tmp/CHECK.LOG.


4. Before you proceed with the file deletion, you can execute a delete test (tp TESTOLD ALL).
5. You can get the test log from the /usr/sap/trans/tmp/TESTOLD.LOG file.


6. Before executing this step, you need to create (if it does not exist) the folder olddata. Assign olddata the ownership sidadm:sapsys with permissions rwxrwx--x (771). Once it is created, you can proceed with the file deletion as per below (tp CLEAROLD ALL). The command deletes files from the subdirectories cofiles, log, and olddata, and moves files from the subdirectory data to the subdirectory olddata. The log of the activity can be obtained from /usr/sap/trans/tmp/CLEAROLD.LOG.


NOTE:
The tp CLEAROLD ALL command is executed according to the lifetime settings (e.g. datalifetime, olddatalifetime, loglifetime, cofilelifetime) defined in the transport profile.
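A hedged sketch of the full sequence as <sid>adm (the profile name and SID are placeholders; verify TESTOLD.LOG before running CLEAROLD):

   cd /usr/sap/trans/SID/bin
   tp check all pf=TP_DOMAIN_DEV.PFL       (writes deletable files to ../tmp/CHECK.LOG)
   tp testold all pf=TP_DOMAIN_DEV.PFL     (simulation only, logged to ../tmp/TESTOLD.LOG)
   tp clearold all pf=TP_DOMAIN_DEV.PFL    (deletes/moves the files, logged to ../tmp/CLEAROLD.LOG)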


GET_SYSTEM_TYPE of the class CL_ESH_OM_MODELS

When you execute transaction STMS, a runtime error occurs in the method GET_SYSTEM_TYPE of the class CL_ESH_OM_MODELS.

It's a program error.

1809122 - Termination in transaction STMS 


STMS: If an SAP transport request hangs

An SAP transport request hangs.

Check the SAP Notes below, which will help you fix the issue:

Note 71353 - Transports hang during background steps
See also Notes 12746, 818065, 302859, 556941, 556946, 71353, 49242, 323726, 56311, 24800, 486991, 506771, 138200, 26966, 449270.
Note 1114801 - STMS_QA: Error UNKNOWN_SYSTEM with authorization in QA
Note 1298927 - SYSTEM_ACCESS_DENIED when creating/deleting TMS domain link
Note 642464 - Profile file for the program sapevt
Note 821875 - Security settings in the message server
Note 802172 - sapevt (Version 4): Release-independent version

1. Verify that there are no errors in the system log.

2. If there are problems with job RDDIMPDP:
Execute SE38 > run program RDDNEWPP > choose normal priority. This schedules a new RDDIMPDP job, which is responsible for pushing transport requests in your system.

3. Execute transaction STMS:

a) Select Import Overview
b) Select the SID
c) From the menu: Goto -> System Log
d) Correct any issues found in the log.

4. \usr\sap\trans\tmp\SID.LOB is already in use

(22720), I'm waiting 2 sec (20100322091720). My name: pid 10128 on
SERVER (SIDadm)

a) Back up and delete the file usr\sap\trans\tmp\SID.LOB. Add the transport again and perform the import.

b) Cannot find \usr\sap\trans\log\SLOGXXX.SID
Make sure that the file exists. If not, copy the most recent SLOG (for example SLOGXX01) and rename it to SLOGXX02.SID, as sketched below. Check the system log again if the issue persists.
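A minimal sketch (file names are placeholders; SLOG files are named SLOG<YYWW>.<SID> by year and calendar week):

   cd /usr/sap/trans/log
   cp SLOG1637.PRD SLOG1638.PRD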



Steps to clear the transport list to make sure that transports push successfully. This should be aligned with the customer.

1. Clear the transport list and redo the transport
a) Execute transaction STMS
b) Import overview -> Goto -> Import monitor
c) In the Import Monitor, right-click the transport -> Delete entry
d) Redo the transport action. Proceed to next step if not solved.

2. Clear the transport tables
a) Make sure that there are no stuck transports in your transport tables.
b) Check for and delete (using transaction SM30) any entries found in the following tables:

TRJOB
TRBAT

b.1 To do this, execute SM30.
b.2 Enter 'TRBAT' in Table/View.
b.3 Click Display.
b.4 Verify that no transport number or HEADER entry exists.

Alternatively, you can truncate the tables from Oracle using the commands below:

truncate table sapecc.tpstat;
truncate table sapecc.trjob;


Also you can delete the entries from tables TMSTLOCKR and TMSTLOCKP.

Import the transport request again.

If this does not solve the problem, you have the option to stop all transports at OS level.

3. Kill any tp process running at OS level and import the TR again.


To save transport resources:

1. It is SAP's recommendation that imports be done asynchronously rather than synchronously. By doing this you conserve system resources.
   Import > Execution tab > select 'Asynchronously' > confirm the import

STMS: Transport Tool hang

Check the items below, and also note what message you get when the transport tool hangs:

tp Interface
Transport Profile
RFC Check
tp log

Check whether any RFC tables (RFCDES) are locked,

or

check whether the RDDIMPDP background job is scheduled in all clients; if it is not scheduled properly, you can schedule it using report RDDNEWPP. Also check the following points to find out the reason for this problem:

1. Check the transport logs under the transport directory for the above
long-running requests (/usr/sap/trans/log or /usr/sap/trans/tmp).

2. Check whether the transport routes are configured properly, using transaction STMS. Also perform the connection test for the tp tool and the RFC connections.

3. Execute the RSTPTEST program to check the STMS configuration. To clear the inconsistency and long-running tp processes, proceed as below.

1. Verify whether any tp process is running at OS level, and kill all tp processes running there (ensure no transports are running during this time); see the sketch after this list.

2. Move all the files from /usr/sap/trans/tmp to a temporary backup directory.
3. Go to STMS -> Import overview -> Goto -> Import monitor and delete the old entries.
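A hedged sketch of steps 1 and 2 (the PID and backup path are placeholders; double-check what you kill):

   ps -ef | grep ' tp ' | grep -v grep     (identify running tp processes)
   kill <pid>                              (stop the hanging tp process)
   mkdir -p /usr/sap/trans/tmp_backup
   mv /usr/sap/trans/tmp/* /usr/sap/trans/tmp_backup/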

Also, review the following notes for further information on the background jobs and transport locks.

Note 745468 - Problems with sapevt program in Releases 6.2
Note 449270 - Job RDDIMPDP is not triggered by sapevt
Note 26966 - Background jobs do not start when transporti
Note 690449 - Transport buffer lock file (.LOB) remains blocked on

Check SAP Note 690449 for any .LOB file that is locking tp.

If any .LOB file exists in the tmp directory, delete it.
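A minimal sketch (the SID is a placeholder; back the file up rather than deleting it outright):

   cd /usr/sap/trans/tmp
   ls -l *.LOB
   mv PRD.LOB PRD.LOB.bak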

Wednesday, September 28, 2016

Error: 500 Internal Server Error



Description: The server encountered an unexpected condition which prevented it from fulfilling the request.

Possible Tips: Have a look at SAP Notes 804124 and 807000.

Error: 405 Method Not Allowed

Description: The method specified in the Request-Line is not allowed for the resource identified by the Request-URL.

Possible Tips: ICM_HTTP_INTERNAL_ERROR - refer to SAP Note 90643.

XI/PI Outbound message errors


Sometimes the connection between the source SAP XI system and the target ECC system goes down and messages fail on the outbound side (i.e., the outbound interface is on the sender side and the inbound interface on the receiver side).

It may not be possible to restart them using the RWB or transactions like SXI_MONITOR/SXMB_MONI.

In general, messages are picked up and sent via SAP XI when the link returns. However, in some scenarios the SAP XI server may not be able to finish the conversation with ERP. The main status of the messages is “Processed successfully”, but there is an error on the outbound side, as shown below (transactions SXI_MONITOR/SXMB_MONI).




These messages do not get picked up automatically, and it is not possible to restart them using the RWB or transactions like SXMB_MONI.

Such messages could be processed in the following way:


  1. Send data directly to Integration Engine
  2. Change the status of failed message


This example shows how to solve the problem – two error messages are shown and one of them is solved here.

Send data directly to Integration Engine

Go to Component Monitoring in the SAP XI Runtime Workbench. Click the Test Message tab for the Adapter Engine. Specify the URL of the SAP XI Integration Engine to send the message to, e.g. http://<XIServer>:<J2EE Port>/sap/xi/engine?type=entry




Specify the header information. Copy the payload of the message using SXMB_MONI and paste it into the Payload area in the RWB.




Send the message using Send Message button.

Change the status of the failed message:

Call transaction SWWL -> delete the appropriate work items.




Check that the messages are complete in SXI_MONITOR/SXMB_MONI.





Another, simpler way to accomplish this is to use transaction SXMB_MONI_BPE. Select Continue Process Following Error under Business Process Engine -> Monitoring and execute (F8). Update the selection criteria as required and execute (F8). Choose the appropriate line item and click the Restart workflow button.

Tuesday, September 27, 2016

Row store vs Column store in HANA



A database table is a two-dimensional data structure with cells organized in rows and columns. Computer memory, however, is organized as a linear structure. To store a table in linear memory, there are two options:

A row-oriented storage stores a table as a sequence of records, each of which contains the fields of one row. Conversely, in a column store the entries of a column are stored in contiguous memory locations.

The concept of columnar data storage has been used for quite some time. Historically it was mainly used for analytics and data warehousing where aggregate functions play an important role. Using column stores in OLTP applications requires a balanced approach to insertion and indexing of column data to minimize cache misses.

The SAP HANA database allows the developer to specify whether a table is to be stored column-wise or row-wise. It is also possible to alter an existing table from columnar to row-based storage and vice versa, as sketched below.
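A minimal sketch using the hdbsql command-line client (host, port, credentials, schema and table names are all placeholders; the ALTER TYPE syntax follows the HANA SQL reference):

   hdbsql -n hanahost:30015 -u SYSTEM -p <password> "CREATE COLUMN TABLE myschema.sales (id INTEGER, amount DECIMAL(15,2))"
   hdbsql -n hanahost:30015 -u SYSTEM -p <password> "ALTER TABLE myschema.sales ALTER TYPE ROW"
   hdbsql -n hanahost:30015 -u SYSTEM -p <password> "ALTER TABLE myschema.sales ALTER TYPE COLUMN"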


Row-based tables have advantages in the following circumstances:

1. The application needs to process only a single record at a time (many selects and/or updates of single records).

2. The application typically needs to access a complete record (or row).

3. The columns contain mainly distinct values, so that the compression rate would be low.

4. Neither aggregations nor fast searching are required.

5. The table has a small number of rows (e.g. configuration tables).

To enable fast on-the-fly aggregations and ad-hoc reporting, and to benefit from compression mechanisms, it is recommended that transaction data be stored in a column-based table.

The SAP HANA database allows joining row-based tables with column-based tables. However, it is more efficient to join tables that are located in the same store (row or column). For example, master data that is frequently joined with transaction data should also be stored in column-based tables.





Column-based tables have advantages in the following circumstances:

1. Calculations are typically executed on a single column or a few columns only.

2. The table is searched based on the values of a few columns.

3. The table has a large number of columns.

4. The table has a large number of rows, and columnar operations are required (aggregate, scan, etc.).

5. High compression rates can be achieved because the majority of the columns contain only a few distinct values (compared to the number of rows).

COLUMN STORAGE IS BEST SUITED FOR MODERN CPUs:

Modern CPUs with multi-core architecture provide an enormous amount of computing power. Blades with 8 CPUs and 16 cores per CPU will populate next-generation blade servers. That gives us 128 computing units with up to approximately 500 GB of main memory. To optimize the use of these computing devices, we have to understand memory hierarchies, cache sizes, and how to enable parallel processing within one program [6].

We consider the memory situation first. Enterprise applications are to a large extent memory bound, which means the program execution time is proportional to the amount of memory accessed for read and write operations or memory being moved. As an example, we compare a full table scan of SAP's accounting document line items table, which has 160 attributes, in order to calculate a total value over all tuples. In an experiment we did with 5 years' worth of accounting data of a German brewery, the number of tuples in this table was 34 million. In the underlying row database, 1 million tuples of this particular table consume about 1 GB of space; the size of the table was thus 35 GB. The equivalent column-store table size was only 8 GB because of the more efficient vertical compression along columns.

If we consider that in real-world applications typically only 10% of the attributes of a single table are used in one SQL statement (see Figure 1), then for the column store at most 800 MB of data have to be accessed to calculate the total values [1].


COLUMN STORAGE IS SUPERIOR TO ROW STORAGE WITH REGARD TO MEMORY CONSUMPTION:

Under the assumption that we build a combined system for OLTP and OLAP, data has to be organized for set processing, fast inserts, maximum (read) concurrency, and low impact of reorganization. This imposes limits on the degree of compression for both row and column storage.

While it is possible to achieve the same degree of compression in a row store as in a column store (see, e.g., IBM's Blink engine [17]), a comparison of the two should be done assuming that the requirements above (especially fast inserts) are met, which excludes read-only row stores from the discussion.

In the column store, compression via conversion of attribute values and the complete elimination of columns containing only null values is very efficient, but it can be improved in this research system by interpreting the values: all characters blank, all characters zero, and decimal floating point zero are treated as null values. Applications think in default values and do not handle null values properly.

The research system therefore translates the default values automatically into null values on the way into the database and back into default values on the way out. Comparing the memory requirements of column and row storage of a table, the difference in compression rate is obvious. Various analyses of existing customer data show a rate of 2 for (write-optimized) row storage on disk.

For further memory consumption estimates we use a factor of 10 in favor of column storage, based on compression. Column storage allows us to eliminate all materialized views (aggregates) and calculate them algorithmically on demand. The storage requirements associated with these aggregates vary from application to application. The multi-dimensional cubes typically used in OLAP systems for materialized roll-ups grow with the cardinality of the individual dimensions. Therefore a factor of 2 in favor of column storage, based on the elimination of redundant aggregates, is a conservative estimate.

Horizontal partitioning of tables will be used based on time and tenants. The option to partition into multiple dimensions is very helpful in order to use different qualities of main memory and processor speed for specific dimensions.

Within the context of memory consumption, the option to split tables into current data and historic data per year is extremely interesting. The analysis of customer data showed that typically 5-10 years of historic data (no changes allowed) are kept in the operational database. Historic data can be kept accessible but reside on a much cheaper and slower storage medium (flash memory or disk). The current data plus the last completed year should be kept in DRAM memory on blades for the typical year-over-year comparison in enterprise systems. For the separation by time we use two time stamps: creation time and completion time. The completion time is controlled by the application logic, e.g. an order is completely processed or an invoice paid.

The completion date determines the year in which data can become historic, meaning no further changes are possible. With regard to main memory requirements, we can take a factor of 5 in favor of column storage into account. It is only fair to mention that horizontal partitioning could also be achieved in record storage. Should the remaining table size for the current and last year's partitions still be substantial, horizontal partitioning by the database management system may occur. Ignoring memory requirements for indices and dimension dictionaries, we can assume a 10x2x5-fold reduction in storage capacity (from disk to main memory). Next-generation boards for blade servers will most certainly provide roughly 500 GB of main memory, with a tendency for further growth. Since arrays of 100 blades are already commercially available, installations with up to 50 TB for OLTP and OLAP could be converted to an in-memory-only system on DRAM. This covers the majority of, e.g., SAP's Business Suite customers as far as storage capacity is concerned.

SAP HANA Sizing



Sizing is a common term in SAP that means determining the hardware requirements of an SAP system, such as physical memory, CPU power, and I/O capacity.

The process of translating business requirements into hardware requirements is called hardware sizing.

The size of the hardware and database is influenced by both business aspects and technological aspects. This means that the number of users using the various application components and the data load they put on the network must be taken into account.

Determining the sizing requirements ensures that customers purchase hardware as per their business needs; it also lowers cost and reduces TCO.

SAP provides a number of tools and methodologies for determining these hardware requirements. The most prominent tool is the Quick Sizer.




SAP HANA in-memory database sizing consists of:
  1. Memory sizing for static data
  2. Memory sizing for objects created at runtime, such as data loads and query execution
  3. Disk sizing
  4. CPU sizing


For a successful SAP HANA implementation, SAP provides various guidelines and methods to calculate the correct hardware size. We can use any of the methods below:


1. SAP HANA sizing using the QuickSizer tool
2. SAP HANA sizing using the DB specific scripts
3. SAP HANA sizing using the ABAP report

Selecting a T-shirt size:




According to the sizing results, select an SAP HANA T-shirt size that satisfies the sizing requirements in terms of main memory and, possibly, CPU capabilities. For example, the SAP hardware partners provide configurations for SAP HANA according to one or more of these T-shirt sizes. The table below lists the T-shirt sizes for SAP HANA.



The three main KPIs used to size for SAP HANA are:
  1. Main memory (RAM) space
  2. CPU processing performance
  3. Disk size


While traditional sizing approaches focus on CPU performance, the main driver for SAP HANA sizing is memory. Because SAP HANA is a main memory database, essentially all business data (e.g., master and transactional data) resides in the main memory, which leads to a higher memory footprint compared to traditional databases. In addition to the main memory required for storing the business data, temporary memory space is needed to operate the database management system — to support complex queries or data that is needed for buffers and caches, for example.
Sizing for SAP HANA includes unique requirements for CPU processing performance. The CPU behaves differently with SAP HANA compared to traditional databases. The processing engine for SAP HANA is optimized to operate very complex queries at maximum speed, which means that many of these queries are processed internally and in parallel, and most of the data is stored in a column-based format. This architecture not only might lead to a higher CPU demand compared to traditional databases, it also requires planning for a lower average utilization to ensure that there is enough headroom for the database to process queries sufficiently fast.
An in-memory database still requires disk storage space — to preserve database information if the system shuts down, either intentionally or due to a power loss, for instance. Data changes in the database must be periodically copied to disk to ensure a full image of the business data on disk, and to preserve the current state of the database and all of the data entered in the persistence layer. In addition, a logging mechanism is required to log changes and enable system recovery. To accommodate these requirements, there always must be enough space on disk to save data and logs.
Static & Dynamic RAM:



Calculation of Static RAM Size:

Static RAM size means the amount of RAM required to store the data in the SAP HANA database.
Assuming a compression factor of 7:



Calculation of Dynamic RAM Size:

Dynamic RAM is the additional main memory required for objects that are created dynamically when new data is loaded or queries are executed. SAP recommends keeping the dynamic RAM size the same as the static RAM size.



Calculation of Total RAM Size








Table footprint of source database: 186348 MB = 182 GB

Assumption: source DB compressed by a factor of 1.8

RAM = source data footprint x 1.8 x 2 / 7 = 182 GB x 1.8 x 2 / 7 ≈ 94 GB

Disk (persistence) = 94 GB x 4 = 376 GB

Disk (log) = 94 GB
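A rough sketch of the same arithmetic in shell, using the figures from this example (the 7:1 HANA compression and the 1.8 source-compression factor are assumptions carried over from above):

   SRC_GB=182                                                        # table footprint of the source database
   RAM=$(printf "%.0f" "$(echo "$SRC_GB * 1.8 * 2 / 7" | bc -l)")    # static + dynamic RAM
   DISK_DATA=$((RAM * 4))                                            # persistence area
   DISK_LOG=$RAM                                                     # log area
   echo "RAM=${RAM} GB, data disk=${DISK_DATA} GB, log disk=${DISK_LOG} GB"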


Refer to the SAP Notes below, which will help you better understand HANA sizing.

2121330 - SAP BW on HANA Sizing Report
1637145 - SAP BW on HANA: Sizing SAP In-Memory Database
1736976 - Sizing Report for BW on HANA
1799449 - Sizing report ERP on SAP HANA database
2296290 - New Sizing Report for BW on HANA


Monday, September 26, 2016

SAP BW Issues - 4

tRFC errors - BW

A tRFC (transactional Remote Function Call) error occurs whenever LUWs (Logical Units of Work) are not transferred from the source system to the destination system.

The message appears at the bottom of the “Status” tab in RSMO. The error message appears as “tRFC Error in Source System”, “tRFC Error in Data Warehouse”, or simply “tRFC Error”, depending on the system from which data is being extracted.

Sometimes IDocs are also stuck on the R/3 side because no processes were available to process them.
Once this error is encountered, we can try a complete refresh (F6) in RSMO and check whether the LUWs get cleared by the system.

If the error remains after a couple of refreshes, follow the steps below quickly, as the load may otherwise fail with a short dump.

From RSMO, go to the menu Environment -> Transact. RFC -> In the Source System. It asks you to log in to the source system.

Once logged in, it gives a selection screen with “Date”, “User Name”, and TRFC options.

On execution with F8 it gives the list of all stuck LUWs. The “Status Text” appears red for the stuck LUWs which are not getting processed, and the “Target System” for those LUWs should be “WP1CL015” (the Bose BW production system in this example). Do not execute any other IDoc whose “Target System” is not “WP1CL015”.

After selecting the LUWs identified properly, right-click and choose “Execute” (F6), so that they get cleared and the load on the BW side completes successfully.

When IDocs are stuck, go to R/3, use transaction BD87 and expand the ‘IDoc in inbound processing’ tab for IDoc status type 64 (IDoc ready to be transferred to application). Keep the cursor on the error message (pertaining to IDoc type RSRQST only) and click the Process tab (F8). This pushes any stuck IDoc on R/3.

Monitor the load for successful completion, and complete the further loads if any in the Process
Chain. 
--------------------------------------------------------------------------------------------------------------------
Time Stamp Error -BW

The “Time Stamp” error occurs when the Transfer Rules/Structure (TR/TS) are internally inactive in the system.

• They can also occur whenever the DataSources are changed on the R/3 side or the DataMarts are changed on the BW side. In that case, the Transfer Rules (TR) show active status when checked, but they are actually not active; this happens because the time stamps of the DataSource and the Transfer Rules differ.

The message appears in the Job Overview in RSMO, or in the “Display Message” option of the process in the PC (Process Chain).

• Check the Transfer Rules in RSA1, Administrator Workbench.

Whenever we get such an error, we first need to check the Transfer Rules (TR) in the Administrator Workbench. Check each rule for inactivity; if inactive, activate it.
• You first need to replicate the relevant DataSource by right-clicking the source system of the DataSource -> Replicate DataSources.

On such occasions we can execute the ABAP report program “RS_TRANSTRU_ACTIVATE_ALL”. It asks for a source system name, an InfoSource name, and two checkboxes. To activate only those TR/TS which are set by some lock, check the “LOCK” option. To activate only those TR/TS which are inactive, check the “Only Inactive” option.

Once executed, it activates the TR/TS within that particular InfoSource again, even if they are already active.

Now re-trigger the InfoPackage.

Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

------------------------------------------------------------------------------------------------------------------
Short Dump -BW

Whenever a job fails with a “Time Out” error, it means that the job has been stopped for some reason and the request is still in yellow state, which results in the Time Out error. This leads to a short dump in the system, either in R/3 or in BW.

A short dump may also occur if there is a mismatch in the type of incoming data. For example, if a date field is not in the format specified in BW, it may happen that instead of giving an error the system gives a short dump every time we trigger the load.

We get a Time Out error after the time specified in the InfoPackage -> Time Out settings (which may or may not be the same for all InfoPackages). But in the meantime we may get a short dump in the BW system or in the source system R/3.

The message appears in the Job Overview in RSMO, or in “Display Message” option of
the Process in the PC.


A “Time Out” error usually results in a short dump. In order to check the short dump, we go to Environment -> Short Dump -> In the Data Warehouse / In the Source System.

Alternatively, we can check transaction ST22 in the source system / BW system, and then choose the relevant option to check the short dump for the specific date and time. When checking the short dump, make sure to go through the complete analysis of the short dump in detail before taking any actions.

In case of a Time Out error, check whether the time-out occurred after the extraction or not. It may happen that the data was extracted completely and the short dump occurred afterwards; in that case nothing needs to be done.

In order to check whether the extraction completed, we can check “Extraction” in the “Details” tab of the Job Overview, from which we can conclude whether the extraction was done. If it is a full load from R/3, we can also check the number of records in RSA3 in R/3 and verify that the same number of records was loaded into BW.


In the short dump we may find a runtime error “CALL_FUNCTION_SEND_ERROR”, which occurred due to a time-out on the R/3 side.

In such cases, the following can be done.

If the data was extracted completely, change the QM status from yellow to green. If a cube is being loaded, create the indexes; for an ODS, activate the request.

If the data was not extracted completely, change the QM status from yellow to red. Re-trigger the load and monitor it.

 Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

-------------------------------------------------------------------------------------------------------------------

 Job Cancellation in Source System (ECC)

If the job in the R/3 system is cancelled for some reason, this error is encountered. This may be due to some problem in the system. Sometimes it may also be due to other jobs running in parallel which take up all the work processes, so the job gets cancelled on the R/3 side.

The error may or may not result from a time-out. It may also happen that a system hardware problem causes these errors.

The exact error message is "Job termination in source system". The exact error message may also differ; it may be "The background job for data selection in the source system has been terminated". Both error messages mean the same. Sometimes it may also read "Job Termination due to System Shutdown".

The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
in the PC.

First we check the job status in the source system. It can be checked through Environment -> Job Overview -> In the Source System. This may ask you to log in to the source system R/3. Once logged in, it has some pre-entered selections; check whether they are relevant, and then execute. This shows you the exact status of the job: it should show "X" under Canceled.

The job name generally starts with "BIREQU_" followed by a system-generated number.

Once we confirm that this error occurred due to job cancellation, we check the status of the ODS or cube under the Manage tab. The latest request shows the QM status as red.

We need to re-trigger the load in such cases, as the job is no longer active and has been cancelled. We re-trigger the load from BW.

We first delete the red request from the Manage tab of the InfoProvider and then re-trigger the InfoPackage.

Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

-------------------------------------------------------------------------------------------------------------------
 Incorrect data in PSA:

It may sometimes happen that the incoming data to BW has an incorrect format, or a few records have incorrect entries. For example, the expected value was in upper case and the data is in lower case, or the data was expected in numeric form but was provided as alphanumeric.

The data load may be a flat file load or it may come from R/3. Most often it is the flat file provided by the users that has an incorrect format.


The error message appears in the Job Overview and guides you on what exactly needs to be done about the error that occurred.

The message at the bottom of the “Header” tab of the Job Overview in RSMO has “PSA Pflege” (PSA maintenance) written on it, which gives a direct link to the PSA data.


Once the error is confirmed, we go ahead and check the “Detail” tab of the Job Overview to see which record and which field of the data has the error.

Once we make sure, from the Extraction section in the Details tab of the Job Overview, that the data was completely extracted, we can see exactly which record and which field has the erroneous data. Here we can also check the validity of the data against the PSA data of the previous successful load.

When we check the data in the PSA, it shows the erroneous record with a red traffic signal. In order to change data in the PSA, the request first needs to be deleted from the Manage tab of the InfoProvider; only then does it allow changing the data in the PSA.

• Once the change to the specific field entry in the PSA record is done, we save it. Once the data in the PSA is changed, we reconstruct the same request from the Manage tab. Before we can reconstruct the request, it needs to have QM status “Green”.

• This updates the records present in the request again.

• Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.
--------------------------------------------------------------------------------------------------------------------

ODS Activation Failed.

• During a data load into an ODS, it may happen that the data gets extracted and loaded completely, but at the time of the ODS activation it fails, giving a status 9 error.

• This can also happen due to a lack of resources, or because of an existing failed request in the ODS. (For master data it is fine if we have an existing failed request.)

• This happens when there are rollback segment errors in the Oracle database, giving error ORA-00060. When activation of data takes place, data is read into the active data table and then either inserted or updated. While doing this there are system deadlocks, and Oracle is unable to extend the extents.

• The exact error message is like “Request REQU_3ZGI6LEA5MSAHIROA4QUTCOP8, data package 000012 incorrect with status 9 in RSODSACTREQ”. Sometimes it is accompanied by a “Communication error (RFC call) occurred” error. It is actually due to some system error.

• The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
in the PC.

• The exact error message is “ODS Activation Failed”.


• Whenever such an error occurs, the data may or may not be completely loaded; it fails only during activation. Hence, when we look at the details of the job, we can see which data package failed during activation.

• We can once again try to manually activate the ODS. Do not change the QM status here: in the Monitor it is green, but within the data target it is red. Once the data is activated, the QM status turns green.

• For successful activation of the failed request, click the “Activate” button at the bottom, which opens another window containing only the request(s) which are not activated. Select the request, check the corresponding options at the bottom, and then click “Start”.

• This sets up a background job for the activation of the selected request.

• Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

• In case the above does not work, check the size of the data package specified in the InfoPackage: InfoPackage -> Scheduler -> DataS. Default Data Transfer. Here we can set the size of the data package; we need to reduce the maximum size of the data package so that activation takes place successfully.

• Once the size of the data package is reduced, we re-trigger the load and reload the complete data again.

• Before starting the manual activation, it is very important to check whether there is an existing failed “Red” request. If so, make sure you delete it before starting the manual activation.

This error is encountered in the first place, and later rectified, because at that point in time the system is not able to run the activation via 4 different parallel processes. This parameter is set in transaction RSCUSTA2. Later the resources are free, so the activation completes successfully.

-------------------------------------------------------------------------------------------------------------------

Caller 70 is missing:

• This error normally occurs whenever BW encounters errors that it is not able to classify. There can be multiple reasons for it:

Whenever we are loading master data for the first time, it creates SIDs. If the system is unable to create SIDs for the records in the data packet, we can get this error message.

If the indexes of the cube are not deleted, it may happen that the system gives the Caller 70 error.

Whenever we are trying to load transactional data which has master data as one of the characteristics, and the value does not exist in the master data table, we get this error. The system can have difficulty creating SIDs for the master data and loading the transactional data at the same time.

If an ODS activation is taking place and another ODS activation is running in parallel, the system may classify the error as Caller 70, as there were no processes free for that ODS activation.

It also occurs whenever there is a read/write occurring in the active data table of the ODS. For example, if activation is happening for an ODS and data loading to the same ODS is taking place at the same time, the system may classify the error as Caller 70.

It is a system error, which can be seen under the “Status” tab in the Job Overview.

 The exact error message is “System response "Caller 70" is missing”.

It may also log a short dump in the system, which can be checked at "Environment -> Short dump -> In the Data Warehouse".

• If the master data is being loaded for the first time, we can reduce the data package size and load the InfoPackage. Processing sometimes depends on the size of the data package; hence we can reduce the data package size and then reload the data. We can also try to split the data load into different data loads.

• If the error occurs in the cube load, we can try to delete the indexes of the cube and then reload the data.

• If we are trying to load transactional and master data together and this error occurs, we can reduce the size of the data package and try reloading, as the system may find it difficult to create SIDs and load data at the same time. Alternatively, we can load the master data first and then load the transactional data.

• If the error happens during ODS activation because no processes are free or available for processing the activation, we can define the processes in transaction RSCUSTA2.

• If the error occurs due to a read/write conflict in the ODS, we need to change the scheduled time of the data loading.

Once we are sure that the data has not been extracted completely, we can go ahead and delete the red request from the Manage tab in the InfoProvider, and re-trigger the InfoPackage.

• Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

-------------------------------------------------------------------------------------------------------------------
Attribute Change Run Failed – ALEREMOTE was locked.

• During master data loads, a lock is sometimes set by the system user ALEREMOTE.
• This normally occurs when an HACR (Hierarchy/Attribute Change Run) is running for some other master data load, and the system tries to carry out the HACR for this new master data. This is a scheduling problem.


• The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
in the PC.

The exact error message is like “User ALEREMOTE locked the load of master data for characteristic 0CUSTOMER”. Here it is specifically for the 0CUSTOMER load; it may differ depending on the master data InfoObject which is being loaded.


• Check the error message completely and also check its long text, as it tells you the exact master data which is locked by user ALEREMOTE.

• The lock is set because the load and HACR timings clashed. We first need to check RSA1 -> Tools -> HACR, where we get the list of InfoObjects on which an HACR is currently running. Only once that is finished, go to transaction SM12. This gives you a few options and a couple of default entries. When we list the locks, it displays all the locks that are set. Delete the lock for the specific entry only; otherwise some load which was running may fail due to the released lock.

• Now we choose the appropriate lock which caused the failure and click Delete, so that the existing lock is released. Care should be taken that we do not delete an active running job. Preferably avoid this solution.

• When the HACR finishes for the other master data, trigger the attribute change run for this master data.
-------------------------------------------------------------------------------------------------------------------
SAP R/3 Extraction Job Failed.

Certain jobs are triggered in R/3 based on events created there. These events are triggered from SAP BW via an ABAP program attached to process chains. The extract job also triggers an extract status job along with it, which sends the status (success or failure) back to BW. Hence it is important that both the extract job and the extract status job complete, so that on completion of these jobs in R/3 the extraction jobs are triggered in R/3 via the InfoPackage from BW. An error may occur in the extract job or in the extract status job.

• The exact error message can normally be seen in the source system where the extraction occurs. In BW, the process for the program in the PC will fail.

• This process is placed before the InfoPackage triggers; hence, if the extraction program in R/3 is still running, not complete, or failed, the InfoPackage will not get triggered. It therefore becomes very important to monitor such loads through RSPC rather than through RSMO.



• We log in to the source system and check transaction SM37 for the status of the job running in R/3. It shows the exact status of the running job.

• Enter the exact job name, user, and date, choose the relevant options, and then execute. It shows a list of the jobs which are active with that name. You may also find another job scheduled for the next load, a cancelled job if any, or a previously finished job. The active job is the one which is currently running.

• If the job's “Delay (sec.)” is increasing instead of “Duration (sec.)”, it means there is some problem with the extraction job: it is not running and is delayed.
• It may happen sometimes that there is no active job, but there is a job in finished status with the current date/time.

• Both the extract job and the status job need to be checked, because the extract job may have finished while the extract status job failed, so no success status was sent to BW even though the extraction was complete. In such cases we manually change the status of the extract program process in the PC in BW to green with the help of the FM “ZRSPC_ABAP_FINISH”. Execute the FM with the correct name of the program process variant and the status “F”. This makes the process green, triggering the further loads. Here we need to check that no previous extract program process is running in the BW system; hence we need to check the PC logs in detail for any pending previous process.

• Monitor the PC to complete the loads successfully.

• If we need to turn the ABAP process within the PC “RED” and re-trigger the PC, we execute the FM “ZRSPC_ABAP_FINISH” with the specific variant and job status “R”, which turns the ABAP process red.

This usually needs to be done when the extraction job was cancelled in R/3 for some reason, another job is in Released state, and the BW ABAP process is in yellow state. We can then turn the ABAP process red via the FM and re-trigger the PC.
---------------------------------------------------------------------------------------------------------------------

File not found (System Command for file check failed):

• The system command process is placed in a PC before the InfoPackage process. It checks for the flat file on the application server before the InfoPackage is triggered, ensuring that when the load starts there is a flat file to upload.

• It may happen that the file is not available and the system command process fails; in that case it does not trigger the InfoPackage. Hence it is very important to monitor the PC through RSPC.

• The error message turns the system command process in the PC “Red”, and the failed UNIX script has a specific return code that indicates the failure.

• Whenever the system command process fails, it indicates that the file is not present. We right-click on the process and choose “Display Message” to see the failed script. Here we need to check the return code: if the exit status is -1, it is a failure and the process becomes red; otherwise it becomes green in the PC.

• We need to check the script carefully for the above-mentioned exit status, and only then conclude that the file was really not available.

• Once it is confirmed that the file is not available, we need to take appropriate actions.

• We need to identify the person responsible for FTPing the file onto the application server. A mail already goes to the responsible person via the error message in the process, but we also need to send a mail regarding the same.

• Check the process chains which contain the system command process, and the corresponding actions to be taken.
-------------------------------------------------------------------------------------------------------------------
Table space issue.

• Many times, particularly with respect to HACR, while the program is doing the realignment of aggregates it needs a lot of temporary table space (PSAPTEMP). If there is a large amount of data to be processed and Oracle is not able to extend the table space, it gives a dump.

• This normally happens if many aggregates were created on the same day, or there is a large change in the incoming master data / hierarchy, so that a large amount of temporary space is needed to perform the realignment.

• Also, whenever PSAPODS (which houses many of the tables) is full, the data load / ODS activation fails.

• The errors ORA-01653 and ORA-01688 relate to table space issues. The error gives the ORA number and asks you to increase the table space.

• In case the table space is full, we need to contact the Basis team and ask for an increase in the size of the table space.

• The increase of the table space is done by changing parameters that allocate more space for individual tables.