Wednesday, September 28, 2016

Error: 500 Internal Server Error



Description: The server encountered an unexpected condition which prevented it from fulfilling the request.

Possible Tips: Have a look at SAP Notes 804124 and 807000

Error: 405 Method Not Allowed

Description: The method specified in the Request-Line is not allowed for the resource identified by the Request-URI.

Possible Tips: ICM_HTTP_INTERNAL_ERROR - refer to SAP Note 90643

XI/PI Outbound message errors


Sometimes the connection between the source SAP XI system and the target ECC system goes down and messages fail on the outbound side (i.e. the outbound interface is on the sender side and the inbound interface on the receiver side).

It may not be possible to restart them using RWB or transactions like SXI_MONITOR/SXMB_MONI.

In general, messages are picked up and sent via SAP XI when the link returns. However, in some scenarios the SAP XI server could not finish the conversation with ERP. The main status of the messages is “Processed successfully” – but there is an error on the outbound side, as shown below (transactions SXI_MONITOR/SXMB_MONI).




These messages do not get picked up automatically – and it is not possible to restart them using RWB or transactions like SXMB_MONI.

Such messages could be processed in the following way:


  1. Send data directly to Integration Engine
  2. Change the status of failed message


This example shows how to solve the problem – two error messages are shown and one of them is solved here.

Send data directly to Integration Engine

Go to Component Monitoring in the SAP XI Runtime Workbench. Click on the Test Message tab for the Adapter Engine. Specify the URL of the SAP XI Integration Engine to send the message to, e.g. http://<XIServer>:<J2EE Port>/sap/xi/engine?type=entry




Specify the header information. Copy the payload of the message from SXMB_MONI and paste it into the Payload area in RWB.




Send the message using Send Message button.

Change the status of failed message:

Call the transaction SWWL -> Delete appropriate work items.




Check that the messages are complete in SXI_MONITOR/SXMB_MONI.





Another simpler way to accomplish this is to use transaction SXMB_MONI_BPE . Select Continue Process Following Error under Business Process Engine -> Monitoring and Execute (F8). Update the selection criteria as required and Execute (F8). Choose the appropriate line item and click on Restart workflow button.

Tuesday, September 27, 2016

Row store vs Column store in HANA



A database table is a two-dimensional data structure with cells organized in rows and columns. Computer memory, however, is organized as a linear structure. To store a table in linear memory, there are two options.

A row-oriented storage stores a table as a sequence of records, each of which contains the fields of one row. Conversely, in a column store the entries of a column are stored in contiguous memory locations.

The concept of columnar data storage has been used for quite some time. Historically it was mainly used for analytics and data warehousing where aggregate functions play an important role. Using column stores in OLTP applications requires a balanced approach to insertion and indexing of column data to minimize cache misses.

The SAP HANA database allows the developer to specify whether a table is to be stored column-wise or row-wise.

It is also possible to alter an existing table from columnar to row-based and vice versa.
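As a quick illustration, the storage type can be chosen directly in the CREATE TABLE statement and changed later with ALTER TABLE. The sketch below uses made-up table and column names; it follows standard SAP HANA SQL syntax, but verify the statements against your HANA revision.

-- Column table: the recommended storage for transaction and reporting data
CREATE COLUMN TABLE sales_item (
    item_id     INTEGER PRIMARY KEY,
    customer_id INTEGER,
    amount      DECIMAL(15,2),
    created_at  DATE
);

-- Row table: suitable for small, frequently updated configuration data
CREATE ROW TABLE app_config (
    param_name  VARCHAR(64) PRIMARY KEY,
    param_value VARCHAR(256)
);

-- Convert an existing table from row store to column store, and back
ALTER TABLE app_config ALTER TYPE COLUMN;
ALTER TABLE app_config ALTER TYPE ROW;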


Row-based tables have advantages in the following circumstances:

1. The application needs to process only a single record at a time (many selects and/or updates of single records).

2. The application typically needs to access the complete record (or row).

3. The columns contain mainly distinct values, so the compression rate would be low.

4. Neither aggregations nor fast searching are required.

5. The table has a small number of rows (e.g. configuration tables).

To enable fast on-the-fly aggregations and ad-hoc reporting, and to benefit from compression mechanisms, it is recommended that transaction data be stored in a column-based table.

The SAP HANA database allows joining row-based tables with column-based tables. However, it is more efficient to join tables that are located in the same store (row or column). For example, master data that is frequently joined with transaction data should also be stored in column-based tables.
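For instance, when both the master data and the transaction data are kept in the column store, a typical reporting join with an on-the-fly aggregation stays entirely within one store. The tables and columns below are hypothetical and only sketch the idea:

CREATE COLUMN TABLE customer_master (
    customer_id INTEGER PRIMARY KEY,
    region      VARCHAR(32)
);

CREATE COLUMN TABLE sales_transaction (
    sale_id     INTEGER PRIMARY KEY,
    customer_id INTEGER,
    amount      DECIMAL(15,2)
);

-- Aggregates a single column of the (potentially very wide) transaction table,
-- joined with master data that also resides in the column store
SELECT m.region, SUM(t.amount) AS total_amount
FROM   sales_transaction t
JOIN   customer_master m ON m.customer_id = t.customer_id
GROUP BY m.region;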





Column-based tables have advantages in the following circumstances:

1. Calculations are typically executed on a single column or only a few columns.
2. The table is searched based on the values of a few columns.
3. The table has a large number of columns.
4. The table has a large number of rows and columnar operations are required (aggregate, scan, etc.).
5. High compression rates can be achieved because the majority of the columns contain only a few distinct values (compared to the number of rows).

COLUMN STORAGE IS BEST SUITED FOR MODERN CPUs:

Modern CPUs with multi-core architecture provide an enormous amount of computing power. Blades with 8 CPUs and 16 cores per CPU will populate next-generation blade servers. That gives us 128 computing units with up to approximately 500 GB of main memory. To optimize the use of these computing devices we have to understand memory hierarchies, cache sizes, and how to enable parallel processing within one program [6].

We consider the memory situation first. Enterprise applications are to a large extent memory bound, which means the program execution time is proportional to the amount of memory accessed for read and write operations or memory being moved. As an example, we compare a full table scan of SAP’s accounting document line-items table, which has 160 attributes, in order to calculate a total value over all tuples. In an experiment with 5 years’ worth of accounting data of a German brewery, the number of tuples in this table was 34 million. In the underlying row database, 1 million tuples of this particular table consume about 1 GB of space; the size of the table was thus 35 GB. The equivalent column-store table size was only 8 GB because of the more efficient vertical compression along columns.

If we consider that in real-world applications typically only 10% of the attributes of a single table are used in one SQL statement, that means for the column store at most 800 MB of data have to be accessed to calculate the total values [1].


COLUMN STORAGE IS SUPERIOR TO ROW STORAGE WITH REGARD TO MEMORY CONSUMPTION:

Under the assumption that we build a combined system for OLTP and OLAP, data has to be organized for set processing, fast inserts, maximum (read) concurrency and low impact of reorganization. This imposes limits on the degree of compression for both row and column storage.

While it is possible to achieve the same degree of compression in a row store as in a column store (see e.g. IBM’s Blink engine [17]), a comparison of the two should be done assuming that the requirements above (especially fast inserts) are met, which excludes read-only row stores from the discussion.

In the column store, the compression via conversion of attribute values and the complete elimination of columns containing only null values is very efficient, but can be improved in this research system by interpreting the values all characters blank, all characters zero, and decimal floating point zero as null values. Applications think in default values and do not handle null values properly; the default values are therefore translated automatically into null values on the way into the database and back into default values on the way out.

Comparing the memory requirements of column and row storage of a table, the difference in compression rate is obvious. Various analyses of existing customer data show a rate of 2 for (write-optimized) row storage on disk.

For further memory consumption estimates we use a factor of 10 in favor of column storage, based on compression. Column storage allows us to eliminate all materialized views (aggregates) and calculate them algorithmically on demand. The storage requirements associated with these aggregates vary from application to application. The multi-dimensional cubes typically used in OLAP systems for materialized roll-ups grow with the cardinality of the individual dimensions. Therefore a factor of 2 in favor of column storage, based on the elimination of redundant aggregates, is a conservative estimate.

Horizontal partitioning of tables will be used based on time and tenants. The option to partition into multiple dimensions is very helpful in order to use different qualities of main memory and processor speed for specific dimensions.

Within the context of memory consumption the option to split tables into current data and historic data per year is extremely interesting. The analysis of customer data showed that typically 5-10 years of historic data (no changes allowed) are kept in the operational database. Historic data can be kept accessible but reside on a much cheaper and slower storage medium (flash memory or disk).
The current data plus the last completed year should be kept in DRAM memory on blades for the typical year over year comparison in enterprise systems. For the separation by time we use two time stamps, creation time and completion time. The completion time is controlled by the application
logic e.g. an order is completely processed or an invoice paid.

The completion date determines the year in which data can become historic, meaning that no further changes are possible. With regard to main memory requirements we can take a factor of 5 in favor of column storage into account. It is only fair to mention that horizontal partitioning could also be achieved in record (row) storage. Should the remaining table size for the current and last year’s partition still be substantial, further horizontal partitioning by the database management system may occur. Ignoring memory requirements for indices and dimension dictionaries, we can assume a 10 × 2 × 5 reduction in storage capacity (from disk to main memory). Next-generation boards for blade servers will most certainly provide roughly 500 GB of main memory, with a tendency for further growth. Since arrays of 100 blades are already commercially available, installations with up to 50 TB for OLTP and OLAP could be converted to an in-memory-only system on DRAM. This covers the majority of, for example, SAP’s Business Suite customers as far as storage capacity is concerned.

SAP HANA Sizing



Sizing is a common term in SAP; it means determining the hardware requirements of an SAP system, such as physical memory, CPU power, and I/O capacity.

The process of translating business requirements into hardware requirements 
is called hardware sizing.

The size of the hardware and database is influenced by both business aspects and technological aspects. This means that the number of users using the various application components and the data load they put on the network must be taken into account.

Determining the sizing requirements ensures that customers purchase hardware according to their business needs, which lowers cost and reduces TCO.

SAP provides a number of tools and methodologies for determining these hardware requirements. The most prominent tool is the "Quick Sizer."




SAP HANA in-memory database sizing consists of:
  1. Memory sizing for static data
  2. Memory sizing for objects created during runtime, such as data load and query execution
  3. Disk sizing
  4. CPU sizing


For a successful SAP HANA implementation, SAP has provided various guidelines and methods to calculate the correct hardware size.
We can use any of the following methods:


1. SAP HANA sizing using the QuickSizer tool
2. SAP HANA sizing using the DB specific scripts
3. SAP HANA sizing using the ABAP report

Selecting a T-shirt size:




According to the sizing results, select an SAP HANA T-shirt size that satisfies the sizing requirements in terms of main memory and, possibly, CPU capability.
The SAP hardware partners provide configurations for SAP HANA according to one or more of these T-shirt sizes. The table below lists the T-shirt sizes for SAP HANA.



The three main KPIs used to size for SAP HANA are:
  1. Main memory (RAM) space
  2. CPU processing performance
  3. Disk size


While traditional sizing approaches focus on CPU performance, the main driver for SAP HANA sizing is memory. Because SAP HANA is a main memory database, essentially all business data (e.g., master and transactional data) resides in the main memory, which leads to a higher memory footprint compared to traditional databases. In addition to the main memory required for storing the business data, temporary memory space is needed to operate the database management system — to support complex queries or data that is needed for buffers and caches, for example.
Sizing for SAP HANA includes unique requirements for CPU processing performance. The CPU behaves differently with SAP HANA compared to traditional databases. The processing engine for SAP HANA is optimized to operate very complex queries at maximum speed, which means that many of these queries are processed internally and in parallel, and most of the data is stored in a column-based format. This architecture not only might lead to a higher CPU demand compared to traditional databases, it also requires planning for a lower average utilization to ensure that there is enough headroom for the database to process queries sufficiently fast.
An in-memory database still requires disk storage space — to preserve database information if the system shuts down, either intentionally or due to a power loss, for instance. Data changes in the database must be periodically copied to disk to ensure a full image of the business data on disk, and to preserve the current state of the database and all of the data entered in the persistence layer. In addition, a logging mechanism is required to log changes and enable system recovery. To accommodate these requirements, there always must be enough space on disk to save data and logs.
Static & Dynamic RAM:



Calculation of Static RAM Size:

Static RAM size means the amount of RAM required to store the data in the SAP HANA database.
Assuming a compression factor of 7, the static RAM is roughly the uncompressed source data footprint divided by 7:



Calculation of Dynamic RAM Size:

Dynamic RAM is the additional main memory required for objects that are created dynamically
when new data is loaded or queries are executed. SAP recommends keeping the dynamic RAM size the same as the static RAM size.



Calculation of Total RAM Size (static RAM + dynamic RAM, i.e. roughly 2 × the uncompressed source data footprint divided by 7, as in the worked example below)








                           Table footprint of the source database: 186,348 MB ≈ 182 GB

                           Assumption: the source DB is compressed by a factor of 1.8

                           RAM = source data footprint × 1.8 × 2 / 7 = 182 GB × 1.8 × 2 / 7 ≈ 94 GB

                           Disk (persistence) = 94 GB × 4 = 376 GB
                           Disk (log)         = 94 GB


Refer to the SAP Notes below, which will help you better understand HANA sizing.

2121330 - SAP BW on HANA Sizing Report
1637145 - SAP BW on HANA: Sizing SAP In-Memory Database
1736976 - Sizing Report for BW on HANA
1799449 - Sizing report ERP on SAP HANA database
2296290 - New Sizing Report for BW on HANA


Monday, September 26, 2016

SAP BW Issues - 4

tRFC errors - BW

tRFC (transactional Remote Function Call) errors occur whenever LUWs (Logical Units of Work) are not transferred from the source system to the destination system.

The message appears at the bottom of the “Status” tab in RSMO. The error message would appear as “tRFC Error in Source System”, “tRFC Error in Data Warehouse”, or simply “tRFC Error”, depending on the system from which data is being extracted.

Sometimes IDocs are also stuck on the R/3 side because no processes were available to process them.
Once this error is encountered, we can try a complete refresh (F6) in RSMO
and check whether the LUWs get cleared by the system.

If the error remains after a couple of refreshes, follow the steps below quickly, as
the load may otherwise fail with a short dump.

From RSMO, go to the menu Environment -> Transact. RFC -> In the Source System. It
asks you to log in to the source system.

Once logged in, it will give a selection screen with “Date”, “User Name”, TRFC options.

On execution with F8 it gives the list of all stuck LUWs. The “Status Text” appears red for the stuck LUWs which are not getting processed, and the “Target System” for those LUWs should be “WP1CL015”, which is the BW production system in this example. Do not execute any LUW that does not have “WP1CL015” as its “Target System”.

Right-click and choose “Execute” (F6) on the LUWs identified above, so that they get cleared and the load on the BW side completes successfully.

When IDocs are stuck, go to R/3, use transaction BD87 and expand the ‘IDoc in inbound processing’ node for
IDoc status 64 (IDoc ready to be transferred to application). Place the cursor on the error
message (pertaining to IDoc type RSRQST only) and click Process (F8). This will push any
stuck IDoc on R/3.

Monitor the load for successful completion, and complete the further loads if any in the Process
Chain. 
--------------------------------------------------------------------------------------------------------------------
Time Stamp Error -BW

The “Time Stamp” Error occurs when the Transfer Rules/Structure (TR/TS) are internally inactive in
the system.

• They can also occur whenever the DataSources are changed on the R/3 side or the DataMarts are
changed on the BW side. In that case the Transfer Rules (TR) show an active status when checked,
but they are actually not active; this happens because the time stamps of the DataSource and the
Transfer Rules are different.

The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
in the PC.

• Check the Transfer Rules in RSA1, Administrator Workbench.

Whenever we get such an error, we first need to check the Transfer Rules (TR) in the Administrator
Workbench. Check whether any rule is inactive; if so, activate it.
• You first need to replicate the relevant DataSource, by right-clicking on the source system of the DataSource ->
Replicate DataSources.

On such occasions, we can execute the ABAP report program
RS_TRANSTRU_ACTIVATE_ALL. It asks for the source system name, the InfoSource name, and two
checkboxes. To activate only those TR/TS which are held by a lock, check the
“LOCK” option. To activate only those TR/TS which are inactive, check the “Only
Inactive” option.

Once executed, it reactivates the TR/TS of that particular InfoSource even if they already show as active.

 Now re-trigger the InfoPackage again.

Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

------------------------------------------------------------------------------------------------------------------
Short Dump -BW

Whenever a job fails with a “Time Out” error, it means that the job has stopped
for some reason and the request is still in yellow state; as a result, a time-out error
is raised. This usually leads to a short dump in the system, either in R/3 or
in BW.

A short dump may also occur if there is a mismatch in the type of incoming data. For
example, if a date field is not in the format specified in BW, it may happen
that instead of giving an error it produces a short dump every time we trigger the load.

We get a time-out error after the time specified in the InfoPackage ->
Time Out settings (which may or may not be the same for all InfoPackages). But in the meantime
we may already get a short dump in the BW system or in the source system
R/3.

The message appears in the Job Overview in RSMO, or in “Display Message” option of
the Process in the PC.


Usually a “Time Out” error results in a short dump. To check the short dump, go to
Environment -> Short Dump -> In the Data Warehouse / In the Source System.

Alternatively we can check the Transaction ST22, in the Source System / BW system. And then
choose the relevant option to check the short dump for the specific date and time. Here when we
check the short dump, make sure we go through the complete analysis of the short dump in detail
before taking any actions.

In the case of a time-out error, check whether the time-out occurred after the extraction or not. It may
happen that the data was extracted completely and the short dump occurred afterwards; in that case
nothing needs to be done.

To check whether the extraction completed, look at “Extraction”
in the “Details” tab of the Job Overview, where we can conclude whether the extraction was done
or not. If it is a full load from R/3, we can also check the number of records in RSA3 in R/3 and
verify that the same number of records is loaded in BW.


 In the short dump we may find that there is a Runtime Error, "CALL_FUNCTION_SEND_ERROR"
which occurred due to Time Out in R/3 side.

In such cases following could be done.

If the data was extracted completely, then change the QM status from yellow to green. If a cube is
being loaded, then create indexes; for an ODS, activate the request.

If the data was not extracted completely, then change the QM status from yellow to red. Re-trigger
the load and monitor the same.

 Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

-------------------------------------------------------------------------------------------------------------------

 Job Cancellation in Source System (ECC)

If the job in the R/3 system is cancelled for some reason, this error is encountered. This may be
due to some problem in the system. Sometimes it may also be due to other jobs running in
parallel which take up all the work processes, so the job gets cancelled on the R/3 side.

The error may or may not result from a time-out. It may also happen that a
system or hardware problem causes these errors.

The exact error message is "Job termination in source system". The wording may also
differ; it may be “The background job for data selection in the source system has been terminated”.
Both error messages mean the same. Sometimes it may also read “Job Termination due to
System Shutdown”.

The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
in the PC.

Firstly we check the job status in the Source System. It can be checked through Environment -> Job
Overview -> In the Source System. This may ask you to login to the source system R/3. Once logged
in it will have some pre-entered selections, check if they are relevant, and then Execute. This will
show you the exact status of the job. It should show “X” under Canceled.

The job name generally starts with “BIREQU_” followed by system generated number.

Once we have confirmed that this error occurred due to job cancellation, we check the status of
the ODS or cube under the Manage tab. The latest request will show a red QM status.

We need to re-trigger the load again in such cases as the job is no longer active and it is cancelled.
We re-trigger the load from BW.

 We first delete the Red request from the manage tab of the InfoProvider and then re-trigger the
InfoPackage.

Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

-------------------------------------------------------------------------------------------------------------------
 Incorrect data in PSA:

It may sometimes happen that the incoming data to BW has an incorrect format, or a few
records have incorrect entries. For example, the expected value was in upper case and the data is in
lower case, or the data was expected in numeric form but was provided as
alphanumeric.

The data load may be a flat file load or it may be from R/3. Most often it is the flat file
provided by the users that has an incorrect format.


The error message will appear in the Job Overview and indicates what exactly needs to be done for
the error that occurred.

The message at the bottom of the “Header” tab of the Job Overview in RSMO will have “PSA Pflege” (PSA maintenance) written on it, which gives a direct link to the PSA data.


Once the error is confirmed, we check the “Details” tab of the Job Overview to see
which record and which field in the data have the error.

Once we have made sure from “Extraction” in the Details tab of the Job Overview that the data was
completely extracted, we can see exactly which record and which field hold the erroneous data.
Here we can also check the validity of the data against the PSA data of the previous successful load.

When we check the data in the PSA, it shows the erroneous record with a red traffic light. In
order to change data in the PSA, the request must first be deleted from the Manage tab of the
InfoProvider; only then does the system allow the PSA data to be changed.

• Once the specific field entry in the PSA record has been changed, we save it. After the data
in the PSA is changed, we reconstruct the same request from the Manage tab. Before we
can reconstruct the request, it needs to have QM status “Green”.

• This will update the records present in the request again.

• Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.
--------------------------------------------------------------------------------------------------------------------

ODS Activation Failed.

• During a data load into an ODS, it may sometimes happen that the data gets extracted and loaded
completely, but the ODS activation then fails with a status 9 error.

• The failure may also be due to a lack of resources, or to an existing failed request in the ODS. For master data it is
fine if we have an existing failed request.

• This happens when there are rollback segment errors in the Oracle database, giving error ORA-00060.
When data is activated, it is read from the active data table and then either inserted
or updated. While doing this there are deadlocks and Oracle is unable to extend the extents.

• The exact error message would be like “Request REQU_3ZGI6LEA5MSAHIROA4QUTCOP8, data
package 000012 incorrect with status 9 in RSODSACTREQ”. Sometimes it may be accompanied by a
“Communication error (RFC call) occurred” error. It is actually due to a system error.

• The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
in the PC.

• The exact error message is “ODS Activation Failed”.


• Whenever such an error occurs, the data may or may not be completely loaded; it is only the
activation that fails. Hence, when we look at the details of the job, we can see which data package
failed during activation.

• We can once again try to manually activate the ODS. Do not change the QM status here: in the
monitor it is green, but within the data target it is red. Once the data is activated, the QM status turns
green.

• For successful activation of the failed request, click on the “Activate” button at the bottom, which opens another window containing only the request(s) that are not yet activated. Select the request, check the corresponding options at the bottom, and then click “Start”.

• This will set a background job for activation of the selected request.

• Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

• In case the above does not work, we check the size of the data package specified in the
InfoPackage, under InfoPackage -> Scheduler -> DataS. Default Data Transfer. Here we can set the size
of the data package; we need to reduce the maximum size of the data package so that
activation takes place successfully.

• Once the size of the data package is reduced, we re-trigger the load and reload the complete
data again.

• Before starting the manual activation, it is very important to check whether there is an existing failed
“Red” request. If so, make sure you delete it before starting the manual activation.

This error occurs in the first place because, at that point in time, the system is not
able to run the activation via the 4 parallel processes configured (this parameter is set in
transaction RSCUSTA2). Later on the resources are free, so the activation completes successfully.

-------------------------------------------------------------------------------------------------------------------

Caller 70 is missing:

• This error normally occurs whenever BW encounters errors it is not able to classify. There
could be multiple reasons for it.

Whenever we load master data for the first time, SIDs are created. If the system is
unable to create SIDs for the records in the data packet, we can get this error message.

If the indexes of the cube are not deleted, it may happen that the system gives the
caller 70 error.

Whenever we try to load transactional data which has master data as one of its
characteristics and the value does not exist in the master data table, we get this error. The system
can have difficulty creating SIDs for the master data while also loading the transactional data.

If an ODS activation is taking place and at the same time another ODS activation is
running in parallel, it may happen that the system classifies the error as
caller 70, as there were no processes free for that ODS activation.

It also occurs whenever there is a read/write conflict on the active data table of an ODS.
For example, if activation is happening for an ODS and at the same time data is
also being loaded into the same ODS, the system may classify the error as caller 70.

It is a system error which can be seen under the “Status” tab in the Job Overview.

 The exact error message is “System response "Caller 70" is missing”.

 It may happen that it may also log a short dump in the system. It can be checked at "Environment ->
Short dump -> In the Data Warehouse".

• If the master data is being loaded for the first time, we can reduce the data
package size and reload the InfoPackage, since processing sometimes depends on the size of the data
package. We can also try to split the data load into several separate loads.

• If the error occurs during a cube load, we can try deleting the indexes of the cube and then reloading
the data.

• If we are trying to load transactional and master data together and this error occurs, we can
reduce the size of the data package and try reloading, as the system may find it difficult to create
SIDs and load data at the same time. Alternatively, we can load the master data first and then the
transactional data.

• If the error is happening during ODS activation because no processes are free or available for processing the ODS activation, then we can define the processes in transaction RSCUSTA2.

• If the error is occurring due to a read/write conflict in the ODS, we need to change the scheduled time of the data load.

Once we are sure that the data has not been extracted completely, we can go ahead and delete
the red request from the Manage tab of the InfoProvider and re-trigger the InfoPackage.

• Monitor the load for successful completion, and complete the further loads if any in the Process
Chain.

-------------------------------------------------------------------------------------------------------------------
Attribute Change Run Failed – ALEREMOTE was locked.

• During master data loads, a lock is sometimes set by the system user ALEREMOTE.
• This normally occurs when a hierarchy/attribute change run (HACR) is running for some other master data load, and the system tries to carry out
HACR for this new master data. This is a scheduling problem.


• The message appears in the Job Overview in RSMO, or in “Display Message” option of the Process
in the PC.

The exact error message would be like “User ALEREMOTE locked the load of master data for
characteristic 0CUSTOMER”. Here it is specifically for the 0CUSTOMER load; the message will differ
depending on the master data InfoObject being loaded.


• Check the error message completely and also check the long text of the error message, as it will tell
you the exact Master Data which is locked by user ALEREMOTE.

• The lock is set because the timing of the load and of the HACR clashed. We first need to check
RSA1 -> Tools -> HACR, where we get the list of InfoObjects on which HACR is currently
running. Only once that is finished, go to transaction SM12. This gives a few options and a
couple of default entries. When we list the locks, it displays all the locks that are set. Delete the lock for the specific entry only, otherwise some load which was running may fail due to the lock being
released.

• Now we choose the appropriate lock which has caused the failure and click on Delete, so that the
existing lock is released. Care should be taken that we do not delete a lock belonging to an actively running job.
Preferably avoid this solution.

• When HACR finishes for the other Master Data, trigger Attribute change run for this Master Data.
-------------------------------------------------------------------------------------------------------------------
SAP R/3 Extraction Job Failed.

There are certain jobs which are triggered in R/3 based upon events created there. These events are
triggered from SAP BW via ABAP Program attached in Process Chains. This extract job also triggers along with it a extract status job. The extract status job will send the status back to BW with success, failure. Hence it is important that the extract job, and the extract status job both get completed. This is done so that on completion of these jobs in R/3, extraction jobs get triggered in R/3 via Info pack from BW. Error may occur in the extract job or in the extract status job.

• The exact error message normally can be seen in the source system where the extraction occurs. In
BW the process for program in the PC will fail.

• This Process is placed before the InfoPackage triggers, hence if the extraction program in R/3 is still
running or is not complete, or is failed, the InfoPackage will not get triggered. Hence it becomes very
important to monitor such loads through RSPC rather than through RSMO



• We log in to the source system and check transaction SM37 for the status of the job running in R/3. Here it shows the exact status of the running job.

• Enter the exact job name, user and date, choose the relevant options, then execute. It shows a
list of the jobs with that name which are Active. You may also find another job scheduled for the next
load, a cancelled job if any, or a previously finished job. The active job is the one which is currently
running.

• Here, if the “Delay (sec.)” of the job keeps increasing instead of “Duration (sec.)”, it means
there is some problem with the extraction job: it is not running and is delayed.
• It may sometimes happen that there is no active job and there is a job in finished status with the current date/time.

• Both the extract job and the status job need to be checked, because it may happen that the extract
job finished but the extract status job failed, as a result of which it did not send a success status
to BW even though the extraction was complete. In such cases, we manually change the status of the extract
program process in the PC in BW to green with the help of the FM “ZRSPC_ABAP_FINISH”.
Execute the FM with the correct name of the program process variant and the status “F”. This
makes the process green, triggering the further loads. Here we need to check that no previous
extract program process is running in the BW system; hence we need to check the PC logs in detail
for any previously existing pending process.

• Monitor the PC to complete the loads successfully.

• If we need to turn the ABAP process within the PC “RED” and re-trigger the PC, we execute the FM “ZRSPC_ABAP_FINISH” with the specific variant and job status “R”, which turns the ABAP process RED.

This usually needs to be done when the extraction job was cancelled in R/3 for some reason,
we have another job in Released state, and the BW ABAP process is in Yellow state. We can then
turn the ABAP process RED via the FM and re-trigger the PC.
---------------------------------------------------------------------------------------------------------------------

File not found (System Command for file check failed):

• The system command process is placed in a PC before the InfoPackage process. Hence it checks
for the flat file on the application server before the InfoPackage is triggered. This ensures that
when the load starts, there is a flat file to upload.

• It may happen that the file is not available and the system command process fails. In that case it will
not trigger the InfoPackage. Hence it is very important to monitor the PC through RSPC.

• The error message turns the system command process in the PC “Red”, and the UNIX script
which failed has a specific return code which indicates that the script has failed.

What are the possible actions to be carried out?
• Whenever the system command process fails, it indicates that the file is not present. We right-click on
the process and choose “Display Message” to see the failed script. Here we need to check the return
code: if the exit status is -1 then it is a failure, i.e. the process becomes red; otherwise it becomes green in the PC.

• We need to check the script carefully for the above-mentioned exit status, and only then conclude
that the file really was not available.

• Once it is confirmed that the file is not available, we need to take the appropriate actions.

• We need to identify the person who is responsible for FTPing the file to the application server. A
mail already goes to the responsible person via the error message in the process, but we also need
to send a mail regarding the same.

• Identify the process chains that contain the system command process and the corresponding
actions to be taken for each of them.
-------------------------------------------------------------------------------------------------------------------
Table space issue.

• Many a time, particularly with respect to HACR, while the program is realigning
aggregates it needs a lot of temporary tablespace (PSAPTEMP). If there is a large amount of data to be
processed and Oracle is not able to extend the tablespace, it gives a dump.

• This normally happens if many aggregates are created on the same day or there is a large
change in the incoming master data / hierarchy, so that a large amount of temporary memory is
needed to perform the realignment.

• Also, whenever PSAPODS (which houses many of the tables) is full, the data load / ODS activation fails.

• The errors ORA-01653 and ORA-01688 relate to tablespace issues. The error gives
the ORA number and asks for the tablespace to be increased.

• In case the tablespace is full, we need to contact the Basis team and ask for an increase in
the size of the tablespace.

• The tablespace is increased by changing some parameters, allocating more of the space which
is defined for individual tables.

Oracle Database 12c - 4

Constraint:
  1. Types of constraints
  2. Sample tables
  3. Creating integrity constraints
  4. Creating example table

WHAT IS A CONSTRAINT? In the previous chapter we saw how to create a table using the CREATE TABLE command. Now we will understand how to define constraints. Constraints are used to implement standard and business rules. The data integrity of the database must be maintained; in order to ensure data integrity we have to implement certain rules, or constraints. As these constraints are used to maintain integrity, they are called integrity constraints.
Standard rules: Standard constraints are the rules related to primary keys and foreign keys. Every table must have a primary key. A primary key must be unique and not null. A foreign key must derive its values from the corresponding parent key. These rules are universal and are called standard rules.
Business rules: These rules are specific to a single application. For example, in a payroll application we may have to implement a rule that rejects any employee row whose salary is less than 2000. Another example is that the current balance of a bank account must be greater than or equal to 500.
Once the constraints are created, the Oracle database makes sure that they are not violated when a row is inserted, deleted or updated. If a constraint is not satisfied, the operation fails.
Constraints are normally defined at the time of creating the table, but it is also possible to add constraints after the table is created using the ALTER TABLE command. Constraints are stored in the Data Dictionary (a set of tables which stores information about the database).
Each constraint has a name; it is either given by the user using the CONSTRAINT option or assigned by the system. In the latter case, the name is SYS_Cn, where n is a number.
Note: It is recommended that you name your constraints so that referring to a constraint is easier later on.
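Two small sketches related to the points above. The first adds a named constraint to the Persons table used later in this post with ALTER TABLE; the second queries the data dictionary to see which names the constraints received (system-assigned names show up as SYS_Cn). The constraint name pk_persons is just an example.

-- Add a named constraint after the table has been created
ALTER TABLE Persons
  ADD CONSTRAINT pk_persons PRIMARY KEY (P_Id);

-- List the constraints on the table together with their names and types
SELECT constraint_name, constraint_type
FROM   user_constraints
WHERE  table_name = 'PERSONS';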
TYPES OF CONSTRAINTS: Constraints can be given at two different levels. If the constraint relates to a single column, it is given at the column level; otherwise it is given at the table level. Based on where a constraint is given, constraints are of two types:
o Column Constraints
o Table Constraints
Column Constraint: A constraint given at the column level is called a column constraint. It defines a rule for a single column and cannot refer to any column other than the one at which it is defined. A typical example is the PRIMARY KEY constraint when a single column is the primary key of the table.
Table Constraint: A constraint given at the table level is called a table constraint. It may refer to more than one column of the table. A typical example is a PRIMARY KEY constraint used to define a composite primary key. A column-level constraint can also be given at the table level, but a constraint that deals with more than one column must be given at the table level.


SQL CREATE TABLE with CONSTRAINT syntax:
CREATE TABLE table_name
(
column_name1 data_type(size) constraint_name,
column_name2 data_type(size) constraint_name,
column_name3 data_type(size) constraint_name,
....
);
The following is the syntax of the column constraint clause used with the CREATE TABLE and ALTER TABLE commands.
[CONSTRAINT constraint] { [NOT] NULL | {UNIQUE | PRIMARY KEY} | REFERENCES [schema.] table [(column)] [ON DELETE CASCADE] | CHECK (condition) }

The following is the syntax of table constraint.
[CONSTRAINT constraint] { {UNIQUE | PRIMARY KEY} (column [,column] ...) | FOREIGN KEY (column [,column] ...) REFERENCES [schema.] table [(column [,column] ...)] [ON DELETE CASCADE] | CHECK (condition) }
The main difference between column constraint and table constraint is that in table constraint we can access more than one column whereas in column constraint we can refer to only the column for which constraint is being defined.
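A short sketch of the difference using a hypothetical ORDER_ITEMS table: the NOT NULL rules are column constraints, while the composite primary key spans two columns and therefore has to be given as a table constraint.

CREATE TABLE order_items
(
order_id   NUMBER(10) NOT NULL,                            -- column constraint
item_no    NUMBER(4)  NOT NULL,                            -- column constraint
quantity   NUMBER(6),
CONSTRAINT pk_order_items PRIMARY KEY (order_id, item_no)  -- table constraint
);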

NOT NULL - Indicates that a column cannot store NULL value
CREATE TABLE PersonsNotNull
(
P_Id int NOT NULL,
LastName varchar(255) NOT NULL,
FirstName varchar(255),
Address varchar(255),
City varchar(255)
)
UNIQUE - Ensures that each row for a column must have a unique value
CREATE TABLE Persons
(
P_Id int NOT NULL,
LastName varchar(255) NOT NULL,
FirstName varchar(255),
Address varchar(255),
City varchar(255),
CONSTRAINT uc_PersonID UNIQUE (P_Id,LastName)
)
PRIMARY KEY - A combination of NOT NULL and UNIQUE. Ensures that a column (or a combination of two or more columns) has a unique identity, which helps to find a particular record in a table more easily and quickly
FOREIGN KEY - Ensures the referential integrity of the data in one table by matching values in another table
CHECK - Ensures that the value in a column meets a specific condition
DEFAULT - Specifies a default value for a column
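The following sketch combines these constraint types in one pair of hypothetical tables (Departments and Employees); the constraint names are chosen for illustration, and the CHECK clause implements the payroll rule mentioned earlier (salary must not be below 2000).

CREATE TABLE Departments
(
Dept_Id  int,
DeptName varchar(255) NOT NULL,
CONSTRAINT pk_departments PRIMARY KEY (Dept_Id)
);

CREATE TABLE Employees
(
Emp_Id   int,
EmpName  varchar(255) NOT NULL,
Dept_Id  int,
Salary   number(8,2) DEFAULT 2000,            -- DEFAULT: used when no value is supplied
CONSTRAINT pk_employees PRIMARY KEY (Emp_Id), -- PRIMARY KEY
CONSTRAINT fk_emp_dept  FOREIGN KEY (Dept_Id)
           REFERENCES Departments (Dept_Id),  -- FOREIGN KEY
CONSTRAINT chk_salary   CHECK (Salary >= 2000) -- CHECK
);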

Oracle Database 12c - 3

Creating a table
Datatypes
Inserting rows into a table
Selecting rows from a table

CREATING A TABLE: A table is a collection of rows and columns. Data in the relational model is stored in tables. Let us create a table first; then we will understand how to store data into the table and retrieve data from it.
Before a table is created the following factors of the table are to be finalized.
  1. What data the table is supposed to store
  2. The name of the table. It should depict the content of the table
  3. What are the columns that the table should contain
  4. The name, data type and maximum length of each column of the table
  5. What are the rules to be implemented to maintain data integrity of the table

...............................
CREATE TABLE tablename
(column datatype [DEFAULT expr] [constraints][, ...] [,table_constraints] ... )
.............................

CREATE TABLE table_name
(
column_name1 data_type(size),
column_name2 data_type(size),
column_name3 data_type(size),
....
);

The following CREATE TABLE command is used to create Persons table.
CREATE TABLE Persons
(
PersonID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);

The above command creates a table called Persons. This table contains 5 columns.
If command is successful, Oracle responds by displaying the message Table Created.
Rules to be followed for names: The following are the rules to be followed while naming an Oracle object. These rules are applicable to the names of tables and columns.
  1. The name must begin with a letter - A-Z or a-z.
  2. Letters, digits and the special characters underscore (_), $ and # are allowed.
  3. The maximum length of the name is 30 characters.
  4. It must not be an SQL reserved word.
  5. There should not be any other object with the same name in your account.

Note: A table can contain up to 1000 columns in Oracle8 or above, whereas in Oracle7 a table can contain only 254 columns.

Datatypes:
Each column of the table has a datatype and, where applicable, a maximum length. The datatype of the column specifies what type of data can be stored in the column.
The datatype VARCHAR2 is used to store strings that may have a varying number of characters; NUMBER is used to store numbers. The maximum length, which is given in parentheses after the datatype, specifies how many characters (or digits) the column can store at most. For example, a column defined as VARCHAR2(20) can store up to 20 characters.
The following table lists the datatypes available in Oracle, along with the type of data that can be stored and the maximum length allowed.

VARCHAR2(len) - Can store up to len characters. Each character occupies one byte. Maximum width is 32767 characters.
VARCHAR(len) - Same as VARCHAR2, but use VARCHAR2 as Oracle might change the usage of VARCHAR in future releases.
CHAR(len) - Fixed-length character data. If len is given, it can store up to len characters; the default width is 1. The string is padded on the right with spaces until it is len characters long. Maximum width is 2000.
NUMBER - Can store numbers up to 40 digits plus decimal point and sign.
NUMBER(p,s) - p is the maximum number of significant digits allowed; s is the number of digits to the right of the decimal point.
DATE - Can store dates in the range 1-1-4712 B.C. to 31-12-4712 A.D.
LONG - Variable-length character values up to 2 gigabytes. Only one LONG column is allowed per table. You cannot use the LONG datatype in functions, in the WHERE clause of SELECT, in indexing, or in subqueries.
RAW and LONG RAW - Equivalent to VARCHAR2 and LONG respectively, but used for storing byte-oriented or binary data such as digital sound or graphics images.
CLOB, BLOB, NCLOB - Used to store large character and binary objects; each can accommodate up to 4 gigabytes.
BFILE - Stores a pointer to an external file. The content of the file resides in the file system of the operating system; only the name of the file is stored in the column.
ROWID - Stores a unique identifier that is used by Oracle to uniquely identify each row of the table.
NCHAR(size) - Same as CHAR, but supports national language characters.
NVARCHAR2(size) - Same as VARCHAR2, but supports national language characters.
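As a small illustration, several of these datatypes can be combined in one table definition. The table and column names below are made up for this example:

CREATE TABLE product_catalog
(
product_id   NUMBER(10),        -- up to 10 significant digits
product_name VARCHAR2(100),     -- variable-length string, up to 100 characters
product_code CHAR(8),           -- fixed length, padded with spaces to 8 characters
unit_price   NUMBER(8,2),       -- 8 digits in total, 2 of them after the decimal point
launch_date  DATE,              -- date value
description  CLOB               -- large character object
);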

Inserting rows into a table:
Inserting a row with selected columns: It is possible to insert a new row by giving values only for a few columns instead of giving values for all the available columns. The basic syntax for inserting a complete row is:
INSERT INTO table_name
VALUES (value1, value2, value3, ...);

To insert data only in specific columns, the columns are listed explicitly before the VALUES clause.
The following SQL statement inserts a new row but supplies values only for the listed columns; the CustomerID column is not given and is populated automatically (for example by an identity column) or left NULL:

INSERT INTO Customers (CustomerName, ContactName, Address, City, PostalCode, Country)
VALUES ('Syamal','Venkat','KPHB 21','HYD','520084','INDIA');

Selecting rows from a table:

The SELECT statement is used to select data from a database.

The result is stored in a result table, called the result-set.

SELECT column_name,column_name
FROM table_name;

or

SELECT * FROM table_name;
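For example, the following query reads from the Persons table created above and returns only two of its columns for every row:

SELECT LastName, City
FROM Persons;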



SELECT * FROM DBA_USERS;