1. DTP Failure
Select the failed step -> right-click and select “Display Message” -> the message shows the reason for the abend.
A DTP can fail for the following reasons; in such cases we can restart the job.
System exception error
Request locked
ABAP runtime error
Duplicate records
Erroneous records from PSA
Duplicate records: In case of duplicate records, the error message identifies them along with the InfoProvider’s name. Before restarting the job, delete the bad DTP request and handle the duplicates: go to the InfoProvider -> DTP step -> Update tab -> check Handle Duplicate Records -> activate -> execute the DTP. After successful completion of the job, uncheck the Handle Duplicate Records option and activate again.
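Conceptually, the Handle Duplicate Records option keeps only one record per semantic key, with later records overwriting earlier ones. A plain-Python sketch of that behaviour (illustrative only, not SAP code; the field names are made up):

```python
def handle_duplicates(records, key_fields):
    """Keep the last record seen for each key; later duplicates overwrite earlier ones."""
    latest = {}
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        latest[key] = rec  # a later record with the same key replaces the earlier one
    return list(latest.values())

rows = [
    {"MATERIAL": "M1", "PRICE": 10},
    {"MATERIAL": "M2", "PRICE": 20},
    {"MATERIAL": "M1", "PRICE": 12},  # duplicate key M1: this one wins
]
deduped = handle_duplicates(rows, ["MATERIAL"])
```

After deduplication only one record per MATERIAL remains, carrying the last value loaded.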
DTP Long Run:
If a DTP is taking longer than its regular run time without an active background job, turn the DTP status to Red, delete the bad DTP request (if any), and repeat the step or restart the job.
Before restarting the job or repeating the DTP step, confirm the reason for the failure.
If the failure is due to a space issue in the F fact table, engage the DBA team and the BASIS team and explain the issue to them. The table size needs to be increased (by the DBA team) before performing any action in BW. After the space in the F fact table has been increased, we can restart the job.
Erroneous records from PSA: Whenever a DTP fails because of erroneous records while fetching data from the PSA to the data target, the data needs to be corrected in ECC. If that is not possible, then after getting approval from the business we can edit the erroneous records in the PSA and rerun the DTP. Go to the PSA -> select the request -> select the error records -> edit the records and save -> then run the DTP.
2. INFO PACKAGE FAILURE: The following are the common reasons for an InfoPackage failure.
Source System Connection failure
tRFC/IDOC failure
Communication Issues
Processing the IDOC Manually in BI
Check the source system connection with the help of SAP BASIS; if it is not working, ask them to rebuild the connection. After that, restart the job (InfoPackage).
Go to RSA1 -> select source system -> System -> Connection check.
In case of failed tRFCs/IDocs, the error message will be similar to “Error in writing the partition number DP2” or “Caller 01, 02 errors”. In such cases, reprocess the tRFC/IDoc with the help of SAP BASIS; the job will then finish successfully.
If the data is loading from the source system to the DSO directly, delete the bad request in the PSA table and then restart the job.
InfoPackage Long Run: If an InfoPackage is running long, check whether the job has finished in the source system. If it has, check with SAP BASIS whether any tRFC/IDoc is stuck or failed. If the job is still in yellow status even after reprocessing the tRFC, turn the status to Red, then restart/repeat the step, and force-complete the job after it finishes.
Before turning the status to Red or Green, confirm whether the load is Full or Delta and verify the time stamp properly.
Time Stamp Verification:
Select the InfoPackage -> Process Monitor -> Header -> select the request -> go to the source system (Header -> Source System) -> SM37 -> enter the request and check its status in the source system.
If the request is active, check whether any tRFCs/IDocs are stuck or failed.
If the request is in Cancelled status in the source system, check the InfoPackage status in the BW system. If the InfoPackage status is also failed/cancelled, check the data load type (Full or Delta).
If the load is Full, turn the InfoPackage status to Red and repeat/restart the InfoPackage/job.
If the load is Delta, go to RSA7 in the source system and compare the time stamp with the last update time of the SM37 background job. If the RSA7 time stamp matches, turn the InfoPackage status to Red and restart the job; the data will be fetched in the next iteration. If the time stamp is not updated in RSA7, turn the status to Green and restart the job; the data will be fetched in the next iteration.
Source System I/P Status | BW System I/P Status | Source System RSA7 | Source System SM37 | Action
RED (Cancelled) | Active | Time stamp matches SM37 last update time | Time stamp matches RSA7 time stamp | Turn the I/P status to Red and restart the job
RED (Cancelled) | Cancelled | Time stamp matches SM37 last update time | Time stamp matches RSA7 time stamp | Turn the I/P status to Red and restart the job
RED (Cancelled) | Active | Time stamp does not match SM37 last update time | Time stamp does not match RSA7 time stamp | Turn the I/P status to Green and restart the job
RED (Cancelled) | Cancelled | Time stamp does not match SM37 last update time | Time stamp does not match RSA7 time stamp | Turn the I/P status to Green and restart the job
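The timestamp-verification rule condenses to a small decision function. A hedged plain-Python sketch (the function name and inputs are illustrative, not SAP objects):

```python
def infopackage_action(load_type, rsa7_matches_sm37):
    """Return the status to set on a cancelled InfoPackage before restarting the job.

    load_type: "FULL" or "DELTA".
    rsa7_matches_sm37: True if the RSA7 time stamp matches the SM37
    background job's last update time in the source system.
    """
    if load_type == "FULL":
        # A full load can simply be set to Red and repeated.
        return "RED"
    # Delta load: if RSA7 was already updated (timestamps match), set Red
    # and restart; if RSA7 was not updated, set Green and restart.
    # Either way the data is fetched in the next iteration.
    return "RED" if rsa7_matches_sm37 else "GREEN"
```

For example, a cancelled delta load whose RSA7 time stamp was not updated would be set to Green before the restart.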
Processing the IDOC Manually in BI:
When an IDoc is stuck in BW although the background job completed successfully in the source system, we can process the IDoc manually in BW. Go to the InfoPackage -> Process Monitor -> Details -> select the IDoc in yellow (stuck) status -> right-click -> process the IDoc manually -> it will take some time to be processed. Note: an IDoc can be processed manually in BW only when the background job has completed in the source system and the IDoc is stuck in BW only.
3. DSO Activation Failure: When the DSO activation step fails, check whether the data is loading into the DSO from the PSA or directly from the source system. If the data is loading into the DSO from the PSA, activate the DSO manually as follows.
Right-click the DSO activation step -> Target Administration -> select the latest request in the DSO -> select Activate -> after the request turns green, restart the job.
If the data is loading directly from the source system to the DSO, delete the bad request in the PSA table and then restart the job.
4. Failure in Drop Index / Compression step: When the Drop Index or Compression step fails, check the error message. If it failed due to a lock issue, the job failed because of a parallel process or an action performed on that particular cube/object; before restarting the job, make sure the object is unlocked. The Index step can also fail in case of TREX server issues; in such cases engage the BASIS team, get the information regarding the TREX server, and repeat/restart the job once the server is fixed. The Compression job may fail when another job is loading data into or reading from the cube; the job then fails with an error message such as “Locked by ...”. Again, make sure the object is unlocked before restarting the job.
5. Roll Up Failure: Roll-up fails due to contention. When a master data load is in progress, the roll-up can fail because of resource contention. In such cases, before restarting the job/step, make sure the master data load has completed; once it finishes, restart the job.
6. Change Run - job finishes with error RSM 756: When the attribute change run fails due to contention, we have to wait for the other attribute change run to complete, since only one ACR can run in BW at a time. Once the other ACR job has completed, we can restart/repeat the job. We can also run the ACR manually in case of any failures: go to RSA1 -> Tools -> Apply Hierarchy/Attribute Change Run -> select the appropriate request in the list for which the ACR has to run -> Execute.
7. Transformation Inactive: If changes were moved to production without being saved/activated properly, or a transformation was modified without being reactivated, loads can fail with the error message “Failure due to Transformation Inactive”.
In such cases, we have to activate the inactive transformation: go to RSA1 -> select the transformation -> Activate. If we have no authorization to activate the transformation in the production system, we can do it with the function module RSDG_TRFN_ACTIVATE, entering the following details:
Transformation ID: the transformation’s technical name (ID)
Object Status: ACT
Type of Source: the source type
Source Name: the source’s technical name
Type of Target: the target type
Target Name: the target’s technical name
Execute; the transformation status will turn to Active. Then restart the job, and it should complete successfully.
8. Process Chain Started from Yesterday’s Failed Step:
In a few instances, a process chain starts from the step that failed in the previous iteration instead of starting from the Start step.
In such cases we have to delete the previous day’s process chain log so the chain starts from the beginning (the Start variant).
Go to ST13 -> select the process chain -> Log -> Delete.
Alternatively, use the function module RSPROCESS_LOG_DELETE for process chain log deletion.
Pass the log ID of the process chain, which can be taken from the ST13 screen.
Then restart the chain.
Setting the Process Chain Status using a Function Module:
At times, when a process chain has been running for a long time without any progress, we have to set the status of the entire chain or of a particular step using a function module.
Function module: RSPC_PROCESS_FINISH
RSPC_PROCESS_FINISH sets the status of a particular process to finished.
To finish a DTP load that was running long, for example, call RSPC_PROCESS_FINISH with the following details:
LOG ID: the log ID of the parent chain.
CHAIN: the name of the chain that contains the failed process.
TYPE: the type of the failed step, which can be found in the table RSPCPROCESSLOG via SE16 (or ZSE16) by entering the variant and instance of the failed step. The table RSPCPROCESSLOG can be used to find various details about a particular process.
INSTANCE & VARIANT: found by right-clicking the failed step, checking “Display Messages” for the failed step, and then checking the Chain tab.
STATE: identifies the overall state of the process. The possible states for a step are:
R - Ended with errors
G - Successfully completed
F - Completed
A - Active
X - Canceled
P - Planned
S - Skipped at restart
Q - Released
Y - Ready
Undefined
J - Framework error upon completion (e.g. follow-on job missing)
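For quick reference, the state codes above can be kept as a lookup table. A small plain-Python sketch (illustrative only; any code not in the list is reported as undefined, matching the entry without a code above):

```python
# STATE codes accepted by RSPC_PROCESS_FINISH, as listed in the text.
RSPC_STATES = {
    "R": "Ended with errors",
    "G": "Successfully completed",
    "F": "Completed",
    "A": "Active",
    "X": "Canceled",
    "P": "Planned",
    "S": "Skipped at restart",
    "Q": "Released",
    "Y": "Ready",
    "J": "Framework error upon completion (e.g. follow-on job missing)",
}

def describe_state(code):
    """Return the description for a state code, or 'Undefined' if unknown."""
    return RSPC_STATES.get(code, "Undefined")
```

To mark a long-running step as finished you would pass state "G" (successfully completed).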
9. Hierarchy Save Failure:
When the Hierarchy Save step fails, follow the process below.
If there is an issue with the hierarchy save, we have to schedule the InfoPackages associated with the hierarchies manually, and then run an Attribute Change Run to propagate the changes to the associated targets. The step-by-step process:
ST13 -> select the failed process chain -> select the Hierarchy Save step -> right-click -> Display Variant -> note the InfoPackage for the hierarchy -> go to RSA1 -> run the InfoPackage manually -> Tools -> Apply Hierarchy/Attribute Change Run -> select the hierarchy from the hierarchy list -> Execute.
10. Why are there frequent load failures during extractions, and how do we analyse them?
If the failures are data-related, there may be data inconsistencies in the source system, even when they are supposedly handled in the transfer rules. These issues can be monitored in transaction RSMO and in the PSA (failed records), where the records can be corrected and updated.
If the whole extraction process is affected, there may be issues with work process scheduling or with the IDoc transfer from the source system to the target system. Such loads can be re-initiated by cancelling the specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.
11. Explain briefly 0RECORDMODE in an ODS.
0RECORDMODE is an SAP-delivered InfoObject that is added to an ODS object when it is activated. The ODS uses it during delta loads. It has three relevant values (X, D, R): D and R are for deleting and removing records, and X is for skipping records during the delta load.
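Conceptually, the record mode tells the delta update what to do with each incoming record. A plain-Python sketch (illustrative only, not SAP code; treating a blank mode as an overwriting after-image is an assumption for the example, since the text only names X, D and R):

```python
def apply_delta(active_table, delta_records):
    """Apply delta records to an active table keyed by record key.

    active_table: dict mapping key -> data.
    delta_records: list of (record_mode, key, data) tuples.
    """
    for mode, key, data in delta_records:
        if mode in ("D", "R"):   # D/R: delete or remove the record
            active_table.pop(key, None)
        elif mode == "X":        # X: skip the record during the delta load
            continue
        else:                    # assumed default: overwrite with the new image
            active_table[key] = data
    return active_table

table = {"K1": 100, "K2": 200}
table = apply_delta(table, [("D", "K1", None), ("X", "K2", 999), ("", "K3", 300)])
```

Here K1 is deleted, the change to K2 is skipped, and K3 is written as a new record.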
12. What is reconciliation in BW? What is the procedure for reconciliation?
Reconciliation is the process of comparing the data after it has been transferred to the BW system with the data in the source system. If the data comes from a single table, you can check it via SE16. If the DataSource is a standard DataSource that reads from many tables, one approach is to ask the R/3 consultant to run a report for the same selections, take the output into a spreadsheet, and reconcile it against the data in BW. If you are familiar with the R/3 reports yourself, you need not depend on the R/3 consultant (it is better to know which reports to run to check the data).
13. What are the daily tasks in production support? How many times is the data extracted, and at what times?
It depends. Data load timings range from about 30 minutes to 8 hours, depending on the number of records and on the kind of transfer rules in place. If the transfer rules are convoluted or the update rules calculate customised key figures, long runtimes are to be expected.
Typically you work in RSMO, see which records are failing, and update them from the PSA.
14. What are some of the frequent failures and errors?
There is no single fixed reason for a load to fail. From an interview perspective, I would answer it this way. Loads can fail because of:
a) invalid characters
b) a deadlock in the system
c) a previous load failure, if the load is dependent on other loads
d) erroneous records
e) RFC connection problems
These are some of the common reasons for load failures.
In a typical SAP BW system landscape, the lower systems (sandbox, development, quality) are often refreshed with a copy of production. Here we discuss some of the common issues and their solutions.
After copying the BW production system to a lower system (pre-production or quality), the Netweaver team has to perform a few more activities, among them:
1. Logical system conversion, using the BDLS programs
2. RFC connections
3. Partner profile maintenance (WE20)
If any of these steps is not performed properly, we may see issues such as data load failures, inactive DataSources, and so on. Some of them are discussed here; more will be added as they come up.
1. In a BI 7.0 system, the DTPs in the refreshed system will still point to the production system.
Solution: To convert the DTPs to point to the refreshed system, run the program RSBKDTP_BDLS in SE38. Provide the old logical system name and the new logical system name and execute the program; it converts the old logical system name to the new one. All the DTPs will then point to the refreshed system’s logical name.
2. Opening the InfoPackages for the delta loads results in a short dump (ST22) in the refreshed system.
Solution: In SE38, run RSSM_OLTP_INIT_DELTA_UPDATE to update the init conditions back in the source system (ECC). Then go to RSA7 in the source system and verify that the particular DataSource is in green state.
3. In a typical SAP BI landscape, a BW Accelerator is attached to the production system. If your quality system is refreshed from production, which has the BWA appliance, then when you try to load data into the InfoCubes the roll-up steps fail, and running queries also ends with an error saying there is no RFC to the BWA.
Solution: In the quality system, the TREX RFC in the RSCUSTA screen points to production. Remove that destination, ask your Basis team to create a dummy RFC for the BWA in the quality system, and enter the RFC details in the RSCUSTA screen. You could instead simply delete the incorrect TREX RFC from the RSCUSTA screen, but then the queries copied from production will not work.
4. When you try to load a new request into a DSO, the system reports that the request has been duplicated/activated; the reason is an inconsistency in the metadata tables. The DTP tries to load all the requests from the PSA table even though they were already loaded into the target.
Solution: Go to the manage screen of the data target, look for the most recent request, open the monitor for that particular DTP, check the request ID in the selections, and look for that request ID in the PSA. Delete all PSA requests starting with that request ID and all previous requests, then run the DTP again to load the request you were trying to load.
5. Your DataSources are not in active version and you are not able to activate them. This is because the USEROBJ field in the table RSTSODS still points to the old source system instead of the new source system.
Solution: Execute the BDLS PSA program RSAR_EXECUTE_BDLS_PSA_EXIT to change the old logical system name to the current source system.
6. If the BDLS program was not executed, the old logical system names are not converted to the new ones. As a result:
1. DataSources become inactive.
2. Data loads do not work.
3. Transformations do not work.
Solution: Ask the Netweaver team to execute the BDLS program to fix the logical system issues. There is an old BDLS program and a new BDLS program; if the issue is not fixed by the new BDLS program, try executing the old BDLS program.
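What BDLS does, in essence, is rewrite every metadata row that references the old logical system name so it references the new one. A plain-Python sketch of that idea (illustrative only, not the BDLS program itself; the logical system names and field names are made up):

```python
def convert_logical_system(rows, field, old_logsys, new_logsys):
    """Rewrite 'field' from old_logsys to new_logsys in each row; return count changed."""
    converted = 0
    for row in rows:
        if row.get(field) == old_logsys:
            row[field] = new_logsys
            converted += 1
    return converted

# Hypothetical DTP metadata rows, one still pointing at production.
dtps = [
    {"DTP": "DTP_1", "LOGSYS": "PRDCLNT100"},
    {"DTP": "DTP_2", "LOGSYS": "QASCLNT200"},
]
n = convert_logical_system(dtps, "LOGSYS", "PRDCLNT100", "QASCLNT200")
```

After the conversion, every row points at the refreshed system's logical name, which is why an incomplete BDLS run leaves some objects (DTPs, PSAs, transformations) broken.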
Some of the BDLS programs that are useful for fixing BDLS issues:
Program Name | Description
RBDLS100 | Logical System Name Conversion
RBDLS2LS | Conversion of Logical System Names: Old Version of RBDLSMAP
RBDLS2LS710 | Convert Logical System Names
RBDLSCHECK | Report to test whether a logical system exists
RBDLSMAP | Tool: Conversion of Logical System Names
RBDLSMAP_OLD | Tool: Conversion of Logical System Names
RBDLSMAP_RESET | Delete All Table Entries After BDLS
RBDLSMAP2 | Tool: Conversion of Logical System Names
RSAR_BDLS_PSA_EXIT | Program RSAR_BDLS_PSA_EXIT
RSAR_EXECUTE_BDLS_PSA_EXIT | Execute Logsys Conversion of PSA Manually
RSBK_REPAIR_BDLS | Report RSBK_REPAIR_BDLS
RSBKDTP_BDLS | Report RSBKDTP_BDLS
RSDS_CONVERT_TADIR_BDLS | Implementation of TADIR for BDLS
RSPC_BDLS_VARIANT_EXIT | Program RSAR_BDLS_PSA_EXIT
RSTRAN_BDLS | Report RSTRAN_BDLS
7. RFC connection: If the RFC connection is not active, you may face data load issues.
Solution: Ask the Netweaver team to recreate the RFC connection between your source systems.
8. Inactive partner profile: If the partner profiles from your BW system to your source system are inactive, data loads do not work.
Solution: Ask your Netweaver team to activate the partner profiles in transaction WE20 for all source systems.
9. Apart from the data load issues above with respect to logical systems, you may see a data load failure saying “No source type maintained for logical system” (pointing to the production source system).
Solution: At first this type of error looks like a BDLS issue, but it may not be. It can occur because old PSA data exists for a particular DataSource that has a logical system field still containing the production source system as its value. On execution, the logical system name is checked against the table RSSOURSYSTEM, which no longer contains the production source system details. Simply delete the temporary data in the PSA, bring in fresh data through the InfoPackage, and then execute the DTP.
For example, the DataSource 0SRM_TD_PO has a logical system field that stores the logical system name the data came from.