Lesson 5: Creating Test Script Parameters
In this video, we will be editing the test script we recorded and replacing the hardcoded values with parameters.
Best Regards,
Gopal Nair.
In this video, we will discuss test data containers and their "Internal" and "External" variants. We will see how to import the parameters we defined in our test script (discussed in part 5 of this video series). Finally, the internal variants defined will be used to create a template file for creating an external variant file.
Best Regards,
Gopal Nair.
Below is an easy-to-remember short description of the Eclipse IDE:
(E)ditor for many programming languages
(C)ode Faster
(L)ess Typing with Code Completion
(I)ntegrated Development Environment
(P)latform
(S)yntax Highlighting
(E)xtensible
Note: This is not an official expansion of Eclipse.
If you are still wondering what it means, please read ahead.
Following are the advantages of using Eclipse as a development tool:
Its openness and interoperability through standards facilitate open-source integration.
Following are the components of the Eclipse Platform:
References
The SAP Eclipse Story - http://www.sdn.sap.com/irj/sdn/nw-devstudio?rid=/library/uuid/10c671f2-6364-2a10-8d96-8b3145d4a478
Sometimes we need to debug a process, but the logic you need to debug sits behind a button event and three pop-ups. So, what do you do? Debug everything, trying to figure out where your point of interest starts... NO! You can create a shortcut on your desktop, drag and drop it onto the pop-up or onto the screen before the event, and debugging will start right after it.
Creating a debug shortcut:
Change the title to help you identify the client, change the transaction code to /h, and choose a place to save the shortcut.
And Finish.
Go to your desktop and find your shortcut:
Now, how the magic happens:
A message will be shown...
Continue the process... and the debugger will start after your click event!
Hope it helps
Bugs in your custom ABAP code can be quite expensive when they impact critical business processes, which is why quality assurance of custom ABAP code is receiving more and more attention. Detecting bugs early in the development stages, before they can be moved across the landscape, ensures that the cost and risk impact is minimal. To reach this goal, SAP offers the ABAP Test Cockpit (ATC) and the Code Inspector as quality assurance tools.
The ATC is available with EhP2 for SAP NetWeaver 7.0 support package stack 12 (SAP Basis 7.02, SAP Kernel 7.20) and EhP3 for SAP NetWeaver 7.0 support package stack 5 (SAP Basis 7.31, SAP Kernel 7.20).
The transport organizer is a tool for managing the objects that gather the changes carried out during the development and configuration phases, and for transporting them across the landscape. The two kinds of objects used are the Request and the Task.
The Request is the main container, which contains zero to any number of Tasks. The CTS automatically creates one task for each user who adds objects to the Request. An ABAP transport request may contain many tasks that are assigned to different users. When you want to transport the Request, you first have to release all the tasks of the request, and then the request itself. When it is released, the transport is carried out automatically or manually by the administrator. The transport goes to the systems and clients defined in the transport routes.
Current behavior of Code Inspector checks during the release of a transport request or a transport task
Releasing a transport request or a task can be considered the first quality gate to ensure that poor-quality custom code is not transported across the landscape. Currently, Code Inspector checks can be activated during the release of a transport request. To activate this feature, perform the following steps.
This activates the check of a transport request. But there may also be a requirement to check individual tasks. Currently, automatically triggering Code Inspector checks during the release of a task is not available as standard. To address this requirement, SAP provides a standard BAdI 'CTS_REQUEST_CHECK' that can be implemented by customers to trigger Code Inspector checks during the release of a task.
In this blog, I will illustrate the steps required to implement the BAdI, which when activated will trigger the checks during the release of a task.
(Please adapt the naming conventions, texts, BAdI names, class names, etc. to your requirements.)
Steps for triggering Code Inspector Checks during the release of tasks
3. Provide a BAdI implementation name.
4. Provide a short text. Click the 'Save' button. Provide package details when prompted.
5. Double-clicking the method 'check_before_release' of the BAdI interface takes you to the method implementation of the generated ABAP object class that was created during BAdI implementation creation.
6. In the method 'CHECK_BEFORE_RELEASE', first check whether the release concerns a transport request or a transport task. Code the following portion in the method 'CHECK_BEFORE_RELEASE':
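A minimal, hedged sketch of how this distinction could look is shown below. The importing parameter name REQUEST and the call to the private helper method SCI_CHECK (with an illustrative IV_TASK parameter) are assumptions for illustration only, not the code from the attachment; adapt them to the actual interface signature in your system.
METHOD if_ex_cts_request_check~check_before_release.
  DATA lv_strkorr TYPE e070-strkorr.
  " A task carries its parent request in E070-STRKORR; a request does not.
  SELECT SINGLE strkorr FROM e070 INTO lv_strkorr
         WHERE trkorr = request.
  IF lv_strkorr IS INITIAL.
    " It is a request: the standard Code Inspector integration already covers it.
    RETURN.
  ENDIF.
  " It is a task: trigger the Code Inspector check implemented in SCI_CHECK.
  sci_check( iv_task = request ).
ENDMETHOD.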
7. For calling the actual Code Inspector check itself, create a new private method SCI_CHECK in the class 'ZCL_IM__CTS_REQUEST_CHECK'.
Provide the following parameters for the method SCI_CHECK
Create the method exception.
8. The rest of the method SCI_CHECK contains the various steps of creating the Code Inspector check, assigning variants, object sets, etc. It is sufficient to copy the piece of code from the attachment 'sci_check.txt.zip'.
9. Finally, create the message class 'ZSCI' with the following values.
10. Save and activate all your changes. Do not forget to activate the BAdI implementation in transaction SE19.
* Deactivate the BAdI in SE19, if you do not wish to use this feature
While there are quite a few good documents about the setup of the ABAP Test Cockpit (ATC) on SDN (cf. http://scn.sap.com/docs/DOC-32791 and http://scn.sap.com/docs/DOC-32628), I haven't seen any experience reports about a roll out of ATC yet. Therefore I decided to blog about my current experiences in rolling out the ATC in our development organization.
Before starting to describe what we did in our ATC roll out, I want to give you some background about the environment of the roll out. At my company we are managing and maintaining several SAP system landscapes for different customers. A typical customer landscape consists of a SAP CRM, a SAP IS-U (ERP) and a SAP BW together with several non-SAP systems (e.g. an output management system and an archive system). In addition to that we have a central development system which is used to develop core functionality and distribute it across the customer systems. These core functionalities are typically developed in our own namespace. Therefore, each of our customer systems contains a set of custom developments in the customer namespace and a set of developments in our own namespace.
The second important aspect of our environment is the diversity of developers developing in the system. Firstly, we have a core development team. This team consists of people with a deep knowledge around software development and mostly some formal training (e.g. a computer science degree) in the area. Secondly, we have a team of functional consultants with a wide range of development skills, ranging from some basic ABAP knowledge to very deep knowledge. And finally we usually have several external consultants developing in the different customer systems as well.
As you might have guessed the result of this environment is a quite diverse code base containing anything from well designed, reusable components to unmaintainable one-time reports.
The first step I took in order to roll out ATC was to perform a first check run using a default check variant in the customer system with the largest code base as well as in our central development system. The result of this first analysis was quite disillusioning. The first run of the default check variant of the ATC across this code base resulted in roughly 700 priority 1 errors, 2,500 priority 2 errors and nearly 10,000 priority 3 errors.
The next step was to discuss the check results with the core development team. This discussion basically consisted of two parts.
Firstly, when I presented the tool everyone agreed that it would be very useful and that we should use it. When we then had a detailed look at the check results from the two systems, they were not that positive any more. The main criticism was around the errors raised by the ATC. Especially some of the more common errors led to quite some discussion about whether the reported error was really an error or rather a false positive. Furthermore, it turned out that some of the default checks simply are not valid in our system landscape. An example of such a check is the Extended Program Check that checks for conditions that are always false. In the context of SAP IS-U the pattern "IF 1 = 2. MESSAGE..." is used extensively throughout the SAP standard. Consequently, it is also widely used in our custom code. However, the Extended Program Check reported each of these IF statements. The reason is that the check only allows the pattern "IF 0 = 1. MESSAGE....".
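For illustration (the message class and number below are made up), this is the pattern in question together with the only variant the Extended Program Check tolerates:
" Pattern used throughout the SAP IS-U standard (and hence our custom code);
" its sole purpose is to make the message findable via the where-used list.
IF 1 = 2.
  MESSAGE e001(zis_u_checks).
ENDIF.
" The Extended Program Check only accepts the equivalent pattern below,
" so every occurrence of the IF 1 = 2 variant was reported as an error.
IF 0 = 1.
  MESSAGE e001(zis_u_checks).
ENDIF.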
Secondly, we discussed extensively how we should approach the large number of issues in our code base. It was obvious that we wouldn't be able to fix all reported issues, nor would that have been very sensible: a lot of the programs for which issues were reported might not be in use any more.
As a result of the discussion we decided to:
The next step we took was the definition of a custom check variant. The process of defining the custom check variant consisted of several parts. We started by defining an initial set of checks that we wanted to use. Furthermore, we adjusted the priorities of the checks to our needs. It's pretty obvious that each error that might cause a short dump needs to be an error of priority one. However, with other checks the correct priority is not that clear. Consider for example the check for an empty WHERE clause in a SELECT. A program containing such a statement might cause severe performance problems in production if it is executed on a large table; nevertheless, it might be fine in a small correction program that is only executed once. Last but not least, we modified some of the default checks (cf. the IF 1 = 2 pattern mentioned above) to suit our needs. Unfortunately, the modification of the default checks required a modification of the SAP standard in some cases.
After the initial definition of the check variant we set up daily check runs in the QA system including the replication of the results into the development system. With this set up we worked for some weeks and iteratively refined our default check variant.
Besides the executed checks we also needed an approach to cope with the large number of errors present in our code base. For this we decided that from now on we only wanted to transport objects without any priority 1 or priority 2 errors into the production system. However, we also decided that we didn't want to correct legacy code unless we were modifying it anyway (for example as a result of a bug fix or a new feature request). Therefore we created a custom object set and a custom object collector. The custom object collector only adds objects to the object set if they have been modified after a certain date. This way we were able to get check results only for new or recently modified objects.
Note that this approach has an important drawback. If, for example, the interface of a method is changed (e.g. by adding an additional required parameter), this might cause a syntax error in some other program using the class. However, with our custom object collector ATC will not be able to find this error, as the program using the class is itself not changed. Nevertheless, this was the approach we chose to cope with the large amount of legacy code.
After the core development team had been working with the described setup for a while, we were quite comfortable with the results that the ATC produced. Therefore we decided to roll out the ATC to all developers working in our systems. This was done by informing everybody about the ATC as well as setting up the execution of the ATC checks upon release of a transport request. Note that for now we only executed the checks upon release of a transport but did not block any transports because of ATC errors.
As a result of executing ATC upon the release of a transport request basically every developer was immediately using ATC, even if they had not integrated it into their workflow yet. This proved very successful, especially with the less experienced developers. As the ATC provides useful explanations together with each error it resulted in quite some discussion and learning regarding good ABAP code that wouldn't have happened otherwise.
After working with the described setup for a few weeks now, the roll out of ATC has proved quite successful in our development organisation. Especially the detailed documentation of the ATC errors helps to improve the knowledge across the organisation. With respect to the roll out, I think the involvement of the core developers from the very beginning was very important. Only by agreeing on a set of ATC checks, sometimes only after a few discussions, does everyone accept the raised errors and fix them. Had we simply used the default check variants without the adaptations mentioned above, I don't think the ATC would have been accepted as a tool to improve the code quality (e.g. due to a large number of false positives).
The next step we will take is the roll out of the ATC exemption process in our development organisation. The reason is that we have already noticed that some priority 2 errors can't be fixed due to different restrictions (e.g. usage of standard SAP functionality in custom code that leads to error messages). Therefore we need the exemption process in order to remove the errors in those special cases. Furthermore, I see the exemption process also as a prerequisite for blocking the release of transport requests as long as ATC errors are present.
Finally, I'd be happy to discuss experiences with other ATC users.
Christian
Summary
This blog is about changing the way the Source Code Inspector (transaction SCI) works, especially when the Transport Organizer integration is activated. The Transport Organizer integration can be activated using transaction SE03. Thanks to SAP for this powerful and flexible tool.
Problem
In one of our projects we needed to separate the SCI checks according to the creation date of objects. We needed this because the aforementioned project was started 12 years ago and, as you can guess, quality and security standards have changed over time. At some point the integration of SCI/ATC and the Transport Organizer (SE01) was activated, so developers cannot release a request before handling the errors reported by SCI. But how can you force a developer who made just a single line of change to fix a huge program? How can he or she handle all the errors reported by the checks without knowing the semantics of this huge program? What if this change has to be transported to the production system immediately? The solution was to separate the check variants according to the creation date of objects.
Periodic checks can also be planned to bring old objects up to the new standards.
In this blog I will try to explain what I did to work around this issue. To benefit from the solution you should be familiar with adding your own test class to SCI.
You can find information about adding your own test class to SCI at : http://scn.sap.com/community/abap/blog/2006/11/02/code-inspector--how-to-create-a-new-check and http://wiki.scn.sap.com/wiki/download/attachments/3669/CI_NEW_CHECK.pdf?original_fqdn=wiki.sdn.sap.com
Solution summary
First, I created a test class ZCL_SCI_TEST_BYDATE (derived from CL_CI_TEST_ROOT) that has just two parameters: a date (mv_credat) and a check variant (mv_checkvar). This class decides whether the tests in mv_checkvar are required for the object under test by checking its creation date. If the object is 'new', it runs the additional tests.
Secondly, I created two SCI check variants: BASIC_VARIANT and EXTENDED_VARIANT. The first one is for old development objects and the second one contains the additional tests for 'new' objects, where 'new' means that the object was created after a certain date (ZCL_SCI_TEST_BYDATE->mv_credat). The first check variant includes my custom test mentioned above (ZCL_SCI_TEST_BYDATE), and EXTENDED_VARIANT is given as its mv_checkvar parameter. The second check variant is complementary to the first one and includes different tests.
Finally, to enable navigation by double-clicking on the check results, I had to make one simple repair and two enhancements.
Step 1: The ZCL_SCI_TEST_BYDATE class
The most important method of this class is, as usual, run().
The run method checks whether the object was created after the date mv_credat, gets the test list for EXTENDED_VARIANT, and starts a new test procedure for that test list.
Another important method is modify_insp_chkvar, which returns the test list for EXTENDED_VARIANT.
That covers the important points about my custom class definition; a rough sketch of the idea follows below, and I have attached the full source code.
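As a rough, hedged sketch of the idea (this is not the attached source; the TADIR-CREATED_ON lookup and the attributes object_type / object_name inherited from CL_CI_TEST_ROOT are assumptions), run( ) can delegate the "is the object new?" decision to a small helper like this:
METHOD is_new_object.
  DATA lv_created_on TYPE tadir-created_on.
  " Read the creation date of the object under test from the repository directory.
  SELECT SINGLE created_on FROM tadir INTO lv_created_on
         WHERE pgmid    = 'R3TR'
           AND object   = object_type
           AND obj_name = object_name.
  IF sy-subrc = 0 AND lv_created_on >= mv_credat.
    rv_is_new = abap_true.   " 'new' object: also run the EXTENDED_VARIANT tests
  ELSE.
    rv_is_new = abap_false.  " old object: the BASIC_VARIANT checks are enough
  ENDIF.
ENDMETHOD.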
If you want to add parameters to your own custom test classes, look at the query_attributes, get_attributes and put_attributes methods of ZCL_SCI_TEST_BYDATE.
To add the new test class to the SCI test list, I opened SCI -> Management of Tests, chose my new test class, and clicked the Save button.
Step 2: Check variants
As I mentioned before, I created two check variants. Below is BASIC_VARIANT, which is valid for all programs. The selected test list in the figures below is just an example. Notice that my new test 'Additional tests for new programs' is selected. The parameters of the new test can be seen in this picture.
The next picture depicts the second check variant, which is valid for objects created after '01.01.2014' (mv_credat).
PS: SE01 uses the SCI check variant TRANSPORT by default, but there is a way to change this, thanks to SCI: I replaced the default check variant with my BASIC_VARIANT. To achieve this, I changed the SCICHKV_ALTER table record which has 'TRANSPORT' in the CHECKVNAME_DEF field.
Note that, AFAIK, the DEFAULT check variant is used by SE80, so its mapping can be changed in the same way.
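Purely as an illustration of where this mapping lives (assuming the table can be read directly; maintain the entry with the proper tools rather than a direct update), the current entry can be checked like this:
" Display the Transport Organizer's default check variant mapping.
" CHECKVNAME_DEF is the field mentioned above; look up the name of the
" replacement-variant field in SE11 before changing anything.
DATA ls_alter TYPE scichkv_alter.
SELECT SINGLE * FROM scichkv_alter INTO ls_alter
       WHERE checkvname_def = 'TRANSPORT'.
IF sy-subrc = 0.
  WRITE: / 'A replacement is maintained for check variant TRANSPORT'.
ENDIF.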
Step 3: Adding the check results of the custom test class to SCI
(This step is not related to the main idea; the first two steps are sufficient to express it.)
After creating the new test class and check variants, I was able to run additional checks for new objects, but the SCI result list did not navigate to EXTENDED_VARIANT's test results when I double-clicked them. I guess SCI is only aware of BASIC_VARIANT's test list and cannot navigate to the results of unknown tests, so I had to add my additional tests to the inspection object's test list.
I made a single-line repair (CL_CI_INSPECTION->EXECUTE_DIRECT) and an enhancement to CL_CI_TESTS->GET_LIST. The aim of these modifications is to fill the 'inspection' property of ZCL_SCI_TEST_BYDATE. (ZCL_SCI_TEST_BYDATE inherits a property named inspection from CL_CI_TEST_ROOT, but it is empty when the tests are running. I don't know whether this is a bug or not.)
PS: The CL_CI_TEST_ROOT class has the method 'inform' and the event MESSAGE, but I was not able to pass my additional check results to the SCI result list this way. I will work on this, and if it works out, step 3 will become unnecessary.
CL_CI_TESTS->GET_LIST enhancement
I have used this testing technique during one of my test phases, where we were testing portal applications.
This test technique is applicable where we have a portal application and equivalent functionality in R/3 (back-end) as well.
I will take my examples from EAM, where we have portal and R/3 transactions available to create/change/display the objects Equipment, Functional Location, Orders, Task Lists, Notifications, etc.
Portal applications have their own benefits; end users need not remember all the transactions. At the same time, it is mandatory that the functionality behaves the same whether it is launched from the portal or from an R/3 transaction.
We tested different combinations and ensured that the functionality behaves in the same manner in all cases. Wherever it deviates from the expected behavior, we can analyze the behavior further and report an issue.
If we test both of them (R/3 and portal) separately, without comparison, it is difficult to validate the exact expected behavior.
Prerequisites:
I have described below a few aspects of the functionality which we should test.
Few Combinations which we validated were:
Open the object in change mode in the portal and try changing it in R/3, and vice versa.
Expected result: the object should be locked and not available for changes.
Change a few customizing settings in R/3 and check the impact in the portal.
Expected result: the customizing change should have an impact on the portal too.
Create an object in the portal and check it in the R/3 transaction/tables, and vice versa.
Expected result: objects created in the portal should be available in the database tables of the back-end system.
Block the object by setting its status to inactive in R/3.
Expected result: the status should be updated in the portal for the respective object and we should not be able to change it any further.
There are many other cases/combinations which can be compared. With this test technique we can ensure the functionality is robust and identical, and does not change its behavior with a change in test environment or technology.
This article might be useful for testers who are testing the portal and will help them design their tests even better.
I will further share my findings and new ways of testing any new functionality from my future test phases.
In this blog I would like to describe the idea of data-driven testing and how it can be implemented in ABAP Unit.
Data-driven testing is used to separate test data and expected results from unit test source code.
It allows running the same test case on multiple data sets without the need to modify the test code.
It does not replace techniques such as test doubles and mock objects. It is still a good idea to abstract your business logic in a way that allows you to test independently of data. But even if your code is built in that way, you can still benefit from parametrized testing and the ability to check many inputs against the same code.
It is particularly useful for methods which implement more complex computational formulas and algorithms. The input space is very wide in such cases and there are many boundary cases to consider; it is easier to maintain them outside of the code.
Other xUnit frameworks, like NUnit (.NET) and JUnit (Java), provide built-in capabilities to run parametrized test cases and implement data-driven testing.
I was missing such features in ABAP Unit and started looking for potential solutions.
The solution which I will present is based on eCATT test data containers and eCATT API.
eCATT Data containers are used to store input parameters and expected results. ABAP unit is used as an execution framework for unit tests.
For the sake of example, let's take a simple class with a method which determines the triangle type.
It returns:
and throws an exception if the provided input is not a valid triangle:
CLASS-METHODS get_type
  IMPORTING
    a TYPE i
    b TYPE i
    c TYPE i
  RETURNING VALUE(triangle_type) TYPE i
  RAISING lcx_invalid_param.
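For completeness, here is a minimal sketch of how the class under test could be implemented. The constant names c_isosceles and c_scalene are illustrative assumptions (only c_equilateral appears later in this blog), and get_type is declared as a class method because the tests call it statically:
CLASS lcl_triangle DEFINITION.
  PUBLIC SECTION.
    CONSTANTS: c_equilateral TYPE i VALUE 1,
               c_isosceles   TYPE i VALUE 2,
               c_scalene     TYPE i VALUE 3.
    CLASS-METHODS get_type
      IMPORTING a TYPE i
                b TYPE i
                c TYPE i
      RETURNING VALUE(triangle_type) TYPE i
      RAISING   lcx_invalid_param.
ENDCLASS.
CLASS lcl_triangle IMPLEMENTATION.
  METHOD get_type.
    " A valid triangle has positive sides and satisfies the triangle inequality.
    IF a <= 0 OR b <= 0 OR c <= 0 OR
       a + b <= c OR b + c <= a OR a + c <= b.
      RAISE EXCEPTION TYPE lcx_invalid_param.
    ENDIF.
    IF a = b AND b = c.
      triangle_type = c_equilateral.
    ELSEIF a = b OR b = c OR a = c.
      triangle_type = c_isosceles.
    ELSE.
      triangle_type = c_scalene.
    ENDIF.
  ENDMETHOD.
ENDCLASS.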
Now we proceed with creating unit tests.
There are two typical approaches:
- Creating a separate test method for each test case
- Bundling test cases in a single method with multiple assertions
Usually I'm in favor of the second approach, as it provides a better overview in the test logs when some of the test cases are failing. It is also easier to debug a single test case.
Example test case could look like this:
...
METHODS test_is_equilateral FOR TESTING.
...
METHOD test_is_equilateral.
cl_abap_unit_assert=>assert_equals(
act = lcl_triangle=>get_type( a = 3
b = 3
c = 3 )
exp = lcl_triangle=>c_equilateral ).
ENDMETHOD.
Each time we want to add coverage and test some additional inputs, either a new test method has to be created or a new assertion has to be added.
To overcome this we create a test data container in transaction SECATT.
And define test variants
In the ABAP code we define a test method which uses the eCATT API class CL_APL_ECATT_TDC_API to retrieve the variant values:
METHOD test_get_type.
DATA: a TYPE i,
b TYPE i,
c TYPE i,
exp_type TYPE i.
DATA: lo_tdc_api TYPE REF TO cl_apl_ecatt_tdc_api,
lt_variants TYPE etvar_name_tabtype,
lv_variant TYPE etvar_id.
lo_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( 'ZTRIANGLE_TEST_01' ).
lt_variants = lo_tdc_api->get_variant_list( ).
"skip default variant
DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.
" execute test logic for all data variants
LOOP AT lt_variants INTO lv_variant.
get_val: 'A' a,
'B' b,
'C' c,
'EXP_TRIANGLE_TYPE' exp_type.
cl_abap_unit_assert=>assert_equals(
exp = exp_type
act = lcl_triangle=>get_type( a = a
b = b
c = c )
quit = if_aunit_constants=>no ).
ENDLOOP.
ENDMETHOD.
...
DEFINE get_val.
lo_tdc_api->get_value(
exporting
i_param_name = &1
i_variant_name = lv_variant
changing
e_param_value = &2 ).
END-OF-DEFINITION.
In my project I ended up creating a base class for parametrized unit tests which takes care of reading variants and running test methods.
It has one method which does all the work:
METHOD run_variants.
DATA: lt_variants TYPE etvar_name_tabtype,
lo_ex TYPE REF TO cx_root.
"SECATT Test Data Container
TRY.
go_tdc_api = cl_apl_ecatt_tdc_api=>get_instance( imp_container_name ).
" Get all variants from test data container
lt_variants = go_tdc_api->get_variant_list( ).
CATCH cx_ecatt_tdc_access INTO lo_ex.
cl_aunit_assert=>fail(
msg = |Variant { gv_current_variant } failed: { lo_ex->get_text() }|
quit = if_aunit_constants=>no ).
RETURN.
ENDTRY.
"skip default variant
DELETE lt_variants WHERE table_line = 'ECATTDEFAULT'.
" execute test method for all data variants
" method should be parameterless and public in child unit test class
LOOP AT lt_variants INTO gv_current_variant.
TRY.
CALL METHOD (imp_method_name).
CATCH cx_root INTO lo_ex.
cl_aunit_assert=>fail(
msg = |Variant { gv_current_variant } failed: { lo_ex->get_text() }|
quit = if_aunit_constants=>no ).
ENDTRY.
ENDLOOP.
ENDMETHOD.
Modified test class using this approach looks as follows:
CLASS ltc_test_triangle DEFINITION FOR TESTING DURATION SHORT RISK LEVEL HARMLESS
INHERITING FROM zcl_zz_ca_ecatt_data_ut.
PUBLIC SECTION.
METHODS test_get_type FOR TESTING.
METHODS test_get_type_variant.
METHODS test_get_type_invalid_tri FOR TESTING.
METHODS test_get_type_invalid_tri_var.
ENDCLASS.
CLASS ltc_test_triangle IMPLEMENTATION.
METHOD test_get_type.
"run method TEST_GET_TYPE_VARIANT for all variants from container ZTRIANGLE_TEST_01
run_variants(
imp_container_name = 'ZTRIANGLE_TEST_01'
imp_method_name = 'TEST_GET_TYPE_VARIANT' ).
ENDMETHOD.
METHOD test_get_type_variant.
DATA: a TYPE i,
b TYPE i,
c TYPE i,
exp_type TYPE i.
get_val: 'A' a,
'B' b,
'C' c,
'EXP_TRIANGLE_TYPE' exp_type.
cl_abap_unit_assert=>assert_equals(
exp = exp_type
act = lcl_triangle=>get_type( a = a
b = b
c = c )
quit = if_aunit_constants=>no
msg = |Wrong type returned for variant { gv_current_variant }| ).
ENDMETHOD.
METHOD test_get_type_invalid_tri.
"run method TEST_GET_TYPE_INVALID_TRI_VAR for all variants from container ZTRIANGLE_TEST_02
run_variants(
imp_container_name = 'ZTRIANGLE_TEST_02'
imp_method_name = 'TEST_GET_TYPE_INVALID_TRI_VAR' ).
ENDMETHOD.
METHOD test_get_type_invalid_tri_var.
DATA: a TYPE i,
b TYPE i,
c TYPE i.
get_val: 'A' a,
'B' b,
'C' c.
TRY.
lcl_triangle=>get_type( a = a
b = b
c = c ).
cl_abap_unit_assert=>fail(
msg = |Expected exception not thrown for invalid triangle - variant { gv_current_variant }|
quit = if_aunit_constants=>no ).
CATCH lcx_invalid_param.
" OK - expected
ENDTRY.
ENDMETHOD.
ENDCLASS.
As you can see, with this approach it is very easy to create parametrized test cases where the data is maintained in an external container. Adding new cases just requires modifying the TDC by adding a new variant.
It proved to be very useful for test cases checking complex logic requiring multiple input sets to be covered.
There are also some challenges with this approach:
- you need to remember to pass quit = if_aunit_constants=>no in assertions, otherwise the test will stop at the first failed variant
- in the ABAP Unit results report only one method is visible, and it does not reflect the number of variants tested
For these challenges I would love to see some improvements in future versions of ABAP Unit, similar to what is available in other xUnit frameworks.
Ideally there should be a way to provide the variants in a declarative way and they should be visible as separate nodes in test run results.
Kind regards,
Tomasz
By Ramesh Vodela
A couple of months back I wrote a blog in the interoperability section (mobile development with C# and Xamarin). I felt it was too technical and wanted to write a blog which can be fun to read but also helps readers, and where readers can participate to help others. I titled this blog SAP Consulting X issues (like the X-Files) as I found some issues quite strange. However, the issues I list here as X issues could be N issues (normal issues) for others. I would really encourage others to declassify my X issues as their N issues (if they have an answer), or to raise new X issues, so that readers can benefit by being aware of some issues and work out a suitable solution or avoid a potentially time-consuming issue.
X1) In 1996 I was given the SAP help CD (my first exposure to SAP; I am a developer) and randomly clicked on a topic, which turned out to be the Special Purpose Ledger (FI configuration). I came to the US in 1997 through a consulting company and went to my first project at Hershey (PA), the Hershey Canada project, to develop Report Painter reports. I was in the FI team. In the first team meeting there was an issue that was becoming critical (to do with multi-currency reporting). Prior to my arrival the team had about 12 possible solutions for it. I suggested using the Special Purpose Ledger to create a ledger with the required data, and this became the 13th solution. The idea was accepted for a trial and I was given a sandbox to try it out. I configured the SPL and could populate all the fields except two, which involved the use of ABAP exits. As a developer I thought this would be easy, since I had already done the configuration, which was not my skill set. I wrote the exits and configured the ABAP program as mentioned in the documentation. But no matter what I did, control did not reach the exit, and hence the two fields could not be populated (batch population was not accepted). The manager was obviously disappointed. Some colleagues used to call me SAP ALL because, although I was a developer, I showed interest in the functional modules; from SAP ALL I came to SAP NONE. After this I went about the job I had come to Hershey for and developed 50 Report Painter reports; the Hershey Canada project went live, and there was a party for the go-live. My project ended; the next phase was Hershey US, which was to start later.
PS1) In late 2001 I was watching CNN and heard that there were problems with the SAP implementation which affected the share price.
PS2) Sydney, 2003: I was asked by a professor in Accounting to configure and document the Special Purpose Ledger. I had the exact same document which I had used at Hershey. I configured the SPL and wrote the exits as well, and the exit worked the very first time, with the exact same steps I had used at Hershey. I was dumbfounded and tried to search for an answer on the net. I am not 100% sure of the accuracy of what I read, which was: "There is a Basis setting that actually makes sure that flow control does not reach the exit." This was a strange finding.
X2) After Hershey my consulting company sent me to another project in Wisconsin (1998). This project was about reporting using the Logistics Information System (could the client put off BW reporting and manage with LIS reporting?). Having faced the exit issue before, I made sure that all the exits were working in my company's system before heading off to Wisconsin. Again in this project I configured LIS and wrote the exits, and again I had the same issue: control was not reaching them. I spoke to the manager and we had decided to raise an OSS message, but before I could do that the exits started working on their own. I find this strange as well.
X3) In 2006 I was doing application development with .NET C# and ABAP services; my ASP.NET screens invoked ABAP services. In one situation I was sending a CHAR30 field to SAP. What I found in the debugger (I could step from ASP.NET into the ABAP code) was that one of the characters in the middle of the string was getting corrupted (it was not the same as the one sent from ASP.NET). This was happening only for one particular function module. I had no explanation, but I could circumvent it by sending another, duplicate variable which was not getting corrupted. I find this very strange.
X4) In 2013 I was developing ABAP in ECC with CRM and PI. Sales order creation starts in CRM and flows to ECC, and I had to make a number of enhancements in ECC to implement some rules. As there were different teams working, and to make troubleshooting easy, I created a Z table that stores some of the values CRM is sending, so that if any issue came up I could classify it as a CRM issue or an ECC issue and the problem could be resolved. To populate the Z table I implemented an enhancement in the function module in ECC which is the first point of entry from CRM into ECC. After a few weeks I found this table was not getting populated, and on closer examination I found that the FM being called (the sales order creation FM) was totally different from the FM that had been called before; the FM where I was populating the table earlier was not being called at all. The Basis people told me, after verifying the system, that they had made no changes. I find this issue strange as well.
If you have experienced such issues, do document them, as it will help others.
Ask a layman what he understands by "Automation" and the most expected answer is "Doing something automatically".
Right!!! When something is done without the intervention of a human, it is automation. And how would you answer "Why automation?" Is it because we trust machines more than humans, or because machines can work tirelessly, or because they can do the same job tenfold faster?
The answer is "All of it and much more".
Automation helps us with all of this. But keep in mind that we are humans, and 'it is human to err'. What if the creator of this unit of automation (in our context, the automated script) does it the wrong way? The "wrong" would also get multiplied, and multiply faster than we can realize something is not right. The whole idea is to do it the right way, and at the very beginning. It is those small things we ignore in the initial stages which later manifest as huge problems when the automation happens at a large scale. Everything multiplies, including the mistakes we have made, and it becomes very difficult to correct them.
This is one of the reasons why some people still prefer manual testing, as they think more time goes in the maintenance and the correction of scripts in addition to their creation and execution.
The power of automation has always been undermined because of the lack of being organized, structured and methodical in its creation. An automated script is best utilized when it is most reliable and re-usable. These two factors contribute towards easy execution (once the scripts are ready), maintenance (whenever there's a change in application) and accuracy of results (when the scripts get executed).
A reliable script can be created only when the tester has a good understanding of the application, its usage and the configurations behind it. This requires a lot of reading and investigation of the application to know how it behaves under a given circumstance. Once this is done, the script can be created such that it handles the application for all possible application flow.
A reusable script truly defines the meaning and purpose of automation. With a perfectly reusable script, further automation of upcoming applications becomes easier and faster. Maintenance is another take away from this attribute of a script. Reusability is a result of standardization of a script in all aspects like structure and naming convention. Let us look at them individually and see how they add to the script’s reusability.
Structure of an automated script: A well-structured script becomes easy to understand and adapt especially to those who take it over from others. It makes the script crisp without any unwanted coding. It is important to strictly limit the script to its purpose and keep only the absolute necessary.
For example, when it comes to the validation part, which can be done in many ways (message check, status check, table check, field check, and so on), it might not be required in every case. Also remember that a DB table check takes extra effort from the script to connect to the system and read the table. One execution may not make a difference, but on a large-scale execution it does matter.
Such additional coding needs to be identified and eliminated. Let us analyze the necessary coding according to the purpose of the test:
1. Functional Correctness: Validation is required before and after the execution of the application to see how the transaction has affected the existing data.
Validation before test --> Execution of tcode under test --> Validation after test
2. Performance Measurement: Performance is considered only after the application has been tested for its functional correctness. Validation has no purpose here as the focus of test is non-functional
Execution of tcode under test
3. Creation of Data for Performance: Usually massive data is required for Performance measurement.
For example, 1,000 customers with 150 line items each… the same could be repeated for vendors, cost centers, and so on. Table checks on this scale of execution would create a huge load on the system, and it would take hours to create such data, maybe even days in some cases. It is best to avoid validation/table reads of any kind. Another point to keep in mind here is that using a function module or a BAPI to create data saves a lot of time and effort. A TCD recording or a SAPGUI recording should only be the last option.
Execution of tcode for data creation --> Validation after test
4. Creation of data for system Setup: this is usually done on a fresh system, with no data. Hence verification only at the end would suffice.
Execution of tcode for data creation --> Validation after test
There is also a subtle aspect of being structured… The Naming Convention.
Testers usually tend to name their scripts to suit their own needs, ignoring the fact that these scripts can be used by anyone in an integrated environment. Searching for existing scripts becomes easy when proper rules are followed while naming them. It may happen that more than one script exists for the same purpose; such duplication has to be avoided. Attributes like the purpose (unit or integration testing, customizing, or performance), the tcode executed, the action (create, change, or delete) and the release need to be kept in mind while setting up the rules.
The same goes for parameters as well. Work becomes easier when binding the called script and the calling script (script references). Quick log analysis is another takeaway from common naming conventions for parameters.
There is another factor that makes automation more meaningful and complete in all sense. That is documentation. Documentation is a record of what exactly is expected of the script. Its importance is realized at the time of hand over, maintenance and adaptation. However ‘Document Creation’ itself can be dealt with as a separate topic. The idea is that document creation should not be disregarded as unimportant.
Having done all this, we need to watch out for the scope of the test. With new functionality getting developed on top of the older functionality (e.g. enhancement of features), re-prioritization needs to be done regularly. Old functionality may not be relevant anymore, or it may be stable enough to be omitted from the focus topics. This way the new features/developments get tested better.
Now let us summarize the write-up. None of the aspects mentioned above is strictly indispensable; automation can still happen without any of these factors. However, the benefits we draw from them can make a huge difference to the time and effort of both automation and maintenance. Understanding a script authored by someone else, knowledge transfer, adaptation, corrections... these are just a few of the advantages.
The world of automation is very vast and its benefits still remain unexplored.
Documentation is an important aspect of scripting. Good documentation should always go hand in hand with the automation script, and it should clearly explain the whole purpose of the script. Moreover, nothing beats having this documentation easily accessible to the user. Normally the documents would be stored in folders on local servers. If the server is down for some reason, these documents are not accessible, and we might even end up losing them if the server crashes.
The reason I'm writing this blog is to create awareness and also share my experience of one of the useful features of eCATT which allows documents (usually eCATT specification/design documents) to be attached to the eCATT script. It provides an option to either attach the document directly or to provide a link to the document. Once the documentation is attached, it is visible from the Test Catalog and also from the eCATT log file. Anybody who executes the eCATT script can easily find the documentation as part of the eCATT log file. This documentation serves as a ready reckoner and a single point of reference for information regarding the script. Therefore it helps the script executor understand what the script does and also troubleshoot any issue faced. Using this feature has helped me maintain the documents effectively and has also freed up local server space; I no longer need to go searching for the script documentation. It has also immensely helped in the easy and effective handover of scripts to new joiners in the team.
Benefits:
Limitations:
Steps to be followed:
1. Call Transaction Code SECATT and give the eCATT Test Configuration name in “Test Configuration” field.
2. Navigate to the “Attributes” tab and then navigate to “Attachments” tab.
3. Attach the document either as a File or as a link at the Test Configuration level. If you have maintained individual documents for each variant within the test configuration, then you can attach the same for each variant.
Hope this information is useful. This has helped me and I am sure that this is going to help you as well.
IMHO, "Business hard coding" is one of the worst and underestimated ABAP programming practice.
Here just an intro to the topic while in a subsequent blog you’ll find useful stuff to get rid of it.
I always considered hard coding really a bad practice but, only recently, I’ve got the real evidence of how much it is used. It happened during the Custom ABAP Code review services we're delivering at TechedgeGroup.
Hard coding requires the program's source code to be adjusted any time the context changes and in business, it happens quite often.
With “Business hard coding” I'm referring to the practice of hard coding strings (literals) corresponding to codes (IDs) related to Organizational Units, Document Types and even Master Data and that is one of the worst kind of hard coding.
Some examples are Company Codes, Purchase Organizations, Sales Organizations and Accounting document types.
I would not be too worried, instead, about "technical hard coding", the practice of hard coding strings corresponding to technical stuff like dictionary objects and output formats (e.g. tables, fields, colors, icons).
In addition, hard-coded strings returned to end users as part of messages, titles and column headers belong to a different bad practice, related to the internationalization (i18n) topic.
For better comprehension, a couple of examples follow.
In the next picture, the method ADDRESS_CONTROLS_IN contains two hard-coded strings used to differentiate the message severity. The first is related to the Company Code and the second to the Purchasing Group. Here hard coding is even used generically to check everything starting with IN*.
I would guess that India has a specific business requirement.
In the next picture, the method MANDATORY_VATCODE contains multiple hard-coded strings to differentiate the message severity. The first is related to the Country Code, then to the Company Code, and my favorite one verifies that the GL Account begins with '004'.
I would guess that Poland has a specific business requirement to be combined with a type of GL Accounts.
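To make the anti-pattern concrete, here is a hedged illustration (the structure, table and message names are invented for the example): the affected organizational units live as literals in the source code, whereas a customizing table or a BRF+ rule would keep them out of it.
" Business hard coding: the relevant company codes are literals in the code,
" so every organizational change forces a code change and a transport.
IF ls_header-bukrs = 'IN01' OR ls_header-bukrs CP 'IN*'.
  MESSAGE w001(zfi_checks).
ENDIF.
" One alternative: keep the relevant company codes in a customizing table
" (ZFI_SEVERITY_BUKRS is an invented name) and let the business maintain it.
DATA lv_relevant TYPE abap_bool.
SELECT SINGLE relevant FROM zfi_severity_bukrs INTO lv_relevant
       WHERE bukrs = ls_header-bukrs.
IF lv_relevant = abap_true.
  MESSAGE w001(zfi_checks).
ENDIF.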
I'm sure most developers will justify the use of business hard coding by explaining that they were in a hurry and there was no time to create new customizing tables or a BRF+ rule. In part they are right; I know that customers (internal or external) often demand very fast results and developers operate accordingly.
I also have evidence that a large number of developers consider business hard coding the only way to go and, let's say, even a good practice.
When discussing with them, to demonstrate they are wrong, I usually point out that in hundreds of millions of lines of standard SAP code there is no occurrence of "business hard coding" (to be honest, with very few exceptions like country codes and partner functions).
Probably the hard-coder (the author) will be proud to show his or her skills in solving issues and adjusting the business hard code only he or she is aware of (lock-in). Even if at customizing level everything is correct and identical to a working scenario, different behaviors of a transaction/report are often due to business hard code.
Time saved during the development phase will lead to much additional effort during the next roll-out or the next merge & split, when business requirements change.
In reality, business hard coding is an acceptable practice in:
Maybe it can also be useful to classify the above exceptions by assigning the objects to specific throw-away packages (development classes), similar to $TMP but transportable to production.
Speaking about serious and productive Custom ABAP Code, I'm sure you want to get rid of Business hard coding as soon as possible.
In modern SAP systems, there are a lot of alternatives to business hard coding, for example:
I'm also going to share very soon the way we use at TechedgeGroup to perform a full scan of your custom ABAP code looking for business hard coding, and I'm very interested to hear your experiences and ideas.
In the previous blog, STOP filling your Custom ABAP Code with Business hard coding, I started a discussion about a popular bad coding practice that affects most SAP ERP systems.
In a week, the blog got more than 2,000 visits and a 5-star rating, and the several interesting comments are even more valuable than the blog itself.
To get rid of business hard code, I'm describing here a way to scan your SAP system (e.g. SAP ECC) and get a clear picture of its occurrences.
With the term Business hard code, I'm referring to the practice of hard coding strings (literals) corresponding to codes (IDs) related to Organizational Units or Document Types and even Master Data. Examples are Company Codes, Purchase Organizations, Sales Organizations, Accounting document types and also Country Codes.
The ABAP Workbench provides a lot of tools to perform source code scanning.
Occurrences of a given literal (e.g. 'IT01') can easily be obtained via the report RS_ABAP_SOURCE_SCAN or the Code Inspector check "Scan for ABAP Tokens". In real life, this is the use case of a split & merge when, for example, a Company Code is going to be merged with another.
Here I try to solve the problem of finding occurrences of any literal that refers to business-related domains, without knowing the values to be found.
Before diving deep into the solution, let me confirm the attitude we have at TechedgeGroup of sharing stuff (for free) on SCN.
First, we are proud of the idea and first implementation of abap2xlsx by Ivan Femia. I think it is one of the most popular SCN projects in terms of downloads, usage and software contributors.
Specifically in the domain of Application Lifecycle Management (ALM), here follows a short list of ideas and tools we have shared with the community over the years:
This time Techedge is sharing with the SCN community the product Doctor ZedGe - Hard!Code that you can get for free without worrying about license or expiration time.
Doctor ZedGe - Hard!Code is the Community Edition of the larger product Doctor ZedGe, which includes an advanced dashboard to analyze ABAP Test Cockpit results and publish them in nice-looking MS Excel reports, and also a specific ABAP program to download the ATC results, including the statements with issues, to MS Excel.
So, at the bottom of the page Doctor ZedGe | Techedge you will find instructions to order Doctor ZedGe. Simply ask for the Community Edition. You'll soon receive the comprehensive documentation and the complete source code, installable via a simple copy & paste.
This time we decided to distribute Doctor ZedGe - Hard!Code from our Techedge web site, and not only because Code Exchange has been closed.
Indeed, since we are delivering the software, we can assure enterprises that it is secure, well developed and well documented. We'll also provide technical support in case of issues.
Thanks to the step-by-step guides you will get something like the following ATC result in less than an hour.
Or, if you prefer, here is the result in ABAP in Eclipse (AiE):
As you know, the ABAP Test Cockpit provides a handy statistics overview (top), the worklist (middle) and the finding detail (bottom). Navigation to the code (picture on the right) is a click away.
The idea of this custom Code Inspector check is first to get the hard-coded string (literal), then discover the corresponding operand (context) and recognize if it refers to a Business entity.
As you know, ABAP syntax is very flexible and the challenge is determining the context (the related operand) of a given literal. In the above example, the related operand of the literal '3000' is the field LT_FILE-PLANT2. This time it is on the left of the operator '='.
In case of '3200' it is instead GT_FILE-WERKS that is on the right of the operator '='.
CASE and WHEN are even more challenging:
In the above example, the context of both 3000 and 3200 is to be found jumping back to the CASE statement to identify LT_FILE-PLANT2 as context (related operand).
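A short, hedged illustration of the three situations just described (the variable names follow the screenshots above):
" Operand on the left of '=': the context of '3000' is LT_FILE-PLANT2.
IF lt_file-plant2 = '3000'.
  " ...
ENDIF.
" Operand on the right of '=': the context of '3200' is GT_FILE-WERKS.
IF '3200' = gt_file-werks.
  " ...
ENDIF.
" CASE/WHEN: the context is only found by jumping back to the CASE statement.
CASE lt_file-plant2.
  WHEN '3000'.
    " ...
  WHEN '3200'.
    " ...
ENDCASE.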
Literals are strings and represent the hard coding Anti-pattern.
The Code Inspector check Doctor ZedGe - Hard!Code includes a Literal length parameter defaulted to consider those with length between 2 and 18.
In this version, the analysis considers the following statements, which cover most of the scenarios:
It’s easy to catch hard coded strings (literals) but you'll get a huge number of false positives that make the scan unusable.
It is not so easy, however, to distinguish those related to business entities. After scanning millions of lines of code, we believe that Doctor ZedGe - Hard!Code can identify around 95% of the business hard coding related to the following set of critical domains:
Domain | Description |
BUKRS | Company Code |
WERKS | Plant |
EKORG | Purchase Org. |
VKORG | Sales Organization |
VTWEG | Distribution Channel |
SPART | Division |
LGORT | Storage Location |
GSBER | Business Areas |
WAERS | Currency |
LAND1 | Countries |
MSEHI | Unit of measure |
PARVW | Partner Function |
KTOKD | Customer account groups |
KTOKK | Vendor account groups |
MTART | Material Type |
AUART | Sales Document Type |
LFART | Delivery Type |
FKART | Billing Type |
BSART | Purchasing Document Category |
BWART | Movement Type |
VSBED | Shipping conditions |
PSTYP | Item Category in Purchasing Document |
PSTYV | Sales document item category |
KSCHL | Condition Type |
BLART | Document Type (FI) |
PLTYP | Price list type |
MATNR | Material |
KUNNR | Customer |
LIFNR | Vendor |
SKA1 | G/L Account Master (Chart of Accounts) |
BELNR | Accounting Document Number |
Note that, should you need it, it is very easy to extend the code to take other domains into account.
Keep in mind that, since we target an installation performed via one copy & paste (one ABAP class), we avoided in this version the use of tables to define the list of domains to be analyzed. We'll see in the future whether that makes sense.
In addition, since Doctor ZedGe - Hard!Code leverages the power of the ABAP Test Cockpit and Code Inspector, it also suffers from the same known limitations:
Doctor ZedGe - Hard!Code could provide value not only to Developers and Quality Managers but also to Functional specialists, Team leaders, Project managers and even IT managers.
A list of possible use cases follows:
SAP has been doing some really good work upgrading its tools. We have recently upgraded to SAP_ABAP 740. I'm an advocate of ABAP Unit testing, and this upgrade gave me the opportunity to try an example with the new Test Double Framework. Prajul Meyana's ABAP Test Double Framework - An Introduction says that the new framework is available from SP9. We're on SP8, but I couldn't wait to test-drive this, so I started poking around. One of my colleagues pointed out that CL_ABAP_TESTDOUBLE is delivered with the release. YEY!
Below is an example of behavior verification using the framework, and it appears to work. Maybe later I'll make a much simpler cut; at this stage I just wanted to run it through a real-life example within our code base.
Below is my application code. It's a simple custom service implementation to create Chart of Authority records for OpenText Vendor Invoice Management. (It's not relevant here, but note that we use FEH to manage exceptions for enterprise service errors. Maybe I can show a test of that exception in a later blog.)
Further below is one of my test classes with one of the test methods implemented.
The test double framework does three important things in this example.
I use a factory implementation to inject test doubles. Some of you won't like it; I understand that, and I hope it doesn't distract from the intent. Have fun. As I mentioned when my colleague Custodio de Oliveira pointed out that it's available: "Let's break it".
App Code:

METHOD zif_feh~process.

  DATA: lo_coa_user    TYPE REF TO zif_opentext_vim_coa_user,
        lo_cx_opentext TYPE REF TO zcx_opentext_service,
        lo_cx_coa_user TYPE REF TO zcx_opentext_service,
        ls_main_error  TYPE bapiret2,
        lt_coa_details TYPE zopentext_coa_details_tt,
        ls_coa_details TYPE LINE OF zopentext_coa_details_tt,
        lv_manager_id  TYPE /ors/umoid,
        lv_max_counter TYPE /opt/counter.

  FIELD-SYMBOLS: <ls_process_coa_details> TYPE LINE OF zopentext_coa_detl_process_tt.

  me->_s_process_data = is_process_data.

  TRY.
      TRY.
          lo_coa_user = zcl_vim_coa_user_factory=>get_instance( )->get_coa_user(
                            iv_windows_id        = _s_process_data-windows_id
                            iv_active_users_only = abap_false ).
          lv_manager_id = lo_coa_user->get_manager_id( ).
        CATCH zcx_opentext_service INTO lo_cx_coa_user.
          CLEAR lv_manager_id.
      ENDTRY.

      " Record removed in ECC if not in validity date
      LOOP AT _s_process_data-coa_details[] ASSIGNING <ls_process_coa_details>
           WHERE start_date <= sy-datum AND end_date >= sy-datum.
        ADD 1 TO lv_max_counter.
        ls_coa_details-counter        = lv_max_counter.
        ls_coa_details-expense_type   = <ls_process_coa_details>-expense_type.
        ls_coa_details-approval_limit = <ls_process_coa_details>-approval_limit.
        ls_coa_details-currency       = <ls_process_coa_details>-currency.
        " Functional requirement in ECC to set CoCode to *. Assumption: From corp - 1 user = 1 co code
        ls_coa_details-bukrs          = '*'.
        ls_coa_details-kostl          = '*'.
        ls_coa_details-internal_order = '*'.
        ls_coa_details-wbs_element    = '*'.
        " For new entries, Manager Id is the same as that on existing COA entries for the user.
        ls_coa_details-manager_id     = lv_manager_id.
        APPEND ls_coa_details TO lt_coa_details.
      ENDLOOP.

      " Ignore the message
      IF ( lo_cx_coa_user IS NOT INITIAL OR lo_coa_user->is_deleted( ) ) " The user is deleted or does not exist
         AND lt_coa_details IS INITIAL.                                  " AND all the inbound records are deletions
        RETURN. " Ignore transaction - finish ok.
      ENDIF.

      " Raise missing user
      IF lo_cx_coa_user IS NOT INITIAL.
        RAISE EXCEPTION lo_cx_coa_user.
      ENDIF.

      " Updates
      IF lo_coa_user->is_deleted( ).
        " User &1 is deleted. COA cannot be updated.
        " **** ZCX_FEH EXCEPTION RAISED HERE *****
      ENDIF.

      lo_coa_user->set_coa_details( lt_coa_details[] ).
      lo_coa_user->save( ).

    CATCH zcx_opentext_service INTO lo_cx_opentext.
      " **** ZCX_FEH EXCEPTION RAISED HERE *****
  ENDTRY.

ENDMETHOD.
Local Test Class:

CLASS ltc_process DEFINITION FOR TESTING DURATION SHORT RISK LEVEL HARMLESS FINAL.

  PRIVATE SECTION.
    METHODS: setup.
    METHODS: test_2auth FOR TESTING.
*    METHODS: test_2auth_1obsolete FOR TESTING.
*    METHODS: test_missinguser_coadeletions FOR TESTING.
*    METHODS: test_update_on_deleted_user FOR TESTING.
*    METHODS: test_opentext_error FOR TESTING.

    DATA:       mo_coa_user                   TYPE REF TO zif_opentext_vim_coa_user.
    CLASS-DATA: mo_coa_user_factory           TYPE REF TO zif_vim_coa_user_factory.
    DATA:       mo_si_opentext_delegauth_bulk TYPE REF TO ycl_si_opentext_coa.

ENDCLASS.

CLASS ltc_process IMPLEMENTATION.

  METHOD setup.
    mo_si_opentext_delegauth_bulk ?= ycl_si_opentext_coa=>s_create(
        iv_context = zcl_feh_framework=>gc_context_external ).
  ENDMETHOD.

  METHOD test_2auth.
*----------------------------------------------------------------------*
* This tests the scenario where the user has 2 authority records       *
* and both are saved properly.                                         *
*----------------------------------------------------------------------*
    DATA ls_process_data        TYPE zopentext_deleg_auth_process_s.
    DATA ls_coa_details_process TYPE zopentext_coa_detl_process_s.

    DATA lt_coa_details TYPE zopentext_coa_details_tt.
    DATA ls_coa_details TYPE LINE OF zopentext_coa_details_tt.

    " Configure the test double call to manager id
    mo_coa_user ?= cl_abap_testdouble=>create( 'ZIF_OPENTEXT_VIM_COA_USER' ).
    cl_abap_testdouble=>configure_call( mo_coa_user )->returning( 'WILLIA60' ).
    mo_coa_user->get_manager_id( ).

    " Expected results
    ls_coa_details-counter        = 1.
    ls_coa_details-currency       = 'NZD'.
    ls_coa_details-approval_limit = 200.
    ls_coa_details-expense_type   = 'CP'.
    ls_coa_details-bukrs          = '*'.
    ls_coa_details-kostl          = '*'.
    ls_coa_details-internal_order = '*'.
    ls_coa_details-wbs_element    = '*'.
    ls_coa_details-manager_id     = 'WILLIA60'.
    APPEND ls_coa_details TO lt_coa_details.

    ls_coa_details-counter        = 2.
    ls_coa_details-currency       = 'NZD'.
    ls_coa_details-approval_limit = 300.
    ls_coa_details-expense_type   = 'SR'.
    ls_coa_details-bukrs          = '*'.
    ls_coa_details-kostl          = '*'.
    ls_coa_details-internal_order = '*'.
    ls_coa_details-wbs_element    = '*'.
    ls_coa_details-manager_id     = 'WILLIA60'.
    APPEND ls_coa_details TO lt_coa_details.

    " Configure the expected behavior of set_coa_details( )
    cl_abap_testdouble=>configure_call( mo_coa_user )->and_expect( )->is_called_times( 1 ).
    mo_coa_user->set_coa_details( lt_coa_details ).

    " Inject the test double into the factory which will be used inside the method under test.
    TRY.
        zcl_vim_coa_user_factory=>get_instance( )->set_coa_user( mo_coa_user ).
      CATCH zcx_opentext_service ##no_handler.
    ENDTRY.

    " SETUP - INPUTS to the method under test
    ls_process_data-windows_id = 'COAUSER'.

    ls_coa_details_process-currency       = 'NZD'.
    ls_coa_details_process-approval_limit = 200.
    ls_coa_details_process-expense_type   = 'CP'.
    ls_coa_details_process-bukrs          = '1253'.
    ls_coa_details_process-start_date     = '20060328'.
    ls_coa_details_process-end_date       = '29990328'.
    APPEND ls_coa_details_process TO ls_process_data-coa_details.

    ls_coa_details_process-currency       = 'NZD'.
    ls_coa_details_process-approval_limit = 300.
    ls_coa_details_process-expense_type   = 'SR'.
    ls_coa_details_process-bukrs          = '1253'.
    ls_coa_details_process-start_date     = '20060328'.
    ls_coa_details_process-end_date       = '29990328'.
    APPEND ls_coa_details_process TO ls_process_data-coa_details.

    " EXECUTE the method under test
    TRY.
        mo_si_opentext_delegauth_bulk->zif_feh~process( is_process_data = ls_process_data ).
      CATCH zcx_feh ##no_handler.
    ENDTRY.

    " Verify interactions on the test double
    cl_abap_testdouble=>verify_expectations( mo_coa_user ).

  ENDMETHOD.

ENDCLASS.
Some Test Tools available in SAP_ABA 740
Test Summary - 1 test method successful
Test Coverage - only 1 test, so it's pretty poor
Test Coverage - lots of untested code in red!
(Sorry, Eclipse fans: I re-flashed my PC to 64-bit and haven't had a chance to re-install my Eclipse tools yet. Those coverage tools are available there too!)
We faced a weird issue today in the production system that we hadn't faced in quality. The scenario: we had to send the location information of vendors from ECC to an external system. For this we use the message type CREMAS with the basic type CREMAS05, extended with an extension containing a custom segment and custom fields. The change pointers were configured for these custom fields as well, and when the IDocs were triggered using BD21 for CREMAS after changing a vendor, everything worked fine in the quality system.
However, when we moved the custom code, i.e. the enhancement implementation in the function module MASTERIDOC_CREATE_CREMAS, along with the change pointers and all the other configuration (partner profiles etc.) to the production system, it did not work as expected.
The difference turned out to be the filters: either no filters were maintained in the quality system, or the IDocs happened to work fine with the filters maintained there, but they did not work with the filters maintained in the production system.
The scheduled background job of the program RBDMIDOC was failing with the error saying that the custom segment created using the extension does not exist.
Exact Error message: "Segment <our Y custom segment name> does not exist for message type CREMAS"
Although when we checked, all the transports had happened correctly and we were able to view the custom segment in WE30 in the production system.
Then we checked the partner profile too to see if the extension had been missed, but no, even that was maintained correctly.
After scratching our heads for a few days and trying everything under the sun, we figured out that it was the filters in the distribution model that were causing the issue: on removing the filters, the IDocs were triggered fine. Having narrowed it down to the filters, we searched SDN and the internet in general and stumbled across a few posts suggesting it had something to do with conversion routines and the like. After a lot of trial and error with the various solutions we found, the one that worked for us was to pass the name of the extension containing the custom segment to the function module MASTER_IDOC_DISTRIBUTE, which is called at the end of the function module MASTERIDOC_CREATE_CREMAS that we were using.
The work area F_IDOC_HEADER contains the field CIMTYP, which needs to be populated with the name of the extension created for the standard IDoc.
So, on adding one line of code:
F_IDOC_HEADER-CIMTYP = 'YR1UMMCREMAS05'.
before the line
CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
solved our problem.
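For readers who want to see the placement in context, here is a minimal sketch of the enhancement at the end of MASTERIDOC_CREATE_CREMAS. The extension name is the one from our system; the internal table variables passed to MASTER_IDOC_DISTRIBUTE are illustrative names, not necessarily the exact variables used in the standard function module:

" Enhancement implementation at the end of MASTERIDOC_CREATE_CREMAS.
" Populate the IDoc extension name so that the custom segment is accepted
" even when filters are maintained in the distribution model.
f_idoc_header-cimtyp = 'YR1UMMCREMAS05'.          " our extension of CREMAS05

CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
  EXPORTING
    master_idoc_control        = f_idoc_header
  TABLES
    communication_idoc_control = t_comm_control   " illustrative variable name
    master_idoc_data           = t_idoc_data.     " illustrative variable name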
Now, the CREMAS IDoc started flowing fine even with the filters for company codes and purchasing organizations maintained in the distribution model.
ANST (Automated Notes Search Tool) is a powerful tool that helps you search for SAP Notes for issues you encounter in your SAP system. The tool is now part of the SAP standard and has been of great use to end customers, partners and development teams; in this blog I explore the possibilities of using it from a testing point of view, for quality engineers.
Before doing any scenario testing and test automation, it is most important to ensure that the required customizing is correct and complete to support the execution of the test case. This tool can be of great help in ensuring that and in achieving more effective testing.
Let's start from the point when we design a test case: it is very important to define the prerequisite steps, including the required customizing, correctly here. One way of finding the important customizing tables involved in process testing is to ask development colleagues or the application responsible; however, in that case we depend on the correctness and completeness of the information provided.
Here is another way to do it using ANST, to find the right tables/views and ensure all customizing is in place before scenario testing. ANST can list all the tables used in a particular test execution. Before automating, manually perform each test step while running ANST to get maximum coverage of the tables that may impact the test execution. The trace captures all the tables touched in the different components during the test execution. After capturing them, you can select the area you want to test or automate and navigate from there to the corresponding tables/views.
With this, you can make sure all of the required customizing is included in your automation script and avoid customizing errors during test execution. The tool brings all the customizing tables for a scenario or transaction under test together in one place.
Let's do this with a simple scenario where a user wants to create a warranty claim using transaction WTY:
Steps:
Log in to the test system and start transaction ANST. Enter the transaction you are testing and a description.
I suggest giving a meaningful description, as this will help you find the trace later if needed.
After executing, the tool takes you to the transaction screen. Enter the necessary parameters and perform the transaction.
On completion of the transaction, click the Customizing Tables button on the screen below.
The next screen shows all the tables touched during this test. There are component-specific table lists as well, and the important tables can be scanned for data checks.
You can double-click a particular table to navigate to its details. With this analysis, you can decide which customizing steps should be included as prerequisite steps in the test automation script for this transaction.
You can also check the trace later by opening it with the description saved earlier, as below:
Hope this will help in designing automated tests better. If you want to know more about ANST, you can refer to some of the other blogs on this:
What is ANST....and why aren't you using it?
The power of tools - How ANST can help you to solve billing problems yourself!
You need to exchange an ST12 trace with your counterpart (e.g. SAP Support).
You have created traces in transaction ST12 as described here:
Single Transaction Analysis (ST12) – getting started[http://scn.sap.com/community/abap/testing-and-troubleshooting/blog/2009/09/08/single-transaction-analysis-st12-getting-started]
or here
ST12 – tracing user requests (Tasks & HTTP) [http://scn.sap.com/community/abap/testing-and-troubleshooting/blog/2010/03/22/st12-tracing-user-requests-tasks-http]
To store your trace in a file, perform the following steps:
5. Select menu: Download -> Text file download -> Export to frontend
6. Enter file name and format (leave ASC), click Transfer.
To upload the trace from the file, perform the following steps:
3. Select your file (e.g. D:\trace1.trc) and click Open
4. Click "Yes" on Import Analysis popup.
Hint: When exchanging trace files, don't forget to compress them. You can use RAR or ZIP archivers for that.
It is common knowledge that buffering of database tables improves the system performance, provided the buffering is done judiciously – i.e. only those tables that are read frequently and updated rarely are buffered. But how exactly can we determine if a table is read frequently or updated rarely?
Also, the state of a buffered table in the buffer area is a runtime property that keeps changing with time. How can an ABAP developer know whether a buffered table actually exists in the buffer at a given time instant? This is a critical question to consider while analyzing the performance of queries on buffered tables.
This blog post attempts to answer the above questions.
This blog post is divided into 3 sections and structured as follows:
You might find the blog to be slightly lengthy but the content will NOT be more than what you can chew. Trust me!
Section 1: Prerequisites (Recap of Table Buffering Fundamentals and its Mechanism)
Buffering is the process of storing table data (which always remains in the database) temporarily in the RAM of the application server. Buffering is specified in the technical settings of a table's definition in the DDIC.
The benefits of buffering are:
The buffering mechanism can be visualized in Figure 1 below:
Figure 1: Buffering Mechanism
The SAP work processes of an application server have access to the SAP table buffer. The buffers are loaded on demand via the database connection. If a SELECT statement is executed on a table selected for buffering, the SAP work process initially looks up the desired data in the SAP table buffer. If the data is not available in the buffer, it is loaded from the database, stored in the table buffer, and then copied to the ABAP program (in the internal session). Subsequent accesses to this table would fetch the data from the buffer and the query need not go to the database to fetch it.
It must be understood that RAM space in the application server is limited. Let’s say – dbtab1 is a buffered table whose data is present in the buffer. When there is a query on another buffered table - dbtab2, its data will have to be loaded into the buffer. This might result in the data of dbtab1 getting displaced from the buffer.
When there is a write access to a buffered table, the change is done in the database and the old table data which is present in the buffer (of the application server from which the change query originated) is just flagged as “Invalid”. At this instant, the buffer and the database hold different data for the same table. A subsequent read access to the table would initiate a reload of the table data from the database to the buffer. Now the buffer holds the same data as the database.
Buffering a table that gets updated very frequently might actually end up increasing the load on the DB and increasing the network traffic between the application layer and the database. This would slow down the system performance and defeat the purpose of buffering.
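To make the read path concrete, here is a small illustrative sketch. It assumes a fully buffered standard table such as T001 (company codes) and a made-up company code value: the first SELECT can be satisfied from the SAP table buffer once the table has been loaded, while the BYPASSING BUFFER addition forces the read to go to the database.

DATA ls_t001 TYPE t001.

" Served from the SAP table buffer (the buffer is loaded on the first access)
SELECT SINGLE * FROM t001
  INTO ls_t001
  WHERE bukrs = '1000'.

" Same read, but deliberately bypassing the table buffer - goes to the database
SELECT SINGLE * FROM t001 BYPASSING BUFFER
  INTO ls_t001
  WHERE bukrs = '1000'.

Comparing the two variants in an ST05 trace (SQL trace vs. buffer trace) is a quick way to observe the behavior described above.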
Key Takeaways from Section 1: a table is a good candidate for buffering only if it is
(a) Read frequently
(b) Updated rarely
(c) Small in data volume
Section 2: How to use the Table Call Statistics Transaction
This is accessed by the Tcode – ST10. The following is the initial screen:
Figure 2: ST10 - Initial Screen
A few points may be noted in Figure 2:
Let’s explore the results returned by the transaction when the radiobutton – “Not Buffered” is chosen.
Figure 3: Results of ST10 when the radio buttons - “Non-Buffered”, “This Server” and “From startup” are chosen
Let me explain the significance of each column –
Total = Direct Read + Seq. Reads + Changes.
Let’s explore the results when the radio button – “Generic Key Buffered” is chosen:
Figure 4: Results of ST10 when the radio buttons - “Generic Key Buffered”, “This Server” and “From startup” are chosen
There are some new columns here which were not present in Figure 3.
The Buffering Type column can take the values:
(a) SNG – Single Record Buffered Table
(b) FUL – Fully Buffered Table
(c) GEN – Generic Area Buffered Table
The Buffer State column can take the values:
(a) VALID – The table content in the buffer is valid. Read accesses take place in the buffer.
(b) ABSENT – The table has not been accessed yet, so the buffer has not yet been loaded with its data.
(c) DISPLACED – The table content has been displaced from the buffer.
(d) INVALID – The table content is invalid and there are open transactions that modify the table content. Read accesses take place in the database.
(e) ERROR – The table content could not be placed in the buffer because of insufficient space.
(f) LOADABLE – The table content in the buffer is invalid, but it can be loaded at the next access.
(g) MULTIPLE – Relevant only for Generic Area Buffered Tables: the generic areas have different buffer states.
NOTE: All the table buffers in the current application server can be cleared by entering the Tcode- “/$TAB”.
Note that the user can toggle between one result set and another by using the buttons in the Application Toolbar (as shown in Figure 5):
Figure 5: Application Toolbar of the primary list screen of ST10.
Figure 6: Secondary List
Section 3: Interpreting the results of the Table Call Statistics Transaction to answer the questions posed above.
How to determine whether a non-buffered table is suited to being buffered? Look for:
(a) Low change rate (under 0.5%) – see the worked example after this list
(b) High number of reads (Direct Reads + Seq. Reads)
(c) Data volume that is not too large
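As a quick worked example (the numbers are made up for illustration): a table with 200,000 total calls, of which 400 are changes, has a change rate of 400 / 200,000 = 0.2%. That is below the 0.5% guideline, so the table remains a buffering candidate, provided its read volume is high and its data volume is small.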
If it is to be buffered, what should be its buffering type?
How to determine the efficiency of the buffer setting of already buffered tables?
NOTE: Ensure that the time frame for which the transaction is run is long enough that all the relevant reports/applications were executed and all business scenarios occurred within that period. Only then can this transaction guide us effectively in deciding which tables' buffer settings are to be altered.
Case Study:
Based on the above guidelines, let’s consider some examples in Figure 7, which shows the Non-Buffered Tables:
Figure 7: List of accesses to non-buffered tables.
I would like to draw your attention to the three tables enclosed by a green rectangle. Based on the trends for these three tables, we can tentatively conclude that:
The above points are not final decisions but just guidelines. Other aspects like data volume, size category, access frequency etc. are to be considered.
How can an ABAP developer know whether a buffered table actually exists in the buffer at a given time instant?
Figure 8: Buffer State of TSTC table after clearing the buffers using - /$TAB.
DATA: GW_TSTC TYPE TSTC.
CONSTANTS: C_SE38 TYPE TSTC-TCODE VALUE 'SE38'.
SELECT SINGLE *
FROM TSTC
INTO GW_TSTC
WHERE TCODE = C_SE38.
Figure 9: Buffer State of TSTC table after the above code snippet is run
Figure 10: ST05-SQL Trace when the above code snippet is run for the first time. Data is fetched from database.
Figure 11: ST05-Buffer Trace when the above code snippet is run for the second time. Data is fetched from buffer.
Conclusion:
ST10 is a very useful transaction that can guide you in answering the following questions:
References:
[1] Gahm, H., “Chapter 3 – Performance Analysis Tools,” ABAP Performance Tuning, 1st ed., Galileo Press, Boston, 2010, pp. 51-54.