On Red Hat Linux, open /etc/ssh/sshd_config and make sure the line below is present and uncommented:
# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server
Then restart the service:
/etc/init.d/sshd restart
(On systemd-based releases such as RHEL 7, use systemctl restart sshd instead.)
Thursday, 19 March 2015
Friday, 6 March 2015
nmon, a fundamental tool for the Performance Engineer
To check and track system resource usage in terms of CPU, memory, I/O, kernel, disk, and network, it is fundamental to have the nmon tool installed on all servers. It was an internal IBM project for many years. On AIX, nmon is a native command; for Linux it has been released as open source under the GPL.
For Linux you can download it here (and untar it), or install it directly with this command:
#sudo apt-get install nmon
Approach:
The approach of a performance engineer is to track resource consumption with nmon for the entire time window of the performance test execution, so start nmon on all the servers involved in the test. If the test runs for one hour, I usually run the command with these options:
Linux:
#nmon -lfT -s 10 -c 360 <collects for 1 hour>
AIX:
#nmon -lMPT -fT -s 10 -c 360 <collects for 1 hour>
(-s 10 -c 360 means a 10-second interval for 360 snapshots: 3600 seconds, i.e. 1 hour.)
Linux options:
-l <dpl> disks/line, default 150, to avoid spreadsheet issues (EMC=64)
-T as -t plus saves command-line arguments in the UARG section
-f spreadsheet output format
-s <seconds> between snapshots
-c <number> of refreshes (snapshots)
AIX options:
-l Specifies the number of disks to be listed on each line. By default, 150 disks are listed per line. For EMC disks, specify a value of 64.
-M Includes the MEMPAGES section in the recording file. The MEMPAGES section displays detailed memory statistics per page size.
-P Includes the Paging Space section in the recording file.
-T Includes the top processes in the output and saves the command-line arguments into the UARG section. You cannot specify the -t, -T, or -Y flags with each other.
-f Specifies that the output is in spreadsheet format. By default, the command takes 288 snapshots of system data with an interval of 300 seconds between each snapshot. The name of the output file is in the format of hostname_YYMMDD_HHMM.nmon.
-s Specifies the interval in seconds between 2 consecutive recording snapshots.
-c Specifies the number of snapshots that must be taken by the command. The default value is 10000000.
Analyze the output:
After recording the nmon samples you can display them using the tools below (for a quick scripted check, see the sketch after the list):
- nmon analyser, an Excel spreadsheet (download here)
- NMONVisualizer, a Java application (download here)
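If you just need a quick number out of a recording before opening the analyzers, the .nmon file is plain CSV and easy to scan. Here is a minimal Java sketch, assuming the usual CPU_ALL data layout (CPU_ALL,Txxxx,User%,Sys%,Wait%,Idle%,...); verify the column order against the CPU_ALL header line of your own file:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Scans a .nmon recording and prints the peak User% seen in CPU_ALL samples.
public class NmonPeakCpu {
    public static void main(String[] args) throws IOException {
        double peak = 0.0;
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            // Data rows look like: CPU_ALL,T0001,<User%>,<Sys%>,<Wait%>,<Idle%>,...
            // (the CPU_ALL header row does not carry a Txxxx tag, so it is skipped)
            if (line.startsWith("CPU_ALL,T")) {
                String[] cols = line.split(",");
                peak = Math.max(peak, Double.parseDouble(cols[2]));
            }
        }
        in.close();
        System.out.println("Peak User%: " + peak);
    }
}

Run it with the recording as the argument, e.g. java NmonPeakCpu myhost_150306_1200.nmon.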
Labels: AIX, linux, nmon, performance
Assign AIX cores to WebSphere Application Server
On AIX, when you create a cluster of WebSphere Application Server instances, you can run each application server (Java process) on a specific core (processor).
Example:
You have a cluster node with 4 application servers on an AIX box with 4 processors, and you want to assign 1 processor to each Java process.
# prtconf | grep Processor
Processor Type: PowerPC_POWER7
Processor Implementation Mode: POWER 7
Processor Version: PV_7_Compat
Number Of Processors: 4
Processor Clock Speed: 3300 MHz
Model Implementation: Multiple Processor, PCI bus
+ proc0 Processor
+ proc4 Processor
+ proc8 Processor
+ proc12 Processor
SMT is Simultaneous Multithreading mode. The smtctl output below shows that each core (processor) can run 4 SMT threads, so each processor appears as 4 "virtual" processors (logical CPUs):
#smtctl
This system is SMT capable.
This system supports up to 4 SMT threads per processor.
...
proc0 has 4 SMT threads.
Bind processor 0 is bound with proc0
Bind processor 1 is bound with proc0
Bind processor 2 is bound with proc0
Bind processor 3 is bound with proc0
Since each core exposes 4 SMT threads, logical CPUs 0-3 map to proc0, 4-7 to proc4, 8-11 to proc8 and 12-15 to proc12 (the arithmetic is sketched after the commands). Then the commands to bind each JVM to its core:
#execrset -F -c 0-3 -e AppServer_dir/profiles/profileName/bin/startServer.sh server1
#execrset -F -c 4-7 -e AppServer_dir/profiles/profileName/bin/startServer.sh server2
#execrset -F -c 8-11 -e AppServer_dir/profiles/profileName/bin/startServer.sh server3
#execrset -F -c 12-15 -e AppServer_dir/profiles/profileName/bin/startServer.sh server4
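The -c ranges above are just arithmetic on the SMT mapping: with 4 SMT threads per core, core number i owns logical CPUs i*4 through i*4+3. A tiny sketch that reproduces the ranges used in the commands above:

// Prints the logical-CPU range owned by each of the 4 cores
// when every core runs 4 SMT threads.
public class SmtRanges {
    public static void main(String[] args) {
        int smtThreads = 4;
        for (int core = 0; core < 4; core++) {
            int first = core * smtThreads;
            int last = first + smtThreads - 1;
            System.out.println("server" + (core + 1) + " -> -c " + first + "-" + last);
        }
    }
}

It prints server1 -> -c 0-3 up to server4 -> -c 12-15, matching the execrset calls.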
IBM core terminology:
http://www-01.ibm.com/software/passportadvantage/pvu_terminology_for_customers.html
Other commands to get useful information on processors:
lparstat -i
topas -C
mpstat
How to enable the license on IBM Optim Query Tuner for the Index Advisor
If you have installed IBM Optim Query Tuner for DB2, you have to enable the product license before you can run the index advisor on a query; otherwise you get errors running the tool.
Enable SQL debug with IBM SmartCloud Control Desk
There are two methods to check the SQL statements executed; you can use both for troubleshooting purposes.
1) ref: http://www-01.ibm.com/support/docview.wss?uid=swg21291250
Set the logger to DEBUG to log the SQL statements:
property name: log4j.logger.maximo.sql value: DEBUG
Set this system property to decrease the time limit, so that more statements are logged:
property name: mxe.db.LogSQLTimeLimit
To check the SQL logged, open SystemOut.log in the WebSphere profile.
2) ref: http://www-01.ibm.com/support/docview.wss?uid=swg21577811
Enable Maximo Activity Dashboard:
property name: mxe.webclient.activitydashboard
Description: Maximo Activity Dashboard (PerfMon)
Global Value: true
Maximo Default: false
Online Changes Allowed?: CHECKED
Live Refresh?: CHECKED
To check the SQL executed open:
http://<host>:<port>/maximo/webclient/utility/profiler/PerfMon.jsp
Labels: debug, maximo, sccd, sql, sql statements
Tuesday, 3 March 2015
IBM Rational Performance Tester fundamentals
Rational Performance Tester (a.k.a. RPT) gives the performance tester the ability to record the HTTP interactions with a web interface, registering all HTTP requests and responses of the scenario whose performance needs to be tested with a certain number of concurrent users.
The main topics that I think are really important, and that I'm going to talk about here, are:
1. RPT script
I suggest having very clearly in mind the scenario you want to record: first try it manually a couple of times and take notes of the steps. Then record while navigating the web UI of your application, inserting RPT notes before each click step; finally, after the recording phase, you can start the interesting part: customizing the generated RPT script.
2. Datapool
The datapool contains the variables that need to be passed at runtime during the execution of the test script. A typical example of datapool content is the login username and password you enter at the very first step of every enterprise web interface. To pass a datapool value you have to create a substitution on the data, the headers, or the URL of the HTTP request. The values of the datapool can be fetched and passed with different policies (random, sequential, wrapped or not).
3. Correlation
The most important thing is that RPT automatically correlates the attributes in the URLs, POST data, headers, etc., creating a chain of HTTP requests linked with one or more of the previous HTTP responses. This point is really important to understand. During the development of an RPT script, after recording, it often happens that you have to work manually and change or parametrize some correlations. A correlation consists of two parts:
- the reference, which is defined on a piece of the HTTP response
- the substitution, which is defined on a piece of the HTTP request (URL, header, POST data)
For example, with the first HTTP request RPT creates two references, one to the host of the opened URL and another to the port. These particular references are grouped in the Server Connection variables container. Then, for each recorded request, the first part of the URL is substituted with the host reference variable and the port one. This is nice because you can switch to another host simply by changing the variable values.
References and substitutions can also be created manually where you need them. There is also one important reference type you can create, called a "field reference": basically a variable that refers to the whole content of a response. This is useful when you have to customize your script and pass this content to some custom code that has to process it, or simply because you want to display that content for debugging.
4. Custom code
Sometimes you need to introduce some additional processing in order to pass to some HTTP request specific parameters calculated by particular logic.
For example, if you have a table of elements and the user wants to select one row randomly, you first have to process, with a Java class, the HTTP response that displays the whole table, in order to extract the identifier of each row; then, with custom code using the Java random API, you can return the identifier of the row to be passed to the next HTTP request, in the URL or in the POST data, depending on how the HTTP request that selects the row is built.
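A minimal sketch of such a custom-code class follows. It assumes the whole HTML response arrives as args[0] through a field reference; the class name and the row-id pattern are hypothetical (adapt the regular expression to your actual markup), while ICustomCode2 and ITestExecutionServices are the standard RPT custom-code interfaces.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.ibm.rational.test.lt.kernel.custom.ICustomCode2;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

// Picks one row identifier at random from an HTML table response.
public class SelectRandomRow implements ICustomCode2 {

    // Hypothetical markup: rows like <tr id="row_42" ...>; adapt to your table.
    private static final Pattern ROW_ID = Pattern.compile("<tr[^>]*id=\"row_(\\d+)\"");

    private final Random random = new Random();

    public String exec(ITestExecutionServices tes, String[] args) {
        List<String> ids = new ArrayList<String>();
        Matcher m = ROW_ID.matcher(args[0]);
        while (m.find()) {
            ids.add(m.group(1));
        }
        if (ids.isEmpty()) {
            tes.getTestLogManager().reportMessage("SelectRandomRow: no row ids found");
            return "";
        }
        // The returned string can be substituted into the next HTTP request.
        return ids.get(random.nextInt(ids.size()));
    }
}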
5. Regular expression
RPT automatically creates many references using regular expressions. Sometimes you need to use the same approach to extract a specific string or value: for example, you may want to define a new reference that extracts the value of an HTML field and pass the referenced value to another HTTP request, or pass it to custom code and implement some logic on that value.
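As an illustration, here is a small, self-contained sketch of the kind of regular expression you would use to pull a hidden field's value out of a response; the field name and markup are made up for the example:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the value of a (hypothetical) hidden HTML field from a response body.
public class ExtractField {
    public static void main(String[] args) {
        String response = "<input type=\"hidden\" name=\"csrftoken\" value=\"a1b2c3\">";
        // The capturing group keeps only the value between the quotes.
        Pattern p = Pattern.compile("name=\"csrftoken\"\\s+value=\"([^\"]*)\"");
        Matcher m = p.matcher(response);
        if (m.find()) {
            System.out.println(m.group(1)); // prints a1b2c3
        }
    }
}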
6. Think time and delays
During recording the user may wait some seconds before clicking on the web page: this is the think time. After the click, the HTTP requests start from the browser with a delay of some milliseconds or seconds.
So pay attention and do not forget them. If you start playing a script without modifying the think times from the recording phase, RPT will execute the script at the same speed, so the reported response time will also include the think time of each executed step. Likewise, the delay will be added to the actual response time if you keep, for each HTTP request, the same delay as in the recording phase. You can simply disable think times and delays in the RPT script.
7. Substitute multiple items
When defining a substitution, it is useful to find out whether there are other strings with the same value. If so, RPT can substitute all of them with the same reference or variable.
8. Rules
When creating certain references and substitutions becomes a recurring activity, it is time to create rules, to capture and automate this effort so that it can be applied easily to new scripts. Basically, a rule contains a regular expression to find the string to be extracted (from HTTP data, a URI, etc., depending on the rule), and it also defines the substitution to create.
9. Verification points
Preparing the script consists not only of recording and customizing the steps to navigate the interface; you also need to verify that each step is executed correctly. You can do this by introducing, for each test step, a verification point on the HTTP response content related to the HTTP request performed for that step. Each verification point can also be a regular expression that verifies specific text is present in the response.
10. Schedule
The schedule is the object where you define the RPT scripts to run, specifying the workload in terms of the number of concurrent (virtual) users and in terms of frequency.
11. Reporting
Once the schedule is ready you can start the run (but do not forget to start the nmon process at the same time, so it collects CPU metrics and other system info from the target machines). The nice thing is that you can watch different charts showing how the application is performing. The most interesting ones are the response times of the slowest steps of the scripts and the throughput in terms of web pages and bytes, so you can easily verify if and where there is a bottleneck. You can also calculate the number of transactions for each test script.
You have to study and practice the technique, but I assure you RPT is a very powerful performance tool.
Hopefully I'll prepare another post with some examples.
Enjoy!
For official reference and documentation check here:
http://www-01.ibm.com/support/knowledgecenter/SSMMM5/welcome