<?xml version='1.0' encoding='UTF-8'?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:blogger="http://schemas.google.com/blogger/2008" xmlns:georss="http://www.georss.org/georss" xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr="http://purl.org/syndication/thread/1.0" version="2.0"><channel><atom:id>tag:blogger.com,1999:blog-6142634977059070215</atom:id><lastBuildDate>Wed, 24 Dec 2025 19:24:52 +0000</lastBuildDate><category>All Things QA Introduction</category><category>Allthingsqa on twitter</category><category>Bug severity</category><category>Christmas QA challenge</category><category>Getting started on performance testing</category><category>Performance Testing 101</category><category>Performance analysis tools</category><category>Performance test tools</category><category>Six degrees of separation from a skill</category><category>Test Management tool</category><category>Test Plan</category><category>bug priority</category><category>fix vs defer</category><category>load testing</category><category>performance testing</category><category>performance testing best practices</category><category>persistence testing</category><category>severity vs priority</category><category>soak testing</category><title>All Things QA</title><description>Roughly 6 million things about software testing they don&#39;t tell you :)</description><link>http://allthingsqa.blogspot.com/</link><managingEditor>noreply@blogger.com (Unknown)</managingEditor><generator>Blogger</generator><openSearch:totalResults>13</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-6681647902298942532</guid><pubDate>Sun, 31 Jan 2010 20:13:00 +0000</pubDate><atom:updated>2010-01-31T12:13:55.378-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Getting started on 
performance testing</category><category domain="http://www.blogger.com/atom/ns#">Performance Testing 101</category><title>7 steps to performance testing bliss  :)</title><description>This post goes out to those that approached me recently with the question &quot;I would like to learn performance testing. How do I get started?&quot;&lt;br /&gt;
&lt;br /&gt;
It&#39;s a step-by-step approach to getting started on performance testing.&lt;br /&gt;
A couple of points may cause deja vu from previous posts, but repeating them was a necessary evil in creating this summary guide.&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;Step 1 : Identify tests !&lt;/b&gt;&lt;br /&gt;
Identify the performance test needs of the application under test (AUT). The AUT could be a web application, a guest OS, a host OS, an API, or just a client-server application.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;Hints to identify crucial components:&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
1. Log user behavior in production to measure the most concurrently and frequently used features&lt;br /&gt;
&lt;br /&gt;
2. Identify brittle system components that may not scale well. &lt;br /&gt;
Dive deep into the system architecture diagrams to figure out these nuances. An example would be files or database tables that hold ever-increasing amounts of data.&lt;br /&gt;
&lt;br /&gt;
Remember, throughput is just transactions per unit of time, and is not restricted to requests/sec. Your client-server application may have a file upload feature; your throughput could be the size of the data uploaded, whether in a single file or several. &lt;br /&gt;
&lt;br /&gt;
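The production-logging hint above can be sketched quickly. Here is a rough Python sketch that counts requests per URL path to surface the most frequently used features; the log format and sample lines are hypothetical, so adapt the parsing to your own web server&#39;s access log.

```python
from collections import Counter

# Hypothetical Apache-style access log lines; in practice you would
# read these from your production web server's log file.
sample_lines = [
    '10.0.0.1 - - [31/Jan/2010:10:00:01] "GET /search HTTP/1.1" 200 512',
    '10.0.0.2 - - [31/Jan/2010:10:00:02] "POST /upload HTTP/1.1" 200 48',
    '10.0.0.3 - - [31/Jan/2010:10:00:02] "GET /search HTTP/1.1" 200 512',
]

def top_features(log_lines, n=10):
    """Count requests per URL path to find the most frequently used features."""
    counts = Counter()
    for line in log_lines:
        try:
            # The request portion is quoted: "METHOD /path HTTP/x.y"
            request = line.split('"')[1]
            path = request.split()[1]
            counts[path] += 1
        except IndexError:
            continue  # skip malformed lines
    return counts.most_common(n)

print(top_features(sample_lines))  # [('/search', 2), ('/upload', 1)]
```

The same counting idea extends to concurrency: bucket the timestamps per minute to see which features are hit together.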
&lt;b&gt;Step 2: Match environments !&lt;/b&gt;&lt;br /&gt;
&lt;br /&gt;
The goal here is to match your QA environment to production as closely as possible. If your QA env is one that&#39;s rebuilt frequently, it would be wise to automate this comparison.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;Hints to compare key differentials:&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
1.System resources on the server in production like CPU, Memory, Hard disk, Swap.. you get my drift.&lt;br /&gt;
2.System specifics like log level correlation, and any other processes running on the server besides the application under test&lt;br /&gt;
3.Network and system architecture comparison, including load balancer settings, if implemented for any server&lt;br /&gt;
4.If the production system has redundant servers, the QA env should mimic that at least to scale, if not with the same count of redundancy&lt;br /&gt;
&lt;br /&gt;
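To automate the comparison, a small script can snapshot a few facts on each host and diff the snapshots. A minimal Python sketch (the fact names and example values are hypothetical; extend `system_facts` with whatever differentials matter to you):

```python
import os
import platform
import shutil

def system_facts():
    """Collect a few comparable facts about this host (extend as needed)."""
    total, used, free = shutil.disk_usage("/")
    return {
        "os": platform.system(),
        "os_release": platform.release(),
        "cpu_count": os.cpu_count(),
        "disk_total_gb": total // 2**30,
    }

def diff_environments(prod, qa):
    """Return the keys where production and QA differ, with both values."""
    keys = set(prod) | set(qa)
    return {k: (prod.get(k), qa.get(k)) for k in keys if prod.get(k) != qa.get(k)}

# Example with hypothetical snapshots gathered on each host:
prod = {"os": "Linux", "cpu_count": 8, "ram_gb": 32}
qa = {"os": "Linux", "cpu_count": 4, "ram_gb": 32}
print(diff_environments(prod, qa))  # {'cpu_count': (8, 4)}
```

Run `system_facts()` on both hosts (e.g., via ssh), store the dicts, and fail the comparison when the diff is non-empty.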
&lt;b&gt;Step 3: Plan !&lt;/b&gt;&lt;br /&gt;
&lt;br /&gt;
Now that you&#39;ve identified what to test, write a detailed plan with what features you intend to load/stress test, what features you intend to measure performance of, the min throughput you will apply etc. An earlier &lt;a href=&quot;http://allthingsqa.blogspot.com/2009/12/performance-testing-web-applications.html&quot;&gt;post&amp;nbsp;&lt;/a&gt; explains this in detail. &lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;Step 4: Automate !&lt;/b&gt;&lt;br /&gt;
&lt;br /&gt;
Identify your automation needs based on the performance test plan.&lt;br /&gt;
&lt;br /&gt;
If it&#39;s a web application where user behavior can be completely tracked via HTTP requests, use a tool like JMeter (refer to my earlier &lt;a href=&quot;http://allthingsqa.blogspot.com/2009/12/web-application-performance-testing.html&quot;&gt;post &lt;/a&gt;on performance test tools for details)&lt;br /&gt;
To get started on JMeter, read this section of their &lt;a href=&quot;http://jakarta.apache.org/jmeter/usermanual/jmeter_proxy_step_by_step.pdf&quot;&gt;user manual&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
If all the user actions on the web application cannot be tracked via HTTP requests (e.g., client-side AJAX UI behavior), you may want to use UI automation tools that let you customize the code to run multi-threaded. (hint: Selenium IDE and RC/Server) &lt;br /&gt;
&lt;br /&gt;
If you are trying to increase concurrent calls to specific system methods/features, scripting multi-threaded behavior using the application&#39;s API calls would be a good way of also implicitly load testing the API.&lt;br /&gt;
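As a sketch of that idea, here is a minimal Python example that drives concurrent calls through a thread pool and records per-call latency. `call_api` is a hypothetical stand-in; in a real test it would invoke your application&#39;s API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_api():
    """Stand-in for a real API call to the application under test."""
    time.sleep(0.01)  # simulate server-side work
    return 200

def load_test(n_threads, calls_per_thread):
    """Fire concurrent API calls and record (status, latency) per call."""
    def worker(_):
        latencies = []
        for _ in range(calls_per_thread):
            start = time.perf_counter()
            status = call_api()
            latencies.append((status, time.perf_counter() - start))
        return latencies

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return [r for batch in pool.map(worker, range(n_threads)) for r in batch]

results = load_test(n_threads=5, calls_per_thread=4)
errors = [s for s, _ in results if s != 200]
avg_latency = sum(t for _, t in results) / len(results)
print(len(results), len(errors))  # 20 calls, 0 errors
```

Ramping `n_threads` up between runs, while watching error counts and average latency, gives you the same stress-point data a dedicated tool would.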
&lt;br /&gt;
&lt;b&gt;Step 5: Execute !&lt;/b&gt;&lt;br /&gt;
Key points to remember while executing performance tests:&lt;br /&gt;
&lt;br /&gt;
1.&lt;i&gt;Isolate &lt;/i&gt;the QA env where performance tests are run.&amp;nbsp; Functional tests or any other concurrent use of this system will skew your results&lt;br /&gt;
&lt;br /&gt;
2.&lt;i&gt;Restore &lt;/i&gt;system state after stress tests. The automated stress tests should include a step to restore the system to its default state. Ensure that this step runs regardless of whether your tests completed or crashed. Of course, it would be prudent to check the state before starting the tests.&lt;br /&gt;
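A minimal sketch of the restore-regardless idea, using a `try/finally` block; the helper names here are hypothetical placeholders for your own state-check and teardown logic.

```python
def check_clean_state():
    """Verify the system is in its default state before stressing it."""
    # e.g., assert no leftover test data, queues empty, services healthy
    return True

def restore_state(log):
    """Return the system to its default state (drop test data, restart, etc.)."""
    log.append("restored")

def run_stress_test(log, should_crash=False):
    assert check_clean_state(), "refusing to start from a dirty state"
    try:
        log.append("stress test running")
        if should_crash:
            raise RuntimeError("simulated crash under load")
    finally:
        # Runs whether the test finished normally or crashed.
        restore_state(log)

log = []
try:
    run_stress_test(log, should_crash=True)
except RuntimeError:
    pass
print(log)  # ['stress test running', 'restored']
```

The `finally` clause is the key detail: even when the simulated crash propagates, the restore step still executes.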
&lt;br /&gt;
&lt;b&gt;Step 6: Analyze !&lt;/b&gt;&lt;br /&gt;
I am writing up detailed posts on this topic; meanwhile, an earlier &lt;a href=&quot;http://allthingsqa.blogspot.com/search/label/performance%20testing%20best%20practices&quot;&gt;post &lt;/a&gt;covers some of the basics as well.&lt;br /&gt;
Result analysis involves tracking the different values being measured and monitored and checking if they satisfy the verification/pass criteria&lt;br /&gt;
In case of inconsistencies, isolate the test/behavior that causes failures. &lt;br /&gt;
Identify the cause of the bottlenecks with profilers, stepping through code etc.&lt;br /&gt;
Resolve bottlenecks by tuning your settings, adding more memory, load balancing servers, indexing database tables, improving slow running queries etc.&lt;br /&gt;
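The pass/fail part of the analysis is easy to automate: compare each measured value against its verification criterion and report the violations. A minimal Python sketch, with hypothetical metric names and thresholds:

```python
# Hypothetical pass criteria and measured results; real names and
# thresholds come from your own test plan.
criteria = {
    "avg_response_ms": ("max", 500),
    "error_rate_pct": ("max", 1.0),
    "throughput_rps": ("min", 100),
}
measured = {"avg_response_ms": 640, "error_rate_pct": 0.2, "throughput_rps": 120}

def analyze(measured, criteria):
    """Return the metrics that violate their pass criteria."""
    failures = {}
    for metric, (kind, limit) in criteria.items():
        value = measured[metric]
        ok = value <= limit if kind == "max" else value >= limit
        if not ok:
            failures[metric] = (value, kind, limit)
    return failures

print(analyze(measured, criteria))  # {'avg_response_ms': (640, 'max', 500)}
```

Anything the script flags is then the starting point for the profiling and tuning steps described above.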
&lt;br /&gt;
&lt;b&gt;Step 7: Report !&lt;/b&gt;&lt;br /&gt;
&lt;br /&gt;
Performance test results are best reported in two formats: a basic summary that highlights what the stakeholders care to know or should know, and a detailed report for archival and comparison with regression runs.&lt;br /&gt;
&lt;br /&gt;
Stakeholder summary could contain max concurrency/load/throughput and average response time for best and worst concurrency cases&lt;br /&gt;
Detailed report could contain the resource monitoring graphs, output from the file handle monitoring scripts and logs created by the automation tool.</description><link>http://allthingsqa.blogspot.com/2010/01/7-steps-to-performance-testing-bliss.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>2</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-476568022820135077</guid><pubDate>Wed, 06 Jan 2010 08:11:00 +0000</pubDate><atom:updated>2010-01-09T09:31:00.355-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Performance analysis tools</category><title>Nifty tools for performance analysis</title><description>&lt;i&gt;Happy New Year ! I am back from holiday hibernation to spread testing joy :)&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
Picture this: you&#39;ve run your web app performance tests and benchmarked response times for your application, and now you&#39;re trying to make sense of the numbers, and the only thing that occurs to you is &quot;Why? Why me?&quot; :)&lt;br /&gt;
&lt;br /&gt;
Now let&#39;s drop these nifty little tools in your lab (pun intended) and watch the magic:&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;&lt;a href=&quot;http://developer.yahoo.com/yslow/&quot;&gt;YSlow &lt;/a&gt;(from Yahoo) or &lt;a href=&quot;http://code.google.com/speed/page-speed/&quot;&gt;Page Speed&lt;/a&gt; (from Google)&lt;/b&gt;&lt;br /&gt;
These are lightweight tools/ plug-ins that don&#39;t actually simulate load on your application. Instead they predict performance issues based on whether your application conforms to basic rules/ best practices. A simple rating system that lists all the ways in which your application is built to perform or break.&lt;br /&gt;
Why should I ramble on when Google can say it better: &lt;a href=&quot;http://code.google.com/speed/page-speed/docs/rules_intro.html&quot;&gt;http://code.google.com/speed/page-speed/docs/rules_intro.html&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;a href=&quot;http://browsermob.com/load-testing&quot;&gt;&lt;b&gt;BrowserMob&lt;/b&gt;&lt;/a&gt;&lt;br /&gt;
I am planning to evaluate this tool, so don&#39;t consider this a review yet; it&#39;s more of an FYI. Most performance test tools send HTTP requests and receive responses without simulating an actual browser. This tool lets you figure out performance implications via real browser simulation. Check out &lt;a href=&quot;http://browsermob.com/load-testing&quot;&gt;http://browsermob.com/load-testing&lt;/a&gt; &lt;br /&gt;
&lt;br /&gt;
Stay tuned for my performance test series to be continued, with some serious coaching on JMeter and such. &lt;br /&gt;
&lt;br /&gt;
Cheers,&lt;br /&gt;
Dharma&lt;br /&gt;
(The more you teach, the more you learn)</description><link>http://allthingsqa.blogspot.com/2010/01/nifty-tools-for-performance-analysis.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-945384726776879763</guid><pubDate>Fri, 25 Dec 2009 06:19:00 +0000</pubDate><atom:updated>2010-01-09T09:28:24.757-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Christmas QA challenge</category><title>QA Challenge in the holiday spirit !</title><description>I&#39;ve started a QA challenge via a LinkedIn discussion forum and wanted to present the same question here. It&#39;s in the spirit of Christmas and holiday cheer.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;&lt;b&gt;If you were an Elf and were asked to test Santa&#39;s sleigh so he can have a nice safe journey around the earth and everybody can wake up to their presents, what would be your test plan? (If it helps, consider it a software model of a sleigh or a real sleigh.. but be clear about it in the test plan) &lt;/b&gt;&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
It&#39;s probably the eggnog, but I think it&#39;s fun :)&lt;br /&gt;
&lt;br /&gt;
Reply privately, if you do, because I am also using this challenge to find good QA engineer prospects for my team. Figured it would be a fun way to find good talent after going through like a million resumes.&lt;br /&gt;
&lt;br /&gt;
If any of you do respond, I&#39;ll ask you to re-post here as a comment after the holidays.&lt;br /&gt;
&lt;br /&gt;
Happy holidays!&lt;br /&gt;
Dharma</description><link>http://allthingsqa.blogspot.com/2009/12/qa-challenge-in-holiday-spirit.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-5808252864452130696</guid><pubDate>Sat, 19 Dec 2009 21:25:00 +0000</pubDate><atom:updated>2010-01-09T09:29:39.982-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Six degrees of separation from a skill</category><title>Six degrees of separation from a skill</title><description>My pet peeve as a hiring manager is what I like to call the &quot;six degrees of separation from a skill&quot; syndrome :)&lt;br /&gt;
&lt;br /&gt;
Beware when you see everything but the kitchen sink listed as skills, especially in short career spans. More often than not, my phone screens of such candidates have taken bizarre and disappointing turns such as these:&lt;br /&gt;
&lt;br /&gt;
1.This one is the killer.. inspiration for the title of this blog:&lt;br /&gt;
&lt;br /&gt;
A candidate&#39;s resume listed JAVA, which was in the JD. When asked what was his level of experience in JAVA, his response was.. brace yourself for this..&lt;br /&gt;
&lt;blockquote&gt;&quot;I was working with a developer in the same building who was using JAVA for his programming&quot;&lt;br /&gt;
&lt;/blockquote&gt;&lt;blockquote&gt;&lt;/blockquote&gt;2.Other lame but fairly common ones:&lt;br /&gt;
&lt;br /&gt;
When asked &quot;How recent is your Perl experience?&quot; or &quot;How do you rate yourself 1-5?&quot;&lt;br /&gt;
A brief silence on the other end, usually followed by &lt;br /&gt;
&lt;blockquote&gt;&quot;I use a Perl script written by someone else and I can understand the code and even modify it&quot;&lt;br /&gt;
&lt;/blockquote&gt;&lt;br /&gt;
I call the next one &quot;look over someone&#39;s shoulder and quickly add it to your resume&quot; syndrome&lt;br /&gt;
&lt;br /&gt;
When asked to explain experience with say performance testing when a ton of tools are listed on the resume, candidate says &lt;br /&gt;
&lt;blockquote&gt;&quot;I was involved in a project where my team/colleagues created the script and monitoring and I pressed the start button (or something to that effect)&quot; &lt;br /&gt;
&lt;/blockquote&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-style: italic;&quot;&gt;I have yet to hear amusing ones such as &quot;My mother read aloud Perl scripts to me when I was in her womb, hence it&#39;s on my resume&quot; :) &lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ps: For anyone wondering why I interrupted my sequence of performance testing related posts with this inane rant, I am currently phone screening a &lt;span style=&quot;font-style: italic;&quot;&gt;lot &lt;/span&gt;of candidates for QA engineer, so it&#39;s been on my mind. Now that it&#39;s out of my system, I can move back to technical topics.&lt;br /&gt;
&lt;br /&gt;
Cheers,&lt;br /&gt;
Dharma</description><link>http://allthingsqa.blogspot.com/2009/12/six-degrees-of-separation-from-skill.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-3468691296497956494</guid><pubDate>Tue, 15 Dec 2009 05:38:00 +0000</pubDate><atom:updated>2009-12-14T21:52:57.564-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">performance testing best practices</category><title>Best practices in performance testing</title><description>&lt;span style=&quot;font-weight: bold;&quot;&gt;Essential Do&#39;s and Dont&#39;s in performance testing&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;Please read my previous detailed posts on Web application performance testing methodology before moving on to this post.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-style: italic; font-weight: bold;&quot;&gt;1.Lab resources&lt;/span&gt;&lt;br /&gt;It is important to match the architecture of the system between QA lab and production. It could be a scaled down version, in which case the results are analyzed and predicted to scale&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;How-to match production&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Hardware specifications of the servers should match&lt;/li&gt;&lt;li&gt;Load balanced architecture of the servers should be replicated in lab&lt;/li&gt;&lt;li&gt;Database clustering/replication should match, including type of replication set up in production&lt;/li&gt;&lt;li&gt;Software versions should match&lt;/li&gt;&lt;li&gt;Log level settings and log archival settings should match (otherwise production with a more verbose log level or without sufficient archival will die pretty soon and you can never recreate it in the lab)&lt;/li&gt;&lt;li&gt;Innovate ways to simulate hardware or other resources that cannot be matched apples for apples in the lab for budget constraints. 
Understand the blueprint of the system to determine these vulnerabilities&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-style: italic; font-weight: bold;&quot;&gt;2.Result archival&lt;/span&gt;&lt;br /&gt;The results from each test should be recorded and easily accessible for comparison with future/past results. There is no way to determine improvement or degradation in performance without comparison data&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-style: italic; font-weight: bold;&quot;&gt;3.Isolated testing&lt;/span&gt;&lt;br /&gt;Exercise different actions or transactions in the application separately as atomic actions to make bottlenecks easier to isolate, i.e., don&#39;t execute all actions on one or more page(s) sequentially in the same script.&lt;br /&gt;&lt;br /&gt;Different concurrent threads may end up executing different actions together and it will not be possible to isolate which action caused the breakdown or bottleneck&lt;br /&gt;&lt;br /&gt;There are merits to running tests that exercise a combination of actions together. But this should be one of the scripts/plans of execution to simulate real world scenarios, not to analyze specific component performance.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold; font-style: italic;&quot;&gt;4.Analyze bottlenecks&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;Other than the action/transaction separation discussed above, it&#39;s also important to exercise different system components separately.&lt;br /&gt;Examples of such isolation methods:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Exercising just the HTTP requests for required transaction rates without involving the GUI components. Command line tools like HTTPerf are very useful for these &quot;headless performance tests&quot; as they are called. 
Ofcourse  JMeter can also be used for this, but the time involved in recording, removing image links etc that are not needed and then playing back the bare script is not worth the time in this context.&lt;/li&gt;&lt;li&gt;Exercising selected actions (as discussed above)&lt;/li&gt;&lt;li&gt;Only invoke requests/ transactions relevant to execute a specific action.. less is more !&lt;/li&gt;&lt;li&gt;Exercise database queries separate from application layer. Determine the queries used by the application for each action (where applicable) and run them separately as JDBC transactions from the performance test tool&lt;/li&gt;&lt;li&gt;Exercise authentication/LDAP separately by using LDAP transactions. The time it takes to login may have more to do with LDAP than with login code in the application&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Related Posts to look forward to (coming soon to a browser near you):&lt;/span&gt;&lt;br /&gt;Stress testing&lt;br /&gt;Scalability testing&lt;br /&gt;Performance test tool cheat sheets/Tool instructions&lt;br /&gt;&lt;br /&gt;Your comments/suggestions will be much appreciated !</description><link>http://allthingsqa.blogspot.com/2009/12/best-practices-in-performance-testing.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-7331525771737430181</guid><pubDate>Tue, 15 Dec 2009 04:29:00 +0000</pubDate><atom:updated>2010-01-09T09:30:00.604-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Allthingsqa on twitter</category><title>Tweet much?</title><description>All Things QA can now be followed on Twitter @ http://twitter.com/allthingsqa for quick and easy access to updates.&lt;br /&gt;
&lt;br /&gt;
Happy tweeting :)</description><link>http://allthingsqa.blogspot.com/2009/12/all-things-qa-now-on-twitter.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-68166942933312488</guid><pubDate>Mon, 14 Dec 2009 07:41:00 +0000</pubDate><atom:updated>2009-12-13T23:51:03.140-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Performance test tools</category><title>Web Application Performance Testing tools</title><description>&lt;span style=&quot;font-style: italic;&quot;&gt;Highlights and lowlights of a few popular performance testing tools:&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://jakarta.apache.org/jmeter/&quot;&gt;JMeter&lt;/a&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Open source, active project&lt;/li&gt;&lt;li&gt;Supports HTTPs traffic via HTTPS spoofing (works quite well)&lt;/li&gt;&lt;li&gt;Supports various protocols other than HTTP, like JDBC to analyze SQL bottlenecks directly&lt;/li&gt;&lt;li&gt;Supports cookie and session management&lt;/li&gt;&lt;li&gt;Can be executed in distributed mode from remote host to prevent the host machine from throttling the test&lt;/li&gt;&lt;li&gt;Captures screen (if specified) to be used as functional test (capturing screen is not recommended during load/performance tests)&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;a href=&quot;http://opensta.org/&quot;&gt;OpenSTA&lt;/a&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;OpenSource, not actively developed or maintained&lt;/li&gt;&lt;li&gt;Support for HTTPS is patchy/unreliable at best&lt;/li&gt;&lt;li&gt;Good scripting interface, better than JMeter for customization and parametrization&lt;/li&gt;&lt;li&gt;Falls apart easily under load even in distributed mode&lt;/li&gt;&lt;li&gt;Easy access for DOM objects to be accessed from scripts&lt;/li&gt;&lt;li&gt;Monitoring of servers possible&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;a 
href=&quot;http://www.hpl.hp.com/research/linux/httperf/httperf-man-0.9.pdf&quot;&gt;HTTPerf&lt;/a&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Free, open source&lt;/li&gt;&lt;li&gt;Command line tool (not GUI based)&lt;/li&gt;&lt;li&gt;Granular support via command line options for inputs, delays, target throughput etc&lt;/li&gt;&lt;li&gt;SSL supported&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;a href=&quot;http://grinder.sourceforge.net/&quot;&gt;Grinder&lt;/a&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Very similar to Jmeter; Open source JAVA based record and playback tool&lt;/li&gt;&lt;li&gt;Supports HTTP and HTTPS&lt;/li&gt;&lt;li&gt;HTTP(S), JDBC, LDAP and more protocols supported&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;a href=&quot;http://www.communities.hp.com/online/blogs/loadrunner/default.aspx&quot;&gt;Mercury Load Runner&lt;/a&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Not inexpensive&lt;/li&gt;&lt;li&gt;Method level Diagnostics on application&lt;/li&gt;&lt;li&gt;Monitoring of servers during test built in&lt;/li&gt;&lt;li&gt;Record and playback via GUI&lt;/li&gt;&lt;/ul&gt;IXIA and Avalanche&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Not inexpensive :)&lt;/li&gt;&lt;li&gt;Large scale/higher concurrency use, loads that JMeter cannot handle well&lt;/li&gt;&lt;li&gt;More granular control over inputs and target throughput and output monitoring&lt;/li&gt;&lt;/ul&gt;</description><link>http://allthingsqa.blogspot.com/2009/12/web-application-performance-testing.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-7346046676574513830</guid><pubDate>Sun, 13 Dec 2009 06:38:00 +0000</pubDate><atom:updated>2010-01-09T09:30:24.529-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">persistence testing</category><category domain="http://www.blogger.com/atom/ns#">soak testing</category><title>Persistence testing web applications</title><description>&lt;span style=&quot;font-weight: 
bold;&quot;&gt;Pre-requisites:&lt;/span&gt;&lt;br /&gt;
My previous post on performance testing is the prequel to this post&lt;br /&gt;
&lt;br /&gt;
Persistence / stability / soak testing is the next step after figuring out how well your application performs under load.&lt;br /&gt;
&lt;br /&gt;
It&#39;s time to determine if it continues to perform well under reasonable load over a period of time.&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-style: italic;&quot;&gt;Reasonable load in this context:&lt;/span&gt;&lt;br /&gt;
Either 60% of the stress point, or the average throughput that the application is subject to in production&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-style: italic;&quot;&gt;Steps:&lt;/span&gt;&lt;br /&gt;
Let the load test with the above load run uninterrupted over a period of time. (The industry standard for minimum duration seems to be 72 hours, though we tend to cut corners to stay within tight project deadlines; the maximum duration depends on the project schedule.)&lt;br /&gt;
&lt;br /&gt;
During this time, continue all the monitoring and measurement described in the previous post, but not as frequently. For example, if you were monitoring/measuring every minute in a shorter performance test, in a persistence test you would only need to monitor every 15 mins or so.&lt;br /&gt;
&lt;br /&gt;
It is also essential to cause small, infrequent spikes in the load during persistence tests, to simulate real-world behavior.&lt;br /&gt;
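The load schedule for such a soak run can be generated up front. A rough Python sketch, with hypothetical numbers (a 60 req/s base load, i.e., 60% of a 100 req/s stress point, and rare spikes to 90 req/s):

```python
import random

def soak_schedule(hours, base_load, spike_load, spike_chance=0.05, seed=42):
    """Build an hour-by-hour load schedule: steady base load with rare spikes."""
    rng = random.Random(seed)  # seeded so the schedule is reproducible
    return [spike_load if rng.random() < spike_chance else base_load
            for _ in range(hours)]

# 72-hour soak: mostly base load, with occasional spikes.
schedule = soak_schedule(hours=72, base_load=60, spike_load=90)
print(len(schedule), sorted(set(schedule)))
```

Each hourly value then becomes the target throughput you feed your load tool for that hour; the fixed seed means a failing run can be replayed with the exact same spike pattern.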
&lt;br /&gt;
These tests analyze if the application is stable, even what (if any) is the performance degradation over time.</description><link>http://allthingsqa.blogspot.com/2009/12/persistence-testing-web-applications.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>3</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-5464925370194035334</guid><pubDate>Mon, 07 Dec 2009 07:58:00 +0000</pubDate><atom:updated>2009-12-07T00:11:37.137-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">load testing</category><category domain="http://www.blogger.com/atom/ns#">performance testing</category><title>Performance Testing Web Applications</title><description>&lt;span style=&quot;font-style: italic;&quot;&gt;This section explains how the terms performance, load and stress testing seamlessly integrate towards end to end performance testing&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Performance testing&lt;/span&gt;:&lt;br /&gt;&lt;br /&gt;What to measure:&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Response time of an action (Performance testing tool)&lt;/li&gt;&lt;li&gt;Actual throughput vs target throughput (Performance testing tool)&lt;/li&gt;&lt;li&gt;Responsiveness of the action/app (Example successful HTTP requests -200 response code) (Performance testing tool)&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;Baselining:&lt;br /&gt;Exercise each action in the application with no load/concurrency&lt;br /&gt;In the absence of a specific requirement of supported throughput, measure the actual throughput without increasing/manipulating target throughput under no load&lt;br /&gt;&lt;br /&gt;If a requirement is clear as to minimum throughput to be supported, then execute the action specifically for target throughput set to the min required value and after execution, verify the actual generated throughput&lt;br /&gt;&lt;br /&gt;Throughput is transaction rate (business transactions in a given 
interval of time)&lt;br /&gt;For instance: bytes transferred/second, requests/second etc&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Load testing:&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;What is load:&lt;/span&gt;&lt;br /&gt;Number of concurrent users/connections&lt;br /&gt;Number of concurrent threads accessing each action in the application&lt;br /&gt;Throughput for each action&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;What to measure:&lt;/span&gt;&lt;br /&gt;1.Breaking/stress point&lt;br /&gt;Steadily increase the load (as defined above) for each action and execute test&lt;br /&gt;The point where the test returns 500 response code (Server error due to high load) is the breaking/stress point for your application (Will be continued as part of the stress testing section in future)&lt;br /&gt;&lt;br /&gt;2.Stability under acceptable load&lt;br /&gt;Acceptable load is the load that you can expect your application to be subject to in production&lt;br /&gt;Three ways to arrive at this acceptable load:&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Management specifies in a requirement doc&lt;/li&gt;&lt;li&gt;Monitor production usage of the application by tailing logs or such and determine the load or frequency of use of each action&lt;/li&gt;&lt;li&gt;60% of the breaking point/stress point, if accepted by stakeholders&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;Measuring stability for increased load means checking responsiveness and that latency introduced by load is within an acceptable level (latency in this context is response time with load - response time without load)&lt;br /&gt;&lt;br /&gt;What&#39;s an acceptable level defines the verification criteria&lt;br /&gt;In the absence of a requirement on response time, observed latency is provided to stakeholders. 
If accepted, this is the baseline/yardstick&lt;br /&gt;If the latency is not accepted by stakeholders, back to the drawing board for developers to tune the performance of the system (that&#39;s a topic for another day)&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;What to monitor:&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;1.System resources on the application server (System monitoring tool - &lt;a href=&quot;http://www.nagios.org/&quot;&gt;Nagios&lt;/a&gt;, VMStat, PerfMon)&lt;br /&gt;Monitor the CPU and memory utilization, Disk and Network I/O, Swap size during different load levels and between load tests to track/determine memory leaks&lt;br /&gt;Nagios UI interface displays all the utilization info from the server as graphs&lt;br /&gt;PerfMon results can also be analyzed as graphs&lt;br /&gt;VMstat results must be collected and exported to Excel and graphs must be generated from this info&lt;br /&gt;The reason I stress on graphs, is it&#39;s easier to find spikes in utilization when observed as graphs between time and utilization&lt;br /&gt;&lt;br /&gt;One &lt;span style=&quot;font-style: italic;&quot;&gt;quick test for memory leaks&lt;/span&gt; is to run a high load test, stop the test and re-run it. Expect to see a dip in utilization before it rises again. 
If the spike continues with no dip, there you have your memory leak bug ..woo hoo !&lt;br /&gt;&lt;br /&gt;2.File handles - Small scripts can be written to monitor count of file handles&lt;br /&gt;&lt;br /&gt;3.Garbage collection- Profiling tools (JProfiler) -- note: Profiling tools slow down the system, so do not measure speed during profiling&lt;br /&gt;&lt;br /&gt;4.Sessions in app server - Monitor the application server manager to ensure sessions are cleaned up appropriately after user logs out/test ends&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Verification criteria:&lt;/span&gt;&lt;br /&gt;If tests result in acceptable latency and 200 result codes, no memory leaks, test passes -- that&#39;s your baseline !&lt;br /&gt;If tests result in out of proportion latency and 404/500 response codes, memory leaks, file a bug&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Performance regression testing&lt;/span&gt;&lt;br /&gt;All the measurements of baselines/yardsticks should be noted and compared against all future builds.&lt;br /&gt;&lt;br /&gt;If performance is better in future build, new measurement is the new baseline&lt;br /&gt;If performance is worse, file a bug and don&#39;t change the baseline. 
If the bug cannot be resolved, hold on to the old baseline and the new results to track this long term&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold; font-style: italic;&quot;&gt;Yet to come in future posts:&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;Persistence testing&lt;br /&gt;Stress testing&lt;br /&gt;Scalability testing&lt;br /&gt;Best practices in performance testing&lt;br /&gt;Tools for performance testing will be covered in detail (&lt;a href=&quot;http://jakarta.apache.org/jmeter/&quot;&gt;JMeter&lt;/a&gt;, &lt;a href=&quot;http://www.hpl.hp.com/research/linux/httperf/&quot;&gt;HTTPerf&lt;/a&gt;, &lt;a href=&quot;http://opensta.org/&quot;&gt;OpenSTA&lt;/a&gt; etc)&lt;br /&gt;Performance testing Linux/Windows systems (&lt;a href=&quot;http://www.iometer.org/&quot;&gt;IOMeter&lt;/a&gt; etc)&lt;br /&gt;&lt;br /&gt;Stay tuned .. !</description><link>http://allthingsqa.blogspot.com/2009/12/performance-testing-web-applications.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>2</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-2860213123067540751</guid><pubDate>Sat, 05 Dec 2009 22:41:00 +0000</pubDate><atom:updated>2010-01-09T09:31:25.662-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Test Management tool</category><title>Test Management tool selection</title><description>Most QA teams (especially start-ups, where you have to start testing before you can say the words &quot;test management&quot;) start by writing test cases in Excel files, Word documents and post-its ;) &lt;br /&gt;
&lt;br /&gt;
As a lead/manager, your first task should be to end the madness and start using a test management tool, or to evaluate the current system and replace it if it&#39;s inefficient.&lt;br /&gt;
&lt;br /&gt;
Why? &lt;br /&gt;
To render tests repeatable and to track, archive and present test results to stakeholders (P.S.: these are the magic words to convince management to let you set up the tool)&lt;br /&gt;
&lt;br /&gt;
What?&lt;br /&gt;
I am not here to sell a specific product, just to suggest how to arrive at one.&lt;br /&gt;
&lt;br /&gt;
All right, then.. how?&lt;br /&gt;
&lt;br /&gt;
Considerations/criteria for selecting a test management system:&lt;br /&gt;
&lt;br /&gt;
1.BUDGET:&lt;br /&gt;
Determine the budget ceiling from stakeholders/management. There is no point wasting time evaluating tools that serve you pancakes in bed if they are out of your reach. &lt;br /&gt;
&lt;br /&gt;
If the budget is too low or non-existent, research free tools. The downside is set-up and configuration time and, usually, a lack of features. But most are better than post-its :)&lt;br /&gt;
&lt;br /&gt;
2.LICENSE:&lt;br /&gt;
Budgets are not written in stone.. they change.. frequently! So evaluate your selection based on a licensing model that&#39;s predictable.&lt;br /&gt;
&lt;br /&gt;
License/cost models:&lt;br /&gt;
Per registered user license, recurring cost&lt;br /&gt;
Per seat/concurrent user count license, persistent (one time cost)&lt;br /&gt;
Recurring cost for a max count (not per user)&lt;br /&gt;
Persistent/one-time cost for a max count, or independent of user count&lt;br /&gt;
Hosted offsite vs Installed onsite&lt;br /&gt;
&lt;br /&gt;
Persistent/one-time costs are higher because they are a capital expense. The tool pays for itself in a reasonable time frame and is effectively free afterwards&lt;br /&gt;
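To make the capital-expense argument concrete, here is a minimal break-even sketch. The prices, team size and flat per-user subscription rate are illustrative assumptions, not vendor figures.

```python
# Sketch: when does a one-time (perpetual) license beat a recurring
# per-user subscription? All figures below are made-up assumptions.

one_time_cost = 12000.0       # perpetual license, paid once (capital expense)
per_user_per_month = 25.0     # assumed subscription price per registered user
team_size = 20

# Equivalent monthly spend under the subscription model
monthly_subscription = per_user_per_month * team_size  # 500.0 per month

# Months until the perpetual license has paid for itself
break_even_months = one_time_cost / monthly_subscription
print(f"break-even after {break_even_months:.0f} months")  # break-even after 24 months
```

If the team is likely to use the tool well past the break-even point, the one-time license wins; if the team or the tool may change sooner, the subscription's flexibility is worth the premium.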
&lt;br /&gt;
To arrive at the best license model, you need to:&lt;br /&gt;
&lt;br /&gt;
- Predict reasonably the growth of the team&lt;br /&gt;
- Ascertain the likely concurrent connections to the app (for instance, offshore/onshore models have lower concurrency than the total team count)&lt;br /&gt;
- Estimate how long the tool will be used&lt;br /&gt;
- Consider offsite hosting, which takes away the rigour of managing hardware, software and upgrades&lt;br /&gt;
- Assess the availability of hardware and personnel to manage an onsite installation vs offsite hosting&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.FEATURES:&lt;br /&gt;
&lt;br /&gt;
- Intuitive and configurable interface&lt;br /&gt;
- Configurable fields across all components (example: able to modify/add test case fields, result fields etc)&lt;br /&gt;
- Import test cases from other documents/excel/applications&lt;br /&gt;
- Export test cases into files&lt;br /&gt;
- Search/ filter interface&lt;br /&gt;
- Modify test cases / Bulk update test cases&lt;br /&gt;
- Assign test cases to individual QA&lt;br /&gt;
- Execute/assign results to tests via sessions&lt;br /&gt;
- Track and archive test session results &lt;br /&gt;
- Integration with other SDLC apps or tools used by the team&lt;br /&gt;
- Associate failed cases with bugs&lt;br /&gt;
- Good result reporting interface&lt;br /&gt;
- Web-based tools have the merit that you can share tests and results as URLs anywhere for quick and direct access&lt;br /&gt;
- Scalability - evaluate the tool by loading it with a substantial number of tests and sessions and checking performance (essential for web apps, most essential for hosted apps)</description><link>http://allthingsqa.blogspot.com/2009/12/test-management-tool-selection.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-7981891685615719302</guid><pubDate>Fri, 04 Dec 2009 19:51:00 +0000</pubDate><atom:updated>2009-12-06T16:14:09.223-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">bug priority</category><category domain="http://www.blogger.com/atom/ns#">Bug severity</category><category domain="http://www.blogger.com/atom/ns#">fix vs defer</category><category domain="http://www.blogger.com/atom/ns#">severity vs priority</category><title>Bug priority and severity guidelines</title><description>Guidelines for assigning severity and priority to a bug&lt;br /&gt;&lt;br /&gt;These guidelines are based on industry standards and my experience of what has worked well in various projects/companies. 
These definitions and criteria should be tailored to best suit a specific project team&lt;br /&gt;&lt;br /&gt;Severity vs Priority&lt;br /&gt;&lt;br /&gt;Training the team to use two different fields for severity and priority in a bug tracking system could seem like overhead, but ideally QA&#39;s assessment of a bug&#39;s impact (severity) should be kept separate from the PM&#39;s decision on whether to make an exception under time constraints and defer it (priority/when to fix)&lt;br /&gt;&lt;br /&gt;Without such segregation, the single field gets rewritten whenever a bug is triaged out of a release, and the record of its true impact is lost&lt;br /&gt;&lt;br /&gt;*The term &#39;critical functionality&#39; refers to new features deemed acceptance tests by QA/Product Mgmt, or existing features that impact customers the most&lt;br /&gt;&lt;br /&gt;Severity/Priority assignment:&lt;br /&gt;&lt;br /&gt;Severity: Critical/Show-Stopper&lt;br /&gt;Priority: Must fix before the Feature Complete (in an agile context)/Code Freeze milestone; could be deferred to an immediate patch&lt;br /&gt;&lt;br /&gt;1. Failure of critical functionality with no workaround&lt;br /&gt;2. Data loss or corruption&lt;br /&gt;3. Billing data or invoice display inconsistencies&lt;br /&gt;4. Issues blocking testing&lt;br /&gt;5. Critical functionality discrepancy between requirements and implementation&lt;br /&gt;6. Critical functionality error handling failures with high occurrence likelihood and high impact to customers&lt;br /&gt;7. Security concerns with customer visibility&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;Severity: High&lt;br /&gt;Priority: Should fix before code freeze (agile and waterfall contexts)&lt;br /&gt;&lt;br /&gt;1. Partially impaired critical functionality with a less-than-desirable workaround&lt;br /&gt;2. Failure of non-critical functionality with no workaround&lt;br /&gt;3. Non-critical functionality discrepancy between requirements and implementation&lt;br /&gt;4. 
High customer impact UI discrepancies&lt;br /&gt;5. Security concern with no real customer impact&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;Severity: Medium&lt;br /&gt;Priority: Attempt to fix before code freeze, or defer to the next release (ie, a release candidate should not be tainted for a medium priority fix)&lt;br /&gt;&lt;br /&gt;1. Impaired non-critical functionality with a satisfactory workaround&lt;br /&gt;2. Impaired critical functionality in low occurrence edge cases&lt;br /&gt;3. High impact spelling errors&lt;br /&gt;4. Non-critical functionality error handling failure&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;Severity: Low&lt;br /&gt;Priority: Fix before code freeze if convenient, else defer to the next release&lt;br /&gt;&lt;br /&gt;1. Rare occurrence&lt;br /&gt;2. Low customer impact feature failure&lt;br /&gt;3. UI layout or low impact spelling errors</description><link>http://allthingsqa.blogspot.com/2009/12/bug-priority-and-severity-guidelines.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-5045257816523890005</guid><pubDate>Fri, 04 Dec 2009 04:46:00 +0000</pubDate><atom:updated>2009-12-06T16:13:15.800-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Test Plan</category><title>Test Planning 101</title><description>A Test Plan ..&lt;br /&gt;&lt;br /&gt;* is a contract of the tests identified and agreed upon as verification release criteria for a feature. &lt;br /&gt;&lt;br /&gt;* is signed off by all stakeholders (test cases don&#39;t need that level of granular review from teams outside of QA, since they are by-products of the test plan)&lt;br /&gt;&lt;br /&gt;* is written during spec/requirement review. A Test Plan includes, but is not limited to, all use cases in a spec. 
&lt;br /&gt;&lt;br /&gt;* highlights acceptance tests (applicable especially in the agile model)&lt;br /&gt;&lt;br /&gt;Essential components of a test plan:&lt;br /&gt;(Sections that are not self-explanatory have been explained alongside)&lt;br /&gt;&lt;br /&gt;PURPOSE &amp; SCOPE (Scope could be iteration specific in an agile model or release version specific in a waterfall SDLC model)&lt;br /&gt;&lt;br /&gt;COMPONENTS/FEATURES TO BE TESTED&lt;br /&gt;1. Verification (positive tests)&lt;br /&gt;2. Validation (negative tests)&lt;br /&gt;&lt;br /&gt;COMPONENTS/FEATURES NOT TO BE TESTED&lt;br /&gt;&lt;br /&gt;SECURITY TESTS&lt;br /&gt;&lt;br /&gt;LOAD &amp; PERFORMANCE TESTS&lt;br /&gt;&lt;br /&gt;STRESS TESTS (Includes failover testing etc)&lt;br /&gt;&lt;br /&gt;BROWSER COMPATIBILITY TEST MATRIX&lt;br /&gt;&lt;br /&gt;PRODUCTION DEPLOYMENT SMOKE TESTS&lt;br /&gt;&lt;br /&gt;RISKS AND ASSUMPTIONS&lt;br /&gt;&lt;br /&gt;DELIVERABLES (Example: test cases via this tool/URL, bugs via this tool etc)&lt;br /&gt;&lt;br /&gt;HARDWARE &amp; SOFTWARE (ENV) REQUIREMENTS (Especially necessary when more resources than currently available in the QA lab must be prepared before the code is dropped to QA)&lt;br /&gt;&lt;br /&gt;SCHEDULE AND STAFFING (This may be handled independent of the test plan, but essentially lists the QA resources selected for this release)&lt;br /&gt;&lt;br /&gt;RESOURCES (Spec and requirement documents, Engineering/Architecture docs)</description><link>http://allthingsqa.blogspot.com/2009/12/test-planning-101.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-6142634977059070215.post-840431455545704823</guid><pubDate>Thu, 03 Dec 2009 03:06:00 +0000</pubDate><atom:updated>2010-01-09T09:32:16.062-08:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">All Things QA Introduction</category><title>One stop for all things 
QA</title><description>Roughly 6 million things about QA (they don&#39;t tell you)&lt;br /&gt;
&lt;br /&gt;
Topics that I hope to cover via this blog:&lt;br /&gt;
Test planning - Strategy, Tools&lt;br /&gt;
Automation - Strategy, Tool evaluation and instructions&lt;br /&gt;
Performance Testing - Strategy, Tool evaluation and instructions&lt;br /&gt;
Offshore model - Highlights and pitfalls&lt;br /&gt;
Career Track - What you should consider before becoming a QA&lt;br /&gt;
Interview Preparation - Go get &#39;em :)&lt;br /&gt;
Bugs - Filing, prioritizing and tools suggestions&lt;br /&gt;
Agile testing - Learn what the hype is all about&lt;br /&gt;
Helpful Resources - Books and links I&#39;ve found useful&lt;br /&gt;
&lt;br /&gt;
For more interactive discussions on these and other QA topics beyond the scope of the blog, please sign up for access to the chat/forums at allthingsqa.ning.com (I may ask for a LinkedIn public profile link or equivalent before accepting the signup, just to ensure the credibility of opinions posted in the forums)&lt;br /&gt;
&lt;br /&gt;
Please check back/follow the blog for updates..  this is just the beginning :)&lt;br /&gt;
&lt;br /&gt;
Cheers!&lt;br /&gt;
Dharma</description><link>http://allthingsqa.blogspot.com/2009/12/hmm-does-world-need-one-more-blog.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item></channel></rss>