Friday, September 26, 2008
Virtualization benefits for QA
Increasingly, software development companies are focusing on products that can run on multiple platforms and integrate well with other applications. With early visibility of working software, time-to-market, and product quality being key concerns, there has been a paradigm shift towards agile development methodologies. This demands the availability of a QA environment in the early stages of the SDLC.
The demand for early availability of complex QA environments poses a major challenge related to the associated appraisal cost (the cost of QA).
Virtualization overcomes this challenge by providing the ability to simulate different environments and experiment with different scenarios, without significant expansion of hardware and physical resources.
This article discusses the benefits of virtualization for QA.
2 What is Virtualization?
Virtualization is the creation of a virtual version of something, such as an operating system, a server, a storage device or network resources.
Using virtualization, different operating systems and software can be configured on a single machine instance. This helps in testing the application against different environments and experimenting with different scenarios, without significant expansion of hardware and physical resources. Any defects related to operating-system or browser incompatibility can be detected in the early stages of the SDLC.
3 The Reality
By and large, development shops are forced to buy servers for testing purposes. This results in organizations having a plethora of servers that are utilized less than 20% of the time. They're only required when the QA team needs to test a new release against a particular configuration. The disadvantages of this approach are:
· Increased hardware costs
· High server-to-staff ratio
· Reduced ROI
· Because the cost involved is large, the decision to procure hardware is generally delayed, and the team has to wait a long time for hardware on which to perform realistic testing.
· Difficulty in testing the software under varying hardware configurations, e.g. validating how the system works when the memory allocated is increased or decreased.
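To make the cost argument concrete, here is a back-of-the-envelope consolidation estimate in Python. Every figure (server count, cost per server, target utilization) is a hypothetical assumption for illustration, not data from this article:

```python
import math

# Back-of-the-envelope server consolidation estimate.
# All figures are hypothetical assumptions for illustration.
physical_servers = 20      # dedicated test servers today
avg_utilization = 0.20     # each utilized less than 20% of the time
cost_per_server = 5000     # purchase plus upkeep, per server

# Capacity actually consumed, expressed in "full servers".
consumed = physical_servers * avg_utilization            # 4.0

# Host the same workloads as VMs, keeping hosts around 70% busy.
target_utilization = 0.70
hosts_needed = math.ceil(consumed / target_utilization)  # 6

savings = (physical_servers - hosts_needed) * cost_per_server
print(f"{hosts_needed} hosts instead of {physical_servers}, saving {savings}")
```

Even with generous headroom on the virtual hosts, the same test workload fits on a fraction of the hardware.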
4 Major Benefits of Virtualization
Development and Test Lab Infrastructure:
· Significant cost benefits can be achieved by pooling servers, networking, storage, and other resources and sharing them across development and test teams.
· It also provides on-demand access to a shared library of complex environments, giving developers and testers instant use of the resources they need.
· The server-to-staff ratio is greatly reduced, which also helps accelerate the development lifecycle.
Portability Testing
· A tester or developer can test the application for multiple platforms with a single machine.
· There is an additional advantage if developers perform the portability testing: the feedback cycle is shortened. If the code does not work correctly, the developer knows what has changed since the last working build and can figure out what needs to be fixed.
· An additional benefit of this approach is that developers can develop for multiple platforms and multiple browsers with no additional purchase of hardware.
Platform Standardization:
· Virtualization software allows the virtual hardware configuration to be varied; e.g., the amount of RAM available to an application can be increased or decreased to check the application’s performance under varying conditions. This helps in testing an application on a standard platform with reduced resources, and thereby helps in determining the memory requirements for an application to perform optimally.
Defect Snapshots
· Virtualization gives QA engineers an ability that they have historically lacked: capturing the entire state of the machine at the point where a bug manifests. This is possible because the entire VM can be saved at any time in its current state to a single large file. Once saved this way, the VM can be made available to developers who can now see for themselves how the problem manifests. At sites that use defect-tracking software, it is easy to place a URI to the saved VM in the defect report, and thereby greatly improve the quality of the information exchanged between QA and the development teams.
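As a sketch of how such a URI might be embedded, the helper below formats a defect-tracker entry linking a saved VM file. The field names, tracker schema, and file-share path are all hypothetical examples, not a real tracker API:

```python
# Illustrative sketch: attaching a saved-VM snapshot URI to a defect
# report so developers can reproduce the exact machine state.
# Field names and the file-share path are hypothetical examples.

def defect_with_snapshot(defect_id, summary, snapshot_file,
                         share="file://qa-share/snapshots"):
    """Build a defect-tracker entry that links the saved VM image."""
    return {
        "id": defect_id,
        "summary": summary,
        # URI pointing at the single file holding the full VM state
        "vm_snapshot_uri": f"{share}/{snapshot_file}",
    }

entry = defect_with_snapshot(4711, "Crash on login with IE6",
                             "bug-4711-winxp-ie6.vmdk")
print(entry["vm_snapshot_uri"])
```

A developer opening the defect can then boot the linked image and see the failure exactly as QA captured it.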
Centralized Configuration Management
· Complete virtual machine environment is saved as a single file; easy to save, move, and copy.
· Standardized virtual hardware is presented to the application to ensure compatibility.
5 Virtualization trend in Software Industry
· The virtualization market will grow to $11.7B by 2011.
· More than three-quarters of all companies with 500+ employees are deploying virtual servers today.
· Customer satisfaction is high.
· Survey respondents currently using server virtualization technologies expect that 45% of the new servers they purchase next year will be virtualized.
· More than 50% of all virtual servers are running production-level applications, including the most business-critical workloads.
· Source: IDC
6 Implementation approach
I would suggest the following approach for implementing virtualization in any organization:
· Determine the Capabilities Your Organization Needs: Build a requirements matrix and determine the types of testing and usage patterns typically seen in your organization.
· Software for Virtualization: Some of the popular virtualization products are:
o VMware
o Microsoft
o XenEnterprise
o SWsoft Virtuozzo
o Virtual Iron
Depending on the business and technical requirements at the organizational level, one of these can be chosen.
· Evaluate Total Cost of Ownership (TCO): Build a Total Cost of Ownership model. Be sure to include software, hardware, implementation, and administration costs.
· Conduct a Trial Project: Conduct a proof-of-concept or trial project using your short-list of solutions.
· Integrate into the QA Process: Once you’ve tested and selected your solution, evaluate your current test practices and update them to reflect the new virtual-lab capabilities, and invest in training your team before rolling out the solution broadly.
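The TCO model mentioned in the steps above can be sketched as a simple comparison. Every cost figure below is an assumed placeholder to be replaced with real quotes from vendors:

```python
# Minimal TCO sketch comparing a physical vs. a virtualized test lab.
# Every figure is an assumed placeholder, not real pricing data.

def tco(hardware, software, implementation, admin_per_year, years=3):
    """Total cost of ownership over a planning horizon."""
    return hardware + software + implementation + admin_per_year * years

physical = tco(hardware=100_000, software=10_000,
               implementation=5_000, admin_per_year=30_000)
virtual  = tco(hardware=40_000, software=25_000,   # fewer hosts, extra licenses
               implementation=15_000, admin_per_year=18_000)

print(physical, virtual, physical - virtual)
```

Note that virtualization typically shifts cost from hardware and administration into software licensing and implementation, which is why all four categories belong in the model.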
7 Summary
There are many good reasons to adopt virtualization. The cost and testing benefits alone (portability, verification on the correct platform, testing under constrained physical resources, etc.) make virtualization the preferred option. On-demand availability and the defect-snapshot capability are additional productivity boosts.
Tuesday, March 11, 2008
Cost of quality
In order to maximize the ROI of any quality framework, organizations must track the cost of quality continuously and maintain it at an optimum level. What is this optimum level, and how do we find it? Before delving into this, let’s understand what cost of quality is.
The cost of quality of a software product comprises four components: prevention costs, appraisal costs, internal failure costs, and external failure costs. Each of these is discussed in detail below.
Prevention costs are investments made ahead of time in an effort to ensure conformance to requirements. Examples include activities such as orientation of team members, training, quality planning, and the development of project standards and procedures.
Appraisal costs include the money spent on the actual testing activity (unit, integration, and system testing). Any and all activities associated with searching for errors in the software and associated product materials fall into this category. This includes all testing: by the developers themselves, by an internal test team, and by an outsourced software test organization. It also includes all associated hardware, software, labor, and other costs. Once a product is in the coding phase, the goal is to do the most effective appraisal job possible, so that internal failure work is streamlined and well managed and external failure costs are prevented from skyrocketing.
Internal failure costs are the costs of coping with errors discovered during development and testing. These are bugs found before the product is released. As we mentioned previously, the further in the development process the errors are discovered, the more costly they are to fix. So the later the errors are discovered, the higher their associated internal failure costs will be.
External failure costs are the costs of coping with errors discovered after the product is released. These are typically errors found by your customers. Examples include processing customer complaints, customer returns, warranty claims, and product recalls. These costs can be much higher than internal failure costs because the stakes are much higher. Errors at this stage can also be costly in terms of your company’s reputation and may lead to lost customers.
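Tracking the four components amounts to a simple sum. The figures below are hypothetical per-release costs, chosen only to show the bookkeeping:

```python
# Cost of quality = prevention + appraisal + internal failure + external failure.
# All figures are hypothetical, in currency units per release.

coq = {
    "prevention": 8_000,         # training, standards, quality planning
    "appraisal": 20_000,         # unit, integration, and system testing
    "internal_failure": 15_000,  # fixing bugs found before release
    "external_failure": 40_000,  # complaints, returns, warranty claims
}

total = sum(coq.values())
failure_share = (coq["internal_failure"] + coq["external_failure"]) / total
print(total, round(failure_share, 2))
```

When the failure share dominates the total like this, it is usually a signal that more investment in prevention and appraisal would pay for itself.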
Organizations are generally reluctant to invest in prevention costs because they rarely have a quantifiable way to evaluate what their "failure" costs really are. Studies have shown that the later in the process quality is worked into the product or service, the higher the cost of quality. For example, if a system were delivered untested to a customer, the cost of quality up to that point would be minimal. However, once the system went live and the inevitable bugs appeared, the operational costs to the customer, the rework and damage-control costs, and the resulting cost to the professional reputation of the delivery organization would far outweigh any prevention or appraisal costs that might have been incurred upfront.
[Figure: cost of quality plotted against quality of performance]
The diagram illustrates the relationship between the cost of the product and the quality of performance.
It highlights three important points.
1. Insufficient investment in quality management results in excessively high costs related to defect correction.
2. There is a point above which additional investment in quality management proves uneconomical.
3. There is a level of service quality at which the total cost of quality is minimized. Finding this optimum level and then operating at, or above, this level should be our goal. In order to operate at this optimum value, the organization’s accounting system needs to track all the components of the cost of quality.
The conclusion is that we should try to reduce the overall cost of each product or service by establishing the optimum level of prevention and appraisal costs that minimizes the resultant failure costs. The net result of quality improvement should be a re-allocation of costs across the cost-of-quality categories, resulting in a reduction in the overall cost of quality.
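The optimum can be illustrated numerically with an assumed cost model, where conformance cost (prevention plus appraisal) rises with the quality level and failure cost falls. The shapes and coefficients of these curves are assumptions for illustration only:

```python
# Illustrative cost-of-quality curve: find the quality level that
# minimizes total cost. The cost functions are assumed, not empirical.

def conformance_cost(q):   # prevention + appraisal, grows with quality level q
    return 10 * q * q

def failure_cost(q):       # internal + external failure, shrinks as q rises
    return 1000 / (q + 1)

levels = [q / 10 for q in range(1, 101)]   # quality levels 0.1 .. 10.0
optimum = min(levels, key=lambda q: conformance_cost(q) + failure_cost(q))
print(optimum)   # 3.0
```

Below the optimum, failure costs dominate; above it, extra conformance spending costs more than the failures it prevents, which is exactly the trade-off the diagram describes.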
Monday, March 10, 2008
Why Software projects fail?
We all have heard the following statement before in project management books and literature, at seminars, symposia, and in professional certification classes.
“25 percent of all software projects fail!” or “80 percent of all software projects fail to meet schedule and cost objectives”
It's not uncommon for projects to fail. True project success must be evaluated on all three components: scope, time, and cost. A failure on any one of these components can cause a project to be considered a "failure."
There are many reasons why projects fail; the number of reasons can be infinite. However, if we apply the 80/20 rule, the most common reasons for failure are the following.
Lousy Project Management
- Inadequately trained and/or inexperienced project manager
- No formal project management methodologies and best practices aligned to the company's specific needs are used to assist project performance.
- A project plan that is non-existent, out of date, incomplete, or poorly constructed; not enough time and effort spent on project planning
- Inadequate communication, tracking, and reporting; not reviewing progress regularly or diligently enough
- Ineffective scope, time, and cost management
- Lack of leadership and/or communication skills
- Ignoring project warning signs; no risk management
- Poorly defined roles and responsibilities
- Goals and success criteria of the project not clear to the team
- Team conflicts resulting in major issues.
Project managers need to ensure that project cost, scope, and time are optimally balanced to achieve the desired deliverables in the desired time. Effective planning and monitoring are necessary to get the project off to a strong start. However, project managers must remain aware of and anticipate change, and perform the re-planning that is necessary throughout the project. The project manager should identify warning signs, such as an anticipated delay in an activity or the unavailability of a resource for a period that affects the critical path, at an early stage and take measures to mitigate the risk.
Failure to set and manage expectations
- Mismatch in expectations for schedule, budget and deliverables between customer and the project team.
Project managers should collaborate with key stakeholders to define reasonable project schedules and deadlines, to ensure that business conditions and requirements are met and to better manage expectation levels. Additionally, everyone involved in the project effort should attend periodic joint sessions to ensure that the same communications on project expectations reach everyone.
Inadequate Requirements management
- No clear definition of the project's benefits and the deliverables that will produce them.
- Failure to adequately identify, document and track requirements
- Scope creep (increase in the scope) during the life cycle of the project not properly managed and integrated with the change control systems.
- Lack of Change Management Process.
Scope changes can significantly impact the cost, schedule, risks and quality of the entire effort. Project managers should watch out for early and frequent changes to the project scope.
Project managers should collaborate directly with key project stakeholders to define specific detailed project requirements and deliverables. Defining specific project requirements is necessary to maintain alignment of project tasks to desired business outputs, as well as to ensure that projects have clear and specific project objectives established.
A formal and structured change management process is necessary to ensure effects of any changed requirements are properly analyzed, prioritized, and balanced according to the project’s budget, schedule, and scope.
Inadequate budget forecasting, due to any of the following reasons:
- Unrealistic schedule/timelines committed to the customer
- Poor effort estimation, scheduling and budgeting.
Inappropriate staffing
- Required resources underestimated and scheduled inaccurately.
- Managers failing to provide timely, adequate, and properly trained resources.
Technology
- Technical Lead inexperienced in the technology or new to the domain.
- Technology or Architecture chosen not in alignment with the business needs or project goals.
Software development methodology
- Absence of a sound development methodology.
- Software development methodologies exist but are not religiously followed.
Insufficient quality assurance and quality control
- Absence of testing methodologies.
- Very little or no time allocated for verification and validation.