I recently started a new project in Canada: an enterprise implementation of a commercial order capture and order management product. My initial discussions with the PMO went well until we reached the testing strategy, where I typically cover the different testing phases, the number and availability of test environments, and so on. When we came to the composition and contact information of the system test team, the customer's face went blank...
I discovered this customer did not have a test team, only a business analyst. Needless to say, I was a little taken aback, but I went to work on how to test an enterprise-level order capture and order management system without a formal test team. I also learned there would be no formal test plans either, though there would be a user acceptance test used by the business to make the production deployment decision.
My first thought was to hire a testing firm to come in and execute a traditional testing phase, but I did not have the budget for it, so I needed to be more creative.
As I am currently in the requirements definition phase, I plan to use this time with the technical and business community to document not only the requirements but also the use cases, in a fashion typical of the Rational Unified Process (RUP). Not only will this enable me to confirm that I have requirements in all areas of the business, it will also provide me with an informal set of test scenarios. My technical team will use these scenarios to guide the integration testing and perform an integration test of greater-than-typical depth, as it will include many fringe cases identified early that are usually left to a more formal test phase. Using the development team to execute all the test cases will also flush out aspects of the system configuration that may have been overlooked.
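To make the idea concrete, here is a minimal sketch of how a RUP-style use case can double as an informal test scenario list. The "Capture Order" use case, its flows, and the `UseCase` helper are hypothetical illustrations, not the project's actual requirements:

```python
# Sketch: a use case captured with its main flow and alternate (fringe) flows
# can be enumerated directly into integration test scenarios.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    main_flow: list[str]
    alternate_flows: dict[str, list[str]] = field(default_factory=dict)

    def test_scenarios(self):
        """Yield one scenario per flow: the main path plus each fringe case."""
        yield (f"{self.name} - main flow", self.main_flow)
        for label, steps in self.alternate_flows.items():
            yield (f"{self.name} - {label}", steps)

capture_order = UseCase(
    name="Capture Order",
    main_flow=[
        "Agent authenticates",
        "Agent enters customer and line items",
        "System prices the order",
        "Agent submits the order",
    ],
    alternate_flows={
        "invalid item number": ["Agent enters unknown SKU", "System rejects the line"],
        "credit limit exceeded": ["System flags the order for credit review"],
    },
)

for title, steps in capture_order.test_scenarios():
    print(title)
    for step in steps:
        print(f"  - {step}")
```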
After meeting with the business analyst, I have good confidence that she is well versed in all aspects of the business. The issue becomes finding the time for a single person to effectively test a system of this size. What I came up with is a three-fold strategy:
1) Apply the agile development approach to a commercial software implementation, providing incremental releases of functionality at the soonest possible time and building the system over time
2) Enhance unit and integration testing to include all the use cases collected during the current requirements phase
3) Engage my offshore performance testing group
As is customary with agile custom development, the superset of functionality is divided into small testable releases. These releases build on each other, and the team gets into a rhythm whereby one group is designing, a second is developing the next release, and a third is testing what has been delivered. The entire solution is built one release at a time, with the development and testing periods overlapping. The net result is a longer test cycle, as the sketch below illustrates. The testable release concept is also applicable to commercial software, as subsets of functionality are configured rather than coded.
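A toy back-of-the-envelope illustration of that effect, using the release order described below but with made-up durations:

```python
# Sketch: with overlapping development and testing, each release stays under
# test from its delivery until go-live, so early releases accumulate far more
# test time than a single big-bang test phase would allow. All durations here
# are hypothetical.
DEV_WEEKS_PER_RELEASE = 3
FINAL_TEST_WEEKS = 2  # buffer after the last release is delivered
RELEASES = ["authentication", "order capture", "order management feed", "reverse logistics"]

go_live_week = DEV_WEEKS_PER_RELEASE * len(RELEASES) + FINAL_TEST_WEEKS

for i, release in enumerate(RELEASES, start=1):
    delivered = DEV_WEEKS_PER_RELEASE * i   # release enters test here
    test_weeks = go_live_week - delivered   # under test until go-live
    print(f"{release}: delivered week {delivered}, tested for {test_weeks} weeks")
```

With these numbers, the first release is under test for 11 weeks while a big-bang system test at the end would have had only the final 2-week window.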
I typically start with the initial activities in the business process under development. In this case, that means authenticating users against Active Directory, along with fringe cases such as invalid passwords, password resets, and password changes. As this is an order capture system, the next step will be capturing orders without sending them to the order management system and without generating accounting entries or inventory transactions. This additional functionality is added in subsequent releases, following the order of the business process. The final steps in the business process, and therefore the final releases, cover reverse logistics.
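As a sketch of what the first release's fringe-case tests might look like: the `FakeDirectory` class and its return codes below are hypothetical stand-ins for whatever wraps the Active Directory bind in the real product, not its actual API:

```python
import unittest

class FakeDirectory:
    """In-memory stand-in for Active Directory, for illustration only."""
    def __init__(self):
        self.accounts = {"jsmith": {"password": "Correct#1", "must_change": False}}

    def authenticate(self, user, password):
        account = self.accounts.get(user)
        if account is None or account["password"] != password:
            return "INVALID_CREDENTIALS"
        if account["must_change"]:
            return "PASSWORD_CHANGE_REQUIRED"
        return "OK"

    def reset_password(self, user, new_password):
        self.accounts[user]["password"] = new_password
        self.accounts[user]["must_change"] = True  # force change at next logon

class AuthenticationFringeCases(unittest.TestCase):
    def setUp(self):
        self.directory = FakeDirectory()

    def test_valid_login(self):
        self.assertEqual(self.directory.authenticate("jsmith", "Correct#1"), "OK")

    def test_invalid_password(self):
        self.assertEqual(
            self.directory.authenticate("jsmith", "wrong"), "INVALID_CREDENTIALS")

    def test_unknown_user(self):
        self.assertEqual(
            self.directory.authenticate("nobody", "x"), "INVALID_CREDENTIALS")

    def test_reset_forces_password_change(self):
        self.directory.reset_password("jsmith", "Temp#2")
        self.assertEqual(
            self.directory.authenticate("jsmith", "Temp#2"),
            "PASSWORD_CHANGE_REQUIRED")

if __name__ == "__main__":
    unittest.main()
```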
This approach will extend the test cycle from weeks to months and allow the BA the time to focus on smaller sets of functionality as they roll out. The additional time comes from overlapping the development and testing cycles. A formal system test phase can also be added where such a group does exist. The performance testing group is offshore, so I can engage them inexpensively to cover potential performance issues and to isolate the subsystems where performance falls below expectations.
Given that the integration testing is enhanced and includes the end-to-end scenarios, I will engage the performance testing team in an incremental fashion as well, providing the technical team a longer lead time to address any identified performance issues.
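For illustration, here is a minimal sketch of the kind of per-subsystem timing probe this incremental approach enables; the endpoint URLs, sample count, and two-second budget are all hypothetical placeholders:

```python
# Sketch: time each subsystem's entry point separately so a slow component
# can be isolated release by release, rather than in one end-of-project pass.
import time
import statistics
import urllib.request

SUBSYSTEMS = {
    "order capture": "http://test-env.example.com/orders/ping",
    "pricing": "http://test-env.example.com/pricing/ping",
}
BUDGET_SECONDS = 2.0
SAMPLES = 20

def measure(url):
    """Return response-time samples for one subsystem endpoint."""
    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(url, timeout=10).read()
        except OSError:
            continue  # count only completed calls; failures surface elsewhere
        timings.append(time.perf_counter() - start)
    return timings

for name, url in SUBSYSTEMS.items():
    samples = measure(url)
    if len(samples) < 2:
        print(f"{name}: not enough successful calls to judge")
        continue
    p95 = statistics.quantiles(samples, n=20)[-1]  # rough 95th percentile
    verdict = "OK" if p95 <= BUDGET_SECONDS else "BELOW EXPECTATIONS"
    print(f"{name}: p95={p95:.2f}s -> {verdict}")
```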
The risks in this approach are:
1) The interfaces the customer is building may not be available in a time frame that supports the extended testing cycle
2) Fringe test cases not identified during requirements definition may not surface until UAT
3) The stabilization period after the production date may be lengthy, with a greater-than-tolerable issue rate, given the less-than-ideal testing
4) I am relying on the testing expertise of a single individual and my development team, neither of whom are testers by profession
The above is what I plan to execute; I will write a follow-up post in the summer of 2010 on how it went.