Intelligent Software Change Management: Keeping up with changing user requirements
Testing should get faster the more you do it, so why is this not the case?
The more you test, the faster testing should get. You’ve created more test cases, and have already gone through the labor of finding or creating the data needed to execute them.
Testing new functionality should then be as simple as choosing from an existing library of assets, and creating what little new assets are needed.
However, the opposite is usually true, and testing takes longer the further along a product or application is in its journey to production. There are several reasons why this might be the case.
Testing assets are unreactive. Requirements, test cases and data are typically stored in static formats, with little traceability between them, so the assets cannot flex when something changes.
There is therefore no automated way to identify the upstream or downstream impact of a change on a system, and this has to be identified manually. This is time-consuming and error-prone, and gets more difficult as a system becomes more complex.
Lack of traceability. Test cases and data have to be updated to reflect the change made to the requirements. This is similarly manual and slow, and at one financial institution we worked with it took two testers two days to check the existing tests after a change was made.
Not enough re-usability. When requirements change, existing assets can rarely be re-used. A data-driven approach to testing is rare, and data is normally found or created with a specific test case in mind. When the real world moves on or the functionality changes, the data is out of date.
Test cases are usually linear, where each test case is a distinct path through the system’s logic. With new or changed functionality, then, you cannot re-use an existing or similar test case’s steps, and a wholly new test must be created.
Making testing more reactive to change
To keep up with changing business requirements, testing needs to be made more reactive, with a greater degree of traceability and re-usability introduced. This can be achieved with intelligent Software Change Management, which starts with the requirements.
In his recent Bloor Research white paper, Automated Testing: Coping with Change, Philip Howard tackles many of the issues set out above. His suggestions chime well with our experience with clients, and we believe that they can be put into practice with flowchart modelling.
Impact analysis. Howard suggests that a properly “automated” test framework will identify the implications of a change which has been formally captured, and, more than this, will identify all relationships and dependencies system-wide which need to be re-tested.
With flowchart modelling, changes are formally captured when a new piece of logic is added, or an existing block is edited. Automated algorithms can therefore be applied to identify the exact impact this will have across components. Any affected ‘paths’ through the system’s logic are delineated, so that testers know exactly what to re-test.
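To make this concrete, here is a minimal sketch of the idea, assuming the flowchart is modelled as a directed graph in which each node is a block of logic and each start-to-end path is a test case. The flow and function names (`FLOW`, `all_paths`, `affected_paths`) are illustrative, not taken from any particular tool.

```python
# A hypothetical approval flow, modelled as an adjacency list.
# Each key is a block of logic; its value lists the blocks it can lead to.
FLOW = {
    "start":    ["validate"],
    "validate": ["approve", "reject"],
    "approve":  ["notify"],
    "reject":   ["notify"],
    "notify":   [],  # no successors: an end block
}

def all_paths(flow, node="start", path=None):
    """Enumerate every path from the start block to an end block."""
    path = (path or []) + [node]
    if not flow[node]:  # reached an end block
        return [path]
    paths = []
    for nxt in flow[node]:
        paths.extend(all_paths(flow, nxt, path))
    return paths

def affected_paths(flow, changed_block):
    """Impact analysis: only paths through the edited block need re-testing."""
    return [p for p in all_paths(flow) if changed_block in p]

print(affected_paths(FLOW, "approve"))
# [['start', 'validate', 'approve', 'notify']]
```

Editing the `approve` block flags only the one path that passes through it, so the rejection path does not need to be re-tested.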
Introduce traceability and make testing more reactive. With traceability back to the requirements, existing tests, automated tests, and data can be updated automatically when the requirements change.
The thousands of existing test assets which typically exist can therefore be scanned, identifying duplicate, out-of-date or invalid test components which need to be updated.
With a flowchart model, test cases are simply paths through the flow. When the model is updated, mathematical algorithms can be used to automatically identify and update any affected test cases, with any new tests needed automatically generated.
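One way to picture this update step is as a set difference between the paths in the old model and the paths in the edited model: unchanged paths are kept, paths that no longer exist are retired, and genuinely new paths become newly generated tests. The example below is a sketch under that assumption; the flow edit (an inserted "fraud_check" block) and all names are hypothetical.

```python
# Existing test suite: each test case is a path through the old model.
old_suite = [
    ("start", "validate", "approve", "notify"),
    ("start", "validate", "reject", "notify"),
]

# Paths regenerated after a model edit that inserts a hypothetical
# "fraud_check" block before approval (rejection is left untouched).
new_paths = [
    ("start", "validate", "fraud_check", "approve", "notify"),
    ("start", "validate", "fraud_check", "reject", "notify"),
    ("start", "validate", "reject", "notify"),
]

def diff_test_cases(existing, regenerated):
    """Classify tests as kept, obsolete, or newly required after a change."""
    old, new = set(existing), set(regenerated)
    return {
        "keep":     old & new,   # still-valid paths: nothing to do
        "retire":   old - new,   # paths the edit invalidated
        "generate": new - old,   # new paths needing new tests
    }

result = diff_test_cases(old_suite, new_paths)
```

Here one existing test survives, one is retired, and two are generated, rather than the whole suite being rebuilt by hand.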
Because an object-oriented approach is being taken, only the directly affected components need to be updated. Updating a single block therefore updates every existing test case that uses it, and executing the existing and new components in varying sequences provides maximum coverage.
Re-usability. The ideal is, of course, re-usability. Rather than create a new asset each time, testers should strive to have a library of re-usable test components, with data stored separately as re-usable assets.
As Howard describes, this library can then be scanned when a change is made, identifying any existing assets which can be used to validate that a change has been successfully implemented.
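As a rough sketch of that scan, suppose each reusable component in the library is tagged with the logic blocks it exercises; finding candidates for validating a change is then a simple overlap check. The library contents and names here are invented for illustration.

```python
# Hypothetical component library: each reusable test component is tagged
# with the set of logic blocks it exercises.
LIBRARY = {
    "login_component":     {"start", "validate"},
    "approval_component":  {"validate", "approve", "notify"},
    "rejection_component": {"validate", "reject", "notify"},
}

def reusable_assets(library, changed_blocks):
    """Return components that already exercise any of the changed blocks."""
    return sorted(
        name for name, blocks in library.items()
        if blocks & changed_blocks  # non-empty overlap with the change
    )

print(reusable_assets(LIBRARY, {"approve"}))
# ['approval_component']
```

A change to the approval logic surfaces the one component that already covers it, which can be re-run to confirm the change was implemented successfully before any new asset is created.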
[Image: Pixabay]
