Here is a post from Adrian Mirabelli, a Customer Quality engineer at STR. The idea for a bug safari came out of a presentation at the ANZ Test Board Conference in March 2009.
Bug safaris at Space-Time Research
For release 6.5, the STR quality team introduced “bug safaris” as a way to effectively and quickly find software bugs.
A bug safari involves most of the organisation, including development, design, and management, in locating bugs. Test cases or scripts are not necessarily provided, but guidance should be given. Certain areas are targeted and interruptions are minimised to increase effectiveness. Note that a bug safari can be held multiple times over a release.
Planning is the key!
At the beginning of a bug safari, the quality manager invites the participants to a planning session or “kickoff”. The purpose of the kickoff is to define:
- The objectives of the session – including communicating what is being done; all participants should be very clear on this by the end of the kickoff
- Areas to test and who will do it – this is important to ensure coverage and no wasteful duplication
- Configuration required and who will do it
- Test cases or documentation required and who will do it – the structure of these artefacts should be agreed; for example, is it a checklist or a matrix that is filled out on the fly?
- Some ideas of how to test – do something unusual or atypical, test boundary values
- Duration of session
- Method for reporting issues and bugs
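The area assignments agreed at the kickoff can be sanity-checked mechanically. A minimal Python sketch, using hypothetical area and participant names (not STR's actual products or staff), that flags uncovered areas and participants spread across multiple areas:

```python
# Hypothetical kickoff plan: areas under test mapped to assigned participants.
# All names here are illustrative only.
plan = {
    "Import wizard": ["Ana", "Raj"],
    "Charting": ["Mei"],
    "Export to Excel": ["Raj"],
}

target_areas = {"Import wizard", "Charting", "Export to Excel", "Table builder"}

# Coverage check: every targeted area should have at least one assignee.
uncovered = target_areas - set(plan)

# Duplication check: note participants assigned to more than one area.
assignments = {}
for area, people in plan.items():
    for person in people:
        assignments.setdefault(person, []).append(area)

overloaded = {p: areas for p, areas in assignments.items() if len(areas) > 1}

print("Uncovered areas:", sorted(uncovered))
print("Participants on multiple areas:", overloaded)
```

Even a checklist on a whiteboard serves the same purpose; the point is that coverage and duplication are checked before testing starts, not discovered afterwards.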
Typically the system configuration and documentation are prepared by the testing team, with help from technical resources if required. Login information is distributed in advance. The quality manager decides how results will be reported, including the submission of bug reports, and therefore plays a crucial role in this testing.
At the agreed date and time, the testing itself is performed, typically for between 45 minutes and two hours. The session is generally intense in nature, as the mission is to find problems. The system testers are usually each assigned to a product and work with the participants to help identify issues and troubleshoot problems. They can also actively test the system themselves, depending on what was agreed at the kickoff session.
At the conclusion of the session, results are tabulated and any bugs found are raised in the incident tracking system.
Within the next couple of days at the latest, a debriefing is held with the participants, including the system testers. The quality manager reports on the bugs found, and discussion covers:
- The perceived level of success of the bug safari
- What can be improved for next time
- What worked well this time
- General feelings and sentiments
- Required actions and action owners.
Why not just use structured tests?
Procedural test cases, which follow a step-by-step test script, are excellent for communicating to the wider audience how you are testing and for obtaining buy-in and feedback from stakeholders. In my experience, however, you find bugs by exploring the software, not just by checking the expected results of a test case. Further, bugs surface when testing particular sequences of data, mouse clicks, configurations, operating systems and more, and it is expensive to write test cases for every combination.
Why involve people outside the testing team?
You and I are testing software every day. Just by using software, you are testing whether it satisfies your needs and your purpose. Everyone interacts with software differently and is likely to try things out in various ways, some typical and some strange, so such testing sessions really verify that the software is “fit-for-purpose”. They also give fresh eyes the opportunity to look at and question the software, and to test other important elements such as usability and compatibility. They also increase the participants’ knowledge of the software, whilst testing the accuracy of the configuration and documentation, including the quality of test harnesses and pre-defined scripts.
What are the benefits?
Bug safaris are a form of “exploratory testing” with more tangible results. The results can easily be reported on charts or whiteboards and transferred to the test management and tracking system.
We allow the participants freedom of thought in executing tests. In this way we can find new bugs, as new combinations of tests are exercised. Quality therefore improves, as such bugs can be addressed and fixed according to their priority. Participants are encouraged to investigate any strange behaviour they find, perform further tests, and ask questions.
Involving everyone in testing, not just the test team, improves the visibility of the test organisation and the importance of testing, whilst sharing the ownership of “quality” among all people involved in the development of the software from concept to implementation.
By running such sessions, we can capture and report many metrics, for example:
1. Number of bugs or issues found per session
2. Number of sessions run
3. Areas covered in the session with combinations
4. Time required to configure
5. Time required to test
6. Time required to investigate issues
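Once results are captured consistently, metrics like these fall out of a simple tally. A minimal Python sketch over hypothetical session records (the field names and figures are illustrative assumptions, not STR data):

```python
# Hypothetical per-session records from a bug safari.
sessions = [
    {"area": "Import wizard", "bugs": 7,
     "config_min": 30, "test_min": 90, "investigate_min": 45},
    {"area": "Charting", "bugs": 3,
     "config_min": 20, "test_min": 60, "investigate_min": 25},
]

# The metrics listed above, computed from the records.
sessions_run = len(sessions)
total_bugs = sum(s["bugs"] for s in sessions)
bugs_per_session = total_bugs / sessions_run
total_test_min = sum(s["test_min"] for s in sessions)

print(f"Sessions run: {sessions_run}")
print(f"Bugs found: {total_bugs} ({bugs_per_session:.1f} per session)")
print(f"Testing time: {total_test_min} minutes")
```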
The key is that people work together and discuss openly the software and what it does.
What are the challenges?
This method of testing is still relatively new, and is neither a perfect method nor a substitute for traditional testing methods. The key is to balance how much testing is structured versus unstructured, whilst ensuring that the testing results are captured sufficiently. Such tracking may require the participant to complete a spreadsheet, matrix or running sheet.
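A running sheet can be as simple as a few columns filled in as testing proceeds. One possible shape, sketched with Python's csv module; the column names and entries are assumptions for illustration, not a prescribed format:

```python
import csv
import io

# Illustrative columns for a bug-safari running sheet.
FIELDS = ["time", "area", "action", "observation", "bug?"]

rows = [
    {"time": "10:05", "area": "Charting",
     "action": "Resized window mid-render",
     "observation": "Legend overlaps axis labels", "bug?": "yes"},
    {"time": "10:12", "area": "Charting",
     "action": "Exported an empty chart",
     "observation": "Blank file written with no warning", "bug?": "maybe"},
]

# Write the sheet to an in-memory buffer (a real session would use a file).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

# At debrief time, pull out the entries flagged for the tracking system.
buf.seek(0)
flagged = [r for r in csv.DictReader(buf) if r["bug?"] in ("yes", "maybe")]
print(f"{len(flagged)} entries to review")
```

The same columns work equally well in a shared spreadsheet; what matters is that every observation is timestamped and flagged, so nothing found during the session is lost before the debrief.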
What is being done in future?
STR will run bug safaris in future releases. Bug safaris have been shown to find bugs, including important ones, and are continuing to win favour in the testing industry as their true benefits are realised. Introducing bug safaris has the advantage of not requiring major cultural or system changes, or significant start-up costs.