Like all software companies, we at Space-Time Research have juggled customer demands, complex software, very different uses of that software, and ever-changing requirements. This has sometimes resulted in us delivering releases to our customers that were below the quality we aim for, and later than we planned.
In the past, as recently as the 6.3 release of our software, our testing group has passed a release, the software has been delivered to a customer, and then a critical issue has been found. One of the main reasons this happens is that every customer has a slightly different environment. We currently support Solaris, Red Hat Linux, Windows 64 bit, Windows 32 bit, Windows XP and Vista for our client applications, and browsers including IE6, IE7, IE8, Chrome, Firefox and Safari. We read data from any relational database that has a JDBC driver, including Oracle, SQL Server, DB2 and others, plus different types of text files. We provide mapping with ESRI ArcIMS, ArcGIS Server, Google Maps and soon Bing Maps. We test all of these environments, and on our own servers that testing can pass.
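The reason one code base can read from so many different databases is the vendor-neutral JDBC API: typically only the driver jar and the connection URL change between vendors. As a rough illustration (not STR's actual code - the vendor list, hosts and database names below are placeholders), the vendor-specific part can be reduced to a URL template:

```java
import java.util.Map;

public class JdbcUrlSketch {
    // Illustrative JDBC URL patterns for a few common vendors.
    // Hosts, ports and database names are hypothetical placeholders.
    static final Map<String, String> URL_TEMPLATES = Map.of(
        "oracle",    "jdbc:oracle:thin:@%s:%d:%s",
        "sqlserver", "jdbc:sqlserver://%s:%d;databaseName=%s",
        "db2",       "jdbc:db2://%s:%d/%s");

    // Build the connection URL for a given vendor; everything else
    // (statements, result sets) goes through the same java.sql API.
    static String jdbcUrl(String vendor, String host, int port, String db) {
        String template = URL_TEMPLATES.get(vendor);
        if (template == null) {
            throw new IllegalArgumentException("unknown vendor: " + vendor);
        }
        return String.format(template, host, port, db);
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("oracle", "dbhost", 1521, "orcl"));
    }
}
```

With a URL like this, `java.sql.DriverManager.getConnection(url)` selects the matching registered driver, so the rest of the data-access code is identical across vendors.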
Then we get out to the customer site and encounter different environments and constraints. Not everyone can host a Tomcat application, and we might have to integrate with IIS instead. Firewalls might be an issue. Ports might be an issue. The client applications might be run remotely. Even if we don't officially support a configuration, our clients will implement it that way anyway, and it's up to us to sort it out.
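Firewall and port problems are among the cheapest to rule out early. A minimal sketch of a pre-flight connectivity check (a hypothetical helper, not part of the SuperSTAR installer) might look like this:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Hypothetical pre-flight check: can we open a TCP connection to
    // host:port within timeoutMs? A false result suggests a firewall,
    // a closed port, or a service that isn't running yet.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;   // connection accepted
        } catch (IOException e) {
            return false;  // refused, filtered, or timed out
        }
    }
}
```

Running a check like this against the application server's port from the client machine, before any application-level debugging, separates network and firewall issues from genuine software faults.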
Once we have the software successfully installed and configured at a client site, they then build some databases and work out how they are going to analyse or visualise their information. Every client has different types of databases, structures and uses of their information. Our testing doesn't cover every different type of database - we try to, but of course we don't cover everything. So sometimes we miss things - hierarchical summation options being a recent example.
Finally, our customers use the software with their own workflow. We follow a standard workflow with our automated tests, and then we conduct exploratory testing that mimics what a customer would do, but as we are not the customer, we don't always get that exactly right either.
So, how do we improve it? What have we done and what are we doing next?
Firstly, for our 6.5 General Availability Release, Space-Time Research defined the following quality vision:
- Timely, relevant software that works!
- A focus on performance, stability and resiliency.
- Deliver releases of SuperSTAR that are perceived within STR and by our partners and customers as better than the previous release.
All decisions about testing, and then which bugs we fix, and when we release our software, are related back to the quality vision.
We implemented a partnership approach with some selected customers to enable them to test pre-release versions of our software. We conducted fortnightly builds, ran a couple of days of testing, and then made the builds available to the customers. Builds were provided via an FTP site, and customers could download the software and install it in their own test environments. Each customer chose whether or not to take a given build. STR also hosted versions of our web applications so customers could do user-interface testing without having to run their own installation and configuration.
The customers reported bugs, their severity and their own priority via our normal support channel (via email to firstname.lastname@example.org). We triaged the reported bugs regularly, and communicated via conference call with each customer to advise what we intended to do, or to discuss concerns.
The benefits of this approach were clear for each customer involved:
- Integration and configuration issues were ironed out during the pre-release phase.
- Customer-focused testing found issues we would never have found ourselves.
- The end delivery held no surprises.
- We delivered on time to those customers and met their deadlines.
The 6.5 General Availability release is almost complete on all platforms. I'll write a separate blog post and announcement about that.
For our next release, we are implementing a fully agile development process. Another blog on that is coming too! But for our customers, please know that we want to:
- Involve more customers in pre-release testing.
- Collect more sample databases from customers.
- Collect reference data sets from customers so we can validate our statistical routines.
- Use client test beds for complex or unusual environments.
- Open up our change management and support processes so customers can track issues they are interested in.