Setting up a continuous delivery process can be quite a monumental task, especially when retrofitting it to an existing product. However, I'm quite lucky with my current project in that we're at the very start, so there are no existing processes in place.
Once you've read this, I'd love feedback on my solution and ideas for improvements if you have any.
When I joined the project the developers had already put continuous integration and deployment processes into place using Team Foundation Server (TFS) and Octopus. So the challenge placed before me was to extend this process out to a full continuous delivery plan complete with testing, reporting, a full end-to-end lifecycle overview, and of course, happy project, change and release managers.
This is taking a bit of thinking. I can easily do the geek thing and get down into the nitty gritty of setting all of this up but the business needs to see how it works at a higher level, and they also need to be given confidence in the solution provided. So this means writing documents.
I started out by figuring out the environments we'd need. Initially I requested far too many and, after being told 'No' by the infrastructure team (boo sucks!), I had a rethink and came up with the following:
- CI - continuous integration server - already in place.
- DQT - automation server - already in place (just needed to acquire it for my own nefarious means). Needs to match production.
- QA - New - needs to match production.
- Staging - New - needs to match production.
- Production - New - needs to match pro... oh, nevermind...
Product development needs to pass each quality gate in order to move on to the next phase.
Quality Gate 1 - Requirements & Design
The team already had a quality strategy in place for the requirements. All the requirements are reviewed by the test team and they also require business sign-off. The business analysts work closely with the business and project team to ensure accuracy of requirements. Architectural governance and design reviews ensure adherence to best practice and satisfaction of requirements.
Quality Gate 2 – Developing the code
This gate builds on the criteria already set up by the development team. All the developers working on building the solution need to follow a strict set of coding standards from the outset, the aim being a high-quality product from the beginning of the application development lifecycle. Each check-in of code must be preceded by a code review. Prior to a code review, a developer must build the project and have it pass code analysis. The code is analysed by a code analysis tool that enforces strict coding standards rules. Any overrides or suppressions of code analysis errors or warnings must be discussed as part of the code review. All code checked in to source control is linked to either a requirement, a change request, a task or an issue.
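To make the gate rules above concrete, here's a minimal sketch of the check-in decision logic as a single function. This is purely illustrative - it isn't a real TFS check-in policy, and the parameter names and messages are my own invention:

```python
def checkin_allowed(build_passed, analysis_errors, suppressions,
                    review_done, linked_work_item):
    """Decide whether a check-in may proceed under the gate-2 rules:
    clean build, clean code analysis, suppressions discussed in review,
    review completed, and a linked work item (requirement, change
    request, task or issue)."""
    if not build_passed:
        return False, "build failed"
    if analysis_errors:
        return False, "code analysis errors must be fixed first"
    if suppressions and not review_done:
        return False, "suppressions must be discussed in a code review"
    if not review_done:
        return False, "code review required before check-in"
    if not linked_work_item:
        return False, "check-in must be linked to a work item"
    return True, "ok"
```

The point of writing it down like this is that every rule becomes an explicit, testable condition rather than tribal knowledge.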
Quality Gate 3 – Automated Build, Source Code & Version Control
The development team automate the build process. When code is checked in, a build is triggered. This automated build process re-runs the code analysis tool, performs unit tests and smoke tests to check that the build process has run correctly and that the software can be deployed to another environment. The scripts that run the automated build process are also subject to Source Code & Version Control, so that any changes to the build scripts are picked up and tested as thoroughly as any other code changes.
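The build process above is really just an ordered sequence of steps that stops at the first failure. Here's a rough sketch of that shape; the step functions are stand-ins for the real code analysis, unit test and smoke test tooling:

```python
def run_pipeline(steps):
    """Run each (name, step) pair in order; stop at the first failure
    and report which step broke the build."""
    for name, step in steps:
        if not step():
            return f"FAILED: {name}"
    return "build ready for deployment"

# Stand-in steps mirroring the gate-3 sequence; real ones would shell
# out to the analysis tool and test runners.
steps = [
    ("code analysis", lambda: True),  # re-run the static analysis rules
    ("unit tests",    lambda: True),  # fast, isolated tests
    ("smoke tests",   lambda: True),  # can the build actually be deployed?
]
```

Because the build scripts themselves live in source control, a change to this sequence goes through exactly the same review and testing as any other code change.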
Quality Gate 4 - Deployment
Deployments are automated to ensure consistency and accuracy. Once deployed to an environment, automated tests need to run to check that the deployment into the new environment was successful. If the deployment is successful, the deployment tool will pass a success message to the automated test tool that will trigger automated smoke and integration tests. If these tests pass, then automated release notes should be created that include all test results, code changes and any build script changes.
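The automated release notes mentioned above are essentially an aggregation job: collect test results, code changes and build-script changes into one document. A minimal sketch, with illustrative data shapes (in practice these would come from the build and deployment tools):

```python
def release_notes(version, test_results, code_changes, script_changes):
    """Assemble release notes for a deployment: test results, code
    changes and build-script changes, as described in gate 4."""
    lines = [f"Release notes for {version}", "", "Test results:"]
    lines += [f"  {name}: {'PASS' if ok else 'FAIL'}"
              for name, ok in test_results]
    lines += ["", "Code changes:"]
    lines += [f"  {c}" for c in code_changes]
    lines += ["", "Build script changes:"]
    lines += [f"  {c}" for c in (script_changes or ["(none)"])]
    return "\n".join(lines)
```

Later gates then append to these notes rather than starting from scratch, so by the time a release reaches staging its notes carry the full test history.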
Quality Gate 5 - QA Automation
If the automated smoke and integration tests pass, then the full automated suite of test cases should run. The tool should be set up to run automated regression, smoke, use case, performance, load and integration tests. If these tests pass, the release notes should be updated to show the testing results obtained and the process can then move forward.
Quality Gate 6 - QA Manual Testing
After being successfully tested on the automation server, a deployment is made to the QA server. This allows for another test of the deployment process to a new environment. Automated smoke and integration tests should be performed to ensure the deployment was successful prior to commencing manual testing. Manual tests are performed on new functionality, changes and defect fixes; these tests are then automated so that they are covered in the next release. Using this process ensures the manual testing is extremely focused and detailed. Two versions of the software should be available on the QA environment, one always a version behind the current one. This will allow for manual regression/comparison testing should an issue be found in the current version. Once manual testing has been completed on this environment, the release notes should be updated to show the testing results obtained, and the process can then move forward to the staging server.
Quality Gate 7 - Staging/Pre-Prod
Here we perform another test of the deployment process. Once deployment to this environment has been completed, automated smoke and integration tests should be performed. Release notes should be updated to show that these tests have been performed and the environment is handed over to the business for final evaluation and approval. Only when the business approves can the process move to the next phase.
Quality Gate 8 - Production
Once the business approves the staging/pre-prod deployment, the process can then deploy into production. Again, automated smoke and integration tests should be run across the deployment, but this time without affecting any underlying data - for rather obvious reasons.
Here's the pretty picture I created in Visio to match the words above:
That was the plan, next was figuring out a way to do all of the above. And so began the multiple weeks of evaluating software.
The tool needed to provide an end-to-end lifecycle view, it needed to hook into TFS and Octopus, it needed to manage manual and automated testing, and test automation needed to be powerful and flexible but still easy to set up and understand. To cut a long story short... we found SpiraTeam and Rapise. They do everything we want them to do, and more.
And now we're onto the next phase, setting up the tools. I've managed to get TFS, SpiraTeam and Rapise all hooked up, the next challenge is setting up the interaction between Octopus and SpiraTeam using the command line interface in SpiraTeam. Once that is set up we'll have an end to end automation from coding through to deployment through to automated testing through to deployment to the next environment.
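The Octopus-to-SpiraTeam handoff will end up as a small glue script that invokes the SpiraTeam command line after a successful deployment. I haven't built this yet, so the executable name and flags below are placeholders - check the SpiraTeam CLI documentation for the real invocation:

```python
def build_handoff_command(release_id, environment):
    """Build the (hypothetical) command line that asks SpiraTeam to run
    the automated test set for a given release and environment. Keeping
    command construction separate from execution makes it easy to test."""
    return ["SpiraCommandLine.exe",          # placeholder executable name
            "--run-testset", str(release_id),  # placeholder flag
            "--environment", environment]      # placeholder flag
```

At deploy time the script would pass this list to something like `subprocess.run` from an Octopus post-deployment step, closing the loop from deployment back into automated testing.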
A few myths I've had to debunk over the past few weeks:
1. Manual deployment into production is better than an automated deployment.
The automated deployment process will have passed testing on multiple environments before reaching production. You can still decide to deploy manually at this point - no one is preventing that - however, it seems a risky strategy: you could use the well-tested automated deployment, or the potentially error-prone human deployment.
With continuous delivery you can step in and step out of the process at any point. You're not tied to having a fully automated process; in fact, it is probably better to have some manual check points along the way just to make sure the automation is running the way it should. For example, you could add a CAB (Change Advisory Board) review between staging and production to make sure the change board are happy to proceed into production, and then wait for sign-off before proceeding to the deployment into production phase.
2. Quality isn't as good because there's no dedicated system test phase.
Quality in a continuous delivery process is much higher as strict rules are put in place from the outset. Instead of the developers spending weeks or months writing code then delivering a package to a system test team to spend a few weeks manually testing, the process is continuously tested throughout the lifecycle using a combination of automated and manual testing. Only code that has passed rigorous testing and has made it through each stage of the continuous delivery process will make it to production.
3. There's no change control
Oh change control, how I love thee. I'm a stickler for change control processes to ensure broken builds don't get into production, however, I can see the benefits of doing it in a new way, both in reduction of time and reduction of risk to the product.
The old way: Build a change register of items requested by the business. Pick 20 or so of the changes and assign them to build x. Developers work on build x, and the 20 or so changes touch a lot of other code, with risk increasing for each additional change included in the build. When the build is complete and tested, it goes to UAT for each and every change to be tested by the business. If one change isn't correct, or the business decides that change isn't quite the way they want it, the entire build is rejected. This is a very high-risk method of implementing change, both in terms of time and the risk to the code base.
The new way: as a change or defect is identified, it is coded, built and deployed through the continuous delivery process. The testing is very detailed around this one change, and when it gets to UAT there is only one item to sign off. This is a low-risk method of implementing change into a system.
Anyway - that's all I have so far. Feel free to leave feedback. :-)