Concepts in Cruise
Cruise is an advanced Continuous Integration and Release Management system. It takes an innovative approach to managing the build, test and release process. In order to find your way around Cruise, you'll need to understand how Cruise sees the world. This page explains some basic concepts in Cruise.
If you want to know more about Continuous Integration in general, refer to Martin Fowler's article on the subject: Continuous Integration.
As with all modern continuous integration systems, Cruise lets you distribute your builds across many computers -- think 'build grid' or 'build cloud'.
And why use a build cloud? There are three main reasons:
- Run your tests on several different platforms to make sure your software works on all of them
- Split your tests into several parallel suites and run them at the same time to get results faster
- Manage all your environments centrally so you can promote builds from one environment to the next
It is extremely simple to get a cloud up and running in Cruise. First, install the Cruise agent software on each computer that is to be part of your cloud. Next, configure each build agent to connect to your Cruise server. Finally, approve every build agent in your cloud from the management dashboard on the Cruise administration page. You are now ready to build. You should also tag each agent with the resources it provides, so that Cruise can direct each build job to a compatible agent.
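Cruise keeps its configuration in a central XML file on the server. As a rough illustration of how approved agents and their resource tags might appear there (the element and attribute names below are assumptions for illustration, so check the configuration reference for your version of Cruise):

```xml
<!-- Illustrative sketch only: element names may differ between Cruise versions. -->
<agents>
  <!-- Each approved agent is identified by its hostname and a server-assigned uuid -->
  <agent hostname="build-linux-01" ipaddress="10.0.0.21" uuid="...">
    <resources>
      <!-- Resource tags are free-form text; add as many as you need -->
      <resource>linux</resource>
      <resource>jdk1.6</resource>
    </resources>
  </agent>
</agents>
```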
A pipeline allows you to break down a complex build into a sequence of simple stages for fast feedback, exhaustive validation and continuous deployment.
How Cruise models distributed work
The unit of work in Cruise is called a job. A job is a set of build tasks that can be performed on a single agent in your cloud. By default, a job can be picked up by any agent. To direct work more precisely, you can associate build resources with each build agent -- a specific operating system or compiler version, for example. Resources are simple text tags which you associate with each agent, and you can specify as many of them as you want. Cruise makes sure that jobs requiring specific resources are directed only to agents tagged with those resources. This tagging is important because the agent process itself does not automatically determine anything about its environment.
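Resource matching might be expressed in a job's configuration along these lines (a sketch only; the names and element spellings here are assumptions, not taken from the Cruise reference):

```xml
<!-- Illustrative sketch: this job will only be scheduled on an agent
     tagged with BOTH of the listed resources. -->
<job name="functional-tests-windows">
  <resources>
    <resource>windows</resource>
    <resource>msbuild</resource>
  </resources>
  <tasks>
    <exec command="nant" args="functional-tests"/>
  </tasks>
</job>
```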
Jobs are grouped into stages. A stage is a collection of build jobs that can be executed in parallel. This is the mechanism that allows you to, for example, split test suites into multiple parallel streams or run the same build on multiple platforms simultaneously. A stage passes only when all the jobs in the stage pass.
Stages are then joined sequentially into a pipeline. Stages trigger in the order they appear in the pipeline's configuration. The first stage can be triggered by a change in your version control system, by manually forcing the pipeline to become active, or by a dependency on a given stage of another pipeline. By default, when a stage completes successfully it automatically triggers the next stage in the pipeline. Alternatively, you can require a manual approval to trigger the next stage; this requires user intervention, and you can delegate the permission to approve stages to individual users or groups of users.
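The stage sequencing and approval behaviour described above might look like this in the server's XML configuration (a hypothetical skeleton; element names and attributes are assumptions, so consult the configuration reference for your version of Cruise):

```xml
<pipeline name="my-app">
  <!-- A commit to this material triggers the first stage -->
  <materials>
    <svn url="http://svn.example.com/my-app/trunk"/>
  </materials>
  <!-- Stages run in the order they are declared -->
  <stage name="build">
    <jobs>
      <job name="compile"/>
    </jobs>
  </stage>
  <stage name="deploy-uat">
    <!-- This stage waits for an authorized user to approve it -->
    <approval type="manual"/>
    <jobs>
      <job name="deploy"/>
    </jobs>
  </stage>
</pipeline>
```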
An example pipeline
So what does a pipeline look like? Here's an example:
The first stage has two jobs. The unit test job compiles the code and runs the unit tests; the compiled code is then uploaded to the artifact repository. This is the one and only time the code is compiled -- and of course, if you're using an interpreted language you can skip this step. The second job performs static analysis of the code, uploading the results as HTML test reports and build properties for further analysis.
When the first stage passes, it automatically triggers the functional test stage. The jobs in this stage download the binaries from the artifact repository and run a series of functional tests. One job runs on a Linux box, the other on Windows. If your tests take a long time to run, you could split them into suites and run these as multiple jobs in parallel.
Finally there is a stage which deploys your software into your UAT environment for manual testing. This stage has a manual approval in front of it, meaning that somebody has to click a button in order to deploy the application into UAT. Running this stage proves out your automated deployment process -- and it should include some smoke tests that make the job fail if the deployment doesn't work.
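Putting the three stages together, the example pipeline might be sketched in configuration roughly as follows (all names, commands, and element spellings here are illustrative assumptions; check the configuration reference for your version of Cruise):

```xml
<pipeline name="my-app">
  <materials>
    <!-- A commit here triggers the first stage -->
    <svn url="http://svn.example.com/my-app/trunk"/>
  </materials>
  <stage name="build">
    <jobs>
      <job name="unit-tests">
        <tasks><exec command="ant" args="compile unit-test"/></tasks>
        <!-- Compiled once, uploaded to the artifact repository,
             and reused by every later stage -->
        <artifacts>
          <artifact src="target/my-app.jar"/>
        </artifacts>
      </job>
      <job name="static-analysis">
        <tasks><exec command="ant" args="analysis"/></tasks>
      </job>
    </jobs>
  </stage>
  <!-- Triggered automatically when the build stage passes -->
  <stage name="functional-test">
    <jobs>
      <job name="functional-linux">
        <resources><resource>linux</resource></resources>
        <tasks><exec command="ant" args="functional-test"/></tasks>
      </job>
      <job name="functional-windows">
        <resources><resource>windows</resource></resources>
        <tasks><exec command="ant" args="functional-test"/></tasks>
      </job>
    </jobs>
  </stage>
  <stage name="deploy-uat">
    <!-- Someone must click a button before the app goes into UAT -->
    <approval type="manual"/>
    <jobs>
      <job name="deploy">
        <tasks><exec command="ant" args="deploy-uat smoke-test"/></tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```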
The pipeline metaphor gives you several important benefits:
- Because of the way pipelines are modeled and presented, it is trivially easy to match up an acceptance test failure, or a flaw in the UAT environment, with the version of the code that caused it.
- Because you only compile once, you ensure that the thing you are testing is the same thing you will release, and you don't waste resources compiling repeatedly.
- Finally, Cruise allows you to build manual steps into your testing process so that your QAs and users can manually test your software.
The Cruise dashboard allows you to visualize pipelines at a glance. You can also group related pipelines together into pipeline groups. Beyond the visual convenience of grouping related pipelines, access and security features let you control which users can view a particular pipeline group. This can be a powerful way to carve out more secluded build areas for your users. Consult the Security section for more information.