We have the test environment in place (if you don’t, check back here) and we know enough to be dangerous with Amazon Web Services. The next step is probably the most important: we need to get Jenkins installed, configured and running. Jenkins has to be one of the tester’s most useful tools. It’s not just handy for kicking off automated tests; it’s indispensable for automating all sorts of processes you have to complete day in, day out.

We’re going to cover 4 main areas here:

1. Jenkins Installation
2. Installing Jenkins Plugins
3. Jenkins Configuration
4. Deploying and Building the Application Under Test

By the time we’ve completed the module we will have Jenkins running on our Windows master machine. The Jenkins configuration will be completed so that it will carry out six main actions:

i. Run up a new/clean EC2 Ubuntu instance (client)
ii. Install the Jenkins client on the Ubuntu client
iii. Download the AUT from GitHub to the Ubuntu client
iv. Build the AUT on the Ubuntu client machine
v. Install and start the AUT running on the client machine
vi. Run some simple post build checks

What You’ll Learn

In this module we’ll go through the 12 parts listed below, looking at the practical aspects of setting up Jenkins. By the end of the module you will understand the following concepts:

When we’re finished we’ll have this setup:

BTS Overview Module 2 New Page

Note that we’re building on top of the AWS infrastructure we built in the first module by adding Jenkins and a bit of configuration. Whilst we created a client Ubuntu machine in the first module, we’re not actually going to be using that machine. Instead, Jenkins will fire up a completely new instance of the Ubuntu machine automatically whenever it’s needed. This is an ideal use case for bringing up Amazon instances as and when we need them. More on how we do this in a minute.

Why Use Jenkins

On the Jenkins web site they don’t talk about Continuous Integration or Dev Ops or use any other popular buzz words. They simply refer to Jenkins as the “Open Source Automation Server”. And that’s the key. Jenkins isn’t just an app for developers. It’s an automation tool for testers too.

I personally love Jenkins. I don’t think of it as an app for developers or build engineers. I don’t think of it as an app for continuous integration or for building software. Yes it does all of that. But! It’s just the best app for automating a load of the daily or weekly tasks that the test team have to perform. Whether that’s installing the application you want to test or setting up your test environment. It’s just the best way to save yourself loads of time and effort on repetitive tasks.

Prerequisites

A few things we need to make sure we have setup correctly before we start.

Make Sure your Windows Master EC2 Instance is Running

First, if you don’t have your Windows master machine up and running yet then you’ll need to revisit Module 1 and take a look at the section on Running up the Windows Instance.

With the Windows 2008 master machine configured, we’ll need to make sure the instance is running and that we have an RDP session open. You can complete these steps as follows:

  1. Open your AWS management console, and view your list of Instances (making sure you have the correct Region selected).
BTS module2 img2 AWS start instance
  2. Once started, open an RDP session on this Windows machine.

 

BTS module2 img3 AWS RDP Connect

Then enter the password to open up the desktop session (you may need your .pem private key file to decrypt the password if you’ve forgotten it).

Create and Save Your Amazon ‘Access Key Id’

Second we need to get our Amazon ‘Access Key ID’. We’ll need this later as we’re configuring Jenkins to start our client/slave machines. For other systems or programs to connect to your AWS account they need an access key (like a password but for an application not a person). Jenkins needs this access key if it’s going to be allowed to start EC2 instances.
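To make the “like a password but for an application” idea concrete, here’s a sketch of how most AWS tooling presents an access key: it reads the two parts from a pair of environment variables. The values below are Amazon’s documented placeholder examples, not real credentials.

```shell
# Most AWS tooling reads the two parts of the access key from these
# environment variables. The values below are placeholder examples only --
# substitute the pair you create in the steps that follow.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```

Jenkins does the same thing internally: it stores the two parts and presents them to AWS whenever it needs to start an instance.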

To get this Id follow these steps:

  1. Login to your AWS account: https://aws.amazon.com
  2. Click on your user account drop down and select ‘Security Credentials’
  3. Expand the ‘Access Keys’ section and click on ‘Create New Access Key’
BTS-module2-img1a-AWS-security-id
  4. You can either download or show the access key. Either way you need to record these two parts of the key:

 

Access Key Id
Secret Access Key

NOTE: once you’ve downloaded or viewed the secret access key you CANNOT obtain it again. If you lose it you’ll have to create another access key (no big deal, but it’s easier if you just look after it now).
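One way to “look after it now” (a sketch; the file name is just an illustration): keep the downloaded key file somewhere private and lock its permissions down so only your user can read it.

```shell
# Hypothetical location for the downloaded key file -- adjust to taste.
KEYFILE="$HOME/rootkey.csv"
# Placeholder contents standing in for the real downloaded key.
echo "AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE" > "$KEYFILE"
# Restrict the file so only the owner can read or write it.
chmod 600 "$KEYFILE"
ls -l "$KEYFILE"
```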

Make Sure you have your Private key (.pem file)

Third, remember back in Module 1 we created our public and private key pairs? Well, at that stage you should have saved your private key .pem file (e.g. FirstKeyPair.pem). You’ll need this private key when configuring Jenkins later.

If you don’t have this private key you can go back and create a new key pair. It’s much easier if you can find the one you created in Module 1 though.
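If you’re not sure whether a file you’ve found is the right one, a quick sanity check helps (a sketch: `check_pem` is just an illustrative helper name, and the demo writes a dummy key file so the check can be seen working):

```shell
# Returns "ok" if the file exists and contains a PEM private key header.
check_pem() {
    PEM="$1"
    if [ ! -f "$PEM" ]; then
        echo "missing"
        return 1
    fi
    if grep -q "PRIVATE KEY" "$PEM"; then
        echo "ok"
    else
        echo "not a private key"
        return 1
    fi
}

# Demo against a dummy key file (truncated placeholder contents).
printf -- "-----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----\n" > FirstKeyPair.pem
check_pem FirstKeyPair.pem
```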

Terminate any Running Unix Client Instances

Lastly, we need to make sure we don’t have any Linux Ubuntu clients running. When Jenkins creates and starts new instances of the Linux clients it checks to see that no existing instances are running OR in a stopped state. So we need to TERMINATE any running instances. To do this follow these steps:

i. login to your AWS management console

ii. go to your AWS EC2 dashboard

iii. select the correct region

iv. click ‘instances’ in the side menu

v. in the list of instances select your Unix-client instance

vi. right click the Unix-client entry, select ‘Instance State’ followed by ‘Terminate’

vii. in the list of instances you should now see this:

BTS-module2-img1b-unix-terminate

What Next?

With our Amazon Access Key ID, our key pair private key, our Linux machine terminated, our Windows master machine running and a desktop session available, we’re ready to start installing and configuring Jenkins.

Part 1: Installing Jenkins

At this point we’re ready to visit the Jenkins-ci.org web site and download the Windows install package. Follow these steps to complete the install on your Windows 2008 server master machine.

  1. Open up Internet Explorer and type in this URL: http://jenkins-ci.org
  2. On the home page for Jenkins, right hand side, you’ll see the Download Native packages for Windows section. Click this link.
BTS-module2-img4-Jenkins-home-page
  3. Open the download folder and drill down into the Jenkins zip file. Extract the installer from the zip file by dragging it to the desktop.
BTS-module2-extract-Jenkins-installer
  4. Run the installer, accepting all the default install options. Nothing very exciting seems to happen on completion of the install, except that after a while a new tab should open in Internet Explorer with the following:
Jenkins-home-page

If this page doesn’t open automatically you can type in this URL:

http://localhost:8080/

This is the default location and port that Jenkins runs on. This’ll be just fine for what we need.
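If you’d rather check from a script that Jenkins is answering on that default location, a minimal sketch (assuming curl is available; a fresh Jenkins may return 200 or 403 depending on its security settings):

```shell
JENKINS_URL="http://localhost:8080/"
# -s silences progress output, -o discards the page body,
# -w prints just the HTTP status code ("000" if nothing answered).
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$JENKINS_URL" || true)
echo "Jenkins at $JENKINS_URL returned HTTP $STATUS"
```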

And that’s it. That simple. Jenkins is installed and ready to go. Next step: configuration and creating your first job.

A Quick Overview of Jenkins

Before we start configuring Jenkins let’s take a look at some key features and concepts. As we’ve already mentioned, Jenkins is an automation server that gives us the capability to automate many processes within our development and test projects.

Part 2: The Main Jenkins Configuration Areas

We need to achieve three things with our configuration of Jenkins:

  1. Start a slave machine: we’ll get Jenkins to automatically start our Linux Ubuntu machine when we’re ready to build and install the application under test on this Linux machine
  2. Download, Build and Install the application under test: once the client Linux machine is running we’ll download the source code for the AUT, build the AUT and install it on our Linux client machine.
  3. Run some Smoke Tests Against the AUT: once the AUT is installed and running we’ll run a couple of very simple tests to make sure that it is up and running.

Part 3: Jenkins’ Plugins

A fundamental concept within Jenkins is its plugin capability. There’s a plugin to help you automate just about any task you need to automate. A full list of the plugins can be found here:

https://wiki.jenkins-ci.org/display/JENKINS/Plugins

These plugins are broken into a number of different types that deliver different types of capabilities:

Source Code Management: Whilst Jenkins supports source code tools like Subversion and CVS out of the box there are other source code tools you might want to work with. Source Code Management plugins support all sorts of other source code tools. We’ll be working with Git and GitHub so we’ll need to add a plugin for GitHub in a minute.

Build Triggers: When do you want, or need, to build and test the AUT? Well plugins for Build Triggers allow you to monitor external activities and trigger a build of your AUT when certain conditions are met. For example a developer may check in some code and you’ll want to trigger a build off that check in.

Cluster Management and Distributed build: Typically with lots of automation activities being carried out, and many builds to process, you’ll distribute these activities to other client machines. You won’t run everything on your main Jenkins server. Distributed build plugins allow you to run up, communicate and control other machines. We’ll be using an Amazon EC2 Plugin that runs up and controls our Amazon EC2 Linux Ubuntu slave machine.

Slave Launchers and controllers: Depending on the type of slave machines you run up you’ll want to find different ways of communicating with those slaves. You’ve already seen how we’ll be communicating between our Windows master machine and Ubuntu client machine with SSH. So we’ll be installing the SSH Slaves plugin so that our Windows master Jenkins machine can communicate and control the Linux Ubuntu slave machine over SSH.

Other Post Build Actions: Once the build is complete you may want to complete other actions like running Windows PowerShell scripts or Unix shell scripts. Things like this can be set up with ‘Post Build Action’ plugins. We’ll be using the ‘Hudson Post build task’ plugin to run some install completion scripts.

There are many other categories of plugins, including build notifiers, artifact uploaders and build reporting. There are hundreds of plugins, so chances are any automation task you need to complete will be covered either by features native to Jenkins or by a plugin. If it’s not, you can always write your own plugin (but that’s a topic for another day).

Part 4: Installing Plugins

Let’s install the plugins we need for this project. We’re going to need these:

Amazon EC2 Plugin

https://wiki.jenkins-ci.org/display/JENKINS/Amazon+EC2+Plugin

With this plugin we can use our Jenkins machine to run up an Amazon EC2 instance automatically. We’ll be running up a Linux Ubuntu instance that will then run a Jenkins client application. This Jenkins client application will then be responsible for downloading the AUT source code, building the AUT and running the AUT.

SSH Slaves Plugin

https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin

If you remember from our first Module we connected to our Linux Ubuntu client machine using SSH. This plugin gives Jenkins the capability to use our SSH private key to connect to the Amazon EC2 client machine. Once Jenkins has a connection to the client machine it can install the Jenkins client application on this machine and then control this machine.

GitHub Plugin

http://wiki.jenkins-ci.org/display/JENKINS/Github+Plugin

For this project our AUT is an application called Rocket.Chat. This project’s source is hosted on GitHub. With the Jenkins Github plugin we can connect to GitHub and download the source automatically.

Post Build Tasks

https://wiki.jenkins-ci.org/display/JENKINS/Post+build+task

Once we’ve completed the AUT build and install we’ll want to run a few tasks to check that everything is okay. With this plugin we’ll be able to check the logs for successful install messages and run a few checks to make sure the AUT is running correctly.
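To make that concrete, here’s a sketch of the kind of check such a post build task might run; the log file name and success message are made up for illustration:

```shell
# Stand-in for a real install log written by the build.
LOG="install.log"
echo "Rocket.Chat install complete" > "$LOG"

# The post build task greps the log for a success marker and fails the
# build (non-zero exit) if the marker is absent.
if grep -q "install complete" "$LOG"; then
    echo "post-build check passed"
else
    echo "post-build check FAILED"
    exit 1
fi
```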

How do we install and/or update these plugins? Well, Jenkins makes it really easy. Just follow these steps:

 

  1. Click ‘Manage Jenkins’ in the control panel

  2. Click ‘Manage Plugins’ in the control panel

On the Plugin Manager screen you should see a list of plugins grouped into four tabs: Updates, Available, Installed and Advanced. First we’ll need to update one plugin.

  3. On the ‘Updates’ tab find the ‘SSH Slaves plugin’, check the ‘Install’ check box and click the ‘Download now and install after restart’ button.

 

 

 

  4. Let the install and upgrade complete so that ‘Downloaded Successfully’ is displayed. Then click the ‘Go back to the top page’ link.

 

Update success

So that’s one plugin updated; just another 3 to install now. To install these plugins:

 

  • Select the ‘Available’ tab and search for each of these plugins in turn, checking the check box for each:
    Amazon EC2 plugin
    GitHub Plugin
    Hudson Post build task
select-plugins

 

Then click the ‘Download now and install after restart’ button again.

At this point Jenkins should start installing the plugins and all the dependency plugins. So you’ll see a lot more than just 3 plugins installed.

Once you see all of the ‘Downloaded Successfully’ messages you can go ahead and restart Jenkins.

 

  • Restart Jenkins just by clicking on the ‘Restart Jenkins when installation is complete…’ check box

 

At which point you should see this restart message:

With all our plugins installed we’re ready to start configuring Jenkins.

Part 5: Configuring Jenkins

Now for the interesting part. We need to configure Jenkins so that the tasks we need automating are setup to be run by Jenkins. There are four key parts to this configuration.

Firstly, we need to set Jenkins up so that it is integrated with our Amazon AWS service. The ‘Amazon EC2 Plugin’ we’ve installed needs to have the ‘Amazon Access Key Id’ configured so that it has permission to drive AWS.

Secondly, we need to configure Jenkins so that it will start our Linux Ubuntu machine on demand. When Jenkins starts that machine it needs to specify the Amazon AMI to use and install other software packages that we’ll need for our build/install.

Thirdly, we’ll also need to setup an initialisation script for the Linux Ubuntu instances that we start via Jenkins. This script will be responsible for updating existing software packages on the Linux client and adding some additional software that isn’t included in the AMI we’ve used.

Lastly, we need to configure the ‘job’ that kicks off the build and install of our AUT. Jobs in Jenkins are just automated tasks that you configure Jenkins to run. For each job you configure you can run that job (e.g. build now), track the changes you’ve made to it and see all the run results for that job.

Hot tip: this setup is designed so that the client/slave machine that we install our application under test on gets shut down and everything deleted on a regular basis. This means any manual configuration you do, or data you add, on this slave machine gets deleted too. This is by design. By good design.

When you know your test server will get deleted and everything removed on a regular basis it FORCES you to make sure EVERYTHING is in the automation scripts. Every action that you need to build the test environment is scripted. This means everything is documented and repeatable.
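This is also why the scripts later in this module follow a check-then-act pattern: every step tests the machine’s state first, so the same script works on a brand-new instance and on a re-run. A trivial sketch of the idea (the marker file is hypothetical):

```shell
# Idempotent setup step: safe to run on a fresh machine or to re-run later.
MARKER="/tmp/aut-env-ready"   # hypothetical marker recording that setup ran
if [ ! -f "$MARKER" ]; then
    echo "setting up test environment"
    # ...install packages, create directories, deploy config here...
    touch "$MARKER"
else
    echo "environment already set up"
fi
```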

Part 6: Configuring Jenkins – Amazon AWS Integration

Back on the Jenkins home page you’ll need to:

 

  1. Click on the ‘Manage Jenkins’ link again
  2. Click the ‘Configure System’ option

There are a lot of options for configuring Jenkins. All we need to concern ourselves with is the ‘Cloud’ section (at the bottom of the page). The Cloud section is where we define our Amazon EC2 instance start up details.

 

  1. Scroll to the bottom of the Jenkins ‘Configure’ page where you should find the ‘Cloud’ section. Then click ‘Add a new cloud’ and select the ‘Amazon EC2’ option

From here we can enter all of the details needed to run up our EC2 Ubuntu instance. Before we can complete this though we’ll need to get our Amazon ‘Access Key ID’ which you should have from going through the Prerequisites Section of this module.

  1. Enter the following details for your Amazon EC2 instance

 Name: RocketChatServer
Access Key ID: <key created in prerequisite stage of module>
Secret Access Key: <key created in prerequisite stage of module>
Region: <select your region>*1
EC2 Key Pair’s Private Key: <copy in the contents of your .pem file>*2

*1 – make sure you select the same region that your Windows 2008 server machine is running in. Login to the AWS management console and check the region if you’re not sure. It’s critical that you select the right region.

*2 – in the Prerequisite section in this module we talk through getting your .pem private key file that you created in module 1. You’ll need to open this .pem file in a text editor and paste the contents in to the Jenkins ‘EC2 Key Pair’s Private Key’ text box.

NOTE: if you see this error message once you’ve pasted in your private key you can usually just ignore it:

 

Once all of these fields are configured you can click the ‘Test Connection’ button.

Once you’ve completed the test you should see the word ‘Success’ to the left of the ‘Test Connection’ button.

 

Now that we’re connected to our AWS account we’re ready to start configuring the details for the Linux Ubuntu slave machine we need to start up.

Part 7: Configuring Jenkins – Starting the Linux Amazon Instance Slave

  1. On the ‘AMIs’ line click the ‘Add’ button
  2. Enter the following details for your AMI instance

Description: RocketChat-Server
AMI ID:<see below>

The AMI ID is the Amazon Machine Image number that corresponds to the Linux Ubuntu machine we want to run up. It’s the same ID as the Linux Ubuntu machine instance that we already have configured in our AWS account. To find this ID:

i. login to your AWS management console

ii. go to your AWS EC2 dashboard

iii. select the correct region

iv. click ‘instances’ in the side menu

v. in the list of instances select your Unix-client instance

vi. in the instance details panel (bottom of page) look for this:

From here you can pick out the ‘ami’ which should be something like:

ami-9abea4fb
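AMI IDs follow a predictable shape, so it’s worth a quick format check before pasting one into Jenkins. A sketch using the example ID from above (older IDs have 8 hex characters after the prefix, newer ones 17):

```shell
AMI_ID="ami-9abea4fb"   # example ID from the text; yours will differ
if echo "$AMI_ID" | grep -Eq '^ami-[0-9a-f]{8,17}$'; then
    echo "AMI ID format looks valid"
else
    echo "AMI ID format looks wrong - re-check the instance details panel"
fi
```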

Then we just need to click on the ‘Check AMI’ button. This should give you confirmation that AWS can find the image you need. It should look like this:


  3. Complete the rest of the AMI details

Instance Type: t2.micro (IMPORTANT*)

EBS Optimized: Un-checked

Availability Zone: <leave blank>

Use Spot Instance: Un-checked

Security Group Names: default, Unix-AUT (see below)

Remote FS root:<leave blank>

Remote user: ubuntu

AMI Type: unix

Root command prefix:<leave blank>

Remote ssh port: 22

Labels: RCBuildDeployServer

Usage: Utilize this node as much as possible

Idle termination time: 30

Init script:<leave blank>

* – make sure you select ‘t2.micro’ here as this is the only Instance Type on the free tier. Select anything else and you’ll be charged for it!

The security groups you configure here will be determined by what you’ve setup in your AWS account. To find the right value:

i. login to your AWS management console

ii. go to your AWS EC2 dashboard

iii. make sure you have the correct region selected

iv. click ‘Security Groups’ in the side menu

v. in the list of security groups find your Unix-AUT group name

vi. find the ‘Group Name’. You’re looking for this:

This group will be specific to you. You may have used the same name as me, but it’s best to check using the steps above. You’ll need to use both the Unix security group (Unix-AUT) and the default security group (default).

That should give us everything we need for this initial stage. Once you have the settings looking like this click the ‘Apply’ button.

Now we’re ready to check that Jenkins can run up an Amazon EC2 Linux Ubuntu instance for us automatically.

  1. Return to the Jenkins home page (click ‘Jenkins’ link top right)
  2. Click the ‘Build Executor Status’ link on the home page

This will take you to the ‘Nodes’ page. On this page you’ll have a list of nodes that can act as build machines. You’ll already see your Windows 2008 master machine listed. You won’t see a slave machine in the list yet because it’s not running yet. However, we can run the slave machine up from here.

  3. Click the ‘Provision via RocketChatServer’ button and select the AMI we defined earlier

At this point you should see a message from Jenkins that ‘This node is being launched’

From here we can do two things:

i. check the Jenkins log to make sure the node launched correctly

ii. check our AWS account to see the running node

  4. Check the Jenkins log by clicking ‘See log for more details’

At this point Jenkins is logging all the actions carried out as it brings up this Ubuntu instance. What you need to check for is this…

This just tells us that the instance was started and that Jenkins successfully logged in using our private key under the account ‘ubuntu’. You may see some warning messages but as long as the machine is running and the login worked we’re on the right track.

  5. Check the AWS management console

On the AWS management console we should be able to see our new instance running now.

i. go to your AWS EC2 dashboard

ii. make sure you’ve selected the right region

iii. click ‘Instances’ in the side menu

iv. in the list look for Unix-client (or similar)

v. select the Unix-client in the list and check it’s running

At this point you should see something like this…

We’ve successfully run up our Linux Ubuntu instance automatically from Jenkins. Great!

A few things worth mentioning…

Firstly, you may have noticed that one of the parameters we configured in Jenkins for this instance was:

 Idle termination time: 30

This means that if the instance hasn’t done anything for 30 minutes Jenkins will automatically shut the machine down for us. No need to do anything ourselves; we can just leave it to Jenkins to shut the system down when it’s not needed.

Secondly, you may have noticed, in the Jenkins logs, that there were some error messages around the installation of Java. Something along the lines of:

WARNING: Failed to download Java

We’ll solve that in the next step when we get Jenkins to run some initialisation scripts on the server. We do need Java though as the Jenkins Java client needs to be installed on this Ubuntu machine in order for Jenkins to have full control over it.
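The init script we’ll add in the next part guards against exactly this by probing for the Java binary before installing anything. The same probe can be run by hand on any slave to see whether Java made it on (a sketch):

```shell
# Same test the init script uses: is there an executable at /usr/bin/java?
JAVA="/usr/bin/java"
if [ -x "$JAVA" ]; then
    echo "java present: $($JAVA -version 2>&1 | head -n 1)"
else
    echo "java missing - the init script will need to install it"
fi
```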

Part 8: Configuring Jenkins – Instance Initialisation Script

  1. Set Up the Instance Initialisation Script

The last thing we need to set up during the run up of this instance is the installation of a few additional software packages. These packages aren’t installed on the AMI we’ve started out with. To get round this Jenkins has a field in the EC2 configuration called “Init Script”.

To update this we’ll need to:

  1. return to the main Jenkins dashboard (click the ‘Jenkins’ link top left)
  2. click the ‘Manage Jenkins’ link
  3. click the ‘Configure System’ link
  4. scroll to the bottom of the page where we configured our Amazon EC2 setup
  5. the last section on the page should be the ‘Init script’ field

So we’ll need to add some Unix shell commands to this Init script field. The purpose of these commands is to:

a. update software already on the system

b. install Java (Jenkins needs Java to run its remote client app)

c. install npm (more on this later)

You don’t really have to understand this script (it’s just a Unix shell script); the important part is understanding why it’s there. So copy the following into the Init Script field:

#!/bin/sh
# Update existing packages and install Java
export JAVA="/usr/bin/java"
if [ ! -x "$JAVA" ]; then
    sudo apt-get update
    sleep 10
    sudo apt-get install -y openjdk-7-jre ca-certificates-java tzdata-java libcups2 libcups2 libjpeg8 icedtea-7-jre-jamvm openjdk-7-jre-headless openjdk-7-jdk git npm
    sleep 5
fi
# Add Swap Space
SWAP=/mnt/swap1
if [ ! -f $SWAP ]; then
    sudo dd if=/dev/zero of=$SWAP bs=1M count=2K
    sudo chmod 600 $SWAP
    sudo mkswap $SWAP
    # add new swap to config and start using it
    echo "$SWAP none swap defaults 0 0" | sudo tee -a /etc/fstab
    sudo swapon -a
fi

You should have this now:

  1. Return to the Jenkins home page (click ‘Jenkins’ link top right)
  2. Click the ‘Build Executor Status’ link on the home page
  3. Click the ‘Provision via RocketChatServer’ button and select the AMI we defined earlier

At this point you should see a message from Jenkins that ‘This node is being launched’. This time round, when you click on ‘See log for more details’ you should see a bit more going on. In summary the log file should show you:

i. the Instance launching (INFO: Launching instance: i-1ebaa1c7)

ii. the ssh connection (INFO: Connected via SSH)

iii. execution of the Init Script (INFO: Executing init script)

iv. updates to existing software packages

v. installation of Java (INFO: Verifying that java exists)

vi. launching the Jenkins slave (INFO: Launching slave agent)

And then the final entry in the log…

Slave successfully connected and online

  1. Go back to the Nodes dashboard (click ‘Jenkins-> Nodes’ in the top menu bar)

In the node list we should now see our RocketChat-Server node up and running:

At which point we have a fully configured Jenkins slave machine, running all the software we need that’s ready for us to start installing our AUT on. Next we’ll look at how to configure Jenkins to get that AUT built and installed automatically.

Part 9: A Few Points about our Jenkins Slave Machine

At this stage we can use Jenkins to start up a slave test machine automatically. There are a few points worth mentioning about how we can control this slave machine now.

Once the slave is running, Jenkins runs a ‘Slave Agent’ on this machine. It’s this agent that allows the slave to talk to the master Windows Jenkins machine, and it’s this agent that gives Jenkins full control of jobs and actions that can be carried out on the slave.

Slave Idle Termination Time

For example in our configuration for the Amazon EC2 slave (Manage Jenkins -> configuration) we had a setting called ‘Idle termination time’

This tells Jenkins that if the slave is inactive for more than 30 minutes to ‘Terminate’ this slave Linux EC2 instance. Great for keeping control of our Amazon charges (don’t want to be spending money on machines that are standing idle). Not great if we have data on that machine that we need as we’ll lose it all when it’s terminated. For us though we’re not worried about losing data so this’ll be just fine.

Managing Slaves from the Jenkins Master Machine

Return to the Jenkins home page and click on the link for our slave.

Viewing details for our slave you’ll see a number of options in the left hand menu. These options allow us to:

Delete Slave: select this and you’ll Terminate the Amazon EC2 instance for this slave.

Configure: from here you can view and modify some of the configuration parameters for this slave

Build History: details about builds we’ve run on this slave.

Load Statistics: stats on how hard this machine is having to work.

Script Console: feature allowing us to run Groovy script on the slave machine from our Jenkins master machine

Log: details about the running up and install activities that took place when we created this instance

System Information: full list of system properties

Disconnect: the ability to disconnect Jenkins from the slave but leave the instance running still (you can reconnect later if you need to)

Part 10: Setting Jenkins up to install the AUT

So the last bit to configure is the download, build and install of the application under test (AUT) on our slave machine. The AUT we’re going to use is a chat application called Rocket Chat.

https://rocket.chat

Now it’s worth mentioning here that this is just an example app we’re using for this course. We’ll go through the steps to build and install this app but you DON’T need to understand the scripts needed to build and install Rocket Chat. These are specific to Rocket Chat.

In practice, when you’re testing your own applications, you’ll need to replace the Rocket Chat build and install scripts with your own scripts. This will mean talking to your developers or build engineers. Then adding the build/install steps for your app into Jenkins.

So from this you just need to understand how you configure Jenkins. You don’t need to understand the Rocket Chat build/install process. In real life you’ll have your own applications build/install process to add to Jenkins.

Jobs and Node Labels

Jenkins is all about configuring and running jobs. Those jobs are typically build jobs but they can include any task you need to automate. For example jobs that run tests and jobs that install applications. Anything you need to automate Jenkins can be configured to run as a job.

Jobs need to run on nodes or slaves. Jenkins has some pretty clever features for working out the best places to run jobs (e.g. on the master machine or a particular slave). We’re going to keep it simple though. We’re going to give our slave/node a label (e.g. RCBuildDeployServer). Then when we configure the job we’ll say “the job must be run on a slave/node that has label RCBuildDeployServer”. This way we tell Jenkins not to run everything on the master machine (which is the default) but to run up an instance of the slave that we need (if it’s not already running) and run the job there.

So first up we need to make sure we’ve configured our node/slave label.

Node Labels

To give our Linux Ubuntu instances a label we need to go back to our Jenkins configuration page.

  1. On the Jenkins home page click on ‘Manage Jenkins’
  2. On the Manage Jenkins page click on ‘Configure system’
  3. Scroll to the bottom of this page and find the ‘Amazon EC2’ section
  4. In the ‘Labels’ field enter ‘RCBuildDeployServer’

 

  5. Click the ‘Save’ button

Setup the Job

Now that we can specify where we want to run a job (on the slave/node labelled ‘RCBuildDeployServer’) we can configure a new job.

  1. On the Jenkins home page click on ‘create new jobs’
  2. Enter a Job name (e.g. ‘BuildRocketChatOnNode’), select ‘Freestyle Project’ and click OK
  3. On the Job configuration page set these key values

Restrict where this project can be run: RCBuildDeployServer

Source Code Management: None

Build Triggers: Build periodically

No need to add a schedule here yet. We’ll kick our jobs off manually for now.

  4. Click on the ‘Add Build Step’ button and select ‘Execute Shell’
  5. In the command text box we need to add our build script. Remember, you don’t need to understand the detail here, just that for our demo app these actions build and install the app. You’ll have a completely different set of actions for your application. Copy and paste all of the script below into the ‘Command’ field:
    #Clean up any left overs from the last build
    SCREEN_RUNNING=$(/usr/bin/pgrep SCREEN)
    if [ -n "$SCREEN_RUNNING" ]; then
        sudo kill $SCREEN_RUNNING
    fi
    NODE_RUNNING=$(/usr/bin/pgrep node)
    if [ -n "$NODE_RUNNING" ]; then
        sudo kill $NODE_RUNNING
    fi
    if [ -f master.zip ]; then
        rm -f master.zip
    fi
    INSTDIR=./Rocket.Chat-master
    if [ -d $INSTDIR ]; then
        rm -Rf $INSTDIR
    fi
    MONDIR=/home/ubuntu/db
    if [ -d $MONDIR ]; then
        rm -Rf /home/ubuntu/db
    fi
    pwd

    #Install packages we need for the build
    sudo apt-get install unzip
    curl https://install.meteor.com/ | sh
    sudo npm install -g n
    sudo n 0.10.40

     

    #Configure Mongo Database
    MONDIR=/home/ubuntu/db
    if [ ! -d $MONDIR ]; then
    mkdir /home/ubuntu/db
    fi
    pwd

     

    <h#Build and install Rocket Chat
    PUBLIC_HOSTNAME=”$(curl http://169.254.169.254/latest/meta-data/public-hostname 2>/dev/null)”
    wget https://github.com/RocketChat/Rocket.Chat/archive/master.zip
    unzip -o master.zip
    cd ./Rocket.Chat-master
    meteor build --server $PUBLIC_HOSTNAME --directory .
    cd ./bundle/programs/server
    npm install
    cd ../..

     

    #Make sure processes continue to run when Jenkins script completes
    echo -e "\n"
    ps -ef
    export BUILD_ID=dontKillMe
    echo -e "\n"

     

    #Start mongo DB and Rocket Chat
    screen -d -m /home/ubuntu/.meteor/packages/meteor-tool/.1.1.10.ki0ccv++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/mongodb/bin/mongod --bind_ip localhost --dbpath /home/ubuntu/db/ --smallfiles
    sleep 10
    export MONGO_URL=mongodb://localhost:27017/rocketchat
    export ROOT_URL=http://localhost:3000
    export PORT=3000
    screen -d -m node /home/ubuntu/workspace/BuildRocketChatOnNode/Rocket.Chat-master/bundle/main.js
    sleep 10

     

    # list system details and what's running on the system before we complete script
        echo -e "\n"
        pwd
        echo -e "\n"
        env
        echo -e "\n"
        ps -ef

    So at this stage we should have the Build Execute script entered as shown

    Like I’ve said, this little lot is particular to the Rocket Chat app we’re using for this example. When you come to implement this in the real world you’ll work with your developers to determine what goes in here.

    We can then click on the ‘Save’ button

  6. On the Job summary page click the ‘Build Now’ button
  7. Jenkins will go through a few steps now. First it will run up a Linux Ubuntu EC2 slave machine. This is the slave machine Jenkins needs to build and install the application on. If you check your AWS dashboard you should see this instance initializing. It may take a while as it has to run through all the initialisation scripts we defined too. Then, once the slave is ready, we’ll see the build start.

    This is our first build, as indicated by the ‘#1’ tag. We can view the detail and progress on this build by clicking on the ‘down arrow’ next to the #1 and selecting ‘Console Output’

    The console output will list all of the actions being carried out as part of this build and install.

    This may take some time!

    Once the build and install is complete you should see something like this at the end of the log file

    From here we can check if the application is actually running on our slave EC2 Linux Ubuntu instance.

  8. On your Windows RDP session, open a new tab in Internet Explorer. Enter the following address: http://localhost:3000

You should see the application running like this:

If you want to login and play with Rocket Chat you can click on the ‘Register a new account’ link (you don’t need to follow any email confirmation process – the first login can be created direct from the GUI).

If you go back to your Jenkins home page you should see the successful build reported as follows:

Accessing Rocket Chat from your Local Machine

If you want to access the Rocket Chat application from your desktop/laptop then follow these steps:

  1. In your EC2 management console select the Linux host
  2. Find the ‘Public DNS’ value
  3. Construct the following URL: http://<public-dns-value>:3000
  4. Enter this URL in your browser

Note that this will only work if your security groups are configured correctly and Public DNS settings are configured. See module 1 of this course for more info.
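As a sketch, the URL construction in step 3 above looks like this in shell. The DNS value here is a made-up placeholder; use the one shown in your own EC2 console:

```shell
# Hypothetical 'Public DNS' value copied from the EC2 console
PUBLIC_DNS="ec2-203-0-113-25.compute-1.amazonaws.com"

# Construct the URL that Rocket Chat is served on (port 3000)
URL="http://${PUBLIC_DNS}:3000"
echo "$URL"
```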

All that remains now is to setup a few clean up actions and a few tests to make sure Jenkins reports the build/install as a success.

Part 11: Jenkins Post Build Actions and Smoke Tests

Once we have a setup that will build and install our application it’s worth configuring Jenkins to run a few checks and tests just to confirm that things completed without any issues.

We’ll use the Post-Build Actions feature to carry out two actions:

a) check the logs for successful install messages
b) use ‘wget’ to check that the Rocket Chat home page is displayed

We can set these up by following these steps:

  1. Go back to the Job configuration by selecting ‘Configure’ on the home page.

 

  2. Scroll to the bottom of the configuration page to find the Post Build Actions section
  3. Click the ‘Add post-build action’ button and select the ‘Post Build Task’ option

Check the Logs

We’ll configure a task to look for two log messages that we know we should see during a successful install.

  4. In the Log text field enter this text (it’s a regular expression)

00:..:.. SCREEN -d -m node /home/ubuntu/workspace/BuildRocketChatOnNode/

Then click the ‘Add’ button and add a 2nd log message check

00:..:.. SCREEN -d -m /home/ubuntu/.meteor

These checks are specific to the Rocket Chat application install. Again when you build this for your own application you’ll have to replace these with checks specific to your build. For info though these messages are checking that the Rocket Chat processes were successfully started and running.
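If you want to sanity check a pattern like this before saving the job, you can run it through grep against a sample line. The log line below is made up for illustration; the pattern is the one we entered above:

```shell
# A made-up console-output line of the shape the build produces
LOG_LINE='00:12:34 SCREEN -d -m node /home/ubuntu/workspace/BuildRocketChatOnNode/Rocket.Chat-master/bundle/main.js'

# The same pattern we entered in the Log text field
PATTERN='00:..:.. SCREEN -d -m node /home/ubuntu/workspace/BuildRocketChatOnNode/'

# grep treats each '.' as "any character", just as the Jenkins check does
if echo "$LOG_LINE" | grep -q "$PATTERN"; then
  echo "pattern matches"
else
  echo "no match"
fi
```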

  5. In the Script field enter this shell script text
#!/bin/bash
#keep checking to see when Rocket Chat is up and running
for I in 1 2 3 4 5 6 7 8 9
do
sleep 60
# use wget to see if we can download index page
# if wget returns true we can download index page
# -T 3 => wait 3 seconds for response
# -t 1 => means only retry once
if (exec /usr/bin/wget -T 3 -t 1 http://localhost:3000)
then
echo "Rocket Chat now running"
exit 0
else
echo "Waited $I minute(s) - Rocket Chat not running yet"
fi
done
echo "Rocket Chat didn't start"
exit 1

Again this is specific to Rocket Chat. We’re just running this script on the Linux Ubuntu Rocket Chat server to see if Rocket Chat starts running. We’re using a Unix command called ‘wget’ which fetches a web page, in this case the Rocket Chat home page. If wget still can’t fetch the page after 9 attempts (one a minute) then we fail the check and exit with status 1. If it does fetch the page (an HTTP 200 response) then we pass the check and exit with status 0.
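The shape of that script is a simple poll-until-timeout loop. Here’s a cut-down, runnable sketch with the wget call stubbed out (so it works anywhere), just to show how the loop turns the check into a pass/fail exit status:

```shell
# Stand-in for 'wget -T 3 -t 1 http://localhost:3000' so this demo can
# run without the app; it succeeds once a marker file exists.
check_app() { [ -f /tmp/rc_demo_up ]; }

rm -f /tmp/rc_demo_up
STATUS=1                      # assume failure until the check passes
for I in 1 2 3; do
  if check_app; then
    STATUS=0                  # app responded: mark the build a pass
    break
  fi
  echo "Waited $I - not running yet"
  touch /tmp/rc_demo_up       # simulate the app coming up after one wait
done
echo "exit status would be: $STATUS"
rm -f /tmp/rc_demo_up
```

In the real Jenkins script the `exit 0`/`exit 1` at the end is what the ‘Escalate script execution status to job status’ option reads to decide pass or fail.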

  6. Check the ‘Run script only if all previous steps were successful’ check box

It’s only worth running the shell script if the checks on the log file pass.

  7. Check the ‘Escalate script execution status to job status’ check box

If these checks pass then mark the build in Jenkins as a pass. If any of these checks fail then mark the build as failed in Jenkins. We’ll see the reporting of these results in a minute.

Then click the ‘Save’ button

Part 12: Starting Subsequent Jenkins Builds

Now at this point, do you remember this setting in your Amazon EC2 configuration…

 Idle termination time: 30

Well your Linux slave machine has probably been idle for 30 minutes now. As a result it’s probably shut itself down. Then again maybe you went through this really quickly and it’s still running. Either way, depending on the state of this slave machine Jenkins will do one of two things when we start a new build:

a. If it has shut itself down:

When we kick off a new build it will start from the beginning and create a new EC2 instance.

NOTE: if the node has been shut down and a new node run up for a subsequent build, then the new node will have a NEW IP ADDRESS and HOSTNAME.

b. If it hasn’t shut itself down:

The slave is still running, so Jenkins just needs to start a new build and install using the existing slave machine

If you want to avoid these nodes shutting down you can always extend the Idle termination time to more than 30 minutes. Or set it to 0 so that it always stays running (mind your AWS spend if you do this though).

Start the next build

  1. Return to the Jenkins home page
  2. Click the arrow just to the right of the Job name and select ‘Build Now’ from the drop down menu

Assuming you didn’t have a slave running, the whole process will kick off again, starting with the running up of the Amazon EC2 instance. Then the build/install will take place. Then we’ll run our post build checks/tests.

Don’t forget it can take a few minutes for these Amazon EC2 instances to run up. It can also take up to 10 minutes to complete the full job including the build and install. This is because we’re using free tier instances within AWS. Please be patient.

If the build, install and tests all succeed then we’ll see this Blue lamp and the sun icon on the Jenkins home page:

If anything fails we’ll see a red lamp and the weather will start to turn cloudy:

BTS-module2-subsequent-builds

Getting it badly wrong over a number of builds and the weather turns to Thunder and Lightning.

BTS-module2-subsequent-builds

In short the lamp indicates the state of the last job. The weather is an indication of job stability: an aggregation of results across the last few builds.

Conclusion

Building on our AWS environment we’ve configured Jenkins so that we can control a slave machine (an Ubuntu Linux machine) and build and install the application under test. We’ve also factored in a few small checks to make sure everything is up and running.

The key take away here is that Jenkins is not just for building software. It’s not even just for implementing continuous integration. Both of these are great goals. However, on its own it’s just a great tool for testers. It allows us to automate a lot of our repetitive tasks like running up test environments and installing the software we need to test.

Then as we add to this the capability to kick off automated tests we make our test process far more efficient. And what we’ll move on to next, in module 3 of this course, is linking in automation tools like Selenium.


Our 6 module course on Building the Test Automation Framework starts with Amazon Web Services (AWS). We’re going to use AWS, and more specifically, the Elastic Compute Cloud (EC2) to build our test environment and automation system. It all starts with configuring and running up the virtual machines we need to run everything on.

If you haven’t read our introduction to Building the Test Automation Framework you can read more here:

An Introduction to Building the Test Automation Framework

This first module is all about understanding Amazon Web Services (AWS) and Elastic Compute Cloud (EC2).

 

We’re going to cover six key areas:

  • First, creating an AWS account (you can skip this part if you already have one)
  • Secondly, configuring two security groups.
  • Thirdly, creating a security key pair.
  • Fourthly, we’ll run up two Virtual Machines (one Windows master and one Unix slave)
  • Fifth, we’ll install Putty for our Secure Shell (so we can connect from the Windows machine to the Unix machine)
  • Lastly, Monitoring Usage

In the last few parts we’ll look at checking your AWS usage and making sure you run your virtual servers down. We want to avoid you getting charged for using these Amazon virtual machines (more on this in a moment).

Why Use Amazon Web Services?

As a tester why would you want to use Amazon Web Services (AWS)? Well it’s fast to get setup, it’s scalable and it’s cost effective. Let’s address each of those points in turn.

So you need a new environment to test in. If you don’t already have the hardware you’ll need to purchase it, install the operating system, configure the network, etc, etc. If you have an AWS account all you need to do is pick a machine image and fire it up. No purchase, no ordering, no hardware configuration and no operating system install. It’s all there in minutes.

The AWS Compute side of things isn’t called Elastic Compute Cloud (EC2) for nothing. If you don’t need it you run it down. If you need more of it you scale it up. What you need is there, when you need it – on demand. No acquiring physical machines that are obsolete within a few years.

The whole concept is you use, and pay for, what you need when you need it. The traditional model is that you purchase a PC and pay the full cost up front. If you only use your PC for half the time (e.g. you don’t use it throughout the night) you don’t pay half price for it. You still pay full price. And then there’s all the hidden costs, like network infrastructure, power, racking, cooling, etc. This all adds up, but you never see it. Okay, you can rack up big costs fast with AWS if you’re not careful. Manage it well though and you’ll save money.

Yes this is a new way of working (well not even that new anymore), but it’s a smart way for testers to work.

Prerequisites

You don’t need a lot to cover this or any of the other modules in this series. However, you will need the following:

1. A windows desktop or laptop that runs RDP
You can check if you have RDP on your machine with:

  • press the Windows button + r
  • in the Run window type ‘mstsc’
  • this should bring up the Remote Desktop Connection window

2. A credit card
No, this course doesn’t cost you anything. However, to sign up for an AWS account, even if you stick to using the AWS free tier, you’ll need to provide credit card details. Sorry that’s not our requirement – it’s Amazon’s requirement.

We have designed this course so that you use the Amazon Free Tier. So even though you have to provide your credit card details you should not be charged. HOWEVER, a word of WARNING! It is YOUR responsibility to monitor your usage and keep track of your AWS resource usage. If you go above the free tier allocation Amazon WILL BILL YOU. If this happens it will be YOUR bill to pay. We can’t be held responsible for any charges you incur! Sorry.

What You’ll Learn

In this module, covered by the 11 parts listed below, we’ll go through the practical aspects of using AWS. By the end of the module you’ll understand the following concepts:

 

What will your setup look like when we’re finished?

Over the course of the 6 modules you’ll get to this….

BTAF-Overview

For this module you’ll run up the Windows master machine (that we’ll run Jenkins on later) and one slave Unix machine (that we’ll use to build and install the application under test on). Don’t worry if you’re not familiar with Unix or you’re not familiar with Windows. We’ll cover all the steps you need. By the end of this module we’ll have created the following:

BTS-Overview-Module-1

You’ll start out with a master/control machine, running Windows Server 2008 (yes I know that’s old but it’s less resource hungry and delivers everything we need). This machine will be running Jenkins which will control all the other machines. You’ll access this machine using RDP from your desktop or laptop.

Then you’ll add a Unix (Ubuntu) machine where we’ll build, install and test our Application Under Test (AUT). Normally you’ll have build and test on different machines but for this module one machine will do. The application we’ll be working with is Rocket Chat (see https://Rocket.chat). More on why we’ve chosen Rocket Chat back in the introduction post here.

Over time, and as you progress through these modules, we’ll be adding various other nodes/hosts to this network. They’ll include Selenium, JMeter and SoapUI nodes.

So we’re looking for a distributed network of nodes that will run within their own Amazon Virtual Private Cloud (VPC). We’ll make sure the machines have public DNS records so that we can access these machines from outside of the VPC. Within that VPC you’ll configure the security groups so that these machines only allow access for the services we need. Finally, we’ll connect both machines (Windows and Unix) by installing Putty (a Secure Shell client) on the Windows machine. Putty will allow us to SSH from the Windows machine to the Unix machine. If you’ve not come across Putty and SSH we’ll cover it all in just a moment.

For this module we’re only going as far as setting up the AWS virtual machines. We’ll be looking at installing Jenkins and setting up the AUT in the next module.

Let’s get started!

Part 1: Creating an AWS Account

We’re going to build our test system using Amazon Web Services. This allows us to spin up virtual machines in the cloud on demand. Once you’ve signed up for an AWS account you’ll have access to all of these services through the AWS dashboard:

AWS Dashboard

You’ll only need the EC2 service (along with the associated EBS storage, security and machine images) for now. So how do we create our AWS account?

If you already have an AWS account you can skip the following and jump to AWS Fundamentals.

Be warned though… if you have an existing AWS account you may be out of your 12 month Free Tier period. Or you may have nearly exhausted all of your free tier resources. So Amazon may charge you for running up the virtual servers covered here. It’s YOUR responsibility to monitor and, if necessary, pay for any usage above the free tier. If you don’t know how to monitor this we show you how at the bottom of this module.

Follow these steps to create your AWS account:

1. Click the ‘Sign In To The Console’ Button here (you can’t miss it!)…

http://aws.amazon.com/

2. On the “Create an AWS Account” page:

AWS Sign up

– enter your email
– select ‘I am a new user’
– click ‘Sign in’

3. On the “Login Credentials” page:

AWS sign up step 2

– complete all the fields
– click ‘Create account’

4. On the ‘Contact Information’ page:

AWS Sign up form

– select ‘Company Account’ or ‘Personal Account’
– complete all the fields (including a CORRECT telephone number*)
– Read the AWS Customer Agreement and confirm (no I didn’t read it … but you should)
– click ‘Create account and Continue’

* – I don’t like giving out my telephone number either but it’s the only way to sign up for an AWS account as they use this to confirm your details.

5. Enter your ‘Payment Information’

AWS phone number

– review the Free Usage Tier info
– enter credit card details
– click ‘Continue’

6. Complete the ‘Identity Verification Step’

AWS Phone Number

– confirm the correct telephone number
– click ‘Call Me Now’
– complete steps 2 and 3

7. Select your ‘Support Plan’

AWS Select Your Free Plan

– just stick to the Basic (Free) option
– click ‘Continue’

8. Sign in to the console

At this point you should see the welcome screen and be able to…

– click the ‘Sign In to the Console’ button
– enter your email address
– select ‘I am a returning user…’
– enter password
– click ‘Sign in using our secure server’

And that’s it. At this point you should have an AWS account, you should be logged in and see the management console that lists all of the available services.

AWS Dashboard

Part 2: AWS Fundamentals

These fundamentals are all you’ll need to complete all of our modules in this series. It’s all you’ll need to get productive with your testing on AWS. Trust me, despite all the different services and options, it’s easy to get started with AWS. You just need to grasp the following:

Services: Amazon provide a range of different services. A service can be thought of as the type of work a particular cloud resource provides. These services are grouped into categories like Compute, Storage, Database, Networking and others.

As an example, in the ‘Compute’ category, we have the EC2 (Elastic Compute Cloud) service along with other services like VPC (Virtual Private Cloud). In the ‘Storage’ category we have the EBS (Elastic Block Store) service along with other storage services like S3 (Simple Storage Service).

All of the services available are listed in the first page you see when you login to AWS:

AWS Management Console

For the purpose of these modules we’ll be focusing on EC2, EBS and VPC. Each of these are described in more detail below.

Zones and Regions: are used to reduce latency between the end users and the services that are provided. Amazon provisions these services in data centers located in different regions. For example services are provided from locations like US East (N. Virginia), EU (Ireland), Asia Pacific (Singapore), and many others. Note that not all services are provided in each region.

When you start services you’ll want to make sure that you have the correct region selected so that the service is started in your region. Also, and this is important, when you create VPCs, Security Groups and Key Pairs these are all linked to a region. Everything you need should be defined and created in the same region. So it’s best to select the region up front and stick with it.

Management console: the AWS management console gives you the ability to start, configure and manage the services you need. For example you can spin up an EC2 instance and from the management console start an RDP (remote desktop) session on that EC2 instance.

The key components of the management console are highlighted in this image:

AWS EC2 Management Console

EC2: is an abbreviation for Elastic Compute Cloud. Essentially EC2 is the service that delivers resizeable computing capacity. When you select EC2 in the management console you are given the capability to start, configure and manage virtual machines in Amazons cloud. These virtual machines are known as ‘Instances’.

AWS Management Console EC2 Option

EC2 Instances: from the management console you can run up virtual machine instances running all sorts of different operating systems, with different hardware platforms and with a range of different default software installed. Each virtual machine you run up is referred to as an ‘instance’. For example you can run up a virtual machine instance that has Windows 2008 Server, with 2 CPUs, 4 GB memory on a 64 bit architecture.

AWS Management Console EC2 Instance

EC2 Instance Types: Each ‘Instance’ you run up will be of a specific type. A ‘Type’ defines the CPU, memory, storage and networking capability of the ‘Instance’. Types typically range from ‘t2.nano’ (low capacity on all fronts) to different families of ‘Large’. In the example above we have an instance Type of ‘t2.micro’ which has 1 vCPU and 1GiB of memory.

You will find a list of all the different instance Types here…

https://aws.amazon.com/ec2/instance-types/

Images and AMIs: each time you start a virtual machine you don’t want to have to install and configure the operating system and additional software from scratch. To save you building the instance from scratch Amazon give you the capability to start the machine with a predefined image already installed. These images are known as AMIs (Amazon Machine Images). AMIs define the operating system, architecture (32 bit or 64 bit), launch permissions and storage for the root device.

A full break down of the AMI components can be found here….

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html

Elastic Block Store and Volumes: a virtual machine is no good without storage. Amazon provide a huge number of options here.

The most common data storage service is known as EBS (Elastic Block Store). EBS is a service that provides storage volumes that can be attached to running ‘Instances’, the data stored is persistent (e.g. data is retained across reboots, shutdowns, etc) and each EBS volume can persist independently from the life of the ‘Instance’ if need be.

Also available, and commonly used, is Instance Store storage. This storage service is automatically provisioned with some EC2 instances and AMIs. The defining feature is that the storage is physically attached to the host machine. You should be aware though that Instance Storage is NOT persistent. Data is not retained across shutdowns (stopping or terminating a machine).

For us, we’ll be using EBS. EBS because the data stored is persistent. Also, EBS is automatically provisioned on some AMI’s. The AMI’s we’ve selected for this course all have EBS storage (e.g. so you’ll have storage that is automatically setup and that will persist when you stop or terminate the machine).

You can see the storage type you have associated with your Instance here:

AWS Management Console EC2 Instance Type

And if you want to know more about AWS storage it’s all in here…

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html

Network and Security: when you start creating instances from your Amazon management console you might notice that Amazon creates a Virtual Private Cloud (VPC) for you automatically. Every Instance you create is placed in your VPC automatically too. If you click on the console home icon and then select VPC you’ll be taken to the VPC dashboard. This is where you’ll see details for the VPC that your instances are placed in.

AWS Virtual Private Cloud

Pretty much all of the settings here you don’t need to worry about. AWS takes care of all of this for you. However, there’s one setting, ‘DNS Resolution’, that you will need to update and set to ‘yes’. We’ll talk about how and why later but this makes all the Instances you create visible with public hostnames and IP addresses outside of your VPC. More on this later.

Once Instances are run up you’ll see them associated with your VPC here…

AWS Virtual Private Cloud

This VPC is essentially the same as the networks you’d find for your physical PCs and laptops in your office. Just be aware that by default your instances are NOT given public IP addresses and host names. Everything is defined by default with local VPC IP addresses and host names. Again we’ll look at configuring the VPC to provide public IP addresses and host names in a moment.

Security Groups: Built in to the whole AWS framework are some pretty clever security capabilities. Key to running our instances is the concept of Security Groups. Each security group contains a set of firewall rules. Each time you start an instance you specify the security group you want to associate to the instance and the instance inherits these firewall rules.

AWS Security Groups

So for the example above when we run our AUT (Rocket Chat) on one of our instances the AUT needs the rules defined to allow access to the AUT services that will be running.

What we’ll do for your setup is define a couple of security groups that contain rules for each Instance that you want to set up. Then each time we start an instance that will be running either Rocket Chat or some other application we’ll associate the required security group with the instance.

Key Pairs: AWS uses key pairs to encrypt and decrypt login details (e.g. passwords). These are the login details you’ll need to access both your Windows and Unix Instances.

The concept here is that a public and (related) private key are created. Amazon encrypt the password with the public key. Amazon provide you with the encrypted password that no one can decrypt. That is, unless you have the associated private key. Assuming you do have the private key, you can then decrypt the encrypted password.

So to log in to any Windows or Unix Instance you create you will need to create a public and private key pair first. Only once you have the key pair can Amazon encrypt the login details, and you decrypt the login details.
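You can see this public-encrypt / private-decrypt round trip for yourself with openssl. This is purely an illustration of the concept, not what AWS actually runs; the file names and password below are made up:

```shell
# Generate a throwaway private key, then derive its public key
openssl genrsa -out /tmp/demo_priv.pem 2048 2>/dev/null
openssl rsa -in /tmp/demo_priv.pem -pubout -out /tmp/demo_pub.pem 2>/dev/null

# "Amazon" encrypts a password with the public key...
echo 'S3cretPassw0rd' > /tmp/demo_plain.txt
openssl pkeyutl -encrypt -pubin -inkey /tmp/demo_pub.pem \
  -in /tmp/demo_plain.txt -out /tmp/demo_enc.bin

# ...and only the holder of the private key can decrypt it
openssl pkeyutl -decrypt -inkey /tmp/demo_priv.pem -in /tmp/demo_enc.bin
```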

It’s slightly different for Windows and Unix Instances here. For Windows instances it’s the login password that is encrypted and used to login. For Unix instances it’s the key pair that’s needed in conjunction with SSH (Secure Shell) to login (there are no passwords for Unix machines).

More about all of this in a moment. For now though just be aware that you can create your public and private keys in the EC2 dashboard in the management console here…

AWS Key Pairs

We’ll go into much more detail about this in a moment. Just remember that once you have the private key you MUST NOT lose it. You can ONLY download the private key you need ONCE!!!! Don’t lose it!!!!

So what’s next then?

Well we understand the fundamentals so let’s start running up our instances and building our test system. First we’ll need to configure some security groups and set up a security key pair. Once this is complete we’ll be ready to run up our Windows 2008 instance and our Ubuntu Unix instance.

Part 3: Configuring Security Groups

First make sure you select the right region (select this from the management console top right). Security Groups are created FOR each region and can’t be used across regions.

Once you’ve selected the region we need to create 2 security groups. The first will be used for our Windows master machine. The second security group for our Unix machine that is running Rocket Chat (the application under test).

You can configure these two security groups by following these steps….

1. on the Management Console home page click the “EC2″ icon
2. on the EC2 Dashboard page click on “Security Groups” menu item
3. on the security groups page click the ‘Create Security Group’ button

The settings for both of these security groups you need to create are as follows:

Windows-Master
Security group name: Windows-Master
Description: Windows Jenkins Master Instance
VPC: “Select your default VPC”
Inbound rules (click to enlarge this image):
AWS Security Group Example 1

What this will give you is a machine that lets you access Jenkins, which is web based, serving up HTTP and/or HTTPS traffic to your local browser. It will also allow SSH (Putty) access so that you can create a terminal session on the Unix instance you’ll be setting up. Finally RDP access is provided so that you can create an RDP session from your local PC or laptop.

Note that we’re selecting a ‘Source’ of ‘My IP’ so that only YOUR MACHINE will have an inbound connection to the instances we run up that use this security group. No other machines will have access (note that if you’re on a laptop and you move locations you may end up being blocked yourself). You can leave the source as 0.0.0.0/0 for each rule but it’s less secure.

Unix-AUT
Security group name: Unix-AUT
Description: Unix Rocket Chat AUT Instance
VPC: “select your default VPC”
Inbound rules (click to enlarge this image):
AWS Security Group Example 2

What this will give you is access to the Rocket Chat AUT that serves up HTTP/HTTPS traffic to your local browser. The Rocket Chat application also needs to provide and consume data from the Mongo Database that is accessed on port 27017. It will allow SSH for terminal access from the Windows master machine. That’s all we need for now.

Just as a note you’ll see under security groups that by default you already have one security group created. This group will have a source id that refers to the same security group id (e.g. sg-xxxxx). It’s a circular reference if you like but just means that any instances in this security group can access any other instances in the same security group.

Once you’ve completed this you should have something like this…

AWS EC2 Instance Security Groups

Part 4: Creating a Security Key Pair

Again, make sure you have the right region selected (select this from the management console top right). Key pairs are created FOR each region and can’t be used across regions.

Now you need to create your security key pair. The private key you obtain from this process you’ll need to keep safe as we’ll need it later in the process. It’ll be needed to decrypt the windows password and to login to the Unix machine using SSH/Putty.

To create your key pair follow these steps:

1. on the EC2 Dashboard page click on “Key Pairs” menu item
2. on the key pairs page click the ‘Create Key Pair’ button
3. give the Key Pair a name (e.g. FirstKeyPair)
4. at this point you should be prompted to download the .pem file
5. click okay and save the .pem file somewhere safe

AWS EC2 Key pairs

DON’T LOSE THIS FILE!

You can open this .pem file with a text editor (e.g. Notepad) if you like. You’ll see it’s just an RSA Private Key. Don’t worry about this for now though. You just need to know that we’ll need this later when we want to get access to our instances.
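If you’re curious what such a file contains, you can generate a throwaway key locally with openssl and inspect it. Whatever you do with the real FirstKeyPair.pem, it’s also worth locking its permissions down, since SSH clients insist on a private key not being readable by anyone else:

```shell
# Generate a throwaway key just to see what a .pem file looks like
# (your real FirstKeyPair.pem comes from AWS, not from this command)
openssl genrsa -out /tmp/throwaway.pem 2048 2>/dev/null
head -n 1 /tmp/throwaway.pem    # first line is a BEGIN ... PRIVATE KEY header

# Lock the file down to owner read-only, as you should with the real key
chmod 400 /tmp/throwaway.pem
ls -l /tmp/throwaway.pem
```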

Now we’re ready to start the fun stuff!

Part 5: Running up the Windows Instance (VM)

There’s four things we need to create our Windows Instance:

  1. AMI – Microsoft Windows Server 2008 R2 Base – ami-c5a7bea4
  2. Instance Type – t2.micro
  3. Security Group – Windows-Master (we created this earlier)
  4. Private Key – .pem file (from our key pair created earlier)

Next then, on the EC2 console/home page click “EC2 Dashboard”. You should see a resource list like this….

AWS Resource List

From here you click the ‘Launch Instance’ button. You should be able to follow the steps using the four items of info listed above to create your first instance. Just accept all the defaults as you go through the steps.

Step 1: select the AMI listed above (2008 R2 Base – ami-c5a7bea4)
Step 2: select the Type listed above (t2.micro)
Step 3: accept all the ‘Instance Config’ defaults
Step 4: accept all the ‘Storage’ defaults
Step 5: [optional] add a tag for the Name if you like (e.g. Windows-Master)
Step 6: for the security group we’ll need to:
Step 6a: Select an existing security group
Step 6b: Check the ‘Windows-Master’ security group AND
Step 6c: Check the ‘default’ security group *
Step 7: Review and then ‘Launch’

At this point you’ll be asked to select a key pair. We’ll use the key pair we created earlier. The public key from this pair will be used to encrypt the Windows password for the instance that is about to be created. If you don’t have the .pem file with the private key in it, you’ll never be able to decrypt the password and you’ll never gain access to your instance. You’ve kept the private key safe, right?!

Step 8a: Choose an existing key pair and select the ‘FirstKeyPair’
Step 8b: Acknowledge the warning and click ‘Launch Instances’

* Selecting both of these security groups will give you public access to the instance and give private (within your VPC) access to any other machines we create in your VPC.

Whilst this instance is coming up we’ll start creating our Unix instance.

If you’d like a more detailed account of how to launch a Windows Instance I’d recommend this:

http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html

Part 6: Running up the Ubuntu Unix Instance (VM)

There are four things we need to create our Unix instance:

  1. AMI – Ubuntu Server 14.04 LTS (HVM), SSD Volume Type – ami-5189a661
  2. Instance Type – t2.micro
  3. Security Group – Unix-AUT (we created this earlier)
  4. Private Key – .pem file (from our key pair created earlier)

Next then, go back to the EC2 console/home page and click “EC2 Dashboard”. From here click the ‘Launch Instance’ button again. You should be able to follow the steps using the four items of info listed above to create your second instance. Just accept all the defaults as you go through the steps.

Step 1: select the AMI listed above (Ubuntu Server 14.04 LTS*)
Step 2: select the Type listed above (t2.micro)
Step 3: accept all the ‘Instance Config’ defaults
Step 4: accept all the ‘Storage’ defaults*
Step 5: [optional] add a tag for the Name if you like (e.g. Unix-Client)
Step 6: for the security group we’ll need to:
Step 6a: Select an existing security group
Step 6b: Check the ‘Unix-AUT’ security group AND
Step 6c: Check the ‘default’ security group
Step 7: Review and then ‘Launch’

At this point you’ll be asked to select a key pair again. We’ll use the same key pair we created earlier. For the Unix instance the public key from this pair is installed on the machine so that you can log in over SSH with the matching private key. If you don’t have the .pem file with the private key in it, you’ll never gain access to your instance. You’ve kept the private key safe, right?!

Step 8a: Choose an existing key pair and select the ‘FirstKeyPair’
Step 8b: Acknowledge the warning and click ‘Launch Instances’

* This instance does not come with EBS storage. It comes with SSD storage. That is, we have storage that is not persistent. When we ‘stop’ or ‘terminate’ this instance we’ll lose all our data. That’s fine for what we want to do, but just be aware that nothing is retained on this instance.

If you’d like a more detailed account of how to launch a Linux Instance I’d recommend this:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

If you go to the EC2 Dashboard now you should see something like this…

AWS EC2 Running Instances

And if you click on ‘2 Running Instances’ you should see a list of your instances like this….

AWS EC2 Running Instances 2

Now we’re ready to log in to the Windows master machine, install Putty and then connect to the Unix machine. At which point it’ll be job done. So not much left now.

Part 7: Connecting to the Windows Master Machine

Connecting to the Windows machine is pretty straightforward. Just follow these steps:

Right click and select ‘Get Windows Password’
AWS EC2 Get Windows Password

Select the path to the ‘Private Key’ you saved earlier
AWS EC2 Decrypt Windows Password

Click the ‘Decrypt Password’ button

Write down the Windows Admin password and the Public DNS host name

IMPORTANT: If you don’t get a public DNS host name you’ll have to complete these steps and change this setting:

AWS EC2 Public DNS
  1. Click ‘Services’ (top menu bar)
  2. Select ‘VPC’ (in the drop down menu)
  3. Select ‘Your VPCs’ (in the side menu)
  4. Right click on your VPC entry
  5. Select ‘Edit DNS hostnames’ (in the context sensitive menu)
  6. Select the ‘Yes’ radio button and then ‘Save’
  7. Then return to ‘Services -> EC2’ and your list of running hosts

At this point you should have the IP address and hostname of your Windows server, your username (Administrator) and your password. You can right click on your ‘Windows Instance’ entry in the list and select ‘Connect’.

AWS EC2 Windows connect

Open the ‘Remote Desktop File’. When you open the file with the default application ‘Remote Desktop Connection’…

AWS EC2 Windows connect with RDP

… you should then be able to connect using the credentials you have. With the RDP session established and access to the Windows server desktop we’re in a position to start installing Putty and connecting to our Unix machine.

Please note that you’ll need to connect using this method each time you need access. Don’t bother saving the RDP file as this will have a specific Public DNS record / Host name. When you restart this machine the Public DNS / Host name will change. This won’t be a problem if you always connect following the steps above because Amazon takes care of updating the RDP file with the correct IP address and host name each time.
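To see why the private key matters so much here, the mechanism behind the ‘Get Windows Password’ step can be sketched with OpenSSL and a throwaway key pair (purely illustrative; AWS handles the real encryption and the exact encoding differs):

```shell
# Throwaway key pair standing in for the AWS key pair (illustration only)
openssl genrsa -out demo-key.pem 2048 2> /dev/null
openssl rsa -in demo-key.pem -pubout -out demo-key.pub 2> /dev/null

# "AWS side": encrypt a sample password with the PUBLIC key
printf 'S3cretPassw0rd' > password.txt
openssl pkeyutl -encrypt -pubin -inkey demo-key.pub -in password.txt -out password.enc

# "Your side": only the matching PRIVATE key (the .pem) can recover it
openssl pkeyutl -decrypt -inkey demo-key.pem -in password.enc   # prints S3cretPassw0rd
```

Lose the private key and that encrypted password is unrecoverable, which is exactly why the .pem file must be kept safe.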

Part 8: Installing Putty and SSH on the Windows Machine

Not sure if you’ve come across Putty yet, but this is a great little application (actually not that little, if you consider how much has gone into this suite of tools) for connecting via a secure shell from a Windows machine to a Unix machine. If you set it up correctly you can open a shell session (command prompt) on the Unix machine at the click of a button, without even entering a password. Here’s how:

  1. Open Internet Explorer within the RDP session on your Windows server
  2. Either search for Putty or enter this URL

    http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

    (MAKE SURE YOU GO TO THE ‘chiark.greenend.org.uk’ SITE)

  3. Download the Putty Installer: putty-0.66-installer.exe
  4. Run the installer selecting all the defaults

At this stage you should be able to see all the Putty tools in the Start menu. The tools we’ll be needing are…

  • PuTTYgen: allows us to convert our .pem key (see below)
  • Pageant: authentication agent that runs on the Windows machine
  • PuTTY: Secure Shell client that connects to our Unix machine

Part 9: Connecting to the Unix Client Machine

Three very straightforward steps to getting connected to our Unix machine. It’s very important we set this up correctly though. First, because it makes our life easier (quickly creating a terminal session on the Unix machine) and secondly because later our Jenkins setup will connect automatically from this Windows machine to the Unix machine using SSH. SSH is a key component in our process of automating everything.

Step 1. Convert our .pem key
So the .pem private key Amazon gave us for connecting to our Windows and Unix machines isn’t supported by Putty by default. But it’s easy to convert it to the right format. Just…

  1. On your desktop/laptop copy your .pem file (e.g. FirstKeyPair.pem)
  2. On the windows server paste the .pem file to the desktop
  3. On the windows server start ‘PuTTYGen’
  4. In ‘PuttyGen’ select SSH-2 RSA (the AWS key is an RSA key)
  5. Then click ‘Load’

    AWS EC2 Load pem file in puttyGen

  6. Load the .pem file (making sure you select ‘All Files (*.*)’)

    AWS EC2 Load pem file in puttyGen

  7. Enter a passphrase and click the ‘Save private key’ button

    DON’T FORGET THE PASSPHRASE YOU USE!

  8. Save the new .ppk file to the desktop (call it FirstKeyPair.ppk if you like)
  9. This should leave you with the following icon on the desktop

    AWS EC2 Load pem file in puttyGen

Step 2. Run Pageant (the Putty agent)
Now that we have a .ppk file Putty can use, we set up Pageant. Pageant runs in the background and holds our private key (the .ppk file). When you make a connection to the Unix machine, Putty takes the key from the agent and uses it to establish a secure connection. Don’t forget that when AWS created our Unix instance it used the public key as part of the process for setting up the connection credentials. So when we try to connect using our private key it works in conjunction with the other half of the key pair, the public part. So let’s set up Pageant:

  1. Double click the FirstKeyPair.ppk icon on the desktop
  2. Enter your passphrase and click OK
  3. Check that Pageant is running

You should see Pageant running in the task tray. If you double click on the icon you should see that your private key is loaded.

AWS EC2 Load pem file in puttyGen

Now all we need to do is open the connection up to the unix machine.
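If you’re more familiar with OpenSSH than Putty, the same pattern (an agent holding your key in the background) looks like this on Linux/macOS. This is a rough equivalent of the Pageant flow, shown with a throwaway key:

```shell
# Start an agent in the background (the OpenSSH equivalent of Pageant)
eval "$(ssh-agent -s)" > /dev/null

# Generate a throwaway key and hand it to the agent
ssh-keygen -t rsa -b 2048 -N '' -f demo_id_rsa -q
ssh-add demo_id_rsa 2> /dev/null

# List the keys the agent now holds (like double clicking the Pageant icon)
ssh-add -l | tee agent-keys.txt

# Clean up the agent when done
eval "$(ssh-agent -k)" > /dev/null
```

Once the agent holds the key, `ssh ubuntu@<private-ip>` connects with no password prompt, just as Putty does below.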

Step 3. Start Putty and connect with SSH
Now we’ll start Putty and connect to our Unix machine. We’ll configure Putty correctly so that it’s easy to connect each time we need a terminal open on our Unix machine. Complete these steps:

  1. From the Start menu start Putty

    AWS EC2 Run Putty

  2. From your AWS Management console find and copy your Unix Private IP

    AWS EC2 Run Putty b

  3. Paste the IP Address into the Putty window
  4. Make sure SSH is selected
  5. Enter a saved session name (e.g. Unix-client)
  6. Click ‘Save’

    AWS EC2 Run Putty c

  7. Click ‘Connection -> Data’ in the side menu
  8. Enter ‘ubuntu’ for the Auto-login username

    AWS EC2 Run Putty d

  9. Click ‘Session’ in the side menu
  10. Click the ‘Save’ button again
  11. Click ‘Open’

At this point you should be prompted with a security alert. First time round it is fine to select ‘Yes’. And from here you should go straight into an SSH shell session on your Unix client machine with NO login required.
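That security alert is Putty showing you the server’s host key fingerprint and asking you to trust it; on later connections Putty checks the server still matches the cached fingerprint. You can see what such a fingerprint looks like with the OpenSSH tools and a throwaway key (illustration only; the real fingerprint comes from your server):

```shell
# Generate a throwaway host-style key pair
ssh-keygen -t rsa -b 2048 -N '' -f demo_host_key -q

# Print its fingerprint -- the same kind of value Putty shows in the alert
ssh-keygen -lf demo_host_key.pub
```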

At this point you can just type ‘exit’ and return in the shell window.

Whenever you need a connection to this machine you can just carry out the following action…

AWS EC2 Run Putty e

This will take you straight into the SSH shell prompt on your Unix machine (no password required).

To be honest we don’t have much requirement to actually use the shell prompt for what we need. However, Jenkins does require SSH to be set up and configured in order to use the Unix machine as a build and install node. If this is working we should be okay for the next stage.

The next stage then is installing Jenkins on the Windows machine and setting the Unix machine up as a Jenkins node. Before we jump to that though, a couple of small points…

Part 10: The Difference Between AWS Terminate and AWS Stop

You’ll notice in the management console that you have 2 options for bringing your servers down (both for Windows and Unix).

AWS EC2 Shutdown and Stop

Stop: when you stop an instance, the instance is shut down. If it has EBS storage (like our Windows server does) the data on this storage is maintained. If it has SSD storage (like our Unix server does) the data on this storage is NOT maintained. When an instance is shut down you can restart it when you need it again. Things like instance ID, EBS storage, private DNS and private IPs are maintained and restored. Things like public DNS and public IPs may change (they will on our setup).

Note that when an instance is in a ‘Stopped’ state you are not charged for its use. However, you will be charged for any EBS storage that is maintained. If you don’t want to be charged then you will need to ‘Delete’ the volume OR terminate the instance.

Terminate: if you terminate the instance everything is deleted. Terminate an instance if you no longer need it, as you can NOT restart it or connect to it again. So only use this if you don’t need any of the data anymore. You can also use this option if you want to make absolutely sure you are not charged anymore. Any EBS storage you have when you terminate an instance is deleted too (so you won’t be charged for EBS storage after you’ve terminated either).

NOTE that during this course we…

– DO NOT want to terminate the Windows instance. We need to retain the data on this machine as it’ll be our master machine running Jenkins.

– We don’t really want to terminate our Unix instance either. However, no data will be stored on this machine that we need to keep. So if you do terminate it that’s okay, as we can just run up another instance with the same AMI.

– We recommend that you ‘STOP’ instances when you are not using them. This will mean that you are not charged for the instances. However, YOU WILL continue to be charged (if you go outside of your free tier allocation) for EBS storage that is retained.

For this reason we strongly advise that you monitor your AWS spend.
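As a rough sanity check on that EBS charge: the cost is just the retained volume size multiplied by the per-GB-month rate. A minimal sketch (the $0.10/GB-month figure is an assumed example rate, not current AWS pricing, so always check the pricing page):

```shell
# Estimated monthly cost of a retained 30 GB EBS volume at an
# assumed example rate of $0.10 per GB-month
awk 'BEGIN { printf "30 GB x $0.10/GB-month = $%.2f per month\n", 30 * 0.10 }'
```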

Part 11: How to Check Your AWS Spend

Within the AWS management console you’ll find a ‘Billing and Cost Management’ option:

AWS EC2 Cost Management

In here you’ll see what your current balance is (it should remain at $0.00 if you stay within your free tier constraints). The most important section though is the ‘Top Free Tier Services by Usage’ section.

AWS EC2 Cost Management Details

Monitor the stats here to see how close you are to going over your free tier allocation. If you’ve started with a clean AWS account for this course you shouldn’t go over your free tier allocation.

IT IS YOUR RESPONSIBILITY TO MONITOR THIS. CHARGES INCURRED ARE YOUR CHARGES TO PAY.

If you want to be absolutely sure that you don’t go over the free tier allocation then I would recommend reading this….

http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier-alarms.html

 

Conclusion

So we started by going through the sign up process to create a new AWS account. We’ve learnt about the AWS fundamentals, covering Virtual Private Clouds, the Elastic Compute Cloud and storage. From there we created our first Windows and Unix instances from Amazon Machine Images. To finish off we set up our servers so that we have RDP and SSH access. In short, we started out with nothing and now we have our own cloud environment with virtual machines and storage.

In the next module we’ll be installing Jenkins and looking at the process of automating the build/install of our application under test. Lastly, if you’d like notifications about this course and the PDF course notes, please feel free to sign up below.

Free Test Automation Framework Course
Learn the practical steps to building an automation framework. Six modules, six emails (each email a short course in its own right) covering Amazon AWS, Jenkins, Selenium, SoapUI, Git and JMeter.

Building the Test Automation Framework

January 18th, 2016 by Bill Echlin

Welcome to Building the Test Automation Framework: a 6 part course that takes you step by step through the process of building a test automation framework, completely from open source tools. Each of the 6 modules is a short course in its own right, covering tools like Amazon Web Services, Jenkins, Selenium, Git, SoapUI and JMeter.

When we’ve finished we’ll have a distributed test automation framework, based on the principles of Continuous Integration and DevOps. This is about building a framework at the system level that gives you the platform to automate much of what you need to do on a day to day basis.

 

To build this platform we have a series of 6 modules to get through. You’ll be learning about putting together the following Open Source and free components to build an automation rig:

Amazon Web Services: the platform we’ll use for building out our
windows and unix test automation environment.

Jenkins: the continuous integration tool we’ll use to build the
application under test and trigger all our automated test actions.

Selenium: the test automation tool we’ll use to run our GUI browser based automation tests.

SoapUI: the test automation tool we’ll use to run our API based automation tests.

Git: the source control tool we’ll use to manage and retrieve the source code for our application.

JMeter: the load test tool we’ll use to assess the performance of the application under test.

At the end of all 6 modules YOU will have built this…

Test Automation Framework

 

You don’t need to know anything about any of these tools to follow these tutorials. Each of these 6 modules will provide the key details you need to understand how the tools work. The important point is that you’ll learn how to bring these tools together to create a working test automation framework.

All the machines will be hosted in the cloud with AWS. This makes it easy for you to replicate the system using exactly the same hardware and software we’ve used in these tutorials. Jenkins will act as the control tool. We’ll cover the build process for the application under test and focus on how Jenkins can control our machines in this test environment.

Selenium will be used to run a range of browser based tests on different platforms. Again the focus is not so much on how to use Selenium, more on how to pull Selenium into a fully featured test system. SoapUI will be triggered to run REST based API tests. JMeter will be kicked off to give us some feedback on application performance.

With Git we’ll be using an existing source code repository hosted on GitHub. GitHub contains the source for our application under test. To start out we’ll just pull down the source, build and then deploy the application. As things progress we’ll look at how to trigger builds, deploy and execute tests based on code check-ins.

What are we going to test?

We’ve chosen an open source web based chat application called Rocket.Chat. This is a cross platform chat application with clients for multiple browsers, iOS, Android and Windows. It has a decent feature set, enabling us to build out a full featured automation rig. More on the Rocket.Chat application here…

https://rocket.chat/

And that’s it. We’re starting out with nothing. By the end we’ll have an integrated test environment using many of the biggest open source test tools available. In the first module, up next, we’ll look at Amazon Web Services. By the end of the second tutorial you’ll have automated the Rocket.Chat install. Then with the third and fourth modules you’ll have a range of GUI and API tests in place. In the fifth module we’ll be triggering tests from source code check-ins. And, finally, in the last module we’ll be kicking off our performance tests with JMeter.

Hope you enjoy the journey.


Is it just me or do you get this feeling too?

January 5th, 2016 by Bill Echlin

One of the stand out moments in 2015 has to be the high profile fall out in one of the biggest sports in the world. A fall out between one of the most successful partnerships of all time in Formula 1. A fall out that had many parallels with the way testers are treated in our industry.

The Chairman of Renault, Carlos Ghosn, stated publicly that he wanted to sever ties with Red Bull. One of the most successful F1 partnerships of all time, Red Bull and Renault, fell out. Why?

Carlos explained that…

“Unfortunately when we were winning championships the Renault name was never mentioned. It was the [Red Bull] team that was winning”.

And when things weren’t going so well….

“….some of the teams using our engine did not fare well, and the reasons for which they are not performing became the engine.”

Maybe it was the engine. Maybe it wasn’t. Yet when….

“….you are in the game, when you perform very well you are never mentioned, and when there is a problem with the team you are the first guy to be pointed at.”

I felt for this guy. I’ve been there. I’m guessing, as a tester, you’ve been there too.

No one questions the fact that to succeed in the software development world you need good testers. When things go well we rarely get any credit. Yet, when things go wrong it’s the testers that usually take the flak.

“Why didn’t you spot that issue before we went live? We’re paying you to pick these issues up before it’s too late!”

When things go well, how much recognition do we get? We didn’t actually build anything that the company could sell. We didn’t contribute to helping a project meet its deadlines. In fact it probably looks like we were there contributing to missing the targets.

We just managed the testing and contributed to some nebulous concept called quality. Kind of like that F1 engine. It’s out of sight. You can’t see it. Few people outside the team talk about it. When it goes wrong though – all hell breaks loose.

When things go well ….. no one mentions us. When things go wrong we’re there to blame.

Is it just me or do you get this feeling too?

It’s understandable that sometimes we’ll feel like odd balls sitting on the side line. After all we’re sitting there picking faults with everything we see. Just that picking up those faults is quite important. What we produce is never seen by the end customer. Yet it’s what’s not seen that makes the difference. Much the same as that engine hidden from view in a Formula 1 car.

With the best teams, when things are working well, you feel included. You’re made to feel like you’re part of the reason for the success. You’re complimented for your contribution. People outside of the team are even made aware of your critical contribution.

It’s a defining mark of a great team. When the team does well everyone gets a mention. And that means the test team get mentioned too….. even if we are hidden from view!


Why Don’t We Thrash Out The API Design At The Start?

December 15th, 2015 by Bill Echlin

It’s the start of a new project. Everyone is throwing in design ideas. Lot of excitement. Bit of a buzz.

High level architecture is taking shape. Wire diagrams for the GUI are nearly finished. ‘This’ system will talk to ‘that’ system. You know which systems are storing the different pieces of data. Front end will be mobile iOS and Android. We’ll use a REST api to talk to the back end. The design is looking good.

And everybody’s off. All beavering away. All building their own little bit of the system.

Won’t be long before we put all the different parts of the system together for the first time. See it all working as a system.

Sure… we’ll find a few bugs. The Testers will find them. Few fixes and we’ll be up and running.

Then, when everything is plugged together, we start asking, “why did we not stop to look at defining the APIs in detail in that initial design?”

Oh sure, someone mentioned it early on. Everyone was too busy though. And anyway we had a rough idea of which calls and end points we’d need. All looked pretty straightforward.

BUT it’s not. Never has been. Never will be!

The whole damn team, including a tester, should have been locked in a room for 3 days. They should have been confined to that room (with pillows, bread and water if necessary) and not allowed out. Not allowed out until the API calls were defined in detail.

Defining the API is just the BEST opportunity the whole project team have to validate the design right at the start. It’s a huge chance to save a massive load of work and fixes on the back end of the project.

And guess what? It’s fun. It’s interesting. It’s great for team building!

BUT it does take a room full of people, a significant amount of effort and a good bit of brain power to pull this off in the early stages of a project.

The idea is that you walk through every use case, every wire diagram and every requirement. You look at the data that’s needed at every stage. Then pull together a list of all the different calls and the responses.

Absolutely key to this is having a tester in the room. Someone who’s going to ask awkward questions. Maybe even daft questions. Someone who’s going to make life a little bit more difficult for everybody involved. Someone who’s going to push the team to go through the process properly.

It’s a detailed design review centred around the most fundamental component of the project. The API. A detailed design review where the tester pushes “everyone” to test the design on paper before anything is built.

There are so many benefits to running through this early on in the project…

Firstly, everyone comes out of this understanding what’s being built. It’s a process of educating everyone in the team.

Secondly, you will find bugs. Significant bugs in the design up front, early on in the project life cycle.

Thirdly, if you’ve designed the calls and data up front you can mock the services. Then each part of the team has something concrete to work with.

I don’t care if you work in an agile fashion and you’re going to build little bits at a time. You know the “we’re using agile – everything will be okay” kind of mentality. That’s like going on a 1000 mile road trip and only buying a map at the start of each 100 mile section. You wouldn’t start out on that trip without looking at a high level map to validate you’re heading in the right general direction.

All sounds so simple. So logical. Such a smart thing to do. But do we do it? No.

I’ll tell you this though, you’ve only got to see this work in practice once and you’ll do it on every project you work on. Every time.

Try it!
