In module 2 we looked at creating our first keyword automated test project. In that module we touched on a number of key areas of the user interface, like the Project Workspace and the Object Browser. Each of those areas gives us the ability to develop our tests and examine the applications we're testing.

In this module we’re looking at some other key aspects of the user interface. Key components that you’ll find yourself using on a regular basis to help you develop, debug and run your automated tests.

 

 

In the next few paragraphs we’ll go through some of the most useful components within the user interface.

 

module3-view-menu

 

First up, then, is the View menu. In here you'll find the 'Select Panel' option, which allows you to select which panels you want to display in the TestComplete user interface.

module3-properties-panel

 

 

One useful panel is the Properties panel, which shows you the properties for each of your test items. For example, you can pick a test item, such as a keyword test, and see the path and file name of that test on your file system. That's useful if you want to back up a particular file, or if you just want to find out where the complete project suite is located on your file system.

Next is the Object Spy. This allows us to identify objects in our application using a cross-hair and then inspect their properties and methods.

module3-object-spy

 

You can either drag the cross-hair over the object you want to look at, or you can use the 'point and fix' method (put your cursor over the object and press Shift + Ctrl + A). Once you've picked out an object you can see the full name for the object, along with a list of its methods and properties (more on methods and properties later in this post).
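To give you an idea of what that full name looks like (an illustration only; the window class and caption shown here are made up and will differ for your application), it's a complete path from the top-level Sys object all the way down to the object itself:

Sys.Process("calcplus").Window("SciCalc", "Calculator Plus", 1)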

module3-object-browser

 

One key feature here (pretty innocuous but very useful) is the 'Highlight in Object Browser' button. Once you've identified an object you're interested in, click this button and the object will be shown in the Object Browser. This is useful because you get to see the context and position of the object in relation to its parent and child objects. That might not seem important now, but it becomes absolutely key to getting a good feel for the construction of the application you're testing as you progress.

module3-visualizer-record

Next up is the Visualizer. The Visualizer is one of the most useful components in TestComplete when it comes to fixing and modifying your tests. You can configure the Visualizer to take pictures when you record your tests, in which case you'll see the images in the Keyword test workspace panel.

module3-visualizer-playback

And after you've run a test you can see a comparison of the 'Expected' image (from the recorded Keyword test) and the 'Actual' image (from the test run) in the log files.

module3-visualizer-settings

 

Typically it’s kept on whilst you develop your tests. And also kept on when running your tests for the first few times. You don’t want to keep it on for all your test runs as it’s pretty resource intensive. Once your test are running smoothly you would switch the visualizer off and enable just the ‘post image on error’ setting. This way TestComplete only takes a visualizer image when your tests pick up an error. Visualizer settings are found in the Project Properties tab (more on this later).

module3-restore-default-docking

 

Next on our list of user interface components is the Integrated Development Environment itself. There's a lot you can configure here. Just try dragging different components around the GUI and re-arranging them. You can also close panels and open new panels (see above). What usually happens, though, is that you lose a panel or just end up with a mess, at which point you'll want to return everything to normal. You can reset the layout from the 'View' menu.

module3-keyword-workspace

Up next is the Keyword test editor workspace. This is where you'll spend most of your time working. We'll look at Keyword test development later, but for now you just need to know about the main components in this panel. These are…

Test Steps: select this tab and you'll see the panel listing all the test steps that make up the Keyword test. There are also 'Variables' and 'Parameters' tabs, but we'll talk about those in a later module.

Operations: this panel gives you a list of test actions and other items that you can use in your keyword tests. For example, there's the 'On-Screen Action' item that you'll use countless times to complete actions within your application.

Visualizer: shows images of the application as the test steps are completed (we've talked about this above).

Menu Bar Buttons: a panel with a range of buttons used to create and modify your Keyword tests. For example, the 'Append to Test' button allows you to start recording again and add more test steps to your Keyword test.

 

Whilst we’ve covered all the key components in TestComplete IDE there is one last, very important, bit to cover. This is a three step process you’ll use on a regular basis when inspecting, investigating and capturing the objects in your application.

This is VERY IMPORTANT. You may not understand exactly why yet (we'll come to that as we start building tests), but get into the habit of following these steps. They should be second nature to you before you move on.

 

module3-highlight-in-object-tree

Step 1. Open the Object Spy and identify the object you're interested in, or at least get close to it (for example, if it's an HTML table structure you might find it a bit tricky getting the exact object). Once you've identified the object, click the 'Highlight Object in Object Tree' button.

module3-object-in-object-tree

Step 2. At this point you should see the object tree with the object you’re interested in highlighted. Now you really get to see if you’ve picked the right object and you start to get a feel for the context of the object in relation to the rest of the objects in your application.

Now you can look in detail at the object, examine the other related objects and make sure you have the right one. When you have selected the correct object right click and select ‘Map Object’.

 

 

module3-map-the-object

 

Now if you’re working with a particularly difficult application (e.g. the object are difficult to identify uniquely) then you can opt to ‘Choose a name and properties manually’ or you can just pick the first option and let TestComplete map the object. If in doubt pick the first option for now.

module3-show-mapped-object

Step 3. The Object Browser is the list of everything on your system (all processes, windows, browsers, etc). When you map an object, and add it to the Name Map, you're basically telling TestComplete to add it to the list of objects you're interested in for your automation project. You don't want a list of everything on your system, just a list of the objects you care about for your project. That list is the Name Map.

module3-name-map

And once you click the ‘Show Object in Name Map’ TestComplete will show you that object in this focused list of objects. This is the list we’ll build that will contain everything we need to know about our application for the purpose of our automation effort.

 
This might seem convoluted at this point, and it may seem a little obscure as to why you'd want to repeat these steps. Just go with it for now though. Practice it several times to get a feel for jumping between the Object Spy, the Object Browser and the Name Map. This interaction and movement between the Object Spy, Object Browser and Name Map is key to getting the most out of your automation with TestComplete. You'll find out exactly why in the coming modules.
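To give just a flavour of where this is heading (a sketch only; the mapped names below are hypothetical and depend on how TestComplete maps your application), mapped objects can be referenced in your tests by a short, stable alias rather than by their full path:

# Full, unmapped path - verbose and brittle (illustrative names only)
Sys.Process("calcplus").Window("SciCalc", "Calculator Plus", 1).Keys("2*2=")

# The same object via the Name Map / Aliases - short, and maintained in one place
Aliases.calcplus.wndCalculatorPlus.Keys("2*2=")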

In module 1 of this Fast Start TestComplete course we looked at the TestComplete development environment. You might not necessarily understand what everything does yet, but you should be familiar with these core components:

Project Workspace – where you develop your automated tests
Object Browser – where you can examine the application you’re testing

And within each of those components you should understand what these sub components are for:

Project Workspace

module2-project-workspace

 

 


- Project Explorer: shows the artifacts created for your automation project
- Workspace: where you create and modify those different artifacts
Object Browser
 module1-object-browser
- Object List: hierarchy of objects (processes, windows & browsers) on your system
- Properties: a list of characteristics relating to a specific object
- Methods: a list of actions a specific object can carry out

 

If you’re still wondering about any of these bits then maybe it’s worth going back and looking at the Getting Started Module. If you’re happy with this then let’s walk through the steps you’ll need to follow to build your first automated test in TestComplete.

We’ll break this down into 3 key stages:

  1. Creating the Project Suite and Project
  2. Recording your first test
  3. Replaying your first test

Before we can create our first test we need to create a project that will contain that test. And before we can create our first project we'll need a Project Suite that will hold the project. Remember, a project suite can contain one or more projects, and a project holds all the artifacts needed for a particular test automation effort. So we'll start from the page you see when you first open TestComplete. We'll walk you through this in the next few sections of this blog post and in the video below:

 

1. Creating the Project Suite and Project

module2-new-project-suite

Create your Project Suite by clicking the 'New Project Suite' button (1) and then entering the name for this suite. Once that's created, click the 'New Project' button (2). It's a bit back to front, but you'll be creating more Projects than Project Suites, so it kind of makes sense to have the 'New Project' button on the left.

When you create the project you’ll be walked through a wizard where you’ll need to enter the following:

  • Project Name – you choose this
  • Type of Application – pick Generic Windows Application
  • Add Application to the project – just click next (we'll do this later)
  • Test Visualizer – just click next (we'll look at this later)
  • Scripting Language – select Python*

* – we won't get into scripting in this course, but we might touch on the odd little bit of code, so we may as well select Python at this stage.

Then click ‘Finish’ and we’ll have our first Project which is contained with our first Project Suite.

module2-new-project

Before we go any further we’ll need an application to test. To make our lives easy we’re going to use an application called Calc Plus from Microsoft.

Search the web for ‘Download Calc Plus’

Microsoft don’t actually provide this as a download’able application anymore but you will find it on sites like ‘download.cnet.com’.

We’re using Calc Plus because it exposes a lot of it’s object information to TestComplete. We’re not using the default calcualtor supplied with Windows because this one doesn’t expose everything we need. Calc Plus just makes our life easier as we start out learning TestComplete.

 

 

A Little About the Application Under Test – Calculator Plus

Now you have Calculator Plus, and before we start recording our first test, let's just check out the 'Tested Apps' feature. There will be a lot of processes running on your PC or server. Adding your Calc Plus application as a 'Tested App' makes it easy to focus on the application we're testing. If you follow the next few steps you'll see how…

1. start Calc Plus
2. click on the Object Browser tab

module2-obj-browser-tab

 

3. locate calculator plus in the objects list
module2-calc-plus-obj

4. right click and select ‘Add Process to Tested Apps’
module2-add-tested-app

 

 

5. click the ‘Yes’ button to confirm the addition

6. If you’re prompted with ‘Do you want to add the Tested Applications’ project item select ‘Yes’ followed by ‘Ok’

7. go back to the ‘Project Workspace’ tab
module2-proj-tab

 

 

8. double click on the ‘Tested Apps’ node

module2-tested-app-basic-settings

At this point you should see 'Calculator Plus' in the list of Tested Apps. What this gives us is the ability to filter, auto-start and focus our test effort on just this application. For now the settings for this tested app should be as follows:

Next we’ll create our first test. Our first test will be a mixture of adding test steps manually and recording parts of the test too.

 

2. Recording Your First Test

To start off then we’ll manually add a few test steps before we record some test steps.

1. Rename the keyword test 'Test1' to 'StartCalc'
module2-startcalc

2. Add a new Keyword test by clicking on 'KeywordTests' and selecting 'Add New Item'
module2-new-keyword-test

3. Call the new Keyword test 'CloseCalc'
module2-new-keyword-test2

4. Double click the ‘StartCalc’ node so that it opens in the workspace and drag the ‘Run TestedApp’ operation into the workspace, selecting ‘Calc Plus’ as you do so
module2-run-tested-app

Then right click on the ‘StartCalc’ test in project explorer and select run start calc. This’ll just make sure we have Calc Plus running so that we can complete the next step.

5. Click the ‘Record New Test’ button
module2-record-test

You should see the recording toolbar open as TestComplete records your actions. Now record a few actions in Calc Plus (e.g. click the keys 2 * 2). Do NOT close the Calc Plus application at this point.

 

6. Then click the stop button on the recording tool bar
module2-stop-recording

7. Rename the new test ‘DoCalc’
module2-rename-test

8. Double click on the ‘CloseCalc’ test item to open it in the workspace area
module2-rename-test2

9. Click the ‘Append to Test’ button to record and add a new test step to this test
module2-append-to-test

 

 

10. The recording toolbar should open. At this point we just want to click the Calc Plus close window 'X' button
module2-close-calc

Then click stop on the recording tool bar.

At this point we’ve created all the component tests we need to run a full test scenario. All we need to do is link these together and run them. We do this at the project level by adding multiple project test items that call these tests in sequence.

 

1. Double click the ‘Project’ node in the Project Explorer
module2-open-project-item

2. At this point you should have a blank 'Test Items' panel open for the project. We need to drag our tests in here
module2-add-testitems

3. Drag all three items in, so that we have them in this order…
module2-project-testitems

 

 

3. Replaying Your First Test

 

At this point we have a project with three test items that is ready to run. We can click the 'Run Project' icon and see everything run in sequence.

module2-run-project

Assuming this runs successfully, you'll see a log file created which shows each test item running in turn successfully.

module2-log-items

And that’s it. Our first automated test. Created using a combination of building keyword tests manually (using drag and drop) and by recording tests. We’ve built the project in a modular fashion with three test cases pulled into the project list for execution. This way we can reuse these tests in other scenarios as we build out our test cases.


If you’re looking to learn TestComplete fast then this is the place to start. We’ve pulled together 14 fast start training modules teaching you all you need to know when you start out with TestComplete. Everything you need to become productive in the shortest time possible. Each module comprises of one short video along with a list of key learning point and concepts.

All of this is designed to get you productive with TestComplete in the shortest time possible. The quicker you become familiar with TestComplete, the quicker you'll be writing and running effective automated tests.

Over the course of these modules we'll cover the following topics:

  • Module 1 – Getting Started and Key Components
  • Module 2 – Creating our first test
  • Module 2 – User Interface
  • Module 3 – Managing Projects
  • Module 4 – Options and Settings
  • Module 5 – Objects and Methods
  • Module 6 – Keyword Testing
  • Module 7 – Projects and Project Suites
  • Module 8 – Checkpoints and stores
  • Module 9 – Test Logging
  • Module 10 – Name Mapping
  • Module 11 – Debugging
  • Module 12 – Data Driven Testing

Each module is designed to take no more than 30 minutes to complete. In fact I've specifically kept every video to about 5 minutes. There's a lot packed into each video though. The key learning points accompanying the video will take no longer than 10 minutes to scan. You might have to watch each video a couple of times, but spend just 30 minutes each day for two weeks and you'll have mastered the basics of TestComplete.

Module 1 – Getting Started and Key Components

In this module we’ll look at the core components in TestComplete and get you familiar with the IDE (Integrated Development Environment). Whilst we’ll look in more detail at the concepts of Project Suites and Projects in the next module we’ll need to get started by creating our first Project Suite and Project. Watch the video and we’ll walk you through this:

Remember that you’ll start out by creating a project suite to hold your projects. Each project suite then contains one or more projects. Each project is a container for all the artifacts you need for a specific chunk of automation.

Once you’ve created your first Project Suite and Project (we’ll walk your through this process in the next module) you’ll see two main tabs; the Project Workspace and the Object Browser.

Project Workspace: is where you develop and work on all of your automated tests. It is split into two main areas:

  1. Project Explorer – where you can navigate all of the artifacts in your test projects
  2. Workspace – where you create and modify the artifacts in your test projects

Each time you double click on an item in the project explorer it opens a new tab in the workspace so that you can edit that item.

Object Browser: is where you inspect and investigate your system and the applications you're testing. The Object Browser is split into two main areas too:

  1. The list (or tree) of objects on your system
  2. The properties/methods view

The list/tree area shows all the objects on your system. Objects are either Processes, Applications or Browsers running on your system. Those objects are arranged in a hierarchy where the top parent object is your system (Sys). All other objects are child objects of the System object. For example, your system (computer) might have a child object called 'Process("calcplus")', which is the CalcPlus application running under your System object. This CalcPlus process will then have its own child objects, which could be 'Windows' that are displayed on your desktop.
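To make that hierarchy a little more concrete, here's a minimal sketch of how you could walk the first level of the tree. This assumes it's run inside TestComplete's script editor, where the Sys and Log objects are provided by the runtime:

def ListTopLevelObjects():
    # every running process is a child of the top-level Sys object
    for i in range(0, Sys.ChildCount):
        child = Sys.Child(i)         # e.g. Process("calcplus")
        Log.Message(child.Name)      # write each child object's name to the test log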

When you select an Object in the left hand panel you will see the Properties and Methods for that specific object displayed in the right hand panel.

Properties can be considered as characteristics of the object. For example, your 'Sys' object will have a property called 'HostName'. That property would have a value (e.g. the host name of your system).

Methods can be considered as actions that the object can carry out. For example, if you have the CalcPlus application/object running on your system, this object could have a 'Close' method. If this method is run then the application would be closed on your system.

 

If you’re still struggling with the concept of objects, properties and methods read the following analogy:

Objects and child objects: You can think of yourself and your body as an object. As an object you have lots of child objects. You have a head, you have arms, you have legs, etc. These child objects have their own child objects. For example an arm has child objects like shoulder joint, elbow joint, wrist joint, forearm, top arm and hand.

Properties: Each object will have a number of properties. You have a height property. That property could have a value (for example 1.6 meters). Your body will have a list of properties, and each of its child objects will have its own list of properties too. Take your 'arm' object. We've seen that the arm has a list of child objects. The arm itself could then have a property called 'Number of child objects'. The value of this property is 6 (the six child objects for the arm being the shoulder joint, elbow joint, wrist joint, forearm, top arm and hand). Other properties for the arm could be things like colour, texture, etc. All of these properties could have values.

Methods: These are the actions that the object can carry out. Your overall body object might have methods, or actions, like Sleep, Run, Walk, etc. Each child object may have its own set of methods too. So your arm object may have methods like bend, twist, raise, lower, etc.

These principles apply in exactly the same way to everything on your computer or laptop. The top-level object can be considered your computer system. This system has child objects, which might be processes (like the notepad process running on your system). The notepad process then has child objects, which can be windows that are displayed on your desktop. If we take the notepad window, this window will have properties like height, width, colour, title, etc. This window will have methods too. These methods are likely to include actions like 'minimise' and 'maximise'.
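If it helps to see that analogy written down as code, here's a small, self-contained Python sketch (purely illustrative, not TestComplete code) of an object with child objects, properties and methods:

class Arm:
    def __init__(self):
        # child objects of the arm
        self.children = ["shoulder joint", "elbow joint", "wrist joint",
                         "forearm", "top arm", "hand"]
        self.colour = "pink"         # a property with a value

    def bend(self):                  # a method: an action the object can carry out
        print("bending the arm")

class Body:
    def __init__(self):
        self.height = 1.6            # a property with a value (height in meters)
        self.arm = Arm()             # a child object

    def walk(self):                  # a method on the parent object
        print("walking")

body = Body()
print(body.height)                   # read a property
print(len(body.arm.children))        # the arm's 'number of child objects' -> 6
body.arm.bend()                      # call a method on a child object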

 
Project Suite and Project basics: When you start an automation project in TestComplete everything will be contained in a project suite. A project suite is just a container for one or more ‘Projects’. A project is a collection of items that you need to create in order to run your automated tests. A project will contain things like keyword tests, connections to databases, files containing test data, and much much more. Everything you need for a particular automation effort is contained in a ‘Project’. And a project is contained within a ‘Project Suite’. Thus you could have one Project Suite that contains a project for your automated system integration tests. And the suite could contain another project specifically for GUI tests.

Tests: in a project you can have two types of tests: either Keyword tests or scripted tests.

  1. Keyword tests are graphical based tests that you build by pulling test items together. A test item might be an ‘on screen’ action like click a button
  2. Scripted tests are code that's written (in a language like Python, for example) to carry out test actions. For example, you might write code like the sketch shown below.
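As a rough sketch only (the names here are illustrative: 'CalcPlus' assumes that's what you called the application when you added it to Tested Apps, and objects such as TestedApps, Sys and Log are the ones TestComplete makes available to script units), a scripted test might look something like this:

def DoCalcTest():
    TestedApps.CalcPlus.Run()                # start the tested application
    calc = Sys.Process("calcplus")           # get the running process object
    Log.Message("Started " + calc.Name)      # write a message to the test log
    calc.Close()                             # call a method to close the application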

We’ll look at scripted tests much later but for now this course focuses on Keyword tests.

NameMapping: The Namemap can be thought of as TestComplete’s list of objects that you want to interact with as part of your automation project. It lists the objects, their position in the object hierarchy and their identification properties. Much, much more to come on this later.

Stores: The Stores entity in your project is the repository that holds any other artifacts that you need to run your automated tests. For example you can add database connections here, you can add images for comparisons during test replays and files that you might want to compare too.

TestedApps: Here you can list the applications that you want TestComplete to focus on testing. Your system will be running lots of applications and processes but you only want to focus your automation efforts on one or a few specific applications. Listing those applications here helps TestComplete focus on what’s important and ignore everything else.

And that’s the basic TestComplete components. Become familiar with these and you’ll find creating your first few automated tests far easier and everything will fall into place far quicker.

In the next module we’ll walk your through creating your first Project and creating those first few automated tests.


Module 6 – Using Source Code Control to Manage Our Test Artefacts

In this final module we’re looking how we can best control all of our test resources and files. We’ve created Selenium, SoapUI and JMeter tests. The files for all of these tests are now scattered all over our distributed test automation environment. Not great for colloaboration, maintaining versions and backups. Down right dangerous really.

What we need to do is pull all of our files together into one central repository. Well, with the tool we're using, Git, it's more a central distributed repository. 'Central distributed repository' sounds like a bit of a contradiction. We'll explain that contradiction as we go through this.

Anyway, we’ll be running up an Amazon Liniux instance and installing the source code management tool, Git. Then on all our client machines and our master Windows machine we’ll install the Git client. This will enable us to store and maintain all our test files across our automation network.

To configure this we’ll need to cover 5 key areas:

  1. Setting up and configuring our Git Source Code Control server
  2. Configuring our Source Code Control clients (both Windows and Linux)
  3. Adding our test files to our Git Source Code Control repository
  4. Updating our Jenkins jobs to use test files from the Git repository
  5. Modifying our test files and updating files in our Git repository

With all of this in place we’ll have the last piece of our jigsaw complete. We’ll have the Git component implemented as shown in our schematic here:

Test Automation Framework

The important concept to grasp here is that we’re managing our ‘test’ source code. We’re not managing our ‘development’ source code. The development source code is managed by our development team. We need to meet the same levels of good development practice that our dev team employ. And that means we’re responsible for managing our test artifacts and source code properly.

What You’ll Learn

The aim is to pull all our test files together and manage them effectively from one location. That means pushing any changes to this central location or “repository” as it’s better known. This means getting our Jenkins jobs to automatically use the test files stored in this repository. And it means learning to collaborate on changes to test files by making those changes easily accessible to everyone in your team.

The concept then is that every test file we create needs to be stored in our Git source code repository. That means, from our SoapUI, JMeter and Selenium development environments, any code we write needs to be 'pushed' and stored on our Git server. Whenever Jenkins comes to run a job it will be responsible for 'pulling' this source from the Git server. This way the Jenkins server will always be picking up the latest test source files that anyone in our test team has checked into the Git repository (assuming our test team are diligent about pushing their changes to the Git repository, that is).

What Jenkins actually does, when it initiates the jobs on the remote machines, is get its Jenkins slaves to pull the latest version of the test files from the Git repository. So whilst we'll configure the jobs on the Jenkins server to use Git, it's actually the Jenkins slaves that are responsible for pulling our test files from the Git server.

All that we’re aiming for though is making sure everyone, including Jenkins, is using the right files from the right location. To goal to ensure that we are developing our tests in a collaborative environment where we’re using the right versions of the test files in our test environment and we have all of our test artifacts and files safely stored and backed up.

The SCM Tool We’ve Chosen

As we’ve already mentioned we’ve chosen Git. Git isn’t an “unpleasant or contemptible person” as the dictionary definition points out. Git is version control system that will store all our test artifacts or files. Git maintains a changes to those files over time so that we can revert to previous versions if we need to. All our changes are tracked so that we can see what changes were made, when and by who. Why’s this important?

Well, take the scenario where your colleague makes a small, innocuous change to a Selenium script. Nothing radical, but when you come to run the latest version of this automation script nothing works anymore. With Git we can see exactly what the change was and revert quickly to the working version.

Why have we chosen Git in particular? Well, it's the de facto open source, source code repository tool. It's one of, if not 'the', most popular source code control tools in use today. It's an open source project that's still actively maintained even though it was started back in 2005. Not only that, but there's a massive amount of material (free books, free videos, etc) on the web to help you learn more once you've finished learning the basics here.

Prerequisites

Make Sure you have your Private key (.pem file)

Remember back in Module 1 we created our public and private key pair? At that stage you should have saved your private key .pem file (e.g. FirstKeyPair.pem). You'll need this private key when configuring Jenkins later.

If you don’t have have this private key you can go back and create a new key pair. Much easier if you can find the one you created in Module 1 though.

If you’ve followed upto Module 4 so far you should already have your Amazon Virtual machine environment up and running along with Jenkins, Selenium and SoapUI. This existing setup gives us the 2 machines we’ll need to use in this module.

  1. Windows Master machine: this is running Jenkins and controls all our other machines (including the installation of the AUT and the execution of our Selenium tests). This machine will be responsible for kicking off our SoapUI API tests
  2. Linux Client machine: this Ubuntu Linux machine is run up on demand by Jenkins and then has the AUT (Rocket Chat) automatically installed on it. This machine provides the web interface for the Rocket Chat application and the API for the Rocket Chat application.

Check the Status of your AWS Machines

Your Windows Master machine should already be running. The Linux machine (running the Rocket Chat application) may or may not be running. The Linux machine is run up automatically by Jenkins, so it's fine if it's not running right at this moment. Whatever the state of the Linux machine, you should see the Windows machine's status in the AWS console as follows:

 

 

Open an RDP Terminal Session on the Windows Master Machine

With the Windows machine running, you'll need to open an RDP session on the Windows Master machine. This is where we'll configure Jenkins.

 

 

Then enter the password (you may need your .pem private key file to decrypt the password if you've forgotten it) to open up the desktop session.

Start the Linux Client Machine

IF the Linux machine isn’t running with the AUT installed then we need to start it. We can get Jenkins to do this for us. Once you have an RDP session open to your Windows Master machine you should have the Jenkins home pages displayed. If not open a browser on this machine and go to this URL:

 > http://localhost:8080/

From here you can start the ‘BuildRocketChatOnNode’ job and start up the AUT.

 

Once RocketChat is up and running we’ll need to know the host name that Amazon has given our new Linux instance. We save this in our ‘publicHostname.txt’ file that is archived as part of our build job. So if you go to this directory using Explorer

C:\Program Files (x86)\Jenkins\jobs\BuildRocketChatOnNode\builds\lastSuccessfulBuild\archive

You should find this publicHostname.txt file…

 

Open this with notepad and make a note of the hostname. We’ll need this while we configure our performance tests.

At this point you should have…

  1. A copy of your private key (.pem file)
  2. An RDP session open to your Windows Master machine
  3. Your Linux Ubuntu machine running with Rocket Chat installed

From here we’ll setup a new Linux/Unix Ubuntu machine that will hold our Git repository.

Part 1: Start a Unix Ubuntu Git SCM Server

First we need a Linux AWS server that will run our Source Code Management (SCM) tool Git. We’ve done this a few times before now so we’ll step through the AWS Linux server configuration quickly.

The other Linux machines we’ve setup in this course are designed to be started automatically by Jenkins on demand. Slightly different with this Linux machine. We need a machine that’s not started by Jenkins, that’s always on, has ??? storage (not emphiperal?) and is protected from being shut down.

  1. In the Amazon AWS interface launch a new instance (click on a Launch Instance button). Select the ‘Free tier only’ option and configure this new AMI with the following parameters:

STEP 1 : Amazon Machine Image
AMI ID: ami-9abea4fc *1

STEP 2 : Instance Type
Instance Type: T2Micro

STEP 3 : Instance Details
Select all the defaults and
Protect against accidental termination: <checked> *2

STEP 4 : Storage
Type: Root
Size: 8GiB
Volume Type: General Purpose SSD
Delete on Termination: <checked>

STEP 5 : Tag Instance
Key: Name
Value: Unix-Git

STEP 6 : Security Groups
Select an existing security group: <checked>
Security group names: Unix-AUT, default

*1 – note that the AMI may be different for you. This depends on which AWS region you’re using. Of course Amazon may just have removed this AMI and added a new one with a different AMI ID. You’ll need to search in your AWS console for something similar to “Ubuntu Server 14.04 LTS (PV),EBS General Purpose (SSD) Volume Type. ”

*2 – on Step 3, ‘Configure Instance’ details you’ll see a parameter that allows you to enable termination protection. Just need to make sure this is checked so that you prevent anyone terminating our server. It’s going to be holding all our test source which is critical to everything.

Once you’ve clicked the ‘Review and Launch’ button you should see a configuration summary page like this:

When you ‘Launch’ this instance you’ll need to configure the SSH security key pairs.

  2. Configure the SSH security key pairs by selecting:

Choose an existing key pair
<select your SSH key pair>

So back in module 1 we created an SSH security key pair. You should have this saved safely somewhere (see the Prerequisites section in this module for more info on your key pairs). You need to select it in this dialogue box:

This is the key pair AWS created for us back in module 1 and that AWS stores and uses. We need to have access to the .pem file that was created as part of the initial setup back in module 1. The important point, as AWS put it, is:

“I acknowledge that I have access to the selected private key file (*.pem), and that without this file, I won’t be able to log into my instance.”

You just need to make sure you can find your copy of your *.pem file. Assuming you have it check the ‘acknowledge’ check box and click the ‘Launch Instance’ button.

  3. Check your new Linux instance is running.

Back on the AWS EC2 dashboard you should see your new instance running. You can search based on the name you gave the instance if you like.

 

 

Once this is running we can check the security groups and make a connection using SSH.

Part 2: Setup the Security Group and SSH Connection

Now we have our Unix-Git Source Code Control (SCC) machine running we need to make sure the AWS security settings will let us connect to it and configure an SSH terminal connection.

  1. First then the AWS Security Group Configuration.

Set up the security group by first checking that this linux machine has the right security group assigned to it. You can do this by selecting the host in the AWS EC2 console.

 

 

Click on the ‘Unix-AUT’ link which will take you through to the security groups page for this specific group. Then click on the ‘Inbound’ tab.

 

At this point we can ‘edit’ the security group and add a new rule:

 

 

This rule we’ll configure with the SSH port from our local laptop/desktop machine. So select these parameters:

Type: SSH
Protocol: TCP
Port Range: 22
Source: My IP

Which should give us something like this

 

 

Once we have this we’ll have access to our Linux machine direct from our laptop/desktop using the AWS Java SSH Client (MindTerm).

  2. Second we need to connect using an SSH terminal.

We’ll make this connection using an in-built SSH client (MindTerm) that is integrated with the AWS management console. To connect using this method, first go back to your list of AWS instances and then right click to select ‘Connect’

 

 

Once you see the connection dialogue box you should select this option:

A Java SSH Client directly from my browser

You will of course need Java installed on your laptop/desktop machine in order to follow through with this. Once you have selected this you just need to find the ssh key you created way back in Module 1. Should be a file you saved with a name like ‘FirstKeyPair.pem’. Everything else can be left as defaults giving you something like this:

 

 

Once you click on the ‘Launch SSH button’ you should see a window like this open up

 

 

Of course this is the first time we've connected to this Linux server from our laptop/desktop machine, so SSH on the Linux machine warns us that we're not a 'Known Host' and asks for confirmation that we want to add our laptop/desktop as a 'known host'. We just need to click 'Yes' for this.

If you run into any error messages or have trouble connecting at this stage just close the terminal window and open it again. Second time round the connection usually works without any problems.

Now we have the Unix-Git machine running and we have a shell SSH connection. Next step is to configure our Git server that will run on this machine. Once configured we’ll be able to store our test cases on this server.

Part 3: Install Git

We have our server up and running with an SSH shell connection open. Just need to install Git now. Pretty straightforward. Just run this command:

sudo apt-get install git

Select all the defaults as you are prompted.

 

 

This should complete cleanly having installed all the required packages:

 

 

And that’s it. Simple.

Just need to configure a Git user account and set the Git server up.

Part 4: Configure the Git User Account

To configure our Git server we’ll need to run through a few steps.

  1. configure a Git user
  2. set up ssh for that user
  3. create and store this users SSH key pair
  4. copy the private key pair somewhere safe
  5. install the private key on the Windows Master machine
  6. install the private key on the Windows Client machine

What we're going to have is a user defined on our new Git server. This user (called 'ae' for automation engineer) will be set up with SSH (Secure Shell) access. All the clients of this Git repository will run with the SSH private key installed, so that they can log in to this Git server without having to authenticate with username and password details. When these client machines have direct access they will be able to check out and check in code (e.g. our Selenium, JMeter and SoapUI scripts) directly to this Git server. In order to do this we need to set up this user and SSH. The following steps take you through this process.

  1. First then, let’s configure a Git user from our SSH terminal that’s running. Enter the following commands which will create a new unix user ‘ae’ (which stands for automation engineer):

sudo adduser ae

Enter a password (one that you can remember) so that you have the following in your SSH terminal

 

 

  2. Then we can configure SSH by entering the following commands:

su ae
cd
mkdir .ssh
chmod 700 .ssh
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys

Once you’ve run through these commands your SSH terminal should look something like this:

 

 

What this set of commands does is create the directory that SSH will need for our Git connections. Then, with the 'chmod' command, it makes sure the permissions on the SSH directory are set so that only the user (ae) has access. SSH is very sensitive about permissions.

Then we create a new file called ‘authorized_keys’ which is where we’ll store our authorized key for this ‘ae’ user. Again we modify the permissions of this file so that only the ‘ae’ user has access to it.

  3. Now we can create an SSH key for this user. The command we use for this is 'ssh-keygen':

ssh-keygen -t dsa

This takes you through a set of questions that are needed to create this SSH key pair.

Enter file: <accept the default>
Enter passphrase: <your passphrase> *1
Enter same passphrase: <your passphrase>

The passphrase you choose is used to protect the private component of the SSH key pair. Remember that an SSH key pair comprises private and public components. The private component you keep safe, never reveal and never send over the internet.

Once you’ve completed this step you should see the following:

Your identification has been saved in /home/ae/.ssh/id_dsa.
Your public key has been saved in /home/ae/.ssh/id_dsa.pub.

What we need now is to copy the private component back to our own laptop/desktop. We’ll need to install this private key on our Jenkins Windows master machine later. This way our Jenkins machine will be able to use this user account with the SSH connection to get all our source code from this Unix-Git machine.

  4. We now need to add the public key (id_dsa.pub) to our authorized keys file.

We can add this key with the following commands

cd ~/.ssh
cat id_dsa.pub > authorized_keys

The cat command reads the id_dsa.pub file, and the '>' redirects that output into the authorized_keys file. With this public key installed, any machine that connects using the matching private key will be allowed to authenticate directly with this machine.

Next then we need to store a copy of our private key so that we can use it later.

  5. Display the private key and copy it somewhere safe.

If we use the Unix ‘cat’ command to read and display our private key we can then copy the text locally. So run this command:

cat /home/ae/.ssh/id_dsa

You should see something like this

-----BEGIN DSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,3A4052FF706445E55E0BE77A36560A28

2wqLvMXmVhctPSXasdfqerueTrEOB00V/b3Lv5aCdXLO6DSD3KCoNItkOhcW0ghzy
skFYV8nhF37ZkZmAj+//x8HKLA0xMerqewreqwrdso1nxELEh4ZWCPhGf9kzP5+PN
XoaLjuBviaQUMH8rhIHbbk+WobMCO74lB9zzq9G7ppkTcsA0AICbALvt3B+C6z9r
oIY7L/nFtLfiIjaXfEW3Q8Wx/1E8hWBA1u+bPNYg30hKOUg0ucvzf5GOHSsZr17q
F8LR4UTRQC3/U97BHtc/LsvtN4bxl/qlQcgFnCgH5HWOtOB9hqfd3hAEQufBTclZ
oqAXIBHBhyRXwkQ2asdfaLWJa5LVBqZp+hptGwTJJWAAQvDckZ6hcGBr3tPQbaPp
JqeFgXmDUg18vzUT3eBfwsvHBpMfIs1buGUWHXoLmRZohDYZuyN29BE5b55GnQm1
525s6jPXKi+Xgr67adsfdOyZBFyJi6i6SSicyaH1STwc1GkTm6tx4EOalttpR0cd
S2so71kldWWGXreJbqsZqqweref0hXErLuEwga+W5G5x9Wd3cpHdknzeDwQa8CTo
w+4APOiCUbsSQ6hoInM6zQ==
-----END DSA PRIVATE KEY-----

Copy this text, paste it into Notepad (or any text editor) on your local laptop/desktop machine. Call the file something like:

aeKeyPair.pem

Save the file and please try not to lose it. We can get it again from this Ubuntu machine if we need to, but it's easier if we save it somewhere safe. JUST DON'T forget the passphrase you used. That's the most important point.

  6. Now we need to install this key on our Jenkins Windows master machine.

Once we’ve installed this key our Jenkins windows master machine will have access to the ‘ae’ user and the Git repository that will store all our automation scripts. Next step then is to open the RDP session to the Windows master machine.

 

 

On the desktop of this master windows machine create a new text file.

 

 

Name the file:

 

aeKeyPair.pem

and open it in notepad…

 

 

Once opened we need to copy in our DSA private key details. Open the aeKeyPair.pem file that’s on your laptop/desktop (or on the Ubuntu server) and copy/paste the content into notepad on the windows master machine.

 

 

Save this file. We need to convert it into a format that Putty (our SSH client tool running on our Windows master machine) understands. So open 'PuttyGen' from the Start menu:
 


 

We need to import our .pem file into PuttyGen now:
 


 

And then select the aeKeyPair.pem file that we’ve just saved to the desktop. At which point you’ll need to enter your Passphrase … hope you can remember it!

 

 

Then click “Save Private Key” and save the new .ppk format file as:

aeKeyPair.ppk

Which should take you through this step:

 

 

Close PuttyGen and find the Putty Agent that should be running in your task tray. Right click on this and select 'Add Key'

 

 

At which point we need to select our new ‘aeKeyPair.ppk’ file:

 

 

Enter your Passphrase and we should be ready to configure our new Linux Ubuntu machine as a new Putty client. So open Putty again and select ‘New Session’ this time round:

 

 

Now we need to configure our new Unix-Git machine as a new Session in Putty. First you’ll need to find the ‘Private IP address’ from your AWS terminal:

 

 

Jot this IP address down and configure the following in Putty:

Connection -> Data -> Auto-login Username: ae

 

 

Host Name: <private IP address>
Saved Sessions: Unix-Git

 

 

You have to be careful here that you click 'Save' and not 'Load'. If you click 'Load' it loads up another session's details, without warning, and you lose your new session details. So save this. Then click on the new 'Unix-Git' entry and select 'Open'

 

 

Accept the security warning by selecting the ‘Yes’ button

 

 

At which point we should be in business. You should have a session open to our Unix-Git machine directly from our Windows Master machine

 

  7. Install the private key on the Windows Client machine

Now we just need to take that aeKeyPair.ppk private key file, copy it to our Windows Client machine and install the key there too.

Make sure you have this file on your Windows Master machine desktop:

aeKeyPair.ppk

Once you’ve located it you’ll need to copy it

 

 

Then open an RDP session to the Windows client machine

 

 

You’ll need to go to your AWS console and get the Public DNS/IP value for this host if you can’t remember it. Once you have the Public DNS/IP value enter it in the RDP dialogue box and connect.

 

 

You shouldn’t need to authenticate with user and password details. It should connect immediately. We set up automatic authentication back in Module 3.

Once connected paste the ‘aeKeyPair.ppk’ file to the desktop of the Windows client machine.

 

 

Download the Putty SSH tools and install on the Windows client machine:

http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

You need to select, download and install using the ‘putty-0.xx-installer.msi’ file:

 

 

Step through the install wizard and select all the defaults. Start Putty Agent from the install directory:

C:\Program Files (x86)\PuTTY

 

 

Right click on Putty Agent in the task tray and select ‘Add Key’

 

 

At which point we need to select our new ‘aeKeyPair.ppk’ file:

 

 

Enter your Passphrase and we should be ready to add our new Linux Ubuntu machine as a new Putty client. So open Putty again and select ‘New Session’ this time round:

 

 

Now we need to configure our new Unix-Git machine as a new Session in Putty. First you’ll need to find the ‘Private IP address’ from your AWS terminal:

 

 

Jot this IP address down and configure the following in Putty:

Connection -> Data -> Auto-login Username: ae

 

 

Host Name: <private IP address>
Saved Sessions: Unix-Git

 

 

You have to be careful here that you click 'Save' and not 'Load'. If you click 'Load' it loads up another session's details, without warning, and you lose your new session details. So save this. Then click on the new 'Unix-Git' entry and select 'Open'

 

 

Accept the security warning by selecting the ‘Yes’ button

 

 

At which point we should be in business and have a direct session open to our Unix-Git machine.

 

 

All of this is essential if we want our machines to be able to automatically check out code from our Git source code repository without having to authenticate every time with a user name and password.

Now we’re ready to setup our Git source code repository and start saving all our test scripts (Selenium, JMeter and SoapUI) safely in this repository.

Part 5: Configure the Git Server

Git is already installed on our Ubuntu Linux server (we set this up earlier). We just need to run a handful of commands to configure Git as we need it. We can do this from either of the SSH shells we have access to: either the Putty SSH shell running on the Windows Master machine, or the SSH shell provided as part of the AWS management console. I'm going to use the SSH shell from Putty on our Windows master machine.

  1. Connect to the Linux Ubuntu machine using Putty

On the windows master machine from the Putty saved sessions select ‘Unix-Git’

This should take you straight into an SSH terminal with no authentication required. And if you type 'whoami' you should see that you're logged in as the 'ae' (automation engineer) account.

  1. Now we need to create a ‘bare’ (empty) Git repository

We’ll run these commands to create an empty directory for our Git repositories.

mkdir ~/git
mkdir ~/git/selenium
cd ~/git/selenium
git init --bare

Which should give you:

We’re creating a directory for our Selenium source code first. Then we’re changing to that Selenium directory and initialising a new git repository with the ‘git init –bare’ command. The ‘bare’ option just means create an empty Git project.

  3. Create repositories for our other projects

Now we know how to create bare repositories we can create them for the other JMeter and SoapUI source code we’re working with. Just run these commands:

mkdir ~/git/jmeter
cd ~/git/jmeter
git init --bare

Which creates our JMeter repository. Just SoapUI left

mkdir ~/git/soapui
cd ~/git/soapui
git init --bare

Now we have one Git source code repository (or Git project) for each of our test tools.

Next we need to make that initial commit of source code for the tools we’re using. We’re going to work through doing this for our Selenium code in the next few sections.

Part 6: Commit Our Selenium Source Code to the Git Server

First then we need a Git client running on our machines where we currently have our source code. For example we have developed our Selenium scripts on our Windows client machine. We’ll need to install a Windows Git client on our Windows master machine and our Windows client machine. That Git client can then commit our Selenium scripts to our Git server. Then all our machines (e.g. our Jenkins slave machines) will have access to this source when they need it.

Let’s install this Git client then. These steps need to be repeated BOTH on your Windows Master machine AND your Windows Client machine.

 

 

  1. In IE download Git

Open IE and go to this Url

https://git-scm.com/download/win

Fight your way through all the IE security warnings if you have to. Then click the ‘64-bit Git for Windows Setup’ link

 

 

If you run into download issues you may need to adjust your IE security settings

 

 

You’ll need to add these domains to the zone:

https://github.com
https://github-cloud.s3.amazonaws.com

That should allow you to download the installer once you click on the IE warning

 

 

  2. Install Git on your Windows Master machine

Then click run:

 

 

Accept all defaults in the install wizard EXCEPT for the ‘Choosing the SSH executable’ option. For this make sure you select ‘Use (Tortoise) Plink’ and enter the path to ‘plink.exe’

 

 

And that should be it. Next you should see the completion Window

 

 

  3. Check the Git GUI starts

From here on the Start menu you should be able to start the Git GUI application

 

 

Which should give you:

 

 

You can click ‘Quit’ for now.

  4. Check the Git command line application

Also check your Git install works from your command line. So open a command window:

 

 

And type this command:

git --version

This should confirm the Git command line tools work as you’ll see the version of Git that’s been installed.

 

 

From here we’re ready to start pushing our Selenium script to our repository.

Part 7: Commit our Selenium Script to our Git Repository

In the previous parts we initialised the Git repositories on the Linux Ubuntu machine and we installed the Git client application on our Windows machines. From here we need to use that Git client on our Windows client machine to “add” our Selenium code to the Git repository on the Linux Ubuntu machine.

Not surprisingly we’ll be using the Git ‘add’ command along with the Git ‘commit’ command. We’ll be doing all of this with the Git command we just ran from the Windows command prompt.

  1. Open an RDP session to the Windows Client machine

Then open an RDP session to the Windows client machine

 

 

You’ll need to go to your AWS console and get the Public DNS/IP value for this host if you can’t remember it. Once you have the Public DNS/IP value enter it in the RDP dialogue box and connect.

 

 

You shouldn’t need to authenticate with user and password details. It should connect immediately. We set up automatic authentication back in Module 3.

  2. Locate our Selenium scripts

Back in Module 3 we wrote and ran our Selenium scripts. We should find our Selenium script on our Windows Master machine on the Desktop

 

 

Not the best place to store them! Which is exactly why we’re setting up a Git source code repository to keep them safe.

  3. Create a new folder to store the Selenium script

In an Explorer window create a new folder ‘projects’

 

 

Followed by a sub directory ‘selenium’

 

 

So you should have this new directory path:

C:\Users\Administrator\Documents\projects\selenium

  4. Copy your Selenium script into this folder:

 

  5. Open a Git command prompt session

Right click in the Explorer window and select the ‘Git Bash Here’ option

 

 

At which point you should see a Git terminal open

 

 

From here we can copy (or check in) our Selenium scripts to our Git source code repository on the Linux Ubuntu server

  6. Commit your Selenium script to Git

First we need to prepare Git so that it knows who and what needs committing to our source code repository. We'll need to run these commands:

$ git init
$ git config --global user.name "Automation Engineer"
$ git config --global user.email "ae@ae.com"
$ git add .
$ git commit -m 'initial commit'

Running these commands should give you something like this:

 

 

The init command sets up Git in this directory (if you run the command 'ls -la' you'll see a new hidden directory that contains all the Git data). Then the config commands set up the Git user on this machine. You can't do anything without setting up the Git user, as this information is tied closely to everything you check in to Git. Then we tell Git that we want to 'add' this directory to our repository. Finally, the commit command commits our files locally, ready for them to be pushed to the Linux Ubuntu server.

To add the files to the Linux Ubuntu server we’ll need to run these final two commands:

$ git remote add gitserver Unix-Git:/home/ae/git/selenium
$ git push gitserver master

Which should give you something like this:

 

 

What we’re doing here is first adding a remote server. Essentially identifying the Linux Ubuntu machine where we want to push and commit our Selenium scripts to. Note that we define the server name as ‘Unix-Git’ which is the ‘Putty’ name we defined for this server earlier.

Once we’ve added a remote called ‘gitserver’ we can use this as part of our final ‘push’ command. The ‘push’ command sends the files to the ‘gitserver’ and adds them to the branch called ‘master’.
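If the push doesn’t behave as expected it’s worth checking the remote is set up correctly. Two quick commands will do that from the same Git Bash prompt:

$ git remote -v
$ git ls-remote gitserver

The first lists the remotes configured for this local repository (you should see ‘gitserver’ pointing at Unix-Git:/home/ae/git/selenium) and the second asks the Linux Ubuntu server to list the branches it’s now holding.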

Now our Selenium script is safe in our Linux Ubuntu source code repository. It’s available for other users and servers in our framework of machines to use. In the next two sections we’ll look at…

1. Modifying the selenium script on another server and checking the changes in
2. Updating our Jenkins job so that it uses the latest Selenium source code from the Git server

At this point though we’re well on the way to having a distributed source code repository that stores all our automation scripts safely. All we need to do is update our Jenkins job so that it uses the Selenium source code from the repository. From there any updates that are made to the scripts, so long as they are pushed to the repository, will be picked up by our Jenkins job.

Part 8: Updating our Jenkins Job to Use the Git Repository

With our Selenium source code safely stored in our Git Repository all we need to do is make sure that our Jenkins job, that executes the Selenium scripts, pulls the latest source from the repository before it starts. To do that we just need to make a few updates to our ‘RunSeleniumTests’ job.


  1. Configure the ‘RunSeleniumTests’ job

Click on the configure menu option for the ‘RunSeleniumTests’ job on the Jenkins home page:


  2. Change the ‘Source Code Management’ option

In the ‘Source Code Management’ section change the option from ‘None’ to ‘Git’

 

  3. Enter details for the Git Repository

We need to point this job at our new Git repository residing on our Linux Ubuntu machine. Update the ‘Repository URL’ field with this value:

Unix-Git:/home/ae/git/selenium

This tells Jenkins to use our already configured (back in part 4) Putty ssh connection. If you remember we configured a Putty client called ‘Unix-Git’. We use this Putty client name as the first part of the repository URL (this is what establishes the link to the Git server over Putty Ssh). Then we define the location of the Selenium Git project on the Git server. This should look like this…

 

 

 

We don’t need any security credentials defined (we’ve already specified Putty Ssh host) and the branch to build should automatically be set to “*/master”.
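As an aside, if Jenkins complains that it can’t reach the repository, you can sanity check the repository URL from a Git Bash prompt on a machine that has the ‘Unix-Git’ Putty session configured (like the Windows client machine we used in the previous part):

$ git ls-remote Unix-Git:/home/ae/git/selenium

If that lists the master branch then the URL you’ve given Jenkins is good and any problem lies elsewhere.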

With this configured Jenkins will pull our SelLoginTest.py script out of the Git repository prior to running the build commands specified in this job. So next we need to tell Jenkins to use this Selenium script that it gets from the Git repository.

  4. Update the ‘Execute Windows batch command’

Once Jenkins, executing the RunSeleniumTests job on the remote Windows machine, pulls the SelLoginTest.py script out of Git it can be executed. The Git checkout that Jenkins runs on the Windows client machine places a copy of the SelLoginTest.py script in this directory on the Windows client machine:

C:\jenkins\workspace\RunSeleniumTests

Where the ‘RunSeleniumTests’ in this path is the name of our Jenkins job. So you’ll see it on the Windows client machine here:

 

 

The only problem is that our Jenkins job, the ‘Execute Windows batch command’ section, points at the ’SelLoginTest.py’ script on the Desktop. Back on our Windows Jenkins master machine you’ll see this configured in the Jenkins job here:

 

 

We need to update this so that it points at the checked out script in the Jenkins workspace. So change this to:

set
%WORKSPACE%\SelLoginTest.py Ie %PUBLIC_HOSTNAME%
%WORKSPACE%\SelLoginTest.py Chrome %PUBLIC_HOSTNAME%
%WORKSPACE%\SelLoginTest.py Firefox %PUBLIC_HOSTNAME%

Notice that we’ve used the Jenkins environment variable %WORKSPACE%. Jenkins knows where the workspace is on the Windows client machine so we may as well let Jenkins work that out each time the job runs. (The bare ‘set’ command at the top simply prints all the current environment variables to the console, which is handy for debugging.) Once updated the command field should look like this:

 

  5. Run and Test the Updated Job

Save the updated Jenkins job and return to the dashboard. You can run this job now and check it works:

 

 

If you view the console output for this job you should see it start off with something like this:

 

 

The part here being these few lines:

Building remotely on Windows-client (i-2c2ac7ea) (SeleniumTestClient) in workspace c:\jenkins\workspace\RunSeleniumTests
Fetching changes from the remote Git repository
git.exe config remote.origin.url Unix-Git:/home/ae/git/selenium # timeout=10
Checking out Revision e47f98e0c04922d02e990337126dc0376f50f029 (refs/remotes/origin/master)
git.exe config core.sparsecheckout # timeout=10

This shows that Jenkins is using Git to fetch any changes to the SelLoginTest.py scripts. In this case there haven’t been any changes so not much happens. In the next part we’ll see what happens when we have made changes.

The other piece of note is the last section in this console output.

 

 

You’ll see here, for example, that Jenkins has expanded the %WORKSPACE% environment variable and replaced it with the full path to the workspace where our SelLoginTest.py file has just been checked out from Git to:

c:\jenkins\workspace\RunSeleniumTests>c:\jenkins\workspace\RunSeleniumTests\SelLoginTest.py Chrome http://ec2-54-200-24-122.us-west-2.compute.amazonaws.com:3000

The final part in this is making sure we can make changes to our SelLoginTest.py scripts from other machines and checking them in to Git. Then we’ll want to see those changes being checked out by our Jenkins job on the Windows client machine. We’ll see this in action in the next section.

Part 9: Modify and Commit our Selenium Scripts from another server

At this stage then, maybe another tester decides our Selenium script needs a little updating (more comments perhaps). On our Windows master machine we can check out the scripts, make our modifications and then push the mods back to the repository. Next time we run our Jenkins ‘RunSeleniumTests’ job we should see those changes in the execution of the Selenium script.

We’ll see this in action as we complete the next few steps where we make changes to the SelLoginTest.py script on the Windows master machine. We’ll then push those changes to our Git repository. When our Jenkins job runs on the Windows client machine we should see those changes incorporated in that test run.

So back on the Windows Master Machine

  1. Create a folder for the Selenium scripts

In Explorer, in the Documents folder, create a new folder called ‘automation’

 

 

We’ll pull our Selenium project and SelLoginTest.py script out of Git into this directory.

  2. Open the Git GUI

From the Start menu select the ‘Git GUI’ application

 

 

We have three options here. ‘Create New Repository’ which we don’t need as we already have our repository created on our Unix-Git machine. ‘Open Existing Repository’ which means start using a repository that already exists on this local machine (we don’t have anything yet so this is no good). And ‘Clone Existing Repository’ which allows us to take a copy of our repository that is residing on our Unix-Git machine. This is the option we’ll select.

 

 

On this next screen we’ll need to enter the location of the repository that’s on our Unix-Git machine and tell the ‘Git GUI’ where it needs to copy that repository to locally. So enter the following:

Source Location: ae@Unix-Git:/home/ae/git/selenium
Target Directory: C:\Users\Administrator\Documents\Automation\Selenium

Then click on the ‘Clone’ button:

 

 

What we’re doing here is using our (already created) Putty ae@Unix-Git ssh connection and the location of our git/selenium project as the source. We’ll clone that project that resides on the Unix-Git machine into a new directory ‘Automation\Selenium’ on this local machine. In Explorer you should now have this…

 

 

And Git GUI should show you this window…

 

 

Let’s ignore this window for a second and quickly update our Selenium script.

  3. Open the SelLoginTest.py script

Open ‘notepad’ and edit the SelLoginTest.py script:

 

 

Add a new comment or something, just so that we’ve made a change to the script:

 

 

Then close notepad and save the update…

 

 

  4. Git GUI Rescan

In Git GUI click the ‘Rescan’ button to check for the changes we’ve just made. You should see the modification listed like this…

 

 

  5. Git Config

Now we can ‘Commit’ the changes to our local repository. Then we can ‘Push’ the changes to our master repository on our Unix-Git machine. Before we can commit the changes we need to set up our identity on this machine (using ‘git config’ like we did on the Windows client machine).

 

 

Then run these two commands at the prompt:

$ git config --global user.name "Automation Engineer"
$ git config --global user.email "ae@ae.com"

Which should give you this:

 

 

That step is just a one off config step. We need to run it otherwise ‘Git GUI’ will complain that it doesn’t know your identity to complete the commit. Once you’ve run it you won’t need to do it again prior to future commits.

Then back in ‘Git GUI’ we should be able to run our commit. First enter some text in the ‘Commit Message’ section to summarise the change you’ve made. Then click the ‘Rescan’, then the ‘Stage Changed’ button followed by the ‘Commit’ button

 

 

Right, all that’s done is commit your changes locally. It hasn’t pushed the changes to the Git Unix repository. We’re ready to push them though.

  6. Git Push

Now we’ll be able to ‘Push’ the changes so that our Jenkins job can pick up these changes.

 

 

On the ‘Push’ dialogue box just select all the defaults and click ‘Push’:

 

 

You should see this confirmation box showing the successful push:

 

 

Now we’re ready to see if Jenkins will pick up these changes in the next run of the ‘RunSeleniumTests’ job.
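Incidentally, if you’d rather work from the command line than the Git GUI, the clone, commit and push cycle we’ve just been through can be done from a Git Bash prompt with something like this (the paths and commit message here are just examples):

$ cd /c/Users/Administrator/Documents/Automation
$ git clone ae@Unix-Git:/home/ae/git/selenium Selenium
$ cd Selenium
# ... edit SelLoginTest.py in your editor of choice ...
$ git add SelLoginTest.py
$ git commit -m "Add comments to SelLoginTest.py"
$ git push origin master

Note that a clone automatically sets up the remote under the name ‘origin’, which is why we push to ‘origin’ here rather than adding a ‘gitserver’ remote like we did in Part 7.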

  7. Build the ‘RunSeleniumTests’ Job and Confirm pull of Updated Source

Back in Jenkins on the Windows master machine let’s run the Selenium test job again. This time we’ll check that the job pulls the latest source out of Git before running the SelLoginTest.py script.

 

 

As this is running we can check the build log:

 


And in here we should see some of these messages at the start of the console output:

git.exe rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
git.exe config remote.origin.url Unix-Git:/home/ae/git/selenium # timeout=10
Fetching upstream changes from Unix-Git:/home/ae/git/selenium
git.exe --version # timeout=10
git.exe -c core.askpass=true fetch --tags --progress Unix-Git:/home/ae/git/selenium +refs/heads/*:refs/remotes/origin/*
git.exe rev-parse "refs/remotes/origin/master^{commit}" # timeout=10
git.exe rev-parse "refs/remotes/origin/origin/master^{commit}" # timeout=10
Checking out Revision 89293a45f2accb9b4191c717c23363901fd247d6 (refs/remotes/origin/master)
git.exe config core.sparsecheckout # timeout=10
git.exe checkout -f 89293a45f2accb9b4191c717c23363901fd247d6
git.exe rev-list e47f98e0c04922d02e990337126dc0376f50f029 # timeout=10

For the observant among you, you’ll notice that the checkout id ('89293a45f2accb9b4191c717c23363901fd247d6') is different. Indicating that we have a different version of our SelLoginTest.py script. If you open the SelLoginTest.py file in the Jenkins ‘workspace’ on this Windows client machine you’ll see our updates:

 

 

And that’s it! We’ve gone full circle making changes to our Selenium scripts on one machine. Then seeing those changes picked up automatically by our Jenkins job, and the changed Selenium script being used on our Windows client machine.

We can now say that we have control of our test source code. Your job now is to complete the same set of steps for the SoapUI and JMeter source files.

Conclusion

When we started this module we had everything running in our automation framework. Jenkins could install the application under test, run the Selenium tests, run the SoapUI tests and execute some performance tests. What we didn’t have was control over the source code that was created for these Selenium, SoapUI and JMeter tests. None of our tests were stored in a central location and none of them were version controlled.

In this module we showed you how to set up a central Git server, commit our test files to this Git source code repository and then configure our Jenkins jobs to use the test files stored on this Git server. Finally we looked at how you can develop on one machine and then commit changes to the Git repository. Of course Jenkins then automatically picks up and runs with those latest changes.

All of this makes it easier to collaborate during the development of your tests. It makes it easier to maintain the different versions of your test files and of course revert to old versions if you break something. This whole setup also gives you a distributed repository that’s effectively a backup of all your test files.

Finally, it puts you in the same league as your development team who will undoubtedly be using a source code control tool to manage the development of the application you’re testing.

Free Test Automation Framework Course
Learn the Practical steps to building an Automation Framework. Six modules, six emails (each email a short course in it's own right) covering Amazon AWS, Jenkins, Selenium, SoapUI, Git and JMeter.

In this module we’re focusing on running our performance and load tests. We’ll create some simple scripts in JMeter and link the execution of these scripts into our build process with Jenkins. Once our Selenium functional tests and our SoapUI REST API tests are complete we’ll kick off these JMeter tests. The setup of this will cover these topics:

1. Install JMeter
2. Configure and Create Performance Tests
3. Add and Configure Jenkins Performance Plugin
4. Deploy and Run all our test jobs

 

To keep things simple we’ll use JMeter to test the performance using the Rocket Chat Rest Api. This will allow us to focus on the key concept of building the automation framework rather than getting too hung up on writing the performance tests. You’ll be pleased to know, at this point in the course, it’s a pretty straightforward process of adding JMeter to the Jenkins configuration.

Yet again we’ll add a new plugin to Jenkins (the ‘Performance Plugin’) which allows us to consume the test result reports from JMeter and display them graphically within Jenkins. We’ll work out how to configure the JMeter tests on our Windows Master machine but we’ll deploy them to a Linux machine for execution. This approach will serve you well for future larger performance tests where you might want to start running distributed performance tests.

What You’ll Learn

The aim is to assess the performance of the build once the application is built and installed. We’re looking to catch issues where performance degradation occurs after changes have been made to the code base. To this end our Jenkins configuration will not only kick off the performance tests but it will also assess the performance statistics from one build to the next. If the performance degrades from one build to the next by say 10% then we’ll get Jenkins to notify us.

 

To achieve this we’ll need to have these pieces in place on our test automation rig:



As we’ve already mentioned this isn’t just going to be about running Performance tests. We’ll need to create the tests on the Windows master machine using the JMeter GUI. Then we’ll distribute the JMeter configuration and tests to a Linux machine for execution.

The Tools We’ve Chosen

We’re using JMeter mainly because of its popularity and its wide use within the industry. Mind you that’s no good if it doesn’t support our technical requirements. We need to create REST Api requests, listen for the REST responses and then store the test results.

Another reason for using JMeter is that there is a Jenkins ‘Performance’ plugin that supports JMeter. This plugin allows us to process the result files created by JMeter and create some neat charts. You can never have too many charts!

It’s also worth mentioning that JMeter runs (under Java) on both Windows and Linux platforms. We can create tests using the GUI on our Windows Master test machine. We can also run these tests from a Linux machine from a command line when we need to. Again this is all about being able to run from the command line – which is the simplest way to approach things when using Jenkins.

Prerequisites

If you’ve followed up to Module 4 so far you should already have your Amazon Virtual machine environment up and running along with Jenkins, Selenium and SoapUI. This existing setup gives us the 2 machines we’ll need to use in this module.

  1. Windows Master machine: this is running Jenkins and controls all our other machines (including the installation of the AUT and the execution of our Selenium tests). This machine will be responsible for kicking off our SoapUI API tests.
  2. Linux Client machine: this Ubuntu Linux machine is run up on demand by Jenkins and then has the AUT (Rocket Chat) automatically installed on it. This machine provides the web interface for the Rocket Chat application and the API for the Rocket Chat application.

Check the Status of your AWS Machines

Your Windows Master machine should already be running. The Linux machine may or may not be running. The Linux machine is run up automatically by Jenkins so it’s fine if it’s not running right at this moment. Whatever the state of the Linux machine you should see the Windows machine’s status in the AWS console as follows:


Open an RDP Terminal Session on the Windows Master Machine

With these windows machines running you’ll need to open an RDP session on the Windows Master machine. This is where we’ll configure Jenkins.



Then enter the password (you may need your .pem private key file to Decrypt the password if you’ve forgotten it) to open up the desktop session.

Start the Linux Client Machine

If the Linux machine isn’t running with the AUT installed then we need to start it. We can get Jenkins to do this for us. Once you have an RDP session open to your Windows Master machine you should have the Jenkins home pages displayed. If not open a browser on this machine and go to this URL:

 > http://localhost:8080/

From here you can start the ‘BuildRocketChatOnNode’ job and start up the AUT.

 



Once RocketChat is up and running we’ll need to know the host name that Amazon has given our new Linux instance. We save this in our ‘publicHostname.txt’ file that is archived as part of our build job. So if you go to this directory using Explorer

C:\Program Files (x86)\Jenkins\jobs\BuildRocketChatOnNode\builds\lastSuccessfulBuild\archive

You should find this publicHostname.txt file…



Open this with notepad and make a note of the hostname. We’ll need this while we configure our performance tests.

At this point you should have…

  1. An RDP session open to your Windows Master machine
  2. Your Linux Ubuntu machine running with Rocket Chat installed

From here we’ll setup our Windows Master machine and install/configure JMeter on the Master machine.

Part 1: Install JMeter

We’re going to use our existing Windows Master machine for this. We’ll install JMeter on this machine so that we can develop our performance tests on this Master test machine. We just need to open the RDP session, download JMeter and install it.

On the Windows Master machine follow these steps:

  1. Open a browser
  2. Enter the following address:

    http://jmeter.apache.org/download_jmeter.cgi

  3. Download the Apache JMeter ‘Zip’ package

  4. Open the download folder and extract the JMeter folder from the zip file…

  5. Open the folder that contains JMeter.

Before you can run JMeter we need to configure the PATH environment variable so that it contains the path to our Java install.

  6. Configure the environment variable by right clicking on ‘Computer’. Then selecting ‘Properties’ followed by ‘Advanced system settings’

  7. In the ‘advanced system settings’ click on ‘Environment Variables’ and find the ‘Path’ environment variable to edit:

  8. Add this text at the end of the ‘Path’ environment variable:

    ;C:\Program Files (x86)\Jenkins\jre\bin\

Note the semi-colon at the start of the text and the ‘\’ at the end. When you’ve finished the full path should be something like this:

> %SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%systemroot%\System32\WindowsPowerShell\v1.0\;%systemroot%\System32\WindowsPowerShell\v1.0\;C:\Program Files\Amazon\cfn-bootstrap\;C:\Program Files (x86)\Jenkins\jre\bin\

Save this. If you want to check this works then open a command prompt and just type ‘java -version’. This should show you the version of Java we have installed.

  9. From here you should be able to run JMeter by clicking on the ‘JMeter Batch file’ in explorer:

You can create a desktop shortcut to this batch file if you want to. This will make your life a little easier later on.

  10. At this point you should be presented with the JMeter GUI.



Now we’re ready to start configuring our performance tests.

Part 2: Initial Configuration of JMeter

Initially we just need to set up JMeter and check that we can send a single request and get a response. We’ll configure a Thread Group (users) against the Rocket Chat /api/version end point. Once we can get a response from this end point we’ll move on to configure a more realistic set of load scenarios.
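Before we build anything in JMeter it’s worth a quick sanity check that this end point responds at all. If you have curl available on the Windows Master machine (it ships with Git for Windows, and a browser pointed at the same URL works just as well) you can run something like this, substituting your own Rocket Chat host name:

$ curl http://<your-rocket-chat-hostname>:3000/api/version

You should get a small JSON response back containing the Rocket Chat version. If you don’t, it’s worth fixing that before going any further with JMeter.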

  1. Right click on ‘Test Plan’ and select ‘Thread Group’ so that we can define the number of users we need to simulate



 

  2. Update these settings for the Thread Group:

    Number of Threads (users): 10
    Loop Count: 20

This way we’ll simulate 10 users, each repeating our test 20 times (200 requests in total).

 



 

  3. Right click on the ‘Thread Group’ node and select ‘Add’ followed by ‘Sampler’. Then select HTTP Request:



 

This will allow us to create a REST API request that uses the Http protocol (similar to the functional tests we’ve already created with SoapUI).

  4. On the ‘HTTP Request’ pane we need to specify the details for the request. Update the following:

    Server Name or IP: <host name for Rocket Chat>*
    Port Number: 3000
    Method: GET
    Path: /api/version

    * this is the host name we obtained from the publicHostname.txt file when we completed the Prerequisite tasks at the start of this module. You should not put the ‘http://’ at the start of this field.

  5. On the Test Plan create a ‘View Results Tree’ listener:



 

You can leave all the defaults set here so that the Result Tree settings look like this



 

  6. Save the Test Plan at this point (click on file and select ‘Save Test Plan as’)



 

Save the test plan as ‘RocketChat.jmx’.

  7. Click on ‘Start’ and check you get valid responses



 

Now that we have a successful connection, with requests and responses, we can configure some more scenarios that will give us a more realistic load.

Part 3: Configuring Our Performance Tests

In SoapUI we created a test case that logged in and then passed the userId and authToken to the subsequent test cases (API calls). We need to create a similar setup in JMeter. The main difference this time round being that we’ll have to simulate multiple users and track multiple userID and authToken sessions. We do this in JMeter by configuring Threads, where a single thread represents a single user.

  1. First we need to create a new test plan in JMeter



 

We can leave all the default settings for the test plan as they are, although you can change the name if you need to.

  2. Next we can add a new thread group. Right click on the test plan and add the ‘Thread Group’…



Again we can leave all the default settings for this thread group (e.g. 1 user and loop count of 1)

  3. Probably a good point to save this project, so click on the save button and save this as ‘RocketChat.jmx’



 

  4. Now we need to start building out the Http requests that we’ll need to send to the Rocket Chat API in order to load test it. The spec for the Rocket Chat API can be found here

https://rocket.chat/docs/master/developer-guides-4-rest-api

First then we’ll set up a default HTTP Header by adding an ‘HTTP Header Manager’ node.



 

And for our Rocket Chat request to work we’ll need to add one Header record as follows:



 

So add the following:

Name: Accept-Encoding
Value: gzip, deflate

  5. Then we need to add an ‘HTTP Request Defaults’ record. This will allow us to define the default URL and end point for all the subsequent Http requests we define.



 

And define the settings for this as follows:



 

Where we have set these two fields:

Server Name: <your AWS Rocket Chat host name>
Port Number: 3000

It’s these values that will be passed on to all subsequent requests as the defaults. Just saves us having to define the same values in all the other requests.

  6. So now we’re ready to actually send our first Http request to the Rocket Chat Api. So we’ll add an ‘HTTP Request’:

 

All we need to configure here is the End point path and method (and provide a more meaningful name)

Name: HTTP Request – Get Version
Path: /api/version
Method: GET

Like this…

 

All this is going to do is get the version of Rocket Chat that we’re running. It does check we’ve configured everything correctly though. We’ll need to add a listener that tracks the requests and responses and then we’ll be able to run this.

  7. Add a ‘View Results Tree’ listener as follows:

 

You don’t need to configure anything in this, but once it’s in place we can check our script so far. Just click on the run button and look at the ‘sampler results’

 

At this point we just need to configure a few more requests and make sure we have the userId and authToken passed to those requests.

Key to the other requests is the login. When we log in we get a userId and AuthToken returned in the response. The userId and AuthToken need to be used in the following requests. So we’ll log in and store the userId and AuthToken as variables for the other requests. For this we need to set up the login request followed by two post-processor ‘Regular Expression Extractors’.

  8. Add another HTTP Request (Thread group -> Add -> Sampler -> Http Request) and configure it as follows:

Name: HTTP Request – login
Path: /api/login
Method: POST
user: admin
password: tester123

Making sure you add these user and password login parameters:


The response when we run this request will include this json content:

{
  "status": "success",
  "data": {
    "authToken": "uzdJZQNrtCYdGcnBL8kfKOxRNe6EmBArmzcTZKTghj0",
    "userId": "6eJ6cZG6azfLj24QX"
  }
}

So in our JMeter configuration we now need to parse this response and capture the authToken and userId values.
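If you’d like to see that raw response outside of JMeter, you can fire the same login at the API with curl (same admin user and password as above, your own host name substituted in):

$ curl -X POST -d "user=admin" -d "password=tester123" http://<your-rocket-chat-hostname>:3000/api/login

The JSON that comes back is what the two extractors we add next will pull the authToken and userId out of.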

  9. Now we’ll add TWO post processors that will use regular expressions to extract the values we need:

 

Adding two of them as follows:

Name: Get userId
Apply to: Main sample only
Field to check: Body
Reference Name: userId *
Regular Expression: "userId": "(.+?)"
Template: $1$

 

And …

Name: Get authToken
Apply to: Main sample only
Field to check: Body
Reference Name: authToken *
Regular Expression: "authToken": "(.+?)"
Template: $1$

 

  * it’s the Reference Name that becomes the variable name storing the value for use in the following Http requests. Applied to the sample response above, these two extractors would store userId=6eJ6cZG6azfLj24QX and authToken=uzdJZQNrtCYdGcnBL8kfKOxRNe6EmBArmzcTZKTghj0.

Both of these values need to be passed in the header for other Http requests that require authentication. So we’ll add a new ‘HTTP Header Manager’ and add these userID and authToken values in there.

  10. Add a 2nd ‘HTTP Header Manager’ config element

 

Where this is configured as follows:

X-User-Id: ${userId}
X-Auth-Token: ${authToken}

The ${} nomenclature is used to insert the variables that were set by the regular expression extractors.
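So at run time, using the sample login response we saw earlier, the headers JMeter sends with each authenticated request would end up looking something like this:

X-User-Id: 6eJ6cZG6azfLj24QX
X-Auth-Token: uzdJZQNrtCYdGcnBL8kfKOxRNe6EmBArmzcTZKTghj0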

Now we have the authentication details in the header we can call the Get Rooms and Logout HTTP requests.

  11. Add the ‘Get Public Rooms’ HTTP request (Thread group -> Add -> Sampler -> Http Request) and configure this as follows:

Name: HTTP Request – Get Rooms
Path: /api/publicRooms
Method: GET

 

  12. Add the ‘logout’ HTTP request (Thread group -> Add -> Sampler -> Http Request) and configure this as follows:

Name: HTTP Request – Logout
Path: /api/logout
Method: GET

 

Now let’s check that end to end

  13. Click on the ‘View Results Tree’ and clear the previous results

 

Then run this thread group (that only simulates 1 user at the moment). If you click on the Get Rooms request and look at the response data you should see something like this:

Which basically proves that we’ve picked up the correct authentication data from the Login request and used it in the headers for subsequent requests successfully.

We’ll add one more Listener to give us a graph of our results and increase the number of users. With that we’re pretty much done.

  14. Add a response time graph

 

And stick with the defaults so that this graph is configured as follows:

  15. Increase the number of simulated users by altering the thread group settings

 

So we’ll change just these two values:

Number of Threads (users): 20
Loop count: 10

With these settings we’ll create 20 simulated users and each will go through our set of requests 10 times. So effectively we’ll run the scenario we’ve created 200 times.

At this point your JMeter Test Plan structure should look something like this…

 

(You can add the ‘Constant Timers’ in to your test plan if you like just to make things a little more realistic.)

  16. Get ready to run the simulated load test: click on the ‘View Results Tree’ and clear the previous results

 

Then we can run the load test by clicking on the run button and viewing the results:

 

And if we click on the ‘Response Time Graph’ we should be able to see what the response times for all of our different requests look like:

 

Nothing too much to worry about in those results but they will form the basis of our trending across multiple builds/test-runs as we plug this into Jenkins. And that’s the next bit. Tying all of this into Jenkins so that it’s run automatically and the results are pulled back for reporting in Jenkins too.

Probably not a bad idea to save your JMeter project at this point too.

Part 4: Configuring our Linux Performance Test Server

We’ve developed our tests using the JMeter GUI on a windows machine. That’s fine and makes our life easier on the development side of things. However, to deliver our distributed and scalable performance capacity we’ll deploy these tests on an AWS Linux instance that’s run up automatically.

Normally we’d have an SVN or Git source code repository that we’d check our tests into (all our Selenium, SoapUI and JMeter tests). During development of the tests we’d check our JMeter tests in to this source code repository from the Windows machine. At execution time we’d have our Linux machine check out the latest version of these tests.

For the purpose of this course though we’re just going to take our JMeter configuration file, the RocketChat.jmx file, and store this on our Windows Master machine. Jenkins on our Windows master machine can then deploy this JMeter config file to our Linux performance test server when we start the test.

First off then let’s configure our Linux machine. It’s a similar Jenkins setup to the Rocket Chat Linux server we set up at the start of this course. We’ll start this Linux machine on demand and shut it down automatically when it’s not needed.

  1. On our Windows master machine add the new AMI in the Jenkins configuration section:

 

Which should take you to this Jenkins URL:

> http://localhost:8080/configure

We already have our Amazon EC2 service configured in Jenkins. And under this configuration we have two servers configured already:

i. RocketChat-Server (AUT running on Linux)
ii. Windows-Client (Selenium windows server)

We’re going to add a third now (very similar to the RocketChat-Server)

iii. Performance-Client (JMeter Linux server)

Click the ‘Add (List of AMIs to be launched as slaves)’ button right at the bottom of the page:

 

  2. Configure this new AMI with the following parameters:

Description: Performance-Client
AMI ID: ami-9abea4fc *1
Instance Type: T2Micro
Security group names: Unix-AUT, default
Remote user: ubuntu
AMI type: unix
Remote ssh port: 22
Labels: PerfServer *2
Usage: Utilize this node as much as possible
Idle termination time: 60 *3
Init script: <see below>
Number of executors: 1

You’ll still have to enter the ‘Init script’ as shown below but everything else can be left blank for this AMI’s settings.

*1 – note that the AMI may be different for you. This depends on which AWS region you’re using. Of course Amazon may just have removed this AMI and added a new one with a different AMI ID. You’ll need to search in your AWS console for something similar to “Ubuntu Server 14.04 LTS (PV), EBS General Purpose (SSD) Volume Type”.

*2 – we’re going to use this Label (PerfServer) to force the performance job to run on this machine

*3 – setting the idle termination time to 60 minutes means that we’ll automatically shut this server down when it’s not in use. We only need this AMI running when we’re running our performance tests.

The init script we’ll want to enter is as follows.


#!/bin/sh
# Update existing packages and install Java
export JAVA="/usr/bin/java"
if [ ! -x "$JAVA" ]; then
    sudo rm -rf /var/lib/apt/lists/*
    sudo apt-get -y update
    sleep 10
    sudo apt-get install -y openjdk-7-jre ca-certificates-java tzdata-java libcups2 libcups2 libjpeg8 icedtea-7-jre-jamvm openjdk-7-jre-headless openjdk-7-jdk git npm jmeter
    sleep 5
fi

# Add Swap Space
SWAP=/mnt/swap1
if [ ! -f $SWAP ]; then
    sudo dd if=/dev/zero of=$SWAP bs=1M count=2K
    sudo chmod 600 $SWAP
    sudo mkswap $SWAP

    # add new swap to config and start using it
    echo "$SWAP none swap defaults 0 0" | sudo tee -a /etc/fstab
    sudo swapon -a
fi


All we’re doing with this is making sure Java is correctly installed (for our Jenkins slave process), setting up our disk swap space and installing JMeter. Note that this init script is the same as our Rocket Chat server script except that we’ve added ‘jmeter’ to the list on the ‘apt-get install’ line.

At the end of this you should have something like this…

 

Lastly we need to configure a Tag. So click on the ‘Advanced’ button:

 

Find the ‘Tags’ section and click the ‘Add’ button:

Now enter the following values:

Name: Name
Value: Perf-client

You should have something like this:

 

Setting this up means that we’ll see the Name field populated automatically:

 

That’s nice but there’s a more important reason behind this. We have a Linux AWS instance that runs the Rocket Chat application. We also have a Linux AWS instance that will run our performance tests. Trouble is both of these machines are configured identically. If one is running in AWS and we try to start the other, Jenkins thinks we already have one machine running that meets our criteria, so Jenkins won’t run the second machine up. By setting this tag Jenkins will run up two separate machines (even though they are identical configs). So it’s very important to set these tag values.

Save this and then we can check it launches okay.

  3. At this point we can run this performance client up and just check that the JMeter install is okay. In Jenkins manage the nodes:

 

Which should take us to this Jenkins URL

http://localhost:8080/computer/

And from here we can provision the Performance client AMI

 

  4. You can check this is running up in your AWS account as follows:

 

And then login using ssh. So get the ‘Private IP’ (it’s IMPORTANT that it’s the private ip) address from your AWS account for this new server and then open the ssh client that’s already running on the windows master machine:

 

Enter the private IP address of the new linux machine and ‘Open’ the ssh terminal, accept the putty security alert and you should have your new linux terminal session. You just need to enter ‘ubuntu’ as the user name and you’re in.

  5. Check that JMeter is installed

Once you have the terminal open type this at the command prompt:

jmeter -v

And you should see the version of JMeter something like this…

 

Which confirms that JMeter is installed. You can close this terminal. We’re ready to start configuring our Jenkins performance job and deploying our performance scripts to this new server.

If you get a message saying that JMeter is NOT installed then go through the next section. This will explain what to do if your Performance client isn’t installed correctly.

Part 5: Parameterise our JMeter Project

We’ve configured our JMeter project and we’re ready to set this up as part of our Jenkins jobs now. Only slight problem is that the hostname of our Rocket Chat server is going to change every time we re-build and install Rocket Chat. That means the hard-coded Server Name we’ve entered in JMeter won’t work. So we need to parameterise this value in our JMeter project.

The aim is to pass the Rocket Chat hostname as a command line argument to JMeter when we kick off the performance test. To do this we need to update our JMeter ‘HTTP Request Defaults’ properties.

  1. In JMeter open the ‘HTTP Request Defaults’ config element

 

  2. Update the Server Name field with this string

${__P(serverName)}

Like this

  3. Save your JMeter project

 

Now when we run our JMeter project from our Linux performance machine we’ll be able to run it with a command line like this:

jmeter -n -t RocketChat.jmx -JserverName=ec2-54-201-180-143.us-west-2.compute.amazonaws.com -l rcPerfResults.jtl

Where the arguments we pass on the command line are:

-n – run in headless/non-gui mode
-t – use the RocketChat.jmx configuration file
-JserverName – run the test against the Rocket Chat server with this hostname
-l – log the results to the rcPerfResults.jtl file
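Incidentally, the __P function also accepts an optional default value. If you want the test plan to keep working when you run it from the GUI without the -J argument, you could use something like this in the Server Name field instead:

${__P(serverName,localhost)}

With that in place JMeter falls back to ‘localhost’ whenever the serverName property isn’t supplied on the command line.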

Only slight problem with this is that the server name is hardcoded. We need to pick up the public or private IP of the host that is running our Rocket Chat application. If you remember, in one of the earlier modules when we configured the AWS machine that is running Rocket Chat we had a script in Jenkins that created a publicHostname.txt file. In this file we had the public host name of the Rocket Chat server. This txt file is archived as part of the ‘BuildRocketChatOnNode’ job and then passed to other jobs that use these values as environment variables. We’ll hijack this file to add the private IP of the Rocket Chat server too. We can then use this value in our jmeter command line.

To configure this we’ll start by modifying the ‘BuildRocketChatOnNode’ job. On Jenkins dashboard click on the ‘Configure’ menu item for this job

 

Scroll to the bottom of the config page and find the “Execute shell” section. At the bottom of this script you’ll find these couple of lines:

This is where we create the publicHostname.txt file. We need to add to this by including this line:

ifconfig eth0 | grep "inet addr" | awk -F: '{print $2}' | awk '{print "PRIVATE_IP=" $1}' >> publicHostname.txt

Once you’ve added this you should have:

 

Click the ‘Save’ button to save this.

This will run the linux ‘ifconfig’ command that spews out loads of network information for the host. We then run this through grep and awk to pick out the ip address and concatenate this with the text ‘PRIVATE_IP’. So it will add this line to the publicHostname.txt file:

PRIVATE_IP=172.31.29.xxx
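To see what that grep and awk pipeline is actually doing, here’s a worked example using a made up address (the ‘inet addr’ line is typical of what ‘ifconfig eth0’ prints on these Ubuntu instances):

# ifconfig eth0 | grep "inet addr" picks out a line like:
#   inet addr:172.31.29.45  Bcast:172.31.31.255  Mask:255.255.240.0
# awk -F: '{print $2}' splits on ':' and keeps "172.31.29.45  Bcast"
# awk '{print "PRIVATE_IP=" $1}' takes the first word and prints:
#   PRIVATE_IP=172.31.29.45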

We can then pass this publicHostname.txt file to our Jenkins Performance job and use the environment variable PRIVATE_IP to get the correct Jmeter arguments in the command line. So something like this:

jmeter -n -t RocketChat.jmx -JserverName=$PRIVATE_IP -l rcPerfResults.jtl

Don’t worry about this for now though, more on how we run this on the command line as we configure the Jenkins job in a moment. Before we do that we’ll need to run our ‘BuildRocketChatOnNode’ job so that we have the correct values added to our publicHostname.txt file.

Once that job has completed, on your Jenkins machine you should be able to find this directory:


Opening this text file should confirm that we have two environment variables recorded in here.
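The contents should be something along these lines (your host name and private IP will of course be different):

PUBLIC_HOSTNAME=ec2-54-200-24-122.us-west-2.compute.amazonaws.com
PRIVATE_IP=172.31.29.45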

All we need to do now is add the Jenkins Performance Plugin and configure the Jenkins performance job.

Part 6: Add and configure the Jenkins Performance Plugin

In Jenkins, in order to capture reports from JMeter we’ll add the ‘Performance Plugin’. Once this is installed and configured we’ll be able to chart the trend of performance results from one build to the next. It’s this trend that’s key to us spotting potential performance issues before it’s too late.

  1. Add the ‘Performance plugin’

 

  2. Once this is installed you should see the plugin listed on the ‘Installed’ tab in the Plugin Manager page on Jenkins (you might need to apply the filter to see this amongst all the other plugins installed)


We have our ‘Performance Linux Client’ configured. We have the Jenkins plugins installed that we need. Now we’re in a position to create the Job to run this performance from Jenkins.

Part 7: Configure the Jenkins Job to Deploy our Performance Tests

As mentioned previously our performance test files ought to be stored in a source code control system (possibly Git or Svn or something similar). That way when we develop our JMeter tests on our Windows machine we’d check them into our source code control system. Then we’d configure our Linux performance host to check out these tests from our source code control system before running them.

We’ll look at doing this in a later stage but for now we’ll just get Jenkins to copy the JMeter configuration files on to the Performance Server when we need them.

First then, we’ll copy the JMeter ‘.jmx’ file to our Jenkins master machine in the ‘userContent’ directory. With the file in this location it will be available for the Jenkins jobs to copy to our Linux Performance Server. Let’s get started on this.

  1. On the Windows master machine find the RocketChat.jmx JMeter configuration file.

The file should be in this directory (or wherever you installed JMeter):

C:\Users\Administrator\Desktop\apache-jmeter-2.13\bin\RC

 


This then needs to be copied to this Jenkins directory:

C:\Program Files (x86)\Jenkins\userContent

 

With this JMeter configuration file in the Jenkins ‘userContent’ directory it will be available to the new job that we’ll create next.

  2. Create the new performance job by clicking on ‘New Item’ on the Jenkins home page

 

Select ‘Freestyle project’ and call your new item ‘RunPerformanceTests’

 

  3. Configure the new job

This new job needs to complete a couple of key tasks. First we need to ensure it runs on our Linux Ubuntu AWS performance instance. Then we need to copy over the JMeter ‘.jmx’ file and then run the JMeter test (from the command line). To achieve this configure the job as follows:

Project name: RunPerformanceTests
Restrict where this project can be run: <checked>
Label Expression: PerfServer
Copy files into the job’s workspace before building: <checked>
Files to copy: RocketChat.jmx
Paths are relative to: $JENKINS_HOME/userContent

These settings should look like this when configured:

Restrict Where this project can be run:

 

Copy files:

 

With the core settings configured we can now add a few build steps:

  4. The ‘Copy Artifacts’ Build Step

First build step is the ‘Copy artifacts from another project’ step. We need to pull the publicHostname.txt file in from another project as that contains the hostname where Rocket Chat is running. We’ll create this as follows:

Add the ‘Copy artifacts from another project’ build step

 

And then set these parameters:

Project Name: BuildRocketChatOnNode
Which Build: Latest successful build
Artifacts to Copy: publicHostname.txt

So this build step should look like this:

 

  5. The ‘Inject Environment Variables’ Build Step

Next build step is the ‘Inject Environment Variables’ build step:

 

And for this one we need to get Jenkins to parse the data in the publicHostname.txt file and create an environment variable that will tell our performance host machine where Rocket Chat is running. Create this with just:

Properties File Path : publicHostname.txt

Which should look like this:

 

  6. The ‘Execute Shell’ Build Step

Now add the execute shell build step that will run the JMeter tests.

And set this up as follows:

Command:
jmeter -v
echo $PUBLIC_HOSTNAME
echo $PRIVATE_IP
jmeter -n -t RocketChat.jmx -JserverName=$PRIVATE_IP -l rcPerfResults.jtl

So the first three lines give us a little bit of debug info if we need it. The fourth line starts and runs our JMeter tests, sending the results to the rcPerfResults.jtl file.

  7. Run and test the Performance Test Execution

Let’s just run this quickly and check that we see the results we expect. Click ‘Build Now’ for the ‘RunPerformanceTests’ job from the Jenkins home page

Then check the log file for the build

 

Which should give you something like this:

 

Now all we need to do is configure a Post Build Action that publishes the performance report.

  8. Configure the Post Build Performance Report

Next then add a post build task, which should give you the option to add a Performance Report as we have the Jenkins Performance Plugin installed.

Click on the ‘Add post-build action’ button and select ‘Publish Performance test result report’

 

In this build job section click the ‘Add New Report’ button and select ‘JMeter’ :

 

In the new ‘Report files’ field that is displayed we just need to add our ‘rcPerfResults.jtl’ file name:

Report files: rcPerfResults.jtl

Which gives us this …

 

Save the whole job and return to the Jenkins home page. We’re ready to run Performance Tests from Jenkins now.

Part 8: Check the Performance Job and Configure in the Whole Build Chain

Everything is configured now. We need to check our ‘RunPerformanceTests’ job in isolation and then link this job into the whole build chain.

  1. From the Jenkins home page click on the ‘build now’ icon for the ‘RunPerformanceTests’ job:

 

As the performance reports are ‘trending’ reports you may want to run this job two or three times. That way when you view the results we’ll have some trend data to view. So click the build job icon again.

Once the job has completed for the 2nd or 3rd time check the build job results by clicking on the job name:

 

From here you’ll see the Performance Trend graphs:

 

And if you click on the ‘Performance Trend’ links you can drill down into more detail (both of these charts can be viewed in a larger scale by clicking on them):

 

Now all we need to do is add this to the chain of other jobs that build and test our RocketChat application.

  2. Update ‘RunPerformanceTests’ to run after ‘RunApiTests’

From the Jenkins home page we’ll need to modify the ‘RunPerformanceTests’ job so that it waits for the ‘RunApiTests’ to finish then starts. Configure the ‘RunPerformanceTests’ job

 

 

Set the parameters for the ‘Build after other projects are built’ option:

Build after other projects are built: <checked>
Projects to watch: RunApiTests
Trigger even if the build is unstable: <set>

This should give us the following:

 

Where we will be waiting for the ‘RunApiTests’ to complete. Even if the ‘RunApiTests’ fail we want to run our performance tests. So we’ve set this to trigger even if the build is unstable.

Save the changes to this job and we’ll check the test jobs all run in turn.

  3. Check all the test jobs run in sequence

Again from the Jenkins main home page we’ll manually trigger the build of the ‘RunClientRDPSession’. This is the job that’s triggered after the Rocket Chat build is complete. We already have Rocket Chat running so we’ll simulate the run from this point in the chain.

 

 

What we should see once we’ve started this is

i. the RDP session open to the client windows test machine
ii. the browsers open in turn as the Selenium tests run
iii. the Api tests run in the background
iv. the RDP session close
v. the performance tests run in the background

Both the Api and performance tests are running in the background so not a lot to see when these run. However, on completion we should have a full set of passed test results all run in sequence:

 

 

At this point you can drill down into the Api tests and check these results. Click on the build number for the ‘RunApiTests’ job and then select ‘History’ or ‘Test Results’:

 

Then do the same to check the performance test run results. Click on the link for the ‘RunPerformanceTests’ job and then select ‘Performance Trend’:

 

And there we have it. A full test run. End to end, functional tests with Selenium, Api functional tests with SoapUI and Api performance tests using JMeter. All run with a single click of a button.

Conclusion

Once you start to get a feel for Jenkins, building up each of the blocks to create an end-to-end test run covering Gui functional, Api functional and performance tests isn’t too difficult. Admittedly we’ve only touched the surface of tools like Selenium, SoapUI and JMeter. With the basics though it’s easy to start building out a lot of these tests and increase the test coverage significantly.

The key from here though is making sure the build and the test run happen on a regular basis. Either configure the build job for Rocket Chat to run every night or trigger it from source code check-in monitoring (a topic for another course). Kick everything off automatically on a regular basis AND add a few test cases each day. Before you know it you’ll have a large regression pack.

What’s more, you’ll soon find it easier to add an automated test case than to write and run a manual test case. And once you hit this you’ve nailed test automation!

Free Test Automation Framework Course
Learn the Practical steps to building an Automation Framework. Six modules, six emails (each email a short course in it's own right) covering Amazon AWS, Jenkins, Selenium, SoapUI, Git and JMeter.