Fast Start TestComplete – Module 4: Managing Projects

November 29th, 2017 by Bill Echlin

In the last module, Module 3, we looked at the TestComplete user interface. In this module we’re going to look in detail at how TestComplete manages projects and project suites. In TestComplete there is a hierarchy of items that you’ll create and work with as you implement your automation project. That hierarchy starts at the top with the ‘Project Suite’.


The Project Suite is just a container for one or more projects. Each project is a distinct group of test items that together make up a particular test automation effort. For example you might have an application to test that has desktop, mobile and web client applications. It may also have a server component that’s accessed via an API.


In this example we’ve used TestComplete’s Project Suite to cover all the automated testing of one particular application that has a number of different components. Within the Project Suite we’ve created separate projects for the desktop, mobile, web and API components of this application. You could even create another project that covers integrated testing across all four of these applications.

The key thing to remember is that each TestComplete project suite can hold 1 or more TestComplete projects. Each project is a container for the test components and artifacts needed to automate the testing of a distinct aspect of your application. Grouping projects into a project suite allows us to reuse artifacts across different projects whilst isolating the test effort for a particular aspect of our testing into neat manageable project containers.



Each project will contain all the elements you need for a specific test effort. By default those elements include:


Project: the project node, when opened in the workspace, allows you to select which tests to run within the project and set project properties (e.g. enabling screen captures during test replays).

Advanced folder: in here you’ll find your scripted (e.g. Python) tests and other more advanced project components, like Events or Low-Level Procedures. This folder is really designed to hide some of the more complex project elements whilst you get started with TestComplete.

Keyword Tests: Your automated tests.

NameMapping: a list of the objects in your application under test that you want to interact with. The Name Map allows you to specify the properties you want to use to identify those objects too. Much more on this later.

Stores: a container for lots of other objects you might need as part of your test automation project. For example objects to connect to databases, containers for storing images and containers to store files you might need to do comparisons against.

Tested Apps: this project item allows you to list the applications you are writing automated tests for. This helps because you can tell TestComplete to focus only on these applications during the development and execution of your automated tests; TestComplete then hides all the other unimportant stuff, keeping you focused on what’s important.



Let’s look at the Project Suite entity in a little more detail then. To configure the Project Suite, double-click the project suite node in the Project Explorer. This gives you a list of Test Items for the suite. When you ‘run’ the suite, it’s what you’ve selected here that gets run. Want to run a series of Projects in turn? Select each of those projects here then run the Project Suite. TestComplete will run each project you’ve selected one after the other.



In the same vein, if you double-click on the Project node to open the project workspace, you’ll see a list of Project Test Items. When you ‘run’ the Project, it’s the selected items in this list that get executed. You can add tests to this list of ‘Test Items’ just by dragging Keyword tests and scripted tests into this workspace.




So we understand the concept of our test elements, all of which are listed in the Project Explorer. And we know that it’s that list of Test Items that gets executed when we run our Project. What if we want to do more than just run Keyword tests and/or scripted tests? Well TestComplete gives you a lot more capabilities that aren’t immediately visible to you. In the Project Explorer you can add other types of elements and test items to your project.








Right click on the project and select ‘Add’ followed by ‘New Item’. From here you’ll get a list of other project items that you can include in your project. You can add other test elements like Manual Tests, Events and Low-Level Procedures. Each element gives you a new capability within your test project.








As well as adding items you can remove and delete them too. If you right-click on an existing item and click ‘Remove’ you can remove or delete a project element.


There is a clear distinction here. When you get prompted to ‘Remove’ or ‘Delete’, these options do very different things. Remove just stops displaying the item in your ‘Project Explorer’ view. The related files on your computer’s file system still exist. Once you’ve removed an item you can add ‘Existing’ items to include it back in your project at a later date.

If you choose ‘Delete’ it does what it says. Not only does it remove the item from the ‘Project Explorer’ view but it deletes the associated files on your computer’s file system. You won’t be able to add these items back into your project ever again. Gone for good they are.
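The difference can be sketched in a few lines of Python. This is purely a conceptual model of the behaviour, not TestComplete’s actual internals:

```python
# Conceptual model of Remove vs Delete (not TestComplete's real implementation).
project_view = {"KeywordTest1", "KeywordTest2"}   # items shown in Project Explorer
files_on_disk = {"KeywordTest1", "KeywordTest2"}  # the backing files

def remove(item):
    """'Remove': hide the item from the project, keep its file on disk."""
    project_view.discard(item)

def delete(item):
    """'Delete': hide the item AND destroy its file. It cannot be re-added."""
    project_view.discard(item)
    files_on_disk.discard(item)

remove("KeywordTest1")
# "KeywordTest1" is gone from the view but its file survives,
# so adding it back as an 'Existing' item would work later.
delete("KeywordTest2")
# "KeywordTest2" is gone from both. Gone for good.
print(sorted(files_on_disk))
```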



This same concept applies to the Project log files. Here though you may well want to delete log files for good. As you run your tests log files build up over time and you’ll want to purge and delete many of them.





Again though, if you want to keep the files and only remove them from the displayed list in your Project Explorer then select ‘Remove’. Then you can add them back in later if you need them. If you don’t need them any more (ever!) then select delete.




Next up, Module 5, where we look at some of the more important options and settings you’ll find in TestComplete. There are millions of settings and options (slight exaggeration) so we’ll only focus on the important ones for now.

Free Test Automation Framework Course
Learn the practical steps to building an Automation Framework. Six modules, six emails (each email a short course in its own right) covering Amazon AWS, Jenkins, Selenium, SoapUI, Git and JMeter.

In module 2 we looked at creating our first keyword automated test project. In that module we touched on a number of key areas of the user interface, like the Project Workspace and the Object Browser. Each of those key areas gives us the ability to develop our tests and examine the applications we’re testing.

In this module we’re looking at some other key aspects of the user interface. Key components that you’ll find yourself using on a regular basis to help you develop, debug and run your automated tests.



In the next few paragraphs we’ll go through some of the most useful components within the user interface.




First up then is the View menu. In here you’ll find the ‘Select Panel’ option. This option allows you to select which panels you want to display in the TestComplete user interface.













One useful panel is the Properties panel that shows you the properties for each of your test items. For example you can pick a test item, like a keyword test, and see the path and file name to the keyword test on your file system. Useful if you want to back up a particular file. Or if you just want to find out where the complete project suite is located on your file system.







Next then the object spy. This allows us to identify objects in our application using a cross-hair and then inspect their properties and methods.



You can either drag the cross-hair over to the object you want to look at. Or you can use the ‘point and fix’ method (put your cursor over the object and press Shift + Ctrl + A). Once you’ve picked out an object you can see the full name for the object, a list of the methods and the properties for the object (more on methods and properties back here).








One key feature here (pretty innocuous but very, very useful) is the ‘Highlight in Object Browser’ button. Once you’ve identified an object you are interested in, click this button and the object will be shown in the Object Browser. This is useful because you’ll get to see the context and position of the object in relation to its parent and child objects. That might not seem important now but it becomes absolutely key to getting a good feel for the construction of the application you’re testing as you progress.





Next up then is the Visualizer. The Visualizer is one of the most useful components in TestComplete when it comes to fixing and modifying your scripts. You can configure the Visualizer to take pictures when you record your tests, in which case you’ll see the images in the Keyword test workspace panel.







And you can see the comparison ‘Expected’ (from the recorded Keyword test case) and ‘Actual’ (from the test run) in the log files after you’ve run a test.










Typically it’s kept on whilst you develop your tests, and also kept on when running your tests for the first few times. You don’t want to keep it on for all your test runs as it’s pretty resource intensive. Once your tests are running smoothly you would switch the Visualizer off and enable just the ‘post image on error’ setting. This way TestComplete only takes a Visualizer image when your tests pick up an error. Visualizer settings are found in the Project Properties tab (more on this later).






Next on our list of user interface components to look at is the Integrated Development Environment itself. There’s lots you can configure here. Just try dragging different components in the GUI and re-arranging them. You can also close panels and open new panels (see above). What usually happens though is that you lose a panel or you just end up with a right mess. At which point you’ll want to return everything to normal and reset. You can do this in the ‘View’ menu.






Up next then is the Keyword test editor. This is where you’ll spend most of your time working. We’ll look at Keyword test development later but for now you just need to know about the main components in this panel. These are…







Test Steps: select this tab and you’ll see the panel where you have a list of all the test steps that make up the Keyword test. We also have the ‘Variables’ and ‘Parameters’ tabs but we’ll talk about those in a later module.

Operations: this panel gives you a list of test actions and other items that you can use in your keyword tests. For example there’s the ‘On Screen Action’ item that you’ll use countless times to complete actions within your application.

Visualizer: images of the application captured as the test steps are completed (we’ve talked about this above).

Menu Bar Buttons: this is a panel with a range of buttons used to create and modify your Keyword tests. For example the ‘Append to Test’ button which allows you to start recording again and add more test steps to your Keyword test.


Whilst we’ve covered all the key components in the TestComplete IDE there is one last, very important, bit to cover. This is a three-step process you’ll use on a regular basis when inspecting, investigating and capturing the objects in your application.

This is VERY IMPORTANT. You may not understand exactly why yet (we’ll come to that as we start building tests) but get into the habit of following these steps. They should be second nature to you before you move on.



Step 1. Open the Object Spy and identify the object you’re interested in. Or at least get close to the object you’re interested in (for example if it’s an HTML table structure you might find it a bit tricky getting the exact object). Once you’ve identified the object click the ‘Highlight Object in Object Tree’ button.











Step 2. At this point you should see the object tree with the object you’re interested in highlighted. Now you really get to see if you’ve picked the right object and you start to get a feel for the context of the object in relation to the rest of the objects in your application.

Now you can look in detail at the object, examine the other related objects and make sure you have the right one. When you have selected the correct object right click and select ‘Map Object’.





Now if you’re working with a particularly difficult application (e.g. the objects are difficult to identify uniquely) then you can opt to ‘Choose a name and properties manually’ or you can just pick the first option and let TestComplete map the object. If in doubt pick the first option for now.






Step 3. The Object Browser is the list of everything on your system (all processes, windows, browsers, etc). When you map an object, and add it to the Name Map, you’re basically saying to TestComplete: add this to the list of objects that I’m interested in for my automation project. I don’t want a list of everything on my system, just a list of the objects I’m interested in for my project. That list is the Name Map.
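As a rough mental model, the Name Map is a small, hand-picked dictionary carved out of the huge system-wide object list. The sketch below is illustrative only: it is not TestComplete’s actual file format, and the property names are invented for the example.

```python
# Illustrative sketch only: the Name Map as a filtered subset of the full
# object list (property names here are invented, not TestComplete's).

# The Object Browser sees everything running on the system...
all_system_objects = ["explorer", "chrome", "calcplus", "svchost", "winlogon"]

# ...but the Name Map holds only the objects our tests care about,
# each under a friendly alias with its identification properties.
name_map = {
    "calc_process": {"ProcessName": "calcplus"},
    "calc_window":  {"WndCaption": "Calculator Plus"},
}

# Tests refer to the alias rather than hunting through the full list.
target = name_map["calc_process"]["ProcessName"]
print(target in all_system_objects)  # the mapped object exists on the system
```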






And once you click the ‘Show Object in Name Map’ button TestComplete will show you that object in this focused list of objects. This is the list we’ll build that will contain everything we need to know about our application for the purpose of our automation effort.

This might seem convoluted at this point in time. It may seem a little obscure as to why you’d want to repeat these steps. Just go with it for now though. Practice it several times to get a feel for jumping between the Object Spy, Object Browser and Name Map. This interaction and movement between the Object Spy, Object Browser and Name Map is key to getting the most out of your automation with TestComplete. You’ll find out exactly why in the coming modules.



In module 1 of this Fast Start TestComplete course we looked at the TestComplete development environment. You might not necessarily understand what everything does yet but you should be familiar with these core components:

Project Workspace – where you develop your automated tests
Object Browser – where you can examine the application you’re testing

And within each of those components you should understand what these sub components are for:

Project Workspace




- Project Explorer: shows the artifacts created for your automation project
- Workspace: where you create and modify those different artifacts
Object Browser

- Object List: hierarchy of objects (processes, windows & browsers) on your system
- Properties: a list of characteristics relating to a specific object
- Methods: a list of actions a specific object can carry out


If you’re still wondering about any of these bits then maybe it’s worth going back and looking at the Getting Started Module. If you’re happy with this then let’s walk through the steps you’ll need to follow to build your first automated test in TestComplete.

We’ll break this down into 3 key stages:

  1. Creating the Project Suite and Project
  2. Recording your first test
  3. Replaying your first test

Before we can create our first test we need to create a project that will contain that test. Before we can create our first project we’ll need a Project Suite that will hold the project. Remember, a project suite can contain one or more projects. A project holds all the artifacts needed for a particular test automation effort. So, starting from the page you see when you first open TestComplete, we’ll walk you through this in the next few sections of this blog post and the video below:



1. Creating the Project Suite and Project


Create your Project Suite by clicking on the ‘New Project Suite’ button (1) and then enter the name for this suite. Once that’s created click on the ‘New Project’ button (2). It’s a bit back to front but you’ll be creating more Projects than Project Suites so it kind of makes sense to have the ‘New Project’ button on the left.

When you create the project you’ll be walked through a wizard where you’ll need to enter the following:

  • Project Name – you choose this
  • Type of Application – pick Generic Windows Application
  • Add Application to the project – just click next (we’ll do this later)
  • Test Visualizer – just click next (we’ll look at this later)
  • Scripting Language – select Python*

* – we won’t get into scripting in this course but we might touch on the odd little bit of code so we may as well select Python at this stage.

Then click ‘Finish’ and we’ll have our first Project, contained within our first Project Suite.


Before we go any further we’ll need an application to test. To make our lives easy we’re going to use an application called Calc Plus from Microsoft.

Search the web for ‘Download Calc Plus’

Microsoft don’t actually provide this as a downloadable application anymore but you will find it on third-party download sites.

We’re using Calc Plus because it exposes a lot of its object information to TestComplete. We’re not using the default calculator supplied with Windows because that one doesn’t expose everything we need. Calc Plus just makes our life easier as we start out learning TestComplete.



A Little About the Application Under Test – Calculator Plus

Now you have Calculator Plus, and before we start recording our first test, let’s just check out the ‘Tested Apps’ feature. There will be a lot of processes running on your PC or server. Adding your Calc Plus application as a ‘Tested App’ makes it easy to focus on the application we’re testing. If you follow the next few steps you’ll see how…

1. start Calc Plus
2. click on the Object Browser tab



3. locate Calculator Plus in the objects list




4. right click and select ‘Add Process to Tested Apps’



5. click the ‘Yes’ button to confirm the addition

6. If you’re prompted with ‘Do you want to add the Tested Applications project item?’ select ‘Yes’ followed by ‘Ok’

7. go back to the ‘Project Workspace’ tab



8. double click on the ‘Tested Apps’ node

At this point you should see ‘Calculator Plus’ in the list of Tested Apps. What this gives us is the ability to filter, auto start and focus our test effort on just this application. For now the settings for this tested app should be as follows:





Next we’ll create our first test. Our first test will be a mixture of adding test steps manually and recording parts of the test too.


2. Recording Your First Test

To start off then we’ll manually add a few test steps before we record some test steps.

1. Rename the keyword test ‘Test1’ to ‘StartCalc’





2. Add a new Keyword test by clicking on ‘KeywordTests’ and selecting ‘Add New Item’




3. Call the new Keyword test ‘CloseCalc’





4. Double click the ‘StartCalc’ node so that it opens in the workspace and drag the ‘Run TestedApp’ operation into the workspace, selecting ‘Calc Plus’ as you do so






Then right click on the ‘StartCalc’ test in the Project Explorer and select ‘Run StartCalc’. This’ll just make sure we have Calc Plus running so that we can complete the next step.

5. Click the ‘Record New Test’ button




You should see the recording toolbar open as TestComplete records your actions. Record a few actions in Calc Plus (e.g. click the keys 2 * 2). Do NOT close the Calc Plus application at this point.


6. Then click the stop button on the recording toolbar





7. Rename the new test ‘DoCalc’





8. Double click on the ‘CloseCalc’ test item to open it in the workspace area




9. Click the ‘Append to Test’ button to record and add a new test step to this test



10. The recording toolbar should open. And at this point we just want to click the Calc Plus close window ‘X’ button





Then click stop on the recording toolbar.

At this point we’ve created all the component tests we need to run a full test scenario. All we need to do is link these together and run them. We do this at the project level by adding multiple project test items that call these tests in sequence.


1. Double click the ‘Project’ node in the Project Explorer






2. At this point you should have a blank ‘Test Items’ panel open for the project. We need to drag our tests into here






3. Drag all three items in, so that we have them in this order…



3. Replaying Your First Test


At this point we have a project with three test items that is ready to run. We can click the run project icon and see everything run in sequence.






Assuming this runs successfully you’ll see a log file created which shows each test item running in turn successfully.










And that’s it. Our first automated test. Created using a combination of building keyword tests manually (using drag and drop) and by recording tests. We’ve built the project in a modular fashion with three test cases pulled into the project list for execution. This way we can reuse these tests in other scenarios as we build out our test cases.




If you’re looking to learn TestComplete fast then this is the place to start. We’ve pulled together 12 fast start training modules teaching you all you need to know when you start out with TestComplete. Everything you need to become productive in the shortest time possible. Each module comprises one short video along with a list of key learning points and concepts.

All this designed to get you productive with TestComplete in the shortest time possible. The quicker you become familiar with TestComplete the quicker you’ll be writing and running effective automated tests.

Over the course of 12 modules we’ll cover the following topics:

Each module is designed to take no more than 30 minutes to complete. In fact I’ve specifically kept every video to about 10 minutes. There’s a lot packed into each video though. The key learning points accompanying the video will take no longer than 10 minutes to scan. You might have to watch each video a couple of times but, spend just 30 minutes each day for two weeks and you’ll have mastered the basics of TestComplete.

Module 1 – Getting Started and Key Components

In this module we’ll look at the core components in TestComplete and get you familiar with the IDE (Integrated Development Environment). Whilst we’ll look in more detail at the concepts of Project Suites and Projects in the next module we’ll need to get started by creating our first Project Suite and Project. Watch the video and we’ll walk you through this:

Remember that you’ll start out by creating a project suite to hold your projects. Each project suite then contains one or more projects. Each project is a container for all the artifacts you need for a specific chunk of automation.

Once you’ve created your first Project Suite and Project (we’ll walk you through this process in the next module) you’ll see two main tabs: the Project Workspace and the Object Browser.

Project Workspace: is where you develop and work on all of your automated tests. It is split into two main areas:

  1. Project Explorer – where you can navigate all of the artifacts in your test projects
  2. Workspace – where you create and modify the artifacts in your test projects

Each time you double click on an item in the project explorer it opens a new tab in the workspace so that you can edit that item.

Object Browser: is where you inspect and investigate your system and the applications you’re testing. The object browser area is split into two main areas too:

  1. The list (or tree) of objects on your system
  2. The properties/methods view

The list/tree area shows all the objects on your system. Objects are either Processes, Applications or Browsers running on your system. Those objects are arranged in a hierarchy where the top parent object is your system (Sys). All other objects are child objects of the System object. For example your system (computer) might have a child object called ‘Process("calcplus")’ which is the CalcPlus application running under your System object. This CalcPlus process will then have its own child objects, which could be ‘Windows’ that are displayed on your desktop.
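That parent/child arrangement can be pictured as nested data. This is only a sketch of the hierarchy idea; the window names below are made up for the example:

```python
# The object hierarchy sketched as nested data. Everything hangs off Sys.
object_tree = {
    "Sys": {                                      # the top-level System object
        'Process("calcplus")': {                  # an application running on the system
            'Window("Calculator Plus")': {},      # a window on the desktop (name invented)
        },
        'Process("notepad")': {
            'Window("Notepad")': {},
        },
    },
}

# Walking down from the top parent object to the children of one process:
calc_children = object_tree["Sys"]['Process("calcplus")']
print(list(calc_children))  # the child objects of the CalcPlus process
```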

When you select an Object in the left hand panel you will see the Properties and Methods for that specific object displayed in the right hand panel.

Properties can be considered as characteristics of the object. For example your ‘Sys’ object will have a property called ‘Hostname’. That property would have a value (e.g. the host name of your system).

Methods can be considered as actions that the object can carry out. For example if you have the CalcPlus application/object running on your system, this object could have the ‘close’ method. If this method is run then the object would be closed on your system.


If you’re still struggling with the concept of objects, properties and methods read the following analogy:

Objects and child objects: You can think of yourself and your body as an object. As an object you have lots of child objects. You have a head, you have arms, you have legs, etc. These child objects have their own child objects. For example an arm has child objects like shoulder joint, elbow joint, wrist joint, forearm, top arm and hand.

Properties: Each object will have a number of properties. You have a height property. That property could have a value (for example 1.6 meters). Your body will have a list of properties and each of its child objects will have its own list of properties too. Take your ‘arm’ object. We’ve seen that the arm has a list of child objects. The arm itself could then have a property called ‘Number of child objects’. The value of this property is 6 (the six child objects for the arm being the shoulder joint, elbow joint, wrist joint, forearm, top arm and hand). Other properties for the arm could be things like colour, texture, etc. All of these properties could have values.

Methods: These are the actions that the object can carry out. Your overall body object might have methods, or actions, like Sleep, Run, Walk, etc. Each child object may have its own set of methods too. So your arm object may have methods like bend, twist, raise, lower, etc.

These principles apply in exactly the same way to everything on your computer or laptop. The top level object can be considered your computer system. This system has child objects which might be processes (like the notepad process running on your system). The notepad process then has child objects which can be windows that are displayed on your desktop. If we take the notepad window, this window will have properties like height, width, colour, title, etc. This window will have methods too. These methods are likely to include actions like ‘minimise’ and ‘maximise’.
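The same idea expressed as a few lines of Python. This is purely illustrative; the class and property names are made up, not TestComplete’s API:

```python
# A toy model of an object with properties and methods (names invented).
class NotepadWindow:
    def __init__(self):
        # Properties: characteristics of the object, each with a value.
        self.height = 600
        self.width = 800
        self.title = "Untitled - Notepad"
        self.minimised = False

    # Methods: actions the object can carry out.
    def minimise(self):
        self.minimised = True

    def maximise(self):
        self.minimised = False

window = NotepadWindow()
window.minimise()            # run a method: the object carries out an action
print(window.title, window.minimised)  # read properties: inspect the object
```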

Project Suite and Project basics: When you start an automation project in TestComplete everything will be contained in a project suite. A project suite is just a container for one or more ‘Projects’. A project is a collection of items that you need to create in order to run your automated tests. A project will contain things like keyword tests, connections to databases, files containing test data, and much much more. Everything you need for a particular automation effort is contained in a ‘Project’. And a project is contained within a ‘Project Suite’. Thus you could have one Project Suite that contains a project for your automated system integration tests. And the suite could contain another project specifically for GUI tests.

Tests: in a project you can have two types of tests. Either Keyword tests or scripted tests.

  1. Keyword tests are graphical tests that you build by pulling test items together. A test item might be an ‘on screen’ action like clicking a button
  2. Scripted tests are code written (in a language like Python) to carry out test actions

We’ll look at scripted tests much later but for now this course focuses on Keyword tests.
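To give a feel for the shape of a scripted test, here is a short Python sketch. In real TestComplete, objects like the log and the process under test are provided by the runtime; the tiny stand-ins below exist only so the example is self-contained, and the key sequence is invented:

```python
# Stand-ins for objects TestComplete's runtime would normally provide.
class Log:
    messages = []
    @classmethod
    def Message(cls, text):      # in TestComplete, writes a line to the test log
        cls.messages.append(text)

class FakeCalcProcess:
    def Keys(self, keys):        # in TestComplete, sends keystrokes to the app
        Log.Message("sent keys: " + keys)

# The body of a scripted test might read something like this:
def Test1():
    calc = FakeCalcProcess()     # real code would start/attach to the app here
    calc.Keys("2*2=")            # drive the application with keystrokes
    Log.Message("test finished") # record progress in the test log

Test1()
print(Log.messages)
```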

NameMapping: The Name Map can be thought of as TestComplete’s list of objects that you want to interact with as part of your automation project. It lists the objects, their position in the object hierarchy and their identification properties. Much, much more to come on this later.

Stores: The Stores entity in your project is the repository that holds any other artifacts that you need to run your automated tests. For example you can add database connections here, you can add images for comparisons during test replays and files that you might want to compare too.

TestedApps: Here you can list the applications that you want TestComplete to focus on testing. Your system will be running lots of applications and processes but you only want to focus your automation efforts on one or a few specific applications. Listing those applications here helps TestComplete focus on what’s important and ignore everything else.

And that’s the basic TestComplete components. Become familiar with these and you’ll find creating your first few automated tests far easier and everything will fall into place far quicker.

In the next module we’ll walk you through creating your first Project and creating those first few automated tests.


Module 6 – Using Source Code Control to Manage Our Test Artefacts

In this final module we’re looking at how we can best control all of our test resources and files. We’ve created Selenium, SoapUI and JMeter tests. The files for all of these tests are now scattered all over our distributed test automation environment. Not great for collaboration, maintaining versions and backups. Downright dangerous really.

What we need to do is pull all of our files together into one central repository. Well, with the tool we’re using, Git, it’s more a central distributed repository. ‘Central distributed repository’ sounds like a bit of a contradiction. We’ll explain that contradiction as we go through this.

Anyway, we’ll be running up an Amazon Linux instance and installing the source code management tool, Git. Then on all our client machines and our master Windows machine we’ll install the Git client. This will enable us to store and maintain all our test files across our automation network.

To configure this we’ll need to cover 5 key areas:

  1. Setting up and configuring our Git Source Code Control server
  2. Configuring our Source Code Control clients (both Windows and Linux)
  3. Adding our test files to our Git Source Code Control repository
  4. Updating our Jenkins jobs to use test files from the Git repository
  5. Modifying our test files and updating files in our Git repository

With all of this in place we’ll have the last piece of our jigsaw complete. We’ll have the Git component implemented as shown in our schematic here:

Test Automation Framework

The important concept to grasp here is that we’re managing our ‘test’ source code. We’re not managing our ‘development’ source code. The development source code is managed by our development team. We need to meet the same levels of good development practice that our dev team employ. And that means we’re responsible for managing our test artifacts and source code properly.

What You’ll Learn

The aim is to pull all our test files together and manage them effectively from one location. That means pushing any changes to this central location or “repository” as it’s better known. This means getting our Jenkins jobs to automatically use the test files stored in this repository. And it means learning to collaborate on changes to test files by making those changes easily accessible to everyone in your team.

The concept then is that every test file we create needs to be stored in our Git source code repository. That means, from our SoapUI, JMeter and Selenium development environments, any code we write needs to be ‘pushed’ to and stored on our Git server. Whenever Jenkins comes to run a job it will be responsible for ‘pulling’ this source from the Git server. This way the Jenkins server will always be picking out the latest test source files that anyone in our test team has checked into the Git repository (assuming our test team are diligent about pushing their changes to the Git repository, that is).

What Jenkins actually does, when it initiates the jobs on the remote machines, is get its Jenkins slaves to pull the latest version of the test files from the Git repository. So whilst we’ll configure the jobs on the Jenkins server to use Git, it’s actually the Jenkins slaves that are responsible for pulling our test files from the Git server.

All that we’re aiming for though is making sure everyone, including Jenkins, is using the right files from the right location. The goal is to ensure that we’re developing our tests in a collaborative environment, using the right versions of the test files in our test environment, with all of our test artifacts and files safely stored and backed up.

The SCM Tool We’ve Chosen

As we’ve already mentioned, we’ve chosen Git. Git isn’t an “unpleasant or contemptible person” as the dictionary definition points out. Git is a version control system that will store all our test artifacts or files. Git maintains a history of changes to those files over time so that we can revert to previous versions if we need to. All our changes are tracked so that we can see what changes were made, when and by whom. Why’s this important?

Well, take the scenario where your colleague makes a small innocuous change to a Selenium script. Nothing radical, but when you come to run the latest version of this automation script nothing works anymore. With Git we can see exactly what the change was and revert quickly to the working version.

Why have we chosen Git in particular? Well, it’s the de facto open source version control tool. It’s one of, if not ‘the’, most popular source code control tools in use today. It’s an open source project that’s still actively maintained even though it was started back in 2005. Not only that, but there’s a massive amount of material (free books, free videos, etc.) on the web to help you learn more once you’ve finished learning the basics here.


Make Sure you have your Private key (.pem file)

Back in Module 1 we created our public and private key pair. At that stage you should have saved your private key .pem file (e.g. FirstKeyPair.pem). You’ll need this private key when configuring Jenkins later.

If you don’t have this private key you can go back and create a new key pair. It’s much easier if you can find the one you created in Module 1 though.

If you’ve followed along up to Module 4 you should already have your Amazon virtual machine environment up and running, along with Jenkins, Selenium and SoapUI. This existing setup gives us the 2 machines we’ll need in this module.

  1. Windows Master machine: this is running Jenkins and controls all our other machines (including the installation of the AUT and the execution of our Selenium tests). This machine will be responsible for kicking off our SoapUI API tests.
  2. Linux Client machine: this Ubuntu Linux machine is run up on demand by Jenkins and then has the AUT (Rocket Chat) automatically installed on it. This machine provides the web interface for the Rocket Chat application and the API for the Rocket Chat application.

Check the Status of your AWS Machines

Your Windows Master machine should already be running. The Linux machine (running the Rocket Chat application) may or may not be running. The Linux machine is run up automatically by Jenkins so it’s fine if it’s not running right at this moment. Whatever the state of the Linux machine, you should see the Windows machine’s status in the AWS console as follows:



Open an RDP Terminal Session on the Windows Master Machine

With these Windows machines running you’ll need to open an RDP session on the Windows Master machine. This is where we’ll configure Jenkins.



Then enter the password (you may need your .pem private key file to decrypt the password if you’ve forgotten it) to open up the desktop session.

Start the Linux Client Machine

If the Linux machine isn’t running with the AUT installed then we need to start it. We can get Jenkins to do this for us. Once you have an RDP session open to your Windows Master machine you should have the Jenkins home page displayed. If not, open a browser on this machine and go to this URL:

 > http://localhost:8080/

From here you can start the ‘BuildRocketChatOnNode’ job and start up the AUT.


Once RocketChat is up and running we’ll need to know the host name that Amazon has given our new Linux instance. We save this in our ‘publicHostname.txt’ file that is archived as part of our build job. So if you go to this directory using Explorer

C:\Program Files (x86)\Jenkins\jobs\BuildRocketChatOnNode\builds\lastSuccessfulBuild\archive

You should find this publicHostname.txt file…


Open this with notepad and make a note of the hostname. We’ll need this while we configure our performance tests.

At this point you should have…

  1. A copy of your private key (.pem file)
  2. An RDP session open to your Windows Master machine
  3. Your Linux Ubuntu machine running with Rocket Chat installed

From here we’ll setup a new Linux/Unix Ubuntu machine that will hold our Git repository.

Part 1: Start a Unix Ubuntu Git SCM Server

First we need a Linux AWS server that will run our Source Code Management (SCM) tool Git. We’ve done this a few times before now so we’ll step through the AWS Linux server configuration quickly.

The other Linux machines we’ve set up in this course are designed to be started automatically by Jenkins on demand. It’s slightly different with this Linux machine. We need a machine that’s not started by Jenkins, that’s always on, has persistent EBS storage (not ephemeral instance storage) and is protected from being shut down.

  1. In the Amazon AWS interface launch a new instance (click on a Launch Instance button). Select the ‘Free tier only’ option and configure this new AMI with the following parameters:

STEP 1 : Amazon Machine Image
AMI ID: ami-9abea4fc *1

STEP 2 : Instance Type
Instance Type: t2.micro

STEP 3 : Instance Details
Select all the defaults and
Protect against accidental termination: <checked> *2

STEP 4 : Storage
Type: Root
Size: 8GiB
Volume Type: General Purpose SSD
Delete on Termination: <checked>

STEP 5 : Tag Instance
Key: Name
Value: Unix-Git

STEP 6 : Security Groups
Select an existing security group: <checked>
Security group names: Unix-AUT, default

*1 – note that the AMI may be different for you. This depends on which AWS region you’re using. Of course Amazon may just have removed this AMI and added a new one with a different AMI ID. You’ll need to search in your AWS console for something similar to “Ubuntu Server 14.04 LTS (PV), EBS General Purpose (SSD) Volume Type”.

*2 – on Step 3, ‘Configure Instance’ details you’ll see a parameter that allows you to enable termination protection. Just need to make sure this is checked so that you prevent anyone terminating our server. It’s going to be holding all our test source which is critical to everything.
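If you prefer the command line over the console, the same launch can be expressed, roughly, with the AWS CLI. This is only a sketch: the AMI ID, key pair name and security group names are the example values from this module and will differ in your account and region, so treat it as an illustration rather than a copy/paste recipe.

```shell
# Launch the Git server instance (sketch; values are this module's examples).
aws ec2 run-instances \
    --image-id ami-9abea4fc \
    --instance-type t2.micro \
    --key-name FirstKeyPair \
    --security-groups Unix-AUT default \
    --count 1 \
    --disable-api-termination
```

The `--disable-api-termination` flag is the CLI equivalent of the ‘Protect against accidental termination’ checkbox in Step 3.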

Once you’ve clicked the ‘Review and Launch’ button you should see a configuration summary page like this:

When you ‘Launch’ this instance you’ll need to configure the SSH security key pairs.

  2. Configure the SSH security key pairs by selecting:

Choose an existing key pair
<select your SSH key pair>

Back in Module 1 we created an SSH security key pair. You should have this saved safely somewhere (see the Prerequisites section in this module for more info on your key pairs), as you’ll need to select it in this dialogue box:

This is the key pair AWS created for us back in module 1 and that AWS stores and uses. We need to have access to the .pem file that was created as part of the initial setup back in module 1. The important point, as AWS put it, is:

“I acknowledge that I have access to the selected private key file (*.pem), and that without this file, I won’t be able to log into my instance.”

You just need to make sure you can find your copy of your *.pem file. Assuming you have it check the ‘acknowledge’ check box and click the ‘Launch Instance’ button.

  3. Check your new Linux instance is running.

Back on the AWS EC2 dashboard you should see your new instance running. You can search based on the name you gave the instance if you like.



Once this is running we can check the security groups and make a connection using SSH.

Part 2: Setup the Security Group and SSH Connection

Now we have our Unix-Git Source Code Control (SCC) machine running we need to make sure the AWS security settings will let us connect to it and configure an SSH terminal connection.

  1. First then the AWS Security Group Configuration.

Set up the security group by first checking that this linux machine has the right security group assigned to it. You can do this by selecting the host in the AWS EC2 console.



Click on the ‘Unix-AUT’ link which will take you through to the security groups page for this specific group. Then click on the ‘Inbound’ tab.


At this point we can ‘edit’ the security group and add a new rule:



We’ll configure this rule with the SSH port, allowing access from our local laptop/desktop machine. So select these parameters:

Type: SSH
Protocol: TCP
Port Range: 22
Source: My IP

Which should give us something like this



Once we have this we’ll have access to our Linux machine direct from our laptop/desktop using the AWS Java SSH Client (MindTerm).

  2. Second we need to connect using an SSH terminal.

We’ll make this connection using an in-built SSH client (MindTerm) that is integrated with the AWS management console. To connect using this method, first go back to your list of AWS instances and then right click to select ‘Connect’



Once you see the connection dialogue box you should select this option:

A Java SSH Client directly from my browser

You will of course need Java installed on your laptop/desktop machine in order to follow through with this. Once you have selected this option you just need to find the SSH key you created back in Module 1. It should be a file you saved with a name like ‘FirstKeyPair.pem’. Everything else can be left as defaults, giving you something like this:



Once you click on the ‘Launch SSH button’ you should see a window like this open up



Of course this is the first time we’ve connected to this linux server from our laptop/desktop machine. So SSH on the Linux machine warns us that we’re not a ‘Known Host’ and looks for confirmation that we want to add our laptop/desktop as a ‘known host’. We just need to click ‘Yes’ for this.

If you run into any error messages or have trouble connecting at this stage just close the terminal window and open it again. Second time round the connection usually works without any problems.

Now we have the Unix-Git machine running and we have a shell SSH connection. Next step is to configure our Git server that will run on this machine. Once configured we’ll be able to store our test cases on this server.

Part 3: Install Git

We have our server up and running with an SSH shell connection open. Just need to install Git now. Pretty straightforward. Just run this command:

sudo apt-get install git

Select all the defaults as you are prompted.



This should complete cleanly having installed all the required packages:



And that’s it. Simple.

Just need to configure a Git user account and set the Git server up.

Part 4: Configure the Git User Account

To configure our Git server we’ll need to run through a few steps.

  1. configure a Git user
  2. set up ssh for that user
  3. create and store this users SSH key pair
  4. copy the private key pair somewhere safe
  5. install the private key on the Windows Master machine
  6. install the private key on the Windows Client machine

What we’re going to have is a user defined on our new Git server. This user (called ‘ae’ for automation engineer) will be set up with SSH (Secure Shell) access. All the clients of this Git repository will run with the SSH private key so that they can log in to this Git server without having to authenticate with username and password details. When these client machines have direct access they will be able to check out and check in code (e.g. our Selenium, JMeter and SoapUI scripts) directly to this Git server. In order to do this we need to set up this user and SSH. The following steps take you through this process.

  1. First then, let’s configure a Git user from our running SSH terminal. Enter the following command, which will create a new Unix user ‘ae’ (which stands for automation engineer):

sudo adduser ae

Enter a password (one that you can remember) so that you have the following in your SSH terminal



  2. Then we can configure SSH by entering the following commands:

su ae
mkdir .ssh
chmod 700 .ssh
touch .ssh/authorized_keys
chmod 600 .ssh/authorized_keys

Once you’ve run through these commands your SSH terminal should look something like this:



What this set of commands does is create the directory that SSH will need for our Git connections. Then, with the ‘chmod’ command, it makes sure the permissions for the .ssh directory are set so that only the user (ae) has access. SSH is very sensitive about permissions.

Then we create a new file called ‘authorized_keys’ which is where we’ll store our authorized key for this ‘ae’ user. Again we modify the permissions of this file so that only the ‘ae’ user has access to it.
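You can rehearse the same permission dance safely in a throwaway directory. The key line appended at the end is a dummy, not a real public key, and `stat -c '%a'` (GNU coreutils, as found on Ubuntu) prints the octal permissions so you can confirm they match what SSH expects.

```shell
#!/bin/sh
set -e
home=$(mktemp -d)                 # stand-in for /home/ae

mkdir "$home/.ssh"
chmod 700 "$home/.ssh"            # only the owner may enter the directory
touch "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"   # only the owner may read/write

# Appending a (dummy) public key line, as we do for real further below.
echo "ssh-dss AAAAB3...dummy... ae@Unix-Git" >> "$home/.ssh/authorized_keys"

stat -c '%a' "$home/.ssh"                  # → 700
stat -c '%a' "$home/.ssh/authorized_keys"  # → 600
```

If sshd finds the directory or file with looser permissions than this, it will silently refuse key-based logins, which is why we set them explicitly.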

  3. Now we can create an SSH key for this user. The command we use for this is ‘ssh-keygen’:

ssh-keygen -t dsa

This takes you through a set of questions that are needed to create this SSH key pair.

Enter file: <accept the default>
Enter passphrase: <your passphrase> *1
Enter same passphrase: <your passphrase>

The passphrase you choose is used to protect the private component of the SSH key pair. Remember that an SSH key pair comprises private and public components. The private component you keep safe, never reveal and never send over the internet.

Once you’ve completed this step you should see the following:

Your identification has been saved in /home/ae/.ssh/id_dsa.
Your public key has been saved in /home/ae/.ssh/

What we need now is to copy the private component back to our own laptop/desktop. We’ll need to install this private key on our Jenkins Windows master machine later. This way our Jenkins machine will be able to use this user account with the SSH connection to get all our source code from this Unix-Git machine.

  4. We now need to add the public key ( to our authorized_keys file.

We can add this key with the following commands:

cd ~/.ssh
cat >> authorized_keys

This cat command reads the public key file but, instead of printing it to the screen, appends the output to the authorized_keys file. With this public key installed, any connecting machine that has the matching private key installed will be allowed to authenticate directly with this machine.

Next then, we need to store a copy of our private key so that we can use it later.

  5. Display the private key and copy it somewhere safe.

If we use the Unix ‘cat’ command to read and display our private key we can then copy the text locally. So run this command:

cat /home/ae/.ssh/id_dsa

You should see something like this

Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,3A4052FF706445E55E0BE77A36560A28


Copy this text and paste it into Notepad (or any text editor) on your local laptop/desktop machine. Call the file something like ‘aeKeyPair.pem’:


Save the file and please try not to lose it. We can get it again from this Ubuntu machine if we need to, but it’s easier if we save it somewhere safe. JUST DON’T forget the passphrase you used. That’s the most important point.

  6. Now we need to install this key on our Jenkins Windows master machine.

Once we’ve installed this key, our Jenkins Windows master machine will have access to the ‘ae’ user and the Git repository that will store all our automation scripts. The next step then is to open an RDP session to the Windows master machine.



On the desktop of this master windows machine create a new text file.



Name the file:



and open it in notepad…



Once opened we need to copy in our DSA private key details. Open the aeKeyPair.pem file that’s on your laptop/desktop (or on the Ubuntu server) and copy/paste the content into notepad on the windows master machine.



Save this file. We need to convert this into a format that Putty (our SSH client tool running on our Windows master machine), understands. So open “PuttyGen” from the start menu:


We need to import our .pem file into PuttyGen now:


And then select the aeKeyPair.pem file that we’ve just saved to the desktop. At which point you’ll need to enter your Passphrase … hope you can remember it!



Then click “Save Private Key” and save the new .ppk format file as:


Which should take you through this step:



Close PuttyGen and find the Putty Agent that should be running in your task tray. Right click on this and select ‘Add Key’



At which point we need to select our new ‘aeKeyPair.ppk’ file:



Enter your Passphrase and we should be ready to configure our new Linux Ubuntu machine as a new Putty client. So open Putty again and select ‘New Session’ this time round:



Now we need to configure our new Unix-Git machine as a new Session in Putty. First you’ll need to find the ‘Private IP address’ from your AWS terminal:



Jot this IP address down and configure the following in Putty:

Connection -> Data -> Auto-login Username: ae



Host Name: <private IP address>
Saved Sessions: Unix-Git



You have to be careful here that you click Save and not Load. If you click Load it loads up another session’s details, without warning, and you lose your new session details. So save this. Then click on the new ‘Unix-Git’ entry and select ‘Open’



Accept the security warning by selecting the ‘Yes’ button



At which point we should be in business. You should have a session open to our Unix-Git machine, directly from our Windows Master machine


  7. Install the private key on the Windows Client machine.

Now we just need to take that aeKeyPair.ppk private key file, copy it to our Windows Client machine and install the key there too.

Make sure you have this file on your Windows Master machine desktop:


Once you’ve located it you’ll need to copy it



Then open an RDP session to the Windows client machine



You’ll need to go to your AWS console and get the Public DNS/IP value for this host if you can’t remember it. Once you have the Public DNS/IP value enter it in the RDP dialogue box and connect.



You shouldn’t need to authenticate with user and password details. It should connect immediately. We set up automatic authentication back in Module 3.

Once connected paste the ‘aeKeyPair.ppk’ file to the desktop of the Windows client machine.



Download the Putty SSH tools and install on the Windows client machine:

You need to select, download and install using the ‘putty-0.xx-installer.msi’ file:



Step through the install wizard and select all the defaults. Start Putty Agent from the install directory:

C:\Program Files (x86)\PuTTY



Right click on Putty Agent in the task tray and select ‘Add Key’



At which point we need to select our new ‘aeKeyPair.ppk’ file:



Enter your Passphrase and we should be ready to add our new Linux Ubuntu machine as a new Putty client. So open Putty again and select ‘New Session’ this time round:



Now we need to configure our new Unix-Git machine as a new Session in Putty. First you’ll need to find the ‘Private IP address’ from your AWS terminal:



Jot this IP address down and configure the following in Putty:

Connection -> Data -> Auto-login Username: ae



Host Name: <private IP address>
Saved Sessions: Unix-Git



You have to be careful here that you click Save and not Load. If you click Load it loads up another session’s details, without warning, and you lose your new session details. So save this. Then click on the new ‘Unix-Git’ entry and select ‘Open’



Accept the security warning by selecting the ‘Yes’ button



At which point we should be in business and have a direct session open to our Unix-Git machine.



All of this is essential if we want our machines to be able to automatically check out code from our Git source code repository without having to authenticate every time with a username and password.

Now we’re ready to set up our Git source code repository and start saving all our test scripts (Selenium, JMeter and SoapUI) safely in this repository.

Part 5: Configure the Git Server

Git is already installed on our Ubuntu Linux server (we set this up earlier). We just need to run a handful of commands to configure Git as we need it. We can do this from either of the SSH shells we have access to. Either the Putty SSH shell running on the Windows Master machine or the SSH shell provided as part of the AWS management console. I’m going to use the SSH shell from Putty on our Windows master machine.

  1. Connect to the Linux Ubuntu machine using Putty

On the windows master machine from the Putty saved sessions select ‘Unix-Git’

This should take you straight into an SSH terminal with no authentication required. And if you type ‘whoami’ you should see that you’re logged in as the ‘ae’ (automation engineer) account.

  2. Now we need to create a ‘bare’ (empty) Git repository

We’ll run these commands to create an empty directory for our Git repositories.

mkdir ~/git
mkdir ~/git/selenium
cd ~/git/selenium
git init --bare

Which should give you:

We’re creating a directory for our Selenium source code first. Then we’re changing to that Selenium directory and initialising a new Git repository with the ‘git init --bare’ command. The ‘--bare’ option just means create an empty Git project with no working directory.

  3. Create repositories for our other projects

Now we know how to create bare repositories we can create them for the other JMeter and SoapUI source code we’re working with. Just run these commands:

mkdir ~/git/jmeter
cd ~/git/jmeter
git init --bare

Which creates our JMeter repository. Just SoapUI left

mkdir ~/git/soapui
cd ~/git/soapui
git init --bare

Now we have one Git source code repository (or Git project) for each of our test tools.
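Since the three repositories are created identically, the commands above collapse neatly into a loop. This sketch uses a temporary directory rather than /home/ae/git so it’s safe to try anywhere, and ‘git rev-parse --is-bare-repository’ confirms each one really is a bare repository.

```shell
#!/bin/sh
set -e
gitroot=$(mktemp -d)              # stands in for /home/ae/git

for tool in selenium jmeter soapui; do
    mkdir -p "$gitroot/$tool"
    git init --bare --quiet "$gitroot/$tool"
done

git -C "$gitroot/selenium" rev-parse --is-bare-repository   # → true
```

Using `mkdir -p` also avoids the “File exists” error you’d get from repeating a plain `mkdir` on a directory that’s already there.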

Next we need to make that initial commit of source code for the tools we’re using. We’re going to work through doing this for our Selenium code in the next few sections.

Part 6: Commit Our Selenium Source Code to the Git Server

First then we need a Git client running on our machines where we currently have our source code. For example we have developed our Selenium scripts on our Windows client machine. We’ll need to install a Windows Git client on our Windows master machine and our Windows client machine. That Git client can then commit our Selenium scripts to our Git server. Then all our machines (e.g. our Jenkins slave machines) will have access to this source when they need it.

Let’s install this Git client then. These steps need to be repeated BOTH on your Windows Master machine AND your Windows Client machine.



  1. In IE download Git

Open IE and go to this Url

Fight your way through all the IE security warnings if you have to. Then click the ‘64-bit Git for Windows Setup’ link



If you run into download issues you may need to adjust your IE security settings



You’ll need to add these domains to the zone:

That should allow you to download the installer once you click on the IE warning



  2. Install Git on your Windows Master machine

Then click run:



Accept all defaults in the install wizard EXCEPT for the ‘Choosing the SSH executable’ option. For this make sure you select ‘Use (Tortoise) Plink’ and enter the path to ‘plink.exe’



And that should be it. Next you should see the completion Window



  3. Check the Git GUI Starts

From here on the Start menu you should be able to start the Git GUI application



Which should give you:



You can click ‘Quit’ for now.

  4. Check the Git command line application

Also check that your Git install works from your command line. So open a command window:



And type this command:

git --version

This should confirm the Git command line tools work as you’ll see the version of Git that’s been installed.



From here we’re ready to start pushing our Selenium script to our repository.

Part 7: Commit our Selenium Script to our Git Repository

In the previous parts we initialised the Git repositories on the Linux Ubuntu machine and we installed the Git client application on our Windows machines. From here we need to use that Git client on our Windows client machine to “add” our Selenium code to the Git repository on the Linux Ubuntu machine.

Not surprisingly we’ll be using the Git ‘add’ command along with the Git ‘commit’ command. We’ll be doing all of this with the Git command line tool we just verified from the Windows command prompt.

  1. Open an RDP session to the Windows Client machine

Then open an RDP session to the Windows client machine



You’ll need to go to your AWS console and get the Public DNS/IP value for this host if you can’t remember it. Once you have the Public DNS/IP value enter it in the RDP dialogue box and connect.



You shouldn’t need to authenticate with user and password details. It should connect immediately. We set up automatic authentication back in Module 3.

  2. Locate our Selenium scripts

Back in Module 3 we wrote and ran our Selenium scripts. We should find our Selenium script on our Windows Master machine on the Desktop



Not the best place to store them! Which is exactly why we’re setting up a Git source code repository to keep them safe.

  3. Create a new folder to store the Selenium script

In an Explorer window create a new folder ‘projects’



Followed by a sub directory ‘selenium’



So you should have this new directory path:


  4. Copy your Selenium script into this folder:


  5. Open a Git command prompt session

Right click in the Explorer window and select the ‘Git Bash Here’ option



At which point you should see a Git terminal open



From here we can copy (or check in) our Selenium scripts to our Git source code repository on the Linux Ubuntu server

  6. Commit your Selenium Script to Git

First we need to prepare Git so that it knows who we are and what needs committing to our source code repository. We’ll need to run these commands:

$ git init
$ git config --global "Automation Engineer"
$ git config --global "<your email address>"
$ git add .
$ git commit -m ‘initial commit’

This should give you a series of commands like this:



The init command sets up Git in this directory (if you run the command ‘ls -la’ you’ll see a new hidden ‘.git’ directory that contains all the Git data). Then the config commands set up the Git user on this machine. You can’t do anything without setting up the Git user, as this information is tied closely to everything you check in to Git. Then we tell Git that we want to ‘add’ this directory to our repository. Finally the commit command commits our files locally, ready for them to be pushed to the Linux Ubuntu server.

To add the files to the Linux Ubuntu server we’ll need to run these final two commands:

$ git remote add gitserver Unix-Git:/home/ae/git/selenium
$ git push gitserver master

Which should give you something like this:



What we’re doing here is first adding a remote server. Essentially identifying the Linux Ubuntu machine where we want to push and commit our Selenium scripts to. Note that we define the server name as ‘Unix-Git’ which is the ‘Putty’ name we defined for this server earlier.

Once we’ve added a remote called ‘gitserver’ we can use this as part of our final ‘push’ command. The ‘push’ command sends the files to the ‘gitserver’ and adds them to the branch called ‘master’.

Now our Selenium script is safe in our Linux Ubuntu source code repository. It’s available for other users and servers in our framework of machines to use. In the next two sections we’ll look at…

1. Modifying the selenium script on another server and checking the changes in
2. Updating our Jenkins job so that it uses the latest Selenium source code from the Git server

At this point though we’re well on the way to having a distributed source code repository that stores all our automation scripts safely. All we need to do is update our Jenkins job so that it uses the Selenium source code from the repository. From there any updates that are made to the scripts, so long as they are pushed to the repository, will be picked up by our Jenkins job.

Part 8: Updating our Jenkins Job to Use the Git Repository

With our Selenium source code safely stored in our Git Repository all we need to do is make sure that our Jenkins job, that executes the Selenium scripts, pulls the latest source from the repository before it starts. To do that we just need to make a few updates to our ‘RunSeleniumTests’ job.

TODO: put the schematic diagram in here

  1. Configure the ‘RunSeleniumTests’ job

Click on the configure menu option for the ‘RunSeleniumTests’ job on the Jenkins home page:

  2. Change the ‘Source Code Management’ option

In the ‘Source Code Management’ section change the option from ‘None’ to ‘Git’


  3. Enter details for the Git Repository

We need to point this job at our new Git repository residing on our Linux Ubuntu machine. Update the ‘Repository URL’ field with this value:

Unix-Git:/home/ae/git/selenium

This tells Jenkins to use our already configured (back in Part 4) Putty SSH connection. If you remember, we configured a Putty client called ‘Unix-Git’. We use this Putty client name as the first part of the repository URL (this is what establishes the link to the Git server over Putty SSH). Then we define the location of the Selenium Git project on the Git server. This should look like this…




We don’t need any security credentials defined (we’ve already specified Putty Ssh host) and the branch to build should automatically be set to “*/master”.

With this configured Jenkins will pull our script out of the Git repository prior to running the build commands specified in this job. So next we need to tell Jenkins to use this Selenium script that it gets from the Git repository.

  4. Update the ‘Execute Windows batch command’

Once Jenkins, executing the RunSeleniumTests job on the remote Windows machine, pulls the script from Git, the script can be executed. The Git pull executed on the Windows client machine places a copy of the script in this directory on the Windows client machine:


Where the ‘RunSeleniumTests’ in this path is the name of our Jenkins job. So you’ll see it on the Windows client machine here:



The only problem is that our Jenkins job, the ‘Execute Windows batch command’ section, points at the ’’ script on the Desktop. Back on our Windows Jenkins master machine you see this configured in the Jenkins job here:



We need to update this so that it points at the checked out script in the Jenkins work space. So change this to:


Notice that we’ve used the Jenkins environment variable %WORKSPACE%. Jenkins knows where the workspace is on the Windows client machine, so we may as well let Jenkins work that out each time the job runs. Once updated, the command field should look like this:


  5. Run and Test the Updated Job

Save the updated Jenkins job and return to the dashboard. You can run this job now and check it works:



If you view the console output for this job you should see it start off with something like this:



The part here being these few lines:

Building remotely on Windows-client (i-2c2ac7ea) (SeleniumTestClient) in workspace c:\jenkins\workspace\RunSeleniumTests
Fetching changes from the remote Git repository
git.exe config remote.origin.url Unix-Git:/home/ae/git/selenium # timeout=10
Checking out Revision e47f98e0c04922d02e990337126dc0376f50f029 (refs/remotes/origin/master)
git.exe config core.sparsecheckout # timeout=10

This shows that Jenkins is using Git to fetch any changes to the scripts. In this case there haven’t been any changes so not much happens. In the next part we’ll see what happens when we have made changes.

The other piece of note is the last section in this console output.



You’ll see here, for example, that Jenkins has expanded the %WORKSPACE% environment variable and replaced it with the full path to the workspace where our file has just been checked out from Git:

c:\jenkins\workspace\RunSeleniumTests>c:\jenkins\workspace\RunSeleniumTests\ Chrome

The final part in this is making sure we can make changes to our scripts from other machines and check them in to Git. Then we’ll want to see those changes being checked out by our Jenkins job on the Windows client machine. We’ll see this in action in the next section.

Part 9: Modify and Commit our Selenium Scripts from another server

At this stage then, maybe another tester decides our Selenium script needs a little updating (more comments perhaps). On our Windows master machine we can check out the scripts, make our modifications and then push the mods back to the repository. Next time we run our Jenkins ‘RunSeleniumTests’ job we should see those changes in the execution of the Selenium script.

We’ll see this in action as we complete the next few steps where we make changes to the script on the Windows master machine. We’ll then push those changes to our Git repository. When our Jenkins job runs on the Windows client machine we should see those changes incorporated in that test run.

So back on the Windows Master Machine

  1. Create a folder for the Selenium scripts

In Explorer, in the Documents folder, create a new folder called ‘Automation’



We’ll pull our Selenium project and script out of Git into this directory.

  1. Open the Git GUI

From the Start menu select the ‘Git GUI’ application



We have three options here. ‘Create New Repository’ isn’t needed, as we’ve already created our repository on the Unix-Git machine. ‘Open Existing Repository’ means start using a repository that already exists on this local machine (we don’t have one yet, so this is no good either). That leaves ‘Clone Existing Repository’, which allows us to take a copy of the repository residing on our Unix-Git machine. This is the option we’ll select.



On this next screen we’ll need to enter the location of the repository that’s on our Unix-Git machine and tell the ‘Git GUI’ where it needs to copy that repository to locally. So enter the following:

Source Location: ae@Unix-Git:/home/ae/git/selenium
Target Directory: C:\Users\Administrator\Documents\Automation\Selenium

Then click on the ‘Clone’ button:



What we’re doing here is using our (already created) Putty ae@Unix-Git ssh connection and the location of our git/selenium project as the source. We’ll clone that project that resides on the Unix-Git machine into a new directory ‘Automation\Selenium’ on this local machine. In Explorer you should now have this…
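As an aside, the same clone can be driven from the command line. On the Windows master the source would be `ae@Unix-Git:/home/ae/git/selenium` and the target the `Automation\Selenium` folder; to keep this sketch self-contained it clones a local bare repository instead of going over Putty ssh, but the clone mechanics are identical:

```shell
# Command-line equivalent of Git GUI's 'Clone Existing Repository'.
# A local bare repo stands in for the repository on the Unix-Git machine.
demo=$(mktemp -d)
git init -q --bare "$demo/selenium.git"            # stands in for Unix-Git
git clone -q "$demo/selenium.git" "$demo/Selenium" 2>/dev/null
ls -d "$demo/Selenium/.git"                        # a full local copy
```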



And Git GUI should show you this window…



Let’s ignore this window for a second and quickly update our Selenium script.

  1. Open the script

Open ‘notepad’ and edit the script:



Add a new comment or something, just so that we’ve made a change to the script:



Then save the update and close Notepad…



  1. Git GUI Rescan

In Git GUI click the ‘Rescan’ button to check for the changes we’ve just made. You should see the modification listed like this…



  1. Git Config

Now we can ‘Commit’ the changes to our local repository. Then we can ‘Push’ the changes to our master repository on our Unix-Git machine. Before we can commit the changes we need to set up our identity on this machine (using ‘git config’ like we did on the Windows client machine).



Then run these two commands at the prompt:

$ git config --global user.name "Automation Engineer"
$ git config --global user.email "<your-email-address>"

Which should give you this:



That step is just a one-off config step. We need to run it, otherwise ‘Git GUI’ will complain that it doesn’t know your identity to complete the commit. Once you’ve run it you won’t need to do it again prior to future commits.
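If you want to check the identity setup without touching your real global config, this sketch sets the same values with `--local` scope in a throwaway repository (the email address here is a placeholder); in the tutorial you run the `--global` form once per machine:

```shell
# Demonstrate the identity Git needs before it will record a commit.
# --local keeps the demo confined to a throwaway repo.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" config --local user.name  "Automation Engineer"
git -C "$repo" config --local user.email "ae@example.com"   # placeholder
git -C "$repo" config --local user.name   # prints: Automation Engineer
```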

Then back in ‘Git GUI’ we should be able to run our commit. First enter some text in the ‘Commit Message’ section to summarise the change you’ve made. Then click ‘Rescan’, then the ‘Stage Changed’ button, followed by the ‘Commit’ button.



Right, all that’s done is commit your changes locally. It hasn’t pushed the changes to the Git Unix repository. We’re ready to push them though.

  1. Git Push

Now we’ll be able to ‘Push’ the changes so that our Jenkins job can pick up these changes.



On the ‘Push’ dialogue box just select all the defaults and click ‘Push’:



You should see this confirmation box showing the successful push:



Now we’re ready to see if Jenkins will pick up these changes in the next run of the ‘RunSeleniumTests’ job.
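Before moving on, it’s worth noting that the whole Git GUI sequence we just clicked through maps onto a handful of git commands. This self-contained sketch uses a local bare repo in place of the Unix-Git remote, and the file name and commit message are illustrative:

```shell
# Command-line equivalent of the Rescan / Stage Changed / Commit / Push
# sequence in Git GUI. A local bare repo stands in for the Unix-Git remote.
demo=$(mktemp -d)
git init -q --bare "$demo/origin.git"
git clone -q "$demo/origin.git" "$demo/work" 2>/dev/null
cd "$demo/work"
git config user.name "Automation Engineer"
git config user.email "ae@example.com"
echo "# a new comment" > script.py     # the edit we made in Notepad
git add -A                             # 'Stage Changed'
git commit -qm "Added a comment"       # 'Commit'
git push -q origin HEAD                # 'Push' -- upload to the remote
git ls-remote --heads origin           # the remote now has our commit
```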

  1. Build the ‘RunSeleniumTests’ Job and Confirm pull of Updated Source

Back in Jenkins on the Windows master machine, let’s run the Selenium test job again. This time we’ll check the job pulls the latest source out of Git before running the script.



As this is running we can check the build log:


And in here we should see some of these messages at the start of the console output:

git.exe rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
git.exe config remote.origin.url Unix-Git:/home/ae/git/selenium # timeout=10
Fetching upstream changes from Unix-Git:/home/ae/git/selenium
git.exe --version # timeout=10
git.exe -c core.askpass=true fetch --tags --progress Unix-Git:/home/ae/git/selenium +refs/heads/*:refs/remotes/origin/*
git.exe rev-parse "refs/remotes/origin/master^{commit}" # timeout=10
git.exe rev-parse "refs/remotes/origin/origin/master^{commit}" # timeout=10
Checking out Revision 89293a45f2accb9b4191c717c23363901fd247d6 (refs/remotes/origin/master)
git.exe config core.sparsecheckout # timeout=10
git.exe checkout -f 89293a45f2accb9b4191c717c23363901fd247d6
git.exe rev-list e47f98e0c04922d02e990337126dc0376f50f029 # timeout=10

The observant among you will notice that the checkout id (‘89293a45f2accb9b4191c717c23363901fd247d6’) is different, indicating that we have a new version of our script. If you open the file in the Jenkins ‘workspace’ on this Windows client machine you’ll see our updates:
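This is exactly how Git behaves generally: every commit gets a new id, which is why Jenkins reports a different revision after our push. A throwaway repo (file names and messages are illustrative) demonstrates the same thing:

```shell
# Each commit produces a new revision id, just as in the Jenkins console
# output above (e47f98e0... before our push, 89293a45... after it).
repo=$(mktemp -d)
cd "$repo" && git init -q .
git config user.name "AE" && git config user.email "ae@example.com"
echo "one" > script.py && git add script.py && git commit -qm "first version"
rev1=$(git rev-parse HEAD)
echo "# extra comment" >> script.py && git commit -qam "second version"
rev2=$(git rev-parse HEAD)
[ "$rev1" != "$rev2" ] && echo "checkout id changed"
```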



And that’s it! We’ve gone full circle: making changes to our Selenium scripts on one machine, seeing those changes picked up automatically by our Jenkins job, and the changed Selenium script being used on our Windows client machine.

We can now say that we have control of our test source code. Your job now is to complete the same set of steps for the SoapUI and JMeter source files.


When we started this module we had everything running in our automation framework. Jenkins could install the application under test, run the Selenium tests, run the SoapUI tests and execute some performance tests. What we didn’t have was control over the source code that was created for these Selenium, SoapUI and JMeter tests. None of our tests were stored in a central location, and none of them were version controlled.

In this module we showed you how to set up a central Git server, commit our test files to this Git source code repository and then configure our Jenkins jobs to use the test files stored on this Git server. Finally we looked at how you can develop on one machine and then commit changes to the Git repository. Of course, Jenkins then automatically picks up and runs with those latest changes.

All of this makes it easier to collaborate during the development of your tests. It makes it easier to maintain the different versions of your test files and, of course, revert to old versions if you break something. This whole setup also gives you a distributed repository that’s effectively a backup of all your test files.

Finally, it puts you in the same league as your development team who will undoubtedly be using a source code control tool to manage the development of the application you’re testing.

Free Test Automation Framework Course
Learn the Practical steps to building an Automation Framework. Six modules, six emails (each email a short course in its own right) covering Amazon AWS, Jenkins, Selenium, SoapUI, Git and JMeter.