In the first tutorial we looked at why we need SVN and at setting up our own Visual SVN server. When working with your team at work, or collaboratively over the net, you probably won’t need to set up the server. It will probably already be in place and you’ll be given an account that allows you access. We’ve set up our own Visual SVN server instance because it’s the quickest way to get started and to help us learn. Your own SVN server gives you an environment in which to practice and experiment.
Once the server and repository are configured we need to set up our client to access this repository. The client allows us to pull files out of our central repository and push files back into that repository. The act of pulling files from the repository is known as a ‘Check Out’. The act of putting files back into the repository is known as a ‘Commit’. Before we do either of these actions though we’ll start with an ‘Import’. An ‘Import’ is used to put a batch of files (files that aren’t already version controlled by SVN) into the repository for the first time.
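If you also install Subversion’s command-line client tools (an optional part of the Tortoise SVN install, covered below), the same three actions can be sketched from a terminal. This is just an illustrative sketch: it uses a throw-away local `file://` repository rather than a real server, and all the paths and file names are made up.

```shell
# A throw-away local repository; a real project would use the
# http:// URL of your Visual SVN server instead of file://.
BASE=$(mktemp -d)
svnadmin create "$BASE/repo"
REPO_URL="file://$BASE/repo"

# Import: put un-versioned files into the repository for the first time.
mkdir "$BASE/src"
echo "hello" > "$BASE/src/readme.txt"
svn import "$BASE/src" "$REPO_URL" -m "Initial import"   # creates revision 1

# Check Out: pull a version-controlled working copy out of the repository.
svn checkout "$REPO_URL" "$BASE/wc"

# Commit: push a change back into the repository.
echo "a change" >> "$BASE/wc/readme.txt"
svn commit "$BASE/wc" -m "First change"                  # creates revision 2
```

Each import or commit bumps the repository’s revision number, which is how SVN keeps every historical version available.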
Setting Up Tortoise SVN Client
So the focus for this tutorial is installing Tortoise SVN and importing our initial set of files so that they are controlled by SVN. To start we’ll need to download and install the Tortoise SVN client from here….
Run this installer and work your way through the prompts (accepting most of the defaults).
It’s worth adding the command line client tools at this point….
These utilities can come in handy at times, especially when integrating with other tools (like TestComplete for example).
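Once the install has finished, one quick way to confirm the optional command-line tools made it onto your PATH (assuming you ticked them during the install) is from a command prompt:

```shell
# Print the installed Subversion client version; if this prints a
# version number the command-line tools are installed and on the PATH.
svn --version --quiet
```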
You may need to restart your machine after this install but once completed you should be able to confirm the install by opening an ‘Explorer’ window and right clicking….
This should show you a couple of top level menu items that Tortoise now provides us with: SVN Checkout and Tortoise SVN. The Tortoise SVN menu item has a whole sub menu of SVN related actions.
At this point we’re ready to start working with files in our repository on our Visual SVN server. We have two main options:
1. SVN Checkout: with this option we’ll pull the files from a particular repository on our SVN server.
2. TortoiseSVN -> Import: with this option you can push files and directories into the repository to start things off.
We have a new SVN Server without any files in the repository. There’s not a lot to check out. So we’ll start by importing some files into our repository on the server. In a future module we’ll go through the SVN Checkout process followed by adding a handful of new files to our existing repository. For now though we’ll create some test files and directories on our local machine and then add them to a repository on the server, with the SVN ‘Import’ command.
With the ‘Import’ command we can push a batch of files and directories into an existing repository all in one go. It’s the simplest way to get started if you have a lot of files to add to a repository in bulk. Think of this as the start point where you have some local files/directories that aren’t under version control. You want to start version controlling those files and sharing them with other people. So you put them in a central place….. you import them into the SVN repository on the SVN server. Once they are in that repository they’re available to you and others to work on.
First though we’ll need to create some test files and directories to get us started. Create an empty directory on your system called “SvnDemo”.
It doesn’t matter where you place this directory on your system, you just need to have the ‘SvnDemo’ sub directory. Under this directory we need to create the following folder and file structure:
You can add a bit of text to some of the text files to get started too, but essentially you need to have this….
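The screenshot shows the exact layout to create. As a hypothetical stand-in (the folder and file names below are made up for illustration, not the ones in the screenshot), a structure along these lines can also be knocked up from the command line:

```shell
# Hypothetical stand-in for the SvnDemo structure -- the sub-folder
# and file names here are made up for illustration only.
cd "$(mktemp -d)"            # anywhere on your system is fine
mkdir -p SvnDemo/docs SvnDemo/src
echo "Some starting text" > SvnDemo/docs/notes.txt
echo "More starting text" > SvnDemo/src/main.txt
```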
Then go up a level in the directory hierarchy and select the Import option:
At this point you’ll need the URL for the repository you created earlier. You wrote down something like this earlier, right?
If you’ve forgotten what this URL is then you can go back to the VisualSVN Server app and right click to select ‘Copy URL to Clipboard’.
Note that the end of this URL has the name of the repository that we’ll be importing into; in our example above, ‘repo1’. Add this URL to the import dialogue and add an initial ‘Import message’ that describes what you’re doing.
Then click OK to import all your folders and files to your repository on the server. You’ll be asked to enter your user credentials as shown below (make sure to check the ‘Save authentication’ check box):
Once you’ve clicked OK you’ll be at a point where you see a list of all the files imported into the repository with, hopefully, a ‘Completed’ message.
You’ll notice that it’s given an initial revision of ‘1’. Our first revision number for our first import/check-in of our code to the empty repository we created earlier.
Now if we go back to our VisualSVN Server application and click refresh we should see all of our newly imported files on the server.
Note the structure though. The import process didn’t import the top level folder (‘SvnDemo’ in our example), it just imported the folders and files within that directory. The repository (‘repo1’ in our example) on the server now contains the original set of files that we created:
Or from the SVN server’s point of view the following files….
The critical bit is that when the next person checks out this repository, they will get these files with NO top level directory….
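If you have the command-line tools installed you can see this behaviour for yourself against a throw-away local repository: `svn import` copies the *contents* of the directory you name, not the directory itself. The paths and file names below are illustrative only.

```shell
# Throw-away local repository; a real project would use the
# http:// URL of your Visual SVN server instead of file://.
BASE=$(mktemp -d)
svnadmin create "$BASE/repo"

# A local top-level 'SvnDemo' directory with a file inside it.
mkdir -p "$BASE/SvnDemo/docs"
echo "hello" > "$BASE/SvnDemo/docs/notes.txt"

# Import the SvnDemo directory...
svn import "$BASE/SvnDemo" "file://$BASE/repo" -m "Initial import"

# ...then list the repository root: 'docs/' sits at the top level;
# there is no 'SvnDemo/' folder in the repository itself.
svn ls "file://$BASE/repo"
```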
Also take note of the fact that (AND THIS IS IMPORTANT) your local copy of the files is not version controlled. Whilst you have imported your files into the repository, the local files are not under version control. The import process doesn’t touch, modify or update these files in any way. They are still your original, un-versioned files.
Whilst the original (non-version controlled) files are still on your local machine, we now have our ‘master’ set of files stored on our SVN server. When we get a copy of these from the SVN server (using the ‘Check Out’ command) we’ll have a copy of these original files on our local machine that IS version controlled. It’s checking out a version controlled copy of these original files that we’ll cover in the next module.
In this set of tutorials we’re going to take you through the basics of Subversion with Visual SVN Server and Tortoise SVN. Subversion (from here on in referred to as SVN) is a centralised Version Control System. That is, it’s a tool that allows us to version control files and collaborate on files. SVN deployed with Visual SVN Server gives us a server environment within which to maintain our files. Add to this a graphical user interface called “Tortoise SVN” and this gives us the simplest and quickest way for individuals to collaborate on files and version control those files.
SVN was developed by CollabNet, and is currently maintained by the Apache Software Foundation. It’s an open source project and is predominantly a command line tool (on both Windows and Unix). The home of Subversion is the Apache Subversion web site where you can download the source code (you’ll find many mirror sites that host the binaries for various platforms).
Separately to this, a company called VisualSVN Software Ltd has developed an SVN server application that works as a graphical front end to the SVN server component. Then we have an independent open source project (GPL) called Tortoise SVN that provides a client front end for Windows. At the core though is SVN. We’re just using Visual SVN Server and the Tortoise SVN client because they’re the quickest and easiest ways to use SVN and start learning about it. Other front ends are available!
1. Visual SVN Server: a server application that hosts the central repository holding all the files you want to version control
2. Tortoise SVN: the client application that allows you to manage your files locally, get your files from the server and commit your updated files back to the server.
Let’s just go back to the main features of a version control system for a second. Those are …
a. version control of files
Version control of files is a simple concept in its own right. It just means we want to have a file that starts out as the “first version”. We make changes and save a copy of the file as the “second version”. If we want to go back to our original version we just open the “first version” of the document.
b. collaboration on files
When we collaborate on files all we’re looking for is the ability to share a file and for two or more people to be able to update that file. We can achieve this by either sending the file round in emails as an attachment or having the file on a shared file system where many people have access to this one file.
Both of these concepts in their own right are simple concepts that are easy to implement and easy to work with. The complexity starts to come when you want to combine both concepts and start having more than 2 or 3 people working on the files.
So for example you have a single file on a server and that file is at version 1. I take a copy of this file and update it. At the same time you take a copy and update it. I copy my version 2 of the file back to the server so that it’s available for everyone else to look at and update. Then shortly after, you copy your version of the file back to the server. Technically this is version 2 too. The worst bit though is that by copying it back you’ve overwritten my changes. All my work, in ‘my version 2’, is lost.
Or you could approach this another way. You have a single file on your computer and that file is at version 1. You send this in an email to your colleague and he/she updates it to version 2. At the same time though you update your version 1 of the file on your local system. Now you have version 2 of the file. And your colleague has version 2 of the file….. but both version 2’s of the file are most likely different. Who now owns the master?
It’s issues like this that SVN was designed to solve. However, being a command line tool doesn’t make SVN particularly easy to work with so the Visual SVN tools were developed to help people like you and me work with SVN. And it’s working with SVN, Visual SVN server and Tortoise SVN that we’ll be looking at over the next five tutorials.
We’ve broken this series of tutorials down into the following five parts:
- Setting up the central SVN Server
- Setting Up Tortoise SVN Client and Importing
- Check Out and Commit
- Resolving Conflicts
- Tags and Branching
In this, part 1, we’ll start by installing an SVN server from the Visual SVN website.
Installing Visual SVN Server
From this web page:
Download and install the appropriate version (32-bit or 64-bit) for your system. Run the .msi installer, accepting the licence agreement, the default settings and selecting the ‘Standard Edition’.
Once the install has completed click finish, making sure you select the ‘Start VisualSVN Server Manager’ option.
On the Visual SVN Server start page there are two parts we’re interested in. First is the ‘Repositories’ section where we’ll create a repository to store our files. The second is the ‘Users’ folder where we’ll define the users that will be allowed access to our repositories. That’s all we need to learn the basics; everything else is superfluous for now.
Creating Our Users
Let’s create some users first then. We’ll need to create two users so that we can simulate the actions of two users editing the same file later in the course. Follow these three steps:
1. click on the Users folder node and right click to select ‘Create new user’
2. enter the user credentials and a password
3. repeat the above to create a 2nd user
Creating Your First Repository
You can think of a repository as a container for a group of files. That might be a group of files for a particular project. At this point we have our server running and we have two users configured. Now we just need to create that repository where we can store our files.
1. click on the Repositories node and right click to select ‘Create New Repository’
2. select the ‘Regular FSFS repository’ option and click ‘Next’
3. give your new repository a name and click ‘Next’
4. Select the ‘Empty Repository’ option and then ‘Next’
5. Finally, to keep things simple, select the ‘All Subversion Users have Read / Write’ access and then create the repository
On the final dialogue box you’ll see confirmation that your repository has been created. You’ll want to note these details down as they’ll be needed when you point your SVN clients to the repository.
At this point you should have 2 users created and 1 repository shown in the Visual SVN Server explorer side panel.
And with the Repository you’ve just created you’ll see the URL for that repository listed in the ‘repo’ panel on the right hand side.
Checking the Repository
At this stage you can check the successful creation of the repository and that your users have access by right clicking in the ‘repo’ panel and selecting ‘Browse’
You should see a browser window open and a dialog box prompting for credentials. You can enter the credentials for one of the users you created:
Once you’ve logged in you should see the contents of the repository displayed in your browser window:
Not a lot to see at this stage but you can pick out the URL for the repository (shown in your browser address bar) and the name of the repo displayed top left. You’ll also notice the revision which is shown as ‘HEAD’. This just means that you’re viewing the latest version of all the files in the repository. Except that at the moment we don’t have any files. It’s adding those files as a user that we’ll look at in the next module.
As you’ve probably realised, what’s underlying all of this automated test case development is code. Behind the scenes your Keyword tests are really code. The artifacts like project files, checkpoints and name maps are all XML files. Ultimately you’re writing code to test code. And if there’s one thing code has, it’s bugs. Yes, your automated tests will have issues that you’ll need to work on and debug. Which is why TestComplete has some debugging tools built in. And that’s what we’re about to look at in this module.
We can break these tools down into three distinct areas….
- the context sensitive menu in the Keyword test work space
- the debugger toolbar
- the debugger panels
We’ll go through each of these in the following sections. Starting out with the tools you’ll find in the keyword test case work space.
When you’re editing or developing your key word tests in the TestComplete work space you can right click on a test step/item and you’ll see the menu on the left. In here you’ll find a list of menu items, the first three of which can be useful when debugging your tests.
Run Test: runs the current keyword test (outside of the Project)
Run Selected Operation: runs the highlighted test item in isolation
Run From Selected Operation: runs from the highlighted test item through to the end
What you’ll typically find is that your tests will run to a certain point then fail. The last thing you want to do is fix an issue and then start right from the beginning again. So use these three options once you’re at a certain point in your application and you just want to run a select set of test steps.
The other really important point when you’re in the test case work space is that you can set Break Points. A break point is a point you specify where your test run will stop (or break) and pause. Whilst paused you can use a set of tools to investigate what’s going on. To set a break point just click once in the left hand margin against the step that you need to stop at. Once set, when you run your test it will stop and pause just before it runs this step.
When your script is paused you’ll see the line it’s paused on highlighted and the debug menu bar highlighted. This menu bar presents you with a number of options.
Play — Continue to run your test
Stop — Stop your test
Enable/Disable Debugging — switch off (or on) the debugging feature. Switch off if you want to continue to run through to the end even if you have other break points defined later in your tests.
Pause Execution — not enabled once you’ve hit a break point because you’re already paused. However, it can be clicked during your test run if you want to stop and debug something before you hit a break point.
Run to Cursor — Runs the test up to the cursor and stops, just as if there had been a break point at the cursor location.
Step Into — Executes the next step in the test. If another test or routine is called, the debugger continues on the first line of the test or routine.
Step Over — Executes the next step in the test. If another test or routine is called, the debugger executes the entire call at once and continues on the next line after the test or routine.
Evaluation Dialog — Opens the Evaluate Dialog. The dialog allows you to view and modify variables, expressions and objects.
Breakpoints, Watch List, Locals and Call Stack — These buttons display panels of the same names and are described in the “Exploring Debugger Panels” topic.
Once you’re paused in debugging mode you’ll see some new panels displayed below the test case work space. These panels are:
This panel shows a list of all the breakpoints currently set in your project. The highlighted line is the break point where you’re currently paused. From here, if you need to, you can disable and enable individual breakpoints before you un-pause and continue the test run.
The watch list panel allows you to add code expressions that will get evaluated when you stop at a break point. Useful for working out what a particular variable or parameter is set to at the point the test paused.
The locals panel is similar to the watch list. However, here you don’t have to add values yourself. The values that are local to this test (e.g. defined on the test’s variables list) are displayed automatically.
The call stack tab shows you the traceability between test case calls. So if the 1st test case calls the 2nd test case, and the 2nd test case has a break point, this will show the stack of calls from the 1st to the 2nd (and further if there are more calls). This helps to identify what was happening outside of the current test case before its break point kicked in.
I think the only definitive thing to say is that as your tests become more complex and you build larger sets of tests you’ll end up relying on this more and more. Don’t worry too much now about the detail. Just understand the concepts and work out how to set a break point. Then as things grow you’ll soon get to practice working with the debugging features in TestComplete.
So long as there’s code there will be bugs. So, reluctantly, I have to assure you that you’ll end up with bugs in your code. Even if that code is test automation code.
Bit ironic really 🙂
Probably the most critical aspect of any automation project is the ability to reliably identify the objects you need to interact with. Fail to identify the objects at all and you’re dead in the water. End up with unreliable identification and you’re probably in an even worse position. When it works some times and not others you’ll spend inordinate amounts of time trying to work out why tests have failed…. is it your scripts or is it a bug? You have to get to a point where you can reliably identify application objects every time. In this module we’ll walk you through how you can approach this with TestComplete.
Key to object identification in TestComplete is a feature called the Name Map. You can think of this as your list of objects in your application that you want TestComplete to interact with. More than that, this list contains all the properties that you want to use to identify those objects. But why would you really want this?
Well consider the situation where you write all your tests and each test item/step points directly to the object you want to interact with. More than that, many test items/steps (maybe 100s or even 1,000s) point to the same object from various places in your scripts. Like testing the CalcPlus application: you’re going to have many test items/steps that point to and click on the ‘=’ button.
Now just say (and I know this sounds a bit daft but it highlights the issue) the developers change the application. They replace the ‘=’ button with a new button that has the text ‘Equals’. Now your test steps don’t have anything to point to. They don’t see the object that was there and they aren’t intelligent enough to work out that the button “=” is now the “Equals” button. Problem for you is that you have to go through all of your test steps, every single one, and update each one to point to the new object.
As you might have guessed this approach is not scalable. The solution is the TestComplete Name Map. Think of the name map as a layer that sits between your scripts and the objects you’re testing.
So rather than have your test steps point directly at the objects they all point at a single entry in the Name Map. The Name Map entry then points at the objects. Thus when the object changes…..
If the Equals button changes from “=” to “Equals” you don’t change all your test steps. In fact you don’t change any of them at all. You leave your test steps pointing at the entry in the Name Map. Then you change the Name Map to point at the new object. And that means just one change, in the Name Map. Makes your life a lot easier.
Now that’s the main purpose of the Name Map but there are two more. So in total the Name Map presents us with 3 main advantages:
1. object reference abstraction (we’ve just discussed this)
2. object identification
3. object aliasing
‘One’ we’ve already discussed. ‘Two’ is about using the Name Map to list the properties that we want to use to find the object. And ‘three’ is about allowing you to define your own name to reference an object (a name that might be more logical and easier to type). Let’s look at the Name Map and take those other two points in turn. If you double click on the Name Map node in the Project Explorer panel you’ll see this….
The Name Map work space is split into 3 panels. The panel shown as ‘2’ is for our object identification. The panels labelled ‘3a’ and ‘3b’ are for our object aliasing (or naming).
In panel 2 of the Name Map we set the “Properties” that tell TestComplete what the object looks like. The only way TestComplete (and you for that matter) can find an object in your application is if you tell it what to look for. In the example we’ve defined two properties: WndClass and WndCaption. We’re telling TestComplete that it needs to look for an object where WndClass=Button and WndCaption=“=”.
It’s up to you to make sure you define these properties so that TestComplete can uniquely identify the object… so you need to find unique properties and list them here. That might mean picking just 2 properties (as in the example) or 20 properties. Whatever you pick, you must make sure the properties uniquely identify the object and that they don’t change (if the property values change each time you start the application, or change dynamically at run time, you’ll need to find different properties). Just remember to look for unique, static properties and you won’t go too far wrong.
It’s all very well having a layer of abstraction (to help manage changes) and the ability to identify those objects. You’ll still need a name to refer to the objects in your scripts though. And it’s no good if that name looks something like this….
Sys.Process("CalcPlus").Window("SciCalc", "Calculator Plus", 1).Window("Button", "=", 21)
That’s the full name that TestComplete will always be able to refer to the ‘=’ button object with. A bit difficult to say and even more difficult to type in. That’s where Aliases come in.
Once you have the object mapped and listed in the Name Map, TestComplete gives the object a more logical, shorter name for you. In the Aliases panel (3a) you’ll see a much shorter name that’s easier to say and type….
In fact this is a name you can edit yourself: click on the name twice (as seen in the example) to give it an even more meaningful name. Sometimes TestComplete’s best guess at a sensible name isn’t so clever, so it’s wise to review and modify these as you see fit.
If you’re wondering what the ‘Mapped Objects’ panel (3b) is, that’s another way of looking at the same list of objects you’ve mapped. I like to think of the Mapped Objects window as showing the mapped object names as TestComplete sees them, and the Aliases window as showing the mapped object names as you want to see them. To start with I’d recommend ignoring the Mapped Objects panel and just focusing on the Aliases panel. This’ll keep you on the right track as you get used to mapping and referring to objects you’ve listed in the Name Map.
As mentioned before this concept of Name Mapping is absolutely key to TestComplete. It can be quite confusing to start with but persevere. It’ll make your life so much easier as you build your automated tests.
There’s no point in going to all this trouble of creating tests unless you can see the results of those test runs and see what’s passed or failed. Test logging in TestComplete is quite simple to get to grips with. Yet this logging capability provides a powerful way for you to easily navigate through mountains of test data that get generated as part of lots of test automation runs.
In this module we’ll look at
* How test logs are organised and stored
* What the test log window shows us
* How to filter the logs to see what you need
* Exporting logs in different formats
* How we can add log messages to our scripts
* Test logging options and project properties
First up then, let’s see how these logs are organised. In the Project Explorer window, scroll to the bottom, and you’ll see the folders where all the log files are stored.
The structure mirrors that of the Project Suite and Projects folders above. So if you run the ‘Project Suite’ you’ll see a log file stored at the Project Suite folder level. If you run a Project you’ll see the test run results stored in the folder for that specific project. If you run an individual test within a project you’ll see the log for that run stored at the relevant Project folder level too.
What’s important to grasp is that for each run of, say, a test project, you’ll get one log entry in the relevant project logs folder. So one project can have ‘many’ test logs (one for each test execution run).
As you complete lots of test runs you’ll find you end up with a lot of log files. Most of which you won’t want to keep. If you want to delete them select the ones you want to delete, right click and select ‘Remove’.
You’ll get the option to just ‘Remove’ (so you don’t see them in project explorer but they remain on the file system) or you can ‘Delete’ (and delete them from the file system – forever).
When you double click on a particular log file displayed in the Project Explorer you’ll see the log file displayed in the work space area. This work space is broken down into 3 distinct areas.
i) Log Items: Hierarchy of the test items (from the Project Test Items list)
ii) Test Log: list of the log messages created during the run (either Error, Warning, Message, Event or Checkpoint)
iii) Additional Info: sequence of tabs that delivers more info for each Test Log item with a Picture, Additional Info, Call Stack and/or Performance Counters
When you want to filter or search your logs you have two options. Either use the basic ‘Search’ functionality (see the search text box above the log entries) or invoke the ‘Filter’ feature. If you right click anywhere in the Test Log you can select the Filter option.
This log filter gives you the capability to define complex conditional logic to display exactly the log entries you need to see. You can also opt to show the parent or child log entries for the entries you find with this filter.
You’ll also notice, when you right click on the log items, the rest of the items available in this context sensitive menu. Among these options you have the capability to format the display of the log items and copy entries when needed.
If you right click in the Log Items panel you get a different context sensitive menu. From here you can create defects/issues in external tools and export the log to various different formats.
That’s what things look like when you’re viewing the logs. Next thing to look at is how you get your tests to create log entries and how those tests can be directed to structure and organise the log entries. For this we’ll look at the Logging operations found in the operations panel when developing your tests.
In the operations panel make sure you select the ‘Logging’ category. Then you’ll see all the logging operations you can use in your tests. Drag and drop these into your tests to add logging when your tests run. For example, create a new folder to put test log messages in with the ‘Append Log Folder’ operation (all new log messages are appended to that folder). When you’ve finished adding log messages to that specific folder you can start putting them back at the parent folder level by using the ‘Pop Log Folder’ operation. Just need to add your own log message? Then use the ‘Log Message’ operation.
Last to mention then are some of the more important project logging options and properties. Two areas to think about here. Options (global to TestComplete) and Properties (specific to each project).
Log options allow you to set global TestComplete options to define how log files are managed. Most are self explanatory but the two you might consider changing are:
Activate after test run: whenever a test run ends the work space area opens with the log file displayed. If this starts to annoy you then deselect it here.
Store all logs: by default TestComplete will save all log files and keep them forever. Log files stack up with this selected. So you could deselect it and then define how many you want to keep. Go over this number and TestComplete deletes the oldest log file automatically for you.
Next then are the project specific settings. Found under the properties tab in the project work space. The two most important settings here being:
Post image on error: if you switch off the visualizer on play back (to save space) then enabling this can help you catch screen shots when things don’t go quite right during the test run. Worth keeping switched on.
Save log every: Normally TestComplete only writes the log file to disk at the end of the test run. Sometimes your system will crash or reboot and the log file won’t get written to disk… you lose all your test run logging. Enable this and you’ll at least keep most of the log when this happens. Very useful for long test runs.
So that’s pretty much it on the logging side of things. We’ve looked at how test logs are organised and stored, what the test log window shows us, and the test logging options and project properties. Next up we’re going to look in more detail at Name Mapping and identifying objects in the application you’re testing.