
Test Automation Tools for Desktop Apps. Pt.2: Open Source tools

Published December 21, 2018

Find a comprehensive comparison of free frameworks that make it possible to automate testing of a complex .NET desktop app. The open-source solution we chose was a perfect fit in terms of Continuous Integration, transparency, and costs.

Automated testing is widely used for web and mobile apps, which make up the majority of applications developed today. Desktop apps are less common nowadays, but they are still widespread in finance, manufacturing, and HoReCa. We had an opportunity to perform testing on a client's project of tens of thousands of LOC, where manual testing of just one new function required 6 weeks from development to release.

To speed up the process, we selected the optimal automated test framework: leaving packaged solutions behind because of their price and lack of transparency, we found an open-source solution that fit our needs perfectly.

So, our challenge was to set up a pilot solution for automated testing, the final goal being to automate testing of a complex .NET desktop app. The solution had to meet our selection criteria:

  • Continuous Integration (CI): the framework must generate overnight CI reports with test results, or send an email describing the failed tests.
  • Transparency: the testing team must understand which parameter is being checked at each testing step, and the manager must be able to get reports in a human-readable form (such as "Form N is lacking the third input field").
  • Costs: the solution must not raise the project costs significantly.

Comparison of free frameworks

  1. The first solution tested for our challenge was Visual Studio Coded UI.
    Pros:
    • Allows for using the same technologies that the test object is based on (in our case, .NET and C#)
    • Has a test recorder
    • Supports WinForms
    • Supports DevExpress UI controls
    Cons:
    • Scarce documentation
    • Visual Studio Ultimate is available only for a fee, and the free edition has no access to Coded UI Test
    • The code generated by the test recorder is hardly maintainable: recorded tests fail to run (this might not be the case for less complex interfaces, for which the tests may run properly)
    • BDD-style test case writing (for example, with SpecFlow, one of the most popular C# frameworks) does not work with Coded UI out of the box
    • UI element search and interaction are limited to UIA API v.1 (MSAA)
  2. The CUITe framework can be used in combination with Coded UI for different VS versions. This tool can be considered, but it does not fix the essential drawbacks of Coded UI.
  3. TestStack.White was the next free solution that we considered.
    Pros:
    • The tool is fully functional for writing tests in C#
    • Compatible with BDD-style test writing
    • MSAA, UIA 2, 3 are available for selection
    Cons:
    • The legacy codebase is obviously in need of refactoring
    • The NuGet package was last updated in 2014, although the repository has changed since then
  4. Further research suggested that we should try WhiteSharp, so we did
    Pros:
    • The tool is fully functional for writing tests in C#
    • BDD compatible
    Cons:
    • No way to identify all UI elements correctly. Not compatible with DevExpress elements
    • NuGet package was last updated in 2016
    • Scarce documentation and small community to ask for help
    • Even elementary test cases ("open the app", "check the status bar content") run very slowly
  5. Winium.Desktop has a significant disadvantage of not supporting .NET 4.0
  6. FlaUI is another fully functional tool for writing tests in C#. Its author emphasizes that the goal was to remake TestStack.White from scratch, removing the unnecessary and outdated features
    Pros:
    • BDD compatible
    • Allows for using any version of UIA: MSAA, UIA2, or UIA3
    • Supports Windows 7, 8, 10 (likely XP too, but we have not checked)
    • Good community: helps solve problems not only in issues, but also in chats
  7. WinAppDriver + Appium is a fully functional platform for writing tests in C#, Java, and Python
    Pros:
    • Developed and supported by Microsoft
    • BDD compatible
    • Low entry barrier, assuming some experience in web and mobile testing
    Cons:
    • Works only on Windows 10 (see the system requirements below)

Having checked all benefits and risks, we opted for WinAppDriver and FlaUI.

Examples of using the chosen solutions

Before writing tests, one must pick a UI element inspector. Our QA specialists prefer Inspect.exe and UISpy.exe.

WinAppDriver

System requirements: Windows 10 (unfortunately, older versions are not supported).

First of all, download, install, and run WinAppDriver.exe. Then install Appium and run the Appium server (a daemon). If you don't use C#, here are the samples in Python + Robot Framework.

Creating an app session and connecting to it:

    
        from appium import webdriver

        class App:
            def __init__(self):
                # Store application id of the Calculator; WinAppDriver listens on port 4723
                desired_caps = {"app": "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App"}
                self.__driver = webdriver.Remote(
                    command_executor='http://127.0.0.1:4723',
                    desired_capabilities=desired_caps)

            def setup(self):
                return self.__driver
    
  

Opening the app:

    
    import unittest
    from robot.api.deco import keyword

    class CalculatorKeywordsLibrary(unittest.TestCase):
        @keyword("Open App")
        def open_app(self):
            # Start a WinAppDriver session via the App class defined above
            self.driver = App().setup()
            return self.driver
    

Finding buttons and clicking them:

    
      @keyword("Click Calculator Button")
      def click_calculator_button(self, button_name):
          # Buttons are located by their visible name, e.g. "Four" or "Square root"
          self.driver.find_element_by_name(button_name).click()
    

Getting the result:

    
      @keyword("Get Result")
      def get_result(self):
          # The calculator display reads like "Display is 2"; keep only the value
          display_text = self.driver.find_element_by_accessibility_id("CalculatorResults").text
          display_text = display_text.replace("Display is", "")
          return display_text.strip()
    

Quitting the app:

    
      @keyword("Quit App")
      def quit_app(self):
          # Ends the WinAppDriver session and closes the app
          self.driver.quit()
    

As a result, the Robot test to calculate a square root looks like this:

    
      # Assuming the keyword library above is saved as CalculatorKeywordsLibrary.py
      *** Settings ***
      Library    CalculatorKeywordsLibrary.py

      *** Test Cases ***
      Test Square Root
          [Tags]    Calculator test
          [Documentation]    *Purpose: test Square root*
          ...
          ...    *Actions*:
          ...    - Check that the calculator has started
          ...    - Press button Four
          ...    - Press button Square root
          ...
          ...    *Result*:
          ...    - Result is 2
          ...
          Assert Calculator Is Open
          Click Calculator Button    Four
          Click Calculator Button    Square root
          ${result} =    Get Result
          AssertEquals    ${result}    2
          [Teardown]    Quit App
    

You can run this test like a regular Robot test from the command line: robot test.robot.

There is detailed documentation on the BDD-style Robot Framework. Both WinStore apps and classic desktop apps can be tested.
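
For a classic app, the session is created the same way; the only difference is that the "app" capability points to the path of the executable instead of the Store application ID used earlier. Below is a minimal C# sketch, assuming the Appium .NET client (the Appium.WebDriver NuGet package); the path and class name are just examples:

      using System;
      using OpenQA.Selenium.Appium;
      using OpenQA.Selenium.Appium.Windows;

      public static class ClassicAppSession
      {
          public static WindowsDriver<WindowsElement> Open()
          {
              var options = new AppiumOptions();
              // For a classic app, "app" is the path to the executable
              // instead of the Store application ID used above.
              options.AddAdditionalCapability("app", @"C:\Windows\System32\notepad.exe");
              return new WindowsDriver<WindowsElement>(new Uri("http://127.0.0.1:4723"), options);
          }
      }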

Supported Locator types:

  • FindElementByAccessibilityId

  • FindElementByClassName

  • FindElementById

  • FindElementByName

  • FindElementByTagName

  • FindElementByXPath

This shows that the tests are essentially Selenium-like. Any testing pattern can be applied (Page Object and others), and a BDD framework can be added (for Python, for example, Robot Framework or Behave).
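
To illustrate the Page Object pattern, here is a short C# sketch on top of the Appium .NET client; it reuses the Calculator locators from the examples above, and everything else is illustrative rather than part of our actual test suite:

      using OpenQA.Selenium;
      using OpenQA.Selenium.Appium.Windows;

      // A minimal Page Object sketch for the Calculator example above
      public class CalculatorPage
      {
          private readonly WindowsDriver<WindowsElement> _driver;

          public CalculatorPage(WindowsDriver<WindowsElement> driver) => _driver = driver;

          // Locator strategies from the list above
          private IWebElement Results => _driver.FindElementByAccessibilityId("CalculatorResults");
          private IWebElement Button(string name) => _driver.FindElementByName(name);

          public void Press(string buttonName) => Button(buttonName).Click();

          public string DisplayedValue => Results.Text.Replace("Display is", "").Trim();
      }

A test then talks to CalculatorPage instead of raw locators, which keeps all element lookups in one place.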

Further reading: detailed documentation with test examples in different languages.

FlaUI

FlaUI is basically a wrapper around the UI Automation libraries. The commercial utilities mentioned earlier are also wrappers around the Microsoft UI Automation API.

Our setup: a Windows 7 PC with Visual Studio. Launching the app and attaching to the process:

    
      [Test]
        public void NotepadAttachByNameTest()
        {
            using (var app = Application.Launch("notepad.exe"))
            {
                using (var automation = new UIA3Automation())
                {
                    var window = app.GetMainWindow(automation);
                    Assert.That(window, Is.Not.Null);
                    Assert.That(window.Title, Is.Not.Null);
                }
                app.Close();
            }
        }
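
The test above launches Notepad; attaching to an instance that is already running looks much the same. A minimal sketch using FlaUI's Application.Attach (it assumes a notepad.exe process is already there):

      [Test]
      public void NotepadAttachTest()
      {
          // Attach to a running notepad.exe instead of launching a new instance
          using (var app = Application.Attach("notepad.exe"))
          using (var automation = new UIA3Automation())
          {
              var window = app.GetMainWindow(automation);
              Assert.That(window, Is.Not.Null);
          }
      }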
    

Finding the text area with an Inspector and sending a string to it:

    
      [Test]
      public void SendKeys()
      {
      using (var app = Application.Launch("notepad.exe"))
      {
      using (var automation = new UIA3Automation())
      {
      var mainWindow = app.GetMainWindow(automation);
      var textArea = mainWindow.FindFirstDescendant("_text");
      textArea.Click();
      //some string
      Keyboard.Type("Test string");
      //some key:
      Keyboard.Type(VirtualKeyShort.KEY_Z);
      Assert.That(mainWindow.FindFirstDescendant("_text").Patterns.Value.Pattern.Value, Is.EqualTo("Test stringz"));
      }
      app.Close();
      }
      }
    

Basic element search types:

  • FindFirstChild

  • FindAllChildren

  • FindFirstDescendant

  • FindAllDescendants

Also, you can search by:

  • AutomationId

  • Name

  • ClassName

  • HelpText

Search by any pattern supported by the element is also available, for example by Value or by LegacyIAccessible. Search by condition is available as well:

    
      public AutomationElement Element(string legacyIAccessibleValue) =>
        LoginWindow.FindFirstDescendant(new PropertyCondition(Automation.PropertyLibrary.LegacyIAccessible.Value, legacyIAccessibleValue));
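
FlaUI also offers a ConditionFactory shorthand for building such conditions; a brief sketch (the ids and names below are illustrative):

      // Search with the ConditionFactory lambda overload; ids and names are examples
      var okButton = mainWindow.FindFirstDescendant(cf => cf.ByAutomationId("btnOk"));
      var loginButton = mainWindow.FindFirstDescendant(
          cf => cf.ByName("Login").And(cf.ByClassName("Button")));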
    

Our team appreciated the static Retry class that helps with retries:

Retry.While(foundMethod, Timeout, RetryInterval);

The class retries a method in a loop at a given interval until a given timeout expires (custom or default), so there is no more Thread.Sleep in the test or page logic.

Example:

    
      private static LoginWindow LoginWindow
      {
        get
        {
          Retry.WhileException(() => MainWindow.LoginWindow.IsEnabled);
          return new LoginWindow(MainWindow.LoginWindow);
        }
      }
    

LoginWindow is created only when the login window is actually shown (using the default interval and timeout). This approach takes UI test stability to a new level.
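
When the defaults are not enough, explicit values can be passed, following the Retry.While form shown above (the element and the timings below are illustrative):

      // Keep retrying while the button is still disabled,
      // polling every 500 ms and giving up after 10 seconds
      Retry.While(
          () => !LoginWindow.LoginButton.IsEnabled,   // LoginButton is an illustrative element
          TimeSpan.FromSeconds(10),
          TimeSpan.FromMilliseconds(500));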

For further reading, you can refer to this detailed guide on GitHub.

Test development time. UI test stability and support

It was planned to complete six large-scale regression scenarios within three months of development from scratch, including studying the project, researching the available frameworks (described above), developing the AT skeleton, setting up CI, plus management, communication, and documentation. The project was delivered on time.

If we continue improving test coverage, these setup processes will of course take less time, as they were essentially completed at the pilot stage.

Tests in Continuous Integration

Both frameworks support test runs in Continuous Integration systems. Our team uses TeamCity and TFS.

When using frameworks such as FlaUI and SpecFlow, with tests written in C#, making a build is trivial (get the sources, build, run), except for a few requirements for the build agent:

  • The build agent must run as a command-line utility (instead of a service); both systems allow that. Automatic start of the build agent must also be set up.
  • Set up automatic logon (use SysInternals Autologon for Windows, as it encrypts your password in the registry).
  • Turn off the screensaver.
  • Turn off Windows Error Reporting. See Help on Gist: DisableWER.reg.
  • Have an active desktop session on the machine where the tests are built and run. A standard Windows app is enough for that.
  • RDC can be used, but VNC is often recommended instead (when you connect via Remote Desktop and then disconnect, the desktop gets locked and UI Automation fails).

The SpecFlow BDD framework can be integrated with TeamCity and TFS on the fly: the test report is generated right during the test run.
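
To give an idea of how this looks in practice, here is a minimal SpecFlow step-binding sketch that drives the FlaUI code shown earlier; the step texts and the target app are illustrative, not taken from our project:

      using FlaUI.Core;
      using FlaUI.UIA3;
      using NUnit.Framework;
      using TechTalk.SpecFlow;

      [Binding]
      public class MainWindowSteps
      {
          private Application _app;
          private UIA3Automation _automation;

          [Given(@"the application is started")]
          public void GivenTheApplicationIsStarted()
          {
              _automation = new UIA3Automation();
              _app = Application.Launch("notepad.exe"); // illustrative target app
          }

          [Then(@"the main window has a title")]
          public void ThenTheMainWindowHasATitle()
          {
              var window = _app.GetMainWindow(_automation);
              Assert.That(window.Title, Is.Not.Null);
          }

          [AfterScenario]
          public void TearDown()
          {
              _app?.Close();
              _automation?.Dispose();
          }
      }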

The ROI of Automated Testing

It is hard to calculate ROI before implementing an automated testing solution, because there are a lot of variables and unknown factors. (And you had better not believe the ROI calculation provided on the TestComplete pricing page!)

But after the pilot has been implemented, some figures can be drawn. Let’s stick to the simplest ROI definition:

ROI = (Gain from Investment - Cost of Investment) / Cost of Investment

First year automation cost (without support):

Three months for a single developer-in-test to reach complete regression test coverage with automated tests: 500 hours * Dev hourly rate

We assume that the solution cost (Cost of Investment) is paid in full by the client during the first year, not divided, and no loan capital is used.

Profit:

In the first part, we calculated that manual regression testing takes one month: two weeks for the primary run plus two weeks for the secondary run (excluding checks of new tests and the associated expenses).

After the development and introduction of automated regression testing (3 months), manual regression testing is not performed for the remaining 9 months. Let's assume that the freed-up resources (i.e. the testing team) are employed on other projects, so we saved 1500 hours * QA hourly rate * 2 engineers.

ROI = (1500 * QA * 2 - 500 * Dev) / (500 * Dev) = (6 * QA - Dev) / Dev

Assuming the mid-market average QA hourly rate = n and Dev hourly rate = 1.5n:

ROI = (6n - 1.5n) / 1.5n = 3.

Obviously, the first year provides the lowest payback; the ROI in subsequent years will be much better due to the absence of the Cost of Investment.

Of course, this calculation is very rough and it does not include support, depreciation, reporting enhancement, and the quality of automatic tests themselves. We suggest you take these factors into consideration in the next ROI calculation after the first year of AT introduction.

The Result

Introducing automated testing saves QA labor costs and frees up a team of two QA engineers for other tasks, thus optimizing the project resources.

Of course, manual testing is still an essential part of the development job. But with a large-scale project of tens of thousands of LOC, where a feature release takes several weeks or more, test automation is crucial. To figure out whether it is possible and efficient to automate all testing jobs, our team uses the Do Pilot approach, which means automating a small scope of work with a given framework.

Nevertheless, using all the necessary functionality of paid packaged solutions and expanding the team to at least two people comes at a price. To spare the customer yearly support costs on top of the initial purchase, we considered several open-source AT frameworks. Two of them met our needs and are currently in use.

Therefore, although test automation requires research to find a suitable framework and effort to write the automated tests themselves, it still yields far more benefit than manual test runs.

Release your project sooner without raising costs for no reason! 

We will gladly answer your questions about test automation of any application — even a complex Windows Desktop one.

