Test Management Forum 27 July 2016

The 51st Test Management Forum took place on Wednesday 27 July 2016 at the conference centre at Balls Brothers, Minster Pavement.

Sponsored by CA Technologies

Timetable

13:30 Tea/Coffee
14:00 Introductions
14:15

Session A

Stevan Zivanovic, "New Technology, New Approaches"

Stevan's summary of the session is here.

Stevan's slides are here.

Session B

Paul Gerrard, "Will Robots Replace Testers?"

Paul's slides are here.

Session C

Andy Redwood, "Quick wins in Testing that actually work"

15:30 Tea/Coffee
16:00

Session D

Huw Price, CA Technologies, "Manual Testing is Dead: A Real World Story of moving from 5% Functional Test Automation to 95%"

Session E

Paul Gerrard, "Surveying - the New Testing?"

Paul's slides are here.

Session F

Declan O'Riordan, "Blended Security Testing"

17:15 Drinks Reception

Abstracts

Paul Gerrard, "Will Robots Replace Testers?"

A recent study from the University of Oxford makes for interesting reading:

  • Over the next two decades, 47% of jobs in the US may be under threat.
  • 702 occupations are ranked by their probability of computerisation: telemarketers are deemed most likely to be automated (99%), recreational therapists least likely (0.28%), and computer programmers 48% likely to be replaced.

If programmers have a 50/50 chance of being replaced by robots, we should think seriously about how the same might happen to testers.

Machine Learning in testing is an intriguing prospect but not imminent. However, the next generation of testing tools will look very different from the ones we use today.

For the past thirty years or so we have placed emphasis on test automation and checking. In the New Model for Testing, I call this 'Applying'. We have paid much less attention to the other nine - yes, nine - test activities. As a consequence, we have simple robots to run tests, but nothing much to help us to create good tests for those robots to run. 

The tools we use in testing today are limited by the approaches and processes we employ. Traditional testing is document-centric and aims to reuse plans as records of tester activity. That approach and many of our tools are stuck in the past. Bureaucratic test management tools have been one automation pillar (or millstone). The other pillar – test automation tools – derives from an obsession with the mechanical, purely technical execution activity and is bounded by an assertion that many vendors still promote: that testing is just bashing keys or touchscreens, which tools can do just as well.

In this session we'll discuss the capabilities of the tools we need in the future.

Andy Redwood, "Quick wins in Testing that actually work"

As a senior test manager I don’t just have a responsibility to prove or disprove the changes or attempt to provide evidence of quality, although these are important. I also have to maximise services from the available budget and look for savings and efficiencies.

In reality I’m trying to achieve all of these objectives at the same time, and have had some wins on quite niche, small pieces of work and also when creating economies of scale across life-cycle changes to policies or working practices.

I will share some of my experiences, large and small, explain the enablers and the constraints, and provide some indication (within the confidentiality agreements) of the value of these initiatives.

Stevan Zivanovic, "New Technology, New Approaches"

From a recent discussion with a C-level executive: "I want to Digitally Transform, using micro-services, in the cloud and I need a big data solution".

This all appears to be a big shift from the more traditional application stack that we as testers know, understand and have delivered against. So what do technology changes such as Digital, micro-services and big data actually mean for the tester?

This session will explore what some of these terms mean and the strategic approaches to testing they call for, whilst encouraging participation from the group.

Declan O'Riordan, "Blended Security Testing"

‘Blended Testing’ sounds attractive and harmonious, like ‘intelligent testing’ or ‘world peace’, but what does blended testing really mean? Which ingredients are being combined, and how? Is it possible to blend what have traditionally been categorised as ‘functional’ and ‘non-functional’ tests? The very separation of security, performance, usability, and functional testing into different spheres indicates their test approaches are naturally separate.

Until recently, anyone attempting to simultaneously blend functional tests with non-functional tests would only be able to exploit small overlaps in the coverage, while the majority of tests would still require a bespoke approach. The acceleration of deliveries driven by agile and DevOps frameworks has led to large areas being left out of scope, particularly in security testing. The lack of skilled staff and the abundance of inaccurate tools have created a security testing bottleneck that either stifles the delivery pipeline or is bypassed altogether, with huge risks simply ignored.

Fortunately, a new breed of sensor technology has recently emerged that provides detailed security and performance information at tremendous speed. Best of all, these security vulnerability detection tools don’t need specific security tests or skills to achieve 92%+ accuracy. Any test data prompts the sensors into action. Is blended testing becoming a reality? Let’s discuss!
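The abstract does not name specific products, but the idea that "any test data prompts the sensors into action" can be approximated with common open-source pieces: route ordinary functional tests through a passive security proxy so that security observations accumulate as a side effect of the tests you already run. The Python sketch below is purely illustrative, not the tooling Declan will present; the proxy address (for example an OWASP ZAP instance on localhost:8080) and the target URL are assumptions.

    # Illustrative sketch only: an ordinary functional API test routed through
    # a passive security proxy, so security findings come "for free".
    # The proxy address and target URL below are placeholders/assumptions.
    import requests

    PROXIES = {"http": "http://localhost:8080", "https": "http://localhost:8080"}
    BASE_URL = "https://example.test/api"  # placeholder system under test

    def test_create_order():
        """A normal functional check; the proxy passively inspects the traffic."""
        response = requests.post(
            f"{BASE_URL}/orders",
            json={"item": "widget", "quantity": 2},
            proxies=PROXIES,
            verify=False,   # the proxy re-signs TLS; acceptable only in a test rig
            timeout=10,
        )
        assert response.status_code == 201

The functional assertion is unchanged; the only addition is the proxy setting, which is what makes this kind of blending cheap enough to apply to every test.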

Huw Price, "Manual Testing is Dead: A Real World Story of moving from 5% Functional Test Automation to 95%"

It is a mistake to think that automating test execution will deliver a “silver bullet” for testing woes, eliminating testing bottlenecks while detecting as many defects as possible. Even with a good automation engine, the manual effort of deriving test scripts often remains, and the defects created by ambiguous requirements will persist. The lack of a systematic method in any manual approach further means that test coverage will remain low and defects will continue to be detected late.

In this talk, Huw Price will provide a real world example of how Model-Based Testing has been used to make testing more rigorous, fully automated, and more reactive to change.

The session will set out how fully automated testing started with the BAs modelling system requirements as unambiguous, “active” flowchart models. These were taken by testers and developers, verified, and iteratively refined to more closely match the end-user’s desired functionality. At the same time, the test cases needed to cover 100% of the system’s functionality were derived from the mathematically precise model, and optimized to reduce execution time.

The right data and expected results were also derived automatically from the model, and the automated tests were updated automatically when the requirements changed. This approach therefore automated the bulk of the testing effort, so that rigorously tested software could be delivered even as user requirements constantly changed.
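The models themselves are not reproduced in the abstract, but the mechanics of deriving tests from a flowchart can be shown with a small, hypothetical Python sketch (it is not CA's tooling): a requirement is represented as a directed graph whose edges carry conditions, and every start-to-end path through the graph becomes a test case, so editing the model and re-running the derivation is, in this toy form, what "updating the tests automatically" amounts to.

    # Hypothetical illustration of model-based test derivation (not CA's tooling).
    # A simplified login requirement is modelled as a flowchart: nodes are steps,
    # edges carry the condition (test data or expected outcome) for that branch.
    FLOWCHART = {
        "start": [("enter credentials", "any input")],
        "enter credentials": [("validate", "known user"), ("reject", "unknown user")],
        "validate": [("dashboard", "correct password"), ("reject", "wrong password")],
        "reject": [("end", "error message shown")],
        "dashboard": [("end", "home page shown")],
    }

    def derive_test_cases(model, node="start", path=None):
        """Enumerate every start-to-end path; each path is one test case."""
        path = path or []
        if node == "end":
            yield path
            return
        for next_node, condition in model.get(node, []):
            yield from derive_test_cases(model, next_node, path + [(next_node, condition)])

    if __name__ == "__main__":
        for i, case in enumerate(derive_test_cases(FLOWCHART), start=1):
            print(f"Test case {i}: " + " -> ".join(f"{step} [{cond}]" for step, cond in case))

Running the sketch yields three paths (successful login, wrong password, unknown user), which between them cover every edge in the model; a change to the flowchart regenerates the suite in the same way.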

Paul Gerrard, "Surveying - the New Testing?"

The pressure to modernise our approaches, to speed up testing and to reduce cost and dependency on less-skilled labour means we need some new ideas. I have suggested a refined approach based on a Surveying metaphor. This metaphor enables us to think differently about how we use tools to support knowledge acquisition.

The Survey metaphor requires new collaborative tools that can capture information as it is gathered, with little distraction or friction. These tools can also prompt the user to ask questions and to document their thoughts, concerns, observations and ideas for tests. In this vision, automated tools get a new role – supportive of tester thinking, but not replacing it.

Your pair in the exploration and testing of systems might soon be a robot. Like a human partner, it will capture the knowledge you impart. Over time it will learn how to support and challenge you and help you navigate your exploration or Surveying activity. Eventually, your partner will suggest ideas that rival your own. But that is still some way off.

Paul will outline the survey metaphor and, technology willing, demonstrate how a bot might be used as your pair.