End-to-end: Go Fix It


Not that long ago, our testing processes fell into disarray. With the project growing rapidly and new teams and people onboarding quickly, it was no wonder that product quality suffered.

In this article, I am going to share my experience: how we analyzed this issue, looked for ways to solve it, and eventually dealt with it. In other words, how we fought for quality and won.

First, let's define what end-to-end testing, or E2E, is.

E2E sits at the top of the testing pyramid; simply put, it is the final stage of testing. With E2E, you check how the features under test appear to end users. Basically, it helps answer two important questions: is everything working as planned, and does the product meet all user needs?

With this kind of testing, you focus on what the user sees, on the bigger picture, since the internal parts of the system should already have been checked at the lower tiers of the testing pyramid.

A bit of background

Our project started two years ago, when one of the customers of a large testing platform provider decided to acquire this platform and continue developing it on their own.

On this project, our main tasks were the following:

  1. Maintaining the existing features. This is a large platform that has been around for over 20 years. It covers the entire workflow, from creating test content to assessing the exam taken by the candidate. With a huge number of components, our task is to monitor their performance and interaction.

  2. Optimizing the existing functionality. No matter how good it is, one can always do better.

  3. Developing new features. This is the main reason our project started. Our client had their own vision of how the platform should evolve, and they needed assistance to bring it to life.

When we started working, we only had two teams, with two QA engineers each. We had no communication issues, as,

  • first, everyone had already worked with the platform and knew its ins and outs,
  • and, second, with only four people, everything could be quickly solved within a chat or call in MS Teams.

Things went quite well, with a fast and promising kick-off: we took part in global planning, seamlessly delivered new features, suggested our ideas for optimization, and generally made the product better.

As a result, our client appreciated our work and approved scaling. After a few stages, we eventually grew to five teams that included ten core QA engineers, as well as trainees and some of the client's team members.

We were proud of ourselves and our project, but, unfortunately, not for long. Regression testing started taking longer, and bugs began appearing in the pre-release environment, which was quite disappointing.

This was reason enough to sound the alarm and start changing things urgently.

Thus, we arranged a meeting with the Scrum masters, the delivery manager, and the client's quality control team, with the goal of finding the root cause.

These are the issues we discovered:

  1. After scaling up, communication between the various QA teams dropped drastically; each team kind of lived in its own world, with most communication being internal.

  2. At the planning stage, we were missing important points that were not obvious, partly because of the many new team members who joined after scaling up. Such points only came to light when the development cycle was almost over, which led to quick fixes and testing in a kind of 'panic mode'.

  3. More teams also meant more features being released. On a rather large and complex project, multiple teams may work on the same set of tasks at the same time, which usually generates a lot of bugs.

The conclusion we came to: 

We were simply short of information, as the QA engineers had little idea of what was going on in other teams. As a result, everyone seemed to be doing the same thing, but eventually it turned into a complete mess.


We realized we needed to unify the testing process, with a special focus on end-to-end testing. We already had E2E, but each team ran it in its own way, and the results were not recorded anywhere. This meant no transparency: nobody knew what a QA team had already covered and what was still out of scope.

To streamline the process, we introduced a new type of shared document that would both record E2E scenarios and contain useful information for more structured and centralized testing.

We realized we needed such a document for each feature being tested. That means that, at the planning stage, we now create one more user story: E2E. During the development cycle, an end-to-end plan is created under this user story, based on an existing template.

Below you can find a slightly anonymized example of what this may look like. That is, it is based on our real end-to-end test plan, although some specific details are not disclosed.

Structure Example:

  1. Useful links that may come in handy as you create your E2E plan. This section may include:

    • A wiki-like instruction on how to work with the test plan (in our case, it was created in parallel with the E2E process itself)

    • A wiki-like product considerations document: a guide to the key areas of the application and system requirements, such as screen resolution, browsers, or languages

    • A wiki-like backward compatibility document, in case the product in question has multiple versions.

  2. Questions to answer:

    • Which scenarios should be tested based on the feature's functional requirements?

    • Which scenarios should be tested for backward compatibility?

    • Which product areas were affected while the feature was being developed?

    • Which of these areas are the most critical and require special attention?

  3. Non-functional requirements, where you choose what is relevant and describe what is to be tested. Our list looked like this:

    • Performance and load testing

    • Accessibility testing

    • Localization testing

    • Cross-browser and cross-platform testing

  4. Automated testing (select what applies and describe the scenarios):

    • Selenium tests

    • API / integration tests

    • Unit tests

  5. Out-of-scope scenarios.

  6. Links to all test cases for the feature.

You can (and likely should) adjust this structure to the specifics of your project to get the maximum effect. Our example is just a starting point.
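If your plans live in plain text or markdown, such a structure can even be checked mechanically, for example in a CI step. Below is a minimal, purely hypothetical Python sketch: the section names mirror the template above, while the function and variable names are our own invention, not part of any real toolchain.

```python
# Hypothetical sketch: verify that an E2E plan document mentions every
# required section from the template. The section list mirrors the
# structure example above.
REQUIRED_SECTIONS = [
    "Useful links",
    "Questions to answer",
    "Non-functional requirements",
    "Automated testing",
    "Out-of-scope scenarios",
    "Test cases",
]

def missing_sections(plan_text: str) -> list[str]:
    """Return the template sections that plan_text does not mention."""
    lowered = plan_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

# A draft plan that still lacks the last three sections:
draft = """
## Useful links
## Questions to answer
## Non-functional requirements
"""
print(missing_sections(draft))
# -> ['Automated testing', 'Out-of-scope scenarios', 'Test cases']
```

A check like this is deliberately naive (a simple substring match), but even that is enough to flag a plan that skipped a mandatory section before review.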

Important notes:

  1. You should start working with such a test plan at the feature planning stage; this way, you will be able to ask all questions as early as possible.

  2. As soon as the first version is ready, you can additionally visualize the plan by creating a mind map based on it. Looking at an image is faster and easier than reading the full description. Our client especially appreciated this point.

  3. If another team reviews your test plan, this will increase transparency and help find defects.

  4. Adjust and refine the plan as the feature is being developed.

In our case, we saw the first results as early as the next regression cycle. We were better equipped and prepared, which had a clearly positive effect on testing quality.

Results and advantages

  1. Once a feature has been developed, you already have a document with all testing-related points recorded.

  2. The plan is a user story, which means it is shared among all members of the development team, including the client's team, product owners, etc.

  3. With plan reviews, members of different QA teams started communicating more. This also enabled sharing expertise by area.

  4. The plan helped arrange tasks, while the number of unexpectedly emerging issues significantly decreased.

  5. Since you have a draft plan as early as the planning stage, you can share it with the product owner and clarify whether you understand the task correctly.

  6. You can always get back to the plan to recall what a certain feature was about.

Overall, I would not say it worked out well from the very start; we had a few discussion rounds before we arrived at the final version. It took us around two to three months to create, test, and adjust the entire process.


Some of you may not be fond of change. If this is the case, here is a small bonus for you: instructions on how to make the implementation process as smooth as possible. To some, this may sound like Captain Obvious advice, but still:

  1. Create a template, if your task management system allows it. Change can be met with resistance, especially when it requires additional effort; if you make the process easier, there will be less of it.
  2. Discuss with the team: Doing something on your own and then just saying, 'Here you go, now let's start working,' is bad practice. You should start the discussion as soon as you have the first version of your document. The more people you involve, the fewer 'Oh no, paperwork again' reactions you will get, and the more engagement and shared responsibility for decisions there will be.
  3. Improve: Nothing is perfect from the very start. Even after you improve it once or twice, there will still be room for improvement. Hold meetings after you have implemented your plan and discuss what went well and what did not. Run a few sessions like this to achieve the best result possible. It's worth your while.
  4. Create a wiki, if required, and put a link to it directly into the template. This will be very handy during the first months of the implementation. You will have a single reference source with up-to-date information that is useful to both current and future project participants.
  5. Review: Before the implementation, share the final document version with those who did not take part in its development but will still use it in their work, e.g., the development team. This may bring in some good new ideas, especially when a developer is highly engaged in the process.


Creating an end-to-end test plan for each feature, as well as discussing it after implementation, does take time. The results, however, more than make up for the effort. Our testing processes finally became transparent and harmonized; other client teams also liked our idea and are now using it, too. We all follow the same path now, looking for bottlenecks, resolving issues, and making the workflow comfortable for all project members.

This concludes my article, thanks for reading it up to the end!

