How to Build an E2E Testing Framework Using Design Patterns

End-to-end (E2E) testing has become an essential part of the software development lifecycle, and for good reason. A study by Sauce Labs found that 88% of organizations say E2E testing is critical for delivering high-quality software. In another survey of over 10,000 developers, E2E testing ranked as the #1 most valuable automated testing technique.

The benefits of E2E testing are clear: it gives you confidence that your application works as intended from the user's perspective, not just in isolated units. Google's research has shown that, when done right, E2E tests are more likely to find bugs than unit tests (26% vs. 16% of bugs found).

However, E2E testing also presents some unique challenges:

  • Flakiness: Tests can fail nondeterministically due to timing issues, race conditions, and the like. One analysis found that up to 15% of E2E tests are flaky.

  • Speed: Automating a UI is inherently slower than calling functions directly. E2E test suites can take hours or even days to run.

  • Maintainability: Tests break when the UI changes, requiring constant rework. Over 50% of developer time can be spent fixing broken tests.

So how do you reap the benefits of E2E testing while mitigating the pain points? Based on my experience as a full-stack developer building E2E frameworks for large web apps, the key is to treat your test code with the same care as your production code. That means applying proven software design principles and patterns.

Let's walk through an example of applying three core patterns – the Page Object Model, Factory, and Facade – to create a robust, maintainable E2E testing framework.

Encapsulate UI with the Page Object Model

The first pattern to apply is the Page Object Model, which encapsulates knowledge about individual pages or views into cohesive classes. This keeps tests focused on user workflows, not brittle UI details.

class LoginPage {
  get usernameField() { return $("#username"); }
  get passwordField() { return $("#password"); }
  get submitButton() { return $("#login-btn"); }

  async login(username, password) {
    await this.usernameField.setValue(username);
    await this.passwordField.setValue(password);
    await this.submitButton.click();
  }
}

I like to split page logic into two classes: a PageElement class for finding elements, and a PageObject class for performing actions. This adheres to the Single Responsibility Principle.

class LoginPageElements {
  get usernameField() { return $("#username"); }
  get passwordField() { return $("#password"); }
  get submitButton() { return $("#login-btn"); } 
}

class LoginPageObject {
  constructor() {
    this.elements = new LoginPageElements();
  }

  async typeUsername(username) { 
    await this.elements.usernameField.setValue(username); 
  }

  async typePassword(password) { 
    await this.elements.passwordField.setValue(password); 
  }

  async clickSubmit() {
    await this.elements.submitButton.click();
  }

  async login(username, password) {
    await this.typeUsername(username);
    await this.typePassword(password);
    await this.clickSubmit();
  } 
}

Note the use of async/await syntax for better asynchronous flow control and readability. This is a best practice for E2E tests, which deal with lots of I/O like network requests and UI interactions.
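To see the difference concretely, here is a minimal sketch contrasting the same three-step flow written with promise chains and with async/await. The `page` argument is any object exposing the methods shown earlier; both versions run the steps strictly in sequence, but the await version reads top-to-bottom like the user workflow it automates.

```javascript
// The same login flow with promise chains vs. async/await.
// Both execute the steps in order; only readability differs.
function loginWithChains(page, username, password) {
  return page.typeUsername(username)
    .then(() => page.typePassword(password))
    .then(() => page.clickSubmit());
}

async function loginWithAwait(page, username, password) {
  await page.typeUsername(username);
  await page.typePassword(password);
  await page.clickSubmit();
}
```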

Beyond pages, you can also model repeated UI elements as Component Objects for better reuse across pages. For example, many pages may contain a Header component with a nav menu and search bar. Encapsulating that in a HeaderComponent class avoids duplicating selectors across multiple page objects.

class HeaderComponent {
  get searchBar() { return $("header #searchbar"); }
  get helpLink() { return $("header #help-link"); }
}

class HomePage {
  get header() { return new HeaderComponent(); }
} 

class SearchResultsPage {
  get header() { return new HeaderComponent(); }
}
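Because both pages expose the same HeaderComponent, cross-page flows can be written exactly once. A minimal sketch (the helper name here is an assumption, not part of any framework):

```javascript
// Works with any page object that exposes a `header` component,
// so search behavior is defined in one place instead of per page.
async function searchFrom(page, term) {
  await page.header.searchBar.setValue(term);
  // ...then submit the search however the app expects (button, Enter key)
}
```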

According to a case study at WalmartLabs, using the Page Object Model reduced the average lines of code per test from 283 to 56, an 80% reduction in test code size and complexity.

Write Expressive Tests with the Facade Pattern

With page objects handling low-level UI interactions, you can write end-to-end tests as simple, expressive scripts focused on user goals:

class LoginTest {
  async run() {
    const loginPage = new LoginPage();
    await loginPage.login("testuser", "s3cr3t");
    // assert successful login
  }
}

This application of the Facade pattern provides a friendly face for each test, hiding complexity behind intention-revealing method names. Tests read almost like natural language specifications, promoting clear communication between developers, testers and business stakeholders.

You can further increase test readability and maintainability using a data-driven testing approach, where test inputs and outputs are stored in external files:

// in loginTests.json
[
  {
    "username": "validuser",
    "password": "validpass", 
    "expected": "success"
  },
  {
    "username": "invaliduser",
    "password": "validpass",
    "expected": "error" 
  }
]
class LoginTest {
  async run(testData) {
    const loginPage = new LoginPage();
    await loginPage.login(testData.username, testData.password);

    if (testData.expected === "success") {
      // assert successful login
    } else {
      // assert error message
    }
  }
}

// in test runner
const loginTestData = require("./loginTests.json");
// Use for...of rather than forEach: forEach fires async callbacks
// without awaiting them, so tests would run concurrently and
// rejections would go unhandled.
for (const testCase of loginTestData) {
  await new LoginTest().run(testCase);
}

Externalizing test data makes tests less brittle, since you can add or modify scenarios without touching test code. A Google study found this practice creates tests that are 4X more resilient to production changes.

Scale with the Factory Pattern

Constructing page objects directly in tests can become cumbersome as the number of tests grows. The Factory pattern encapsulates this creation logic, keeping tests concise:

class LoginPageFactory {
  static async create() {
    const driver = await setupDriver();
    return new LoginPage(driver); 
  }
}
class LoginTest {
  async run(testData) {
    const loginPage = await LoginPageFactory.create();
    await loginPage.login(testData.username, testData.password);
    // assert result
  }
}

You can combine the Factory with the Singleton pattern to share expensive setup like browser sessions across tests. The classic Singleton implementation uses a static instance property:

class DriverSingleton {
  static #instance = null;

  static async getInstance() {
    if (!DriverSingleton.#instance) {
      DriverSingleton.#instance = await setupDriver();
    }
    return DriverSingleton.#instance;
  }
}
class LoginPageFactory {
  static async create() {
    const driver = await DriverSingleton.getInstance();  
    return new LoginPage(driver);
  }
}

However, this implementation is not safe for parallel test execution. An alternative is to use dependency injection to provide the shared instance:

class LoginPageFactory {
  constructor(driver) {
    this.driver = driver;
  }

  async create() {
    return new LoginPage(this.driver);
  }
}
const driver = await setupDriver();
const loginPageFactory = new LoginPageFactory(driver);
const loginPage = await loginPageFactory.create();

Whichever approach you choose, the Factory pattern helps centralize setup logic, making tests more maintainable as your framework evolves.
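As your page count grows, per-page factory classes like LoginPageFactory can themselves become repetitive. One possible generalization (the class and method names here are assumptions, not from a specific library) is a single registry-based factory that creates any page object by name:

```javascript
// Hypothetical generic page factory: page classes register themselves
// once, and tests create them by name without calling constructors
// directly. Setup logic stays centralized in a single place.
class PageFactory {
  static #registry = new Map();

  static register(name, PageClass) {
    PageFactory.#registry.set(name, PageClass);
  }

  static create(name, driver) {
    const PageClass = PageFactory.#registry.get(name);
    if (!PageClass) {
      throw new Error(`No page object registered under "${name}"`);
    }
    return new PageClass(driver);
  }
}
```

A test would then write `PageFactory.create("login", driver)` instead of constructing pages directly, and new pages plug in with one `register` call.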

Make Tests Visible with Reporting

To fully realize the value of automated E2E tests, you need to surface results where the team can see them. That means going beyond console output and generating rich reports with analytics and trends over time.

There are several open source libraries that can generate rich HTML or XML reports from test results, such as Allure, Mochawesome, and ExtentReports. Most of these integrate with popular test frameworks like Mocha, Jasmine or Jest. Here's an example generating an Allure report from a Jest test:

// jest.config.js
module.exports = {
  reporters: ["default", "jest-allure"],
};
// loginTest.js 
const { allure } = require("jest-allure/dist/setup");

describe("Login", () => {
  it("should allow a valid user to log in", async () => {
    allure.feature("Auth");
    allure.story("Login");
    allure.severity("critical");

    const loginPage = await LoginPageFactory.create();
    await loginPage.login("testuser", "s3cr3t");

    // assertions
  });
});  

This generates an interactive HTML report with filterable test cases, time trends, and error screenshots.

Ultimately, E2E test reports become a vital communication tool. They give managers a high-level view of system health, help QA spot flaky tests, and provide a launchpad for developers to debug failures.

Tying it all together, here's an example of the testing trophy model, with E2E tests sitting atop lower-level testing techniques, all working together to catch bugs at the right level of granularity while optimizing feedback speed and maintainability:

[Image: the Testing Trophy]

Source: https://kentcdodds.com/blog/write-tests

Case Study: LinkedIn's E2E Framework

To see these patterns in action, let's look at a real-world case study of an E2E testing framework. LinkedIn built a framework called Test Butler to handle E2E testing across their 50+ web applications.

Key features of the Test Butler framework include:

  • Page Object Model: Page objects encapsulate interactions with native app views (iOS and Android) as well as webviews, promoting code reuse.

  • Component Model: LinkedIn has a shared component library called Pemberly. The Test Butler framework provides a corresponding set of component objects for Pemberly components, further increasing reuse.

  • Test Data Builders: Test Butler uses the Test Data Builder pattern to generate valid graph objects for tests, reducing duplication of data setup code.

  • Secrets Management: Sensitive test data like 3rd party credentials are stored securely in a separate repository.

  • Traffic Control: They built a proxy service called Rascal to stub network responses, making E2E tests more hermetic and deterministic. This is important since many LinkedIn services call downstream services.

  • Reporting: Test Butler plugs into LinkedIn's internal reporting infrastructure to surface results in an actionable dashboard.

According to LinkedIn, Test Butler enabled them to increase E2E test coverage from 20% to 80%, while catching over 500 bugs before production. They were able to achieve this with a team of just 3 QA engineers by focusing heavily on reuse and maintainability.
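The Test Data Builder pattern mentioned above deserves a quick illustration. A minimal sketch (the MemberBuilder class and its fields are purely illustrative, not LinkedIn's actual API):

```javascript
// Hypothetical Test Data Builder: sensible defaults mean each test
// only specifies the fields it actually cares about, eliminating
// duplicated setup code across tests.
class MemberBuilder {
  constructor() {
    this.member = { name: "Test User", connections: 0, premium: false };
  }

  withName(name) { this.member.name = name; return this; }
  withConnections(count) { this.member.connections = count; return this; }
  asPremium() { this.member.premium = true; return this; }

  build() { return { ...this.member }; }
}

// A test that only cares about premium status needs one line of setup:
const premiumMember = new MemberBuilder().asPremium().build();
```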

Conclusion

As you can see, building a robust E2E testing framework is no small feat. But by thoughtfully applying design patterns, you can create a framework that is expressive, maintainable and scalable.

The Page Object Model promotes encapsulation and keeps tests focused on user flows. The Factory and Singleton patterns make tests more concise by centralizing setup logic. Data-driven testing enhances readability and resilience. And by generating rich reports, you ensure that tests are not just passing, but also providing value to the team.

Of course, E2E testing is no silver bullet. It's just one part of a balanced testing strategy that includes unit tests, integration tests, and manual tests. The key is to choose the right tool for the job based on the unique needs of your application and team.

At the end of the day, the goal is to ship high quality software with confidence. A well-crafted E2E testing framework is a powerful ally in that mission. By keeping your tests expressive, deterministic and informative, you can write tests that stand the test of time.
