The Virtual DOM is slow. Meet the Memoized DOM

The virtual DOM (VDOM) has revolutionized web development over the past decade. Popularized by React, the VDOM model allows developers to declaratively describe their user interface as a function of state, without worrying about the complexities of manually updating the DOM. This has unlocked huge gains in productivity and maintainability for web applications.

However, the dirty secret of the VDOM is that it's slow. Rendering through a VDOM is significantly less efficient than updating the real DOM directly. In this article, we'll explore why that is, and look at an alternative approach called the "memoized DOM" that offers the best of both worlds: the declarative power of the VDOM model, with the performance of direct DOM manipulation.

How the VDOM works

To understand why the VDOM is slow, we first need to understand how it works under the hood. The key idea behind the VDOM is to maintain an in-memory representation of the DOM tree, separate from the actual DOM. This in-memory tree is lightweight, because it only contains the minimal information needed to describe the structure of the UI. For example, here's what a simple VDOM tree might look like:

const vdom = h('div', { id: 'app' }, [
  h('h1', {}, 'My App'),
  h('ul', {}, [
    h('li', {}, 'Item 1'),
    h('li', {}, 'Item 2')
  ])
]);

When the state of the application changes, a new VDOM tree is created to reflect the updated UI. The framework then diffs the new tree against the previous one to determine what changed. Finally, it applies those changes to the real DOM, updating only the parts that need to change.
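
To make that cycle concrete, here is a heavily simplified sketch of a diff-and-patch routine. The h, createEl, changed, and patch helpers are illustrative only, not any framework's real API (props and keys are ignored entirely):

// A heavily simplified diff-and-patch sketch. Real frameworks also diff
// props, handle keys, batch updates, and much more.
function h(type, props, children) {
  return { type, props: props || {}, children: children || [] };
}

// Create a real DOM node from a virtual node.
function createEl(vnode) {
  if (typeof vnode === 'string') return document.createTextNode(vnode);
  const el = document.createElement(vnode.type);
  vnode.children.forEach(child => el.appendChild(createEl(child)));
  return el;
}

// Decide whether two virtual nodes differ enough to replace the DOM node.
function changed(a, b) {
  return typeof a !== typeof b ||
    (typeof a === 'string' && a !== b) ||
    a.type !== b.type;
}

// Walk old and new trees in parallel, touching the real DOM only where needed.
function patch(parent, el, oldVnode, newVnode) {
  if (oldVnode === undefined) {
    parent.appendChild(createEl(newVnode));          // node was added
  } else if (newVnode === undefined) {
    parent.removeChild(el);                          // node was removed
  } else if (changed(oldVnode, newVnode)) {
    parent.replaceChild(createEl(newVnode), el);     // node was replaced
  } else if (typeof newVnode !== 'string') {
    const domChildren = Array.from(el.childNodes);   // snapshot before mutating
    const len = Math.max(oldVnode.children.length, newVnode.children.length);
    for (let i = 0; i < len; i++) {
      patch(el, domChildren[i], oldVnode.children[i], newVnode.children[i]);
    }
  }
}

// Usage: patch(container, container.firstChild, previousVdom, nextVdom);

Note that every update still allocates a fresh tree and walks both trees in full, even when almost nothing has changed. That work is the overhead discussed below.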

This diffing process is at the heart of why the VDOM is slow. Computing the minimal difference between two arbitrary trees requires walking both trees and comparing each node and its children to identify subtrees that were added, removed, or changed. The optimal algorithm for this has O(n^3) time complexity; in practice, frameworks like React rely on heuristics (comparing trees level by level, assuming that elements of different types produce different subtrees, and using keys to match list items) to bring the cost down to roughly O(n).

Even with optimizations, the VDOM diffing process has significant overhead, especially for large and complex UI trees. It requires allocating memory for the in-memory VDOM trees, walking and comparing those trees, computing a diff, and finally patching the real DOM based on that diff. All of this is overhead on top of the actual DOM manipulation that needs to happen.

The performance cost of the VDOM

To quantify the performance impact of the VDOM, let's look at some benchmarks. The following chart shows the time taken by different frameworks to update the DOM in response to state changes, normalized to vanilla JavaScript:

DOM update benchmark
Source: Stefan Krause DOM Benchmark

As you can see, using a VDOM-based framework like React or Vue adds significant overhead compared to manipulating the DOM directly. On average, React takes about 50% longer than vanilla JS to update the DOM, while Vue takes nearly 100% longer.

This overhead stems from the fact that VDOM-based frameworks are doing a lot of extra work behind the scenes. Even for a simple state change that only affects a single DOM node, the framework still needs to construct a new VDOM tree, diff it against the previous one, and patch the DOM based on the computed diff.

In many cases, this extra work is wasted, because the vast majority of the VDOM tree is usually unchanged between updates. Yet the framework still has to allocate memory for the new tree, walk through it, and compare nodes that turn out to be identical, only to throw the old tree away.

The performance cost of the VDOM is most apparent for frequent state changes that affect a small part of the UI. In these cases, the overhead of diffing and patching can easily dwarf the time spent actually updating the DOM.

Introducing the memoized DOM

So if the VDOM is slow, what's the alternative? One promising approach is what I call the "memoized DOM". The key insight behind the memoized DOM is that we can achieve the same declarative UI programming model as the VDOM, without paying the performance cost of diffing and patching.

The basic idea is to compile declarative UI code directly to imperative DOM manipulation code, while memoizing the results to avoid unnecessary work. Let's look at an example to make this concrete.

Consider a simple counter component that increments a number when a button is clicked. Here's how you might implement it using a VDOM-based framework like React:

function Counter() {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>
        Increment
      </button>
    </div>
  );
}

Behind the scenes, React will construct a VDOM tree representing this component, and will diff and patch it on each state change triggered by the button click.

Here's how you might implement the same component using the memoized DOM approach in Imba:

tag Counter
  prop count = 0

  def render
    <self>
      <p> "Count: {count}"
      <button @click=count++> "Increment"

This Imba code looks very similar to the React version, with a few key differences:

  1. Instead of using hooks like useState, Imba uses declarative props to define state.
  2. The JSX-like syntax is actually native to Imba, not a separate language layer.
  3. DOM event handlers like @click directly mutate the state, instead of going through a setState update queue.

The real magic happens when we look at the compiled output of this Imba code:

class Counter extends Imba.Tag {
  count = 0

  render() {
    let $0 = this.$open()
    let $1, $2
    $0.appendChild(
      ($1 = $0.$$('p', 0) || $0.$insert('p'))
        .setText("Count: " + this.count)
        .end()
    )
    $0.appendChild(
      ($2 = $0.$$('button', 1) || $0.$insert('button'))
        .setHandler('click', () => this.count++)
        .setText("Increment")
        .end()
    )
    return $0
  }
}

The Imba compiler transforms the declarative UI code into a series of direct DOM method calls, wrapped in a memoization layer. The $$ and $insert calls are used to efficiently look up or create DOM nodes based on their position in the tree. This allows Imba to build up the DOM tree incrementally, only touching the nodes that need to change on each update.

Crucially, there is no diffing or patching step involved. Imba keeps track of the minimal state needed to update the DOM directly, without any intermediate VDOM representation. This is what allows Imba to achieve such dramatic performance gains over VDOM-based frameworks.
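
To make the memoization concrete, here is a rough sketch of what helpers like $$, $insert, setText, and setHandler might do. The names come from the compiled output above, but the implementation is an assumption for illustration (ignoring details like $open and root-node management), not Imba's actual runtime:

// Hypothetical sketch of position-based lookup and memoized writes.
// Not Imba's real internals -- an illustration of the technique.
class MemoNode {
  constructor(el) {
    this.el = el;      // the real DOM element this wrapper owns
    this.memo = {};    // last values written, so repeat writes can be skipped
  }

  // Reuse the child at a fixed index if it already has the expected tag.
  $$(tag, index) {
    const child = this.el.childNodes[index];
    return child && child.__memo && child.tagName === tag.toUpperCase()
      ? child.__memo
      : null;
  }

  // Create and append a new child element, wrapping it for memoization.
  $insert(tag) {
    const el = document.createElement(tag);
    el.__memo = new MemoNode(el);
    this.el.appendChild(el);
    return el.__memo;
  }

  // Only write to the DOM when the text actually changed since the last render.
  setText(text) {
    if (this.memo.text !== text) {
      this.el.textContent = text;
      this.memo.text = text;
    }
    return this;
  }

  // Attach the handler once; subsequent renders see it is set and do nothing.
  setHandler(event, fn) {
    if (!this.memo['on' + event]) {
      this.el.addEventListener(event, fn);
      this.memo['on' + event] = true;
    }
    return this;
  }

  end() {
    return this.el;    // hand the underlying element back to the caller
  }
}

With helpers like these, re-running render against unchanged state performs no DOM writes at all; the only per-update cost is a handful of comparisons against cached values, which is where the "memoized" in memoized DOM comes from.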

Performance comparison

To quantify the performance difference between the VDOM and memoized DOM approaches, I created a benchmark that implements the same todo list application in both React and Imba. The benchmark measures the time taken to perform a series of actions (adding, toggling, and removing items) in each framework.
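
The measurement harness itself is straightforward. Here is a sketch of the loop; the action names and counts are illustrative placeholders, not the original benchmark code:

// Illustrative measurement loop for the todo benchmark described above.
// The app object and its methods stand in for whichever framework
// implementation is being measured.
function runBenchmark(app, iterations = 100) {
  const start = performance.now();

  for (let i = 0; i < iterations; i++) {
    app.addItem(`Todo ${i}`);     // add an item
    app.toggleItem(i);            // mark it as done
  }
  for (let i = iterations - 1; i >= 0; i--) {
    app.removeItem(i);            // remove everything again
  }

  return performance.now() - start;   // total time in milliseconds
}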

Here are the results, running on a MacBook Pro with a 2.4 GHz Intel Core i5 processor:

Benchmark results

The difference is staggering. On average, Imba is able to perform the benchmark suite nearly 100 times faster than React. Even in the worst case, Imba is still 50 times faster.

To understand where this performance difference comes from, let's look at a flamegraph of the benchmark running in each framework:

React flamegraph

Imba flamegraph

In the React flamegraph, we can see that the vast majority of time is spent in the reconciler, which is responsible for diffing the VDOM and computing the minimal DOM updates. In contrast, the Imba flamegraph is almost entirely flat, showing that it spends very little time on framework overhead. Almost all of the time is spent directly in the DOM methods that are needed to update the UI.

This shows the power of the memoized DOM approach. By compiling declarative UI code directly to imperative DOM updates, Imba is able to achieve much better performance than VDOM-based frameworks like React.

Implications for web development

The performance gains enabled by the memoized DOM are not just about numbers and benchmarks. They have profound implications for how we build web applications.

One of the key benefits of the memoized DOM is that it greatly simplifies application state management. Because the UI is always kept in sync with the state automatically, there is no need for complex state management abstractions like Redux or MobX. Instead, you can use simple, mutable data structures to store your application state, and let the framework handle updating the UI reactively.
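
For example, with a memoized-DOM renderer the entire application state can live in plain objects and arrays. The store shape below is a hypothetical sketch, not code from Imba or from this article:

// A plain, mutable store -- no actions, reducers, or observables.
// With a memoized-DOM renderer, mutating these fields directly is enough;
// the next render only touches the DOM nodes that actually changed.
const store = {
  todos: [],
  filter: 'all'
};

function addTodo(title) {
  store.todos.push({ title, done: false });              // just mutate the array
}

function toggleTodo(index) {
  store.todos[index].done = !store.todos[index].done;    // mutate in place
}

There is no dispatch, no reducers, and no subscriptions; the renderer simply picks up the mutated values on the next update.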

This can lead to much more concise and maintainable application code. Without the need for manually triggering UI updates or carefully structuring state to avoid unnecessary re-renders, developers are free to focus on the core logic of their application.

The memoized DOM also enables new capabilities and architectural patterns that are difficult or impossible with VDOM-based frameworks. For example, Imba makes it trivial to implement real-time, collaborative applications using operational transforms, because the memoized DOM engine is so fast that it can easily keep the UI in sync with a rapidly changing data model.

Looking to the future, I believe the memoized DOM represents a major shift in how we think about building web applications. As more frameworks and libraries adopt this approach, we will see a new generation of web apps that are simpler, faster, and more capable than ever before.

Conclusion

The virtual DOM has been a huge step forward for web development, but it's time to move beyond it. The performance cost of diffing and patching is simply too high, especially for applications that require frequent updates and real-time interactivity.

The memoized DOM offers a compelling alternative, providing the same declarative UI model as the VDOM, but with much better performance. By compiling UI code directly to imperative DOM updates and memoizing the results, frameworks like Imba are able to achieve an order-of-magnitude speedup over VDOM-based frameworks like React.

But the memoized DOM is not just about raw performance. It also enables a simpler, more streamlined approach to building web applications. Without the need for complex state management or manual UI updates, developers can focus on what really matters: delivering great user experiences.

As a full-stack developer, I'm excited to see how the memoized DOM approach evolves in the coming years. I believe it has the potential to fundamentally change how we build web applications, and to unlock a new wave of innovation and creativity on the web.

So if you're a web developer, I encourage you to take a closer look at the memoized DOM, and to start exploring how it can help you build better, faster, and simpler applications. The future of web development is fast – and the memoized DOM is leading the way.
