Implications of Rewriting a Browser Component in Rust

The previous posts in this Fearless Security series examine memory safety and thread safety in Rust. This closing post uses the Quantum CSS project as a case study to explore the real-world impact of rewriting code in Rust.

The style component is the part of a browser that applies CSS rules to a page. This is a top-down process on the DOM tree: given the parent style, the styles of children can be calculated independently—a perfect use-case for parallel computation. By 2017, Mozilla had made two previous attempts to parallelize the style system using C++. Both had failed.
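
To make that concrete, here is a rough sketch of the idea, not Quantum CSS code: with hypothetical Node and Style types, the rayon crate can distribute the independent per-child style computations across a thread pool.

    use rayon::prelude::*;

    struct Style { font_size: f32 }
    struct Node { children: Vec<Node> }

    // Compute a node's style from its parent's style; real cascading and
    // inheritance logic would go here.
    fn compute_style(parent: &Style, _node: &Node) -> Style {
        Style { font_size: parent.font_size }
    }

    // Each child depends only on its parent's computed style, so the
    // recursion over the children can safely run in parallel.
    fn style_subtree(node: &Node, parent: &Style) {
        let style = compute_style(parent, node);
        node.children
            .par_iter()
            .for_each(|child| style_subtree(child, &style));
    }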

Quantum CSS resulted from a need to improve page performance. Improving security is a happy byproduct.

Rewriting code to make it faster also made it more secure

There’s a large overlap between memory safety violations and security-related bugs, so we expected this rewrite to reduce the attack surface in Firefox. In this post, I will summarize the potential security vulnerabilities that have appeared in the styling code since Firefox’s initial release in 2002. Then I’ll look at what could and could not have been prevented by using Rust.

Over the course of its lifetime, there have been 69 security bugs in Firefox’s style component. If we’d had a time machine and could have written this component in Rust from the start, 51 (73.9%) of these bugs would not have been possible. While Rust makes it easier to write better code, it’s not foolproof.

Rust

Rust is a modern systems programming language that is type- and memory-safe. As a side effect of these safety guarantees, Rust programs are also free of data races, a property the compiler enforces at build time. Thus, Rust can be a particularly good choice when:

✅ processing untrusted input safely.
✅ introducing parallelism to improve performance.
✅ integrating isolated components into an existing codebase.

However, there are classes of bugs that Rust explicitly does not address, particularly correctness bugs. In fact, during the Quantum CSS rewrite, engineers accidentally reintroduced a severe security bug that had previously been patched in the C++ code, regressing the fix for bug 641731. This allowed global history leakage via SVG image documents, resulting in bug 1420001. Because it permits trivial history stealing, the bug is rated security-high. The original fix was an additional check to see whether the SVG document was being used as an image; unfortunately, that check was overlooked during the rewrite.

While there were automated tests intended to catch :visited rule violations like this, in practice, they didn’t detect this bug. To speed up our automated tests, we temporarily turned off the mechanism that tested this feature—tests aren’t particularly useful if they aren’t run. The risk of re-implementing logic errors can be mitigated by good test coverage (and actually running the tests). There’s still a danger of introducing new logic errors.

As developer familiarity with the Rust language increases, best practices will improve. Code written in Rust will become even more secure. While it may not prevent all possible vulnerabilities, Rust eliminates an entire class of the most severe bugs.

Quantum CSS Security Bugs

Overall, bugs related to memory, bounds, null/uninitialized variables, or integer overflow would be prevented by default in Rust. The one bug classified as miscellaneous would not have been prevented: it was a crash caused by a failed allocation.

[Chart: security bugs by category]

All of the bugs in this analysis are related to security, but only 43 received official security classifications. (These are assigned by Mozilla’s security engineers based on educated “exploitability” guesses.) Normal bugs might indicate missing features or problems like crashes. While undesirable, crashes don’t result in data leakage or behavior modification. Official security bugs can range from low severity (highly limited in scope) to critical vulnerability (might allow an attacker to run arbitrary code on the user’s platform).

There’s a significant overlap between memory vulnerabilities and severe security problems. Of the 34 critical/high bugs, 32 were memory-related.

[Chart: security-rated bug breakdown]

Comparing Rust and C++ code

Bug 955914 is a heap buffer overflow in the GetCustomPropertyNameAt function. The code used the wrong variable for indexing, which resulted in reading memory past the end of the array. This could either cause a crash while accessing a bad pointer or copy out-of-bounds memory into a string that is then passed to another component.

The ordering of all CSS properties (both longhand and custom) is stored in an array, mOrder. Each element is either represented by its CSS property value or, in the case of custom properties, by a value that starts at eCSSProperty_COUNT (the total number of non-custom CSS properties). To retrieve the name of a custom property, first, you have to retrieve the custom property value from mOrder, then access the name at the corresponding index of the mVariableOrder array, which stores the custom property names in order.

Vulnerable C++ code:

    void GetCustomPropertyNameAt(uint32_t aIndex, nsAString& aResult) const {
      MOZ_ASSERT(mOrder[aIndex] >= eCSSProperty_COUNT);

      aResult.Truncate();
      aResult.AppendLiteral("var-");
      aResult.Append(mVariableOrder[aIndex]);
    }

The problem occurs in the last line of the function, where aIndex is used to access an element of the mVariableOrder array. aIndex is intended for use with the mOrder array, not the mVariableOrder array. The index of the custom property name that corresponds to mOrder[aIndex] is actually mOrder[aIndex] - eCSSProperty_COUNT.

Fixed C++ code:

    void GetCustomPropertyNameAt(uint32_t aIndex, nsAString& aResult) const {
      MOZ_ASSERT(mOrder[aIndex] >= eCSSProperty_COUNT);

      uint32_t variableIndex = mOrder[aIndex] - eCSSProperty_COUNT;
      aResult.Truncate();
      aResult.AppendLiteral("var-");
      aResult.Append(mVariableOrder[variableIndex]);
    }

Equivalent Rust code

While Rust is similar to C++ in some ways, idiomatic Rust uses different abstractions and data structures. Rust code will look very different from C++ (see below for details). First, let’s consider what would happen if we translated the vulnerable code as literally as possible:

    fn GetCustomPropertyNameAt(&self, aIndex: usize) -> String {
        assert!(self.mOrder[aIndex] >= self.eCSSProperty_COUNT);

        let mut result = "var-".to_string();
        result += &self.mVariableOrder[aIndex];
        result
    }

The Rust compiler would accept this code, since the length of a vector cannot be determined before runtime. Unlike arrays, whose length must be known at compile time, the Vec type in Rust is dynamically sized. However, the standard library vector implementation has built-in bounds checking: when an invalid index is used, the program immediately terminates in a controlled fashion (a panic), preventing any illegal access.
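
As a minimal, self-contained illustration of that bounds checking (not Quantum CSS code): indexing a Vec out of range panics immediately, while the checked get method returns None instead of touching invalid memory.

    fn main() {
        let names = vec!["foo".to_string(), "bar".to_string()];

        // Checked access: an out-of-range lookup returns None.
        assert_eq!(names.get(5), None);

        // Direct indexing is bounds-checked too: this panics with an
        // "index out of bounds" error instead of reading past the buffer.
        let _name = &names[5];
    }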

The actual code in Quantum CSS uses very different data structures, so there’s no exact equivalent. For example, we use Rust’s powerful built-in data structures to unify the ordering and property name data. This allows us to avoid having to maintain two independent arrays. Rust data structures also improve data encapsulation and reduce the likelihood of these kinds of logic errors. Because the code needs to interact with C++ code in other parts of the browser engine, the new GetCustomPropertyNameAt function doesn’t look like idiomatic Rust code. It still offers all of the safety guarantees while providing a more understandable abstraction of the underlying data.
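
One way to picture that unification (a hypothetical sketch with made-up names like OrderedProperty, not the actual Quantum CSS data structures): a single vector whose entries carry either a built-in property ID or the custom property's name, so there is no second array to index incorrectly.

    // Each slot owns everything needed to describe the property at that position.
    enum OrderedProperty {
        Known(u32),       // stand-in for a built-in CSS property ID
        Custom(String),   // a custom property's name travels with its slot
    }

    fn custom_property_name_at(order: &[OrderedProperty], index: usize) -> Option<String> {
        match order.get(index)? {
            OrderedProperty::Custom(name) => Some(format!("var-{}", name)),
            OrderedProperty::Known(_) => None,
        }
    }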

tl;dr;

Due to the overlap between memory safety violations and security-related bugs, we can say that Rust code should result in fewer critical CVEs (Common Vulnerabilities and Exposures). However, even Rust is not foolproof. Developers still need to be aware of correctness bugs and data leakage attacks. Code review, testing, and fuzzing still remain essential for maintaining secure libraries.

Compilers can’t catch every mistake that programmers can make. However, Rust has been designed to remove the burden of memory safety from our shoulders, allowing us to focus on logical correctness and soundness instead.

16 comments

  1. stillDreaming1

    What is fuzzing and how does it relate to security?

    February 28th, 2019 at 12:29

    1. Diane Hosfelt

      Fuzzing is a technique that provides random inputs to a program and sees if any inputs cause crashes or other problems. Browsers in particular use fuzzing extensively to discover potential vulnerabilities.

      February 28th, 2019 at 12:57

  2. Aky

    Nice article! Rust’s type and memory safety benefits sound good… but it seems to describe something happening more in Mozilla’s labs than in practice – where it’s a hit-and-miss to get it to work in the first place. The most advertised use of Rust is in Firefox, but it can’t compile the current ESR (60.5.2) :
    rustc[25334]: segfault at ffffffff ip 00000000ffffffff sp 00007f37697fcf70 error 14 in rustc[55db58dbc000+1000]

    Since many major releases of Firefox required specific versions of rustc and sometimes not even that specificity is enough, rust seems too unstable to qualify as a systems programming language.

    When Rust actually works, I’d love to give it another try. But until then, it remains just an added dependency of Firefox.

    February 28th, 2019 at 12:47

    1. D D

      It could be that Firefox’s build system is trying to do things in a way that is too complicated and fragile or somehow incorrect — or it could be that Firefox’s Rust code relies too heavily on experimental rust language features…

      But I don’t think it’s common for code to cause rustc to error out/segfault. Also, there is a lot of low-level code, even an operating system kernel being written in Rust. So I think it works pretty well for a systems programming language. People working on the language admit it is a young language with room to grow and mature. But I think Rust can be “ready for production” at this point, especially if you reach out to the rust community for support or troubleshooting if/when needed.

      March 6th, 2019 at 19:43

  3. tim

    Those *correctness bugs* are mostly known as logical errors. Great article!

    February 28th, 2019 at 13:20

    1. Rune K. Svendsen

      The given example sounds like a memory safety bug, not a logical error.

      March 6th, 2019 at 08:47

  4. teki

    C++ can do bounds checking too, just use .at() instead of [].

    February 28th, 2019 at 19:53

  5. Tom Gee

    Years ago, we named this type of testing the “five year old test suite” after the five year old daughter of one of our colleagues who was left in front of a live program under test. When the father returned, she was merrily typing away at a frozen screen (this was systems code). When he asked what she was doing, she smiled and said, “writing to grand ma”. We howled. Whenever he brought her in, we’d let her type a letter to grand ma. Many times it produced either unexpected results or a crash.

    Sorry to interject, but too good to pass up the opportunity to mention. Please delete if you like.

    Best Regards,

    TGee

    March 1st, 2019 at 17:28

  6. Carl Rash

    Have you ever seen the size of a Rust hello world app? It is mind-boggling.

    March 4th, 2019 at 04:38

    1. Jonathan Adams

      fn main() {
          println!("Hello World!");
      }

      That seems about par-for-the-course to me.

      March 7th, 2019 at 12:14

    2. D D

      Last time I checked, a compiled hello_world in rust was about 2.4 MB on disk (using cargo/rustc 1.33.0), which is pretty large compared to something like a compiled hello_world in C (about 16.5 kilobytes, using gcc 8).

      There was a post explaining the priorities behind this over on reddit a few months ago: https://www.reddit.com/r/rust/comments/9m2dwo/noob_question_why_are_rust_binaries_so_big/

      As mentioned in the thread, there are a few features included into every binary by default that increase the size. (This can be turned off, apparently.)

      I personally don’t feel qualified to know whether I should agree with the tradeoffs made, but for anyone curious about why the binaries for really simple apps seem larger in Rust than for other languages (using Rust’s default settings), that thread explains it. And there is some justification for the size, at least.

      March 8th, 2019 at 09:39

    3. jcdyer

      Most of that is just debugging symbols, which are included by default in rust builds. Running strip on the builds clears most of that away:

      cliff@conakry:~$ cargo new /tmp/foo
      Created binary (application) `/tmp/foo` package
      cliff@conakry:/tmp/foo$ cargo build
      Compiling foo v0.1.0 (/tmp/foo)
      Finished dev [unoptimized + debuginfo] target(s) in 0.96s
      cliff@conakry:/tmp/foo$ cargo build --release
      Compiling foo v0.1.0 (/tmp/foo)
      Finished release [optimized] target(s) in 0.21s
      cliff@conakry:/tmp/foo$ ls -al target/*/foo
      -rwxr-xr-x 2 cliff cliff 2427528 Mar 19 09:53 target/debug/foo
      -rwxr-xr-x 2 cliff cliff 2415600 Mar 19 09:53 target/release/foo
      cliff@conakry:/tmp/foo$ strip target/*/foo
      cliff@conakry:/tmp/foo$ ls -al target/*/foo
      -rwxr-xr-x 2 cliff cliff 199080 Mar 19 09:54 target/debug/foo
      -rwxr-xr-x 2 cliff cliff 194904 Mar 19 09:54 target/release/foo

      200k is still bigger than 16.5k, but not too bad for most uses. When you get to real program sizes, it’s even less relevant. Building with #[no_std], which you’ll probably be doing in any severely constrained environments, can make the resulting binary even smaller.

      March 19th, 2019 at 06:57

  7. Dan Neely

    There’s something wrong with your charts. They’re showing as embedded JSFiddles instead of rendering; and when I tried opening and running them in new tabs they failed silently. (Clicking run didn’t produce any output.)

    March 4th, 2019 at 05:46

    1. Diane Hosfelt

      Thanks! Should be fixed now

      March 5th, 2019 at 10:45

  8. Javier Sánchez

    I’d like to try it.

    March 7th, 2019 at 09:24

  9. Wellington Torrejais da SIlva

    Nice! Thanks!

    March 8th, 2019 at 06:40

Comments are closed for this article.