Let’s Encrypt

Today we announced a project that I’ve been working on for a while now – Let’s Encrypt. This is a new Certificate Authority (CA) that is intended to be free, fully automated, and transparent. We want to help make the dream of TLS everywhere a reality. See the official announcement blog post I wrote for more information.

Eric Rescorla and I decided to try to make this happen during the summer of 2012. We were trying to figure out how to increase SSL/TLS deployment, and felt that an innovative new CA would likely be the best way to do so. Mozilla agreed to help us out as our first major sponsor, and by May of 2013 we had incorporated the Internet Security Research Group (ISRG). By September 2013 we had merged a similar project started by EFF and researchers from the University of Michigan into ISRG, and submitted our 501(c)(3) application. Since then we’ve put a lot of work into ISRG’s governance, found the right sponsors, and put together the plans for our CA, Let’s Encrypt.

I’ll be serving as ISRG’s Executive Director while we search for more permanent leadership. During this time I’ll remain with Mozilla.

There are too many people to thank for their help here, many of whom work for our sponsors, but I want to call out Eric Rescorla (Mozilla) and Kevin Dick (Right Side Capital Management) in particular. Eric was my original co-conspirator, and Kevin has spent innumerable hours with me helping to create partnerships and the necessary legal infrastructure for ISRG. Both are incredible at what they do, and I’ve learned a lot from working with them.

Now it’s time to finish building the CA – lots of software to write, hardware to install, and auditing to complete. If you have relevant skills, we hope you’ll join us.

Simple Code Review Checklists

What if, when giving a patch r+ on Mozilla’s Bugzilla, you were presented with the following checklist:

[Checklist image: four items covering the most common classes of regressions]

You could not actually submit an r+ unless you had checked an HTML checkbox next to each item. For patches where any of this is irrelevant, just check the box(es) – you considered it.

Checklists like this are commonly used in industries that value safety, quality, and consistency (e.g. medicine, construction, aviation). I don’t see them as often as I’d expect in software development, despite our commitments to these values.

The idea here is to get people to think about the most common and/or serious classes of errors that can be introduced with nearly all patches. Reviewers tend to focus on whatever issue a patch addresses and pay less attention to the other myriad issues any patch might introduce. Example: a patch adds a null check, the reviewer focuses on pointer validity, and misses a leak being introduced.

Catching mistakes in code review is much, much more efficient than dealing with them after they make it into our code base. Once they’re in, fixing them requires a report, a regression range, debugging, a patch, another patch review, and another opportunity for further regressions. If a checklist like this spurred people to do some extra thinking and eliminated even one in twenty (5%) of preventable regressions in code review, we’d become a significantly more efficient organization.

For this to work, the checklist must be kept short. In fact, there is an art to creating effective checklists, involving more than just brevity, but I won’t get into anything else here. My list here has only four items. Are there items you would add or remove?

Any general thoughts on this, or on variations of it, as a way to reduce regressions?

Informal Test: Building Firefox with VMWare vs. VirtualBox

I needed to set up a Linux virtual machine on a Windows host for working with Firefox OS. I don’t like working in slow VMs, so I did an informal test of VMWare vs. VirtualBox. I prefer to use open-source software when I can, so if VirtualBox is as fast as VMWare, or close, then I’ll just use VirtualBox.

Mozilla developers often work with VMs, and minimizing build times is a frequent topic of discussion, so I thought I’d post my results in case anyone finds them useful. If you have anything to add on the subject, please do so in the comments.

Host Software and Hardware:

  • Windows 7 Professional 64-bit, all updates applied as of Sept 12, 2013
  • Intel Core i7-3930K CPU @ 3.2 GHz, 6 cores
  • 16 GB RAM

Guest Software:

  • Ubuntu 13.04 64-bit, fully updated as of Sept 12, 2013
  • Approximately the minimum number of packages installed in order to build Firefox

VirtualBox Config:

  • VirtualBox 4.2.18
  • 4 CPUs (VirtualBox does not have the CPU vs. core distinction that VMWare does)
  • 6002 MB RAM assigned to the VM

VMWare Config:

  • VMWare Workstation 10, purchased
  • 2 CPUs with 2 cores each, for a total of 4 cores
  • 6004 MB RAM assigned to the VM

The test was to create a Firefox nightly debug build (latest code as of Sept 12, 2013) with the command “time make -f client.mk”, using the “-j4” flag for make.

VirtualBox Build Time

  • real: 28m53.005s
  • user: 88m32.932s
  • sys: 10m6.376s

VMWare Build Time

  • real: 29m31.595s
  • user: 89m22.548s
  • sys: 11m6.192s

Given these results, I’m just going to use VirtualBox. I’ll note, however, that graphics performance (and consequently, UI responsiveness) in VMWare does seem noticeably better than in VirtualBox – the advantage shows even in simple interactions. That doesn’t matter to me, but it might to you; VirtualBox is not bad though. Also, I didn’t test device support, and I didn’t try many different VM configs for the sake of tweaking performance, so take my results as a rough guide at best.

I did test VirtualBox with fewer CPUs, to check a rumor I’ve heard in the past that adding CPUs to a VM doesn’t improve performance much, or can even hurt it. That rumor seems not to be true, or is no longer true, at least with VirtualBox and my setup: build times scaled nearly linearly, going from 101m to 52m to 28m for 1, 2, and 4 CPUs, respectively.

Improving DNS Performance in Firefox for Android

Mozilla has been working hard to improve Firefox on Android. The following is a guest post from Steve Workman of Mozilla’s networking team which describes an effort to improve DNS performance. – Josh

It started with some crashes on Android that were due to getaddrinfo being called from multiple threads. The problem was that the version of getaddrinfo supplied by Bionic (Android’s minimal-but-fast libc implementation) in pre-Honeycomb Android isn’t thread-safe, because its fopen/fclose and related functions aren’t. Multiple threads were accessing the same file pointer while reading the local hosts file, resulting in crashes.

Why were we calling getaddrinfo on multiple threads? Calls to getaddrinfo can block until a response is received from a DNS server. This can take a while, especially if there is a problem and we wait for the timeout. Making parallel getaddrinfo calls allows us to cut down on waiting and get more done at once. Sockets can be opened sooner, HTTP requests can be sent sooner, and ultimately your content can be received and displayed sooner. Not being able to make parallel calls to getaddrinfo would be a serious performance regression, especially on mobile where round trip times are generally longer.
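
To make that concrete, here is a minimal sketch – illustrative only, not Gecko’s actual resolver code – of issuing several getaddrinfo calls in parallel with POSIX threads. Because each call can block for a full DNS round trip (or timeout), the total wait is bounded by the slowest lookup rather than the sum of all of them:

    #include <netdb.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Resolve one hostname; this call may block for a full DNS
     * round trip, or for the timeout if the server is unreachable. */
    static void *resolve(void *arg) {
        const char *host = (const char *)arg;
        struct addrinfo hints, *res = NULL;
        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;
        int err = getaddrinfo(host, "80", &hints, &res);
        if (err == 0)
            freeaddrinfo(res);
        else
            fprintf(stderr, "%s: %s\n", host, gai_strerror(err));
        return NULL;
    }

    int main(void) {
        const char *hosts[] = { "example.com", "example.net", "example.org" };
        pthread_t t[3];
        /* Start all lookups at once instead of waiting on each in turn. */
        for (int i = 0; i < 3; i++)
            pthread_create(&t[i], NULL, resolve, (void *)hosts[i]);
        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

On pre-Honeycomb Bionic, code like this would crash intermittently, because the threads race on the file pointer getaddrinfo uses internally to read the hosts file.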

First we needed a quick fix for the crash – a performance regression is better than a crash regression – so we temporarily serialized calls to getaddrinfo and disabled prefetching (predictive DNS resolution).

After that, we decided to provide our own thread-safe version of getaddrinfo, bypassing Bionic’s. Our implementation would use mmap’d access to the local hosts file, calling open/close directly, thus providing a thread-safe function. However, since we were replacing a library-exposed function, it meant calling functions and using structures that were not exposed, at least not officially. After a few failed attempts at getting away with dependencies on some unofficially exposed symbols, we finally pulled in a fairly complete version of the host resolver from Gingerbread. This added a bit to our library size, but it allows parallel calls to getaddrinfo on Android again. Given the potential for such calls to block for the duration of a DNS request, we believe this is a good tradeoff.
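
The heart of the thread-safety fix is easy to sketch. Something along these lines – a simplified illustration, not Mozilla’s actual resolver code – replaces the shared FILE* with a per-call file descriptor and a read-only mapping:

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map the hosts file read-only. Every caller gets its own fd and
     * mapping, so concurrent lookups never share a file pointer. */
    static const char *map_hosts_file(size_t *len_out) {
        int fd = open("/etc/hosts", O_RDONLY);
        if (fd < 0)
            return NULL;
        struct stat st;
        if (fstat(fd, &st) != 0 || st.st_size == 0) {
            close(fd);
            return NULL;
        }
        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd); /* the mapping remains valid after close */
        if (p == MAP_FAILED)
            return NULL;
        *len_out = (size_t)st.st_size;
        return (const char *)p; /* caller parses the text, then munmap()s */
    }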

This change is currently scheduled to ship in Firefox 11.


Steve Workman

Firefox 4 for Mac OS X: Under the Hood

Firefox 4 will be an exciting release and we’ve made a number of improvements specific to Mac OS X. Users will benefit primarily in terms of speed, stability, and security. We’ve come a long way since Firefox 3 for Mac OS X.

First, we’ve switched from a ppc/i386 universal binary to an i386/x86_64 universal binary. The default architecture on Mac OS X 10.6 will be x86_64. The default architecture on Mac OS X 10.5 will be i386. You will be able to run in i386 mode on Mac OS X 10.6 if you choose to do so but you will not be able to run in x86_64 mode on Mac OS X 10.5. Performance is the primary motivation for the move to x86_64. These numbers comparing Firefox 4b7 i386 to Firefox 4b7 x86_64 on Mac OS X 10.6.4 give some idea of the kinds of gains we’re seeing from the architecture change alone:

  • Cold startup: x86_64 is ~26% faster
  • Warm startup: x86_64 is ~5% faster
  • MS Psychedelic Browsing Demo: x86_64 is ~540% faster
  • MS Speed Reading Demo: x86_64 is ~35% faster

A big part of this is the availability of more CPU registers, but there are a number of other factors in play such as the ABI and the caching of system libraries. If most of your other applications are x86_64, and this is the case on most Mac OS X 10.6 systems, then x86_64 system libraries are more likely to be cache-hot. Your mileage may vary depending on your exact system configuration.

We dropped ATSUI for text rendering and moved to HarfBuzz and Core Text. The move to HarfBuzz for many operations was done for security reasons and in order to expose advanced typographic features. Font handling is difficult in general, and even more so in web browsers. We’d prefer to depend on open-source font code if possible because we can patch it quickly and participate in improving it.

We enabled OpenGL accelerated layer composition. This stage in the rendering pipeline is where we composite independently-rendered regions of a web page for your screen. Accelerating it helps us most when resizing images and video. GPUs are much better at performing those sorts of transformations than CPUs. For more information, see this post from Joe Drew. We hope to accelerate the rest of our rendering pipeline on Mac OS X soon.

We also added support for the Cocoa NPAPI event model and the Core Animation NPAPI drawing model. These specifications are a big step forward for browser plugins on Mac OS X. They are easier to develop for, properly documented, and designed with IPC in mind. As of version 10.1, Adobe’s Flash plugin supports Cocoa NPAPI.
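
To give a feel for the Cocoa event model, here is a minimal sketch of a plugin’s event handler. For illustration it assumes the Core Graphics drawing model (under the Core Animation model, the plugin hands the browser a CALayer instead of drawing in response to events), and DrawPlugin and HandleClick are hypothetical helpers, not NPAPI functions:

    #include <ApplicationServices/ApplicationServices.h>
    #include "npapi.h" /* NPAPI SDK header defining NPP and NPCocoaEvent */

    /* Hypothetical plugin-specific helpers, not part of NPAPI. */
    void DrawPlugin(CGContextRef ctx, double x, double y, double w, double h);
    void HandleClick(double x, double y);

    int16_t NPP_HandleEvent(NPP instance, void *event) {
        NPCocoaEvent *cocoaEvent = (NPCocoaEvent *)event;
        switch (cocoaEvent->type) {
        case NPCocoaEventDrawRect:
            /* The browser hands us a CGContext and a dirty rect in
             * plugin-local coordinates; paint into it directly. */
            DrawPlugin(cocoaEvent->data.draw.context,
                       cocoaEvent->data.draw.x, cocoaEvent->data.draw.y,
                       cocoaEvent->data.draw.width,
                       cocoaEvent->data.draw.height);
            return 1; /* handled */
        case NPCocoaEventMouseDown:
            /* Mouse coordinates also arrive in plugin-local space. */
            HandleClick(cocoaEvent->data.mouse.pluginX,
                        cocoaEvent->data.mouse.pluginY);
            return 1;
        default:
            return 0; /* not handled */
        }
    }

Which leads me to the next improvement…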

Firefox 4 will run many plugins out-of-process on Mac OS X. All of your plugins will be out-of-process if you’re running the x86_64 version of Firefox. If you’re running the i386 version of Firefox we’ll run some popular plugins, such as Flash 10.1+, out-of-process but others will run in-process for performance and user experience reasons. And yes – the x86_64 version of Firefox will be able to use i386 plugins.

Those are the major Mac OS X-specific changes but we’ve also made a large number of minor improvements for Mac OS X. Combined with all of the great cross-platform improvements like our new JavaScript engine, WebM, and better HTML5 support, Firefox 4 should take the web to a whole new level for our users.

Building 32-bit Firefox on Mac OS X 10.6

Building 32-bit Firefox for Mac OS X 10.6 is a little trickier than building on 10.5. This post will explain how to do it. Assume everything I don’t mention is the same as on 10.5.

First, installing mercurial via MacPorts does not work on 10.6 at this time (MacPorts bug 18449). You’ll have to install mercurial manually; instructions can be found on the mercurial website. You can install libidl via MacPorts as usual.

Second, Mac OS X 10.6 is 64-bit by default. Most machines will boot with a 32-bit kernel, but the applications included with the OS are 64-bit by default and the developer toolchain’s default architecture is x86_64. This means that in order to produce a 32-bit build of Firefox you actually have to do a cross-compile, much like the way we produce PPC builds on Intel machines. Luckily this is not too hard – simply start with the mozconfig I put in Mozilla bug 477945. If anything changes I’ll put updated mozconfigs there. The mozconfig on that bug won’t work for everyone: crashreporter is disabled (Mozilla bug 429841) and the resulting build won’t run on 10.4. That isn’t to say you can’t get those features when building on 10.6 – you’ll just have to tweak the mozconfig as needed and file bugs when you run into problems. The mozconfig on the bug is designed to get most developers up and running quickly.
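
For a sense of what the cross-compile setup looks like, here is a rough, illustrative mozconfig. The exact flags and compiler invocations are my assumptions, not the canonical file – start from the one attached to bug 477945:

    # Illustrative only -- use the mozconfig from Mozilla bug 477945
    # as your actual starting point.
    . $topsrcdir/browser/config/mozconfig
    mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-i386

    # Force 32-bit output from the 64-bit-by-default 10.6 toolchain.
    CC="gcc -arch i386"
    CXX="g++ -arch i386"
    ac_add_options --target=i386-apple-darwin10.0.0

    # Crashreporter doesn't build in this configuration yet (bug 429841).
    ac_add_options --disable-crashreporter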

Status Update: Gecko/Firefox for 64-bit Mac OS X

Since Mac OS X 10.6 is coming out tomorrow, I thought I’d give an update on 64-bit Gecko for Mac OS X. Progress is tracked in Mozilla bug 468509.

We’re very close to having running builds. We still need to replace some old API usage related to complex text input and print dialogs, but aside from that I don’t know of any major remaining work items. We’ve come a long way, and we will have builds within a month. As soon as we can produce them, we’ll start working on getting a 64-bit Mac OS X 10.6 tinderbox set up to produce nightly builds.

We have not made any decisions about shipping an officially supported 64-bit build for Mac OS X. I suspect at some point we will decide to remove the PPC architecture from our universal binary and replace it with x86_64, but like I said, we have not made any decisions yet.