ES6: Features By Testing

TL;DR
Use the FeatureTests.io service to perform feature tests of ES6+ features. The results of these tests are cached by default in the user's browser, and shared across all sites the user visits that use this service.

In the bootstrapper for your site/app, check the results of these feature tests to decide which files are appropriate to load.

If the tests pass, you can load your original source *.es6.js files and know they'll work natively and performantly just fine in that browser. If any test fails, fall back to loading the already build-step pre-transpiled *.es5.js versions of your code.

Use the same checking logic to decide if the user's browser needs a big shim library (like ES6-Shim) or if the browser needs none (or only a few) of the API polyfills.

Essentially: load only the code that's necessary, and load the best, most native version of it that the browser can support.

The Problem

If you're using any ES6+ code in your applications, odds are you're using a transpiler like Babel or perhaps Traceur. These tools are fantastic and quite capable of producing transpiled versions of your ES6+ code that can run in ES5+ browsers (the vast majority).

However, there's a nuance that is being largely overlooked, and the point of this post is to bring it to light as motivation for a new service I've launched to help address the concern: FeatureTests.io.

Let me pose this rhetorical question/scenario to perhaps illustrate my concern:

Let's assume TC39 keeps adding new and amazing capabilities to the language specification. But why do the browsers need to implement any of these features? Couldn't we just always rely on transpilers, forever going forward, and couldn't we just always and only serve those transpiled files to the browser? If so, wouldn't that mean these features would never actually need to make their way into a browser? The ES specification could just become a transpiler specification, right?

...

If you ponder that scenario for just a moment or two, odds are several concerns jump out at you. Most notably, you probably realize the transpiled code that's produced is bigger, and perhaps slower (if not now, certainly later once browsers have a chance to optimize the native feature implementations). It also requires shipping dozens of kb of polyfill code to patch the API space in the browser.

This all works, but it's not ideal. The best code you can deliver to each user's browser is the smallest, fastest, most well-tailored code you can practically provide. Right!?

Here's the problem: if you only use a build-step transpiler and you unconditionally always serve that ES5 equivalent transpiled code, you will never actually be using any of the native feature implementations. You'll always and forever be using the older, bigger, (perhaps) slower transpiled code.

For now, while ES6 browser support seems to linger in the lower percentages, that may not seem like such a huge deal. Except, have you actually considered just how much of ES6 your app/site is using (or will use soon)?

My guess is, most sites will use maybe 20-30% of ES6 features on a widespread basis. And most if not all of those are already implemented in just about every browser's latest version. Moreover, the new Microsoft Edge browser already has 81% ES6 support (at the time of this writing), and FF/Chrome at ~50-60% are going to quickly catch up.

It won't be long at all before a significant chunk of your users have full ES6 support for every feature your site/app uses or will practically use in the near future.

Don't you want to serve each user the best possible code?

The Solution

First and foremost, keep transpiling your code using your favorite tool(s). Keep doing this in a build-step.

When you go to deploy the .js files to your web-exposed directory that can be loaded into the browser, include the original (ES6+) source files as well as these transpiled files. Also, don't forget to include the polyfills as necessary. For instance, you may name them *.es6.js (original source) and *.es5.js (transpiled) to keep them straight. Or, you may use subdirectories es6/ and es5/ to organize them. You get the point, I'm sure.

Now, when your site/app goes to load the first time, how do you decide which set of files is appropriate to load for each user's browser?

You need a bootstrapper that loads first, right up front. For instance, you ship out an HTML page with a single <script> tag in it, and it either includes inline code, or a reference to a single .js file. Many sites/apps of any complexity already do this in some form or another. It's quite typical to load a small bootstrapper that then sets up and loads the rest of your application.

If you don't already have a technique like this, it's not hard to do at all, and there are many benefits you'll get, including the ability to conditionally load the appropriate versions of files for each browser, as I will explain in a moment. Really, this is not as intimidating as it may seem.


Now, in your bootstrapper (however yours is set up), how are you going to decide what files to load?

You need to feature test that browser instance to decide what its capabilities are. If all the features you need are supported, load the *.es6.js files. If some are missing, load the polyfills and the *.es5.js files.
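
Just to make that concrete, here's roughly what the conditional loading could look like inside a bootstrapper. This is only a sketch: the hasFullES6Support flag and the file paths are placeholder names, and your real check will come from the feature tests discussed next.

function loadScript(src) {
    var script = document.createElement("script");
    script.src = src;
    script.async = false; // preserve execution order when loading multiple files
    document.head.appendChild(script);
}

var hasFullES6Support = false; // placeholder: set this from your feature-test results

if (hasFullES6Support) {
    loadScript("js/app.es6.js");
}
else {
    loadScript("js/es6-shim.js"); // polyfills, only when needed
    loadScript("js/app.es5.js");
}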

That's it. Really. No, really, that's all I'm suggesting.

Feature Testing ES6

Feature testing for APIs is easy. I'm sure you probably know how to do things like:

var numberIsNaN;

if (Number.isNaN) {
    numberIsNaN = true;
}
else {
    numberIsNaN = false;
}

But what about syntax, like detecting if the browser supports => arrow functions or the let block-scoping declarations?

That's harder, because this doesn't work the way we might hope:

var x, arrows;

try {
    x = y => y;
    arrows = true;
}
catch (err) {
    arrows = false;
}

The syntax fails JS compilation (in pre-ES6 compliant browsers) before it ever tries to run, so the try..catch can't catch it. The solution? Defer compilation.

var arrows;

try {
    new Function( "(y => y)" );
    arrows = true;
}
catch (err) {
    arrows = false;
}

The new Function(..) constructor compiles the code given at runtime, so any compilation error can be caught by your try..catch.
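
The same deferred-compilation trick extends to other syntax. For example, a rough sketch of a let/const check (the exact test string here is just illustrative) might look like:

var letConst;

try {
    new Function( "let x = 1; const y = 2;" );
    letConst = true;
}
catch (err) {
    letConst = false;
}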

Great, problem solved.

But do you want to personally devise feature tests for all the different ES6+ features you plan to use? And some of them could be slightly painful (slow) to run (like for TCO), so do you really want to do those? Wouldn't it be nicer to run the tests in a background Web Worker thread to minimize any performance impact to the main UI thread?
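
To illustrate the idea (this is a sketch, not necessarily how the service implements it), you could build a Worker from a Blob and run the syntax checks there, off the main UI thread:

var workerCode =
    "self.onmessage = function(){" +
    "  var arrows;" +
    "  try { new Function('(y => y)'); arrows = true; }" +
    "  catch (err) { arrows = false; }" +
    "  self.postMessage({ arrow: arrows });" +
    "};";

var blob = new Blob([ workerCode ], { type: "application/javascript" });
var worker = new Worker(URL.createObjectURL(blob));

worker.onmessage = function(evt){
    // evt.data holds the test results, e.g. { arrow: true }
};
worker.postMessage("go");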

And even if you did go to all that trouble, do you really need to run all these tests every single time one of your pages loads? Browsers don't add new features by the minute. Typically, a user's browser might update at best every couple of weeks, maybe months. Couldn't you run the tests once and cache the results for a while?
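
Again purely as a sketch (the storage key and the two-week expiration below are arbitrary choices), caching results in LocalStorage might look something like this:

var CACHE_KEY = "es-feature-test-results";
var TWO_WEEKS = 14 * 24 * 60 * 60 * 1000;

function getCachedResults() {
    try {
        var entry = JSON.parse(localStorage.getItem(CACHE_KEY));
        if (entry && (Date.now() - entry.timestamp) < TWO_WEEKS) {
            return entry.results;
        }
    }
    catch (err) {}
    return null; // missing, expired, or unreadable
}

function cacheResults(results) {
    try {
        localStorage.setItem(CACHE_KEY, JSON.stringify({
            timestamp: Date.now(),
            results: results
        }));
    }
    catch (err) {}
}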

But if these cached results are only available to your site, then when your user visits other ES6-driven sites, every one of them will need to re-run its own set of tests. Wouldn't it be nicer if the test results could be cached "globally" on that user's browser, so that any site could just use the true / false test results without having to re-run all the tests?

Or let me turn that around: wouldn't it be nice if your user showed up at your site and the results were already cached (by a visit to another site), so they didn't need to wait for your site to run them, and thus your site loaded quicker for them?

FeatureTests.io

All these reasons (and more) are why I've built ES Feature Tests as a service: FeatureTests.io.

This service provides a library file, https://featuretests.io/rs.js, which does all the work I referred to above for you. You request this library file either before or as your bootstrapper loads, and then you check the results of the tests (which load from cache or run automatically) with a simple if statement.

For example, to test whether your files that use let and => can be loaded, this is what you'd do in your bootstrapper:

window["Reflect.supports"]( "all", function(results){
    if (results.letConst && results.arrow) {
        // load `*.es6.js` files
    }
    else {
        // load already pre-transpiled `*.es5.js` files
    }
} );

If your site hasn't already cached results for this user, the library communicates cross-domain (via an <iframe> from your site to featuretests.io) so the test results can be stored or retrieved "globally" on that browser.
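
For the curious, the general pattern being described looks roughly like the following sketch; the frame URL and message format are invented for illustration and are not the service's actual protocol:

var FT_ORIGIN = "https://featuretests.io";

var iframe = document.createElement("iframe");
iframe.style.display = "none";
iframe.src = FT_ORIGIN + "/frame.html"; // hypothetical frame URL

window.addEventListener("message", function(evt){
    if (evt.origin !== FT_ORIGIN) return;
    // evt.data would hold the "globally" cached test results
});

iframe.onload = function(){
    // ask the frame (which owns the featuretests.io LocalStorage) for results
    iframe.contentWindow.postMessage({ type: "get-results" }, FT_ORIGIN);
};

document.body.appendChild(iframe);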

If the tests need to run, it spins up a Web Worker to do the tests off-thread. It even tries to use a Shared Web Worker, so that if the user is simultaneously loading 2+ sites that both use the service, they both use the same worker instance.
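
Conceptually (again, a sketch rather than the library's actual code), preferring a Shared Web Worker and falling back to a dedicated one looks like this; the worker script name is a placeholder:

var scriptURL = "feature-tests-worker.js"; // hypothetical worker script

function handleResults(evt) {
    // evt.data holds the test results
}

if (typeof SharedWorker !== "undefined") {
    // one worker instance shared across same-origin contexts
    var shared = new SharedWorker(scriptURL);
    shared.port.onmessage = handleResults;
    shared.port.start();
}
else {
    var dedicated = new Worker(scriptURL);
    dedicated.onmessage = handleResults;
}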

All that logic you get automatically by using this free service.

That's it! That's all it takes to get up and going with conditional split-loading of your site/app code based on in-browser ES6 feature tests.

Advanced Stuff

The library behind this site is open-sourced: es-feature-tests. It's also available on npm.

If you wanted to, you could inline the tests from the library into your own bootstrapper code, and skip using FeatureTests.io. That loses you the benefits of shared caching and all, but it still means you don't have to figure out your own tests.

Or, the service offers an API endpoint that returns the tests in text form, so you could retrieve that on your server during your build step, and then include and perform those tests in your own code.
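
If you go that route, the build-step retrieval could be as simple as the following Node sketch; the endpoint path shown is a made-up placeholder, so check the service's docs for the real URL:

var https = require("https");
var fs = require("fs");

// hypothetical endpoint; substitute the real API URL from the docs
https.get("https://featuretests.io/api/tests", function(res){
    var body = "";
    res.on("data", function(chunk){ body += chunk; });
    res.on("end", function(){
        // drop the retrieved test code into your build output for inlining
        fs.writeFileSync("build/feature-tests.js", body);
    });
});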

The npm package is of course Node/iojs compatible, so you can even run the exact same sort of feature testing for split loading inside of your Node programs, like:

var ReflectSupports = require("es-feature-tests");

ReflectSupports( "all", function(results){
    if (results.letConst && results.arrow) {
        // require(..) `*.es6.js` modules
    }
    else {
        // require(..) already pre-transpiled
        // `*.es5.js` modules
    }
} );

Which test results does my code need?

As I asserted earlier, you likely won't need to check every single test result, as you likely won't use 100% of all ES6+ features.

But constantly keeping track of which test results your if statement should check can be tedious and error-prone. Do you remember if anyone ever used a let in your code or not?

The "es-feature-tests" package includes a CLI tool called testify which can scan files or directories of your ES6 authored code, and automatically produces the equivalent check logic for you. For example:

$> bin/testify --dir=/path/to/es6-code/

function checkFeatureTests(testResults){return testResults.letConst&&testResults.arrow}

Warning: At the time of this writing, this testify tool is extremely hackish and WiP. It will eventually do full and complete parsing, but for now it's really rough. Stay tuned for more updates on this tool soon!

You can use testify in your build-process (before transpilation, probably) to scan your ES6 source files and produce that checkFeatureTests(..) function declaration that checks all test results your code needs.

Now, you inline that code into your bootstrapper, so it reads:

// ..

function checkFeatureTests(testResults){return testResults.letConst&&testResults.arrow}

window["Reflect.supports"]( "all", function(results){
    if (checkFeatureTests(results)) {
        // load `*.es6.js` files
    }
    else {
        // load already pre-transpiled `*.es5.js` files
    }
} );

// ..

This build-step CLI tool keeps your tests automatically tuned to the code you've actually written, so you can set it and forget it, and your site/app code will always be loaded in the best version possible for each browser.

Summary

I want you to write ES6 code, and I want you to start doing so today. I've written a book on ES6 to help you learn it: You Don't Know JS: ES6 & Beyond, which you can either read for free online, or purchase from O'Reilly or other book stores.

But, I want you to be responsible and optimal with how you ship your ES6 code, or the transpiled code, to your users' browsers. I want us all to benefit from the amazing work that the browsers are doing on implementing these features natively.

Load the best code for every browser -- no more, no less. Hopefully FeatureTests.io helps you with that goal.

Happy ES6'ing!

Kyle Simpson

About Kyle Simpson

Kyle Simpson is a web-oriented software engineer, widely acclaimed for his "You Don't Know JS" book series and nearly 1M hours viewed of his online courses. Kyle's superpower is asking better questions, and he deeply believes in maximally using the minimally-necessary tools for any task. As a "human-centric technologist", he's passionate about bringing humans and technology together: evolving engineering organizations toward solving the right problems in simpler ways. Kyle will always fight for the people behind the pixels.

Discussion

  1. You want me to block my JS while I wait for a third-party service to tell me which one to load? Have you done any real world testing on the performance of this?

    What happens if your site is down? Or if it’s over-capacity, or for any of a number of reasons taking half a second to load? You’ve just increased my page load time, or broken it entirely.

    • @Jonathan-

      You want me to block my JS while I wait for a third-party service to tell me which one to load?

      Lots of sites use third-party libs. It’s still extremely common for sites to load jQuery from a CDN, for example. Or google analytics, or any of the social media buttons, etc, etc. These aren’t right for every site, but they’re on hundreds of millions of sites, so there must be something to it.

      But more importantly, there are things like Polyfills.io, built by the incredibly smart folks at FT (Financial Times), and used on their huge sites (even mobile), as well as many others. I think they’ve more than proven that third-party services can be scaled and supported in a way that’s totally reliable.

      My goal would be no less than that. This is a brand-new independently built and hosted site, alpha/beta stage at best, WiP, proof-of-concept — whatever label you want to put on it. I’ve already been using it on my sites for a few weeks, and I’m hoping others will use it too so it can evolve and grow to the maturity of other services.

      I’m also courting vendor partner(s) who would be able to host/scale the service on their infrastructure so that it’s completely reliable and incredibly fast to load. Stay tuned for more info on that.

      What happens if your site is down?

      Sites can go down, there’s no question. But that’s why choosing a big vendor partner to scale out the service is critical, to absolutely minimize that as much as possible. My single VM running the site is obviously not enough. But it’s enough to get started. And I take seriously the concern of making it reliable and scalable for everyone.

      Some other observations:

      1. The article (and the GitHub repo README) mentions that you can self-host the files if you prefer not to rely on the service at all. You can also skip the library entirely, and just retrieve the test code itself on whatever (in)frequent basis you’d like by having your server retrieve it from the API.

      Either way, the upside is you get to remove this service from your reliability matrix. The downside is you don’t get to share the results with other sites your users go to. If you actually use the library (modifying its internal URLs to match your own), you at least get the LocalStorage caching logic, the Web Workers off-thread logic, etc. Or you can write your own bits of that logic if you prefer.

      2. Just like with any other service, you have the option of applying some more advanced/custom logic around how you use it, if you feel that will give you more reliability. This is quite common for sites that demand fine-grained control over their dependencies.

      For example, on initial bootstrapper load you could quickly check for the cached test results (without the service library yet — just extract them yourself). If they’re there, use them right away. If not, load the library in the background to run the tests, but don’t wait on those results for this page load. Just immediately fall back to assuming the transpiled code. That way, users always get the fastest load, even if sometimes they get a load with sub-optimal code versions, but on subsequent page loads things should be much faster.

      As a variation on that, you could apply a quick timeout mechanism: request the library and test results, and if they arrive in, say, 100ms or less, use them; if not, let them load in the background (to be used on the next page load) and assume transpiled for this page load.
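
      Roughly sketched, with loadAppFiles as a hypothetical helper that loads one set of files or the other, and the 100ms figure being arbitrary:

      var decided = false;

      function proceed(useNative) {
          if (decided) return;
          decided = true;
          loadAppFiles(useNative ? "es6" : "es5"); // hypothetical: loads the right file set
      }

      // ask the service for results, but don't block this page load on them
      window["Reflect.supports"]( "all", function(results){
          proceed(checkFeatureTests(results));
      } );

      // if results haven't arrived within ~100ms, assume transpiled for now;
      // cached results should make the next page load take the fast path
      setTimeout(function(){ proceed(false); }, 100);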

      If you use your imagination, I’m sure you can devise dozens of other ways that you can fully control your usage of the service in a way that you’re comfortable with. I fully expect and endorse each site using the service in whatever way makes most sense to them.


      Anyway, thanks for your comments and concerns. I hope I’ve answered the questions you had and given you some other paths to consider.

  2. Ahh, this was a good read. I am currently in the process of learning ES6+ code, and testing it is exactly what I needed, especially now that I feel like a bootstrap pro! ha. Perhaps I will need to read the book now!

  3. Hi Kyle, thanks for creating this and great article explaining how it works.

    Is it possible that JS engines could support new ES6/7/etc features like arrow functions and generators but not yet optimize the code (such as no JIT support) so transpiling to bulkier-but-potentially-optimized ES5 code could produce faster running code? This also assumes the download size difference with gzipping is negligible.

    • @Dylan-

      Yes, that’s possible. It’s perhaps even likely.

      But I strongly advise against “betting against the future” (as I call it) by paving over (aka avoiding) native features with transpiled ones, as a general policy. This should only be done on a case-by-case basis and only in extreme situations where you know it’s critical.

      Engines decide what things to optimize based on usage.

      Take any given syntax XYZ. Engines will decide to optimize that feature only if lots of people are using XYZ. If you and everyone else avoid XYZ because it’s slow, it may be a really long time before that ever happens. If you use XYZ right away, and a lot of others do too, it’ll get optimized a lot more quickly, which is good for you and everyone else using it.

      Also, and don’t miss this part: once XYZ is optimized, there’s a good chance it’ll be at least as good, but probably even better, than the old transpiled way. If you keep using the transpiled version “forever”, you’ll never pick up on the optimization once it lands. If you use the native feature once it lands, you’ll automatically benefit from the optimization once that lands, too, without any code/process changes or updates.

      Hope that makes sense!

  4. Kirill Sheremetiev

    Kyle, thanks for your great “Advanced JavaScript” course! Could you please explain why you don’t like constructions like var self = this and similar? What is the correct way to reference the object in closures?

    • @Kirill: if you want to reference class methods in a closure you bind the this context using Function.prototype.bind. As in doSomething( function(){ this.doBar(); }.bind(this) ). Or just use arrow functions, which do the same thing :-)
