Writing Fast, Memory-Efficient JavaScript


JavaScript engines such as Google’s V8 (Chrome, Node) are specifically designed for the fast execution of large JavaScript applications. If you care about memory usage and performance as you develop, you should be aware of some of what goes on behind the scenes in the JavaScript engine of your users’ browsers.

Whether that engine is V8, SpiderMonkey (Firefox), Carakan (Opera), Chakra (IE) or something else, knowing a little about how it works can help you better optimize your applications. That’s not to say you should optimize for a single browser or engine. Never do that.

You should, however, ask yourself questions such as:

  • Is there anything I could be doing more efficiently in my code?
  • What (common) optimizations do popular JavaScript engines make?
  • What is the engine unable to optimize for, and is the garbage collector able to clean up what I’m expecting it to?

Fast-loading Web sites — like fast cars — require the use of specialized tools. Image source: dHybridcars.

There are many common pitfalls when it comes to writing memory-efficient and fast code, and in this article we’re going to explore some test-proven approaches for writing code that performs better.

So, How Does JavaScript Work In V8?

While it’s possible to develop large-scale applications without a thorough understanding of JavaScript engines, any car owner will tell you they’ve looked under the hood at least once. As Chrome is my browser of choice, I’m going to talk a little about its JavaScript engine. V8 is made up of a few core pieces.

  • A base compiler, which parses your JavaScript and generates native machine code before it is executed, rather than executing bytecode or simply interpreting it. This code is initially not highly optimized.
  • V8 represents your objects in an object model. Objects are represented as associative arrays in JavaScript, but in V8 they are represented with hidden classes, which are an internal type system for optimized lookups.
  • The runtime profiler monitors the system being run and identifies “hot” functions (i.e. code that ends up spending a long time running).
  • An optimizing compiler recompiles and optimizes the “hot” code identified by the runtime profiler, and performs optimizations such as inlining (i.e. replacing a function call site with the body of the callee).
  • V8 supports deoptimization, meaning the optimizing compiler can bail out of code generated if it discovers that some of the assumptions it made about the optimized code were too optimistic.
  • It has a garbage collector. Understanding how it works can be just as important as the optimized JavaScript.

Garbage Collection

Garbage collection is a form of memory management: a collector attempts to reclaim the memory occupied by objects that are no longer being used. In a garbage-collected language such as JavaScript, objects that are still referenced by your application are not cleaned up.

Manually de-referencing objects is not necessary in most cases. By simply putting the variables where they need to be (ideally, as local as possible, i.e. inside the function where they are used versus an outer scope), things should just work.

Garbage collection attempts to reclaim memory. Image source: Valtteri Mäki.

It’s not possible to force garbage collection in JavaScript. You wouldn’t want to do this, because the garbage collection process is controlled by the runtime, and it generally knows best when things should be cleaned up.

De-Referencing Misconceptions

In quite a few online discussions about reclaiming memory in JavaScript, the delete keyword is brought up. Although it is meant simply for removing keys from an object (a map), some developers think it can be used to force de-referencing. Avoid using delete if you can. In the example below, delete o.x does a lot more harm than good behind the scenes: it changes o’s hidden class and makes it a generic slow object.

var o = { x: 1 }; 
delete o.x; // true 
o.x; // undefined

That said, you are almost certain to find references to delete in many popular JavaScript libraries – it does have a purpose in the language. The main takeaway here is to avoid modifying the structure of hot objects at runtime. JavaScript engines can detect such “hot” objects and attempt to optimize them. This is easier if the object’s structure doesn’t change heavily over its lifetime, and delete can trigger such changes.

There are also misconceptions about how null works. Setting an object reference to null doesn’t “null” the object. It sets the object reference to null. Using o.x = null is better than using delete, but it’s probably not even necessary.

var o = { x: 1 }; 
o = null;
o; // null
o.x // TypeError

If this reference was the last reference to the object, the object is then eligible for garbage collection. If the reference was not the last reference to the object, the object is reachable and will not be garbage collected.

Another important note to be aware of is that global variables are not cleaned up by the garbage collector during the life of your page. Regardless of how long the page is open, variables scoped to the JavaScript runtime global object will stick around.

var myGlobalNamespace = {};

Globals are cleaned up when you refresh the page, navigate to a different page, close tabs or exit your browser. Function-scoped variables get cleaned up when they fall out of scope: once the function has exited and there are no longer any references to them, those variables get cleaned up.

Rules of Thumb

To give the garbage collector a chance to collect as many objects as possible as early as possible, don’t hold on to objects you no longer need. This mostly happens automatically; here are a few things to keep in mind.

  • As mentioned earlier, a better alternative to manual de-referencing is to use variables with an appropriate scope. That is, instead of a global variable that’s nulled out, just use a function-local variable that goes out of scope when it’s no longer needed. This means cleaner code with less to worry about.
  • Ensure that you’re unbinding event listeners when they are no longer required, especially when the DOM objects they’re bound to are about to be removed (see the sketch after this list).
  • If you’re using a data cache locally, make sure to clean that cache or use an aging mechanism to avoid large chunks of data being stored that you’re unlikely to reuse.
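
As a rough illustration of the second point, here is a sketch using jQuery and a hypothetical #chartWidget element. A handler bound at the document level keeps the widget (and anything the handler closes over) reachable even after the element is removed, so it needs to be unbound explicitly:

// Sketch only: tear down handlers before removing the node they reference,
// so neither the handler nor anything it closes over is kept alive.
var $widget = $('#chartWidget'); // hypothetical element

function onDocumentClick() {
    $widget.toggleClass('active');
}

$(document).on('click', onDocumentClick);

// Later, when the widget is no longer needed:
$(document).off('click', onDocumentClick);
$widget.remove();
$widget = null;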

Functions

Next, let’s look at functions. As we’ve already said, garbage collection works by reclaiming blocks of memory (objects) which are no longer reachable. To better illustrate this, here are some examples.

function foo() {
    var bar = new LargeObject();
    bar.someCall();
}

When foo returns, the object which bar points to is automatically available for garbage collection, because there is nothing left that has a reference to it.

Compare this to:

function foo() {
    var bar = new LargeObject();
    bar.someCall();
    return bar;
}

// somewhere else
var b = foo();

We now have a reference to the object which survives the call and persists until the caller assigns something else to b (or b goes out of scope).

Closures

When you see a function that returns an inner function, that inner function will have access to the outer scope even after the outer function has executed. This is basically a closure — an expression that can work with variables set within a specific context. For example:

function sum (x) {
    function sumIt(y) {
        return x + y;
    };
    return sumIt;
}

// Usage
var sumA = sum(4);
var sumB = sumA(3);
console.log(sumB); // Logs 7

The function object created within the execution context of the call to sum can’t be garbage collected, as it’s referenced by a global variable and is still very much accessible. It can still be executed via sumA(n).

Let’s look at another example. Here, can we access largeStr?

var a = function () {
    var largeStr = new Array(1000000).join('x');
    return function () {
        return largeStr;
    };
}();

Yes, we can, via a(), so it’s not collected. How about this one?

var a = function () {
    var smallStr = 'x';
    var largeStr = new Array(1000000).join('x');
    return function (n) {
        return smallStr;
    };
}();

We can’t access largeStr anymore, so it’s a candidate for garbage collection.

Timers

One of the worst places to leak is in a loop, or in setTimeout()/setInterval(), but this is quite common.

Consider the following example.

var myObj = {
    callMeMaybe: function () {
        var myRef = this;
        var val = setTimeout(function () { 
            console.log('Time is running out!'); 
            myRef.callMeMaybe();
        }, 1000);
    }
};

If we then run:

myObj.callMeMaybe();

to begin the timer, we’ll see “Time is running out!” logged every second. If we then run:

myObj = null;

The timer will still fire. myObj won’t be garbage collected as the closure passed to setTimeout has to be kept alive in order to be executed. In turn, it holds references to myObj as it captures myRef. This would be the same if we’d passed the closure to any other function, keeping references to it.

It is also worth keeping in mind that references inside a setTimeout/setInterval call, such as functions, will need to execute and complete before they can be garbage collected.
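
If you do want myObj to become collectable, cancel the timer and drop the reference yourself. Here is a reworked sketch of the example above that keeps the id returned by setTimeout so it can be cleared with clearTimeout later:

var myObj = {
    timeoutId: null,
    callMeMaybe: function () {
        var myRef = this;
        this.timeoutId = setTimeout(function () {
            console.log('Time is running out!');
            myRef.callMeMaybe();
        }, 1000);
    },
    stop: function () {
        // Cancel the pending callback so the closure (and, in turn, myObj)
        // can be collected once no other references remain.
        clearTimeout(this.timeoutId);
    }
};

myObj.callMeMaybe();
// ... later ...
myObj.stop();
myObj = null;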

Be Aware Of Performance Traps

It’s important never to optimize code until you actually need to. This can’t be stressed enough. It’s easy to see a number of micro-benchmarks showing that N is more optimal than M in V8, but test it in a real module of code or in an actual application, and the true impact of those optimizations may be far smaller than you were expecting.

Doing too much can be as harmful as not doing anything. Image source: Tim Sheerman-Chase.

Let’s say we want to create a module which:

  • Takes a local source of data containing items with a numeric ID,
  • Draws a table containing this data,
  • Adds event handlers for toggling a class when a user clicks on any cell.

There are a few different factors to this problem, even though it’s quite straightforward to solve. How do we store the data? How do we efficiently draw the table and append it to the DOM? How do we handle events on this table optimally?

A first (naive) take on this problem might be to store each piece of available data in an object which we group into an array. One might use jQuery to iterate through the data and draw the table, then append it to the DOM. Finally, one might use event binding for adding the click behavior we desire.

Note: This is NOT what you should be doing

var moduleA = function () {

    return {

        data: dataArrayObject,

        init: function () {
            this.addTable();
            this.addEvents();
        },

        addTable: function () {
            // Note: `rows` and `$tbody` are assumed to exist in an outer scope.
            for (var i = 0; i < rows; i++) {
                var $tr = $('<tr></tr>');
                for (var j = 0; j < this.data.length; j++) {
                    $tr.append('<td>' + this.data[j]['id'] + '</td>');
                }
                $tr.appendTo($tbody);
            }
        },
        addEvents: function () {
            $('table td').on('click', function () {
                $(this).toggleClass('active');
            });
        }

    };
}();

Simple, but it gets the job done.

In this case, however, the only data we’re iterating over is IDs, a numeric property that could be more simply represented in a standard array. Interestingly, using DocumentFragment and native DOM methods directly is more optimal than using jQuery (in this manner) for our table generation, and, of course, event delegation is typically more performant than binding each td individually.

Note that jQuery does use DocumentFragment internally behind the scenes, but in our example the code is calling append() within a loop, and each of these calls has little knowledge of the others, so it may not be able to optimize for this case. This shouldn’t be a pain point, but be sure to benchmark your own code.

In our case, adding in these changes results in some good (expected) performance gains. Event delegation provides a decent improvement over binding each cell individually, and opting for DocumentFragment was a real booster.

var moduleD = function () {

    return {

        data: dataArray,

        init: function () {
            this.addTable();
            this.addEvents();
        },
        addTable: function () {
            // Note: `rows` and `tbody` are assumed to exist in an outer scope.
            var td, tr;
            var frag = document.createDocumentFragment();
            var frag2 = document.createDocumentFragment();

            for (var i = 0; i < rows; i++) {
                tr = document.createElement('tr');
                for (var j = 0; j < this.data.length; j++) {
                    td = document.createElement('td');
                    td.appendChild(document.createTextNode(this.data[j]));
                    frag2.appendChild(td);
                }
                // Appending frag2 moves its children into the row,
                // leaving frag2 empty for the next iteration.
                tr.appendChild(frag2);
                frag.appendChild(tr);
            }
            tbody.appendChild(frag);
        },
        addEvents: function () {
            $('table').on('click', 'td', function () {
                $(this).toggleClass('active');
            });
        }

    };

}();

We might then look to other ways of improving performance. You may have read somewhere that using the prototypal pattern is more optimal than the module pattern (we confirmed it wasn’t earlier), or heard that JavaScript templating frameworks are highly optimized. Sometimes they are, but use them because they make for readable code. Also, precompile! Let’s test and find out how true this holds in practice.

var moduleG = function () {};

moduleG.prototype.data = dataArray;
moduleG.prototype.init = function () {
    this.addTable();
    this.addEvents();
};
moduleG.prototype.addTable = function () {
    var template = _.template($('#template').text());
    var html = template({'data' : this.data});
    $tbody.append(html);
};
moduleG.prototype.addEvents = function () {
   $('table').on('click', 'td', function () {
       $(this).toggleClass('active');
   });
};

var modG = new moduleG();

As it turns out, in this case the performance benefits are negligible. Opting for templating and prototypes didn’t really offer anything more than what we had before. That said, performance isn’t really the reason modern developers use either of these things — it’s the readability, inheritance model and maintainability they bring to your codebase.

More complex problems include efficiently drawing images using canvas and manipulating pixel data with or without typed arrays.

Always look closely at micro-benchmarks before exploring their use in your application. Some of you may recall the JavaScript templating shoot-off and the extended shoot-off that followed. You want to make sure that tests aren’t being impacted by constraints you’re unlikely to see in real-world applications — test optimizations together in actual code.

V8 Optimization Tips

Whilst detailing every V8 optimization is outside the scope of this article, there are certainly many tips worth noting. Keep these in mind and you’ll reduce your chances of writing poorly performing code.

  • Certain patterns will cause V8 to bail out of optimizations. A try-catch, for example, will cause such a bailout. For more information on what functions can and can’t be optimized, you can use --trace-opt file.js with the d8 shell utility that comes with V8.
  • If you care about speed, try very hard to keep your functions monomorphic, i.e. make sure that variables (including properties, arrays and function parameters) only ever contain objects with the same hidden class. For example, don’t do this:
function add(x, y) {
    return x + y;
}

add(1, 2);
add('a', 'b');
add(my_custom_object, undefined);
  • Don’t load from uninitialized or deleted elements. This won’t make a difference in output, but it will make things slower.
  • Don’t write enormous functions, as they are more difficult to optimize.

For more tips, watch Daniel Clifford’s Google I/O talk Breaking the JavaScript Speed Limit with V8 as it covers these topics well. Optimizing For V8 — A Series is also worth a read.

Objects Vs. Arrays: Which Should I Use?

  • If you want to store a bunch of numbers, or a list of objects of the same type, use an array.
  • If what you semantically need is an object with a bunch of properties (of varying types), use an object with properties. That’s pretty efficient in terms of memory, and it’s also pretty fast.
  • Integer-indexed elements, regardless of whether they’re stored in an array or an object, are much faster to iterate over than object properties.
  • Properties on objects are quite complex: they can be created with setters, and with differing enumerability and writability. Items in arrays aren’t able to be customized as heavily — they either exist or they don’t. At an engine level, this allows for more optimization in terms of organizing the memory representing the structure. This is particularly beneficial when the array contains numbers. For example, when you need vectors, don’t define a class with properties x, y, z; use an array instead.

There’s really only one major difference between objects and arrays in JavaScript, and that’s the arrays’ magic length property. If you’re keeping track of this property yourself, objects in V8 should be just as fast as arrays.
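
To make that concrete, here is a minimal sketch of a hypothetical “list” built on a plain object that tracks its own length with integer-indexed elements:

// Sketch only: a plain object used like an array, with length tracked manually.
var list = { length: 0 };

function pushItem(list, value) {
    list[list.length] = value; // integer-indexed element
    list.length++;
}

pushItem(list, 10);
pushItem(list, 20);

for (var i = 0; i < list.length; i++) {
    console.log(list[i]); // 10, then 20
}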

Tips When Using Objects

  • Create objects using a constructor function. This ensures that all objects created with it have the same hidden class and helps avoid changing these classes (see the sketch after this list). As an added benefit, it’s also slightly faster than Object.create().
  • There are no restrictions on the number of different object types you can use in your application or on their complexity (within reason: long prototype chains tend to hurt, and objects with only a handful of properties get a special representation that’s a bit faster than bigger objects). For “hot” objects, try to keep the prototype chains short and the field count low.
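
Here is a rough sketch of the constructor point above, using a hypothetical Point constructor. The hidden-class behavior itself is a V8 internal, so treat this as an illustration rather than a guarantee:

// Sketch: objects created by the same constructor share a hidden class,
// as long as their properties are always initialized in the same order.
function Point(x, y) {
    this.x = x;
    this.y = y;
}

var p1 = new Point(1, 2);
var p2 = new Point(3, 4); // p1 and p2 share a hidden class

p2.z = 5; // p2 now transitions to a different hidden class than p1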

Object Cloning
Object cloning is a common problem for app developers. While it’s possible to benchmark how well various implementations work with this type of problem in V8, be very careful when copying anything. Copying big things is generally slow — don’t do it. for..in loops in JavaScript are particularly bad for this, as they have a devilish specification and will likely never be fast in any engine for arbitrary objects.

When you absolutely do need to copy objects in a performance-critical code path (and you can’t get out of this situation), use an array or a custom “copy constructor” function which copies each property explicitly. This is probably the fastest way to do it:

function clone(original) {
  this.foo = original.foo;
  this.bar = original.bar;
}
var copy = new clone(original);

Cached Functions in the Module Pattern
Caching your functions when using the module pattern can lead to performance improvements. See below for an example where the variation you’re probably used to seeing is slower, as it forces new copies of the member functions to be created every time.

Performance improvements when using the module or prototypal patterns.

Here is a test of prototype versus module pattern performance:

// Prototypal pattern
  Klass1 = function () {}
  Klass1.prototype.foo = function () {
      log('foo');
  }
  Klass1.prototype.bar = function () {
      log('bar');
  }

  // Module pattern
  Klass2 = function () {
      var foo = function () {
          log('foo');
      },
      bar = function () {
          log('bar');
      };

      return {
          foo: foo,
          bar: bar
      }
  }


  // Module pattern with cached functions
  var FooFunction = function () {
      log('foo');
  };
  var BarFunction = function () {
      log('bar');
  };

  Klass3 = function () {
      return {
          foo: FooFunction,
          bar: BarFunction
      }
  }


  // Iteration tests

  // Prototypal
  var i = 1000,
      objs = [];
  while (i--) {
      var o = new Klass1()
      objs.push(new Klass1());
      o.bar;
      o.foo;
  }

  // Module pattern
  var i = 1000,
      objs = [];
  while (i--) {
      var o = Klass2()
      objs.push(Klass2());
      o.bar;
      o.foo;
  }

  // Module pattern with cached functions
  var i = 1000,
      objs = [];
  while (i--) {
      var o = Klass3()
      objs.push(Klass3());
      o.bar;
      o.foo;
  }
// See the test for full details

Note: If you don’t require a class, avoid the trouble of creating one. Here’s an example of how to gain performance boosts by removing the class overhead altogether: http://jsperf.com/prototypal-performance/54.

Tips When Using Arrays

Next, let’s look at a few tips for arrays. In general, don’t delete array elements. Doing so makes the array transition to a slower internal representation, and when the key set becomes sparse enough, V8 will eventually switch the elements to dictionary mode, which is even slower.
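
For example, if you really must remove an element, splice() keeps the array packed, whereas delete leaves a hole (a quick sketch):

var arr = [1, 2, 3, 4, 5];

delete arr[2];    // leaves a hole at index 2; length is still 5

arr = [1, 2, 3, 4, 5];
arr.splice(2, 1); // removes the element and keeps the array packed; length is now 4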

Array Literals
Array literals are useful because they give a hint to the VM about the size and type of the array. They’re typically good for small to medium sized arrays.

// Here V8 can see that you want a 4-element array containing numbers:
var a = [1, 2, 3, 4];

// Don't do this:
a = []; // Here V8 knows nothing about the array
for(var i = 1; i <= 4; i++) {
     a.push(i);
}

Storage of Single Types Vs. Mixed Types
It’s never a good idea to mix values of different types (e.g. numbers, strings, undefined or true/false) in the same array (i.e. var arr = [1, "1", undefined, true, "true"]).

Test of type inference performance

As we can see from the results, the array of ints is the fastest.

Sparse Arrays vs. Full Arrays
When you use sparse arrays, be aware that accessing elements in them is much slower than in full arrays. That’s because V8 doesn’t allocate a flat backing store for the elements if only a few of them are used. Instead, it manages them in a dictionary, which saves space, but costs time on access.

Test of sparse arrays versus full arrays.

Summing over the full array and summing over an array without zeroes were actually the fastest; whether or not the full array contains zeroes should not make a difference.

Packed Vs. Holey Arrays
Avoid “holes” in an array (created by deleting elements or a[x] = foo with x > a.length). Even if only a single element is deleted from an otherwise “full” array, things will be much slower.

Test of packed versus holey arrays.

Pre-allocating Arrays Vs. Growing As You Go
Don’t pre-allocate large arrays (i.e. greater than 64K elements) to their maximum size; instead, grow as you go. Before we get to the performance tests for this tip, keep in mind that it applies to only some JavaScript engines.

Test of empty literal versus pre-allocated array in various browsers.

Nitro (Safari) actually treats pre-allocated arrays more favorably. However, in other engines (V8, SpiderMonkey), not pre-allocating is more efficient.

Test of pre-allocated arrays.

// Empty array
var arr = [];
for (var i = 0; i < 1000000; i++) {
    arr[i] = i;
}

// Pre-allocated array
var arr = new Array(1000000);
for (var i = 0; i < 1000000; i++) {
    arr[i] = i;
}

Optimizing Your Application

In the world of Web applications, speed is everything. No user wants a spreadsheet application to take seconds to sum up an entire column or a summary of their messages to take a minute before it’s ready. This is why squeezing every drop of extra performance you can out of code can sometimes be critical.

An old phone on the screen of an iPad. Image source: Per Olof Forsberg.

While understanding and improving your application performance is useful, it can also be difficult. We recommend the following steps to fix performance pain points:

  • Measure it: Find the slow spots in your application (~45%)
  • Understand it: Find out what the actual problem is (~45%)
  • Fix it! (~10%)

Some of the tools and techniques recommended below can assist with this process.

Benchmarking

There are many ways to run benchmarks on JavaScript snippets to test their performance — the general assumption being that benchmarking is simply comparing two timestamps. One such pattern was pointed out by the jsPerf team, and happens to be used in SunSpider’s and Kraken’s benchmark suites:

var totalTime,
    start = new Date,
    iterations = 1000;
while (iterations--) {
  // Code snippet goes here
}
// totalTime → the number of milliseconds taken 
// to execute the code snippet 1000 times
totalTime = new Date - start;

Here, the code to be tested is placed within a loop and run a set number of times (1,000 in the snippet above). After this, the start date is subtracted from the end date to find the time taken to perform the operations in the loop.

However, this oversimplifies how benchmarking should be done, especially if you want to run the benchmarks in multiple browsers and environments. Garbage collection itself can have an impact on your results. Even if you’re using a solution like window.performance, you still have to account for these pitfalls.

Regardless of whether you are simply running benchmarks against parts of your code, writing a test suite or coding a benchmarking library, there’s a lot more to JavaScript benchmarking than you might think. For a more detailed guide to benchmarking, I highly recommend reading JavaScript Benchmarking by Mathias Bynens and John-David Dalton.
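
As a rough illustration of tooling that already accounts for these pitfalls, here is a minimal sketch using Benchmark.js (the library behind jsPerf). It assumes the library is loaded on the page, and method names may differ slightly between versions:

// Sketch only: compare two snippets with Benchmark.js and report ops/sec.
var suite = new Benchmark.Suite();

suite
  .add('String#indexOf', function () {
    'Hello World!'.indexOf('o') > -1;
  })
  .add('RegExp#test', function () {
    /o/.test('Hello World!');
  })
  .on('cycle', function (event) {
    console.log(String(event.target)); // e.g. "String#indexOf x 12,345,678 ops/sec ±0.50%"
  })
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').pluck('name'));
  })
  .run({ 'async': true });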

Profiling

The Chrome Developer Tools have good support for JavaScript profiling. You can use this feature to detect which functions are eating up most of your time, so that you can then go and optimize them. This is important, as even small changes to your codebase can have serious impacts on your overall performance.

Profiles panel in Chrome Developer Tools.

Profiling starts with obtaining a baseline for your code’s current performance, which can be discovered using the Timeline. This will tell us how long our code took to run. The Profiles tab then gives us a better view into what’s happening in our application. The JavaScript CPU profile shows us how much CPU time is being used by our code, the CSS selector profile shows us how much time is spent processing selectors and Heap snapshots show how much memory is being used by our objects.

Using these tools, we can isolate, tweak and reprofile to gauge whether changes we’re making to specific functions or operations are improving performance.

The Profiles tab gives you information about your code’s performance.

For a good introduction to profiling, read JavaScript Profiling With The Chrome Developer Tools, by Zack Grossbart.

Tip: Ideally, you want to ensure that your profiling isn’t being affected by extensions or applications you’ve installed, so run Chrome using the --user-data-dir <empty_directory> flag. Most of the time, this approach to optimization testing should be enough, but there are times when you need more. This is where V8 flags can be of help.

Avoiding Memory Leaks — Three Snapshot Techniques for Discovery

Internally at Google, the Chrome Developer Tools are heavily used by teams such as Gmail to help us discover and squash memory leaks.

Memory statistics in Chrome Developer Tools.

Some of the memory statistics that our teams care about include private memory usage, JavaScript heap size, DOM node counts, storage clearing, event listener counts and what’s going on with garbage collection. For those familiar with event-driven architectures, you might be interested to know that one of the most common issues we used to have were listen()’s without unlisten()’s (Closure) and missing dispose()’s for objects that create event listeners.

Luckily the DevTools can help locate some of these issues, and Loreena Lee has a fantastic presentation available documenting the “3 snapshot” technique for finding leaks within the DevTools that I can’t recommend reading through enough.

The gist of the technique is that you record a number of actions in your application, force a garbage collection, check if the number of DOM nodes doesn’t return to your expected baseline and then analyze three heap snapshots to determine if you have a leak.

Memory Management in Single-Page Applications

Memory management is quite important when writing modern single-page applications (e.g. AngularJS, Backbone, Ember) as they almost never get refreshed. This means that memory leaks can become apparent quite quickly. This is a huge trap on mobile single-page applications, because of limited memory, and on long-running applications like email clients or social networking applications. With great power comes great responsibility.

There are various ways to prevent this. In Backbone, ensure that you always dispose of old views and references using dispose() (currently available in Backbone (edge)). This function was recently added; it removes any handlers added in the view’s ‘events’ object, as well as any collection or model listeners where the view is passed as the third argument (callback context). dispose() is also called by the view’s remove(), taking care of the majority of basic memory cleanup needs when the element is cleared from the screen. Other libraries, like Ember, clean up observers when they detect that elements have been removed from view, to avoid memory leaks.
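
For Backbone versions without dispose(), the same cleanup can be done by hand with stable APIs. Below is a rough sketch with a hypothetical close() helper; the exact method names (dispose(), stopListening() and so on) vary by version:

// Sketch only: a view that knows how to tear itself down.
var UserView = Backbone.View.extend({
    initialize: function () {
        // Bind with the view as context so the listener can be removed later.
        this.model.on('change', this.render, this);
    },
    render: function () {
        // ... render this.model into this.$el ...
        return this;
    },
    close: function () {
        this.model.off(null, null, this); // drop model listeners bound to this view
        this.undelegateEvents();          // drop DOM handlers from the 'events' hash
        this.remove();                    // remove this.$el from the document
    }
});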

Some sage advice from Derick Bailey:

“Other than being aware of how events work in terms of references, just follow the standard rules for managing memory in JavaScript and you’ll be fine. If you are loading data into a Backbone collection full of User objects and you want that collection to be cleaned up so it’s not using any more memory, you must remove all references to the collection and the individual objects in it. Once you remove all references, things will be cleaned up. This is just the standard JavaScript garbage collection rule.”

In his article, Derick covers many of the common memory pitfalls when working with Backbone.js and how to fix them.

There is also a helpful tutorial available for debugging memory leaks in Node by Felix Geisendörfer worth reading, especially if it forms a part of your broader SPA stack.

Minimizing Reflows

When a browser has to recalculate the positions and geometry of elements in a document in order to re-render it, we call this reflow. Reflow is a user-blocking operation in the browser, so it’s helpful to understand how to reduce reflow time.

Chart of reflow time.

You should batch methods that trigger reflow or repaint, and use them sparingly. It’s important to do as much processing off-DOM as possible. This is possible using DocumentFragment, a lightweight document object. Think of it as a way to extract a portion of a document’s tree, or to create a new “fragment” of a document. Rather than constantly adding nodes to the DOM, we can use a document fragment to build up everything we need and then perform only a single insertion into the DOM, avoiding excessive reflow.

For example, let’s write a function that adds 20 divs to an element. Simply appending each new div directly to the element could trigger 20 reflows.

function addDivs(element) {
  var div;
  for (var i = 0; i < 20; i ++) {
    div = document.createElement('div');
    div.innerHTML = 'Heya!';
    element.appendChild(div);
  }
}

To work around this issue, we can use a DocumentFragment and append each of our new divs to it instead. When the fragment is finally appended to the element with appendChild, all of the fragment’s children are moved into the element, triggering only a single reflow.

function addDivs(element) {
  var div; 
  // Creates a new empty DocumentFragment.
  var fragment = document.createDocumentFragment();
  for (var i = 0; i < 20; i ++) {
    div = document.createElement('div');
    div.innerHTML = 'Heya!';
    fragment.appendChild(div);
  }
  element.appendChild(fragment);
}

You can read more about this topic at Make the Web Faster, JavaScript Memory Optimization and Finding Memory Leaks.

JavaScript Memory Leak Detector

To help discover JavaScript memory leaks, two of my fellow Googlers (Marja Hölttä and Jochen Eisinger) developed a tool that works with the Chrome Developer Tools (specifically, the remote inspection protocol), and retrieves heap snapshots and detects what objects are causing leaks.

A tool for detecting JavaScript memory leaks.

There’s a whole post on how to use the tool, and I encourage you to check it out or view the Leak Finder project page.

Some more information: In case you’re wondering why a tool like this isn’t already integrated with our Developer Tools, the reason is twofold. It was originally developed to help us catch some specific memory scenarios in the Closure Library, and it makes more sense as an external tool (or maybe even an extension if we get a heap profiling extension API in place).

V8 Flags for Debugging Optimizations & Garbage Collection

Chrome supports passing a number of flags directly to V8 via the --js-flags flag, to get more detailed output about what the engine is optimizing. For example, this traces V8 optimizations:

"/Applications/Google Chrome/Google Chrome" --js-flags="--trace-opt --trace-deopt"

Windows users will want to run chrome.exe --js-flags="--trace-opt --trace-deopt"

When developing your application, the following V8 flags can be used.

  • --trace-opt – logs the names of optimized functions and shows where the optimizer is skipping code because it can’t figure something out.
  • --trace-deopt – logs a list of code it had to deoptimize while running.
  • --trace-gc – logs a tracing line on each garbage collection.

V8’s tick-processing scripts mark optimized functions with an * (asterisk) and non-optimized functions with ~ (tilde).

If you’re interested in learning more about V8’s flags and how V8’s internals work in general, I strongly recommend looking through Vyacheslav Egorov’s excellent post on V8 internals, which summarizes the best resources available on this at the moment.

High-Resolution Time and Navigation Timing API

High Resolution Time (HRT) is a JavaScript interface that provides the current time in sub-millisecond resolution and isn’t subject to system clock skew or user adjustments. Think of it as a way to measure time more precisely than we previously could with new Date and Date.now(). This is helpful when we’re writing performance benchmarks.

High Resolution Time (HRT) provides the current time in sub-millisecond resolution.

HRT is currently available in Chrome (stable) as window.performance.webkitNow(), but the prefix is dropped in Chrome Canary, making it available via window.performance.now(). Paul Irish has written more about HRT in a post on HTML5Rocks.
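
For benchmarks that need to run in both cases, a small feature-detecting wrapper is enough. This is a sketch that falls back to Date when neither version is available:

// Sketch: pick performance.now(), the prefixed webkitNow(), or Date as a fallback.
var now = (function () {
    var perf = window.performance || {};
    var fn = perf.now || perf.webkitNow;
    if (fn) {
        return function () { return fn.call(perf); };
    }
    return function () { return +new Date(); };
}());

var start = now();
// ... code under test ...
console.log((now() - start) + ' ms');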

So, we now know the current time, but what if we wanted an API for accurately measuring performance on the web?

Well, one is now also available in the Navigation Timing API. This API provides a simple way to get accurate and detailed time measurements that are recorded while a webpage is loaded and presented to the user. Timing information is exposed via window.performance.timing, which you can simply use in the console:

Timing information is shown in the console.

Looking at the data above, we can extract some very useful information. For example, network latency is responseEnd-fetchStart, the time taken for a page load once it’s been received from the server is loadEventEnd-responseEnd and the time taken to process between navigation and page load is loadEventEnd-navigationStart.
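
In code, those measurements look something like the sketch below; run it after the load event so that loadEventEnd has been populated:

var t = window.performance.timing;

var latency      = t.responseEnd - t.fetchStart;
var pageLoadTime = t.loadEventEnd - t.responseEnd;
var totalTime    = t.loadEventEnd - t.navigationStart;

console.log('Network latency: ' + latency + 'ms');
console.log('Page load (after response): ' + pageLoadTime + 'ms');
console.log('Navigation to load event: ' + totalTime + 'ms');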

As you can see above, a performance.memory property is also available that gives access to JavaScript memory usage data, such as the total heap size.

For more details on the Navigation Timing API, read Sam Dutton’s great article Measuring Page Load Speed With Navigation Timing.

about:memory and about:tracing

about:tracing in Chrome offers an intimate view of the browser’s performance, recording all of Chrome’s activities across every thread, tab and process.

about:tracing offers an intimate view of the browser’s performance.

What’s really useful about this tool is that it allows you to capture profiling data about what Chrome is doing under the hood, so you can properly adjust your JavaScript execution, or optimize your asset loading.

Lilli Thompson has an excellent write-up for game developers on using about:tracing to profile WebGL games. It’s also useful for general JavaScript developers.

Navigating to about:memory in Chrome is also useful as it shows the exact amount of memory being used by each tab, which is helpful for tracking down potential leaks.

Conclusion

As we’ve seen, there are many hidden performance gotchas in the world of JavaScript engines, and no silver bullet available to improve performance. It’s only when you combine a number of optimizations in a (real-world) testing environment that you can realize the largest performance gains. But even then, understanding how engines interpret and optimize your code can give you insights to help tweak your applications.

Measure it. Understand it. Fix it. Rinse and repeat.

Measuring. Image source: Sally Hunter.

Remember to care about optimization, but stop short of micro-optimizing at the cost of convenience. For example, some developers opt for .forEach and Object.keys over for and for..in loops, even though they’re slower, for the convenience of the function scope they provide. Do make sanity calls on which optimizations your application absolutely needs and which ones it could live without.

Also, be aware that although JavaScript engines continue to get faster, the next real bottleneck is the DOM. Reflows and repaints are just as important to minimize, so remember to only touch the DOM if it’s absolutely required. And do care about networking. HTTP requests are precious, especially on mobile, and you should be using HTTP caching to reduce the size of assets.

Keeping all of these in mind will ensure that you get the most out of the information from this post. I hope you found it helpful!

Credits

This article was reviewed by Jakob Kummerow, Michael Starzinger, Sindre Sorhus, Mathias Bynens, John-David Dalton and Paul Irish.

Image source of picture on front page.



Addy Osmani is a Developer Programs Engineer on the Chrome team at Google. A passionate JavaScript developer, he has written open-source books like 'Learning JavaScript Design Patterns' and 'Developing Backbone Applications', having also contributed to open-source projects like Modernizr and jQuery. He is currently working on 'Yeoman' - an opinionated workflow for building beautiful applications.

  1. 1

    Excellent article! Will become very handy when I’m playing with box2dweb and other cool processor intensive stuff. I also wasn’t aware of the “DocumentFragment” option in the DOM. Very handy!

    0
  2. 3

    Great article! As a web developer who is just now getting into the front-end and trying to learn JavaScript, this is very helpful. There are so many JavaScript resources out there (I guess that’s an advantage of being the most popular programming language in the world), but there are not many that focus on performance, especially to this level of depth. Thanks again!

    0
  3. 4

    Really useful article, thanks.

    About variable scope, is the second example with largeStr a safe garbage collection?
    How does the compiler knows that largeStr won’t actually be used?
    What if, for instance, the inner function was using eval to calculate the variable name?

    0
    • 5

      >What if, for instance, the inner function was using eval to calculate the variable name?

      Do you have a code example of what you had in mind?

      0
      • 6

        Hey Addy,

        Modifying the code to the following makes the necessity of the variables ambiguous:

        var a = function () {
            var smallStr = 'x',
                largeStr = new Array(10).join('x');
            return function (n) {
                return eval(n + "Str");
            };
        }();

        Though in this case, v8 barely attempts any optimization due specifically to the presence of eval. So to answer Val’s question, it is safe because V8 knows it is safe… and if it can’t know it doesn’t bother.

        0
        • 7

          Yep, that’s what I was talking about.
          I guess there’s a useful lesson here: eval breaks engine optimisations for the scope where it’s used.

          0
  4. 8

    Wow, it’s nice to see thorough articles like this every now and then. Great job!

    One question though, you mentioned that having something like the memory leak finder would be a better fit as an external tool, rather than being built into Chrome – Why is that? I think it would make an excellent addition to the existing tools.

    0
    • 9

      So, the reason this isn’t a part of the DevTools is two fold. It was originally developed to help catch some specific memory scenarios in the Closure Library and it makes more sense as an external tool as it’s broader usefulness is still being evaluated. If there were more developers using the tool, the case for it being in there natively would certainly be stronger I imagine :)

      0
  5. 10

    Extremely useful article, only briefly read it just now but i’ll definitely read it in full later!

    0
  6. 11

    This is very informative about the JavaScript. Being a front end developer, I always try to know how to optimize the performance of the JavaScript. This article really points out small tips about the JavaScript performance.

    Thanks Addy.

    0
  7. 12

    Addy…great article! Thanks very much for sharing it with us!

    I had a question – I am returning a bunch of JSON objects from the server which are then stored in an array memory and used like a local cache.

    I run these objects through a helper function to make sure they all have the same keys (some of them are missing a few keys for whatever reason) – should I be creating new objects so they’ll all have the same hidden class in V8? Or does it really matter?

    The objects all have identical keys and the values are all fairly simple – ints, booleans, strings and, in one or 2 cases, arrays – but no methods.

    Thanks again!

    0
    • 13

      >I run these objects through a helper function to make sure they all have the same keys (some of them are missing a few keys for whatever reason) – should I be creating new objects so they’ll all have the same hidden class in V8? Or does it really matter?

      I’m not entirely sure what your helper function is outputting, but I would avoid the overhead of creating new object instances just for the sake of it. I’m sure someone will correct me if I’m wrong, but V8 should be able to optimize your current setup fine as long as you’re not unnecessarily modifying the structure of your local array cache.

      0
      • 14

        I’d say it would depend on what Adam is doing with the objects that get returned from the helper function. But returning objects of different shapes will deopt any previous inline caching of the function and if he has some code where the missing key is accessed and operated on(ie Obj1[sometimesMissing] /*number*/ + Obj2[sometimesMissing] /* undefined*/) this will cause v8 to have to use the slower interpreter rather than the JIT compiler.

        0
  8. 15

    AFAIK, JQuery HTML constructors use document fragments behind the scenes. See http://www.bennadel.com/blog/2281-jQuery-Appends-Multiple-Elements-Using-Efficient-Document-Fragments.htm .

    0
    • 16

      jQuery does indeed use DocumentFragments behind the scenes. That said, I found that in my test cases while writing this article that using fragments directly appeared to offer performance improvements even though jQuery should technically be using a similar approach for its constructors. It may be localized to my tests, so as mentioned, be sure to benchmark your own code to be certain :)

      0
  9. 17

    Addy, thanks for the great post. Specially with the tests on documentFragment.
    I did notice that line 12 of ModuleD should be var td, tr. Correct?

    0
    • 18

      Thanks! We actually corrected this a few hours ago but the caching might be taking a while to kick in. Hopefully the right version will be up soon :)

      0
  10. 19

    Great article with tremendous amounts of valuable information.
    Thank you for sharing all this!

    0
  11. 21

    Hi Addy,
    Your performance test for Module Pattern vs. Prototypal pattern is not an accurate test in that it’s defining the class in every iteration. This is not realistic. The use of a class is that you define it once and you can create as many instances of that class as you need.

    In this modified version of your test, the Prototypal pattern is the fastest means of instantiating a new instance of a class in Javascript. http://jsperf.com/prototypal-performance/11

    0
    • 22

      Hey Luke,

      Thanks a lot for your comment! In your test it appears that the prototypal pattern is the fastest approach in Chrome, but may require further testing to verify if this holds true for other browsers (e.g in the results, FF16 actually appears to favor the module pattern with cached functions a little better). Will review and test further to confirm, but I’ll update the text if we can verify its the fastest option.

      Cheers!
      Addy

      0
      • 23

        Hi Addy,

        The test is still apples to oranges because neither module pattern constructors should be called with new because they are returning objects. Calling them with new will (should? might?) create a throw away ‘this’ object before the return, so they are doing twice the work or more.

        I adjusted the benchmark accordingly, and the results favor module pattern with cached functions. http://jsperf.com/prototypal-performance/12

        0
        • 24

          Aha. Another good catch. I’m relieved to see the original assertion about the cached functions variation of the module pattern holds. Article and tests updated. Thanks, Luke!

          0
          • 25

            Addy,
            Your test of prototype versus module pattern performance doesn’t consider one important real-world scenarios aspect: created object state. If you modify your module/constructor function to take an input parameter that should be accessible from within ‘foo’ and ‘bar’ functions of the module, then using ‘cached’ functions becomes troublesome. One way to make them work is to bind them, but it obviously comes at a price.
            The benchmark is here: http://jsperf.com/prototypal-performance/18

            0
  12. 26

    So coding in Javascript is getting more complex than doing C++ coding…

    0
    • 27

      Complexity, or in this case, writing code that is memory efficient is a problem which certainly isn’t exclusive to C++ :)

      0
  13. 28

    jQuery already optimizes DOM creation using document fragments. If you’re using jQuery, it is not necessary to use documentFragments like this article suggests.

    For the setTimeout and setInterval tip, it’s worth noting that JavaScript has clearTimeout and clearInterval methods which are the correct way to cancel/clear a timeout/interval. Newbies might not know these methods exist since they weren’t mentioned in the article. Here’s an example:

    var a = setTimeout(function () { alert('timeout fired'); }, 10000); // 10 second delay
    clearTimeout(a);

    0
  14. 30

    I wonder if largeStr will really be garbage collected in this case:

    var a = function () {
        var smallStr = 'x';
        var largeStr = new Array(1000000).join('x');
        return function (n) {
            return smallStr;
        };
    }();

    because you could have code like that:

    var a = function () {
        var smallStr = 'x';
        var largeStr = new Array(1000000).join('x');
        return function (n) {
            return eval(n);
        };
    }();

    and it allows you to access any variable in scope so the compiler cannot infer which variable can be collected. This example uses explicit eval but it too can be hidden.

    0
    • 31

      Upon detecting the presence of eval, v8 does not perform many common operations, including invoking the GC on likely garbage. So in the article’s example it will be GC’d, but only because v8 is smart.

      0
  15. 32

    Great article! Question, though – what are you talking about with this dispose() method for Backbone? There is no such method in Backbone. There are a few extensions on gitHub which provide a dispose() method, though. Were you referring to one of these?

    0
  16. 34

    I love this post. I’d really like to read it again when I get home tonight. Do you moderate your comments? I don’t get it.

    0
    • 35

      Please feel free to. On comments, I believe they get automatically approved and shown. I’m just replying to them as time allows :)

      0
  17. 36

    Thanks for the article ! It’s awesome, not quite done but had a quick question anyway.

    In the “DE-REFERENCING MISCONCEPTIONS” you mention
    delete o.x being bad
    What about
    o.x = undefined

    0
    • 37

      By setting o.x to undefined you nullify any assumptions v8 has made about the type of your variable. Such assumptions are fairly key to performance/Inline Caching/Type-specializing JIT compiler. Type-stability/writing C-like code is probably the best rule of thumb for creating fast JavaScript in new engines.

      0
  18. 38

    Re: de-referencing – Based on the hidden class docs I imagine the described benefit of avoiding `delete` only applies when the property is expected to be reset at some point later. Is that right? What exactly is a “generic slow object”? My interpretation is that using `delete` is like de-initializing the property (and its hidden class) but that setting it to `null` or `undefined` frees memory of the prop’s data but keeps the prop’s hidden class in memory. Am I understanding this correctly?

    0
    • 39

      Vyacheslav Egorov

      November 6, 2012 9:18 am

      “generic slow object” refers to the way V8 stores and accesses object properties.

      Fast objects are those that are stored in a way similar to that used by more static languages like Java: a linear sequence of fields plus a reference to a hidden class that describes layout. Properties of fast objects are accessed through offsets on monomorphic paths in optimized code and no lookup overhead is involved. This representation is compact and fast when accessed in monomorphic way.

      Slow objects are those that have their properties stored in the hash map. Every time you are accessing a property of a slow object V8 does a hash lookup to find it. This representation is bloated and slow (compared to fast one).

      When you delete a property from an object V8 will convert the object from fast mode (if it was in fast mode) to slow mode. This will negatively impact both the memory usage and performance of property accesses.

      0
  19. 40

    Any jsperf sample to support your proposal of “DE-REFERENCING MISCONCEPTIONS”? There should be a reason why “delete” exists in js.

    0
    • 41

      There absolutely is a reason “delete” exists in JavaScript (see http://perfectionkills.com/understanding-delete/ for a great, in-depth article about it from @Kangax). “delete” is valid for removing a key from a map but you need to be careful about changing the structure of “hot” objects (which “delete” does do) as it’s harder for V8 to optimize those cases. Also see my response to the question from Amit for more on this :)

      0
      • 42

        `delete` still has its purpose. I am working on a mobile hybrid app project that involves a lot of canvas manipulation. I used `delete` in a very tight loop that generates a lot of canvas masks. Without delete, it would simply crash quite often on those awful Android tablets. I wish I could throw one of those Android craps out of the window down to the swimming pool ;-)

        And coming from native app developer background using Objective-C, I am very particular about being memory efficient, how I can tell the compiler at which particular point that the program should release the memory with immediate effect.

        So `delete` will still have its own stand.

        You should probably be more specific like what kangax wrote his case about instead of generalising everything.

        0
  20. 43

    Excellent article Addy. I am using many delete’s in my code. Can you talk more about why using delete operator is not a good idea? How does it harm internal hidden classes? Any example to elaborate on this?

    0
    • 44

      Sure thing. I can’t comment about other JavaScript engines, but when you delete elements (e.g in an array) it makes the key set sparse and can lead to V8 switching elements to dictionary mode (i.e not fast mode). Even if only a single element is deleted from an otherwise “full” array, things will be much slower. This demonstrates the effect: http://jsperf.com/packed-vs-holey-arrays (mentioned in the article above).

      0
  21. 45

    Addy, can you share references to your source material for this article?

    0
  22. 47

    Oh, I can’t agree with some of this.
    1. About what “Wix” said: the largeStr will not be collected.
    2. Objects are faster than arrays, because arrays in JavaScript are not real arrays.

    0
  23. 48

    Now this was a fantastic article! Thanks!

    0
  24. 49

    very useful article… Thanks!

    0
  25. 50

    Very nice. A lot of this carries over into other languages as well (C# tends to spring to mind, especially in many of the bits about garbage collection).

    JavaScript is an especially hard language for many people to grasp in terms of how it works internally, (a) because implementations vary from browser to browser, and (b) because it’s so loose in terms of how it views scope and type that at times, it’s not always straightforward to understand in terms of how it handles persistence.

    0
  26. 51

    I would think an even more optimized module for your table question would not use frag2, as you aren’t inserting anything to the live DOM where you use it. This seems to be the case as shown here:
    http://jsperf.com/first-pass/7
    (dDTab function inserted (drawDocumentFragTable without frag2) and ModuleC changed to a version of ModuleD which instead calls dDTab)

    0
  27. 52

    AWESOME. Thanks so much! Keep on publishing man. This article and your one on MVC have been sooo helpful. I had no clue how much potential gain can come from cached functions in the module pattern!

    0
  28. 53

    Other than a few typos, a most excellent article; thank you Addy!

    add(‘a’,”b’); // extra apostrophe

    “an helpful” -> “a helpful”

    “or that repaints” -> “or that repaint” or “or repaint”

    PS: Monomorphic variables seem to offer extremely little performance improvement (under 2%) – http://jsperf.com/monomorphic-variable-performance

    0
    • 54

      Thanks Dan! I’ve corrected the typos you mentioned. I’ll review the perf test you shared in a little more depth soon.

      0
    • 55

      Monomorphism specifically gives you a benefit in repeated property access or function calls, neither of which are demonstrated in that test. For monomorphic objects, v8 can convert to flat fields with fast access rather than normal property lookup. As well, function calls can be fully inlined if called with static types.

      0
  29. 56

    Addy, Thanks for the great post. Especially the background of V8 engine which you have given before explaining the valuable information of Js code optimization .

    0
  30. 57

    it would be fantastic if smashing magazine enable printer friendly version of writings like this

    0
  31. 58

    @mustafa, I agree. Most of the time, I have to print such great article such as this and read it on the home way since I am taking the subway as I do not have time to read at work.

    Actually, just found out that if you open this in chrome and try printing it, browser will render it nicely and would let you print it the way you wanted to see it.

    0
  32. 59

    Hi Addy,

    What would be the correct way to do a “self invoking timer” that would tick forever?

    0
  33. 60

    Nice article. One thing that could be made clearer, though, is the closures section. The examples given and their descriptions seem to suggest that the variable closed upon actually needs to be returned or otherwise directly accessible to the outside in order to remain in use and uncollected by the GC. But that’s not really necessary. For instance:

    var someVar = (function () {
      var innerThing = [];
      return function () {
        innerThing.push(1);
        return true;
      };
    })();

    You can’t access innerThing from the outside at all, but it persists and can’t be garbage-collected, because the function object referred to by someVar contains a reference to it. In other words, it doesn’t have to do with whether you “can/can’t access” it, it has to do with whether some other object, one that can’t itself yet be garbage-collected, still holds a reference to it.

  34. 61

    Rodrigo Alves Vieira

    November 8, 2012 9:19 am

    You can also make your code more memory-efficient by caching the length inside JavaScript loops.

    As explained here http://coding.smashingmagazine.com/2012/11/05/writing-fast-memory-efficient-javascript/
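
    For anyone skimming the comments, the technique referred to is roughly this (a minimal sketch, assuming a browser environment):

    var items = document.querySelectorAll('li');

    // Read the length once instead of re-evaluating it on every iteration.
    for (var i = 0, len = items.length; i < len; i++) {
      console.log(items[i].textContent);
    }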

  35. 62

    Hey Addy,

    Great article. I was just wondering if you have a source for global variables not being cleaned by the garbage collector.

    I tried to reproduce this on Chrome 23, and it seems like the garbage collector does in fact clean up de-referenced objects. I have the experiment here: https://gist.github.com/4035423

    Any clarification would be greatly appreciated.
    Thanks,
    Daniel

  36. 63

    “For example, when you need vectors, don’t define a class with properties x, y, z; use an array instead.”

    It seems this is only faster in Chrome. In Opera (12) it doesn’t really matter and in Firefox (16) and Safari (6) an object with x and y properties is faster.
    See: http://jsperf.com/properties-vs-indices
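
    The two representations under discussion look roughly like this (a sketch, not the exact jsperf code):

    // Named properties: reads well, and per the jsperf above was the faster
    // lookup in Firefox 16 and Safari 6 at the time.
    var vObj = { x: 1, y: 2, z: 3 };
    var lenObj = Math.sqrt(vObj.x * vObj.x + vObj.y * vObj.y + vObj.z * vObj.z);

    // Array indices: what the article suggests, faster in Chrome in these tests,
    // and convenient to hand straight to WebGL APIs.
    var vArr = [1, 2, 3];
    var lenArr = Math.sqrt(vArr[0] * vArr[0] + vArr[1] * vArr[1] + vArr[2] * vArr[2]);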

    • 64

      As you basically said, I think the speed comparison of performing actions on an array vs object is debatable. So instead, I think of it this way:

      1) It’s all about context. `xyz` is easier to read, in my opinion, but if you’re doing WebGL, arrays can be passed directly to WebGL functions without having to create temporary objects or “convert”.
      2) Never make a “Vec3” class, as the `this` lookup is apparently slower: http://jsperf.com/new-vecA3-vs-duck

      I like your perf test, but it’s good to keep in mind that there are two (maybe more?) aspects to measure: property lookup speed (your test) and creation speed (my test).

  37. 67

    This is a fantastic collection of best practices and how to make the most of JavaScript. This is a must read if you work with JavaScript or are interviewing front-end devs. Thanks for sharing!

  38. 68

    Addy, thank you for writing this article, and that the points you made were accompanied by “hard data”!

    tl;dr: Is there a way to use “private” variables with your cached module pattern?

    I was reviewing the section on the module pattern, and got excited about cached functions being just as fast in V8 as prototypes. But then I realized that the cached functions give you no way to use “private” variables, which to me is the primary benefit of using a module pattern variant. I edited your perf test (I added Klass5 & 6) (http://jsperf.com/prototypal-performance/26) to examine approaches using bind, and am curious if you have a solution that allows for cached functions _and_ private variables.

  39. 69

    Sour grapes, but… the whole “detect that an element has been removed and unbind” approach has been in JMVC for 4 years, and it’s a big and heavily promoted feature of CanJS.

  40. 70

    Thank you Addy.
    Always excellent articles and good ideas.

  41. 71

    Guilherme Medeiros

    November 27, 2012 6:18 pm

    Most required article of 2012.
    Thanks Addy!

  42. 72

    This is a really useful article. I hoped I would find here the answer to my current dilemma, custom events vs. the observer pattern: http://stackoverflow.com/questions/13609994/performance-of-custom-events-and-observer-pattern-in-node-js. I think it’s a good topic for some research…

  43. 73

    With due respect, your rationalizations in support of the Chrome browser/JavaScript engine are absurd. Google decided to model its JavaScript browser (Chrome) on the success of its rapid URL look-up algorithms. In so doing, the Chrome browser randomly spawns copies of itself, making the execution of JavaScript-based website software precarious and super slow compared to the same code running on any of the other browsers. My question is: why did Google make such an egregious design mistake, and is there any way to rectify the problem of Chrome randomly spawning copies of itself?
    Thanks
    Bob Rogan

  44. 74

    Hi Addy,

    Excellent article.

    A small issue: someone created a revision (I think it’s the benchmark you wanted to post) for the “second pass”:

    http://jsperf.com/second-pass/3

    The link you posted doesn’t make sense with the // modG.init() line commented out.

  45. 75

    I’d like to ask: what does “to bail out” mean in this context? I’m not a native speaker and I’m confused by the meaning of that phrase. Can somebody explain it to me, or rather rewrite these paragraphs, please?

    this one:
    V8 supports deoptimization, meaning the optimizing compiler can bail out of code generated if it discovers that some of the assumptions it made about the optimized code were too optimistic.

    and this one:
    Certain patterns will cause V8 to bail out of optimizations. A try-catch, for example, will cause such a bailout. For more information on what functions can and can’t be optimized, you can use --trace-opt file.js with the d8 shell utility that comes with V8.

    THANK YOU!

    • 76

      It means giving up on optimizations.

      In the first example, if a function is optimized under the assumption that the second argument passed is always an integer, and you then pass something else, that function will be deoptimized. However, V8 doesn’t completely give up on the function yet; it can later be optimized again under better assumptions.

      In the second example, V8 permanently gives up on optimizing the function because it contains unsupported syntax such as try-catch. A function that contains a try-catch will not be optimized.
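
      A common workaround from that era (a sketch, not something from the article) is to keep the try-catch in a tiny wrapper so only the wrapper stays unoptimized:

      // The hot loop contains no try-catch, so it remains optimizable.
      function parseAll(lines) {
        var results = [];
        for (var i = 0; i < lines.length; i++) {
          results.push(tryParse(lines[i]));
        }
        return results;
      }

      // Only this small wrapper is flagged as unoptimizable.
      function tryParse(line) {
        try {
          return JSON.parse(line);
        } catch (e) {
          return null;
        }
      }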

  46. 77

    Excellent article — thanks!

    I’m not sure that I saw our current local subject of debate discussed, though: the use of inner functions (whether they’re actual closures or not) vs. object properties, e.g.

    var myObj = {
      methodProp: function () {
        var x, y;
        function doSomethingIOnlyNeedHere(a, b) {
          return a - b * 2;
        }
        return doSomethingIOnlyNeedHere(y, x);
      }
    };

    as opposed to

    var myObj = {
      _pseudoPrivateDoSomething: function (a, b) {
        return a - b * 2;
      },
      methodProp: function () {
        var x, y;
        return this._pseudoPrivateDoSomething(y, x);
      }
    };

    I’d argue that data privacy and the rule/pattern “hide by policy, reveal by necessity” support the first variant, but a coworker argues that testability and memory-usage concerns make the second variant more desirable.

    What do you say?

    Thanks!

    Howard

    • 78

      Just because the underscore prefix doesn’t actually hide the properties at runtime doesn’t mean it hasn’t achieved the goal. In most languages it is possible to access “hidden” properties at runtime, e.g. through reflection or by reading memory directly, yet those languages have clearly achieved the same goal.

      The second variant when used with constructors also enables much better code reuse and usability since the methods won’t be tied to concrete closures but remain abstract and generic.

      And yes, having to allocate N JSFunctions when you create an instance is slower than not having to allocate anything but the object itself. N=25 jsperf http://jsperf.com/prtorpropr

  47. 79

    Can someone explain why the document-fragment code doesn’t result in table rows where each row has one more cell than the row above it? Does adding the fragment to something else somehow clear it out for reuse?

  48. 80

    Excellent article. I am trying to find out the performance overhead of an associative array with long key names. Suppose I have an array with 2000 keys and some of those are very long names, like:
    myArray['thisisverylongkeyname'] = 'some value';

    My general feeling is that lookups with shorter key names will be faster, i.e.:
    myArray['shName'] = 'some value'; should be faster than the above.

    Is it true, or does it not matter in a browser like Chrome? Is there an article or documentation to prove my point or otherwise?

    Thanks
    Neeraj

    • 81

      First of all, if you have 2000 such keys in an object, you are conceptually using a dictionary, so you should use something like a Map, OrderedMap or SortedMap, depending on your needs. There can be subtle bugs when using objects as dictionaries, because arbitrary keys are not possible: for example, ?__proto__ will (or used to) crash the Express body parser. At the very least, use hasOwnProperty.

      In V8, having that many properties turns an object into dictionary mode, and lookup speed is indeed proportional to key length at that point. However, strings can be internalized and also cache their hash code, so it’s not as clear-cut.
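
      To make the two suggestions concrete, a sketch (Map is ES6, so newer than this article):

      // Guard object-as-dictionary lookups, so inherited keys such as
      // 'constructor' or '__proto__' can't masquerade as stored entries.
      var dict = {};
      dict['thisisverylongkeyname'] = 'some value';

      if (Object.prototype.hasOwnProperty.call(dict, 'constructor')) {
        // never reached: 'constructor' is inherited, not an own key
      }

      // Or use a real Map, which accepts arbitrary string keys safely.
      var map = new Map();
      map.set('thisisverylongkeyname', 'some value');
      console.log(map.get('thisisverylongkeyname')); // 'some value'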

  49. 82

    Is it just me, or does the link below not work?

    https://docs.google.com/presentation/d/1wUVmf78gG-ra5aOxvTfYdiLkdGaR9OhXRnOlIcEmu2s/pub?start=false&loop=false&delayms=3000#slide=id.g1d65bdf6_0_0

    from the section –

    “Luckily the DevTools can help locate some of these issues, and Loreena Lee has a fantastic presentation available documenting the “3 snapshot” technique for finding leaks within the DevTools that I can’t recommend reading through enough.”

  50. 83

    Hi,

    If possible, please see this post:
    http://www.bennadel.com/blog/1482-A-Graphical-Explanation-Of-Javascript-Closures-In-A-jQuery-Context.htm

    A lot of jQuery code is written like that, and I have done it many times myself. Does it all stay in memory until the user closes their browser?

    Expecting your reply. Thanks!

  51. 84

    Please see this stack overflow post as well.

    http://codereview.stackexchange.com/questions/19686/will-this-kind-of-codes-leak-memory

    This is a very common use case that you could find among many developers. As per your article, this type of code will leak memory, right?

    Please do reply. I am really confused by the closure + jQuery scenario.

  52. 85

    Small correction — “Items in arrays aren’t able to be customized as heavily — they either exist or they don’t.” is false. Array elements are just properties with names which happen to be decimal numbers. You can certainly expect that “un-customized” numeric-named properties on arrays will be optimized for by the implementation, but there is no restriction against reconfiguring array properties as any other.

    var a = [];
    Object.defineProperty(a, '0', { get: function() { console.log('boo'); return 1; } });
    console.log(a.length, a[0]);

    • 86

      He is speaking from the point of view of the V8 implementation. Any customization of an indexed property will permanently put that object into a slow mode that it cannot exit. But this isn’t true for customization of a named property, for which you can define any kind of descriptor without a direct penalty.

  53. 87

    Eduardo F. Sandino

    October 2, 2013 12:28 pm

    Hello, I have added new mixed patterns for testing and the results are quite interesting for people that use Firefox.

    * Module Pattern + Cached Functions: 22,608 ops/sec ±0.72% (9% slower)
    * Module Pattern + Dynamic Functions: 12,516 ops/sec ±1.86% (50% slower)
    * Module Pattern + Anonymous Functions: 25,009 ops/sec ±1.30% (fastest)
    * Module Pattern + Anonymous Cached Functions: 22,827 ops/sec ±6.94% (14% slower)

    http://jsperf.com/javascript-module-pattern-test-cases

  54. 88

    This is an excellent article. I was having issues tearing down objects in my HTML5 game and this blog confirmed that my setInterval method was the issue. Thanks a lot!

  55. 89

    Great article. Nice read, very informative.

  57. 91

    Another problem people may face is not being able to add private methods onto the prototype chain. I have built a solution to this here: https://github.com/TremayneChrist/ProtectJS

    It allows you to add all of your methods to the prototype and then applies protection to the private ones so that they cannot be called from outside of the object.

  58. 92

    “Create objects using a constructor function. This ensures that all objects created with it have the same hidden class and helps avoid changing these classes. As an added benefit, it’s also slightly faster than Object.create()”

    Using supplied jsperf link:

    Object.create() using pre-defined property object: 715,068 ops/sec
    Constructor function: 157,804,384 ops/sec

    220 times difference? Yeah, I guess you could say that’s “slightly” faster.

    That’s in Chrome 34. Firefox 28 has a ‘mere’ 12x difference (with constructor function again in front).

    I would prefer to use Object.create() over constructor functions for the added flexibility, and while I understand that flexibility sometimes comes with a runtime cost, looking at those numbers I believe it’s fair to say: WTF?
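
    For reference, the two creation styles being benchmarked look roughly like this (a sketch, not the exact jsperf code):

    // Constructor function: every instance gets the same hidden class.
    function Person(name) {
      this.name = name;
    }
    Person.prototype.greet = function () {
      return 'Hi, ' + this.name;
    };
    var a = new Person('Ada');

    // Object.create() with a property-descriptor object: more flexible,
    // but far slower to construct in the tests quoted above.
    var proto = {
      greet: function () {
        return 'Hi, ' + this.name;
      }
    };
    var b = Object.create(proto, {
      name: { value: 'Ada', writable: true, enumerable: true, configurable: true }
    });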
