
this, is boomerang

boomerang always comes back, except when it hits something.

what?

boomerang is a piece of JavaScript that you add to your web pages, where it measures the performance of your website from your end user's point of view. It can send this data back to your server for further analysis. With boomerang, you find out exactly how fast your users think your site is.
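
For example, a typical setup looks something like this (a sketch; the beacon URL is a placeholder for whatever endpoint collects your data):

<script src="boomerang.js"></script>
<script>
// tell boomerang where to send its performance beacon
BOOMR.init({
  beacon_url: "http://yoursite.com/beacon"
});
</script>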

boomerang is open source and released under the BSD license, and we have a whole bunch of documentation about it.

how?

  • Use cases — Just some of the uses of boomerang that we can think of
  • How it works — A short description of how boomerang works internally
  • Help, bugs, code — This is where the community comes in
  • TODO — There's a lot that we still need to do. Wanna help?
  • Howto docs — Short recipes on how to do a bunch of things with boomerang
  • API — For all you hackers out there

who?

boomerang comes to you from the Exceptional Performance team at Yahoo!, aided by the Yahoo! Developer Network.

where?

Get the code from github.com/yahoo/boomerang.

Spine

Build Awesome JavaScript MVC Applications

MVC

The Model View Controller pattern is at the heart of Spine, and absolutely integral to modern JavaScript applications.

Simplicity

Spine is a simple and lightweight framework, and doesn't include a vast amount of complex widgets to configure and theme.

Documentation

Spine strives to have the best and friendliest documentation of any JavaScript framework available.

More information...

Alex MacCaw - Asynchronous UIs - the future of web user interfaces

November 16, 2011

It's an interesting time to be working on the frontend now. We have new technologies such as HTML5, CSS3, Canvas and WebGL; all of which greatly increase the possibilities for web application development. The world is our oyster!

However, there's also another trend I've noticed: web developers are still stuck in the request/response mindset. I call it the 'click and wait' approach - where every UI interaction results in a delay before another interaction can be performed. That's the process they've used their entire careers, so it's no wonder most developers are blinkered to the alternatives.

Speed matters; a lot. Or to be precise, perceived speed matters a lot. Speed is a critical and often neglected part of UI design, but it can make a huge difference to user experience, engagement and revenue.

  • Amazon: 100 ms of extra load time caused a 1% drop in sales (source: Greg Linden, Amazon).
  • Google: 500 ms of extra load time caused 20% fewer searches (source: Marissa Mayer, Google).
  • Yahoo!: 400 ms of extra load time caused a 5–9% increase in the number of people who clicked “back” before the page even loaded (source: Nicole Sullivan, Yahoo!).

Yet, despite all this evidence, developers still insist on using the request/response model. Even the introduction of Ajax hasn't improved the scene much, replacing blank loading states with spinners. There's no technical reason why we're still in this state of affairs; it's purely conceptual.

A good example of the problem is Gmail's 'sending' notification; how is this useful to people? What's the point of blocking? 99% of the time the email will be sent just fine.

Why oh why?

As developers, we should optimize for the most likely scenario. Behavior like this reminds me of Windows' balloon notifications, which were awesome for telling you about something you just did:

No shit sherlock

The solution

I've been working on this problem, specifically with an MVC JavaScript framework called Spine, implementing what I've dubbed asynchronous user interfaces, or AUIs. The key to this is that interfaces should be completely non-blocking. Interactions should be resolved instantly; there should be no loading messages or spinners. Requests to the server should be decoupled from the interface.

The key thing to remember is that users don't care about Ajax. They don't give a damn if a request to the server is still pending. They don't want loading messages. Users would just like to use your application without any interruptions.

The result

AUIs result in a significantly better user experience, more akin to what people are used to on the desktop than the web. Here's an example of an AUI Spine application with a Rails backend.


Notice that any action you take, such as updating a page, is completely asynchronous and instant. Ajax REST calls are sent off to Rails in the background after the UI has updated. It's a much better user experience.

Compare it to the static version of the same application, which blocks and navigates to a new page on every interaction. The AUI experience is a big improvement that will become even more noticeable in larger (and slower) applications.

Have a browse round the source to see what's going on, especially the main controller.

Not a silver bullet

It's worth mentioning here that I don't think this approach is a silver bullet for all web applications, and it won't be appropriate for all use cases. One example that springs to mind is credit-card transactions, something you'll always want to be synchronous and blocking. However, I do believe that AUIs are applicable in the vast majority of cases.

The other point I'd like to make is that not all feedback is bad. Unobtrusive feedback that's actually useful to your users is completely fine; like a spinner indicating when files have synced, or network connections have finished. The key thing is that feedback is useful and doesn't block further interaction.

The implementation

So how do you achieve these AUIs? There are a number of key principles:

  • Move state & view rendering to the client side
  • Intelligently preload data
  • Asynchronous server communication

Now, these concepts turn the existing server-driven model on its head. It's often not possible to convert a conventional web application into a client-side app; you need to set out from the get-go with these concepts in mind, as they involve a significantly different architecture.

Moving state to the client side is a huge subject and beyond the scope of this article. For more information on that, you might want to read my book, JavaScript Web Applications. Instead, in this post I want to focus on a specific part of AUIs: asynchronous server communication, or in other words, server interaction that's decoupled from the user interface.

The idea is that you update the client before you send an Ajax request to the server. For example, say a user updated a page name in a CMS. With an asynchronous UI, the name change would be immediately reflected in the application, without any loading or pending messages. The UI is available for further interaction instantly. The Ajax request specifying the name change would then be sent off separately in the background. At no point does the application depend on the Ajax request for further interaction.

For example, let's take a Spine Model called Page. Say we update it in a controller, changing its name:

page = Page.find(1)
page.name = "Hello World"
page.save()

As soon as you call save(), Spine will perform the following actions:

  1. Run validation callbacks and persist the changes to memory
  2. Fire the change event and update the user interface
  3. Send an Ajax PUT to the server indicating the change

Notice that the Ajax request to the server is sent after the UI has been updated; in other words, what we've got here is an asynchronous interface.
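
Step 2 relies on the UI subscribing to model events. In Spine that looks roughly like this (a sketch based on Spine's event API; render() is a hypothetical helper):

// re-render as soon as the record changes in memory -
// before the background Ajax request has even been sent
Page.bind("change", function(page) {
  render(page); // hypothetical function that updates the DOM
});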

Synchronizing state

Now this is all very well in principle, but I'm sure you're already thinking of scenarios where this breaks down. Since I've been working with these types of applications for a while, I can hopefully address some of these concerns. Feel free to skip the next few sections if you're not interested in the finer details.

Validation

What if server validation fails? The client thinks the action has already succeeded, so the user will be pretty surprised to subsequently learn that validation failed.

There's a pretty simple solution to this: client-side validation. Replicate server-side validation on the client side, performing the same checks. You'll always ultimately need server-side validation, since you can't trust clients. But, by also validating on the client you can be confident a request will be successful. Server-side validation should only fail if there's a flaw in your client-side validation.

That said, not all validation is possible on the client-side, especially validating the uniqueness of an attribute (an operation which requires DB access). There's no easy solution to this, but there is a discussion covering various options in Spine's documentation.
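
As a sketch of the idea, the same rules can be expressed once and run on the client before save() (the function and rules here are hypothetical, not Spine's actual validation API):

// shared validation rules, mirrored from the server
function validatePage(attrs) {
  var errors = [];
  if (!attrs.name) errors.push("name can't be blank");
  if (attrs.name && attrs.name.length > 255) errors.push("name is too long");
  return errors; // an empty array means the record is valid
}

var errors = validatePage({ name: "Hello World" });
if (errors.length === 0) {
  // safe to update the UI and queue the background request
}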

Network failures & server errors

What happens if the user closes their browser before a request has completed? This is fairly simple to resolve: just listen for the window.onbeforeunload event, check whether any Ajax requests are still pending, and if so, notify the user. Spine's Ajax documentation contains a discussion about this.

window.onbeforeunload = ->
  if Spine.Ajax.pending
    '''Data is still being sent to the server; 
       you may lose unsaved changes if you close the page.'''

Alternatively, if the server returns an unsuccessful response code, say a 500, or the network request fails, we can catch that in a global error handler and notify the user. Again, this is an exceptional event, so it's not worth investing too much developer time into. We can just log the event, inform the user and perhaps refresh the page to re-sync state.

ID generation

IDs are useful to refer to client-side records, and are used extensively throughout JavaScript frameworks like Backbone and Spine. However, this throws up a bit of a dilemma - where are the IDs generated, the server or the client?

Generating IDs on the server has the advantage that IDs are guaranteed to be unique, but generating them on the client side has the advantage that they're available instantly. How do we resolve this situation?

Well, a solution that Backbone uses is generating an internal cid (or client id). You can use this cid temporarily before the server responds with the real identifier. Backbone has a separate record retrieval API, depending on whether you're using a cid, or a real id.

Users.getByCid(internalID)
Users.get(serverID)

I'm not such a fan of that solution, so I've taken a different tack with Spine. Spine generates pseudo GUIDs internally when creating records (unless you specify an ID yourself). It'll use that ID to identify records from then on. However, if the response from an Ajax create request to the server returns a new ID, Spine will switch to using the server-specified ID. Both the new and old IDs will still work, and the API to find records stays the same.
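
A common way of generating such pseudo GUIDs on the client looks roughly like this (a well-known sketch, not necessarily Spine's exact implementation):

// builds an RFC 4122 version 4 style identifier from Math.random();
// collisions are unlikely enough for a temporary client-side ID
function guid() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random() * 16 | 0,
        v = c === 'x' ? r : (r & 0x3 | 0x8);
    return v.toString(16);
  });
}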

Synchronous requests

The last issue is with Ajax requests that get sent out in parallel. If a user creates a record, and then immediately updates the same record, two Ajax requests will be sent out at the same time, a POST and a PUT. However, if the server processes the 'update' request before the 'create' one, it'll freak out. It has no idea what record needs updating, as the record hasn't been created yet.

The solution to this is to pipeline Ajax requests, transmitting them serially. Spine does this by default, queuing up POST, PUT and DELETE Ajax requests so they're sent one at a time. The next request is sent only after the previous one has returned successfully.
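
Conceptually, the queue looks something like this (a simplified sketch using jQuery's Ajax API; a real implementation also needs failure handling):

// a minimal request pipeline: POST/PUT/DELETE requests are queued
// and sent strictly one at a time, in order
var requestQueue = {
  pending: false,
  requests: [],

  append: function(options) {
    this.requests.push(options);
    if (!this.pending) this.sendNext();
  },

  sendNext: function() {
    var options = this.requests.shift();
    if (!options) { this.pending = false; return; }
    this.pending = true;
    var self = this;
    $.ajax(options).always(function() { self.sendNext(); });
  }
};

// a create followed immediately by an update arrives at the server in order
requestQueue.append({ type: 'POST', url: '/pages', data: { name: 'Hello' } });
requestQueue.append({ type: 'PUT', url: '/pages/1', data: { name: 'Hello World' } });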

Next steps

So that's a pretty thorough introduction to asynchronous interfaces, and even if you glossed over the finer details, I hope you've been left with the impression that AUIs are a huge improvement over the status quo, and a valid option when building web applications.

If you're interested in building AUIs, you should definitely check out Spine, and Spine's Ajax guide.

Patterns For Large-Scale JavaScript Application Architecture

I'm currently a JavaScript and UI developer at AOL, helping to plan and write the front-end architecture for our next generation of client-facing applications. As these applications are complex and often require an architecture that is scalable and highly reusable, it's one of my responsibilities to ensure the patterns used to implement such applications are as sustainable as possible.

I also consider myself something of a design pattern enthusiast (although there are far more knowledgeable experts on this topic than I). I've previously written the Creative Commons-licensed book 'Essential JavaScript Design Patterns' and am in the middle of writing a more detailed follow-up to it at the moment.


If you're short on time, here's the tweet-sized summary of this article:

Decouple app. architecture w/module,facade & mediator patterns. Mods publish msgs, mediator acts as pub/sub mgr & facade handles security


Before we begin, let us attempt to define what we mean when we refer to a JavaScript application as being significantly 'large'. This is a question I've found still challenges developers with many years of experience in the field and the answer to this can be quite subjective.

As an experiment, I asked a few intermediate developers to provide their definition of it informally. One developer suggested 'a JavaScript application with over 100,000 LOC' whilst another suggested 'apps with over 1MB of JavaScript code written in-house'. Whilst valiant (if scary) suggestions, both of these are incorrect, as the size of a codebase does not always correlate with application complexity - those 100,000 LOC could easily represent quite trivial code.

My own definition may or may not be universally accepted, but I believe that it's closer to what a large application actually represents.

In my view, large-scale JavaScript apps are non-trivial applications requiring significant developer effort to maintain, where most heavy lifting of data manipulation and display falls to the browser.

The last part of this definition is possibly the most significant.

If working on a significantly large JavaScript application, remember to dedicate sufficient time to planning the underlying architecture that makes the most sense. It's often more complex than you may initially imagine.

I can't stress the importance of this enough - some developers I've seen approach larger applications have stepped back and said 'Okay. Well, there are a set of ideas and patterns that worked well for me on my last medium-scale project. Surely they should mostly apply to something a little larger, right?'. Whilst this may be true to an extent, please don't take it for granted - larger apps generally have greater concerns that need to be factored in. I'm going to discuss shortly why spending a little more time planning out the structure of your application is worth it in the long run.

Most JavaScript developers likely use a combination of the following in their current architecture:

  • custom widgets
  • models
  • views
  • controllers
  • templates
  • libraries/toolkits
  • an application core.

You probably also break down your application's functionality into blocks of modules or apply other patterns for this. This is great, but there are a number of potential problems you can run into if this represents all of your application's structure.

Can single modules exist on their own independently? Are they self-contained? Right now, if I were to look at the codebase for a large application you or your team were working on and selected a random module, would it be possible for me to easily just drop it into a new page and start using it on its own? You may question the rationale behind wanting to do this, however I encourage you to think about the future. What if your company were to begin building more and more non-trivial applications which shared some cross-over in functionality? If someone said, 'Our users love using the chat module in our mail client. Let's drop that into our new collaborative editing suite', would this be possible without significantly altering the code?

Are they tightly coupled? Before I dig into why this is a concern, I should note that I understand it's not always possible to have modules with absolutely no dependencies in a system. At a granular level you may well have modules that extend the base functionality of others, but this question is more related to groups of modules with distinct functionality. It should be possible for all of these distinct sets of modules to work in your application without depending on too many other modules being present or loaded in order to function.

If you're building a GMail-like application and your webmail module (or modules) fails, this shouldn't block the rest of the UI or prevent users from being able to use other parts of the page such as chat. At the same time, as mentioned before, modules should ideally be able to exist on their own outside of your current application architecture. In my talks I mention dynamic dependency (or module) loading based on expressed user-intent as something related. For example, in GMail's case they might have the chat module collapsed by default without the core module code loaded on page initialization. If a user expressed an intent to use the chat feature, only then would it be dynamically loaded. Ideally, you want this to be possible without it negatively affecting the rest of your application.
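
With an AMD-style script loader, that kind of intent-based loading can be sketched in a few lines (the module and element names here are hypothetical):

// load the chat module only when the user first expands the chat panel
$("#chat-tab").click(function() {
  require(["modules/chat"], function(chat) {
    chat.init();
  });
});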

When working on systems of significant scale, where there's potential for millions of users to use (or misuse) their different parts, it's essential that modules which may end up being reused across a number of different applications be sufficiently tested. Testing needs to be possible both inside and outside of the architecture for which the module was initially built. In my view, this provides the most assurance that it shouldn't break if dropped into another system.

When devising the architecture for your large application, it's important to think ahead - not just a month or a year from now, but beyond that. What might change? It's of course impossible to guess exactly how your application may grow, but there's certainly room to consider what is likely. Here, at least one specific aspect of your application comes to mind.

Developers often couple their DOM manipulation code quite tightly with the rest of their application - even when they've gone to the trouble of separating their core logic down into modules. Think about it: why is this not a good idea if we're thinking long-term?

One member of my audience suggested that it was because a rigid architecture defined in the present may not be suitable for the future. Whilst certainly true, there's another concern that may cost even more if not factored in.

You may well decide to switch from using Dojo, jQuery, Zepto or YUI to something entirely different for reasons of performance, security or design in the future. This can become a problem because libraries are not easily interchangeable and have high switching costs if tightly coupled to your app.

If you're a Dojo developer (like some of the audience at my talk), you may not have something better to switch to in the present, but who is to say that in 2-3 years something better doesn't come out that you'll want to switch to?

This is a relatively trivial decision in smaller codebases but for larger applications, having an architecture which is flexible enough to support not caring about the libraries being used in your modules can be of great benefit, both financially and from a time-saving perspective.

To summarize, if you reviewed your architecture right now, could a decision to switch libraries be made without rewriting your entire application? If not, consider reading on, because I think the architecture being outlined today may be of interest.

There are a number of influential JavaScript developers who have previously outlined some of the concerns I've touched upon so far. Three key quotes I would like to share from them are the following:

"The secret to building large apps is never build large apps. Break your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application" - Justin Meyer, author JavaScriptMVC

"The key is to acknowledge from the start that you have no idea how this will grow. When you accept that you don't know everything, you begin to design the system defensively. You identify the key areas that may change, which often is very easy when you put a little bit of time into it. For instance, you should expect that any part of the app that communicates with another system will likely change, so you need to abstract that away." - Nicholas Zakas, author 'High-performance JavaScript websites'

and last but not least:

"The more tied components are to each other, the less reusable they will be, and the more difficult it becomes to make changes to one without accidentally affecting another" - Rebecca Murphey, author of jQuery Fundamentals.

These principles are essential to building an architecture that can stand the test of time and should always be kept in mind.

Let's think about what we're trying to achieve for a moment.

We want a loosely coupled architecture with functionality broken down into independent modules with ideally no inter-module dependencies. Modules speak to the rest of the application when something interesting happens and an intermediate layer interprets and reacts to these messages.

For example, if we had a JavaScript application responsible for an online bakery, one such 'interesting' message from a module might be 'batch 42 of bread rolls is ready for dispatch'.

We use a different layer to interpret messages from modules so that a) modules don't directly access the core and b) modules don't need to directly call or interact with other modules. This helps prevent applications from falling over due to errors with specific modules and provides us a way to kick-start modules which have fallen over.

Another concern is security. The reality is that most of us don't consider internal application security as that much of a concern. We tell ourselves that as we're structuring the application, we're intelligent enough to figure out what should be publicly or privately accessible.

However, wouldn't it help if you had a way to determine what a module was permitted to do in the system? eg. if I know I've limited the permissions in my system to not allow a public chat widget to interface with an admin module or a module with DB-write permissions, I can limit the chances of someone exploiting vulnerabilities I have yet to find in the widget to pass some XSS in there. Modules shouldn’t be able to access everything. They probably can in most current architectures, but do they really need to be able to?

Having an intermediate layer handle permissions for which modules can access which parts of your framework gives you added security. This means a module is only able to do at most what we've permitted it to do.


The solution to the architecture we seek to define is a combination of three well-known design patterns: the module, facade and mediator.

Rather than the traditional model of modules directly communicating with each other, in this decoupled architecture they'll instead only publish events of interest (ideally, without knowledge of other modules in the system). The mediator pattern will be used both to subscribe to messages from these modules and to handle the appropriate response to notifications. The facade pattern will be used to enforce module permissions.

I will be going into more detail on each of these patterns below:

You probably already use some variation of modules in your existing architecture. If however, you don't, this section will present a short primer on them.

Modules are an integral piece of any robust application's architecture and are typically single-purpose parts of a larger system that are interchangeable.

Depending on how you implement modules, it's possible to define dependencies for them which can be automatically loaded to bring together all of the other parts instantly. This is considered more scalable than having to track the various dependencies for them and manually load modules or inject script tags.

Any significantly non-trivial application should be built from modular components. Going back to GMail, you could consider modules as independent units of functionality that can exist on their own - take the chat feature, for example. It's probably backed by a chat module; however, depending on how complex that unit of functionality is, it may well have more granular sub-modules that it depends on. For example, one could have a module simply to handle the use of emoticons which can be shared across both the chat and mail composition parts of the system.

In the architecture being discussed, modules have a very limited knowledge of what's going on in the rest of the system. Instead, we delegate this responsibility to a mediator via a facade.

This is by design: if a module only cares about letting the system know when something of interest happens, without worrying whether other modules are running, a system is capable of supporting the addition, removal or replacement of modules without the rest of the modules in the system falling over due to tight coupling.

Loose coupling is thus essential to making this idea possible. It facilitates easier maintainability of modules by removing code dependencies where possible. In our case, modules should not rely on other modules in order to function correctly. When loose coupling is implemented effectively, it's straightforward to see how changes to one part of a system may affect another.

In JavaScript, there are several options for implementing modules including the well-known module pattern and object literals. Experienced developers will already be familiar with these and if so, please skip ahead to the section on CommonJS modules.

The module pattern is a popular design pattern that encapsulates 'privacy', state and organization using closures. It provides a way of wrapping a mix of public and private methods and variables, protecting pieces from leaking into the global scope and accidentally colliding with another developer's interface. With this pattern, only a public API is returned, keeping everything else within the closure private.

This provides a clean solution for shielding logic doing the heavy lifting whilst only exposing an interface you wish other parts of your application to use. The pattern is quite similar to an immediately-invoked function expression (IIFE) except that an object is returned rather than a function.

It should be noted that there isn't really a true sense of 'privacy' inside JavaScript because unlike some traditional languages, it doesn't have access modifiers. Variables can't technically be declared as being public nor private and so we use function scope to simulate this concept. Within the module pattern, variables or methods declared are only available inside the module itself thanks to closure. Variables or methods defined within the returning object however are available to everyone.

Below you can see an example of a shopping basket implemented using the pattern. The module itself is completely self-contained in a global object called basketModule. The basket array in the module is kept private, and so other parts of your application are unable to directly read it. It only exists within the module's closure, and so the only methods able to access it are those with access to its scope (ie. addItem(), getItemCount() etc.).


var basketModule = (function() {
    var basket = []; // private
    return { // exposed to public
        addItem: function(values) {
            basket.push(values);
        },
        getItemCount: function() {
            return basket.length;
        },
        getTotal: function() {
            var q = this.getItemCount(),
                p = 0;
            while (q--) {
                p += basket[q].price;
            }
            return p;
        }
    };
}());

Inside the module, you'll notice we return an object. This gets automatically assigned to basketModule so that you can interact with it as follows:


//basketModule is an object with properties which can also be methods
basketModule.addItem({item:'bread',price:0.5});
basketModule.addItem({item:'butter',price:0.3});

console.log(basketModule.getItemCount());
console.log(basketModule.getTotal());

//however, the following will not work:
console.log(basketModule.basket); // undefined: basket isn't part of the returned object
console.log(basket); // ReferenceError: basket only exists within the closure's scope

The methods above are effectively namespaced inside basketModule.

From a historical perspective, the module pattern was originally developed by a number of people including Richard Cornford in 2003. It was later popularized by Douglas Crockford in his lectures and re-introduced by Eric Miraglia on the YUI blog.

How about the module pattern in specific toolkits or frameworks?

Dojo

Dojo attempts to provide 'class'-like functionality through dojo.declare, which can be used for, amongst other things, creating implementations of the module pattern. For example, if we wanted to declare basket as a module of the store namespace, this could be achieved as follows:

//traditional way
var store = window.store || {};
store.basket = store.basket || {};

//using dojo.setObject
dojo.setObject("store.basket.object", (function() {
    var basket = [];
    function privateMethod() {
        console.log(basket);
    }
    return {
        publicMethod: function(){
                privateMethod();
        }
    };
}()));

which can become quite powerful when used with dojo.provide and mixins.

YUI

The following example is heavily based on the original YUI module pattern implementation by Eric Miraglia, but is relatively self-explanatory.

YAHOO.store.basket = function () {

    //"private" variables:
    var myPrivateVar = "I can be accessed only within YAHOO.store.basket.";

    //"private" method:
    var myPrivateMethod = function () {
        YAHOO.log("I can be accessed only from within YAHOO.store.basket");
    };

    return {
        myPublicProperty: "I'm a public property.",
        myPublicMethod: function () {
            YAHOO.log("I'm a public method.");

            //Within basket, I can access "private" vars and methods:
            YAHOO.log(myPrivateVar);
            myPrivateMethod();

            //The native scope of myPublicMethod is YAHOO.store.basket,
            //so we can access public members using "this":
            YAHOO.log(this.myPublicProperty);
        }
    };

}();

jQuery

There are a number of ways in which jQuery code unspecific to plugins can be wrapped inside the module pattern. Ben Cherry previously suggested an implementation where a function wrapper is used around module definitions when there are a number of commonalities between modules.

In the following example, a library function is defined which declares a new library and automatically binds up the init function to document.ready when new libraries (ie. modules) are created.


function library(module) {
  $(function() {
    if (module.init) {
      module.init();
    }
  });
  return module;
}

var myLibrary = library(function() {
   return {
     init: function() {
       /*implementation*/
     }
   };
}());

In object literal notation, an object is described as a set of comma-separated name/value pairs enclosed in curly braces ({}). Names inside the object may be either strings or identifiers, followed by a colon. There should be no comma after the final name/value pair in the object, as this may result in errors.

Object literals don't require instantiation using the new operator, but shouldn't be used at the start of a statement as the opening { may be interpreted as the beginning of a block. Below you can see an example of a module defined using object literal syntax. New members may be added to the object using assignment, as follows: myModule.property = 'someValue';

Whilst the module pattern is useful for many things, if you find yourself not requiring specific properties or methods to be private, the object literal is a more than suitable alternative.

var myModule = {
    myProperty : 'someValue',
    //object literals can contain properties and methods.
    //here, another object is defined for configuration
    //purposes:
    myConfig:{
        useCaching:true,
        language: 'en'   
    },
    //a very basic method
    myMethod: function(){
        console.log('I can haz functionality?');
    },
    //output a value based on current configuration
    myMethod2: function(){
        console.log('Caching is: ' + (this.myConfig.useCaching ? 'enabled' : 'disabled'));
    },
    //override the current configuration
    myMethod3: function(newConfig){
        if(typeof newConfig == 'object'){
           this.myConfig = newConfig;
           console.log(this.myConfig.language); 
        }
    }
};

myModule.myMethod(); //I can haz functionality
myModule.myMethod2(); //outputs enabled
myModule.myMethod3({language:'fr',useCaching:false}); //fr

Over the last year or two, you may have heard about CommonJS - a volunteer working group which designs, prototypes and standardizes JavaScript APIs. To date they've ratified standards for modules and packages. The CommonJS AMD proposal specifies a simple API for declaring modules which can be used with both synchronous and asynchronous script tag loading in the browser. Their module pattern is relatively clean and I consider it a reliable stepping stone to the module system proposed for ES Harmony (the next release of the JavaScript language).

From a structure perspective, a CommonJS module is a reusable piece of JavaScript which exports specific objects made available to any dependent code. This module format is becoming quite ubiquitous as a standard module format for JS. There are plenty of great tutorials on implementing CommonJS modules, but at a high-level they basically contain two primary parts: an exports object that contains the objects a module wishes to make available to other modules and a require function that modules can use to import the exports of other modules.

/*
Example of achieving compatibility with AMD and standard CommonJS by putting boilerplate around the standard CommonJS module format:
*/

(function(define){
    define(function(require, exports){
        // module contents
        var dep1 = require("dep1");
        exports.someExportedFunction = function(){...};
        //...
    });
})(typeof define == "function" ? define : function(factory){ factory(require, exports); });

There are a number of great JavaScript libraries for handling module loading in the CommonJS module format, but my personal preference is RequireJS. A complete tutorial on RequireJS is outside the scope of this article, but I can recommend reading James Burke's ScriptJunkie post on it. I know a number of people that also like Yabble.

Out of the box, RequireJS provides methods for easing the creation of static modules with wrappers, and it's extremely easy to craft modules with support for asynchronous loading. It can load modules and their dependencies this way and execute the body of the module once they're available.
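
For instance, a module and its dependencies can be declared as follows (a minimal sketch; the module names are hypothetical):

// basket.js - RequireJS loads "cart" and "inventory" asynchronously
// and only executes this factory once both are available
define(["cart", "inventory"], function(cart, inventory) {
    return {
        addToBasket: function(item) {
            if (inventory.inStock(item)) {
                cart.add(item);
            }
        }
    };
});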

Some developers, however, claim that CommonJS modules aren't suitable enough for the browser. The reason cited is that they can't be loaded via a script tag without some level of server-side assistance. We can imagine having a library for encoding images as ASCII art which might export an encodeToASCII function. A module using it could resemble:

var encodeToASCII = require("encoder").encodeToASCII;
exports.encodeSomeSource = function () {
    // process then call encodeToASCII
};

This type of scenario wouldn't work with a script tag because the scope isn't wrapped, meaning our encodeToASCII method would be attached to the window, require wouldn't be defined as such, and exports would need to be created for each module separately. A client-side library together with server-side assistance, or a library loading the script with an XHR request using eval(), could however handle this easily.

Using RequireJS, the module from earlier could be rewritten as follows:

define(function(require, exports, module) {
    var encodeToASCII = require("encoder").encodeToASCII;
    exports.encodeSomeSource = function(){
            //process then call encodeToASCII
    }
});

For developers who may not rely on just using static JavaScript for their projects, CommonJS modules are an excellent path to go down, but do spend some time reading up on them. I've really only covered the tip of the iceberg, but both the CommonJS wiki and SitePen have a number of resources if you wish to read further.

Next, we're going to look at the facade pattern, a design pattern which plays a critical role in the architecture being defined today.

When you put up a facade, you're usually creating an outward appearance which conceals a different reality. The facade pattern provides a convenient higher-level interface to a larger body of code, hiding its true underlying complexity. Think of it as simplifying the API being presented to other developers.

Facades are a structural pattern which can often be seen in JavaScript libraries and frameworks where, although an implementation may support methods with a wide range of behaviors, only a 'facade' or limited abstraction of these methods is presented to the client for use.

This allows us to interact with the facade rather than the subsystem behind the scenes.

The reason the facade is of interest is because of its ability to hide implementation-specific details about a body of functionality contained in individual modules. The implementation of a module can change without the clients really even knowing about it.

By maintaining a consistent facade (simplified API), the worry about whether a module extensively uses Dojo, jQuery, YUI, Zepto or something else becomes significantly less important. As long as the interaction layer doesn't change, you retain the ability to switch out libraries (eg. jQuery for Dojo) at a later point without affecting the rest of the system.

Below is a very basic example of a facade in action. As you can see, our module contains a number of methods which have been privately defined. A facade is then used to supply a much simpler API for accessing these methods:

var module = (function() {
    var _private = {
        i:5,
        get : function() {
            console.log('current value:' + this.i);
        },
        set : function( val ) {
            this.i = val;
        },
        run : function() {
            console.log('running');
        },
        jump: function(){
            console.log('jumping');
        }
    };
    return {
        facade : function( args ) {
            _private.set(args.val);
            _private.get();
            if ( args.run ) {
                _private.run();
            }
        }
    }
}());


module.facade({run: true, val:10});
//outputs current value: 10, running

and that's really it for the facade before we apply it to our architecture. Next, we'll be diving into the exciting mediator pattern. The core difference between the facade pattern and the mediator is that the facade (a structural pattern) only exposes existing functionality whilst the mediator (a behavioral pattern) can add functionality.


The mediator pattern is best introduced with a simple analogy - think of typical air traffic control. The tower decides which planes can take off and land, because all communications are done from the planes to the control tower rather than from plane to plane. A centralized controller is key to the success of this system, and that's really what a mediator is.

Mediators are used when the communication between modules may be complex, but is still well defined. If it appears a system may have too many relationships between modules in your code, it may be time to have a central point of control, which is where the pattern fits in.

In real-world terms, a mediator encapsulates how disparate modules interact with each other by acting as an intermediary. The pattern also promotes loose coupling by preventing objects from referring to each other explicitly - in our system, this helps to solve our module inter-dependency issues.

What other advantages does it have to offer? Well, mediators allow for actions of each module to vary independently, so it’s extremely flexible. If you've previously used the Observer (Pub/Sub) pattern to implement an event broadcast system between the modules in your system, you'll find mediators relatively easy to understand.

Let's take a look at a high level view of how modules might interact with a mediator:

Consider modules as publishers and the mediator as both a publisher and subscriber. Module 1 broadcasts an event notifying the mediator that something needs to be done. The mediator captures this message and 'starts' the modules needed to complete the task. Module 2 performs the task that Module 1 requires and broadcasts a completion event back to the mediator. In the meantime, Module 3 has also been started by the mediator and is logging results of any notifications passed back from the mediator.

Notice how at no point do any of the modules directly communicate with one another. If Module 3 in the chain were to simply fail or stop functioning, the mediator could hypothetically 'pause' the tasks on the other modules, stop and restart Module 3 and then continue working with little to no impact on the system. This level of decoupling is one of the main strengths the pattern has to offer.

To review, the advantages of the mediator are that:

  • It decouples modules by introducing an intermediary as a central point of control.
  • It allows modules to broadcast or listen for messages without being concerned with the rest of the system. Messages can be handled by any number of modules at once.
  • It is typically significantly easier to add or remove features from systems which are loosely coupled like this.

And its disadvantages:

By adding a mediator between modules, they must always communicate indirectly, which can cause a very minor performance drop. Because of the nature of loose coupling, it's also difficult to establish how a system might react by only looking at the broadcasts. At the end of the day, tight coupling causes all kinds of headaches and this is one solution.

Example: This is a possible implementation of the mediator pattern, based on previous work by @rpflorence.

var mediator = (function(){
    var subscribe = function(channel, fn){
        if (!mediator.channels[channel]) mediator.channels[channel] = [];
        mediator.channels[channel].push({ context: this, callback: fn });
        return this;
    },

    publish = function(channel){
        if (!mediator.channels[channel]) return false;
        var args = Array.prototype.slice.call(arguments, 1);
        for (var i = 0, l = mediator.channels[channel].length; i < l; i++) {
            var subscription = mediator.channels[channel][i];
            subscription.callback.apply(subscription.context, args);
        }
        return this;
    };

    return {
        channels: {},
        publish: publish,
        subscribe: subscribe,
        installTo: function(obj){
            obj.subscribe = subscribe;
            obj.publish = publish;
        }
    };

}());

Example: Here are two sample uses of the implementation from above. It's effectively managed publish/subscribe:

//Pub/sub on a centralized mediator

mediator.name = "tim";
mediator.subscribe('nameChange', function(arg){
        console.log(this.name);
        this.name = arg;
        console.log(this.name);
});

mediator.publish('nameChange', 'david'); //tim, david


//Pub/sub via third party mediator

var obj = { name: 'sam' };
mediator.installTo(obj);
obj.subscribe('nameChange', function(arg){
        console.log(this.name);
        this.name = arg;
        console.log(this.name);
});

obj.publish('nameChange', 'john'); //sam, john


In the architecture suggested:

A facade serves as an abstraction of the application core which sits between the mediator and our modules - it should ideally be the only other part of the system modules are aware of.

The responsibilities of the abstraction include ensuring a consistent interface to these modules is available at all times. This closely resembles the role of the sandbox controller in the excellent architecture first suggested by Nicholas Zakas.

Components are going to communicate with the mediator through the facade, so it needs to be dependable. When I say 'communicate', I should clarify that the facade is an abstraction of the mediator: it listens out for broadcasts from modules and relays them back to the mediator.

In addition to providing an interface to modules, the facade also acts as a security guard, determining which parts of the application a module may access. Components only call their own methods and shouldn't be able to interface with anything they don't have permission to. For example, a module may broadcast dataValidationCompletedWriteToDB. The security check here ensures that the module has permission to request database-write access. What we ideally want to avoid are issues with modules accidentally trying to do something they shouldn't.

To review in short, the mediator remains a type of pub/sub manager but is only passed interesting messages once they've cleared permission checks by the facade.
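
A very rough sketch of how this permission check might look (the permissions map and module names are hypothetical):

// the facade consults a permissions map before relaying anything to
// the mediator; modules never talk to the mediator directly
var permissions = {
    dataValidationCompletedWriteToDB: { validationModule: true },
    chatMessageReceived: { chatModule: true }
};

var facade = {
    publish: function(moduleId, channel, data) {
        // relay the message only if this module may broadcast on this channel
        if (permissions[channel] && permissions[channel][moduleId]) {
            mediator.publish(channel, data);
        }
    }
};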


The mediator plays the role of the application core. We've briefly touched on some of its responsibilities, but let's clarify what they are in full.

The core's primary job is to manage the module lifecycle. When the core detects an interesting message it needs to decide how the application should react - this effectively means deciding whether a module or set of modules needs to be started or stopped.

Once a module has been started, it should ideally execute automatically. It's not the core's task to decide whether this should be when the DOM is ready; there's enough scope in the architecture for modules to make such decisions on their own.

You may be wondering in what circumstance a module might need to be 'stopped' - if the application detects that a particular module has failed or is experiencing significant errors, a decision can be made to prevent methods in that module from executing further so that it may be restarted. The goal here is to assist in reducing disruption to the user experience.

In addition, the core should enable adding or removing modules without breaking anything. A typical example of where this may be the case is functionality which may not be available on initial page load, but is dynamically loaded based on expressed user-intent eg. going back to our GMail example, Google could keep the chat widget collapsed by default and only dynamically load in the chat module(s) when a user expresses an interest in using that part of the application. From a performance optimization perspective, this may make sense.

Error management will also be handled by the application core. In addition to modules broadcasting messages of interest, they will also broadcast any errors experienced, which the core can then react to accordingly (eg. stopping modules, restarting them etc.). It's important that, as part of a decoupled architecture, there is enough scope for introducing new or better ways of handling or displaying errors to the end user without having to manually change each module. Using publish/subscribe through a mediator allows us to achieve this.
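
Sketched in code, the core's module lifecycle management might look something like this (the method names are hypothetical; compare the sandbox architecture suggested by Zakas):

// the application core registers module creators and can start, stop
// or restart them centrally - for example in response to error broadcasts
var core = {
    moduleData: {},

    register: function(moduleId, creator) {
        this.moduleData[moduleId] = { creator: creator, instance: null };
    },

    start: function(moduleId) {
        var data = this.moduleData[moduleId];
        data.instance = data.creator(facade); // modules only ever see the facade
        data.instance.init();
    },

    stop: function(moduleId) {
        var data = this.moduleData[moduleId];
        if (data.instance) {
            data.instance.destroy();
            data.instance = null;
        }
    },

    restart: function(moduleId) {
        this.stop(moduleId);
        this.start(moduleId);
    }
};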


  • Modules contain specific pieces of functionality for your application. They publish notifications informing the application whenever something interesting happens - this is their primary concern. As I'll cover in the FAQs, modules can depend on DOM utility methods, but ideally shouldn't depend on any other modules in the system. They should not be concerned with:

    • what objects or modules are subscribing to the messages they publish
    • where these objects are based (whether this is on the client or server)
    • how many objects subscribe to notifications
  • The Facade abstracts the core to avoid modules touching it directly. It subscribes to interesting events (from modules) and says 'Great! What happened? Give me the details!'. It also handles module security by checking to ensure the module broadcasting an event has the necessary permissions to broadcast such events.

  • The Mediator (Application Core) acts as a 'Pub/Sub' manager using the mediator pattern. It's responsible for module management and starts/stops modules as needed. This is of particular use for dynamic dependency loading and ensuring modules which fail can be centrally restarted as needed.

The result of this architecture is that modules (in most cases) are theoretically no longer dependent on other modules. They can be easily tested and maintained on their own and because of the level of decoupling applied, modules can be picked up and dropped into a new page for use in another project without significant additional effort. They can also be dynamically added or removed without the application falling over.


As previously mentioned by Michael Mahemoff, when thinking about large-scale JavaScript, it can be of benefit to exploit some of the more dynamic features of the language. You can read more about some of the concerns highlighted on Michael's G+ page, but I would like to focus on one specifically - automatic event registration (AER).

AER solves the problem of wiring up subscribers to publishers by introducing a pattern which auto-wires based on naming conventions. For example, if a module publishes an event called messageUpdate, anything with a messageUpdate method would be automatically called.

The setup for this pattern involves registering all components which might subscribe to events, registering all events that may be subscribed to and finally for each subscription method in your component-set, binding the event to it. It's a very interesting approach which is related to the architecture presented in this post, but does come with some interesting challenges.

For example, when working dynamically, objects may be required to register themselves upon creation. Please feel free to check out Michael's post on AER as he discusses how to handle such issues in more depth.
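In its simplest form, the auto-wiring could be sketched as follows (the event and component names are hypothetical), reusing the mediator from earlier:

var eventNames = ['messageUpdate', 'userLogin'];

// two example components; any method named after a known event is auto-wired
var components = [
    { messageUpdate: function(msg) { console.log('chat widget saw:', msg); } },
    { userLogin: function(user) { console.log('header widget saw:', user); } }
];

eventNames.forEach(function(eventName) {
    components.forEach(function(component) {
        if (typeof component[eventName] === 'function') {
            mediator.subscribe(eventName, function(data) {
                component[eventName](data);
            });
        }
    });
});

mediator.publish('messageUpdate', 'hello'); // only the chat widget reacts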


Q: Is it possible to use this architecture without a facade?

A: Although the architecture outlined uses a facade to implement security features, it's entirely possible to get by using a mediator and pub/sub to communicate events of interest throughout the application without it. This lighter version would offer a similar level of decoupling, but ensure you're comfortable with modules directly touching the application core (mediator) if opting for this variation.

Q: When you say modules shouldn't have any dependencies, does that include dependencies on a DOM library?

A: I'm specifically referring to dependencies on other modules here. Some developers opting for an architecture such as this choose to abstract the utilities common to DOM libraries - eg. one could have a DOM utility class for query selectors which, when used, returns the result of querying the DOM using jQuery (or, if you switched it out at a later point, Dojo). This way, although modules still query the DOM, they aren't directly using hardcoded functions from any particular library or toolkit. There's quite a lot of variation in how this might be achieved, but the takeaway is that ideally core modules shouldn't depend on other modules if opting for this architecture.

You'll find that when this is the case, it can sometimes be easier to get a complete module from one project working in another with little extra effort. I should make it clear that I fully agree it can sometimes be significantly more sensible for modules to extend or use other modules for part of their functionality; however, bear in mind that this can in some cases increase the effort required to make such modules 'liftable' for other projects.
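
As a sketch, such a DOM utility layer could be as thin as this (wrapping jQuery today; only this one file changes if the library is swapped out later):

// dom.js - the only place in the codebase that touches the underlying
// library; modules call dom.query() rather than $() directly
var dom = {
    query: function(selector, context) {
        return $(context || document).find(selector);
    },
    bind: function(element, eventName, fn) {
        $(element).bind(eventName, fn);
    }
};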

Q: Is there a boilerplate or example pack available for this architecture?

A: I plan on releasing a free boilerplate pack for this post when time permits, but at the moment, your best bet is probably the 'Writing Modular JavaScript' premium tutorial by Andrew Burgess (for complete disclosure, this is a referral link as any credits received are re-invested into reviewing material before I recommend it to others). Andrew's pack includes a screencast and code and covers most of the main concepts outlined in this post, but opts for calling the facade a 'sandbox', as per Zakas. There's some discussion regarding just how DOM library abstraction should ideally be implemented in such an architecture - similar to my answer to the second question, Andrew opts for some interesting patterns on generalizing query selectors so that, at most, switching libraries is a change that can be made in a few short lines. I'm not saying this is the right or best way to go about this, but it's an approach I personally also use.

Q: Why can't modules access the application core directly?

A: As Zakas has previously hinted, there's technically no reason why modules shouldn't be able to access the core, but this is more of a best practice than anything. If you want to strictly stick to this architecture you'll need to follow the rules defined, or opt for a looser architecture as per the answer to the first question.


Thanks to Nicholas Zakas for his original work in bringing together many of the concepts presented today; Andrée Hansson for his kind offer to do a technical review of the post (as well as his feedback that helped improve it); Rebecca Murphey, Justin Meyer, John Hann, Peter Michaux, Paul Irish and Alex Sexton, all of whom have written material related to the topics discussed in the past and are a constant source of inspiration for both myself and others.

Jasmine: BDD for your JavaScript


Jasmine is a behavior-driven development framework for testing your JavaScript code. It does not depend on any other JavaScript frameworks. It does not require a DOM. And it has a clean, obvious syntax so that you can easily write tests.

describe("Jasmine", function() {
  it("makes testing JavaScript awesome!", function() {
    expect(yourCode).toBeLotsBetter();
  });
});

Jasmine can be run anywhere you can execute JavaScript: a static web page, your continuous integration environment, or server-side environments like Node.js.

Find out more in the documentation.


Introduction to JavaScript BDD testing with Jasmine library

Save
Posted: July 22nd, 2011 | Filed under: jasmine, javascript, testing

If we look at developments in the software testing area, the evolutionary step beyond TDD is so-called BDD (Behavior-Driven Development). It is hard to describe BDD in a couple of sentences. An excerpt taken from Wikipedia says:

BDD focuses on obtaining a clear understanding of desired software behaviour through discussion with stakeholders. It extends TDD by writing test cases in a natural language that non-programmers can read. Behavior-driven developers use their native language to describe the purpose and benefit of their code. This allows the developers to focus on why the code should be created, rather than the technical details, and minimizes translation between the technical language in which the code is written and the domain language spoken by the business, users, stakeholders, project management, etc.

If this looks a bit complicated, in one sentence it would mean: BDD helps programmers and non-technical stakeholders understand each other better and develop software which most accurately models the problem being solved.

Jasmine is a library for BDD testing of your JavaScript. It offers a range of possibilities for describing software functionality through simple and descriptive tests, which is one of the basic principles of BDD-based development. This way, tests usually serve as the most accurate software documentation. More about the Jasmine library can be found on its GitHub repository.

After you download the standalone Jasmine distribution, you'll find in the archive everything you need to write your first tests, together with an example of how to organize your code. The most important file is SpecRunner.html which, when opened in a browser, will execute the tests and report on their state. SpecRunner.html is very simple and self-explanatory. The lib folder contains the Jasmine library, the src folder contains the code to be tested, and the spec folder contains the actual software specifications as tests.

Next, I will implement a simple object that can register multiple callback functions and call them with given data - something like a simple observer. I will always write the tests first and then implement the functionality to make them pass.

We start from the downloaded archive of the standalone Jasmine library. We delete the example code provided with the library so we can implement our own example:

cd jasmine-standalone # folder in which we extracted downloaded archive
rm src/*
rm spec/*

Also, from the SpecRunner.html file we delete the references to the just-removed scripts, so the part that includes them stays empty, like this:

...

  <!-- include source files here... -->

  <!-- include spec files here... -->

...
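For orientation, the whole runner is not much bigger than that excerpt. A rough sketch of what the standalone SpecRunner.html looks like in the Jasmine 1.x distributions (exact file names and versions vary between releases):

<!DOCTYPE html>
<html>
<head>
  <title>Jasmine Spec Runner</title>
  <link rel="stylesheet" type="text/css" href="lib/jasmine.css">
  <script type="text/javascript" src="lib/jasmine.js"></script>
  <script type="text/javascript" src="lib/jasmine-html.js"></script>

  <!-- include source files here... -->

  <!-- include spec files here... -->

  <script type="text/javascript">
    // Render results into the page and run all loaded specs on load.
    window.onload = function () {
      jasmine.getEnv().addReporter(new jasmine.TrivialReporter());
      jasmine.getEnv().execute();
    };
  </script>
</head>
<body>
</body>
</html>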

In the TDD and BDD philosophy, one rule is essential for success.

First you write a test, which initially fails; then you implement the code that should satisfy the specification the test describes. You run the test and continue with the implementation until the test passes. When that happens, you know the implementation is OK.

Following this rule, we create a file to hold our tests, spec/ObserverSpec.js, and define a first test that simply checks for the existence of the observer object.

describe("rs.ji.observer", function () {

    it("should be present in the global namespace", function () {
        expect(rs.ji.observer).toBeDefined();
    });

});

And we add this specification to SpecRunner.html to execute it:

<!-- include spec files here... -->
<script type="text/javascript" src="spec/ObserverSpec.js"></script>

If we now open SpecRunner.html in a browser (e.g. Google Chrome) we will see one failing test.

It is worth looking at how our test reads. Although the example is very simple, the test is descriptive and expresses in plain words how our code should behave. It takes some creativity on the developer's part to make tests really valuable, but even more important is that the developer has a deep understanding of the vocabulary and terminology of the domain the software is built for, because that understanding should be heavily embedded in the test definitions.

The next step is to implement the code that satisfies the test. We create the file src/observer.js:

// define so-called namespace objects so we can namespace the observer
// as rs.ji.observer
var rs = {};
rs.ji = {};

// observer implementation
rs.ji.observer = {};

And we will add it to SpecRunner.html:

<!-- include source files here... -->
<script type="text/javascript" src="src/observer.js"></script>

If we run SpecRunner.html again, we can see that our test passes; it is green :)

Now that we know the principle “first a failing test, then the implementation”, we can add new functionality to our simple example. We add the next test:

    it("should be able to register callback function", function () {
        // define function to be placed into the observer register
        function callback() { return null; }
        rs.ji.observer.register(callback);
        var callbacks = rs.ji.observer.getCallbacks();
        expect(callbacks.length).toBe(1);
        expect(callbacks[0]).toBe(callback);
    });

And the implementation that makes this test pass:

rs.ji.observer = {
    callbacks: [],
    register: function (callback) {
        this.callbacks.push(callback);
    },
    getCallbacks: function () {
        return this.callbacks;
    }
};

Next, we want to check that we can register multiple callbacks. This should already be supported, but we write a test to confirm it. Since every test should start from a clean state, we also add a beforeEach block that resets the array of callbacks before each test:

    // before every test we reset array of callbacks
    beforeEach(function () {
        rs.ji.observer.callbacks = [];
    });

    it("should be able to register multiple callbacks", function () {
        function callback1() { return null; }
        function callback2() { return null; }
        rs.ji.observer.register(callback1);
        rs.ji.observer.register(callback2);
        var callbacks = rs.ji.observer.getCallbacks();
        expect(callbacks.length).toBe(2);
        expect(callbacks[0]).toBe(callback1);
        expect(callbacks[1]).toBe(callback2);
    });

Programmers often use print, echo and friends to check code for a certain behavior. Writing a test takes about as much time as such an ad hoc check, but unlike a print statement, the test remains executable at any time after that initial check is done, even once we have moved on to implementing something else.

If we run SpecRunner.html in the browser, all tests should pass.

The last thing our observer should do to satisfy its basic purpose is to update the callback functions with new information; that is, it should be able to call them with some data. To define this functionality we write a new test:

    it("should provide update method to execute all callbacks with provided data", function () {
        window.callback1 = function () { return null; }
        window.callback2 = function () { return null; }
        spyOn(window, 'callback1');
        spyOn(window, 'callback2');
        rs.ji.observer.register(window.callback1);
        rs.ji.observer.register(window.callback2);
        rs.ji.observer.update("data string");
        expect(window.callback1).toHaveBeenCalledWith("data string");
        expect(window.callback2).toHaveBeenCalledWith("data string");
    });

Here we use a more advanced technique, spying on functions (spyOn), in order to verify that one function is called while another is executed.
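Jasmine spies can do more than record calls. A small, hedged sketch of a few other spy facilities in the Jasmine 1.x API (the method names changed in later Jasmine versions):

describe("jasmine 1.x spies", function () {

    var calculator;

    beforeEach(function () {
        calculator = {
            add: function (a, b) { return a + b; }
        };
    });

    it("can call through to the real implementation", function () {
        spyOn(calculator, 'add').andCallThrough();
        expect(calculator.add(1, 2)).toBe(3);
        expect(calculator.add).toHaveBeenCalledWith(1, 2);
    });

    it("can stub the return value", function () {
        spyOn(calculator, 'add').andReturn(42);
        expect(calculator.add(1, 2)).toBe(42);
        expect(calculator.add.callCount).toBe(1);
    });
});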

As a helper, we will add the jQuery library at src/jquery.js and include it in the SpecRunner.html file:

<!-- include source files here... -->
<script type="text/javascript" src="src/jquery.js"></script>
<script type="text/javascript" src="src/observer.js"></script>

And we implement the code that satisfies the test:

// observer implementation
rs.ji.observer = {
    callbacks: [],
    register: function (callback) {
        this.callbacks.push(callback);
    },
    getCallbacks: function () {
        return this.callbacks;
    },

    // added update method to inform registered callbacks
    // uses jQuery to iterate over the array
    update: function(data) {
        $.each(this.callbacks, function (i, callback) {
            callback.call(null, data); // call every callback with provided data
        });
    }
};
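jQuery is pulled in here purely for iteration. If you would rather not carry the dependency, a plain loop does the same job; a sketch of an equivalent update method (not part of the original article):

    // dependency-free variant of update
    update: function (data) {
        for (var i = 0; i < this.callbacks.length; i += 1) {
            this.callbacks[i].call(null, data); // call every callback with the provided data
        }
    }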

That is it: we now have a full implementation of a simple observer, together with tests that clearly describe the functionality of the software we developed. At any time, we can prove that the required functionality still works by executing the tests in a browser (or in several browsers, to check that the code works in all of them).

Complete files:

describe("rs.ji.observer", function () {

    beforeEach(function () {
        rs.ji.observer.callbacks = [];
    });

    it("should be present in the global namespace", function () {
        expect(rs.ji.observer).toBeDefined();
    });

    it("should be able to register callback function", function () {
        // define a function to be placed into the observer register
        function callback() { return null; }
        rs.ji.observer.register(callback);
        var callbacks = rs.ji.observer.getCallbacks();
        expect(callbacks.length).toBe(1);
        expect(callbacks[0]).toBe(callback); // the first registered callback must be our function
    });

    it("should be able to register multiple callbacks", function () {
        function callback1() { return null; }
        function callback2() { return null; }
        rs.ji.observer.register(callback1);
        rs.ji.observer.register(callback2);
        var callbacks = rs.ji.observer.getCallbacks();
        expect(callbacks.length).toBe(2);
        expect(callbacks[0]).toBe(callback1);
        expect(callbacks[1]).toBe(callback2);
    });

    it("should provide update method to execute all callbacks with provided data", function () {
        window.callback1 = function () { return null; }
        window.callback2 = function () { return null; }
        spyOn(window, 'callback1');
        spyOn(window, 'callback2');
        rs.ji.observer.register(window.callback1);
        rs.ji.observer.register(window.callback2);
        rs.ji.observer.update("data string");
        expect(window.callback1).toHaveBeenCalledWith("data string");
        expect(window.callback2).toHaveBeenCalledWith("data string");
    });
});
// define so-called namespace objects so we can namespace the observer
// as rs.ji.observer
var rs = {};
rs.ji = {};

// observer implementation
rs.ji.observer = {
    callbacks: [],
    register: function (callback) {
        this.callbacks.push(callback);
    },
    getCallbacks: function () {
        return this.callbacks;
    },
    update: function(data) {
        $.each(this.callbacks, function (i, callback) {
            callback.call(null, data);
        });
    }
};

three.js


The aim of the project is to create a lightweight 3D engine with a very low level of complexity — in other words, for dummies. The engine can render using <canvas>, <svg> and WebGL.

Contributors · Getting Started · API Reference

More? #three.js on irc.freenode.net


Download the minified library and include it in your html.

<script src="js/Three.js"></script>

This code creates a camera, then creates a scene, adds a cube to it, creates a <canvas> renderer and adds its viewport to the document.body element.

<script>

    var camera, scene, renderer,
    geometry, material, mesh;

    init();
    animate();

    function init() {

        camera = new THREE.Camera( 75, window.innerWidth / window.innerHeight, 1, 10000 );
        camera.position.z = 1000;

        scene = new THREE.Scene();

        geometry = new THREE.CubeGeometry( 200, 200, 200 );
        material = new THREE.MeshBasicMaterial( { color: 0xff0000, wireframe: true } );

        mesh = new THREE.Mesh( geometry, material );
        scene.addObject( mesh );

        renderer = new THREE.CanvasRenderer();
        renderer.setSize( window.innerWidth, window.innerHeight );

        document.body.appendChild( renderer.domElement );

    }

    function animate() {

        // Include examples/js/RequestAnimationFrame.js for cross-browser compatibility.
        requestAnimationFrame( animate );
        render();

    }

    function render() {

        mesh.rotation.x += 0.01;
        mesh.rotation.y += 0.02;

        renderer.render( scene, camera );

    }

</script>

2011 05 31 - r41/ROME (265.317 KB, gzip: 64.849 KB)

(Up to this point, some ROME-specific features managed to get into the lib. The aim is to clean this up in the next revisions.)

  • Improved Blender Object and Scene exporters. (alteredq)
  • Fixes on WebGL attributes. (alteredq and empaempa)
  • Reduced overall memory footprint. (mrdoob)
  • Added Face4 support to CollisionSystem. (NINE78)
  • Made Blender exporter work on 2.57. (remoe)
  • Added Particle support to Ray. (mrdoob and jaycrossler)
  • Improved Ray.intersectObject performance by checking boundingSphere first. (mrdoob)
  • Added TrackballCamera. (egraether)
  • Added repeat and offset properties to Texture. (mrdoob and alteredq)
  • Cleaned up Vector2, Vector3 and Vector4. (egraether)

2011 04 24 - r40 (263.774 KB, gzip: 64.320 KB)

  • Fixed Object3D.lookAt. (mrdoob)
  • More and more Blender exporter goodness. (alteredq and mrdoob)
  • Improved CollisionSystem. (drojdjou and alteredq)
  • Fixes on WebGLRenderer. (empaempa)
  • Added Trident object. (sroucheray)
  • Added data object to Renderers for getting number of vertices/faces/callDraws from last render. (mrdoob)
  • Fixed Projector handling Particles with hierarchies. (mrdoob)

2011 04 09 - r39 (249.048 KB, gzip: 61.020 KB)

  • Improved WebGLRenderer program cache. (alteredq)
  • Added support for pre-computed edges in loaders and exporters. (alteredq)
  • Added Collisions classes. (drojdjou)
  • Added Sprite object. (empaempa)
  • Fixed *Loader issue where Workers were kept alive and next loads were delayed. (alteredq)
  • Added THREE namespace to all the classes that missed it. (mrdoob)

2011 03 31 - r38 (225.442 KB, gzip: 55.908 KB)

  • Added LensFlare light. (empaempa)
  • Added ShadowVolume object (stencil shadows). (empaempa)
  • Improved Blender Exporter plus added Scene support. (alteredq)
  • Blender Importer for loading JSON files. (alteredq)
  • Added load/complete callbacks to Loader (mrdoob)
  • Minor WebGL blend mode clean up. (mrdoob)
  • *Materials now extend Material (mrdoob)
  • material.transparent define whether material is transparent or not (before we were guessing). (mrdoob)
  • Added internal program cache to WebGLRenderer (reuse already available programs). (mrdoob)

2011 03 22 - r37 (208.495 KB, gzip: 51.376 KB)

  • Changed JSON file format. (Re-exporting of models required) (alteredq and mrdoob)
  • Updated Blender and 3DSMAX exporters for new format. (alteredq)
  • Vertex colors are now per-face (alteredq)
  • Geometry.uvs is now a multidimensional array (allowing infinite uv sets) (alteredq)
  • CanvasRenderer renders Face4 again (without splitting into 2 Face3) (mrdoob)
  • ParticleCircleMaterial > ParticleCanvasMaterial. Allowing injecting any canvas.context code! (mrdoob)

2011 03 14 - r36 (194.547 KB, gzip: 48.608 KB)

  • Added 3DSMAX exporter. (alteredq)
  • Fixed WebGLRenderer aspect ratio bug when scene had only one material. (mrdoob)
  • Added sizeAttenuation property to ParticleBasicMaterial. (mrdoob)
  • Added PathCamera. (alteredq)
  • Fixed WebGLRenderer bug when Camera has a parent. (empaempa)
  • Fixed Camera.updateMatrix method and Object3D.updateMatrix. (mrdoob)

2011 03 06 - r35 (187.875 KB, gzip: 46.433 KB)

  • Added translate, translateX, translateY, translateZ and lookAt methods to Object3D. (mrdoob)
  • Added methods setViewport and setScissor to WebGLRenderer. (alteredq)
  • Added support for non-po2 textures. (mrdoob and alteredq)
  • Minor API clean up. (mrdoob)

2011 03 02 - r34 (186.045 KB, gzip: 45.953 KB)

  • Now using camera.matrixWorldInverse instead of camera.matrixWorld for projecting. (empaempa and mrdoob)
  • Camel cased properties and object json format (Re-exporting of models required) (alteredq)
  • Added QuakeCamera for easy fly-bys (alteredq)
  • Added LOD example (alteredq)

2011 02 26 - r33 (184.483 KB, gzip: 45.580 KB)

  • Changed build setup (build/Three.js now also include extras) (mrdoob)
  • Added ParticleSystem object to WebGLRenderer (alteredq)
  • Added Line support to WebGLRenderer (alteredq)
  • Added vertex colors support to WebGLRenderer (alteredq)
  • Added Ribbon object. (alteredq)
  • Added updateable textures support to WebGLRenderer (alteredq)
  • Added Sound object and SoundRenderer. (empaempa)
  • LOD, Bone, SkinnedMesh objects and hierarchy being developed. (empaempa)
  • Added hierarchies examples (mrdoob)

2010 12 31 - r32 (89.301 KB, gzip: 21.351 KB)

  • Scene now supports Fog and FogExp2. WebGLRenderer only right now. (alteredq)
  • Added setClearColor( hex, opacity ) to WebGLRenderer and CanvasRenderer (alteredq & mrdoob)
  • WebGLRenderer shader system refactored improving performance. (alteredq)
  • Projector now does frustum culling of all the objects using their sphereBoundingBox. (thx errynp)
  • material property changed to materials globally.

2010 12 06 - r31 (79.479 KB, gzip: 18.788 KB)

  • Minor Materials API change (mappings). (alteredq & mrdoob)
  • Added Filters to WebGLRenderer
  • python build.py --includes generates includes string

2010 11 30 - r30 (77.809 KB, gzip: 18.336 KB)

  • Reflection and Refraction materials support in WebGLRenderer (alteredq)
  • SmoothShading support on CanvasRenderer/MeshLambertMaterial
  • MeshShaderMaterial for WebGLRenderer (alteredq)
  • Removed RenderableFace4 from Projector/CanvasRenderer (maybe just temporary).
  • Added extras folder with GeometryUtils, ImageUtils, SceneUtils and ShaderUtils (alteredq & mrdoob)
  • Blender 2.5x Slim now the default exporter (old exporter removed).

2010 11 17 - r29 (69.563 KB)

  • New materials API Still work in progress, but mostly there. (alteredq & mrdoob)
  • Line clipping in CanvasRenderer (julianwa)
  • Refactored CanvasRenderer and SVGRenderer. (mrdoob)
  • Switched to Closure compiler.

2010 11 04 - r28 (62.802 KB)

  • Loader class allows load geometry asynchronously at runtime. (alteredq)
  • MeshPhongMaterial working with WebGLRenderer. (alteredq)
  • Support for huge objects. Max 500k polys and counting. (alteredq)
  • Projector.unprojectVector and Ray class to check intersections with faces (based on mindlapse work)
  • Fixed Projector z-sorting (not as jumpy anymore).
  • Fixed Orthographic projection (was y-inverted).
  • Hmmm.. lib file size starting to get too big...

2010 10 28 - r25 (54.480 KB)

  • WebGLRenderer now up to date with other renderers! (alteredq)
  • .obj to .js python converter (alteredq)
  • Blender 2.54 exporter
  • Added MeshFaceMaterial (multipass per face)
  • Reworked CanvasRenderer and SVGRenderer material handling

2010 10 06 - r18 (44.420 KB)

  • Added PointLight
  • CanvasRenderer and SVGRenderer basic lighting support (ColorStroke/ColorFill only)
  • Renderer > Projector. CanvasRenderer, SVGRenderer and DOMRenderer do not extend anymore
  • Added computeCentroids method to Geometry

2010 09 17 - r17 (39.487 KB)

  • Added Light, AmbientLight and DirectionalLight (philogb)
  • WebGLRenderer basic lighting support (philogb)
  • Memory optimisations

2010 08 21 - r16 (35.592 KB)

  • Workaround for Opera bug (clearRect not working with context with negative scale)
  • Additional Matrix4 and Vector3 methods

2010 07 23 - r15 (32.440 KB)

  • Using new object UV instead of Vector2 where it should be used
  • Added Mesh.flipSided boolean (false by default)
  • CanvasRenderer was handling UVs at 1,1 as bitmapWidth, bitmapHeight (instead of bitmapWidth - 1, bitmapHeight - 1)
  • ParticleBitmapMaterial.offset added
  • Fixed gap when rendering Face4 with MeshBitmapUVMappingMaterial

2010 07 17 - r14 (32.144 KB)

  • Refactored CanvasRenderer (more duplicated code, but easier to handle)
  • Face4 now supports MeshBitmapUVMappingMaterial
  • Changed order of *StrokeMaterial parameters. Now it's color, opacity, lineWidth.
  • BitmapUVMappingMaterial > MeshBitmapUVMappingMaterial
  • ColorFillMaterial > MeshColorFillMaterial
  • ColorStrokeMaterial > MeshColorStrokeMaterial
  • FaceColorFillMaterial > MeshFaceColorFillMaterial
  • FaceColorStrokeMaterial > MeshFaceColorStrokeMaterial
  • ColorStrokeMaterial > LineColorMaterial
  • Rectangle.intersects returned false for rectangles with 0px width or height

2010 07 12 - r13 (29.492 KB)

  • Added ParticleCircleMaterial and ParticleBitmapMaterial
  • Particle now uses ParticleCircleMaterial instead of ColorFillMaterial
  • Particle.size > Particle.scale.x and Particle.scale.y
  • Particle.rotation.z for rotating the particle
  • SVGRenderer currently out of sync

2010 07 07 - r12 (28.494 KB)

  • First version of the WebGLRenderer (ColorFillMaterial and FaceColorFillMaterial by now)
  • Matrix4.lookAt fix (CanvasRenderer and SVGRenderer now handle the -Y)
  • Color now using 0-1 floats instead of 0-255 integers

2010 07 03 - r11 (23.541 KB)

  • Blender 2.5 exporter (utils/export_threejs.py) now exports UV and normals (Thx kikko)
  • Scene.add > Scene.addObject
  • Enabled Scene.removeObject

2010 06 22 - r10 (23.959 KB)

  • Changed Camera system. (Thx Paul Brunt)
  • Object3D.overdraw = true to enable CanvasRenderer screen space point expansion hack.

2010 06 20 - r9 (23.753 KB)

  • JSLinted.
  • autoClear property for renderers.
  • Removed SVG rgba() workaround for WebKit. (WebKit now supports it)
  • Fixed matrix bug. (transformed objects outside the x axis would get infinitely tall :S)

2010 06 06 - r8 (23.496 KB)

  • Moved UVs to Geometry.
  • CanvasRenderer expands screen space points (workaround for antialias gaps).
  • CanvasRenderer supports BitmapUVMappingMaterial.

2010 06 05 - r7 (22.387 KB)

  • Added Line Object.
  • Workaround for WebKit not supporting rgba() in SVG yet.
  • No need to call updateMatrix(). Use .autoUpdateMatrix = false if needed. (Thx Gregory Athons).

2010 05 17 - r6 (21.003 KB)

  • 2d clipping on CanvasRenderer and SVGRenderer
  • clearRect optimisations on CanvasRenderer

2010 05 16 - r5 (19.026 KB)

  • Removed Class.js dependency
  • Added THREE namespace
  • Camera.x -> Camera.position.x
  • Camera.target.x > Camera.target.position.x
  • ColorMaterial > ColorFillMaterial
  • FaceColorMaterial > FaceColorFillMaterial
  • Materials are now multipass (use array)
  • Added ColorStrokeMaterial and FaceColorStrokeMaterial
  • geometry.faces.a are now indexes instead of references

2010 04 26 - r4 (16.274 KB)

  • SVGRenderer Particle rendering
  • CanvasRenderer uses context.setTransform to avoid extra calculations

2010 04 24 - r3 (16.392 KB)

  • Fixed incorrect rotation matrix transforms
  • Added Plane and Cube primitives

2010 04 24 - r2 (15.724 KB)

2010 04 24 - r1 (15.25 KB)

 

JSHint, A JavaScript Code Quality Tool


JSHint is a community-driven tool to detect errors and potential problems in JavaScript code and to enforce your team's coding conventions. It is very flexible so you can easily adjust it to your particular coding guidelines and the environment you expect your code to execute in.

JSHint is an open-source project that is supported and maintained by the JavaScript developer community. The source code is available on GitHub.
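As a small illustration, options can be set per file with special comments; the option names below (curly, eqeqeq, undef) come from JSHint's documented options, though the exact set available depends on the JSHint version:

/*jshint curly: true, eqeqeq: true, undef: true */
/*global jQuery */

// With undef on, referencing an undeclared name is an error,
// so jQuery is declared above as a known global.
function showGreeting(name) {
    // With eqeqeq on, '==' would be flagged here; '===' passes.
    if (name === undefined) {
        name = 'world';
    }
    jQuery('#greeting').text('Hello, ' + name);
}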

Preload CSS/JavaScript without execution


Preloading components in advance is good for performance. There are several ways to do it, but even the cleanest solution (open up an iframe and go crazy there) comes at a price: the price of the iframe and the price of parsing and executing the preloaded CSS and JavaScript. There's also a relatively high risk of JavaScript errors if the preloaded script assumes it's loaded in a page different from the one that preloads it.

After a bit of trial and a lot of error, I think I came up with something that works cross-browser:

  • in IE use new Image().src to preload all component types
  • in all other browsers use a dynamic <object> tag

Code and demo

Here's the final solution, below are some details.

In this example I assume the page prefetches, after onload, some components that will be needed by the next page. The components are a CSS file, a JS file and a PNG (a sprite).

window.onload = function () {

    var i = 0,
        max = 0,
        o = null,

        // list of stuff to preload
        preload = [
            'http://tools.w3clubs.com/pagr2/<?php echo $id; ?>.sleep.expires.png',
            'http://tools.w3clubs.com/pagr2/<?php echo $id; ?>.sleep.expires.js',
            'http://tools.w3clubs.com/pagr2/<?php echo $id; ?>.sleep.expires.css'
        ],
        isIE = navigator.appName.indexOf('Microsoft') === 0;

    for (i = 0, max = preload.length; i < max; i += 1) {

        if (isIE) {
            new Image().src = preload[i];
            continue;
        }
        o = document.createElement('object');
        o.data = preload[i];

        // IE stuff, otherwise 0x0 is OK
        //o.width = 1;
        //o.height = 1;
        //o.style.visibility = "hidden";
        //o.type = "text/plain"; // IE 
        o.width  = 0;
        o.height = 0;

        // only FF appends to the head
        // all others require body
        document.body.appendChild(o);
    }

};

A demo is here:
http://phpied.com/files/object-prefetch/page1.php?id=1
In the demo each component is delayed by 1 second and sent with an Expires header. Feel free to increment the ID for a new test with uncached components.

Tested in FF3.6, O10, Safari 4, Chrome 5, IE 6,7,8.

Comments

  • new Image().src doesn't do the job in FF because it has a separate cache for images. It didn't seem to work in Safari either, where CSS and JS were requested again on the second page, where they should've been cached
  • the dynamic object element has to be outside the head in most browsers in order to fire off the downloads
  • the dynamic object works also in IE7,8 with a few tweaks (commented out in the code above) but not in IE6. In separate tests I've also found the object element to be expensive in IE in general.

That's about it. Below are some other attempts that failed for various reasons in different browsers.

Other unsuccessful attempts

1.
I was actually inspired by this post by Ben Cherry, where he loads CSS and JS in a print stylesheet. Clever hack; unfortunately it didn't work in Chrome, which caches the JS but doesn't execute it on the next page.

2.
One of the comments on Ben's post suggested (Philip and Dejan said the same) using an invalid type attribute to prevent execution, e.g. text/cache.

var s = document.createElement('script');
s.src = preload[1];
s.type = "text/cache";
document.getElementsByTagName('head')[0].appendChild(s);

That worked for the most part, but not in FF3.6, where the JavaScript was never requested.

3.
A dynamic link prefetch didn't do anything, not even in FF, which is probably the only browser that supports it.

for (i = 0, max = preload.length; i < max; i += 1) {
    var link = document.createElement('link');
    link.href = preload[i];
    link.rel = "prefetch";
    document.getElementsByTagName('head')[0].appendChild(link);
}

Then it took a bit of trial and error to make IE7,8 work with an object tag, before I stumbled into IE6 and gave up in favor of image src.

In conclusion

I believe this is a solution I can be comfortable with, although it involves user-agent sniffing. It certainly looks less hacky than loading JS as CSS anyway. And object elements are meant to load any type of content, so I don't believe there's a semantic conflict here. Feel free to test and report any edge cases or browser/OS combos. (JS errors in IE on the second page are OK, because I'm using console.log in the preloaded JavaScript.)

Thanks for reading!


Parsing XML in JavaScript? at Mike Chambers


You probably need to convert your response to an XML object before using XML methods on it. For example, I have a script that uses Prototype, and I need to do the following with XML responses before using them:

onSaveComplete: function (response)
{
    var ajaxResponse = Try.these(
        function () { return new DOMParser().parseFromString(response.responseText, 'text/xml'); },
        function () { var xmldom = new ActiveXObject('Microsoft.XMLDOM'); xmldom.loadXML(response.responseText); return xmldom; }
    );

    var personid = ajaxResponse.getElementsByTagName('personid')[0].firstChild.nodeValue;
}

Higher-level libraries like Rico handle this for you, and that’s probably where I copied the code from.
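If you're not using Prototype, the same fallback chain can be expressed with plain feature detection; a sketch (parseXML here is just an illustrative helper name):

function parseXML(text) {
    if (window.DOMParser) {
        // standards-based browsers
        return new DOMParser().parseFromString(text, 'text/xml');
    }
    // older IE
    var xmldom = new ActiveXObject('Microsoft.XMLDOM');
    xmldom.loadXML(text);
    return xmldom;
}

var doc = parseXML(response.responseText);
var personid = doc.getElementsByTagName('personid')[0].firstChild.nodeValue;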
