What’s new in Babylon.js v2.0

Well, a lot of things are new actually. You can find the complete “What’s new” here, but I would like to take some time with you to showcase a few of the cool things we cooked with love for you.

Feel free to ping me on Twitter if you want to discuss this article: @deltakosh

But before digging into these features, please let me share with you the fantastic demo that Michel Rousseau did for Babylon.js v2.0: https://www.babylonjs.com/?MANSION


This demo showcases the 3D sound positioning and WebAudio support that David Rousset added to the framework (expect a post from him soon on this topic).

This demo contains 6 clickable areas with various Easter eggs…

You will also see a great usage of our volumetric light scattering post-process developed by Julien Moreau-Mathis.

Performance

We added some interesting tools to help web developers debug or optimize their scenes. One of these tools is the debug layer. When activated, it will give you a lot of great information about the current scene. It allows you to enable/disable specific features of the engine and provides performance counters as well.

We also used it to integrate with F12 tools thanks to user marks: https://blogs.msdn.com/b/eternalcoding/archive/2015/02/02/using-user-mark-to-analyze-performance-of-your-javascript-code.aspx

Debug layer

Rendering a scene in a browser is a great experience because you can reach a lot of different users and hardware. But the main associated caveat is that you can encounter very low-end devices.

The SceneOptimizer tool is designed to help you reach a specific framerate by gracefully degrading rendering quality at runtime. For more information, please read the associated documentation.

Special effects

Creating awesome visual effects is one of my greatest pleasures when working with a 3D engine. However, a specific option was missing from Babylon.js to enable a huge variety of these effects: the DepthRenderer. With Babylon.js v2.0 we introduced a way for post-processes to read the depth buffer. This is why we were also able to ship the following effects:

Volumetric light scattering: https://www.babylonjs-playground.com/?25 / https://doc.babylonjs.com/page.php?p=24840

Screen Space Ambient Occlusion (SSAO) : https://www.babylonjs-playground.com/?24 / https://doc.babylonjs.com/page.php?p=24837

Rendering optimizations

To allow you to create ever more complex scenes, we added support for LOD (Level of Detail). This feature can select different mesh qualities based on the distance to the viewer. And obviously this works well with hardware instancing:

https://www.babylonjs.com/?LOD

On the same topic, we also added support for bones and instances. Now you can easily simulate crowds!

https://www.babylonjs.com/?INSTANCEDBONES

New documentation site

We were previously using the wiki feature of GitHub to host our documentation. But some features were missing (control over display, documentation generation from TypeScript code, better rights management, etc.).

With Babylon.js v2.0, we also shipped our new documentation site: https://doc.babylonjs.com. This site is community-based as well, because anyone can suggest a new page or revisions to any existing page. You can even add comments to the API documentation (which will obviously be kept every time we regenerate the API documentation).

And this is just the beginning

The complete change log will give you more insights about all the great stuff we added to Babylon.js.

I would also like to warmly thank the community that works with us on this engine. Thanks to them, we added far more features than expected in this release. You guys rock!!!

Using user mark to analyze performance of your JavaScript code

When working on some advanced JavaScript code like a 3D engine, you may ask yourself what you can optimize and how much time you spend in specific pieces of code.

Feel free to ping me on Twitter (@deltakosh) if you want to discuss this article!

Can’t wait to see what this article is about? Watch this video:



The first idea that comes to mind is obviously the integrated profiler you can find using F12 tools.

Please note that with the new F12 tools we shipped with the Windows 10 Technical Preview, the profiler is now part of the UI Responsiveness window (I really like the new design, by the way…):

Let’s see other options that can give you more insights about how your code is performing.

console.time

You just have to call console.time() and console.timeEnd() around the piece of code you want to evaluate. The result is a string in your console displaying the time elapsed between the time and timeEnd calls.

This is pretty basic and can be easily emulated but I found this function really straightforward to use.

Even more interesting, you can specify a string to get a label for your measurement.

This is for instance what I did for Babylon.js:

console.time("Active meshes evaluation");
this._evaluateActiveMeshes();
console.timeEnd("Active meshes evaluation");

This kind of code can be found around all major features and then, when performance logging is enabled, you can get really great info:

Be warned, though, that rendering text into the console can itself consume CPU power.

Even if this function is not a standard function per se, browser compatibility is pretty great: Chrome, Firefox, IE, Opera and Safari all support it.

performance object

If you want something more visual, you can use the performance object as well (W3C recommendation / Can I Use?). Among other interesting features to help you measure web page performance, you can find a function called mark that can emit a user mark.

A user mark is the association of a name with a time value. You can measure portions of code with this API in order to get precise values. You can find a great article about this API by Aurelio de Rosa on SitePoint.

The idea today is to use this API to visualize specific user marks on the UI Responsiveness screen:

This tool allows you to capture a session and analyze how CPU is used:

We can then zoom on a specific frame by selecting an entry called “Animation frame callback” and right-clicking on it to select “filter to event”.

The selected frame will be filtered then:

Thanks to the new F12 tool, you can then switch to JavaScript call stacks to get more details about what happened during this event:

The main problem here is that it is not easy to see how code is dispatched during the event.

And this is where user marks enter the game (Go Hawks!). We can add our own markers and then decompose a frame to see which feature is the most expensive, and so on.

performance.mark("Begin of something…just now!");

Furthermore, when you create your own framework, it is super handy to be able to instrument your code with measurements:

performance.mark("Active meshes evaluation-Begin");
this._evaluateActiveMeshes();
performance.mark("Active meshes evaluation-End");
performance.measure("Active meshes evaluation", "Active meshes evaluation-Begin", "Active meshes evaluation-End");
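Measures are not only visible in the F12 timeline; they can also be read back programmatically through performance.getEntriesByType (part of the same Performance Timeline API, though support may vary on older browsers). A quick sketch:

```javascript
// Instrument a block of code with marks and a measure...
performance.mark("Active meshes evaluation-Begin");
// ... the work you want to measure goes here ...
performance.mark("Active meshes evaluation-End");
performance.measure("Active meshes evaluation",
    "Active meshes evaluation-Begin", "Active meshes evaluation-End");

// ...then read the results back in code:
var measures = performance.getEntriesByType("measure");
measures.forEach(function (measure) {
    console.log(measure.name + ": " + measure.duration.toFixed(2) + " ms");
});

// Marks and measures accumulate, so clear them once consumed
performance.clearMarks();
performance.clearMeasures();
```

This is handy if you want to log or aggregate timings yourself instead of (or in addition to) watching the timeline.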

Let’s see what you can get with babylon.js, for instance with the “V8” scene:

You can ask babylon.js to emit user marks and measures for you by using the debug layer:

Then, using UI responsiveness analyzer, you can get this screen:

You can see that user marks are displayed on top of the event itself (the orange triangles), as well as segments for every measure:

It is then super easy to determine that, for instance, the [Render targets] and [Main render] phases are the most expensive.

The complete code used by babylon.js to allow users to measure performance of various features is the following:

Tools._StartUserMark = function (counterName, condition) {
    if (typeof condition === "undefined") { condition = true; }
    if (!condition || !Tools._performance.mark) {
        return;
    }
    Tools._performance.mark(counterName + "-Begin");
};

Tools._EndUserMark = function (counterName, condition) {
    if (typeof condition === "undefined") { condition = true; }
    if (!condition || !Tools._performance.mark) {
        return;
    }
    Tools._performance.mark(counterName + "-End");
    Tools._performance.measure(counterName, counterName + "-Begin", counterName + "-End");
};

Tools._StartPerformanceConsole = function (counterName, condition) {
    if (typeof condition === "undefined") { condition = true; }
    if (!condition) {
        return;
    }

    Tools._StartUserMark(counterName, condition);

    if (console.time) {
        console.time(counterName);
    }
};

Tools._EndPerformanceConsole = function (counterName, condition) {
    if (typeof condition === "undefined") { condition = true; }
    if (!condition) {
        return;
    }

    Tools._EndUserMark(counterName, condition);

    if (console.time) {
        console.timeEnd(counterName);
    }
};

Thanks to F12 tools and user marks, you can now get a great dashboard showing how the different pieces of your code work together.

Angular Cloud Data Connector

As announced with Microsoft Open Technologies (see Eric Mittelette’s blog post), we released a new open source JavaScript framework called Angular Cloud Data Connector.

You want to discuss it? Please ping me on Twitter: @deltakosh

Angular Cloud Data Connector, or AngularCDC, is a library for Angular.js that allows you to work seamlessly with many data sources. If you are familiar with .NET, you can think of it as the DataSet of JavaScript.

Additionally, AngularCDC also supports offline mode and can handle all CRUD operations for you.

Thanks to providers, you can easily get information from various data sources with the same client code.

So far, we are supporting the following providers:

  • Azure Mobile Services
  • Amazon Web Services (DynamoDB)
  • Facebook (read only)
  • Twitter (read only)
  • Ordrin (read only)
  • Nitrogen (read only)

 

The idea here is obviously to get more and more providers (please feel free to contribute!) in order to reach every data source we can find on the web (think CouchDB, Azure Tables, etc.).

Note: thanks to Cory Fowler, I was invited to Web Camps TV on Channel9 to discuss AngularCDC: https://channel9.msdn.com/Shows/Web+Camps+TV/Offline-Web-Based-Data-Storage-with-Cloud-Sync-using-Angular-Cloud-Data-Connector-ACDC


Getting started with AngularCDC

If you are using Visual Studio, you can just load the AngularCDC.sln file. But you can also set up a Grunt environment to build your own version of AngularCDC. We already defined the gruntfile.js for your convenience.

To use AngularCDC directly, you just have to reference the angular-cdc.js file, which is available in the /dist folder of the repository.

Then, depending on the provider you want to use, you will have to reference specific files. For instance, if you want to connect to Azure Mobile Services, you will have to add the following references:

 

Then the magic can happen!

You can use Angular.js DI to reference AngularCDC objects:

var app = angular.module('demoApp', ['AngularCDC', 'AngularCDC.AzureMobileServices']);

Once this is done, you have to initialize the provider with required information:

$scope.initialize = function () {
    // configure A.M.S.
    angularCDCAzureMobileService.addSource(
        'serviceUrl', // appUrl
        'xxxxxxxxxx', // appKey
        ['people']);  // table name
    // register A.M.S. to AngularCDC
    angularCDCService.addSource(angularCDCAzureMobileService);
    // connect to service
    angularCDCService.connect(function (results) {
        // We are good to go
    }, $scope, 1);
};
$scope.initialize();

When everything is connected, you can use the angularCDCService object to do regular operations, like adding a new entity:

$scope.add = function (entity) {
    angularCDCService.add('people', entity);
    angularCDCService.commit(function () {
        // Things went well, call a sync (which is not necessary
        // if you added the $scope to the connect function of angularCDCService)
        // $scope.sync();
    }, function () {
        console.log('Problem adding data');
    });
};

AngularCDC will then take care of everything for you. For instance, if you are not connected to the network, the commit operation will save everything in a local IndexedDB store and will then synchronize the data back to the cloud.

And obviously, all the client code remains the same if you decide to switch providers!

If you want to learn more about basic usage of AngularCDC, please visit official documentation.

How does it work?

AngularCDC is based on several TypeScript files:

For instance, the angularCDCService connects with providers through an interface defined with TypeScript called IDataService:

module AngularCloudDataConnector {
    export interface IDataService {
        _dataId: number;
        tableNames: Array<string>;

        add(tableName: string, entity: any, onsuccess: (newEntity: any) => void, onerror: (error: string) => void): void;
        update(tableName: string, entity: any, onsuccess: (newEntity: any) => void, onerror: (error: string) => void): void;
        get(updateCallback: (result: any) => void, lastSyncDates: { [tableName: string]: Date; }): void;
        remove(tableName: string, entity: any, onsuccess: () => void, onerror: (error: string) => void): void;
    }
} 

Basically, each provider just has to implement this interface.
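To make the shape of the interface concrete, here is a hypothetical minimal in-memory provider written in plain JavaScript (duck typing gives us the IDataService shape without TypeScript). This is an illustration only, not one of the shipped providers, and the exact payload expected by the get callback is an assumption:

```javascript
// Hypothetical in-memory provider satisfying the IDataService shape
var inMemoryProvider = {
    _dataId: 0,
    tableNames: ['people'],
    _tables: { people: [] },

    add: function (tableName, entity, onsuccess, onerror) {
        this._tables[tableName].push(entity);
        onsuccess(entity);
    },

    update: function (tableName, entity, onsuccess, onerror) {
        var table = this._tables[tableName];
        for (var i = 0; i < table.length; i++) {
            if (table[i].id === entity.id) {
                table[i] = entity;
                return onsuccess(entity);
            }
        }
        onerror('Entity not found in ' + tableName);
    },

    get: function (updateCallback, lastSyncDates) {
        // A real provider would only return rows changed since lastSyncDates;
        // the payload shape below is an assumption for illustration
        updateCallback({ tableName: 'people', table: this._tables.people });
    },

    remove: function (tableName, entity, onsuccess, onerror) {
        var table = this._tables[tableName];
        var index = table.indexOf(entity);
        if (index === -1) {
            return onerror('Entity not found in ' + tableName);
        }
        table.splice(index, 1);
        onsuccess();
    }
};
```

A real provider would replace the in-memory table with calls to its backing service, but the contract with angularCDCService stays the same.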

If you want to learn more about how to create your own provider, please visit official documentation.

connectivityService.ts and offlineService.ts are used to support offline mode with the help of IndexedDB. To support devices where IndexedDB is not available, we also added an in-memory emulation in inMemoryDatabase.ts.

The core of AngularCDC is in database.ts where the “dataset” is created and maintained.

Going further

The best way to learn more about AngularCDC is to clone the repository and start playing with the code itself.

And please note that we are looking for contributors to add support for new providers as well!

[JavaScript] Should I cache my array’s length?

Happy new year!

To start this promising year, I would like to discuss an interesting topic I saw on Twitter (ping @deltakosh if you want to discuss). The discussion was about accessing an array’s length during a loop.

Simply put, should I use this:

var total = 0;
for (var i = 0; i < myArray.length; i++) {
    total += myArray[i];
}

Or that:

var total = 0;
for (var i = 0, len = myArray.length; i < len; i++) {
    total += myArray[i];
}

Should I read the .length property on every loop iteration, or should I cache it? Interesting question, because almost all JavaScript code on the web has to use loops.

So, pragmatically, I created this small jsPerf: https://jsperf.com/arraylengthprecaching

And the result is self-explanatory:


Please do not be afraid by the absence of IE; this is due to some user agent sniffing changes we did.

First point to note: the results are roughly the same most of the time.

On desktop configurations, browsers are doing a great job and there is almost NO difference between our two options. We can even see some devices where the cached version is slower than the regular version.

For instance, in the latest version of IE (which you can test on the Windows 10 Technical Preview or using https://remote.modern.ie), we started optimizing this recently by hoisting the length load out of the loop, as you can see in this post. However, it is worth noting that optimizations have limitations and don’t always kick in.

Mobile browsers do not have (yet) the array length hoisting optimization, so it is expected that length caching performs noticeably better in a micro benchmark like this one.

To sum up: when optimizing for performance, it’s almost always better to manually hoist such things, as it provides more information. It makes explicit the assumption that the JIT would otherwise have to make and guarantee, namely that the length does not change inside the loop (or that the loop does not care if it does). On the other hand, if the code being written is not performance sensitive, it may not be worth going out of your way to optimize it, unless it is also done for readability or some other good reason (Vyacheslav Egorov wrote an excellent post on this topic).
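That guarantee is a real semantic difference, not just a performance one: if the loop body grows the array, the two versions visit different numbers of items. A small standalone illustration:

```javascript
// Compare how many items a loop visits when the array grows mid-loop,
// with and without caching .length
function countVisited(cacheLength) {
    var queue = [1, 2, 3];
    var visited = 0;
    if (cacheLength) {
        for (var i = 0, len = queue.length; i < len; i++) {
            visited++;
            if (queue[i] === 2) queue.push(4); // grows the array, but len is frozen
        }
    } else {
        for (var j = 0; j < queue.length; j++) {
            visited++;
            if (queue[j] === 2) queue.push(4); // re-read .length sees the new item
        }
    }
    return visited;
}

console.log(countVisited(false)); // 4: the uncached loop also visits the pushed item
console.log(countVisited(true));  // 3: the cached loop covers only the original items
```

So cache the length only when you know (and want to promise) that it stays constant during the loop.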


Side note: I saw people arguing that accessing the .length property can be slower because JavaScript arrays are stored internally as linked lists. This is not true: the chunks of an array are stored as contiguous elements for random access within each chunk, but since JavaScript allows sparse arrays, a sparse array may store several chunks in a linked list to balance access speed and memory usage.

JavaScript: Returning this or not returning this, this is the question!

You want to discuss this article? Ping me on Twitter!

While designing the Babylon.js API, I recently found that some APIs may need to be more fluent.

A fluent API, as described by this Wikipedia article, is an implementation of an object-oriented API that aims to provide more readable code. jQuery, for instance, is a great example of what a fluent API allows you to do:

 $('<div></div>')
     .html("Fluent API are cool!")
     .addClass("header")
     .appendTo("body");

A fluent API lets you chain function calls by returning the this object.

We can easily create a fluent API like this:

var MyClass = function(a) {
    this.a = a;
}

MyClass.prototype.foo = function(b) {
    // Do some complex work   
    this.a += Math.cos(b);
    return this;
}

As you can see, the trick is just to return the this object (a reference to the current instance in this case) to allow the chain to continue.

If you are not aware of how “this” keyword is working in JavaScript, I recommend reading this great article by Mike West.

We can then chain calls:

var obj = new MyClass(5);
obj.foo(1).foo(2).foo(3);

Before trying to do the same with babylon.js, I wanted to be sure that this would not generate some performance issues.

So I did a benchmark!

var count = 10000000;

var MyClass = function(a) {
    this.a = a;
}

MyClass.prototype.foo = function(b) {
    // Do some complex work   
    this.a += Math.cos(b);
    return this;
}

MyClass.prototype.foo2 = function (b) {
    // Do some complex work   
    this.a += Math.cos(b);
}

var start = new Date().getTime();
var obj = new MyClass(5);
obj.foo(1).foo(2).foo(3);
for (var index = 0; index < count; index++) {
    obj.foo(1).foo(2).foo(3);
}
var end = new Date().getTime();

var start2 = new Date().getTime();
var obj2 = new MyClass(5);
for (var index = 0; index < count; index++) {
    obj2.foo2(1);
    obj2.foo2(2);
    obj2.foo2(3);
}
var end2 = new Date().getTime();

var div = document.getElementById("results");

div.innerHTML += obj.a + ": With return this: " + (end - start) + "ms<BR>";
div.innerHTML += obj2.a + ": Without return this: " + (end2 - start2) + "ms";

As you can see foo and foo2 do exactly the same thing. The only difference is that foo can be chained whereas foo2 cannot.

Obviously the call chain is different between:

obj.foo(1).foo(2).foo(3);

and

obj2.foo2(1);
obj2.foo2(2);
obj2.foo2(3);

Given this code, I ran it on Chrome, Firefox and IE to determine whether I should be concerned about performance.

And here are the results I got:

  • On Chrome, the regular API is 6% slower than the fluent API
  • On Firefox, both APIs run at almost the same speed (the fluent API is 1% slower)
  • On IE, both APIs run at almost the same speed (the fluent API is 2% slower)

The thing is that I added an operation to the function (Math.cos) to simulate some kind of processing done by the function.

If I remove everything and just keep the “return” statement, there is no difference on any browser (actually just one or two milliseconds over 10,000,000 tries).

So my conclusion is: It’s a go!

Fluent APIs are great: they produce more readable code, and you can use them without any problem or performance loss!

Simple inheritance with JavaScript

A lot of my friends are C# or C++ developers. They are used to using inheritance in their projects, and when they want to learn or discover JavaScript, one of the first questions they ask is: “But how can I do inheritance with JavaScript?”.

Actually, JavaScript uses a different approach than C# or C++ to be an object-oriented language. It is a prototype-based language. The concept of prototyping implies that behavior can be reused by cloning existing objects that serve as prototypes. Every object in JavaScript has a prototype, which defines a set of functions and members that the object can use. There are no classes. Just objects. Every object can then be used as a prototype for another object.

This concept is extremely flexible and we can use it to simulate some concepts from OOP like inheritance.

Implementing inheritance

Let’s imagine we want to create this hierarchy using JavaScript:

First of all, we can create ClassA easily. Because there are no explicit classes, we can define a set of behavior (a class, so to speak) by just creating a function like this:

var ClassA = function() {
    this.name = "class A";
}

This “class” can be instantiated using the new keyword:

var a = new ClassA();

We can attach behavior to it through its prototype:

ClassA.prototype.print = function() {
    console.log(this.name);
}

And call that behavior on our object:

a.print();

Fairly simple, right?

The complete sample is just 8 lines long:

var ClassA = function() {
    this.name = "class A";
}

ClassA.prototype.print = function() {
    console.log(this.name);
}

var a = new ClassA();

a.print();

Now let’s add a tool to create “inheritance” between classes. This tool will just have to do one single thing: Cloning the prototype:

var inheritsFrom = function (child, parent) {
    child.prototype = Object.create(parent.prototype);
};

This is exactly where the magic happens! By cloning the prototype, we transfer all members and functions to the new class.
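One small refinement worth noting: Object.create replaces the whole prototype object, which also discards the constructor reference. A common pattern (sketched here as an addition, not part of the snippet above) restores it:

```javascript
var inheritsFrom = function (child, parent) {
    child.prototype = Object.create(parent.prototype);
    // Object.create replaced the prototype wholesale, so restore the
    // constructor reference that code relying on obj.constructor expects
    child.prototype.constructor = child;
};

var ClassA = function () { this.name = "class A"; };
var ClassB = function () { this.name = "class B"; };
inheritsFrom(ClassB, ClassA);

var b = new ClassB();
console.log(b instanceof ClassA);      // true
console.log(b.constructor === ClassB); // true (would be ClassA without the fix)
```

Nothing in the examples below depends on this detail, but it avoids surprises when other code inspects obj.constructor.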

So if we want to add a second class that will be child of the first one, we just have to use this code:

var ClassB = function() {
    this.name = "class B";
    this.surname = "I'm the child";
}

inheritsFrom(ClassB, ClassA);

Then, because ClassB inherited the print function from ClassA, the following code works:

var b = new ClassB();
b.print();

And produces the following output:

class B

We can even override the print function for ClassB:

ClassB.prototype.print = function() {
    ClassA.prototype.print.call(this);
    console.log(this.surname);
}

In this case, the produced output will look like this:

class B

I’m the child

The trick here is to use ClassA.prototype to get the base print function. Then, thanks to the call function, we can invoke the base function on the current object (this).

Creating ClassC is now obvious:

var ClassC = function () {
    this.name = "class C";
    this.surname = "I'm the grandchild";
}

inheritsFrom(ClassC, ClassB);

ClassC.prototype.foo = function() {
    // Do some funky stuff here...
}

ClassC.prototype.print = function () {
    ClassB.prototype.print.call(this);
    console.log("Sounds like this is working!");
}

var c = new ClassC();
c.print();

And the output is:

class C

I’m the grandchild
Sounds like this is working!

Philosophy…

To conclude, I just want to clearly state that JavaScript is not C# or C++. It has its own philosophy. If you are a C++ or C# developer and you really want to embrace the full power of JavaScript, the best tip I can give you is: Do not try to replicate your language into JavaScript. There is no best or worst language. Just different philosophies!

JavaScript: using closure space to create real private members

For a recent project, I was discussing with @johnshew the ways JavaScript developers can embed private members into an object. My technique for this specific case is to use what I call “closure space”.

But before diving into it, let me show you why you may need private members, and also another way to “simulate” them.

Feel free to ping me on Twitter if you want to discuss this article: @deltakosh

Why use private members

When you create an object using JavaScript, you can define value members. If you want to control read/write access to them, you need accessors, which can be defined like this:

var entity = {};

entity._property = "hello world";
Object.defineProperty(entity, "property", {
    get: function () { return this._property; },
    set: function (value) {
        this._property = value;
    },
    enumerable: true,
    configurable: true
});

Doing this, you have full control over read and write operations. The problem is that the _property member is still accessible and can be modified directly.
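To make the problem concrete, here is the same object restated self-contained, showing how easily the accessor can be bypassed:

```javascript
var entity = {};
entity._property = "hello world";
Object.defineProperty(entity, "property", {
    get: function () { return this._property; },
    set: function (value) { this._property = value; },
    enumerable: true,
    configurable: true
});

// The accessor works as expected...
console.log(entity.property); // "hello world"

// ...but nothing stops a caller from bypassing it entirely:
entity._property = "encapsulation broken";
console.log(entity.property); // "encapsulation broken"
```

The underscore prefix is only a naming convention; the engine does nothing to protect the member.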

This is exactly why you need a more robust way to define private members that can only be accessed by object’s functions.

Using closure space

The trick here is to use closure space. This memory space is created for you by the browser each time an inner function has access to variables from the scope of an outer function. This can be tricky sometimes, but for our purposes it is perfect.

So let’s change a bit the previous code to use this feature:

var createProperty = function (obj, prop, currentValue) {
    Object.defineProperty(obj, prop, {
        get: function () { return currentValue; },
        set: function (value) {
            currentValue = value;
        },
        enumerable: true,
        configurable: true
    });
}

var entity = {};

var myVar = "hello world";
createProperty(entity, "property", myVar);

In this example, the createProperty function has a currentValue variable that the get and set functions can see. This variable is saved in the closure space of the get and set functions. Only these two functions can now see and update the currentValue variable! Mission accomplished!

The only caveat here is that the source value (myVar) is still accessible. So here comes another version with even more robust protection:

var createProperty = function (obj, prop) {
    var currentValue = obj[prop];
    Object.defineProperty(obj, prop, {
        get: function () { return currentValue; },
        set: function (value) {
            currentValue = value;
        },
        enumerable: true,
        configurable: true
    });
}

var entity = {
    property: "hello world"
};

createProperty(entity, "property");

This way, even the source value is discarded. So mission fully accomplished!

Performance consideration

Let’s now have a look at performance.

Obviously, closure spaces and even properties are slower and more expensive than a plain variable. That’s why this article focuses on the difference between the regular way and the closure space technique.

To check that the closure space approach is not too expensive compared to the regular way, I wrote this little benchmark:

<!DOCTYPE html>
<html xmlns="https://www.w3.org/1999/xhtml">
<head>
    <title></title>
</head>
<style>
    html {
        font-family: "Helvetica Neue", Helvetica;
    }
</style>
<body>
    <div id="results">Computing...</div>
    <script>
        var results = document.getElementById("results");
        var sampleSize = 1000000;
        var opCounts = 1000000;

        var entities = [];

        setTimeout(function () {
            // Creating entities
            for (var index = 0; index < sampleSize; index++) {
                entities.push({
                    property: "hello world (" + index + ")"
                });
            }

            // Random reads
            var start = new Date().getTime();
            for (index = 0; index < opCounts; index++) {
                var position = Math.floor(Math.random() * entities.length);
                var temp = entities[position].property;
            }
            var end = new Date().getTime();

            results.innerHTML = "<strong>Results:</strong><br>Using member access: <strong>" + (end - start) + "</strong> ms";
        }, 0);

        setTimeout(function () {
            // Closure space =======================================
            var createProperty = function (obj, prop, currentValue) {
                Object.defineProperty(obj, prop, {
                    get: function () { return currentValue; },
                    set: function (value) {
                        currentValue = value;
                    },
                    enumerable: true,
                    configurable: true
                });
            }
            // Adding property and using closure space to save private value
            for (var index = 0; index < sampleSize; index++) {
                var entity = entities[index];

                var currentValue = entity.property;
                createProperty(entity, "property", currentValue);
            }

            // Random reads
            var start = new Date().getTime();
            for (index = 0; index < opCounts; index++) {
                var position = Math.floor(Math.random() * entities.length);
                var temp = entities[position].property;
            }
            var end = new Date().getTime();

            results.innerHTML += "<br>Using closure space: <strong>" + (end - start) + "</strong> ms";
        }, 0);

        setTimeout(function () {
            // Using local member =======================================
            // Adding property and using local member to save private value
            for (var index = 0; index < sampleSize; index++) {
                var entity = entities[index];

                entity._property = entity.property;
                Object.defineProperty(entity, "property", {
                    get: function () { return this._property; },
                    set: function (value) {
                        this._property = value;
                    },
                    enumerable: true,
                    configurable: true
                });
            }

            // Random reads
            var start = new Date().getTime();
            for (index = 0; index < opCounts; index++) {
                var position = Math.floor(Math.random() * entities.length);
                var temp = entities[position].property;
            }
            var end = new Date().getTime();

            results.innerHTML += "<br>Using local member: <strong>" + (end - start) + "</strong> ms";
        }, 0);

    </script>
</body>
</html>

 

I create 1 million objects all with a property member. Then I do three tests:

  • Do 1 million random accesses to the property
  • Do 1 million random accesses to the “closure space” version
  • Do 1 million random accesses to the regular get/set version

 

Here are a table and a chart about the result:

We can notice that the closure space version is always faster than the regular version, and depending on the browser, it can be a really impressive optimization.

Chrome performance seems really weird. There may be a bug; to be sure, I contacted Google’s team to figure out what’s happening here.

However, if we look closely, we can see that using closure space or even a property can be ten times slower than direct member access. So be warned and use it wisely.

Memory footprint

We also have to check that this technique does not consume too much memory. To benchmark memory, I wrote these three little pieces of code:

Reference code

var sampleSize = 1000000;

var entities = [];

// Creating entities
for (var index = 0; index < sampleSize; index++) {
    entities.push({
        property: "hello world (" + index + ")"
    });
}

Regular way

var sampleSize = 1000000;

var entities = [];

// Adding property and using local member to save private value
for (var index = 0; index < sampleSize; index++) {
    var entity = {};

    entity._property = "hello world (" + index + ")";
    Object.defineProperty(entity, "property", {
        get: function () { return this._property; },
        set: function (value) {
            this._property = value;
        },
        enumerable: true,
        configurable: true
    });

    entities.push(entity);
}

Closure space version

var sampleSize = 1000000;

var entities = [];

var createProperty = function (obj, prop, currentValue) {
    Object.defineProperty(obj, prop, {
        get: function () { return currentValue; },
        set: function (value) {
            currentValue = value;
        },
        enumerable: true,
        configurable: true
    });
};

// Adding property and using closure space to save private value
for (var index = 0; index < sampleSize; index++) {
    var entity = {};

    var currentValue = "hello world (" + index + ")";
    createProperty(entity, "property", currentValue);

    entities.push(entity);
}

Then I ran these three snippets and launched the embedded memory profiler (shown here using the F12 tools):

Here are the results I got on my computer:

Between the closure space and regular versions, only Chrome gets better results with the closure space version; IE and Firefox use a bit more memory for it.

Conclusion

As you can see, closure space properties can be a great way to create truly private data. You may have to deal with a small increase in memory consumption, but from my point of view this is fairly reasonable, and at that price you can get a great performance improvement over the regular way.

And by the way if you want to try it by yourself, please find all the code used here.

Why we decided to move from plain JavaScript to TypeScript for Babylon.js

One year ago, when we decided to sacrifice all of our spare time to create Babylon.js, we had a really interesting discussion about using TypeScript as our main development language.

At that time, TypeScript was not robust enough (even though we did some experiments), so we decided to use plain JavaScript. But I am really excited to announce that we started porting Babylon.js to TypeScript this weekend!

Before going further, here are some numbers that will help put my explanations in context. Babylon.js is:

  • An average of 1 version per month
  • 21 contributors
  • 32 releases
  • 365 commits (one per day!)
  • 14000+ lines of code
  • More than 120 files of code
  • More than 200 forks
  • A bandwidth of 1TB per month for the website
  • All my spare time (I cannot even remember the last time I went to see a movie)
  • 1.3GB (Code and samples)

Let me explain the main reasons for this decision.

If you want to discuss this article, you can reach me on Twitter: @deltakosh

Because it is transparent for users

TypeScript is a language that compiles to plain JavaScript files. The produced code follows all the JavaScript rules (the “good parts”) and thus is clean and modular (one file per class). You can export namespaces (modules), and to be honest, most of the time the produced JavaScript is very close to what we would have written by hand.

Developers that use Babylon.js will not see any difference between the previous version, developed in JavaScript, and the new version, developed in TypeScript.

Furthermore, we can reach more users who may be wary of JavaScript. TypeScript is, for instance, a good way for developers coming from C#, Java and other strongly typed languages to start developing for the web.

Because Babylon.js is an open-source project

You may consider this one a counter-example: it could be seen as a blocker for JavaScript developers who want to contribute to Babylon.js. But remember that TypeScript supports JavaScript directly, so nothing prevents you from fixing a bug or submitting a feature in JavaScript. It will then be our job to educate contributors so that, over time, we get fewer contributions in JavaScript and more in TypeScript.

And here why:

  • Using TypeScript, we benefit from the power of static compilation, and this helps A LOT. We can easily catch wrong parameters, wrong names, typos and all kinds of syntax errors
  • Integrating a pull request is a hard task because you must guarantee that code you did not write will not break things. With TypeScript, static compilation makes this much easier
  • Reading and understanding TypeScript code is easier because parameters and functions are typed, so you know, for instance, what kind of parameters you should pass to a function

The funny thing is that during the port to TypeScript, I even found a bug in my own code. It was in the collision engine, where I do a lot of computations. The bug was hiding here:

[screenshot: the offending line of code]

Nothing remarkable, especially when there are tons of other lines like this.

When I moved to TypeScript, this code remained the same, but thanks to strong typing and static compilation, here is what I got:

[screenshot: the TypeScript compilation error]

This is a common mistake in JavaScript: using a property instead of a function. It is a hard-to-find but easy-to-fix bug! TypeScript detected it instantly because it knew that _this.tempVector was a Vector3 and did not have a lengthSquared property.
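In plain JavaScript, this class of mistake fails silently, which is exactly why it is so hard to spot. A tiny illustration (with a hypothetical stand-in Vector3, not the Babylon.js one):

```javascript
// A minimal stand-in for a vector class: lengthSquared is a function.
function Vector3(x, y, z) {
    this.x = x; this.y = y; this.z = z;
}
Vector3.prototype.lengthSquared = function () {
    return this.x * this.x + this.y * this.y + this.z * this.z;
};

var v = new Vector3(1, 2, 2);

var good = v.lengthSquared();  // 9: the function is called
var bad = v.lengthSquared;     // the function object itself, not 9!

// Any arithmetic with the mistyped value quietly produces NaN.
var silent = bad * 2;          // NaN, and no error is thrown
```

With static typing, the second line would be rejected at compile time instead of poisoning later computations with NaN.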

Because TypeScript is an open source project

Our users are not all on Windows, and they are not all using Visual Studio (so sad!), so moving from JavaScript to TypeScript would not have been possible if these users could not contribute too.

But no worries, TypeScript has us covered! First of all you can find all the source code here:

https://TypeScript.codeplex.com/

Then, the TypeScript compiler itself can run on Windows, Linux and OS X thanks to Node.js.

You also have to know that everything using classes/packages/modules compiles to AMD (RequireJS) and CommonJS module formats.

Because tooling is awesome when working with a modern IDE

TypeScript works extremely well with Sublime Text, Eclipse and almost all major IDEs. On our side, we are using Visual Studio, and to be honest, the experience is really great.

Indeed, using Visual Studio 2013, you will have:

  • Integrated TypeScript file support
  • Syntax color
  • IntelliSense:

[screenshot: IntelliSense on a typed object]

  • Discoverability: With IntelliSense and strong typing, you get a kind of API documentation right under your mouse
  • Refactoring support
  • Integrated class browser (I LOVE this one):

[screenshot: the integrated class browser]

I also use ReSharper (www.jetbrains.com) as a plugin in Visual Studio. With this tool, you get extra goodness such as automatic refactoring to lambda expressions:

Here is my initial code:

[screenshot: the initial function-based code]

Can you see the green line? If I right-click on it, I have an option to convert my function to a lambda expression:

[screenshot: the “convert to lambda expression” suggestion]

No more “that = this” stuff! (With just one mouse click)

  • The debug experience is also great because you can put a breakpoint into your TypeScript code! Visual Studio will handle the link between .ts and .js files for you:

[screenshot: a breakpoint hit in TypeScript code inside Visual Studio]

Because TypeScript is handy

We have just seen lambda expressions (an elegant way to get rid of the “that = this” closure pattern). There are tons of other wonderful things like this in TypeScript. For instance, here is how to handle inheritance:

[screenshot: a TypeScript class extending another]

Here, my Camera class inherits from the Node class. Nothing more to do! Obviously, you can call the parent’s functions from the child’s code:

[screenshot: calling the parent class’s function]

How can it be simpler?

The generated code handles for you all the burden of inheritance with prototyping:

[screenshot: the generated JavaScript inheritance code]

Using TypeScript, you have all the power of a strongly typed language, but you can code in normal JavaScript at any time because TypeScript is optional: you can mix and match.
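For comparison, here is roughly the plumbing you have to write by hand in plain JavaScript to get the same result (a sketch with hypothetical Node and Camera classes, close in spirit to what the TypeScript compiler generates):

```javascript
// The parent class.
function Node(name) {
    this.name = name;
}
Node.prototype.getName = function () {
    return this.name;
};

// The child class.
function Camera(name, position) {
    // Call the parent constructor by hand.
    Node.call(this, name);
    this.position = position;
}
// Wire up the prototype chain by hand.
Camera.prototype = Object.create(Node.prototype);
Camera.prototype.constructor = Camera;

// Override a method and still call the parent's version.
Camera.prototype.getName = function () {
    return "camera: " + Node.prototype.getName.call(this);
};
```

Every step of this boilerplate (constructor chaining, prototype wiring, parent calls) is generated for you when you write a simple `extends` in TypeScript.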

Because of the future

The real power of TypeScript is its compilation. At the end of the day, the goal is to produce JavaScript. Thanks to this compilation step, new features can be added to the generated files without changing a line of your code.

For instance according to TypeScript’s roadmap, the async/await language feature from C# is under exploration. This means that perhaps one day, we will not have to struggle with callback/promises/handling exceptions in an asynchronous flow.

Obviously, the generated code will still have to handle all of that plumbing, but as a developer you will just see a clear, linear version of your code.

Let’s look at this standard JavaScript code:

[screenshot: standard callback-based JavaScript code]

Now imagine you can do that!

[screenshot: the same code written with async/await]

Lovely, right?

And the same goes for current and upcoming ECMAScript 6 features!

Call to action

We have just started porting our code, so it is a bit too early for a post-mortem, but I will write an article in two or three months about how things went with our 3D engine written in TypeScript.

In the meantime, if you want to learn more about TypeScript here are some useful pointers:

And obviously, I urge you to think about doing the same thing if you have open source projects that use JavaScript!

What do you mean by shaders? Learn how to create shaders with babylon.js

You may have noticed that we talked a lot about babylon.js during //Build 2014. If you did not, you can watch the day 2 keynote and go directly to 2:24-2:28: https://channel9.msdn.com/Events/Build/2014/KEY02

Steven Guggenheimer and John Shewchuk demoed how Oculus Rift support was added to Babylon.js. And one of the key things for this demo was the work we did on a specific shader to simulate lenses, as you can see in this picture:

I also presented a session with Frank Olivier and Ben Constable about graphics on IE and Babylon.js: https://channel9.msdn.com/Events/Build/2014/3-558

This leads me to one of the questions I often have about babylon.js: What do you mean by shaders???

So today I am going to try to explain how shaders work.

If you want to discuss this article, you can reach me on Twitter: @deltakosh

Summary

  1. The theory
  2. Too hard? BABYLON.ShaderMaterial to the rescue
  3. CYOS: Create Your Own Shader
  4. Your shader?

The theory

Before starting experimenting, we must take a break and see how things work internally.

When dealing with hardware-accelerated 3D, you must be aware that you are dealing with two processors: the main CPU and the GPU. The GPU is a kind of extremely specialized CPU.

The GPU is a state machine that you set up using the CPU. For instance the CPU will configure the GPU to render lines instead of triangles. Or it will define that transparency is on and so on.

Once all the states are set, the CPU defines what to render: the geometry, which is composed of a list of points (the vertices, stored in an array called the vertex buffer) and a list of indices (the faces, or triangles, stored in an array called the index buffer).

The final step for the CPU is to define how to render the geometry; for this specific task, the CPU defines shaders for the GPU. A shader is a piece of code that the GPU executes for each of the vertices and pixels it has to render.

Some vocabulary: think of a vertex (vertices when there are several of them) as a “point” in a 3D environment as opposed to the point in a 2D environment.

There are two kinds of shaders: vertex shader and pixel (or fragment) shader.

Graphics pipeline

Before digging into shaders, let’s take a step back here. To render pixels the GPU will take the geometry defined by the CPU and will do the following:

  • Using the index buffer, three vertices are gathered to define a triangle: the index buffer contains a list of vertex indices. This means that each entry in the index buffer is the number of a vertex in the vertex buffer. This is really useful to avoid duplicating vertices. For instance, the following index buffer is a list of 2 faces: [1 2 3 1 3 4]. The first face contains vertex 1, vertex 2 and vertex 3. The second face contains vertex 1, vertex 3 and vertex 4. So there are 4 vertices in this geometry:

  • The vertex shader is applied on each vertex of the triangle. The primary goal of the vertex shader is to produce a pixel for each vertex (the projection on the 2D screen of the 3D vertex):

  • Using these 3 pixels (which define a 2D triangle on the screen), the GPU interpolates all values attached to each pixel (at least its position), and the pixel shader is applied to every pixel inside the 2D triangle in order to generate a color for it:

  • This process is done for every face defined by the index buffer.
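The index-buffer walk described in the first bullet can be sketched in a few lines of JavaScript (hypothetical data, reusing the [1 2 3 1 3 4] example with its 1-based indices):

```javascript
// Four vertices (only 2D positions here, for brevity).
var vertexBuffer = [
    { x: 0, y: 0 },  // vertex 1
    { x: 1, y: 0 },  // vertex 2
    { x: 1, y: 1 },  // vertex 3
    { x: 0, y: 1 }   // vertex 4
];

// Two faces sharing vertices 1 and 3, as in the example above.
var indexBuffer = [1, 2, 3, 1, 3, 4];

// Gather three vertices per face, exactly as the GPU does.
var faces = [];
for (var i = 0; i < indexBuffer.length; i += 3) {
    faces.push([
        vertexBuffer[indexBuffer[i] - 1],
        vertexBuffer[indexBuffer[i + 1] - 1],
        vertexBuffer[indexBuffer[i + 2] - 1]
    ]);
}
// Two triangles were built from only four stored vertices.
```

This is exactly why indexing saves memory: the shared vertices 1 and 3 are stored once and referenced twice.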

Obviously, due to its parallel nature, the GPU is able to process this step for a lot of faces simultaneously, and thus achieves really good performance.

GLSL

We have just seen that to render triangles, the GPU needs two shaders: the vertex shader and the pixel shader. These shaders are written using a language called GLSL (the OpenGL Shading Language), which looks like C.

For Internet Explorer 11, we have developed a compiler to transform GLSL to HLSL (High Level Shader Language), which is the shader language of DirectX 11. This allows IE11 to ensure that the shader code is safe (You don’t want to reset your computer when using WebGL!):

 

Here is a sample of a common vertex shader:

precision highp float;

// Attributes
attribute vec3 position;
attribute vec2 uv;

// Uniforms
uniform mat4 worldViewProjection;

// Varying
varying vec2 vUV;

void main(void) {
    gl_Position = worldViewProjection * vec4(position, 1.0);

    vUV = uv;
}

Vertex shader structure

A vertex shader contains the following:

  • Attributes: An attribute defines a portion of a vertex. By default, a vertex should at least contain a position (a vector3: x, y, z). But as a developer, you can decide to add more information. For instance, in the shader above, there is a vector2 named uv (texture coordinates that allow applying a 2D texture to a 3D object)
  • Uniforms: A uniform is a variable used by the shader and defined by the CPU. The only uniform we have here is a matrix used to project the position of the vertex (x, y, z) to the screen (x, y)
  • Varying: Varying variables are values created by the vertex shader and transmitted to the pixel shader. Here the vertex shader transmits a vUV value (a simple copy of uv) to the pixel shader. This means that a pixel is defined here with a position and texture coordinates. These values will be interpolated by the GPU and used by the pixel shader.
  • main: The function named main is the code executed by the GPU for each vertex, and it must at least produce a value for gl_Position (the position on the screen of the current vertex).

We can see in our sample that the vertex shader is pretty simple. It generates a system variable (starting with gl_) named gl_Position to define the position of the associated pixel, and it sets a varying variable called vUV.

The voodoo behind matrices

The thing in our shader is that we have a matrix named worldViewProjection, and we use this matrix to project the vertex position to the gl_Position variable. That is cool, but how do we get the value of this matrix? It is a uniform, so we have to define it on the CPU side (using JavaScript).

This is one of the complex parts of doing 3D: you must understand some complex math (or use a 3D engine like babylon.js, which we are going to see later).

The worldViewProjection matrix is the combination of 3 different matrices:

  • The world matrix, which positions, rotates and scales the object in the scene
  • The view matrix, which accounts for the position and orientation of the camera
  • The projection matrix, which projects 3D coordinates onto the 2D screen

The resulting matrix allows us to transform 3D vertices into 2D pixels while taking into account the point of view and everything related to the position/scale/rotation of the current object.

It is your responsibility as a 3D developer to create and keep this matrix up to date.
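To make this a bit more concrete, here is a minimal sketch of building the worldViewProjection uniform on the CPU side (plain flat arrays rather than the Babylon.js Matrix class, identity view/projection matrices for simplicity, and one of several possible multiplication conventions):

```javascript
// Multiply two 4x4 matrices stored as flat row-major arrays.
function multiply(a, b) {
    var r = new Array(16);
    for (var row = 0; row < 4; row++) {
        for (var col = 0; col < 4; col++) {
            var sum = 0;
            for (var k = 0; k < 4; k++) {
                sum += a[row * 4 + k] * b[k * 4 + col];
            }
            r[row * 4 + col] = sum;
        }
    }
    return r;
}

// Transform the point (x, y, z, 1) by a matrix.
function transform(m, v) {
    var out = [];
    for (var row = 0; row < 4; row++) {
        out.push(m[row * 4] * v[0] + m[row * 4 + 1] * v[1] +
                 m[row * 4 + 2] * v[2] + m[row * 4 + 3]);
    }
    return out;
}

var identity = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];

// World matrix: translate the object by (2, 0, 0).
var world = [1,0,0,2, 0,1,0,0, 0,0,1,0, 0,0,0,1];
var view = identity;        // camera at the origin, for simplicity
var projection = identity;  // no perspective, for simplicity

// The single uniform uploaded to the vertex shader.
var worldViewProjection = multiply(multiply(world, view), projection);
```

A real engine recomputes the world matrix whenever the object moves and the view matrix whenever the camera moves, then uploads the combined result as a uniform before each draw call.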

Back to the shaders

Once the vertex shader has been executed on every vertex (3 times, then), we have 3 pixels with a correct gl_Position and a vUV value. The GPU will then interpolate these values over every pixel contained in the triangle produced by these pixels.
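This interpolation can be sketched with barycentric weights: every pixel inside the triangle receives a blend of the three vertices’ varying values (a simplified illustration, ignoring perspective correction):

```javascript
// Interpolate a 2-component varying value (like vUV) at a point
// inside a triangle, given barycentric weights (w0 + w1 + w2 === 1).
function interpolate(values, w0, w1, w2) {
    return [
        values[0][0] * w0 + values[1][0] * w1 + values[2][0] * w2,
        values[0][1] * w0 + values[1][1] * w1 + values[2][1] * w2
    ];
}

// vUV emitted by the vertex shader at each of the three vertices.
var vUVs = [[0, 0], [1, 0], [0, 1]];

// At a vertex, that vertex's weight is 1: the value is unchanged.
var atVertex0 = interpolate(vUVs, 1, 0, 0);

// At the centroid, the three values are averaged.
var atCentroid = interpolate(vUVs, 1 / 3, 1 / 3, 1 / 3);
```

The GPU performs this blending automatically for every varying you declare, which is how each pixel gets its own vUV.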

Then for each pixel, it will execute the pixel shader:

precision highp float;
varying vec2 vUV;
uniform sampler2D textureSampler;

void main(void) {
    gl_FragColor = texture2D(textureSampler, vUV);
}

Pixel (or fragment) shader structure

The structure of a pixel shader is similar to a vertex shader:

  • Varying: Varying variables are values created by the vertex shader and transmitted to the pixel shader. Here the pixel shader receives a vUV value from the vertex shader
  • Uniforms: A uniform is a variable used by the shader and defined by the CPU. The only uniform we have here is a sampler, which is a tool used to read texture colors
  • main: The function named main is the code executed by the GPU for each pixel, and it must at least produce a value for gl_FragColor (the color of the current pixel).

This pixel shader is fairly simple: it reads the color from the texture using the texture coordinates transmitted by the vertex shader (which in turn got them from the vertex).
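What texture2D does can be sketched as a simple lookup: UV coordinates in [0, 1] are mapped to texel indices (nearest-neighbor only here; real samplers also filter and wrap):

```javascript
// A tiny 2x2 "texture": each texel is an [r, g, b] color.
var texture = {
    width: 2,
    height: 2,
    texels: [
        [255, 0, 0], [0, 255, 0],   // row 0
        [0, 0, 255], [255, 255, 0]  // row 1
    ]
};

// Nearest-neighbor equivalent of texture2D(sampler, vUV).
function sample(tex, u, v) {
    var x = Math.min(tex.width - 1, Math.floor(u * tex.width));
    var y = Math.min(tex.height - 1, Math.floor(v * tex.height));
    return tex.texels[y * tex.width + x];
}

var color = sample(texture, 0.75, 0.25); // lands in row 0, column 1
```

The interpolated vUV value computed for each pixel is what gets fed into this lookup, one pixel at a time.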

To achieve this result with raw WebGL, you would have to deal with a LOT of code. Indeed, WebGL is a really powerful but really low-level API, and you have to do everything by yourself, from creating the buffers to defining vertex structures. You also have to do all the math, set all the states, handle texture loading and so on…

Too hard? BABYLON.ShaderMaterial to the rescue

I know what you are thinking: Shaders are really cool but I do not want to bother with WebGL internal plumbing or even with math.

And you are right! This is a perfectly legitimate request, and that is exactly why I created Babylon.js.

Let me present you the code used by the previous rolling sphere demo. First of all you will need a simple webpage:

<!DOCTYPE html>
<html>
<head>
    <title>Babylon.js</title>
    <script src="Babylon.js"></script>

    <script type="application/vertexShader" id="vertexShaderCode">
        precision highp float;

        // Attributes
        attribute vec3 position;
        attribute vec2 uv;

        // Uniforms
        uniform mat4 worldViewProjection;

        // Varying
        varying vec2 vUV;

        void main(void) {
            gl_Position = worldViewProjection * vec4(position, 1.0);

            vUV = uv;
        }
    </script>

    <script type="application/fragmentShader" id="fragmentShaderCode">
        precision highp float;
        varying vec2 vUV;

        uniform sampler2D textureSampler;

        void main(void) {
            gl_FragColor = texture2D(textureSampler, vUV);
        }
    </script>

    <script src="index.js"></script>
    <style>
        html, body {
            width: 100%;
            height: 100%;
            padding: 0;
            margin: 0;
            overflow: hidden;
        }

        #renderCanvas {
            width: 100%;
            height: 100%;
            touch-action: none;
            -ms-touch-action: none;
        }
    </style>
</head>
<body>
    <canvas id="renderCanvas"></canvas>
</body>
</html>

You can notice that the shaders are defined by <script> tags with non-JavaScript type attributes: the browser does not execute them, and Babylon.js can retrieve their content by id.
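The page also references an index.js file, which is where the engine is created and the shaders are hooked to a BABYLON.ShaderMaterial. A sketch of what it might contain (the element ids match the <script> tags above; treat the exact options as illustrative and check the Babylon.js documentation):

```javascript
// Sketch of index.js: create the engine, a scene and a mesh,
// then attach the page's shaders through a ShaderMaterial.
var canvas = document.getElementById("renderCanvas");
var engine = new BABYLON.Engine(canvas, true);
var scene = new BABYLON.Scene(engine);
var camera = new BABYLON.ArcRotateCamera("camera", 0, Math.PI / 2, 10,
    BABYLON.Vector3.Zero(), scene);
var sphere = BABYLON.Mesh.CreateSphere("sphere", 16, 3, scene);

// The element ids below must match the <script> ids in the page.
var shaderMaterial = new BABYLON.ShaderMaterial("shader", scene, {
    vertexElement: "vertexShaderCode",
    fragmentElement: "fragmentShaderCode"
}, {
    attributes: ["position", "uv"],
    uniforms: ["worldViewProjection"]
});
shaderMaterial.setTexture("textureSampler",
    new BABYLON.Texture("texture.jpg", scene));
sphere.material = shaderMaterial;

engine.runRenderLoop(function () {
    scene.render();
});
```

Notice that you never touch WebGL directly: the engine compiles the shaders, uploads the buffers and keeps the worldViewProjection uniform up to date for you.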

Create wonderful interactive games for the web: Using WebGL and a physics engine (babylon.js & cannon.js)

Did you ever notice how addictive a physics game can be? Actually, I can spend hours playing with small physics simulations just because they look so real.

That’s why I decided to integrate a physics engine into babylon.js.

Can’t wait and want to play right now? Just go there.

State of the art

The purpose was not to create a physics engine from scratch. There are already a few good physics engines written for JavaScript by the community (this is one of the reasons why I love JavaScript: there is always a .js project for whatever you want to do ^^). Here are some examples (I am not talking about 2D engines here, only 3D ones):

  • Cannon.js: A pure JavaScript engine. I decided to use this one because the code is clear, efficient and very well maintained
  • JigLibJS: A JavaScript port of a C/C++ library called JigLib
  • Ammo.js: A JavaScript port of the Bullet engine. The port was made using Emscripten and thus is an automated port.
  • Bullet.js: Another JavaScript port of Bullet

All of these engines are great, but I had to make a choice. So I decided to use a purely JavaScript engine (as opposed to a port of an existing one) because I wanted an engine designed and thought out for JavaScript.

For instance, I did not keep Ammo.js because it was created with Emscripten. Because of how Emscripten converts libraries to JavaScript, there are some things in Ammo.js that are more difficult to use; pointers, for instance, add an extra layer of complication.

Because they are ports and not written in JavaScript, they are not optimized for the web. JavaScript has its own features and quirks which make developing for it unique.

Activating physics with babylon.js

So let’s go back to Babylon.js. From the point of view of a web developer, a physics engine can give you a lot of cool features, for instance:

  • Realistic simulation of basic shapes (Box, Sphere, compound, etc…)
  • Simulation of friction and bounciness
  • Link between objects (to simulate chains for instance)

So let’s have a look at how to activate all these fun things with Babylon.js:

Enabling physics engine

To enable the physics engine, you just have to run this line of code:

scene.enablePhysics();

Please note that the physics simulation can be really expensive in terms of performance.

You can define the gravity of your simulation with the following command:

scene.setGravity(new BABYLON.Vector3(0, -10, 0));

Defining impostors for your meshes

The simulation does not work directly on your meshes (they are far too complex). Instead, you need to create a geometric impostor for them. Right now, only boxes, spheres and planes are supported, but more impostors will be added in the not-too-distant future.

To define an impostor, you just have to call the setPhysicsState function:

sphere.setPhysicsState({ impostor: BABYLON.PhysicsEngine.SphereImpostor, mass: 1 });

The mass parameter can be set to 0 if you want to create a static object (such as a ground for instance).

You can also define the friction (resistance of the object to movement) and the restitution (tendency of the object to bounce after colliding with another):

ground.setPhysicsState({ impostor: BABYLON.PhysicsEngine.BoxImpostor, mass: 0, friction: 0.5, restitution: 0.7 });

The initial position and rotation (using mesh.rotationQuaternion property) of the mesh are used to define the position and rotation of the impostors.

You can also link impostors together in order to keep meshes attached to one another; for instance, you can create chains like this one:

To do so, you just have (as always) one line of code to execute:

spheres[index].setPhysicsLinkWith(spheres[index + 1], new BABYLON.Vector3(0, 0.5, 0), new BABYLON.Vector3(0, -0.5, 0));

Creating compound impostors

If you want to create more complex physics objects you can use the scene.createCompoundImpostor function:

// Compound
var part0 = BABYLON.Mesh.CreateBox("part0", 3, scene);
part0.position = new BABYLON.Vector3(3, 30, 0);

var part1 = BABYLON.Mesh.CreateBox("part1", 3, scene);
part1.parent = part0; // We need a hierarchy for compound objects
part1.position = new BABYLON.Vector3(0, 3, 0);

scene.createCompoundImpostor({
    mass: 2, friction: 0.4, restitution: 0.3, parts: [
    { mesh: part0, impostor: BABYLON.PhysicsEngine.BoxImpostor },
    { mesh: part1, impostor: BABYLON.PhysicsEngine.BoxImpostor }]
});

This will create a unique rigid body based on the hierarchy provided.

Beware: to create a compound impostor, you must create a hierarchy and provide the root as the first object.

Applying an impulse

Once your scene is set up, you can play with your meshes using this code:

var pickResult = scene.pick(evt.clientX, evt.clientY);
var dir = pickResult.pickedPoint.subtract(scene.activeCamera.position);
dir.normalize();
pickResult.pickedMesh.applyImpulse(dir.scale(10), pickResult.pickedPoint);

This will apply an impulse on the selected mesh at a given point (in world space).

Exporting a blender scene with physics enabled

Thanks to the extensibility capabilities of Blender (with Python), I was able to support exporting physics information.

You just have to select a mesh and directly go to the Physics tab:

Blender will not let you define a mass of zero, but do not worry: the exporter treats a value of 0.001 as zero.

Then, you will be able to define mass, shape, friction and restitution (Bounciness):

And you’re done! Simply export your scene to a .babylon file and use it in your own app/site or drag’n’drop it to our sandbox.

The power of the web

I keep being amazed by the power of current browsers. Right now, you can have a complete 3D simulation alongside an accurate physics engine, all with only a few lines of JavaScript code!

You now have everything you need to create wonderful and dynamic games for the web and for Windows 8! So unleash your creativity. Who knows, the next Angry Birds is perhaps only a few lines of code away.