Use the power of Azure to create your own raytracer

The power available in the cloud is growing every day. So I decided to use this raw CPU power to write a small raytracer.

I’m certainly not the first one to have had this idea: Pixar and GreenButton, for example, already use Azure to render pictures.

In this article, we will see how to write our own rendering system using Azure, so that you can produce your own 3D rendered movie.

The article is organized around the following points:

  1. Prerequisites
  2. Architecture
  3. Deploying to Azure
  4. Defining a scene
  5. Web server and worker roles
  6. The raytracer
  7. The client
  8. Conclusion
  9. To go further

The final solution can be downloaded here, and if you want to see the final result, go here: https://azureraytracer.cloudapp.net/

image94

You can use a default scene or create your own scene definition (we will see later how to do that).

The rendered pictures are limited to a 512×512 resolution (you can of course change this setting).

Prerequisites

To be able to use the project, you must have Visual Studio with the Windows Azure SDK installed (used to build, debug and deploy the roles).

You will also need an Azure account. You can get a free one just there: https://www.windowsazure.com/en-us/pricing/free-trial/

Architecture

Our architecture can be defined using the following schema:

image_thumb49

The client connects to a web server composed of one or more web roles (in my case, there are 2 web roles). The web roles provide the web pages and a web service used to get the status of a request. When a user wants to render a picture, the associated web role writes a render message into an Azure queue. A farm of worker roles reads the same queue and processes any incoming render message. Azure queues are transactional and atomic, so only one worker role will grab the order: the first available worker reads and removes the message. Because queues are transactional, if a worker role crashes, the render message is put back in the queue so your work is not lost.

In our sample, I decided to use a semaphore to limit the maximum number of requests executed concurrently. Indeed, I prefer not to overload my workers, so that each render task gets the maximum CPU power.
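As an illustration, here is a minimal sketch of how such a throttle can be declared in the worker role (the limit of 2 concurrent renders is just an example value, not the project's actual setting):

    // Allow at most MaxConcurrentRenders pictures to be rendered at once on this instance.
    const int MaxConcurrentRenders = 2; // example value
    static readonly Semaphore semaphore = new Semaphore(MaxConcurrentRenders, MaxConcurrentRenders);

    // The worker loop (shown later) calls semaphore.WaitOne() before dequeuing a message
    // and semaphore.Release() once the corresponding render has finished.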

Deploying to Azure

After opening the solution, you will be able to launch it directly from Visual Studio inside the Azure Emulator. You will thus be able to debug and fine-tune your code before sending it to the production stage.

Once you’re ready, you can deploy your package on your Azure account using the following procedure:

  • Open the “AzureRaytracer.sln” solution inside Visual Studio
  • Configure your Azure account: to do so, right-click on the “AzureRaytracer” project and choose the “Publish” menu. You will get the following screen:

image_thumb11

  • Using this screen, choose the “Sign in to download credentials” option, which will let you download an automatic configuration file for your Azure account:

image_thumb13

  • Once the file is downloaded, we will import it inside the publish wizard:

image_thumb15

  • After importing the information, Visual Studio will ask you to give a name for the service:

image_thumb50

  • The next screen will present a summary of all selected options:

image_thumb19

  • Before publishing, we must change some parameters to prepare our package for the production stage. First of all, we have to go to the Azure portal: https://windows.azure.com. Go to the storage accounts tab to grab the required information:

image_thumb22

  • On the right pane, you can get the primary access key:

image_thumb51

  • With this information, you can go to your project:

image_thumb26

  • On every role, you have to go to the settings menu in order to define the Azure connection string, using the information grabbed from the Azure portal (a short sketch of how the roles can read this setting at runtime follows this list):

image_thumb29

  • You must change the “AzureStorage” value using the “…” button:

image_thumb31

  • In the Configuration tab, you can change the instance count for each role:

image_thumb33

image_thumb35

image_thumb37
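For reference, here is a minimal sketch of how a role can read this “AzureStorage” connection string at runtime with the Azure SDK storage client (the project's actual helper may differ slightly):

    // Read the "AzureStorage" setting defined above and build the storage clients.
    var account = CloudStorageAccount.Parse(
        RoleEnvironment.GetConfigurationSettingValue("AzureStorage"));

    var blobClient = account.CreateCloudBlobClient();    // used for scene, progress and picture blobs
    var queueClient = account.CreateCloudQueueClient();  // used for the render queue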

Your raytracer is now ONLINE!!! We will now see how to use it.

Defining a scene

To define a scene, you have to specify it using an XML file. Here is a sample scene:

    <?xml version="1.0" encoding="utf-8" ?>
    <scene FogStart="5" FogEnd="20" FogColor="0, 0, 0" ClearColor="0, 0, 0" AmbientColor="0.1, 0.1, 0.1">
      <objects>
        <sphere Name="Red Sphere" Center="0, 1, 0" Radius="1">
          <defaultShader Diffuse="1, 0, 0" Specular="1, 1, 1" ReflectionLevel="0.6"/>
        </sphere>
        <sphere Name="Transparent Sphere" Center="-3, 0.5, 1.5" Radius="0.5">
          <defaultShader Diffuse="0, 0, 1" Specular="1, 1, 1" OpacityLevel="0.4" RefractionIndex="2.8"/>
        </sphere>
        <sphere Name="Green Sphere" Center="-3, 2, 4" Radius="1">
          <defaultShader Diffuse="0, 1, 0" Specular="1, 1, 1" ReflectionLevel="0.6" SpecularPower="10"/>
        </sphere>
        <sphere Name="Yellow Sphere" Center="-0.5, 0.3, -2" Radius="0.3">
          <defaultShader Diffuse="1, 1, 0" Specular="1, 1, 1" Emissive="0.3, 0.3, 0.3" ReflectionLevel="0.6"/>
        </sphere>
        <sphere Name="Orange Sphere" Center="1.5, 2, -1" Radius="0.5">
          <defaultShader Diffuse="1, 0.5, 0" Specular="1, 1, 1" ReflectionLevel="0.6"/>
        </sphere>
        <sphere Name="Gray Sphere" Center="-2, 0.2, -0.5" Radius="0.2">
          <defaultShader Diffuse="0.5, 0.5, 0.5" Specular="1, 1, 1" ReflectionLevel="0.6" SpecularPower="1"/>
        </sphere>
        <ground Name="Plane" Normal="0, 1, 0" Offset="">
          <checkerBoard WhiteDiffuse="1, 1, 1" BlackDiffuse="0.1, 0.1, 0.1" WhiteReflectionLevel="0.1" BlackReflectionLevel="0.5"/>
        </ground>
      </objects>
      <lights>
        <light Position="-2, 2.5, -1" Color="1, 1, 1"/>
        <light Position="1.5, 2.5, 1.5" Color="0, 0, 1"/>
      </lights>
      <camera Position="0, 2, -6" Target="-0.5, 0.5, 0" />
    </scene>

The file structure is the following:

  • A [scene] tag is used as the root tag and allows you to define the following parameters:
    • FogStart / FogEnd : Define the range of the fog from the camera
    • FogColor : RGB color of the fog
    • ClearColor : Background RGB color
    • AmbientColor : Ambient RGB color
  • An [objects] tag which contains the list of objects
  • A [lights] tag which contains the list of lights
  • A [camera] tag which defines the scene camera. It is our point of view, defined by the following parameters:
    • Position : Camera position (X, Y, Z)
    • Target : Camera target (X, Y, Z)

All objects are defined by a name and can be of one of the following types:

  • sphere : Sphere defined by its center and radius
  • ground : Plane representing the ground, defined by its offset from 0 and the direction of its normal
  • mesh : Complex object defined by a list of vertices and faces. It can be manipulated with 3 vectors: Position, Rotation and Scaling:

    <mesh Name="Box" Position="-3, 0, 2" Rotation="0, 0.7, 0">
      <vertices count="24">-1, -1, -1, -1, 0, 0,-1, -1, 1, -1, 0, 0,-1, 1, 1, -1, 0, 0,-1, 1, -1, -1, 0, 0,-1, 1, -1, 0, 1, 0,-1, 1, 1, 0, 1, 0,1, 1, 1, 0, 1, 0,1, 1, -1, 0, 1, 0,1, 1, -1, 1, 0, 0,1, 1, 1, 1, 0, 0,1, -1, 1, 1, 0, 0,1, -1, -1, 1, 0, 0,-1, -1, 1, 0, -1, 0,-1, -1, -1, 0, -1, 0,1, -1, -1, 0, -1, 0,1, -1, 1, 0, -1, 0,-1, -1, 1, 0, 0, 1,1, -1, 1, 0, 0, 1,1, 1, 1, 0, 0, 1,-1, 1, 1, 0, 0, 1,-1, -1, -1, 0, 0, -1,-1, 1, -1, 0, 0, -1,1, 1, -1, 0, 0, -1,1, -1, -1, 0, 0, -1,</vertices>
      <indices count="36">0,1,2,2,3,0,4,5,6,6,7,4,8,9,10,10,11,8,12,13,14,14,15,12,16,17,18,18,19,16,20,21,22,22,23,20,</indices>
    </mesh>

Faces are indices into the vertex list. A face contains 3 vertices, and each vertex is defined by two vectors: position (X, Y, Z) and normal (Nx, Ny, Nz).
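To make the layout concrete, here is a hypothetical parsing sketch for such a node (Vertex and Vector3 stand in for the project's own types; this helper is not part of the article's code):

    // Hypothetical helper: reads the flattened <vertices> / <indices> content shown above.
    // Each vertex is 6 floats (x, y, z, nx, ny, nz); each face is 3 indices into the vertex list.
    static void ParseMesh(XmlNode meshNode, out List<Vertex> vertices, out List<int> indices)
    {
        double[] values = meshNode.SelectSingleNode("vertices").InnerText
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(s => double.Parse(s, CultureInfo.InvariantCulture))
            .ToArray();

        vertices = new List<Vertex>();
        for (int i = 0; i < values.Length; i += 6)
        {
            vertices.Add(new Vertex
            {
                Position = new Vector3(values[i], values[i + 1], values[i + 2]),
                Normal = new Vector3(values[i + 3], values[i + 4], values[i + 5])
            });
        }

        indices = meshNode.SelectSingleNode("indices").InnerText
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(int.Parse)
            .ToList();
    }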

Objects can have a child node used to define the applied material:

  • defaultShader : Default material defined by:
    • Diffuse : Base RGB color
    • Ambient : Ambient RGB color
    • Specular : Specular RGB color
    • Emissive : Emissive RGB color
    • SpecularPower : Sharpness of the specular highlight
    • RefractionIndex : Refraction index (you must also define OpacityLevel to use it)
    • OpacityLevel : Opacity level (you must also define RefractionIndex to use it)
    • ReflectionLevel : Reflection level (0 = no reflection)
  • checkerBoard : Material defining a checkerboard with the following properties:
    • WhiteDiffuse : “White” square diffuse color
    • WhiteAmbient : “White” square ambient color
    • WhiteReflectionLevel : “White” square reflection level
    • BlackDiffuse : “Black” square diffuse color
    • BlackAmbient : “Black” square ambient color
    • BlackReflectionLevel : “Black” square reflection level

Lights are defined via the [light] tag, which can have Position and Color attributes. Lights are omnidirectional.

Finally, if we use this scene file:

    <?xml version="1.0" encoding="utf-8" ?>
    <scene FogStart="5" FogEnd="20" FogColor="0, 0, 0" ClearColor="0, 0, 0" AmbientColor="1, 1, 1">
      <objects>
        <ground Name="Plane" Normal="0, 1, 0" Offset="">
          <defaultShader Diffuse="0.4, 0.4, 0.4" Specular="1, 1, 1" ReflectionLevel="0.3" Ambient="0.5, 0.5, 0.5"/>
        </ground>
        <sphere Name="Sphere" Center="-0.5, 1.5, 0" Radius="1">
          <defaultShader Diffuse="0, 0, 1" Specular="1, 1, 1" ReflectionLevel="" Ambient="1, 1, 1"/>
        </sphere>
      </objects>
      <lights>
        <light Position="-0.5, 2.5, -2" Color="1, 1, 1"/>
      </lights>
      <camera Position="0, 2, -6" Target="-0.5, 0.5, 0" />
    </scene>




We will obtain the following picture:


Web server and worker roles

The web server runs under ASP.NET and provides two functionalities:

  • Connecting to the worker roles through the queue in order to launch a rendering:





    void Render(string scene)
    {
        try
        {
            InitializeStorage();
            var guid = Guid.NewGuid();

            CloudBlob blob = Container.GetBlobReference(guid + ".xml");
            blob.UploadText(scene);

            blob = Container.GetBlobReference(guid + ".progress");
            blob.UploadText("-1");

            var message = new CloudQueueMessage(guid.ToString());
            queue.AddMessage(message);

            guidField.Value = guid.ToString();
        }
        catch (Exception ex)
        {
            System.Diagnostics.Trace.WriteLine(ex.ToString());
        }
    }




As you can see, the web server generates a GUID for each request to identify the rendering job. The scene description (the XML file) is then copied to a blob (named after the GUID) so that the worker roles can access it. Finally, a message is sent to the queue and a blob is created to report the progress of the request.
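The InitializeStorage method itself is not listed in the article; a plausible sketch (the container and queue names below are purely illustrative) could be:

    // Hypothetical InitializeStorage: creates the blob container and the render queue if needed.
    void InitializeStorage()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("AzureStorage"));

        Container = account.CreateCloudBlobClient().GetContainerReference("scenes");  // illustrative name
        Container.CreateIfNotExist();

        queue = account.CreateCloudQueueClient().GetQueueReference("renderqueue");    // illustrative name
        queue.CreateIfNotExist();
    }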

  • Publishing a web service to expose the progress of requests:





    [OperationContract]
    [WebGet]
    public string GetProgress(string guid)
    {
        try
        {
            CloudBlob blob = _Default.Container.GetBlobReference(guid + ".progress");
            string result = blob.DownloadText();

            if (result == "101")
                blob.Delete();

            return result;
        }
        catch (Exception ex)
        {
            return ex.Message;
        }
    }




The web service gets the content of the blob and returns the result. If the request is queued, the value will be -1, and if the request is finished, the value will be 101 (in which case the blob is deleted).
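The JavaScript client shown later also calls a GetImageUrl operation to retrieve the final picture. That operation is not listed here, but a minimal sketch, assuming the rendered blob is publicly readable, could look like this:

    // Hypothetical GetImageUrl: returns the URI of the rendered PNG blob.
    [OperationContract]
    [WebGet]
    public string GetImageUrl(string guid)
    {
        CloudBlob blob = _Default.Container.GetBlobReference(guid + ".png");
        return blob.Uri.ToString();
    }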

The worker roles will read the content of the queue and when a message is available, a worker will get it and will handle it:






    while (true)
    {
        CloudQueueMessage msg = null;
        semaphore.WaitOne();
        try
        {
            msg = queue.GetMessage();
            if (msg != null)
            {
                queue.DeleteMessage(msg);
                string guid = msg.AsString;
                CloudBlob blob = container.GetBlobReference(guid + ".xml");
                string xml = blob.DownloadText();

                CloudBlob blobProgress = container.GetBlobReference(guid + ".progress");
                blobProgress.UploadText("0");

                WorkingUnit unit = new WorkingUnit();

                unit.OnFinished += () =>
                                       {
                                           blob.Delete();
                                           unit.Dispose();
                                           semaphore.Release();
                                       };

                unit.Launch(guid, xml, container);
            }
            else
            {
                semaphore.Release();
            }
            Thread.Sleep(1000);
        }
        catch (Exception ex)
        {
            semaphore.Release();
            if (msg != null)
            {
                CloudQueueMessage newMessage = new CloudQueueMessage(msg.AsString);
                queue.AddMessage(newMessage);
            }
            Trace.WriteLine(ex.ToString());
        }
    }




Once the scene is loaded, the worker updates the progress state (using the associated blob) and creates a WorkingUnit, which is in charge of producing the picture asynchronously. The WorkingUnit raises an OnFinished event when the render is done, so that all associated resources can be cleaned up and disposed.

We can also see here the usage of the semaphore in order to limit the number of concurrent renders.
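Before looking at its Launch method, here is a minimal sketch of the WorkingUnit surface the loop above relies on (this shape is deduced from its usage; only Launch is listed in the article):

    // Assumed shape of WorkingUnit, deduced from the worker loop above.
    public class WorkingUnit : IDisposable
    {
        public event Action OnFinished;   // raised once the picture has been uploaded

        public void Launch(string guid, string xml, CloudBlobContainer container) { /* see below */ }

        public void Dispose() { /* release the bitmap and other render resources */ }
    }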

The WorkingUnit is mainly defined like this:






    public void Launch(string guid, string xml, CloudBlobContainer container)
    {
        try
        {
            XmlDocument xmlDocument = new XmlDocument();
            xmlDocument.LoadXml(xml);
            XmlNode sceneNode = xmlDocument.SelectSingleNode("/scene");

            Scene scene = new Scene();
            scene.Load(sceneNode);

            ParallelRayTracer renderer = new ParallelRayTracer();

            resultBitmap = new Bitmap(RenderWidth, RenderHeight, PixelFormat.Format32bppRgb);

            bitmapData = resultBitmap.LockBits(new Rectangle(0, 0, RenderWidth, RenderHeight), ImageLockMode.WriteOnly, PixelFormat.Format32bppRgb);
            int bytes = Math.Abs(bitmapData.Stride) * bitmapData.Height;
            byte[] rgbValues = new byte[bytes];
            IntPtr ptr = bitmapData.Scan0;

            renderer.OnAfterRender += (obj, evt) =>
                                          {
                                              System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes);

                                              resultBitmap.UnlockBits(bitmapData);
                                              using (MemoryStream ms = new MemoryStream())
                                              {
                                                  resultBitmap.Save(ms, ImageFormat.Png);
                                                  ms.Position = 0;
                                                  CloudBlob finalBlob = container.GetBlobReference(guid + ".png");
                                                  finalBlob.UploadFromStream(ms);
                                                  CloudBlob blob = container.GetBlobReference(guid + ".progress");
                                                  blob.UploadText("101");
                                              }
                                              OnFinished();
                                          };

            int previousPercentage = -10;
            renderer.OnLineRendered += (obj, evt) =>
                                           {
                                               if (evt.Percentage - previousPercentage < 10)
                                                   return;
                                               previousPercentage = evt.Percentage;
                                               CloudBlob blob = container.GetBlobReference(guid + ".progress");
                                               blob.UploadText(evt.Percentage.ToString());
                                           };

            renderer.Render(scene, RenderWidth, RenderHeight, (x, y, color) =>
            {
                var offset = x * 4 + y * bitmapData.Stride;
                rgbValues[offset] = (byte)(color.B * 255);
                rgbValues[offset + 1] = (byte)(color.G * 255);
                rgbValues[offset + 2] = (byte)(color.R * 255);
            });
        }
        catch (Exception ex)
        {
            CloudBlob blob = container.GetBlobReference(guid + ".progress");
            blob.DeleteIfExists();
            blob = container.GetBlobReference(guid + ".png");
            blob.DeleteIfExists();
            Trace.WriteLine(ex.ToString());
        }
    }




The WorkingUnit works according to the following algorithm:

  • Loading the scene
  • Creating the raytracer
  • Creating the picture and accessing its byte array
  • When the picture is rendered, saving it in a blob and updating the job progress state
  • Launching the render

The raytracer

The raytracer is entirely written in C# 4.0 and uses the TPL (Task Parallel Library) to enable parallel code execution.

The following functionalities are supported (but as Yoda said “Obvious is the code”, so do not hesitate to browse the code):

  • Fog
  • Diffuse
  • Ambient
  • Transparency
  • Reflection
  • Refraction
  • Shadows
  • Complex objects
  • Unlimited light sources
  • Antialiasing
  • Parallel rendering
  • Octrees

The interesting point with a raytracer is that it is a massively parallelizable process. Indeed, a raytracer will execute strictly the same code for each pixel of the screen.

So the central point of the raytracer is:






    Parallel.For(0, RenderHeight, y => ProcessLine(scene, y));




So for each line, we will execute the following method in parallel on all CPU cores of the computer:






    void ProcessLine(Scene scene, int line)
    {
        for (int x = 0; x < RenderWidth; x++)
        {
            if (!renderInProgress)
                return;
            RGBColor color = RGBColor.Black;

            if (SuperSamplingLevel == 0)
            {
                color = TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x, line, scene.Camera) }, scene, 0);
            }
            else
            {
                int count = 0;
                double size = 0.4 / SuperSamplingLevel;

                for (int sampleX = -SuperSamplingLevel; sampleX <= SuperSamplingLevel; sampleX += 2)
                {
                    for (int sampleY = -SuperSamplingLevel; sampleY <= SuperSamplingLevel; sampleY += 2)
                    {
                        color += TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x + sampleX * size, line + sampleY * size, scene.Camera) }, scene, 0);
                        count++;
                    }
                }

                if (SuperSamplingLevel == 1)
                {
                    color += TraceRay(new Ray { Start = scene.Camera.Position, Direction = GetPoint(x, line, scene.Camera) }, scene, 0);
                    count++;
                }

                color = color / count;
            }

            color.Clamp();

            storePixel(x, line, color);
        }

        // Report progress
        lock (this)
        {
            linesProcessed++;
            if (OnLineRendered != null)
                OnLineRendered(this, new LineRenderedEventArgs { Percentage = (linesProcessed * 100) / RenderHeight, LineRendered = line });
        }
    }




The main part is the TraceRay method which will cast a ray for each pixel of a line:






    private RGBColor TraceRay(Ray ray, Scene scene, int depth, SceneObject excluded = null)
    {
        List<Intersection> intersections;

        if (excluded == null)
            intersections = IntersectionsOrdered(ray, scene).ToList();
        else
            intersections = IntersectionsOrdered(ray, scene).Where(intersection => intersection.Object != excluded).ToList();

        return intersections.Count == 0 ? scene.ClearColor : ComputeShading(intersections, scene, depth);
    }




If the ray intersects no object, the background color (ClearColor) is returned. Otherwise, we have to evaluate the color of the intersected object:






    private RGBColor ComputeShading(List<Intersection> intersections, Scene scene, int depth)
    {
        Intersection intersection = intersections[0];
        intersections.RemoveAt(0);

        var direction = intersection.Ray.Direction;
        var position = intersection.Position;
        var normal = intersection.Normal;
        var reflectionDirection = direction - 2 * Vector3.Dot(normal, direction) * normal;

        RGBColor result = GetBaseColor(intersection.Object, position, normal, reflectionDirection, scene, depth);

        // Opacity
        if (IsOpacityEnabled && intersections.Count > 0)
        {
            double opacity = intersection.Object.Shader.GetOpacityLevelAt(position);
            double refractionIndex = intersection.Object.Shader.GetRefractionIndexAt(position);

            if (opacity < 1.0)
            {
                if (refractionIndex == 1 || !IsRefractionEnabled)
                    result = result * opacity + ComputeShading(intersections, scene, depth) * (1.0 - opacity);
                else
                {
                    // Refraction
                    result = result * opacity + GetRefractionColor(position, Utilities.Refract(direction, normal, refractionIndex), scene, depth, intersection.Object) * (1.0 - opacity);
                }
            }
        }

        if (!IsFogEnabled)
            return result;

        // Fog
        double distance = (scene.Camera.Position - position).Length;

        if (distance < scene.FogStart)
            return result;

        if (distance > scene.FogEnd)
            return scene.FogColor;

        double fogLevel = (distance - scene.FogStart) / (scene.FogEnd - scene.FogStart);

        return result * (1.0 - fogLevel) + scene.FogColor * fogLevel;
    }




The ComputeShading method computes the base color of the object (taking into account all light sources). If the object is transparent or uses refraction or reflection, a new ray must be cast to compute the induced color.

At the end, the fog is added and the final color is returned.
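For illustration, here is a simplified sketch of what the per-light part of the base color computation can look like (classic Lambert diffuse plus Phong specular; the helper names are assumed and this is not the article's exact GetBaseColor, which also handles shadows and reflection):

    // Simplified diffuse + specular shading accumulated over all lights (illustrative only;
    // Vector3.Normalize, GetDiffuseAt, GetSpecularAt and SpecularPower are assumed helpers).
    RGBColor ComputeLighting(SceneObject obj, Vector3 position, Vector3 normal,
                             Vector3 reflectionDirection, Scene scene)
    {
        RGBColor result = RGBColor.Black;

        foreach (Light light in scene.Lights)
        {
            Vector3 toLight = Vector3.Normalize(light.Position - position);

            // Lambert term: how directly the surface faces the light.
            double diffuse = Math.Max(0, Vector3.Dot(normal, toLight));

            // Phong term: how close the reflected direction is to the light direction.
            double specular = Math.Pow(Math.Max(0, Vector3.Dot(reflectionDirection, toLight)),
                                       obj.Shader.SpecularPower);

            result += light.Color * (obj.Shader.GetDiffuseAt(position) * diffuse)
                    + light.Color * (obj.Shader.GetSpecularAt(position) * specular);
        }

        return result;
    }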

As you can see, computing each pixel is really resource-intensive, so having huge raw power available can drastically improve the rendering speed.

The client

The front-end client is written in HTML with a small amount of JavaScript to make it a bit more dynamic:






    var checkState = function () {
        $.getJSON("RenderStatusService.svc/GetProgress", { guid: guid, noCache: Math.random() }, function (result) {
            var percentage = result.d;
            var percentageAsNumber = parseInt(percentage);

            if (percentage == "-1") {
                $("#progressMessage").text("Request queued");
                setTimeout(checkState, 1000);
                return;
            }

            if (isNaN(percentageAsNumber)) {
                window.localStorage.removeItem("currentGuid");
                restartUI();
                return;
            }

            if (percentageAsNumber != 101) {
                $("#progressBar").progressbar({ value: percentageAsNumber });
                $("#progressMessage").text("Rendering in progress..." + result.d + "%");
                setTimeout(checkState, 1000);
            }
            else {
                $("#renderInProgressDiv").slideUp("fast");
                $("#final").slideDown("fast");
                $("#imageLoadingMessage").slideDown("fast");
                $.getJSON("RenderStatusService.svc/GetImageUrl", { guid: guid, noCache: Math.random() }, function (url) {
                    finalImage.src = url.d;
                    document.getElementById("imageHref").href = url.d;
                });
                window.localStorage.removeItem("currentGuid");
            }
        });
    };




If the web service returns -1, the request is queued. If the returned value is between 0 and 100, we update the progress bar, and if the value is 101, we can get and display the rendered picture.

Conclusion

As we can see, Azure gives us all the required tools to develop and debug for the cloud.

I sincerely invite you to install the SDK and develop your own raytracer!

To go further

Some useful links: