Our home-grown CDN, built on a geo-distributed architecture, allows us to deliver dynamic JavaScript content with the minimum latency possible. We use the same architecture for data acquisition as well. Over the years we’ve made a lot of changes to our backend. This post covers some scaling and reliability aspects, along with our recent work on building a fast and reliable data acquisition system using message queues, which has been in production for about three months now. I’ll start by giving some background on our previous architecture.

Web beacons are widely used for data acquisition: the idea is to have a webpage send us data using an HTTP request, to which the server responds with some valid object. There are many ways to do this. To keep the size of the returned object small, we return a tiny 1x1 pixel GIF image for every HTTP request, and our geo-distributed architecture, along with our managed Anycast DNS service, helps us do this with very low latency; we aim for less than 40ms. When an HTTP request hits one of our data acquisition servers, OpenResty handles it and our Lua-based code processes the request in the same process thread. OpenResty is an nginx distribution which, among many other things, bundles LuaJIT, allowing us to write URL handlers in Lua that run within the web server. Our Lua code does some quick checks and transformations and writes the data to a Redis server, which is used as a fast in-memory data sink. The data stored in Redis is later moved, processed and stored in our database servers.
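On the client side, a beacon like this boils down to requesting a tiny image with the data packed into the query string. A minimal sketch (the endpoint and parameter names below are made up for illustration):

// Illustrative only: fire a beacon by requesting a 1x1 gif with data in the query string.
var beacon = new Image();
beacon.src = 'https://track.example.com/beacon.gif' +
  '?account=ABC123' +
  '&event=pageview' +
  '&ts=' + Date.now();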


Previous Architecture

This was the architecture when I joined Wingify a couple of months ago. Things were going smoothly, but the problem was that we were not quite sure about data accuracy and scalability. We used Redis as a fast in-memory data storage sink, from which our custom-written PHP-based queue infrastructure would read; our backend would process the data and write it to our database servers. The PHP code was not scalable, and after about a week of hacking and exploring options, we found a few bottlenecks and decided to redo the backend queue infrastructure.

We explored many options and decided to use RabbitMQ. We wrote a few proof-of-concept backend programs in Go, Python and PHP and did a lot of testing, benchmarking and real-world load testing.

Ankit, Sparsh and I discussed how we should move forward and finally decided to explore two models in which we would replace the home-grown PHP queue system with RabbitMQ. In the first model, we wrote directly to RabbitMQ from the Lua code. In the second model, we wrote a transport agent which moved data from Redis to RabbitMQ. In both cases, we wrote RabbitMQ consumers.

There was no lua-resty library for RabbitMQ, so I wrote one using the cosocket API which could publish messages to a RabbitMQ broker over the STOMP protocol. The library, lua-resty-rabbitmqstomp, was open-sourced for the hacker community.

Later, I rewrote our Lua handler code using this library and ran a loader.io load test, which failed this model due to very low throughput; we performed the load test on a small 1GB DigitalOcean instance for both models. For us, the STOMP protocol and the slow RabbitMQ STOMP adapter were the performance bottlenecks. RabbitMQ was not as fast as Redis, so we decided to keep Redis in place and work on the second model. For our requirements, we wrote a proof-of-concept Redis-to-RabbitMQ transport agent called agentredrabbit to leverage Redis as a fast in-memory storage sink and RabbitMQ as a reliable broker. The POC worked well in terms of performance, throughput, scalability and failover. In the next few weeks we were able to build a production-grade queue-based pipeline for our data acquisition system.

For about a month, we ran the new pipeline in production alongside the existing one, to A/B test our backend :) To do that, we modified our Lua code to write to two different Redis lists: the original list was consumed by the existing pipeline, the other by the new RabbitMQ-based pipeline. The consumer would process and write data to a new database. This allowed us to compare real-time data from the two pipelines. During this period we tweaked our implementation a lot, rewrote the producers and consumers three times and went through two major phases of refactoring.


A/B testing of existing and new architecture

Based on the results against a 1GB DigitalOcean instance, as with the first model, and on the real-time A/B comparison against the existing pipeline, we migrated to the new RabbitMQ-based pipeline. Other issues of HA, redundancy and failover were also addressed in this migration. The new architecture ensures there is no single point of failure and has mechanisms to recover from failures and faults.


Queue (RabbitMQ) based architecture in production

We’ve open-sourced agentredrabbit, which can be used as a general-purpose, fast and reliable transport agent for moving data in chunks from Redis lists to RabbitMQ, with some assumptions and queue-naming conventions. The flow diagram below hints at how it works; check out the README for details.


Flow diagram of "agentredrabbit"


Discussion on Hacker News


When I got the opportunity to intern with the engineering team at Wingify, I was ecstatic: partly because of the exciting office with its fascinating transparent walls full of geeky stuff, which I had come across on my first visit for an interview, and of course because Wingify is becoming a buzzword in the IT industry.

On my first day I was a bit nervous, dressed and prepared as I believed anyone working from 10:00 am to 7:00 pm would be. When I reached the office, only the office boy was present; honestly speaking, I had a feeling I was in the wrong place, because there was no way a software company should look like that at 10:30 in the morning on a working day. After a while I was surrounded by people in shorts, denims and t-shirts, with smiling faces, having friendly chats.

Working at Wingify gave me an entirely new set of skills, like software development design patterns and maintenance, that are going to be invaluable for my future. My work here mainly included front-end optimization and internationalization:

  • Frontend optimization of the Visual Website Optimizer website: reduced the loading time by 62.16%.
  • Translation of the Visual Website Optimizer website: worked on a template-based engine for translating web pages into different languages.

Working alongside the marketing team added another dimension to my work through that interdependent relationship. I also spent time researching and learning different methods and technologies for various tasks, such as process automation. All these roles and responsibilities taught me to manage my time, to be attentive and organized, and enhanced my problem-solving abilities.

At Wingify you have the solidarity and independence of your own space, and an atmosphere where interns like myself don’t hesitate to ask questions, because they are answered and explained by the highly skilled and dedicated engineering team sitting next to you, which makes it easy to get work done. Awesome appreciation mails boost your confidence. Personally, I couldn’t have imagined a better internship experience.

Interning with Wingify provides you with a wonderful learning experience. In a nutshell, it is a great place to work and party \m/


This post is about making your web page perform better, using a real-world example. As you may know, we recently launched a very cool animated comic on A/B testing: a scroll animation describing what A/B testing is. I’ll use it as the example and walk you through its performance issues, how we debugged them and finally what we did to extract 60 FPS out of it.

The process we follow in the text below applies more or less to all web pages in general. Here’s what you need to get started:

  1. A janky web page.
  2. Google Chrome with its awesome devtools.
  3. Determination to make it run as smooth as a hot knife through butter :)

Worry not if you are missing any of the above; you can still read on. Let us begin.

WHAT is causing the issue?

All we know right now is that our page is janky. When you scroll up or down you’ll notice that the animation is quite choppy. There are occasional sudden jumps while scrolling, which is really irritating and obviously a bad user experience. We don’t know what is causing this. The very first step we take here is to profile the page using the Timeline feature of Chrome devtools. So I went ahead and fired up my devtools.

Open the devtools

Chrome devtools

Devtools in Chrome can be opened either by going to Tools > Developer Tools or by using the shortcut Ctrl + Shift + I on Windows/Linux or Cmd + Opt + I on Mac.

Select frames tab

Frames tab

The Frames tab lets us visualize each frame individually, showing how much time that frame took and on what tasks.

Filter out events taking more than 15ms

Chrome devtools

Note that we are targeting 60 FPS here. A little math gives us the number 16.666 ms (1 / 60 * 1000). This is the time budget each frame has to do its thing if we want a consistent 60 FPS.

Therefore, we essentially want to investigate the frames which cross this time limit. To do so, select the >= 15ms option from the bottom bar as shown.

Record

Chrome devtools

Press the ‘Record’ button at the bottom to have devtools start recording what’s happening on the page. Once you do that, go back to the page and interact with it as one normally would, exposing the issues we are trying to debug.

In my case, the page felt choppy while scrolling between slides, so I simply kept scrolling on the page like a normal user. After interacting with the page for a while, I went back to the devtools window and pressed the same button to stop the recording.

Notice the frames

Chrome devtools

You now see the frame data for your page, something like in the snapshot above. In the image you’ll notice a vertical limit with the label 60 FPS, just below the label for 30 FPS. These are the limits under which frames need to do their work if the respective frame rate is to be achieved. Once you know this, you’ll straight away conclude that almost all of our frames are crossing that limit like hell! This is the point where we have actually visualized and confirmed the issue. Let’s find out the cause.

Script events taking more than 15ms

Chrome devtools

Every frame’s bar is made up of different colour components. In the above snapshot we see only yellow and green ones. A quick look at the colour legend in the bottom bar tells us that yellow is script time and green is painting. A closer analysis tells us that most frames consist mainly of the yellow component, which means that most of each frame’s time is spent executing script.

Moreover, if you hover over any of the small horizontal yellow bars below, as shown in the snapshot above, you’ll also see the exact time our scripts are taking per frame, along with the corresponding event that triggered them. In my case, it’s the scroll event (we expected that, no?). Some of those scroll events are taking up to 27 ms, which is much, much more than our budget of 16 ms per frame.

Issue detected: Scroll event script

After all this analysis using the devtools, we come to the conclusion that the script executed on every scroll event is the cause of the issue. The next step in our debugging process is finding out WHY it is causing it.

WHY is it causing an issue?

Let’s investigate the code

Our code for the callback bound to the Scroll event is as follows:

$(window).scroll(function() {
    var currentScroll = $(this).scrollTop();

    // Set the position to current slide if the user scrolls manually.
    checkpoints.forEach(function(checkpoint, index) {
      if(currentScroll <= checkpoints[index] && currentScroll > checkpoints[index - 1])
        i = index;

      if(currentScroll < checkpoints[1])
        i = 0;

      if(i == checkpoints.length - 1) {
        $("#main_form, .social-icons").css("visibility", "visible");

        $("a#scrollDown").fadeOut();
        $("a#autoscroll").fadeOut();
      }
      else {
        $("#main_form, .social-icons").css("visibility", "hidden");

        $("a#scrollDown").fadeIn();
        $("a#autoscroll").fadeIn();
      }

      if(currentScroll > 0)
        $("a#scrollUp").fadeIn();
      else
        $("a#scrollUp").fadeOut();
    });
});

This callback function will be our target from now on.

The scroll event fires too frequently for time-consuming scripts

The first thing that struck me was that the scroll event is fired very frequently. Every time you scroll on a page, that event fires many times in quick succession. Therefore, any code attached to the scroll event will fire with the same frequency. And if that code is computation-heavy, we are done for!
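To get a feel for how often it fires, a quick throwaway snippet (not from our codebase) pasted into the console can count scroll events per second:

// Throwaway check: count how many scroll events fire each second.
var scrollCount = 0;
$(window).scroll(function () {
  scrollCount++;
});
setInterval(function () {
  console.log(scrollCount + ' scroll events in the last second');
  scrollCount = 0;
}, 1000);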

FIX

To improve the situation here, we have two options:

A. Make the scroll event fire less frequently
B. Optimize the callback’s code to take less execution time

FIX A. Make the Scroll event fire less frequently

I could make the scroll code fire less frequently in our case as it did not have any usability impact. In fact, most of the code that needs to run on the scroll event can run at slightly longer intervals without any loss in user experience.

This was easy to do. Ben Alman has written an awesome jQuery plugin for throttling/debouncing functions, and it’s very easy to use. Simply include the plugin in your page and pass the throttled function to the scroll event like so:

var callback = function () {
  ...
}

$(window).scroll( $.throttle(350, callback));

As you can see in the above code, I have made my callback fire at most once every 350 ms. In other words, there will be an interval of at least 350 ms between two calls to that function. This should keep those adjacent long yellow bars at some distance from each other. We’ll see.

TEST!

We made a small change on our side. But remember, there is no point to it without actually testing the page and confirming a performance boost. So let’s repeat the profiling procedure.

Here is what we got this time:

Result

Seems to have worked quite a bit! We have fewer frames overshooting the 16 ms budget.

FIX B. Optimize the callback’s code to take less execution time

Secondly, it’s also important to optimize the code inside that callback, as that is what is causing frames to go beyond our 16 ms budget.

If you look closely at the callback’s code and have a basic understanding of what not to do with jQuery, you’ll see some horrible things happening there. I’ll not go into much detail on why those things are bad, as our focus in this article is on using the devtools. Let’s list the jQuery menaces we see in it:

  • Cache jQuery objects

In many places, jQuery is used to reference elements by passing their selectors again and again inside the callback. That is BAD. Unless these references are going to change in the future, it’s wise to look them up once and cache them for later use.

Some of the lines where jQuery is being used unnecessarily:

  var currentScroll = $(this).scrollTop(); // this is always window object
  $("#main_form, .social-icons").css("visibility", "visible");

  $("a#scrollDown").fadeOut();
  $("a#autoscroll").fadeOut();
  $("a#scrollUp").fadeIn();
  $("a#scrollUp").fadeOut();
  • Unnecessary animation

Have a look at the following code snippet:

if(i == checkpoints.length - 1) {
  socialIcons.css("visibility", "visible");

  scrollDownBtn.fadeOut();
  scrollAutoBtn.fadeOut();
}
else {
  socialIcons.css("visibility", "hidden");

  scrollDownBtn.fadeIn();
  scrollAutoBtn.fadeIn();
}

The first if checks whether we are on the last iteration of the loop or not. If not, the else part executes, which means that if the loop runs 100 times, the else part executes 99 times. Moreover, if you look carefully at the code in the else block, it keeps fading certain elements in and out on each iteration, even when it has already done the same thing in a previous iteration. Taking into account the heavy cost of animation in jQuery, this is absolutely unnecessary work.

We could simply do that work once and set a flag, which is checked the next time around; we only do the work again if the flag has somehow been unset.

Final code

After the above two fixes, here is how our scroll event callback looks:

function scrollHandling() {
  var currentScroll = $window.scrollTop();

  // Set the position to current slide if the user scrolls manually.
  checkpoints.forEach(function(checkpoint, index) {
    if(currentScroll <= checkpoints[index] && currentScroll > checkpoints[index - 1])
      i = index;

    if(currentScroll < checkpoints[1])
      i = 0;

    if(i == checkpoints.length - 1) {
      socialIcons.css("visibility", "visible");

      scrollDownBtn.fadeOut();
      scrollAutoBtn.fadeOut();
      lastSlideUIapplied = true;
    }
    else if (lastSlideUIapplied) {
      socialIcons.css("visibility", "hidden");

      scrollDownBtn.fadeIn();
      scrollAutoBtn.fadeIn();
      lastSlideUIapplied = false;
    }

    if (currentScroll > 0) {
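      // Fade the scroll-up button in only when it is currently hidden,
      // so we don't restart the fade animation on every call.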
      scrollUpBtn[0].style.display == 'none' && scrollUpBtn.fadeIn();
    }
    else {
      scrollUpBtn.fadeOut();
    }
  });
}

$window.scroll( $.throttle(250, scrollHandling));

TEST AGAIN!

Needless to say, our next step is to test the changes made. Here is what the timeline says now:

Final result

Bingo!

  • We hardly have any frames overshooting the target line of 60 FPS.
  • We get an average execution time of 11.71 ms per frame with a standard deviation of around 4.97 ms.

Going further

We still see paint (green) events causing some frames to overshoot the budget. This happens mainly on slides where large images are animated on the screen. It’s not as if we can scale the images down or stop them from being painted, so a solution to optimize the painting going on here still needs to be figured out. Suggestions?

Last words

As the Chrome folks say: don’t guess it, test it!


In one of our previous posts, we talked about the problems we faced when communicating with frames on a different domain in our application Visual Website Optimizer, and highlighted the possible solutions to each of those problems.

We are proud to announce please.js, a Request/Response based cross-domain communication library. If you’ve ever faced problems in cross-domain frame communication, fear not - just say please!

please.js on Github

What is please.js

please.js is a Request/Response based wrapper around the PostMessage API that makes use of jQuery Promises. Here’s a quick example that reloads an iframe window’s location:

var frameWindow = $('iframe').get(0).contentWindow;

please(frameWindow).call('window.location.reload');

please.js is built on top of jQuery and the jQuery Promise API; jQuery version 1.6 or above is preferred. To make communication between two windows on different domains work, both of them must be injected with the same version of jQuery and please.js.

Currently, please.js is an alpha release (0.1.0). Down the line, we would like to add features like support for communication in Chrome extensions, and improve the documentation to make it easier for all users to get started.

How it works

The underlying concept is simple: two frames need to communicate with each other asynchronously. To access one of the child frames on a page, the parent frame sends a please.Request to the child frame. The Request object is a lot like the request a browser sends to a server: it contains information about what needs to be done in the child frame (call a function, get/set a property or a variable, or access a DOM node using jQuery). The child frame sends a please.Response back to the parent frame with the result of what the parent frame asked for. For a function call request, that is the return value of the function; for a get request, the value of the variable/property is returned.
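As a rough sketch of that flow (getSlideCount here is a hypothetical function defined in the child frame, and handling the result through the returned jQuery promise is an assumption based on the Promise-based design described above):

please(frameWindow).call('getSlideCount').then(function (count) {
  // the child frame ran getSlideCount() and sent its return value back as a please.Response
  console.log('The child frame reports ' + count + ' slides');
});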

Contributing

If you would like to contribute, you can submit an issue on GitHub. It would be great if it were accompanied by a failing test case and/or a pull request.


Recently, we launched our first-ever animated guide to A/B testing, which made it to the top of the HN homepage (Yay!).

In this post, I’ll go through the process of how I created the page using HTML5 and JS. Let’s get started!

Setting up things

I searched for existing parallax-scrolling JS scripts and came across Skrollr.js, which made my work a piece of cake! If you are going to create your own parallax scrolling page, I would recommend using this library. Apart from that, I also used scrollTo.js and mousewheel.js for scroll handling.

Also, I wanted the images used on that page to look sharp on retina screens, so I used a little LESS mixin from RetinaJS to make sure retina screens get the @2x images.

Getting started

After looking at some examples of Skrollr, I was ready to start building the page. The best thing about Skrollr is that it automatically sets things up for you and also handles parallax scrolling on mobile devices.
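Getting it running is essentially a single call once skrollr.js is included on the page; a minimal sketch:

  // Initialize Skrollr once the DOM is ready; it reads the data-* attributes on its own.
  var s = skrollr.init();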

Now, I saved two versions (1x and 2x, for retina) of all the images and searched for a good comic font. Each slide on the page is a mixture of text and image elements. I gave each slide absolute positioning and 100% width and height. Also, each element in the slides is fixed-positioned and made to appear and disappear using the opacity property. Here’s the code for the first slide:

  <!-- Slide 1 -->
  <div class="slide" id="slide1">
    <div class="bob"
      data-0="left: 0%; opacity: 0;"
      data-1000="left: 50%; opacity: 1;"
      data-3600="left: 50%; opacity: 1;"
      data-4800="left: 50%; opacity: 0;">
    </div>

    <div class="text"
      data-1200="opacity: 0; bottom: 0%; margin-bottom: 0"
      data-2400="opacity: 1; bottom: 50%; margin-bottom: -46px"
      data-3600="opacity: 1; bottom: 50%; margin-bottom: -46px; right: 50%"
      data-4800="opacity: 0; bottom: 50%; margin-bottom: -48px; right: 0%">

      Meet <strong>Bob</strong>
    </div>
  </div>

The only thing Skrollr needs is the data-[px] attribute with some CSS properties as its value. Here, Bob starts at 0% left with 0 opacity. As the user scrolls towards 1000px, Bob’s image appears from the left and moves to the center with increasing opacity. That’s how it works: you just need to time your animations in terms of pixels and Skrollr handles the rest. Here, both bob and text are fixed-positioned. To make things responsive, I first positioned everything to the center using this:

  .element {
    width: 100px; height: 100px;
    left: 50%; top: 50%;
    margin-left: -50px; 
    margin-top: -50px;
  }

After this, I adjusted the margins to position each element exactly where I wanted it, so that on any resolution it starts from the center. I did the same for all the elements in each slide. Most of the elements are animated using CSS3 transforms, while others are just faded in and out using the opacity property.

Scroll handling

All this completed 80% of the page. Now, the only thing left was scroll handling. I had to make sure that on each scroll a slide finishes its animation properly and is not left hanging in between. To do this, I created checkpoints of the scroll positions where each slide starts/ends. On each scroll, I increment or decrement a counter based on the scroll direction. Based on that counter’s current value, the page is scrolled to the corresponding position from the checkpoints array, and any other scroll event is ignored for that duration. Here’s the code for this:

  var i = 0;
  var checkpoints = [0, 3600, 6000, 11200, 14800, 17200];
  var timer = [0, 1000, 1000, 1500, 1500, 1500];
  var $htmlAndBody = $('html, body');

  function scrollDown() {
    // `percentage` (maintained elsewhere on the page) reaches 100 once the
    // current slide's animation has finished, so extra scrolls are ignored.
    if(i < checkpoints.length - 1 && percentage == 100) {
      i++;

      $htmlAndBody.scrollTo(0, checkpoints[i], {
        animation: {
          easing: 'linear',
          duration: timer[i]
        }
      });
    }
  }

  function scrollUp() {
    if(i > 0)
      i--;

    $htmlAndBody.scrollTo(0, checkpoints[i], {
      animation: {
        easing: 'linear',
        duration: timer[i]
      }
    });
  }

I also added keyboard navigation and put some arrows on the page for easier navigation. Also, after getting feedback from some non-technical people, I added an auto-play option so that all the lazy people could still watch the whole presentation without moving a finger :P
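The auto-play code itself isn’t shown in this post, but the idea is simply to keep calling the scrollDown() function above until the last checkpoint is reached. A hypothetical sketch (the autoplay name and the extra pause are made up):

  // Hypothetical sketch: advance through the slides automatically by reusing scrollDown().
  function autoplay() {
    if (i >= checkpoints.length - 1) {
      return; // reached the last slide, so stop
    }
    scrollDown();
    // wait for the current slide's animation plus a short pause before moving on
    setTimeout(autoplay, timer[i] + 2000);
  }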

This almost completed the whole page. The last additions were a preloader, which loads the images of the first 5 slides with a progress bar and then loads the rest of the images in the background, and the share buttons with counts, which were retrieved using PHP. If you want, you can take a look at preloader.js to see how I did the preloading.
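The actual implementation lives in preloader.js; a simplified sketch of the idea (the image lists, the .progress-bar selector and the loadRest name are placeholders) would be along these lines:

  // Simplified sketch, not the actual preloader.js: load the first slides' images up front,
  // show progress, then fetch the remaining images in the background.
  var firstSlideImages = ['slide1.png', 'slide2.png', 'slide3.png', 'slide4.png', 'slide5.png'];
  var remainingImages = ['slide6.png', 'slide7.png'];
  var loaded = 0;

  firstSlideImages.forEach(function (src) {
    var img = new Image();
    img.onload = function () {
      loaded++;
      $('.progress-bar').css('width', Math.round(loaded / firstSlideImages.length * 100) + '%');
      if (loaded === firstSlideImages.length) {
        loadRest(); // first slides are ready; fetch the rest lazily
      }
    };
    img.src = src;
  });

  function loadRest() {
    remainingImages.forEach(function (src) {
      new Image().src = src; // the browser caches these for the later slides
    });
  }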

I hope this covered everything but if you get stuck anywhere, then feel free to add your comments! :)