Introduction:

At Wingify, we follow a microservices-based architecture to leverage its scalability benefits. We have a lot of microservices along with a complex networking setup among them. Currently, all the services are deployed on virtual machines in the cloud. We wanted to improve this architecture and adopt the latest technologies available, so we are moving towards the Docker and Kubernetes world!

Why Docker and Kubernetes?

The problems we are facing with the existing infrastructure:

  • Standardization and consistency
    • There is always the issue of keeping a consistent/standard environment between production and development.
    • Much of our time goes into creating a production-like environment during development to roll out bug fixes or build new features.
    • With the new architecture, we are now better equipped to efficiently analyze and fix bugs within the application. It has drastically reduced the time wasted on “local environment issues” and, in turn, increased the time available for fixing actual issues and developing new features.
    • Docker provides a repeatable, production-like development environment and eliminates the “it works on my machine” problem once and for all.
  • Local development
    • It’s not easy to develop and debug a service locally and connect it to the rest of the services running in the local environment.
    • Constantly redeploying to the local environment to test changes is time-consuming.
  • Auto scaling
    • The load on the services is never constant.
    • Keeping the services scaled up for the whole year just to handle the peak load that comes on a few days of the festive season is a waste of resources.
    • Regularly benchmarking the load to scale the services over time is not an optimal approach either.
  • Auto service restarts
    • If a service hangs or terminates due to a memory leak, resource-polling deadlocks, file descriptor issues or anything else, how is it going to restart automatically?
    • There are different tools available for various languages, but setting them up for each service on every server is not ideal.
  • Load balancing
    • Adding and maintaining an extra entry point like nginx just to provide load balancing is an overhead.

We are trying to tackle all these problems in an automated and easy way using Docker, Kubernetes and a few open-source tools.

Our Journey

We started out from scratch: read a lot of articles, documentation and tutorials, and went through some existing testing- and production-level open-source projects. Some of them solved a few of our problems, for some we found our own way, and the rest are yet to be solved!

Below is a brief overview of the approaches we found to solve many of our problems, the final approach we took, and a comparison between them:

Common repository approach

Every dockerized service starts with a Dockerfile. But the initial question is where to put them, since across all the services there will be a lot of Dockerfiles.

There are two ways to put them:

  1. Each service contains its own Dockerfile
    • Every repository has a separate Dockerfile specific to that service.
  2. A common repository for all Dockerfiles
    • The Dockerfiles of every service are added to a common repository.

Below is the comparison among them:

| # | Common Repository | Separate Repositories |
|---|-------------------|-----------------------|
| 1 | Needs a proper structure to distinguish Dockerfiles | Separation of concerns |
| 2 | Common linters and formatters | Each repo has to add the same linter and formatter repeatedly |
| 3 | Common githooks to regulate commit messages, pre-commit, pre-push, etc. tasks | The same githooks duplicated in every service |
| 4 | Can contain reusable Docker base files | No central place to put reusable Dockerfiles |
| 5 | A central place for DevOps to manage the permissions of all Dockerfiles | Very difficult for DevOps to manage Dockerfiles individually |

You may be thinking about the ease of local development using volumes in the separate-repository approach. We will get back to that later and show how easy it is in the common-repository approach as well.

So, the common repository approach is a clear winner. But what about its folder structure? We gave it plenty of thought, and this is the Docker repository folder structure we settled on (an illustrative sketch follows the list below):

The folder structure is broadly categorized into 2 parts:

  • Services directory:
    • It contains a directory for each service, each having its own ‘Dockerfile’ and ‘.dockerignore’ files.
    • Internally, these inherit from the base images.
  • Reusable base images directory:
    • It contains all the reusable Dockerfiles, categorized broadly according to their respective languages like Node.js, PHP, etc.
    • Dockerfiles containing only the language runtime are placed in the ‘base’ folder.
    • All the extensions, plugins, tools, etc. built on top of the above base images are placed in the same language directory, like ‘thrift’ for Node.js.
    • Versions are important, as multiple services may use different versions of the same plugin. For example, one service may require MySQL 5.6 and another may require 5.7. So, each directory is further nested on the basis of versions.
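
For illustration, a hypothetical sketch of such a layout (directory and version names are placeholders, not our exact tree):

docker-repo/
├── services/
│   ├── my-service/
│   │   ├── Dockerfile
│   │   └── .dockerignore
│   └── ...
└── base-images/
    ├── node/
    │   ├── base/
    │   │   └── 9.11.2/Dockerfile
    │   └── thrift/
    │       └── 0.10.0/Dockerfile
    └── php/
        └── ...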

Using this folder structure has multiple advantages:

  • All the services and reusable base dockerfiles are segregated.
  • It becomes very clear which Dockerfile is for which service, language or plugin.
  • Multiple versions can be easily served.

Next, we will discuss the reusable base images concept.

Dockerfile Linter

There are many open-source linters available for Dockerfiles. We found that hadolint meets most of the standards that Docker recommends. So, to lint all the files we just have to issue a simple command, which can be easily integrated into the githooks.

hadolint **/*Dockerfile

Dockerfile Formatter

We searched for and tried multiple formatters, but none of them worked as per our requirements. We found that dockfmt came close, but it has some issues, like removing all the comments from the Dockerfile. So, we are yet to find a better formatter.

Reusable Docker base images

It’s very common for a lot of services to need the same OS, tools, libraries, etc. For example, all the Node.js services may need the Debian stretch OS with a particular version of Node.js and Yarn installed. So, instead of adding them to every such Dockerfile, we can create some reusable, pluggable Docker base images.

Below is the example of a Node.js service which requires:

  • Debian stretch OS
  • Node.js version 9.11.2 + Yarn
  • Apache thrift version 0.10.0

Node.js base image:

FROM debian:stretch-slim

# Install Node 9.11.x
# Defining buildDeps as an argument in alphabetical order for better readability and to avoid duplication.
ARG buildDeps=" \
  curl \
  g++ \
  make"

# It causes a pipeline to produce a failure return code if any command results in an error.
SHELL ["/bin/bash", "-o", "pipefail", "-c"] 
# hadolint ignore=DL3008,DL3015
RUN apt-get update && apt-get install -y --no-install-recommends $buildDeps \
  # Use --no-install-recommends to avoid installing packages that aren't technically dependencies but are recommended to be installed alongside packages.
  && curl -sL https://deb.nodesource.com/setup_9.x | bash - && apt-get install -y nodejs=9.11.* \
  && npm i -g [email protected] \
  && apt-get clean \ 
  # Remove apt-cache to make the image smaller.
  && rm -rf /var/lib/apt/lists/* 

Let’s say we build this image with the name ‘wingify-node-9.11.2:1.0.5’, where ‘wingify-node-9.11.2’ represents the Docker image name and ‘1.0.5’ is the image tag.

Apache thrift base image:

# Default base image
ARG BASE=wingify-node-9.11.2:1.0.5

# hadolint ignore=DL3006
FROM ${BASE}

# Declaring argument to be used in dockerfile to make it reusable.
ARG THRIFT_VERSION=0.10.0 

# Referred from https://github.com/ahawkins/docker-thrift/blob/master/0.10/Dockerfile
# hadolint ignore=DL3008,DL3015
RUN apt-get update \
    && curl -sSL "http://apache.mirrors.spacedump.net/thrift/$THRIFT_VERSION/thrift-$THRIFT_VERSION.tar.gz" -o thrift.tar.gz \
    && mkdir -p /usr/src/thrift \
    && tar zxf thrift.tar.gz -C /usr/src/thrift --strip-components=1 \
    && rm thrift.tar.gz \
    # Clean the apt cache.
    && apt-get clean \
    # Remove apt cache to make the image smaller.
    && rm -rf /var/lib/apt/lists/* 

WORKDIR /usr/src/thrift
RUN ./configure  --without-python --without-cpp \
    && make \
    && make install \
    # Removing the source code after installation.
    && rm -rf /usr/src/thrift

Here, by default, we are using the Node.js Docker image created above. But we can pass any other environment’s base image as a build argument to install Thrift on top of it, so it’s pluggable everywhere.
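
For example, the default base can be overridden at build time via Docker’s --build-arg flag (the alternative image names below are purely illustrative):

docker build -t wingify-node-9.11.2-thrift-0.10.0:1.0.5 .
docker build --build-arg BASE=wingify-php-7.2:1.0.0 -t wingify-php-7.2-thrift-0.10.0:1.0.0 .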

Finally, the actual service can use the above as the base image in its own Dockerfile.

Access private repository dependencies

We have multiple services with dependencies that are fetched from private repositories. For example, in our Node.js service, we have one such dependency listed in package.json as:

{
  "my-dependency": "git+ssh://[email protected]/link/of/repo:v1.0.0",
}

Normally we need SSH keys to fetch these dependencies, but a Docker container won’t have them. Below are a few ways of solving this:

  • Option 1: Install dependencies externally (locally or on Jenkins) and have Docker copy them in directly.
    • Advantages:
      • No SSH key required by Docker.
    • Disadvantages:
      • Dependency installation won’t be cached automatically, as it happens outside Docker.
      • Some modules like bcrypt have binding issues if they are not installed directly on the same machine.
  • Option 2: Pass the SSH key as an argument in the Dockerfile, or copy it from the system to the working directory and let the Dockerfile copy it in. The Docker container can then install the dependencies.
    • Advantages:
      • Caching is achieved.
      • No module binding issues.
    • Disadvantages:
      • The SSH key would be exposed in the Docker container if not handled correctly.
      • A single shared SSH key has security issues, and separate keys are difficult to manage.
  • Option 3: Host the private packages globally, like on our own private npm registry (in the case of Node.js), and add its host entry on the system. The Docker container can then install the dependencies by fetching them from our private npm.
    • Advantages:
      • Caching is achieved.
      • No SSH key required.
    • Disadvantages:
      • One-time setup of the hosting.
      • We need to publish the private packages each time we create a new tag.

Option 3 proved to be much better in our case, so we moved ahead with it.

Service Dockerfile

The final Dockerfile of the service, implementing all of the above, will look like this:

ARG BASE=wingify-node-9.11.2-thrift-0.10.0:1.0.5

# hadolint ignore=DL3006
FROM ${BASE}

RUN mkdir -p /opt/my-service/
WORKDIR /opt/my-service

# Dependency installation separately for caching
COPY ./package.json ./yarn.lock ./.npmrc ./
RUN yarn install

COPY . .

CMD ["yarn", "start:docker"]

Here, ‘.npmrc’ contains the registry entry which points to our own private npm. We copy it so that the Docker container can fetch our private packages from it.

Caching

Every time we change our code, we don’t want the Docker build to install dependencies again (unless they have changed). For this, we divided the ‘COPY’ step in the above Dockerfile into 2 parts:

# Here we are copying the package.json, yarn.lock and .npmrc files and installing the dependencies.
# This step will always be cached by Docker unless there is a change in any of these files.
COPY ./package.json ./yarn.lock ./.npmrc ./
RUN yarn install

COPY . .

Doing all this will reduce the Docker image build time to just a few seconds!

Auto-tagging and rollback

Tagging is important for any rollback on production. Fortunately, it’s easy to do with Docker: while building and pushing an image, we can specify the tag version with a colon. We can then use this tag in the Kubernetes YAML file to deploy to the pods, as shown in the fragment below.

docker build -t org/my-service .
docker build -t org/my-service:1.2.3 .

docker push org/my-service
docker push org/my-service:1.2.3
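
The image reference in the Kubernetes manifest then pins that tag; a minimal, hypothetical fragment of a Deployment’s pod template:

# Hypothetical fragment; service and image names are placeholders.
containers:
  - name: my-service
    image: org/my-service:1.2.3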

This works fine, but it still requires a new tag every time we build a new version of the image. The tag can be passed manually to a job, but what if there were auto-tagging?

First, let’s find out the latest tag. Here is the command to find the latest image tag from GCP:

gcloud container images list-tags image-name --sort-by=~TAGS --limit=1 --format=json

We can use this in a custom node script which will return the new incremented version. We just have to pass the image name and the release type i.e. major/minor/patch to it.

// Usage: node file-name image-name patch
const exec = require('child_process').execSync;

const TAG_TYPES = {
  PATCH: 'patch',
  MINOR: 'minor',
  MAJOR: 'major'
};

// Referenced from https://semver.org/
const VERSIONING_REGEX = /^(v)?(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$/m;

class Autotag {
  constructor(imageName = '', tagType = TAG_TYPES.PATCH) {
    this._validateParams(imageName, tagType);
    this.imageName = imageName;
    this.tagType = tagType.toLowerCase();
  }

  // Private functions
  _validateParams(imageName, tagType) {
    if (!imageName) {
      throw new Error('Image name is mandatory.');
    }

    if (!Object.values(TAG_TYPES).includes(tagType)) {
      throw new Error(
        `Invalid tag type specified. Possible values are ${Object.values(
          TAG_TYPES
        ).join(', ')}.`
      );
    }
  }

  _fetchTagsFromGCP() {
    return exec(
      `gcloud container images list-tags ${
        this.imageName
      } --sort-by=~TAGS --limit=1 --format=json`
    ).toString();
  }

  // Public functions
  increment() {
    const stringifiedTags = this._fetchTagsFromGCP();

    if (stringifiedTags) {
      try {
        const { tags } = JSON.parse(stringifiedTags)[0];

        for (let i = tags.length - 1; i >= 0; i--) {
          const tag = tags[i];
          if (VERSIONING_REGEX.test(tag)) {
            let [
              prefix = '',
              major = 0,
              minor = 0,
              patch = 0
            ] = VERSIONING_REGEX.exec(tag).slice(1);

            switch (this.tagType) {
              case TAG_TYPES.PATCH:
                patch++;
                break;
              case TAG_TYPES.MINOR:
                patch = 0;
                minor++;
                break;
              case TAG_TYPES.MAJOR:
                patch = 0;
                minor = 0;
                major++;
                break;
            }

            return `${prefix}${major}.${minor}.${patch}`;
          }
        }
      } catch (e) {}
    }

    // Return default tag if none already exists.
    return '0.0.1';
  }
}

try {
  console.log(new Autotag(...process.argv.slice(2)).increment());
} catch (e) {
  console.log(e.toString());
}

Thanks to Gaurav Nanda for the above script.

Production staged rollout

Our ultimate goal is to migrate everything from the existing setup to GCP with Docker and Kubernetes. Migrating the whole system in one go on production is time-consuming as well as risky.

To avoid this, we are targeting individual services one by one. Initially, a service will run on GCP as well as on the existing servers, with its database pointing to the old setup. We will open it up for a few accounts at the beginning; the rest of the accounts will work as before. This ensures that if any issue comes up in the new setup, we can easily switch back to the old setup while fixing it.

Next steps

  • Integrate health check APIs with Kubernetes.
  • Development environment using Telepresence.
  • Add a service discovery tool like Consul.
  • Add a vault system for secrets.
  • Better logging.
  • Integrate Helm to manage the Kubernetes cluster.
  • Docker image size management.
  • Add support for blue-green deployments.

We may be doing some things in ways that can be improved upon, and there may be better tools that we are yet to explore. We are open to any suggestions that can help us improve what we are already doing and what we will need in the future. This is just a start; we will try to improve in every iteration and solve new challenges.

Thanks to Gaurav Nanda for mentoring and guiding us for everything.


Introduction:

Js13kGames is a JavaScript game development competition that is organized every year from 13th August to 13th September. What makes this one stand apart from other game dev competitions, is the game size limit of 13 kilobytes. Yes, Just 13KB for everything, including code, images, graphics, sounds! Moreover, a theme is decided every year and the game, ideally, should be based on that. This results in a lot of brainstorming and innovative ideas. For this year, the theme was ‘offline’.

The competition is organized by Andrzej Mazur, who is also one of the judges. The judges play every submitted game at the end of the competition and give their reviews in terms of what went right and in what directions improvements could be made. Needless to say, there are a lot of prizes like gadgets, t-shirts, and stickers to be won every year.

Preparations

The JS13K competition is not new to folks at Wingify; a couple of us have prior experience with it. Kush, Gaurav and Varun had participated in previous JS13K events. Having experienced & enjoyed the competition first hand, they felt compelled to inform the rest of us as well.

A week into the competition, we all met and the veterans introduced us to the rules & theme, basic techniques related to game development, & tools that might come in handy along the way. We were a little short on time considering that we had to first come up with feasible concepts, and since our primary experience is Frontend SPA development, creating these games was going to be unlike any code we professionally write.

Entries

Twisty Polyhedra

Author: Aditya Mishra

The concept behind this one is simple: you get access to Rubik’s cube variants of different sizes & shapes to solve. You’re likely familiar with the standard size-3 Rubik’s cube, but what you may not be aware of is that it’s actually just one member of a huge family of puzzles with rich mathematical structure. This game was built so that it could at least support face-turning Tetrahedra & Octahedra apart from the standard cubes.

Some of the fun challenges involved with this one were:

  • Composing & rendering the shape
  • Re-orienting & twisting the puzzle according to cursor movements
  • Animating the twists

There was a lot to learn from these challenges, as they involved playing with vectors, coming up with algorithms to generate & render the sliced shapes on a 2D canvas, and inferring the desired action from simple input events.

demo | source

Keep-Alive

Author: Surbhi Mahajan

The idea of this game is inspired by Duet. Although the gameplay is based on the classic game, it offers extended features and new visuals. There are 3 self-contained levels each with a unique challenge. The player rotates colored orbs in a circular track, guiding them to avoid incoming obstacles. It’s required to keep all the orbs intact to keep going. The orbs only collide with obstacles of a different color than them & pass unharmed through obstacles otherwise.

A few of the interesting challenges with this were:

  • Collision detection & revert effects
  • Special effects for tail & kill animations
  • Dynamic level definitions

Since a lot of these effects were algorithmically generated, the size limit was not a concern for this entry. The primary learning experience here was integrating deterministic dynamic stages, cool effects & structuring the implementation.

demo | source

Anti_Virus

Author: Punit Gupta

This game is inspired by a classic game ‘Snow Bros’ but with a very different flavor. We all use various offline storage devices to save our precious data. But inevitably, sometimes the data gets corrupted due to viruses. The goal here is to go into those devices, kill those viruses and save the data. The gameplay involves moving around, climbing the platforms, freezing the opponents and throwing them over other enemies.

Some of the major challenges involved with this idea are:

  • Detecting collisions among platforms, walls, opponents, shooters and player.
  • Randomizing enemy movements.
  • Animating when player or enemies are killed.

The physics & special effects were the most fun part of the implementation; squeezing all these things into the required size & keeping the gameplay smooth also involved quite a lot of optimization & polishing.

demo | source

Sum It Up

Author: Hemkaran Raghav

This game is inspired by one of the most popular games of all time, ‘Spider Solitaire’. In this one, you don’t have to stack the cards in increasing order. Instead, numbers are written on these cards and you have to stack identical cards over each other causing them to merge into a new card with double value. Your goal is to create the highest score possible.

The most fun parts of this implementation were creating smooth & beautiful animations.

demo | source

Up & Down

Author: Dinkar Pundir

Inspired by vvvvvv, this game is based on playing with gravity. Apart from the ability to move left/right, you can toggle the direction of the gravitational pull: at the click of a button, it flips upside-down. This basic idea, when combined with obstacles added in creative ways, leads to plenty of possibilities for a platformer.

This involved problems like:

  • Implementing smooth discrete integration that provides a nice balanced difficulty
  • Collision detection that properly counters changing gravity
  • Creating a well-structured design to allow for easy extensions to stage definitions

Although created in a very short amount of time, not only were these things fun to solve, but these simple problems also led the way towards a wide variety of techniques related to mathematical ideas & design principles in general.
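
For context, gravity flipping of this kind usually reduces to a signed gravity term in a simple Euler integration step; a generic sketch (not the game’s actual code):

// Generic sketch: flip the sign of gravity and integrate each frame.
const GRAVITY = 0.5;
const player = { y: 0, vy: 0, gravityDirection: 1 }; // 1 = pull down, -1 = pull up

function flipGravity() {
  player.gravityDirection *= -1;
}

function update() {
  player.vy += GRAVITY * player.gravityDirection; // discrete (Euler) integration
  player.y += player.vy;
  // ...collision checks must account for the current pull direction
}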

demo | source

Robo Galactic Shooter

Author: Ashish Bardhan

The idea for this entry was to create a classic 2D shoot-’em-up with a nostalgic retro feel. You need to survive a barrage of asteroids for as long as possible; the good thing is that you’re given some solid guns! The robot flies towards the right into oncoming asteroids of different sizes & velocities, and the gameplay involves either dodging them or shooting them till they disintegrate.

Plenty of effort went into the following parts:

  • Creating cool background effects, handling sprites
  • Integrating sound effects
  • Boundless level generation

Considering how many effects & elements were integrated, a lot of optimizations were required to fit it all in 13KB. The game also incorporated sound effects using a micro-library called jsfxr.

demo | source

What we Learned

A lot of what we learned came from implementing animations, physics, collision detections etc. It’s nice to see how ideas from geometry & basic numerical integration techniques come together to make a functional game. It’s also worth mentioning that given how complex the implementations of these simple concepts tend to become, an understanding of software design principles is not only a requirement while building these things, but also grows really well with such an experience.

Most of the integrated effects & animations we had to create on our own. Some of our games also involved a degree of focus on keeping the algorithms fast, for example, by managing object lifecycles to keep the computations limited to only visible entities, by using clever little hacks to avoid redundant computations while rendering etc.

In the instances where third-party libraries were used, we had to make sure they introduced very little overhead. Two of the listed games leveraged Kontra.js, a micro-library for getting up & running quickly without any significant impact on build size. Kontra.js provides nice features such as sprite management & out-of-the-box collision detection. Galactic Shooter also used a slightly altered version of jsfxr, a lightweight sound generation library.

For the build process, almost all of us followed a different path. For the smaller games, Webpack was suitable for bundling the source. For some larger ones, we wanted to avoid even the tiniest overhead introduced by Webpack, so we used simple Grunt / Gulp tasks to concatenate & minify files. In some cases, we even avoided using the Closure Compiler, as arrow functions & classes result in much more concise code. Apart from these, we experimented with various compression tools & techniques for fitting all those JS/HTML/CSS/PNG files into the 13 KB size limit. But one way or another, we managed to make it for all of the entries.
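
As an illustration, a Gulp task of that shape might look like the following (the plugin choices here, gulp-concat and gulp-terser, are assumptions about one possible setup rather than what each entry actually used):

// gulpfile.js - a minimal concatenate + minify pipeline (assumed setup)
const { src, dest } = require('gulp');
const concat = require('gulp-concat');
const terser = require('gulp-terser');

exports.build = () =>
  src('src/**/*.js')
    .pipe(concat('game.js')) // concatenate all sources into a single file
    .pipe(terser())          // minify while keeping arrow functions & classes intact
    .pipe(dest('dist'));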

Conclusion

Participation in this event did require committing a significant amount of personal time, but it proved to be an amazing experience, not only in terms of what we learned from it but also in how much fun we had while implementing these concepts. After all, game development is as fun as it gets when it comes to programming - creating a tiny universe of your own, where the laws of physics are what you want them to be and things evolve how you tell them to. And we certainly managed to pick up some nice ideas along the way.

We would love to hear your feedback or answer any queries you might have, feel free to drop a comment or tweet us at @wingify_engg.

Game on!


For the past few months, we at Wingify have been working on building a common platform for different products, so that things get reused across products and we don’t keep re-inventing the wheel. This also has additional benefits like enforcing common good practices across products, easier switching for developers between products, and more. As part of the same endeavor, our Frontend team has been working hard on a Design System and a frontend boilerplate app on top of it. The boilerplate is something which any product at Wingify can simply fork to build a new frontend app, using the reusable components provided by the base Design System. More about the boilerplate and Design System later, but in this post I want to talk specifically about a very important part of our Design System - our CSS.

Issues with current CSS

First, why did we even start looking for a new way to write CSS? Previously, we were using a mix of BEM and some helper classes. Occasional classes which belonged to neither of those two categories could be seen in the code base too! 😅 This approach led to the following issues:

  • Naming classes was always a problem - often, someone would comment on a pull request that a class name doesn’t make sense and should be changed to something more “meaningful”. Finding “meaningful” names is tough!
  • Unused CSS - automated tools to detect unused CSS are not very reliable, especially with Single Page Apps. Our CSS kept growing over time, and one main reason for that was certainly that no one ever cared to remove the unused CSS.
  • Refactoring - with regular classes, it becomes difficult to refactor with confidence, because the developer cannot be sure that the class they are renaming or removing isn’t being used somewhere else they are not aware of.

I have also blogged about these issues in detail in an article here.

Evaluating other approaches

Much before starting this mission, we started evaluating various frameworks for writing CSS. Our evaluation was based on following parameters:

  • Final output file size
  • Rate of growth of file size over time
  • Unused CSS handling
  • Ease of learning for a new developer
  • Ease of maintenance
  • Documentation (existing or requirement to create one internally)
  • Lintable
  • Themable
  • Ease of refactoring
  • Naming effort involved
  • Critical CSS generation

Yeah, lots of parameters. We evaluated very critically 😀. Also, notice that I have kept the end-user performance related parameters on top as that’s what mattered most to us.

The winner - ACSS

We evaluated lots of well-known frameworks and libraries out there, like pure BEM, Tachyons, Styled Components, Vue’s scoped CSS, and CSS Modules. But we found that the atomic CSS approach met most of the requirements mentioned above. Also known as the helper/utility classes approach, atomic CSS requires no naming, documentation is available if we go with a well-known library, and it is themable and lintable. Refactoring is also easier, as all you need to do is remove classes from your HTML and never touch the CSS.

But even among the various atomic CSS libraries available out there, we decided to go with ACSS (I know, the name is a little too generic, as they call themselves Atomic CSS!). We were introduced to ACSS by our resident UX engineer, Jitendra Vyas. Along with him, we discussed a lot of points about ACSS with one of its developers, Thierry Koblentz. ACSS was also mentioned by Addy Osmani at Google IO.

ACSS comes with very strong benefits which no other library had. You don’t write CSS with ACSS; in fact, you don’t even download a CSS file and use it. ACSS comes with a tool called Atomizer which detects the use of ACSS classes in your HTML (or any file) and generates the corresponding CSS for those detected classes. Here is a sample of the HTML you would write with ACSS:

<button class="Bgc(blue) C(white) P(10px) D(ib) Cur(p) Bgc(red):h">
I am a button
</button>
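
To give a feel for the workflow, here is a minimal sketch of driving Atomizer from Node (the method names follow Atomizer’s documented API; the input string, config object and output file are placeholders):

// A rough sketch, not our actual build step.
const fs = require('fs');
const Atomizer = require('atomizer');

const html = '<button class="Bgc(blue) C(white) P(10px)">I am a button</button>';
const atomizer = new Atomizer({ verbose: true });

const foundClasses = atomizer.findClassNames(html);   // detect ACSS classes in the markup
const config = atomizer.getConfig(foundClasses, {});  // build an Atomizer config from them
const css = atomizer.getCss(config);                  // generate only the CSS that is actually used

fs.writeFileSync('atomic.css', css);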

On top of the usual benefits of the atomic CSS approach, ACSS’s automatic CSS generation means that we never ship a single byte of CSS that we are not using in an app! Whatever we use in the HTML lands in the final CSS file. In fact, ACSS generates such a small amount of CSS that it’s practically possible to inline your complete CSS - i.e. your complete CSS can become your critical CSS!

We were also freed from writing documentation, as the only thing a developer needs to write ACSS is its awesome, searchable reference. There is also a VSCode extension which removes even the need for the reference. And, of course, we were freed from naming things.

It may seem that a developer would have to write the same set of classes repeatedly to create the same things, but that is not true. ACSS, or any atomic CSS approach, requires a templating/component system where you can reuse a piece of HTML without duplicating it. We use Vue.js to build our small reusable components.
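
For instance, a reusable button component keeps the ACSS classes in exactly one place; a hypothetical single-file component (not our actual code):

<!-- Button.vue: the ACSS classes live only inside this component -->
<template>
  <button class="Bgc(blue) C(white) P(10px) D(ib) Cur(p) Bgc(red):h">
    <slot></slot>
  </button>
</template>

<script>
export default {
  name: 'AcssButton'
};
</script>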

Of course, there are some cons to ACSS as well. For example, inside a particular component’s HTML, you cannot tell what a tag is about because there are no descriptive class names. This can be somewhat mitigated by using semantic tags. But overall, the pros far outweigh the cons.

The end result

We just finished porting a decent-sized app to our new system and, guess what, our CSS went down from 90 KB to just 8 KB! 😱

That is all for this post! I encourage you to go and try out ACSS with an open mind and see if it solves your current CSS problem if any. We are happy to answer any questions you might have on our new approach, Design System etc. Do comment on this post or tweet them out to our twitter handle 👉🏼 @wingify_engg.

Bbye!


Recently, we migrated one of our web apps to Webpack 4, which decreased build time and reduced chunk size by using the Split Chunks plugin. The plugin automatically identifies modules which should be split based on heuristics and splits the chunks. This blog post covers our efforts in understanding the mysterious Split Chunks plugin.

The Problem

The problem we faced with the default Split Chunks config was that a large module of 550 KB was duplicated in 4 async chunks. So, our goal was specifically to decrease the bundle size and utilize a better code-splitting mechanism in the app.

Our Webpack configuration file looks like this:

// Filename: webpack.config.js

const webpack = require('webpack');
module.exports = {
   //...
   optimization: {
      splitChunks: {
         chunks: 'all'
      }
   }
};

We used webpack-bundle-analyzer to get a nice view of our problem.
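
For reference, wiring the analyzer into the config is a one-plugin change; a minimal sketch mirroring the library’s documented usage (options omitted):

// Filename: webpack.config.js

const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
   //...
   plugins: [
      // Opens an interactive treemap of the emitted bundles after each build.
      new BundleAnalyzerPlugin()
   ]
};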

Observation

By default, the Split Chunks plugin only affects on-demand chunks, and it splits chunks based on the following conditions:

  1. The new chunk should be shared, or its modules should come from the node_modules folder.
  2. The new chunk should be bigger than 30 KB.
  3. The maximum number of parallel requests when loading chunks on demand should be lower than or equal to 5.
  4. The maximum number of parallel requests at initial page load should be lower than or equal to 3.

In our case, a separate chunk for the large-sized library was not being created.

What’s the reasoning behind this?

The library satisfies the first and second conditions: it is used in 4 chunks and its size (550 KB) is bigger than 30 KB, so it should go into a new chunk. But it does not satisfy the third condition, as 5 chunks were already being created at each dynamic import, which is the maximum limit for async requests. We observed that the first 4 of those chunks contain modules shared among 7, 6, 5 and 5 async chunks respectively, and the last one is the dynamic import’s own chunk. Modules on which the largest number of async chunks depend are given priority, and since our library is required by only 4 async chunks, a chunk containing it was not created.

When we run yarn build to build our assets, a chunk named vendors~async.chunk.1~async.chunk.2~async.chunk.3~async.chunk.4 is not found in the output:

Solutions

We can have more control over this behaviour by changing the default configuration in either, or a combination, of the following ways:

  1. Increasing maxAsyncRequests results in more chunks. A large number of requests degrades performance, but that is not a concern with HTTP/2 because of request and response multiplexing. So, this configuration should be preferred only when HTTP/2 is in use.

    Now let’s take a look at Webpack configuration file after this change:

        // Filename: webpack.config.js

        const webpack = require('webpack');
        module.exports = {
            //...
           optimization: {
              splitChunks: {
                 chunks: 'all',
                 maxAsyncRequests: 20
              }
           }
        };
    
  2. Increasing minSize also gives the desired result. Modules that are shared widely in our app but are smaller than minSize no longer qualify for separate chunks, as they now violate the second condition. With a minSize of 100 KB, for example, only modules larger than 100 KB are considered, which leaves more room for creating chunks containing the large-sized modules.

    Now let’s take a look at Webpack configuration file after this change:

        // Filename: webpack.config.js

        const webpack = require('webpack');
        module.exports = {
            //...
           optimization: {
              splitChunks: {
                 chunks: 'all',
                 minSize: 100000
              }
           }
        };
     

Experiment

Steps:

  1. We picked two async chunks between which the large-sized third-party library (550 KB) is shared. Let’s call these chunks async.chunk.1 and async.chunk.2, and assume that each chunk’s name and the corresponding route’s name are the same.
  2. Loaded async.chunk.1 route first and calculated the total content size loaded.
  3. Then navigated from async.chunk.1 route to async.chunk.2 route and calculated the content size again.

Results with the first approach (varying the maxAsyncRequests property):

|   MaxAsyncRequests   |           async.chunk.1          |        async.chunk.2       |
|----------------------|----------------------------------|----------------------------|
|          5           |            1521.6 KB             |          758 KB            |
|          10          |            1523.76 KB            |          79.1 KB           |
|          15          |            1524 KB               |          79.1 KB           |
|          20          |            1524.3 KB             |          79.1 KB           |

After this change our bundles look like this:

With this configuration, a separate chunk named vendors~async.chunk.1~async.chunk.2~async.chunk.3~async.chunk.4 is created which is shown below:

Results with the second approach (varying the minSize property):

|       MinSize       |          async.chunk.1           |        async.chunk.2       |
|---------------------|----------------------------------|----------------------------|
|        30 KB        |            1521.6 KB             |          758 KB            |
|        50 KB        |            1521.6 KB             |          188 KB            |
|        100 KB       |            1521.4 KB             |          78.4 KB           |

After this change our bundles look like this:

In this case too, a large-sized library is extracted into a separate chunk named vendors~async.chunk.1~async.chunk.2~async.chunk.3~async.chunk.4 which is shown below:

Note: The async.chunk.2 chunk size with the 50 KB minSize configuration is 188 KB, whereas it is reduced to 78.4 KB with the 100 KB minSize configuration. This is because one more module of size 146 KB that is shared among four other chunks gets extracted into a separate chunk, decreasing the async.chunk.2 size to 78.4 KB (Awesome!).

Conclusion

Increasing either minSize or maxAsyncRequests decreases the size of the async.chunk.2 chunk.

The second approach can result in multiple large-sized chunks, each containing several duplicated small-sized modules. On the other hand, the first approach results in a large number of small chunks which do not have any duplicated modules. Loading many small chunks increases the loading time of a page, but with HTTP/2 it works efficiently.

Finally, we achieved what we wanted: a big library is now separated from our bundles and lazy-loaded on demand. Thanks to Dinkar Pundir for helping me solve the above problem. If you have any doubts, feel free to drop a comment or tweet us at @wingify_engg.

Happy Chunking… !!


Heatmaps record visitor clicks on the live state of your website, which can be used to interpret user behavior on elements like modal boxes, pages behind logins, and dynamic URLs.

VWO Heatmap in action on vwo.com

But this raises a few questions: how do we verify heatmaps end to end using automation? How do we check that clicks are being plotted correctly? How do we check that there is no data loss while plotting the clicks?

The answer to the above questions is the HTML canvas. As VWO heatmaps are rendered on an HTML canvas, we decided to leverage that to verify the heatmap end to end as well. The best part of using the canvas is that it can be integrated easily with your existing Selenium scripts.

How can Canvas be used for Heatmap Automation?

There are two phases in order to verify if the heatmaps are working or not.

  1. The first phase is to plot clicks on the test page and store the click coordinates. This can be easily done using Selenium.
     //get elements location from the top of DOM
     element.getLocation().then(function (location) {
         //get elements height and width
         element.getSize().then(function (size) {
             //store element’s center coordinates w.r.t. top left corner of DOM in array    
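             // ('Coordinates' is assumed to be a simple { x, y } value object defined in the test helpers.)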
             clickDataArray.push(new Coordinates(Math.floor(location.x + size.width / 2), Math.floor(location.y + size.height / 2)));
         });
     });   
    

    In this function, we simply find the center coordinates of the element we clicked and store them into an array. These stored coordinates will then be used with the canvas functions to check whether the clicks were plotted.

  2. The second phase is to leverage canvas functions and the stored coordinate data to verify that the heatmap is plotted correctly. We first check whether the heatmap canvas is empty; if it is, we do not check any further.
     exports.isCanvasEmpty = function () {
         browser.wait(EC.presenceOf(element(by.tagName('canvas'))), 5000);
         return browser.executeScript(function () {
             var canvas = document.getElementsByTagName('canvas')[0];
             var imgWidth = canvas.width || canvas.naturalWidth;
             var imgHeight = canvas.height || canvas.naturalHeight;
             // true if all pixels Alpha equals to zero
             var ctx = canvas.getContext('2d');
             var imageData = ctx.getImageData(0, 0, imgWidth, imgHeight);
             //alpha channel is the 4th value in the imageData.data array that’s why we are incrementing it by 4
             for (var i = 0; i < imageData.data.length; i += 4) {
                 if (imageData.data[i + 3] !== 0) {
                     return false;
                 }
             }
             return true;
         });
     };
    

In this function, we get the 2D context of the canvas and iterate over the image data to check whether the alpha channel of every pixel is zero. The alpha channel is an 8-bit layer in a graphics file format that is used for expressing translucency (transparency); if the alpha value of a pixel is equal to zero, nothing has been plotted over that pixel.

If the alpha value of any pixel is greater than zero, the canvas is not empty, which means clicks have been plotted onto the heatmap.

Once we are sure that the canvas is not empty, we can proceed to check that the clicks are plotted on the canvas at the correct positions, i.e. exactly where we clicked using Selenium.

exports.checkCanvasPlotting = function (coordinates) {
    'use strict';
    browser.wait(EC.presenceOf(element(by.tagName('canvas'))), 5000);
    return browser.executeScript(
        function () {
            var coord = arguments[0];
            var canvas = document.getElementsByTagName('canvas')[0];
            // true if all pixels Alpha equals to zero
            var ctx = canvas.getContext('2d');
            if (ctx.getImageData(coord.x, coord.y, 1, 1).data[3] === 0) {
                return false;
            }
            return true;
    }, coordinates);
};

In this function, we use the same canvas API to get the image data and then check that, for every coordinate where a click was plotted, the value of the alpha channel is greater than zero.

The above function can be easily called as below:

exports.validateHeatmapPlotting = function (coordinateArray) {
    'use strict';
    for (var i = 0; i < coordinateArray.length; i++) {
        expect(canvasUtils.checkCanvasPlotting(coordinateArray[i])).toBe(true);
    }
};
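
Putting the pieces together, a spec can run both checks after the clicks from phase one have been recorded (the canvasUtils/heatmapUtils module names and the navigation step are assumptions based on the helpers above):

it('plots every recorded click on the heatmap canvas', function () {
    // ...click the elements and fill clickDataArray (phase one), then open the heatmap view...
    expect(canvasUtils.isCanvasEmpty()).toBe(false);
    heatmapUtils.validateHeatmapPlotting(clickDataArray);
});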

Conclusion

  • Canvas utility functions and Selenium can easily be leveraged to verify basic heatmap functionality through automation.
  • These can easily be extended to verify the number of clicks on an element and also to verify plotting intensity.

Hope this post was a good enough reference to help you write end-to-end automation script for heatmap testing. If you have any questions about this, let us know via comments.