VWO puts a lot of focus on ensuring that websites remain performant while using VWO. We have been increasing our efforts in this area, and as a result we are able to bring two significant optimizations that directly reduce VWO's impact on the performance of our customers' websites.

jQuery Dependency Removed

VWO primarily used to be an A/B testing tool (it is now a complete Experience Optimization Platform) wherein customers can apply changes to their websites using VWO’s editor without having to change their website code. To achieve this, we required a DOM manipulation and traversal library that was well supported across all browsers, and jQuery was the best choice back then.

It has served us well over the years, but we could not update jQuery and were stuck with version 1.4.2 because of a feature we provide to our customers: they can use any jQuery API (exposed by VWO as vwo_$). This allowed customers to avoid including jQuery twice on the page if their website was already using it. It was the right choice back then, as jQuery was very popular and almost every website had it. But as people started using various jQuery methods, upgrading became risky because we could not be sure which methods customers were using and how those methods’ APIs had changed in newer versions of jQuery.

To improve the performance of the library, we decided to write a small alternative to jQuery based on cash.js and jQuery itself. The new library is around 13 KB (minified and uncompressed) compared to 70.5 KB (minified and uncompressed) for jQuery 1.4.2.

Why not jQuery?

  • jQuery is huge in uncompressed size (70.5 KB).

    • jQuery has to take care of all browser quirks and all the possible ways a method can be called.
  • We were also not using all the features it has to offer.

    • jQuery provides a lot of methods, and not all of them are required for our purpose. For example, we don’t need its CSS animation support.

    • It provides a custom builder that allows users to skip certain modules, but the resulting JS was still large, as it included things we don’t use.

So, the solution was to create our own alternative to jQuery, which would meet the following requirements:

  1. A highly customized solution implementing only the functionality required by VWO, to keep the code size as small as possible while supporting IE11+ and all other major browsers (Chrome/Firefox/Opera/Safari/Edge). We used Browserslist to identify browser market share and decided not to support browsers with a share of less than 1 percent.

  2. It should conform to the jQuery API so that almost no change is required in code that previously used jQuery (a minimal sketch of such a wrapper follows this list).

  3. It should be as stable as jQuery.
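For illustration, here is a minimal, hypothetical sketch (not our actual implementation) of what such a jQuery-compatible wrapper can look like for a couple of the methods we use:

// Hypothetical sketch of a tiny jQuery-compatible wrapper exposing only the methods we need.
function vwo_$(selector) {
  // Accept either a CSS selector string or a DOM node, like jQuery does.
  var nodes = typeof selector === 'string'
    ? Array.prototype.slice.call(document.querySelectorAll(selector))
    : (selector ? [selector] : []);

  return {
    length: nodes.length,
    // eq(n): wrap only the nth element of the current collection.
    eq: function (n) {
      return vwo_$(nodes[n] || null);
    },
    // css(prop, value): setter-only subset of jQuery's css().
    css: function (prop, value) {
      nodes.forEach(function (node) {
        node.style[prop] = value;
      });
      return this;
    }
  };
}

Existing code such as vwo_$('.banner').eq(0).css('display', 'none') keeps working unchanged against such a wrapper.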

Steps we took to migrate to “no jQuery”

  1. We started by scanning the entire library codebase for calls matching vwo_$().methodName with a regular expression search, which gave us the list of jQuery methods being used by VWO (a sketch of such a scan is shown after this list).

  2. We found methods that were just “syntactic sugar”, e.g. first(), which is technically equivalent to eq(0). So, we changed every usage of first() to eq(0).

  3. We took the implementation of these methods from the latest jQuery version as-is and removed the code that was not relevant to us.

  4. We took the tests from cash.js and ran them against our library. We chose the cash.js tests because its functionality is a very small subset of jQuery and mostly a superset of our requirements. A lot of tests failed; we identified which failures were acceptable and which were not, and modified the tests accordingly.

  5. We integrated the new library with our codebase and ran the tests. Apart from unit tests, we have a good number of integration and E2E tests for the libraries that used jQuery. Our existing unit tests did not help much here, but the good integration and E2E coverage helped in finding bugs.

  6. Apart from automated tests, we did a good amount of manual testing across various browsers and platforms with all the libraries, which took the bulk of the effort.

  7. Once all testing passed, we enabled the new library for vwo.com, since the website itself uses VWO, which let us monitor errors in production.

    • This was required to test the library in production with all sorts of devices and browsers that visitors come from.

    • The website has Sentry in place to track any errors occurring on it.

    • We monitored the errors for a few days and identified a few bugs that neither our automated nor our manual testing had caught.

  8. After the bugs were fixed, we monitored for a few more days just to be sure, and then enabled the new library for all new users signing up for VWO. It isn’t possible to enable it for existing accounts, as they might be using some jQuery methods directly.

  9. Next, we will work on a way to reliably identify existing accounts for which the new library can be automatically enabled.
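As an illustration of step 1, a minimal sketch of such a scan (simplified; the regex, directory, and file filter here are illustrative rather than our actual tooling) could look like this:

// Walk the source tree and collect every vwo_$(...).method( occurrence.
const fs = require('fs');
const path = require('path');

const METHOD_CALL_RE = /vwo_\$\([^)]*\)\.([a-zA-Z_$][\w$]*)\s*\(/g;

function collectJQueryMethods(dir, methods = new Set()) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      collectJQueryMethods(fullPath, methods);
    } else if (entry.name.endsWith('.js')) {
      const source = fs.readFileSync(fullPath, 'utf8');
      let match;
      while ((match = METHOD_CALL_RE.exec(source)) !== null) {
        methods.add(match[1]); // the method name, e.g. "css", "attr", "eq"
      }
    }
  }
  return methods;
}

console.log([...collectJQueryMethods('./src')].sort());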

Below are the performance stats for a static, locally hosted website (chosen to eliminate network fluctuations) using the two different versions of our library. All stats are the median of five performance audits.

                   Time to Interactive (s)   First CPU Idle (s)   Performance Score
With jQuery        6.1                       5.9                  74
Without jQuery     5.4                       5.1                  79

Brotli Compression for all Static Files

Brotli is a compression technique introduced by Google. It is a lossless compression algorithm that compresses data using a combination of the LZ77 algorithm, Huffman coding, and 2nd-order context modelling. It is similar in speed to deflate but offers denser compression.

Why brotli?

It is well supported by all popular browsers, and it is reported to offer gains of up to 25% over gzip compression. This was enough to get us focused on implementing Brotli on our CDN. Fewer bytes transferred over the network not only decreases page load time but also decreases the cost of serving the files.

Why just Static Files?

We found that compressing resources on the fly would not be a straightforward performance win, as Brotli’s compression time, which is higher than gzip’s, might impact response time. See https://blogs.dropbox.com/tech/2017/04/deploying-brotli-for-static-content/ for more details. So we avoided that in the first implementation. Also, as Brotli is meant to compress text files and shouldn’t be used for binary files, we skipped images from Brotli compression.

Steps we took to move from gzip to Brotli for Static Files

Brotli compression would be enabled for all of our customers at once (a per-account strategy wasn’t possible, as static requests don’t contain the VWO account ID), and we had to be extra sure that under no circumstance would our customers’ data be impacted. So, we carefully followed these steps:

  • We compressed all the static files during our Node.js-based build process at the highest compression level (i.e. 11); a sketch of this step is shown after this list.

  • We use OpenResty at our CDN and we already had certain rewrite rules in Lua in place to be able to serve different content to different browsers. There we added support for serving already compressed brotli files.

    • We read the ‘Accept-Encoding’ request header and identified the encodings supported by the user agent from it.

    • If the user agent claimed to support Brotli, we served Brotli; otherwise, we served gzip. We assume there is no browser that doesn’t support gzip, which is validated by https://caniuse.com/#search=gzip.

    • We made sure the Vary: Accept-Encoding response header is set in all cases. More on this later.

  • Before going to production, we wanted to be sure that all browsers which claim to support Brotli are actually able to decompress content at the highest compression level. For this, we compressed our most-used library and served it on vwo.com as an independent request. We identified a particular string in the library and made sure that it was present every time; if it wasn’t, or if the response code wasn’t 200, we logged an error to Sentry. We monitored the logs for two days and found no issues, so all was OK from this angle.

  • Due to the relatively complex release process of VWO libraries, it wasn’t practical to create Brotli files for all possible static files (we have versioning for all files). Because of this, we had to modify the Nginx location blocks to redirect any request for a deprecated file to the latest stable version of that file. This required testing all possible static-content-serving endpoints.
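As a rough sketch of the first step above (the file path is illustrative, and this assumes a Node.js version with built-in Brotli support in the zlib module), the build-time compression boils down to something like:

const fs = require('fs');
const zlib = require('zlib');

// Pre-compress a static asset with Brotli at the highest quality (11) and write it next to the original.
function brotliCompressFile(filePath) {
  const input = fs.readFileSync(filePath);
  const compressed = zlib.brotliCompressSync(input, {
    params: {
      // Slowest but densest setting; acceptable for a one-time build step.
      [zlib.constants.BROTLI_PARAM_QUALITY]: 11,
      [zlib.constants.BROTLI_PARAM_SIZE_HINT]: input.length
    }
  });
  fs.writeFileSync(filePath + '.br', compressed);
}

brotliCompressFile('./dist/va.js'); // illustrative path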

How did we make sure that there were no issues with the release?

To ensure that there were no issues with the new version of the VWO library, we wrote request/response-based automation test cases for our CDN, in addition to the existing E2E test cases. We created a list of all possible static files served by our CDN by scanning the built code of all our libraries through automation. We combined it with all the possible versions of our libraries, which gave us a list of all possible endpoints (including some non-existent ones, as some files are not servable in certain versions).

It verified the following things for all those endpoints (a minimal sketch of such a check follows this list):

  • For different variations of the Accept-Encoding request header, the file was served with the expected compression. This was verified by checking the Content-Encoding response header.

  • The status code is 200. If the status code is non-200 (remember, there were non-existent endpoints in the list), the test checks one of the designated production CDN nodes to see if it also returns the same non-200 status code. If yes, it’s a non-existent endpoint; otherwise, it’s a bug.

  • We have endpoints like https://dev.visualwebsiteoptimizer.com/6.0/va.js, where 6.0 is the version of the library. We used the framework to ensure that the content served corresponds to the requested version. Almost all of our files have versioning information in them.

  • As a bonus, we reused the framework to verify licenses for all of our endpoints as well. The framework is flexible enough to verify anything in the response.
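Here is a minimal sketch of such a request/response check (the endpoint is the one mentioned above; the assertions and structure are illustrative, not our actual framework):

const https = require('https');

// Fetch only the status code and headers for a URL with a given Accept-Encoding header.
function fetchHeaders(url, acceptEncoding) {
  return new Promise((resolve, reject) => {
    https.get(url, { headers: { 'Accept-Encoding': acceptEncoding } }, res => {
      res.resume(); // drain the body; we only care about status and headers
      resolve({ statusCode: res.statusCode, headers: res.headers });
    }).on('error', reject);
  });
}

async function verifyEndpoint(url) {
  // If the client claims Brotli support, Brotli should win; otherwise gzip is served.
  const cases = [['br', 'br'], ['gzip', 'gzip'], ['gzip, br', 'br']];
  for (const [acceptEncoding, expected] of cases) {
    const { statusCode, headers } = await fetchHeaders(url, acceptEncoding);
    console.assert(statusCode === 200, `${url}: unexpected status ${statusCode}`);
    console.assert(headers['content-encoding'] === expected,
      `${url}: expected ${expected}, got ${headers['content-encoding']}`);
    console.assert(/accept-encoding/i.test(headers.vary || ''),
      `${url}: Vary: Accept-Encoding header missing`);
  }
}

verifyEndpoint('https://dev.visualwebsiteoptimizer.com/6.0/va.js');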

We didn’t make the changes live on all our CDN nodes in one go. We started with the node that receives the least traffic, with careful monitoring. We watched the logs for a day and then proceeded to make it live on the other servers as well.

For reference, we also compared the sizes of our most-used testing library (with the new jQuery-free version) served with gzip and with Brotli.

The importance of Vary Header

To make sure that no HTTP cache stores a specific compressed file and serves it to all user agents regardless of their decompression support, we use the Vary: Accept-Encoding response header so that the right file is served to each user agent. You can read more about it at https://www.fastly.com/blog/best-practices-using-vary-header.

Future Plans

Currently, the new library is available for new customers only, but we are planning to deploy it for all of our existing customers. That requires figuring out whether they use any jQuery methods directly. We will also experiment with on-the-fly Brotli compression for our non-static files.

This is not the end of our performance improvement journey. Stay tuned!


On March 29th, 2019, our team members Ankit Jain, Dheeraj Joshi, and I had the privilege of attending a very exclusive event called BountyCon at Facebook’s APAC HQ in Singapore. This was an invitation-only event organized by Facebook and Google that gave security researchers and university students an opportunity to hear from some of the brightest minds in the field of bug bounty hunting. The purpose of BountyCon was to bring researchers from all over the Asia-Pacific region under one roof to collaborate, network, and submit security flaws across both platforms during the live hacking event. In this blog post, I will cover my experience at BountyCon, how the conference turned out, my impressions of the CTF event, and my overall thoughts about the conference!

How did we get the invitation?

In January 2019, Facebook and Google launched a Capture the Flag (CTF) competition for the first time to identify new whitehats in the Asia-Pacific region. Like everyone else, we signed up for this competition and were super excited about it.

There were several reasons that led me to participate in this competition:

  • Chance to visit Singapore: The top 20 winners were to be awarded free flight tickets and accommodation in Singapore to attend the event.
  • Meeting new researchers: How could I miss a chance to be in an environment where everyone speaks your language?
  • Facebook Singapore HQ: Yes! The venue for this event was the Facebook office, and I was thrilled to visit their new APAC HQ.
  • Facebook and Google swag: Just a cherry on top!

Anyway, the objective of this competition was to perform passive recon against the targets (Google and Facebook) and capture as many flags as possible (the number of flags planted wasn’t revealed). Most of the challenges were related to OSINT, and some were related to web, cryptography, steganography, and reverse engineering.

I started the challenge by googling “BountyCon”, and at the end of the search I got a flag inside a security.txt file ;)

After capturing the first flag I felt a sense of adrenaline build up inside of me. Over the next two hours, I continued hammering away at my keyboard, consequently solving a few more challenges.

A problem that was both interesting and challenging was the Deeplink challenge. It involved reverse-engineering the Facebook app and reconstructing the flag from assembly code. Here is how I solved this challenge, thanks to Manish Gill for pointing me in the right direction.

The first thing that comes to mind when relating a CTF to an executable (APK) file is reverse engineering.

I quickly downloaded the latest app from https://www.facebook.com/android_upgrade.

Using the all-purpose decompiler tool as my first resort, I found the first flag in the AndroidManifest.xml file.

But wait, this is not the solution for Deeplink Challenge :P

The second flag was even more elusive. To solve the Deeplink challenge, I had to download and decompile a previous release of the Facebook app.

I downloaded the previous release (Facebook 203.0.0.16.293) of the Facebook app from apkmirror and decompiled the resources using apktool.

After searching for some time, I found an interesting file called PSaaActivity.smali.

After going through the assembly code and some debugging I finally found the second flag BountyCon{cr4zy_d33pl1nk_m461c}.

By the end of the competition, I solved 18 challenges and earned a total of 943 points. Though it was nowhere near the 1200+ points achieved by Ankit and Dheeraj, I really enjoyed the competition :P :P

After a couple of days, the three of us got an email from the Facebook team and we were delighted to know that we were selected as one of the top 20 winners from the APAC zone.

Meeting the legend Jeff Moss

On March 28, we left Delhi at around 11:00 PM and reached Singapore on Friday morning. After quickly freshening up at our hotel, we headed out for a walk in the city and ended up at Starbucks.

While we were chilling at Starbucks, Dheeraj got to know about an event, the “ALIBABA SECURITY Meetup”, on Twitter. This was a security meetup hosted by the Lazada security team and the Alibaba Security Response Center. We quickly registered ourselves on their online forum and set out to attend the meetup.

We reached the venue of the meetup, the Lazada Visitor Centre, at around 7:00 PM. After the opening ceremony, we had a chance to meet and greet the legend Jeff Moss, founder of Defcon and Blackhat. It was a great honor to meet him in person!

Picture Courtesy: @AsrcSecurity

The rest of our day consisted of an awesome dinner and afterward, we returned to our hotel and called it a night.

First Day of BountyCon

On 30th March 2019, the first day of BountyCon, we reached the venue, Facebook APAC HQ, at around 8:30 AM. After registration, we got a cool badge and a BountyCon goodie bag with a Google T-shirt, a BountyCon hoodie, a thermos flask, and a notepad in it.

The first talk was by Frans Rosen, Security Advisor at Detectify, who gave a walkthrough of his methodology and strategies to win big bounties. Here is a link to his presentation.

Some of the top security researchers and security engineers, including Jack Whitton, Maciej Szawłowsk, Yasin Soliman, Richard Neal, Pranav Hivarekar, João Lucas Melo Brasio, Mykola Babii, and Michael Jezierny, also shared tips like chaining vulnerabilities to increase impact and gave some guidelines for pentesting Android applications. The content of the talks was quite interesting and informative and gave a broad overview of the entire bug bounty process.

The top four CTF winners, Kishan Bagaria, HoMing Tay, Rahul Kankrale, and Sachin Thakur, gave presentations on the approaches they used to find the hidden flags across both platforms. I recommend checking out the writeup of the BountyCon flags by my friend Kishan; here is a link to his writeup.

The long first day ended with a good dinner at Cook & Brew at The Westin Hotel.

Second Day of BountyCon

On the second day of BountyCon, Facebook kicked things off with a live hacking event where researchers were challenged to report security bugs across both Facebook and Google. That day, researchers from across the APAC region, along with a few top researchers from HackerOne, submitted 40 bugs in total, and Facebook awarded over $120k for valid submissions. The top three researchers with the maximum points were Dzmitry Lukyanenka, Shubham Shah, and Anbu Selvam Mercy. After the award ceremony, the event was wrapped up with an amazing dinner at Nude Grills and a few drinks.

Overall, it was a worthwhile learning experience, and I would like to thank Facebook and Google for their effort in organizing this event.

Here are some of the photos that we took in Singapore and BountyCon:

Picture Courtesy: @rraahhuullk
Picture Courtesy: @GoogleVRP
Picture Courtesy: @ajdumanhug
Picture Courtesy: @InfoSecJon

Introduction:

At Wingify, we follow a microservices-based architecture to leverage its scalability benefits. We have a lot of microservices along with a complex networking setup among them. Currently, all the services are deployed on virtual machines in the cloud. We wanted to improve this architecture and use the latest technologies available, so we are moving towards the Docker and Kubernetes world!

Why Docker and Kubernetes?

The problems we are facing with the existing infrastructure:

  • Standardization and consistency
    • There is always the issue of keeping the environment consistent and standardized between production and development.
    • Most of our time goes into creating a production-like environment during development to roll out bugfixes or build new features.
    • With the new architecture, we are better equipped to efficiently analyze and fix bugs within the application. It has drastically reduced the time wasted on “local environment issues” and, in turn, increased the time available for fixing actual issues and developing new features.
    • Docker provides a repeatable, production-like development environment and eliminates the “it works on my machine” problem once and for all.
  • Local development
    • It’s not easy to develop and debug a service locally and connect it to the rest of the services running in the local environment.
    • Constantly redeploying to the local environment to test changes is time-consuming.
  • Auto scaling
    • The load on the services can never be the same all the time.
    • Keeping the services up for the whole year just to handle the peak load which comes on a few days of the festive season is a waste of resources.
    • Regularly benchmarking the load to scale the services over time is not an optimal approach.
  • Auto service restarts
    • If a service hangs or terminates due to a memory leak, resource-polling deadlocks, file-descriptor issues, or anything else, how is it going to restart automatically?
    • Although there are different tools available for various languages, setting them up for each service on every server is not ideal.
  • Load balancing
    • Adding and maintaining an extra entry point like Nginx just to provide load balancing is an overhead.

We are trying to tackle all these problems in an automated and easy way using Docker, Kubernetes, and a few open-source tools.

Our Journey

We started from scratch: we read a lot of articles, documentation, and tutorials, and went through some existing testing- and production-level open-source projects. Some of them solved a few of our problems, for some we found our own way, and the rest are yet to be solved!

Below is a brief overview of the ideas and approaches we found to solve many of our problems, the final approach we took, and a comparison between them:

Common repository approach

Every dockerized service starts with a Dockerfile. But the initial question is: where to put them? Across all the services, there will be a lot of Dockerfiles.

There are two ways to put them:

  1. Each service contains its own Dockerfile
    • All the repositories have separate Dockerfiles specific to that service.
  2. A common repository for all Dockerfiles
    • The Dockerfiles of every service are added to a common repository.

Below is the comparison among them:

  1. Common repository: needs a proper structure to distinguish Dockerfiles. Separate repositories: separation of concerns.
  2. Common repository: common linters and formatters. Separate repositories: each repo has to add the same linter and formatter repeatedly.
  3. Common repository: common githooks to regulate commit messages, pre-commit, pre-push, etc. tasks. Separate repositories: the same githooks duplicated in every service.
  4. Common repository: can contain reusable Docker base files. Separate repositories: no central place to put reusable Dockerfiles.
  5. Common repository: a central place for DevOps to manage the permissions of all Dockerfiles. Separate repositories: very difficult for DevOps to manage Dockerfiles individually.

You may be thinking about the ease of local development using volumes in the separate repository approach. We will get back to that later and show how easy it is in the common repository approach as well.

So, the common repository approach is a clear winner. But what about its folder structure? We gave it plenty of thought, and this is how our Docker repository is laid out (a simplified sketch is shown after the list below):

The folder structure is broadly categorized into 2 parts:

  • Services directory:
    • It contains a directory for each service, each having its own Dockerfile and .dockerignore file.
    • Internally, they inherit from the base images.
  • Reusable base images directory:
    • It contains all the reusable Dockerfiles, categorized broadly by their respective languages, like Node.js, PHP, etc.
    • Dockerfiles containing only the language runtime are placed in the ‘base’ folder.
    • All the extensions, plugins, tools, etc. of the above base images are placed in the same directory, like ‘thrift’ for Node.js.
    • Versions are important, as multiple services may use different versions of the same plugin. For example, one service may require MySQL 5.6 and another 5.7. So, each directory is further nested by version.
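A simplified, illustrative sketch of such a layout (directory and file names are placeholders):

docker-repo/
├── services/
│   ├── service-a/
│   │   ├── Dockerfile
│   │   └── .dockerignore
│   └── service-b/
│       ├── Dockerfile
│       └── .dockerignore
└── base-images/
    ├── node/
    │   ├── base/
    │   │   └── 9.11.2/Dockerfile
    │   └── thrift/
    │       └── 0.10.0/Dockerfile
    └── php/
        └── base/
            └── 7.2/Dockerfile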

Using this folder structure has multiple advantages:

  • All the services and reusable base dockerfiles are segregated.
  • It becomes very clear which Dockerfile is for which service, language, or plugin.
  • Multiple versions can be easily served.

Next, we will discuss the reusable base images concept.

Dockerfile Linter

There are many open-source linters available for Dockerfiles. We found that hadolint meets most of the standards that Docker recommends. So, to lint all the files, we just have to issue a simple command, which can be easily integrated into the githooks.

hadolint **/*Dockerfile

Dockerfile Formatter

We searched for and tried multiple formatters, but none of them worked as per our requirements. We found that dockfmt was close to our requirements, but it also has some issues; for example, it removes all the comments from the Dockerfile. So, we are yet to find a better formatter.

Reusable Docker base images

It’s very common for many services to need the same OS, tools, libraries, etc. For example, all the Node services may need Debian Stretch with particular versions of Node.js and Yarn installed. So, instead of repeating this in every such Dockerfile, we can create reusable, pluggable Docker base images.

Below is the example of a Node.js service which requires:

  • Debian stretch OS
  • Node.js version 9.11.2 + Yarn
  • Apache thrift version 0.10.0

Node.js base image:

FROM debian:stretch-slim

# Install Node 9.11.x
# Defining buildDeps as an argument in alphabetical order for better readability and to avoid duplication.
ARG buildDeps=" \  
  curl \
  g++ \
  make"

# It causes a pipeline to produce a failure return code if any command results in an error.
SHELL ["/bin/bash", "-o", "pipefail", "-c"] 
# hadolint ignore=DL3008,DL3015
RUN apt-get update && apt-get install -y --no-install-recommends $buildDeps \
  # Use --no-install-recommends to avoid installing packages that aren't technically dependencies but are recommended to be installed alongside packages.
  && curl -sL https://deb.nodesource.com/setup_9.x | bash - && apt-get install -y nodejs=9.11.* \
  && npm i -g [email protected] \
  && apt-get clean \ 
  # Remove apt-cache to make the image smaller.
  && rm -rf /var/lib/apt/lists/* 

Let’s say we build this image with the name ‘wingify-node-9.11.2:1.0.5’, where ‘wingify-node-9.11.2’ is the image name and ‘1.0.5’ is the image tag.

Apache thrift base image:

# Default base image
ARG BASE=wingify-node-9.11.2:1.0.5

# hadolint ignore=DL3006
FROM ${BASE}

# Declaring argument to be used in dockerfile to make it reusable.
ARG THRIFT_VERSION=0.10.0 

# Referred from https://github.com/ahawkins/docker-thrift/blob/master/0.10/Dockerfile
# hadolint ignore=DL3008,DL3015
RUN apt-get update \
    && curl -sSL "http://apache.mirrors.spacedump.net/thrift/$THRIFT_VERSION/thrift-$THRIFT_VERSION.tar.gz" -o thrift.tar.gz \
    && mkdir -p /usr/src/thrift \
    && tar zxf thrift.tar.gz -C /usr/src/thrift --strip-components=1 \
    && rm thrift.tar.gz \
    # Clean the apt cache.
    && apt-get clean \
    # Remove apt cache to make the image smaller.
    && rm -rf /var/lib/apt/lists/* 

WORKDIR /usr/src/thrift
RUN ./configure  --without-python --without-cpp \
    && make \
    && make install \
    # Removing the source code after installation.
    && rm -rf /usr/src/thrift

Here, by default, we use the Node image created above as the base. But we can pass any other environment’s base image as an argument to install Thrift on top of it. So, it’s pluggable everywhere.
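For example (the image names here are illustrative), plugging Thrift on top of a different base is just a matter of overriding the BASE build argument:

docker build --build-arg BASE=wingify-php-7.2:1.0.5 -t wingify-php-7.2-thrift-0.10.0:1.0.5 .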

Finally, the actual service can use the above as the base image for its Dockerfile.

Access private repository dependencies

We have multiple services with dependencies that are fetched from private repositories. For example, in our Node service, one of the dependencies is listed in package.json as:

{
  "my-dependency": "git+ssh://[email protected]/link/of/repo:v1.0.0",
}

Normally, we need SSH keys to fetch these dependencies, but a Docker container won’t have them. Below are a few ways of solving this:

  • Option 1: Install dependencies externally (local or Jenkins) and Docker will copy them directly.
    • Advantages:
      • No SSH key required by docker.
    • Disadvantages:
      • Dependency installation won’t be cached automatically, as it happens outside Docker.
      • Some modules like bcrypt have binding issues if not installed directly on the same machine.
  • Option 2: Pass the SSH key as an argument in the Dockerfile, or copy it from the system to the working directory and let the Dockerfile copy it. The Docker container can then install dependencies.
    • Advantages:
      • Caching is achieved.
      • No module binding issues.
    • Disadvantages:
      • SSH key would be exposed in a Docker container if not handled correctly.
      • A single shared SSH key has security issues, and separate keys are difficult to manage.
  • Option 3: Host the private packages globally, like our own private npm registry (in the case of Node.js), and add its host entry on the system. The Docker container can then install dependencies by fetching them from our private npm.
    • Advantages:
      • Caching is achieved.
      • No SSH key required.
    • Disadvantages:
      • One time setup of hosting.
      • We need to publish the private repos each time we create a new tag.

Option 3 proved to be much better in our case, so we moved ahead with it.

Service Dockerfile

The final Dockerfile for a service, implementing all of the above, looks like this:

ARG BASE=wingify-node-9.11.2-thrift-0.10.0:1.0.5

# hadolint ignore=DL3006
FROM ${BASE}

RUN mkdir -p /opt/my-service/
WORKDIR /opt/my-service

# Dependency installation separately for caching
COPY ./package.json ./yarn.lock ./.npmrc ./
RUN yarn install

COPY . .

CMD ["yarn", "start:docker"]

Here, ‘.npmrc’ contains the registry entry pointing to our own private npm. We copy it so that the Docker container can fetch our private packages from it.

Caching

Every time we change our code, we don’t want the Docker container to install dependencies again (unless the dependencies themselves have changed). For this, we divided the ‘COPY’ step in the above Dockerfile into two parts:

# Here we copy package.json, yarn.lock, and .npmrc, and install the dependencies.
# This step will always be cached by Docker unless one of these files changes.
COPY ./package.json ./yarn.lock ./.npmrc ./
RUN yarn install

COPY . .

Doing all this will reduce the Docker image build time to just a few seconds!

Auto-tagging and rollback

Tagging is important for any rollback on production. Fortunately, it’s easy to do in Docker. While building and pushing an image, we can specify the tag version after a colon. We can then use this tag in the Kubernetes YAML file to deploy to the pods.

docker build -t org/my-service .
docker build -t org/my-service:1.2.3 .

docker push org/my-service
docker push org/my-service:1.2.3

This works fine, but it still requires a new tag every time we build a new version of the image. The tag can be passed manually to a job, but what if we had auto-tagging?

First, let’s find out the latest tag. Here is the command to find the latest image tag from GCP:

gcloud container images list-tags image-name --sort-by=~TAGS --limit=1 --format=json

We can use this in a custom Node script that returns the new, incremented version. We just have to pass it the image name and the release type, i.e. major/minor/patch.

// Usage: node file-name image-name patch
const exec = require('child_process').execSync;

const TAG_TYPES = {
  PATCH: 'patch',
  MINOR: 'minor',
  MAJOR: 'major'
};

// Referenced from https://semver.org/
const VERSONING_REGEX = /^(v)?(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$/m;

class Autotag {
  constructor(imageName = '', tagType = TAG_TYPES.PATCH) {
    this._validateParams(imageName, tagType);
    this.imageName = imageName;
    this.tagType = tagType.toLowerCase();
  }

  // Private functions
  _validateParams(imageName, tagType) {
    if (!imageName) {
      throw new Error('Image name is mandatory.');
    }

    if (!Object.values(TAG_TYPES).includes(tagType)) {
      throw new Error(
        `Invalid tag type specified. Possible values are ${Object.values(
          TAG_TYPES
        ).join(', ')}.`
      );
    }
  }

  _fetchTagsFromGCP() {
    return exec(
      `gcloud container images list-tags ${
        this.imageName
      } --sort-by=~TAGS --limit=1 --format=json`
    ).toString();
  }

  // Public functions
  increment() {
    const stringifiedTags = this._fetchTagsFromGCP();

    if (stringifiedTags) {
      try {
        const { tags } = JSON.parse(stringifiedTags)[0];

        for (let i = tags.length - 1; i >= 0; i--) {
          const tag = tags[i];
          if (VERSONING_REGEX.test(tag)) {
            let [
              prefix = '',
              major = 0,
              minor = 0,
              patch = 0
            ] = VERSONING_REGEX.exec(tag).slice(1);

            switch (this.tagType) {
              case TAG_TYPES.PATCH:
                patch++;
                break;
              case TAG_TYPES.MINOR:
                patch = 0;
                minor++;
                break;
              case TAG_TYPES.MAJOR:
                patch = 0;
                minor = 0;
                major++;
                break;
            }

            return `${prefix}${major}.${minor}.${patch}`;
          }
        }
      } catch (e) {}
    }

    // Return default tag if none already exists.
    return '0.0.1';
  }
}

try {
  console.log(new Autotag(...process.argv.slice(2)).increment());
} catch (e) {
  console.log(e.toString());
}

Thanks to Gaurav Nanda for the above script.

Production staged rollout

Our ultimate goal is to migrate everything from the existing setup to GCP with Docker and Kubernetes. Migrating the whole system in one go on production is time-consuming as well as risky.

To avoid this, we are targeting individual services one by one. Initially, a service will run on GCP as well as on the existing servers, with its database pointing to the old setup. We will open it up for a few accounts at the beginning; the rest of the accounts will work as before. This ensures that if any issue comes up in the new setup, we can easily switch back to the old setup while fixing it.

Next steps

  • Integrate health check APIs with Kubernetes.
  • Set up the development environment using Telepresence.
  • Add a service discovery tool like Consul.
  • Add a vault system for secrets.
  • Better logging.
  • Integrate Helm to manage the Kubernetes cluster.
  • Docker image size management.
  • Add support for blue-green deployments.

We may be doing some things in ways that can be improved upon, and there may be better tools that we are yet to explore. We are open to any suggestions that can help us improve what we are already doing and what we will need in the future. This is just a start; we will try to improve with every iteration and solve new challenges.

Thanks to Gaurav Nanda for mentoring and guiding us through everything.

Introduction:

Js13kGames is a JavaScript game development competition that is organized every year from 13th August to 13th September. What makes it stand apart from other game dev competitions is the game size limit of 13 kilobytes. Yes, just 13 KB for everything, including code, images, graphics, and sounds! Moreover, a theme is decided every year, and the game, ideally, should be based on it. This results in a lot of brainstorming and innovative ideas. This year, the theme was ‘offline’.

The competition is organized by Andrzej Mazur, who is also one of the judges. The judges play every submitted game at the end of the competition and give their reviews in terms of what went right and where improvements could be made. Needless to say, there are a lot of prizes, like gadgets, t-shirts, and stickers, to be won every year.

Preparations

The JS13K competition is not new to folks at Wingify; a couple of us have prior experience with it. Kush, Gaurav, and Varun had participated in previous JS13K events. Having experienced and enjoyed the competition first-hand, they felt compelled to get the rest of us on board as well.

About a week into the competition, we all met, and the veterans introduced us to the rules and theme, basic game development techniques, and tools that might come in handy along the way. We were a little short on time, considering that we first had to come up with feasible concepts; and since our primary experience is frontend SPA development, creating these games was going to be unlike any code we write professionally.

Entries

Twisty Polyhedra

Author: Aditya Mishra

The concept behind this one is simple: you have access to Rubik’s cube variants of different sizes and shapes to solve. You’re likely familiar with the standard size-3 Rubik’s cube, but what you may not be aware of is that it’s actually just one member of a huge family of puzzles with rich mathematical structure. This game was built to support at least face-turning tetrahedra and octahedra apart from the standard cubes.

Some of the fun challenges involved with this one were:

  • Composing & rendering the shape
  • Re-orienting & twisting the puzzle according to cursor movements
  • Animating the twists

There was a lot to learn from these challenges, as they involved playing with vectors, coming up with algorithms to generate and render the sliced shapes on a 2D canvas, and inferring the desired action from simple input events.

demo | source

Keep-Alive

Author: Surbhi Mahajan

The idea of this game is inspired by Duet. Although the gameplay is based on the classic game, it offers extended features and new visuals. There are three self-contained levels, each with a unique challenge. The player rotates colored orbs along a circular track, guiding them to avoid incoming obstacles; all the orbs must be kept intact to keep going. The orbs only collide with obstacles of a different color and pass unharmed through the rest.

A few of the interesting challenges with this were:

  • Collision detection & revert effects
  • Special effects for tail & kill animations
  • Dynamic level definitions

Since a lot of these effects were algorithmically generated, the size limit was not a concern for this entry. The primary learning experience here was integrating deterministic dynamic stages and cool effects, and structuring the implementation.

demo | source

Anti_Virus

Author: Punit Gupta

This game is inspired by the classic game ‘Snow Bros’, but with a very different flavor. We all use various offline storage devices to save our precious data, but inevitably, the data sometimes gets corrupted by viruses. The goal here is to go into those devices, kill the viruses, and save the data. The gameplay involves moving around, climbing platforms, freezing opponents, and throwing them onto other enemies.

Some of the major challenges involved with this idea are:

  • Detecting collisions among platforms, walls, opponents, shooters and player.
  • Randomizing enemy movements.
  • Animating when player or enemies are killed.

The physics and special effects were the most fun part of the implementation; squeezing all of this into the required size while keeping the gameplay smooth also involved quite a lot of optimization and polishing.

demo | source

Sum It Up

Author: Hemkaran Raghav

This game is inspired by one of the most popular games of all time, ‘Spider Solitaire’. In this one, you don’t have to stack the cards in increasing order. Instead, the cards carry numbers, and you have to stack identical cards on top of each other, causing them to merge into a new card with double the value. Your goal is to get the highest score possible.

The most fun part of this implementation was creating smooth and beautiful animations.

demo | source

Up & Down

Author: Dinkar Pundir

Inspired by vvvvvv, this game is based on playing with gravity. Apart from the ability to move left/right, you can toggle the direction of the pull: at the press of a button, gravity flips upside-down. This basic idea, combined with obstacles added in creative ways, leads to plenty of possibilities for a platformer.

This involved problems like:

  • Implementing smooth discrete integration that provides a nice balanced difficulty
  • Collision detection that properly counters changing gravity
  • Creating a well-structured design to allow for easy extensions to stage definitions

Although the game was created in a very short amount of time, not only were these problems fun to solve, they also led the way towards a wide variety of techniques related to mathematical ideas and design principles in general.

demo | source

Robo Galactic Shooter

Author: Ashish Bardhan

The idea for this entry was to create a classic 2D shoot ’em up with a nostalgic retro feel. You need to survive a barrage of asteroids for as long as possible; the good thing is that you’re given some solid guns! The robot flies to the right into incoming asteroids of different sizes and velocities. The gameplay involves either dodging them or shooting them until they disintegrate.

Plenty of effort went into the following parts:

  • Creating cool background effects, handling sprites
  • Integrating sound effects
  • Boundless level generation

Considering how many effects & elements were integrated, a lot of optimizations were required to fit it all in 13KB. The game also incorporated sound effects using a micro-library called jsfxr.

demo | source

What we Learned

A lot of what we learned came from implementing animations, physics, collision detection, etc. It’s nice to see how ideas from geometry and basic numerical integration techniques come together to make a functional game. It’s also worth mentioning that, given how complex the implementations of these simple concepts tend to become, an understanding of software design principles is not only a requirement while building these things but also grows really well with such an experience.

We had to create most of the effects and animations on our own. Some of our games also focused on keeping the algorithms fast, for example by managing object lifecycles to limit computations to visible entities, and by using clever little hacks to avoid redundant computations while rendering.

Where third-party libraries were used, we had to make sure they introduced very little overhead. Two of the listed games leveraged Kontra.js, a micro-library for getting up and running quickly without any significant impact on build size; it provides nice features such as sprite management and out-of-the-box collision detection. Galactic Shooter also used a slightly altered version of jsfxr, a lightweight sound generation library.

For the build process, almost all of us followed a different path. For small games, Webpack was suitable for bundling the source. For some larger ones, we wanted to avoid even the small overhead introduced by Webpack, so we used simple Grunt/Gulp tasks to concatenate and minify files. In some cases, we even avoided using Closure Compiler, as arrow functions and classes result in much more concise code. Apart from this, we experimented with various compression tools and techniques for fitting all those JS/HTML/CSS/PNG files within the 13 KB size limit. One way or another, we managed to make it for all of the entries.

Conclusion

Participating in this event required committing a significant amount of personal time, but it proved to be an amazing experience, not only in terms of what we learned but also in how much fun we had implementing these concepts. After all, game development is as fun as programming gets: you create a tiny universe of your own, the laws of physics are what you want them to be, and things evolve how you tell them to. And we certainly managed to pick up some nice ideas along the way.

We would love to hear your feedback or answer any queries you might have; feel free to drop a comment or tweet us at @wingify_engg.

Game on!


For the past few months, we at Wingify have been working on a common platform for our different products, so that things get reused across products and we don’t reinvent the wheel. This also has additional benefits, like enforcing common good practices across products, easier switching between products for developers, and more. As part of the same endeavor, our frontend team has been working hard on a Design System and a frontend boilerplate app on top of it. The boilerplate is something any product at Wingify can simply fork to build a new frontend app, using the reusable components provided by the base Design System. More about the boilerplate and Design System later, but in this post I want to talk specifically about a very important part of our Design System: our CSS.

Issues with current CSS

First, why did we even start looking for a new way to write CSS? Previously, we were using a mix of BEM and some helper classes. Occasional classes which belonged to neither of those two categories could be seen in the code base too! 😅 This approach led to the following issues:

  • Naming classes was always a problem - Often, someone would comment on a pull request that a class name doesn’t make sense and should be changed to something more “meaningful”. Finding “meaningful” names is tough!
  • Unused CSS - Automated tools to detect unused CSS are not very reliable, especially with Single Page Apps. Our CSS kept growing over time, and one main reason was that no one ever removed the unused CSS.
  • Refactoring - With conventional classes, it becomes difficult to refactor with confidence, because the developer cannot be sure that the class they are renaming or removing isn’t used somewhere else they are not aware of.

I have also blogged about these issues in detail in an article here.

Evaluating other approaches

Well before starting this mission, we began evaluating various frameworks for writing CSS. Our evaluation was based on the following parameters:

  • Final output file size
  • Rate of growth of file size over time
  • Unused CSS handling
  • Ease of learning for a new developer
  • Ease of maintenance
  • Documentation (existing or requirement to create one internally)
  • Lintable
  • Themable
  • Ease of refactoring
  • Naming effort involved
  • Critical CSS generation

Yeah, lots of parameters; we evaluated very critically 😀. Also, notice that I have kept the end-user-performance-related parameters on top, as those mattered most to us.

The winner - ACSS

We evaluated lots of well-known frameworks and libraries, like pure BEM, Tachyons, Styled Components, Vue’s scoped CSS, and CSS Modules. But we found that the atomic CSS approach met most of the requirements mentioned above. Also known as the helper/utility-classes approach, atomic CSS requires no naming, documentation is available if we go with a well-known library, and it is themable and lintable. Refactoring is also easier, as all you need to do is remove classes from your HTML and never touch the CSS.

Even among the various atomic CSS libraries available out there, we decided to go with ACSS (I know, the name is a little too generic, as they call themselves Atomic CSS!). We were introduced to ACSS by our resident UX engineer, Jitendra Vyas. Along with him, we discussed many aspects of ACSS with one of its developers, Thierry Koblentz. ACSS was also mentioned by Addy Osmani at Google I/O.

ACSS comes with very strong benefits that no other library had. You don’t write CSS with ACSS; in fact, you don’t even download and include a CSS file. ACSS comes with a tool called Atomizer, which detects the use of ACSS classes in your HTML (or any file) and generates the corresponding CSS for the detected classes. Here is some sample HTML you would write with ACSS:

<button class="Bgc(blue) C(white) P(10px) D(ib) Cur(p) Bgc(red):h">
I am a button
</button>

On top of the usual benefits of the atomic CSS approach, ACSS’s automatic CSS generation means that we never ship a single byte of CSS that we are not using in an app! Only what we use in the HTML lands in the final CSS file. In fact, ACSS generates such a small amount of CSS that it’s practically possible to inline your complete CSS, i.e. your complete CSS can become your critical CSS!

We were freed from writing documentation, as the only thing a developer needs to write ACSS is its awesome, searchable reference. There is also a VS Code extension that removes even the need for the reference. And, of course, we were freed from naming things.

It may seem that a developer would have to write the same set of classes repeatedly to create the same things, but that is not true. ACSS, or any atomic CSS approach, requires a templating/component system where you can reuse a piece of HTML without duplicating it. We use Vue.js to build our small reusable components.
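For example, here is a hypothetical sketch of such a reusable component (using the global Vue 2 API; the component name is illustrative, and the classes are the ones from the button example above):

// Register a small reusable button component so the ACSS class list lives in exactly one place.
Vue.component('primary-button', {
  template:
    '<button class="Bgc(blue) C(white) P(10px) D(ib) Cur(p) Bgc(red):h">' +
    '<slot></slot>' +
    '</button>'
});

Any template can then render <primary-button>I am a button</primary-button> without repeating the class list.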

Of course, there are some cons to ACSS as well. For example, inside a particular component’s HTML, you cannot tell what a tag is about because there are no descriptive class names. This can be somewhat mitigated by using semantic tags. Basically, the pros far outweigh the cons.

The end result

We just finished porting a decent-sized app to our new system and, guess what, our CSS went from 90 KB down to just 8 KB! 😱

That is all for this post! I encourage you to try out ACSS with an open mind and see if it solves your current CSS problems, if any. We are happy to answer any questions you might have about our new approach, the Design System, etc. Do comment on this post or tweet them to our Twitter handle 👉🏼 @wingify_engg.

Bbye!