Performance matters, and Angular.js developers know this especially well. A large number of watchers in a digest cycle can quickly become a bottleneck, and Angular recommends keeping fewer than 2,000 of them in your application.

We are proud to announce q-directives, a brand new, fast directive system for Angular.js that takes watcher optimization to a whole new level. It is the result of several jsPerf tests and Chrome Timeline runs.

Q-Directives on GitHub

Motivation

VWO is a single-page application made entirely in Angular.js. When designing a detailed reporting system for campaigns in Angular.js, we ran into trouble rendering large amounts of data using Angular directives. On one of the report pages, the application had registered 15,000+ watchers, largely due to the way ng-repeat works.

With q-directives and a revamped directive system, the number of watchers for a q-repeat directive (replacement for the ng-repeat directive) was brought down to just 1. So whenever the list changes, only one watcher gets fired.

Performance Benchmarks

The stats below are taken from Chrome (version 37) Timeline runs for the following use case:

A table containing 216 rows repeated by q-repeat. Each row has about 10 columns containing 50+ Angular directives each (Original). The optimized version has those Angular directives replaced with q-directives, and ng-repeat replaced by q-repeat.

Data was collected over 5 samples for both the Original and Optimized setups.

Initial table render

  • Original: (graph)
  • Optimized: (graph)
  • Optimized (+ disabling ngAnimate): (graph)

Sorting the table

  • Original: (graph)
  • Optimized: (graph)


Documentation

Head over to this link for usage documentation and the API reference.

Contributing

If you are interested in contributing to the project, we would love to hear from you. Just fork the repository and submit a pull request.


Elasticsearch is essentially a distributed search engine, but there is more than one example of companies and projects using Elasticsearch for analytics instead of search. We, at Wingify, had similar requirements when we decided to make our analytics more powerful to empower the customers of our product, Visual Website Optimizer (VWO). This blog post is about how we used Elasticsearch to make VWO's user tracking a lot more powerful than it was before.

The Problem

For context, VWO is a tool that makes A/B testing of websites and mobile apps so simple that no engineering intervention is needed to run new A/B testing campaigns. Marketers and UI/UX designers do A/B testing to improve online conversions and sales. VWO helps them perform these A/B tests with almost no engineering knowledge.

Since VWO is at the center of optimizing websites and mobile apps, this makes user tracking important for our product - our users make use of the data we collect to understand how their users (different segments of users) behave and make optimization decisions accordingly. For example, in an A/B test campaign with three variations, variation 2 might be winning for all the goals but for all the users coming from India, variation 3 might be winning for all or some of the goals. It should be possible for our customers to generate custom segmented reports and observe these different behaviours.

So let's summarize how a campaign and its reporting should work:

  • A VWO customer may create multiple campaigns. Each campaign has more than one variation (variations are variants of web pages or iOS apps with UI changes) that our customer wants to A/B test against real traffic.
  • Every campaign has more than one goal (goals are events that you want to track, such as visiting a particular page, clicking a DOM element, submitting a form, triggering a custom event with JavaScript, etc.) which our customer wants us to track.
  • Our JavaScript library tracks how real visitors trigger goals (events) per variation and sends this data to our data collection end-points.
  • Our data backend stores every visit and conversion for all the defined goals per variation. This is stored on a day-wise basis.
  • When the campaign’s report is accessed, the day-wise visitor and goal conversion data is used in the statistics that go behind generating the report.
  • Reports are generated considering behaviour of all the users who became a part of the campaign. However, our customers should have the flexibility to segment reports on the basis of parameters like location, browser, operating system, time range, query parameters, traffic type, etc.

In the prehistoric times

We used to store only counters in our database (we use MySQL), i.e., for each goal per variation we stored the number of visitors and conversions. Here is some sample data:

+------------+---------------+-----------+---------+-----------------+------+-------------+
| account_id | experiment_id | variation | goal_id | event_date      | hits | conversions |
+------------+---------------+-----------+---------+-----------------+------+-------------+
|          1 |           198 | 2         |       1 | 2011-02-19      |   15 |          12 |
|          1 |           198 | 1         |       1 | 2011-02-19      |   10 |           2 |
|          1 |           198 | 2         |       1 | 2011-02-20      |    6 |           2 |
|          1 |           198 | 1         |       1 | 2011-02-20      |   13 |           8 |
|          1 |           198 | 1         |       2 | 2011-02-21      |    7 |           0 |
|          1 |           198 | 1         |       1 | 2011-02-21      |    7 |           7 |
|          1 |           198 | 2         |       2 | 2011-02-21      |    8 |           0 |
|          1 |           198 | 2         |       1 | 2011-02-21      |    8 |           8 |
|          1 |           198 | 2         |       2 | 2011-02-22      |    6 |           0 |
|          1 |           198 | 1         |       1 | 2011-02-22      |    8 |           8 |
+------------+---------------+-----------+---------+-----------------+------+-------------+

We also support revenue tracking for a goal. There is a different table for revenue tracking, which looks something like this:

+------------+---------------+-----------+---------+-----------------+---------+
| account_id | experiment_id | variation | goal_id | event_date      | revenue |
+------------+---------------+-----------+---------+-----------------+---------+
|          1 |           198 | 2         |       1 | 2011-02-19      |   32.43 |
|          1 |           198 | 1         |       1 | 2011-02-19      |   34.35 |
|          1 |           198 | 1         |       1 | 2011-02-19      |    6.13 |
|          1 |           198 | 2         |       1 | 2011-02-19      |   21.93 |
|          1 |           198 | 2         |       1 | 2011-02-20      |   83.36 |
|          1 |           198 | 2         |       1 | 2011-02-20      |   72.65 |
|          1 |           198 | 1         |       1 | 2011-02-20      |   56.14 |
|          1 |           198 | 1         |       1 | 2011-02-20      |   87.12 |
|          1 |           198 | 1         |       1 | 2011-02-21      |   78.25 |
|          1 |           198 | 1         |       1 | 2011-02-21      |   88.36 |
+------------+---------------+-----------+---------+-----------------+---------+

So when our customers wanted to view a report, our application's backend would run a few queries to generate aggregated metrics like total visitors per goal per variation, total conversions per goal per variation, etc., which could be taken care of using MySQL's built-in functions, and then do some statistics at the application level to decide the winning variations per goal.

Notice that in the first table, where we store hits (visitors) and conversions, we store total counters of these two metrics per goal per variation per day. In the revenue table, we store every individual revenue per goal per variation with the exact date it occurred on. We need these separately because we need to calculate the sum of squares of every revenue generated, which is used in the statistics. I am not going to delve into the statistics side of things because that is out of the scope of this article.

This worked pretty well for us for a while. It was all very simple, and we had to deal with aggregated data most of the time, except in the case of revenue, where we had to fetch every row of revenue for a particular campaign. At the application level, it was essentially a matter of firing a few MySQL queries that would give us the aggregated and day-wise data, and then using that data to statistically find the winning variations per goal.

But this setup had a major drawback. Our customers were restricted to the views of reports we exposed to them. It was not possible to drill down and understand how different segments of users behave, as the complete picture may not say it all about particular segments. For example, in an A/B test campaign with three variations, variation 2 might be winning for all the goals, but for all the users coming from India, variation 3 might be winning for all or some of the goals. Finding this out was only possible by running another campaign targeted to users from India, on the basis of a hunch, to see if the results would differ. And many times the results would not differ, and our customers would lose visitors from their visitor quota.

Furthermore, our data storage had a few other problems: there was no fine-grained control over date and time ranges (it was all day-wise), and we stored all the counters according to our customers' timezone (set at the time of account creation), which meant that changing the timezone later was possible but data collected earlier would still be shown according to the previously selected timezone. These were some major drawbacks of our way of storing visitor and conversion data.

New Age Reporting

We knew that our existing MySQL-based setup was not perfect, but more importantly we realized that it did not help our customers. We wanted to make things simpler for our customers so that:

  • they could easily find important segments of users that behave differently and run targeted campaigns for them if necessary.
  • they have finer control over date and time, so that they can see reports at different granularities like months, days, hours, minutes, etc.
  • everything is stored in UTC, so that timezone changes can be handled at the application level.

Looking at our application requirements, we realized that we could not work with just aggregated data any more. We needed to start storing individual visitors' data and their corresponding conversions to achieve flexibility and put the power of slicing and dicing the data in our customers' hands.

We are also a pretty small team, which means we wanted fewer headaches around ops and maintaining the entire system in production. We wanted things to be simple and as self-managed as possible.

Our specific requirements were:

  • Allow storage of individual visitor data with a lot of properties for performing segmentation.
  • Allow filtering on all the stored fields for performing segmentation.
  • Allow full text search on a few fields.
  • Capable of storing events for the lifetime of a customer account. This means that we cannot delete visitor data as long as our customer is with us.
  • Getting consumable data out should be fast, or let's say not terribly slow. We are okay with an average of 2-3 seconds to start with.
  • Easy ops:
    • Fault tolerant system. Failing nodes should not bring the service down.
    • Scalable to handle our growing traffic, storage and other requirements.

We knew that Hadoop is the de-facto system in the Big Data universe, but the Hadoop ecosystem is so vast that getting started with it is not easy. There are tons of different tools in the Hadoop ecosystem, and just selecting the right tools for your use case may take a significant amount of research time, leaving the implementation time aside. Also, running a Hadoop cluster is no piece of cake. There are so many moving parts that you are not completely aware of when you start. And performing upgrades of systems that have more systems running alongside them will always be problematic. Further, tuning all these systems to deliver acceptable performance also seemed like a daunting task for a team as small as ours, with no prior experience with such systems.

On top of the problems we had heard about Hadoop from friends working with it and from various blogs and websites, the task of implementing the infrastructure requirements for Hadoop, building an implementation, managing it in production and then repeating the cycle seemed daunting for a team of two engineers.

We knew that life would be much easier if we kept things simple, and we started looking at other options.

Elasticsearch to the rescue

Having worked with Elasticsearch before on a smaller project, and remembering Shay's talk from Berlin Buzzwords where he mentioned that Elasticsearch was also being used for analytics, we started looking at Elasticsearch to solve our problems.

Elasticsearch supports filtering, which we could use to filter visitors and their conversions on the basis of the many properties we wanted to collect for every visitor. Filtering is fast in Elasticsearch because you can have indexes on every field if you want, and since Elasticsearch uses Lucene under the hood, we were confident about its indexing capabilities. Elasticsearch supports full-text search out of the box. This fits well with our basic application requirements. On top of this, Elasticsearch supported faceting (when we were evaluating it, the aggregations framework was not there yet), which we could exploit for analytics. That means we don't even have to get all the data out of Elasticsearch to our application layer; Elasticsearch is capable of giving us an aggregated view of the data we store.

This was just amazing for us. We were able to build a PoC within two weeks. The next couple of months were spent on understanding Elasticsearch better, optimizing our implementation, testing Elasticsearch against production load and tuning it for the same.

In the meantime, Elasticsearch released 1.0.0 with the aggregations framework, and we quickly moved from using facets (see Faceted Search) to aggregations. Aggregations proved to be very useful with revenue goals, as we could just ask Elasticsearch to give us the sum of squares of individual revenues without pulling the individual revenues out of Elasticsearch.
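
For illustration, a request of roughly this shape does the trick (a sketch only, not our exact query; the revenue field name is hypothetical, while the index, doc type and experiment values are taken from the sample document further below). Elasticsearch's extended_stats aggregation returns the sum, sum of squares, variance and standard deviation in one go; the body below would be sent to /february-2015/123/_search:

{
   "size": 0,
   "query": {
      "term": { "experiment": 313 }
   },
   "aggs": {
      "revenue_stats": {
         "extended_stats": { "field": "revenue" }
      }
   }
}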

As pointed out earlier, we need to track individual users. We do this by creating a document for each unique visitor per account per campaign in Elasticsearch. This document stores the user's metadata, data for segmentation, and goal conversion tracking data. A typical visitor document looks like this:

{
   "_index": "february-2015",
   "_type": "123",
   "_id": "D2E0A04858025DFE23928BC1F70D2156_123_313",
   "_score": 1,
   "_source": {
      "query_params": [
         {
            "val": "val1",
            "param": "param1"
         },
         {
            "val": "val2",
            "param": "param2"
         }
      ],
      "browser_string": "Chrome 40.0.2214",
      "ip": "8.8.8.0",
      "screen_colors": "24",
      "browser_version": "40.0.2214",
      "session": 1,
      "device_type": "Desktop",
      "document_encoding": "UTF-8",
      "variation_goals_facet_term": "c1_g1",
      "ts": 1424348107,
      "hour_of_day": 12,
      "os_version": "",
      "experiment": 313,
      "user_time": "2015-02-19T12:15:07.271000",
      "direct_traffic": false,
      "variation": "1",
      "ua": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36",
      "search_traffic": false,
      "social_referrer": "youtube",
      "returning_visitor": false,
      "hit_time": "2015-02-19T12:15:07",
      "user_language": "en-us",
      "device": "Other",
      "active_goals": [
         1
      ],
      "account": 196,
      "url": "https://vwo.com/lp/ab-testing-tool/?gclid=CPiZ7JT-7cMCFfDKtAodPUwAhQ",
      "country": "United Kingdom",
      "day_of_week": "Thursday",
      "converted_goals": [
         1
      ],
      "social_traffic": true,
      "converted_goals_info": [
         {
            "id": 1,
            "facet_term": "1_1",
            "conversion_time": "2015-02-19T12:15:54"
         }
      ],
      "referrer": "https://www.youtube.com/watch?v=EM-5IxL4HwQ",
      "browser": "Chrome",
      "os": "Windows 7",
      "email_traffic": false
   }
}

_id is the UUID of the visitor. Most of the other fields hold information extracted from the IP address, the User Agent, the URL and the referring URL.

Most of the fields are plain fields with their types correctly set. Indexes are maintained on all of them so that visitor documents can be filtered by the values in these fields.
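
For illustration, a fragment of such a mapping might look like the following (a sketch inferred from the sample document above, not our actual mapping; the doc type 123 and the exact field types are assumptions):

{
   "123": {
      "properties": {
         "browser": { "type": "string", "index": "not_analyzed" },
         "country": { "type": "string", "index": "not_analyzed" },
         "ts": { "type": "long" },
         "hit_time": { "type": "date" },
         "social_traffic": { "type": "boolean" },
         "variation": { "type": "string", "index": "not_analyzed" }
      }
   }
}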

But there are a few fields that are interesting:

  • query_params
  • converted_goals_info
  • converted_goals_info.facet_term
  • variation_goals_facet_term

Let’s look at each of them one-by-one.

  • query_params is an array of objects for storing query parameters and their respective values. This field is of type nested because our customers may want to find all visitors (and their conversions) who visited pages with certain query parameters. Consider a scenario where you want to find all visitor documents where the query parameter param1 has the value val2. A simple bool must query with term queries would return the above document if query_params were not nested, because it would find one of the two query_params.param values equal to param1 and one of the two query_params.val values equal to val2, even though we know that param1 never had val2 as its value. This happens because each object in the query_params array is not treated as an individual component of the document; nested types solve this problem (a sketch of such a query follows this list). Read more about nested documents and relations in Elasticsearch in this blog post.
  • converted_goals_info is also an array of objects, storing information about individual goal conversions. Here we store the goal_id of the converted goal, the time of conversion as a DateTime field, and another field that we will discuss shortly. This field is also of nested type, for the same reason as query_params.
  • converted_goals_info.facet_term and variation_goals_facet_term need to be discussed together because their values are constructed in a similar way. They don't hold any new information as such. In the beginning of the post, we saw how we used to store aggregated visitor and conversion counts per goal per variation per day. We still need that data out of Elasticsearch in a similar shape for our statistics. The day-wise part gets solved by using day-wise buckets in the aggregations framework. The next problem is getting visitor counts per variation per goal. In MySQL terms, we would want to run a GROUP BY query on the variation and goal_id columns. In Elasticsearch, we can do something similar by using a Terms Aggregation with scripts. The problem with this approach is that if you have a large number of documents, your script gets evaluated on all of them, and Elasticsearch is not really a script execution engine (no matter which scripting plugin you use). What you can do instead is push the result of the script into the document at indexing time and then simply run a Terms Aggregation on it (see the aggregation sketch after this list). We saw a massive performance boost from this hack.
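
To make the above concrete, here are two minimal sketches (illustrative only, not our production queries; the field names come from the sample document above). The first finds visitors where the query parameter param1 had the value val2, using a nested query so that both term queries must match within the same query_params object:

{
   "query": {
      "nested": {
         "path": "query_params",
         "query": {
            "bool": {
               "must": [
                  { "term": { "query_params.param": "param1" } },
                  { "term": { "query_params.val": "val2" } }
               ]
            }
         }
      }
   }
}

The second buckets visitors day-wise and then groups them by the precomputed variation_goals_facet_term value, which stands in for a GROUP BY on variation and goal:

{
   "size": 0,
   "aggs": {
      "per_day": {
         "date_histogram": { "field": "hit_time", "interval": "day" },
         "aggs": {
            "per_variation_goal": {
               "terms": { "field": "variation_goals_facet_term" }
            }
         }
      }
   }
}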

Every document gets saved under the doc_type for the account that the campaign belongs to, i.e., every account on VWO has a separate doc type.

Performance

From a performance point of view, Elasticsearch has very fast indexing and querying capabilities. It is a distributed system: you can deploy a cluster of nodes in production that stores indexes in a distributed, fault-tolerant way to give you performance benefits. Increase the number of replicas per shard and you can scale reads and queries; this can be done even after creating an index. Elasticsearch does not allow changing the number of shards of an existing index, though. But there is a sweet workaround for that: just create a new index with more shards and use aliases, and you can now scale indexing as well.
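
A minimal sketch of that workaround (the index names, alias name and shard count here are hypothetical; the create-index and alias APIs themselves are standard Elasticsearch). First, create the bigger index, e.g. with PUT /visitors-v2:

{
   "settings": {
      "number_of_shards": 10,
      "number_of_replicas": 1
   }
}

Then, after re-indexing, atomically switch the alias with POST /_aliases so that readers and writers move to the new index:

{
   "actions": [
      { "remove": { "index": "visitors-v1", "alias": "visitors" } },
      { "add": { "index": "visitors-v2", "alias": "visitors" } }
   ]
}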

Sharding

From our experience working with large data sets that need to be queried on an ad-hoc basis with low-latency requirements, and from our learning from Shay's talks (1, 2, 3), we understood that a data storage system meant to store a lot of data will scale well for reads and querying if you can shard your data according to the variable that determines the growth of that data. For example, if you are using a database for storing machine logs, you should probably shard your data according to time, because you would want to query the most recent data, and if you had to do that across all the data you ever collected, your old data would only become a performance bottleneck. So a possible sharding strategy could be sharding data by month-year.

Our requirement was similar. We get visitor data which we could easily shard on a monthly basis. And since this data keeps growing, we can just add new indexes every month and place the new data in them. However, the index a visitor document goes into is not determined by the timestamp of the visit but by the date of creation of the campaign. Why? Our customers view campaign reports, i.e., when a campaign report is opened, we want to fetch data for that campaign only. So it makes sense to have all the data for a campaign reside in one index, because we wouldn't want to look into multiple indexes to generate the report of a single campaign. If we had decided to put visitor documents in different indexes depending upon the time of visit, we would have faced the following problems:

  • A campaign may run for more than a month, so visitor documents for a campaign may be spread across more than one index, and we would have no way of knowing which ones without separately keeping track of which indexes hold visitors for a given campaign. This would be painful.
  • Since visitors also convert goals and we store conversion data in visitor documents, it would be very difficult to find which index a visitor document lives in when we need to add conversion tracking data to it.

These problems get solved when we restrict all visitor data for a given campaign to a single index. So for account_id 123, which has two campaigns - campaign 1 (created in January 2015) and campaign 2 (created in February 2015) - the visitor documents for the two will be created in the indexes for January 2015 and February 2015 respectively.

Another big advantage of this is that we can adjust the number of shards every month. So if we see a trend of more visitors getting tracked month after month, the next month we can create a new index with more shards than the previous month's index.

Routing

Since documents are stored in a particular shard of an index, Elasticsearch needs to decide which shard to put each document in. Elasticsearch uses a hashing algorithm for shard selection, and by default it hashes the document's ID to determine which shard the document goes into. This is called routing a document to a shard. This may work fine in some cases, but the drawback of the default routing strategy is felt when you have a large number of shards and have to serve a lot of queries: Elasticsearch now needs to search every shard in an index for documents matching a given query, wait for the results, aggregate them and then return the final result. So for a given query, all shards get busy.

This can be controlled by using a better routing strategy. In our case, we generate reports for a campaign of a given account. Ideally, one account's report generation should not limit another account's. So instead of going with the default routing strategy, we decided to route documents on the basis of account_id. Now, when a campaign report is generated for a given account, the query hits only a single shard, leaving all other shards available to serve other queries and also freeing up CPU resources. After moving to this routing strategy, we saw a significant reduction in CPU usage in our cluster.
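
As an illustration (the index, doc type, account and experiment values come from the sample document above; this is not our exact reporting query), routing by account_id simply means passing the same routing value when indexing a document and when searching. A report query then looks something like POST /february-2015/123/_search?routing=123 with a body such as:

{
   "query": {
      "term": { "experiment": 313 }
   }
}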

Operations

From an operations and management point of view, Elasticsearch is fault tolerant: indexes can be sharded, replicated and distributed across a cluster. Elasticsearch distributes shards and their replicas on different nodes in the cluster, so that if a node fails, Elasticsearch promotes replicas to primary shards and moves shards and replicas around to rebalance the cluster. What is really amazing is that Elasticsearch also gives control over the placement of shards in a cluster, so that it is easy to separate hot data from cold (historic) data. We have not needed this feature yet, but it is good to know that we can use it if historic data ever becomes a performance problem. Chances are that it will, but probably much later.

Drawbacks

Although Elasticsearch made it really easy for us to push out something like this (and remember, we had no experience building something like this before), and we love Elasticsearch for that, we did find a few things that we think limit us.

  • The facet-term hack for avoiding scripts works great, but it is also limiting if you want to add new features to your application that rely on values that were not computed at indexing time. This means that you have to re-index all your data to support such a new feature, or provide the feature only on new data.
  • The lack of JOINs becomes limiting. As of now, we push the conversion data into the visitor document, but it would have been ideal if we could independently index conversion data in a separate index or doc type.

We don't know how to solve these problems yet, or whether the Elasticsearch team has any plans to bring in something that fixes them. It would open Elasticsearch up to a lot more possibilities if JOINs were possible. But we also understand that it's not a simple problem to solve, and that Lucene and Elasticsearch were not built with these use cases in mind. Nevertheless, we hope to see these things improve in the future, especially because a lot of companies are using Elasticsearch for analytics as well.

Conclusion

Elasticsearch has been great for us, and it proves that, depending upon your requirements, you don't always need Hadoop to build analytics. The amazing thing is that Elasticsearch scales really well when you are limited by resources - horizontal scaling is extremely simple. But whether it will work for you depends entirely on your requirements.

Elasticsearch already works with Hadoop, and this integration is being developed further to expand the use cases it can support. This gives us a lot of confidence: as we add more features to VWO's user tracking in the future, we know we will not be limited by our decision to use Elasticsearch.


Meta Refresh is an event organised by HasGeek that focuses on design, user experience and the front-end web. The current iteration of Meta Refresh (2015) will be held in Bangalore on April 16-17.

Wingify will be hosting the Delhi run-up event for Meta Refresh. The session will focus on how designers, engineers, optimization specialists and businesses can use data (statistics) to drive design decisions and thereby optimize their websites and web applications. The premise is that the data speaks for itself at any point of time, providing validations or rejections of your hypotheses. A data-centric approach to web design serves the same cause. The design and layout of any website is based on certain intuitions of the designer, and many a time there is a conflict of interest between the designer and the business team. How about tweaking the website to bridge the gap between design and business?

If this is a topic that interests you, we encourage you to join us for the run-up event at our office on March 21, starting at 10am. It would be a great opportunity for you to:

  1. Speak - If you have a similar topic in mind, or even a different one that goes with the theme of Meta Refresh 2015 (web in your pockets), propose a session at the talk funnel.
  2. Learn - RSVP for the event at the same link and be present for the event to learn from the community.
  3. Interact - Mingle and chat with us and other attendees about design, usability, or anything at all.

We look forward to seeing you at the event!

Event Time: March 21, 10am - 3pm

Location

Wingify Software Pvt. Ltd.
14th Floor, KLJ Tower North,
Netaji Subhash Place,
Pitam Pura,
Delhi - 110034

Google Maps: http://bit.ly/1AmaGYh

Get In Touch

You can mail us at engineering@wingify.com or give us a call at +91-9654585776.


At Wingify, we believe in open source and actively seek opportunities to give back to the community. We make use of a lot of open source technologies and libraries in our day-to-day work. And to share the love back, we have open sourced quite a few things in the past.

Pravendra Singh, a hacker, recently collected and visualized data about the open source contributions of some of the leading startups in India.

Something really cool that emerged from his infographics was that Wingify leads other organizations in terms of the most starred repository - our Please.js. Moreover, three more of our repositories made it to the top 10!

There are more such cool infographics in his blog post - Open Source Presence Infographic of Indian Startups.

We keep doing a lot of cool open source stuff, and if you are into such things, don't forget to follow us on GitHub and on Twitter.


We, at Wingify, have been writing e2e test cases for our A/B testing app for the past 5 months using Protractor. Writing e2e scripts is easy, but making them work 90% of the time on all browsers makes you go nuts! Sometimes I feel that the browser is moody, but of course, we have to deal with all its moods and make sure that the test cases are robust.

Getting your e2e tests to run smoothly every time comes with experience. This post describes what I have learned from testing our app.

An Overview of Protractor

Protractor is a specially designed wrapper around WebDriverJS to automate e2e testing for Angular apps. The following figure gives an overview of the architecture involved in automation and testing:

Setting up Protractor and add-ons

For basic setup, go through the Protractor official documentation.

Once the setup is done, you are ready to write and run e2e scripts. Next, you need a report of failed and passed test cases along with screenshots. protractor-html-screenshot-reporter, an npm module, provides you with an HTML report of the test cases along with screenshots. Set it up as described in its npm documentation.

A test scenario and its test data follow a one-to-many relationship, so we can't hardcode the test data in the code. For example, the same login test script can be used to test the login of different types of users. To make life easy, it is better to use .xls files (Microsoft Excel, OpenOffice, etc.) and import data dynamically into the script at execution time. The xlsjs npm module lets us fetch data from an .xls file and use it in the script. Follow these steps to set it up:

  • Install xlsjs
npm install xlsjs
  • Define a JavaScript utility function as :
var path = require('path');

cellFromXLS = function (cellId) {
    'use strict';
    //Define sheetNumber
    var sheetNumber = 0;
    //Define the file path (dirPath is assumed to point to the directory containing the data file)
    var fileNamePath = path.join(dirPath, 'data1.xls');
    //Load the xlsjs module
    var XLS = require('xlsjs');
    //Read the workbook and return the value of the requested cell
    var workbook = XLS.readFile(fileNamePath);
    var sheetNamelist = workbook.SheetNames;
    var value = workbook.Sheets[sheetNamelist[sheetNumber]][cellId].v;
    return value;
};
  • Call function as:
var email = cellFromXLS('B1');

Test both Angular.js and non-Angular.js based pages

Our use case involves using Protractor for an Angular.js based app, but it works pretty well for non-Angular.js pages as well. Simply set the following flag to true and access the webdriver instance using browser.driver instead of element, as shown below:

beforeEach(function() {
    browser.ignoreSynchronization = true;
});

For instance, the following code for an Angular page:

element(by.css('#elementid')).click();

would be written as follows for a non-Angular page:

browser.driver.findElement(by.css('#elementid')).click();

Sounds cool? Now let's dig deeper into the Protractor world.

A piece of protractor code

Before we start with the advanced functions, let's have a look at a simple login test case, where we verify that the user is redirected to a welcome page after login.

//Jasmine describe statement: describes the test
describe('APP LOGIN::', function() {
    //beforeEach: this piece of code executes before each it statement
    beforeEach(function() {
        var ptor = protractor.getInstance();
        ptor.get('https://app.vwo.com');
    });
    //Jasmine it statement: what "it" will do
    it('Verify that the user is logged in', function() {
        //Delete all cookies
        browser.driver.manage().deleteAllCookies();
        //Enter username
        element.all(by.model('username')).get(0).sendKeys('abc@wingify.com');
        //Enter password
        element(by.model('password')).sendKeys('test');
        //Click the Submit button
        element(by.css('.login-form button[type="submit"]')).click();
        //Wait for the current URL to change to welcome
        browser.driver.wait(function() {
            return browser.driver.getCurrentUrl().then(function(url) {
                return (/welcome/).test(url);
            });
        });
        //Jasmine expect statement: compare the actual and expected values
        expect(browser.getCurrentUrl()).toEqual('https://app.vwo.com/#/welcome');
    });
    /* Write other it blocks */
});

Let’s study the code:

describe, it and expect are Jasmine framework methods that let you write tests easily. Read more about them here.

The beforeEach function in the code above executes before every it block; however, you might not want to execute the code in beforeEach for all it blocks.

Tip: To control its execution you can use a flag variable as shown in the code below.

//use the pageLoadedStatus flag
var pageLoadedStatus = false;
beforeEach(function() {
    if (!pageLoadedStatus) {
        //browser.ignoreSynchronization = true;
        var ptor = protractor.getInstance();
        ptor.get('https://app.vwo.com');
        pageLoadedStatus = true;
    }
});

Multiple ways to select elements:

by.css, by.model, by.repeater, by.id, by.binding, by.xpath

Interacting with the DOM:

  • element: returns a single element
  • element.all: returns a collection of elements. Use get(index), first(), and last() functions to get a single element out of the collection.

Other awesome functions

  • filter: Similar to the get(index), first() and last() functions, filter works on a collection of elements, but it returns the elements that satisfy a specified condition. This is useful when there is a dynamic/long list of elements sharing the same selector path and you need to pick an element using its text or any other unique property.

Let’s take this example of clicking on a date from a calendar.

All the date elements have the same selector. Therefore, define a filter function as:

clickDateByText = function (tileText) {
    'use strict';
    //Select all date elements and apply filter function
    element.all(by.css('.dates_selector')).filter(function (elem) {
	//Return the element or elements
        return elem.getText().then(function (text) {
	    //Match the text
            return text === tileText;
        });
    }).then(function (filteredElements) {
	//filteredElements is the list of filtered elements
        filteredElements[0].click();
    });
};

Now, to click on date 17, simply call the function with the date passed as a string (getText() resolves to a string and the filter uses strict equality):

clickDateByText('17');
  • each: Use it when the same action has to be taken on all elements having a common selector. For example: clearing the list of input fields in a signup form.
element.all(by.css('form>input')).each(function (inputs) {
    inputs.clear();
});
  • map: Mapping a collection of elements into an array without Protractor's map function involves a lot of code to deal with all the promises one by one. In contrast, using the map function for the same purpose is a piece of cake. The map function iterates through each element found with the locator and then resolves all the promises to return a promise of an array of values. For example, to get the text of all elements (with ng-repeat = 'option in Options') in an array, write the code as:
var optionTexts = element.all(by.repeater('option in Options')).map(function (Options) {
    return Options.getText();
});
optionTexts.then(function (array) {
    console.log(array);
});

Tips and tricks

  • Manage browser logs: There will always be certain scenarios that are not covered by e2e scripts. Therefore, it is a smart move to always check the browser console for errors to catch any unexpected issue in the app. The following piece of code keeps a check on the browser logs and fails the test case if there are any errors:
afterEach(function () {
	browser.manage().logs().get('browser').then(function (browserLog) {
		expect(browserLog.length).toEqual(0);
		if (browserLog.length) {
			console.error('log: ' + JSON.stringify(browserLog));
		}
	});
});
  • Combine element statements to move around the DOM: XPath provides an excellent way to move up and down the DOM.
// use '..' to select parent of an element
element(by.css('input')).element(by.xpath('..')); // Resulting element will be the parent of input
// use 'following-sibling' to select the sibling
element(by.css('input')).element(by.xpath('following-sibling::span'))

Common use case: Error messages are often displayed as a sibling of input or submit elements. Therefore, instead of using a different selector path for the error message, XPath can be used to pick the sibling.

To verify the error message “Invalid URL”, simply write the assertion as:

var urlInput = element(by.model('Url'));
urlInput.sendKeys('http://').then(function () {
    expect(urlInput.element(by.xpath('following-sibling::span')).getText()).toEqual('Invalid URL');
});
  • Never use Protractor element statements inside a loop: The simple reason is that the WebDriverJS (Protractor) API is asynchronous. Element statements return a promise, and that promise remains unresolved while the code below the statements continues to execute. This leads to unpredictable results. Hence, it is advisable to use recursive functions instead of loops (see the sketch at the end of this section).

  • Debug the tests using elementexplorer.js: elementexplorer.js lets you test the page interactively. You will find this JS file in the node_modules/protractor/bin directory. Start the Selenium server and run the command:

node elementexplorer https://app.vwo.com

The browser will load the URL and you will see a > prompt. Use the browser, element and protractor variables to interact with the page.

Note: Make sure that the developer tools are closed while running commands at the elementexplorer.js prompt, otherwise you will face an unexpected error such as “TypeError: Cannot read property ‘click’ of null”.
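
As promised above, here is a minimal sketch of using recursion instead of a loop (a hypothetical helper, not from our test suite), so that each action starts only after the previous promise has resolved:

//Click every element in a collection one by one, recursively
clickAllByIndex = function (elements, index) {
    'use strict';
    return elements.count().then(function (count) {
        //Stop once every element has been clicked
        if (index >= count) {
            return;
        }
        return elements.get(index).click().then(function () {
            //Recurse only after the previous click has resolved
            return clickAllByIndex(elements, index + 1);
        });
    });
};

//Usage: click all elements matching a (hypothetical) selector
clickAllByIndex(element.all(by.css('.checkbox')), 0);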

Maintaining and reusing test cases

Let's admit it: e2e test cases are not easy to maintain and update. You have to organize the tests in a way that they can be edited easily. Let's have a glance at the page-objects, common-modules and tests approach that we follow at Wingify, using Node.js require and exports.

  • page-objects: Page objects are a commonly used practice across the industry for writing e2e tests. They enable you to write clean tests by listing all the information about the elements of a page in a page-object file. This means that you only need to change the page-object file in case of any change in the app's templates.

Our application has more than 50 screens. Therefore, we list all the page objects, i.e., the DOM elements of each screen, in a separate JS file. Take a look at the login screen page-object file:

/*File Name : loginPage.js*/
var loginPage = function () {
    'use strict';
    this.userName = element.all(by.model('username')).get(0);
    this.password = element(by.model('password'));
    this.submitButton = element(by.css('.login-form button[type="submit"]'));
    //******************** functions *******************
    this.setUserName = function (username) {
        this.userName.clear();
        this.userName.sendKeys(username);
    };
    this.setPassword = function (password) {
        this.password.clear();
        this.password.sendKeys(password);
    };
    this.clickSubmit = function () {
        this.submitButton.click();
    };
};
module.exports = {
    log: new loginPage()
};
  • common-modules: The idea is to divide the entire e2e scenario into small reusable functions, in such a way that these functions can be used in other e2e scenarios as well. These reusable functions can be grouped into different files for maintainability. The login and logout module is used in many e2e scenarios, so both can be clubbed together in a file as shown below:
/*File Name : loginOut.js*/
var loginPage = require('./loginPage.js'),
    userName = 'test@wingify.com',
    pass = '12345';

exports.login = function () {
    //delete all cookies
    browser.driver.manage().deleteAllCookies();
    loginPage.log.setUserName(userName);
    loginPage.log.setPassword(pass);
    loginPage.log.clickSubmit();
    browser.driver.wait(function () {
        return browser.driver.getCurrentUrl().then(function (url) {
            return (/welcome/).test(url);
        });
    });
    browser.manage().getCookie('login').then(function (cookie) {
        expect(cookie.value).toBeDefined();
        expect(cookie.value).not.toEqual('');
    });
};

exports.logout = function () {
    //logout script
};
  • e2e-scripts: Include the common-module functions to write the complete e2e script, as shown below:
/*File Name : CreateNewUserE2E.js*/
var loginMod = require('./loginOut.js');
describe('Create a new user in the account and verify', function () {
    /*Load Test Url */
    'use strict';
    it('Verify login', function () {
        loginMod.login();
    });
    /* Rest of the modules to verify user creation */
    it('Verify logout', function () {
        loginMod.logout();
    });
});

To conclude

  • Take an extra step to write test cases in a way that they can be edited and maintained with ease.
  • Create a modular approach keeping scalability in mind.
  • Dive deep into the Protractor world, use Protractor's awesome functions and have fun!!

Hope this post was a good enough reference to help you write end-to-end tests in a better way. If anything is unclear, or you have any questions, let us know in the comments.