Rebuilding the Gaming lobbies – part 2

By

Alex Cioflica, Cristian Bote and Tiberiu Krisboi


In the first part, we took you through benchmarking and choosing the right frameworks by using PoCs; now we’ll take a deep dive into our app.

Multi-product, multi-brand app

The big challenge was how to deliver several products for multiple brands from the same codebase, with no overhead or performance penalties when developing new features. An equally important challenge: everything needs to be easy to maintain.

Architecture

At Paddy Power Betfair, we use a microservices-like architecture. Since most of our services aren’t customer-facing, the backend part of our website has to aggregate data from multiple sources, process it and then serve it to the client.

Server-side Rendering (SSR)

We mentioned in the previous article that we use server-side rendering, but what is it exactly?

In the context of an isomorphic application, server-side rendering means running the single-page app on the backend and delivering server-rendered HTML to the client; subsequent navigation on the site is then processed on the client side.

While SSR is a natural fit for static pages, because the pre-rendered HTML can be generated at build time, you would think that for dynamic pages it adds overhead. In practice, it isn’t nearly as scary as it sounds when using a proper framework.

We implemented SSR for a couple of reasons:

  • 🔍 SEO: As mentioned earlier, Google’s bots have some support for single-page applications (with some disadvantages, like deferred indexing), but other search engines don’t
  • 🕹 Fast interaction: We want our customers to be able to interact with the products as quickly as possible

How is a page rendered by our app?

Let’s take you through a real-world example. A user opens a page of our website:

  1. The application handles the page request
  2. The application aggregates data from different sources
  3. It then executes SSR, using the aggregated data as the initial state for the Preact app
  4. The output is composed by combining the SSR HTML with the application state serialized as a JavaScript object. At this point we capture all the lazy-loaded JavaScript files and append them to the HTML output; the same goes for the CSS styles
  5. The response is sent to the browser
  6. The browser renders the page generated by SSR
  7. JavaScript files are loaded just like for a normal single-page application
  8. The application is instantiated with the state sent from SSR
  9. App hydration (see the sketch after this list). Since the application uses the same state in the browser as on the server, the output will be the same; thus Preact will not change the DOM, but it will take control of the page.
  10. From this point on, if the user navigates to another page by clicking a link or button, the app does an AJAX request and renders everything on the client side, in the browser.
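
To make steps 8 and 9 concrete, here is a minimal hydration sketch. It assumes Preact 8, where render accepts the existing server-rendered element as a third argument to merge with, plus a window.__INITIAL_STATE__ global written by the server; the global name and the element id are illustrative, not our actual code.

import { h, render } from "preact";
import App from "./app"; // illustrative path to the root component

// Reuse the state the server serialized into the page. Because the client
// render produces the same output, Preact merges with the existing DOM
// instead of re-creating it.
const initialState = window.__INITIAL_STATE__; // hypothetical global
const root = document.getElementById("app");   // server-rendered markup
render(h(App, { initialState }), root.parentNode, root);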

So, basically, SSR is a middleware that runs a special build of the application optimised for Node.js. We use an undom instance (a minimal, lightweight DOM implementation) to create a node for each request, and when the rendering is done the node is serialized into HTML.
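
As an illustration, a minimal version of such a middleware for Koa could look like the following sketch, using preact-render-to-string; the HTML skeleton, the aggregateData helper and the __INITIAL_STATE__ global are placeholders for this example, not our production code.

import { h } from "preact";
import render from "preact-render-to-string";
import App from "./app"; // illustrative path

export default async function ssr(ctx) {
  // Hypothetical aggregation step (steps 1-2 from the list above).
  const state = await aggregateData(ctx);
  // Render the app to a string with the aggregated data as initial state.
  const html = render(h(App, { initialState: state }));
  // NOTE: a real implementation must escape the serialized state
  // (e.g. "</script>" sequences) before embedding it in the page.
  ctx.type = "html";
  ctx.body = `<!DOCTYPE html>
<html>
  <body>
    <div id="app">${html}</div>
    <script>window.__INITIAL_STATE__ = ${JSON.stringify(state)}</script>
    <script src="/vendor.js"></script>
    <script src="/app.js"></script>
  </body>
</html>`;
}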

Client-side compared to server-side rendering

[Video: the same page rendered client-side vs server-side, side by side]

The tests were done using WebPageTest on an emulated iPhone 8, over a 4G LTE connection, using repeated views to take advantage of the caching mechanisms and the CDNs in place.

We’ve tested the same page with client-side rendering and with server-side rendering. In the video above you can see the 2 methods side-by-side.

As you can see, there were some interesting results: most of the timings were smaller (smaller times are better, marked in green) when using SSR.

As we mentioned before, one aspect we focused on when building the app was minimising the time it takes for the customer to be able to interact with our products. Meanwhile, other less critical features are progressively or lazily loaded in the background. Thus, one of the key measurements was First Interactive. This approach also got us a site that is fully functional even if JavaScript is disabled or slow to execute.

As with everything, there are also some things to look out for when adopting an SSR-with-hydration solution:

  • Time to first byte (TTFB) can be higher for SSR, because of the time to compute, render and then send the HTML from the server (so always use a performant render-to-string or render-to-stream library)
  • The rehydration of the components can add complexity (but this can be tackled by using a good framework)
  • Memory leaks (see below)

A good comparison of some of the rendering techniques can be read in Google’s Web Fundamentals post – Rendering on the Web.

Lessons learned: Memory leaks

In a browser, users may not be affected much by memory leaks, since browser windows and tabs are closed often. However, when running single-page-application code on the server, memory leaks can seriously degrade the performance of the application.

It’s quite easy to produce memory leaks by polluting the global scope with global variables, forgetting to clear timers or forgetting to unbind events. Closures can also easily lead to memory leaks, as they keep access to their outer scope. You can find more details about closure memory leaks here.
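
To illustrate one common case, here is a hedged sketch of a timer leak in a Preact component; the component itself is made up for the example.

import { h, Component } from "preact";

class Ticker extends Component {
  componentDidMount() {
    // The closure passed to setInterval keeps `this` (and everything it
    // references) alive for as long as the interval keeps running.
    this.timer = setInterval(() => this.setState({ now: Date.now() }), 1000);
  }
  componentWillUnmount() {
    // Forgetting this line leaks the interval and the whole component.
    clearInterval(this.timer);
  }
  render(props, state) {
    return <span>{state.now}</span>;
  }
}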

To reproduce the memory leaks we did several load tests using Gatling.

Number of requests handled by the application

The application handled a constant number of requests for a period of time, then suddenly the response rate fell to a quarter of the previous rate. Over the same period, the memory allocation was increasing constantly.

Memory used (MB)

The time interval for the test is 10 minutes, and in less than 5 minutes the application reached 1 GB of memory used. At this point the garbage collector (GC) desperately tries to remove objects from memory, but isn’t able to: the objects appear to still be referenced, so the GC doesn’t clean them up. The application’s performance drops because it no longer has enough memory to process new calls; the number of requests handled decreases as the memory used increases, and at some point the application crashes.

A key takeaway is that if you implement SSR in your app, do a stress test and look out for memory leaks.

Backend

Going back to our app: based on the existing knowledge and expertise in our company, the choice of backend was between Node.js and Java. Aiming for an isomorphic application, we chose Node.js as our server-side solution, which enabled us to use the same programming language, JavaScript, on both sides.

The next decision was which web framework to use. There were already a few projects using Express, and we were happy with it, but we still did some research to see if there were other viable options. Koa turned out to be a lighter, slightly better-performing alternative: exactly what we needed.

So, our backend is a Koa app that aggregates 10+ services based on user information and on the requested route, serves an API and runs the server-side rendering.
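
For a rough idea of the shape of such an app, here is a minimal sketch; the routes, the service calls and the ssr middleware are illustrative, not our actual code.

const Koa = require("koa");
const Router = require("koa-router"); // assumption: any Koa router will do

const app = new Koa();
const router = new Router();

// A hypothetical aggregation endpoint: fan out to several internal
// services, combine the results and serve them as one API response.
router.get("/api/lobby", async ctx => {
  const [games, promotions] = await Promise.all([
    fetchGames(ctx),      // hypothetical service call
    fetchPromotions(ctx), // hypothetical service call
  ]);
  ctx.body = { games, promotions };
});

app.use(router.routes());
app.use(ssr); // anything not matched above falls through to SSR
app.listen(3000);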

What is a component?

A Preact component is basically a representation of a piece of functionality, tied to the vDOM lifecycle. All vDOM implementations have similar lifecycle APIs: mounting on the DOM, removal, rendering, updating and so on. Preact, React and Inferno all share almost identical lifecycles, with some small differences.

A component can be declared as either functional or class-based. A functional component is just a function that returns a bunch of `h` calls, which represent either DOM nodes or other components.

const Foo = props => (
  <div>My Foo component</div>
);

If your component needs to have state, or to react to some lifecycle method, you need to extend the Component class from Preact.

import { Component } from "preact";

class Foo extends Component {
  render() {
    return (
      <div>My class-based component</div>
    );
  }
}

As you can see, either approach is easy to read and understand. That’s a must, as complexity can grow really fast and catch you off guard.

The component structure

We aimed for a project and component structure that would be self-explanatory. It should be obvious why a component is placed in a certain folder, and the naming should reflect its functionality. Naturally, we wanted to reuse components as much as possible and not duplicate the same thing over and over again. So, our approach was:

  • If a component is reused by another one, it should be placed at the root level of our components directory.
  • If it’s not reused, keep it nested under the component that uses it. This made things simpler and clearer when creating a new component or reusing an existing one.
/src
   /components
      /carousel
         /styles
         /tests
         /components
            /arrows
            /slide
         foo.js

There are several ways to structure your components, ranging from hierarchical, as presented above, to atomic. You should choose what suits you the best.

Choosing the state management library

Managing the application state isn’t a trivial task. When rendering the app, a lot of data needs to be displayed to the customers. Part of our background as a team was delivering SPAs with Angular, which means we already had some idea of how to tackle the problem, using services and RxJS observables to handle the state and data layer. But since one of our goals was to keep the JavaScript size as low as possible, paying 34 kB (gzipped) just for Observables was not something we were very fond of.

Then we looked at the popular choice, Redux, which came in at just under 8 kB gzipped. From a size perspective, we still considered it large compared to Preact, which is just 3.5 kB. Trying to implement Redux concepts also added a lot of boilerplate to our code base, which increased the total amount of compiled JavaScript. At that moment in time, this was a deal breaker.

We set about creating our own solution, based on the publish–subscribe pattern, with multiple specific data stores. But everything quickly got out of hand. You had more than one way of getting your data into a component: via storeInstance.getState(), via a subscribe, or by using a StoreProvider. Overall, there wasn’t a clear way of defining your data layer. We had to do something about it: we needed to reduce our boilerplate.
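
For context, a minimal publish–subscribe store of the kind described above might look like the following (a sketch, not our actual implementation):

function createStore(initialState = {}) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    // Publish: merge the partial update and notify every subscriber.
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach(listener => listener(state));
    },
    // Subscribe: register a listener and return an unsubscribe function.
    subscribe(listener) {
      listeners.push(listener);
      return () => listeners.splice(listeners.indexOf(listener), 1);
    },
  };
}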

Then we looked at Redux again. With the lessons learned and an improved data layer in mind, we focused on some technical requirements:

  • Scaling: adding more business logic should be a trivial task
  • Maintainability and testing
  • Ease of use: any developer should have a frictionless experience contributing to the code base

Because of the similarities between Redux and our own createStore, we were able to run Redux in parallel while progressively refactoring the previous code. Our rule of thumb is: if you touch a component that needs data, port it to Redux. This has worked great so far, and we’ve seen tremendous progress.
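
As an example of that "port on touch" rule, a component that needs data gets wired to Redux roughly like this (a sketch using preact-redux’s connect; the state shape and names are illustrative):

import { h } from "preact";
import { connect } from "preact-redux";

// A plain presentational component.
const Balance = ({ amount }) => <span>{amount}</span>;

// mapStateToProps replaces the old storeInstance.getState() + subscribe pair.
export default connect(state => ({ amount: state.wallet.amount }))(Balance);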

A takeaway from this is that sometimes it’s good to use a proven solution and not take risks. That shouldn’t stop you from experimenting, challenging an existing solution and going off the beaten path, as there’s a chance you’ll get really excellent results. But that didn’t happen in our case.

Let’s put the styling on top

Having a performant application also means using styles correctly. You shouldn’t ship styles to the browser that aren’t used by your app, have very complex style computations, or ignore the platforms you are targeting and their limitations (be they device- or browser-specific).

As mentioned before, we have several unique websites per brand, and this can easily become a burden to maintain and keep in sync. To make adding new features easier, we’ve developed Yeoman generators for creating components and even for adding entirely new products.

Our Product Design department built a flexible and scalable Design System for our entire suite of websites. This means that, in most cases, a component just needs a few touches to fit into a new website, such as declaring colours or other box-model properties based on the previous product’s styling.

To define our styles we use Sassy CSS (SCSS), the newer syntax introduced in Sass 3. This helped a lot with structuring the style files and importing reusable variables and mixins.

/styles
   default.scss
   /brandFoo
      default.scss
      productA.scss
      productB.scss
   /brandBaz
      default.scss
      productA.scss
      productC.scss

To put what happens above into words: each component imports the styling for the brand and product it is being built for.

require(`./styles/${brand}/${product}.scss`);

The pattern is quite straightforward and easy to use.

Unit testing

Unit testing is that side of a project that everybody brags about doing, but maybe only half of them actually do it, because of the tooling available and the “ease” of setup.

We settled on Jest, with snapshot testing for our presentation layer. That gave us a really nice way of keeping the output in sync. We’ve set a courageous goal: a minimum 99% coverage threshold for all of our components.

Snapshot testing

Along with testing the functionality, we’ve also implemented snapshot testing. This way, you can keep control of the output of your component. Let’s say your component renders two levels of a JSX tree with some specific props; down the road, you’d like to make sure that those props and that particular structure stay the same. Here’s what snapshot testing looks like:

describe("components/button", () =&gt; {
   it("snapshot", () =&gt; {
     expect(
       shallow(<Button&gt;foo</Button&gt;)
     ).toMatchSnapshot();
  });
});

The shallow method serializes the output of your component, and toMatchSnapshot is the assertion method that evaluates it against the stored snapshot.

A snapshot is a file generated for you by Jest; it’s basically a CommonJS module that exports a key and a value:

// Jest Snapshot v1

exports[`components/button snapshot 1`] = `
preact-render-spy (1 nodes)
-------
<button
  class="button"
>
  foo
</button>
`;

The fact that snapshots are versioned helps a lot with keeping them in sync with code changes.

Whenever something is modified, Jest will alert you that the output no longer matches the snapshot, and you’ll have to act on that:

- Snapshot
+ Received

  preact-render-spy (1 nodes)
  -------
- <button
-   class="button"
- >
-   foo
- </button>
+ <button class="buttonish">foo</button>

Automation testing

The tests are written in Java and run in a Selenium Grid, emulating different platforms. We currently have around 200 tests in place, which generate over 2,000 test scenarios for the different brands, products and devices.

A controversial decision we recently took was to store the automation test project in the same Git repository as the source code. There are quite a few articles out there with pros and cons for this kind of code structuring, but the overall gain was better collaboration between teams (devs and QAs), a tighter community and better-quality code.

The standard practice, once development is complete, is that whenever a merge (pull) request is opened, the corresponding automation test changes should be included in the same request.

Build and deploy

Bundling everything together in today’s world is not that straightforward. If you want to use newer JavaScript APIs, ES6 syntax and other useful patterns, you have to use a transpiler to compile your shiny modern code into a version of JavaScript compatible with all browsers. That code then has to be minified, concatenated, versioned, compressed, and so on.

To deal with this, we opted for Webpack. It has many helpful features, and even the default configuration can help you a lot in achieving a smaller output for your resources. Some of the benefits are:

  • minification (using UglifyJS)
  • dead-code removal
  • code tree-shaking for modules
  • resource loading and optimizations
  • code splitting into chunks (and then lazy-load them after the initial load)
  • a big suite of plugins
  • and much more…

One of the plugins worth mentioning is DefinePlugin, which lets you define build-time variables that are directly mapped to a global variable. In our styles section above, we had a require for the specific styles of the current build; that was achieved using DefinePlugin.

// Inside your component
require(`./styles/${brand}/${product}.scss`);

// Define plugin
const plugins = [
  new webpack.DefinePlugin({
    brand: JSON.stringify(argv.brand), // process argument: --brand "foo"
    product: JSON.stringify(argv.product),
  })
];

Webpack also has some great heuristics when it comes to code-splitting. It uses a dependency graph to figure out the dependencies and, based on these, it can split the code into chunks. This is very useful, as it ensures that for each entry point or chunk you’re loading, the subsequent ones are loaded and ready to be evaluated.

One of the tricky parts of doing server-side rendering is that the only JavaScript files known in the code are the app chunk (with the application logic) and the vendor chunk (with third-party libraries, like Preact). The other chunks (for components) are dynamically generated at build time. To make sure that for each page we load only the necessary JavaScript files, we’ve wrapped our dynamic imports in a reporter. The reporter is basically just a function that accepts a name and an import statement. This way, after each SSR render, we have a pretty good idea of which JavaScript files are needed for the current route. Once the page is requested by the client, Webpack’s job is to lazy-load the rest of the chunks.
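
A reporter of that kind could be as simple as the following sketch; the names and the chunk bookkeeping are illustrative (in the real app the set would be tracked per SSR render):

// Chunks requested while rendering; reset before each SSR pass.
const requestedChunks = new Set();

// Wraps a dynamic import: records the chunk name, then forwards the promise.
function reportImport(name, importPromise) {
  requestedChunks.add(name);
  return importPromise;
}

// Usage inside a component module:
const loadCarousel = () =>
  reportImport(
    "carousel",
    import(/* webpackChunkName: "carousel" */ "./carousel")
  );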

Once we have the list of chunks built for each page, we can use HTTP/2 push. You add the paths of the chunks (JS/CSS files) or other resources (images, fonts, etc.) to the response in Link headers. HTTP/2 push works like this (see the sketch after the list):

  • the server sends the response which contains the Link headers in the first byte
  • these headers will instruct the browser to connect and start loading in parallel the resources
  • at the same time the browser is still downloading the rest of the page.
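
Concretely, emitting the Link headers from Koa for the chunks collected during SSR could look like this minimal sketch; the paths are illustrative, and whether the resources are actually pushed depends on your HTTP/2 server or CDN honouring the preload hints:

// Build one preload hint per chunk the SSR pass reported as needed.
const links = [...requestedChunks].map(
  chunk => `</static/${chunk}.js>; rel=preload; as=script`
);
ctx.set("Link", links.join(", "));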

This method will help a lot with the page timings (both First Contentful Paint and First Interactive).

Overall, Webpack is a great tool to master. It’s not easy to get the perfect configuration for your needs, but the online documentation is there and covers any subject you might struggle with.

That’s all folks!

In these 2 articles we summarised months and months of work. It was a roller coaster ride through decisions, development and issues. In the end we got to see that all the effort mattered and the customers are happy with the new products.

We’re continuously optimising our codebase while adding new features and releasing new products. At the same time, we’re looking into newer technologies, or keeping the ones we use up to date (Preact X is almost a stable release).

References and further read on this subject

  1. Raygun Blog – Koa vs Express in NodeJS: 2018 Edition
  2. Dexecure performance engineering blog – HTTP/2 PUSH vs HTTP Preload
  3. PassPil Project – React, Preact and Inferno quick comparison
  4. Hacker Noon – The 100% correct way to structure a React app (or why there’s no such thing)
  5. Codeburst – Atomic Design with React
  6. The Startup – Advantages of Using a Preprocessor (Sass) in CSS Development
  7. Google Web Fundamentals – Reduce the Scope and Complexity of Style Calculations
  8. Meteor – An interesting kind of JavaScript memory leak
  9. TechBeacon – 6 reasons to co-locate your app and automation code
  10. Why Google Stores Billions of Lines of Code in a Single Repository
  11. Hacker Noon – One vs. many — Why we moved from multiple git repos to a monorepo and how we set it up
  12. DZone DevOps – Should You Adopt a Single Code Repository for All Code?