PPB’s Bug Bounty Journey – Looking back four years on

When Paddy Power Betfair’s (PPB) bug bounty journey began in early 2018, it felt like a natural next step to strengthen the company’s security posture. PPB has a Continuous Integration/Continuous Delivery (CI/CD) development culture with an agile mindset, which means short development cycles and quick release of new features on our web platforms. A common concern with CI/CD in large-scale, complex web products is the possibility of introducing bugs into production before a new feature has had time to mature and be thoroughly assessed.

To us, the best way to address this challenge is to embed security in each stage of the Software Development Life Cycle (SDLC), from the early planning stages, through implementation and release, and eventually to production monitoring of established products. To ensure we were able to scale with our growth, we worked hard to automate these security controls in as many stages as possible.

In the early planning and design stages of a project, PPB’s Security Engagement team works with our security architects, project managers and product owners to establish security requirements and ensure every project meets our security standards. From then on, our Security Engineering team takes over and ensures we work at scale, developing automation solutions that cover the development, release, deployment, and continuous monitoring of our software. Static Application Security Testing (SAST) solutions built into our pipelines enable quick testing and early feedback to developers by scanning and reporting on every build. Robust automation to track changes to our external surface ensures our Dynamic Application Security Testing (DAST) solution covers that entire surface with continuous scans. To cover any gaps that automated tests cannot identify, PPB has an internal Security Testing team and works with external penetration testers to evaluate all new projects and major changes to our existing products.
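To make the pipeline integration more concrete, here is a minimal sketch of the kind of SAST gate described above. It assumes a hypothetical scanner CLI called `sast-scan` that prints JSON findings; it is illustrative only and not our actual pipeline code.

```python
"""Minimal sketch of a pipeline SAST gate (illustrative only, not PPB's setup).

Assumes a hypothetical scanner CLI called `sast-scan` that prints a JSON list of
findings; swap in the tool and output format your pipeline actually uses.
"""
import json
import subprocess
import sys

BLOCKING_SEVERITIES = {"HIGH", "CRITICAL"}  # severities that should fail the build


def run_sast_gate(project_dir: str) -> int:
    # Run the (hypothetical) scanner and capture its JSON report.
    result = subprocess.run(
        ["sast-scan", "--format", "json", project_dir],
        capture_output=True, text=True, check=False,
    )
    findings = json.loads(result.stdout or "[]")

    blocking = [f for f in findings if f.get("severity", "").upper() in BLOCKING_SEVERITIES]
    for f in blocking:
        # Early feedback to developers: print file/line so the CI log is actionable.
        print(f"[SAST] {f['severity']}: {f['rule']} at {f['file']}:{f['line']}")

    # A non-zero exit code makes the CI stage (and therefore the build) fail.
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(run_sast_gate(sys.argv[1] if len(sys.argv) > 1 else "."))
```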

Still, a fundamental imbalance always remains. Security is a cat-and-mouse game. With new vulnerabilities in servers, frameworks, and libraries being identified every day, a security team will always be outnumbered by the sheer number of malicious actors present, day and night, on the Internet. This is where a bug bounty program can really shine: by harnessing the power of countless talented security researchers, it can provide crowdsourced expertise in all niches of web and infrastructure security, as well as tap into the hacker underworld to hopefully give us a small edge, enough to make sure we do not become the mouse.

Starting a Bug Bounty

From the very beginning of PPB’s Bug Bounty program, we were looking to go public and make our entire external surface available for researchers to analyze, but we did not have enough data on bug bounty programs of a similar scale to accurately estimate the impact and effort that would result from a fully open program. Instead, we decided to start small and grow towards that goal, without knowing how long it would take to get there. Finally, in 2020, we met our goal, and PPB’s Bug Bounty program went public.

Before launching PPB’s program, we prepared as best we could. First, we selected a partner for the journey. Up to that point, if a researcher found a vulnerability in one of our products, the only way to report it was by sending a message to a dedicated email inbox that allowed direct interaction with the security team. We knew that going forward this process would not scale. We needed a platform that would allow us to establish clear rules of engagement for researchers and easily track the status of each reported vulnerability, from triage to closure. We also wanted it to support monetary bounties, to make sure our program was attractive to researchers and rewarded the community for its efforts in responsible disclosure.

We considered several established bug bounty platforms carefully, and eventually selected HackerOne (H1) as our partner in this journey. H1 covered our requirements and offered an excellent communication platform to interact directly with researchers, as well as the possibility of running a private program by inviting only carefully chosen researchers to participate. H1 also provided an API to interact with the platform, which our Security Engineering team would leverage to integrate it with our existing tools. Currently we have integrations with Slack, where our bots work 24/7 to raise alerts if something demands immediate attention, and with our Vulnerability Management platform, part of an internally built inventory and security management application we call Surface (https://github.com/surface-security/).
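As an illustration of what such an integration can look like, the sketch below polls the HackerOne reports API and forwards high-impact reports to a Slack incoming webhook. It is not our production bot: the credentials, program handle and webhook URL are placeholders, pagination and error handling are omitted, and the exact response shape may differ between API versions.

```python
"""Simplified sketch of HackerOne-to-Slack glue (not our production bot).

Credentials, program handle and webhook URL are placeholders; pagination and
error handling are omitted for brevity.
"""
import requests

H1_API = "https://api.hackerone.com/v1/reports"
H1_AUTH = ("api-token-identifier", "api-token-value")   # HackerOne API credentials (placeholder)
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # Slack incoming webhook URL (placeholder)
PROGRAM_HANDLE = "example-program"                      # bug bounty program handle (placeholder)


def alert_new_triaged_reports() -> None:
    # Fetch triaged reports for the program via the H1 reports endpoint.
    resp = requests.get(
        H1_API,
        params={"filter[program][]": PROGRAM_HANDLE, "filter[state][]": "triaged"},
        auth=H1_AUTH,
        timeout=30,
    )
    resp.raise_for_status()

    for report in resp.json().get("data", []):
        attrs = report["attributes"]
        # The severity rating lives in the report's relationships; the exact
        # response shape may vary, so we read it defensively.
        severity = (report.get("relationships", {})
                          .get("severity", {})
                          .get("data", {})
                          .get("attributes", {})
                          .get("rating", "unknown"))
        # Only raise an alert for issues that demand immediate attention.
        if severity in ("high", "critical"):
            message = f":rotating_light: {severity.upper()} report #{report['id']}: {attrs['title']}"
            requests.post(SLACK_WEBHOOK, json={"text": message}, timeout=30)


if __name__ == "__main__":
    alert_new_triaged_reports()
```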

HackerOne provided some friendly initial orientation while we established our bug bounty policy. The policy is the public description of a bug bounty program: it establishes the rules of engagement, the bounties on offer, and which sites/applications are fair game and which are out of scope – any vulnerabilities reported on the latter are not eligible for a bounty.

To establish the initial scope, we selected only the most mature components of our external surface. We began our program as private, so we could have some control over the unlikely but possible impact of multiple researchers scanning our surface, as well as over our ability to respond to every hacker who submitted a report. When creating a bug bounty program, it is essential to allocate time to communicate adequately with researchers and with product engineering teams. For researchers, this ensures initial triage is quick and bounties are paid in an adequate timeframe (both values are indicators of the quality of a program), and it leaves enough time to offer clarification or details when the two parties disagree on applicability, severity, or any other contentious topic.

HackerOne uses a signal-to-noise ratio to rank hackers based on how many valid reports they submit overall. In the beginning we could not afford to be distracted by the noise, so we opened our private program to 20 invited researchers, selected from the best on the platform, and we waited.

We did not create a bug bounty program believing we were bulletproof – we knew that, as a security team, we could have blind spots, and that there is brilliant talent out there that can help us find them. Still, when the first reports started to come in, we were both excited and apprehensive.

Our first reports came in on the very day we opened the bug bounty, the 27th of February 2018. Seven valid vulnerabilities were submitted that day, ranging from a no-impact info.php disclosure to a critical issue that, after investigation, we confirmed had never been exploited.

The number of reports quickly dwindled, however, and it became obvious that, to keep researchers’ focus amid so many competing bounty programs from other companies, we needed to occasionally reignite their interest. Initially we did that by inviting new researchers and, later, as the program matured, by gradually increasing the program’s scope until it covered all our externally exposed assets, as well as gradually increasing the value of the bounties we offered.

To draw attention to the program we ran multiple time-limited private events, with good results. In the most recent one, we selected the best hackers from our program and offered them the chance to double the bounty of any bug reported during that time window. Because we operate in a highly regulated area of business, many hackers had asked us to facilitate the KYC process so that they could create and fund accounts to test the actual betting and gaming flows. This is something we cannot do, as it would be against industry regulations, so we offered our top researchers a compromise. For the duration of the special event, we provided this limited set of researchers with test accounts created specifically for the purpose. The accounts had a €10 balance, enough to run some tests on the major flows of our sites, but they were severely limited: no funds could be added or withdrawn, and the accounts were flagged as test accounts in our systems. This event proved very fruitful, causing a small spike in submissions, most of them valid. Some issues had minimal impact, but one researcher submitted an interesting race condition vulnerability that resulted in PPB’s highest-paid bounty to date.

Going public

In early 2020, we had over 700 invited researchers and all applications in our main domains were in scope, so we decided it was time to take the next step, and in May of that year, our Bug Bounty program went public.  

Going public was a mixed experience. Our number of submitted reports spiked, going from around 50 submissions in the first quarter of 2020 to over 550 in the second. However, while up to that moment around 50% of submissions were valid, in the months after going public only about one tenth of the submitted issues were valid.

The graphic below shows the drastic spike in submissions in Q2 2020 and their eventual stabilization in the last quarter of the year, returning to levels similar to those we had seen before going public.

This huge spike in reports was entirely predictable, so from the moment we went public we opted in to the HackerOne triage service, and this was essential in keeping our “In triage” numbers within a manageable limit. After going public there was a noticeable decrease in the average quality of reports and in the average level of technical knowledge of the researchers we were attracting. Nevertheless, several new issues were found within the same scope, and we remain convinced that this was a valuable step in the evolution of our program.

The process

Looking back four years on, the experience has been incredibly positive. We met brilliant hackers, we found and got rid of multiple bugs and more than a handful of weak spots, and – while we gradually opened our scope and eventually went public – we significantly improved our vulnerability management process to track the work that was being generated.

Not everything was perfect. When the program opened, we knew we had a lot of legacy and third-party components under the PPB brand. Initially we excluded those from scope, as we knew they were likely to be the most fragile, but also the least important to the core of our business. Issues in legacy code produced by third parties were occasionally particularly difficult to handle, as some companies no longer maintained the vulnerable products. Some components were due to be decommissioned, and low-severity vulnerabilities in them were therefore not fixed, which led to multiple duplicate reports of those issues.

Duplicate reports understandably cause a lot of frustration for researchers in bug bounty programs. Since only the first report is paid, researchers who submit subsequent reports are not compensated for their time and effort. Adding to the problem, while High or Critical issues were treated as incidents and addressed immediately, fixes for low-severity issues were being assigned a low priority in the development teams’ backlogs. To avoid having researchers waste time and give up on our program, we temporarily removed those sites from the scope, but it was obvious that such an approach was self-sabotaging and impossible to maintain in the long run. It is here that support from upper management becomes essential, as a successful and popular bug bounty program requires that the mitigation of security issues that do not reach incident level be treated as a high priority by the product teams. Through these initial stages, we found that the fastest way to reduce the negative reputational impact of these issues is to be transparent and open with the researchers, in order to build rapport and hopefully achieve a long-term and fruitful relationship.

Meanwhile, we were constantly working on improvements to our vulnerability management processes. We simplified severity assessment by ensuring that the severity rating used in the program is the same as the one used in our internal tools and in all communications from the team. Our SOC team, always available, raises incidents when a High or Critical vulnerability is identified, and the issue is addressed by the on-call security and development teams and solved in a matter of hours. Medium and Low severities are grouped with same-severity security issues identified through other sources in a gamified system we call the Security Score. In this system, all security issues are scored according to severity, and different areas of the business compete for the best score. The system gives HackerOne issues a bigger weight, and it has proved an incentive for teams to quickly address the often easy-to-solve but frequently deprioritized low-severity issues, leading to a shorter “Average time to resolution” (another H1 program quality indicator) and a decrease in the number of duplicate reports.
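To illustrate the idea behind the Security Score, here is a toy scoring function; the point values and the HackerOne multiplier are invented for the example and do not reflect the weights we actually use.

```python
"""Toy illustration of the Security Score idea; weights and multiplier are
invented for the example and are not the values we actually use."""
from collections import defaultdict

SEVERITY_POINTS = {"low": 1, "medium": 3, "high": 7, "critical": 15}  # hypothetical weights
BUG_BOUNTY_MULTIPLIER = 2  # HackerOne-sourced issues weigh more (illustrative value)


def security_scores(open_issues):
    """Each open issue is a dict: {"team": ..., "severity": ..., "source": ...}.
    Lower scores are better, so teams compete to drive theirs down by fixing issues."""
    scores = defaultdict(int)
    for issue in open_issues:
        points = SEVERITY_POINTS[issue["severity"]]
        if issue["source"] == "hackerone":
            points *= BUG_BOUNTY_MULTIPLIER
        scores[issue["team"]] += points
    return dict(scores)


# Example: two open scanner lows versus one bug bounty medium.
print(security_scores([
    {"team": "payments", "severity": "low", "source": "scanner"},
    {"team": "payments", "severity": "low", "source": "scanner"},
    {"team": "sportsbook", "severity": "medium", "source": "hackerone"},
]))
```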

During this period, we also built a tool called Retester, in which we create a small Proof of Concept (POC) for each Bug Bounty vulnerability. The tool runs the POC daily and identifies changes in status, from Not fixed to Fixed and vice versa, by matching server replies against expected values. This often lets us know that a vulnerability was fixed before the responsible team notifies us of the fix, but most importantly, it raises a warning if, for any reason (e.g. a rollback of a deployment or an accidental change to a component’s code), the vulnerability is reintroduced in the application.
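The sketch below captures the core of the Retester idea: replay a stored POC and compare the server’s reply against an expected marker to decide whether the status changed. The POC definition, URL and marker shown are hypothetical, and the real tool does considerably more (scheduling, alerting, and support for many request types).

```python
"""Minimal sketch of the Retester idea: replay a proof of concept and compare the
server's reply against an expected marker. The POC shown is hypothetical."""
import requests

# Hypothetical POC: if the marker still appears in the response, the bug is not fixed.
POC = {
    "id": "BB-0001",
    "request": {"method": "GET", "url": "https://example.com/info.php"},
    "vulnerable_marker": "phpinfo()",
    "last_status": "Not fixed",
}


def retest(poc: dict) -> str:
    # Replay the stored request and check the reply for the vulnerability marker.
    resp = requests.request(poc["request"]["method"], poc["request"]["url"], timeout=30)
    current = "Not fixed" if poc["vulnerable_marker"] in resp.text else "Fixed"
    if current != poc["last_status"]:
        # In the real tool this status change would raise an alert, e.g. a regression
        # warning when a rollback reintroduces a previously fixed vulnerability.
        print(f"[Retester] {poc['id']} changed: {poc['last_status']} -> {current}")
    return current


if __name__ == "__main__":
    POC["last_status"] = retest(POC)
```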

The merger

In 2021, when SBG joined PPB to form the UK and Ireland (UKI) Flutter division, we joined efforts to understand how each team managed vulnerabilities, both from an internal management process perspective and when handling reports from external researchers. SBG had started their bug bounty program on HackerOne in 2016, approximately two years before PPB. They too had had a positive experience with the platform and shared the belief that bug bounties are a valuable contribution to improving a company’s security posture.

Looking at both programs, we saw that they had similar rules of engagement, even though SBG’s program was still private.

At the time, SBG was running their program with 83 invited researchers, doing their own triage, and receiving around one valid report per month. PPB was already using the H1 triage team and was receiving around 3 valid reports monthly.

Together we concluded there were mutual benefits in merging our programs into a single Flutter UKI one. SBG would go public too, benefiting from a much larger pool of researchers, and would gain the H1 triage service that PPB was already using, reducing their workload. For PPB, a merger would reignite attention on the program, as the combined surface of the new Flutter UKI program effectively worked as a significant scope increase for the hackers already working on PPB’s program.

Conclusions

When considering whether a bug bounty program is an adequate security solution for a company, several elements have to be factored in.

The first is financial capacity. Without a financial incentive, a bug bounty program’s only advantage is the creation of a channel for responsible vulnerability disclosure, which hackers are not necessarily incentivized to use. Effectively attracting quality researchers requires offering monetary bounties in line with what other companies of comparable size and online exposure offer, so before starting a bug bounty program a company must establish whether the budget it is willing to invest is enough to reap the benefits of this highly competitive market. As an example, aggregating SBG’s and PPB’s results, the Flutter UKI Bug Bounty program has paid over $270,000 in bounties, with an average of $400 per bounty. Currently, we offer bounties of up to $3,000 per vulnerability, and we constantly re-evaluate our bounties to ensure we offer fair and attractive compensation to researchers.

The second element to consider is the availability of qualified personnel. Successfully running a bug bounty program involves a sizable number of staff invested in the program itself: to triage reported issues, to communicate those issues to relevant technical and non-technical stakeholders (including third parties), and to monitor the evolution of vulnerability reports and fixes. Once a vulnerability has been passed on to the product teams, they need to be fully committed and supported by leadership to prioritize the mitigation work required, and to allocate developers with the technical ability to understand and fix the issue in a timeframe appropriate to its severity. Using HackerOne triage should be considered to allow large programs to remain viable without increasing allocated staff. However, it is important to note that without an established vulnerability management process and technically knowledgeable personnel, the program is highly likely to lead to overworked teams and unsatisfied security researchers. As a last resort, a company can opt to pause its bug bounty program while these issues are addressed, to avoid reputational damage to the program and to the company itself.

The third thing we believe is essential to consider is where the bug bounty program will fit in a company’s security toolkit. By itself, a bug bounty program is not a sufficient security control. At Flutter we look at our combined bounty programs as a tool that covers the small gaps between existing security controls. We find it especially relevant in the SDLC phases that are regularly covered by SAST and penetration testing, but also in the monitoring of our infrastructure and our online footprint in general. In that context it has proven to be one of the best information security tools in our arsenal. Our bug bounty has helped us detect unexpected vulnerabilities: from a zero-day Citrix RCE vulnerability (CVE-2019-19781) to instances of legacy shadow IT practices or accidental misconfigurations, all the way up to complex chained vulnerabilities that are outside the scope of what automated tools can identify.

Finally, we cannot fail to mention the researchers themselves. Over the life of our program, we worked with 263 researchers on valid submissions and with many others on out-of-scope or non-applicable issues. We had the chance to work with some of the best-reputed bounty hunters on HackerOne. Shout out to our ten most prolific hackers: damian89, derision, fritzo, nahamsec, patrick, smiegles, tomnonnom, uberpkr, xnutronex, and zseano.

Operating a bug bounty is a balancing act: getting a return on investment and effectively identifying and eliminating vulnerabilities, while also making sure that the researchers who work on the program have an enjoyable experience and remain open to working with us in the future. This happens most obviously through the bounty reward itself, but we found that establishing an active, collaborative relationship between both parties is mutually rewarding and a chance for learning and growth.
