From Checklists to Outcomes: The Next Meaning of “Reasonable Measures”

As AI‑driven risk engines, intrusive affordability checks and diverging channelisation rates collide, iGaming compliance is being judged less by tools on paper and more by what happens to players and where they gamble.

“Reasonable measures” in iGaming are being reset by three converging forces: what real‑time data makes visible, what societies will tolerate in terms of surveillance, and whether regulation actually keeps players in the licensed system. The next definition will be shaped less by checklists and more by how operators perform against those three tests in practice, and by whether they can absorb the growing operational and technology costs of meeting them without driving players offshore.

A glowing clock overlaid with digital numbers, representing how real‑time data is reshaping what regulators and operators can reasonably be expected to see and do.

Three speeds: industry, research, regulation

Dr Mark D. Griffiths, Distinguished Professor of Behavioural Addiction and Director of the International Gaming Research Unit at Nottingham Trent University, has argued that the online gambling industry will always move faster than both researchers and regulators, which is why robust evidence depends on access to real player‑tracking data from operators. Peer‑reviewed studies can take years from data collection to publication, and translating findings into law or licence conditions takes longer still, so the system naturally runs at three speeds: rapid product changes, slower evidence, and even slower statutory reform. Griffiths also emphasises that closing this gap will require a level of transparency and cooperation between operators, researchers and regulators that goes beyond what is typical today, including more frequent data‑sharing and co‑designed studies rather than isolated projects. The next phase of “reasonable measures” will depend on whether those three clocks can be better aligned through such collaboration and more agile regulatory responses.

Smartphone with Robinhood logo held in front of GameStop logotypes

What counts as “reasonable measures” when real‑time data, AI risk engines and offshore channels make it easy to switch from regulated gambling to gamified trading in a single tap? Image credit: SOPA Images / © Alamy

Real‑time harm: from alerts to audited outcomes

Real‑time behavioural analytics and AI‑driven tools are increasingly marketed as the backbone of next‑generation responsible gambling, shifting player protection from a checkbox to a continuous operational process. Larger groups build risk engines and case‑management workflows in‑house, while accreditation frameworks and audits now assess how operators detect and respond to risky patterns rather than whether policies exist on paper.

Independent research has identified online markers of harm—changes in staking, chasing losses, multi‑product use and late‑night intensity—as signals that can be monitored and acted on before problems fully crystallise, and Griffiths’ work indicates that these patterns are meaningful precursors in the development of gambling problems.
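The markers listed above can be sketched as a simple rule set. This is a toy illustration only, not Mindway AI’s or any operator’s actual model: the session fields, thresholds and flag names are all invented for the example, and real systems score risk continuously rather than with hard cut-offs.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Session:
    """Hypothetical session record; field names are illustrative."""
    start: datetime
    stake_total: float        # total amount staked in the session
    deposits_after_loss: int  # deposits made immediately after a losing bet
    products_used: int        # distinct verticals used (slots, sports, ...)

def risk_flags(sessions, baseline_stake):
    """Return simple markers of harm seen across recent sessions.

    Mirrors the published markers (stake escalation, loss chasing,
    multi-product use, late-night intensity); thresholds are placeholders.
    """
    flags = set()
    for s in sessions:
        if s.stake_total > 2 * baseline_stake:
            flags.add("stake_escalation")
        if s.deposits_after_loss >= 3:
            flags.add("loss_chasing")
        if s.products_used >= 3:
            flags.add("multi_product")
        if 0 <= s.start.hour < 5:
            flags.add("late_night")
    return flags
```

In practice such flags would feed a case-management queue for human review rather than trigger automatic action, in line with the hybrid model discussed below.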

Over the past two decades, a growing body of research has underpinned the design of safer‑gambling tools, so many of today’s “AI” systems build on both earlier and more recent evidence rather than on intuition alone. Against that backdrop, a 2030 view of “reasonable measures” is unlikely to accept static reporting and ad‑hoc reviews; regulators will expect higher‑risk cohorts to be monitored in (near) real time, with documented playbooks and auditable evidence that interventions changed behaviour rather than simply removing customers from one brand.

Mindway AI illustrates how this can work when research and practice intersect. Its systems combine behavioural‑risk models grounded in psychological and clinical studies—including case material from psychologists working directly with gambling addiction—with human assessors at regulated operators who review high‑risk flags and, where appropriate, speak to players. Paula Murphy has stressed that early pattern recognition improves outcomes only when those in‑house teams are trained to understand scores and follow clear protocols, and evaluations suggest that the best results come from this hybrid model, in which algorithms surface risk and people inside the operator make nuanced decisions.

Infrastructure‑level anomaly detection adds a different, more privacy‑sparing layer. In an industry interview with iGaming Review, Rickard Vikström of Internet Vikings described how compliance systems can spot clusters of accounts behaving in synchrony, unusual access routes or networks of IP addresses and devices without intrusive profiling of every individual.
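A minimal sketch of that idea is grouping activity by shared infrastructure fingerprints and flagging fingerprints used by suspiciously many accounts. The field names and threshold here are assumptions for illustration, not a description of Internet Vikings’ systems, which would also weigh timing synchrony, access routes and hosting data.

```python
from collections import defaultdict

def shared_infrastructure_clusters(events, min_accounts=3):
    """Group activity by an (ip, device) fingerprint and return the
    fingerprints shared by at least `min_accounts` distinct accounts.

    Illustrative only: the threshold is a placeholder, and no
    individual-level behavioural profiling is involved.
    """
    accounts_by_fp = defaultdict(set)
    for account_id, ip, device_id in events:
        accounts_by_fp[(ip, device_id)].add(account_id)
    return {fp: accounts for fp, accounts in accounts_by_fp.items()
            if len(accounts) >= min_accounts}
```

Because it inspects only connection metadata rather than play or payment histories, this layer is comparatively privacy-sparing, which is the point Vikström makes.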

At the same time, scepticism about the gap between rhetoric and reality is growing. Media analysis shows much of the responsible‑gambling narrative in mainstream coverage is driven by corporate messaging rather than independent evaluations, and critics question how transparent and independent some industry‑funded responsible‑gambling organisations really are.

Together, these dynamics suggest that “reasonable” real‑time measures will not be defined by the presence of an AI tool or certification badge, but by demonstrable impact—numbers and case records showing that monitored players were contacted, supported and, where necessary, constrained in ways that reduced risk rather than simply improving optics.

Abstract GDPR graphic with padlocks and people icons, highlighting how data‑protection rules shape what counts as “reasonable” financial‑risk and affordability checks. Image credit: Anna Berkut / © Alamy

Financial risk and privacy: proportionate checks in a privacy‑aware society

Affordability and financial‑risk checks are now central to debates about intrusiveness. The UK government’s “High stakes: gambling reform for the digital age” sets out a tiered model where specific loss thresholds trigger escalating checks, from light‑touch screening to more intrusive open‑source and credit‑style assessments, and explicitly acknowledges concerns about proportionality, fairness and data protection.

In parallel, operators and vendors are moving towards data‑hungry responsible‑gambling systems that blend behavioural analytics with open‑banking‑style feeds and automated affordability tools, while warning that these raise serious privacy and oversight questions if poorly governed. In practice, authorities such as the ICO have indicated that affordability‑check processing will usually rely on a mix of legal obligation—where checks are explicitly required by regulation—and legitimate interests, backed by documented proportionality assessments, rather than generic consent or open‑ended data collection.

Swedish survey work offers a useful lens on how such systems might be perceived. The “Svenskarna och internet” series reports that Swedes are among the most digitised populations in the world, yet more than half feel their integrity is violated when personal internet data is collected, and a large majority say people should care about their data even if they have “nothing to hide”. At the same time, around nine in ten think the police should have the right to access private online conversations when crime is suspected, suggesting high acceptance of targeted surveillance in a public‑interest frame but lingering distrust of opaque commercial tracking and profiling. Broader European research points in a similar direction: large majorities report greater awareness of privacy since GDPR, widespread concern that data use could lead to discrimination, and particular unease about how AI may affect data protection, even as many accept some monitoring by intelligence or law‑enforcement agencies.

These attitudes resonate with the technical choices described by Rickard Vikström. Infrastructure‑level monitoring that looks for patterns across accounts, devices and hosting environments can enforce territorial and integrity rules without requiring full financial dossiers for every user, whereas deep affordability profiling and extensive external data pulls may offer stronger individual protection at the price of greater perceived surveillance.

By 2030, “reasonable” financial‑risk measures are likely to mean proportionate, evidence‑based thresholds for checks, clear explanations of what is collected and why, and privacy‑by‑design in how systems are built and governed, rather than indiscriminate data accumulation. Operators that fail to strike that balance may erode trust and add to the list of reasons players cite for choosing less regulated alternatives, even if those alternatives carry higher objective risk.
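The tiered logic behind such a model can be sketched in a few lines. This is an illustration of the escalating-checks idea only: the loss windows and threshold values are placeholders chosen for the example, not the figures consulted on or adopted in the UK.

```python
def required_check_tier(net_loss_30d, net_loss_365d,
                        light=(125, 500), enhanced=(1000, 2000)):
    """Map rolling net-loss figures to an affordability-check tier.

    A sketch of a tiered model: breaching either window's threshold
    escalates the check. All values here are placeholder assumptions.
    """
    if net_loss_30d >= enhanced[0] or net_loss_365d >= enhanced[1]:
        return "enhanced"      # open-source / credit-style assessment
    if net_loss_30d >= light[0] or net_loss_365d >= light[1]:
        return "light_touch"   # frictionless vulnerability screening
    return "none"
```

The design point is that intrusiveness scales with demonstrated loss, so most players never face the deeper checks, which is what “proportionate” is taken to mean in this debate.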

Channelisation and unintended consequences: protecting consumers and keeping them onshore

Channelisation outcomes show whether rules work in the real world. Empirical work on European online gambling markets finds that average channelisation into regulated offers increased between 2015 and 2021, but there is no straightforward link between tax levels and onshore capture; instead, specific combinations of high taxes, tight product rules and strict advertising limits can depress onshore participation in certain verticals. The same research highlights sharp cross‑country differences, with design, enforcement and product attractiveness often outweighing single‑headline variables.

Sweden illustrates the difficulty of hitting the sweet spot. Official estimates suggest online channelisation has hovered in the mid‑80s percentage range, short of the 90% target, even as the regulator and state‑linked incumbents call for tougher action on unlicensed play, including DNS and IP blocking and a broader test that would catch any site accessible to Swedish players. Industry bodies warn that measures such as comprehensive bonus bans risk weakening the appeal of licensed brands without removing offshore options, and point to leakage data to argue that some well‑intentioned interventions have backfired.

The Netherlands provides another cautionary example: post‑re‑regulation, critics argue that a combination of restrictive advertising rules, tax changes and enforcement focus has coincided with growth in the illegal channel, with some analyses suggesting the unlicensed market now rivals or exceeds the licensed one in certain segments. In contrast, Nordic case studies regularly cite Denmark as an example of a regime in which multi‑licence competition, balanced taxation and clear product rules coexist with strong AML and safer‑gambling standards and high onshore capture.

Against that backdrop, estimates commissioned by industry stakeholders underline the scale of the problem regulators are trying to address. Studies for the European Casino Association suggest that unlicensed operators may be capturing around 71% of online gambling gross gaming revenue across the EU—roughly €80.6bn a year—with associated tax losses of about €20bn, while separate work estimates that global crypto‑casino gross gaming revenue has exceeded $80bn, often routed through offshore hubs and accessed via VPNs and stablecoins. These figures are contested and depend on methodological choices, but they frame the stakes: if licensed markets misjudge the balance between consumer protection, product attractiveness and fiscal demands, a large slice of play can move into environments where “reasonable measures” are not applied at all.
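As a sanity check on how those headline figures relate, the implied total and licensed market sizes can be backed out from the quoted unlicensed share and GGR. This is simple arithmetic on the cited estimates, not an independent measurement, and it inherits all of their methodological caveats.

```python
def implied_market_split(unlicensed_ggr_bn, unlicensed_share):
    """Back out the implied total and licensed online GGR (in bn)
    from a reported unlicensed GGR figure and its claimed share."""
    total = unlicensed_ggr_bn / unlicensed_share
    return total, total - unlicensed_ggr_bn

# Using the ECA-commissioned estimates quoted above (EUR bn, ~71% share):
total, licensed = implied_market_split(80.6, 0.71)
# total ~= 113.5, licensed ~= 32.9
```

On those numbers the licensed EU online market would be roughly a third the size of the unlicensed one, which is why the share estimate, rather than the absolute figure, is where most of the contestation lies.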

A hand blocking a row of falling dominoes, representing efforts to halt the chain of events that can move players from licensed operators into unlicensed markets.

The next meaning of “reasonable”

Across these three questions—can you spot and act on risk in time, can you intervene proportionately without breaking trust, and can you protect consumers while keeping most play onshore—the common thread is that “reasonable measures” is evolving from a static checklist into a performance standard grounded in research and enabled by technology.

That standard does not just depend on what data operators can collect or what models they can deploy, but on whether they have the organisational know‑how to interpret new signals, spot meaningful patterns and act on them in a way that regulators—and players—recognise as proportionate and effective.

In real‑time harm, the bar is moving towards evidence‑backed models, hybrid AI‑human workflows within regulated operators, and audited interventions that demonstrate risk was detected early and handled consistently. In financial‑risk management, it is moving towards risk‑based checks that use lawful, clearly explained data flows and build privacy‑by‑design into systems, rather than relying on opaque, open‑ended profiling. In channelisation, it is moving towards regulatory designs that keep legal offers attractive and accessible while still tightening standards where evidence demands it, so that most gambling remains inside environments where those “reasonable measures” actually apply.

EXPLORE ARTICLE SERIES

Redefining Reasonable: Regulatory Transformation in Global iGaming

Article 4 of 4 in a series exploring “European gambling regulation in focus”: From Checklists to Outcomes: The Next Meaning of “Reasonable Measures”
