Artificial Intelligence and the Risks to European Antitrust Enforcement
AI challenges EU competition law by enabling opaque collusion, reinforcing dominance, complicating enforcement, and exposing limits in traditional antitrust frameworks.
Competition law, also known as antitrust law, focuses on consumer welfare and fair market economics. It does so largely by preventing collusive behaviour, abuse of dominance and the emergence of unnatural monopolies. However, competition law must now contend with a technology able to alter markets at unprecedented scale and speed.[1]
Artificial intelligence (AI) is rapidly reshaping modern markets through automated decision-making, large-scale data analysis and adaptive commercial strategies. While these technologies promise efficiency and innovation, they also pose new risks for competition. In Europe, regulators and scholars increasingly warn that AI may facilitate anticompetitive behaviour, reinforce market dominance and bypass existing enforcement tools. Developed for an era of human-driven decision-making, EU competition law now faces the challenge of addressing algorithmic systems that can produce harmful market outcomes without explicit human intent.[2]
This paper seeks solely to explore the risks that AI poses to European Antitrust Enforcement. As such, the argument that AI could increase competition – by granting consumers access to more information, or allowing enforcers to sift through data more easily – will not be assessed. Furthermore, this paper does not aim to tackle the effects of the EU’s AI Act, Digital Markets Act or General Data Protection Regulation. These subjects are large enough to warrant their own papers.
A very brief summary of the pillars of the EU’s antitrust regime is worth outlining at this point to better explain the risks that AI poses. Article 101 of the Treaty on the Functioning of the European Union (TFEU) bans agreements between independent firms that restrict competition, including both explicit cartels and coordination to limit innovation or exclude new market participants. Article 102 TFEU governs abuse of dominance. Dominance is defined as economic strength allowing a firm to act independently of rivals or consumers. It is not dominance itself that is banned but its abuse.
Indeed, because AI makes decisions autonomously, is capable of self-learning and processes data at vast speeds, it challenges the current regulatory framework, which was developed for a slower era.[3] The risks of unlawful algorithmic use (unilateral and collective) and of market power consolidation, alongside ineffective merger control, have both increased greatly.[4]
We begin by examining unilateral, illegal use of AI. One of the most common forms is algorithmic price discrimination, where AI-powered pricing tools analyse extensive consumer data to estimate individual willingness to pay and set personalised prices. Such strategies can increase economic efficiency by offering lower prices to price-sensitive consumers.[5] Indeed, the UK Office of Fair Trading has concluded that it remains unclear whether personalised pricing is generally harmful or beneficial to consumers.[6] However, such tactics raise concerns when implemented by dominant firms. By charging each consumer the maximum price they are willing to pay, firms may capture the entire consumer surplus, raising questions about exploitative abuse under Article 102 TFEU.[7] Personalised pricing also relies heavily on online tracking, profiling and targeting practices that collect and analyse users’ digital behaviour to adjust prices dynamically.[8] These practices can exploit vulnerable consumers and reinforce social inequalities.[9] They may also threaten group privacy, as aggregated data can be used to make decisions about categories of individuals sharing similar characteristics or behaviours.[10] Moreover, establishing such abuse requires a high evidentiary threshold. Following the Court of Justice of the European Union’s judgment in MEO – Serviços de Comunicações e Multimédia v Autoridade da Concorrência (2018; C-525/16), competition authorities must demonstrate that the pricing practice produces, or is capable of producing, a competitive disadvantage and lacks objective justification.[11] This makes enforcement against AI risks more acute, for algorithmic market manipulation often does not leave behind the same sort of evidence as human-led conduct, as will be explored further below.
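To make the mechanism concrete, the following is a deliberately simplified sketch. All profile signals, weights and prices are hypothetical and not drawn from any cited source; real systems rely on far richer behavioural data. It shows how a personalised-pricing algorithm might quote just below each consumer’s estimated willingness to pay, capturing most of the surplus.

```python
# Illustrative toy model of algorithmic price discrimination.
# All names, signals and numbers are hypothetical.

def estimate_willingness_to_pay(profile: dict) -> float:
    """Crude stand-in for an ML model scoring a consumer profile."""
    base = 10.0
    if profile.get("premium_device"):
        base += 5.0                              # wealth proxy
    base += 2.0 * profile.get("recent_searches", 0)  # urgency signal
    return base

def personalised_price(profile: dict, cost: float) -> float:
    """Quote just below estimated willingness to pay, never below cost."""
    wtp = estimate_willingness_to_pay(profile)
    return max(cost, 0.95 * wtp)

# Two consumers, same product, different prices:
price_a = personalised_price({"premium_device": True, "recent_searches": 3}, cost=8.0)
price_b = personalised_price({"premium_device": False, "recent_searches": 0}, cost=8.0)
```

The point of the sketch is that the price difference arises purely from profiling, not from any cost difference, which is precisely what raises the Article 102 TFEU concern for a dominant firm.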
A related form of unilateral behaviour is AI-driven predatory pricing. AI systems enable firms to analyse market conditions and competitor behaviour in real time, allowing them to identify consumers likely to switch to competitors and offer them below-cost prices. This makes predatory pricing strategies more sustainable and harder to detect. Traditional legal tests, most importantly the AKZO framework derived from AKZO Chemie BV v Commission (1991; C-62/86), which presumes abuse when prices fall below average variable cost, may be less effective in digital markets where marginal costs are minimal and AI-driven algorithmic pricing strategies evolve rapidly. Despite these concerns, there is currently no EU case law directly addressing predatory pricing implemented through AI systems.[12]
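The AKZO cost benchmarks reduce to a simple comparison, and a schematic version makes clear why they bite poorly in digital markets. The sketch below uses hypothetical figures and deliberately compresses the legal test into two thresholds; real cases turn on detailed cost accounting and evidence of intent.

```python
# Schematic version of the AKZO cost test (C-62/86). A deliberate
# simplification: the real legal analysis is far more fact-sensitive.

def akzo_presumption(price: float, avc: float, atc: float) -> str:
    """Classify a price against average variable cost (AVC) and
    average total cost (ATC) per the AKZO framework."""
    if price < avc:
        return "presumed abusive"
    if price < atc:
        return "abusive only if intent to eliminate a competitor is shown"
    return "not abusive under AKZO"

# In digital markets AVC is often near zero, so even a token price
# escapes the automatic presumption and intent must be proven:
verdict = akzo_presumption(price=0.01, avc=0.0, atc=5.0)
```

With an AVC of effectively zero, the first (automatic) limb of the test almost never triggers, which is the structural weakness the paragraph above describes.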
The final form of unilateral use of AI we examine is self-preferencing. This occurs where a dominant digital platform uses algorithms to promote its own products, distorting competition by leveraging dominance in one market to limit rivals in complementary or adjacent markets.[13] Several key EU enforcement cases illustrate the risks. In Google and Alphabet v Commission (2022; T-604/18), concerning the European Commission’s €4.34 billion fine, Google was found to have limited competitors’ access to search data, thereby reinforcing its dominant position.[14] In the same vein, in Google Search (Shopping) v Commission (2024; C‑48/22 P) it was found that Google had manipulated search results to favour its own comparison-shopping service.[15] Self-preferencing is not limited to online activities but can span into physical retail or blend the two. This was illustrated by the Amazon Marketplace (2022; AT.40462) and Amazon Buy Box (2022; AT.40703) cases, in which the Commission found that Amazon had anticompetitively favoured its own retail operations and certain sellers on its platform. Rather than imposing a fine as it had on Google, the Commission accepted legally binding behavioural commitments from Amazon to undo the anticompetitive conduct. These cases demonstrate that AI-powered algorithmic systems can be used to reinforce market dominance.
We now turn to collective use of AI for anticompetitive practices. One of the key issues antitrust enforcers face is the lack of evidence of collusion and collective market manipulation when AI is used for such purposes. Furthermore, there is “growing experimental evidence that an algorithm can be designed to collude tacitly”.[16] There are four key ways in which AI can be used to such ends: 1) monitoring an existing cartel, 2) facilitating entry into a new cartel, 3) facilitating entry into a new cartel without direct contact, and 4) facilitating tacit collusion.[17]
A brief summary of these is as follows: 1) cartel members can use AI to verify that each of them is honouring their illegal, collusive agreements; 2) AI makes creating a cartel easier by processing data and speeding up agreements on prices and consumer allocation; 3) existing cartel members may offer another party an AI tool used by the cartel in order to integrate that party into it, or a third party may be interposed to raise the evidentiary burden for enforcers; and 4) parties or their AI systems reach an unspoken arrangement to collude on prices or consumer allocation without direct communication or agreement.
The difficulty of ascribing blame in all these scenarios, but especially 3) and 4), has led enforcers in both Europe and the US to focus on identifying the human will and responsibility behind creating algorithms and the collective decision to use them in ways that lead to anticompetitive outcomes. They have adopted a “compliance by design” approach, meaning parties are responsible for designing and overseeing their algorithms to ensure they do not lead to anticompetitive practices.[18] In the EU the prosecutorial scope is wide and includes parties who are only partly informed of the original collective action but who later use AI to similar effect. These may be treated as full cartel members unless they can prove their ignorance of the link between the AI and the collective anticompetitive action. However, when an independent party unilaterally adopts parallel behaviour, piggybacking off a collective action, its behaviour is not caught by Article 101 TFEU and cannot be prosecuted. Such enforcement depends on the evidentiary burden required to prove an agreement to coordinate anticompetitive action; this makes scenarios 3) and 4) outlined above hard to police, creating an enforcement gap for AI to exploit.[19]
To expand on this point, it should be noted that, in certain conditions, independent algorithms can analyse market signals, themselves often generated by other algorithms, and together converge on stable pricing patterns that mirror collusive results. This is particularly true of systems designed to optimise pricing, as these will often interact with rivals’ algorithms and may create collusive behaviours unintended by their creators. As these results can occur without direct communication, see 4) above, differentiating between illegal collusion and lawful parallel behaviour is increasingly difficult.[20] Furthermore, as noted above, AI-based collusion does not leave the conventional markers of conspiracy. Policymakers and scholars therefore caution that such anticompetitive behaviour could occur at a far greater scale than before, undermining enforcement and long-term competitiveness.[21]
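A toy simulation, entirely hypothetical and far simpler than the reinforcement-learning experiments in the literature, shows how two pricing rules that merely react to each other can hold prices above a competitive level with no communication at all. Every name and number below is an assumption for illustration only.

```python
# Hypothetical toy model of tacit algorithmic coordination: two pricing
# bots observe only each other's last price. No message is exchanged,
# yet supra-competitive prices are stable, because any price cut is
# instantly matched (removing the incentive to undercut).

COMPETITIVE_PRICE = 10.0  # assumed competitive benchmark

def react(my_price: float, rival_price: float) -> float:
    """Match the rival: follow cuts down (never below the competitive
    floor) and follow rises up."""
    return max(COMPETITIVE_PRICE, rival_price)

def simulate(p_a: float, p_b: float, rounds: int = 20) -> tuple:
    for _ in range(rounds):
        p_a = react(p_a, p_b)  # sellers update in turn
        p_b = react(p_b, p_a)
    return p_a, p_b

# Starting from different supra-competitive prices, both sellers settle
# at a common level above the competitive benchmark:
final_a, final_b = simulate(15.0, 18.0)
```

The sketch illustrates the legal problem described above: each rule, viewed alone, is ordinary price-matching, yet the joint outcome resembles a cartel price, and there is no agreement for Article 101 TFEU to catch.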
The final issue we examine, market dominance, is also becoming increasingly pressing due to AI.[22] Unlike the previous concerns, which relate to the use of AI in external commercial activities, this one concerns AI directly and the risk of monopolies forming in the AI market. Powerful AI models require access to vast datasets and the computing power to query them efficiently. This creates a feedback loop wherein consumers give their data and money to a few dominant model makers, which can continuously refine, enlarge and cheapen their products, locking out would-be competitors. The fear is that the end result will be a concentration of market power and consumer lock-in, as consumers become dependent on integrated ecosystems of services powered by one model maker’s AI systems. One proposed solution is to treat data as an essential facility and to change the law to compel the sharing of such data to improve competition.[23] Currently, it is clear that dominant platforms are indeed restricting access to essential data which potential competitors would need to develop comparable services.[24]
Within market dominance, another area of concern is merger control. Large tech firms have bought many small AI companies whose innovations might have added competition and disrupted markets. These purchases are nicknamed “killer acquisitions”, for their aim is to nullify potential threats to current market leaders. Furthermore, these acquisitions often involve firms with little revenue at the time, which allows them to escape scrutiny under traditional turnover-based thresholds for investigations.[25] Historically, unless the target of an acquisition was already an established market player, regulators would not examine its acquisition by, or merger with, a competitor. In the fast-developing world of AI, however, it is arguable that such practices are no longer fit for purpose.
This leads us to our conclusion: AI is transforming markets at speed and exposing important limits within the current framework of EU competition law. Both unilateral and collusive anticompetitive behaviours can now be implemented in ways that are faster, easier and harder to prove than traditional human-led conduct. Furthermore, the current structure of the AI market risks allowing a small number of firms to dominate it and bar new entrants. While the core principles of EU competition law, the protection of consumers and the promotion of competition, remain key, legislation and enforcement strategies must adapt to the new technological reality.
Article written by Paul Marc: www.linkedin.com/in/paul-marc-859522205
References
[1] Bostoen 2025 pp.174-175
[2] Mak 2025 pp.5-6.
[3] Scheuerer 2021 pp.834-836
[4] Calvano et al 2020
[5] Capobianco & Gonzaga 2020 pp.49–55
[6] Coen P. and Timan N. 2013 p.11
[7] Klotz et al 2025 p.4
[8] Chen et al 2016 pp.1346–1348
[9] Li 2025 p.5
[10] Mittelstadt 2017 p.145
[11] Klotz et al 2025 p.4
[12] Klotz et al 2025 p.5
[13] Klotz et al 2025 p.2
[14] Levy et al 2022
[15] Ahlborn et al 2024
[16] Deng 2018 p.88
[17] Bergqvist & Ringeling 2024 p.154
[18] Bergqvist & Ringeling 2024 p.152 & p.155
[19] Bergqvist & Ringeling 2024 p.159
[20] Mak 2025 pp.9-10
[21] Beneke & Mackenrodt 2021 pp.157-160
[22] Hutchinson 2022 p.443
[23] Mak 2025 p.11
[24] Klotz et al 2025 p.2
[25] Mak 2025 p.12

