This article builds upon the paper published by FIDE after the 2019 Oxford Congress. Click here to consult it.

- Introduction.
The Dickensian theme –it was the best of times, it was the worst of times1– stemming from the symbiotic intersection between digital platforms and Big Data / Artificial Intelligence (AI) technical and business models has attracted attention for some time now throughout the wider community of Law, Economics and Policy practitioners.
In the relevant chapter of the Conclusion Document to FIDE´s 2019 Oxford Congress2, a parallel analysis of the situation as it then stood was proposed, addressing two distinct challenges that these phenomena were already noticeably giving rise to:
- On the one hand, in the context of the increasing market power of digital platforms brought about by the exponential growth in data access and management capabilities, the potential response of traditional Competition Law enforcement was discussed.
- On the other hand, as regards consumer disempowerment and, indeed, potential harm, the role of behavioural monitoring of digital users by economic agents -and also public administrations- was analysed. We provided the specific example of the e-gambling and betting ecosystem to illustrate the dichotomy between potential harm and the enhanced opportunity to protect consumers through Big Data/AI techniques.
Some years down the line, it seems appropriate to assess to what extent such fears have materialised or dissipated, in particular in the EU context. This contribution provides a brief update on those themes as they stand in early to mid-2024.
- Competition Law and Policy: bring on the DMA.
Pre-2020, the limitations of applying “traditional” Competition Law to the exertion of market power facilitated by Big Tech´s leverage of data manifested themselves in several ways: the possible inadequacy of merger notification thresholds; the pervasiveness of so-called killer acquisitions in this context; the burden of proof resting on Competition Authorities to demonstrate anti-competitive behaviour; the narrow contours of traditional market definition and market share thresholds in a multi-service context; or the considerable competitive advantage Big Tech derived from enjoying extensive user data whilst denying potential competitors access to it.
These issues, considered globally, pointed towards a familiar tension between Competition Law -a toolkit based on ex-post intervention to remedy anti-competitive behaviour and the ill effects of market power- and ex-ante regulation -a traditional instrument in the presence of market failures such as natural monopolies and the competitive bottlenecks derived therefrom at certain stages of the value chain-.
Given the above, any update on the current situation of Competition Law in digital markets, and more specifically Big Tech, at this point in time must turn its attention to the EU´s Digital Markets Act (hereinafter, DMA)3.
Finally passed in September 2022 after the classic, painstaking European regulatory exercise, the DMA is the response to that perception of the ineffectiveness, or at the very least untimeliness, of ex post Competition Law in the context of digital platforms. Not that the European Commission and its national counterparts have been short of antitrust investigations in the sector recently (Amazon Buy Box, three different ones involving Google, Apple Pay and Apple´s App Store); however, these are complicated, resource-consuming exercises whose conclusion, if it arrives at all, does so belatedly.
The DMA addresses this by endorsing an ex-ante approach to promoting both market contestability and fairness. It does so by targeting core platform services such as online search engines, web browsers, advertising services, social network services and video-sharing platforms. Amongst their providers, it identifies so-called gatekeepers, namely those meeting certain criteria: a significant impact on the internal market, the provision of an important gateway for business users to reach end users, and an entrenched and durable position in such capacity. Each of these criteria is in turn made up of a subset of further elements.
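By way of illustration only -the function name, input format and simplifications below are our own, not any official tool- the quantitative presumptions accompanying these criteria under Article 3(2) DMA (EUR 7.5 billion in annual EU turnover or EUR 75 billion in market capitalisation, 45 million monthly active end users and 10,000 yearly active business users) can be sketched as follows:

```python
# Purely illustrative sketch of the quantitative presumptions in
# Article 3(2) DMA (simplified: e.g. the requirement of providing the
# core platform service in at least three Member States is omitted).

def presumed_gatekeeper(eu_turnover_bn, market_cap_bn,
                        monthly_end_users_m, yearly_business_users):
    """Rough check of the Art. 3(2) presumptions.

    eu_turnover_bn: annual EU turnover (EUR bn), last three financial years
    market_cap_bn: average market capitalisation (EUR bn), last financial year
    monthly_end_users_m: monthly active EU end users (millions), last three years
    yearly_business_users: yearly active EU business users, last three years
    """
    # (a) significant impact on the internal market: turnover OR market-cap test
    size = all(t >= 7.5 for t in eu_turnover_bn) or market_cap_bn >= 75
    # (b)+(c) important gateway with an entrenched and durable position:
    # user thresholds met in each of the last three financial years
    gateway = (all(u >= 45 for u in monthly_end_users_m)
               and all(b >= 10_000 for b in yearly_business_users))
    return size and gateway
```

For instance, `presumed_gatekeeper([8, 9, 10], 0, [50, 55, 60], [12_000, 15_000, 20_000])` returns True under those hypothetical figures; a provider meeting the presumptions may still seek to rebut them before the Commission.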
Contestability and fairness towards downstream businesses and end users are core competitive elements in markets, such as certain digital environments, with high demand (network) externalities and/or a “two-sided” (platform) character. The DMA purports to safeguard both by imposing a range of obligations upon gatekeepers. Some of them relate to end users: limitations on using or collecting personal data for advertising purposes without user consent, whether or not it comes from third parties; a prohibition on self-preferencing the gatekeeper´s own services; effective data portability; and enabling the installation or uninstallation of third-party applications. Other measures address behaviour towards downstream business competitors: a prohibition on using non-public data generated by third-party businesses; an obligation to provide ad-related information and access to anonymised ranking data under fair, reasonable and non-discriminatory conditions; a ban on tying and bundling services; and several others.
Whilst its content is not easy to sum up -indeed the description above does not exhaust it-, the strategic significance of the DMA seems less complicated to convey: yet another mighty example of “The Brussels Effect” in setting the regulatory pace across borders, a geo-political strike at the largely non-European platform industry, a major shift in Competition Policy principles… However, even if it is still early days, the merits of the exercise are not short of uncertainties as to its real degree of effectiveness. Some thoughts spring to mind in this respect.
Firstly, the shadow of increased red tape and transaction costs looms large over the workability and overall effectiveness of the scheme. Six gatekeepers (Meta, Apple, Microsoft, Amazon, Alphabet and ByteDance) were designated by the Commission in September 2023 on the grounds of certain specific services; some of the designations have been contested. The course of affairs this can lead to -either a market investigation by the Commission to confirm or override the gatekeeper label, or review by the ECJ- may undermine the overall effectiveness of the regime. Furthermore, the obligations imposed upon gatekeepers must be monitored and ultimately enforced, which involves institutional and procedural effort.
Another very specific source of uncertainty is how the information obligations imposed on gatekeepers for any merger -even those not notifiable under the European Merger Control Regulation- will play out in taming the risk of so-called killer acquisitions, whereby prominent digital operators, and particularly platforms, eliminate prospective competition in related or nascent services.
Equally, there are competitive dynamics in certain digital markets left outside the remit of the DMA that nonetheless exhibit network effects and a propensity towards the tying and bundling of services similar to those of DMA-covered services. One such example is cloud infrastructure services, a market that has attracted the attention of Competition Authorities both within Europe (France, Spain) and outside it (the CMA´s market study, the FTC´s investigation), and where reputed competition analyses have identified anticompetitive overcharges to European business ecosystems of above EUR 1 billion4.
Lastly, there is already a certain scepticism, if merely speculative at this stage, as to how market evolution and innovation will develop over the medium to long run in certain markets. In this regard, it has been pointed out that the DMA´s approach may be too rigid in applying its provisions uniformly to all kinds of gatekeepers and situations, which may differ among themselves, thereby failing to contemplate specific contexts that may not benefit from such a strict set of rules. This could lead to some apparently bona fide provisions actually working against competitive dynamics. Some examples that have been pointed out5: preventing gatekeepers from restricting access to their platforms; requiring platforms to share with third-party business users all data generated through their activity on the platform; prohibiting gatekeepers from ranking their own products or services more favourably on their platform; or allowing users to choose whether or not to let their data be processed and to be offered a less personalised alternative.
In any case, however successful it may be, the DMA´s implementation and scope will not completely override the challenges that platform dynamics, size and the increasing use of AI techniques will bring about, both within and outside the EU. In this regard, one prominent aspect of the AI acceleration we are currently going through is that of algorithmic collusion. A phenomenon pointed out by, inter alia, the OECD as early as 20176, the range of scenarios under which algorithms (whether monitoring, parallel, signalling or self-learning ones) may support cartel behaviour and stability has grown tremendously of late, and the fight against strategic price-setting via such tools will undoubtedly constitute one of the challenges ahead. This may well be the next milestone for Competition Policy and, quite specifically, for the tension between ex ante regulation and ex post enforcement7, in the pacy AI-infused economy that is taking shape.
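As a deliberately stylised toy example -not drawn from the OECD paper, and with all names and figures hypothetical- even two naive “matching” pricing algorithms can ratchet prices from cost up to the monopoly level without any explicit communication between the firms deploying them:

```python
# Toy simulation of tacit algorithmic coordination (purely illustrative).
# Firm A probes a higher price whenever prices are level; firm B simply
# mirrors its rival's last observed price. Neither exchanges information.

def simulate(periods=50, cost=1.0, monopoly=10.0, step=0.5):
    pa = pb = cost                      # both start pricing at cost
    history = []
    for _ in range(periods):
        if pa == pb and pa < monopoly:  # A: prices level -> probe upwards
            pa = min(pa + step, monopoly)
        else:                           # A: otherwise match the rival
            pa = pb
        pb = pa                         # B: pure monitoring/matching rule
        history.append((pa, pb))
    return history

prices = simulate()
# Prices drift step by step from cost towards the monopoly level (10.0)
# and stabilise there, as neither algorithm ever undercuts the other.
```

Real self-learning pricing agents are far more complex than this sketch, but the stabilising logic it illustrates -monitor, match, never undercut- is precisely what the algorithmic collusion literature worries about.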
- Consumer protection: growing awareness, hectic activity.
Pre-2020, it was already apparent that the exponential increase in the behavioural monitoring of users brought about by the emergence of Big Data/AI presented both challenges and opportunities for consumer protection in digital services.
In short, data-processing capabilities and machine learning/AI systems allow for refined user tracking across digital platforms and service providers, with corresponding implications in terms of disinformation/misinformation, advertising patterns and nudging towards increasingly intense consumption behaviours across all types of digital services -with consequent behavioural alterations, if not outright harm and addictive patterns, as in the case of gambling, among others-.
Taking stock of the EU´s 2019-2024 political cycle, increased awareness of the matter is to be acknowledged, and hectic activity has accordingly taken place on this front under several initiatives. For systematic purposes, three broad elements may be distinguished: a general framework aimed at protecting consumers; specific instruments to tackle the perils of AI as regards content and illicit commerce; and a call to reinforce digital identity as a cross-sectoral response, with an emphasis on the protection of minors.
Regarding, firstly, the pure consumer protection framework, the European Commission launched the ‘Fitness Check on Digital Fairness’8 in the spring of 2022 -expected to conclude in the second quarter of 2024- to determine whether additional action is needed to ensure consumer protection in the digital environment. This Fitness Check, framed within the so-called New Consumer Agenda, will evaluate several Directives to determine the suitability of EU consumer legislation for addressing issues such as consumer vulnerability, dark patterns, personalisation practices, the marketing of virtual items, and the addictive use of digital products9.
Under this exercise, the practice of dark patterns on consumers has been identified as particularly concerning. Some of the issues are hidden information or false hierarchy (prioritising a particular option to encourage a given behaviour, such as accepting cookies); default preselection of options; nagging (the constant appearance of prompts to encourage the consumer to take certain actions, such as subscribing to a premium account); difficult cancellations; or forced registration.
In parallel, the European Parliament´s report on Addictive Design of Online Services and Consumer Protection in the EU Single Market10 highlights relevant aspects of consumer protection in digital services that are designed, through «addictive design» or «behavioural design», to keep users on the platform for as long as possible, leading to forms of digital addiction. Among the measures proposed is the prohibition of interaction-based recommender systems, particularly hyper-personalised systems designed to be addictive rather than to provide users with more neutral information.
Equally, minors are a particular source of concern in the context of the potential exploitation of psychological vulnerabilities. This has been evident specifically in proposals, within these consumer protection initiatives, regarding micro-transactions in videogames, for instance as regards the marketing, display and articulation of random-reward mechanisms -the so-called loot boxes-11.
Secondly, as regards specific instruments to tackle both content served to users -beyond the mere notion of consumers- and illicit commerce, we first need to note the Digital Services Act (DSA), the DMA´s sibling12. The DSA introduces a new set of measures for online intermediation services aimed at limiting the dissemination of illegal content and products online, increasing protection for minors, and providing users with more choice and better information. The Regulation primarily applies to social media platforms, collaborative economy platforms, web hosting and cloud platforms, intermediary services offering network infrastructure, and large-scale search engines.
Among the main features of the DSA are: measures to counter illegal goods, services or content online, such as a mechanism for users to report such content and for platforms to cooperate with «trusted flaggers»; effective safeguards for users, including the ability to challenge platforms´ content moderation decisions; transparency measures for large-scale online platforms, including as regards the algorithms used for recommendations; and obligations for very large platforms reaching more than 10% of the EU population to prevent abuse of their systems by adopting risk-based measures and undergoing independent audits of their risk management systems.
Alongside this, the already agreed AI Act13 aims at regulating AI systems: machine-based systems designed to operate with varying levels of autonomy that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments. This includes General Purpose AI systems such as ChatGPT.
The norm introduces a risk-based classification scheme for AI applications, according to the level of risk the application poses to individuals or society as a whole (unacceptable, high or non-problematic risk). The AI Act also regulates foundation models -such as the large language models included in applications like AI-supported translation services, chatbots or creative systems- establishing transparency and accountability obligations, as well as biometric identification, restricting the use of biometric data in public places except in exceptional circumstances such as searching for missing persons or preventing terrorist attacks. The final negotiations on the AI Act in late 2023, however, reportedly led to the initially proposed content being watered down at the instance of certain prominent Member States.
Finally, a third issue is the reinforcement of digital identity, a move with both European and national ramifications. Firstly, we have the eIDAS 2 Regulation process, whose official publication is due in 2024. Building on the existing eIDAS, this second iteration targets not only electronic transactions but also physical services, providing Europeans with a set of digital identity credentials (such as identification cards, passports, professional certifications and driving licences).
Furthermore, in the national context, ID verification has been identified by governments and policymakers as a key instrument to reinforce the protection of minors, particularly to prevent them from accessing content inappropriate for them, such as pornography. Thus, in Spain, the Government has promoted a State Pact to protect children and adolescents14, with age verification mechanisms at the forefront. In this context, an initiative to develop a public application for such age verification has been launched by Spain´s Data Protection Agency and Spain´s public certification authority (FNMT-RCM).
The above elements allow for some tentative global considerations. Firstly, it is undeniable that the EU is dealing with Big Data / AI issues at the regulatory level, from several standpoints. Secondly, time will tell whether this intense regulatory approach will prove effective -given the pace of innovation- or proportionate -in terms of prescription-. Lastly, Member States are already exerting some leadership and anticipation: not only in the example above, but also in other initiatives such as the Bill on Digital Random Reward Mechanisms essayed in Spain in 2022, or the so-called SREN Act -commonly known as the Sorare Law- in France, regarding monetisable digital items.
- Conclusion.
Five years go a long way. If, back then, the challenges that the rise of digital platforms / Big Tech and the use of Big Data / AI would pose to the digital economy were clear, in mid-2024 we can confidently say that, in the EU in particular, the response has been bold, even courageous, on both the Competition Law and Policy and the Consumer Protection fronts.
The overall effectiveness of what already hints at being a cumbersome system of rules -which is anyhow still work in progress- remains to be seen, however. Prescriptive regulation is hardly a good companion to the pace of innovation, and some imbalances might crop up. Equally, the next political term in the EU will witness the challenges derived from the practical implementation of such rules: algorithm surveillance, overall enforcement and potential market fragmentation if national rules pre-empt EU regulation, to name a few. Hence, in the purest tradition of European integration, we will need to take it one step at a time to see where the concerns voiced five years ago finally lead us.
[1] As in A Tale Of Two Cities (1859).
[2] FIDE, 2019, AI, Big Data and the Digital Economy: Challenges and Opportunities, Conclusion Document. See Article by Cani Fernández and Juan Espinosa, Regulatory Challenges: Consumer Protection and Competition.
[3] Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector (Digital Markets Act).
[4] See Frédéric Jenny´s recent follow-up study -June 2023- on the matter, issued to CISPE.
[5] As mentioned in Cennamo, C. and Santaló, J. (2023), Potential risks and unintended effects of the new EU Digital Markets Act, Esade EcPol.
[6] OECD (2017), Algorithms and Collusion: Competition Policy in the Digital Age.
[7] For a detailed discussion see Van Uytsel, S. et al. (2023), Algorithms, Collusion and Competition Law, Edward Elgar Publishing, or Caforio, V., Algorithmic Tacit Collusion: A Regulatory Approach (2023) 15 Competition Law Review 9.
[8] See here.
[9] These Directives would be:
- Unfair Commercial Practices Directive 2005/29/EC (‘UCPD’).
- Consumer Rights Directive 2011/83/EU (‘CRD’).
- Unfair Contract Terms Directive 93/13/EEC (‘UCTD’).
[10] See here.
[11] See for instance the European Parliament´s 2023 report 2022/2014(INI), Consumer protection in online video games: a European Single Market approach.
[12] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act).
[13] The initial proposal was issued by the European Commission in 2021 and a final agreement to the text was reached in December 2023.
[14] See here.





