4 Ways the New EU Digital Acts Fall Short and How to Remedy It

There has been an outpouring of enthusiasm for the EU’s recent efforts to address digital giants. After a year of relative quiet following the announcement in 2019 of a new mission described as a “Europe Fit for the Digital Age”, a barrage of new regulations was issued: the Data Act, the Data Governance Act (DGA), the Digital Services Act (DSA), the Digital Markets Act (DMA) and the Artificial Intelligence Act (AIA). While these do represent a novel intervention that has sought to learn from past experience elsewhere, the enthusiasm is premature, perhaps even misplaced.



Many, however, think that this new suite of legislation not only represents the EU’s definitive response to the digital age but also offers the comprehensive solution that Big Tech critics have been waiting for. Given that Big Tech is firmly rooted in the USA and China, and that Europe has no equivalent technology providers that have proven themselves at scale, it is vital that Europeans gain a voice in shaping global computational infrastructure.

What is on offer here, we will argue, is a 1990s-grade neoliberal solution begging for a rethink that meets the challenges of the fast-expanding 21st-century data economy. Europe set the stage for regulating digital media in 2018 with the GDPR, an acronym that is now familiar well beyond data protection enthusiasts. Apart from increasing general awareness of data protection, this regulation was key to harmonizing laws across the continent and pushed for data transparency for all EU citizens. Since enactment, flaws in its framework have become apparent in the incessant “data transparency” popups that are now ubiquitous; without a doubt, however, the GDPR has been instrumental in increasing the geopolitical bargaining power of Europeans when negotiating with Big Tech abroad. A strong European voice in shaping this sphere could potentially act as a bulwark, of a sort, against the existing models represented by the USA and China.

In the former, we find a largely unfettered market regime and, as yet, little hope of strong regulatory rules emerging. In the latter, recent regulations offer an emergent framework that on its face tackles structural concerns, borrowing liberally from the GDPR; it also steps in to regulate the behavior of tech companies and users alike in ways that both the EU and the USA would likely consider dramatic overreach. Yet both systems converge, in problematic ways, on continuing powerful state and corporate access to sensitive information. Seen against this backdrop, the potential bulwark represented by the EU’s digital acts is weaker than it might first appear, and it certainly does not question deeply disturbing underlying features of the data economy.

While the EU has developed a framework that very obviously responds to its interactions with its counterparts in the USA and China, its approach remains incomplete. At their core, these efforts mistake the calculations of markets for the making of policy that steers us toward a future common good. Utopian promises of public-private initiatives to grapple with public health, environmental, and other public ills — the recitals in these regulations even seem to argue that battling global climate change would be well-nigh impossible without a new open-sharing regime for data — amount to wishful thinking that ignores the dangers of this market-driven approach.

In this article, we explore four key areas where the EU digital legislation needs to be developed further.

Firstly, it is the business model of the data economy itself that needs to be challenged, rather than accepted by default.

Building from this point, secondly, it is increasingly clear that certain forms of data collection must be off-limits, regardless of consent. This leads to our third argument: for all of these regulations’ concern with large platform power, small platforms in the data economy are in many ways of equal concern, not less. Finally, alongside the movement of data, the computational infrastructure used to facilitate these flows also needs to be considered in holistic fashion. What we have with the EU digital acts is tech and competition policy standing in for a broader social policy, with no clear horizon toward which we are steering beyond the outcomes of enhanced market frameworks: we argue for something different and broader.

The Big Tech Business Model Must Be Challenged, Not Accepted by Default

The new digital acts shift from protecting data subjects to stimulating a struggling digital economy in Europe, and the impulse certainly feels rational and necessary to act on before US or Chinese giants gain a firmer foothold on the continent. These acts could help seed a home-brewed tech sector in the EU, and at times they even seem to gesture towards the tantalizing possibility of new models for digital media taking shape. Taken as a whole, however, these acts amount to little more than an ‘open access’ regime for data resources to power this fledgling sector, very similar to how telecommunications are already approached in the EU. This open access regime also resembles what was attempted in the USA in the 1990s, which ultimately gave way to massive, government-approved consolidation among powerful telecommunications and cable giants and a complete failure to produce strong digital privacy laws since then. The problem with such market-based regulatory solutions, historically and especially now, is that data markets are distinctly different from supplying access to communications infrastructure, where the goal has been cost control and widespread access to the means of communication (if not unfettered, non-discriminatory treatment of information aboard these systems). The data economy has severe informational and power imbalances.


These new ‘digital-something-acts’ see market dominance of large digital platforms — such as Google and Facebook — as the core problem to be solved and an unrestricted flow of data between businesses as the primary solution.

However, an unrestricted flow of data between businesses as a goal in itself does not address the concerns and pressure points of online users, whether in the EU or across the globe, even with the existing protections of the GDPR and the new Digital Services Act.

Our chief concern is that these new regulations, read together, give new oxygen to Big Tech’s overarching business models. We leave to one side the important additional question of how far the EU digital acts take the final form they do because of lobbying from precisely the businesses whose models should be open to scrutiny. While there is benefit to making large service providers open up access, for example, to their recommendation systems (Article 29), the majority of the regulatory framework being implemented by the EU is unabashedly geared towards squeezing every last ounce of profit that can possibly be extracted from corporate data. The DMA, for instance, proposes to force large intermediary platforms to share their data with other businesses in the hope that smaller platforms will then be more viable. While competition in the marketplace could potentially benefit some consumers, simply opening up data flows to smaller businesses is likely to lead to adverse effects that greatly overshadow these benefits. It is the logic that values data extraction in and of itself that needs to be addressed.

The problem here is a vision of the market’s invisible hand as the primary mechanism by which consumers will be protected. The DMA, for instance, envisions a kind of trickle-up privacy mechanism, where users flock to smaller platforms because of the enhanced privacy they afford. Recital 61 of the DMA predicts that forcing gatekeepers to make their data accessible to other businesses will put “external pressure on gatekeepers to prevent making deep consumer profiling the industry standard”. Further, we read in Recital 31 that this transparency will then “allow other providers of core platform services to differentiate themselves better through the use of superior privacy guaranteeing facilities.” Privacy, in other words, is seen as facilitated by the imagined advantage of data flowing freely to third parties. Yet the logic of trickle-up privacy solutions is about as sound as trickle-down economics.

Looking across the new EU legislation as a whole, then, we find at best an agnosticism towards, and at worst a complete alignment with, the business models that have driven platforms to circulate misleading and harmful content and to compel the maximum extraction of data from every social process. Even within the DSA, there is no direct regulation of the business models that drive data extraction by the ‘very large online platforms’ that are given additional responsibilities to monitor the wider societal consequences flowing from their operations. Nor does the Data Act anywhere restrict data extraction more generally; it relies entirely on the GDPR’s pre-existing restrictions on the extraction of personal data without consent (a point to which we will return).

But there is one way the EU hopes to shift the emphasis of data collection in society, and that is through the DGA’s innovative idea of ‘data altruism’. Data altruism is vaguely defined to mean either individuals consenting to the processing of their own personal data (hardly a problem) or people allowing their ‘non-personal data’ to be used for free ‘for purposes of general interest’ (Article 2(10)). But once again, the question of whether data of certain types should be collected, and so be available to flow in the first place, is never raised. ‘Data altruists’, of course, while perhaps not commercial entities, may still be privately owned and controlled, as are any number of non-governmental organizations. What is missing entirely from the EU digital acts is an evaluation of the macro-business models that drive data extraction more generally: unsurprisingly, perhaps, in a public debate that remains dominated by Big Tech itself, as former European Commission official George Riekeles recently noted.

Some Forms of Data Collection Must Be Off Limits, Regardless of Consent

The new EU regulations contain certain platform accountability provisions in regard to illegal content and offer EU nation states easier access to data for the purpose of stopping terrorists. For many, however, terrorists and illegal content are not the primary concern. They are concerned with the inadvertent privatization of public infrastructure, when the black-boxed nature of recommendation systems effectively determines their access to resources necessary for social life and relationships: critical resources for education, employment, health, political engagement, and staying connected with friends and family. Such resources impose levels of automated data collection to which they might not agree, were they given the opportunity to offer real consent.

The GDPR, on which the whole design of the new Digital Acts relies, assumes that a consent regime works to control harms. But what if consent cannot freely be given? The “must-consent-to-T&Cs” problem is particularly troubling when applied to children at school. Children are too young to provide meaningful consent to data extraction, and when parents withhold consent, they risk limiting their children’s access to education. Consent in health tech is also troublesome: not consenting to the software used to book doctors’ appointments or run diagnostic tests has real implications for access to healthcare. This problem has the potential to get worse with passive data collection by the ‘Internet of Things’ and in vitro medical devices connected to the Internet.

The DSA gets closest to addressing this concern in one provision, Article 29. This says that “very large online platforms that use recommender systems shall set out in their terms and conditions, in a clear, accessible and easily comprehensible manner, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters that they may have made available, including at least one option which is not based on profiling.” But, as it stands, terms and conditions for platforms are read by almost no users. This is, in part, because those conditions regularly change, making any investment in understanding them effectively useless. One study estimated that reading all the T&Cs a person encounters in a year would take 76 eight-hour working days.

Additionally, both the Data Act and the DGA assume a sharp division between non-personal and personal data, which has yet to be demonstrated in practical application. Data rarely sits neatly around one person; it usually describes a space in between individuals that lacks clear definition. For example, if you have a conversation with your mother on WhatsApp, does the data belong to you, your mum, WhatsApp, or the company providing the submarine cable? If data cannot be neatly structured around the individual, it becomes unclear who should consent to what.

While the DSA includes no proscription against data collection, DMA Article 6(1)(i) emphasizes opening up data to all businesses, stating that “a gatekeeper shall provide business users, or third parties authorized by a business user, free of charge, with effective, high-quality, continuous and real-time access and use of aggregated or non-aggregated data”. But what if deep consumer profiling by powerful players is already entrenched, making the consent regime ineffective to constrain it? If so, we need attention paid to limiting types of data gathering that may be intrinsically harmful, even if they are nominally “consented to”. We recognize, of course, that the EU’s grand design is to rely here on the GDPR’s pre-existing constraints on the processing of ‘personal’ data to block certain types of data collection. But the GDPR’s effectiveness depends on whether you believe that a consent regime can work in an extremely uneven playing field, where large players have transformed how whole sectors operate, leaving few alternatives for the non-consenting to turn to. Various scholars have expressed major doubts about its effectiveness (for example Elettra Bietti 2020).

One of the practical strengths of the GDPR was to create a single touch-point responsible for addressing the data protection concerns of citizens who had previously been left bouncing emails between various bureaucracies. The DSA, DMA, DGA, and AI Act reverse this success, creating many new offices with overlapping remits, even while Data Protection Authorities remain underfunded. Rather than starting several new offices with slight variations on a theme, shouldn’t existing DPAs be given enough funding to do their jobs? While the GDPR compelled data subjects to trust in digital technology, these subsequent digital-something acts leverage that trust to fuel the EU’s fledgling digital economy while potentially making it harder for citizens’ concerns to be addressed.

It is illustrative to look in more detail at the case of education and ‘edtech’. There are three major risks that flow from extensive data collection on children’s learning in schools, where requiring consent is unlikely to make a difference. First, there is always uncertainty about what happens to corporate-collected data once a company is acquired or merged with another. For example, Naviance, an online learning platform formerly owned by Hobsons, a subsidiary of the UK Daily Mail group, was sold in 2021 to the American PowerSchool, which is owned by Vista Equity Partners, a large US investment firm with a portfolio of companies spanning education, insurance, agriculture, healthcare and marketing. What will happen now to data gathered in schools that relied on Naviance?

Second, even when education technology (edtech) businesses are GDPR-compliant and consent is given, this does not guarantee that some exploitation of the collected data will not happen now or in the future.

Edtech companies monitor children without their consent or knowledge, harvesting data on what they do, who they are, who their families and friends are, and inferring what they are likely to do next. Many of these online learning platforms do not clearly state in their terms and conditions that they deploy often extensive and invasive tracking techniques that are neither necessary nor proportionate to the aim of delivering education. Moreover, even if educational authorities are sophisticated enough to look out for such activities, some of these techniques can easily be removed during the vetting process and then reinstalled afterwards. This leaves the education sector at the mercy of speculative businesses whose products can change objectives, quality and ownership (and therefore jurisdiction). Recent research by Human Rights Watch (HRW), a non-governmental organization, demonstrated how the largely unregulated edtech market has been exploiting children’s data by embedding ad-tracking, dangerous permissions and other invasive technologies in its software. Of the 165 edtech products HRW reviewed, 89% engaged in data practices that put children’s rights at risk, undermined them or actively violated them, by giving advertising technology (adtech) companies, data brokers and third-party commercial entities access to children’s data.

Third, because responsibility for digitalizing education lies in the hands of schools, districts and teachers, there is little they can do to protect children’s data beyond looking for, and expecting, GDPR compliance (a mere tick-box exercise).

However, as the HRW report demonstrates, these efforts are not enough. The level of technological expertise required to truly understand covert data extraction practices is beyond schools’ abilities, or even their responsibilities. Some of these products, including those of powerful actors like Microsoft, Google, Cisco, and Zoom, were recommended by governments around the world as the means to provide education to millions of children during the coronavirus lockdowns. In the absence of a mandate to scrutinize edtech companies and hold them accountable, consent and GDPR compliance have generally proven futile in protecting children’s data from exploitation. The key question, therefore, which is still rarely asked, is whether such activities should be banned outright because of their long-term consequences for the freedom of children and the rights of parents to protect their children from inappropriate external influence. Asking this question would begin to address the limits of the consent regime set up by the GDPR, on which the new Digital Acts entirely rely.

Small Platforms Can Harm Us Too, and May Need More Regulation

The DSA is the new EU legislation that has attracted the most attention, because it appears to do the most to regulate the acknowledged negative consequences of platforms’ data-driven practices. But in fact its most stringent provisions are reserved for only a small number of large platforms. There are major exceptions from regulation for ‘micro platforms’, and even so-called ‘large-medium’ platforms do not need to consider the societal effects of their operations. Only the very large platforms (entities with over 45 million users within the EU) find themselves under truly heightened scrutiny. It follows that if you offer a service to 5% of Europe’s citizens (hardly trivial), you fly under the radar.

Meanwhile, under the DMA there are no limits on data collection by SMEs, whose data businesses will presumably be stimulated by access to large platforms’ data. Yet SMEs represent 99% of all businesses in Europe and are defined as having fewer than 250 staff and less than €50m turnover. Additionally, small businesses can easily be bought by bigger ones. As a result, by not imposing key standards across the board, privacy risks abound: cautious individuals may need to switch chat apps once a smaller, more ‘ethical’ one is bought by some less judicious larger player. Such transactions, of course, are increasingly common in this space. For example, Kahoot, a globally popular games-based learning app for K-12 students, bought Clever, a K-12 digital learning platform. Another example is Blackboard, an American learning management system, which was bought in 2011 by Providence Equity Partners, a private equity investment firm that operates in both the US and Europe. In 2021, Blackboard merged with Anthology, another student management system, making an edtech behemoth that now works with thousands of colleges across the US.

Ultimately, both small and large players need to be held accountable to certain minimum standards that monitor their business models for data extraction. There is, in fact, no evidence that small players are more ethical than large ones. And because small players typically integrate with and rely on many other platforms to function, in what are often called ‘platform ecologies’, their mode of governance already tends to be less clear than that of larger platforms.

While focusing the most stringent provisions on Big Tech sounds good, then, in the long run it may make technology harder, not easier, to regulate.

But before we become too surprised at the emerging gaps in the EU legislation, it is worth remembering one early provision in the agenda-setting GDPR that puts these omissions into context. In its very first Article, framing rights as attributable to “data subjects”, the GDPR foreshadows these new laws and sets the stage for the data deluge we can expect from the recently passed ‘digital-something acts’. The GDPR says: “the free movement of personal data within the Union shall be neither restricted nor prohibited for reasons connected with the protection of natural persons.” If the GDPR so pointedly avoids restricting the flow of personal data to protect persons, perhaps we shouldn’t expect the digital acts that build on it to do any better.

Data Sharing Concerns Go All the Way Down the Hardware/Software Stack

The EU regulatory regime taking shape takes at face value the truism that data is the new oil and that, for economies to be successful, they must let this new oil run free. Underlying this metaphor is the assumption that an information or data economy is sustainable, in sharp contrast to an economy based on oil itself. But just as we can now see that the unrestricted flow of oil has created a huge environmental problem, the unrestricted flow of data between businesses should be seen as likely to create a huge problem today. The social effects of unrestricted data flow have already been well documented, especially because it is impossible to track how data is being used as it flows from business to business. Because the regulatory regime being imposed today makes it illegal to restrict the flow of information from business to business, it exacerbates the problem of knowing where user data is flowing and how it is being used once it reaches its destination.

To get to the root of such concerns, it is important to talk not only about data but also about the infrastructure that data needs to exist. Data needs computational infrastructure to be processed: mobiles, laptops, databases, submarine cables, chips, web standards, and software packages. Today, Big Tech privately owns much of this computational infrastructure, meaning that it can influence both its design and access to it. In 2021, Facebook backed the 2Africa cable around the African continent, which was announced with almost no public debate. Similar dependencies on private infrastructure can be found on the Submarine Cable Map. What kind of control can anyone expect to exert over data if there is no control over the tools used to process that very same data? Even if one can switch the storage of one’s data from AWS to Azure, one is still dependent on US cloud providers’ infrastructure.

For that matter, the focus of so much privacy discussion on tech giants has taken attention away from the initiatives undertaken by large telecommunications and cable companies themselves. In the USA, for instance, major telcos and cable companies successfully batted back what would have been strong privacy protections for users on their networks as they sought to amass their own adtech arsenals. Even as several large US players have recently sold these interests to private equity firms or tech competitors, the field remains wide open, and telecommunications and cable companies remain deeply embedded in ecosystems for transferring consumer and corporate data.

In particular, if public services such as education and healthcare become dependent on computational infrastructure through digitalization, those services will, in turn, be inadvertently privatized themselves. Digitalization of public institutions results in the transfer of control over key infrastructure to foreign private parties (Collington 2021). The new EU digital acts are still very much focused on data, which completely misses the mark on regulating the machinery needed to collect, store, and process it.

If we want to use data in a way that benefits the general public, then it will be necessary to seize elements of the computational infrastructure as public utilities governed by global public institutions. But that path is not laid out in the new EU legislation.

CONCLUSION

The approach taken by the EU in its new digital acts begins to think through, in a productive way, the structural features of the ecosystems within which data transfer takes place. Such a lens is surely lacking elsewhere, notably in the USA, where structural remedies such as antitrust have been effectively neutered by recent Supreme Court decisions surrounding multiple-sided markets. The EU appears to have learned from the experience of the USA and its most prominent critics, such as new Federal Trade Commission chair Lina Khan. All the same, the lessons are applied exclusively to adopting a regime that assumes that stimulating new data markets in the EU via a freer flow of data, albeit with some constraints on large players, is what is needed to stanch the power of large US-based and Chinese tech giants. Throughout, the design of the new EU acts assumes that the GDPR’s existing protections against the misuse of personal data are sufficient.

However, such optimism needs to be balanced with caution. On the one hand, an effort to intervene in this sphere with long-reaching and forward-thinking structural regulations is incredibly welcome. The obvious predecessor to these efforts is the EU’s approach to its telecommunications and data regimes, which emphasized prying open a consolidated field via network neutrality requirements on the flow of information and by opening access to telecommunications facilities to competition. (The US attempted such a regime in the late 1990s, but its effort ran into the buzzsaw of powerful corporate lobbying and pushback by incumbents, resulting in further consolidation of multiple players into even more powerful monoliths and leading even to the death of common carriage in the USA — a happenstance from which the EU would be wise to take warning.) Among the challenges facing these new EU structural regulations is that relying on notions of data portability, and on the introduction of more restrictive conditions and requirements on the largest entities, neglects how markets in this realm have shifted over the last two decades. It is often smaller actors, working in concert with each other or with larger operators, that are of concern in terms of the development of problematic business models. Smaller actors working with each other and trading on particular realms of expertise can comprise, in themselves, a powerful force worth attending to. But for the time being, there appears to be no mechanism proposed by the EU for addressing these issues.

Adding to the complexity is that actors in many markets for data are no longer the usual market actors: they are algorithms, driven by artificial intelligence and machine learning, with rationalities that do not necessarily align with the economic theories of those who composed these rules. Other logics rule their operation, such as the drive to put advertising in front of favored individuals, to favor certain populations in the provision of particular services, and to maximize profit, all logics which may or may not coincide with our understanding of how markets worked even a decade ago. The accountability mechanisms and reports required by these new regulations may or may not capture these dynamics. Rather than relying on the hope that EU regulators have already produced the comprehensive toolkit for addressing platform power and all the social externalities that flow from its concentration, we need an approach that is more willing to challenge head-on the fundamental assumptions of data markets and Big Data rhetoric. An alternative, bolder approach will need to stay closer to the human rights priorities that were at the core of the GDPR’s original intervention.

The core point was made by no less than the UN Secretary-General in the UN’s Roadmap for Digital Cooperation in May 2020: ‘the world is at a critical turning-point for technological governance’ and human rights must be placed ‘at the center of regulatory frameworks and legislation’. As things currently stand, the EU’s legislative landscape falls short of both the UN’s expectations and the hopes originally stimulated by the GDPR. The first sentence of the GDPR reads: ‘the protection of natural persons in relation to the processing of personal data is a fundamental right’. That principle is surely more important than the business principle that data must be free to flow. The deep contradiction between the two principles is, we have argued, only partly resolved in the new EU legislation.

The EU, and any legislators who wish to follow in its footsteps, need to tackle this contradiction if they, and we, are to truly address the complex challenges of the digital platform age. Treating privacy and platform power as standalone issues creates its own problem. There are cross-cutting questions here about the shape of developing global capitalism and platform power that need to be addressed, as Frances Haugen has suggested, by listening more to civil society activists beyond regulators’ walls. The gaps we see in these new regulations are less matters of language than of a fundamental conceptual weakness: using markets to solve market problems, and treating all markets as similar in nature rather than distinct and nuanced in their operation. Developing strategies to address these deep-seated problems of platform power has never been more important or necessary.

Acknowledgement

The authors thank Stefania Milan for her generous contributions to early discussions on this topic.

About the authors

Mitzi László is an independent consultant. Gregory Narr is a lecturer at Harvard University. Velislava Hillman (PhD) is a visiting fellow at the London School of Economics and Political Science and founder of EDDS, a social enterprise of independent researchers putting in place a system for evaluating education technologies. Russell Newman is Associate Professor at Emerson College and Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society. Nick Couldry is Professor of Media, Communications and Social Theory at the London School of Economics and Faculty Associate at the Berkman Klein Center for Internet and Society, Harvard University.


This article first appeared on Medium.



