AI and Privacy: Should Europe Be Wary of Regulatory Taliban?

I rarely write about regulatory questions, and I am putting the question in deliberately provocative terms. So let me be clear: I firmly believe that privacy and AI regulations are necessary. The principles underpinning EU regulations – from the GDPR to the EU AI Act – are fundamentally sound. But the problem I want to point out here has less to do with the legislation itself than with what has been developing around it and the perverse asymmetries it has been generating for European research and, if we are to believe the Draghi Report, for European small and medium businesses.

Most of these regulations were designed to keep Big Tech — predominantly US companies — in check. These firms have been voraciously collecting data from European citizens while deploying generative AI applications and popular chatbots. Yet the same stringent rules apply with equal force to scientific research, SMEs and even charities. Complying with them is demanding, often costly and time-consuming. And this is where we run into the first asymmetry. Big Tech can rely on armies of top-notch lawyers who understand not only the law but also the technology, and who excel at risk evaluation – yet they represent only a small overhead spread over a giant business operation. Academic researchers and small businesses don't enjoy such legal resources. Often they struggle to comply or to obtain GDPR approval. The jurists on university bodies in charge of ethics and GDPR-compliance approval can be surprisingly unfamiliar with the protocols, data-collection platforms and technologies now used in advanced research.

But this would not be so bad if these regulations had not spurred the emergence of a regulatory culture, promoted by a large EU commentariat, that is excessively focused on the dangers and threats associated with new technologies. Privacy is invoked at every turn, often legitimately, but sometimes in more questionable ways, with many adverse consequences for European scientists. Conducting legal AI research in Europe is very hard, if not outright impossible, because many judiciaries hide behind the GDPR to block access to the large corpora of decisions researchers need. The fear of facing complaints has made universities risk-averse. And then you have the young, inexperienced jurist on the ethics board or working for the local regulator who has persuaded himself that the modest research proposal he is asked to review must be the next Cambridge Analytica.

It is not rare for even a very low-risk scientific proposal to take weeks of wrangling to get approved. I have seen instances where researchers eventually had to ask anonymous participants to consent to data collection three times. Once to join the online platform that recruited them in the UK. Then a second time to give their GDPR consent. And, finally, a third time, because they are also supposed to give their ethical consent to data collection (a nuance surely lost on 99.9% of participants).

The problem is not just that researchers end up devoting inordinate amounts of time to convincing university reviewers that their work is compliant (on top of all the academic bureaucracy – reporting, time-sheeting, submitting data management plans – that keeps inflating and keeps distracting scientists from the actual research). To avert problems with finicky reviewers, researchers often choose to do less. Don't collect demographics if they are not strictly necessary for the study or if doing so is likely to delay approval! Master's students interested in conducting experiments involving human participants face the real risk that GDPR and ethical review will prevent them from graduating in a timely fashion.

There are no empirical studies of this phenomenon. But I do see scientific areas – such as psychological and behavioural studies – where this is clearly hurting European research. Because studies come with fewer covariates, there is less room for exploratory analysis and the generation of new hypotheses. The same may be occurring in medical research: as studies collect less information about patient characteristics, the resulting data will inevitably offer fewer possibilities to explore and detect interaction effects or to understand rare complications.

Meanwhile, Big Tech and multinationals are moving and processing tons of data, often in ways that raise red flags, but as part of complex and opaque business operations that tend to elude the attention of regulators. In conversations with business people, I am often astonished at what Big Tech and multinationals dare to do with data. This points to another asymmetry: the regulations we are talking about are much easier to enforce on academics, small charities and companies running far less complex operations. The popularity of their apps and the attendant network effects give the Big Tech giants a huge bargaining chip to force you and me to hand over our data. Academics obviously don't have this leverage, yet they can still feel at the mercy of frivolous accusations of breaching privacy rights.

The fixation on dangers and threats in the European regulatory bubble, some of them phantasmagorical (e.g. that people will commit suicide if we let them talk freely to LLM-based chatbots, as a petition circulating in Belgium suggested last year), has overshadowed the need for more discussion of plain enforcement and of the compliance and enforcement asymmetries that are hurting Europe's research and long-term economic prosperity.

What is the solution? What we should try to do first is to change the culture and mindset that have developed around the legislation, rather than tinkering with the legislation itself. Let's prioritize enforcement and the real risks. And let's make the lives of European scientists and researchers easier and more productive!
