Comment: New AI regulation in the EU seeks to reduce risk without assessing public benefit

In a comment in the journal Nature Medicine, Barbara Prainsack and Nikolaus Forgó discuss the European Union's new AI Act. They argue that the regulation focuses on risk without considering benefits, which could hinder the development of new technology while failing to protect the public.

The European Union's new AI regulation pursues a risk-based approach to regulating AI applications: the higher the risk, the stricter the rules. It is precisely this approach that is now at the centre of the discussion. Barbara Prainsack and Nikolaus Forgó, directors of the research platform "Governance of Digital Practices" at the University of Vienna, argue that a risk-based approach has clear advantages over one-size-fits-all alternatives, which over-regulate at the low-risk end of the spectrum while missing important problems at the other end. However, the risk-based approach also raises serious practical and political issues. Besides the difficulty of assessing risks ex ante in a rapidly moving field, risk classification based on preliminary self-assessment is likely to exacerbate the problem of developers deliberately misclassifying their innovations to avoid stringent requirements. Additionally, the AI Act could create competitive advantages for companies with sufficient economic power to legally challenge high-risk classifications.

Achieving objectives with data solidarity

The authors also criticise the Regulation for focusing solely on risks and not evaluating the value that different technology applications can create for the public. Such a purely risk-based approach misses the opportunity to prioritise innovation that creates significant public value over technology that merely increases commercial profits. A data solidarity perspective, the authors argue, would lead to better regulation while simultaneously strengthening European competitiveness and the common good.

The authors disagree with the claim that regulation stifles innovation. While ambiguous or overzealous regulation hurts innovation, clear rules about what technology developers can and cannot do support innovation rather than hinder it. Regulation should protect people from harm and support technology development that yields public benefits. In the context of AI, this means that the EU needs to go beyond its well-worn frame of fair market competition. What is needed instead is for the EU regulator to establish effective democratic control over digital technologies: the creation and operation of these technologies must be brought back under the effective control of the people. Publicly owned infrastructures and technologies, such as publicly owned foundation models, would strengthen democratic control over AI. Furthermore, the EU should invest in education, research and knowledge transfer to bolster European technical competitiveness. Without such investments, the development and ownership of critical technological infrastructure will remain in the hands of the private sector, which will not only lock in the public sector's dependence on tech giants but also limit the possibility of effective regulation.

 

Publication in Nature Medicine:
Barbara Prainsack and Nikolaus Forgó: New AI regulation in the EU seeks to reduce risk – but not increase public benefits. Nature Medicine. DOI: 10.1016/S2589-7500(22)00189-3

Press release University of Vienna