EURA is proud to announce the 2nd edition of its annual conference, “Regulating UncertAInty”!


The Conference will be held on Zoom on the 8th and 9th of April 2021.



We will discuss the different questions involved in the regulation of artificial intelligence against intrinsically uncertain scenarios, and in particular:




Often, when debating the regulation of AI, ethics and law are brought together. Indeed, the social sciences influence one another: ethical considerations ground legal norms, even the most fundamental ones. However, each domain has its specificities. Philosophical considerations face no formal constraints within the academic debate.


The law, on the contrary, has a rigid hierarchy that defines the relationship between norms and determines how conflicts among opposing interests are to be resolved. Constitutional principles and fundamental rights may not be violated, affected, or limited in any way by norms of lesser rank.


For the same reason, philosophical frameworks that conflict with those fundamental rights and principles are not admissible in the policy debate aimed at developing regulatory solutions – whether soft or hard law – including in the field of AI.


Yet philosophy is often said to be capable of influencing the interpretation of norms and of filling the gaps left by the legal system, supplying further criteria that specify what is desirable and commendable in a given circumstance.


This complex interaction of law and philosophy, so often invoked in the field of technology regulation, deserves a more attentive theoretical debate and assessment – one too often neglected given the speed at which the policy debate on these matters proceeds. Such an assessment is necessary not only for its theoretical implications, but also – and primarily – for the consequences it would almost certainly bring in its application.




A fundamental question to be addressed when designing «good regulation» – regulation capable of taming technological innovation – concerns the level at which it should be adopted: namely, whether it should be developed at the international, European, or local level. The matter does not lend itself to simple, one-size-fits-all answers, and deserves to be addressed carefully from multiple perspectives.


Given the cross-border nature of technological development, deployment, and commercialization, and the social impacts deriving therefrom, legal systems will inevitably be called upon to deal with AI applications developed abroad. A unitary regulatory framework would ease and enhance the commercialization of products across the globe and reduce transaction costs. Yet uniform regulation may be difficult to achieve; it is better seen as a dynamic project complementary to more locally centred regulation, and may be more apt for certain profiles of regulation – namely technical standards – than for others.


Although soft law and hard law often work in tandem, the latter mainly operates through legal harmonization – such as the GDPR – and international agreements – such as the much-contested Privacy Shield. However, the wider the geographical and political scope of these instruments, the more limited the object of their convergence will be, often covering only elementary content that is insufficient for governing such complex matters.


To achieve «good regulation», it is thus fundamental to understand whether there is a specificity to the European approach, and to draw a possible distinction between the EU approach to AI and that of other Western countries, such as the US – a distinction that also reflects the different backgrounds the two have displayed in the field of bioethics.




There is no real doubt that regulation will and should occur at some point.


However, determining when to intervene is crucial. Early intervention may be preferred as a way to shape technological development and orient it towards perceived needs and anticipated gains. At the same time, it could rest on partial information and fail to account for uses and applications that later prove to be of primary importance and widespread adoption, ultimately misidentifying the object to be regulated.


Nonetheless, delayed intervention could prove even more detrimental, allowing path dependencies to form that are non-virtuous, if not outright infringing on individual or collective rights, in a way that is first tolerated and eventually plainly accepted. Market mechanisms, if left completely unrestrained, might lead towards similar outcomes, for individuals – even when formally informed – may find it difficult, if not impossible, to weigh the eventual negative effects against the present and apparent benefits offered by single applications.




What role can AI play in dealing with climate change and other global challenges? Could AI be a threat, rather than a solution, to coping with global crises such as climate change and pandemics? And if so, what does that mean for the ethics and regulation of AI?


Moreover, are AI ethics and regulation a postcolonial hobby of Western/Northern states, neglecting potentially more urgent issues at the global level? Or is it possible to do ethics of AI, and to build legal frameworks to regulate AI, in a way that is sensitive to the concerns of people in the Global South and developing countries? Can AI help these countries deal with their problems? And who will make the decisions about AI: a handful of powerful Western multinationals? Again: can we find democratic forms of regulation, and what would that mean in a global context of socio-economic, geopolitical, and cultural difference?