Artificial Intelligence is a transversal discipline that can be applied to any field, and it is thus profoundly changing all aspects of society. It has significant potential to generate sustainable economic growth and to help us tackle the most pressing challenges of the 21st century, such as climate change, pandemics, and inequality.
However, the broad adoption of algorithmic decision-making over the last decade in a wide range of domains - from social media to the labor market - already poses significant societal challenges that must be understood and addressed.
The emergence of highly capable generative models exacerbates many of the existing challenges while creating new ones that deserve careful attention. The unprecedented scale and speed with which these tools have been adopted by hundreds of millions of people worldwide is placing further stress on our societal and regulatory systems.
The public conversation - most notably the recent open letters about the existential risks of AI - has raised some relevant concerns. However, it does not fully encompass the different perspectives of the broader research community, and it contributes to apocalyptic visions of AI. While nobody can guarantee that such visions are impossible, focusing on the hypothetical risk of a superhuman and uncontrollable AI may divert attention from the imminent and concrete risks posed by the use of AI today: the accumulation of power in the hands of the companies controlling AI systems, algorithmic discrimination, lack of veracity and transparency, violations of privacy and IP rights, loss of jobs, exploitation and manipulation of humans, an excessive carbon footprint, and digital exclusion, to name a few. Society would benefit from acknowledging the necessary complexity and nuance of these issues, and from designing concrete, actionable solutions to address them.
We appreciate recent efforts by leaders from the tech sector to engage with the European agenda on AI, but we urge them to focus on the tangible issues raised by AI systems today. More importantly, addressing these challenges requires the collaboration and involvement of the sectors of society most affected, together with the necessary technical and governance expertise.
Today, more than ever, Europe has a pivotal role to play in shaping how AI systems should be developed and deployed so that they respect European values and lead to progress. The need for a network of research laboratories independent of industry interests, devoted to open AI research focused on finding solutions to these risks, is increasingly apparent. Since its creation, ELLIS has been an active contributor to this goal. We invite you - scientists, engineers, policymakers, and all citizens - to join and support us in this effort.
The ELLIS Board
Note: While the text above takes into account personal discussions with members of the AI community, it does not claim to speak for all of ELLIS. Our motivation is to balance the public perception, which seems disproportionately dominated by fears of AI-related existential threats.
ELLIS (The European Laboratory for Learning and Intelligent Systems) is a pan-European AI network of excellence that focuses on fundamental science, technical innovation, and societal impact. Founded in 2018, ELLIS builds upon machine learning as the driver for modern AI and aims to secure Europe’s sovereignty in this competitive field by creating a multi-centric AI research laboratory. ELLIS wants to ensure that the highest level of AI research is performed in the open societies of Europe, and it follows a three-pillar strategy to achieve this: creating research programs connecting leading AI scientists in Europe, building ecosystems around ELLIS units and institutes, and attracting international talent through a pan-European PhD & Postdoc program.
View all members of the ELLIS Board here: https://ellis.eu/board
Information for media: https://ellis.eu/formedia
Contact for further information and interview requests: email@example.com