Exploring uses and abuses of artificial intelligence

  • Briefing

February 2025 

Overview

In less than half a century, the application of electronic tools, systems, devices and resources to generate, retrieve, manipulate, transmit or receive information has become ubiquitous. Online technologies have developed the capacity to bring together an inordinate number of smart devices and systems to gather and analyse data on most aspects of everyday life. Digitisation of information and communications technologies (ICTs) has transformed communications networks, business models, and processes, affecting how people learn, work, travel, connect, use their time and lead their lives.

In combination, these technological advances in data science and artificial intelligence (AI) present unprecedented challenges for contemporary societies and for researchers from across disciplines and sectors. They have produced opportunities for individuals and whole societies, extending from climate change mitigation, responses to global health scourges and medical therapies to intergenerational connectivity, urban regeneration and e-governance. At the same time, since many AI applications, especially those related to artificial general intelligence, are designed to act as autonomous and self-learning agents capable of adapting to and learning from their environment, they have also created risks associated with loss of human autonomy, online abuse, threats to children’s safety and to national and international security, requiring innovative strategies to combat cybercrime and cyberwarfare.

Science, mathematics, engineering and computing are considered the core disciplines comprising the fields of data science and AI. For social scientists searching for solutions to global societal challenges, big data and AI have created opportunities to hone their expertise in monitoring and assessing the impacts of digital innovations on society and in shaping governments’ policy responses. Social and human scientists from law, criminology, public administration, political economy, human geography, sociology, social policy, psychology, anthropology and philosophy have learnt to work in symbiosis with scientists from different research cultures, whether they be mathematicians, engineers, physicists, medical researchers, biotechnologists, biologists or environmentalists, to address global societal challenges.

Partnerships have been formed between public and private sectors, civil society and households in growing recognition of the need for a consciously multidisciplinary and cross-sectoral approach to ensure that technological innovation is relevant and adapted to the societies in which it is introduced. The UK’s Alan Turing Institute for data science and AI exemplifies the ways in which collaborative partnerships between scientists in universities, businesses, and public and third-sector organisations are advancing world-class theoretical and applied research. The Institute aims to ‘harness data science and AI technologies for the public good’ and to apply them to national and global challenges. Its public policy programme works alongside policymakers to explore how data-driven public service provision and policy innovation might solve long-running ‘wicked’ policy problems, and to establish the ethical foundations for the use of data science and artificial intelligence in policymaking.

While this section of the Academy of Social Sciences’ IAG briefings focuses on selected aspects of AI, the contributors share with authors in other sections an interest in understanding how the research−policy interface operates in different societal contexts, and what governments might learn from experiences in other jurisdictions. Briefings addressing the opportunities and risks associated with AI invite researchers and policymakers to reflect on the policy challenges resulting from the ubiquity of AI, and on the effectiveness of policy responses being formulated across sectors, disciplines and cultures.

Key evidence

A plethora of evidence illustrates the impacts of the digitisation of society on everyday life. The focus here is on a few of the recurring themes identified across the series of policy briefings.

  • Regulation of AI: The European Union’s AI Act, which came into force in August 2024, provides the first comprehensive legal framework for AI worldwide, ensuring that AI systems respect fundamental rights, safety and ethical principles. Frameworks and rules have been established for harnessing the beneficial potential of AI technologies in a sustainable way, and for guiding their development and use while addressing associated risks, most notably the risk of autonomous learning machines acquiring a moral status.
  • AI as a tool for improving governance: By enabling important innovations in the field of regulation, AI has increased the effectiveness of governance mechanisms, while raising awareness that AI’s learning strategies are prone to confirmation bias, that regulation should not be mistaken for ethics, and that AI accountability requires societal input. In the field of urban planning, AI increases control over life in cities; it serves as a technical tool and assistant for planners, and can serve as a digital twin providing planners with alternative options.
  • AI as a tool for improving healthcare: AI, robotics and machine learning were found to have played a vital role in responding to the epidemiological, biomedical and socioeconomic challenges raised by the COVID-19 pandemic. In Estonia, more than 95% of healthcare records and medicines have been incorporated into the e-health system: nearly 99% of Estonian health data is digitised, and nearly 100% of patients have digital health records, enabling swift, coordinated and personalised care across the entire healthcare network.
  • AI as a source of global risks: The World Economic Forum’s 2024 Global Risks Report ranked disinformation as the top immediate threat facing the world, whether in relation to climate change, environmental degradation or human displacement and migration. Research into data storage shows that around 4% of global greenhouse gas emissions are driven by digital activities. The data industry is estimated to account for more carbon emissions than the automotive, aviation and energy sectors combined.

Policy context

Social science research has drawn attention to the importance of the socio-cultural embeddedness of social processes in the adoption of technical innovations in different societies to explain disparities in the take-up of digital technologies. Governments − national, regional and local − differ in their capacity and willingness to embrace new technologies and fund innovative research, just as their populations diverge according to age, gender, socio-economic and ethnic characteristics in terms of digital competence, resources, acceptance and take-up of technological advances.

Although the digital revolution is global by definition, its impact has been unevenly spread across continents: whereas 95% or more of the populations in Northern and Western Europe and North America are internet users, the lowest rates are found in Eastern, Middle and Western Africa. Most children and young people live in the global south, where digital resources and rights are both limited and unequally distributed. By inexorably changing how countries are governed and how people live, learn, work and use their time, ‘disruptive technologies’ associated with the digitisation of societies deepen the digital divide within and between societies.

Recommendations

Despite contested conceptual, socio-cultural and disciplinary positions among the researchers producing the evidence, incompatible ideological standpoints among the policy audiences that research seeks to inform, and ideational changes within populations, a common set of recommendations can be identified to ensure greater international cooperation between socio-cultural and political communities in:

  • developing ethical foundations for the use of data science and artificial intelligence in policymaking capable of anticipating and adapting to the impacts of AI on political systems;
  • promoting the public good, allowing AI to work for the benefit of societies by intensifying efforts to overcome digital divides and to invest in areas with proven positive outcomes, such as healthcare;
  • establishing, implementing and keeping under review regulations designed to protect society from harmful social and political consequences, while dealing with issues of ownership, management and control of the human−machine interface, and mitigating associated risks.

This briefing was written by Linda Hantrais FAcSS.