
Participation at the core: AI, ELSI and community engagement

Alondra Nelson, a sociologist, STS scholar and expert on AI policy and ethics, recently published a letter in Science proposing that artificial intelligence (AI) should adopt the Ethical, Legal, and Social Implications (ELSI) framework from genomics, an approach designed to put social concerns at the heart of technology governance. Nelson argues that meaningful community engagement must be central to ELSI for AI, not just peripheral. 

In this post, I use this letter as a springboard to deepen my reflections on potential linguistic and epistemic obstacles to public engagement with AI and to broaden discussions of human oversight, alignment and ethical decision making in AI.

ELSI: From genes to algorithms

The ELSI framework originated with the Human Genome Project in the 1990s, which set aside significant funding to study the implications of genetic research in society. This approach fostered a network of researchers who considered topics like equity, discrimination, and public trust, influencing fields beyond genomics, including synthetic biology, gene editing and nanotechnology.

Nelson’s proposal for applying ELSI to AI rests on four principles echoing the ELSI tradition: integration of ethical research, sustained funding, community engagement, and rigorous evidence-gathering. (I would also argue that these principles should extend to the rapidly expanding use of AI in genomics and generative biology.)

Nelson points to troubling incidents linked to AI: misguided therapy chatbots, medical misinformation, wrongful arrests based on facial recognition, and tragic outcomes from unregulated systems. She stresses that these are not isolated technical failures but signs that systematic safeguards and community voices are urgently needed. What can AI ELSI learn from genomic ELSI in this context?

Community engagement and AI

Unlike genomics, where community engagement often included town halls, citizen deliberations, public dialogues and partnerships with Indigenous organisations, AI’s attempts at engagement are still emerging. Examples exist, though, such as recent “Living with AI” seminars here in Nottingham, participatory design workshops, and efforts to involve communities in setting technology priorities. In Alaska’s Tribal Health System, ELSI-informed frameworks are being developed to ensure AI systems respect local values and needs. I bet there are many more examples. But there are also particular challenges to community engagement with AI.

Engaging with mathematical abstractions

As I have pointed out in a previous post, there is, in my view, a particular challenge for public and community engagement in AI. In genomics, the basic concepts, such as DNA as a ‘code’ or gene editing as ‘snipping’, are visually and conceptually graspable, for good or for ill. Social scientists can observe lab work, talk to scientists, and relay findings to lay audiences.

In AI, especially large language models (LLMs), the most critical decisions, like setting fairness constraints or choosing what the system should prioritise or optimise for, happen through mathematical processes such as “gradient descent optimisation”. This is a way for algorithms to ‘find the best solution’, but it operates in abstract mathematical spaces that are inaccessible to most people.
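For readers curious what ‘finding the best solution’ looks like in practice, here is a deliberately toy sketch of gradient descent on a simple one-dimensional problem (real AI systems do this over millions of parameters at once; the function and step size here are purely illustrative):

```python
# Toy illustration of gradient descent: minimise f(x) = (x - 3)^2.
# The algorithm repeatedly takes small steps "downhill" along the
# slope (gradient) until it settles near the lowest point, x = 3.

def gradient_descent(start, learning_rate=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)            # slope of (x - 3)^2 at the current x
        x = x - learning_rate * grad  # step against the slope
    return x

x_min = gradient_descent(start=0.0)
print(x_min)  # converges towards 3, the minimum
```

The point of the sketch is not the arithmetic but the gap it exposes: the values at stake in AI are set by tuning what such a procedure is asked to minimise, a choice that happens in a space few non-specialists ever see.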

Community engagement with AI therefore also means grappling with how human values (fairness, transparency, inclusion, justice and so on) can be translated into mathematical objectives. For instance, ELSI/AI engagement projects could involve communities in setting targets like ‘maximise fairness across groups’ instead of just ‘maximise accuracy’. Yet explaining these concepts and enabling meaningful input remains challenging, as it demands both technical literacy and innovative translation mechanisms. Overcoming the linguistic and epistemic barriers between technical communities and lay communities may prove a significant hurdle.
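To make the translation problem concrete, here is a hypothetical sketch of how ‘maximise fairness across groups’ might be encoded alongside accuracy in a training objective. All names, group labels and weights are illustrative assumptions, not drawn from any particular system:

```python
# Hypothetical sketch: folding 'fairness across groups' into an
# objective that a system could be trained to minimise. Lower is better.

def group_error(predictions, labels, groups, group_id):
    """Error rate for one demographic group (illustrative labels 'A'/'B')."""
    pairs = [(p, y) for p, y, g in zip(predictions, labels, groups) if g == group_id]
    return sum(p != y for p, y in pairs) / len(pairs)

def objective(predictions, labels, groups, fairness_weight=1.0):
    """Overall error plus a penalty for the gap in error rates
    between groups 'A' and 'B'. The fairness_weight is where a
    value judgement is smuggled into the mathematics."""
    overall_error = sum(p != y for p, y in zip(predictions, labels)) / len(labels)
    gap = abs(group_error(predictions, labels, groups, "A")
              - group_error(predictions, labels, groups, "B"))
    return overall_error + fairness_weight * gap
```

Notice that a single number, `fairness_weight`, decides how much accuracy the system will trade away for equal treatment. That is exactly the kind of value-laden choice that community engagement could, in principle, inform; but only if the choice is made visible and explained.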

Progress and possibilities

There are, however, promising signs on the horizon – as I found out while writing this post. Participatory frameworks and initiatives to engage the public in AI governance have been developed for some time, even before the advent of ChatGPT in 2022. Recent participatory AI experiments go further than simple consultation, involving, for example, patients with lived experience of a disease in co-designing algorithms, and even walking participants through the process of AI model building.

Some participatory frameworks now use conversational interfaces and visualisations to demystify parts of mathematical optimisation, inviting non-experts to make meaningful choices regarding things like privacy settings in public decision-making tools. OpenAI’s Democratic Inputs program has funded teams designing mechanisms for public participation in decisions about generative AI.

Still, most community engagement in AI remains focused on application and interface decisions, while the core mathematical design often stays in the hands of technical/industry experts. To make the ‘ELSI for AI’ vision real, new roles may need to be fostered, such as interdisciplinary translators who can bridge mathematical abstractions and human values for broader audiences. Examples that spring to my mind are Tim Lee and Sean Trott, but there are probably many more.

Looking forward

Nelson’s ELSI model offers valuable lessons, but I argue that AI’s unique challenges, especially private sector control and power, rapid development timelines, and mathematical complexity, require fresh ways to make community engagement robust and impactful.

Ideally, genuine community engagement and inclusion should extend beyond public consultation and dialogue and touch the core of how AI decisions are made. The real challenge is to ensure that community voices help shape not just the uses and interfaces of AI, but also the values deep within the mathematical heart of AI systems.

There are promising signs that people are taking up this challenge and I’d love to hear more from those who work in this space and have experience with emerging translational and participatory approaches. There are of course also developments all around us that threaten the very community values we want to embed in AI systems.

Selected further readings (there is much more out there!)

Ahrweiler, P., Späth, E., Siqueiros García, J. M., Capellas, B. L., & Wurster, D. (2025). Inclusive technology co-design for participatory AI. Participatory Artificial Intelligence in Public Social Services: From Bias to Fairness in Assessing Beneficiaries, 35-62. Available at: https://library.oapen.org/bitstream/handle/20.500.12657/99852/9783031716782.pdf?sequence=1#page=47

Birhane, A., Isaac, W., Prabhakaran, V., Diaz, M., Elish, M. C., Gabriel, I., & Mohamed, S. (2022, October). Power to the people? Opportunities and challenges for participatory AI. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (pp. 1-8). Available at: https://dl.acm.org/doi/pdf/10.1145/3551624.3555290

Dafoe, A., Garfinkel, B., Ovadya, A., Seger, E. and Siddarth, D. (2023). Democratising AI: Multiple meanings, goals, and methods, Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pp. 715-722. Available at: https://arxiv.org/abs/2303.12642

Hassan, S., Asad, S. M. N., Eslami, M., Mattei, N., Culotta, A., & Zimmerman, J. (2024, October). PACE: Participatory AI for Community Engagement. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 12, pp. 151-154). Available at: https://ojs.aaai.org/index.php/HCOMP/article/view/31610

Holistic AI. (2024). Human in the Loop AI: Keeping AI Aligned with Human Values, Holistic AI Blog. Available at: https://www.holisticai.com/blog/human-in-the-loop-ai

Hossain, S., & Ahmed, S. I. (2021). Towards a new participatory approach for designing artificial intelligence and data-driven technologies. arXiv preprint arXiv:2104.04072. Available at: https://arxiv.org/pdf/2104.04072

Hsu, Y. C., Verma, H., Mauri, A., Nourbakhsh, I., & Bozzon, A. (2022). Empowering local communities using artificial intelligence. Patterns, 3(3). Available at: https://www.sciencedirect.com/science/article/pii/S2666389922000228

Montreal AI Ethics Institute. (2023). Going public: The role of public participation approaches in commercial AI labs. Available at: https://montrealethics.ai/going-public-the-role-of-public-participation-approaches-in-commercial-ai-labs/

National Human Genome Research Institute. (2024). ELSI Research Areas and Sample Topics, ELSI Research Program. Available at: https://www.genome.gov/Funded-Programs-Projects/ELSI-Research-Program/research-areas

Nelson, A. (2025) An ELSI for AI: Learning from genetics to govern algorithms, Science, 370(6521), pp. 1234–1236. Available at: https://www.science.org/doi/10.1126/science.aeb0393

OECD. (2023) OECD Participatory AI Framework. Available at: https://oecd.ai/en/catalogue/tools/participatory-ai-framework

OpenAI. (2024). Democratic Inputs to AI. Available at: https://openai.com/index/democratic-inputs-to-ai/

Prabhakaran, V., & Martin Jr, D. (2020). Participatory machine learning using community-based system dynamics. Health and Human Rights, 22(2), 71. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC7762892/

World Economic Forum. (2025). AI value alignment: Aligning AI with human values. Available at: https://www.weforum.org/stories/2024/10/ai-value-alignment-how-we-can-align-artificial-intelligence-with-human-values/

Yang, J., Park, T. (2020). Methods for Inclusion: Expanding the Limits of Participatory Design in AI. Available at: https://partnershiponai.org/methodsforinclusion/

Some useful blog posts:

Nerlich, B. (2025, June 13). Public engagement with AI: Some obstacles and paradoxes. Making Science Public. Available at: https://blogs.nottingham.ac.uk/makingsciencepublic/2025/06/13/public-engagement-with-ai-some-obstacles-and-paradoxes/

Trott, S. and Lee, T. B. (2023). Large language models, explained with a minimum of math and jargon. Available at: https://seantrott.substack.com/p/large-language-models-explained

Trott, S. (2024). ‘Mechanistic interpretability’ for LLMs, explained. Available at: https://seantrott.substack.com/p/mechanistic-interpretability-for

Image: Pixabay

