
Recordings of the sessions are now available by clicking on the session you are interested in.

ISC and its partners organised the 9th edition of the Science Summit around the 78th United Nations General Assembly (UNGA78) on 12-29 September 2023.
The role and contribution of science in attaining the United Nations Sustainable Development Goals (SDGs) was the central theme of the Summit. The objective was to develop and launch science collaborations that demonstrate global science mechanisms and activities supporting the attainment of the UN SDGs, Agenda 2030 and Local2030. The meeting also prepared input for the United Nations Summit of the Future, which will take place during UNGA79 beginning on 12 September 2024.
Monday, September 25 • 12:00pm - 2:00pm
[VIRTUAL] Responsible AI = AI of the future (251201)

As algorithmic systems increasingly shape decisions that affect people's lives, the need to identify and address algorithmic bias has become urgent. Socio-technical approaches to algorithmic audits recognize that algorithms are not neutral technical tools, but are embedded in complex social and cultural contexts. They seek to identify and address the ways in which algorithms can perpetuate or exacerbate existing social inequalities.

This presentation will describe approaches used in socio-technical algorithmic audits and how they can help to tackle bias in algorithms and promote people's rights. We will discuss the importance of analyzing the broader social factors that shape the development and deployment of algorithms, and the need to engage with stakeholders who may be impacted by algorithmic decisions. We will also explore the role of interdisciplinary collaboration in conducting these audits.

One key approach to socio-technical algorithmic audits is the use of critical race theory. This theoretical framework can help to identify the ways in which algorithms may perpetuate systemic racism or other forms of social inequality. For example, algorithms used in criminal justice systems may disproportionately affect Black people, as they may be trained on biased datasets or may reflect underlying social biases. By using critical race theory to analyze these issues, we can identify strategies for mitigating these biases and promoting fairness and accountability.
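To make "disproportionately affect" concrete, the sketch below (using invented data, not any dataset from the presenters' audits) computes two widely used group-fairness measures for a binary decision system: the demographic parity difference and the disparate-impact ratio checked against the common "four-fifths rule".

```python
# Minimal sketch: group-fairness measures on hypothetical audit data.
# Each record is (group, decision), where decision=1 is the favourable
# outcome (e.g. scored "low risk"). All numbers are made up.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def selection_rate(records, group):
    """Share of favourable decisions within one demographic group."""
    decisions = [d for g, d in records if g == group]
    return sum(decisions) / len(decisions)

rate_a = selection_rate(records, "group_a")  # 3 of 4 -> 0.75
rate_b = selection_rate(records, "group_b")  # 1 of 4 -> 0.25

# Demographic parity difference: 0 would mean equal selection rates.
parity_diff = rate_a - rate_b

# Disparate-impact ratio: values below 0.8 fail the "four-fifths rule"
# often used as a red flag in audits.
impact_ratio = rate_b / rate_a

print(parity_diff, round(impact_ratio, 3))
```

Such metrics are only the quantitative starting point of a socio-technical audit; interpreting why the rates diverge requires the contextual analysis described above.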

Another important approach to socio-technical algorithmic audits is participatory design. This approach involves engaging with stakeholders in the design and implementation of algorithms, to ensure that they align with the values and needs of the communities they serve. By involving stakeholders in the decision-making process, we can identify potential biases or other forms of unfairness and develop strategies to address them.

An example of this is our algorithmic audit of VioGén, a system used by the police in Spain to determine the risk level of victims of gender violence and assign a corresponding level of protection. Throughout the audit we were in constant contact with Fundación Ana Bella, a foundation of gender-violence survivors advocating for women's rights, to ensure their needs were addressed and that the algorithm's recommendations actually make victims safer.
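For a protection system of this kind, one concrete audit check (sketched below with invented data, not actual VioGén figures or methodology) is whether error rates differ across subgroups: a false negative, a genuinely high-risk victim scored as low risk, is the costliest error, so large gaps in false-negative rates between groups signal unequal protection.

```python
# Hypothetical audit check: compare false-negative rates across subgroups.
# Each case is (group, truly_high_risk, predicted_high_risk); all invented.
cases = [
    ("urban", True, True), ("urban", True, True), ("urban", True, False),
    ("rural", True, False), ("rural", True, False), ("rural", True, True),
]

def false_negative_rate(cases, group):
    """Among truly high-risk cases in a group, share the system missed."""
    positives = [(t, p) for g, t, p in cases if g == group and t]
    misses = sum(1 for t, p in positives if not p)
    return misses / len(positives)

fnr_urban = false_negative_rate(cases, "urban")  # 1 miss of 3
fnr_rural = false_negative_rate(cases, "rural")  # 2 misses of 3

print(round(fnr_urban, 3), round(fnr_rural, 3))
```

Whether a given gap is acceptable is not a purely statistical question, which is why stakeholder input of the kind Fundación Ana Bella provided is part of the audit itself.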

In our work we contribute to SDG Goal 9 by building accountability tools that show innovation does not have to come at the expense of people's rights and established regulations. We also tackle issues connected to Goal 5, as most automated decision-making (ADM) systems discriminate against women and other vulnerable communities. Goal 10 is therefore also very relevant: we want to ensure that people have the same chances when interacting with algorithms, and that we can all trust ADM systems regardless of skin colour, gender, location or background. Finally, we support Goal 16, as automation will shape our decision-making processes in the coming years, and without accountable AI, rights, guarantees and institutions will suffer.

Speakers

Gemma Galdon Clavell

Dr. Gemma Galdon-Clavell is a leading voice on technology ethics and algorithmic accountability. She is the founder and CEO of Eticas Consulting, where she is responsible for leading the management, strategic direction and execution of the Eticas vision.

Convenors
Gemma Galdon Clavell

Monday September 25, 2023 12:00pm - 2:00pm EDT
ONLINE