Human rights in the digital environment: Principles and trade-offs of algorithm design in social programs

On 14 and 15 November, I had the privilege of representing the Centre for Access to Justice & Inclusion (CAJI) in a series of meetings in Geneva that brought together representatives from civil society organisations, academia, companies, and governments to discuss issues related to human rights in the digital environment, with a particular focus on AI. Following these meetings, I will write a series of three blog posts on some key conclusions and trends in this area.

This first blog post is based on a workshop led by Responsible AI, Law, Ethics and Society. The training covered algorithm design, the technical aspects of artificial intelligence, and how to navigate policy recommendations. Attended by diverse stakeholders, including civil society organizations, companies, and members of Freedom Online Coalition states, the workshop focused on the importance of multistakeholder participation in the implementation of algorithmic decision-making.

The workshop exercise aimed to foster a participatory discussion around the development of a social policy that uses algorithmic decision-making to assess children's vulnerability to abuse. The exercise presented a model system involving two primary inputs: anonymous callers reporting concerns and a data system collecting additional information from various government sources. The system generates a numeric risk score between 0 and 100, with lower scores indicating little or no risk and higher scores triggering further investigation by the police.
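To make the mechanics of the exercise concrete, here is a minimal sketch of what such a scoring pipeline might look like in Python. The data sources, weights, and threshold are purely illustrative assumptions on my part, not the parameters of the workshop model.

```python
# Hypothetical sketch: each data source contributes a weighted signal,
# combined into a 0-100 risk score. Source names, weights, and the
# threshold are illustrative assumptions, not the workshop's values.

DATA_SOURCE_WEIGHTS = {
    "child_protection_records": 0.35,
    "health_records": 0.25,
    "criminal_records": 0.20,
    "social_benefits": 0.20,
}

def risk_score(signals: dict[str, float]) -> float:
    """Combine per-source signals (each in [0, 1]) into a 0-100 score."""
    total = sum(
        DATA_SOURCE_WEIGHTS[source] * min(max(value, 0.0), 1.0)
        for source, value in signals.items()
        if source in DATA_SOURCE_WEIGHTS
    )
    return round(100 * total, 1)

def needs_investigation(score: float, threshold: float = 70.0) -> bool:
    """Scores at or above the threshold trigger further investigation."""
    return score >= threshold

report = {"child_protection_records": 0.9, "health_records": 0.4,
          "criminal_records": 0.1, "social_benefits": 0.6}
score = risk_score(report)
print(score, needs_investigation(score))  # 55.5 False
```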

The system's functionality hinges on two crucial factors: the breadth of the data sets used and the level of aggressiveness in decision-making. Striking the right balance is paramount for policymakers aiming to identify at-risk children without generating false positives (incorrectly flagging children as at risk when they are not). In a role-play discussion, participants representing different stakeholders had to use this tool to determine both the data sets they would use and the level of aggressiveness they wanted for their algorithm.

Below, I set out some preliminary reflections on the need for a multistakeholder approach, and on other principles, at the design stage of algorithmic decision-making for social purposes.

Data sets and aggressiveness: a delicate dance

The exercise started with a provocation: to enhance the accuracy of this specific system, stakeholders should consider incorporating diverse data sets, encompassing information from child protection records, demographics, health records, criminal records, social benefits, employment records, credit scores, and even social media profiles. The breadth of data, when appropriately utilized, is key to honing the system's accuracy.

Simultaneously, the level of system aggressiveness plays a pivotal role. Dialling up the aggressiveness broadens the scope of flagged cases but risks including children who are not actually at risk. This decision requires careful consideration, especially in regions where police intervention might be traumatic for children. A local approach that factors in contexts of police repression and authoritarianism is therefore crucial when designing these kinds of policies.
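A toy example can make this trade-off visible. In the sketch below, the scores and at-risk labels are entirely synthetic, invented purely for illustration: a more aggressive setting (a lower threshold) misses fewer at-risk children but drags in more false positives.

```python
# Illustrative threshold sweep: how the "aggressiveness" setting trades
# missed at-risk children against false positives. The labelled cases
# below are synthetic, invented purely for this sketch.

cases = [  # (risk_score, actually_at_risk)
    (92, True), (85, True), (78, False), (71, True), (64, False),
    (58, False), (51, True), (44, False), (33, False), (20, False),
]

for threshold in (80, 60, 40):
    flagged = [(s, label) for s, label in cases if s >= threshold]
    false_positives = sum(1 for _, label in flagged if not label)
    missed = sum(1 for s, label in cases if label and s < threshold)
    print(f"threshold={threshold}: flagged={len(flagged)}, "
          f"false_positives={false_positives}, missed_at_risk={missed}")
```

On this synthetic data, a threshold of 80 misses two at-risk children, while a threshold of 40 misses none but flags eight cases, half of them false positives. Where each policymaker lands on that curve is exactly the ethical choice the workshop asked participants to make.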

Multistakeholder collaboration: navigating ethical values

A multistakeholder approach becomes essential in algorithm design. Social services, child protection organizations, civil liberties and human rights organizations, police, companies, and other state agencies may hold different perspectives on ethical values such as privacy, discrimination, equality, fairness, and the goal of rescuing at-risk children. Collaborative discussions throughout the algorithm's design process ensure diverse viewpoints are considered.

However, the challenge lies not only in defining ethical values but also in operationalizing them. Explainability and transparency are imperative in translating high-level AI mechanisms into practical designs. Policymakers must strive for a balance that incorporates the ethical considerations of various stakeholders.

Human values in decision-making: striking a balance

Ultimately, it is crucial to consider a human-in-the-loop model. While algorithms provide valuable insights, decisions must involve a human who can weigh potential consequences and navigate the nuances of each case. Striking a balance between incorporating individual perspectives and maintaining the integrity of social institutions is key.
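As a sketch of what this could mean in practice, the snippet below (with hypothetical names and threshold) routes high-scoring cases to a caseworker's queue rather than triggering intervention automatically.

```python
# Minimal human-in-the-loop sketch, assuming the algorithm only
# recommends and a caseworker decides. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    score: float
    notes: str  # local context the numeric score cannot capture

def route(case: Case, threshold: float = 70.0) -> str:
    """The algorithm only recommends; it never triggers action on its own."""
    if case.score >= threshold:
        return "queue_for_caseworker"  # a human weighs the consequences
    return "no_automated_action"

# Usage: a high score routes the case to a person, not straight to the police.
print(route(Case("A-103", score=81.0, notes="anonymous call; prior record")))
```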

In conclusion, the journey of designing an algorithm for child protection involves navigating a complex web of technical considerations, ethical values, and the practicalities of implementation. Policymakers must create mechanisms that explicitly address human rights, promote transparency, and ensure human involvement in the decision-making process. Only through a collaborative and informed approach can we strive for a system that protects vulnerable children without compromising fundamental values.

Sebastian Smart

Human rights in the digital environment #2: demystifying large language models and foundation models in AI

Human rights in the digital environment #3: navigating the interplay of global policy and technology in AI