UN Global Compact Network Germany on AI and Human Rights

As artificial intelligence continues to reshape industries and societies, its impact on human rights, ethics, and corporate responsibility is becoming a crucial discussion point. AI holds immense potential to drive innovation, efficiency, and economic growth, but it also brings significant risks—including data privacy concerns, algorithmic bias, and the misuse of AI for human rights violations.

To address these challenges, the UN Global Compact Network Germany has taken a leadership role in guiding businesses on the responsible and ethical deployment of AI. In this interview, Richard Hülsmann, Head of Social and Governance Programmes, and Sarah Hechler, Advisor for Human Rights and Labour Standards at UN Global Compact Network Germany, discuss their 2023 report, Artificial Intelligence and Human Rights: Recommendations for Companies. They explore key human rights risks in AI, strategies for corporate due diligence, and the future intersection of AI and business ethics. 

Could you share insights into the UN Global Compact Network Germany’s work on AI, and how AI can support the achievement of the Sustainable Development Goals? 

As highlighted by the Forward Faster initiative of the UN Global Compact, we are currently on track to meet only 17% of the Sustainable Development Goals targets by 2030. To achieve these goals, a fundamental shift and transformation across all sectors of the global economy will be necessary. Artificial intelligence, with its potentially profound impact on businesses and society, can help create such a shift. At the same time, AI is not without its risks and, under the wrong conditions, could contribute to a further setback for SDGs such as Gender Equality (SDG 5). The UN Global Compact Network Germany explored the potential human rights risks associated with AI in a 2023 publication.

The report Artificial Intelligence and Human Rights: Recommendations for Companies provides valuable insights into responsible AI deployment. Could you walk us through the key objectives of this report, its development process, and why it is essential for businesses today? 

The report was written in cooperation with researchers and experts from TU Munich. During the writing process, we consulted German companies on their experiences, processes, and lessons learned regarding AI and human rights. Overall, the publication explores how the development and implementation of AI will affect corporate human rights due diligence obligations, and it provides recommendations on how companies can integrate AI-specific risks into their existing corporate due diligence processes.

The report identifies various human rights risks associated with AI deployment. Could you elaborate on the most pressing risks companies should be aware of when implementing AI systems? 

Understanding the human rights risks associated with AI requires first examining the general characteristics and potential vulnerabilities of AI systems:

  1. AI systems rely on large quantities of data. Collecting this amount of data can raise data protection issues. 
  2. AI systems are often complex and opaque. Every new data point can cause the AI model to adapt, making it hard to foresee and understand how an AI reaches a conclusion. 
  3. AI systems are dependent on data and thereby vulnerable to unintended errors and algorithmic biases. 
  4. AI systems rely on standardisation to draw conclusions, which leads to more far-reaching impacts and can cause them to overlook outliers. 

The resulting human rights risks can be grouped into three categories:  

  1. Misuse of AI to commit human rights violations 

Companies that develop or use AI may be indirectly complicit in human rights violations, e.g., by providing software solutions to state actors that use AI to target ethnic minorities.  

  2. Insufficient consideration of human rights in the design of AI solutions 

One example would be a company that collects personal data from its workers to train and develop an AI system without the consent or knowledge of those persons. 

  3. Adverse human rights impacts of the use of AI 

The vulnerability of AI to algorithmic biases can cause the use of AI to (inadvertently) discriminate against certain groups. One example might be an AI that analyses the CVs of current employees and thereby learns to favour white male applicants; the sketch below shows how such a bias can emerge from historical data.  
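
To make this mechanism concrete, here is a minimal, purely illustrative Python sketch (the synthetic data and feature names are invented for this example, not drawn from the report) of how a screening model trained on biased historical hiring decisions can reproduce that bias through a proxy feature, even when the protected attribute itself is excluded from the inputs:

```python
# Minimal illustrative sketch -- synthetic data, hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, n)              # protected attribute (0 = majority)
skill = rng.normal(0.0, 1.0, n)            # genuinely job-relevant signal
proxy = group + rng.normal(0.0, 0.3, n)    # e.g. a postcode correlated with group

# Historical labels: past hiring decisions favoured the majority group.
hired = (skill + 1.0 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.8

# Train on skill and proxy only -- the protected attribute is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# The model picks up the proxy and keeps recommending the majority group
# at a markedly higher rate: the historical bias is reproduced, not removed.
```

Because the proxy correlates with group membership, simply excluding the protected attribute does not remove the learned bias.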

How can companies effectively integrate AI-specific human rights risk assessments into their existing due diligence processes? Are there particular methodologies or tools you recommend? 

A central aspect of accounting for AI risks is to consider all three categories listed above in the human rights risk analysis. As a second step, companies should develop corresponding mitigation measures and, lastly, communicate their commitments and actions.  

  1. To assess the likelihood of misuse, companies should evaluate their business partners. Close links with security organisations and state institutions in authoritarian regimes could, for example, constitute a risk factor and warning sign. Potential measures include screening business partners or integrating protective clauses into contracts and codes of conduct. 
  2. Insufficient consideration of human rights in the design of AI solutions can be addressed by assessing the risks associated with data input, with a particular focus on participation and information rights. To that end, companies might consider training their development teams and introducing review processes for data collection.  
  3. Adverse human rights impacts should be considered in a prior risk analysis by, for example, examining potential sources and impacts of AI bias. One means of addressing potential biases is to create interdisciplinary and diverse teams that train and evaluate the AI. Moreover, companies can engage independent, external parties to audit their AI systems; a minimal example of one such audit check follows this list.  
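
As one hypothetical illustration of what such an audit might compute (the decision log, group labels, and threshold below are invented for the example), an auditor could check a system's decisions against a disparate-impact ratio, often called the "four-fifths rule":

```python
# Hypothetical audit check -- data, group labels and threshold are invented.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Compare each group's selection rate to the best-treated group's rate."""
    rates = selection_rates(decisions)
    reference = max(rates.values())
    return {g: (rate / reference, rate / reference >= threshold)
            for g, rate in rates.items()}

# Illustrative decision log from an AI screening tool
log = ([("A", True)] * 60 + [("A", False)] * 40
       + [("B", True)] * 35 + [("B", False)] * 65)
for group, (ratio, ok) in four_fifths_check(log).items():
    print(f"group {group}: impact ratio {ratio:.2f} ->",
          "ok" if ok else "potential disparate impact")
```

A ratio below the threshold does not by itself prove discrimination, but it flags a disparity that warrants closer human review.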

Are there particular industries where AI poses unique human rights challenges? How should companies in these sectors approach AI deployment differently? 

AI is likely to affect all sectors. Particularly sensitive contexts with a higher likelihood of severe human rights impacts include the medical and security sectors. AI has a wide range of applications in the healthcare system, particularly in diagnostic support, personalised medicine, drug development, and the management of health data. The potentially adverse impacts on the right to health make this a high-risk context. The security sector, on the other hand, has been linked to AI systems that are restricted or prohibited under the EU AI Act, such as 'real-time' facial recognition systems in public spaces.  

Looking ahead, what emerging trends do you foresee at the intersection of AI and human rights? How can companies prepare to address these future challenges? 

We are likely to see AI expand into several new areas, sectors, and applications in the coming years. As such, AI and its associated risks will not be limited to businesses that develop AI themselves. Companies will first need to stay abreast of these developments and, second, integrate the human rights risks associated with AI into their existing corporate due diligence processes.
