AI and Human Rights: Understanding Implications and Corporate Responsibilities
Author:
Akaki Kukhaleishvili
Business and Human Rights Manager
UN Global Compact Network Georgia

In recent years, artificial intelligence has emerged as one of the most transformative technologies of our time, fundamentally reshaping how we live, work, and interact. AI enables the creation of systems that can learn, reason, and make decisions in ways that resemble human intelligence [1]. AI systems can be software-based, like voice assistants, search engines, and speech and face recognition tools, or integrated into hardware, such as robots, self-driving cars, and smart devices [2].
The rapid growth of AI and the increasing public access to related tools have sparked significant interest. AI is evolving quickly, offering considerable potential to enhance many areas of life, including scientific research, healthcare, and business. As industries transform, its benefits are becoming increasingly evident: AI has already helped businesses improve decision-making, enhance customer service, streamline processes, reduce costs, and drive innovation. By integrating AI into performance optimization, risk management, and overall business efficiency, companies can gain a competitive edge in today’s dynamic market [3].
However, as AI becomes more involved in making decisions that affect people’s daily lives, important concerns have emerged about privacy, bias, discrimination, and the loss of human control over key decisions [4]. These developments have raised questions about the extent to which companies are effectively assessing and addressing the risks to people and society associated with their generative AI products and services [5].
This article will explore the complex relationship between AI and human rights, focusing on how businesses can impact human rights, both positively and negatively. It will examine key regulatory frameworks, including the United Nations Global Compact, the UN Guiding Principles on Business and Human Rights (UNGPs), the European Union AI Act, and the Council of Europe Framework Convention on AI and Human Rights. Additionally, it will discuss corporate human rights due diligence obligations, highlighting how businesses can ensure their AI practices align with international human rights standards.
Relationship between AI and human rights
A common question in discussions about the relationship between human rights and artificial intelligence is how AI impacts fundamental rights [6]. Human rights protect the dignity, freedom, and equality of individuals. As AI becomes more embedded in society, ensuring these rights are upheld within AI systems is crucial. AI is increasingly influencing sectors such as law enforcement, healthcare, and employment, making it essential to develop and regulate these technologies in ways that respect and protect fundamental rights. Nearly every AI application is linked to human rights, either directly or indirectly. For example, AI in public transport can affect rights such as freedom of movement, privacy, non-discrimination, and recognition of legal personality [7]. As AI’s role in society expands, addressing these concerns becomes even more critical to ensuring ethical and fair technological development.
AI can improve decision-making by detecting discrimination or monitoring harmful content like hate speech. For example, social media platforms use AI to remove harmful content, and hiring tools help reduce bias in recruitment [8]. However, AI also poses risks, such as privacy concerns, limits on freedom of expression, and the potential to reinforce bias. Some hiring algorithms have been found to favour certain groups over others, and facial recognition technology, while useful in law enforcement, has raised concerns about mass surveillance and wrongful arrests. Additionally, automation is reshaping the job market, replacing roles like customer service agents and cashiers, creating a need for workers to learn new skills [9]. As AI advances, balancing its benefits with human rights protection is crucial to ensure fairness and accountability.
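To make the hiring-bias concern concrete, auditors often start with a simple selection-rate comparison such as the “four-fifths rule” drawn from US employment-selection guidance. The sketch below is a minimal illustration of that check; the data, group labels, and 0.8 threshold are assumptions for the example, not figures from any system mentioned in this article.

```python
# A minimal sketch of a disparate-impact check for a hiring model,
# using the "four-fifths rule" of thumb. The outcomes below are
# hypothetical, not data from any real system.

from collections import Counter

# Hypothetical (applicant_group, was_selected) outcomes from a
# screening model.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_a", True), ("group_b", False), ("group_b", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

selected = Counter(g for g, ok in outcomes if ok)
total = Counter(g for g, _ in outcomes)
rates = {g: selected[g] / total[g] for g in total}

# Disparate impact ratio: lowest selection rate divided by highest.
ratio = min(rates.values()) / max(rates.values())
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential adverse impact; flag for human rights review.")
```

A ratio well below 0.8 does not prove discrimination on its own, but it is a widely used signal that a system deserves closer human review.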
The growth and maturation of responsible AI programs have laid important foundations for addressing the risks to people and society associated with AI. While some companies incorporate a human rights lens, there is a growing need for more consistent integration of risk management processes, such as corporate human rights due diligence, into responsible AI policies. As AI adoption grows, companies must identify and address potential human rights risks and ensure their policies align with frameworks like the UN Guiding Principles on Business and Human Rights. This will help companies manage these risks more effectively and promote ethical AI development.
Existing regulatory frameworks
International regulatory frameworks increasingly require businesses to protect human rights, particularly as they adopt and implement AI systems. These frameworks demand companies assess human rights risks and ensure their operations align with fundamental rights protections. Notable among these are the United Nations Global Compact, launched in 2000, and the United Nations Guiding Principles on Business and Human Rights, adopted in 2011.
The UNGPs marked a significant step in global human rights governance, emphasizing businesses’ roles in upholding human rights alongside state actors. These principles have become a foundational standard for regulating corporate responsibility in human rights, influencing both international and national regulations.
The European Union AI Act [10] represents a key regulatory framework that requires businesses to consider the risks AI systems pose to human rights, particularly when implementing AI technologies. The Act underscores the importance of aligning AI development with protections for fundamental rights, ensuring that businesses do not disregard the potential negative impacts of AI on individuals’ privacy, dignity, and autonomy.
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law is the first international legally binding treaty in this area. It ensures that all activities throughout the AI system lifecycle align with human rights and democratic principles. Unlike traditional regulatory approaches, the Convention is technology-neutral, focusing on ethical principles rather than specific technological solutions. It highlights essential fundamental principles such as human dignity, privacy protection, equality, transparency, accountability, and safe innovation. The Convention also requires companies to conduct risk and impact assessments on AI systems, ensuring that potential risks to human rights, democracy, and the rule of law are identified and mitigated. Authorities are empowered to intervene in situations where AI applications could lead to significant harm, including the introduction of bans or moratoria on certain AI systems [11].
As AI systems become more integrated into decision-making, concerns about their potential to infringe on human rights have increased. Businesses must therefore understand their legal obligations under international human rights law and implement human rights due diligence, assessing and addressing AI-related risks proactively, during development rather than after harm occurs. Comprehensive governance that prioritizes human rights is essential, and collaboration among governments, businesses, international organizations, and civil society is necessary to ensure AI systems respect and protect human rights.
Corporate human rights due diligence responsibilities for the use of AI
Companies developing and deploying AI systems must conduct human rights due diligence in line with the UN Guiding Principles on Business and Human Rights. The UN General Assembly has highlighted these principles as particularly relevant for addressing AI-related human rights challenges [12]. Effective human rights due diligence for AI requires companies to take several key steps. First, companies must identify and assess their AI systems’ potential human rights impacts. They must then integrate these findings into their operations, which might involve incorporating human rights standards into partner selection processes, particularly when working with entities in countries with poor human rights records. For instance, companies selling facial recognition technology should apply strict controls when potential partners include security agencies in authoritarian regimes [13].
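As a concrete illustration of the “identify and assess” step, the sketch below models a simple AI human rights risk register in Python. The field names, the 1-to-5 scales, and the example entries are assumptions made for this example; the UNGPs do not prescribe any particular data format.

```python
# A minimal sketch of an AI human-rights risk register, loosely
# following the "identify and assess" step of due diligence. Field
# names, scales, and entries are illustrative assumptions, not an
# official UNGP schema.

from dataclasses import dataclass


@dataclass
class AIRisk:
    system: str           # the AI system under assessment
    affected_right: str   # e.g. privacy, non-discrimination
    severity: int         # 1 (low) to 5 (severe), per internal scale
    likelihood: int       # 1 (rare) to 5 (near-certain)
    mitigation: str       # planned preventive or corrective measure

    def priority(self) -> int:
        # Simple severity-times-likelihood score used to rank risks.
        return self.severity * self.likelihood


register = [
    AIRisk("resume-screening model", "non-discrimination", 4, 3,
           "quarterly disparate-impact audit; human review of rejections"),
    AIRisk("face recognition API", "privacy", 5, 2,
           "contractual use limits; no sales to surveillance agencies"),
]

# Highest-priority risks surface first for due diligence follow-up.
for risk in sorted(register, key=AIRisk.priority, reverse=True):
    print(f"{risk.priority():>2}  {risk.system} -> {risk.affected_right}")
```

Even a lightweight register like this gives the later steps (action, tracking, communication) a shared reference point for which risks were identified and what was promised in response.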
Action is the next critical step. Companies should implement preventive measures such as training AI developers on human rights implications, forming interdisciplinary teams with diverse perspectives during system design, and involving disability rights organizations when creating human-machine interfaces to ensure accessibility. Microsoft’s AI ethics review board exemplifies this approach by evaluating products before market release [14].
Tracking effectiveness requires ongoing monitoring of AI systems and their impacts. This means regularly reviewing whether mitigation measures prevent human rights violations across all applications and updating approaches as new risks emerge or business environments change in deployment regions. Amazon’s regular audits of its Rekognition facial recognition software demonstrate this ongoing responsibility.
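One way to operationalize this tracking is sketched below: a monitor that compares each new value of a chosen human rights metric (an appeal-success rate, an error rate by demographic group, or the impact ratio from the earlier sketch) against a rolling baseline and flags degradation. The window size and 10% tolerance are assumed values for illustration, not standards drawn from any framework cited here.

```python
# A minimal sketch of drift tracking for a monitored human-rights
# metric. The four-period window and 10% tolerance are illustrative
# assumptions an organization would set for itself.

from collections import deque


class MetricMonitor:
    def __init__(self, window=4, tolerance=0.10):
        self.history = deque(maxlen=window)  # recent metric values
        self.tolerance = tolerance

    def record(self, value):
        """Return True if value degrades >tolerance vs the rolling baseline."""
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(value)
        if baseline is None:
            return False  # no baseline yet on the first observation
        return value < baseline * (1 - self.tolerance)


# Example run over hypothetical quarterly fairness scores.
monitor = MetricMonitor()
for quarter, score in [("Q1", 0.92), ("Q2", 0.90), ("Q3", 0.91), ("Q4", 0.78)]:
    if monitor.record(score):
        print(f"{quarter}: score {score} dropped sharply; trigger review.")
    else:
        print(f"{quarter}: score {score} within tolerance.")
```

The point is not the specific statistic but the discipline: a numeric trigger that routes a drifting system back into human review before harm accumulates.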
Finally, companies must communicate their processes and findings. While external publication of risk assessments isn’t always legally mandated, transparency about AI development in high-risk areas builds trust and aligns with the growing convergence of national and international frameworks requiring human rights impact assessments [15]. Google’s AI principles and transparency reports represent industry best practices in this area.
By following these due diligence steps, companies can better ensure their AI systems respect human rights while still delivering innovation and value across markets. The business case remains compelling: AI can streamline processes, increase efficiency, reduce costs, and drive innovation [16], with a particularly significant economic impact on productivity and operational expenses. At the same time, as automation reshapes the job market and displaces roles such as customer service agents and cashiers, workers will need support to learn new skills [17]. Balancing these benefits with human rights protection remains essential to ensuring fairness and accountability.
In conclusion, as AI continues to evolve and reshape industries, its impact on human rights cannot be overlooked. Companies must take proactive steps to ensure their AI systems respect and protect fundamental rights. Key recommendations include implementing robust human rights due diligence processes, conducting thorough risk assessments, and integrating human rights considerations into the AI development lifecycle. Businesses should also embrace transparency, regularly monitor the impacts of their AI systems, and align with international frameworks such as the UN Guiding Principles on Business and Human Rights and the European Union AI Act. By prioritizing human rights, businesses can foster innovation while mitigating potential harms and ensuring AI contributes positively to society.