The initiative reveals that only 4 of the 15 companies targeted include facial recognition technology in their human rights policies.
- Led by Candriam, a coalition of 21 investors, including Aviva Investors, Domini Impact Investments LLC and Robeco, has accelerated the collaborative engagement initiative launched in 2021 on the risks posed by facial recognition technology.
- It calls for the immediate suspension of sales of these services to law enforcement until proper regulation is in place to ensure that this technology is used responsibly.
Candriam, a global multi-specialist asset manager and a leader in sustainable and responsible investing, has released an update to its Facial Recognition Technology (FRT) investment initiative, one year after its launch. The report details the commitments companies have made to address the human rights risks posed by this technology. With the support of 20 other investors, including Aviva Investors, Domini Impact Investments LLC and Robeco, Candriam interviewed 15 companies involved in FRT to understand how they assess, manage and mitigate the risks this technology poses to human rights.
The report raises four major areas of concern: racial and gender bias, the limited reliability of the technology, breaches of personal data protection, and the misuse of that data.
The main points to remember:
- With regulations struggling to keep up with the evolution of technology, it is essential for companies to go beyond the legal framework and focus on ethics. Companies that communicate proactively and transparently about artificial intelligence (AI) ethics and a responsible approach to FRT send a positive signal about the attention they give to these topics, one that is generally reflected across all of their activities.
- Companies must both put in place dedicated governance to address human rights risks and publish a detailed policy on the subject that covers the use of AI and FRT.
- Categorization technology is a vehicle for potential discrimination and human rights violations and should therefore be avoided at all costs.
- Companies are encouraged to avoid selling FRTs to law enforcement until proper regulation is in place.
- FRT should be limited to supporting humans in identification and authentication tasks. Human supervision and control are essential: an algorithm should not make a decision that can lead directly to an action with consequences.
Three companies stand out for their efforts to mitigate human rights risks in their use of AI and FRT: Microsoft, Motorola Solutions and Thales.
- Microsoft has implemented strong governance around AI ethics, and FRT specifically, and recently retired its facial classification capabilities. It was also one of the first technology companies to place a moratorium on FRT sales to law enforcement.
- Motorola Solutions insists that any system incorporating FRT must assist the human operator rather than decide in their place, and has integrated two-factor authentication into its technology.
- Thales has implemented FRT-assisted passport control, which it subjects to rigorous monitoring. The company is also committed to deleting user data after each crossing.
This update follows the successful launch of Candriam’s FRT campaign in March 2021 and the signing of the investor statement on the use of facial recognition technology by 55 global investors in June 2021. Following this report, Candriam will discuss with each company how it can implement the suggested best practices. The results of this second round of engagement are expected to be published in 2023.
Candriam has also joined the Collective Impact Coalition (CIC) for Digital Inclusion, recently launched by the World Benchmarking Alliance, to promote the responsible use of AI.
Benjamin Chekroun, Voting and Engagement Analyst at Candriam, says: “The speed at which facial recognition is evolving, and the lag in regulating this technology, make increased oversight and understanding of the companies involved in this area, particularly with respect to human rights, imperative. As responsible investors in the technology sector, we have an important role to play in encouraging the companies in which we invest to identify, manage and mitigate human rights risks in their use of AI and FRT. We hope that our report and its conclusions will serve as a catalyst for companies to address their duty of vigilance on human rights, encouraging them to recognise the risks posed by FRT and to tackle them.”
Louise Piffaut, Senior ESG Analyst at Aviva Investors, adds: “At Aviva Investors, we welcome the publication of this interim report, an important step in this collaborative initiative. With regulation in the technology sector still limited, companies are not yet fully considering their responsibilities for managing the societal impacts of FRT, which vary depending on where in the value chain they operate. Investors have an important role to play in highlighting best practices and engaging with companies on this issue.”