Algorithms should not control what people see, UN chief says, launching Global Principles for Information Integrity
“At a time when billions of people are exposed to false narratives, distortions and lies, these principles lay out a clear path forward, firmly rooted in human rights, including the rights to freedom of expression and opinion,” he said.
The Secretary-General urged governments, tech companies, advertisers and the public relations (PR) industry to take responsibility for the spread and monetization of content that results in harm.
Harming our world
He emphasized that combating misinformation and hate speech is critical to safeguarding democracy, human rights, public health, and climate action.
“The spread of hatred and lies online is causing grave harm to our world,” he said, addressing the media at UN Headquarters in New York.
The UN’s own humanitarian and peacekeeping operations are at risk, as its personnel deal with a “tsunami of falsehoods” and “absurd conspiracy theories”, the UN chief added.
False narratives and lies breed cynicism and undermine social cohesion and sustainable development.
Opaque algorithms
He asserted that everyone should freely express themselves without fear of attack and be able to access a wide range of views and information.
“No one should be at the mercy of an algorithm they don’t control, which was not designed to safeguard their interests, and which tracks their behaviour to collect personal data and keep them hooked,” he said.
The Global Principles aim to empower people to demand their rights, help protect children, ensure honest and trustworthy information for young people, and enable public interest-based media to convey reliable and accurate information, Mr. Guterres added.
Trust and resilience for the public good
The Principles evolved through wide-ranging consultations with UN Member States, the private sector, youth leaders, media, academia and civil society.
They focus on building trust and resilience, ensuring an independent and pluralistic media, creating healthy incentives based on factual information, enhancing transparency and research, and empowering the public.
Key recommendations include urging governments, tech companies, advertisers, and media to avoid using or amplifying disinformation and hate speech. At the same time, governments should ensure timely access to information, support an independent media landscape, and protect journalists and civil society.
Tech companies should prioritize safety and privacy, apply consistent policies and support information integrity, especially around elections – while stakeholders involved in the development of artificial intelligence (AI) should ensure its safe, responsible and ethical deployment, factoring in human rights.
Prioritize safety and privacy
Furthermore, tech companies should explore business models that do not rely on programmatic advertising and that do not prioritize engagement above human rights. Instead, they should prioritize user privacy and safety.
Advertisers should demand transparency in digital advertising processes from the tech sector to help ensure they do not end up inadvertently funding disinformation or hateful messaging.
Tech companies and AI developers should also provide meaningful transparency and allow researchers access to data while respecting user privacy. Executives should also ensure independent audits and boost accountability.
Governments, tech companies, AI developers and advertisers should take special measures to protect and empower children, with governments providing resources for parents, guardians and educators.
These recommendations stem from the Secretary-General’s 2021 report, Our Common Agenda, which outlines a vision for future global cooperation and multilateral action. They serve as a resource for Member States ahead of the Summit of the Future, taking place in September.