German Ethics Council: “AI must not replace humans”
Fundamental rights such as the right to privacy, the protection of minorities, and a democratic, diverse society must be better safeguarded in a world increasingly shaped by artificial intelligence (AI). This is a central demand of the German Ethics Council’s statement “Man and Machine”, which addresses the challenges posed by AI. The use of this key technology, which is mostly based on machine learning, “must expand human development and must not diminish it,” said the chair of the independent expert panel, Alena Buyx, on Monday when presenting the statement. “AI must not replace humans.”
“Since ChatGPT entered public awareness in November at the latest, the broad application of artificial intelligence has arrived in our everyday lives and in the public debate,” says Buyx. Algorithmic systems can be used to diagnose cancer or help students learn English vocabulary. But they have also long had a say in “who should receive certain social benefits” and influence “our behavior on social media”. It is precisely here that the ethicists are concerned, for example, “that the algorithmically mediated information and communication offerings of digital platforms and social media have consequences for the democratic legitimation structure”.
Brutal discourse on social media
The deputy chairman of the Council, Julian Nida-Rümelin, added that polarization and the coarsening of discourse have increased on Facebook, Twitter & Co. “This challenges the formation of political opinion in the digital age.” Many platform operators rely on AI to “moderate” content, which can result in overblocking. The fundamental question, however, is “whether we want to leave such important decisions to private commercial groups.” It must be clarified whether “an alternative digital communication infrastructure under public responsibility is necessary to protect diversity, communicative ethos and democracy”. According to the experts, the consequences of personalized advertising, profiling, microtargeting and data trading must also be examined more closely.
Across its 297 pages, the committee warns against excessive automated real-time surveillance, for example through “predictive policing”. Existing social inequalities and prejudices are often picked up from training data and cemented by being built into seemingly neutral technology. Especially in state contexts, and for decisions that have a great impact or affect particularly vulnerable groups, high and binding requirements must apply to accuracy, the avoidance of discrimination, and traceability. Compliance with these criteria should be open to external verification.
Quality assurance in medicine
For the medical sector, the recommendations are aimed, for example, at quality assurance in the development and use of AI products, at avoiding a loss of medical competence, and at reconciling patients’ privacy with intensive, public-interest-oriented data use in medical research. The researchers also examined whether future machines could act intentionally in the full sense and develop consciousness. Their short answer is: no. “AI applications cannot replace human intelligence, responsibility and evaluation,” emphasizes Nida-Rümelin. “Human reason is inextricably linked to physicality, which is part of human identity.” The ethicists do see potential in AI, for example to advance climate and environmental protection. For that, however, the resource consumption of AI models must be better recorded and reduced.