Cyberattacks, privacy invasion and misinformation campaigns. Malicious uses of technology are increasing alongside rapid advances in artificial intelligence (AI), but many companies are not doing enough to mitigate the risks.
This is a potential concern for responsible investors, who are looking to do good with their investments while targeting sustainable long-term financial returns.
As AI becomes integral to many business functions, a lack of risk mitigation could result in cybersecurity breaches that damage a company’s reputation, while some applications of AI, such as the tracking of individuals and content moderation, raise legal and ethical concerns.
“We are convinced that mitigating the risks associated with AI systems – and addressing regulatory considerations – are closely related to the ability of companies to deliver long-term value creation with these technologies,” says Mr Theo Kotula, an ESG analyst at AXA Investment Managers (AXA IM).
“We also believe that AI systems can better provide long-term and sustainable opportunities when responsible AI is practised,” he adds.
Responsible AI refers to business practices that use AI in a fair, ethical and transparent manner while maintaining human oversight over the activities of the AI systems.
Opportunities and risks of AI
Today’s AI systems can mimic human problem-solving and decision-making abilities and have wide-ranging real-world applications, such as customer-service automation, risk modelling and analytics, as well as fraud detection.
Businesses have gained real value from the use of AI, according to the latest survey on the state of AI by consultancy McKinsey. In the report, released in December 2021, 27 per cent of respondents attributed