Artificial Intelligence In Finance: How To Avoid Unethical Outcomes

Gery Zollinger, Head of Data Science & Analytics at Avaloq | 20 May 2022

Gery Zollinger, Head of Data Science & Analytics at Avaloq, discusses the ethics of artificial intelligence in financial services.

AI can deliver real value to companies by supporting targeted marketing and analytics, chatbots and other important functions. However, it can also produce biased decisions: if systems are trained on incomplete or unrepresentative data sets, the result can be unethical outcomes for ethnic minorities or women, for example. So what can financial firms do to mitigate the risks and reap the rewards?


Ethical challenges when designing AI

Artificial intelligence is a blend of machine learning and real-life data – data which may contain explicit or implicit prejudices that can be learned by the AI system. Human resources can be a sensitive area when it comes to the use of AI, with recruiting tools constantly under scrutiny regarding potential gender bias when selecting candidates for technical roles. Using historical data to train systems may reinforce common human prejudices – including subconscious ones we may be unaware of. When designing an AI system, it is crucial to identify which facets of the business are high risk and outline a plan to constantly monitor and calibrate the AI algorithm. By fully comprehending the risks, firms can prevent outcomes that are potentially unethical and maximize the benefits AI can generate for the business.
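
To make that monitoring concrete, one simple check is to compare outcomes across groups in a model's recent decisions. The sketch below is a minimal, illustrative Python example; the group column, sample data and 0.8 alert threshold are assumptions made for illustration rather than any particular firm's setup.

```python
import pandas as pd

def disparate_impact(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest group's.

    A value close to 1.0 suggests similar treatment across groups; values well
    below 1.0 (a common rule of thumb is below 0.8) warrant investigation.
    """
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Illustrative data: one row per automated decision (1 = approved, 0 = declined).
decisions = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "M", "F"],
    "approved": [1, 0, 1, 1, 0, 1, 1, 1],
})

ratio = disparate_impact(decisions, group_col="gender", outcome_col="approved")
if ratio < 0.8:  # hypothetical alerting threshold
    print(f"Warning: disparate impact ratio {ratio:.2f} - review the model and training data")
else:
    print(f"Disparate impact ratio {ratio:.2f} - within tolerance")
```

A check like this would typically run on a schedule against recent production decisions, so that drift in the model or in the underlying data is caught before it translates into unethical outcomes.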

Potential impact on banks and wealth managers 

The use of AI is growing within financial services: it has been identified as a clear differentiator and a catalyst for growth. However, regulation has failed to keep up with the pace of innovation, leaving financial institutions without an established set of AI best practices or principles to follow. The Monetary Authority of Singapore (MAS) published a set of principles in 2018 to promote fairness, ethics, accountability and transparency (known as the FEAT principles) in the use of AI and data analytics, specifically in use cases that concern the financial sector. These principles aim to provide guidance to firms offering financial products and services (including banks and insurers) on the responsible use of AI and data analytics, to strengthen internal governance around the management and use of data, and to promote public confidence in the technology.

Current applications of AI within the financial industry

The typical use case of AI in financial services is to automate routine processing, freeing up resources for wealth managers to focus more on honing their value proposition and strengthening relationships with clients. But AI has already evolved beyond this scope and is able to do considerably more. Financial institutions can now leverage AI to create personalized portfolio recommendations on the fly based on investor suitability and preferences. Another innovative area is conversational banking, where AI systems use natural language processing (NLP) to understand client intent and recommend next best actions to advisers. This offers more than just efficiency gains – it enhances the client experience and boosts engagement. Banks in Singapore have been using chatbots for years to allow clients to perform simple transactions by 'chatting' with their adviser. But the real strength of AI-powered virtual assistants is that they help relationship managers to cater to a larger, and more diverse, client base while maintaining a highly personalized service.
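
As a rough illustration of the idea, the sketch below maps a client message to an intent and a suggested next best action. A production conversational banking system would rely on a trained NLP model; the keyword matching, intent names and suggested actions here are simplified assumptions, not any vendor's actual pipeline.

```python
# Illustrative sketch of intent detection driving a "next best action" suggestion.
INTENT_KEYWORDS = {
    "transfer_funds": ["transfer", "send money", "wire"],
    "portfolio_review": ["portfolio", "performance", "rebalance"],
    "card_issue": ["card blocked", "lost card", "card declined"],
}

NEXT_BEST_ACTION = {
    "transfer_funds": "Open the payments workflow and confirm beneficiary details.",
    "portfolio_review": "Share the latest portfolio report and propose a review call.",
    "card_issue": "Block the card and start the replacement process.",
}

def detect_intent(message: str) -> str:
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

intent = detect_intent("Hi, my portfolio performance looks off this quarter")
print(intent, "->", NEXT_BEST_ACTION.get(intent, "Route to the relationship manager."))
```

The value for the adviser is in the second dictionary: once intent is recognized, the system can surface a concrete, context-aware suggestion instead of leaving the adviser to search for the right workflow.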

Maximizing AI

To get the most out of an AI system, financial institutions need a provider that understands the technology, local regulations, and the financial sector. The AI solution needs a robust monitoring system to constantly improve performance and rectify any shortcomings, such as biases, to prevent unethical outcomes across a variety of functions, including credit decisioning, fraud detection and predictive underwriting.
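
As an illustration of what such a monitoring layer might track, the sketch below evaluates several deployed models against minimum accuracy and fairness thresholds and flags those that need review. The metric names, threshold values and model list are hypothetical examples under assumed limits, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class MonitoringReport:
    """Summary of one monitoring run for a deployed model (illustrative structure)."""
    model_name: str
    accuracy: float
    fairness_ratio: float  # e.g. approval-rate ratio between groups, as above
    needs_review: bool

# Hypothetical thresholds a risk team might set; real limits depend on the use case.
MIN_ACCURACY = 0.85
MIN_FAIRNESS_RATIO = 0.8

def evaluate_model(model_name: str, accuracy: float, fairness_ratio: float) -> MonitoringReport:
    """Flag a model for review if either metric falls below its threshold."""
    needs_review = accuracy < MIN_ACCURACY or fairness_ratio < MIN_FAIRNESS_RATIO
    return MonitoringReport(model_name, accuracy, fairness_ratio, needs_review)

# Example run across the kinds of functions mentioned above (figures are made up).
for name, acc, fair in [("credit_decisioning", 0.91, 0.76),
                        ("fraud_detection", 0.88, 0.93),
                        ("predictive_underwriting", 0.82, 0.90)]:
    report = evaluate_model(name, acc, fair)
    status = "flag for review" if report.needs_review else "ok"
    print(f"{report.model_name}: {status}")
```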

The use of AI is set to become even more widespread in the financial industry. At the end of 2021, MAS and Singapore's National AI Office (NAIO) launched the National Artificial Intelligence (AI) Programme in Finance. This initiative aims to build comprehensive AI capabilities within Singapore's financial sector to strengthen client service, risk management, and business competitiveness. The initiative will also help increase productivity through the adoption of AI, create new jobs through increased AI innovation and upskilling in AI-related competencies, and improve acceptance of AI through sound governance. With the growing spotlight on the industry and the potential benefits on offer, it is time to rethink how we currently use AI and what opportunities this technology can unlock in the future.