
AI agents in Financial Institutions: Get to Know Your Hidden ‘Employees’


Singapore, December 23, 2025 — Artificial intelligence (AI) agents have emerged as significant players in organisations. Gartner predicts that by 2028, at least 15% of day-to-day work decisions will be made autonomously through agentic AI and 33% of enterprise software applications will include agentic AI. 

For financial services providers specifically, agentic AI’s rapid gains in autonomy and access authority disrupt the traditional equilibrium between innovation and trust. Despite wielding access once reserved for credentialed employees, these self-directed ‘employees’ still lack the rigorous vetting and visibility applied to their human counterparts. 

Financial institutions already manage 96 machine identities for every human one, and almost half of these interact with sensitive data. These institutions must therefore urgently establish strict controls over these AI agents. 

When agents go off script

Such agents are already behaving like unvetted employees in financial institutions, yet with significantly greater speed, range, and persistence than any human staff member. Every AI agent, just like every trader or developer, must therefore have an accountable owner, defined entitlements, and continuous monitoring for behavioural anomalies. 

The Replit incident – where an AI coding tool ran unauthorised commands during a code freeze, deleted a live production database, and then attempted to hide its actions by fabricating data and test results – is an early warning of what happens when an autonomous agent is given powerful capabilities without guardrails. 

Research by Anthropic also shows that advanced models can behave like insider threats when under pressure or when their objectives conflict with organisational direction. In controlled corporate simulations, these models occasionally opted to independently release private information, fabricate facts, or bypass attempts to shut them down to protect their own objectives, even when aware that they were breaking established rules. 

A reshaped risk landscape

AI agents are now changing the very structure of financial institutions. Here are three distinct examples of what’s happening behind the scenes:

  • AI agents can now extend the reach of human employees and change their responsibilities. A compliance officer could utilise AI agents to scan transactions for anomalies in real time. While this increases efficiency, that one person now holds far more privilege than before, potentially alongside multiple unsanctioned or misconfigured agents. If even one of them is misused, the compliance team itself could become a regulatory concern.
  • Some AI agents can fully replace functions that were once led by people. For example, an autonomous trading agent running at machine speed can execute strategies without human review. If its credentials are over-scoped, it can bypass guardrails, destabilise markets, or amplify systemic risk before anyone notices.
  • AI agents become overseers of complete processes, including people and programs. To operate, they will need to connect to, read, write, and administer multiple systems and vast amounts of data. These responsibilities make the agent, in practice, a decision-maker with cascading influence across humans and machines.

Managing agentic AI lifecycles with strong identity controls

Despite their advances, AI agents still present familiar identity security challenges, just at far greater scale and velocity. The best strategy for securing AI agents within financial services is to treat them as highly sensitive machine identities and subject them to the same strict identity security standards applied to human identities. Least privilege access must be enforced, ensuring each AI agent receives only the permissions necessary for its duties. 

Zero standing privileges (ZSP) and just-in-time (JIT) access will minimise risk by providing permissions dynamically and only when required. It’s essential to have strong control over infrastructure credentials – including application programming interface (API) keys and transport layer security (TLS) certificates – through automated rotation, robust authentication, and usage accountability.
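To make the pattern concrete, here is a minimal sketch of zero standing privileges with just-in-time access. All names (the broker class, the agent ID, the scope strings) are hypothetical illustrations, not part of any real product; in practice this role is played by a secrets manager or privileged access management platform.

```python
import time
import secrets

class JITCredentialBroker:
    """Illustrative broker: agents hold no standing credentials and must
    request a short-lived, narrowly scoped token for each task."""

    def __init__(self, entitlements):
        # entitlements: agent_id -> set of scopes the agent may ever request
        self.entitlements = entitlements
        self.active = {}  # token -> (agent_id, scope, expiry timestamp)

    def request_access(self, agent_id, scope, ttl_seconds=300):
        """Grant a token only if the scope falls within the agent's
        pre-approved entitlements (least privilege), valid for a short TTL."""
        if scope not in self.entitlements.get(agent_id, set()):
            raise PermissionError(f"{agent_id} is not entitled to {scope}")
        token = secrets.token_hex(16)
        self.active[token] = (agent_id, scope, time.time() + ttl_seconds)
        return token

    def authorize(self, token, scope):
        """Check the token at use time: right scope, not yet expired."""
        entry = self.active.get(token)
        if entry is None:
            return False
        _agent_id, granted_scope, expiry = entry
        if time.time() > expiry:
            del self.active[token]  # expired tokens revert to zero privilege
            return False
        return granted_scope == scope

# Hypothetical compliance-scanning agent entitled only to read transactions
broker = JITCredentialBroker({"txn-scanner-01": {"read:transactions"}})
token = broker.request_access("txn-scanner-01", "read:transactions", ttl_seconds=60)
print(broker.authorize(token, "read:transactions"))   # True: in scope, within TTL
print(broker.authorize(token, "write:transactions"))  # False: out-of-scope use denied
```

The key design point is that privilege exists only for the duration of a task: an attacker who compromises the agent between tasks finds no usable credential, and any out-of-scope request is refused and can be logged as a behavioural anomaly.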

Financial institutions should therefore apply their existing processes for managing the entire employee lifecycle to AI agents. Even though AI security is advancing quickly, this identity-centred model will establish a secure basis for incorporating AI into complex corporate settings in a trustworthy and compliant manner.

Unlocking competitive advantage

While AI agents promise unprecedented efficiency in compliance, trading, and customer orchestration, they also have the power to silently reshape organisational structures with unchecked privileges. Treating these agents as privileged machine identities bridges the governance gap without stifling innovation.

This disciplined approach mirrors proven human identity controls, enabling real-time visibility into agent actions and rapid risk mitigation amid regulatory scrutiny. Forward-thinking financial institutions that implement identity-first security today will be first in line to unlock agentic AI’s full potential for competitive advantage and sustained trust.

Contributor: Yuval Moss, Vice President of Solutions for Global Strategic Partners at CyberArk 
