AI agents are becoming credit brokers as regulators start to sharpen their knives
World's first Agentic Credit Broking Protocol sets a standard for financial services bots as they take on mission-critical roles in lending.
The world’s first standard for agentic credit broking launched in London today - prompting immediate questions about whether AI-driven systems will be able to comply with UK rules on protecting vulnerable customers.
In a move echoing the creation of Britain’s world-leading Open Banking standards, the ClearScore Group has released a new Agentic Credit Broking Protocol (ACBP) that aims to define how AI agents can conduct credit broking journeys on behalf of users while remaining compliant with financial regulations.
However, the launch has not passed without incident. After the standard became available on GitHub, a developer raised 11 issues, including regulatory concerns, then forked the open protocol, claiming the standard contains "no structured mechanism for the broker to signal that a user has been identified as vulnerable".
When announcing the standard, ClearScore said the integration of agentic AI into credit journeys marked a move from "passive assistance to active agency", admitting this is "potentially challenging" due to the risk of agents giving unregulated recommendations – or even financial advice.
Traditionally, regulated firms took full responsibility for every surface a user interacted with - whether that was a conversation with an advisor or a visit to the firm's website.
ClearScore warned: "AI assistants break that model. A user may begin their credit journey through a chat assistant, a financial app, a comparison tool, or an embedded assistant in another service. The regulated firm no longer controls the first interaction surface.
"Without a protocol, these journeys are opaque - to the broker, to the lender, and to any regulator who later needs to understand what occurred."
The ACBP is designed to solve this problem. It enables "interaction and responsibility to travel separately": an agent can carry out a complete credit journey without ever needing to become a regulated entity.
Essentially, it allows a chatbot or other agent to gather information such as income, outgoings and existing debt via natural dialogue, before handing over this information to a broker as structured data.
The broker then assesses the user's circumstances, develops a plan and sources offers from lenders, which the AI agent then conveys to the customer before guiding them through the non-regulated parts of their application.
"An agent can mediate a complete credit journey without becoming a regulated entity," ClearScore wrote on its whitepaper on the standard. "The broker retains regulatory control and gains an evidence trail, even though the conversation happened elsewhere."
When agents become the interface
ClearScore said the protocol works across jurisdictions - leapfrogging the interoperability issues that have dogged open banking.
Justin Basini, Co-founder and CEO at the ClearScore Group, said: “We are building an Agentic Credit Broking Protocol because users will, of course, begin their financial journeys through AI assistants and applications, much like they did many years ago through the internet.
"Without defined and shared approaches, however, those assistants have no way to participate in regulated credit broking and provide the seamless experience that people expect.
"This protocol will play a fundamental role in the infrastructure for the next era of credit and financial services. It will allow secure, compliant and seamless agent-to-agent interaction, leveraging data and deep integrations with lenders to ensure that a user can execute the whole credit journey through their chosen agent."
On GitHub, a user called sentinel-source raised issues on topics including the risk of prompt injection attacks and model drift - the gradual decline in a model's predictive accuracy over time.
He also questioned whether the protocol complied with FCA rules requiring firms to identify and respond to signs of customer vulnerability, which "apply regardless of the interaction channel".
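To make that complaint concrete: FCA guidance expects firms to spot and act on indicators of vulnerability, which it groups under four drivers (health, life events, resilience and capability). A structured signal of the kind the developer says is missing might look something like the sketch below - again, a hypothetical illustration of the gap being described, not anything in the published protocol.

```typescript
// Hypothetical extension, not part of the ACBP: a structured field an agent
// could attach to its handoff payload to flag possible vulnerability
// indicators picked up in conversation, so the regulated broker can apply its
// own vulnerable-customer procedures.
interface VulnerabilitySignal {
  indicated: boolean;                      // agent detected a possible indicator
  categories?: Array<"health" | "life-event" | "resilience" | "capability">;  // FCA's four drivers of vulnerability
  note?: string;                           // free-text context for the broker's human reviewers
}
```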
The systemic risks of AI in financial services
Looking beyond this credit-broking use case, the rollout of agentic AI across financial systems has the potential to create major systemic risk, adding fragility and perhaps even raising the danger of cascading failures.
Gartner has predicted that 40% of all financial services firms will be using AI agents by the end of 2026, meaning that working out standards like the ACBP should be a priority for all players in the ecosystem.
Behind the scenes, regulators are already moving to understand the implications of AI, with the Financial Conduct Authority currently undertaking a probe called The Mills Review to specifically examine the impact of AI on retail financial services.
"Advanced, multimodal and agentic AI systems could reshape market dynamics, alter how financial products are designed and distributed, and transform how consumers engage with firms," wrote Sheldon Mills, executive director of the FCA.
"AI adoption in financial services also introduces growing risks, including sophisticated AI-enabled fraud and identity abuse, algorithmic bias, and opaque decision-making," he added.
"It could also potentially reduce consumer agency and introduce new forms of market concentration or systemic vulnerability. Over the longer term, increasingly autonomous and interconnected AI systems may amplify existing risks and create new ones."
In the US, regulators have so far taken the view that AI systems must comply with existing financial rules, rather than creating new frameworks - raising questions about how autonomous agents fit into regimes designed for human decision-makers.