Sizing up agentic AI: The data challenge facing every enterprise
"The reorganization of data for AI models and agents amounts to the world’s largest ever data preparation undertaking."
Agentic AI has been subject to a huge amount of hype. Enterprises are now moving to translate that enthusiasm into concrete spending.
A McKinsey survey found that 92% of companies plan to invest more in generative AI over the next three years. This surge of investment is set to drive increased adoption of agentic AI in enterprise settings, with Gartner predicting that 40% of enterprise applications will be integrated with task-specific agents by the end of 2026 — up from less than 5% today.
We need to be clear-eyed about the implications of agentic AI. Essentially, every company in the world today organizes its data around the business applications it uses internally. Companies with advanced analytics disciplines have built data lakes to aggregate their data into one place. But data from an ERP system, for example, is still structured for that system — data lake or no data lake.
The reorganization of data for AI models and agents amounts to the world’s largest ever data preparation undertaking. Boards are naturally going to turn to their analyst teams, who prep and blend data every day, to carry this out. They should get prepared for the road ahead.
The great information collation
AI agents must have direct access to data to deliver value. This comes with caveats, however. Too often, desperate pushes to roll out AI at hyper speed result in multiple AI agents being engaged on disparate platforms with disparate data. This is a recipe for disaster: it skips the data curation stage that successful AI agents require.
To curate in this manner, enterprises need a new business layer to collate and make first-party data directly accessible to AI. But even this isn’t simple. Data in business applications carries embedded business logic — and that logic must be preserved when making data AI-ready.
Think of it like asking an analyst or department head about a business process: questions are bounced back for clarification. AI workflows need the same contextual logic built into the interaction, blending technical and business requirements. This task is creating a new role in the enterprise: the AI analyst.
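As an illustrative sketch only — every name and rule here is hypothetical, not a description of any particular product — pairing raw data with its embedded business logic might look like a thin access layer that hands an agent the interpretation rules alongside the records, rather than bare numbers:

```python
from dataclasses import dataclass, field

@dataclass
class CuratedDataset:
    """A dataset exposed to an AI agent together with the
    business logic needed to interpret it correctly."""
    name: str
    records: list
    business_rules: dict = field(default_factory=dict)

def expose_to_agent(name, records, business_rules):
    """Bundle records with their interpretation rules so an
    agent never sees numbers stripped of context."""
    return CuratedDataset(name=name, records=records,
                          business_rules=business_rules)

# Hypothetical example: ERP revenue figures plus the rules an
# analyst would state if asked to clarify them.
revenue = expose_to_agent(
    "q3_revenue",
    records=[{"region": "EMEA", "amount": 1_200_000}],
    business_rules={
        "amount": "net of returns, in USD",
        "fiscal_quarter": "fiscal year starts 1 February",
    },
)
```

The point of the sketch is the shape, not the code: the contextual logic an analyst would supply verbally travels with the data instead of living only in someone's head.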
Coming to a job board near you: Rise of the AI analyst
Today’s discourse around LLMs and AI agents often focuses on technical implementation, but business expertise is still crucial. Enter the AI analyst. This nascent discipline will bridge the gap between the implementation of AI and the business knowledge required for it to make an impact.
Consider a modern RevOps team fielding complex questions about business programs and initiatives every day. To respond, team members consult multiple systems. That knowledge can be gathered and applied by an AI analyst to inform the useful deployment of AI internally, with relevant accompanying data.
The AI analyst won’t necessarily have a background in computer science or data science. They’ll be the individuals who understand how data makes a modern business function and can answer questions from managers or senior leaders using data and modelling.
They may have moved beyond working with data in spreadsheets to prepping and blending data sources in a BI platform, and are looking for their next challenge. Whatever their background, organizations need to start thinking about the talent required to prep data for AI agents to work effectively.
The case for an AI data clearinghouse
AI rollout inevitably raises trust and governance concerns. Many leaders mandate rapid AI adoption while simultaneously restricting the input of company IP into AI systems. Reconciling these priorities requires careful planning.
A data clearinghouse — a platform of visual workflows created by AI analysts — can help. It shows business leaders which data an AI system uses, the context behind it, and the governance controls in place. These workflows can be presented to boards or audit committees to build confidence in AI deployment without compromising security.
Approval processes can be integrated into these workflows, allowing analysts to pause or restrict AI access to sensitive data while maintaining operational agility. This approach accelerates AI adoption while reducing friction for IT and analytics teams.
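A minimal sketch of such an approval gate — with hypothetical dataset names and sensitivity tags, purely to make the idea concrete — might tag datasets by sensitivity and pause agent access to restricted data pending analyst sign-off:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    RESTRICTED = 3

# Hypothetical registry mapping datasets to sensitivity tags,
# as an AI analyst might maintain it in a clearinghouse.
DATASET_TAGS = {
    "marketing_site_stats": Sensitivity.PUBLIC,
    "customer_contracts": Sensitivity.RESTRICTED,
}

def agent_can_access(dataset, approved_by_analyst=False):
    """Allow automatic access to low-sensitivity data; withhold
    restricted (or untagged) data until an analyst approves."""
    tag = DATASET_TAGS.get(dataset, Sensitivity.RESTRICTED)
    if tag is Sensitivity.RESTRICTED:
        return approved_by_analyst
    return True
```

Defaulting untagged datasets to restricted is the conservative choice sketched here: the workflow stays auditable, and IT never has to chase down data an agent accessed before anyone classified it.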
Despite the hype, practical pathways exist for realizing AI agents’ potential. Targeted deployment, underpinned by accessible AI-ready workflows and embedded business logic, will allow organizations to harness AI effectively, unlocking value while mitigating risk.
Joshua Burkhow is Chief Evangelist at Alteryx