A new industry report reveals that, despite significant interest, concerns over trust, data quality and regulation continue to impede the widespread deployment of agentic AI in the pharmaceutical and life sciences sectors.
Although pharmaceutical and life sciences companies have actively explored agentic AI, a new industry study shows the technology’s promise is far from realised as trust, governance and data quality hold back scaled deployment. According to Camunda’s report, more than two-thirds of surveyed organisations see a gap between their agentic AI ambitions and what has actually been implemented, and only about one in ten use cases reached production in the last 12 months. [1][7]
The survey, conducted last autumn by Coleman Parkes for Camunda and drawing on responses from 1,150 senior automation and technology leaders across the US and Europe, highlights transparency and trust as primary barriers. “The promise of agentic AI is undeniable, but trust remains the key barrier to adoption,” Kurt Petersen, senior vice president of customer success at Camunda, said in a press release, noting that caution is keeping many organisations at pilot stage or confined to isolated use cases. [1]
Respondents signalled specific operational concerns that limit progress: roughly three quarters said most AI agents at their organisations are limited to chatbots or assistants that answer questions and summarise text, while half reported AI agents operate in silos rather than being integrated into end-to-end business processes. Two-thirds identified compliance as a deployment concern, reflecting the sector’s heavily regulated environment. The Camunda report argues that “agentic orchestration, not standalone agents, is the key to closing the AI vision-reality gap.” [1]
External analyses corroborate the need for stronger governance, cultural change and risk management if agentic systems are to move beyond experiments. A Pistoia Alliance study found 51% of life‑science professionals view resistance to change as the biggest barrier to agentic AI adoption and reported that many organisations rarely assess AI-related risks, underscoring a shortfall in proactive oversight. Industry security leaders have similarly warned of high failure rates where governance and cybersecurity are weak. [2][5]
Practical obstacles extend beyond culture and policy to the raw materials that drive agentic systems. Technical commentary and sector guidance emphasise the critical role of high‑quality, integrated data, robust document capture and clear audit trails; poor inputs can produce costly errors in high‑stakes settings such as healthcare. With data sources increasingly siloed and technology stacks growing more complex, respondents told Camunda they want better tools to manage overlapping processes: precisely the kind of problem proponents say agentic orchestration can address, provided it is underpinned by reliable data and controls. [4][3][1]
Academics and standards-minded researchers caution that current evaluation practices for agentic systems overemphasise technical metrics while downplaying human‑centred, safety and economic assessments, creating a gap between benchmark success and real‑world value. A recent review of published agentic‑AI work argues for balanced evaluation frameworks that include safety and human factors before scaling into regulated industries. That perspective reinforces the Camunda finding that many organisations do not yet consider their processes mature enough to support coordinated, multi‑agent workflows. [6][1]
Despite the hurdles, automation continues to show business value for life sciences organisations. Camunda’s survey found more than 90% of respondents reported higher business growth after introducing process automation, and firms have automated on average nearly half of their processes with plans to increase automation budgets by around 18% over the next two years. Consulting analysis suggests agentic AI could dramatically reconfigure the sector’s productivity and capacity if governance, data and security issues are resolved, with potential gains across R&D, manufacturing and commercial functions. Achieving those gains will depend on translating pilot‑level promise into governed, auditable orchestration that earns the trust of regulators, clinicians and the organisations that must rely on it. [1][7][3]
Reference Map:
- [1] (TechTarget / Pharma Life Sciences) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 5, Paragraph 6, Paragraph 7
- [7] (McKinsey) – Paragraph 1, Paragraph 7
- [2] (Pharmiweb / Pistoia Alliance report) – Paragraph 4
- [5] (ITPro / Palo Alto Networks) – Paragraph 4
- [4] (TechRadar Pro) – Paragraph 5
- [3] (XenonStack blog) – Paragraph 5, Paragraph 7
- [6] (arXiv paper) – Paragraph 6
Source: Fuse Wire Services


