We’re now in year three of the generative AI era.

That’s enough time to see clear patterns emerge.

So why are most companies still stuck at the starting line?

I’ve been thinking a lot about this lately, and there are three specific obstacles blocking widespread AI adoption. The good news is that each one has a solution.

We’ve Entered the Second Phase of AI

First, let’s get clear on where we actually are. Since ChatGPT launched in November 2022, we’ve been living through what some call “Phase One”: AI as intelligent chatbots that answer questions. That phase is essentially over.

We’re now firmly in Phase Two: AI agents that autonomously execute entire workflows. We’re moving from individual productivity tools to systems that can carry out complex tasks with minimal human intervention, for hours or even days without supervision.

Anthropic just released a coding agent that works autonomously for 30 hours straight. OpenAI’s Deep Research feature will spend 20 minutes scouring the internet and compile comprehensive reports while you grab coffee. This isn’t hypothetical anymore.

And here’s what matters: The companies succeeding are the ones who started experimenting early. The ones waiting for the technology to “mature” are struggling to catch up.

The Three Obstacles Holding Everyone Back

1. Infrastructure Isn’t Ready for This

The most fundamental problem? We literally don’t have enough infrastructure to power AI at scale.

This isn’t just about servers or GPUs. It’s power, compute capacity, and network bandwidth. The industry is projecting a $5 trillion investment in data center expansion just to meet AI’s computational appetite.

Think about that number. Not $500 billion. $5 trillion.

And here’s how severe the constraint is: data centers are now being built where power is available, rather than power being brought to data centers. Countries are recognizing that their capacity to generate AI compute (measured in tokens produced) will directly determine both their economic prosperity and their national security.

For businesses, this means that if you can’t reliably access compute resources, you’ll hit growth bottlenecks as AI shifts from simple chatbots to 24/7 autonomous agents that consume far more compute and bandwidth.

2. The Trust Deficit Is Real

Even as AI tools become commonplace, there’s a fundamental trust problem: these systems are non-deterministic. Ask the same question twice, get different answers. That unpredictability becomes a serious issue when you’re moving beyond Q&A to autonomous systems executing complex tasks.

You’re trying to build predictable systems on unpredictable models.

The security concerns are significant, especially for enterprises handling sensitive data or operating in regulated industries. How do you trust an AI agent to work autonomously for 30 hours without supervision?

The solution requires proactive monitoring: validating the data feeding into models, testing model behavior, and enforcing real-time guardrails. When Cisco tested the Chinese DeepSeek model, they were able to jailbreak it 100% of the time across the top 50 risk categories within 48 hours. That’s not a criticism. It’s a demonstration that continuous validation is essential.

Companies need AI security infrastructure that can algorithmically identify when models behave unexpectedly and dynamically enforce guardrails. Without this, enterprises will continue to hold back from full adoption.
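As a minimal illustration of what “dynamically enforcing guardrails” could look like in practice, here’s a sketch of an output-validation layer. The patterns and thresholds are illustrative assumptions, not a real product’s policy; production systems would use trained safety classifiers rather than keyword rules.

```python
# Sketch of a runtime guardrail check for agent/model output.
# BLOCKED_PATTERNS and max_chars are hypothetical examples;
# real deployments would use policy classifiers, not regexes.
import re
from dataclasses import dataclass, field


@dataclass
class GuardrailResult:
    allowed: bool
    reasons: list = field(default_factory=list)


BLOCKED_PATTERNS = [
    re.compile(r"(?i)\b(api[_-]?key|password)\s*[:=]"),  # credential leakage
    re.compile(r"(?i)\bdrop\s+table\b"),                 # destructive SQL
]


def check_output(text: str, max_chars: int = 4000) -> GuardrailResult:
    """Validate a model response before it reaches users or downstream tools."""
    reasons = []
    if len(text) > max_chars:
        reasons.append("response exceeds length budget")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            reasons.append(f"matched blocked pattern: {pattern.pattern}")
    return GuardrailResult(allowed=not reasons, reasons=reasons)
```

The point is architectural: every agent action passes through a deterministic checkpoint, so even a non-deterministic model produces auditable, enforceable behavior.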

3. The Data Gap Isn’t What You Think

Here’s the misconception most companies have: “Our proprietary data is our competitive moat.”

The reality? Most organizations haven’t figured out how to effectively harness, structure, and deploy their existing data for AI.

And the data landscape is shifting dramatically. 55% of new data growth is now machine-generated, not human-generated. These are time-series event streams from automated agents, and they’re largely untapped.

We’ve essentially exhausted the publicly available human-generated data on the internet. Models are now being trained on synthetic data. But the real opportunity lies in merging machine-generated operational data with human-generated contextual information.

The companies that invest in enterprise-grade data pipelines, governance frameworks, and systematic integration of both data streams will see differentiated AI performance. Everyone else is sitting on unused assets.

How to Measure What Matters

Infrastructure investments are straightforward to track. But measuring trust and data initiatives requires different KPIs.

For trust: Track model hallucination rates and the contexts in which they occur. Use benchmarking suites like HarmBench to stress-test models before they damage your brand or violate compliance.
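As a rough sketch of tracking hallucination rates by context, here’s one way the KPI could be computed from labeled evaluation runs. The record format and the `grounded` label are assumptions for illustration; in practice the label would come from human review or an automated fact-checking judge.

```python
# Hypothetical hallucination-rate KPI: fraction of eval responses per
# context that contained unsupported claims (grounded=False).
from collections import defaultdict


def hallucination_rates(records):
    """Return per-context hallucination rates.

    Each record is a dict like {"context": str, "grounded": bool},
    where grounded=False means the answer was not supported by sources.
    """
    totals = defaultdict(int)
    misses = defaultdict(int)
    for r in records:
        totals[r["context"]] += 1
        if not r["grounded"]:
            misses[r["context"]] += 1
    return {ctx: misses[ctx] / totals[ctx] for ctx in totals}
```

Breaking the rate out by context is what makes the number actionable: a 2% overall rate can hide a 20% rate in exactly the workflow where errors are most costly.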

For data: Don’t just measure volume. Track pipeline readiness, data usability in model training, and the actual business impact of AI outputs on workflows.

The goal isn’t using more data. It’s extracting actionable insights that align with business objectives.

The Real Opportunity Everyone’s Missing

There’s a lot of fear about AI replacing jobs. I think that’s overhyped.

What’s under-hyped is AI’s capacity to generate original insights: solutions that don’t exist in the current human knowledge base. Not just aggregating what we already know, but discovering what we don’t.

Until now, AI has been an aggregation mechanism. But we’re reaching the point where it can produce genuinely new knowledge. The kind that could lead to breakthroughs in medicine, materials science, and problems we haven’t yet imagined solving.

That’s the transformation people are underestimating.

My Takeaway

If you’re waiting for AI to “get perfected” before you start experimenting, you’re already behind. The companies winning right now are the ones who started early, learned through trial and error, and built the instinct to navigate rapid change.

Don’t wait. Start experimenting with what’s available today, because the pace of innovation isn’t slowing down, and the gap between early adopters and everyone else is only widening.