Making Sense of What AI Actually Delivers at Work in 2026

If you’ve sat through a board meeting lately where someone nervously asked whether your company is “doing enough with AI,” you’re not alone. Over the past two years, organisations have thrown considerable money at AI tools, pilots, and consultants. Yet when it comes time to explain what all that spending has actually achieved, the room often goes quiet. We’re now at that awkward stage where the initial excitement has worn off and people are starting to ask harder questions about value. Getting serious about whether AI is doing anything useful for your particular business matters more than dismissing or championing the technology itself.

The Problem with Hype Cycles

The way most companies approached AI reminds me of how my neighbour bought a bread maker in 2019. Everyone seemed to have one, the reviews were glowing, and there was this nagging feeling that not having one meant missing out on something important. The bread maker now sits in his garage, still in its box. AI adoption has followed a similar pattern, except with significantly larger price tags attached.

Investment decisions got made because executives read the same articles, attended the same conferences, and felt the same pressure to demonstrate they were taking AI seriously. Vendor promises didn’t help. We heard about tools that would revolutionise everything from customer service to supply chain management, often with surprisingly little detail about how these miracles would actually occur or what they’d require from the organisation implementing them.

The result has been a strange disconnect. Companies bought tools that employees barely touched. They ran pilots that generated impressive presentations but never scaled beyond the initial team. They measured things like “number of staff with access to AI tools” or “AI training sessions completed,” which told them precisely nothing about whether any of this was making the business better at what it does.

This happened for understandable reasons. There was genuine pressure to act quickly, a sense that waiting meant falling behind competitors who were presumably doing something clever with machine learning. Success criteria were vague because, honestly, most people weren’t sure what success should look like. And there was a tendency to get distracted by technical capabilities rather than starting with actual business problems that needed solving.

What Measurable Impact Actually Means

When I talk about measurable impact, I’m referring to changes in specific, tangible aspects of your business that you can track over time. Counting how many employees have logged into your new AI assistant doesn’t tell you whether their work has improved. What matters is whether you can point to real differences in how work gets done.

For a customer service team, this might mean looking at how quickly issues get resolved, or how often customers need to contact you multiple times about the same problem. For a finance team, it could be how long the monthly close process takes, or how many errors get caught before they make it into reports. In sales, you might track whether your team is spending more time actually talking to customers versus wrestling with CRM data entry.

The tricky bit is establishing baselines before you implement anything. You need to know where you started to measure where you’ve got to. This sounds obvious, but plenty of organisations skip this step in their rush to deploy something. Then six months later, when someone asks whether the new tool has helped, everyone relies on their gut feeling rather than actual data.
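As a minimal sketch of what that looks like in practice, the snippet below records a baseline measurement before rollout and compares it against a later one. The metric name and figures are invented purely for illustration; the point is simply that the comparison rests on recorded data rather than memory.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class MetricSnapshot:
    """A single measurement of one business metric, captured on a given date."""
    name: str
    value: float
    unit: str
    captured_on: date


def percent_change(baseline: MetricSnapshot, current: MetricSnapshot) -> float:
    """Relative change from baseline to current, as a percentage.

    Negative values mean the metric went down, which is the goal for
    time-taken or error-rate metrics.
    """
    if baseline.name != current.name:
        raise ValueError("Cannot compare different metrics")
    return (current.value - baseline.value) / baseline.value * 100


# Hypothetical figures: average claim-processing time before and after rollout.
before = MetricSnapshot("avg_claim_processing_minutes", 54.0, "minutes", date(2025, 1, 15))
after = MetricSnapshot("avg_claim_processing_minutes", 41.0, "minutes", date(2025, 7, 15))

print(f"Change since baseline: {percent_change(before, after):+.1f}%")
# Change since baseline: -24.1%
```

Nothing about this is sophisticated, which is rather the point: if the baseline was never captured, there is nothing to plug into the comparison six months later.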

There’s also the challenge of attribution. If your customer satisfaction scores improve after implementing an AI chat system, was it the AI, or was it the three other changes you made to your support process at the same time? Real measurement means accounting for these complications, not pretending they don’t exist.
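One way to account for that, sketched below with entirely made-up team names and numbers, is to track the same metric over the same period for teams that got the new tool and for comparable teams that did not, then compare the two changes. It is a rough, difference-in-differences style estimate rather than anything rigorous, but it at least separates the tool's contribution from everything else you changed at the same time.

```python
# Rough attribution sketch: compare teams that got the tool with similar
# teams that didn't, over the same period. All names and figures are invented.

before_after = {
    # team: (avg resolution hours before, avg resolution hours after)
    "support_team_a_with_ai": (9.2, 6.8),
    "support_team_b_with_ai": (8.7, 6.9),
    "support_team_c_no_ai": (9.0, 8.4),
    "support_team_d_no_ai": (8.9, 8.3),
}


def average_change(teams: list[str]) -> float:
    """Mean before-to-after change in the metric for the named teams."""
    changes = [before_after[t][1] - before_after[t][0] for t in teams]
    return sum(changes) / len(changes)


with_ai = average_change(["support_team_a_with_ai", "support_team_b_with_ai"])
without_ai = average_change(["support_team_c_no_ai", "support_team_d_no_ai"])

# The untouched teams' change approximates everything else that happened
# (process tweaks, seasonality); the gap is a rough estimate of the tool's effect.
print(f"Change with the tool: {with_ai:+.2f} h, without: {without_ai:+.2f} h")
print(f"Estimated effect attributable to the tool: {with_ai - without_ai:+.2f} h")
```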

Three Companies Getting It Right

A mid-sized insurance company I spoke with last year had a specific problem: their claims processors were spending hours each day extracting information from medical reports and accident scene documentation. The company mapped out exactly how long this took, what types of errors occurred most frequently, and which parts of the process caused the most frustration. Only then did they look for an AI tool that addressed these specific pain points.

They set clear targets upfront. They wanted to reduce processing time by 30% and cut transcription errors by half. They picked a small team to test the solution for three months, with weekly check-ins about what was working and what wasn’t. Some things failed. The AI struggled with handwritten notes, so they had to adjust their approach. But after six months, they’d hit their targets and could point to specific, measurable improvements in both speed and accuracy. More importantly, the processors themselves reported less end-of-day fatigue from repetitive data entry.

A logistics company took a different approach with route optimisation. They’d tried an AI system two years earlier that promised dramatic fuel savings but never delivered. This time, they started smaller. They identified five routes where drivers consistently reported traffic problems and picked those as their test case. They tracked fuel consumption, delivery times, and driver feedback for those specific routes over eight weeks. The results were modest but real, about 12% improvement in fuel efficiency on those routes. That gave them confidence to expand gradually rather than attempting a wholesale rollout that might have failed again.

Then there’s a professional services firm that used AI to help with proposal writing. Their problem was consistency, not speed. Different teams were reinventing similar content for every pitch, and the quality varied wildly depending on who was writing. They built a system that pulled from their best previous work and helped teams draft sections more consistently. The metric that mattered to them was win rate. After a year, they’d seen their success rate on competitive bids increase by 8 percentage points. They could also show that proposals were getting produced in about 40% less time, freeing senior staff for client work.

What connects these examples is how they all started with clearly defined problems, set realistic metrics before implementing anything, and actually listened to the people using these tools day-to-day. They also accepted that not everything would work perfectly, and that’s fine as long as you’re learning and adjusting.

Building a Framework for 2026

If you’re trying to figure out where AI fits in your organisation, start by mapping out your actual processes and identifying where things consistently go wrong or take longer than they should. Talk to the people doing the work, not the people managing them. Front-line staff usually know exactly where the pain points are, even if they don’t always get asked.

Once you’ve identified genuine problems worth solving, set specific success criteria before you buy or build anything. These should be things you can measure objectively over a defined time period. “Improve efficiency” is too vague. “Reduce the time spent on monthly reporting by 25% within six months” gives you something concrete to aim for and assess.
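To show how concrete that can get, here is one hypothetical way to encode such a criterion so it can be checked automatically at the end of the measurement period. The 40-hour baseline and the deadline are placeholders, not recommendations.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class SuccessCriterion:
    """A measurable target: cut a named metric by some percentage before a deadline."""
    metric: str
    baseline_value: float
    target_reduction_pct: float
    deadline: date

    def is_met(self, measured_value: float, measured_on: date) -> bool:
        """True if the measurement hits the target value on or before the deadline."""
        target_value = self.baseline_value * (1 - self.target_reduction_pct / 100)
        return measured_on <= self.deadline and measured_value <= target_value


# "Reduce the time spent on monthly reporting by 25% within six months."
criterion = SuccessCriterion(
    metric="monthly_reporting_hours",
    baseline_value=40.0,        # hypothetical baseline: 40 hours per close
    target_reduction_pct=25.0,  # i.e. aiming for 30 hours or fewer
    deadline=date(2026, 6, 30),
)

print(criterion.is_met(measured_value=29.5, measured_on=date(2026, 5, 31)))  # True
print(criterion.is_met(measured_value=33.0, measured_on=date(2026, 5, 31)))  # False
```

Whether you keep this in a spreadsheet or in code matters far less than writing the target down before the tool arrives, so nobody can quietly move the goalposts later.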

Build proper feedback loops with actual users. Not quarterly surveys, but regular conversations about what’s working and what’s getting in the way. The people using these tools daily will spot problems and opportunities that won’t show up in usage statistics. Their insights are worth more than any vendor presentation.

Accept that some things won’t work. This is normal and expected. Some organisations get so invested in proving a particular tool was the right choice that they ignore mounting evidence that it’s not delivering value. Learning from failures and adjusting your approach matters more than avoiding failures altogether.

Finally, resist the temptation to run endless pilots. At some point you need to make a decision: either this is working well enough to scale, or you should try something else. I’ve seen companies pilot the same type of solution for two years across different teams, never quite committing to a full rollout because they’re waiting for perfect results. Perfect won’t happen. Good enough to provide measurable value should be your bar.

Where This Leaves Us

Every technology goes through this cycle from hype to reality. The shift represents a natural maturing process, not a failure. The organisations that will build real advantages are the ones approaching AI with clear-eyed pragmatism, measuring what actually matters, and being willing to adjust based on evidence rather than vendor promises or conference keynotes.

We’re entering a phase where “we’re experimenting with AI” won’t be an acceptable answer anymore. Boards and stakeholders want to know what you’ve learned, what’s working, and what return you’re getting on the investment. That’s a healthier place to be than where we were 18 months ago, even if it feels less exciting. The companies that have been quietly focused on solving real problems whilst everyone else was chasing headlines are the ones with interesting stories to tell now.

The good news is that you don’t need to have this all figured out already. But you do need to start asking better questions about what you’re trying to achieve and how you’ll know if you’ve got there. The technology will keep improving, but that won’t matter much if you’re not clear about what you’re trying to improve in the first place.