Will the Generative AI bubble start deflating in 2025?

This is a copy of an article I wrote for Manchester Digital. You can read the original here and their full series here

As we approach 2025, it’s been nearly two years since OpenAI’s GPT-4 launched the generative AI boom. Predictions of radical, transformational change filled the air. Yet these promises remain largely unfulfilled. Instead of reshaping industries, generative AI risks becoming an expensive distraction – one that organisations should approach with caution.

Incremental gains, not transformational change

Beyond vendor-driven marketing and the claims of those with vested interests, there are still scant examples of generative AI being deployed with meaningful impact. The most widespread adoption has been in coding and office productivity assistants, commonly referred to as “copilots.” However, the evidence largely suggests that their benefits are limited to marginal gains at best.

Most studies of coding assistants report a modest boost in individual productivity. The findings are similar for Microsoft Copilot: a recent Australian Government study highlighted measurable but limited benefits.

Notably, the study also highlighted training and adoption as significant barriers. Despite it being in wide use for years, many organisations still struggle to use the existing Office 365 suite effectively. Learning to craft clear and effective prompts for an LLM presents an even greater challenge: good results rely heavily on the ability to provide precise, well-structured instructions – a skill that requires both practice and understanding.

Busier at busywork?

These tools are good at helping with low-level tasks – writing simple code, drafting documents faster, or creating presentations in less time. However, they don’t address the underlying reasons for performing these tasks in the first place. There’s a real risk they could encourage more busywork rather than meaningful, impactful change. As the old adage in software development goes, “Typing is not the bottleneck.”

All in all, this is hardly the kind of game-changing impact we were promised. 

But they’ll get better, right?

Hitting the wall: diminishing returns

The initial promise of generative AI was that models would continue to get better as more data and compute were thrown at them. However, as many in the industry had predicted, there are clear signs of diminishing returns. According to a recent Bloomberg article, leading AI labs, including OpenAI, Anthropic, and Google DeepMind, are all reportedly struggling to build models that significantly outperform their predecessors.

Hardware may also be becoming a bottleneck. The GPU maker Nvidia, which has been at the heart of the AI boom (and got very rich from it), is facing challenges with its latest GPUs, potentially further compounding the industry’s struggles.

Another exponential leap – like the one seen between GPT-3.5 and GPT-4 – currently looks unlikely.

At what environmental and financial costs?

The environmental impact of generative AI cannot be ignored. Training large language models consumes vast amounts of energy, generating a significant carbon footprint. With each new iteration, energy demands have risen exponentially, raising difficult questions about the sustainability of these technologies.

Additionally, current generative AI products are heavily subsidised by investor funding. As these organisations seek to recoup costs, customer prices will undoubtedly rise. OpenAI has already said it aims to double the price of ChatGPT by 2029.

Advice for 2025: Proceed with caution

Generative AI remains a promising technology, but its practical value is far from proven. It has yet to deliver on its transformational promises, and there are warning signs it may never do so. As organisations look to 2025, they should adopt a cautious, focused approach. Here are three key considerations:

  1. Focus on strategic value, not busywork
    Generative AI tools can make us faster, but faster doesn’t always mean better. Before adopting a tool, assess whether it helps address high-impact, strategic challenges rather than simply making low-value tasks slightly more efficient.
  2. Thoughtful and careful adoption
    GenAI tools are not plug and play solutions. To deploy them effectively, organisations need to focus on clear use cases, where they can genuinely add value.Take the time to train employees, not just on how to use the tools but also on understanding their limitations and best use cases.
  3. Avoid FOMO as a strategy
    As technology strategist Rachel Coldicutt highlighted in her recent newsletter, “FOMO Is Not a Strategy”. Rushing to adopt any technology out of fear of being left behind is rarely effective. Thoughtful, deliberate action will always outperform reactive adoption.

Is “computer says maybe” the new “computer says no”?

GenAI and quantum computing feel like they’re pulling us out of an era when computers were reliable: you put in inputs and got consistent, predictable outputs. Now? Not so much.

Both tease us with incredible potential but come with similar problems: they’re unreliable and hard to scale.

Quantum computing works on probabilities, not certainties. Instead of a clear “yes” or “no,” it gives you a “probably yes” or “probably no.”

Generative AI predicts based on patterns in its training data, which is why it can sometimes be wildly wrong or confidently make things up.

We’ve already opened Pandora’s box with GenAI and are having to learn to live with the complexities that come with its unreliability (for now at least).

Quantum Computing? Who knows when a significant breakthrough may come.

Either way, it feels like we’re entering an era where computers are less about certainty and more about possibility.

Both technologies challenge our trust in what a computer can do, forcing us to consider how we use them and what we expect from them.

So, is “computer says maybe” the future we’re heading towards? What do you think?

My restaurant anecdote: a lesson in leadership

I want to share a story I often use when coaching new leaders – a personal anecdote about a lesson I learned the hard way.

Back when I was at university, I spent a couple of summers working as a waiter in a restaurant. It was a lovely place – a hotel in Salcombe, Devon (UK), with stunning views of the estuary and a sandy beach. It was a fun way to spend the summer.

The restaurant could seat around 80 covers (people). It was divided into sections, and waiters worked in teams, one team per section.

I started as a regular waiter but was soon promoted to “station waiter.” The role involved co-ordinating with the kitchen and managing the timing of orders for a particular section. For example, once a table finished their starters, I’d signal the kitchen to prepare their mains.

Being me, I wanted to be helpful to the other waiters. I didn’t want them thinking I wasn’t pulling my weight, so I’d make sure I was doing my bit clearing tables.

Truth be told, I also had a bit of an anti-authority streak – I didn’t like being told what to do, and I didn’t like telling others what to do either.

Then it all went wrong. I ordered a table’s main course before they’d finished their starters. By the time the mains were ready and sitting under the lights on the hotplate, the diners were still halfway through their first course.

If you’ve worked in a kitchen, you’ll know one thing: never piss off a chef.

I was in the shit.

In my panic, I told the other station waiter what had happened. Luckily, they were quicker-witted than me: they told me to explain to the head chef that one of the diners had gone to the toilet, and to keep the food warm.

So I did.

The head chef’s stare still haunts me, but I got away with it.

That’s when I realised what I’d been doing wrong. My section was chaotic. The other waiters were stressed and rushing around, and it was clear that my “helping” wasn’t actually helping anyone.

My job wasn’t to be just another pair of hands; it was to stay at my station, manage the orders, and keep everything running smoothly. I needed to focus on the big picture – keeping track of the checks, working with the kitchen, and directing the other waiters.

Once I got this, it all started to click. People didn’t actually mind being told what to do – in fact, it was what they wanted. They could then focus on doing their jobs without panicking and running around.

What are the lessons from this story?

The most common challenge I see with new leaders is struggling to step out of their comfort zone when it comes to delegation and giving direction.

Leadership is about enabling, not doing. Your primary role isn’t to do the work yourself; it’s to guide, delegate, and create clarity so your team can succeed. Trying to do everything means you’ll miss the big picture and create confusion and stress.

It’s tempting to keep “helping” or to dive into the weeds because it feels safer. But that’s where things start to unravel – and where many new leaders experience their own “oh shit” moment.

And remember, giving direction doesn’t mean micro-managing; it’s about empowering. Set clear priorities, communicate expectations, then step back and allow people to do their jobs.

And yes, sometimes it’s OK to be quite directive – that clarity is often what people need most.

Are GenAI copilots helping us work smarter – or just faster at fixing the wrong problems?

Let me introduce you to the concept of failure demand.

The most widespread adoption of GenAI is copilots – Microsoft 365 Copilot and coding assistants. Most evidence suggests they deliver incremental productivity gains for individuals: write a bit more code, draft a doc faster, create a presentation in less time.

But why are you doing those tasks in the first place? This is where the concept of failure demand comes in.

Originally coined by John Seddon, failure demand is the work created when systems, processes, or decisions fail to address root causes. Instead of creating value, you spend time patching over problems that shouldn’t have existed in the first place.

Call centres are a perfect example.

Most call centre demand isn’t value demand (customers seeking products or services). It’s failure demand – calls caused by unclear communication, broken systems, or unresolved issues.

GenAI might help agents handle calls faster, but the bigger question is: why are people calling at all?

The same applies to all knowledge work. Faster coding or document creation only accelerates failure demand if the root issues – unclear requirements, poor alignment, unnecessary work – go unaddressed.

Examples:

– Individual speed gains might mask systemic problems, making them harder to spot and fix and reducing the incentive to do so.

– More documents and presentations could bury teams in information, reducing clarity and alignment.

– More code written faster could overwhelm QA teams or create downstream integration issues.

There’s already evidence to suggest this. The 2024 DORA Report (an annual study of engineering team performance) found AI coding assistants marginally improved individual productivity but correlated with a downward trend in team performance.

The far bigger opportunity lies in asking:

– Why does this work exist?
– Can we eliminate or prevent it?

Unless GenAI helps address systemic issues, it risks being a distraction. While it might improve individual productivity, it could hurt overall performance.

How do Generative AI tools impact software developer productivity and code quality?

A recent large-scale study – one of the most empirically robust I’ve seen – tackled this question by analysing 218k+ developers across 880m+ commits (with control groups and everything).

The results? A modest but consistent 4% productivity boost without sacrificing code quality.

Other key findings:

– Moderate GenAI users emerged as the highest overall performers (i.e. less is more)
– Only 1% of developers committed GenAI-authored code without significant rework.

A nice 4% gain, right? Not exactly game-changing, but worth it if licence costs stay reasonable. Enough to reduce a team of 100 to 96 engineers (or boost output by 4%)?

It’s not that simple – the study is somewhat flawed because it measures effort, not outcomes, and individuals, not teams. The latest DORA Report (the gold standard for measuring high-performing tech teams) found AI tools marginally improving individual productivity but having a negative impact on overall team productivity.

As Giovanni Asproni said to me on Bluesky:

“I think that better product management, with a decent strategy, and proper prioritisation and planning will increase output far more than 4% in most organisations. And it may actually help saving energy and CO2 emissions.”

Poll of engineers using AI assistants with production code

I ran a LinkedIn poll asking software developers about their experiences using AI coding assistants with production code. The results were interesting 👇

Why production code? Developers spend most of their time writing it, and AI tools can handle non-critical activities like proof-of-concepts, solo projects, and quick experiments fairly well.

I received 50 responses – not a huge sample, but interesting trends emerged nonetheless. Also, LinkedIn polls aren’t anonymous (which I didn’t know!), so I could see who voted. This allowed me to exclude non-developers and, from LinkedIn profiles, gauge experience levels to add to the data.

You can see the results broken down by answer and experience level in the chart attached.

What stood out to me?

– Only one respondent – an apprentice engineer – said they “automate most things.”

– 74% found AI assistants helpful, but 26% felt they’re more trouble than they’re worth.

– All who said AI assistants “get in the way” (24%) had 8+ years of experience.

– 80% of respondents had 8+ years’ experience (I know a few of them, and others’ profiles suggest they’re legitimately experienced engineers).

My thoughts? Unless they improve significantly:

1) They’re unlikely to shift productivity dramatically. From my experience, “task” work takes about 10-30% of a developer’s coding time. So, while helpful, they’re not game-changing.

2) That a significant portion – especially experienced engineers – find these tools intrusive is concerning.

3) Having heard stories of AI tools introducing subtle bugs, I was surprised not to see more votes for “considered harmful.”

4) The idea that AI tools might replace software engineers – or that “conversational programming” is even close – still feels a very long way off.

5) I worry about an apprentice software engineer getting an AI tool to do most of their work – partly because more experienced developers are far more cautious, presumably with good reason, but mainly because the apprentice won’t learn much by doing so.

Creating an Effective Recruitment Process

Hiring is one of the most impactful decisions for any organisation. The wrong person can badly impact the culture and performance of the organisation, absorb time in performance management and be a significant distraction.

Informal recruitment processes leave things to chance.

Inefficient processes consume time and risk losing top candidates who move quickly.

This is why fixing the recruitment process is typically one of my first priorities when working with organisations.

Having interviewed thousands and hired hundreds over the past two decades, here are my top tips:

All candidates should have a positive experience

Whether or not a candidate gets the role, they should leave with a positive impression. A poor experience not only harms your brand but can dissuade other candidates from applying.

  • Being warm, friendly, and engaging throughout the process costs nothing, and it doesn’t prevent you from asking tough questions.
  • Give constructive feedback to unsuccessful candidates so they know where they can improve.

Short Lead Times

It’s all too common for the time from CV received to decision made to stretch out over months.

  • Aim for a two-week turnaround from CV received to decision made.
  • Much of the following advice is for how you can achieve this 👇

Start with a brief screening call

Use this call to make sure a candidate is a basic fit, gather eligibility information, and introduce them to your company and the recruitment process.

  • If you’re working with a recruiter, ideally empower them to handle this stage (if properly briefed).
  • It’s important this conversation isn’t just a box-ticking exercise. It sets the tone for the rest of the process, so make sure it’s welcoming and informative.

Pre-booked interview slots

So much time is lost to calendar tennis when scheduling interviews. Avoid this by agreeing pre-booked interview slots upfront and putting them in interviewers’ calendars. Whoever is engaging with candidates can then offer any of the available slots. If a candidate can’t make any of them, you can always compromise and find other times, but treat the pre-booked slots as route one.

Schedule all stages up front

Again, this saves time for both the candidate and your organisation, and gives candidates clarity on timelines. If you’re running multiple stages, there’s no reason you can’t cancel a later stage if you don’t consider it worth proceeding with a candidate – just let them know up-front that you may do this.

Do as many stages as you can in one go

Consolidating interview stages reduces cycle times significantly. While you may end up spending more time with candidates who may not progress, you’ll save more overall by avoiding the delays of additional scheduling and decision-making between stages.

If you’re concerned about wasted effort, you can always tell the candidates up front that you may choose to finish the interview early if it’s clear to you it’s not progressing well.

Structured interviews

Every interview should be structured, with the same scripted questions for all candidates and scoring indicators for responses.

This reduces bias, ensures consistency, and simplifies decision-making. Never use scoring indicators as a hard pass or fail; they’re there to help inform your decision.

I’ve developed interview templates where each stage has its questions and scoring indicators on the same form.

Two interviewers in each stage

This has several benefits:

  • One can focus on the candidate while the other takes notes.
  • It avoids leaving decisions to a single viewpoint.
  • It provides an opportunity to train future interview leads.
  • It allows candidates to meet more team members.

The second interviewer doesn’t have to sit there quietly – they can join in, asking follow-up questions and responding to the candidate’s questions.

The interview forms I mentioned previously? Use them as a shared document (in Google Docs or OneDrive, for example) so interviewers can see the notes as they’re typed and add their own comments and observations.

Same-day candidate debrief

Debrief as quickly as possible after each stage or at the end of all interviews. The structured format, shared notes, and scoring indicators will help you avoid lengthy debate, maintain consistency and make timely decisions.

Who wants an AI toothbrush?

AI toothbrush anyone? Yes, this is real, but I don’t want to talk about AI. I want to talk about technology having its place. Not everything is best solved by technology.

I’m a hardcore techie, but when it comes to personal hygiene, I couldn’t be more lo-fi. After years of trial and error, I’ve settled on a wooden toothbrush, basic Colgate toothpaste, and the same cheap Dove soap for both washing and shaving (with a basic razor and shaving brush).

A ~£100 “AI-powered” toothbrush solves no problems for me – or anyone, for that matter.

It’s why I challenge early-stage startups on whether they really need to build something yet. It’s why I’m an advocate for practices like Value Stream Mapping and Service Design, and why the desired outcome is so often best achieved *without* chucking technology at the problem.

It’s why I’m obsessed with process optimisation and systems thinking.

It’s why I loved the recent episode of “The Digital Human” on the potential for AI in the NHS (BBC Radio 4), where the eminently sensible Jessica Rose Morley, PhD pointed out there’s little point getting carried away when, in reality, it’s still common for three wards to share one outdated PC on wheels, and the data is in such a mess.

Technology is expensive – it comes at a cost. Only once you’ve identified the user need and/or optimised existing processes is it worth investing in.

Don’t tolerate brilliant jerks

I believe it was Reed Hastings, CEO of Netflix, who coined the phrase “brilliant jerk.”

They’re common in tech orgs – smart, technically gifted, and highly productive. They’re seen as the people to go to for solving big problems, and they often save the day in a crisis.

But boy do they ruffle some feathers getting there.

As a leader, you can spend a lot of time cleaning up the mess they create. But you put up with them because they get stuff done.

“Jerk” covers a big bucket of behaviours, of course. They can often dominate conversations, dismiss others’ opinions, and aren’t shy about pointing out what’s wrong with the org, proudly “telling it like they see it” – yet they’re typically the last to offer any suggestions.

At one organisation I worked at, we had a toxic “them and us” culture between Sales and Dev. Interestingly, when one particular person left, it pretty much evaporated.

And don’t forget: “Your culture is defined by the worst behaviour you tolerate.”

How to Plan Effectively in the Face of Uncertainty

In a recent post, I explained why we’re inherently bad at estimating, which is a major reason software projects often run late. But that doesn’t mean we can’t plan ahead for the longer term and manage expectations. Here are some techniques I’ve found effective for longer-term planning, even in the face of uncertainty:

Provide Ranged Forecasts 🌦, Not Fixed Estimates

Over any reasonable time horizon, it’s better to give likely delivery dates as a range rather than a fixed date. Doing so embraces the inherent variability and helps stakeholders better appreciate uncertainty. Giving a fixed date, even when you all know it’s just an estimate, sets false expectations.

Don’t just estimate time ⏱, estimate confidence 💪

Along with estimating how long a task might take, assess how confident you are in that estimate. Is it something we’ve done before and know well? High confidence. Is it new, complex, or something we’ve never tackled before? Medium or Low confidence. Then multiply time by confidence: for example, a “Small” task (e.g. 1-3 days) with “Low” confidence can be re-forecast as 1-7 days.
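To make the mechanics concrete, here’s a minimal sketch of that size-times-confidence re-forecasting in Python. The size bands and confidence multipliers are illustrative assumptions of mine, not values from this article – calibrate them against your own team’s history.

```python
# Sketch: confidence-weighted ranged forecasting.
# SIZE_RANGES and CONFIDENCE_MULTIPLIER are assumed, illustrative values.

SIZE_RANGES = {
    "Small": (1, 3),   # (optimistic, pessimistic) days
    "Medium": (3, 8),
    "Large": (8, 20),
}

CONFIDENCE_MULTIPLIER = {
    "High": 1.0,    # well-understood work: keep the range as-is
    "Medium": 1.5,  # some unknowns: stretch the pessimistic end
    "Low": 2.5,     # novel or complex work: stretch it a lot
}

def ranged_forecast(size: str, confidence: str) -> tuple[int, int]:
    """Re-forecast a task as a (best_case, worst_case) range in days."""
    best, worst = SIZE_RANGES[size]
    return best, int(worst * CONFIDENCE_MULTIPLIER[confidence])

# A "Small" task with "Low" confidence becomes 1-7 days,
# matching the worked example above: int(3 * 2.5) == 7.
print(ranged_forecast("Small", "Low"))  # (1, 7)
```

Note the design choice: low confidence widens only the pessimistic end of the range, because uncertainty rarely makes work finish earlier than the best case.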

Longer Term Planning as Risk Management 🚨

Effective planning isn’t just about setting delivery expectations; it’s about managing risks over the project lifecycle. All those estimates that came in as Large or with Medium or Low confidence? They’re your biggest risks and represent the most uncertainty. Identify them early, work out potential mitigations, and highlight them to stakeholders.

Account for Optimism Bias 🌈

Whilst you can’t completely mitigate this, there are things you can do to be a bit less vulnerable. Involve the entire team in estimates, account for full end-to-end delivery (not just developer time), and factor in holidays, sickness, and other things that could impact delivery. Also, that cumulative lowest range across all the work in your ranged forecasts? It’s highly unlikely to be achieved, precisely because of optimism bias – it’s probably best not to present the most optimistic forecast at all.
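As a back-of-the-envelope illustration of just how unlikely that cumulative best case is (the 20% figure below is my own assumption, not from any study): even if each task individually has a decent chance of hitting its optimistic estimate, the probability that every task does so collapses as the plan grows.

```python
# Sketch: probability an entire plan lands on its cumulative best case,
# assuming (illustratively) each task independently has a 20% chance of
# hitting its own optimistic estimate.

p_best_case = 0.2

for n_tasks in (5, 10, 20):
    p_all = p_best_case ** n_tasks
    print(f"{n_tasks:>2} tasks: P(all hit best case) = {p_all:.2e}")

# Output:
#  5 tasks: P(all hit best case) = 3.20e-04
# 10 tasks: P(all hit best case) = 1.02e-07
# 20 tasks: P(all hit best case) = 1.05e-14
```

Hence the advice: the lowest end of a cumulative ranged forecast is a statistical near-impossibility, so don’t present it as a plausible outcome.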

No Silver Bullet, But a Better Approach

These practices won’t eliminate uncertainty or guarantee perfect outcomes – there’s no silver bullet in longer term planning. However, I’ve found they help organisations plan more realistically, reducing the stress and frustration that often come with missed deadlines, and enabling more effective, adaptable strategies in the face of uncertainty.