A plea to junior developers using GenAI coding assistants

The early years of your career shape the kind of developer you’ll become. They’re when you build the problem-solving skills and knowledge that set apart excellent engineers from average ones. But what happens if those formative years are spent outsourcing that thinking to AI?

Generative AI (GenAI) coding assistants have rapidly become popular tools in software development, with as many as 81% of developers reporting that they use them (Developers & AI Coding Assistant Trends by CoSignal).

Whilst I personally think the jury is still out on how beneficial they are, I’m particularly worried about junior developers using them. The risk is that they use these tools as a crutch – solving problems for them rather than encouraging them to think critically and solve problems themselves (and let’s not forget: GenAI is often wrong, and junior devs are the least likely to spot its mistakes).

GenAI blunts critical thinking

LLMs are impressive at a surface level. They’re great for quickly getting up to speed on a new topic or generating boilerplate code. But beyond that, they still struggle with complexity.

Because they generate responses based on statistical probability – drawing from vast amounts of existing code – GenAI tools tend to provide the most common solutions. While this can be useful for routine tasks, it also means their outputs are inherently generic – average at best.

This homogenising effect doesn’t just limit creativity; it can also inhibit deeper learning. When solutions are handed to you rather than worked through, the cognitive effort that drives problem-solving and mastery is lost. Instead of encouraging critical thinking, AI coding assistants short-circuit it.

Several studies suggest that frequent GenAI tool usage negatively impacts critical thinking skills.

GenAI creating more “Expert Beginners”

At entry level, it’s tempting to lean on GenAI to generate code without fully understanding the reasoning behind it. But this risks creating a generation of developers who can assemble code but quickly plateau.

The concept of the “expert beginner” comes from Erik Dietrich’s well-known article. It describes someone who appears competent – perhaps even confident – but lacks the deeper understanding necessary to progress into true expertise.

If you rely too much on GenAI code tools, you’re at real risk of getting stuck as an expert beginner.

I’ve seen this happen. I’ve watched developers “panel beat” code – throwing it into a GenAI assistant over and over until it works – without actually understanding why.

And here’s the danger: in an industry where average engineers are becoming less valuable, expert beginners are at the highest risk of being left behind.

The value of an average engineer is likely to go down

Software engineering has always been a high-value skill, but not all engineers bring the same level of value.

Kent Beck, one of the pioneers of agile development, recently reflected on his experience using GenAI tools:

“I’ve been reluctant to try ChatGPT. Today I got over that reluctance. Now I understand why I was reluctant. The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate.” (Kent Beck, Twitter post)

This is a wake-up call. The industry is shifting. If your only value as a developer is quickly writing pretty generic code, the harsh reality is that leaning too heavily on AI risks making you redundant.

The engineers who will thrive are the ones who bring deep understanding, strong problem-solving skills, and the ability to weigh trade-offs and make pragmatic decisions.

My Plea…

Early in your career, your most valuable asset isn’t how quickly you can produce code – it’s how well you can think through problems, how well you can work with other people, how well you can learn from failure.

It’s a crucial time to build strong problem-solving and foundational skills. If GenAI assistants replace the process of struggling through challenges, learning from them (and from more experienced developers), and investing the time to learn topics deeply, they risk stunting your growth – and your career.

If you’re a junior developer, my plea to you is this: don’t let GenAI tools think for you. Use them sparingly, if at all. Use them in the same way most senior developers I speak to use them – for very simple tasks, autocomplete, yak shaving. But when it comes to solving real problems, do the work yourself.

Because the developers who truly excel aren’t the ones who can generate code the fastest.

They’re the ones who understand it best.


The evidence suggests GenAI coding assistants offer tiny gains – real productivity lies elsewhere

GenAI coding assistants increase individual developer productivity by just 0.7% to 2.7%

How have I determined that? The best studies I’ve found on GenAI coding assistants suggest they improve coding productivity by around 5-10% (see the following studies, all published in 2024: The Impact of Generative AI on Software Developer Performance, the DORA 2024 Report, and The Effects of Generative AI on High Skilled Work: Evidence from Three Field Experiments with Software Developers).

However, also according to the best research I could find, developers spend only 1-2 hours a day on coding activity (reading/writing/reviewing code) – see Today was a Good Day: The Daily Life of Software Developers, and the Global Code Time Report 2022 by Software.

In a 7.5-hour workday, that translates to an overall productivity gain of just 0.7% to 2.7%.
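As a rough sketch of that arithmetic (a minimal calculation in Python, using the assumed figures above: a 5-10% coding speed-up, 1-2 hours of coding per day, and a 7.5-hour workday):

  # Rough sketch of the calculation above, using the assumed figures:
  # a 5-10% coding speed-up, 1-2 hours/day of coding activity, 7.5-hour workday.
  coding_gain = (0.05, 0.10)    # reported gain on coding activity itself
  coding_hours = (1.0, 2.0)     # hours per day spent reading/writing/reviewing code
  workday_hours = 7.5

  low = coding_gain[0] * coding_hours[0] / workday_hours   # ~0.007
  high = coding_gain[1] * coding_hours[1] / workday_hours  # ~0.027
  print(f"Overall productivity gain: {low:.1%} to {high:.1%}")  # 0.7% to 2.7%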

But even these figures aren’t particularly meaningful – most coding assistant studies rely on poor proxy metrics like PRs, commits, and merge requests. The ones that include more meaningful metrics, such as code quality or overall delivery, show the smallest gains, or even negative ones.

And as I regularly say, typing isn’t the bottleneck anyway. The much bigger factors in developer productivity are things like:

  • being clear on priorities
  • understanding requirements
  • collaborating well with others
  • being able to ship frequently and reliably.

GenAI might slightly speed up coding activity, but that’s not where the biggest inefficiencies lie.

If you want to improve developer productivity, focus on what will actually make the most difference

Best Practices for Meetings

Make Every Meeting Count

Too many meetings and ineffective meetings are highly wasteful. If a meeting is necessary, make sure it’s worth everyone’s time. Here’s how both attendees and organisers can ensure meetings are effective and productive.


For Attendees

Should You Even Be There?

The best meetings are the ones we don’t need to have! Before accepting an invite, ask yourself:

  • Does it have to be a meeting? Can the same outcome be achieved via an email, a shared document, or an async discussion?
  • Do I need to be there? Understand your role in the meeting and why your presence is necessary.

What to Expect When You Attend

  • A clear purpose, an agenda, and a desired outcome.
  • An understanding of why you need to be there.
  • The right to challenge if the above expectations aren’t met.

How to Show Up Effectively

  • Be on time, visible*, present, and engaged.
  • If you realise during the meeting that you don’t need to be there, politely excuse yourself – it’s not rude, it’s efficient.
  • When committing to actions, use commitment language: “I’ll do X by this time.”
  • If you can’t make it and decline, explain why.

*Cameras on, unless it’s a large team meeting like a town hall.


For Organisers

Should It Even Be a Meeting?

Before scheduling, consider:

  • Is this truly necessary? Can the same outcome be achieved via an email, a shared document, or an async discussion?
  • Do you have a clear purpose? If not, don’t book it.
  • Can you meet the further guidance below? If not, rethink your approach.

Expect attendees to challenge if your meeting lacks clarity.

Get the Right People in the Room

  • Don’t invite people just because you’re unsure if they need to be there.
  • If decisions need to be made, fewer attendees are better.
  • Don’t invite people just for awareness – share the output instead.

Set the Right Duration

  • Can it be 30 minutes? 15 minutes? Avoid defaulting to an hour.
  • Adjust your calendar settings to default to shorter meetings (both Outlook and Gmail have options for this).

Choose the Right Time

  • Consider attendees’ working patterns.
  • Avoid disrupting deep work (engineers, designers). Best times are often after stand-ups or after lunch.

Facilitate Effectively

  • Keep the meeting focused and ensure it meets its objective.
  • Be inclusive – don’t let the loudest voices dominate.
  • Pre-reads: send any information you want to go over well before the meeting; don’t waste people’s time reading something as a group for the first time.
  • Be quorate or cancel if key people don’t turn up – don’t waste people’s time if you’ll end up needing to have the meeting again.

Summarise and Track Actions

The organiser is accountable for:

  • Sending a summary with key decisions, actions, owners, and deadlines.
  • Tracking and following up on agreed actions.

Additional Tips

Recurring Meetings

  • Use meeting notes to track progress.
  • All actions should have owners and dates.
  • Make sure tracking actions is part of the agenda.
  • Regularly review the cadence – is the meeting still needed? Do the right people attend?

For more, check out: Avoiding Bad Meetings and What to Do When You’re in One.

Will the Generative AI bubble start deflating in 2025?

This is a copy of an article I wrote for Manchester Digital. You can read the original here and their full series here

As we approach 2025, it’s been nearly two years since OpenAI’s GPT-4 launched the generative AI boom. Predictions of radical, transformational change filled the air. Yet these promises remain largely unfulfilled. Instead of reshaping industries, generative AI risks becoming an expensive distraction – one that organisations should approach with caution.

Incremental gains, not transformational change

Beyond vendor-driven marketing and claims from those with vested interests, there are still scant examples of generative AI being deployed with meaningful impact. The most widespread adoption has been in coding and office productivity assistants, commonly referred to as “copilots.” However, the evidence largely suggests that their benefits are limited to marginal gains at best.

Most studies on coding assistants report a modest boost in individual productivity. The findings are similar for Microsoft Copilot. A recent Australian Government study highlighted measurable, but limited benefits.

Notably, the study also highlighted training and adoption as significant barriers. Despite the existing Office 365 suite having been in widespread use for years, many organisations still struggle to use it effectively. Learning to craft clear and effective prompts for an LLM presents an even greater challenge, where good results rely heavily on the ability to provide precise and well-structured instructions – a skill that requires both practice and understanding.

Busier at busywork?

These tools are good at helping with low-level tasks – writing simple code, drafting documents faster, or creating presentations in less time. However, they don’t address the underlying reasons for performing these tasks in the first place. There’s a real risk they could encourage more busywork rather than meaningful, impactful change. As the old adage in software development goes, “Typing is not the bottleneck.”

All in all, this is hardly the kind of game-changing impact we were promised. 

But they’ll get better, right?

Hitting the wall: diminishing returns

The initial promise of generative AI was that models would continue to get better as more data and compute were thrown at them. However, as many in the industry had predicted, there are clear signs of diminishing returns. According to a recent Bloomberg article, leading AI labs, including OpenAI, Anthropic, and Google DeepMind, are all reportedly struggling to build models that significantly outperform their predecessors.

Hardware looks like it may also be becoming a bottleneck. The GPU maker Nvidia, which has been at the heart of the AI boom (and got very rich from it), is facing challenges with its latest GPUs, potentially further compounding the industry’s struggles.

Another exponential leap – like the one seen between GPT-3.5 and GPT-4 – currently looks unlikely.

At what environmental and financial costs?

The environmental impact of generative AI cannot be ignored. Training large language models consumes vast amounts of energy, generating a significant carbon footprint. With each new iteration, energy demands have risen exponentially, raising difficult questions about the sustainability of these technologies.

Additionally, current generative AI products are heavily subsidised by investor funding. As these organisations seek to recoup costs, customer prices will undoubtedly rise. OpenAI has already said it aims to double the price of ChatGPT by 2029.

Advice for 2025: Proceed with caution

Generative AI remains a promising technology, but its practical value is far from proven. It has yet to deliver on its transformational promises and there are warning signs it may never do so. As organisations look to 2025, they should adopt a cautious, focused approach. Here are three key considerations:

  1. Focus on strategic value, not busywork
    Generative AI tools can make us faster, but faster doesn’t always mean better. Before adopting a tool, assess whether it helps address high-impact, strategic challenges rather than simply making low-value tasks slightly more efficient.
  2. Thoughtful and careful adoption
    GenAI tools are not plug-and-play solutions. To deploy them effectively, organisations need to focus on clear use cases where they can genuinely add value. Take the time to train employees, not just on how to use the tools but also on understanding their limitations and best use cases.
  3. Avoid FOMO as a strategy
    As technology strategist Rachel Coldicutt highlighted in her recent newsletter, “FOMO Is Not a Strategy”. Rushing to adopt any technology out of fear of being left behind is rarely effective. Thoughtful, deliberate action will always outperform reactive adoption.

Is “computer says maybe” the new “computer says no”?

GenAI and quantum computing feel like they’re pulling us out of an era when computers were reliable: you put in inputs and got consistent, predictable outputs. Now? Not so much.

Both tease us with incredible potential but come with similar problems: they’re unreliable and hard to scale.

Quantum computing works on probabilities, not certainties. Instead of a clear “yes” or “no,” it gives you a “probably yes” or “probably no.”

Generative AI predicts based on patterns in its training data, which is why it can sometimes be wildly wrong or confidently make things up.

We’ve already opened Pandora’s box with GenAI and are having to learn to live with the complexities that come with its unreliability (for now at least).

Quantum Computing? Who knows when a significant breakthrough may come.

Either way it feels like we’re potentially entering an era where computers are less about certainty and more about possibility.

Both technologies challenge our trust in what a computer can do, forcing us to consider how we use them and what we expect from them.

So, is “computer says maybe” the future we’re heading towards? What do you think?

My restaurant anecdote: a lesson in leadership

I want to share a story I often use when coaching new leaders – a personal anecdote about a lesson I learned the hard way.

Back when I was at university, I spent a couple of summers working as a waiter in a restaurant. It was a lovely place – a hotel in Salcombe, Devon (UK), with stunning views of the estuary and a sandy beach. It was a fun way to spend the summer.

The restaurant could seat around 80 covers (people). It was divided into sections and waiters would work in teams for a section.

I started as a regular waiter, but was soon promoted to a “station waiter.” This role involved co-ordinating with the kitchen and managing the timing of orders for a particular section. For example, once a table finished their starters, I’d signal the kitchen to prepare their mains.

Being me, I wanted to be helpful to the other waiters. I didn’t want them thinking I wasn’t pulling my weight, so I’d make sure I was doing my bit clearing tables.

Truth be told, I also had a bit of an anti-authority streak – I didn’t like being told what to do, and I didn’t like telling others what to do either.

Then it all went wrong. I ordered a table’s main course before they’d finished their starters. By the time the mains were ready and sitting under the lights on the hotplate, the diners were still halfway through their first course.

If you’ve worked in a kitchen, you’ll know one thing: never piss off a chef.

I was in the shit.

In my panic, I told the other station waiter what had happened. Luckily, they were more quick-witted than me. They told me to explain to the head chef that one of the diners had gone to the toilet, and to keep the food warm.

So I did.

The head chef’s stare still haunts me, but I got away with it.

That’s when I realised what I’d been doing wrong. My section was chaotic. The other waiters were stressed and rushing around, and it was clear that my “helping” wasn’t actually helping anyone.

My job wasn’t to be just another pair of hands; it was to stay at my station, manage the orders, and keep everything running smoothly. I needed to focus on the big picture – keeping track of the checks, working with the kitchen, and directing the other waiters.

Once I got this, it all started to click. People didn’t actually mind being told what to do – in fact, it was what they wanted. They could then focus on doing their jobs without feeling like they had to panic and rush around too.

What are the lessons from this story?

The most common challenge I see with new leaders is struggling to step out of their comfort zone when it comes to delegation and giving direction.

Leadership is about enabling, not doing. Your primary role isn’t to do the work yourself; it’s to guide, delegate, and create clarity so your team can succeed. Trying to do everything means you’ll miss the big picture and creates confusion and stress.

It’s tempting to keep “helping” or to dive into the weeds because it feels safer. But that’s where things start to unravel – and where many new leaders experience their own “oh shit” moment.

And remember, giving direction doesn’t mean micro-managing; it’s about empowering. Set clear priorities, communicate expectations, then step back and allow people to do their jobs.

And yes, sometimes it’s OK to be quite directive – that clarity is often what people need most.

Are GenAI copilots helping us work smarter – or just faster at fixing the wrong problems?

Are GenAI copilots helping us work smarter – or just faster at fixing the wrong problems? Let me introduce you to the concept of failure demand.

The most widespread adoption of GenAI is copilots – Office 365 Copilot and coding assistants. Most evidence suggests they deliver incremental productivity gains for individuals: write a bit more code, draft a doc faster, create a presentation in less time.

But why are you doing those tasks in the first place? This is where the concept of failure demand comes in.

Originally coined by John Seddon, failure demand is the work created when systems, processes, or decisions fail to address root causes. Instead of creating value, you spend time patching over problems that shouldn’t have existed in the first place.

Call centres are a perfect example.

Most call centre demand isn’t value demand (customers seeking products or services). It’s failure demand: caused by unclear communication, broken systems, or unresolved issues.

GenAI might help agents handle calls faster, but the bigger question is why are people calling at all?

The same applies to all knowledge work. Faster coding or document creation only accelerates failure demand if the root issues (e.g. unclear requirements, poor alignment, unnecessary work) go unaddressed.

Examples:

– Individual speed gains might mask systemic problems, making them harder to spot and fix and reducing the incentive to do so.

– More documents and presentations could bury teams in information, reducing clarity and alignment.

– More code written faster could overwhelm QA teams or create downstream integration issues.

There’s already evidence which suggests this. The 2024 DORA Report (an annual study of engineering team performance) found AI coding assistants marginally improved individual productivity but correlated with a downward trend in team performance.

The far bigger opportunity lies in asking:

– Why does this work exist?
– Can we eliminate or prevent it?

Unless GenAI helps address systemic issues, it risks being a distraction. While it might improve individual productivity, it could hurt overall performance.

How do Generative AI tools impact software developer productivity and code quality?

How do Generative AI tools impact software developer productivity and code quality? A recent large-scale study – one of the most empirically robust I’ve seen – tackled this question by analysing 218k+ developers across 880m+ commits (with control groups and everything).

The results? A modest but consistent 4% productivity boost without sacrificing code quality.

Other key findings:

– Moderate GenAI users emerged as the highest overall performers (i.e. less is more)
– Only 1% of developers committed GenAI-authored code without significant rework.

A nice 4% gain, right? Not exactly game-changing, but worth it if licence costs stay reasonable. Enough to reduce a team of 100 to 96 engineers (or boost output by 4%)?

Not that simple – the study is somewhat flawed because it measures effort, not outcomes, and individuals, not teams. The latest DORA Report (the gold standard when it comes to measuring high-performing tech teams) found AI tools marginally improving individual productivity, but having a NEGATIVE impact on overall team productivity.

As Giovanni Asproni said to me on Bluesky:

“I think that better product management, with a decent strategy, and proper prioritisation and planning will increase output far more than 4% in most organisations. And it may actually help saving energy and CO2 emissions.”

Poll of engineers using AI assistants with production code

I ran a LinkedIn poll asking software developers about their experiences using AI coding assistants with production code. The results were interesting 👇

Why production code? Developers spend most of their time writing it, and AI tools can handle non-critical activities like proof-of-concepts, solo projects, and quick experiments fairly well.

I received 50 responses – not a huge sample, but interesting trends emerged nonetheless. Also, LinkedIn polls aren’t anonymous (which I didn’t know!), so I could see who voted. This allowed me to exclude non-developers, gauge experience levels from LinkedIn profiles, and add that to the data.

You can see the results broken down by answer and experience level in the chart attached.

What stood out to me?

– Only one respondent – an apprentice engineer – said they “automate most things.”

– 74% found AI assistants helpful, but 26% felt they’re more trouble than they’re worth.

– All who said AI assistants “get in the way” (24%) had 8+ years of experience.

– 80% of respondents had 8+ years’ experience (I know a few of them, and others’ profiles suggest they’re legitimately experienced engineers).

My thoughts? Unless they improve significantly:

1) They’re unlikely to shift productivity dramatically. From my experience, “task” work takes about 10-30% of a developer’s coding time. So, while helpful, they’re not game-changing.

2) That a significant portion – especially experienced engineers – find these tools intrusive is concerning.

3) Having heard stories of AI tools introducing subtle bugs, I was surprised not to see more votes for “considered harmful.”

4) The idea that AI tools might replace software engineers – or that even “conversational programming” is anywhere close – still feels a very long way off.

5) I worry about an apprentice software engineer getting an AI tool to do most of their work. Partly because more experienced developers are a lot more cautious and must have a reason for this, but mainly because they won’t be learning much by doing so.

Creating an Effective Recruitment Process

Hiring is one of the most impactful decisions for any organisation. The wrong person can badly impact the culture and performance of the organisation, absorb time in performance management and be a significant distraction.

Informal recruitment processes leave things to chance.

Inefficient processes consume time and risk losing top candidates who move quickly.

This is why fixing the recruitment process is typically one of my first priorities when working with organisations.

Having interviewed thousands and hired hundreds over the past two decades, here are my top tips:

All candidates should have a positive experience

Whether or not a candidate gets the role, they should leave with a positive impression. A poor experience not only harms your brand but can dissuade other candidates from applying.

  • Being warm, friendly, and engaging throughout the process costs nothing, and it doesn’t prevent you from asking tough questions.
  • Give constructive feedback to unsuccessful candidates so they know where they can improve.

Short Lead Times

It’s all too common for the time from CV received to decision made to drag on, often taking months.

  • Aim for a two week turnaround from CV received to decision made.
  • Much of the following advice is for how you can achieve this 👇

Start with a brief screening call

Use it to make sure a candidate is a basic fit, to gather eligibility information, and to introduce them to your company and the recruitment process.

  • If you’re working with a recruiter, ideally empower them to handle this stage (if properly briefed).
  • It’s important this conversation isn’t just a box-ticking exercise. It sets the tone for the rest of the process, so make sure it’s welcoming and informative.

Pre-booked interview slots

So much time is lost to calendar tennis when scheduling interviews. Avoid this by agreeing, and putting in place, pre-booked interview slots upfront in interviewers’ calendars. Whoever is engaging with candidates can then offer them any of the available pre-booked slots. If they can’t make any of them, you can always compromise and try to find other times, but make the pre-booked slots your first option.

Schedule all stages up front

Again, this saves time for both the candidate and your organisation, and gives candidates clarity on timelines. If you’re doing multiple stages, there’s no reason why you can’t cancel a later stage if you don’t think it’s worth proceeding with the candidate; just let them know up front that you may do this.

Do as many stages as you can in one go

Consolidating interview stages reduces cycle times significantly. While you may end up spending more time with candidates who may not progress, you’ll save more overall by avoiding the delays of additional scheduling and decision-making between stages.

If you’re concerned about wasted effort, you can always tell the candidates up front that you may choose to finish the interview early if it’s clear to you it’s not progressing well.

Structured interviews

Every interview should be structured, with the same scripted questions for all candidates and scoring indicators for responses.

This reduces bias, ensures consistency, and simplifies decision-making. Never use scoring indicators as a pass or fail; they just help inform your decision.

I’ve developed interview templates where each stage has its questions and scoring indicators on the same form.

Two interviewers in each stage

This has several benefits:

  • One can focus on the candidate while the other takes notes.
  • It avoids leaving decisions to a single viewpoint.
  • It provides an opportunity to train future interview leads.
  • It allows candidates to meet more team members.

The second interviewer doesn’t have to sit there quietly; they can take part in asking follow-up questions and responding to candidate questions.

The interview forms I mentioned previously? Use them as a shared document (e.g. in Google Docs or OneDrive) so interviewers can see the notes as they’re typed and add their own comments and observations.

Same-day candidate debrief

Debrief as quickly as possible after each stage or at the end of all interviews. The structured format, shared notes, and scoring indicators will help you avoid lengthy debate, maintain consistency and make timely decisions.