By Adam Hassan • Feb 25, 2026 • 4 min read
Programming has changed for good, no matter how we feel about it.
Back in 2023, we often complained that LLMs made logical and syntax errors, sometimes making our projects impossible to run. These concerns were real—we just couldn’t trust AI to get things right.
By 2024, the doubts had changed.
AI started to look like a useful tool, but many people resented having Copilot-style features forced on them, often with no option to turn them off. Some even asked how to remove AI from their workflow.
These complaints made sense. Unwanted suggestions and auto-generated code often caused problems, and sometimes the cleanup made our work take longer than if we had just written the code ourselves.
Management’s expectations didn’t match reality. Developers saw that AI wasn’t reliable, but stakeholders were caught up in the hype and news, which just put more pressure on us.
And even as the tools improved, the day-to-day experience still didn’t live up to the hype. Watching autocomplete write whole functions felt impressive, but AI still wasn’t reliable enough for important work. A perception took hold: generation doesn’t equal delivery. Reviewing and correcting generated code took significant time, and often it felt like AI wasn’t saving time at all, just adding another layer of work.
But in 2025, everything changed. AI became more powerful and easier to use than ever before.
If I had said in 2023 that we’d soon have a tool that could handle several projects at once, plan out complex features, ask questions, and then build them almost perfectly, no one would have believed it. But that’s where we are now.
There were studies saying AI might hurt our skills, and reports that 95% of GenAI projects weren’t delivering real results. But I believe the real issue isn’t AI itself—it’s how we use it.
In the past, building a complex feature that spanned areas like the API and the frontend meant lots of analysis, meetings, and grooming. You’d need a frontend expert for the UI and a senior backend developer for the business logic. Then UX and other team members would review it, along with stakeholders. By the time the change shipped, it had consumed a lot of people’s time and often led to overengineering and unnecessary tasks.
All of this just slowed teams down even more.
Recently, I tested one of these projects using Claude Code and skipped most of the usual steps. It handled the task perfectly.
But the most interesting part wasn’t the code—it was the research. I could use subagents to read research papers, trusted sources, UX guidelines, and best practices, all while having access to the codebase and its patterns. The result was a ‘Design Decisions & Research’ document that I could review, edit, and add my thoughts to. That document became the main reference for building the feature.
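For readers curious what that research step looks like in practice, here’s a rough sketch of a research-focused subagent definition. The file location and frontmatter fields (`name`, `description`, `tools`) follow Claude Code’s custom-subagent convention as I understand it; treat the exact fields, tool names, and wording as assumptions rather than the definitive format.

```markdown
<!-- .claude/agents/research.md — hypothetical example; check the official docs for the exact format -->
---
name: research
description: Gathers papers, UX guidelines, and prior art for a feature, then drafts a "Design Decisions & Research" document for human review.
tools: Read, Grep, WebSearch
---

You are a research assistant. Before any code is written:

1. Survey relevant papers, trusted sources, and UX best practices.
2. Read the existing codebase to understand its patterns and constraints.
3. Summarize your findings in a "Design Decisions & Research" document,
   listing each decision, the alternatives considered, and the trade-offs.

Do not write implementation code; your output is the research document.
```

The point isn’t the exact syntax. It’s that the research step becomes a reviewable artifact you can edit and annotate before a single line of code exists.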
When we build features on our own, we can’t usually spend this much time on research for just one task. There’s no way to read three papers, see how others solved the problem, and check UX best practices before starting to code. But with AI, you can do all that, and it changes everything. You’re not just making software faster and cheaper—you’re making it better.
There was some resistance. Some people worried they weren’t needed anymore, but I saw it as a big improvement to the process. Instead of overthinking and spending too much time before releasing a feature, I can now ship it and get feedback from stakeholders right away. It’s faster and better.
Of course, some tasks still need real experts, but most don’t.
Big companies haven’t really taken advantage of this yet, but those that become AI-first will see huge benefits. The main challenge isn’t the AI—it’s the human side.
“I can write that code faster myself.”
“If AI can do my daily job, what do I do?”
Management expects everyone to use AI, but there’s no official policy or training. Sometimes you just get Copilot because it comes bundled with the Microsoft Office license, even if it’s not the best fit for everyone.
So where are we going from here?
Honestly, I’m not sure. I don’t think anyone really knows. But one thing is clear: programming has changed for good, and the parts you used to enjoy might not fit as well in this new era.
Some people will like the new way of prompting, analyzing, and using judgment to build features, even if it means not going into the same level of detail as before. Others might find it less meaningful, and it may not be as motivating as spending hours debugging just to see your code finally work.