This week a teammate and I ran an experiment: we handed nearly the entire coding workload of a new project to AI. The result was striking: we finished in one week what normally takes us three. But behind that unprecedented velocity, a few counterintuitive lessons quickly surfaced.
The Cost of Efficiency and Cognition on Overdrive
When you code at three times your usual speed, the first thing to tire isn't your hands; it's your brain. The loop of building, feeling out the result, and revising compresses so much that design reviews and iteration cycles blur together. Rate limits do force occasional breaks, but task switching at triple the usual intensity becomes a cognitive tax of its own.
My teammate went a step further, spinning up three independent project branches and letting Claude Code work on them in parallel. The speedup was real, but it created new headaches: constant context juggling and the inevitable merge conflicts. It drove home that in the AI era we need a new balance between tool velocity and the human capacity to manage context. OpenAI still feels stronger on this front: from early on, its web client kept contexts clean and scoped to a single task by design.
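For the curious, here is roughly what that parallel setup looks like as a script. This is a sketch, not what we actually ran: it assumes Claude Code's non-interactive `claude -p` mode and uses git worktrees to give each branch its own working copy; the branch names and prompts are invented.

```python
# Sketch: run Claude Code on several branches in parallel, one git worktree each.
import subprocess
from pathlib import Path

repo = Path(".").resolve()
tasks = {
    "feature-auth": "Implement the login flow described in docs/auth.md",
    "feature-billing": "Add usage-based billing to the API layer",
    "refactor-db": "Move the data layer from raw SQL onto the ORM",
}

procs = []
for branch, prompt in tasks.items():
    worktree = repo.parent / f"{repo.name}-{branch}"
    # One isolated working copy per task keeps each agent's context clean.
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, str(worktree)],
        cwd=repo, check=True,
    )
    # Launch Claude Code headlessly in each worktree; the runs proceed in parallel.
    procs.append(subprocess.Popen(["claude", "-p", prompt], cwd=worktree))

for p in procs:
    p.wait()  # Merging the results (and resolving the conflicts) is still a human job.
```

The isolation is the whole point of the worktrees: each agent sees only its own task, which is exactly the per-task cleanliness the merge step later makes you pay for.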
A Quiet Revolution in Software Engineering
AI-native development is quietly reshaping the software engineering defaults we grew up with. We used to spend endless energy on reuse and avoiding technical debt. Now we can afford to rewrite boldly because execution is cheap.
That “traveling light” feeling is delightful. Every new project can start from code that fits the moment, without dragging history along. We even put this into practice: when another team's API couldn't be updated fast enough, we simply rewrote it ourselves.
It reminds me of something Databricks co-founder Reynold Xin said at the HYSTA annual meeting this weekend: we may be heading toward an era of “personalized software,” where every company runs a deeply customized back office. Just like short-form video reshaped media, AI is rewriting the cadence and patterns of software development.
Human Evolution: From Implementers to Definers
If AI takes on more of the implementation, where does that leave us?
At HYSTA, an AMD VP encouraged younger engineers to go full-stack. I agree, but the stack that matters now extends upward. The most critical layers will be demand insight and product design: the business model, the user journey, the interactions. The implementation layer is something we can happily hand off to AI.
Software may soon resemble today’s video creation landscape: low barriers, creativity everywhere. We went from only major studios producing video to countless indie creators flourishing. As the tools become ubiquitous, the race shifts to who has the best ideas and who understands their users best.
Finding a Human Rhythm
This week made it clear we are standing at a productivity inflection point. AI doesn’t just linearly boost efficiency—it forces us to rethink the work tempo and the human role.
The feedback loop is intoxicating: in half a day you can watch a new feature spring from nothing. But as long as humans remain the ultimate decision-makers, our attention and cognitive rhythm set the ceiling. Finding the right balance between AI's perpetual motion and deep human thinking is the next challenge we all need to tackle.
The Myth of Custom Training in a Flood of Open Models
One last reflection on model training.
During the afternoon pitch session at HYSTA, one startup after another talked about collecting data and training its own model. That runs counter to my experience over the past two years.
Most of these teams were at the seed or pre-seed stage, still searching for product direction and a user feedback loop. Training a bespoke model at that point is usually a drag that slows exploratory cycles to a crawl. My firsthand lesson: if you can't hit a baseline with off-the-shelf models plus solid product design, it's very hard to believe that custom training will suddenly save the day.
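To make that concrete, here is a minimal sketch of what "off-the-shelf models plus solid product design" can look like during validation. It uses the OpenAI Python client's chat completions API; the support-ticket scenario, model name, and prompts are illustrative placeholders, not details from any real project.

```python
# Validation-phase prototype: the product's core loop built on a hosted model
# instead of a custom-trained one. Scenario, model name, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY; point base_url at any OpenAI-compatible host

def draft_reply(ticket: str) -> str:
    """Draft a first-pass answer to a customer support ticket."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in a hosted open-weight model without touching the loop
        messages=[
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": ticket},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("My export job has been stuck at 90% for an hour."))
```

If a loop like this, wrapped in genuine product design, can't reach a usable baseline, it is hard to see how a custom-trained model would rescue it; and since many hosted open models expose OpenAI-compatible endpoints, swapping models at this stage costs almost nothing.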
Meanwhile open models are improving at breakneck pace—better performance, lower costs. Two years ago, fine-tuning and shipping a model took at least a month, and by then your requirements might have moved on. Today, at least during the validation phase, embracing the wave of open models is far wiser than building behind closed doors.