
Our team’s interview format just changed.
LeetCode puzzles are out; building a full feature is in. If you don’t code with AI (and your typing speed isn’t absurd), you won’t finish. It sounds like a tooling upgrade, but it’s actually reshaping what we mean by an “excellent engineer.”
The Gap Gets Wider
In classic algorithm interviews, the difference between failing and passing might come down to a single hint. Now, when you’re asked to build a UI or feature, the spread in completeness is enormous.
Some candidates deliver a basic, usable page in an hour: smooth interactions, reasonable styling. Others get lost in rounds of AI edits and never land a working baseline. Same AI tools, wildly different outcomes. This format magnifies the gap like never before.
Communication Becomes Hard Currency
Communication now matters more than ever—both with the interviewer and with the AI.
I’ve noticed people who struggle to communicate often dislike vibe coding. It’s not just about unclear requirements; it’s about expectation management.
- Telling the AI exactly which files to touch slashes the chance of edits in the wrong place.
- Feeding back results periodically and asking for refactors keeps the codebase from drifting.
- But if you assume the AI is clueless and spell out every tiny detail, you tank efficiency.
Striking the right balance depends on understanding the tool’s limits and knowing how to collaborate with it, as the sketch below shows.
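As a concrete illustration of the first two points, here is roughly the kind of scoped instruction that works; the file names and the feature are hypothetical, not from any real session:

```
Update only src/components/SearchBar.tsx and src/hooks/useDebounce.ts.
Add a 300ms debounce to the search input; don’t touch the API layer.
When you’re done, summarize what changed in each file so I can review.
```

Naming the files bounds the blast radius, and asking for a per-file summary closes the feedback loop without dictating every keystroke.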
Taste Is Now Testable
Taste suddenly matters. Traditional interviews barely touched it—you might glance at code style, but no one nitpicked CSS color choices. Because this format is so visual, taste is on display.
Is the layout sensible? Are the interactions smooth? Does it look comfortable? Things that used to be “the designer’s job” are now part of the engineering assessment.
That means spending more time observing great products. Taste isn’t innate; it’s trained through exposure and practice.
Technical Depth Still Matters, Just Shows Differently
Is technical skill less important? Not at all.
Today I interviewed someone from a big Australian company that equips engineers with robust AI coding tools, so he was at home with this format. He shipped the feature quickly, and after one or two feedback loops the quality jumped.
What impressed me most: before turning the AI loose, he told me where he expected it to make changes, and after it finished, he opened those spots to verify. He immediately came across as someone who can direct AI, amplify its output, and correct it when it drifts.
That’s the new face of technical depth: not hand-writing a red-black tree, but predicting how AI will implement something and validating the output fast.
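That habit is easy to make concrete. A minimal sketch, assuming the work happens in a git repo (the paths are hypothetical): call your shots before the AI runs, then check the diff against them.

```
# Before the run: write down where you expect changes,
# e.g. src/components/SearchBar.tsx and src/hooks/useDebounce.ts.

# After the run: does the diff match the prediction?
git diff --stat                          # which files changed, and by how much
git diff src/components/SearchBar.tsx    # inspect a spot you called in advance
git diff -- . ':!src/components' ':!src/hooks'   # anything outside the expected areas?
```

If the last command shows output, the AI wandered somewhere you didn’t predict, and that’s exactly where to look first.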
Tools Haven’t Caught Up
Most interview platforms haven’t adapted yet.
Today I used ShowMeBug. Its web IDE runs code nicely, but there’s no AI coding support yet. For now, video calls with screen sharing still work best. I hope we’ll see tools purpose-built for vibe coding interviews.
The Paradox of AI “Cheating”
Remember the startup that helped candidates “cheat” with AI? If AI use is itself a core evaluation point, is there anything left to cheat on? When the interview is meant to gauge how you collaborate with AI, using AI can hardly count as cheating. It’s an amusing paradox.
The pace of change is stunning. In the 10.11 issue, I mentioned Claude Code helping me write 5,700 lines in one night. By the 10.19 issue, AI was letting us finish three weeks of work in one. Now we’re using vibe coding to interview new teammates.
AI is changing how we work and redefining what makes a great engineer. Communication, taste, and command of your tools, once dismissed as “soft” skills, are now core competencies. Technical depth still matters; it just shows up in new ways.