
Why AI Coding Is Still Broken (And Why Your Job Is Safe)

Om


Every product you use is a website or an app. That's it. And everyone keeps screaming that AI will replace the people building them. I've spent two years actually using these tools every single day, and I need to tell you: we've hit a ceiling. A hard one.

The Context Cliff

Here's what happens when you feed your entire codebase into AI. At first, it feels like magic. Then something breaks. The model starts hallucinating features that were never there. It invents APIs that exist in its training data but have nothing to do with your stack. Research from early 2026 confirms this: once you push past 32,000 tokens of context, accuracy falls off a cliff. The AI doesn't just forget details. It starts making things up. Your "smart assistant" becomes a confident liar, and you're left debugging code that looks right but fails in production.

Effective Context vs. Claimed Context (2026). Note the "Reasoning Collapse" past 32k tokens.
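The only defense I've found is refusing to play along: pick the context yourself instead of letting the tool inhale the repo. Here's a minimal sketch of that idea in TypeScript. The names (`packContext`, `estimateTokens`), the 4-characters-per-token estimate, and the 32k budget are my assumptions, not anything a vendor publishes.

```typescript
// A crude guard against the context cliff: cap what goes to the model
// at a fixed token budget instead of dumping the whole repo.
// Assumptions: ~4 characters per token (a common rule of thumb) and a
// 32k budget matching the collapse point described above.

const TOKEN_BUDGET = 32_000;
const CHARS_PER_TOKEN = 4; // heuristic, not a real tokenizer

interface SourceFile {
  path: string;
  contents: string;
}

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

// Pack files in priority order until the budget runs out. Files that
// don't fit are skipped whole rather than truncated, because half a
// file is exactly the kind of input that invites hallucination.
function packContext(files: SourceFile[], budget = TOKEN_BUDGET): SourceFile[] {
  const packed: SourceFile[] = [];
  let used = 0;
  for (const file of files) {
    const cost = estimateTokens(file.contents);
    if (used + cost > budget) continue;
    packed.push(file);
    used += cost;
  }
  return packed;
}
```

It's not clever, and that's the point: the budget does the forgetting for you, before the model can start inventing.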

People tell me: just use a specialized coding model, trained purely on code and nothing else. Sounds logical. Completely wrong. The February 2026 leaderboards prove it: the best coding models in the world right now are massive generalists. Claude Opus 4.5 scores 80.9% on SWE-bench. These models learned math, logic, literature, philosophy, and it turns out that training makes them better engineers. The narrow models are fast and cheap, but when you need actual architectural intelligence, the generalists win. The breadth is the point.

The Security Spiral

But here's where it all falls apart. I tell AI to add a file upload to my signup form. My brain immediately jumps: users need to edit that file later, where does it get stored, what happens if the upload fails, how do we handle malicious files. The AI? It asks which storage service I want and writes the upload function. That's it. Security reports from 2025 found vulnerabilities in 45% of AI-generated code. Worse: when you ask AI to fix its own code, the security problems get 37% worse after five iterations. It solves the syntax puzzle but fails the engineering test. It writes code that compiles and breaks in ways you will discover three months later when a user reports data loss.

Security Degradation in Iterative AI Code Generation: +37% more vulnerabilities after five iterations. Asking AI to "fix" code actually introduces more bugs.
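For contrast, here's roughly what that upload endpoint looks like after a human has walked through the checklist. This is a sketch using Express and Multer; the size cap, the MIME whitelist, and the route name are placeholder policy I made up, not settled answers.

```typescript
import express from "express";
import multer from "multer";

const app = express();

// The parts the AI wrote: accept a file, store it somewhere.
// The parts it didn't: size limits, type validation, failure handling.
const upload = multer({
  dest: "/tmp/uploads", // placeholder; real storage is a design decision
  limits: { fileSize: 5 * 1024 * 1024 }, // 5 MB cap, assumed policy
  fileFilter: (_req, file, cb) => {
    // Whitelist, never blacklist. The MIME type is client-supplied, so
    // this is a first filter only; a real pipeline still scans contents.
    const allowed = ["image/png", "image/jpeg", "application/pdf"];
    cb(null, allowed.includes(file.mimetype));
  },
});

app.post("/signup/avatar", upload.single("avatar"), (req, res) => {
  if (!req.file) {
    // Rejected by the filter or missing entirely: tell the client why.
    return res.status(400).json({ error: "missing or disallowed file" });
  }
  // TODO in any real system: virus scan, move to durable storage, and
  // record the file against the user so it can be edited or deleted later.
  res.status(201).json({ stored: req.file.filename });
});

// Multer errors (e.g. file too large) land here instead of crashing the app.
app.use(
  (err: Error, _req: express.Request, res: express.Response, _next: express.NextFunction) => {
    res.status(400).json({ error: err.message });
  },
);
```

Even this version punts on the hard parts: malware scanning, durable storage, and letting the user edit the file later all still live in the TODO, and that TODO is the actual engineering.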

The Iceberg Problem

The real problem is simpler than anyone admits. AI pulls information related to your exact query and stops. It's trained to answer questions, to complete patterns. It's a prediction engine optimized for the next token. Engineering is about thinking three steps ahead, around corners the AI will never see.

Token Prediction vs. Systems Thinking. What the AI sees is the surface: uploadFile(). What engineers see is everything beneath it: S3 bucket permissions, malware scanning, GDPR compliance, database relations, file type validation, error handling, mobile retry logic, rate limiting. The AI sees the syntax; engineers see the dependencies.
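To make one hidden layer concrete, take the mobile retry logic. Here's a minimal sketch, assuming the upload endpoint is idempotent so a timed-out request is safe to repeat; the URL, method, and backoff numbers are illustrative, not prescriptive.

```typescript
// Retry an upload over a flaky connection with exponential backoff.
// Assumes the endpoint is idempotent, so repeating a timed-out PUT is safe.
async function uploadWithRetry(
  url: string,
  body: Blob,
  maxAttempts = 4,
): Promise<Response> {
  let lastError: unknown = new Error("upload never attempted");
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(url, { method: "PUT", body });
      // Success, or a client error like 413 that won't improve on retry:
      // hand it back to the caller either way.
      if (res.ok || res.status < 500) return res;
      lastError = new Error(`server error: ${res.status}`);
    } catch (err) {
      lastError = err; // network failure: worth retrying
    }
    if (attempt < maxAttempts - 1) {
      // Exponential backoff with jitter (1s, 2s, 4s... plus noise), so a
      // thousand phones recovering at once don't stampede the server.
      const delay = 2 ** attempt * 1000 + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Almost all of that is policy: what counts as retryable, how long to wait, when to give up. None of it is in the prompt, which is why none of it shows up in the generated code. Multiply that by every item on the list above and you have the iceberg.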

That's why after two years of this, coding with AI feels like walking a tightrope. You're guessing. You're praying. And only 3.8% of developers trust AI code without review, because we've all been burned. The promise that AI will replace your engineering team is garbage. What we actually have is a tool that makes experienced developers faster and junior developers dangerous. Your job is safe. Actually, it's more essential than ever, because someone needs to catch what the AI misses. And it misses everything that matters.