The naked truth about AI-assisted coding
In this article, I argue that AI-assisted coding is useful but overhyped. I see real value in speeding up boilerplate and first drafts, but I also see serious risks: weaker developer fundamentals, more technical debt, shallow productivity metrics, security and licensing concerns, and growing pressure on open source maintainers. My core point is simple: AI can help me write code faster, but it still doesn’t solve the hardest part of software engineering - building the right systems that stay reliable, secure, and maintainable over time.
1. The current landscape (as of early 2026)
AI-assisted coding tools - GitHub Copilot, ChatGPT, Claude, Gemini, and a growing ecosystem of competitors - have moved from novelty to mass usage in under four years. The narrative around them swings between utopian ("10x developers for everyone!") and dystopian ("developers are obsolete!"). The truth is probably different from both, and far more uncomfortable for the industry.
2. Impact by developer experience level
Junior / Entry-level developers
The good:
- AI tools can scaffold boilerplate, explain unfamiliar syntax, and accelerate the "getting something working" phase.
- They lower the initial barrier to producing functional code.
The critically bad:
- The learning crisis. This is arguably the most dangerous long-term effect of AI on the industry. Junior developers can now produce code they don't understand. The "struggle" phase - debugging, reading docs, understanding why something works - is precisely what builds deep expertise. AI tools are killing this process.
- Cargo-cult programming at scale. Juniors accepting AI suggestions without the expertise to evaluate them leads to codebases full of code that nobody truly owns intellectually. When it breaks in production, the developer who "wrote" it can't debug it. And the more complex the code grows, the harder it becomes to maintain.
- Atrophied fundamentals. Data structures, algorithms, systems thinking, networking - these are already under-taught. When an AI can answer any LeetCode question, the incentive to deeply internalize these concepts erodes further.
- The interview paradox. Companies still gate-keep with traditional coding interviews, but the skills AI cultivates (prompt engineering, output evaluation) are orthogonal to what interviews test. This creates a mismatch that hurts juniors from both sides.
Mid-Level developers
The good:
- Genuine productivity gains on well-understood, repetitive tasks: writing tests, CRUD operations, data transformations, boilerplate API endpoints.
- AI as a "rubber duck" that talks back - useful for thinking through design decisions.
The critically bad:
- The illusion of velocity. Teams are shipping more code faster, but "more code" is not the same as "better software." The truth is that the bottleneck in most projects was never typing speed - it was understanding requirements, making architectural decisions, and managing complexity. AI barely touches these.
- Over-reliance on pattern matching. AI models are fundamentally pattern-matching engines trained on public code. They are really good at common patterns and fail silently on novel or domain-specific problems. Mid-level devs are most at risk of not recognizing when the AI is going in the wrong direction.
- Technical debt acceleration. If developers are generating code 2-3x faster but code review practices haven't scaled to match, the net effect is more technical debt shipped faster.
Senior / Staff+ Engineers
The good:
- Useful for rapid prototyping, exploring unfamiliar APIs, and generating first-draft implementations that they then heavily revise.
- Can accelerate context-switching between codebases and languages.
- AI can be a helpful brainstorming partner for architectural trade-offs, though it should never be the final decision-maker.
The critically bad:
- Architecture and system design remain almost entirely un-assisted. The hardest parts of senior engineering - designing for scale, managing distributed systems, making trade-off decisions under uncertainty - are areas where current AI tools provide superficial help at best and dangerously confident wrong answers at worst.
- The review burden shifts. Senior engineers now spend more time reviewing AI-generated code from their teams. This code is often "almost right" - syntactically correct, passing basic tests, but with subtle issues in error handling, edge cases, security, or performance. Reviewing "almost right" code is harder than reviewing obviously wrong code.
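To make "almost right" concrete, here is a minimal hypothetical sketch (the function names and scenario are illustrative, not from any real codebase): a pagination helper of the kind an assistant might generate. It passes the obvious test case, so it looks correct in review, but it silently drops the final partial page.

```python
# Hypothetical "almost right" code: correct syntax, passes the obvious case,
# subtly wrong on an edge case.
def page_count_naive(total_items: int, page_size: int) -> int:
    # Bug: floor division drops the final partial page.
    return total_items // page_size

# The version a careful reviewer would insist on.
def page_count(total_items: int, page_size: int) -> int:
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    # Ceiling division: -(-a // b) rounds up for non-negative a.
    return -(-total_items // page_size)

print(page_count_naive(100, 10))  # 10 - happens to be right
print(page_count_naive(101, 10))  # 10 - silently wrong; should be 11
print(page_count(101, 10))        # 11
```

Spotting that one-character difference in a 500-line AI-generated diff is exactly the kind of review work that is harder than rejecting obviously broken code.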
3. The problems right now
Quality and correctness
- Hallucination is not a bug; it's a feature of the architecture. LLMs generate plausible text, and plausible is not the same as correct. This is fundamental to how they work, not something that will be "fixed" in the next version.
- Security vulnerabilities. Multiple studies (Stanford, 2023; GitGuardian reports) have shown that AI-generated code is more likely to contain security vulnerabilities than human-written code. AI tools often suggest deprecated functions, insecure defaults, or patterns vulnerable to injection attacks. They optimize for "works" not "works safely."
- Licensing. AI models trained on public repositories sometimes reproduce code verbatim. The legal landscape is still unsettled (ongoing litigation as of 2026), creating genuine risk for companies that ship AI-generated code without review.
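The injection point is worth illustrating. Below is a self-contained sketch using Python's standard-library `sqlite3` (the table and function names are invented for the example): the first query builds SQL by string interpolation, a pattern AI tools still sometimes suggest because it "works"; the second uses a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def find_user_unsafe(name: str):
    # Insecure pattern: user input interpolated straight into SQL.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # every row leaks, including 'root'
print(find_user_safe(payload))    # [] - no user literally named that
```

Both functions "work" on well-behaved input, which is exactly why the unsafe version survives a quick glance at a diff.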
The measurement problem
- Productivity metrics are misleading. Lines of code, PRs merged, and tickets closed are going up. But are we measuring output or outcomes? There is little rigorous evidence that AI tools are reducing time-to-market for complex features or improving user-facing quality.
- Survivorship bias in testimonials. Developers who love AI tools are vocal. Developers who tried them and found them bad are quieter. The loudest voices are not a representative sample.
The homogenization problem
- AI tools are trained on the same public code and tend to suggest the same patterns, the same libraries, the same architectural approaches. This creates a monoculture in code:
- Less diversity in approaches means less resilience.
- Novel solutions are harder to discover when the AI always suggests the "popular" path.
- Smaller, better libraries get overlooked in favor of whatever has the most GitHub stars (and therefore the most training data representation).
The context window limitation
- Despite growing context windows, AI tools still struggle with codebase-level understanding. They can help with a function, sometimes a file, occasionally a module - but they cannot reason about the system. Real software engineering is about systems, not functions.
4. Impact on open source
This is perhaps the area with the most complex and potentially damaging effects.
Contribution quality
- Low-quality AI-generated PRs are flooding open source projects. Maintainers report a surge in pull requests that are clearly AI-generated: superficially plausible, often fixing non-issues, and consuming maintainer time to review and reject. This is an unpaid labor tax on already overburdened maintainers.
- Some projects have had to add policies explicitly addressing AI-generated contributions.
The sustainability paradox
- AI tools are trained on open source code, and the companies building them are worth billions. Yet the open source maintainers whose work makes these tools possible see none of that value. This extraction without reciprocity deepens the open source sustainability crisis.
- There is no scalable mechanism today to compensate OSS authors whose code was used for training.
The maintenance burden
- AI tools make it trivially easy to start an open-source project. They do not help with the hard part: maintaining it for years, triaging issues, managing a community, making backward-compatible changes, and writing documentation. The result may be an explosion of half-finished, abandoned repositories.
Documentation and knowledge sharing
- If developers use AI instead of reading documentation, the incentive to write good docs decreases. This could create a vicious cycle: worse docs → more AI reliance → even less investment in docs.
- Stack Overflow's traffic decline is a canary. Community knowledge-sharing platforms are losing contributors. What happens when the AI's training data becomes stale because the community stopped producing the content it was trained on?
5. Industry-wide challenges
The employment and hiring distortion
- Companies are using "AI productivity gains" as justification for smaller teams and hiring freezes. Whether AI actually delivers enough productivity to offset fewer humans is unproven at scale for complex systems.
- The junior pipeline is being squeezed. Fewer entry-level positions, combined with AI-dependent learning, risks creating a generation gap. In 10 years, who will be the senior engineers if we're not properly training juniors now?
- Hiring processes haven't adapted. We're still testing for skills that AI can trivially perform while ignoring the skills that matter more in an AI-augmented world (critical evaluation, system design, communication).
The deskilling risk
- There is a well-studied phenomenon in aviation and manufacturing called automation complacency: when humans oversee automated systems, their own skills atrophy, and they become worse at catching the automation's failures. Software is walking directly into this trap.
- The developers who will thrive are those who use AI while maintaining their independent ability to do the work without it. But the economic incentives push in the opposite direction.
The vendor lock-in / dependency problem
- Organizations are embedding AI tools deeply into their development workflows. What happens when pricing changes? When a model's behavior shifts after an update? When the API goes down? Teams have built a dependency on a system they don't control and can't run locally (in most cases).
The testing illusion
- AI can generate tests. But AI-generated tests often test the implementation rather than the behavior. They codify what the code does rather than what it should do. This gives false confidence - high coverage numbers that don't actually catch regressions.
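A minimal hypothetical example of the difference (the function and its bug are invented for illustration): a discount helper that treats a percentage as an absolute amount. A test generated by reading the code asserts whatever the code currently returns; a test written from the requirement catches the bug.

```python
def apply_discount(price: float, percent: float) -> float:
    # Subtle bug: treats `percent` as an absolute amount, not a percentage.
    return price - percent

# Implementation-shaped test: derived from the code itself, so it passes
# and inflates coverage without verifying the requirement. (For 100 and
# 10, the buggy result coincides with the correct one.)
assert apply_discount(100.0, 10.0) == 90.0

# Behavior-shaped check, written from the spec: "10% off 200 is 180".
expected, actual = 180.0, apply_discount(200.0, 10.0)
print(actual == expected)  # False - the behavioral test exposes the bug
```

High coverage from the first kind of test is exactly the false confidence described above: the suite locks in today's behavior, bugs included.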
Ethical and environmental costs
- Training and running large language models has a non-trivial carbon footprint. The industry has been conspicuously quiet about the environmental cost of millions of developers making API calls for code completion thousands of times per day.
- Data privacy concerns: code sent to cloud-based AI tools may include proprietary logic, secrets (despite guardrails), and sensitive business context.
6. What's being under-discussed
- Cognitive offloading: We don't understand the long-term cognitive effects of outsourcing thinking to AI. Early research on GPS and memory suggests this is not in our favor.
- The "average" problem: AI models regress to the mean of their training data. They make everyone's code more average. Exceptional, creative solutions become rarer.
- Debugging AI code: Debugging code you wrote is hard. Debugging code an AI wrote - that you never fully understood - is harder. This is a ticking time bomb in production systems.
- The feedback loop collapse: AI is increasingly trained on AI-generated content. Model collapse - degradation in quality when models train on synthetic data - is a documented phenomenon. The quality ceiling may be lower than we think.
- Regulatory lag: Governments are years behind on AI legislation. Questions of liability (who's responsible when AI-generated code causes a data breach?) are entirely unresolved.
7. A balanced verdict
AI tools for software development are genuinely useful in specific, bounded contexts:
- Boilerplate generation
- Syntax lookup and language translation
- First-draft implementations that are heavily reviewed
- Explaining unfamiliar code
- Automating tedious, well-defined tasks
They are actively harmful when:
- Used as a substitute for understanding
- Trusted without verification
- Applied to security-sensitive or architecturally complex work without expert oversight
- Used as a metric for "productivity" without measuring quality
- Allowed to erode the pipeline of skilled developers
The fundamental tension is this: AI tools optimize for speed of code production. But the hard problems in software have never been about producing code fast enough. They've been about producing the right code, maintaining it over time, and building systems that are reliable, secure, and evolvable. Until AI tools meaningfully address these dimensions - and there is no evidence they're close - the productivity narrative is, at best, incomplete, and at worst, a dangerous distraction from what actually makes software teams effective.