The rise of AI coding assistants like GitHub Copilot and Cursor marks a significant shift in how software is developed. While these tools undoubtedly offer productivity benefits, I’m increasingly concerned about their medium- to long-term impact on code quality and developer skills.

The Seduction of AI-Generated Code

The allure of AI assistants is undeniable. They suggest completions for repetitive tasks, generate boilerplate code, and even propose solutions for complex problems. The time savings are immediate and tangible. A function that might have taken fifteen minutes to research and implement appears in seconds with a simple prompt.

This immediate gratification creates a dangerous feedback loop. Each successful suggestion reinforces our trust in the system, gradually eroding the critical evaluation that’s essential to quality development. After a dozen helpful completions, it becomes all too easy to accept the thirteenth without proper scrutiny — even when that suggestion contains subtle logical flaws or security vulnerabilities.

Growing Codebases, Shrinking Scrutiny

The data emerging from the development community shows worrying trends. Codebases appear to be growing faster, and pull requests are being merged at higher rates — GitHub reports a 15% increase in merge rates for projects using AI assistants. But larger, faster-merged pull requests should give us pause, not cause for celebration.

Is this productivity, or is it bloat?

In networking infrastructure, where resource efficiency and reliability are paramount, code expansion often correlates with increased complexity, maintenance challenges, and larger attack surfaces. When AI suggestions drive this expansion, the code often lacks the contextual awareness that a focused human developer would apply.

The Hidden Cognitive Biases

Our relationship with AI coding tools is complicated by several cognitive biases:

Automation bias leads us to trust computer-generated solutions more than our own judgement, even when we have the expertise to recognise flaws. This is particularly problematic in networking contexts where edge cases can have cascading failure implications.

Sunk cost fallacy makes us reluctant to discard AI-generated code that we’ve spent time tweaking and integrating, even when a cleaner solution might be better.

Anchoring bias causes us to overly rely on the AI’s initial suggestion, constraining our thinking rather than expanding it.

Review fatigue sets in when we evaluate large blocks of generated code; our attention naturally diminishes as we proceed through the implementation.

A Framework for Responsible AI Use

Rather than rejecting these tools outright, I advocate for a disciplined approach to their integration:

  1. Define boundaries clearly. Certain critical components—security functions, core routing logic, exception handling—should remain primarily human-authored with AI relegated to an advisory role.

  2. Implement stricter review protocols for AI-assisted code. Consider adding dedicated review stages specifically to question the necessity and efficiency of generated solutions; a minimal CI gate along these lines is sketched after this list.

  3. Maintain skill sharpness by regularly writing code without assistance. Just as pilots must maintain manual flying skills despite autopilot availability, developers should preserve their fundamental coding abilities.

  4. Measure the right metrics. Instead of celebrating increased code production, focus on maintainability indices, test coverage, and performance benchmarks; the second sketch below illustrates one crude way to track this.

  5. Create team norms around when and how to use AI assistants. Some problem domains benefit from assistance more than others.
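To make the second recommendation concrete, here is a minimal sketch of a CI gate that routes oversized changes to a dedicated review stage. The base branch name, the 400-line threshold, and the exit-code convention are illustrative assumptions rather than settings from any particular CI product; adapt them to your own pipeline.

```python
#!/usr/bin/env python3
"""Flag oversized pull requests for a dedicated human review stage.

A minimal sketch: the base branch ('origin/main') and the 400-line
threshold are illustrative assumptions, not recommendations.
"""
import subprocess
import sys

BASE_BRANCH = "origin/main"   # assumed base branch; adjust for your repo
MAX_ADDED_LINES = 400         # assumed threshold for triggering extra review


def added_lines(base: str) -> int:
    """Sum the lines added across the diff against the base branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, _deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report '-' instead of a line count
            total += int(added)
    return total


def main() -> int:
    total = added_lines(BASE_BRANCH)
    if total > MAX_ADDED_LINES:
        print(f"{total} lines added (limit {MAX_ADDED_LINES}): "
              "route this change through the dedicated review stage.")
        return 1  # non-zero exit marks the check as needing attention
    print(f"{total} lines added: within normal review limits.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired into CI, the non-zero exit simply flags the pull request for the extra review stage rather than blocking it outright; the point is to make “large and fast-merged” visible, not to forbid large changes.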
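And for the fourth recommendation, a deliberately crude complexity report built only on Python’s standard ast module. The set of decision-point nodes below is my assumption, a rough stand-in for a real maintainability index (it also counts nested functions’ branches toward their parent), so treat the numbers as trend signals rather than absolute scores.

```python
#!/usr/bin/env python3
"""Report a crude cyclomatic-complexity proxy per function.

A sketch using only the standard library; the node set below is an
assumption about what counts as a decision point, not a standard metric.
"""
import ast
import sys

# Constructs treated as decision points. This deliberately over-counts
# (nested functions contribute to their parent) -- good enough for trends.
DECISION_NODES = (
    ast.If, ast.For, ast.While, ast.Try,
    ast.BoolOp, ast.IfExp, ast.comprehension,
)


def complexity(func: ast.AST) -> int:
    """Return 1 plus the number of decision points inside the function."""
    return 1 + sum(isinstance(n, DECISION_NODES) for n in ast.walk(func))


def report(path: str) -> None:
    """Print one line per function defined in the given source file."""
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            print(f"{path}:{node.lineno}  {node.name}  "
                  f"complexity={complexity(node)}")


if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        report(source_file)
```

Running it over a module before and after an AI-assisted change says more than raw line counts do about whether the codebase is staying maintainable.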

The Long View

The networking industry has always balanced innovation with stability. Our infrastructures support critical communications where failures aren’t merely inconvenient—they can be catastrophic. As we integrate AI coding assistants into our development practices, we must maintain our commitment to rigorous engineering principles.

The true measure of these tools won’t be how much code they help us write, but whether that code stands the test of time—whether it remains maintainable, secure, and efficient as requirements evolve and teams change.

My concern isn’t that AI assistants will make developers obsolete, but rather that they might make us complacent. The most valuable skill in an AI-augmented development environment isn’t prompt engineering—it’s the discernment to know when to accept assistance and when to rely on human craftsmanship.

In our eagerness to embrace the productivity benefits of AI coding assistants, let’s not sacrifice the engineering discipline that has built reliable systems for decades. The code we write today may run for years—let’s ensure we’re not building our foundations on unexamined AI suggestions.