Quality and vibe coding: take back control of your project

Vibe code properly


With vibe coding, I feel like I’m losing control of my project, and after a few sessions, I’ve usually created a monster that ends up joining my bestiary of useless side projects.

That’s why I try as much as possible to pause, write some code myself to give examples to the agent, …


To avoid this drift, I impose a discipline on myself, which broadly looks like this:

And it works pretty well — I end up with projects I understand and can maintain fairly easily with or without AI help… (it’s a 🚧 “work in progress” but you can find this skill here: methodical-dev)


Now, let’s be honest — sometimes I do let my code agent off the leash, carried away by the magical feeling of its capabilities (small aside: AI IS NOT MAGIC, AI IS JUST PRACTICAL!!!). And that’s usually when disaster strikes: once again my code, or at least part of it, slips away from me. In short, my methodology isn’t perfect (actually, it’s me who doesn’t follow it).


So how do you avoid burning tokens for nothing and throwing hours of work in the trash (and therefore being unproductive at a cost)?

How do I avoid a grim end for my projects?

What I’ve noticed is that my projects get complex when I let the AI do everything, to the point where I no longer understand my code (no wait, ITS code). That also means maintainability becomes difficult, even impossible… And on top of that, I find the project less and less interesting…

I’ve therefore decided, in my “agent-augmented” development ritual, to insert a quality control procedure. For now it’s manual, and it will probably stay that way: if I folded it into my skill, my token consumption would go up and I’d hit the rate limit very quickly on my personal Claude subscription.

Objective

The goal of this “quality control procedure” is to help me:

Tooling

For this, I use the qlty tool which gives me code quality metrics (complexity, duplication, …) and code smell detection. The CLI is very simple to use and very fast.

Implementation

Installation and configuration

You first need to install qlty (see the documentation for the details):

```shell
curl https://qlty.sh | sh
```

Then, in your project directory, initialize qlty:

```shell
qlty init
```

This command will create a .qlty folder in your project, containing everything qlty needs to run its code quality analyses. qlty will also create a configuration file, .qlty/qlty.toml, where you can configure qlty and declare the plugins you want to use (for example for your project’s language, linter, etc.).

Go read the plugin configuration documentation; I’ve left you an example here: qlty.toml


Usage

I created a small quality-reports.sh script that I run regularly to take stock of my project quality. This script generates metrics and code smell reports for a given branch with a timestamp, which I can then analyze to detect improvement points and track the evolution of my code quality over time.

quality-reports.sh:

```shell
#!/bin/bash
# Make the branch name file-name safe: replace "/" and "\" with "-"
BRANCH=$(git branch --show-current | sed 's/[\\/]/-/g')
DATE=$(date +%Y%m%d-%H%M)

# Top 10 most complex files (up to depth 2); the second sed strips ANSI color codes
qlty metrics --all --max-depth=2 --sort complexity --limit 10 | sed 's/\x1b\[[0-9;]*m//g' > ".qlty/metrics-${BRANCH}-${DATE}.txt"

# Full code smell report, also stripped of ANSI escape sequences
qlty smells --all | sed 's/\x1b\[[0-9;]*m//g' > ".qlty/smells-${BRANCH}-${DATE}.txt"
```

I run this script every time I merge a significant feature branch, or when I feel my project is getting a bit too “monstrous”.
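Since I run the script around merge time anyway, it can also be triggered automatically by a local `post-merge` git hook. Here is an optional sketch, demonstrated in a throwaway repository (the hook assumes quality-reports.sh sits at the repo root, as above):

```shell
#!/bin/bash
set -e
# Demo in a throwaway repo: install a local post-merge hook that runs
# the report script after every merge (hooks live in .git/hooks and are not versioned).
REPO=$(mktemp -d); cd "$REPO"; git init -q
cat > .git/hooks/post-merge <<'EOF'
#!/bin/bash
./quality-reports.sh
EOF
chmod +x .git/hooks/post-merge
ls -l .git/hooks/post-merge
```

In your real project you would only need the `cat` and `chmod` lines, run once from the repo root.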

Each run produces two reports, one for “smells” and one for “metrics”:

```shell
smells-main-20260305-1654.txt
smells-main-20260305-1858.txt
smells-main-20260305-2134.txt
metrics-main-20260305-1654.txt
metrics-main-20260305-1858.txt
metrics-main-20260305-2134.txt
# and so on ...
```

- The “smells” report is the most interesting one: it points out the improvement areas in my code.
- You’ll find report examples in this folder: quality-reports
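A nice side effect of the naming scheme: the YYYYMMDD-HHMM suffix sorts lexicographically, so picking the N most recent reports for a branch is a one-liner. A self-contained sketch with dummy files (the file names only illustrate the scheme above):

```shell
#!/bin/bash
set -e
# Demo with dummy report files: the timestamp suffix sorts
# lexicographically, so `sort | tail` yields the most recent reports.
DIR=$(mktemp -d)
touch "$DIR"/smells-main-20260305-1654.txt \
      "$DIR"/smells-main-20260305-1858.txt \
      "$DIR"/smells-main-20260305-2134.txt
BRANCH="main"; N=2
ls "$DIR"/smells-"$BRANCH"-*.txt | sort | tail -n "$N"
```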

Report analysis

For report analysis, I created a “quality” skill. I feed it the reports generated by the quality-reports.sh script, and it points out the improvement areas to work on and tracks the evolution of my code quality.

In Claude Code I use it like this:

/quality <BRANCH_NAME> <NUMBER_OF_LAST_REPORTS_TO_ANALYZE> - generate an analysis report

I also use this skill with cagent and OVH AI endpoints (and the gpt-oss-120b model). I wrote an article explaining how to use cagent with OVH AI endpoints: Using cagent with gpt-oss-120b and OVH AI endpoints


The skill will generate a detailed analysis report, giving me the trend of my code quality evolution, the improvement points to work on first, and recommendations to improve my code quality.

Here are a few excerpts from this analysis report:

## Executive Summary

| Metric | Latest Value | Trend (over 3 reports) |
|--------|-------------|--------------------------|
| Total cyclomatic complexity | 1163 | ↑ degrading |
| Overall complexity score | 886 | ↑ degrading |
| Total functions | 220 | ↑ growing |
| Lines of code (LOC) | 4300 | ↑ growing |
| Code smells detected | 19 | — |

### Global Quality Score: 4/10

**Score breakdown:**
- Average cyclomatic complexity per function: `1163 / 220 = 5.29` → in the 5–10 range → **+2 pts**
- Complexity trend: moderate average increase of ~8.3% per snapshot → **+1 pt**
- Code smells count (19 smells, in the 16–30 range) → **+1 pt**
### Trend Analysis

**Overall direction: growing complexity.** Across the three snapshots, cyclomatic complexity rose from 995 to 1163 — a total increase of **+16.9%** over the analysis window.
### Prioritized Action Plan

#### High Priority — Fix First

**1. `snip/internal/agent/agent.go` — Function with high complexity (count = 80): `Run` (line 71)**

The `Run` function is the agent's main execution loop and has a cyclomatic complexity of 80 — far above any acceptable threshold. This single function accounts for a disproportionate share of the total agent package complexity.

_Recommended action:_ Decompose `Run` into smaller, focused helpers. Identify the distinct phases (message preparation, tool detection, tool call dispatch, response handling) and extract each into a named method. Aim for each extracted function to have a complexity under 10.

You’ll find a complete example of this analysis report here: analysis-main-20260306-0442.md

Improvement steps

Based on this analysis report, I’ll be able to identify the improvement points to work on first. If I observe a dramatic downward trend in my code quality, I’ll stop developing new features and focus on improving code quality (manually and/or with the help of AI).


And I’ll iterate until I get a satisfying upward trend in my code quality.

Here too, create a dedicated branch for this work, and make sure there are no regressions, especially if you have your AI do the refactoring.
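That advice boils down to a small workflow, sketched here in a throwaway repository (the branch name is a placeholder; the commented commands stand in for your own test suite and qlty run):

```shell
#!/bin/bash
set -e
# Sketch in a throwaway repo: refactor on a dedicated branch, and only
# merge once the test suite and a fresh qlty run look good.
REPO=$(mktemp -d); cd "$REPO"
git init -q
git config user.email "demo@example.com" && git config user.name "demo"
git commit -q --allow-empty -m "init"
git checkout -q -b refactor/quality-pass
# ... refactor (manually or with your agent), then:
# go test ./...      # or your project's test command
# qlty smells --all  # re-check that the smells went down, not up
git branch --show-current
```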

Conclusion

I’ve been doing this for a few days now, on a development project that has been alive for several months, and on a brand new project. On the older project, the analysis and refactoring sessions are long (I had never done it before, so of course…), but clearly my code has gained in quality and maintainability and I understand it better. On the newer project, it’s easier and I feel it helps me keep control (if I manage to stay disciplined to the end).


So whatever tool you use for code quality analysis, and whatever code agent you use for development, I encourage you to put this small quality control procedure in place. It’s less constraining than it sounds, it will rekindle your interest in your project, and above all it will help you understand your code and maintain it more easily.

The skill code is accessible here: quality

© 2026 k33g Project | Built with Gu10berg
