I write for readers, not for indexers

Posted on: 24 April 2026

Every so often someone asks me how I optimise my pieces for Google. The question assumes a technical answer: keywords, heading structure, semantic density, backlinks. My actual answer is that I optimise nothing, and the follow-up question usually reveals the hidden assumption, namely that writing without optimising is a form of naivety or laziness, possibly both. It isn't either. It's a choice, and it's worth explaining why, particularly now.

I write for the reader, not for machines. I'm not interested in being read by everyone; I'd rather be read by those few who have the intellectual curiosity to understand what I'm saying and to peer, through these pieces, into my way of seeing things. It isn't an intellectual exercise for its own sake; it's more like fixing thoughts in a kind of public diary, public only because someone, not everyone, might read it, disagree with it, or turn it over quietly in their head.

The diary is public by accident, not by vocation. If I were certain there existed a private way of writing that produced the same cognitive result, I'd probably prefer it, but the act of making something public, even for twenty readers, forces a precision that private writing doesn't demand. A blank page tolerates any vagueness; a published one doesn't.

Until recently this stance looked like a form of romantic snobbery, the sort of thing the niche author tells you while the marketing professionals get on with reality. Now it's simply the approach that works, and that's where it gets interesting.

For twenty years online writing has been colonised by a logic that had little to do with the reader and a great deal to do with the algorithm. Writing for the web meant writing for Google, and writing for Google meant learning rules that had nothing to do with the quality of thought. Keyword density, calibrated meta descriptions, five-word sentences to improve readability as perceived by crawlers, artificially broken paragraphs, the famous "inverted pyramid" applied to any kind of content. Those who had something to say had to learn to say it badly in order to be found, and those who had nothing to say thrived, because following rules doesn't require having ideas.

The result has been a digital landscape in which the average content was optimised to be discovered but not to be read. A structural distortion of the entire editorial ecosystem, hard to see from the inside because everyone was playing the same game. That game is ending now, not through some collective awakening about quality, but through a technical shift in the architecture of search itself. Generative AI systems (Perplexity, ChatGPT, Claude, Gemini) are progressively replacing the Google results page as the first point of contact between question and answer. And these systems don't work like Google.

Google rewarded form; RAG systems reward substance. Not because they're philosophically different, but because technically they operate on a different plane. When a language model needs to answer a question, it doesn't look for the "best page"; it looks for the "best block of text" among millions of available fragments. Those who've written densely, clearly, self-sufficiently win, and those who've written to fill space around three keywords lose.
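The retrieval step described above can be sketched in miniature: the system turns the question and each candidate chunk into vectors and keeps the chunks most similar to the question. This is a toy illustration only — the bag-of-words "embedding" here is a stand-in for the learned embedding models real RAG pipelines use, and the chunks are invented examples — but the ranking logic has the same shape, and it shows why a dense, self-sufficient paragraph outscores keyword filler.

```python
# Toy sketch of RAG-style retrieval: rank text chunks by vector
# similarity to the question and keep the top k. The embedding is a
# deliberately crude bag-of-words stand-in, not a real model.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: counts of lowercased alphabetic words.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Score every chunk against the question, best first.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Keyword keyword keyword. Click here for the best keyword tips.",
    "Dense paragraphs with clear attribution are retrieved more often.",
    "Subscribe to our newsletter for more content like this.",
]
top = retrieve("why are dense, clearly attributed paragraphs retrieved?",
               chunks, k=1)
# → the "Dense paragraphs..." chunk wins; the keyword-stuffed one scores 0
```

The keyword-stuffed chunk loses not because a judge dislikes it, but because repeating one term adds nothing to its overlap with a real question — exactly the shift from rewarding form to rewarding substance.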

The data is starting to confirm it. The correlation between position in traditional SERPs and probability of being cited by an AI system is very weak, only 4.5% of cited URLs correspond to the top organic result. Backlinks, that central fetish of SEO, have almost no correlation with visibility in RAG systems. What matters is information density per paragraph, clear source attribution, freshness of content, and consistency of voice across different platforms. Precisely the things that anyone with something to say should always have been doing.

Those who've spent twenty years optimising for Google now have to learn to write. Those who've spent twenty years writing don't have to learn anything, and this isn't a paradox, it's a normalisation. Platforms have always incentivised specific behaviours and content producers have always adjusted, because when the incentive changes, the behaviour changes. The peculiarity of this shift is that, for once, the new incentive coincides with what the sophisticated human reader had always wanted, namely density, clarity, honest attribution, and zero filler.

At this point someone might object that I'm optimising too, just for a different algorithm. Not quite. The difference lies in the primary intention, in the sense that if you write thinking about machines, even when you write well, your compass is the algorithm; when the algorithm changes, you adjust, and in the meantime the prose carries the marks of that adjustment. If you write thinking about the curious reader, the compass is a real person, imagined but plausible, and the text calibrates itself against that person. That today's machines happen to reward precisely that kind of text is a fortunate coincidence, not the engine of the choice. A sort of unintended side effect.

The difference shows in the prose. An optimised text is always recognisable, even when it's well done, because it has a regular rhythm, a suspicious symmetry, a perfectly arched closing, openings that announce the point instead of making it. A text written for a reader has a more irregular rhythm, a length that follows the thought rather than an external rule, pauses where they're needed and not where they help the SEO score.

Then there's the question of the public diary, which is the part I find most interesting. Writing in public without seeking an audience is a gesture that seems contradictory but isn't. It means accepting that the text selects its own readers. Those who don't recognise themselves in the density or the rhythm move on, and that's fine, while those who do recognise themselves stay, and perhaps come back. There's no active persuasion, only the making available of a certain way of looking at things.

I've watched this dynamic work over the years in contexts very far from writing. In design collecting, in serious mid-century modern, in niche contemporary art. The good object doesn't shout, doesn't explain itself, doesn't try to convince. It sits there, and the connoisseur recognises it. Those without the eye walk past without even noticing. The selection is silent, mutual, and far more effective than any marketing strategy.

The same goes for a text. If I write thinking about convincing everyone, I end up convincing no one in particular. If I write thinking about the reader who already exists, somewhere, with a certain mental disposition, the text becomes a possible meeting point. Not guaranteed, possible. And that's enough.

There's an interesting side effect to this stance, which is that the public diary forces you to think more slowly, because you know someone will read it and vagueness doesn't hold up under a stranger's gaze. At the same time it frees you from the pressure of reach, because if you know the real readers are few by definition, you stop measuring a piece's success by the numbers. The number that matters is another one, namely how many times, rereading an old piece, you still recognise the thought as yours and still find it useful. That number, if you're honest, is always lower than you'd expect, and it's the real indicator of the quality of the writing.

Back to the starting point. I write for the reader, not for machines, not because it's a pure gesture of cultural resistance, but because it's the only stance that produces texts worth rereading. Machines, for their part, are becoming sophisticated enough to recognise the same thing. It's an unexpected convergence between my stubbornness and the direction of technology, and for once I'd rather not examine the reason too closely. It works, and I have other things to write.