LLMs and the wall of text
Product and engineering teams need to beware of the AI “wall of text”. LLMs have a tendency to generate huge amounts of text even for simple answers. This is likely due to their increased ability to reason, coupled with the fact that we as humans tend to speak ambiguously, and the AI mirrors that verbose, hedging style back at us.
Developers have had decades to improve tooling for structured data, but only a few years to build tooling for the kind of unstructured output LLMs produce. Until harnesses and tooling catch up, here is how developers can avoid “wall of text syndrome”:
- Ask the model to “write at most 50 words before code” and “be brief”.
- Describe the coding problem concisely, with short words over long ones (the model will mirror your style).
- Establish standard practices around CLAUDE.md, skills, tools, and MCPs that reinforce brevity.
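The first tactic above, a word budget before code, can also be enforced programmatically. Here is a minimal sketch that counts the prose preamble before the first code fence in a model response; the function names (`preamble_word_count`, `violates_brevity`) and the 50-word limit are illustrative assumptions, not part of any specific tool.

```python
import re

# Hypothetical brevity budget, matching the "at most 50 words before code" guideline.
MAX_PREAMBLE_WORDS = 50

def preamble_word_count(response: str) -> int:
    """Count words in the prose that appears before the first code fence."""
    prose = response.split("```", 1)[0]
    return len(re.findall(r"\S+", prose))

def violates_brevity(response: str, limit: int = MAX_PREAMBLE_WORDS) -> bool:
    """Flag responses whose pre-code preamble exceeds the word budget."""
    return preamble_word_count(response) > limit

reply = "Here is a short fix:\n```python\nprint('ok')\n```\nDone."
print(preamble_word_count(reply))  # 5
```

A check like this could run in CI or in an agent harness to reject or retry overly chatty responses before they ever reach a developer.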
This isn’t just best practice: it reduces developers’ cognitive load, cuts token costs, and shortens time to value.
How are you dealing with the LLM wall of text in your organization?