Promoting HTML Output with Claude Code: Why It's Superior to Markdown

The Claude Code team at Anthropic recommends HTML over Markdown for AI output, highlighting the potential of SVG and interactive elements to enhance comprehension.

3 min read · Reviewed & edited by the SINGULISM Editorial Team


The New Potential of HTML Highlighted by Claude Code

For years, Markdown has been the de facto output format for large language models (LLMs) due to its simplicity. However, Thariq Shihipar from Anthropic’s Claude Code team has challenged that convention with an article advocating the “unreasonable effectiveness” of HTML, urging a reevaluation of how we approach AI outputs.

The Superiority of HTML Over Markdown

In the early GPT-4 era, constrained by an 8,192-token limit, Markdown was preferred for its token efficiency. However, Thariq argues that in environments like Claude Code, HTML offers distinct advantages as an output format.

HTML goes beyond simple text formatting, enabling advanced expressions such as:

  • Direct rendering of charts and diagrams using SVG
  • Interactive widgets powered by JavaScript
  • Systematic information presentation via in-page navigation

These capabilities allow users to receive complex code explanations or data analysis results in a far more intuitive and comprehensible manner.
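To make the list above concrete, here is a minimal, hypothetical fragment (not taken from Thariq's article) showing the kind of self-contained output HTML enables: an inline SVG chart plus a collapsible section that is interactive without any JavaScript. The figures and labels are invented for illustration.

```html
<!-- Hypothetical illustration: a chart rendered directly as inline SVG,
     no external image or charting library required. -->
<figure>
  <svg width="220" height="120" viewBox="0 0 220 120" role="img"
       aria-label="Illustrative bar chart">
    <rect x="30" y="70" width="50" height="40" fill="#4a90d9"/>
    <rect x="130" y="30" width="50" height="80" fill="#d9534f"/>
    <text x="30" y="65" font-size="10">Markdown</text>
    <text x="130" y="25" font-size="10">HTML</text>
  </svg>
  <figcaption>Expressive range by format (illustrative values)</figcaption>
</figure>

<!-- Native interactivity with zero scripting: the browser handles
     expand/collapse via the details/summary elements. -->
<details>
  <summary>Why does this matter?</summary>
  <p>Charts, annotations, and progressive disclosure can all live in a
     single response, with no renderer beyond the browser itself.</p>
</details>
```

Because the fragment is plain standards-based HTML, any browser can render it directly; Markdown would need an external image or a separate charting step to achieve the same result.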

A Prominent Developer’s Validation of HTML Output

Supporting this argument is Simon Willison, a renowned tech blogger and developer. Inspired by Thariq’s article, he explored the potential of HTML outputs further.

Willison tested HTML output by asking an LLM (GPT-5.5) to generate an explanation of a newly discovered Linux security vulnerability, “copy.fail.” He prompted the model to leverage HTML, CSS, and JavaScript to produce “a rich, interactive, and as clear as possible” explanatory page. The result was a highly useful, detailed page that successfully employed all three technologies.

A Paradigm Shift in Prompt Engineering

This debate extends beyond mere output format preferences—it has the potential to reshape the very foundations of prompt engineering.

Thariq provides thought-provoking examples of prompts, such as, “Review this pull request and create an explanation as an HTML artifact. Focus particularly on the logic behind streaming/backpressure, display the actual diffs with margin annotations, and use color coding based on importance.” By specifying detailed output formats, users can better harness the capabilities of LLMs.

The Future Ahead: An Era of Rich AI Outputs

Willison remarked, “Since the GPT-4 era, I defaulted to requesting almost everything in Markdown, but this article made me rethink that approach.” He expressed particular enthusiasm for experimenting with HTML outputs going forward.

As LLMs’ context windows expand and their processing power improves, token efficiency is becoming less critical. HTML outputs leveraging SVG, interactive elements, and rich styling could hold the key to qualitatively enhancing the information derived from AI interactions. Claude Code’s “HTML-first” approach may revolutionize future AI development workflows, drawing increasing attention to this innovative shift.

Frequently Asked Questions

Why is HTML output gaining attention now?
Previously, LLMs were limited by context windows (the amount of text they could process), and Markdown was a practical solution within those constraints. However, as models have improved and those limitations have eased, HTML's advantages in producing richer, more comprehensible outputs are being reevaluated.
How can HTML outputs be practically utilized?
HTML is effective for tasks like code reviews, explaining complex algorithms, and visualizing data analysis results. By specifying detailed instructions in prompts—such as "Draw a flowchart using SVG" or "Highlight importance with color coding"—users can generate interactive and highly intuitive explanatory materials.
Source: Simon Willison's Weblog
