Generate an /llms.txt file for the most recent version of a package:

    https://hex2txt.fly.dev/<package>/llms.txt

For example, for the phoenix package:

    https://hex2txt.fly.dev/phoenix/llms.txt

Generate an /llms.txt file for a specific version of a package:

    https://hex2txt.fly.dev/<package>/<version>/llms.txt

For example, for phoenix 1.7.0:

    https://hex2txt.fly.dev/phoenix/1.7.0/llms.txt
Replace <package> with the desired package name and <version> with the specific version number.
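The URL scheme above is simple enough to assemble programmatically. A minimal sketch (the helper function name is illustrative, not part of the service):

```python
def llms_txt_url(package, version=None):
    """Build a hex2txt URL for a package, optionally pinned to a version."""
    url = f"https://hex2txt.fly.dev/{package}"
    if version is not None:
        url += f"/{version}"
    return url + "/llms.txt"

# Most recent version of a package:
print(llms_txt_url("phoenix"))           # https://hex2txt.fly.dev/phoenix/llms.txt
# A specific version:
print(llms_txt_url("phoenix", "1.7.0"))  # https://hex2txt.fly.dev/phoenix/1.7.0/llms.txt
```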
Yes. Although the implementation is currently straightforward, the end goal is to produce files that are optimized specifically for inference-time consumption by an LLM (or LLM-adjacent tooling). The /llms.txt component is a signal that these files are intended for use by machines, not humans.
Here's an example of two real sessions using Aider, each launched with:

    aider --sonnet
This works by scraping output files produced by ExDoc, making numerous assumptions (for example, relying on ExDoc to generate JavaScript files with embedded JSON assigned to specific JS variable names). This is obviously fragile. For this reason, documentation published with older versions of ExDoc might not work.
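To illustrate why this approach is fragile, here is a sketch of the kind of extraction involved: pulling a JSON object literal out of a JavaScript variable assignment in a generated file. The variable name and the toy snippet below are illustrative; the actual names and file layout vary across ExDoc versions, which is exactly the problem.

```python
import json
import re

def extract_embedded_json(js_source, var_name):
    """Pull a JSON object literal out of a `<var_name> = {...};` assignment
    in a generated JavaScript file. Breaks whenever the generator changes
    its variable names or serialization format."""
    match = re.search(rf"{re.escape(var_name)}\s*=\s*(\{{.*\}})", js_source, re.DOTALL)
    if match is None:
        raise ValueError(f"no assignment to {var_name!r} found")
    return json.loads(match.group(1))

# Toy snippet shaped like ExDoc's embedded sidebar data (illustrative only):
js = 'sidebarNodes = {"modules": [{"id": "MyApp", "title": "MyApp"}]};'
data = extract_embedded_json(js, "sidebarNodes")
print(data["modules"][0]["id"])  # MyApp
```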
Yes, I think so (with, perhaps, additional web-facing tooling facilitated by other Hex-related projects). Iterating on this prototype and collecting community feedback is the best way to determine how useful this feature is and to inform requirements.
This is currently prototype-quality code, without proper error handling (among other deficiencies).
But the biggest practical issue is the size of generated documentation files, specifically for packages with a large API footprint (such as Phoenix, Ecto, Elixir, etc.). These docs can consume several hundreds of thousands of tokens and easily exhaust all available LLM context space.
We need to find ways to reduce the file size (e.g., by including information for only a subset of modules, or by dropping examples). Additionally, there may be clever ways to use embeddings to dynamically include only the components of the documentation relevant to the task at hand.
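The embedding idea could be sketched as follows. Note that `embed` below is a bag-of-words stand-in for a real embedding model (which would be an API call in practice), and the chunk texts are invented for illustration; only the ranking-and-budgeting shape is the point.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_relevant(chunks, task, budget):
    """Rank documentation chunks by similarity to the task description,
    then keep only as many as fit a (word-count) budget."""
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), embed(task)), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        cost = len(chunk.split())
        if used + cost <= budget:
            picked.append(chunk)
            used += cost
    return picked

docs = [
    "Ecto.Query build and compose database queries",
    "Phoenix.Router route incoming requests to controllers",
    "Ecto.Changeset validate and cast user input",
]
print(select_relevant(docs, "validate user input", budget=6))
```

A real implementation would count tokens rather than words and use semantic embeddings, but the same select-then-budget structure applies.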
Please submit a PR on GitHub if you'd like to contribute. Some work (and lots of experimentation) will be required to discover how to most effectively assemble LLM-specific documentation.