If you are building autonomous workflows or using an AI assistant to manage your Encord projects, you can inject our platform’s abilities and sitemap directly into your agent’s context.

Why Use Machine-Readable Docs?

While standard docs are for humans, llms.txt and skill.md provide a source of truth designed specifically for Large Language Models:
  • Reduce Hallucinations: Explicit constraints in skill.md prevent agents from guessing API parameters or using deprecated methods.
  • Token Efficiency: We strip away the HTML and UI “noise,” saving your context window space and reducing API costs.
  • Complete Site Mapping: llms.txt provides a lightweight index so agents do not have to “crawl” your site to find the right page.
  • Autonomous Action: Workflow definitions enable agents to actually execute tasks rather than just explain how they might be done.
| File | The Analogy | Use Case |
| --- | --- | --- |
| llms.txt | The Sitemap | Helping an LLM find the right information. |
| skill.md | The Skillset | Helping an agent do work on the Encord platform. |

Setup

Agents can consume these files in two ways: using a direct URL, or using the Skills CLI.

The Discovery Index (llms.txt)

Point your LLM or RAG system to our documentation index. This file contains a curated list of all pages and their descriptions in a format LLMs can parse instantly. URL: https://docs.encord.com/llms.txt
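Files following the llms.txt convention typically list pages as markdown links, one per line, in the form `- [Title](url): description`. As a minimal sketch (assuming that shape, which is the common convention rather than anything Encord-specific), here is how a RAG pipeline might turn the index into structured entries:

```python
import re

def parse_llms_txt(text):
    """Parse llms.txt link lines of the form '- [Title](url): description'."""
    pattern = re.compile(r"-\s*\[([^\]]+)\]\(([^)]+)\)(?::\s*(.*))?")
    entries = []
    for line in text.splitlines():
        match = pattern.match(line.strip())
        if match:
            title, url, description = match.groups()
            entries.append({
                "title": title,
                "url": url,
                "description": description or "",
            })
    return entries
```

An agent can then match a task against the descriptions and fetch only the pages it needs, instead of crawling the whole site.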

The Skills CLI (skill.md)

For agents that need to understand Encord’s specific capabilities and workflows, use the Skills client.

1. Install the CLI:

npm install -g skills

2. Add Encord’s capabilities. Run the following command to point your agent to our machine-readable metadata:

npx skills add docs.encord.com

Using skill.md in Production

Traditionally, you write a Python script that calls the Encord SDK to do something specific. For example, to register a Dataset, export labels, and so on. You decide the logic. The script executes it.

An agent flips this around. Instead of writing every step yourself, you give an LLM a goal. For example, “check for any completed label rows and export them to COCO format”. The LLM then figures out which SDK calls to make, and in what order, to accomplish it. The agent is the LLM acting autonomously on your behalf, writing and executing the logic at runtime rather than ahead of time.

For this to work reliably, the LLM needs accurate knowledge of the Encord platform: what methods exist, what order operations must happen in, what the limits are, and what mistakes to avoid. That is what skill.md provides. Without it, the LLM falls back on whatever it learned during training, which may be outdated or just wrong.
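The inversion described above can be sketched in a few lines. In this sketch the LLM is stubbed out, and the tool names are hypothetical placeholders, not real Encord SDK methods; a real harness would send the goal plus skill.md to an LLM and dispatch whatever plan it returns:

```python
def stub_llm_plan(goal):
    # A real agent would send `goal` plus skill.md to an LLM here and
    # receive back an ordered plan of tool calls. This stub hard-codes one.
    return [
        ("list_completed_label_rows", {"project": "abc123"}),
        ("export_labels", {"format": "coco"}),
    ]

# Hypothetical tools the harness exposes; real ones would wrap SDK calls.
TOOLS = {
    "list_completed_label_rows": lambda project: [f"{project}-row-1"],
    "export_labels": lambda format: f"exported as {format}",
}

def run_agent(goal):
    """Execute whatever plan the LLM produces, step by step."""
    results = []
    for tool_name, kwargs in stub_llm_plan(goal):
        results.append(TOOLS[tool_name](**kwargs))
    return results
```

The key point is that the sequence of calls lives in the LLM’s output, not in your source code, which is why the LLM’s knowledge of the platform has to be accurate.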
If you are using an LLM as a coding assistant to help you write Encord SDK scripts, you can still benefit from skill.md. Simply paste its contents into your conversation so the LLM generates accurate, production-ready code rather than guessing at method signatures or parameter names.

Using skill.md in Autonomous Pipelines

skill.md is a plain text file. It does nothing on its own. You are responsible for loading its contents into your agent’s context before it runs. How you do this depends on the framework you are using, but the principle is the same in all cases: the file’s contents need to be present in the LLM’s context window when it is deciding what to do. An example using the Anthropic SDK:
import anthropic

with open("skill.md", "r") as f:
    skill_context = f.read()

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    system=f"You are an agent that manages Encord annotation pipelines. Use the following skill reference to guide your actions:\n\n{skill_context}",
    messages=[
        {"role": "user", "content": "Check for any completed label rows in project abc123 and export them in COCO format."}
    ]
)
If you are using a framework like LangChain or CrewAI, inject skill.md as a system prompt, a tool description, or a knowledge document depending on what your framework supports. The goal is always the same: make the contents available before the agent starts reasoning about what to do.
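Whatever the framework, the pattern reduces to composing skill.md into the system message before the first model call. A minimal framework-agnostic helper (the role wording here is illustrative):

```python
from pathlib import Path

def build_system_prompt(skill_path, role_description):
    """Front-load skill.md into a system prompt usable with any framework."""
    skill_context = Path(skill_path).read_text()
    return (
        f"{role_description}\n\n"
        f"Use the following skill reference to guide your actions:\n\n"
        f"{skill_context}"
    )
```

The returned string can be passed as the `system` parameter in the Anthropic SDK, a `SystemMessage` in LangChain, or an agent backstory in CrewAI.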