Yijun Liu

Building Blocks of Agentic Systems

2026-03-21


Large language model systems are often described using broad terms like agents, tools, memory, and workflows. But in practice, many useful agentic systems are built from a smaller set of composable design patterns. Among the most important are routing, retrieval, and reusable skills.

A useful way to think about agentic behavior is that the system can choose among possible actions, use external resources, and update its behavior based on intermediate outcomes. Under this view, agentic behavior is not just about having access to tools. It is about operating in a loop: decide, act, observe, and continue.
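The decide, act, observe, continue loop can be sketched in a few lines. Everything here is a hypothetical stub (the action names, `choose_action`, `run_action`); in a real system the decision step would be an LLM call and the actions would be real tools.

```python
def choose_action(goal, observations):
    """Pick the next action (hypothetical policy; in practice an LLM call)."""
    if not observations:
        return ("search", goal)                 # no evidence yet: go gather some
    if any("answer:" in o for o in observations):
        return ("finish", observations[-1])     # enough evidence to stop
    return ("search", goal + " details")        # otherwise refine and try again

def run_action(action):
    """Execute one action and return an observation (stubbed for the sketch)."""
    kind, payload = action
    if kind == "search":
        return f"answer: stub result for {payload!r}"
    return payload

def agent_loop(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):
        action = choose_action(goal, observations)   # decide
        if action[0] == "finish":
            return action[1]
        observations.append(run_action(action))      # act, observe, continue
    return observations[-1] if observations else None

print(agent_loop("capital of France"))
```

The point of the sketch is that the loop, not the tool list, is what makes the system agentic: the same tools wired into a fixed pipeline would not adapt to intermediate outcomes.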

This post summarizes three building blocks that show up frequently in agentic systems: routing, retrieval, and skills. These patterns are often discussed separately, but together they help move an LLM application from a static prompt-response interface toward a more adaptive and modular system.

Routing

Routing answers a simple but important question: what kind of problem is this, and what capability should handle it? In an agentic system, routing allows the system to adapt its behavior based on the input, rather than applying the same prompt or workflow to every request.

Common routing mechanisms include:

- Rule-based routing, using keywords or simple heuristics over the input.
- Lightweight supervised routing, where a small trained classifier maps requests to routes.
- LLM-based routing, where the model itself decides which capability should handle the request.

In practice, the best routing method is not always the most sophisticated one. If the route space is stable and well-defined, rule-based or lightweight supervised routing may be preferable to LLM-based routing because they are faster, cheaper, and easier to evaluate.
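This layering can be sketched directly: try cheap rules first, and fall back to a classifier only when they miss. The route names and the `llm_classify` stub below are hypothetical placeholders, not a real API.

```python
RULES = {
    "refund": "billing",
    "invoice": "billing",
    "password": "account",
}

def llm_classify(query):
    """Placeholder for an LLM-based router, consulted only when rules miss."""
    return "general"

def route(query):
    q = query.lower()
    for keyword, dest in RULES.items():   # fast, cheap, easy to evaluate
        if keyword in q:
            return dest
    return llm_classify(query)            # the expensive path, only when needed

print(route("I need a refund for my invoice"))   # -> billing
print(route("tell me a joke"))                   # -> general
```

A side benefit of this structure is evaluability: the rule layer can be tested exhaustively, and only the fallback needs LLM-style evaluation.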

Retrieval

Retrieval is useful when the model needs access to information that is too large, too dynamic, or too domain-specific to be reliably stored in the model’s weights alone. In that sense, retrieval is not just a way to “add knowledge” to an LLM. It is a mechanism for grounding the system in external evidence.

In a simple retrieval-augmented generation (RAG) setup, the user query is first used to retrieve relevant documents or passages (often referred to as chunks) from a knowledge base, and the retrieved context is then passed to the model for answer generation. This pattern is helpful when the answer depends on private data, frequently changing information, or source material that should be explicitly referenced.
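A minimal version of this pipeline fits in a few lines. The word-overlap scoring and the `generate` stub are hypothetical stand-ins for a real embedding model and LLM call; only the shape of the retrieve-then-generate flow is the point.

```python
DOCS = [
    "The billing cycle resets on the 1st of each month.",
    "Refunds are processed within 5 business days.",
    "API keys can be rotated from the settings page.",
]

def retrieve(query, k=2):
    """Rank chunks by naive word overlap (stand-in for vector search)."""
    words = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def generate(query, context):
    """Stand-in for an LLM call that answers from the retrieved context."""
    return f"Q: {query}\nContext: {' | '.join(context)}"

chunks = retrieve("how long do refunds take")
print(generate("how long do refunds take", chunks))
```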

However, in agentic systems, retrieval is often more than a one-shot preprocessing step. It can appear repeatedly inside the decision loop. The system may retrieve evidence, inspect the results, decide that the retrieved context is insufficient or off-topic, reformulate the query, and retrieve again. Under this view, retrieval becomes part of the agent’s action space rather than a fixed front-end component.
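The retrieve-inspect-reformulate cycle can be sketched as a retry loop. The `retrieve`, `is_sufficient`, and `reformulate` functions below are hypothetical stubs; in practice the sufficiency check and the query rewrite would themselves be model calls.

```python
def retrieve(query):
    """Stubbed lookup against a tiny keyed corpus."""
    corpus = {"error code 429": ["429 means the client is rate limited."]}
    return corpus.get(query.lower(), [])

def is_sufficient(chunks):
    """Real systems would have the model judge relevance, not just count."""
    return len(chunks) > 0

def reformulate(query):
    """Stand-in for an LLM query rewrite, e.g. stripping filler words."""
    return f"error code {query.split()[-1]}"

def retrieve_with_retry(query, max_attempts=3):
    for _ in range(max_attempts):
        chunks = retrieve(query)
        if is_sufficient(chunks):     # inspect results before generating
            return chunks
        query = reformulate(query)    # retrieval as an action, not a fixed step
    return []

print(retrieve_with_retry("what is 429"))
```

The bounded `max_attempts` matters: once retrieval is inside the loop, the system needs an explicit stopping rule, or a bad query can cycle indefinitely.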

This distinction matters because the usefulness of retrieval depends on more than whether retrieval exists at all. In practice, the bottleneck is often retrieval quality: whether the system asks the right query, whether the right evidence is returned, and whether the retrieved evidence is actually usable for the downstream task. Poor retrieval can easily create a false sense of grounding, where the model appears to be evidence-based but is in fact relying on weak or irrelevant context.

For that reason, I find it helpful to think about retrieval in agentic systems along at least three dimensions:

- Query quality: whether the system asks the right question of the knowledge base.
- Evidence quality: whether the retrieval step returns the right documents or passages.
- Usability: whether the retrieved evidence actually supports the downstream decision or answer.

In other words, retrieval is not automatically valuable just because it is present. It becomes valuable when it helps the system make better decisions, take better actions, or produce better-grounded outputs.

Skills

A skill packages reusable procedural knowledge for the agent. Instead of re-explaining the same workflow in every prompt, the system can expose that workflow as a reusable capability. In this sense, a skill is not just a tool and not just a piece of static documentation. It is a structured way of teaching the system how to handle a recurring class of tasks. Skills are especially useful when the same kind of task appears repeatedly but requires more than a single API call.

One of the main benefits of skills is that they separate long-lived task knowledge from the main prompt. This is important because prompt space is limited and expensive. If every workflow, instruction, and edge case is kept permanently inside the active context, the system becomes harder to scale, more expensive to run, and more difficult to control. Skills offer a more modular alternative.

A useful design principle here is progressive disclosure. Instead of exposing the full skill library all the time, the system can reveal only the skills that are relevant to the current task. This helps protect the context window while still allowing the system to access richer procedural knowledge when needed. In practice, this makes skills a convenient abstraction for balancing extensibility and context efficiency.
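One way to sketch progressive disclosure: keep only skill names and short descriptions resident in context, and load full instructions on demand. All skill names and contents below are hypothetical examples.

```python
SKILLS = {
    "release-notes": {
        "description": "Draft release notes from a list of merged PRs.",
        "instructions": "1. Group PRs by area. 2. Summarize each group. "
                        "3. Flag breaking changes at the top.",
    },
    "pdf-report": {
        "description": "Produce a formatted PDF report from tabular data.",
        "instructions": "1. Validate the schema. 2. Render with the template. "
                        "3. Attach the source table as an appendix.",
    },
}

def skill_index():
    """Compact listing kept in the context window at all times."""
    return "\n".join(f"{name}: {s['description']}" for name, s in SKILLS.items())

def load_skill(name):
    """Full procedural detail, pulled into context only when the task calls for it."""
    return SKILLS[name]["instructions"]

print(skill_index())             # cheap, always visible
print(load_skill("pdf-report"))  # expensive detail, loaded on demand
```

The index scales with the number of skills, while the context cost of any single task scales only with the skills it actually uses.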

I find it useful to think of skills as sitting between prompts and subagents. They are more structured and reusable than ad hoc prompting, but lighter-weight than spinning up a separate agent with its own reasoning loop. That makes them a strong fit for repeated workflows where the procedure matters, but the task does not justify a fully separate agent.

Under this view, skills are not only a convenience feature. They are also a system design choice: a way to organize behavior so that the agent can remain modular, composable, and easier to extend over time. Although the two are often discussed together, I find it useful to distinguish skills from tools. A tool expands what the system can do, while a skill shapes how the system approaches a repeated task.

Skills vs. Tools

Skills and tools are not opposites. In many systems, a skill may rely on tools internally. The distinction is mainly one of abstraction: tools expose actions, while skills package reusable ways of using those actions effectively.
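The abstraction gap is easiest to see side by side: a tool is a callable action, while a skill is packaged guidance about when and how to use that action. Both the tool and the skill text below are hypothetical examples.

```python
def search_orders(customer_id):
    """A tool: exposes an action the agent can take (stubbed lookup)."""
    return [{"id": "A-1", "status": "shipped"}]

# A skill: shapes how the tool is used for a recurring class of tasks.
REFUND_SKILL = """\
When handling a refund request:
1. Call search_orders with the customer's id.
2. Confirm the order status is 'delivered' or 'shipped'.
3. Only then offer the refund options.
"""

# The skill text would be injected into the agent's context when a refund
# request is routed its way; the tool stays callable throughout.
print(REFUND_SKILL)
print(search_orders("cust-42"))
```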

Purpose. Tools provide agents with essential capabilities to accomplish tasks; skills extend an agent’s capabilities with specialized knowledge.

Context. Tool definitions (such as name, description, and parameters) typically remain available in the context window; skills are loaded dynamically as needed.

Flexibility. Tools provide a fixed set of capabilities; skills can include scripts or helper resources that are used when needed (“tools on demand”).

(Comparison adapted from Andrew Ng’s course.)


Skills vs. Subagents

Purpose. Subagents have their own isolated context and tool permissions; skills provide specialized knowledge to the main agent or any of its subagents.

Operation. The main agent delegates a task to a specialized subagent, which works independently and returns results; skills inform how the work should be done.

Example. A code reviewer subagent, versus a language- or framework-specific best-practices skill.

(Comparison adapted from Andrew Ng’s course.)

Closing Thoughts

Routing helps determine what capability should respond, retrieval brings in missing information, and skills package reusable procedures. Together, they move an LLM application away from a static prompt-response interface toward a more adaptive and modular system. The harder question, in my view, is not how to add more components, but how to evaluate and control them once they begin interacting inside a larger loop.
