# Agents 101

> An agent in plain language, plus the four concepts you will see in every one: context, prompt, skills, and tools and connectors.

_15 min · beginner · track: foundation · id: agents-101_

> **Team:** 
>
> No matter your model or your tool stack, the same handful of ideas keep
> showing up under different names. Once you can recognise them, you can
> pick up any new agent and get to work without re-learning the vocabulary.

Most of the jargon you hear ("LLM", "system prompt", "MCP server", "tool
use", "skills") collapses into one definition and four concepts. Here
they are, in the order they matter when you sit down to use an agent.

## What is an agent?

**An agent is an AI entity that can take actions and make decisions on
its own.** That is what separates it from a plain chat with a large
language model. An LLM gives you an answer; an agent goes off and does
things with it.

Most agents you will meet are powered by an LLM under the hood, but the
agent is not the LLM. The LLM is the brain making the calls. The agent
is the thing wrapped around it that can actually act on those calls:
open a file, run a piece of code, search the web, send a message, kick
off the next step.

The other distinguishing feature is *agency*. Inside the room you give
it, the agent decides what to do, in what order, and how long to keep
going until the goal is reached. The first time you hand a task to a
capable agent and watch it loop on its own for ten minutes without
needing a follow-up, the difference clicks.
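That wrapper-around-a-brain picture can be sketched as a loop. Everything below is a stand-in: `fake_model` plays the LLM and the actions are stubs, but the shape is the point. The model makes the calls, the wrapper acts on them, and the loop keeps going until the model says the goal is reached.

```python
def fake_model(state):
    """Stand-in for the LLM 'brain': looks at the state, decides the next action."""
    if "file_list" not in state:
        return ("list_files", None)
    if "sorted" not in state:
        return ("sort_files", state["file_list"])
    return ("done", None)

# Stub actions standing in for real ones: open a file, run code, search the web.
ACTIONS = {
    "list_files": lambda _: {"file_list": ["b.txt", "a.txt"]},
    "sort_files": lambda files: {"sorted": sorted(files)},
}

def run_agent(model, state, max_steps=10):
    """The agent: loop on its own until the model decides the goal is reached."""
    for _ in range(max_steps):
        action, arg = model(state)
        if action == "done":
            break
        state.update(ACTIONS[action](arg))  # the wrapper acts on the model's call
    return state

result = run_agent(fake_model, {})
```

Nothing here is how any particular product works internally; it is only the loop (model decides, wrapper acts, repeat) made concrete.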

## Context

**Context is everything the model is looking at right now.** The prompt
you just typed, the documents you shared, the back-and-forth so far. You
will hear "context engineering": that just means making sure the model is
looking at the right things at the right time for the task in front of
you.

Two practical limits. Models have a context window: they can only look at
so much at once. And bigger is not better. Stuff a model with everything
you can find and quality goes down, not up, because the relevant signal
gets buried. More context also costs more, both in time and in money.

So treat context the way you would treat a smart colleague: as much
relevant information as possible, as little irrelevant information as
possible. The "system prompt" people talk about is just a slug of context
that loads at the top of every session. Optimising what is in it is one
of the highest-leverage things you can do.
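One way to picture those two limits together is a sketch that assembles context under a hard budget. The window size and the word-count "tokenizer" below are invented for illustration (real models count tokens, not words), but the priorities are the real ones: the system prompt always loads, the newest turns matter most, and the oldest material is the first to fall out of view.

```python
MAX_TOKENS = 50  # hypothetical context window, far smaller than any real model's

def count_tokens(text):
    """Crude stand-in for a real tokenizer: one word, one token."""
    return len(text.split())

def build_context(system_prompt, history, new_prompt):
    """Keep the system prompt and the newest turns; drop the oldest first."""
    budget = MAX_TOKENS - count_tokens(system_prompt) - count_tokens(new_prompt)
    kept = []
    for turn in reversed(history):  # walk from newest to oldest
        cost = count_tokens(turn)
        if cost > budget:
            break  # older turns no longer fit: they fall out of the window
        kept.insert(0, turn)
        budget -= cost
    return [system_prompt, *kept, new_prompt]
```

This is also why "forgotten" details from an hour ago are a structural fact, not inattention: they are simply no longer in the list the model sees.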

## Prompt

**A prompt is the instruction you give the model in the moment.** What
you want, how you want it done, what "done" looks like. The recipe for a
good outcome is that simple: a good prompt plus good context. When the
answer comes back generic or off, it is almost always because one of the
two is missing.

There are more structured ways to bake instructions into your prompts so
you stop retyping them every session. We get to those when we look at
specific tools, in the next level of the bootcamp.

## Skills

**A skill tells your agent how to do something.** Under the hood, a
skill is a prompt that loads automatically when it is relevant: the
steps to follow, the tools to call, the references to pull in, the shape
the output should take.

You write a skill the moment you notice you are walking the agent
through the same recipe a third time. The retyping is the signal. Once
you start writing them, the same handful of skills end up running most
of your week.
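As a rough sketch, a skill can be pictured as a trigger plus the instructions that load when a task matches it. The skills and the substring matching below are invented for illustration (real tools decide relevance far more cleverly), but the mechanic is the one described above: when the task matches, the recipe is prepended for you instead of retyped.

```python
# Hypothetical skills: a trigger phrase mapped to the instructions it loads.
SKILLS = {
    "status update": (
        "Steps: pull this week's sources, write in the team voice, "
        "end with one open question."
    ),
    "release notes": "Steps: list merged changes, group by area, keep it short.",
}

def load_skills(task, skills=SKILLS):
    """Return the task prompt with any relevant skill loaded in front of it."""
    matched = [text for trigger, text in skills.items() if trigger in task.lower()]
    return "\n\n".join([*matched, task])
```

A task that matches no trigger passes through unchanged, which is the other half of the idea: skills cost nothing until they are relevant.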

## Tools and connectors

**Tools and connectors are how an agent reaches out to other software.**
The same way you use the apps on your computer, an agent uses tools and
connectors to act on systems beyond the chat box.

A *connector* hooks the agent up to a separate system: your calendar,
your inbox, a shared document store, a database. A *tool* is one
discrete action the agent can take. Tools can come bundled with a
connector ("create event", "find a free slot" arriving with a calendar
connector), or they can be local ("look up today's date", "run this
piece of code"). The range goes from trivial to genuinely complex.

You will hear *MCP* a lot. For now, treat it as a synonym for
*connector*. The acronym names a standard for how connectors are built,
but in conversation people use it interchangeably with the connectors
themselves.
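The split can be sketched with invented names: a connector bundles several tools for one external system, a local tool stands alone, and the agent ends up seeing one flat menu of discrete actions it can take. Every function below is a stub, not a real calendar integration.

```python
import datetime

def calendar_connector():
    """A connector hooks up one system and arrives with a bundle of tools (stubbed)."""
    return {
        "create_event": lambda title: f"created: {title}",
        "find_free_slot": lambda day: f"{day} 14:00",
    }

# Local tools need no external system at all.
LOCAL_TOOLS = {
    "todays_date": lambda: datetime.date.today().isoformat(),
}

# What the agent sees: one flat registry of discrete actions.
TOOLS = {**calendar_connector(), **LOCAL_TOOLS}
```

From the agent's side of the table, "create event" and "look up today's date" are the same kind of thing: a named action it may decide to call.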

## Quiz

**Q1.** You ask a chat-only LLM "organise the files in my Downloads folder" and it returns a tidy step-by-step plan, but nothing has actually changed in the folder. You ask the same of an agent in the same product family and ten minutes later the folder is sorted. Which framing actually captures the difference?

- a. The agent uses a smarter underlying model that understood the request better.
- b. The agent has the ability to actually take actions, and the agency to decide which ones to call and in what order. The chat LLM had to stop at describing what to do. **(correct)**
- c. The agent has a bigger context window, so it could see the files before responding.
- d. The agent has been trained on your computer, while the LLM has not.

_Explanation:_ Agents are usually powered by the same kind of model under the hood as a chat LLM. The thing that lets them act is the wrapper around the model: tools to call, plus the agency to decide what to do, in what order, and how long to keep going. A more capable model alone does not move files; the agent's ability to act does.

**Q2.** Halfway through a long working session, your AI tool keeps missing details you mentioned an hour ago, even though you stated them clearly at the time. What is actually going on?

- a. The conversation pushed past the context window. What was forgotten is no longer in front of the model, regardless of how clearly you said it. **(correct)**
- b. The model is overwriting older statements with newer ones. The fix is to repeat the important details whenever they come up.
- c. Long conversations make the model less attentive. The fix is to ask it to focus harder before each answer.
- d. Pasting too much text confuses the model. The fix is to share less so the important parts stand out.

_Explanation:_ The window is the structural truth here. Older parts of the conversation are no longer visible to the model, regardless of how clearly you said them. The other framings ("overwriting", "attention drift", "too much text confuses it") describe behaviours the model does not actually have.

**Q3.** You want your AI tool to be able to act on the documents in your team's shared store, not just the ones you happen to paste in. Which change actually puts that within reach?

- a. A bigger context window in the tool, so you can paste in more documents at once before it loses track.
- b. A connector (often called an MCP) that the agent can use to fetch from the store on demand. **(correct)**
- c. A more capable model that has seen more of the public internet during training.
- d. A way to save the documents to the agent's long-term memory so it can recall them whenever you ask.

_Explanation:_ A bigger window helps once you have the data in hand; it does not get the data. Capability and training data do not bridge to your private store. Memory holds what the agent has been told, not what sits on a shared drive. A connector is the literal wiring that lets the agent reach the document store at all.

**Q4.** Three Mondays in a row, you have done the same task with AI: pull a few sources together, write a status update in the team's voice, finish with one open question. Each Monday, you re-explain the steps, the sources, and the closing structure from scratch. Which read of the situation is the most useful?

- a. Pin the structure into the persistent context the model reads at the start of every session, so the format is the default everywhere.
- b. You have a workflow on your hands: the steps, the references, and the output shape are stable enough to package as a skill, not retype as a prompt. **(correct)**
- c. Save the prompt text in a personal note and paste it in next Monday; that solves the retyping.
- d. Use the agent's memory to teach it your preferences, so it produces this shape any time you ask.

_Explanation:_ A skill is more than a saved prompt: it is the packaged workflow (the steps, the sources to pull in, the tools to use, the shape of the output) that the agent applies whenever the task matches. Persistent context is the wrong scope (it would apply everywhere), a pasted snippet ignores the structure, and memory is for facts the agent should remember rather than task-shaped workflows.

## Hands-on

1. Pick one AI tool you used this week.

2. For each of the four ideas above (context, prompt, skills, tools and
connectors), find where it shows up in that tool. Where does context
live? Is there a place to set persistent instructions? Are there skills?
What about connectors?

3. Note the one idea you could not locate. That is what to look up the next
time you open the tool.

## Reflect

- Which of the four ideas felt most familiar, and which felt new? The
  new ones are usually the ones doing the most work in the tools you
  already use.
- Where in your week could a skill replace a prompt you keep retyping by
  hand?
