# Chat agents

> How to actually use a chat agent day to day. Prompting, attaching files, picking models, and reaching into your data with connectors.

_30 min · beginner · track: chat-agent · id: chat-agents_

> **Team:** 
>
> You are probably already familiar with chat agents through Claude, Gemini,
> ChatGPT, or Copilot. They look almost identical from the outside, and most
> of what makes one work for you also works for the others. This course is
> about that shared craft, the part that does not change when you switch
> tools.

We will lean on Claude, Gemini, and ChatGPT as the anchor examples, since those are the three that most people end up reaching for. The buttons and menus differ in each. When you need the click-by-click for a specific feature, the official sites ([Claude](https://support.claude.com/), [Gemini](https://gemini.google/about/), [ChatGPT](https://help.openai.com/)) stay more current than any course can. Bookmark the one you use most.

> **Tip:** 
>
> **If you get stuck anywhere in this bootcamp, ask your chat agent.** Confused about a concept, curious about a feature you have not tried yet, want a deeper explanation than a course can fit? Open Claude, Gemini, or ChatGPT and ask. Modern LLMs are remarkably good at explaining themselves, and the habit of asking the agent first is the same habit that makes you independent with these tools from day one.

## Pick the right model for the job

**Most chat agents ship two or three models, and the right pick is almost always the flagship.** Today the flagships are **Claude Opus**, **Gemini Pro**, and **ChatGPT in thinking mode**. You will find the model picker either near the prompt box (Claude, Gemini) or in the top left of the conversation (ChatGPT).

For most people, the best choice is to set the flagship once and leave it there. If you are concerned about cost, step down a model. If you are doing something particularly complex, step it up a notch by flipping on a "thinking" or "extended reasoning" toggle where the tool offers one.

## Prompt like the model is a sharp colleague, not a search box

**The biggest jump in output quality comes from how you ask, not which model you ask.** The wrong frame is "Google query": three keywords and hope. The right frame is "delegate to a smart colleague who has just walked into the room": tell them what you are trying to accomplish, what you have already tried, what the constraints are, and what good looks like.

Four things, roughly in order, do most of the work:

1. **Set context.** "I am preparing a 5-minute update for our leadership team about Q1 hiring." One line. Tells the model who, what, why.
2. **State the task.** "Draft the update." Verb. Direct.
3. **Give the inputs.** Paste the data, attach the doc, or point to the file (more on that next).
4. **Define done.** "Three bullets, no jargon, end with the one decision we need." Tells the model when to stop and what shape to land in.

You do not need a PhD in prompt engineering. You need to type the four lines a colleague would need to do the work well.
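Here is what the four parts look like assembled, as a sketch; the numbers and the filename are invented, so substitute your own:

```text
Context: I am preparing a 5-minute update for our leadership team about Q1 hiring.
Task: Draft the update.
Inputs: [attached: q1-hiring.xlsx — a made-up filename; attach your real data]
Done: Three bullets, no jargon, end with the one decision we need.
```

The labels themselves are optional; the model just needs the four pieces of information in some order.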

> **Heaven:** Three lines of context, the data attached, two lines of constraints. The first draft is 90% there.
>
> **Hell:** "Write me a leadership update." The model invents the audience, the topic, the length, and the conclusions. You spend twenty minutes rewriting it.

## Attachments beat retyping

**If you have the file, attach it.** Do not paraphrase a doc into chat when you can just drop the doc in. Claude, Gemini, and ChatGPT all take attachments: PDFs, Docs, slides, images, spreadsheets, and (on most plans) audio recordings. Drag the file in, or click the paperclip / `+` button, and ask the question against the file directly.

Remember that attachments go into the model's [context](/course/agents-101), the same way the words you type do. Two things follow. First, when you switch to a different task, start a fresh conversation rather than piling new files on top of old ones that no longer matter. Old attachments do not step aside on their own; they keep taking up room and pulling the model's attention. Second, bring context that is actually relevant. Three focused pages beat a 200-page handbook with the answer buried in chapter 14, every time, in cost, speed, and accuracy.

## Connectors: when the agent reaches into your tools

**Connectors let the model pull information directly from the apps you already use, and sometimes take actions inside them too.** No more copy-pasting an email thread, a doc, or a calendar invite into the chat. The agent fetches what it needs on the spot. Without a Gmail connector, "summarize this week's customer threads" is twenty minutes of clicking. With one, it is a single prompt.

The case for setting up many connectors is simple: agents work much better when they have the same digital context you do. Once the connector is in place, you stop having to describe what is going on in your work and start pointing the agent in the right direction.

Each tool ships its own set. Gemini has `@Apps` for Google Workspace (Gmail, Drive, Docs, Calendar, Tasks). Claude has connectors for Google Workspace, GitHub, Slack, and a growing list, plus support for a more general open standard called MCP that any tool can plug into. ChatGPT calls them Apps and covers most of the common ones.
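To get a feel for the shape, a connector-backed prompt is just a normal prompt plus an explicit scope. A sketch, with an invented domain and date:

```text
Using the Gmail connector: find threads from anyone at acme-example.com
since April 1. Summarize each thread in one line, newest first, and flag
any that mention pricing, delays, or cancellation.
```

The explicit scope matters: connectors run a search under the hood, so naming the sender and the date range narrows what the search has to find.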

The flip side is access. Once an agent can read your inbox, it can also write replies if you let it. Default to read-only when you connect a new tool, and only widen scope when you have a real reason. The official docs walk through the connect flows for each tool ([Claude connectors](https://support.claude.com/en/collections/15399129-connectors), [Gemini connected apps](https://support.google.com/gemini/answer/13695044), [ChatGPT apps](https://help.openai.com/en/articles/11487775-apps-in-chatgpt)) and they change often, so go there for the current click path.

> **Tip:** 
>
> The two connectors that pay back fastest for most people: your email and your file storage. If you only set up two, pick those. Calendar is a close third.

## Save the prompt when the work repeats

**The third time you write the same shape of prompt, it has earned a name.** Claude and Gemini both let you save reusable instructions from the sidebar; ChatGPT has Custom GPTs that fill the same niche. The two flagship features here are Gemini Gems and Claude Projects: similar in spirit, a little different in scope, both worth knowing.

**Gemini Gems** are saved personas. You write a short brief ("you are a status-update drafter, three bullets, no jargon"), give it a name, and pick it whenever you start a chat. Gems lean on whatever Gemini already knows about your Workspace through connected apps; the persona itself does not carry its own files.

**Claude Projects** are persistent workspaces. They have the same custom-instruction layer as Gems, plus a private knowledge base: a folder of files (briefs, style guides, past examples) that the project keeps in context across every chat inside it. Reach for a Project when the work needs background material the model should always have on hand, not just a tone of voice.
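In either tool, the standing instructions are plain text. A sketch of what a status-update brief might say; every detail here is invented, so adapt it:

```text
You draft weekly status updates for a product team.
Audience: leadership, non-technical.
Format: exactly three bullets, each under 25 words, no jargon.
Always end with the single decision we need from leadership.
If this week's inputs contradict last week's update, flag the
contradiction instead of smoothing it over.
```

In a Gem this text is the whole persona; in a Project it sits alongside the knowledge-base files.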

You know a prompt has earned a name when you have done the same task in roughly the same way three times. A weekly status drafter is a Gem. A customer-research assistant that needs to read the same set of interview notes every time is a Project. Iterate on the instructions over time, and your most common AI work moves from "type the prompt every time" to "pick the tool from the menu".

## Quiz

**Q1.** You spend two hours every Friday writing the weekly status update from the same five inputs: a Linear board, three Google Docs your team wrote during the week, and your own raw notes. You want this faster from next Friday. Where does this work belong?

- a. A Gemini Gem with the tone and structure baked in. Each Friday, paste the week's inputs into a fresh chat with the Gem.
- b. A Claude Project that holds the standing instructions plus your team's writing-style examples, and that you feed the week's inputs into each Friday. **(correct)**
- c. A one-off prompt you keep in a notes app and copy-paste into a fresh chat each week, tweaking it as the work evolves.
- d. Nothing saved. Re-prompt from scratch each week so the model approaches the inputs fresh.

_Explanation:_ The recurring part has two layers: the tone (which a Gem handles) AND a body of background context (style examples, team conventions) the model should always have on hand. That is what makes a Project the better fit. A Gem alone would make you re-attach the style examples every week. Notes-app prompts drift. Re-prompting from scratch is exactly the cost you are trying to remove.

**Q2.** You are using Claude with a Gmail connector to triage your inbox. You ask: "Summarize the threads from customer X this quarter and flag anything that sounds like a churn signal." The summary is sharp but you notice it missed a thread you remember clearly. Which move actually helps?

- a. Re-run the same prompt; the model is non-deterministic and will probably catch it the second time.
- b. Ask the model what it searched for, narrow the scope (sender, date range, label), and re-run with the missing thread's subject as a hint to confirm coverage. **(correct)**
- c. Switch to a more capable model and re-ask the broad question.
- d. Disconnect Gmail and paste the relevant threads directly so the model sees only what you want.

_Explanation:_ Connectors run a search under the hood, and broad queries quietly miss things. The fix is not a smarter model or a re-roll, it is tightening the search and verifying coverage. Pasting threads manually defeats the point of the connector and only scales to the threads you already remembered, which is the same failure mode.

## Hands-on

1. Pick one real task you have today (a draft email, a doc to summarize, a small piece of code to review) and pick one chat agent (Claude or Gemini, your choice). Write the prompt with the four-part frame: **context**, **task**, **inputs**, **done**. Do not write the prompt freehand; write each part on its own line.

2. Instead of pasting the input as text, **attach the source**: drag the doc, the PDF, or the file in. Ask the question against the attachment. Notice the answer is grounded in the actual file, not a paraphrase you typed.

3. Connect one app you have not yet connected (Gmail, Drive, GitHub, Slack, your pick). Follow the official docs for the click path. Then run a query that uses it: "Summarize the threads in my inbox about [topic] this week" or "What did the team change in [repo] in the last two days?" Note how much faster the same answer arrives compared to going to the source manually.

4. Pick one prompt you already retype every week. Save it as a **Gemini Gem** (Explore Gems → Create a Gem) or a **Claude Project** (Projects → New project), depending on which agent you use most. Paste the standing instructions into the description, give it a name a colleague would understand, and run it on this week's actual inputs. Tweak the instructions until the output is something you would paste into Slack without editing.

## Reflect

- What is the one prompt you find yourself retyping each week? Make a note to save it as a Project (Claude) or Gem (Gemini) before the next time you would type it.
- What is the one system in your work where a connector would save you the most time? What is stopping you from setting it up today?
