# AI-first principles

> Five principles that hold up across every AI tool and every kind of work, no matter what ships next week

_25 min · beginner · track: foundation · id: principles_

> **Team:** 
>
> AI tools and models change almost every other day, and trying to keep up by
> chasing every new trick is a full-time job nobody asked for. The good news:
> a small set of principles makes it surprisingly easy to stay current. They
> hold up across whatever ships next.

Most AI advice is tactics: new tool, new prompt, new keyboard shortcut. The
five below are different. They are how you get the most out of AI without
things blowing up the second you turn your back, and they survive every
model release and every UI redesign.

## 1. You are responsible

We mentioned this in the last course, and it is worth saying again. The
point is not to be hard on yourself; the point is to put out work you can
be proud of. When something feels too easy, sometimes it actually is too
easy.

There is a phrase for AI work that is fast, plausible, and slightly off:
AI slop. Internally, slop costs you credibility with the people you sit
next to every day. Externally, it lands in front of a customer with your
name on it. Sometimes it even looks like good work, which is the worst
kind, because nobody catches it until it has done its damage. The model
produced exactly what it was asked to produce. It cannot be accountable.
The accountability comes back to you.

> **Heaven:** You read the model's draft, push back on a sentence, rewrite the conclusion in your own voice. The output is yours, signed by you, defensible if anyone asks.
>
> **Hell:** You skim the draft, send it, and a colleague flags three vague claims and a wrong number. 'The AI wrote it' does not change whose name is on it.

## 2. Context is king

The biggest predictor of whether AI output is useful is how much real
context the agent had when it started. The more detail and nuance you
bring to what you are asking, the more likely the agent is to interpret
you correctly. There is no prompt-engineering trick that beats this. The
model is doing its best to read your mind, and it cannot.

The cheap ways are usually the best ways. Paste the document. Link the
brief. Drop in the constraints. If you work a topic regularly, keep the
source material somewhere you can grab fast. Later in the bootcamp we
look at voice transcription as a way to drop a wall of context into a
session in under a minute, since speaking is much faster than typing and
you naturally add the kind of detail the agent needs.

> **Heaven:** You start a session by sharing the brief, the deck you are working from, the constraints from leadership, and a sentence on what you have already tried. The agent's first response is already useful.
>
> **Hell:** You type 'help me with this report'. The agent guesses, asks two clarifying questions, you get bored, the answer is generic. Twenty minutes lost.

## 3. Focus

It is very easy to keep adding things to a session. One more polish, one
more feature, one more nice-to-have. The trouble is that everything you
add also adds complexity, and complexity makes the next change harder.
The goals you tacked on become weights on the goal you actually came in
for.

The discipline is keeping your eye on the original ask and protecting it.
Not delivering less, but delivering the thing. Five highly focused
projects beat one overstuffed project that fails to make a point. If you
cannot describe what "done" looks like in one sentence, you are not ready
to start.

> **Heaven:** You ask for one specific edit: 'Add a TLDR at the top of this doc, three sentences, plain language.' You read it, tweak one word, ship.
>
> **Hell:** You ask the agent to 'tidy up the doc'. It rewrites your intro, drops a paragraph you actually needed, and changes the tone. You skim and send.

## 4. Start with a plan

It is much easier to steer an agent before it has walked a hundred metres
in the wrong direction than after. A plan up front (what you want, what
the agent thinks the steps are, what assumptions it is making) lets you
catch the wrong assumption while it is still cheap, instead of unwinding
an hour of work and a pile of tokens.

The nice side effect: when the plan is good, you can let your agent run
for much longer without needing to jump back in. You front-load the
thinking, then you trust the execution.

> **Heaven:** You ask for a plan first. The agent lays out six steps. Step three assumes the wrong audience for the doc; you correct it before any drafting starts.
>
> **Hell:** You skip the plan. Twenty minutes later you have a polished version of the wrong thing, and now you have to argue your way back to the right shape.

## 5. Build for yourself first

This one is about resisting a very real temptation. You discover
something fun, you build a thing, it works, and right away you want to
share it with the world. The excitement is genuine, and it is also a
trap. Until you have used the thing yourself a fair few times, it is
very hard to tell whether anyone else would actually use it.

Maintenance is the part nobody warns you about. A tool you do not
personally rely on tends to quietly break without anyone noticing,
while the upkeep still lands on you. Be your own first user. If it
survives your own use, it has earned the right to leave the building.
Otherwise you get all of the work and none of the value.

> **Heaven:** You build a thirty-minute version that solves your own version of the problem, use it for a week, and only then think about whether it generalises.
>
> **Hell:** You spend two days building the polished general version of a tool you have never used. It ships. Nobody adopts it, including you.

> **Warning:** 
>
> **A surprising trap: don't build your own agent inside your agent.**
> 
> Almost everyone, the first time they get their hands on a capable coding
> agent, has the same idea: "I should build an app where the user talks to
> a different AI assistant that I designed." It feels creative. It is,
> almost always, an anti-pattern.
> 
> The smartest, most up-to-date, relentlessly-improving agent in your work
> is the one you are already using. It gets better with every release. The
> custom one you cobble together does not, and the maintenance is on you.
> If you want generic functionality available in your own work, extend the
agent you already have. Don't build a parallel one beside it. You will
save yourself the weekend, and the result will be better.

The five stack on each other. Responsibility is the floor everything
else stands on. Context turns vague prompts into useful sessions. Focus
keeps the session small enough that you can defend the result. The plan
turns focus into something the agent can actually execute. And building
for yourself keeps all of the above grounded in work that matters to a
real person, starting with you.

## Quiz

**Q1.** You sit down with an AI tool to fix a small bug. Within 20 minutes you have rewritten three unrelated parts of the work and added something the original problem did not need. The original problem is still not fixed. Which principle, applied at the start, would have done the most to prevent this?

- a. Build for yourself first; you got pulled into building speculative features.
- b. Focus; the work expanded past the one thing you set out to do. **(correct)**
- c. Start with a plan; you should have laid out the steps before touching anything.
- d. Context is king; the agent did not understand the situation well enough.

_Explanation:_ All four options name real principles. The failure mode here is specifically scope creep: the kind of drift where 'just one more change' compounds. Focus is the principle that, taken seriously at the start, ends a session like this before it begins. A plan helps but does not stop you from adding new goals mid-session. Context and self-use address different failures.

**Q2.** A teammate is excited to build a small internal tool that 'helps everyone on the team find documents faster'. They have not used it themselves yet. They ask whether to start by drafting a spec, by mocking up a UI, or by writing the search piece. What does the AI-first reading of this question suggest?

- a. Start with the search piece; until the hard part works, nothing else matters.
- b. Start with the spec; the team needs alignment before anything is built.
- c. Start by building a thirty-minute version that solves your own version of the problem first, with no users besides you. **(correct)**
- d. Start with a mock UI; that is the cheapest way to get feedback from the team.

_Explanation:_ All three of the conventional answers (spec, search, UI) skip the principle: be the first user yourself. Tools built for hypothetical others tend not to fit anyone in particular. A thirty-minute self-built version teaches you the shape of the problem before any of those questions can be answered well.

**Q3.** Two people start the same task at the same time. Person A types one sentence into the model: 'help me fix the broken section in the report'. Person B opens the report, pastes the section, includes a screenshot of how it currently reads, and says 'I have already tried tightening the intro and it did not help.' Both spend the next ten minutes working with the model. Why will the outputs diverge?

- a. Person B has a better model; the difference is at the tool level, not the prompt level.
- b. Person A will catch up once the model asks the right clarifying questions.
- c. The amount of useful context at the start sets the ceiling for the next ten minutes; B has raised that ceiling, and A is still discovering what the problem is. **(correct)**
- d. Quality of output is mostly random; both will land in roughly the same place.

_Explanation:_ Models do not magically reach the right answer through clarifying questions; that is just a slow, polite version of you sharing the context up front. The early prompt sets the ceiling. Person B walked in with the constraints attached, and the next ten minutes get to work on the actual problem instead of discovering it.

## Hands-on

1. Pick a real task you have to do today. Something concrete: a section to
revise, an email to draft, a small piece of code to write. One task only.

2. Write the prompt for it, but apply principle 2 first. Add the relevant
context: what you have, what you have tried, what "done" looks like in
one sentence. If it is faster, dictate this with voice and let the model
clean it up.

3. Before letting the model produce the final answer, ask it for a plan:
what it will do, in steps, with the assumptions it is making. Read the
plan. Push back on at least one thing. Then let it execute.

## Reflect

- Of the five principles, which one are you most likely to skip when
  you are in a hurry? What does that say about where the work most often
  goes wrong?
- "Don't build your own agent" sounds obvious until you are mid-project
  and excited about your own creation. Where in your work this month are
  you closest to making that mistake?
