# AI-first mindset

> How AI lands in your work depends on the mindset you bring. Use it to replace your thinking and you slide backwards. Use it to amplify yourself and you ship things you couldn't before.

_9 min · beginner · track: foundation · id: ai-first-mindset_

> **Team:** 
>
> The impact AI has on you is tied to the mindset you bring to it. Use it to
> replace your thinking and you turn into a slop cannon. Use it to amplify
> yourself and you deliver things you couldn't before. You probably already
> know which side of that line you want to be on.

A lot of people are saying AI is making us all stupid and lazy. There is real
truth to that. If you treat it as a shortcut to skip the thinking, that is
exactly what happens: you remove yourself from the equation. The work moves,
sure, but you don't.

At TechWolf, the wolfpack treats AI the other way around. We use it to amplify
ourselves, not to replace ourselves. There is a small difference in how you
talk to your AI assistant that captures the whole thing. Compare:

- "Hey, what should I do?"
- "Hey, I want to do this thing. Can you tell me what the best and worst case
  scenarios are?"

The first one delegates everything. The second one pulls in perspectives you
might not have come up with yourself, and forces you to think harder about
your own move. Same tool, same conversation length, completely different
trajectory. The second one is what we mean by AI-first.

## You are responsible

This is the rule that the rest of the bootcamp leans on. We will repeat it in
the next course on purpose, because every other principle assumes it.

A pilot flying with autopilot takes more responsibility than a passenger in
the cabin, not less. The autopilot expands what the pilot can do, and with
that comes a bigger blast radius if anything goes wrong. AI is the same. The
leverage is real. Used well, it is fantastic. Used carelessly, it can cause
trouble well past your own desk.

It doesn't matter whether the people around you are using AI to deliver
things. What matters is the outcomes they drive and the level of ownership
they take. Inside the wolfpack, the working principle is simple: if your name
is on it, it is yours. The AI did not write the email. You sent the email.

## Three ways it bites you back

A short list of how this technology actually bites back. Worth knowing so
you can spot these patterns in your own work.

<Cards
  cards={[
    {
      title: "It makes things up",
      body: "AI will name a file, quote a colleague, or cite a study with full confidence and be wrong. Confident-looking output reads the same whether it is right or invented. If accuracy matters, you check."
    },
    {
      title: "It runs in circles",
      body: "An agent told to 'just figure it out' can keep trying, keep failing, keep retrying. Tokens burn, time burns, and nothing converges. If a tool is stuck in a loop, stop it. Restart with clearer success criteria."
    },
    {
      title: "It does things you didn't ask for",
      body: "The minute you let AI act on your computer or your accounts, it has hands. Hands that can delete the wrong file, send the wrong email, or merge the wrong branch. Watch what it proposes before you let it act."
    }
  ]}
/>

## Don't put your data at risk

Not all data is the same. Public stuff is fine to share. Internal stuff
deserves a moment of thought. Customer data and anything covered by a DPA or
regulation is the line you do not cross unless you are sure the tool is
covered.

| Data type | Rule |
| --- | --- |
| Public or already-shared data (marketing material, public docs, sample data) | Fine for any AI tool |
| Internal data (your emails, meeting notes, internal docs) | Default to caution. When in doubt, ask |
| Customer or production data (anything from your product or a customer contract) | Only with a tool that has a DPA in place. Consumer chat tools never qualify |

> **Warning:** 
>
> Never paste credentials (API keys, passwords, tokens) into any AI tool. Once
> it is in the prompt, assume it is logged somewhere you cannot reach. Rotate
> anything that slipped in before you noticed.

## The safety net

Three habits cover most of the risk. They are simple, and they hold up across
every tool you will pick up later in this bootcamp.

- **Human in the loop on anything that ships.** Read what the agent produced
  before it goes to a customer, a teammate, or production. The five seconds
  you save by skipping this is the five seconds where the bad version
  escapes.
- **Least privilege.** When you connect an agent to your accounts or your
  laptop, only open up what the task needs. Read-only first. Write access
  only when you understand the scope. Don't hand it the keys to your whole
  laptop the first time you try it out.
- **Plan before act.** Most agent tools have a "plan first, run later" mode.
  Use it. Read the plan. Approve it. Then run. The plan is much cheaper than
  undoing what the run did. A sketch of that gate follows this list.
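
To make that third habit concrete, here is a minimal sketch of the approval
gate in plain Python. It does not assume any particular agent framework:
`plan` and `execute` are hypothetical stand-ins for whatever your tool
actually proposes and runs.

```python
# Illustrative sketch of a "plan first, run later" gate.
# `plan` and `execute` are hypothetical stand-ins, not a real agent API.

def run_with_approval(plan: list[str], execute) -> None:
    """Show every proposed step, then require an explicit yes before running."""
    print("Proposed plan:")
    for i, step in enumerate(plan, 1):
        print(f"  {i}. {step}")
    # The stop condition: nothing runs until a human says yes.
    if input("Run this plan? [y/N] ").strip().lower() != "y":
        print("Aborted. Nothing ran.")
        return
    for step in plan:
        execute(step)

# Harmless demo executor: print instead of touching anything real.
run_with_approval(
    ["archive reports older than 90 days", "email the summary to the team"],
    execute=lambda step: print(f"running: {step}"),
)
```

The shape is the whole point: the plan is visible before anything happens,
and the default answer is no.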

> **Heaven:** You ask: 'List files in my Downloads folder older than 90 days, group them by size, and wait for me to confirm before deleting anything.' The agent has the scope and the stop condition baked in.
>
> **Hell:** You ask: 'Clean up my drive.' With no scope and no stop condition, the agent picks the biggest folder it can see and deletes it. That folder was your operating system.
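
For the curious: here is what the Heaven version can look like once it is
turned into code. This is a minimal sketch in Python; the `Downloads` path
and the 90-day cutoff come from the prompt above, and the rest is one
reasonable way a careful agent might scope the task.

```python
from pathlib import Path
import time

# Scope: only the Downloads folder, only files older than 90 days.
downloads = Path.home() / "Downloads"
cutoff = time.time() - 90 * 24 * 60 * 60

stale = [p for p in downloads.iterdir()
         if p.is_file() and p.stat().st_mtime < cutoff]

# Sort largest first (a simple stand-in for grouping by size).
for p in sorted(stale, key=lambda p: p.stat().st_size, reverse=True):
    print(f"{p.stat().st_size:>12,} bytes  {p.name}")

# Stop condition: nothing is deleted without an explicit yes.
if stale and input(f"Delete these {len(stale)} files? [y/N] ").strip().lower() == "y":
    for p in stale:
        p.unlink()
    print("Deleted.")
else:
    print("Nothing deleted.")
```

Both properties from the prompt are baked in: the scope is pinned to one
folder and one age cutoff, and the delete step cannot fire without a human
confirming.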

## Quiz

**Q1.** A colleague uses an AI tool to draft the weekly project update for the leadership team. They skim it and send. Two days later a leader points out that one of the dates in the update was wrong. Which framing best captures what went wrong?

- a. The model hallucinated. The fix is to switch to a more reliable model next time.
- b. Project updates are hard to write with AI. The fix is to keep doing this kind of work by hand.
- c. They sent something they could not defend. The model wrote the words, but the message was theirs. **(correct)**
- d. The model did not have the latest project information. The fix is to feed it more context next time.

_Explanation:_ Model choice and missing context are useful follow-ups, but only the third framing keeps the responsibility where it belongs. The others let the sender off the hook for what was sent.

**Q2.** Two colleagues both use AI to help them write a weekly customer-facing brief. Anna reads what the model produced, rewrites a couple of paragraphs, sometimes pushes back on the framing. Ben tells the model what he wants, skims the result, and ships. They produce briefs at roughly the same speed, and both look fine on the page today. Which captures what is actually going on?

- a. Both are using AI well. Same output, same speed; the work behind the page does not really matter.
- b. Ben is more efficient. Anna is duplicating effort the model already covers.
- c. Anna is keeping her hand on the wheel; Ben is letting the model do the work for him. Same brief today, very different trajectory over time. **(correct)**
- d. It depends on the audience. For internal readers, Ben is fine; for external ones, Anna's approach is safer.

_Explanation:_ The two patterns look identical on a single brief; the difference shows up months later. Anna keeps her judgment exercised on the work, so her taste sharpens and she stays able to defend what she ships. Ben's output drifts to whatever the model produces, and his ability to defend it drifts with it.

**Q3.** You used an AI tool to research a topic for a brief that is due in an hour. It gave you five clean, well-phrased points, one of which cites a specific 2023 study by name. The brief otherwise looks great. What is the honest next move?

- a. Ship it; drop the citation. The points are well-phrased on their own.
- b. Keep the citation and ship; the model named a specific study, so it almost certainly exists.
- c. Verify the cited study before the citation reaches anyone. **(correct)**

_Explanation:_ Made-up citations are the textbook hallucination, and they look identical to real ones. Dropping the citation ducks the question without changing whether the rest is right. Trusting a confident-looking study name is exactly how invented sources end up in front of leadership. The only honest move is to verify it somewhere the model cannot fabricate.

## Hands-on

1. Find your company's AI usage policy (search your team's docs, chat, or
handbook). Read it once. If you cannot find one, ask in your team channel.

2. Pick a real task you have today (a draft email, a doc summary, a small
script). Decide which data tier it touches: public, internal, or
customer/production.

3. Based on that tier and your company's policy, pick the AI tool you are
allowed to use for it. If the answer is "none", note that, and move on.

## Reflect

- The next time you reach for an AI assistant, will the question you ask
  delegate your thinking or amplify it? What would the amplifying version of
  the question look like?
- Pick the last thing you shipped with AI help. Could you defend every line
  of it if a colleague asked? If not, what is the rule that would have caught
  it?
