A prompt asks a question. A brief sets the table. The difference sounds small, but it's the gap between a model that guesses and a model that knows what you want.

I spent eighteen months chasing the perfect prompt. The right phrasing, the right temperature, the right format instruction. What I was really doing was trying to extract value from a black box without putting anything meaningful in. You can do that. You will get mediocre results, consistently.

The shift happened when I started treating the model like a very capable new contractor who knew nothing about my situation. You would not hand a contractor a two-word instruction and expect something useful back. You would tell them who you are, what you need, who it's for, and what done looks like. That's a brief. That's what changed everything.

What makes a brief different from a prompt

A prompt is caller-side. It asks the model to produce something and hopes the model fills in the right assumptions. A brief is both sides of the conversation at once. It makes the assumptions explicit so the model doesn't have to guess at them.

The model is not a search engine. It is not retrieving a fixed answer. It is generating the most probable good response given everything you have given it. The more precise your input about what "good" means for your specific situation, the more directly it can aim.

This is not about tricks. It is about information.

What a brief contains

Four things, in this order: who I am, what I'm trying to produce, who it is for, and what good looks like to me. Nothing else. The cleverness comes from the model. The context comes from you.

Who I am — one or two sentences on your role and domain. Not a formal bio. Just enough to anchor the vocabulary and the level of assumed knowledge. "I'm a product manager at a mid-size B2B SaaS company" is more useful than "I work in tech." The difference tells the model what jargon to use, what problems to take seriously, and what level of sophistication to match.

What I want — the output, described as specifically as you can manage. Format, length, tone, intended use. If it's a document, name it. If it's a decision, describe the decision. "A 400-word email" is better than "an email." "A one-page argument memo, not a list" is better than "a summary." Specificity here is not micromanagement — it is communication.

Who it's for — the audience. Their relationship to the topic, their level of knowledge, what they care about, what they will do with the output. The model calibrates voice, complexity, and emphasis based on this. The same content should read differently for a board of directors than for a new team member. Tell the model which.

What good looks like — constraints, non-negotiables, examples if you have them. This is where most people under-invest. It takes more thought than the other three sections, and it pays back more than any of them. What tone is wrong? What length is too long? What has not worked before? What do you definitely want included?
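
If you think in code, here is the same structure as a small Python sketch. The class and field names are arbitrary labels I chose for illustration, not any library or official format; the point is only that a brief is a small, fixed record you fill in before you ask for anything.

from dataclasses import dataclass

@dataclass
class Brief:
    """The four parts of a brief, in the order they are given."""
    who_i_am: str              # role and domain, one or two sentences
    what_i_want: str           # the output: format, length, tone, intended use
    audience: str              # who receives it, what they know, what they need
    what_good_looks_like: str  # constraints, non-negotiables, examples

    def render(self) -> str:
        """Flatten the four parts into the opening message of a session."""
        return (
            f"I am {self.who_i_am}. "
            f"The output I need is {self.what_i_want}. "
            f"It is for {self.audience}. "
            f"Good looks like {self.what_good_looks_like}."
        )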

Once I started opening every session with this structure, the back-and-forth collapsed. First drafts got closer. Corrections got smaller. Re-prompting turned into small nudges instead of full rewrites.

The best prompt I ever wrote was a paragraph about the reader.

The template

Here is the structure I use, verbatim. Adapt it to your context — the categories matter more than the exact words:


I am [role/domain]. I'm working on [project or task]. The output I need is [specific format, length, purpose]. It is for [audience — who they are, what they know, what they need from this]. Good looks like [examples, constraints, non-negotiables]. Please [specific instruction for how the model should approach this].


The whole thing is usually three to six sentences. It takes me sixty seconds to fill out. I have been using some version of it for over a year, and I have not found anything that beats it.
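Filled in with concrete values, using the Brief sketch from above, it looks like this. The details here are invented purely for illustration:

brief = Brief(
    who_i_am="a product manager at a mid-size B2B SaaS company",
    what_i_want="a 400-word email announcing a pricing change, "
                "plain tone, ready to send",
    audience="existing customers on our legacy plan, who know the product "
             "well but have not heard about the change",
    what_good_looks_like="no marketing language, one clear reason for the "
                         "change, and a concrete next step at the end",
)

# The rendered brief opens the session; the actual request follows it.
print(brief.render())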

The mistakes people make

The most common mistake is skipping "who it's for." People describe what they want and forget to describe who will receive it. The model can adjust significantly for audience. Most of the time, people leave that lever untouched.

The second mistake is using vague quality words. "Professional," "clear," "engaging" mean different things to different people and nothing specific to the model. Replace them with constraints: "no jargon above undergraduate level," "under 300 words," "one central argument, not a list of bullet points."

The third mistake is re-prompting instead of updating the brief. If the first output misses the mark, the instinct is to add more instructions to the thread. Sometimes that works. But often the miss was in the original brief, and adding to the thread layers complexity on top of a weak foundation. Go back to the brief. Fix the problem there.
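In terms of the sketch from above, the fix is one field, not a longer thread:

# The first draft came back as a bulleted feature list in marketing voice.
# Don't patch the thread; fix the brief and open a fresh session with it.
brief.what_good_looks_like = (
    "plain declarative sentences, no marketing language, "
    "one central argument rather than a list of bullet points"
)
print(brief.render())  # the corrected brief, not a pile of follow-ups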

Why this works

The model is trained on vast amounts of human text. It doesn't lack ideas — it lacks specificity about which of its many possible outputs is the right one for you. A brief hands it that specificity. You're not tricking the model into better answers. You're just giving it enough to aim at.

The brief I wrote this post from was forty words long. The post takes seven minutes to read. That ratio — short context, longer output — is roughly what you should aim for. More context than that, and you're doing the model's job for it. Less, and you're leaving too much to chance.

Write one today. Watch your output change.