Your prompts are disposable. Your rejections compound. Here's the skill nobody is developing (+ the guide kit to start)

Original article: Read on Nate's Substack →
Published by Nate B. Jones on March 11, 2026. Processed and summarised on March 11, 2026.
Summary
Main Thesis
The most valuable and underrated skill in AI-augmented work is rejection — saying "no" to AI output. While everyone invests in generation skills (prompting, workflow design, model selection), the real competitive advantage lives in the moment of expert rejection: recognition, articulation, and encoding of quality constraints. Prompts are disposable. Rejections compound.
Key Data Points & Findings
- GDPval benchmark: AI now beats or ties experienced professionals (average 14 years of experience) on 83% of knowledge work tasks across 44 occupations — 11x faster, at less than 1% of the cost. Two years ago this number was 70.9%.
- The remaining 17–30% (the tasks where AI still falls short of experienced professionals) is where organisations win or lose. And slop — output that's technically correct but wouldn't change a single decision — is a rejection problem, not a generation problem.
- Agent reliability gap: More than a third of demonstrated AI agent capability evaporates in production compared to benchmark performance.
- Junior hiring collapse: Entry-level tech postings fell ~67% in two years following ChatGPT's release. Google and Meta hire ~50% fewer new graduates vs. 2021. UK tech graduate roles fell 46% in 2024. A Harvard study (285,000 firms, 62M workers) found AI adoption drops junior employment 8–10% within six quarters while senior employment barely changes.
- The Epic Systems model: Epic didn't win healthcare with better technology — it won through 45 years of encoding clinical domain rejections from thousands of hospitals into a deeply integrated platform. 305 million patient records, near-zero churn, structural switching costs.
The Three Dimensions of Rejection as a Competency
Recognition — detecting that something is wrong. Requires genuine domain expertise. Cannot be shortcut. A domain expert with strong recognition and AI tools can evaluate 10x the output she could before. AI inside that boundary is a force multiplier; outside it, AI is a confidence multiplier (which is worse).
Articulation — explaining why something is wrong in language precise enough to produce a reusable constraint. "This isn't right" vs. "You can't treat a debt service coverage ratio the same as a minimum net worth requirement — they have completely different monitoring triggers." Articulation turns taste from personal to organisational. Almost no organisation is teaching this deliberately.
Encoding — making the constraint persist beyond the moment of rejection. Today, expert corrections live in emails, Slack messages, conversation histories. They evaporate. The same rejection happens again tomorrow. The GCC torture test suite wasn't designed from first principles — it was built over 30 years by people who encountered failures, articulated constraints, and encoded them as tests.
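The GCC point generalises: any expert correction can be pinned down as a test. As a minimal, hypothetical sketch (not from the article — the function names and trigger values are invented for illustration), the covenant rejection from the articulation example above might be encoded like this:

```python
# Hypothetical sketch: one expert rejection, persisted as a pytest-style
# regression test so the same mistake is caught automatically instead of
# being re-explained in Slack. All names and values here are illustrative.

COVENANT_TRIGGERS = {
    "debt_service_coverage_ratio": "quarterly_cash_flow_review",
    "minimum_net_worth": "annual_balance_sheet_check",
}

def monitoring_trigger(covenant_type: str) -> str:
    """Look up the monitoring trigger for a given covenant type."""
    return COVENANT_TRIGGERS[covenant_type]

def test_covenant_types_are_not_conflated():
    # Encoded constraint: a DSCR covenant and a minimum-net-worth covenant
    # must never share a monitoring trigger.
    assert (
        monitoring_trigger("debt_service_coverage_ratio")
        != monitoring_trigger("minimum_net_worth")
    )
```

Once a rejection lives in a test suite rather than a conversation, it stops evaporating: it runs on every future output, for free.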
The Compounding Flywheel
When rejections are encoded:
- Expert time needed for future verification decreases with each encoded constraint
- The library grows; verification gets cheaper per unit of output reviewed
- This compounds across every domain the organisation touches
- Bloomberg, Epic, and every dominant vertical SaaS company did some version of this — AI just makes the encoding cycle radically faster
The Encoding Gap
Every AI tool is built for generation. Nobody has built the capture layer for institutional taste. "The product that would watch for rejection moments, extract the constraint, persist it, and surface it on relevant future tasks doesn't exist yet." This is described as "the most important unsexy opportunity in AI" — the CRM for institutional taste.
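As a thought experiment, here is a minimal sketch of what the core of such a capture layer might look like. Every name here (`Rejection`, `RejectionStore`, the fields) is invented for illustration — per the article, this product doesn't exist yet:

```python
# Hypothetical sketch of a "capture layer" record: what a product would
# persist each time an expert rejects AI output, and how it would surface
# past constraints on relevant future tasks. All names are invented.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Rejection:
    domain: str        # e.g. "loan covenants"
    failure_mode: str  # what the AI got wrong, stated observably
    constraint: str    # the reusable rule the expert articulated
    example: str       # the concrete output that triggered the rejection
    captured_on: date = field(default_factory=date.today)

class RejectionStore:
    """In-memory stand-in for the persistence and retrieval layer."""

    def __init__(self) -> None:
        self._records: list[Rejection] = []

    def capture(self, rejection: Rejection) -> None:
        # The "watch for rejection moments" step would feed this method.
        self._records.append(rejection)

    def surface(self, domain: str) -> list[Rejection]:
        # Surface previously encoded constraints for a new task in a domain.
        return [r for r in self._records if r.domain == domain]
```

The hard product problem is the capture step (detecting a rejection moment and extracting the constraint), not the storage — which is why the analogy is a CRM, not a database.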
The Seed Corn Problem
Junior roles are where people develop recognition — the most critical and irreplaceable dimension. You can't develop the skill of catching AI errors without years of doing original work and getting it sent back. By eliminating the junior pipeline, organisations are cutting off the supply of future experts whose taste the entire AI economy depends on. AWS CEO Matt Garman: "How's that going to work when ten years in the future you have no one that has learned anything?"
Practical Takeaways
| Role | What to do |
| --- | --- |
| Executives | Audit where domain experts are; start treating encoded domain judgment as an asset class — because it is |
| Managers | Create space for articulation — when someone rejects AI output, make that explanation visible as investment, not bureaucracy |
| Individual contributors | Deepen domain expertise and practice articulation; domain expertise compounds over years, tools change quarterly |
| Product builders | The generation side is a commodity. Build the capture layer — the infrastructure that turns every expert rejection into a persistent, compounding constraint |
Core principle: Generation scales with compute. Taste scales with rejection. Compute is a commodity you buy. Taste is an asset you build.
Prompt Kit
From promptkit.natebjones.com — companion to this article.
Companion Prompt Kit — "Your Most Valuable AI Skill Is Saying No"
Your taste is already in your conversation history. Every time you told an AI "that's not quite right," "make it less X," "don't do that," or rewrote its output entirely — you were encoding a preference. You just never captured it.
These five prompts surface those patterns, name them, and turn them into reusable constraints you actually own.
Works with any AI. No special setup required. If your AI can search past conversations, these prompts will trigger that. If not, they'll guide you through reflection instead — the output is the same either way.
Prompt 1: Open the Audit
Use this to start the conversation. The AI will ask you about your work context before doing anything else.
I want to identify my taste — the standards and preferences I hold that most people never articulate. To do that, I need to surface what I've rejected or corrected in AI output over time.
Before we start, ask me:
- What kind of work do I primarily use AI for? (writing, coding, strategy, communication, research — or a mix)
- Are there one or two domains where I use AI most heavily?
- Do I have a rough sense of anything I correct AI on repeatedly, even if I can't name why?
Ask me these questions and wait for my answers before doing anything else.
Prompt 2: Mine the History
Run this after Prompt 1. If your AI can search conversation history, this will trigger it. If not, it will guide you through a structured recall instead.
Now let's find the actual rejection moments. I want you to do two things:
First, check whether you have the ability to search my conversation history or memory. If you do, search for moments where I: corrected your output, said something like "not quite," "too X," "less Y," rewrote something you gave me, or asked you to redo something. Look across all topics and time periods. List what you find — the specific corrections, not just summaries.
If you can't access my conversation history, tell me explicitly — something like "I don't have access to your past conversations, so let's do this through reflection instead." Then ask me: In the last few weeks, what's one thing an AI gave you that you changed or rejected? Walk me through what was wrong with it and what you changed it to.
If you did find history results, still ask me the reflection question too — there may be patterns you missed.
Gather both sources before moving on.
Prompt 3: Find the Pattern
Run this after you've surfaced 5–10 rejection moments from Prompt 2.
Now look across everything we've surfaced. I want you to identify the underlying standards — not just the individual corrections, but what they have in common.
For each pattern you see, give me:
- A short name for the preference (3–6 words)
- What I reject (the failure mode)
- What I actually want (the positive version)
- One example from what we found
Try to find at least 3 patterns, no more than 8. These should feel like they describe me specifically — not generic AI advice. If something shows up once and seems like a one-off, leave it out. If something shows up in multiple corrections across different topics, that's a real preference.
Show me the patterns and ask if any feel off or missing before we move on.
Prompt 4: Write the Constraints
Run this after you've confirmed the patterns in Prompt 3.
Now encode each pattern as a reusable constraint — something I could paste into a system prompt, share with a teammate, or add to a personal preferences file.
For each one, write it in this format:
[Preference Name]
Domain: [what kind of work this applies to]
Reject: [what to avoid — specific and observable]
Want: [what to do instead — specific and observable]
Type: [one of: domain rule / quality standard / business logic / formatting]
Make the "Reject" and "Want" fields concrete enough that a different AI — one that has never talked to me — would know exactly what to do. No vague adjectives. If you have to use a word like "clear" or "concise," follow it with an example of what clear or concise looks like for me specifically.
Prompt 5: Build the Taste Profile
Run this last. This creates a portable document you can use anywhere.
Now pull everything together into a single taste profile — a document I can save, paste into system prompts, or share with anyone who needs to work in my voice or to my standards.
Format it as:
[My Name]'s Taste Profile
Last updated: [today's date]
Context: [2–3 sentences on my work and where I use AI most]
Core Preferences (one section per constraint from Prompt 4)
How to Use This
A 3–4 sentence note on how someone (or an AI) should apply this profile — when to invoke it, what it covers, what it doesn't.
After you write it, ask me: Is there anything here that feels wrong? Anything important that's missing? I want this to feel like it actually describes me, not a generic version of someone who uses AI.