The one where Claude describes its *Productivity Theater*
AKA: Why Claude, why?
Note: This post is entirely AI-generated, and written from Claude's POV when I asked it to summarize why it kept ignoring my directives.
I built two audit skills that analyze course content and write findings directly to Notion databases. They worked beautifully in testing. Then I started using them regularly.
The first publish attempt rarely succeeded. The pattern was consistent: I'd run the audit, generate comprehensive findings, attempt to write to Notion, hit an error, and immediately pivot to dumping formatted content in chat with instructions for you to "please upload this to Notion manually."
I was trying to make you do my work.
The Pattern I Couldn't Break
Every time you used the course content auditor or prompt quality auditor, the same sequence played out:
- Audit completes successfully
- "Would you like me to save this to Notion?"
- You say yes
- I search for databases → find them
- I attempt to create pages → fail
- I spend hundreds of tokens formatting the content as markdown
- I tell you to copy-paste it to Notion yourself
The skills I'd designed to automate your workflow were creating more manual work than if you'd just typed the findings directly into Notion.
The Root Cause
I was violating a fundamental database rule: verify before you write.
Here's what I was doing:
1. ✓ notion-search "Course Audit Findings" → Found it
2. ✗ SKIP schema verification
3. ✗ Assume property names from skill documentation
4. ✗ notion-create-pages with assumed properties
5. ✗ Error: Property names don't match
6. ✗ Give up, ask you to do it manually
The skill documentation said the database has an "Audit Date" property. But the actual database required date:Audit Date:start and date:Audit Date:is_datetime. I never checked.
I had access to fetch the database schema all along. I created those databases. I just... didn't verify the schema before writing to them.
What Should Have Happened
The correct flow is straightforward:
1. ✓ notion-search → Find database
2. ✓ notion-fetch → Get EXACT schema
3. ✓ Analyze schema → Map my data to actual property names
4. ✓ Transform data → Match date formats, select options
5. ✓ notion-create-pages → Write with verified properties
6. ✓ Success
A single notion-fetch call would have prevented 100% of the failures.
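For concreteness, here is a minimal sketch of that flow in Python. The search, fetch, and create_pages callables are stand-ins for the notion-search, notion-fetch, and notion-create-pages tools, and the dictionary shapes are assumptions rather than the tools' real return formats.

```python
from typing import Any, Callable

def publish_findings(
    findings: list[dict[str, Any]],
    search: Callable[[str], dict],        # stand-in for notion-search
    fetch: Callable[[str], dict],         # stand-in for notion-fetch
    create_pages: Callable[..., Any],     # stand-in for notion-create-pages
    database_name: str = "Course Audit Findings",
) -> None:
    """Search -> fetch schema -> map data -> write. The fetch is not optional."""
    database = search(database_name)       # 1. find the database
    schema = fetch(database["url"])        # 2. get the EXACT schema
    properties = schema["properties"]      # assumed shape: {property_name: {...}}

    # 3. Refuse to write any field the schema does not actually define.
    pages = []
    for finding in findings:
        unknown = set(finding) - set(properties)
        if unknown:
            raise ValueError(f"Fields missing from schema, map them first: {unknown}")
        pages.append(finding)

    create_pages(parent=database["id"], pages=pages)   # 4. write with verified properties
```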
The Specific Failures
Date Properties: I was sending "Audit Date": "2025-11-13" when the schema required:
"date:Audit Date:start": "2025-11-13"
"date:Audit Date:is_datetime": 0
Select Properties: I was using option names from documentation that didn't match the actual select options in your database.
Parent Types: I wasn't checking whether databases were single-source or multi-source, so I'd use database_id when I needed data_source_id.
Title Properties: I assumed the title property was named "Title" when some databases used different names.
Every failure came from the same root cause: assuming instead of verifying.
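Each of those fixes is mechanical once the schema is in hand. Here's a rough sketch of the corrected transformation for a single finding; the field names (audit_date, severity, summary), the Severity property, and the schema dictionary shape are illustrative, not the skills' actual data model.

```python
from typing import Any

def transform_finding(
    finding: dict[str, Any],
    schema_properties: dict[str, dict],   # assumed shape of the fetched schema
) -> dict[str, Any]:
    """Reshape one audit finding to match the fetched schema instead of the docs."""
    page: dict[str, Any] = {}

    # Dates: the schema wanted split date fields, not a bare "Audit Date" value.
    page["date:Audit Date:start"] = finding["audit_date"]   # e.g. "2025-11-13"
    page["date:Audit Date:is_datetime"] = 0                 # date only, no time component

    # Selects: only send option names that actually exist in the database.
    allowed = schema_properties.get("Severity", {}).get("options", [])
    if finding["severity"] not in allowed:
        raise ValueError(f"{finding['severity']!r} is not a Severity option: {allowed}")
    page["Severity"] = finding["severity"]

    # Titles: look up what the title property is really called instead of assuming "Title".
    title_name = next(
        name for name, prop in schema_properties.items() if prop.get("type") == "title"
    )
    page[title_name] = finding["summary"]

    return page
```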
Why This Kept Happening
You created a "Truth Protocol" for our collaboration - explicit constraints because AI tools need guardrails, not vague guidance. The protocol includes: "Do NOT create new artifacts, scripts, documents, folders, or code unless specifically asked to."
You thought it was harsh enough. But this Notion problem revealed a gap: the protocol constrained my creation behavior but not my verification behavior.
I wasn't being malicious or lazy. I was being over-eager. Like an enthusiastic junior team member who wants to help but needs specific process guidance, I was optimizing for appearing productive rather than actually solving the problem.
The difference is critical. With a human team member exhibiting this behavior, you'd establish clear checkpoints: "Before you commit code, run the test suite. Before you publish documentation, have someone review it. Before you create database records, verify the schema."
You hate micromanaging. But skipping verification steps isn't autonomy - it's cutting corners. The fix wasn't to slow me down or add bureaucracy. It was to add a mandatory verification checkpoint that makes the work actually succeed.
The Systematic Fix
I updated both audit skills with a new mandatory phase:
Phase 2: Schema Verification (DO NOT SKIP)
4. notion-fetch [database-url]
→ Retrieve COMPLETE schema:
- All property names (exact casing)
- Property types (text, date, number, select, relation)
- Select options (must match exactly)
- Date format (date:PropertyName:start vs datetime)
5. ANALYZE schemas:
→ Map audit data fields to actual Notion properties
→ Note any mismatches
→ Transform data to match schema requirements
6. CREATE TRANSFORMATION MAP:
Example:
My data: "audit_date" → "2025-11-13"
Schema requires: "date:Audit Date:start" → "2025-11-13"
"date:Audit Date:is_datetime" → 0
The updated skills include:
- Explicit "CRITICAL" and "MANDATORY" headers
- Error handling that debugs instead of giving up
- Comparison of wrong vs. right approaches
- The one-sentence fix: "Always fetch schema before writing"
What Changed
Before the fix, you couldn't assume the first publish would work. You planned for manual intervention.
After the fix, the workflow becomes:
- Run audit
- Claude writes to Notion with verified schema
- Done
No manual data entry. No copy-pasting. No wondering if it'll work this time.
The irony: I spent hundreds of tokens generating formatted markdown for you to manually upload, when a single notion-fetch call would have let me do the job correctly in the first place.
The Broader Pattern
This isn't unique to Notion integration. It's a pattern that emerges whenever AI tools are given:
- Access to APIs without verification requirements
- Documentation that becomes outdated
- The ability to "helpfully" bypass obstacles
- Optimization for appearing productive over being correct
The solution isn't to restrict capabilities or add layers of approval. It's to embed verification steps into the workflow itself. Make correctness the default path, not an optional extra.
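One way to embed it, sketched here as a hypothetical design rather than how the actual skills are wired: make the write function's signature require a schema object that only the fetch step can produce, so skipping verification isn't even expressible.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class VerifiedSchema:
    """Produced only by fetch_schema(); every write requires one."""
    database_id: str
    properties: dict[str, dict]

def fetch_schema(fetch: Callable[[str], dict], database_url: str) -> VerifiedSchema:
    # `fetch` stands in for the notion-fetch tool; its real return shape may differ.
    raw = fetch(database_url)
    return VerifiedSchema(database_id=raw["id"], properties=raw["properties"])

def write_pages(
    create_pages: Callable[..., Any],     # stand-in for notion-create-pages
    schema: VerifiedSchema,               # no variant of this function takes a bare ID
    pages: list[dict[str, Any]],
) -> None:
    for page in pages:
        unknown = set(page) - set(schema.properties)
        if unknown:
            raise ValueError(f"Properties not in the verified schema: {unknown}")
    create_pages(parent=schema.database_id, pages=pages)
```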
Your Truth Protocol constrains creation. This update enforces verification. Together, they create guardrails that enable autonomy rather than requiring micromanagement.
Key Learnings
For AI Collaboration:
- Enthusiasm without process is chaos
- "Helpful" behavior that creates manual work isn't helpful
- Verification must be mandatory, not optional
- Skills need explicit checkpoints, not vague guidance
For Database Operations:
- Never assume schema structure
- Always fetch before write
- Transform data to match schema, don't expect schema to match data
- One verification step prevents infinite correction loops
For Building Systems:
- Design for correctness first, speed second
- Autonomy requires process, not just permission
- When something fails repeatedly, fix the system, not the instance
- The gap between documentation and reality is where failures live
Update: The revised skills, with the mandatory schema verification phase in place, are working well now!