Prompt architecting complex content

A few weeks ago in October, I completed two courses back-to-back: one in prompt engineering and one in information architecture. The prompt engineering course taught me about using AI as an outline builder. The IA course gave me a comprehensive view of taxonomy, content modeling, and navigation design: all the structured thinking that makes information findable and usable.

And I couldn't help but wonder: Could I use prompt engineering to build a course about prompt engineering for information architects?

It wasn't just about creating a course. It was about testing both my new skills and the AI. Could I use my IA expertise to validate AI-generated content at scale? Could I set up guardrails that would maintain content credibility? How far could Claude and I get while staying truthful?

I decided to find out!

The collaborative discovery

I didn't start with "Here's my outline, fill it out." I started with a question-asking prompt:

Act as an expert information architect with expert prompt engineering skills. 
Help me design a course on prompt engineering for budding information architects.
Ask me questions until you have enough data to develop this course.

Ask me the first question now.

Claude then asked me ten detailed questions. Far from surface-level, these were deep questions about target audiences, learning outcomes, technical approach, pain points to address, and pedagogical style: questions that made me think harder about what I was actually trying to build.
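If you want to script that back-and-forth instead of running it in a chat window, a rough sketch with the Anthropic Python SDK might look like this. The model name, the loop structure, and the stop condition are placeholders rather than my exact setup; the prompt text is the one above.

# Sketch only: the discovery loop as a script. Placeholders throughout.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

system_prompt = (
    "Act as an expert information architect with expert prompt engineering skills. "
    "Help me design a course on prompt engineering for budding information architects. "
    "Ask me questions until you have enough data to develop this course."
)

messages = [{"role": "user", "content": "Ask me the first question now."}]

while True:
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whichever Claude model you have access to
        max_tokens=1024,
        system=system_prompt,
        messages=messages,
    )
    question = reply.content[0].text
    print(f"\nClaude: {question}")

    answer = input("\nYour answer (or 'done' to stop): ")
    if answer.strip().lower() == "done":
        break

    # Keep the full turn history so Claude remembers earlier answers.
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": answer})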

What resulted was a comprehensive course design summary that transformed my idea into something truly substantial:

  • 16 hours of content across 4 weeks
  • Four major sections: Foundations, Core IA Tasks, Advanced Applications, Tool Building
  • Self-paced, practical, focused on real-world deliverables
  • No-code friendly but with optional prototyping

The meta-exercise structure

This experiment fascinates me because it operates on three interconnected levels simultaneously:

  • Testing my IA knowledge by having me validate AI-generated IA concepts
  • Testing my prompt engineering skills in building complex content systematically
  • Testing Claude's capabilities with strict "no hallucination" directives

Three days of generation

Once I had the outline, I switched to a template-driven approach (another lesson from my prompt engineering course):

Act as an outline expander information architect. Create a new outline 
for each bullet point in the content I give you. Then, develop detailed
content for the outline. At the end, ask me for the next set of content
to develop.

Start with Module n.n. Outline each bullet point and develop detailed
content for them now.

This prompt became my workhorse. Module after module, Claude would:

  1. Take my content bullets
  2. Expand them into detailed outlines
  3. Flesh out the outlines with examples, exercises, code samples
  4. Ask for the next module
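That loop is easy to run by hand in a chat window, but it can also be scripted. Here is a rough sketch of the template-driven idea using the Anthropic Python SDK. The model name, module numbers, bullet text, and file handling are all placeholders; unlike my chat sessions, this version starts a fresh conversation per module and drops the "ask me for the next set" line, since the script supplies the next module itself.

# Sketch only: feeding the template prompt one module at a time. Placeholders throughout.
import anthropic

client = anthropic.Anthropic()

TEMPLATE = (
    "Act as an outline expander information architect. Create a new outline "
    "for each bullet point in the content I give you. Then, develop detailed "
    "content for the outline.\n\n"
    "Start with Module {module}. Outline each bullet point and develop "
    "detailed content for them now.\n\n{bullets}"
)

modules = {
    "1.1": ["What prompt engineering is", "Why IAs should care"],          # placeholder bullets
    "2.1": ["Taxonomy basics", "Controlled vocabularies", "Card sorting"], # placeholder bullets
}

for number, bullets in modules.items():
    prompt = TEMPLATE.format(
        module=number,
        bullets="\n".join(f"- {b}" for b in bullets),
    )
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    # Save each expanded module as its own draft file for later validation.
    with open(f"module-{number}-draft.md", "w") as f:
        f.write(reply.content[0].text)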

At the end of three days, I had 11 detailed modules with learning objectives, exercises, self-assessment projects, prompt pattern libraries, and code examples.

The consistency was remarkable. Despite being built in chunks over three days, the content flows seamlessly.

The credibility paradox

Let's be honest.

What exceeded my expectations:

  • The depth and detail of generated content
  • The consistency across 11 modules using template prompts
  • The complexity of code examples that Claude produced
  • How well the high-level concepts aligned with actual IA principles

What I'm still validating:

  • Every. Single. Code. Example.
  • Whether the exercises actually work as designed
  • Whether the prompt patterns produce the claimed results
  • Whether the examples are truly original (not memorized content)

This is the most sobering part of the experiment: generation speed does NOT equal validation speed.

I built the course in three days. I've been validating it and converting it to Mintlify for over a week now, and I'm not done.
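Part of that validation work is mechanical: getting every code example out of the course files and somewhere I can actually run it. A small sketch of that step, assuming the modules live as .mdx files under a docs/ folder (the paths are placeholders):

# Sketch only: pull every fenced code block out of the .mdx files for review.
import re
from pathlib import Path

FENCE = re.compile(r"```(\w+)?\n(.*?)```", re.DOTALL)

for mdx in sorted(Path("docs").rglob("*.mdx")):  # placeholder docs folder
    for i, (lang, body) in enumerate(FENCE.findall(mdx.read_text()), start=1):
        out = Path("to-validate") / f"{mdx.stem}-{i}.{lang or 'txt'}"
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(body)
        print(f"{mdx}: block {i} ({lang or 'no language tag'}) -> {out}")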

Module 2.2 (Content Modeling) required three runs of my template prompt because I kept hitting context window limits. I've learned the hard way to break large tasks into mini-tasks that AI can handle comfortably.
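The fix was less about the prompt wording and more about portion control. Here's a minimal sketch of the idea; the bullets and batch size are placeholders, not Module 2.2's real content:

# Sketch only: split one module's bullets into mini-batches so each prompt
# stays well inside the context window. Tune the batch size per model.
def batch_bullets(bullets, batch_size=4):
    for i in range(0, len(bullets), batch_size):
        yield bullets[i:i + batch_size]

content_modeling_bullets = [f"Placeholder bullet {n}" for n in range(1, 13)]

for run, batch in enumerate(batch_bullets(content_modeling_bullets), start=1):
    print(f"Run {run}: expand these bullets ->", batch)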

The Taxonomy module (Module 2.1) is my favorite so far because I actually learned from it while validating it. The content wasn't plagiarized and wasn't based on existing course outlines. It was genuinely useful!

Audit in progress

This course exists in an interesting liminal space:

What it IS:

  • A demonstration of what AI can do with expert guidance
  • A prompt engineering portfolio piece
  • An experiment in credibility guardrails
  • A learning tool for me about both IA and prompt engineering

What it is NOT (yet):

  • Ready to teach to others
  • Fully validated and tested
  • A finished product

I'm now tweaking the existing Claude Skills to help audit this course.

What I've learned about prompts

Prompts are powerful, powerful tools. They can make or break your AI experience.

The evolution of my prompting strategy sums it up:

  1. Discovery phase: "Ask me questions" (Collaborative, exploratory)
  2. Structure phase: "Generate an outline" (Architectural, high-level)
  3. Expansion phase: "Expand each bullet point" (Systematic, detailed)

Each phase required a different prompt architecture. The template-driven approach in phase 3 was crucial for maintaining consistency across 11 modules.

And my most critical insight: Fast generation requires slow validation.

AI creating content "in seconds" is just the generation phase. Verifying accuracy, testing examples, checking for hallucinations, ensuring originality, and validating against domain expertise: that's where human expertise and knowledge become essential.

Was this a successful experiment?

Absolutely.

This is a prime example of "Wow, look what AI can do with expert guidance!"

The course demonstrates:

  • How structured prompting can generate complex, consistent content
  • How domain expertise creates essential validation layers
  • How AI can be a genuine learning partner
  • How important it is to break large tasks into manageable chunks
  • The critical difference between generation and validation

I'm not done though. This experiment continues through the audit phase, where I'm using AI tools to audit AI-generated content about AI tools for information architects. Stay tuned for my next blog post!