
Unstructured content, the bane of RAG systems. Or is it?

When I started my Prompt Engineering for IAs course generation experiment, it was meant to be a simple exercise to understand how context windows work. Somewhere along the way, though, I found myself staring at 20,000+ lines of unstructured text across 11 modules.

In its current form, the course content is not AI-friendly, even though, ironically, it is AI-generated. The reams of content are about as fun as a textbook, and one without diagrams at that.

Prompt architecting complex content

A few weeks ago, in October, I completed two courses back-to-back: one in prompt engineering and one in information architecture. The prompt engineering course taught me about using AI as an outline builder. The IA course gave me a comprehensive view of taxonomy, content modeling, and navigation design: all the structured thinking that makes information findable and usable.

And I couldn't help but wonder: Could I use prompt engineering to build a course about prompt engineering for information architects?

It wasn't just about creating a course. It was about testing both my new skills and AI itself. Could I use my IA expertise to validate AI-generated content at scale? Could I set up guardrails that would maintain content credibility? How far could Claude and I get while staying truthful?

I decided to find out!

Iterative auditing as a progress tracking mechanism

The UX audit returned 18 findings on my Ikigai app (including a critical one that called the app "boring"!)

Instead of diving into fix-mode right away, I chose to run the remaining audit skills I had built, thinking a complete picture would help before I started making any changes.

Looking back, this was both a good and a bad decision.

Watching Claude build an audit system using Notion MCP

I had built five audit skills for my Ikigai app - UX, Code Quality, Content Accuracy, Accessibility, and Security - each one acting as an expert team persona that could analyze the app and generate unique, detailed findings. But the audit reports existed only as chat artifacts, and I wanted them in Notion where I could track fixes, link to GitHub issues, and measure progress over time.

When prompted, Claude suggested the Notion setup I'd need to track audits: ten databases - five for audit sessions, five for individual findings - each with a proper schema and relations between them.

I manually created one database in Notion to test the concept. Getting every property right - the select options, the number fields, the relations - took a level of focus I would have had to sustain across all ten databases.

So instead, I set up the Notion MCP connection and asked Claude to complete the task.
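For anyone curious what that kind of database creation looks like in practice, here's a minimal sketch using the official notion-client Python SDK rather than the MCP connection itself. The database name, property choices, and IDs are illustrative, not the actual schema from my workspace:

```python
# Sketch: creating one "UX Audit Findings" database under a parent page,
# with a relation back to a hypothetical "UX Audit Sessions" database.
# Requires: pip install notion-client, and NOTION_TOKEN set for an integration
# that has access to the parent page.
import os
from notion_client import Client

notion = Client(auth=os.environ["NOTION_TOKEN"])

PARENT_PAGE_ID = "..."   # page that will hold the audit databases (placeholder)
SESSIONS_DB_ID = "..."   # the already-created "UX Audit Sessions" database (placeholder)

findings_db = notion.databases.create(
    parent={"type": "page_id", "page_id": PARENT_PAGE_ID},
    title=[{"type": "text", "text": {"content": "UX Audit Findings"}}],
    properties={
        "Finding": {"title": {}},  # every Notion database needs one title property
        "Severity": {
            "select": {
                "options": [
                    {"name": "Critical", "color": "red"},
                    {"name": "Major", "color": "orange"},
                    {"name": "Minor", "color": "yellow"},
                ]
            }
        },
        "Status": {
            "select": {
                "options": [
                    {"name": "Open", "color": "gray"},
                    {"name": "In progress", "color": "blue"},
                    {"name": "Fixed", "color": "green"},
                ]
            }
        },
        "Effort (hrs)": {"number": {"format": "number"}},
        # Relation linking each finding back to its audit session
        "Audit Session": {
            "relation": {"database_id": SESSIONS_DB_ID, "single_property": {}}
        },
    },
)
print("Created findings database:", findings_db["id"])
```

With the MCP connection in place, Claude could issue the equivalent of that call for each database, which is exactly the repetition I didn't want to do by hand.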

Building an app from a PDF (because AI can!)

AI and I built an interactive web app from a PDF - with drag-and-drop, auto-save, and a guided five-step workflow - in a week! (I'm thrilled about how well it works too!)

This project was about converting a PDF (and complex philosophical concepts) into intuitive user experiences while leveraging GitHub Copilot and Claude for rapid prototyping and problem-solving.

I'm documenting what I've learned about collaborating with AI to build something real.

Sabbaticals - Yea or Nay?

YEA!

After 12 years of tech writing in the corporate world, I went on a sabbatical in 2016. I had no plan or alternate source of income. (Must say, a little too adventurous even for me!)