Iterative auditing as a progress tracking mechanism

The UX audit returned 18 findings on my Ikigai app (including a critical one that called the app "boring"!)

Instead of diving into fix-mode right away, I chose to run the remaining audit skills I had built, thinking a complete picture would help before I started making any changes.

Looking back, this was both a good and a bad decision.

The Marie Kondo approach to technical debt

A year ago, I Marie Kondo’d my wardrobe and cut my belongings in half. The method worked for me because it made me analyze every single item I owned before deciding what to keep. Seeing the full scope clearly was how I decided what actually mattered.

I wanted that same clarity with my audit findings. I planned to run every skill, collect every issue, and understand the full scope before trying to fix anything, maybe even spotting overlapping issues along the way. I made sure to document everything in Notion databases so nothing would get lost.

So Claude and I ran the accessibility audit next, then security, then content accuracy. I skipped the code review audit, since we had already generated plenty of data to work with!

Claude Skills work like a charm. Claude had designed the Notion databases along with the skills, and seeing the findings directly filed into separate databases felt like having an expert team review my beginner code. Each audit filled its own database with severity ratings, specific recommendations, and links back to the audit session summaries.
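
As a rough sketch, a single finding in one of those databases looks something like this. The field names here are my approximation, not the exact schema Claude designed:

    // Hypothetical shape of one audit finding as filed in Notion
    const finding = {
      id: "ACC-07",                 // made-up identifier
      audit: "Accessibility",
      severity: "High",
      issue: "Form inputs missing associated labels",
      recommendation: "Tie a <label> element to each input's id",
      sessionLink: "(link back to the audit session summary)", // placeholder
    };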

Then I looked at the numbers.

Total Issues: 87

├─ Content Accuracy: 32 findings
├─ Accessibility: 24 findings
├─ UX/UI: 18 findings (1 critical)
└─ Security: 13 findings

Yikes.

The security wake-up call

The security audit surfaced something I'd been completely oblivious to while building:

Cross-Site Scripting (XSS) vulnerabilities through unsanitized user input rendered via innerHTML, combined with the risk of exposing localStorage data to other users of the same browser.

Seems obvious now, but as a non-coder building my first web app, I had no idea I was creating security holes. I'd been focused on making the interface aesthetically pleasing while missing fundamental safety issues.
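
To make the pattern concrete, here's a minimal sketch of the vulnerable code next to a safer alternative. The names (entry-list, renderEntry) are hypothetical, not from the actual app:

    // Vulnerable: user input is parsed as HTML, so an entry like
    // <img src=x onerror=alert(1)> would execute script in the page.
    const list = document.getElementById("entry-list");
    function renderEntryUnsafe(userText) {
      list.innerHTML += `<li>${userText}</li>`; // unsanitized input via innerHTML
    }

    // Safer: assign the input as plain text so the browser never
    // interprets it as markup.
    function renderEntrySafe(userText) {
      const item = document.createElement("li");
      item.textContent = userText; // treated as text, not HTML
      list.appendChild(item);
    }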

I've since added a privacy warning to the app, but the finding exposed how much I didn't know about what I didn't know.

The paradox of good data

The problem with systematic AI audits is that they work exactly as designed.

87 documented issues, each with severity ratings and specific recommendations, organized across multiple Notion databases. No ambiguity about what needs fixing. No wondering if I missed something important. Just an overwhelming number of open issues staring back at me.

The same completeness that makes this data valuable makes it paralyzing. Where do you even start when you have 32 content accuracy findings alone?!

Decisions, decisions

Should I start fixing issues immediately?

├─ YES → Risk missing patterns across audits
│        Risk fixes breaking other things
│        Risk losing track of what changed
│        No version control = no progress history
│
└─ NO → See the full scope first
        Understand overlaps (UX + Security?)
        Plan a methodical approach
        Set up proper tracking before touching code

Decision: Run all audits first, don't fix issues immediately

Result: 87 issues documented

Current Status: Still figuring out how much of the fix approach to automate

What I'm learning about prioritization

It's easy to miss the forest for the trees. My technical writing background helps here, though. In documentation, we're often the chosen ones who understand what every part of a system does and how pieces connect. We can't fixate on perfecting one section while ignoring an overall structure that doesn't make sense.

My plan is to maintain a working app at all times, even if it's not the most polished version. Progress over perfection means the app stays functional through every iteration: maybe not beautiful, but certainly not broken.

Perfect is the enemy of Done!

The current state

Those 87 issues in Notion, unfixed, are both motivating and pressure-inducing. I have to remind myself that this is a personal project meant to be fun, that I'm proving AI efficiency to myself, not shipping enterprise software. But the high number of findings does feel like a real-world scenario, and I want to approach this methodically, not just dive in randomly.

Current rumination: finalize a logical system for walking through fixes in order of actual priority, not in order of whatever catches my eye first. Start collaborating with Claude Code...
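
One direction I'm considering, sketched below with made-up data (the severity levels and the "overlaps" count are assumptions, not fields from my actual databases): sort every finding by severity first, then by how many audits flagged the same area.

    // Order findings so critical items and cross-audit overlaps surface first.
    const severityRank = { critical: 0, high: 1, medium: 2, low: 3 };

    function prioritize(findings) {
      return [...findings].sort(
        (a, b) =>
          severityRank[a.severity] - severityRank[b.severity] ||
          (b.overlaps ?? 0) - (a.overlaps ?? 0) // more overlaps = fix sooner
      );
    }

    // Example with hypothetical findings:
    prioritize([
      { id: "CON-12", severity: "medium", overlaps: 0 },
      { id: "SEC-03", severity: "high", overlaps: 2 },
      { id: "UX-01", severity: "critical", overlaps: 1 },
    ]);
    // → UX-01, then SEC-03, then CON-12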

GitHub comes first

Before touching any code, I need version control in place. The app currently exists as a local file on my computer that I keep improving. It's way too easy to lose track of what changed and why.

This project is specifically about demonstrating how AI helps track improvements through connected tools: Skills for automated auditing, Notion for issue documentation, and GitHub for code versioning. I want to showcase this progression and measure whether subsequent audits show actual improvement.

The real discovery

I built these audit skills thinking they'd be one-time debugging tools. Run the audit, get the findings, fix the issues, done.

But documenting 87 issues systematically revealed something more valuable: this isn't a debugging tool, it's a progress tracking system.

When I run the same audits next week, next month, whenever, I'll have objective proof of whether things improved, stagnated, or regressed. No guessing whether changes made things better; I’ll have data showing what shifted and by how much.

The overwhelm comes from seeing everything at once. The value comes from being able to measure movement over time.
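
As a sketch of what measuring movement could look like: if each audit run were exported as a simple count per category (a shape I'm assuming here, not something the skills produce today), comparing two runs is a few lines of code.

    // Compare two audit snapshots; negative deltas mean progress.
    function compareAudits(previous, current) {
      const categories = new Set([...Object.keys(previous), ...Object.keys(current)]);
      const report = {};
      for (const category of categories) {
        report[category] = (current[category] ?? 0) - (previous[category] ?? 0);
      }
      return report;
    }

    // This week's real totals as the baseline; the second run is hypothetical:
    const run1 = { content: 32, accessibility: 24, ux: 18, security: 13 };
    const run2 = { content: 25, accessibility: 20, ux: 15, security: 6 };
    compareAudits(run1, run2);
    // → { content: -7, accessibility: -4, ux: -3, security: -7 }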

What's next

  1. Get the Ikigai app into GitHub with proper commit structure. Every fix should be a documented commit so there's a clear history of what changed (see the sketch after this list).
  2. Fix a small batch of issues manually to understand the patterns. Maybe start with the critical security findings, then see what other issues naturally resolve as side effects.
  3. Run the audits again. Compare findings. See if the Claude skills can identify improvements, not just problems.
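
For step 1, a minimal sketch of that commit structure (the file name and the issue IDs in the messages are placeholders I'm inventing, not an established convention):

    git init
    git add index.html
    git commit -m "Baseline: app as audited, 87 open findings"

    # ...after each fix...
    git add index.html
    git commit -m "fix(security): render user input via textContent (SEC-03)"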

The goal isn't to get to zero issues immediately. It is to establish a workflow where progress is visible, measurable, and documented, and where AI helps implement and track solutions.