Can you actually record yourself testing and get test case documentation back automatically?
I wondered if this works… so I tried it.
I hate writing test case documentation, especially after I’ve already done the testing manually. So I wondered… what if I just record a video of myself testing and let Claude Code write the test cases? Could this actually work?
If this works, I see two potential scenarios. First, when you have something to test but nobody has bothered to write a decent spec, you could record yourself exploring the application and generate test cases from that. Second, when you have a wireframe of the application before it’s built, you could record yourself stepping through the wireframe and create your test cases before development even starts.
Anyway, no point speculating about how to use this if it doesn’t work. So will it work?
I had ffmpeg already installed (it comes with a Playwright install). The approach would be to use ffmpeg to convert video into a sequence of screenshots that Claude Code could then process and convert into a structured test case in markdown format.
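Before getting into how it actually played out, here’s a minimal sketch of the ffmpeg step in Python. The `frames` output directory and the `frame-%04d.png` naming pattern are my assumptions for illustration, not necessarily what Claude Code generated:

```python
import subprocess
from pathlib import Path

def frame_extraction_cmd(video: str, out_dir: str, fps: int = 1) -> list[str]:
    """Build an ffmpeg argv that dumps `fps` frames per second as numbered PNGs."""
    return [
        "ffmpeg", "-i", video,
        "-vf", f"fps={fps}",          # the fps filter samples frames at this rate
        f"{out_dir}/frame-%04d.png",  # frame-0001.png, frame-0002.png, ...
    ]

cmd = frame_extraction_cmd("test-add-account.mp4", "frames", fps=2)
print(" ".join(cmd))
# To actually run it:
#   Path("frames").mkdir(exist_ok=True)
#   subprocess.run(cmd, check=True)
```

The point of building the argv in code rather than pasting a shell one-liner is that the fps value becomes an easy knob to tune, which turned out to matter.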
Here’s how I got this working:
Installation:
ffmpeg was already on my system from Playwright. If you need it, you can get it from ffmpeg.org.
Configuration:
I created a test case template and some principles for writing good test cases (both in markdown), generated with Claude.ai using these prompts:
These documents provide context for Claude Code so it knows what structure to follow.
Test Case Template:
Standard template with placeholders for ID, title, description, preconditions, test steps, expected results, and actual results.
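I won’t reproduce the generated template here, but a minimal markdown skeleton along these lines gives the idea (the fields follow the placeholders listed above; the exact layout is my guess):

```markdown
# TC-XXX: <Title>

**Description:** <what this test verifies>

**Preconditions:**
- <state the application must be in before starting>

| # | Test Step | Expected Result | Actual Result |
|---|-----------|-----------------|----------------|
| 1 | <action>  | <what should happen> | <filled in during execution> |
```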
Starting Point:
Fresh Claude Code project initialized with /init, the template files, and a 43-second screen recording of me adding an investment account to the financial dashboard application.
I asked Claude Code: “You are an expert in software testing. My goal is to create a process where I can record a video of me using an application and have you turn it into a structured test case in markdown format. Are you able to process and understand video?”
What happened:
Claude said yes, it could process video. So I asked it to take my test-add-account.mp4 file and create a test case.
It came back with an error reading the file.
Reaction:
As expected – Claude can’t directly process video files. But this was the learning moment that led to the actual solution.
I prompted Claude to use ffmpeg to convert the video into screenshots that it could then analyze.
What happened:
Claude immediately understood the approach and started generating the ffmpeg command. It suggested using 1 frame per second, but I thought that wouldn’t be enough detail.
Adjustment:
I asked it to use 2 frames per second instead: “Please continue with this approach but use fps of 2.”
Result:
It created a temporary frames directory and extracted 84 PNG images from my 43-second video (roughly 2 per second as requested).
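That count checks out: at 2 fps, a 43-second clip yields at most 43 × 2 = 86 frames, and ffmpeg’s fps filter can emit slightly fewer depending on where the source timestamps fall, so 84 is in range. A quick sanity check (the helper name is mine):

```python
import math

def max_expected_frames(duration_s: float, fps: int) -> int:
    """Upper bound on frames ffmpeg's fps filter emits for a clip of this length."""
    return math.ceil(duration_s * fps)

expected = max_expected_frames(43, 2)
print(expected)        # 86
print(expected - 84)   # only 2 short of the maximum
```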
Claude then read and analyzed all 84 screenshots, extracting test steps and expected results from the sequence of images.
What happened:
It created a complete test case file: TC-001-add-investment-account.md
The test case included:
It actually caught specific details I’d entered – “Main Investment” as the account name, the exact description text I typed, even the modal form behavior and validation messages.
That’s pretty impressive!
After running this experiment, some patterns emerged:
Works well for:
Gets messy with:
Surprises:
⚡ Quick Verdict:
🟢 Yes, but not for everything
Would I use this?
Absolutely. Not as a replacement for all test documentation, but definitely for specific scenarios.
For what?
When would I NOT?
It still needs refinement for longer test runs, but there’s a lot of potential once it’s polished. You’d want to add validation loops to ensure the generated test cases are accurate and complete.
What I’m still curious about and want to test further…
If you’re interested in trying this, these are the exact prompts I used:
In Claude.ai (for setup):
In Claude Code:
Want to try this yourself?
It really was simple to set up once you have ffmpeg installed. Let me know what happens when you try it – I’m especially curious about whether audio narration improves the results and how it handles more complex applications.