Month: November 2025

AI Document Consistency and Reducing Conflicts

November 6, 2025
Document Consistency and Building the System That Prevents AI Conflict

Why Your AI Agent Keeps Changing Its Mind

One of the quickest ways to send your AI agent off track is to give it conflicting information. Conflicting information in files, memory, or context almost guarantees unreliable results. One run it’ll do X, the next run it’ll go off and do Y. Now this might seem, on the surface, like quite an easy thing to avoid. However, when you’re getting AI agents to generate information in the first place, you can (most likely will) end up with a lot of data and files. And let’s face it — we don’t always read and review all of the content that’s generated. As...
Read more...

AI Experiment #5: Can Test Automation AI Learn From Its Own Failures?

November 3, 2025
Can this prompt-driven test automation system scale with complex applications using lessons learnt loops? I wondered if AI could fail at automating a complex test case, learn from that failure, and succeed on the second try. Here’s my attempt at building a process with a feedback loop that achieves that.

The Question

Can this prompt-driven test automation system scale with complex applications using lessons learnt loops? I know from experience that AG Grid scenarios are really difficult to automate – drag-and-drop, row grouping, complex UI interactions. They’re automation nightmares. So what happens when AI comes up against these sorts of challenges? What I’m looking at is scenarios where you encounter a complex application that defeats standard automation approaches. Where you...
Read more...
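The "fail, learn, retry" process described above can be sketched as a simple feedback loop: each failed attempt records a lesson, and the accumulated lessons are fed into the next attempt's context. This is a minimal illustrative sketch, not the post's actual implementation; the function names and the toy lesson string are assumptions.

```python
def run_with_feedback(attempt_fn, max_attempts=2):
    """Retry attempt_fn, feeding lessons from failed runs into later runs."""
    lessons = []
    for attempt in range(1, max_attempts + 1):
        ok, lesson = attempt_fn(lessons)
        if ok:
            return attempt, lessons  # succeeded on this attempt number
        if lesson:
            lessons.append(lesson)  # record what went wrong for next time
    return None, lessons  # all attempts exhausted

# Toy attempt function (hypothetical): fails until the relevant lesson
# is available in its context, mimicking an AI agent that succeeds
# only on the second, lesson-informed try.
def automate_ag_grid(lessons):
    if "use drag-and-drop helper" in lessons:
        return True, None
    return False, "use drag-and-drop helper"
```

For example, `run_with_feedback(automate_ag_grid)` fails on the first pass, records the lesson, and succeeds on attempt 2.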