Ask HN: Is agent-driven QA a thing?
TDD has become the default workflow for AI coding agents, largely because tests give the agent a fast, self-checkable feedback loop.
Something I recently started doing that I really like: when building a feature, I write acceptance criteria and then make sure the agent has everything it needs to do QA and verify them itself. The main difference from before is that instead of QAing the feature myself (which I still do, but by then it's almost always working), the agent can exercise the whole flow and check the acceptance criteria on its own. It takes more setup work - MCP servers, accounts, credentials, etc. - but the output is better.
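For concreteness, here's a minimal sketch of what "agent-verifiable acceptance criteria" can look like in the simplest case: the criteria are written as executable checks that the agent runs against the feature end to end. Everything here is hypothetical (signup_flow stands in for the real feature, and the criteria are invented); in practice the checks would drive a browser or API via MCP tools rather than call a function directly.

```python
# Hypothetical sketch: acceptance criteria as executable checks an agent
# can run itself. signup_flow is a stand-in for the feature under test.

def signup_flow(email: str, password: str) -> dict:
    # Placeholder for the real flow (browser/API driven in practice).
    if "@" not in email:
        return {"ok": False, "error": "invalid email"}
    return {"ok": True, "user": {"email": email, "verified": False}}

# Each criterion is a human-readable name plus a check the agent executes.
ACCEPTANCE_CRITERIA = [
    ("valid signup succeeds",
     lambda: signup_flow("a@b.com", "pw")["ok"] is True),
    ("invalid email is rejected",
     lambda: signup_flow("not-an-email", "pw")["ok"] is False),
]

def run_qa() -> dict:
    # The agent runs every criterion and reports pass/fail per name.
    return {name: check() for name, check in ACCEPTANCE_CRITERIA}
```

The point isn't the test harness itself (that's just pytest with extra steps); it's that the agent also gets the credentials and tooling to execute the checks against the real running system, not mocks.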
My question: do people have good approaches to this? Is this a common workflow? I've seen parts of this conversation called harness engineering, other parts called feedback-loop engineering, and a few startups in the space (Shiplight AI, Autosana, Ranger). I'm trying to find more discussion and best practices here, and to see how others are thinking about this.