March 12, 2026 edition


Autonomous Security Agents

MindFort Lets AI Hack Your App Before Someone Else Does

The Macro: The Pen Test Bottleneck Is Getting Worse, Not Better

Application security has a math problem. The number of web apps shipping every year keeps climbing. The number of qualified penetration testers does not. ISC2 estimates the global cybersecurity workforce gap at around 4 million unfilled positions, and offensive security specialists are the hardest seats to fill because the skillset is genuinely rare. You need someone who thinks like an attacker, writes code like a developer, and documents findings like an auditor. Those people exist, but there are not nearly enough of them.

So what happens in practice is that companies either pay $30,000 or more for an annual pen test from a consultancy, or they skip it entirely. The consultancy model has its own issues. Engagements are scoped tightly, usually a week or two. The testers are good but rushed. They find the obvious stuff, write a PDF, and move on. Three months later the dev team ships new features and the attack surface changes. The PDF is already stale.

Bug bounty platforms like HackerOne and Bugcrowd tried to solve the supply side by crowdsourcing researchers. That works for companies big enough to run a program, but most mid-market software teams do not have the internal security staff to triage incoming reports, let alone manage a bounty program. Snyk and Veracode handle static analysis well, but static analysis catches a different class of bugs than a real attacker probing a running application.

The gap between “scan your code” and “actually try to break in” is where MindFort is building.

The Micro: Agents That Run All Night and File Pull Requests in the Morning

MindFort deploys autonomous AI agents that perform penetration testing against web applications. These are not quick scanners. According to the product documentation, agents run for 6 to 16 hours or more on a single engagement, working through the same methodology a human tester would: reconnaissance, vulnerability discovery, exploitation, and reporting.

The interesting architectural choice is that the agents collaborate. Multiple agents can coordinate across an attack surface, which mirrors how real red teams operate. One person maps the API endpoints while another probes authentication flows. MindFort is trying to replicate that division of labor with AI.
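To make the division-of-labor idea concrete, here is a minimal sketch of the coordination pattern: one agent enumerates endpoints while a second probes them as they arrive, synchronized through a shared queue. The agent behaviors and endpoint names are stand-ins for illustration; MindFort has not published its actual architecture.

```python
import asyncio

async def recon_agent(queue: asyncio.Queue) -> None:
    """Maps the attack surface and hands endpoints to other agents."""
    for endpoint in ["/login", "/api/users", "/api/orders"]:  # stand-in data
        await queue.put(endpoint)
    await queue.put(None)  # sentinel: recon is finished

async def auth_agent(queue: asyncio.Queue, findings: list) -> None:
    """Probes each discovered endpoint for authentication weaknesses."""
    while (endpoint := await queue.get()) is not None:
        # A real agent would replay tokens, strip auth headers, etc.
        findings.append(f"checked auth on {endpoint}")

async def run_engagement() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    findings: list = []
    # Both agents run concurrently, coordinating through the shared queue,
    # so probing starts before reconnaissance has finished.
    await asyncio.gather(recon_agent(queue), auth_agent(queue, findings))
    return findings

print(asyncio.run(run_engagement()))
```

The point of the pattern is pipelining: the auth agent does not wait for a complete map of the attack surface before it starts working, which is also how a human red team splits the job.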

Brandon Veiseh, co-founder and CEO, previously worked as a product manager at ProjectDiscovery, the company behind Nuclei and other open-source security tools. That background matters because it means he has seen the tooling side of offensive security up close. Akul Gupta, co-founder and CTO, studied computer science at UIUC and spent time at both OpenAI and Anthropic. The third co-founder, Sam, was a senior security engineer at Salesforce leading security for Tableau, with an M.S. in Cybersecurity from UC Berkeley. The team of five went through Y Combinator’s Spring 2025 batch.

Three features stand out from the product page. First, CI/CD integration that deploys agents on every code push. That turns pen testing from a periodic event into a continuous process, which is how it should work but almost never does. Second, the agents can generate code patches and submit them as pull requests. Finding a vulnerability is only half the job. Actually fixing it before someone exploits it is the part that matters to the engineering team at 2 AM. Third, you can interact with agents in real time, steering them toward specific areas of concern or asking them to dig deeper on a finding.
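The CI/CD piece implies a gating decision: after a push-triggered agent run, does the build proceed? A hypothetical sketch of that gate is below. MindFort's real API and finding schema are not public, so every field name and finding here is an assumption, not their format.

```python
# Hypothetical severity levels; MindFort's actual taxonomy is not published.
SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_build(findings: list[dict], fail_at: str = "high") -> bool:
    """Return True if the build may proceed, False if any finding
    meets or exceeds the failure threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

# Example: a push-triggered run came back with two findings (invented data).
findings = [
    {"id": "MF-101", "severity": "low", "title": "verbose error page"},
    {"id": "MF-102", "severity": "high", "title": "IDOR on /api/orders"},
]
print("pass" if gate_build(findings) else "fail")  # prints "fail"
```

A CI step like this is what turns findings into enforcement: the pipeline fails on the high-severity finding, and the auto-generated patch PR is the proposed path back to green.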

The company offers a free tier and demo bookings, but no public pricing is listed. That is standard for security products selling to enterprise buyers who expect custom scoping.

I would want to know how MindFort handles false positives. Automated scanners are notorious for generating noise. If the agents produce a hundred findings and forty of them are garbage, security teams will stop reading the reports. The exploitation step, where the agent actually proves a vulnerability is real by triggering it, is what should separate this from a glorified DAST scanner. If that step works reliably, the value proposition is strong.
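The "prove it by triggering it" step can be illustrated with a textbook technique: time-based blind SQL injection confirmation, where the tester shows the delay is attacker-controlled rather than pattern-matching the response body. This is a generic sketch, not MindFort's disclosed method; `send_request` stands in for whatever HTTP client an agent would use.

```python
import time

def confirm_time_based_sqli(send_request, delay: float = 1.0) -> bool:
    """Confirm a suspected injection by measuring whether an injected
    SLEEP actually delays the response. A scanner that only inspects
    response bodies cannot make this distinction."""
    def timed(payload: str) -> float:
        start = time.monotonic()
        send_request(payload)
        return time.monotonic() - start

    baseline = timed("1")                       # benign input
    probed = timed(f"1 AND SLEEP({delay})")     # injected delay
    # Report the finding only if the delay is demonstrably reproducible.
    return probed - baseline >= delay * 0.8

# A vulnerable endpoint simulated for demonstration: it "executes" the SLEEP.
def fake_vulnerable_endpoint(payload: str) -> None:
    if "SLEEP" in payload:
        time.sleep(1.0)

print(confirm_time_based_sqli(fake_vulnerable_endpoint))  # prints "True"
```

A finding confirmed this way is, by construction, not a false positive, which is exactly the property that would let security teams trust an overnight report without re-verifying every line.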

For a different angle on how AI agents are changing technical workflows, the a0.dev approach to autonomous development shows what happens when you let agents run with minimal human intervention in a different domain entirely.

The Verdict

The security market is not short on tools. It is short on time and expertise. MindFort is betting that autonomous agents can compress weeks of human effort into overnight runs, and that the output will be good enough that security teams trust it without a human pen tester double-checking every finding.

I think the team composition is the strongest signal here. You do not often see a founding team that covers offensive security product management, frontier AI research, and enterprise security engineering all at once. That breadth matters when you are trying to build agents that think like attackers.

At 30 days I would want to see how many customers are running agents in CI/CD versus one-off scans. The continuous model is where the real value lives. At 60 days, whether the auto-generated patches are getting merged or ignored. At 90 days, how the detection rate compares to a manual pen test on the same application.

The pen test market is ripe for compression. Whether MindFort’s agents can match the intuition of a skilled human tester is the open question, but the architecture is pointed in the right direction.