The Risks of Traditional Pentesting
While it's true that fully autonomous pentests carry certain well-established risks, relying on traditional pentest vendors brings plenty of serious risks of its own, and these are often overlooked.

Below, I outline some of the structural, operational, and ethical risks associated with traditional pentesting.

A pentest with a traditional vendor is far slower: slower to get started because of operational overhead like pricing, scoping, and kickoff calls, and slower still because of delays caused by the vendor's own calendar availability.

Selling blocks of "consultant time" is not only prone to backlogs, it is also expensive. A pentester will generally take five days for a standard pentest of a small web application. Each day of testing typically costs between €1,000 and €1,500, so a standard web application pentest would cost at least €5,000.

This high price does not even guarantee a high-quality pentest. Because calendars are often overbooked, there is a chance your test is assigned to a junior, handed out as an "extra" to be completed in the evening or at the weekend for bonus pay, or even outsourced to a freelancer.

This reactive resourcing causes all sorts of issues. Freelancers often operate in jurisdictions without GDPR-equivalent protections. A pentester squeezing an "extra" into their spare time is less likely to execute properly. And a junior will simply not provide a return on investment.

If you are a smaller, lesser-known company, there is a good chance you will not be treated with priority.

Then there is the risk of human bias. During the actual testing, pentesters can unconsciously favour certain attack paths, overlook classes of vulnerabilities they're less familiar with, naturally lean toward expected findings, or even exhibit a "friendly" bias when testing environments where the client relationship is long-standing.

The counterpart to friendly bias is strategic omission. Here, the vendor finds a vulnerability but, instead of disclosing it in the pentest report, withholds it with the intention of disclosing it in a later report. The strategic goal is to create the appearance of adding continuous value over a longer period of time.

Even worse than strategic omission is the insider threat of exploit laundering: a pentester finds a vulnerability and, instead of disclosing it, uses it for personal gain, or worse still, sells it on the dark web.

There are entire ecosystems that facilitate such behaviour.

Of course, this would be a severe breach of trust, an NDA violation, and blatant fraud, but it is a real risk, and one an autonomous agent is not quite capable of, yet.

Perhaps a day will come when rogue AI agents are trading secrets about us on the "dark agent web" they created, but right now that risk at least is more likely to come from your trusted partner.