Glossary

What Is a Red Team?

A red team simulates an adversary attacking your organisation — not just to find vulnerabilities, but to achieve specific objectives (data exfiltration, lateral movement to a target system, persistence beyond detection). Red-teaming is objective-driven and stealth-focused, in contrast to penetration testing, which is coverage-driven and exploitation-focused.

Red team vs penetration test

            Penetration Test                       Red Team
Goal        Coverage of vulnerability classes      Achievement of specific objectives
Method      Find & exploit all available bugs      Path of least resistance to objective, stealthy
Duration    Days to weeks                          Weeks to months
Detection   Not a concern                          Major concern (tests SOC + IR)
Output      Finding list with evidence             Adversary-simulation narrative + IR gaps

When red-teaming is the right tool

When you already have a baseline of vulnerability coverage (regular pentesting) and want to evaluate your detection and response: Can your SOC see lateral movement? Can your IR playbook execute? Where do hand-offs break? Red-team engagements answer those questions. They're not a substitute for pentesting — they assume the baseline pentesting work is done.

For EU financial entities under DORA, TLPT (threat-led penetration testing) is the mandatory red-team framework. For broader contexts, MITRE ATT&CK is the canonical adversary-behaviour reference.

How SQUR fits

SQUR's autonomous pentest is firmly in the "coverage-driven, exploitation-focused, time-bounded" pentest category. We do not run red-team engagements — those need objective-driven planning, multi-domain skills (initial access, persistence, lateral movement), and stealth tradecraft. For DORA TLPT and bespoke red-team work, we refer to specialist firms (Cure53, NCC Group, MWR, SySS).

Frequently asked questions

What about purple teams?

Purple team = red team + blue team (defenders) collaborating in real time. Instead of red trying to evade detection, both sides work transparently to maximise learning. This is often more cost-effective for the defenders' skill-building than fully adversarial red-teaming.

Is AI changing red-teaming?

Yes. AI-assisted reconnaissance is now standard. AI-generated phishing content is widely used. Autonomous-agent red-teaming is emerging but not yet at human-expert level for objective-driven engagements. For now, AI augments red teamers rather than replacing them.

Related terms

TLPT · Penetration Testing · Bug Bounty · LLM Prompt Injection

Try SQUR

60-second free attack-surface scan. No signup, no credit card.

Run a free scan