COMMENTARY
In 1931, scientist and philosopher Alfred Korzybski wrote, "The map is not the territory." He meant that all models, like maps, omit some information compared with reality. The models used to detect threats in cybersecurity are similarly limited, so defenders should always be asking themselves, "Does my threat detection detect everything it is supposed to detect?" Penetration testing and red- and blue-team exercises are attempts to answer this question. Or, to put it another way, how closely does their map of a threat match the reality of the threat?
Unfortunately, red-team assessments don't answer this question very well. Red teaming is useful for plenty of other things, but it's the wrong protocol for answering this particular question about defense efficacy. As a result, defenders don't have a realistic sense of how strong their defenses are.
Red-Team Assessments Are Limited by Nature
Red-team assessments aren't very good at validating that defenses are working. By their nature, they test only a few specific variants of a few possible attack techniques that an adversary could use. That's because they are trying to mimic a real-world attack: first reconnaissance, then intrusion, then lateral movement, and so on. But all that defenders learn from this is that those specific techniques and variants work against their defenses. They get no information about other techniques or other variants of the same technique.
In other words, if defenders don't detect the red team, is that because their defenses are lacking? Or is it because the red team chose the one option they weren't prepared for? And if they did detect the red team, is their threat detection comprehensive? Or did the "attackers" just choose a technique they were prepared for? There's no way to know for sure.
The root of this issue is that red teams don't test enough of the possible attack variants to evaluate the overall strength of defenses (although they add value in other ways). And attackers probably have more options than you realize. One technique I've tested had 39,000 variations. Another had 2.4 million! Testing all or most of these is impossible, and testing too few gives a false sense of security.
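Variant counts on that scale typically come from multiplying the independent choices an attacker can make for a single technique (source process, API path, encoding, and so on). The dimensions and counts in the sketch below are made up purely to illustrate the arithmetic; they are not the actual breakdown of the techniques mentioned above.

```python
from math import prod

# Hypothetical dimensions along which one technique can vary, with a made-up
# count of interchangeable options for each. The numbers are illustrative only.
variant_dimensions = [
    ("source process",      12),  # which binary launches the behavior
    ("API / syscall path",   5),  # different call paths to the same effect
    ("payload encoding",    10),  # obfuscation or encoding choices
    ("execution context",   13),  # user vs. system, session type, etc.
    ("delivery method",      5),  # how the tooling reaches the host
]

total = prod(count for _, count in variant_dimensions)
print(f"Total variants: {total:,}")                   # 39,000 with these counts

tested_in_exercise = 3                                # a typical red-team run
print(f"Coverage: {tested_in_exercise / total:.4%}")  # roughly 0.008%
```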
For Vendors: Trust but Verify
Why is testing threat detection so important? In short, it's because security professionals need to verify that vendors actually have comprehensive detection for the behaviors they claim to stop. Security posture is largely based on vendors. The organization's security team chooses and deploys intrusion prevention system (IPS), endpoint detection and response (EDR), user and entity behavior analytics (UEBA), or similar tools and trusts that the chosen vendor's software will detect the behaviors it says it will. Security pros increasingly need to verify vendor claims. I've lost count of the number of conversations I've heard where the red team reports what it did to break into the network, the blue team says that shouldn't be possible, and the red team shrugs and says, "Well, we did it, so …" Defenders need to dig into this discrepancy.
Testing Against Tens of Thousands of Variants
Although testing every variant of an attack technique isn't practical, I believe testing a representative sample of them is. To do this, organizations can use approaches like Red Canary's open source Atomic Testing, where techniques are tested individually (not as part of an overarching attack chain) using multiple test cases for each. If a red-team exercise is like a football scrimmage, Atomic Testing is like practicing individual plays. Not all of those plays will happen in a full scrimmage, but it's still important to practice for when they do. Both should be part of a well-rounded training program, or in this case, a well-rounded security program.
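As a rough sketch of what testing techniques individually can look like in practice, the harness below runs one test case at a time and records whether the detection stack fired. It is a minimal illustration built on an assumed `TestCase` structure and placeholder callables; it is not Atomic Red Team's actual schema or tooling.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestCase:
    technique_id: str                 # ATT&CK-style ID, e.g. "T1003" (credential dumping)
    name: str                         # which procedure/variant this case exercises
    execute: Callable[[], None]       # runs just this behavior, no surrounding attack chain
    was_detected: Callable[[], bool]  # asks the detection stack whether it alerted

def run_atomically(cases: List[TestCase]) -> None:
    """Run each test case in isolation and report detection results per variant."""
    for case in cases:
        case.execute()
        outcome = "detected" if case.was_detected() else "MISSED"
        print(f"{case.technique_id} | {case.name}: {outcome}")
```

In a real program, the `execute` and `was_detected` hooks would wrap whatever test tooling and telemetry the organization already has in place.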
Next, they need to use a set of test cases that covers all possible variants for the technique in question. Building these test cases is a crucial task for defenders; it will directly correlate with how well the testing assesses security controls. To continue my analogy above, these test cases make up the "map" of the threat. Like a good map, they omit unimportant details and highlight the important ones to create a lower-resolution, but overall accurate, representation of the threat. How to build these test cases is a problem I'm still wrestling with (I've written about some of my work so far).
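One hedged way to think about building that lower-resolution map is to write down the dimensions along which a technique varies and then pick a spread of combinations rather than testing every one. The dimensions, values, and simple random sampling below are assumptions for illustration; a more principled covering design could replace the shuffle.

```python
import itertools
import random

# Hypothetical dimensions of variation for one technique; a real set would be
# derived from studying the technique's procedures and tooling.
DIMENSIONS = {
    "tooling":        ["custom binary", "living-off-the-land", "open source tool"],
    "access_path":    ["direct API call", "via service", "scripted"],
    "target_context": ["workstation", "server", "domain controller"],
    "obfuscation":    ["none", "encoded", "packed"],
}

def representative_sample(k: int, seed: int = 0) -> list:
    """Pick k variant combinations spread across the full space."""
    space = [dict(zip(DIMENSIONS, combo))
             for combo in itertools.product(*DIMENSIONS.values())]
    random.Random(seed).shuffle(space)
    return space[:k]

# 3 * 3 * 3 * 3 = 81 possible variants; build test cases for a spread of 12.
for variant in representative_sample(12):
    print(variant)
```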
Another proposed solution to the shortcomings of current threat detection is using purple teams: getting red and blue teams to work together instead of seeing each other as opponents. More cooperation between red and blue teams is a good thing, hence the rise of purple-team services. But most of these services don't fix the fundamental problem. Even with more cooperation, assessments that look at only a few attack techniques and variants are still too limited. Purple-team services need to evolve.
Building Better Test Cases
Part of the challenge of building good test cases (and the reason why red-blue team cooperation isn't enough by itself) is that the way we categorize attacks obscures a lot of detail. Cybersecurity looks at attacks through a three-layered lens: tactics, techniques, and procedures (TTPs). A technique like credential dumping can be accomplished by many different procedures, like Mimikatz or Dumpert, and each procedure can have many different sequences of function calls. Defining what a "procedure" is gets difficult very quickly but is possible with the right approach. The industry hasn't yet developed a good system for naming and categorizing all this detail.
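To see concretely how much detail the three-layer lens hides, it helps to write the hierarchy out. The sketch below is hypothetical and heavily abbreviated: one technique fans out into procedures (e.g., Mimikatz, Dumpert), and each procedure fans out into distinct call sequences, any one of which a detection rule might cover or miss.

```python
# Hypothetical, heavily abbreviated view of one branch of the TTP hierarchy:
# tactic -> technique -> procedure -> distinct low-level call sequences.
ATTACK_DETAIL = {
    "Credential Access": {                         # tactic
        "OS Credential Dumping": {                 # technique
            "Mimikatz": [                          # procedure
                ["OpenProcess", "ReadProcessMemory"],
                ["MiniDumpWriteDump"],
            ],
            "Dumpert": [                           # procedure using direct syscalls
                ["NtOpenProcess", "NtReadVirtualMemory"],
            ],
        },
    },
}

# Each call sequence is a distinct thing a detection rule may or may not cover,
# even though all of them share one technique label.
leaves = [seq
          for techniques in ATTACK_DETAIL.values()
          for procedures in techniques.values()
          for sequences in procedures.values()
          for seq in sequences]
print(f"{len(leaves)} distinct call sequences under a single technique label")
```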
For those who’re trying to put your risk detection to the check, search for methods to construct consultant samples that check in opposition to a wider swath of prospects — it is a higher technique that can produce higher enhancements. It can additionally assist defenders lastly reply the questions that pink groups wrestle with.