The MITRE Corporation’s ATT&CK framework has been used to evaluate the effectiveness of enterprise security products from several vendors.
In April, MITRE announced a new evaluation service based on its ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) framework that assesses endpoint detection and response products on their ability to detect advanced persistent threats.
“There are a lot of products on the market that try to detect adversary behavior, and we’re trying to figure out what they can do,” says Frank Duff, principal cybersecurity engineer at MITRE.
Duff explained that MITRE adopts a transparent methodology and knowledge base that makes the results obtained with its service easy to interpret.
In the first phase, MITRE demonstrated the service with a case study based on the APT3/Gothic Panda cyber espionage group.
APT3 (aka UPS Team, Gothic Panda, Buckeye and TG-0110) is a China-linked APT group that operates under the control of China’s Ministry of State Security. The cyber espionage group is responsible for several campaigns, including Operation Clandestine Fox, Operation Clandestine Wolf, and Operation Double Tap. As of June 2015, the group appears to have shifted from targeting primarily US victims to primarily political organizations in Hong Kong.
The first tests of the ATT&CK framework conducted by MITRE evaluated the ability of products from Carbon Black, CrowdStrike, CounterTack, Endgame, Microsoft, RSA and SentinelOne to detect the techniques used by the APT3 cyber espionage group.
The tests focus on the products’ ability to detect malicious activities typically carried out by threat actors after they have compromised an organization’s systems. It is important to highlight that the MITRE ATT&CK evaluation does not assign scores to products and is not designed as a comparison tool.
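To illustrate the scoreless approach, the published results can be thought of as a mapping from ATT&CK technique IDs to the kind of detection a product produced, rather than a numeric ranking. The sketch below is a hypothetical illustration only: the technique IDs are real ATT&CK identifiers, but the product names and detection labels are invented for the example and are not actual evaluation data.

```python
# Hypothetical sketch of scoreless, per-technique evaluation results.
# Technique IDs are real ATT&CK identifiers; "VendorA"/"VendorB" and the
# detection labels are illustrative assumptions, not MITRE's published data.
from collections import Counter

results = {
    "VendorA": {
        "T1059": "telemetry",   # Command-Line Interface
        "T1003": "alert",       # Credential Dumping
        "T1021": "none",        # Remote Services
    },
    "VendorB": {
        "T1059": "alert",
        "T1003": "telemetry",
        "T1021": "telemetry",
    },
}

def summarize(product_results):
    """Count detection categories per product; no score, no ranking."""
    return {
        product: dict(Counter(detections.values()))
        for product, detections in product_results.items()
    }

print(summarize(results))
```

Keeping the raw category per technique, rather than collapsing to a single number, mirrors the evaluation’s stated goal of capturing different detection methods instead of declaring a winner.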
“Direct comparison between vendor capabilities is complicated, and we encourage anyone using our results to consider other factors we didn’t evaluate. Our evaluations are narrowly focused on the technical ability to detect adversary behavior,” Duff wrote in a blog post.
“There are other factors we are not accounting for in our evaluations that should be considered by decision makers as they decide which tool best fits their needs,” Duff said. “You should consider factors such as cost of ownership, sophistication of your Security Operations Center, environmental noise, integration with other tools, user interface, security policies, and other factors. One product may not fit every need, and products can address different needs in different ways.”
MITRE worked with vendors during the evaluations and shared the results with them.
“We approach the evaluations with a collaborative, ‘purple-teaming’ mindset, and we think this allows us to better articulate what a vendor’s capability can do than if we left them out of the process. During the evaluation, MITRE and the vendor are in open communication.”
“The vendor then shows us their detections and describes their process so that we can verify the detection. Since our goal is to capture different detection methods, we may even suggest to the vendor how their capability might have detected the behavior.”
The initial evaluations included the vendors listed above, which signed up before the June 30, 2018 cohort deadline.