Bohemia Ballroom, 06 Dec 2017, 16:30 - 17:00

In January 2017, a working group led by Pfizer began operationally sharing cybersecurity analytics based on MITRE’s ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) framework. The goal of the effort is to improve the defensive postures of the participating organizations by increasing their collective ability to detect adversary behavior. In the process, the group has learned which analytics are useful to share and which aren’t, what data is important to include when sharing analytics, and what feedback is useful to provide. As a participant in that working group, MITRE has also learned how our internal processes for developing and sharing analytics should work to optimize coverage and provide provable defense.

This presentation will focus on new processes MITRE is exploring to develop and test analytics while participating in the working group. These processes start with identifying a good candidate technique from the ATT&CK model to focus on. We then use a test environment where we can exercise that technique (i.e., carry out the real attacks) to see how it appears in our logs. Based on that data we develop an initial version of the analytic that testably detects at least some aspects of the technique. While we can iterate on the analytic to some degree in that test environment, the environment is a poor proxy for how the analytic behaves in the real world on real systems doing real work. For that, most testing occurs in a portion of our live network approved for experiments. That live experiment again lets us exercise the real adversary behavior, but this time against the background and messiness of a real production environment. The background data lets us reduce false positives (cases where the analytic flags behavior that is not malicious) to a reasonable level while still ensuring that the analytic detects the malicious behavior. Because even this experimental segment is smaller than full production, we know that moving an analytic to our production systems will require still more tuning. Moving to real production alerting in the SOC requires a further approval process and review by others.
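As a concrete illustration of the style of behavior-based analytic this process produces, the sketch below flags process-creation events where an Office application spawns a command interpreter. It is a minimal, hypothetical example: the event field names (`parent_image`, `image`) and the process lists are assumptions for illustration, not MITRE's actual analytics.

```python
# Illustrative behavioral analytic: flag process-creation events where an
# Office application spawns a command interpreter. Field names are
# hypothetical stand-ins for whatever the local sensor emits.
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}

def detect(events):
    """Return the subset of process-creation events matching the analytic."""
    hits = []
    for e in events:
        parent = e.get("parent_image", "").lower()
        child = e.get("image", "").lower()
        if parent in OFFICE_PARENTS and child in SHELLS:
            hits.append(e)
    return hits

# Exercising the technique in a test environment yields log records like
# these; the benign second record is the kind of background data used to
# check for false positives.
test_log = [
    {"parent_image": "WINWORD.EXE", "image": "cmd.exe", "host": "lab-01"},
    {"parent_image": "explorer.exe", "image": "cmd.exe", "host": "lab-01"},
]
print(len(detect(test_log)))  # 1: only the Office-spawned shell matches
```

Running the same function over live-network data would then drive the tuning described above, since real environments surface benign parent/child pairs a lab never produces.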

The presentation will also discuss lessons we at MITRE, and the working group more broadly, have learned in sharing analytics with others and integrating analytics shared by others into our own environments. There are basic challenges, such as differing sensors and big data tools, but also tougher challenges, such as determining how much of a technique a shared analytic actually covers and understanding how false positive rates differ across environments. Over time we also hope to learn what types of feedback are useful and what types aren’t, what types of analytics make sense to share and what types don’t, what information is useful to consumers, and what information is too sensitive to share. For example, several aspects of an analytic tend to be generally applicable (what observables to look for and some false positives) while others tend to be specific to a given environment (thresholds and other types of false positives).
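The split between what travels well and what doesn't can be sketched as a portable core (the observables) plus a site-specific overlay (thresholds and local whitelists). Everything below is hypothetical, including the field names, technique ID, and whitelist entry; it is not drawn from the working group's actual analytics.

```python
# Hypothetical split of a shared analytic into a portable part and an
# environment-specific overlay that each consumer tunes locally.
portable = {
    "technique": "T1003",  # ATT&CK technique the analytic aims to cover
    "observable": "lsass.exe memory read by a non-system process",
}
site_overlay = {
    "threshold": 5,                     # minimum matches before alerting
    "whitelist": {"backupagent.exe"},   # known-benign local software
}

def apply_overlay(events, overlay):
    """Drop whitelisted processes, then require a minimum match count."""
    kept = [e for e in events if e["image"] not in overlay["whitelist"]]
    return kept if len(kept) >= overlay["threshold"] else []

# Six suspicious events plus one locally benign one: the whitelist removes
# the benign event and the remaining six clear the threshold.
events = [{"image": "mimikatz.exe"}] * 6 + [{"image": "backupagent.exe"}]
print(len(apply_overlay(events, site_overlay)))  # 6
```

The design point is that `portable` could be shared unchanged across organizations, while `site_overlay` stays local, which matches the observation that thresholds and some false positives are environment-specific.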

Finally, we’ll also describe aspirational efforts to automate sharing across tools and organizations (e.g., via the development of common data taxonomies and mappings), to use new situational tools to understand defensive coverage using the ATT&CK matrix, and to use threat intelligence aligned to ATT&CK to understand defensive coverage in the context of an organization’s threats.
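One way to sketch the coverage idea: map each deployed analytic to the ATT&CK technique IDs it covers, then compare the union against the techniques reported in threat intelligence for an organization's relevant actors. The analytic names and technique sets below are invented for illustration.

```python
# Hypothetical coverage check: which techniques from relevant threat
# intelligence have no deployed analytic covering them?
deployed = {
    "analytic-01": {"T1003"},           # each analytic maps to the
    "analytic-02": {"T1059", "T1086"},  # ATT&CK technique IDs it covers
}
threat_intel = {"T1003", "T1059", "T1105"}  # techniques seen in reporting

covered = set().union(*deployed.values())
gaps = threat_intel - covered
print(sorted(gaps))  # ['T1105']: reported in intel but not covered
```

Even this toy version shows why common taxonomies matter: the comparison only works if every tool and organization names techniques the same way.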


Speakers
The MITRE Corporation
Principal Cybersecurity Engineer
