Most professionals in the InfoSec community have experienced the varying quality of the intelligence reports and feeds available today. At ProtectWise, we are constantly evaluating and integrating new sources of threat intelligence into our own platform, The ProtectWise Grid™, which gives us a unique view of how to evaluate a variety of intelligence providers at scale. On the ProtectWise Threat Research and Analysis Team, we all draw from backgrounds of either creating or consuming intelligence, and we approach the practice of evaluating such vendors with rigor and high standards.

I want to share a few techniques for performing similar evaluations so you can develop a better, truly actionable threat intelligence program. I'm assuming your program is already properly designed to integrate and support an intel dataset within your organization, and that you are typically already utilizing your own assets for in-house enrichment. If you have questions on getting started, the Recommended Reading section below is a great resource.

The three key components for evaluating threat intelligence are context, quality, and ease of use. This is not an exhaustive list of evaluation techniques, but these three typically provide a quick return on value.

Context:

Generally speaking, context is one of the most important requirements for intelligence. Many vendors fail to provide context for their intel for the sake of higher quantity and more automated feed production. Such feeds should not be considered true "intelligence": it is context, supplied by the human element, that turns data into intelligence. Context can include details such as attacker agenda, descriptions and naming, and anything that gives an analyst the ability to confirm the intel's validity and scope. In addition, context allows you to build on and evaluate the data with a methodology such as the Admiralty Code, an approach to scoring intelligence sources on characteristics such as reliability and credibility.
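To make the Admiralty Code concrete, here is a minimal sketch of how the ratings could be encoded for use in an evaluation pipeline. The rating definitions below are the standard ones; the surrounding Python structure is purely our own illustration:

```python
# Standard Admiralty Code rating tables. How you act on a composite grade
# (accept / review / reject) is up to your program's policy.

SOURCE_RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}

INFO_CREDIBILITY = {
    1: "Confirmed by other sources",
    2: "Probably true",
    3: "Possibly true",
    4: "Doubtful",
    5: "Improbable",
    6: "Truth cannot be judged",
}

def admiralty_grade(reliability: str, credibility: int) -> str:
    """Combine the two axes into a grade like 'B2', validating the inputs."""
    if reliability not in SOURCE_RELIABILITY:
        raise ValueError(f"Unknown reliability rating: {reliability}")
    if credibility not in INFO_CREDIBILITY:
        raise ValueError(f"Unknown credibility rating: {credibility}")
    return f"{reliability}{credibility}"

# Example: a usually reliable source reporting probably true information.
print(admiralty_grade("B", 2))  # -> "B2"
```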

When organizations do not provide context with their intel, it is often because they are not following evidence-based practices. I recommend a thorough and honest review of such sources before integrating them into your program. These suppliers tend to generate intel with low quality controls and from a variety of poor "big data" collection methodologies.

Here are important pieces of context to review with your intelligence sources. Keep in mind that the different types of intelligence (tactical, operational, strategic) come with different meanings and context. However, use these as a starting point to understand the need for context in general; a sketch of a normalized context record follows the list.

  • Targets - Industry, profession, individuals, software, or even hardware.

  • Motive - The goal and agenda related to the piece of intelligence - e.g., theft of intellectual property (IP), misinformation, causing unrest, or simply denial of service.

  • Activity Time - When the intel became known, when it was active, and, if possible, when it will become inactive.

  • Severity - Estimated severity if actively or retrospectively observed.

  • Reliability - This should be reviewed against the data and source as a whole.

  • Names/Descriptions - Malware, actor, or any other description to label and organize the intel.

  • Traffic Light Protocol (TLP) Level - The level at which the intel can be shared internally and externally.

  • Resources - Connections to research, private reports, or further information.

  • Collection Method - How the intel was collected, providing evidence (honeypots, research, in-the-wild). This may also be found in other points depending on the provider.
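As a rough illustration of the above, here is a minimal sketch of a normalized context record. The field names and structure are our own illustration, not a standard, so adapt them to your pipeline; the completeness check at the end is a quick way to make context-poor feeds visible:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class IntelContext:
    indicator: str                                      # e.g., a domain, IP, or hash
    targets: list[str] = field(default_factory=list)    # industries, software, hardware
    motive: Optional[str] = None                        # e.g., "IP theft", "denial of service"
    first_seen: Optional[datetime] = None               # when the intel became known
    last_seen: Optional[datetime] = None                # when it was last active
    severity: Optional[str] = None                      # provider's estimated severity
    reliability: Optional[str] = None                   # e.g., an Admiralty grade like "B2"
    names: list[str] = field(default_factory=list)      # malware/actor labels
    tlp: str = "TLP:AMBER"                              # Traffic Light Protocol sharing level
    resources: list[str] = field(default_factory=list)  # links to reports or research
    collection_method: Optional[str] = None             # honeypot, research, in-the-wild

    def missing_context(self) -> list[str]:
        """Return which context fields the provider left empty,
        as a quick completeness check when evaluating a feed."""
        fields_to_check = ("targets", "motive", "first_seen", "severity",
                           "reliability", "names", "collection_method")
        return [name for name in fields_to_check if not getattr(self, name)]
```

Scoring every record in a feed for missing fields quickly surfaces providers who ship indicators with no supporting context at all.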

Quality Review:

Taking the time and resources to evaluate the quality of intelligence against your own evidence is well worth the effort. At ProtectWise we are able to use petabytes of full-fidelity network traffic to help validate the success or failure of intelligence. However, even smaller organizations with a limited or nonexistent dataset can take a similar evidence-based approach. Ideally this evidence is gained retrospectively by reviewing the intel against the history of your dataset, but real-time implementation and utilization can also be successful. Retrospective analysis typically gets you the evidence much faster, allowing you to adjust tactics and identify quality as efficiently as possible.

  • Correlation To What Is Known - This can be easy to determine with less complex intel such as indicator lists, but may be much more difficult for TTPs (tactics, techniques, and procedures) or behavior-based intelligence data. The idea is simply to evaluate intel against a known body of data to determine whether there are obvious signs of quality issues such as undesired false positives and negatives. This has worked well for us at ProtectWise: tossing intel against our wall of full network data has caught providers with dangerously unreliable "intel". A small but helpful way to evaluate domain indicators is to compare them against the Cisco Umbrella 1 Million. Ideally, a domain provided as intelligence is very unlikely to appear on the Cisco list; if it does, that is worth investigating and is a key indication of the provider's credibility, such as a claim that a major advertising domain is malicious. The same can be done with IP addresses associated with shared infrastructure, such as Google, AWS, or Tor. At minimum, this step can help scope the expected impact of integrating the intelligence (a sketch of the Umbrella comparison follows this list).

  • Multidimensional Evidence - Another great way to identify interesting patterns, such as false positives, is to review intel with a hierarchy-of-experts approach: implement methods for incorporating many different types of intel and behaviors into more focused and accurate detection. Take, for instance, a beaconing host on your network. Are there any observations leading up to, or after, the beaconing activity that could help identify it as a true or false positive? Combining various types of indicators, such as IPs, domains, and hashes, also benefits this method. Such an approach can be a great way to isolate false positives that rest on a single dimension of evidence. At a bare minimum, this can help you prioritize your triggering intel into a defensible confidence score (see the scoring sketch after this list).

  • Hit Review - Take the time to build a process into your security and intelligence programs for grouping and documenting findings. You should only reach this point after first weeding out the obviously poor providers. Reviewing hits can be done by comparing the triggering activity and organizing it into groups for manual analysis, typically based on priority and relevance to your program. For example, upon finding that a handful of indicators have triggered within your environment, you can quickly group them by specific applications, users, hosts, or other context variables for analysis. Once the data is organized in some fashion, there is typically a simpler view into it that is feasible to review manually (a grouping sketch follows this list). This can weed out specific pieces of intelligence that tend to be overly generalized or that produce a high rate of false positives. The goal of this step is to find the less obvious signals of reliability and confidence in the intelligence.

  • Provider Feedback and Trust - Last but not least, how does the intel provider accept and utilize feedback about their data from the field (i.e., from you)? Are they open to communication and capable of taking in feedback? A solid relationship between supplier and consumer fosters the ability to correct mistakes and assists both sides in tuning. If a vendor is afraid to tell you why they came to a given conclusion on a piece of intel, they may be hiding negative facts about their data, intel, or service. In our business, that's unacceptable and an immediate red flag.
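To make the Umbrella comparison from the first item concrete, here is a minimal sketch using only the Python standard library. The download URL and the top-1m.csv member name inside the archive are believed correct at the time of writing, but treat them as assumptions and verify them before automating anything:

```python
import csv
import io
import urllib.request
import zipfile

# Cisco Umbrella 1 Million download location (verify before relying on it).
UMBRELLA_URL = "https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m.csv.zip"

def load_umbrella_top_1m() -> set[str]:
    """Download the Umbrella popularity list and return its domains as a set."""
    with urllib.request.urlopen(UMBRELLA_URL) as resp:
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
    with archive.open("top-1m.csv") as f:
        reader = csv.reader(io.TextIOWrapper(f, encoding="utf-8"))
        return {row[1].lower() for row in reader}  # rows look like: rank,domain

def suspicious_overlap(intel_domains: list[str]) -> list[str]:
    """Return intel domains that also appear on the popularity list;
    each one deserves a manual look before you trust the provider."""
    top = load_umbrella_top_1m()
    return [d for d in intel_domains if d.lower().rstrip(".") in top]

# Example: any hit here is a potential credibility problem worth investigating.
print(suspicious_overlap(["examplebadsite.xyz", "doubleclick.net"]))
```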
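Here is one simple way to express the multidimensional idea as a confidence score. The dimensions, weights, and example values are illustrative assumptions to be tuned against your own historical data:

```python
# Weights per indicator dimension; the values below are assumptions.
DIMENSION_WEIGHTS = {
    "ip": 0.2,          # shared infrastructure makes IPs weak on their own
    "domain": 0.35,
    "file_hash": 0.45,  # hashes are the most specific single dimension
}

def host_confidence(hits: dict[str, int]) -> float:
    """Score a host given hit counts per indicator dimension,
    e.g. {"ip": 2, "domain": 1}. Multiple agreeing dimensions score
    far higher than many hits within a single dimension."""
    score = 0.0
    for dimension, count in hits.items():
        if count > 0 and dimension in DIMENSION_WEIGHTS:
            score += DIMENSION_WEIGHTS[dimension]
    return min(score, 1.0)

# A host hitting on IPs alone stays low confidence; the same host also
# resolving a bad domain and downloading a known hash jumps to the maximum.
print(host_confidence({"ip": 3}))                               # 0.2
print(host_confidence({"ip": 1, "domain": 1, "file_hash": 1}))  # 1.0
```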
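And a minimal sketch of the hit-grouping step. The hit records and field names here are illustrative; substitute whatever your platform actually emits:

```python
from collections import defaultdict

# Illustrative hit records; real records would come from your detection platform.
hits = [
    {"indicator": "examplebadsite.xyz", "host": "laptop-42", "app": "chrome.exe"},
    {"indicator": "203.0.113.9",        "host": "laptop-42", "app": "chrome.exe"},
    {"indicator": "examplebadsite.xyz", "host": "server-07", "app": "curl"},
]

def group_hits(records: list[dict], key: str) -> dict[str, list[dict]]:
    """Bucket hit records by a context variable (host, app, user, ...)."""
    groups = defaultdict(list)
    for record in records:
        groups[record[key]].append(record)
    return dict(groups)

# An indicator firing across many unrelated hosts or applications often
# signals an over-generalized, false-positive-prone piece of intel.
for indicator, records in group_hits(hits, "indicator").items():
    hosts = {r["host"] for r in records}
    print(f"{indicator}: {len(records)} hits across {len(hosts)} hosts")
```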

Ease of Use:

The ability to implement and make use of intelligence is something often overlooked until it's too late. What usability means differs for every consumer and provider based on a multitude of factors. Such factors may include integration ability, a team capable of understanding the intel, and even relevance to your organizational security goals and knowledge.

When evaluating ease of use, it's important to understand the root cause of any difficulties. For example, you may not want to disregard a high-confidence, reliable source of intelligence just because it requires some effort on your end. Some security programs, especially less experienced or smaller teams, may need to expand their intake processes or knowledge to best utilize some of the top providers in the community; it's not always up to the provider to match your particular desires.

Some items to consider when evaluating Ease of Use:

  • Provided Formatting - Intel should be simple and clear to utilize. Is the implementation process inefficient or frustrating due to nonexistent or nonstandard formatting?

  • Timeliness - Note the gap between observation and delivery: is the intel still actionable by the time you get it? Stale intelligence still holds value for historical reference, but it will need to be managed and utilized more deliberately (see the latency sketch after this list).

  • Integration Capability - Is the intelligence being fully utilized by your security program and policy makers? This overlaps with items such as formatting and timeliness, but making true use of the intelligence should be the ultimate goal. For example, implementing detection and reporting based on how the intel is seen across your environment will be needed to understand its effectiveness.
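As a small illustration of the timeliness point, here is a sketch of measuring feed latency, the gap between a provider's observation and delivery to you. The timestamp fields are illustrative assumptions about what a feed might carry:

```python
from datetime import datetime, timezone

def feed_latency_hours(observed_at: datetime, delivered_at: datetime) -> float:
    """Hours between observation and delivery. Large values suggest the feed
    is better suited to retrospective analysis than to live blocking."""
    return (delivered_at - observed_at).total_seconds() / 3600.0

# Example: activity observed May 1st but delivered in a feed on May 3rd.
observed = datetime(2017, 5, 1, 8, 0, tzinfo=timezone.utc)
delivered = datetime(2017, 5, 3, 20, 0, tzinfo=timezone.utc)
print(f"{feed_latency_hours(observed, delivered):.1f} hours")  # 60.0 hours
```

Tracking this number per provider over time gives you an objective basis for deciding which feeds belong in real-time detection and which belong in retrospective review.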

Conclusion:

In this post I've detailed three key areas to consider when evaluating intelligence: context, quality, and ease of use. Context gives you the ability to organize and validate intel, and provides the information you'll need for deployment or for response if the intelligence ever becomes active in your organization. Quality can be evaluated in endless ways, but the techniques listed above are a great place to start. Last is ease of use, which is generally what allows you to get the most out of the intelligence and offers simple ways to pick out poor providers. While these three areas will help you build a more robust intelligence program, this is only the beginning. The resources below are a great place to learn more about getting started with your threat intelligence program, and please don't hesitate to email us with any questions!

Recommended Reading:
