Bring Threat Intelligence to the world of IoT

Threat Intelligence has become common throughout the cyber security landscape and is used in traditional information technology platforms such as next generation firewalls, application load balancers, SIEMs and other threat monitoring and prevention tools. With the pervasive growth of IoT initiatives and concerns about how to protect operational infrastructure from malicious actors, organizations need to understand the role existing threat intelligence can play in protecting their technology infrastructure. Additionally, the existing methods for collecting and analyzing threat data do not directly translate to all of the potential security issues found in the IoT space. Therefore, a deep dive into what existing security technology can and cannot do for an organization’s operational infrastructure will help determine what can be done today and which technologies need to be developed to better secure entire ecosystems.

This five-part blog series will walk through each aspect of threat intelligence, from a general overview that provides a basic understanding to the future of threat intelligence as it relates to IoT. Part 1 gives a high-level overview of what threat intelligence is and how it is gathered, analyzed and consumed. Parts 2 and 3 will focus on IP and URL data, how it can be applied to IoT, and an example of implementing this data in an IoT Gateway. The last two articles will discuss what the future holds for protecting devices and creating purpose-built protection for the IoT.

Threat Intelligence: An Overview

Traditional Threat Intelligence consists of the collection and analysis of four main data types: IP Addresses, URLs, Files and Mobile Applications. The focus of this data collection and analysis revolves around protecting workstations and servers from becoming infected with malicious software, preventing command and control servers from activating dormant code living in an organization’s network and helping to identify and prevent the exfiltration of data. This was initially done through the use of human analysts who spent time manually identifying and evaluating threats but has now evolved to a more automated process through the use of machine learning and big data analytics.

As stated above, threats in the cyber security space can be broken down into four main components. Of course, there are other vectors a malicious actor can use to attack an organization, but the elements below comprise the bulk of threats a typical organization will regularly face (a simple indicator structure covering these types is sketched after the list):

  • IP Addresses: IPv4 and IPv6 addresses that are typically analyzed for threats inbound to an organization. Typical threats include spam sources, command and control servers, and botnet servers.
  • URLs: Not often thought of as a threat category, since many organizations treat URLs as a policy control, but they are heavily used as dynamically embedded delivery endpoints for phishing and malware. It should also be noted that URLs can contain IP addresses.
  • Files: Traditional malicious files (think viruses) used to encrypt user data, monitor user activity, destroy systems and/or exfiltrate data.
  • Mobile Applications: These have been identified separately from traditional files as they require special analysis due to their specific platforms and the functionality they provide in terms of network connectivity and application performance.
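
To make the categories above concrete, here is a minimal sketch of how a single threat indicator covering these four types might be represented. The field names and values are illustrative assumptions, not any particular vendor's schema.

```python
from dataclasses import dataclass
from enum import Enum

class IndicatorType(Enum):
    IP_ADDRESS = "ip"            # IPv4/IPv6 addresses
    URL = "url"                  # phishing/malware delivery endpoints
    FILE = "file"                # file hashes for traditional malware
    MOBILE_APP = "mobile_app"    # mobile application packages

@dataclass
class ThreatIndicator:
    indicator_type: IndicatorType
    value: str                   # e.g. "203.0.113.10", a URL, or a SHA-256 hash
    category: str                # e.g. "spam_source", "c2", "botnet", "phishing"
    confidence: int              # 0-100 score assigned by the classification engine
    first_seen: str              # ISO-8601 timestamp of first observation

# Example: an IP address flagged as a command and control server
c2_server = ThreatIndicator(
    indicator_type=IndicatorType.IP_ADDRESS,
    value="203.0.113.10",
    category="c2",
    confidence=87,
    first_seen="2017-06-01T12:00:00Z",
)
```

Keeping all four data types behind one common record like this is what lets downstream tools (firewalls, SIEMs, gateways) consume mixed feeds without caring which engine produced each entry.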

There are three main steps to any threat intelligence system:

  • Data Collection and Aggregation: There are three main ways to gather data in the wild for analysis:
      • Active: This includes web crawlers and IP port scanning techniques. Since it can be controlled, this method provides a large amount of data but does not typically result in identifying high-value zero-day threats.
      • Passive: By deploying victim machines, web app honeypots, endpoint agents and other exploitable devices on the Internet, it is possible to attract attackers and record malicious activity as it occurs. This technique results in a better set of threat data but requires patience while waiting for a malicious actor to attempt to take advantage of a weakened system.
      • 3rd Party Data: There are several international, governmental and independent bodies that collect threat data for use by security teams. This data, though valuable, must be vetted for accuracy and often becomes outdated quickly, as threat actors subscribe to the same data sets and change or avoid the items published in these lists.
  • Classification: Once data has been gathered and aggregated, it can be fed into purpose-built machine learning engines for analysis. This involves creating and training an engine for each of the data types identified above. Analysts move from doing deep-dive identification of threats to maintaining and tuning the engines for better accuracy, which is done by continually feeding each engine more highly refined data for its data type.
  • Analysis and Consumption: Once the data has been collected and classified, it becomes a big data problem of providing tools such as APIs or SDKs to access each of the individual data types; a minimal lookup sketch follows this list.
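
As a rough illustration of the consumption step, the sketch below queries a hypothetical REST lookup endpoint for an IP address's reputation and uses the result to make a simple allow/block decision. The endpoint URL, API key and response fields are assumptions for illustration only; a real deployment would use whatever API or SDK its threat intelligence provider actually exposes.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint and key -- placeholders for whichever threat
# intelligence service or SDK an organization actually uses.
THREAT_API = "https://threat-intel.example.com/v1/lookup"
API_KEY = "YOUR_API_KEY"

def lookup_ip_reputation(ip_address):
    """Query a (hypothetical) threat intelligence API for an IP's reputation."""
    request = Request(
        f"{THREAT_API}?type=ip&value={ip_address}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urlopen(request) as response:
        return json.load(response)

# Example: decide whether to allow an inbound connection on a gateway
reputation = lookup_ip_reputation("203.0.113.10")
if reputation.get("category") in ("c2", "botnet", "spam_source"):
    print("Blocking connection from known-bad IP")
```

The same pattern applies to URLs, file hashes and mobile application identifiers; only the query type and the policy applied to the answer change.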

A relatively new component of the threat intelligence space is the generation of contextualized data, made possible through advancements in big data analytics. Contextualization involves walking through disparate data sources looking for linkages between the data, in an effort to help prevent future threats before they occur or to allow an analyst to better understand the effect an identified threat may have on an organization.
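
As a simple sketch of the idea, the snippet below links a handful of invented observations (a URL, the file it delivered and the IP it resolves to) into a small pivot graph, so that starting from one indicator surfaces the related ones. The data and field names are assumptions for illustration only.

```python
from collections import defaultdict

# Toy data set: indicators of different types along with the infrastructure
# they were observed with. All values are invented for illustration.
observations = [
    {"type": "url",  "value": "http://malware.example.net/payload", "resolves_to": "203.0.113.10"},
    {"type": "file", "value": "sha256:ab12...", "downloaded_from": "http://malware.example.net/payload"},
    {"type": "ip",   "value": "203.0.113.10", "category": "c2"},
]

# Build a simple linkage graph: each value points to the other values it was
# observed with, so an analyst can pivot from one indicator to the rest.
links = defaultdict(set)
for obs in observations:
    value = obs["value"]
    for key, other in obs.items():
        if key not in ("type", "value", "category"):
            links[value].add(other)
            links[other].add(value)

# Pivoting from the known C2 address surfaces the URL that resolves to it,
# and pivoting from that URL surfaces the file it delivered.
print(links["203.0.113.10"])
```

At production scale this is a graph analytics problem rather than a dictionary, but the pivot-and-expand pattern is the same.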

Typical applications of threat intelligence range from policy management in next generation firewalls to network traffic analysis in security operations centers. The type of threat data an organization uses, and its ability to apply that data to its infrastructure, directly correlate with how well it can detect, identify and resolve threats.

Next week, Part Two of this series will explore what traditional URL and IP data can and cannot do for the IoT.


About the Author

David Dufour

Vice President, Engineering

David Dufour is the Vice President of Engineering at Webroot. He has 25+ years of experience in systems integration and software engineering focusing on large-scale, high-performance, high-availability integration solutions.
