Threat Intelligence

Unexpected Side Effects: How COVID-19 Affected our Click Habits

Phishing has been around for ages and continues to be one of the most common threats that businesses and home users face today. But it’s not like we haven’t all been hearing about the dangers of phishing for years. So why do people still click? That’s what we wanted...

4 Ways MSPs Can Fine-Tune Their Cybersecurity Go-To-Market Strategy

Today’s work-from-home environment has created an abundance of opportunities for offering new cybersecurity services in addition to your existing business. With cyberattacks increasing in frequency and sophistication, business owners and managers need protection now...

Ransomware: The Bread and Butter of Cybercriminals

Imagine a thief walks into your home and rummages through your personal belongings. But instead of stealing them, he locks all your valuables into a safe and forces you to pay a ransom for the key to unlock the safe. What choice do you have? Substitute your digital...

How Cryptocurrency and Cybercrime Trends Influence One Another

Typically, when cryptocurrency values change, one would expect to see changes in crypto-related cybercrime. In particular, trends in Bitcoin values tend to be the bellwether you can use to predict how other currencies’ values will shift, and there are usually corresponding shifts in crypto-based crime, such as ransomware, though it’s not necessarily the kind of change you might predict.


According to Tyler Moffitt, senior threat researcher and resident crypto expert, “whatever Bitcoin does, the altcoins are going to follow. When [Bitcoin] crashes, the rest crash.” But that doesn’t necessarily mean you’ll see big spikes in ransomware or cryptojacking. In fact, Moffitt states, because Bitcoin is known for being fairly volatile, it can undermine any direct effect on, say, the amount demanded in a ransomware scheme. It’s very possible for a Bitcoin ransom to lose value over time due to market flux, making it less profitable than it might otherwise appear.

So, what’s the real story? As we see cryptocurrency values rise and fall, how should we interpret shifts in the threats we can expect to see? Is it safe for ordinary folks to try to get into the crypto market, or does that just give malicious actors another method to scam and steal from you?

Get answers to these questions and more in this informative Hacker Files podcast with Joe Panettieri, in which he and Tyler Moffitt discuss the ins and outs of crypto, what the market looks like, how it actually affects cybercrime, and what everyone from crypto novices to big-time enthusiasts needs to know.

Ransomware, BEC and Phishing Still Top Concerns, per 2021 Threat Report

Although cybercriminal activity throughout 2020 was as innovative as ever, some of the most noteworthy threat activity we saw came from the old familiar players, namely ransomware, business email compromise (BEC) and phishing. According to the 2021 Webroot BrightCloud® Threat Report, each of these threat types saw significant fluctuations as people all over the world shifted to working, studying, and doing everything else online. Here are some of the findings from the report.

Ransomware

One of the newer trends we saw in ransomware was data extortion. Believed to have been started by the Maze ransomware group, the data extortion trend involves not just encrypting businesses' data and holding it for ransom, but also threatening to expose the compromised data if the victim refuses to pay. This new ransomware business model specifically targets sensitive data to increase the likelihood of payment.

Unfortunately, there’s little a targeted business can do in these situations. If they don’t pay up, their data might be disclosed publicly or otherwise misused. And, depending on what kind of data has been compromised, the consequences of exposure could include costly fines for violating privacy regulations like GDPR and California’s Consumer Privacy Act (CCPA). These fines can really add up, starting at $100 per customer per record lost and going up to flat percentages of revenue.

As if the ransom cost and regulatory fines weren't enough, there's also the cost of other ransomware fallout, such as downtime and time to recover. Universal Health Services reportedly suffered three weeks of downtime after its September 2020 ransomware incident, resulting in a $67 million loss of revenue. Finally, there's the question of the brand's reputation and customer trust, which could be so irreparably damaged that the business might not survive.

Read more about the hidden costs of ransomware in our eBook.

As the data extortion trend took off, we also saw massive payouts to ransomware actors.

  • The attackers who hit Foxconn demanded ~1804 Bitcoin ($34 million at the time) to prevent the data they’d stolen from being publicly exposed.
  • Malicious actors infected Garmin’s systems with ransomware and required (and reportedly received) $10 million to destroy the stolen data.
  • By September 2020, the average ransom payment peaked at $233,817.

“In most cases, ransomware isn’t the beginning of a compromise. It’s actually the end state, where the criminals cash in after an extended period. By the time you realize you’ve got ransomware on your network, the criminals may have been in there, watching, listening, and tampering with things for weeks or months without your knowledge. They might’ve even checked out your financials, so they know what kind of ransom to demand.”
– Kelvin Murray, Sr. Threat Research Analyst

Business email compromise (BEC)

BEC typically targets commercial, government, and nonprofit organizations by impersonating a senior colleague, IT team member, vendor, or trusted customer. In most scenarios, the malicious actor contacts the victim via email under the pretense of requesting that they send money (especially via wire transfer or pre-paid gift card), provide credentials, or release sensitive data.

BEC relies heavily on the inherent trust employees place in their management teams, fellow colleagues, and customers. But with so many invoices and payment requests occurring as part of daily operations in any business, it can be quite easy for attackers to sneak a fake one in.

Given the gift card example above, you might not think much of the consequences of this type of attack. It's important to keep in mind that it's not always a matter of a few $50 or $100 gift cards; it could just as easily be a legitimate-looking vendor invoice for tens of thousands of dollars. BEC remains a very lucrative business; the Internet Crime Complaint Center (IC3) received 19,369 BEC complaints in 2020, resulting in adjusted losses of $1.8 billion.

“Like phishing prevention, successfully preventing BEC involves a combination of robust training for end users and appropriately designed and publicized business policies around how to handle financial or technical requests.” – Grayson Milbourne, Security Intelligence Director

Phishing

Phishing is still one of the most popular ways (if not the most popular) to get ransomware and other types of malware into a business’ network. Getting a victim to fall for a phishing attack is often the first step, which gives attackers a jumping off point to perform reconnaissance on the network, acquire any necessary credentials, interfere with protection measures and backup schedules, deploy malware payloads, and more — and then they get to decide what to do with any data they steal at their leisure.

COVID-19 definitely affected phishing in very visible ways. For example, the majority of phishing lures we spotted throughout the year pretended to offer information on the pandemic, COVID-19 tracking, protection measures and PPE, and more, often purporting to be from reputable sources like the CDC or WHO. There were also numerous malicious spam (malspam) emails claiming to provide details on stimulus checks and vaccines.

Phishing attack rates throughout 2020 largely tracked the course of the pandemic, with the sharpest rise in its early months. Attacks increased 510% from January to February, with eBay and Apple the brands most often targeted (we believe these numbers reflect buyers increasingly looking online as product shortages and technology needs arose). Attack volume continued to grow into March, then dropped off as we moved into the summer months. A more modest spike occurred in the months leading up to the U.S. election, up 34% from September to October, and another 36% from October to November.


Here are a few of the other phishing stats that stand out.

  • From March to July, during the initial lockdown phase in the U.S., phishing URLs targeting Netflix jumped 646%. Other popular streaming services saw similar spikes at corresponding times.
  • By the end of 2020, 54% of phishing sites used HTTPS, indicating that checking for the lock icon in your browser’s address bar is no longer an adequate way to gauge if a website is legitimate or not.

Summary

Cybercriminals certainly didn't sit 2020 out, but it's not all gloom and doom. In fact, there were numerous cybersecurity achievements throughout the year that work to the benefit of businesses and individuals everywhere.

  • Security researchers and analysts have been working hard to identify and neutralize new threats the moment they're encountered.
  • More businesses are adopting robust backup and disaster recovery plans to remain resilient in the face of downtime, planned or unplanned.
  • Operating systems and web browsers are improving their built-in security to stop threats sooner in the attack cycle.
  • Phishing simulations and security awareness training for employees continue to improve business security postures by major percentages (up to 72%, per the report).
  • Nations and companies are working together to break down cybercriminal infrastructure.
  • Even malware (for the moment) is trending gently downward.

It's clear from our findings that, with the right backup, training, and security layers working together to form a united defense against cyber threats, businesses and individuals can achieve true resilience, no matter what threatens.

Get the full story on these details and more in the 2021 Webroot BrightCloud® Threat Report.

It’s Too Late for Threat Intelligence Vendors to Ignore IPv6

IPv6 has been a long time coming. Drafted by the Internet Engineering Task Force (IETF) in 1998, it became an Internet Standard in 2017. Though the rollout of IPv6 addresses has proceeded at a glacial pace since then, adoption numbers continue to inch higher.

Worldwide IPv6 adoption, according to Google’s handy tracker, is around 33 percent. It’s higher in the United States, at just shy of 45 percent. The graph has been trending relentlessly up and to the right since the mid-2000s.

This increased adoption means more cyberattacks are originating from IPv6 addresses. That means security vendors and device manufacturers who rely on embedded threat intelligence should insist on visibility surrounding the successor to IPv4.

Why we needed IPv6

By the late 1980s, the internet's architects realized they were cruising toward a problem. IP addresses, those numbers assigned to every internet-connected device, or node, were designed to contain 32 bits. That made for just under 4.3 billion possible number combinations under the IPv4 system. It was apparent even thirty years ago that these possibilities would be exhausted.

That day came in February 2011, met with a dramatic announcement by the Internet Corporation for Assigned Names and Numbers. Its opening line reads, “A critical point in the history of the Internet was reached today with the allocation of the last remaining IPv4 (Internet Protocol version 4) addresses.”

It seemed like the end of an era. But it wasn't really one at all. IP addresses are frequently recycled and reallocated, and many millions were never used at all. There's even a famous story about Stanford University giving back a block of millions of unused IPv4 addresses. That helps explain why the road from IPv6's ratification as an Internet Standard to majority adoption has been so long.

On the other hand, IPv6 uses 128-bit addresses. This allows for a whopping 3.4 x 10^38 permutations, or roughly 340 trillion trillion trillion. So, while the day may come when we need to revisit the IP system, that day is unlikely to be soon, and it almost certainly won't be because we've run out of assignable options.
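The difference in scale is easy to verify directly; a quick sketch comparing the two address spaces:

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,}")    # just under 4.3 billion
print(f"IPv6: {ipv6_addresses:.3e}")  # roughly 3.4 x 10^38
```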

By the way… whatever happened to IPv5? Didn't we skip a number? Well, it did exist, but it was never officially adopted, in part because it used the same 32-bit addressing as its predecessor. Begun as an experimental method for transferring streaming voice and video data, IPv5's ideas live on in its spiritual successor, voice over IP (VoIP).

What continued IPv6 adoption means for internet security

Hackers tend to set their sights on new targets only when they become worthy of their attention. The same goes for IPv6. As the rest of the internet pursues its perfectly logical reasons for making the migration, increasing numbers of cybercriminals are looking to exploit it. As IPv6 adoption becomes more prevalent, threat actors are increasingly using its addresses as an attack vector.

If threat intelligence feeds haven’t prepared to analyze IPv6 addresses, they’re faced with big black holes in their data sets. As we’ve seen in recent attacks, the ability to monitor anomalous web traffic is key to detecting a breach. So, in addition to having visibility into the threat status of an IP, it’s also critical to have location data and be able to cross-reference its activities with known malicious ones.
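As a minimal illustration of that visibility gap, Python's standard `ipaddress` module can flag which indicators in a feed are IPv6; the addresses below are documentation-range examples, not real threat data:

```python
import ipaddress

def address_family(indicator: str) -> str:
    """Classify a raw indicator as IPv4, IPv6, or not an IP at all."""
    try:
        ip = ipaddress.ip_address(indicator)
    except ValueError:
        return "not-an-ip"
    return "ipv6" if ip.version == 6 else "ipv4"

# A pipeline that silently drops the IPv6 entry has a blind spot.
feed = ["203.0.113.7", "2001:db8::beef", "example.com"]
print([address_family(i) for i in feed])  # ['ipv4', 'ipv6', 'not-an-ip']
```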

Device manufacturers, too, should look to account for accelerated IPv6 adoption when it comes to securing their products. This is especially true for IoT devices. Not typically armed with the highest security measures to start with, they now face the additional threat of an intelligence blind spot if the manufacturer makes no effort to analyze IPv6 addresses.

As internet-connected nodes in the form of IoT devices continue to proliferate, millions of new IPs will be needed. IPv6 will thankfully be more than up to the task of accommodating them, but manufacturers should make sure their devices are designed with the capabilities to analyze them.

IPv6 may have been a long time coming, but it’s too late in the game to ignore. When it’s time to choose a threat intelligence partner, choose one that’s prepared.

To learn more about the Webroot BrightCloud IP Reputation Service, click here.

Essential Threat Intelligence: Importance of Fundamentals in Identifying IOCs

The supply chain attack that Trojanized a SolarWinds update to infect and spy on the IT management platform’s customer base continues to be analyzed. Early reports have called the methods highly sophisticated and the actors highly trained. We do know that IP addresses, a command and control server and a malicious product update file were used. While details continue to come to light with further investigation, one thing has been made clear by the incident: the fundamental elements of tactical threat intelligence still have a critical place in a layered cybersecurity strategy.

Tactical threat intelligence typically focuses on the latest methods threat actors are using to execute attacks. It examines indicators of compromise (IOCs) like IP addresses, URLs, system logs and files to help detect malicious attacks. This type of threat intelligence is most often deployed in network and security devices like firewalls, SIEMs, TIPs and other tools, and is usually set to apply policy-based settings within these devices based on intelligence criteria.

Recent attacks continue to prove that these fundamental tactical threat intelligence pieces are still critical. While web filtering and URL classification, IP reputation, and file detection and reputation may be less flashy than threat actor profiles and takedown services, they continue to be the building blocks of core threat intelligence elements that are key to stopping attacks.

These IOCs – files, IPs, URLs – are proven methods of attack for threat actors and play a consistent role in their malicious campaigns. Having tactical intelligence concerning these internet items is one key step security and technology providers can take to ensure their users are better protected. For tactical threat intelligence to be effective it must be both contextual and updated in real-time.
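A minimal sketch of how tactical intelligence on these three IOC types might be consumed; the indicator values and data structure here are hypothetical, not any vendor's actual feed format:

```python
# Hypothetical IOC store: the same dictionary-of-sets pattern works for
# file hashes, IP addresses, and URLs alike.
iocs = {
    "ip":   {"198.51.100.23", "203.0.113.77"},
    "url":  {"http://bad.example/login"},
    "hash": {"d41d8cd98f00b204e9800998ecf8427e"},
}

def is_known_ioc(kind: str, value: str) -> bool:
    """Return True if the observed value matches a known indicator."""
    return value in iocs.get(kind, set())

print(is_known_ioc("ip", "198.51.100.23"))    # True
print(is_known_ioc("hash", "ffffffffffff"))   # False
```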

Why context matters


Context is what allows threat intelligence providers to take a massive amount of data and turn it into something meaningful and actionable. With context, we can explore relationships between internet objects and better assess their risk.

As the recent SolarWinds attack shows, IOCs are often interconnected, and rarely is only one used. Seeing the connections surrounding various internet objects, like a benign website that may be one step away from a malicious IP address, allows us to map and analyze these objects not only as they are classified but in their contextual relationships. These relationships allow us to better predict whether a benign object has the potential to (or is even likely to) turn malicious.

Real-time intelligence

Over the course of a year, millions of internet objects change from benign to malicious and back many times as cybercriminals attempt to avoid detection. Showing a single IOC at a single point in time, as happens with static IP blocklists, doesn't paint the full picture of an object's activity. Both real-time and historical data, however, can help in the development of a reputation score based on behavior over time and common reputational influencers such as age, popularity and past infections. It also helps to protect users from never-before-seen threats and even predict where future attacks may come from.

Once the fundamental intelligence is present, it's also critical to make sure policies are enabled and configured correctly to best take advantage of the threat intelligence. In the instance of the SolarWinds attack, when we evaluated the initial data, we found that seven of the IP addresses used in the campaign had been identified by BrightCloud® Threat Intelligence months prior to discovery of the attack. These IP addresses were marked as high-risk and had fairly low reputation scores. In addition, the IPs consistently remained in the high-risk category throughout the year, meaning there was a high predictive risk that these IPs would attack infrastructure or endpoints. Depending on the threshold set in the policy, many end users could have already been prevented from experiencing malicious behavior originating from one of these identified IP addresses.
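The threshold idea described above can be sketched as a simple policy check; the 1-100 scale, cutoff value, and scores below are illustrative assumptions, not BrightCloud's actual scoring model:

```python
# Policy sketch: block traffic from IPs whose reputation score falls below
# a configured threshold (assumed scale: 1-100, lower = riskier).
def action_for(score: int, block_below: int = 40) -> str:
    """Map a reputation score to a policy action."""
    return "block" if score < block_below else "allow"

# Hypothetical scores for two observed IPs.
observed = {"203.0.113.9": 12, "198.51.100.4": 85}
decisions = {ip: action_for(score) for ip, score in observed.items()}
print(decisions)  # {'203.0.113.9': 'block', '198.51.100.4': 'allow'}
```

Tightening or loosening `block_below` is the policy decision the paragraph above refers to: a higher threshold blocks more preemptively at the cost of more false positives.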

Necessary, not sufficient

Many security companies treated the Orion software update released by SolarWinds as one coming from a trusted partner. That factor contributed to the widespread success of the suspected espionage operation. It also allowed the threat actors’ reconnaissance operations to go undetected for months.

But Webroot BrightCloud® Threat Intelligence associated the IP address with a botnet in the summer of last year. A properly configured security tool using Webroot BrightCloud Threat Intelligence data would have blocked communication with the command and control server.

When used as part of a wider defense in depth strategy, essential threat intelligence components and proper policy configurations that apply that intelligence can help to make vendors and their partners more resilient against complex attacks.

How to Stop Shadow IT, Manage Access and Ensure Security with Cloud Applications

Today, the average enterprise uses over 2000 cloud applications and services, and we expect this number will continue to grow as more businesses realize the efficiency, flexibility and collaboration benefits these services bring. But the use of cloud-based applications also comes with a few caveats; for example, the apps themselves may pose potential security vulnerabilities, and it’s also hard to prevent employees from using unsanctioned applications outside of the approved list (aka “shadow IT”), meaning critical business data could be floating out there in the ether without proper encryption or access controls.

When implementing these types of solutions, security should be a central concern in the vetting process. Unfortunately, it isn’t.

The State of Security with Cloud Applications

A full 92% of enterprises admit they have a gap between current and planned cloud usage and the maturity of their cloud security program. Meanwhile, 63% of web-borne malware and 15% of phishing attacks are delivered over cloud applications. And although 84% of organizations report using SaaS services at their company, more than 93% of those said they still deal with unsanctioned cloud app usage.

Even though cloud transformation is a strategic focus for many businesses, CISOs and IT teams are often left out of the discussion. That may be because the adoption of cloud services is generally billed as quick and easy with a rapid time to value, while IT security vetting processes don’t typically boast the same reputation. That often means that, for reasons of speed and perception, security may be treated as an afterthought — which is a potentially devastating oversight.

As adoption continues to grow, it’s critical for enterprises and small and medium-sized businesses (SMBs) alike to balance their cloud application use with security and access control; otherwise, the benefits they see may quickly turn into regulatory compliance nightmares, data loss disasters and security breaches.

Bringing Security and Visibility to Your Cloud Transformation

To improve visibility into the cloud applications being used, and to create usage policies and address security risks, many businesses are turning to Cloud Access Security Brokers (CASBs). CASB services are typically placed between the businesses who consume cloud services and providers who offer them, effectively protecting the gateway between a company’s on-premises IT infrastructure and the cloud service provider’s infrastructure. As such, CASBs can provide a central location for policy and governance simultaneously across multiple cloud services — for users and devices — and granular visibility into and control over user activities and sensitive data. They typically help enforce data-centric security policies based on data classification, data discovery and user activity surrounding data.

Faced with a continually growing and changing number of cloud applications and services, it’s critical to have accurate, up-to-date cloud-specific intelligence, not only for CASBs but also other security tool providers who provide support and policy control capabilities around cloud applications.

To better enable CASBs and security device vendors to identify and categorize cloud applications, Webroot recently released its newest service: Webroot BrightCloud® Cloud Service Intelligence. This service is designed to offer full visibility, ensure security, enforce compliance, and identify shadow IT through three components: Cloud Application Classification, Cloud Application Function, and Cloud Application Reputation.

By embedding these components into a CASB solution or other security device, partners can identify a given cloud application, classify it by purpose, and control access to it based on the application’s group, name, and the action being performed. Additionally, customers can assess risk and compliance for all cloud applications with a reputation score. Cloud Service Intelligence can also be layered with other BrightCloud® services, such as Web Classification and Web Reputation, for a complete filtering solution that won’t impact product or network bandwidth.
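A rough sketch of how those three components might combine into an access decision; the field names, app entries, and threshold are invented for illustration and don't reflect the actual service API:

```python
# Hypothetical CASB-style decision combining classification and reputation.
def casb_decision(app: dict, min_reputation: int = 60) -> str:
    """Decide what to do with an observed cloud application."""
    if app["classification"] not in ("sanctioned", "approved"):
        return "flag-shadow-it"       # unsanctioned use: surface it to IT
    if app["reputation"] < min_reputation:
        return "block"                # sanctioned but risky: deny access
    return "allow"

apps = [
    {"name": "corp-storage", "classification": "sanctioned", "reputation": 90},
    {"name": "random-share", "classification": "unknown",    "reputation": 35},
]
print({a["name"]: casb_decision(a) for a in apps})
# {'corp-storage': 'allow', 'random-share': 'flag-shadow-it'}
```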

Next Steps

The use of cloud applications is only going to continue to grow. Actionable threat intelligence can provide critical data around which cloud applications are being used within an organization, how they are being used, and what their security reputations may be. Armed with this kind of visibility and security information, enterprises, businesses, and the CASB and security providers who serve them can reduce risk and minimize shadow IT for a stronger overall cyber resilience posture. Learn more about this new service and its applications in our datasheet.

Key Considerations When Selecting a Web Classification Vendor

Since launching our web classification service in 2006, we’ve seen tremendous interest in our threat and web classification services, along with an evolution of the types and sizes of cybersecurity vendors and service providers looking to integrate this type of curated data into their product or service. Over the years, we’ve had the good fortune to work with partners of all sizes, from global networking and security vendors to innovative and dynamic start-ups across the world.

With the end-of-life of Broadcom's Symantec RuleSpace OEM Web Classification service, we've received numerous inquiries from their former customers evaluating alternative solutions. Here we'll outline the things to consider in a replacement. For more on why Webroot is poised to fill the gap left by Broadcom, you can read the complete whitepaper here.

Your use case: how well does it align with the vendor?

Each use case is unique. Every vendor or service provider brings its own benefit to market and has its own idea about how their service or solution adds value for customers, clients or prospects. That’s why our adaptive business model focuses on consulting with partners on technical implementation options, spending the time to understand each business and how it may benefit from a well-architected integration of classification and/or intelligence services.

Longevity and track record

A key factor influencing change on the internet is innovation. Every service provider is continuously enhancing and improving its services to keep pace with changes in the threat landscape, and with general changes to the internet itself. As well as keeping up with this change, it’s important that a vendor brings a historical perspective to the partnership. This experience will come in handy in many ways. Scalability, reliability and overall business resilience should be expected from a well-established vendor.

Industry recognition

Fair comparative evaluations of web classification and threat intelligence providers are difficult to achieve. We can offer guidance to prospective partners, but it's often more reassuring to simply see the strong partner relationships we have today. Many of these we've worked with for well over a decade. When evaluating a vendor, we recommend looking closely at current partners and imagining the investments each has made in their integrated solutions. This speaks volumes about integration performance and the quality of the partnership.

Technology platform

A classification or threat dataset is only as good as its sources and the analytics used to parse it. Many companies offer classification and/or threat intelligence data, but the quality of that data varies significantly.

Threat Intelligence Capabilities

Not all our partners' use cases require threat intelligence, but for those that do, it's critical they understand where their threat data comes from. There are now a great many sources of threat data, but again, these are far from equal. Worse still, comparing sources is often no simple task.

Ease of integration

As mentioned, every use case is unique. So are the platforms into which web classification, malware detection and threat intelligence services are integrated. It’s therefore crucial that a vendor provide flexible integration options to accommodate any pioneering partner, service provider or systems integrator. Simply providing data via an API is useful, but will it always deliver the performance required for real-time applications?  Delivering a local database of threats or classifications may help with performance, but what about new threats? Achieving a balance of flexible delivery, performance and security is crucial, so take time to discuss with potential vendors how they plan to deliver.

Phishing detection

Phishing sites are some of the most dynamic and short-lived attack platforms on the web, so intelligence sources must be capable of detecting and tracking them in real-time. Most phishing intelligence sources depend on manual submissions of phishing sites by end users. This is far from ideal. Users are prone to error, and for every 10,000 users who click on a phishing site only one will report it to an authority or tracking service, leading to massive under-reporting of this threat vector.

Category coverage: beware category overload

There are various approaches to classifying the web and different vendors specialize in different areas. In many cases, this is determined by the data sources they have access to or the markets in which they operate. Again, it’s important to evaluate the partners to whom the vendor is delivering services and to consider how the vendor may or may not add value to the partnership. 

Efficacy and performance

Efficacy is fundamental to web classification or threat detection capabilities, so it should be a core criterion when evaluating a vendor. Depending on the use case, false positives or false negatives may be the primary concern when making determinations. Potential vendors should be evaluated for performance in these areas and asked how they approach continuous improvement.

Reliability

Building any third-party service or solution into a product, platform or service entails risk. There's always the chance the new dependency negatively affects the performance or user experience of a service. So it's important to ensure a vendor can reliably deliver consistent performance. Examine each vendor's track record and customer base, along with the use cases they've previously implemented. Do the vendor's claims match the available evidence? Can current customers be contacted about their experiences with the vendor?

Scalability

In assessing vendors, it can be difficult to determine the level of scalability possible with their platform. It helps to ask questions about how they build and operate their services and to look for examples where they've responded to unexpected growth events, which can demonstrate the scaling capabilities of their platform. Be wary of smaller or upstart vendors that may have difficulty when their platform is heavily loaded or when called upon to grow faster than their existing implementation allows.

Flexibility

Some solutions may look technically sound, easily accessible and well-documented while a mutually agreeable business model remains elusive. Conversely, an agreeable business model may not be backed by the efficacy or quality of service desired from a chosen vendor.

Feedback loops: making the best better

We’re often approached by contacts asking us for a “feed” of some kind. It may be a feed of threat data, malware information or classifications. In fact, many of our competitors simply push data for customers or partners to consume as their “product.” But this approach has inherent weaknesses.

Partnership: not just a customer relationship

As mentioned, we seek to build strong partnerships with mutual long-term benefit. Look for this approach when considering a vendor, knowing you’ll likely be working with them for a long time and fewer changes to your vendor lineup mean more time optimizing your products and services. Ask yourself: Who will we be working with? Do we trust them? How easy are they to get ahold of? These are critical considerations when selecting a vendor for your business.

Summary

We hope to have provided some food for thought when it comes to selecting an integration partner. To read the full whitepaper version of this blog, please click here. We’re always standing by to discuss prospective clients’ needs and to provide any possible guidance regarding our services. We’re here to help you craft the best possible solutions and services. Please contact us to take the next step towards an even more successful partnership.

The Problem with HTTPS

Despite the intent of ensuring safe transit of information to and from a trusted website, encrypted protocols (usually HTTPS) do little to validate that the content of certified websites is safe.

With the widespread usage of HTTPS protocols on major websites, network and security devices relying on interception of user traffic to apply filtering policies have lost visibility into page-level traffic. Cybercriminals can take advantage of this encryption to hide malicious content on secure connections, leaving users vulnerable to visiting malicious URLs within supposedly benign domains.

This limited visibility affects network devices that are unable to implement SSL/TLS decrypt functionality due to limited resources, cost, and capabilities. These devices are typically meant for home or small business use, but are also found in the enterprise arena, meaning the impact of this limited visibility can be widespread.

With 25% of malicious URLs identified by Webroot hosted within benign domains in 2019, a deeper view into underlying URLs is necessary to provide additional context to make better, more informed decisions when the exact URL path isn’t available.

Digging Deeper with Advanced Threat Intel

The BrightCloud® Web Classification and Web Reputation Services offer technology providers the most effective way to supplement domain-level visibility. Using cloud-based analytics and machine learning with more than 10 years of real-world refinement, BrightCloud® Threat Intelligence services have classified more than 842 million domains and 37 billion URLs to date and can generate a predictive risk score for every domain on the internet.

The Domain Safety Score, available as a premium feature with BrightCloud® Web Classification and Web Reputation services, can be a valuable metric for filtering decisions when there is a lack of path-level visibility on websites using the HTTPS protocol. Even technology partners who do have path-level visibility can benefit from using the Domain Safety Score to avoid the complexity and compliance hurdles of deciding when to decrypt user traffic.

The Domain Safety Score is available for every domain and represents the estimated safety of the content found within that domain, ranging from 1 to 100, with 1 being the least safe. A domain with a low score has a higher predictive risk of having content within its pages that could compromise the security of users and systems, such as phishing forms or malicious downloads.
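To make the scoring model concrete, here is a minimal Python sketch of how a 1–100 safety score (lower meaning less safe) might drive a block/allow decision. The threshold and function name are illustrative assumptions, not the BrightCloud API:

```python
# Illustrative sketch (not the BrightCloud API): a 1-100 domain safety
# score, where lower scores indicate higher predictive risk, drives a
# simple filtering decision when path-level visibility is unavailable.

def filter_decision(domain_score: int, block_below: int = 40) -> str:
    """Return a policy action based on a domain safety score (1-100)."""
    if not 1 <= domain_score <= 100:
        raise ValueError("score must be between 1 and 100")
    return "block" if domain_score < block_below else "allow"

print(filter_decision(12))  # low score, higher predictive risk -> block
print(filter_decision(95))  # high score -> allow
```

In practice, the cutoff would be tuned to match an organization’s risk tolerance, as described above.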

Using these services, organizations can implement and enforce effective web policies that protect users against web threats, whether encrypted via HTTPS or not.

Devising Domain Safety Scores

As mentioned, a Domain Safety Score represents the estimated safety of the content found within that domain. This enables better security filtering decisions for devices with minimal page-level visibility due to increasing adoption of HTTPS encryption.

How do we do it?

BrightCloud uses high-level input features to help determine Domain Safety Scores, including:

  • Domain attribute data, including publicly available information associated with the domain, such as registry information, certificate information, IP address information, and the domain name itself.
  • Behavioral features obtained from historical records of known communication events with the domain, gathered from real-world endpoints.
  • A novel deep-learning architecture employing multiple deep, recurrent neural networks to extract sequence information, feeding them into a classification network that is fully differentiable. This allows us to use the most cutting-edge technology to leverage as much information as possible from a domain to determine a safety score.
  • Model training using a standard backpropagation through time algorithm, fully unrolling all sequences to calculate gradients. In order to train such a network on a huge dataset, we have developed a custom framework that optimizes the memory footprint to run efficiently on GPU resources in a supercomputing cluster. This approach allows us to train models faster and iterate quickly so we can remain responsive and adapt to large changes in the threat landscape over time.
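As a rough illustration of the sequence-extraction idea above, the sketch below encodes a domain name as a fixed-length sequence of character IDs, the kind of input a recurrent network might consume. The vocabulary, padding scheme, and length are invented for illustration and are not Webroot’s actual pipeline:

```python
# Illustrative character-level sequence encoding for a domain name.
# The alphabet, padding value, and max length are assumptions made
# purely for this example.

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-."
CHAR_TO_ID = {c: i + 1 for i, c in enumerate(ALPHABET)}  # 0 reserved for padding

def encode_domain(domain: str, max_len: int = 64) -> list[int]:
    """Map a domain name to a fixed-length sequence of integer IDs."""
    ids = [CHAR_TO_ID.get(c, 0) for c in domain.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))  # right-pad to max_len

seq = encode_domain("example.com")
print(len(seq))   # 64
print(seq[:11])   # IDs for the characters of "example.com"
```

A recurrent network trained on sequences like these, together with the behavioral and attribute features above, would produce the final safety score.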

A secure connection doesn’t have to compromise your privacy. That’s why Webroot’s Domain Safety Scores peek below the domain level to the places where up to a quarter of online threats lurk.

Learn more about Domain Safety Scores, here.

Thoughtful Design in the Age of Cybersecurity AI

AI and machine learning offer tremendous promise for humanity in terms of helping us make sense of Big Data. But, while the processing power of these tools is integral for understanding trends and predicting threats, it’s not sufficient on its own.

Thoughtful design of threat intelligence—design that accounts for the ultimate needs of its consumers—is essential too. There are three areas where thoughtful design of AI for cybersecurity increases overall utility for its end users.

Designing where your data comes from

To set the process of machine learning in motion, data scientists rely on robust data sets they can use to train models that deduce patterns. If your data is siloed, relies on a single community of endpoints, or is made up only of data gathered from sensors like honeypots and crawlers, there are bound to be gaps in the resultant threat intelligence.

A diverse set of real-world endpoints is essential to achieve actionable threat intelligence. For one thing, machine learning models can be prone to picking up biases if exposed to either too much of a particular threat or too narrow a user base. That may make the model adept at discovering one type of threat, but not so great at noticing others. Well-rounded, globally sourced data provides the most accurate picture of threat trends.

Another significant reason real-world endpoints are essential is that some malware excels at evading traditional crawling mechanisms. This is especially common for phishing sites targeting specific geos or user environments, as well as for malware executables. Phishing sites can hide their malicious content from crawlers, and malware can appear benign or sit on a user’s endpoint for extended periods of time without taking an action.

Designing how to illustrate data’s context

Historical trends help to gauge future measurements, so designing threat intelligence that accounts for context is essential. Take a major website like www.google.com for example. Historical threat intelligence signals it’s been benign for years, leading to the conclusion that its owners have put solid security practices in place and are committed to not letting it become a vector for bad actors. On the other hand, if we look at a domain that was only very recently registered or has a long history of presenting a threat, there’s a greater chance it will behave negatively in the future. 

Illustrating this type of information in a useful way can take the form of a reputation score. Since predictions about a data object’s future actions—whether it be a URL, file, or mobile app—are based on probability, reputation scores can help determine the probability that an object may become a future threat, helping organizations determine the level of risk they are comfortable with and set their policies accordingly.
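One way to picture how reputation scores feed policy is a tiered mapping like the following sketch. The thresholds and action names here are hypothetical, and a real deployment would tune them to the level of risk the organization is comfortable with:

```python
# A minimal, hypothetical mapping from a 1-100 reputation score to a
# tiered policy action; thresholds and tier names are illustrative only.

def policy_action(reputation: int) -> str:
    """Translate a 1-100 reputation score into a policy action."""
    if reputation >= 80:
        return "allow"            # long benign history, low predicted risk
    if reputation >= 50:
        return "allow-and-log"    # moderate risk: permit but audit
    if reputation >= 20:
        return "warn"             # suspicious: show an interstitial warning
    return "block"                # high predictive risk

for score in (95, 60, 30, 5):
    print(score, policy_action(score))
```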

For more information on why context is critical to actionable threat intelligence, click here.

Designing how you classify and apply the data

Finally, how a threat intelligence provider classifies data and the options they offer partners and users in terms of how to apply it can greatly increase its utility. Protecting networks, homes, and devices from internet threats is one thing, and certainly desirable for any threat intelligence feed, but that’s far from all it can do.

Technology vendors designing a parental control product, for instance, need threat intelligence capable of classifying content based on its appropriateness for children. And any parent knows malware isn’t the only thing children should be shielded from. Categories like adult content, gambling sites, or hubs for pirating legitimate media may also be worth avoiding. This flexibility extends to the workplace, too, where peer-to-peer streaming and social media sites can affect worker productivity and slow network speeds, not to mention introduce regulatory compliance concerns. Being able to classify internet objects with such scalpel-like precision makes thoughtfully designed threat intelligence much more useful for the partners leveraging it.

Finally, the speed at which new threat intelligence findings are applied to all endpoints is critical. It’s well-known that static threat lists can’t keep up with the pace of today’s malware, but updating those lists on a daily basis isn’t cutting it anymore either. The time from initial detection to global protection must be a matter of minutes.

This brings us back to where we started: the need for a robust, geographically diverse data set from which to draw our threat intelligence. For more information on how the Webroot Platform draws its data to protect customers and vendor partners around the globe, visit our threat intelligence page.

Context Matters: Turning Data into Threat Intelligence

1949, 1971, 1979, 1981, 1983 and 1991.

Yes, these are numbers. You more than likely even recognize them as years. However, without context you wouldn’t immediately recognize them as years in which Sicily’s Mount Etna experienced major eruptions.

Data matters, but only if it’s paired with enough context to create meaning.

While today’s conversations about threat intelligence tend to throw a ton of impressive numbers and fancy stats out there, if the discussion isn’t informed by context, numbers become noise. Context is how Webroot takes the wealth of information it gathers—data from more than 67 million sources, including crawlers and honeypots, as well as partner and customer endpoints—and turns it into actionable, contextual threat intelligence.

Read about the importance of data quality for a threat intelligence platform in our latest issue of Quarterly Threat Trends.

What defines contextual threat intelligence?

When determining a definition of contextual threat intelligence, it can be helpful to focus on what it is not. It’s not a simple list of threats that’s refreshed periodically. A list of known phishing sites may be updated daily or weekly, but given that we know the average lifespan of an in-use phishing site to be mere hours, there’s no guarantee such lists are up to date.

“Some threat intelligence providers pursue the low-hanging fruit of threat intelligence—the cheap and easy kind,” says Webroot Sr. Product Marketing Manager Holly Spiers. “They provide a list of IP addresses that have been deemed threats, but there’s no context as to why or when they were deemed a threat. You’re not getting the full story.”

Contextual threat intelligence is that full story. It provides not only a constantly updated feed of known threats, but also historical data and relationships between data objects for a fuller picture of the history of a threat based on the “internet neighborhood” in which it’s active.

Unfortunately, historical relationships are another aspect often missing from low-hanging threat intelligence sources. Since threat actors are constantly trying to evade detection, they may use a malicious URL for a period before letting it go dormant while its reputation cools down. But because it takes more effort to start from scratch, it’s likely the actor will return to it before too long.

“Our Threat Investigator tool, a visualization demo that illustrates the relationship between data objects, is able to show how an IP address’s status can change over a period of time,” says Spiers. “Within six months, it may show signs of being a threat, and then go benign.”

What are the elements of context?

Over the course of a year, millions of internet objects change state from benign to malicious and back numerous times as cyber criminals attempt to avoid detection. And because threats are often interconnected, being able to map their relationships allows us to better predict whether a benign object has the potential to turn malicious. It also helps us protect users from never-before-seen threats and even predict where future attacks may come from.

That’s where the power in prediction lies—in having contextual and historical data instead of looking at a static point in time.

Some elements that are needed to provide a deeper understanding of an interwoven landscape include:

  • Real-time data from real-world sources, supplemented by active web crawlers and passive sensor networks of honeypots designed to attract threats, provides the raw material for training machine learning models to spot threats.
  • An ability to analyze relationships connecting data objects allows threat intelligence providers to make connections as to how a benign IP address, for example, may be only one step away from a malicious URL, and to predict with high confidence whether the IP address will turn malicious in the future.
  • Both live and historical data helps in the development of a trusted reputation score based on behavior over time and common reputational influencers such as age, popularity, and past infections.
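A toy version of the last point might combine those reputational influencers with simple weights. The weights and scaling below are invented purely for illustration, not a description of any real scoring model:

```python
# Hypothetical weighted-factor sketch of a reputation score built from
# common influencers: age, popularity, and past infections. All weights
# are invented for illustration.

def reputation_score(age_days: int, popularity: float, past_infections: int) -> int:
    """Combine reputational influencers into a 1-100 score.

    popularity is assumed to be normalized to [0, 1].
    """
    age_factor = min(age_days / 365.0, 1.0) * 40        # older -> more trusted
    popularity_factor = popularity * 40                  # widely seen -> more trusted
    infection_penalty = min(past_infections * 15, 60)    # history of compromise hurts
    score = 20 + age_factor + popularity_factor - infection_penalty
    return max(1, min(100, round(score)))

print(reputation_score(age_days=3650, popularity=0.9, past_infections=0))  # long-lived, popular, clean
print(reputation_score(age_days=7, popularity=0.01, past_infections=2))    # new, obscure, compromised
```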

Seeing the signal through the noise

Context is the way to turn terabytes of data into something meaningful that prompts action. Having the power to be able to dig into the relationships of internet objects provides the context that matters to technology vendors. For consumers of contextual threat intelligence, it means fewer false positives and the ability to prioritize real threats.

“Working with real-world vendors is key,” according to Spiers. “The reach of contextual threat intelligence and number of individuals it touches can grow exponentially.”

What Defines a Machine Learning-Based Threat Intelligence Platform?

As technology continues to evolve, several trends are staying consistent. First, the volume of data is growing exponentially. Second, human analysts can’t hope to keep up—there just aren’t enough of them and they can’t work fast enough. Third, adversarial attacks that target data are also on the rise.

Given these trends, it’s not surprising that an increasing number of tech companies are building or implementing tools that promise automation and tout machine learning and/or artificial intelligence, particularly in the realm of cybersecurity. In this day and age, stopping threats effectively is nearly impossible without some next-generation method of harnessing processing power to bear the burden of analysis. That’s where the concept of a cybersecurity platform built on threat intelligence comes in.

What is a platform?

When you bring together a number of elements in a way that makes the whole greater or more powerful than the sum of its parts, you have the beginnings of a platform. Think of it as an architectural basis for building something greater on top. If built properly, a good platform can support new elements that were never part of the original plan.

With so many layers continually building on top of and alongside one another, you can imagine that a platform needs to be incredibly solid and strong. It has to be able to sustain and reinforce itself so it can support each new piece that is built onto or out of it. Let’s go over some of the traits that a well-architected threat intelligence platform needs.

Scale and scalability

A strong platform needs to be able to scale to meet demand for future growth of users, products, and functionality. Its size and processing power need to be proportional to the usage needs. If a platform starts out too big too soon, then it’s too expensive to maintain. But if it’s not big enough, then it won’t be able to handle the burden its users impose. That, in turn, will affect the speed, performance, service availability, and overall user experience relating to the platform.

You also need to consider that usage fluctuates, not just over the years, but over different times of day. The platform needs to be robust enough to load balance accordingly, as users come online, go offline, increase and decrease demand, etc.

Modularity can’t be forgotten, either. When you encounter a new type of threat, or just want to add new functionality, you need to be able to plug that new capability into the platform without disrupting existing services. You don’t want to have to worry about rebuilding the whole thing each time you want to add or change a feature. The platform has to be structured in such a way that it will be able to support functionality you haven’t even thought of yet.
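The modularity described above is often achieved with a registry of pluggable components. The generic Python pattern below is one illustrative way to add a new capability without disturbing existing services; it is a common design sketch, not a description of Webroot’s architecture:

```python
# Generic plugin-registry pattern: new analyzers register themselves,
# and the platform dispatches to them without changes to existing code.

from typing import Callable

ANALYZERS: dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that plugs a new analyzer into the platform."""
    def wrapper(fn: Callable[[str], str]) -> Callable[[str], str]:
        ANALYZERS[name] = fn
        return fn
    return wrapper

@register("url")
def analyze_url(obj: str) -> str:
    return f"url-verdict:{obj}"

@register("file")  # added later without touching the url analyzer
def analyze_file(obj: str) -> str:
    return f"file-verdict:{obj}"

def dispatch(kind: str, obj: str) -> str:
    """Route an object to whichever analyzer handles its kind."""
    return ANALYZERS[kind](obj)

print(dispatch("url", "example.com"))
```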

Sensing and connection

A threat intelligence platform is really only as good as its data sources. To accurately detect and even predict new security threats, a platform should be able to take data from a variety of sensors and products, then process it through machine learning analysis and threat intelligence engines.

Some of the more traditional sensors are passive “honeypots” (i.e., devices that appear open to attack, which collect and return threat telemetry when compromised). Unfortunately, attack methods are now so sophisticated that some can detect the difference between a honeypot and a real-world endpoint, and can adjust their behavior accordingly so as not to expose their methods to threat researchers. For accurate, actionable threat intelligence, the platform needs to gather real-world data from real-world endpoints in the wild.

One of the ways we, in particular, ensure the quality of the data in the Webroot® Platform is by using each deployment of a Webroot product or service—across our home user, business, and security and network vendor bases—to feed threat telemetry back into the platform for analysis. That means each time a Webroot application is installed on some type of endpoint, or a threat intelligence partner integrates one of our services into a network or security solution, our platform gets stronger and smarter.

Context and analysis

One of the most important features a threat intelligence platform needs is largely invisible to end users: contextual analysis. A strong platform should have the capacity to analyze the relationships between numerous types of internet objects, such as files, apps, URLs, IPs, etc., and determine the level of risk they pose.

It’s no longer enough to determine if a given file is malicious or not. A sort of binary good/bad determination really only gives us a linear view. For example, if a bad file came from an otherwise benign domain that was hijacked temporarily, should we now consider that domain bad? What about all the URLs associated with it, and all the files they host?

For a more accurate picture, we need nuance. We must consider where the bad file came from, which websites or domains it’s associated with and for how long, which other files or applications it might be connected to, etc. It’s these connections that give us a three-dimensional picture of the threat landscape, and that’s what begins to enable predictive protection.
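The relationship idea can be sketched as risk propagating through a small graph of connected objects, with suspicion attenuating at each hop. The graph, the decay factor, and the object names below are assumptions made for illustration:

```python
# Illustrative sketch: risk spreads outward from a known-bad object, so
# a domain one hop away inherits elevated (but attenuated) suspicion.

from collections import deque

EDGES = {  # hypothetical associations between internet objects
    "bad.exe": ["files.example.com"],
    "files.example.com": ["example.com", "cdn.example.net"],
    "example.com": [],
    "cdn.example.net": [],
}

def propagate_risk(source: str, base_risk: float = 1.0, decay: float = 0.5) -> dict[str, float]:
    """Breadth-first spread of risk outward from a malicious object."""
    risk = {source: base_risk}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in EDGES.get(node, []):
            if neighbor not in risk:
                risk[neighbor] = risk[node] * decay
                queue.append(neighbor)
    return risk

print(propagate_risk("bad.exe"))
# the hosting subdomain is more suspect than objects two hops away
```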

The Bottom Line

When faced with today’s cyberattacks, consumers and organizations alike need cybersecurity solutions that leverage accurate threat telemetry and real-time data from real endpoints and sensors. They need threat intelligence that is continually re-analyzed for the greatest accuracy, by machine learning models that are trained and retrained, which can process data millions of times faster than human analysts, and with the scalability to handle new threats as they emerge. The only way to achieve that is with a comprehensive, integrated machine-learning based platform.

Cloud Services in the Crosshairs of Cybercrime

It’s a familiar story in tech: new technologies and shifting preferences raise new security challenges. One of the most pressing challenges today involves monitoring and securing all of the applications and data currently undergoing a mass migration to public and private cloud platforms.

Malicious actors are motivated to compromise and control cloud-hosted resources because they can gain access to significant computing power through this attack vector. These resources can then be exploited for a number of criminal money-making schemes, including cryptomining, DDoS extortion, ransomware and phishing campaigns, spam relay, and for issuing botnet command-and-control instructions. For these reasons—and because so much critical and sensitive data is migrating to cloud platforms—it’s essential that talented and well-resourced security teams focus their efforts on cloud security.

The cybersecurity risks associated with cloud infrastructure generally mirror the risks that have been facing businesses online for years: malware, phishing, etc. A common misconception is that compromised cloud services have a less severe impact than more traditional, on-premises compromises. That misunderstanding leads some administrators and operations teams to cut corners when it comes to the security of their cloud infrastructure. In other cases, there is a naïve belief that cloud hosting providers will provide the necessary security for their cloud-hosted services.

Although many of the leading cloud service providers are beginning to build more comprehensive and advanced security offerings into their platforms (often as extra-cost options), cloud-hosted services still require the same level of risk management, ongoing monitoring, upgrades, backups, and maintenance as traditional infrastructure. For example, in a cloud environment, egress filtering is often neglected. But when implemented, egress filtering can foil a number of attacks on its own, particularly when combined with a proven web classification and reputation service. The same is true of management access controls, two-factor authentication, patch management, backups, and SOC monitoring. Web application firewalls, backed by commercial-grade IP reputation services, are another often-overlooked layer of protection for cloud services.
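To illustrate the egress-filtering point, here is a minimal sketch of checking outbound destinations against a category lookup. The category table is a toy stand-in for a commercial web classification and reputation service, and the hostnames are hypothetical:

```python
# Toy egress filter: permit outbound connections only to destinations
# whose classification is not in a blocked-category set.

BLOCKED_CATEGORIES = {"malware", "command-and-control", "phishing"}

CLASSIFICATION = {  # stand-in for a real web classification service
    "update.example.com": "software-updates",
    "c2.badhost.example": "command-and-control",
}

def allow_egress(destination: str) -> bool:
    """Return True if outbound traffic to this destination is permitted."""
    category = CLASSIFICATION.get(destination, "uncategorized")
    return category not in BLOCKED_CATEGORIES

print(allow_egress("update.example.com"))   # True
print(allow_egress("c2.badhost.example"))   # False
```

A stricter policy might also block "uncategorized" destinations by default; that choice is part of the risk-management trade-off discussed above.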

Many midsize and large enterprises are starting to look to the cloud for new wide-area network (WAN) options. Again, here lies a great opportunity to enhance the security of your WAN, whilst also achieving the scalability, flexibility, and cost-saving outcomes that are often the primary goals of such projects. When selecting these types of solutions, it’s important to look at the integrated security options offered by vendors.

Haste makes waste

Another danger of the cloud is the ease and speed of deployment. This can lead to rapidly prototyped solutions being brought into service without adequate oversight from security teams. It can also lead to complacency, as the knowledge that a compromised host can be replaced in seconds may lead some to invest less in upfront protection. But it’s critical that all infrastructure components are properly protected and maintained because attacks are now so highly automated that significant damage can be done in a very short period of time. This applies both to the target of the attack itself and in the form of collateral damage, as the compromised servers are used to stage further attacks.

Finally, the utilitarian value of the cloud is also what leads to its higher risk exposure, since users are focused on a particular outcome, such as storing and processing large volumes of data at high speed. This solutions-based focus may not accommodate a comprehensive end-to-end security strategy well. The dynamic pressures of business must be supported by newer and more dynamic approaches to security that ensure the speed of deployment for applications can be matched by automated SecOps deployments and engagements.

Time for action

If you haven’t recently had a review of how you are securing your resources in the cloud, perhaps now is a good time. Consider what’s allowed in and out of all your infrastructure and how you retake control. Ensure that the solutions you are considering have integrated, actionable threat intelligence for another layer of defense in this dynamic threat environment.

Have a question about the next steps for securing your cloud infrastructure? Drop a comment below or reach out to me on Twitter at @zerobiscuit.

Webroot CTO Hal Lonas on Rethinking the Network Perimeter

“What are our cybersecurity protocols?” This question is one that has, undoubtedly, been top of mind for CTOs at numerous corporations and government agencies around the world in the wake of recent ransomware attacks. Given the hundreds of thousands of endpoint devices in more than 150 countries that were infected in the latest global attack, WannaCry, can you blame them?

Cybersecurity stock buying trends are on the rise. According to CNN Money, the PureFunds ISE Cyber Security ETF (HACK), which owns shares in most of the big security companies, was up more than 3 percent in early trading the Monday following the first WannaCry attacks. Positive performance in cybersecurity stocks comes as no surprise as organizations shore up their defenses in preparation for future attacks—big or small. This is the security climate in which we live.

While the numbers have been rising on both fronts, do the affected organizations truly understand what to look for when addressing cybersecurity? Where should the protection start? What obstacles might organizations need to overcome? How can they be better prepared?

Hal Lonas, chief technology officer at Webroot, takes us beyond the sobering wake-up call that attacks like WannaCry bring, and discusses actionable advice companies should consider when fortifying systems against cybercriminals.


Where should an organization start when thinking about combating malicious files entering the network?

Organizations should think about their security in terms of layers. Between the user sitting in the chair and the sites and services they access from their workstations, every level of security is equally important. The vehicles malicious files use to infiltrate the network shouldn’t be ignored either. Is it a URL? Is it a USB key that’s physically carried into the office? Or maybe it’s an employee who takes their laptop home and uses it on an unsecured network—the possibilities are endless. We’re in a very interesting era in which mobility has become the norm, there are more internet-connected devices than ever, and there are more angles every day for cybercriminals to launch attacks. Essentially, the perimeter is dissolving. That means organizations need to rethink how they approach protecting their networks.

We’ve heard the term “dissolving” a number of times recently when talking about the traditional notion of the network. Can you speak more on that?

Let’s use my phone as an example. Right now, it’s connected to the secure employee wireless in this office. When I hit the coffee shop later for a meeting, it might be on their public Wi-Fi. While I’m driving to the airport this afternoon, it’ll be on a cellular network. By tonight, it’ll be on the guest Wi-Fi in a hotel. With each movement and interaction, perimeters converge and overlap, and this phone is exposed to different levels of security across a variety of networks. Each step means I’m carrying data that could be exposed, or even malware that could be spread, between those different networks. These days, company work happens everywhere, not just on a corporate computer within the security of an organization’s firewall. That’s what we mean by dissolving perimeters.

One line of defense is endpoint protection. Whether you’re using a mobile device or laptop, that protection goes with the device everywhere. Even as you switch between networks, you know that’s one layer of protection that’s always present. Network or DNS-level security is also key, to help stop threats before they even make it as far as the endpoint.

How does Webroot BrightCloud® Streaming Malware Detection fit into the layered approach? Is it cutting edge in terms of protecting against malicious files at the perimeter?

Streaming Malware Detection is pushing the boundaries of network protection. As files stream through network devices—i.e., as they’re in the process of being downloaded in real time—Streaming Malware Detection determines whether the files are good or bad at the network level. That means the solution can analyze files in transit to stop threats before they ever land on the endpoint at all. We partner with the industry’s top network vendors, who have integrated this and other Webroot technologies as part of their overall approach to stopping malicious files at the perimeter.
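Conceptually, scanning files in transit looks something like the sketch below: each chunk is inspected as it streams, so a verdict can be reached before the download completes. The byte-signature check is a toy stand-in for the real classification logic, and the signature itself is invented:

```python
# Toy streaming scan: inspect byte chunks as they arrive and block as
# soon as a (hypothetical) malicious signature appears, even if the
# signature spans a chunk boundary.

def stream_scan(chunks, signature: bytes = b"EVIL") -> str:
    """Inspect a stream of byte chunks; return a verdict mid-transfer if possible."""
    carry = b""
    for chunk in chunks:
        buf = carry + chunk
        if signature in buf:
            return "block"   # verdict reached before the file lands on the endpoint
        carry = buf[-(len(signature) - 1):]  # keep the tail for boundary-spanning matches
    return "allow"

print(stream_scan([b"hello ", b"EV", b"IL payload"]))  # block
print(stream_scan([b"just a ", b"benign file"]))       # allow
```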

In terms of what we’re doing with Webroot products, we’re expanding the levels at which you can be protected—looking at more and more aspects of where we can protect you. We’re tightening the reins on endpoint protection, which we’ve traditionally done extremely well, and branching further into the network with Streaming Malware Detection, as well as network anomaly detection with FlowScape® Analytics. We aim to bring value to our customers by protecting holistically. We’re adapting as a company with our product offerings to this new reality we find ourselves in.

What cutting edge approaches is Webroot taking to combat what has already infiltrated the network?

We hear a lot about advanced persistent threats. The reality is that those long-resting, largely undetected threats do make their way through and land in an environment with the intention of wreaking havoc, but doing it low and slow to avoid detection. The malware authors are very smart, which is something we try to anticipate. Webroot is really good at a couple of different things, not least of which is that we’re incredibly patient with our endpoint products. Essentially, we’ll monitor something that’s unknown for however long it takes, journaling its behavior until we’re absolutely sure whether or not it’s malicious, and then handling it appropriately.

In addition, we’ve recently added a product that does the independent network anomaly detection I mentioned earlier: FlowScape Analytics. Essentially, it analyzes day-to-day activity within a network to establish a baseline, then if something malicious or abnormal happens, FlowScape Analytics instantly recognizes it and alerts us so that we can track it down. In conjunction with our other layers of protection, it’s a solid cybersecurity combination.
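The baseline-then-alert idea behind anomaly detection can be sketched with a simple z-score test. This toy example stands in for the far richer analytics a product like FlowScape Analytics performs; the traffic numbers and threshold are invented:

```python
# Toy anomaly detector: learn normal traffic volume from a baseline,
# then flag readings that deviate sharply from it.

from statistics import mean, stdev

def is_anomalous(baseline: list[float], reading: float, threshold: float = 3.0) -> bool:
    """Flag a reading more than `threshold` standard deviations from baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > threshold

normal_days = [100.0, 104.0, 98.0, 101.0, 97.0, 103.0, 99.0]
print(is_anomalous(normal_days, 102.0))   # False: within normal variation
print(is_anomalous(normal_days, 480.0))   # True: sudden spike worth alerting on
```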

What technology do you see helping to protect networks at the same scale and velocity threats are coming?

Streaming Malware Detection is a big one. Traditionally, malware has been sent into a sandbox where it has to execute and takes up resources. The sandbox also has to simulate customer environments. This approach comes with a lot of complexities and ends up wasting time for customers and users while awaiting a response. For scalability, analyzing the malicious files in transit at network speed frees up time and resources.

Is there anything else organizations should take into consideration? Machine learning at the endpoint level?

We’re always asking ourselves, “where’s the right juncture to layer in more security?” I’d like to see more organizations asking the same. You can look at our history, during which we developed a lightweight agent by moving the heavy lifting to the cloud, and that’s the theme we’ll continue to follow. The detection elements of machine learning can fit on our client, but we’ll do the computing-intensive and crowd protection work for machine learning in the cloud. That gives you the best efficacy, shares threat discoveries with all of our products and services in real time, and keeps devices running at optimal levels.