At Carbonite + Webroot, we’re always preaching about the importance of layering security solutions. Because here’s the truth: data is always at risk. Whether from cybercriminals, everyday mishaps or Mother Nature, businesses can put up all the defenses they want, but disaster only has to strike successfully once.
The global pandemic means more work is being conducted in the cloud, so this is no time to be lax about the security of cloud backups. Unless protection is redundant, organizations risk losing mission-critical data – for minutes, days or permanently, depending on the disaster – and putting their survival at risk.
That’s why layered protection in the cloud is so critical to cyber resilience. Without it, any one failure can be catastrophic.
So, how’s it done?
Let’s start with endpoints
For organizations managing hundreds or thousands of endpoints, backing each up to the cloud is important for keeping employees productive in the case of hardware failure, device theft, damage or malicious insiders. It’s easy to see how a laptop can be damaged, so it’s obvious for most that files stored locally should be backed up to the cloud.
But it’s also important to recognize that work done in the cloud should also be backed up. For example, one of the world’s most popular productivity tools for office workers, Microsoft 365, increasingly carries out its core functions in the cloud. But it has some serious gaps in terms of backup capabilities.
The average endpoint user may not know or care where important work files are stored, so long as they’re there when needed. This makes it important that Microsoft 365 data is backed up to the cloud – regardless of whether the user knows if files are being saved locally or through the application’s cloud capabilities.
Finally, working in the other direction, cloud-based cybersecurity offers another form of data protection from the cloud. This method avoids endpoints relying on out-of-date definitions of known-bad files, instead drawing on near real-time threat telemetry from the cloud. Once a threat is identified, all endpoints using the solution are protected in near real time.
But must also include servers
It’s less obvious to many of us that servers are at risk of becoming ground zero for data loss as well. Hardware sometimes fails, power cords can be tripped over, or worse…natural disasters can strike data centers, wiping out servers through fires, floods or other types of damage.
What good are endpoints without the servers that feed them information? Cloud computing technology offers a handful of flexible opportunities for backing up data housed on servers.
On-premise servers – used to store data locally based on a business’s preference, regulatory needs or other reasons – can and should still be backed up to the cloud in case of a localized outage. Usually this entails concentrating data within a single point of storage (a “vault”) that’s then bulk uploaded. This duplicated data can then be accessed in the event a physical location loses power or a fiber optic cable is severed by construction work, for example.
Off-premise server banks also can and should be protected by cloud backups. Many of these servers may store their data in public clouds, which are normally but not always highly reliable. Public cloud outages do happen. When they do, it pays to have another cloud backup solution to failover to so the business can continue to run.
Whether or not this data is stored in the cloud permanently or migrated there when needed, redundancy is established when on and off-premise server banks are backed up to the cloud.
Rounding out the redundancy is a disaster recovery as a service (DRaaS) solution. This form of high-availability replication anticipates a worst-case scenario for server data loss. With DRaaS, byte-level replication of changes on an organization’s systems is sent to the cloud. In the event of a disaster, the organization can fail over to that replicated copy and resume operations quickly.
Note that DRaaS is not a replacement for backup. These are two different solutions that work well alongside each other. Backup should apply to every server in an environment and offers long-term retention with flexible restore options. DRaaS is typically layered on top of backup for the most mission-critical servers, providing the option to either restore from backup or fail over directly and rapidly to another cloud, depending on the event that has rendered the production server or data inaccessible.
Maintain uptime, all the time
Threats to business data are all around us. Rates of ransomware are rising and remote workforces have ballooned since the outbreak of COVID-19. This is no time to trust in a single cloud as an organizational backup strategy. No single point of failure should keep users from accessing business-critical data. Luckily, there are many options for designing layered backup across clouds.
It’s not just that they’re making headlines more often. Ransomware rates really are rising. Given the recent spate of high-profile attacks, it’s worth remembering the difference between standard backup and high-availability replication.
Our research suggests that the costs of ransomware for businesses can amount to much more than an extortion payment. They include lost hours of productivity, reputational damage, compliance fines and more. But maintaining access to critical data at all times can undermine ransomware actors’ leverage over an organization, reduce recovery time and earn the good graces of regulators and the public.
Ultimately, doing so comes down to answering the question: what data does my business simply need to back up, and what data can my business simply not do without? Knowing the difference helps to determine the Recovery Time Objective (RTO) for a given type of data or application.
A 24-hour recovery time may fall within the RTO for non-essential data and applications. For mission-critical data, on the other hand, a 24-hour recovery period may exceed the acceptable amount of time to be without access to data. It could drive up the cost of a data breach significantly, perhaps even higher than a ransomware payment.
Also, it may come down to the amount of change-rate data that can be acceptably lost. Knowing the acceptable Recovery Point Objective (RPO) can be as important as knowing the required RTO. For instance, a highly transactional system performing critical Online Transaction Processing (OLTP) cannot afford to lose the data generated between backup cycles.
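As a rough illustration of how these two objectives interact, a proposed backup schedule can be checked against an application’s RPO and RTO. The function and numbers below are hypothetical, not drawn from any particular product:

```python
# Hypothetical sketch: does a backup schedule meet an application's objectives?
# RPO limits how much change-rate data may be lost (time between backups);
# RTO limits how long restoring access may take.

def meets_objectives(backup_interval_hrs, restore_time_hrs, rpo_hrs, rto_hrs):
    return backup_interval_hrs <= rpo_hrs and restore_time_hrs <= rto_hrs

# A nightly backup (24-hour interval) restored in ~8 hours:
print(meets_objectives(24, 8, rpo_hrs=24, rto_hrs=24))   # fine for non-essential data
print(meets_objectives(24, 8, rpo_hrs=0.25, rto_hrs=1))  # fails for an OLTP system
```

The same check, run per system, is one way to decide which workloads need high-availability replication rather than standard backup.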
Well-designed data backup plans tend to be a blend of both standard backup and high availability, so it helps to know the difference when determining which is the better fit for a given system, application or set of data.
There are all sorts of good reasons to keep regular, reliable backups of business systems. These may concern the normal conveniences of document retention – not having to begin a project from scratch in the case of accidental deletion, for instance – or to satisfy industry or legal compliance regulations.
These backups are taken at pre-determined time intervals, typically once a day during non-working hours, and stored on a backup server. Often backups will be given an associated value called a retention, which allows organizations to keep certain backups for a longer period of time. For instance, a business may decide it’s necessary to keep daily backups for a total of 30 days; due to storage concerns, they drop off the server on day 31. However, regulations or corporate policies may require keeping certain backups longer, so organizations often designate a monthly or yearly backup with an extended retention of one or even up to seven years.
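A retention policy like the one described can be sketched in a few lines of code. This is an illustrative example only; the month-end and year-end rules are assumptions, and real backup products implement retention natively:

```python
from datetime import date, timedelta

# Illustrative retention sketch: keep dailies for 30 days, month-end backups
# for 1 year, and year-end backups for 7 years, mirroring the example policy.

def keep_backup(backup_date, today):
    age_days = (today - backup_date).days
    if age_days <= 30:
        return True                                   # daily retention window
    is_month_end = (backup_date + timedelta(days=1)).day == 1
    if is_month_end and age_days <= 365:
        return True                                   # monthly retention
    is_year_end = backup_date.month == 12 and backup_date.day == 31
    return is_year_end and age_days <= 7 * 365        # yearly retention

today = date(2021, 6, 1)
print(keep_backup(date(2021, 5, 20), today))   # recent daily -> kept
print(keep_backup(date(2021, 4, 10), today))   # 52-day-old daily -> dropped
print(keep_backup(date(2020, 12, 31), today))  # year-end -> kept for years
```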
Recently, backup servers have been targeted by ransomware actors. Criminals will study an organization’s environment and specifically its backup services. Therefore, it’s extremely important to have a backup for the backup. One of the preferred methods is a secondary cloud copy of the backup server. Since the cloud copy sits on a separate network, it provides a layer of security, making it far more difficult for an attacker to reach that network and target the secondary backup copy.
In most cases, backups like those discussed above have recovery times of hours for a localized power outage or even days for a flooded server room, for example. For an HR system, this RTO may be acceptable. For a point-of-sale system, this could mean significant lost revenue.
When a backup’s RTO and RPO time values do not meet the needs for recovering a company’s critical systems (OLTP servers, for instance), high-availability replication is an effective alternative for ensuring required operational performance levels are met. High-availability replication accomplishes this by keeping an exact copy of critical servers, maintained through real-time, byte-level replication, that remains powered off until needed.
When that time comes, a failover procedure is initiated, and the copy assumes the role of the production system. The failover process typically occurs within a matter of seconds or minutes, depending on the server configuration and network latency. In cases of hardware failure or data center disasters, high-availability replication can stave off a data loss disaster.
However, since replication is real-time, the standby copy can be corrupted if the primary is attacked by ransomware. Therefore, system snapshots may be required to maintain clean point-in-time copies of the system. Snapshots are typically non-intrusive, do not noticeably delay replication and provide a failover with a better RPO than backup.
Like with backup, an off-site cloud solution can step in if on-site servers are out of commission. Latency can lengthen recovery slightly as the off-site cloud boots up, but the time to recovery still feels like a blip to users or customers.
For some organizations there may be no data critical enough to warrant implementing this high-availability architecture. For others, all data may be considered essential. For most, the reality will fall somewhere in the middle. If companies are highly regulated or bound by specific corporate retention requirements, a combination of high-availability replication and backup will likely exist for the same server.
Ensuring resilience against ransomware
In a blended backup/high-availability strategy, what matters most is deciding which systems are covered by which before the worst happens. Whether handling backup for your own organization or for clients, it’s important to have a well-tested backup plan in place that accounts for RTOs based on acceptable amounts of downtime for each type of data and application.
Cybersecurity analysts are charting both a rise in ransomware incidents and in the amounts cybercriminals are demanding from businesses to restore their data. That’s bad news in itself, but what’s often overlooked are the additional ways – beyond payments victims may or may not choose to make – victims pay for these attacks.
Our latest threat report found the average ransomware payment peaked in September 2020 at more than $230,000. But the ransom alone doesn’t tell the whole story. To capture it, we conducted another study to tally and quantify the collateral damage from surging ransomware incidents and rising extortion amounts.
These are some of those effects inflating the price tag of an attack, which we call The Hidden Costs of Ransomware.
1. Lost productivity
Our survey data found that hours of lost productivity from a ransomware incident were closely related to the length of time to discovery of the attack. Generally, faster detection meant limiting the spread of the infection and less time spent on remediation. In other words, the further ransomware spreads the longer it takes to eradicate. Unfortunately, almost half (49%) of respondents to our survey reported being unaware of the infection for more than 24 hours.
A third of incidents were reportedly remediated in 1-3 hours, while 17 percent required 3-5 days of effort. We attempted to quantify these lost hours based on the hours spent on remediation (easily measurable) and the opportunity cost of diverting IT teams from their “blue sky” responsibilities (tougher to measure).
Factoring in varying costs of IT resources, we determined low/high cost estimates for the hours of remediation reported by survey respondents. These ran from $300/$750 for three hours of remediation to $4,000/$10,000 for five workdays of remediation. (A full breakdown is available in the report.)
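For illustration, the low/high figures above are consistent with flat hourly rates of roughly $100 and $250. The sketch below uses those inferred rates, which are our reading of the numbers rather than values quoted in the report:

```python
# Back-of-the-envelope sketch of the low/high remediation cost estimates.
# The $100 and $250 hourly rates are inferred from the report's figures,
# not quoted directly from it.

LOW_RATE, HIGH_RATE = 100, 250  # USD per hour of IT remediation effort

def remediation_cost(hours):
    """Return (low, high) cost estimates for a given number of remediation hours."""
    return hours * LOW_RATE, hours * HIGH_RATE

print(remediation_cost(3))       # three hours of remediation
print(remediation_cost(5 * 8))   # five 8-hour workdays of remediation
```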
2. Downtime costs
Regardless of whether an organization decides to pay a ransom, how long does it take to return to normal operations?
In our study, businesses that didn’t pay ransoms recovered their data more quickly than those that did. Specifically, 70 percent of companies that didn’t pay a ransom were able to recover their data within a business day, compared to 46 percent that did.
Presumably this comes down to whether a target had readily available backups, and the time lost to back-and-forth with extortionists or to making a payment.
One of the most important factors in determining downtime costs is specifying the value of the data that’s become unavailable. Is it critical to conducting business operations? Or is it nice to have but not essential like marketing or prospecting data?
Determining data’s value helps businesses formulate their recovery time objectives (RTOs). For non-critical data and applications, a 24-hour recovery time may fall within the RTO. For mission-critical data, a 24-hour recovery may exceed the tolerable limit and help drive the cost of downtime higher than the ransom itself.
3. Impact on client operations
Nearly half (46%) of the businesses in our survey reported client operations being adversely affected by a ransomware incident at their own company. This could quickly sever business relationships that take a long time to build and result in the loss of anticipated revenue. But that may not even be the riskiest aspect of client operations being affected.
The implications of supply chain attacks, especially for MSPs, came into sharper focus last year following the SolarWinds attack. Were a cybercriminal to compromise a trusted supplier to distribute ransomware, rather than for surveillance as in that attack, the costs could be enormous.
MSPs should seriously consider the possibility of becoming the source for such a supply chain attack, especially those with clients in critical industries like energy, public utilities, defense and healthcare.
4. Brand and reputational damage
Consider the headlines and airtime generated by ransomware attacks against high-profile targets. A Google search of “Garmin ransomware,” for instance, returns more than 1 million results. While your organization may not be a global tech giant, it likely doesn’t have the staying power of one, either.
In our study, 38 percent of businesses admitted their brand was harmed by a run-in with ransomware. Beyond lost customers, publicity issues could force businesses to enlist the services of expensive PR or communications firms to repair the damage.
Businesses with the resources to do so should consider themselves lucky, because the alternative is worse. Silence or an uncoordinated response to a ransomware attack – especially one that affects customers – can come off as unserious, callous or ineffective.
Reputational damage in an age of heightened sensitivity to cybersecurity incidents can have significant consequences. Our data shows that 61 percent of consumers switched some or all of their business to a competing brand in the last year, and 77 percent admit they withdraw their loyalty more quickly than they once did.
The list goes on…
By no means is this an exhaustive list of the hidden costs of ransomware. They extend to fines for breaches of compliance regulation, the rising costs of cybersecurity insurance and a host of other unforeseen consequences.
For the complete findings from our survey and our recommendations for avoiding these hidden costs, download the full report.
Cyber resilience refers to a business’s ability to mitigate damage to its systems, processes and even its reputation. It’s based on the principle that, in the real (and really connected) world, adverse events occur. This could be in the form of a user enabling a breach by providing sensitive information during a phishing attack, through a new threat known as a “zero day” being weaponized against a business, or an event of any complexity in between.
When it comes to building a cyber resilient business, technology is an important piece. But it’s not the only one. A well-rounded security strategy is also essential. People and processes are key ingredients when it comes to that.
Audit checklists are a great place to start when ensuring your business is taking a holistic approach to data security, and so is this revealing conversation between Channel E2E and MSP Alert editor Joe Panettieri and a product marketing expert at OpenText.
The two discuss how there’s no silver bullet for all the potential threats to your data security, but how adopting the right mindset can help organizations begin to think about security differently. Our experts cover the “train, block, protect, backup and recover” model and what solutions for each can look like as part of a real-life security stack.
The two touch on the importance of user security training, variables introduced by widespread remote workforces and how backup can undermine ransomware actors. Whether you’re designing a cybersecurity framework for your own business or putting one in place for clients, you won’t want to miss this conversation.
For many U.S. workers the switch to remote work is a permanent one. That means more high-stakes work is being conducted on self-configured home networks. For others, home networks are simply hosting more devices as smart doorbells, thermostats and refrigerators now connect to the internet.
Security experts warn that while the internet of things (IoT) isn’t inherently a bad thing, it does present concerns that must be considered. Many devices come pre-configured with inherently poor security. They often have weak or non-existent passwords set as the default.
As our guest and host Joe Panettieri discuss, these are issues that would be addressed on corporate networks by a professional IT administrator. The conversation covers the issues of IoT and home network security both from the perspective of the average family household and what the age of remote work means for employees working on their own networks.
Security intelligence director Grayson Milbourne brings a unique perspective to the podcast. Having held senior roles in both threat intelligence and product management, Milbourne is acutely aware of the threats security products come up against. He knows both the cyber threat landscape and the consumer internet security market, so he’s able to provide insightful advice for how tech-loving homeowners can keep personal networks powerful and protected.
Milbourne suggests problems of IoT and home network security could be addressed with a cybersecurity version of ENERGY STAR ratings. A program could formalize current IoT security best practices and incorporate them into a standard consumers recognize.
During this informative podcast, Panettieri and Milbourne discuss that idea and more cybersecurity topics related to IoT devices. They cover:
- The difference between device security and the security of the app used to control it
- How to leverage user reviews while researching IoT devices and what security concerns to check on before buying
- Privacy and data collection issues, including why one of the most common IoT devices may be among the most intrusive
- Configuring IoT devices to prevent them from joining rogue IoT zombie networks
Whether you’re an IT administrator trying to secure remote workers or just own a smart TV, there’s something in this conversation for you. Be sure to give it a listen.
Password predictability is one of the most significant challenges to overall online security. Well aware of this trend, hackers often seek to exploit what they assume are the weak passwords of the average computer user. With a little bit of background information, “brute forcing” a simple password is a straightforward undertaking.
How are passwords cracked?
Cybercriminals use computing power to crack passwords with a method known as a brute force attack. With this method, an attacker guesses at the password repeatedly with the help of computer software/scripts. This makes the process automated and essentially effortless for the attacker.
The weaker the password (meaning the easier it is to guess), the quicker an attacker can crack it with computing power.
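To see why, consider a rough back-of-the-envelope estimate. The guess rate below is an assumption chosen for illustration; real attack speeds vary enormously with hardware and with the hashing algorithm protecting the password:

```python
# Rough sketch of why weak passwords fall quickly to brute force.
# GUESSES_PER_SECOND is an illustrative assumption, not a measured figure.

GUESSES_PER_SECOND = 1e10  # assumed offline attack against a fast hash

def worst_case_crack_seconds(charset_size, length):
    """Seconds to exhaust the full search space at the assumed guess rate."""
    return charset_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters vs. 12 mixed-case letters, digits and symbols:
print(worst_case_crack_seconds(26, 8))    # ~21 seconds
print(worst_case_crack_seconds(94, 12) / (3600 * 24 * 365))  # over a million years
```

The gap between the two results is the entire argument for longer, less predictable passwords: each added character multiplies the search space.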
So, how do we combat this?
The problem is password predictability
Passwords can be very easy to guess. Ironically, one factor that contributes to this is supposed to make passwords safer: the uniform standard most websites impose on users creating a new password. Typically, sites require a capital letter, at least six characters, a number and one special character.
Attackers can use this information to guess where each character type will appear, relying only on the predictable tendencies of human users. And because many users create a single password that meets these requirements and reuse it across multiple sites like Netflix, Facebook and Instagram, getting lucky once can lead to a bonanza for cybercriminals.
Here is an example of a password that would meet the requirements of most websites: Example1234!
This would be considered “secure” in most cases because it meets the most common internet standard for password creation. Now swap “Example” out for the name of a child or pet, and the easily remembered combination is very likely to be someone’s actual, real-life password. It’s easy for the user to remember, and therefore convenient to use across multiple sites.
Let’s assume a user has a pet named Toby and plug it into the above example format: Toby1234!
This is not a strong password. Pets’ names, children’s names and birthdays are often easily discoverable, especially by mining social media accounts. An attacker may just need to do a little recon on Facebook to scrounge up a handful of likely options.
Passwords vs. Passphrases
A password is a short string of mixed characters. A passphrase is a longer string of text making up a phrase or sentence. The important thing to know about passphrases is that, when allowed, they’re far more secure than passwords. The idea that a password should be one word is outdated, and retiring it would benefit user security greatly.
A method for devising a passphrase is to simply pick a line from your favorite movie, book or song and mix it with capitals and numbers. If we take Arnold’s famous line “I’ll be back,” we can easily make it into a secure passphrase.
Original: “I’ll be back”
Remove quote marks and spaces, since they often can’t be used as password inputs: illbeback
Add some capitals: iLLbeBack
Add Numbers: iLL3beBack
And finally, a special character: iLL3beBack$
As a fun test, you can use this password-checking tool to see how long it would take a computer to crack your new creation. How long would it take to crack yours?
For comparison, let’s take one of our simple password examples from above and see how long it would take to crack. We can use Toby1234! (and yes, some people do use such simple passwords).
As you can see, it wouldn’t take long at all.
What about our new passphrase, iLL3beBack$?
I think we’ll be secure for now.
More tips and tricks for password safety
Using a password manager is the most practical way to make passwords more secure. Users tend to gravitate toward the most convenient solution to a given problem, and password managers keep them from having to memorize a series of complex passwords for different sites. The user can automatically save passwords with an internet browser plugin and let autofill features handle the rest.
Here are some other good rules of thumb for password safety:
- Use a password generator
- Use two-factor authentication (2FA) as much as possible
- Don’t reuse passwords
- Be unpredictable in password formatting
Don’t let a predictable password come back to bite you. When made up of easily guessable public information, a weak password can be cracked in minutes. Instead, choose a passphrase or rely on one of the many secure password management tools available on the web today.
It’s important for a business to be prepared with an exercised business continuity and disaster recovery (BC/DR) plan before it’s hit with ransomware so that it can resume operations as quickly as possible. Key steps and solutions should be followed to prepare for and respond to cyber threats or attacks against your organization.
It may be as simple as the deployment of antivirus plus backup and recovery applications for your end users, or a more complex approach with security operations center (SOC) tools or managed response solutions coupled with network security tools such as DNS and Web filtering, network and endpoint firewalls, VPNs, backup and recovery and others.
It’s also essential to ensure end-users are trained on ransomware threats as a part of a good security awareness training program. The bottom line is, if prevention tools and training fail and your organization is compromised, you need to have a protection plan that gets your company assets and resources back to work quickly and securely.
What preparation is needed
When contemplating an in-depth plan, specific questions come to mind—the whats, the hows, the whys and, most importantly, the whos must be defined in the plan. When asking these questions, we need to be prepared to identify the resources, people and applications included. We must determine how to react to the situation and execute the logical steps and processes required to reduce damage as quickly as possible.
Below are some questions to get us started.
- Who will be involved in recovery and communication when your DR plan is in action?
- How much downtime can your organization withstand?
- What service level agreement (SLA) do we need to provide to the business and users?
- What users do we need to recover first?
- What tools do we have to reduce risk and downtime within the environment?
- How are user networks separated from operational or business networks?
- How quickly can data protection tools get us up and running again?
- Can users get their data back if an endpoint device is compromised?
- Can we determine when the ransomware first hit the network or endpoint devices?
- Are we able to stop the proliferation of ransomware or malware throughout the network?
- Can we recover quickly to a specific point in time?
- Can our users access their data from the cloud before it has been restored?
The solutions below, coupled with an exercised BC/DR plan, will help reduce your organizational risk exposure and allow for quick remediation.
- An endpoint security solution capable of determining what events took place and when
- A DNS security solution capable of turning away security threats at the network level
- A solution for endpoint backup and recovery that can safeguard data should these other solutions be compromised
Lines of Communication
Equally important as the technology are the people who manage and maintain the systems that support the different business units within an organization. For example, your security team and your endpoint support team need to be in regular discussions about how they will communicate when under attack. You need to determine who is responsible for which systems, and when each team should be brought into the process during an attack.
System Response Ratings
A system response rating system can assist in determining which systems or employees require a higher degree or speed of response. To do this, organizations must specify the value of the system or resource and where that resource sits regarding protection or remediation priority. This is often determined by the value of the resource in monetary terms. For example, suppose the loss of a specific system would incur a massive loss of incoming revenue. In that case, it might be necessary to place a higher priority in terms of protection and remediation for it over, say, a standard file server.
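A rating like this can be sketched as a simple tiering function. The tiers and revenue thresholds below are purely hypothetical; each organization would set its own based on business impact:

```python
# Hypothetical system response rating: map a resource's hourly revenue
# impact to a protection/remediation priority tier. Thresholds are
# illustrative only.

def response_tier(hourly_revenue_loss_usd):
    if hourly_revenue_loss_usd >= 10_000:
        return "P1 - high-availability replication, restore first"
    if hourly_revenue_loss_usd >= 1_000:
        return "P2 - frequent backup, same-day restore"
    return "P3 - standard backup, best-effort restore"

print(response_tier(50_000))  # e.g. an OLTP or point-of-sale system
print(response_tier(200))     # e.g. a standard file server
```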
The same can be said for specific individuals. Often C-level resources and mid-tier executives need to be out in front of a situation, which highlights the importance of making sure their resources like laptops and portable devices are protected and uncompromised. They are often as important as critical servers. It is necessary to classify systems, users and customers regarding their criticality to the business and place priorities based on the rating of those resources.
Now that we know a bit of the who, what, and how, let’s look at how to recover from a single system to an entire enterprise.
Recovery and Remediation
Recovery is an integral part of any BC/DR plan. It gives organizations a playbook of what to do and when. But it’s not enough to recover your data. Admins also need to understand the remediation process that should be followed to prevent further infection of systems or proliferation of malware within an organization.
Consider an example scenario: ransomware hits users’ laptops, encrypting all of their data. The laptops have antivirus protection, but no DNS protection. Network security is in place in the form of firewalls and VPNs, with some network segmentation. There is also a security team in addition to the end-user support team. The ransomware that hit is polymorphic, meaning it changes to prevent detection even if the first iteration of the ransomware is isolated.
The first step is consulting the endpoint security console to learn when and where the malware was first seen. If backups are still running, they should be suspended at this point to prevent infected data from being backed up. This can be done either from the dashboard or via an automated script that suspends all devices, or only those that have been compromised.
A dashboard should provide the ability to handle single systems easily, while scripts can help with thousands of devices at a time. APIs can help automate processes like bulk suspend and bulk restore of devices. At this time it may be prudent to block traffic from the infected areas, if network segmentation is enabled, to prevent the spread of malware.
Now it’s time to review the protection platform to determine the date the file was noticed, the dwell time and when the encryption/ransomware started executing. Once these facts have been determined, it’s possible to track down how the organization was breached. Understanding how malware entered the network is critical to preventing future infections. Since, in our example, ransomware infected devices, a tested and reliable recovery process is also necessary.
Understanding the timeline of events is critical to the recovery process, since the restore point must predate the infection. Once an admin can zero in on the date and time to restore from, affected devices can be compiled into a CSV file and marked with device ID numbers to reactivate any backups that were halted once the breach was discovered.
Once the data source, target device IDs, and the date and time to restore from are combined with a bulk restore script, a restore can be pushed to the same laptops or to new laptops. While this happens, solutions offering web portals can let users get back to work quickly.
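As a rough sketch of what such a bulk restore driver might look like: the CSV layout, the BackupClient class and its restore method are invented here for illustration, since each backup platform exposes its own API:

```python
import csv

# Hypothetical bulk-restore driver. BackupClient stands in for whatever
# SDK or REST wrapper a real backup platform provides.

class BackupClient:
    def restore_device(self, device_id, target_id, point_in_time):
        # A real client would call the platform's restore API here.
        print(f"restoring {device_id} -> {target_id} as of {point_in_time}")
        return True

def bulk_restore(client, csv_path, point_in_time):
    """Read device IDs from a CSV (columns: device_id,target_id) and
    push a restore of each to its target, returning the IDs restored."""
    restored = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if client.restore_device(row["device_id"], row["target_id"], point_in_time):
                restored.append(row["device_id"])
    return restored
```

In practice the same CSV, pointed at a suspend endpoint instead, could drive the earlier bulk-suspend step.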
The right tools, planning, importance hierarchy and communication channels across a business are essential for establishing cyber resilience. Once a timeline of a breach has been determined, these elements make restoring to a pre-infection state a process that can be planned and perfected with practice.
In a previous post, we talked a bit about what pen testing is and how to use the organizations that provide them to your benefit. But, what about when one of them hands a client a failing grade?
Consider this: you’re an MSP and you get a letter or email from one of your customers that reads:
“Dear ACME MSP,
We regret to inform you that you’ve had a Penetration Test Failure produced by “FriendlyHacker-Pentesting Inc,” and we’d like to discuss the details further to determine if you have what it takes to continue to handle our security needs.
Largest MSP Customer.”
A customer may not pass along this exact wording, but the implications are clear. The results can be embarrassing or, at worst, devastating. When a customer reaches out after failing penetration testing, it can put an MSP on its heels and create unnecessary angst. Should the MSP have been more involved in the testing? Did its tools cause the failure? Has the MSP soured its relationship with its client? Will the business be lost?
So, how should an MSP respond when a customer fails a pen test?
Some MSPs turn to self-doubt and start wondering if the layers of protection they’ve put in place are worth the costs. Others will immediately start pointing fingers at the tools that were identified in the pen test report. When a report comes through with a failure, it’s usually unexpected and can take time away from more important activities.
To save time and effort if this should happen to you, here are a few key elements of a good response to a pen test failure.
Immediately start asking questions.
- What kind of penetration testing was involved?
- Who performed the testing and what are their credentials?
- How was the penetration testing organization positioned to start taking action?
- Were the testers acting as “Red Team” or “Blue Team” actors?
- When did the testing take place?
- May I examine the data and reporting?
Review your tools’ configurations.
Rather than immediately assuming bad tech, it’s best to step back and evaluate each tool identified in the pen test report and the associated configurations, policies and control points. Often, a security tool is designed to identify, evaluate and/or stop bad actors along the threat chain. If it failed, it could be that a setting was disabled or misconfigured. Review all tools’ “best practice” guides, documents and suggestions before making assumptions.
Ask for partnership with the customer during their next review.
If the customer did not provide a heads up or pre-testing communication, request that you be more involved during their next review. If pen testing is important enough for them to do once, it’s probable that they’ll do it biannually or annually, depending on the industry and regulatory concerns. It’s always better to be involved in advance than after the fact.
Blue Teams vs. Red Teams: Which type of test was conducted?
The difference between a Blue Team and a Red Team is how much previous access they have to a target’s networks and devices. This can make a huge difference in how the results of a pen test are interpreted. When a Blue Team—with some previous knowledge of an organization and its IT systems—is able to breach a business, it may not be representative of real-world circumstances. It could be an internal IT admin who was able to find a vulnerability after poking around in a system she previously had access to.
When a Red Team compromises a client, on the other hand, it’s time to examine the reporting closely. Starting with zero knowledge of an organization’s systems, this type of breach could point to serious flaws in the defenses an MSP has set up for a client. Likely there are real holes here which need to be patched.
Evaluate the pen testing organizations
While there are many levels of testing capability, keep in mind that pen testers come from many IT walks of life. Former sysadmins, hackers and network administrators make up the most common testers. They come with their own experiences, specialties and biases.
One question to always ask is: what are the testing organization’s credentials? What is its background, and how did it come to this line of business? How long has it been testing?
The goal is to gauge whether the individuals who’ve conducted the test are knowledgeable enough to make judgments about your organization’s defenses. Did they actually breach the defenses, or are they simply reporting on a “potential” for a breach?
Not all testers are alike, and not all testing organizations are alike. Each has to successfully make the case for its own expertise in coming to the conclusions it has.
As I say, trust but verify. And be prepared to ask LOTS of questions if a client ever fails a pen test.
You’ve likely heard of software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and numerous other “as-a-service” platforms that help support the modern business world. What you may not know is that cybercriminals often use the same business concepts and service models in their own organizations as regular, non-criminal enterprises; i.e., the same practices the majority of their intended victims use.
As senior threat research analyst Kelvin Murray explains to Joe Panettieri, editor of ChannelE2E and MSSP Alert, in our most recent Hacker Files podcast, cybercrime-as-a-service “essentially follows the same path as most as-a-service things in business.” He goes on to explain, “If you were a small company in 2002 and needed to set up email, you’d set up a mail server, a mail relay, mail clients, and you might hire an email admin. And then you might have to set up things like spam filters yourself. People like Microsoft figured out that they could just provide all of [these services] from a web page and rent it out to companies and take all the hassle out of companies’ hands.” That’s the as-a-service model in a nutshell.
According to Kelvin, a very similar thing happened in the cybercriminal space. Effectively, talented criminals who’ve written successful malicious code have begun renting access to their own cybercrime “solutions” to lower-level criminals who either don’t have the resources or know-how to design, write, and execute cyberattacks on their own.
Of course, the people providing the so-called service don’t do so out of any goodness in their hearts; they do it for a cut (sometimes a significant one) of any profits made in an attack that uses their code.
Hear more about the evolution of cybercrime-as-a-service in the full podcast. Be sure to check out other discussions and recordings in our Cybersecurity Sound Studio.
The global pandemic that began to send us packing from our offices in March of last year upended our established way of working overnight. We’re still feeling the effects. Many office workers have yet to return to the office in pre-pandemic volumes, and those remote workers make up a good portion of MSPs’ clientele.
Remote workers were abruptly pulled out from behind the corporate firewall, immediately becoming more susceptible to the targeted attacks of cybercriminals. Acceptable use policies could no longer be easily enforced, home devices became work devices, and employees distracted by life around them became more likely to click carelessly.
What’s worse, because the pandemic was affecting more or less all of us at the same time, cybercriminals had a virtually limitless pool of targets on which to test out new scams. Phishing scams imitating eBay skyrocketed during the first months of product shortages brought on by COVID-19. Scam emails claiming to be from Netflix rose by more than 600% in 2020.
We were fish in cybercriminals’ collective barrel. Now, even with vaccinations rising in the U.S., many companies are rethinking the way they work. It’s up to MSPs to have a strategy for securing remote workers, because they’ll likely need to serve more of them than ever before.
Find out how to ensure your clients’ remote workers are resilient against attacks across networks in this informative conversation between ChannelE2E and MSSP Alert editor Joe Panettieri and his guest Jonathan Barnett. In addition to being a network security expert and senior product manager for Webroot’s DNS solution, Barnett brings 20 years of experience as the head of his own MSP business to the podcast.
Here’s what he has to say about ensuring a cyber resilient remote workforce.
If you’re an admin, service provider, security executive, or are otherwise affiliated with the world of IT solutions, then you know that one of the biggest challenges to overcome is efficacy. Especially in terms of cybersecurity, efficacy is something of an amorphous term; everyone wants it to be better, but what exactly does that mean? And how do you properly measure it? After all, if a security product is effective, then that means few or no cyberattacks should be getting through the lines of defense to the actual infrastructure. Yet, faced with modern cyber threats, that seems like a pretty impossible goal, particularly as many attacks are designed to operate under the radar, evading detection for weeks or months at a time.
As a result, many businesses and managed service providers may try to account for their efficacy needs in the tools that they choose, vetting the solutions with the highest reviews and the best third party testing scores. But the tools aren’t everything. What else can you do?
Here are our top 5 tips for getting the best possible efficacy out of your IT security stack.
- Partner with solution vendors who can guide you to the right setup.
Most small to medium-sized businesses and many MSPs just don’t have the resources to keep dedicated security experts on staff. That’s not a problem, per se, but it does mean you might have to do some extra legwork when selecting your vendor partners. For example, it’s important to take a hard look at the true value of a solution; if it requires costly or time-consuming training to attain a skill level high enough to get maximum value from the product, then the cost-benefit ratio is much different than it initially appears. Be sure to choose vendors who provide the type of guidance, support, and enablement resources you need; who can and will advise you on how best to configure your cybersecurity and backup and disaster recovery systems; and who are invested in helping you ensure maximum return on the investment you and your customers are making in these solutions.
- Trust your tools, but make sure you’re using them wisely.
According to George Anderson, director of product marketing for Carbonite + Webroot, OpenText companies, many of the tools IT admins already use are extremely effective, “as long as they’re being used properly,” he cautions. “For example, Webroot® Business Endpoint Protection includes powerful shielding capabilities, like the Foreign Code Shield and the Evasion Shield, but these are off by default, so they don’t accidentally block a legitimate custom script an admin has written. You have to turn these shields on and configure them for your environment to see the benefits; many people may not realize that. But that’d be one simple way admins could majorly improve efficacy; just check out all your tools and make sure you’re using them to their fullest capacity.”
- Consider whether EDR/MDR/ADR is right for you.
If you’re not already using one of the solutions these acronyms stand for, you’ve likely heard of them. Endpoint detection and response has a lot of hype around it, but that’s no reason to discount it out of hand as just another industry buzzword. It’s just important to demystify it a little so you can decide what kind of solution is right for your needs. Read more about the key differences here. Keep in mind, there’s often a high level of involvement required to get the most out of the additional information EDR provides. “It’s really more of a stepping stone to MDR for most MSPs,” per George Anderson. “Webroot Business Endpoint Protection actually provides all the EDR telemetry data an MDR solution needs, so I don’t recommend EDR alone; it should be used with an MDR or SIM/SIEM solution.”
- Lock down common security gaps.
Some of the easiest ways to infiltrate an organization’s network are also the easiest security gaps to close. Disable remote desktop protocol (RDP). If you really need these kinds of capabilities, change the necessary credentials regularly and/or use a broker for remote desktop or terminal services. Use hardened internal and external DNS servers by applying Domain Name System Security Extensions (DNSSEC), along with registry-locking domains, looking at certificate validation, and implementing email authentication like DMARC, SPF and DKIM. Be sure to disable macros and local admin privileges, as well as any applications that are not in use. And, of course, run regular patches and updates so malicious actors can’t just saunter into your network through an old plugin. These are all basic items that are often overlooked, but by taking these steps, you can drastically reduce your attack surface.
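Some of these hardening checks are easy to script. As one small example, here's a Python sketch that sanity-checks an SPF TXT record (the kind of email-authentication record mentioned above): a strict SPF policy declares `v=spf1` and ends with a hard or soft fail. This checks only the record's basic shape, not full RFC 7208 evaluation.

```python
def spf_looks_strict(txt_record: str) -> bool:
    """Rough sanity check of an SPF TXT record.

    A meaningful policy must start with 'v=spf1' and end with '-all'
    (hard fail) or '~all' (soft fail). Ending in '+all' or '?all'
    effectively lets anyone send mail as the domain.
    """
    parts = txt_record.strip().split()
    if not parts or parts[0].lower() != "v=spf1":
        return False
    return parts[-1].lower() in ("-all", "~all")
```

Run against the TXT records you pull for each client domain, this flags domains whose SPF policy is missing or permissive, which is a common gap pen testers call out.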
- Train your end users to avoid security risks.
Phishing and business email compromise are still top security concerns, but they’re surprisingly preventable at the end user level. According to the 2021 Webroot BrightCloud® Threat Report, regular phishing simulations and security awareness training can reduce phishing click-through by as much as 72%. Such a significant reduction will absolutely improve the overall efficacy of your security program, and it doesn’t impose much in the way of administrative burden. The secret to successful cyber-awareness training for end users is consistency; using relevant, high-quality micro-learning courses (max of 10 minutes) and regular phishing simulations can help you improve your security posture, as well as measure and report the results of your efforts.
All in all, these tips are simple, but they can make all the difference, especially if you have big efficacy goals to meet on a lean budget.
For more industry tips and tricks and product-related news, follow @webroot and @carbonite on Twitter and LinkedIn.
“What Bitcoin was to 2011, NFTs are to 2021.”
That’s a claim from the highly respected “techno-geek” bible Ars Technica in its wonderful explainer on NFTs, or non-fungible tokens. Since cryptocurrencies were, are and will continue to be impactful technologies, surely NFTs are a topic worth exploring.
They exploded into public consciousness this year as pieces of art, albums, photographs and dozens of other assets were sold in NFT form. Some netted their sellers huge profits; many more were ignored or overlooked completely.
Naysayers call NFTs worthless figments of our own imagination, while apologists hail them as handy tools for eliminating middlemen and empowering creators. One writer has referred to NFTs as, simply, “bragging rights.”
But naturally, at Carbonite + Webroot, we just wonder how they’ll be used and abused by cybercriminals or if they can be irrevocably lost like the password to a crypto wallet.
Before we dive into that, a brief primer of our own on NFTs.
An NFT can be thought of as a sort of digital deed. It is unalterable proof of ownership of a unique digital asset. That’s what the “non-fungible” in non-fungible token means: there’s only one, and it’s completely unique.
NFTs use the same blockchain ledger technology to verify uniqueness that cryptocurrencies rely on to prove ownership. A distributed group of devices does the work to vouch for the authenticity of the token the same way it does for a bitcoin.
Except, whereas each unit of a cryptocurrency is mutually interchangeable (1 Dogecoin always equals 1 Dogecoin, for instance), NFTs are designed to be completely unique. They can be programmed with their own rules and directions for use and behavior—even down to how they produce “offspring” in the case of CryptoKitties.
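The fungibility distinction can be made concrete with a toy sketch (the class and field names here are illustrative, not any real blockchain API): units of a currency compare equal by amount, while tokens carry an identity that makes each one unique.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Coin:
    """Fungible: any unit of the same denomination is interchangeable."""
    denomination: int

@dataclass(frozen=True)
class Token:
    """Non-fungible: what's owned is the identity, not an amount."""
    token_id: str        # unique on the ledger
    metadata_uri: str    # points at the asset the token represents

# 1 coin always equals 1 coin of the same denomination...
same = Coin(1) == Coin(1)

# ...but two tokens are equal only if they are literally the same token,
# even when they point at identical metadata.
distinct = Token("42", "ipfs://asset") != Token("43", "ipfs://asset")
```

Swapping one `Coin(1)` for another changes nothing; swapping `Token("42", ...)` for `Token("43", ...)` changes what you own. That asymmetry is the whole point of the "non-fungible" in NFT.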
An often used and helpful analogy is to certificates of authenticity (COA) like those used in the art world. For ages artists have put their own unique stamps on their artwork or issued accompanying certificates to testify to the “realness” of the work. This could be in the form of a simple signature or, in Banksy’s case, written sign-off from the Pest Control Office. Think of an NFT as a digital COA or, arguably, an improvement on the concept since it can’t be reproduced or believably forged.
As with any art, the value of an NFT is in the eyes of the beholder. What’s the point of spending millions to own an original digital asset that’s been effortlessly reproduced a million times? Could one ask the same of the Mona Lisa?
The rise (and fall?) of the NFT
Regardless of your answer to these questions, a community of folks already undeniably places a huge value on NFTs. An April 2021 post on GitHub estimated the value of the “CryptoArt NFT” market to be at least $150 million worldwide.
That’s almost certainly an underestimate, since the most expensive NFT ever sold comes from the art world. It’s a work known as The First 5000 Days by the artist known as Beeple and it’s essentially a $69 million JPEG file.
And NFTs aren’t limited to fine art. The pro sports, music and meme industrial complexes have all entered the business. Even social media posts are being turned into NFTs; the digital certificate for Jack Dorsey’s first-ever Tweet sold for $2.9 million. So, while anyone interested can easily find it online, only a Malaysia-based CEO of a blockchain company can claim “ownership” of the Tweet that started…all this.
Can NFTs hold our attention for long? With absurd amounts of money changing hands over a string of digital characters, a lot of people are already wondering if NFTs are a bubble about to burst. Plenty of pundits were speculating about a bubble in mid to late-April, when sales of NFTs lagged. But as shown by nonfungible.com, a company that tracks the buying and selling of NFTs, they were back to brisk business in early May.
Perhaps NFTs are a bubble positioned to pop. Or maybe their values will vary with the cryptocurrencies in which they are mostly bought and sold. It’s certainly been speculated that they’re driving up the price of Ethereum. Regardless, it’s safe to say they’re worth getting to know, because they’ll make headlines for some time to come.
NFT theft and a new brand of cybercrime
Not surprisingly, cybercriminals are already redirecting their efforts to the nascent NFT market. In an extraordinary and revealing Twitter thread, one NFT owner documented the experience of having his tokens stolen from a marketplace for digital art. He’s apparently not alone in this experience.
Even less surprising than the theft are the methods used to do it. It seems phishing for users’ passwords to the sites used to buy and sell NFTs is the main method of compromise. Two-factor authentication for accounts managing NFTs is strongly recommended by marketplaces.
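As a sketch of what that recommended second factor involves under the hood, here's a minimal TOTP implementation in Python, following RFC 4226 (HOTP) and RFC 6238 (TOTP), which is the scheme behind most authenticator-app codes. This is illustrative only; in production you'd use a vetted library rather than rolling your own.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """TOTP (RFC 6238): HOTP over the current 30-second time step."""
    return hotp(secret, int(time.time()) // period)
```

Because the code is derived from a shared secret plus the current time step, a phished password alone isn't enough to take over the marketplace account; the attacker would also need the device holding the secret.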
Darkreading.com also notes the importance of closely guarding access keys, which are often the only means of managing an NFT. Once a key is stolen—whether by phishing, a keylogger or some other means—there’s very little realistic prospect of getting it back.
In terms of valuable digital art, NFT theft amounts to the regrettable loss of investment pieces or perhaps just the “bragging rights” akin to owning an original piece of physical art. But if the role of NFTs as proof of ownership expands into the physical realm, as is already being discussed in the real estate sector, NFT security will become critical. It may even have the power to spawn new industries and criminal enterprises.
NFTs’ massive price tags and novel technological backing make them attractive targets for cybercriminals. If the market for their sale isn’t a bubble, it’s possible that the high-profile art heists of the future may be carried out by hackers rather than the suave con men of Hollywood films, and their tools will be phishing attacks and spyware rather than fancy handheld gadgets.