If you attended Black Hat this year, you couldn’t avoid the topic of supply chain attacks. From keynotes to vendor messaging to booth presentations, they were ubiquitous in Las Vegas.
Supply chain attacks are cyberattacks targeting an upstream vendor for the ultimate purpose of compromising one or more of its customers. Cybercriminals are aware that, by compromising updates from trusted vendors, they can easily bypass installed security software to infect all customers that install them.
Essentially, compromising a software vendor allows damage to cascade down the supply chain to other suppliers – a consequence sometimes known as the “waterfall effect” – increasing collateral damage across multiple targets.
Black Hat founder Jeff Moss even began this year’s conference with a few words about software supply chains.
“We all rely on the software supply chain,” he said. “We’re building tools and systems based on it. We’re trusting it. We’re hoping that people in the supply chain…are doing things to help everyone else in the supply chain. Because, if they don’t, everything we do is potentially vulnerable.”
“We all depend on the supply chain being fully immunized,” he continued, “and it’s not there yet.”
Now, “not there yet” is putting it mildly. A few recent, high-profile attacks bear recalling to demonstrate the scope of the problem.
For many within cybersecurity, the SolarWinds attack by what are widely believed to be state-sponsored cybercriminals was the most significant supply chain attack since the CCleaner attack of 2017 and a worrying reminder of the damage made possible by the tactic.
SolarWinds is a Texas-based IT management platform that unknowingly pushed a Trojanized update to thousands of its some 300,000 customers. It’s believed that the attackers concealed their presence within victims’ networks for some time to carefully select their next targets and preserve time for intelligence gathering.
While not widely known at the time, it’s now assumed that this wide-net attack was ultimately an effort to compromise a handful of high-value intelligence and governmental agencies. Second-stage infections were then pushed against these targets, plus some of the world’s most influential technology vendors.
Critically, this type of espionage-inspired cyberattack differs a great deal from the moneymaking practices embraced by for-profit hacking groups. These broadly targeted attacks against suppliers cause widespread compromise without making obvious which specific targets the attackers are after.
Another supply chain attack targeted Codecov, a firm that makes code coverage tools for developers, in January 2021. Investigators told the newswire service Reuters that attackers were able to use the access they’d gained to breach hundreds of Codecov customers.
As was the case with SolarWinds, compromising Codecov may have provided access to other software vendors, potentially triggering the waterfall effect described previously. The firm counts among its clients giants like IBM, Hewlett Packard and Atlassian.
The infosec researcher Matt Tait, who spoke at this year’s Black Hat on the topic of supply chain attacks, called the Codecov compromise an instance of high-volume disruption based on indiscriminate targeting.
According to the company, information stolen from customer devices was then sent to a third-party server outside of Codecov’s control, suggesting that espionage may have once again been the end-goal of the attackers.
Perhaps the most far-reaching supply chain attack conducted by a non-state actor took place this July. This time, Kaseya, one of the world’s largest IT management platforms, was compromised by the Russia-based hacking group REvil. Unlike the SolarWinds and Codecov attacks, this one included a ransomware stage meant to deliver financial rather than intelligence returns for the attackers.
REvil targeted Kaseya’s remote monitoring and management (RMM) solution, known as Kaseya VSA, which is used to manage client machines from afar. Again, targeting was indiscriminate, but unlike with espionage actors, the ransomware gang could focus on maximizing financial returns of the attack rather than trying to avoid detection.
Describing the impact of this attack, the UC Berkeley infosec researcher Nicholas Weaver noted that “each victim is a small-to-medium-sized business that is going to, at best, find its computers unusable and, at worst, have all their data lost forever.”
In terms of the cascading effects of a supply chain attack, the Kaseya VSA compromise hit MSPs and their small business clients especially hard.
Like a technology that advances through state-sponsored R&D but then becomes available to a wider public, recent supply chain attack techniques were honed by state-backed actors but have now been adopted by more run-of-the-mill ransomware actors. This is bad news for MSPs.
While agencies like the FBI and CISA have been warning for some time that MSPs are likely targets of advanced persistent threats (APTs), the Kaseya attack seems to have crossed a threshold. The problem is a significant security challenge, and one that some think only vendors can solve.
But there are a few measures MSPs can take to enhance their defenses against supply chain attacks. These include:
- Layer cybersecurity defenses for both you and your clients. Supply chain attacks commonly evade defenses by sneaking in with a trusted update. But after the initial compromise, network security can block communication with known-malicious IP addresses to limit damage.
- Mandate two-factor authentication (2FA) wherever possible. While 2FA isn’t the end of security issues, it makes things more difficult for cybercriminals at every turn.
- Monitor for anomalous web traffic. Be wary of communications with previously unknown IP addresses, unusual application traffic and other out-of-the-ordinary happenings on your network. Taking these steps can reduce the time to detection if a compromise occurs.
- Push patches and updates with urgency. Zero-day vulnerabilities often play a key role in advancing the spread of supply chain infections. Closing those gaps as soon as possible is an actionable step MSPs can take to protect themselves and their clients.
- Back up everything. One of the most surefire ways of reducing the leverage an attacker has over you and your clients is keeping multiple backups of critical business data. Cybercriminals can’t be trusted to restore data even after a ransom is paid, so don’t be left relying on them.
- Test your backup plan. The day disaster strikes is not the time to discover if your disaster recovery plan is well designed. Instead, simulate a worst-case scenario ahead of time and see if any gaps emerge.
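One of the measures above – monitoring for communications with known-malicious IP addresses – lends itself to a simple automated check. The sketch below is illustrative only: the blocklist entries and connection log are hypothetical examples, and in practice both would come from a threat-intelligence feed and firewall or NetFlow logs.

```python
# Minimal sketch: flag outbound connections whose destination IP appears
# on a blocklist. The IPs below are from documentation-reserved ranges
# and are purely illustrative.

KNOWN_MALICIOUS = {"203.0.113.7", "198.51.100.23"}  # hypothetical threat feed

def flag_suspicious(connections, blocklist=KNOWN_MALICIOUS):
    """Return connections whose destination IP appears on the blocklist."""
    return [c for c in connections if c["dst"] in blocklist]

if __name__ == "__main__":
    log = [
        {"src": "10.0.0.5", "dst": "93.184.216.34"},  # ordinary traffic
        {"src": "10.0.0.8", "dst": "203.0.113.7"},    # matches blocklist
    ]
    for hit in flag_suspicious(log):
        print(f"ALERT: {hit['src']} contacted known-malicious {hit['dst']}")
```

In a real environment this kind of check would run continuously against live traffic, feeding alerts into whatever monitoring stack the MSP already uses.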
As global cybercrime collectives continue to experiment with supply chain attack techniques, we should expect more indiscriminate, wide-net infections to make headlines. To prevent passing these infections along to their clients, vendors must take the lead in securing their products and processes. But MSPs aren’t helpless in protecting themselves and their clients.
The issue at the heart of ransomware insurance will be familiar to most parents of young children: rewarding bad behavior only invites more of the same, so it’s generally not a good idea. But critics of the ransomware insurance industry argue that’s exactly what the practice does.
Ransomware insurance has by now long been suspected of excusing lax security practices and inspiring confidence among cybercriminals that they’ll receive a timely payment following a successful breach.
Exactly how widespread ransomware claims by businesses are is difficult to determine since companies don’t exactly jump at the chance to discuss their run-ins with ransomware publicly. But it’s safe to assume that claims have risen alongside an undeniable surge in ransomware attacks.
Another issue with the cyber insurance industry stems from the fact that paying a ransom is no guarantee that data will be returned. In our recent report on the hidden costs of ransomware, nearly 20 percent of respondents were not able to recover their data even after making an extortion payment.
The Paris-based insurance giant AXA broke new ground this year by announcing it would stop reimbursing ransomware payments for customers in France, citing a lack of guidance from French regulators about the practice. It’s worth remembering that the FBI “does not support paying a ransom in response to a ransomware attack.”
So, if U.S.-based insurers were to follow AXA’s logic, they too would stop covering ransomware payments. So far, few have. For now.
Doomed to be a short-lived sector?
The industry publication InsuranceJournal.com recently noted that “pressure is building on the industry to stop reimbursing for ransoms.” Before ransomware went rampant, the article notes, cybersecurity insurance was a profitable sub-category of the insurance business as a whole. But those days may be numbered. The sector is now “teetering on the edge of profitability,” according to the post’s author.
It’s well-known within cybersecurity circles that ransomware actors will research a potential target in advance to determine if it is insured. If so, that’s hardly a deterrent, since insurance increases the likelihood a payment will be made.
It winds up being a self-reinforcing cycle. As ProPublica wrote in its study of the industry, “by rewarding hackers, it encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.”
A commonly cited defense of ransomware insurance is that policies protect not only against the cost of the ransom, but also against knock-on expenses from ransomware like downtime, reallocation of tech resources and reputational damage. We know from our own research that these costs can be significant, so there’s some validity to this argument.
But the real question the cyber insurance industry needs to answer is whether it can ever again be profitable. A recently released paper from the British defense think tank Royal United Services Institute (RUSI), titled Cyber Insurance and the Cyber Security Challenge, identified this as one of the key challenges to the industry’s viability.
That paper found that “there is arguably too little global premium to absorb losses from a systemic event.” In other words, the next NotPetya could sink the industry.
Ransomware on the whole has caused losses in the cyber insurance industry, not least because, “unlike the majority of risks insurers cover, ransomware attacks are both a high-impact and a high-probability risk.”
Addressing cybersecurity insurance shortfalls
Importantly, the RUSI paper ultimately reported that it was unable to find empirical evidence that “cyber insurers may be unintentionally facilitating the behavior of cybercriminals by contributing to the growth of targeted ransomware operations.” While that finding undermines arguments that cyber insurers are a boon for ransomware actors, it doesn’t speak to the question of viability.
As with any nascent industry, ransomware insurance vendors have some tough issues to grapple with concerning how they do business. The “race to the bottom,” which RUSI describes as a combination of cheap premiums and loose restrictions on underwriting (not requiring basic cybersecurity measures as part of the deal, for example), represents the real risk to the industry.
It’s possible cyber insurance companies could drastically reduce claims by mandating a cyber resilience posture as a condition of being insured. Like a higher life insurance premium for a career stuntman, organizations without robust cybersecurity in place (including defense plus backup and restoration capabilities) could be forced to foot a higher bill. While this is already standard practice among many insurers, industry regulation may be required to prevent the opening of a market for insurers with laxer baseline cybersecurity requirements.
At the very least, insurers should insist on three core elements of cybersecurity strategy before underwriting:
- Endpoint and network-level security to guard against attacks. Devices secured with antivirus software and networks secured by DNS filters or firewalls should be the bare minimum for protecting against ransomware attacks. Without them, ransomware actors are being invited in through the front door.
- Mandated ongoing security awareness training for employees. User-enabled breaches remain one of the most common causes of a successful ransomware attack. Without addressing end users’ tendency to fall for phishing and other social engineering attacks, while ransomware actors may find the front door locked, they know there’s a good chance it will be opened for them by someone on the inside.
- Proven data backup and security protocols. Maintaining complete copies of mission-critical data is one of the simplest ways to undermine ransomware actors. By collectively removing this key piece of leverage, organizations can go a long way toward normalizing the non-payment of ransomware demands, easing the burden on cyber insurers.
Making the above the minimum standard for organizations would both minimize the damage caused by ransomware actors and increase the viability of ransomware insurance as an industry. By prioritizing cyber resilience over any one category of security, businesses can prevent breaches and get back to work more easily when they do occur.
A cyber resilience strategy
“I have used a lot of different security products over the years, and I get approached by a lot of vendors,” says Pedro Nuñez. As president and CEO of New England-based MSP IT Management Solutions, Nuñez is always on the lookout for products that go beyond just a traditional security operations center.
That’s what led him to work with Webroot® Business Endpoint Protection.
“To make any kind of difference, you need a way to mitigate a security incident automatically.” It’s not enough to just monitor his clients’ networks and notify him if there’s a security incident. If that’s all a tool can do, it’s then up to his team to manage every incident manually – even the smallest ones.
And with over 85 clients, Nuñez needs time to focus on the most serious threats. The automation that comes with Webroot and its integration with Blackpoint Cyber means his clients’ endpoints, networks and even IoT devices are monitored for any anomalies. Once something is noticed, there’s no delay in automatically hunting down the threat.
“We effectively save up to 40 help desk hours a week, sometimes more” with the managed detection and response from Webroot.
That means when there’s a persistent attack on a server or when a client falls victim to a phishing attack, he has a head start on tackling the problem.
Protection in practice
Recently one of Nuñez’s clients, a municipality in Massachusetts, was targeted by a hacking group based out of Romania. The municipality was particularly vulnerable because of its old, out-of-date systems.
“The city would have been overrun with ransomware, but we started getting alerts right away from Webroot and Blackpoint,” Nuñez remembers. Since there was no delay in responding to the attack, he was able to get the ransomware under control so it couldn’t take over.
Even though it was a persistent attack, the security controls held up. The incident created thousands of tasks on individual devices, and it took weeks to fully stop. But in the end, the city experienced virtually no downtime. “There are a lot of city systems that can’t afford to go down, so making it through the attack without downtime . . . was a major win,” says Nuñez.
Businesses make their own luck
The next town over was also hit, but their security didn’t hold up. Their data was stolen, and they ended up having to pay a ransom. Smiling, Nuñez says that “The city that was my client can consider themselves lucky. But really, it wasn’t luck.”
His hands-on approach combined with the right tools saved his client from suffering a major incident.
For IT Management Solutions, the next step is end user training. After all, Nuñez notes, if no one had clicked the malicious email, the ransomware attack could have been prevented.
Watch Pedro Nuñez, President and CEO of IT Management Solutions, talk about his approach to cybersecurity.
Updated November 23, 2021
Dutch, Spanish and French were just the beginning of expanded language offerings from Webroot Security Awareness Training, with German and Portuguese added as of November 2021! Stay tuned to learn about expansions to more languages coming in the future.
A Global Challenge
The steady stream of cyberattacks seen throughout 2019 turned into a torrent over the last year – ransomware, phishing scams and data breaches are now at an all-time high. Of course, the growing cybersecurity threat isn’t contained to just one country. The effects are being felt the world over.
The National Cybersecurity Agency of France (ANSSI) is trying to tackle the 255% surge in ransomware attacks reported in 2020. Meanwhile, Spain is trying to crack down on malicious actors operating inside the country.
And in a survey of workers in the U.S., Japan, Australia and throughout Europe, 54% say they spend more time working from home now than they did at the beginning of 2020. The blurred lines between home life and work life lead to the use of improperly secured personal devices, with ramifications felt by small, medium and large businesses. Yet with cyberattacks at an all-time high, 63% of companies have kept their cybersecurity training at the same level it was at the end of 2019.
Tackling Cyber Threats
Our networked world connects us to points all over, so it’s no wonder cybersecurity needs to be taken seriously across the globe. The fight against these threats is complicated, but most successful attacks share a common vector – the human factor.
Because of this shared element, security experts know where to focus their energy. In fact, research shows that Webroot® Security Awareness Training improves cyber resilience and helps defend against cyberattacks.
The truly global nature of cyber threats is why Webroot is expanding its language offerings for our Security Awareness Training. This training helps employees keep security top of mind so businesses become more secure.
Now offered in Dutch, Spanish, French, German, and Portuguese, our Security Awareness Training features native narration throughout. Other available options offer courses with only translated captions overlaid on existing content while our trainings convey important security information in an engaging experience.
Why Training is Critical
Often, attackers have a built-in advantage when they zero in on a target – they can practice. They can probe for different ways in and try a variety of tactics, like email attacks or SMS and voice phishing. And they only need to be successful once.
That’s why training is such a critical part of security. It levels the playing field by letting end users practice what they learn while they discover how to keep themselves and their business safe.
In March of 2020 schools throughout the United Kingdom closed their doors to try to stem the spread of the coronavirus. In addition to disruptions to the lives of students and their families, the pandemic put unprecedented pressure on IT departments across the UK and wider world.
Notoriously strapped for resources, many schools’ IT departments found themselves without access to server rooms and with no way to troubleshoot for students and staff when grading, learning and teleconferencing applications encountered problems.
This was the situation unfolding around the UK in 2020, and it’s why CloudHappi began searching for a solution for its clients. CloudHappi is a London-based provider of IT solutions tailored for the education sector. Determined to provide the best learning experience possible for remote students, the company began exploring opportunities for shifting the IT burden from on-premise servers to the cloud.
Unfortunately, many of the earlier solutions CloudHappi explored took up to 15 days to perform a complete migration, an unacceptable timeline for schools looking to establish some sense of normalcy as soon as possible. After finding Carbonite and its server migration solution, however, the company was able to perform a complete migration for its first school within a single day.
As a result, IT operations for the school experienced fewer disruptions, applications were easy to access and unfortunate circumstances for students were made a little easier to handle.
Many reasons to migrate
Schools across the UK and United States are planning to open in the fall, notwithstanding uncertainty caused by the spread of the virus’s Delta variant. Vaccinations in much of the world are prompting workers to return to offices and life to start to resemble its pre-pandemic state in many ways.
But in other ways, it may never fully return. By some estimates, less than 35% of workers have returned to office spaces. Many companies don’t plan on requiring their workforces to come back at all. Some business leaders see remote work as a net positive, giving them access to larger talent pools, reducing pollution, freeing up time spent commuting for more productive tasks and cutting facilities costs.
Whether inspired by downsizing office space or not renewing leases at all, there’s a good chance this shift in the workforce will require many more migrations from on-premise servers to the cloud. Not unlike in the case of UK schools, IT admins will require greater access to productivity solutions without the need for physical space in which to operate.
Aside from the flexibility of being able to access systems from anywhere, migrating to the cloud entails several knock-on benefits for businesses, whether MSPs or their clients.
- Streamlined management – By offloading server management to a public cloud like Microsoft Azure or Amazon Web Services, businesses capitalize on all the economies of scale these companies have built over years of innovation and investment. Given the resources at their disposal, most cloud companies dwarf the capabilities of small IT teams.
- Enhanced security – With well-developed security policies covering things like firewalls and open ports, and with security teams dedicated to uncovering and patching vulnerabilities, public cloud companies often offer better security coverage than small IT teams. Even as bigger targets compared to a self-managed small business, the resources available to them again give these companies the edge in terms of data security.
- High availability – Migrating data to the cloud also makes high-availability data replication possible for businesses. While large public cloud operations are highly reliable, outages do happen. When they do, high-availability cloud architecture can quickly switch to an unaffected server containing a byte-by-byte replica if the original goes down. Without a high-availability solution, to use our example of schoolchildren in the UK, video conferencing software may become inoperable and students unable to learn together. For a business, losing access to certain applications because of a cloud outage can spell disaster. If email systems or customer account portals become inaccessible, the costs can mount quickly.
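The failover decision at the heart of high availability can be sketched in a few lines. The code below is a simplified illustration: the endpoint names are made up, and the health-check function is a stand-in for probing real cloud endpoints.

```python
# Minimal sketch of a high-availability failover decision: if the primary
# endpoint fails its health check, route traffic to the replica instead.
# Endpoint names and the health check are illustrative assumptions.

def select_endpoint(primary, replica, is_healthy):
    """Return the first endpoint that passes the health check."""
    for endpoint in (primary, replica):
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy endpoint available")

# Simulate an outage: only the replica responds.
healthy = {"replica.example.com"}
chosen = select_endpoint("primary.example.com", "replica.example.com",
                         lambda ep: ep in healthy)
print(chosen)  # prints replica.example.com
```

Real cloud architectures handle this with load balancers and DNS failover rather than application code, but the underlying logic is the same: detect the outage, then redirect to the replica.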
In a sense, COVID-19 accelerated computing trends by years. While much work had been moving to the cloud for some time before the pandemic hit, the sudden need for a distributed workforce heightened its importance overnight. Luckily, migrating offers significant benefits for all types of organizations and looks to be well suited for the workforce of the future.
To learn more about the benefits of migrating to the cloud, visit the Carbonite Migrate page here.
Ransomware has officially made the mainstream. Dramatic headlines announce the latest attacks and news outlets highlight the staggeringly high ransoms businesses pay to retrieve their stolen data. And it’s no wonder why – ransomware attacks are on the rise and the average ransom payment has ballooned to over $200,000.
But the true cost of ransomware can go beyond the headline-grabbing payments. The hit to a business’s reputation can be long-lasting, as can the effect of protracted downtime. And over 15% of businesses never retrieve their data. Worse still, some companies lose their data even though they pay a ransom.
That’s the bad news. The good news is that we’re gaining a better understanding of how ransomware attacks happen. Learning how ransomware sneaks into our personal and business lives is the key to protecting ourselves.
Risks to Small and Medium Businesses
In episode 1 of Carbonite + Webroot’s new series on ransomware, security experts, futurists and business leaders discuss the risks faced by small and medium businesses.
Before the latest surge of ransomware, some small and medium businesses could get away with thinking they weren’t a target. After all, the largest companies are the ones that can afford to pay the largest ransom payments. But the truth is there are only so many Fortune 500 companies to prey on.
Now with so many new victims of ransomware, businesses are turning to cyber security experts and asking why they’re a target. The short answer is … they aren’t. Small businesses fall victim to ransomware because of misconfigured systems, lack of proper security and human error. In other words, attackers sneak in by focusing their attention on vulnerable systems. They look for things like outdated firewalls and outdated servers because those gaps in security make for easy targets.
Protecting Your Data
Jon Murchison, CEO of Blackpoint Cyber, succinctly sums up why attacks happen, “It’s bad IT hygiene.” He’s seen municipalities attacked repeatedly because of holes in their network. He once fought off six waves of attacks, crediting Webroot’s capacity to hunt down malware and his ability to respond in real time. Without that, he guarantees there would have been a mass ransom event.
That’s why investing in cyber security is so important. With the explosion of ransomware, businesses that don’t protect themselves can fall victim to a ransomware attack. By establishing strong security measures, you can keep your company out of the next ransomware headline.
Acknowledging the Threat
Dr. Kelley Misata, CEO & founder of Sightline Security, says it’s an exciting time for technology, with the proliferation of IoT and mobile devices. But she adds, “people aren’t realizing that by interacting with that technology, they are putting themselves at risk for a cyber security event to happen.”
Dr. Misata has dedicated her career to helping others understand cyber security and teaching them how to adopt best practices in their own lives. Because ransomware attackers look for the easiest target, she tells her clients that “it’s not just how they protect their businesses, it’s how they protect their lives, how they protect their customers, and how they protect those around them.” Ransomware doesn’t just sneak in through our work computers and business servers. If our mobile devices are vulnerable, attackers will break in that way.
First Step in Preventing Ransomware
The first step in preventing ransomware is knowing who it targets and how it sneaks in. Big businesses make headlines, but small and medium businesses are increasingly falling victim to ransomware. And more and more often, ransomware piggybacks on our personal devices to sneak into our business lives.
Taking all this together will help you to focus your efforts when you invest in cyber security. Dive into expert analysis on 2021’s ransomware surge in our YouTube series: Ransomware 2021.
Webroot put forward another strong performance in its latest round of independent third-party testing, besting all competitors and taking home the highest overall score for 2021. Webroot beat out competitors including BitDefender™, McAfee® and ESET® endpoint security solutions.
In the report, PassMark conducted objective testing of nine endpoint security products, including Webroot® Business Endpoint Security. Tests measured performance in 15 categories, including:
- Installation size
- Boot time
- CPU usage during idle and scan
- Memory usage during idle and initial scan
- Memory usage during scheduled scan
Webroot stood out in several categories in addition to achieving the best overall score. Some categories were won by a wide margin.
Consider installation time, for instance. Webroot completed installation in just over four seconds, while the next fastest installation time was more than 17 seconds and the category average was over 162 seconds.
According to PassMark, this metric is important because “the speed and ease of the installation process will strongly influence the user’s first impression of the security software.”
Installation size was a similar case. It is an important metric because as PassMark puts it, “In offering new features and functionality to users, security software products tend to increase in size with each new release.”
Webroot also took home top honors when it comes to memory usage. In both memory used while idle and during scan, Webroot was the least impactful to system resources.
Webroot’s strong performance in this test is no accident. By design, much of the “heavy lifting” of endpoint security is done in the cloud. This ensures the highest level of efficacy while also reducing the performance impact at the endpoint. Businesses should not need to sacrifice performance for efficacy.
Additionally, Webroot took the top spot in the categories of memory usage during initial scan, memory usage during scheduled scan, scheduled scan time, and file compression and decompression.
PassMark® Software Pty Ltd specializes in “the development of high-quality performance benchmarking solutions as well as providing expert independent IT consultancy services to clients ranging from government organizations to major IT heavyweights.”
At Carbonite + Webroot, we’re always preaching about the importance of layering security solutions. Because here’s the truth: data’s always at risk. Whether from cybercriminals, everyday mishaps or mother nature, businesses can put up all the defenses they want but disaster only has to successfully strike once.
The global pandemic means more work is being conducted in the cloud, so this is no time to be lax with the security of cloud backups. Unless protection is redundant, organizations risk losing mission-critical data – for minutes, days or permanently, depending on the disaster – and putting their survival at risk.
That’s why layered protection in the cloud is so critical to cyber resilience. Without it, any one failure can be catastrophic.
So, how’s it done?
Let’s start with endpoints
For organizations managing hundreds or thousands of endpoints, backing each up to the cloud is important for keeping employees productive in the case of hardware failure, device theft, damage or malicious insiders. It’s easy to see how a laptop can be damaged, so it’s obvious for most that files stored locally should be backed up to the cloud.
But it’s also important to recognize that work done in the cloud should also be backed up. For example, one of the world’s most popular productivity tools for office workers, Microsoft 365, increasingly carries out its core functions in the cloud. But it has some serious gaps in terms of backup capabilities.
The average endpoint user may not know or care where important work files are stored, so long as they’re there when needed. This makes it important that Microsoft 365 data is backed up to the cloud – regardless of whether updates are being made locally or through the application’s cloud capabilities.
Finally, but in the other direction, cloud-based cybersecurity offers another form of data security from the cloud. This method avoids the risk of endpoints relying on out-of-date file definitions of known-bad files, instead relying on near real-time threat telemetry from the cloud. This allows for the near real-time protection of all endpoints using the solution once a threat is identified.
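The idea behind cloud-based protection can be illustrated with a simple file-reputation lookup: instead of shipping full signature databases to every endpoint, the endpoint hashes a file and asks a cloud service whether that hash is known-bad. The sketch below is a simplification; the in-memory set stands in for the cloud telemetry service, which is an assumption made for illustration.

```python
# Sketch of cloud-based file reputation. A local dictionary stands in for
# the cloud service; in reality the lookup would be a network call to a
# near real-time threat-telemetry backend.

import hashlib

CLOUD_BAD_HASHES = set()  # stand-in for the cloud reputation database

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_known_bad(data: bytes) -> bool:
    """'Query the cloud' for this file's reputation by hash."""
    return sha256_of(data) in CLOUD_BAD_HASHES

# Once telemetry flags a sample anywhere, every endpoint's lookup catches it.
malware_sample = b"demo malicious payload"
CLOUD_BAD_HASHES.add(sha256_of(malware_sample))

print(is_known_bad(malware_sample))        # True
print(is_known_bad(b"ordinary document"))  # False
```

The key property is that the "database" update happens once, centrally, so endpoints never wait on a local definitions download to recognize a newly identified threat.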
But must also include servers
It’s less obvious to many of us that servers are at risk of becoming ground zero for data loss as well. Hardware sometimes fails, power cords can be tripped over or, worse, natural disasters can strike data centers, wiping out servers through fires, floods or other types of damage.
What good are endpoints without the servers that feed them information? Cloud computing technology offers a handful of flexible opportunities for backing up data housed on servers.
On-premise servers – used to store data locally based on a business’s preference, regulatory needs or other reasons – can and should still be backed up to the cloud in case of a localized outage. Usually this entails concentrating data within a single point of storage (a “vault”) that’s then bulk uploaded. This duplicated data can then be accessed in the event a physical location loses power or a fiber optic cable is severed by construction work, for example.
Off-premise server banks also can and should be protected by cloud backups. Many of these servers may store their data in public clouds, which are normally but not always highly reliable. Public cloud outages do happen. When they do, it pays to have another cloud backup solution to failover to so the business can continue to run.
Whether this data is stored in the cloud permanently or migrated there when needed, redundancy is established when on- and off-premise server banks are backed up to the cloud.
Rounding out the redundancy is a disaster recovery as a service (DRaaS) solution. This form of high-availability replication anticipates a worst-case scenario for server data loss. With DRaaS, byte-level replication of changes on an organization’s systems is sent to the cloud. In the event of a disaster, you can fail over to that cloud copy and keep operating while primary systems are restored.
Note that DRaaS is not a replacement for backup. These are two different solutions that work well alongside each other. Backup should apply to every server in an environment and offers long-term retention with flexible restore options. DRaaS is typically layered on top of backup for the most mission-critical servers, providing the option to either restore from backup or fail over directly and rapidly to another cloud, depending on the event that has rendered the production server or data inaccessible.
Maintain uptime, all the time
Threats to business data are all around us. Rates of ransomware are rising, and remote workforces have ballooned since the outbreak of COVID-19. This is no time to trust a single cloud as an organizational backup strategy. No single point of failure should keep users from accessing business-critical data. Luckily, there are many options for designing layered backup across clouds.
It’s not just that they’re making headlines more often. Ransomware rates really are rising. Given the recent spate of high-profile attacks, it’s worth remembering the difference between standard backup and high-availability replication.
Our research suggests that the costs of ransomware for businesses can amount to much more than an extortion payment. They include lost hours of productivity, reputational damage, compliance fines and more. But maintaining access to critical data at all times can undermine ransomware actors’ leverage over an organization, reduce recovery time and earn the good graces of regulators and the public.
Ultimately, doing so comes down to answering two questions: what data does my business need to back up, and what data can it simply not do without? Knowing the difference helps to determine the Recovery Time Objective (RTO) for a given type of data or application.
A 24-hour recovery time may fall within the RTO for non-essential data and applications. For mission-critical data, on the other hand, a 24-hour recovery period may exceed the acceptable amount of time to be without access to data. It could drive up the cost of data breach significantly, perhaps even higher than a ransomware payment.
It may also come down to the amount of change-rate data that can be acceptably lost. Knowing the acceptable Recovery Point Objective (RPO) can be as important as knowing the required RTO. For instance, a highly transactional system performing critical Online Transaction Processing (OLTP) cannot afford to lose the data generated between backup cycles.
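To make the RTO/RPO trade-off concrete, the decision can be sketched as a simple classification. The tiers, names and thresholds below are purely illustrative assumptions, not values from any standard or from our report:

```python
# Hypothetical sketch: choosing a protection strategy from RTO/RPO targets.
# Tier names and thresholds are illustrative assumptions only.

def protection_tier(rto_hours: float, rpo_minutes: float) -> str:
    """Suggest a protection strategy from recovery objectives."""
    if rto_hours < 1 or rpo_minutes < 15:
        # Mission-critical (e.g. OLTP): real-time replication plus backup.
        return "high-availability replication + backup"
    if rto_hours <= 24:
        # A daily backup restored from the cloud fits within the RTO.
        return "daily backup with rapid cloud restore"
    return "standard daily backup"

print(protection_tier(0.5, 5))    # OLTP-style system
print(protection_tier(72, 1440))  # non-essential archive
```

The point of such an exercise isn’t the code itself but the discipline: every system gets an explicit RTO and RPO before a disaster, not after.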
Well-designed data backup plans tend to be a blend of both standard backup and high availability, so it helps to know the difference when determining which is the better fit for a given system, application or set of data.
There are all sorts of good reasons to keep regular, reliable backups of business systems. These may concern the normal conveniences of document retention – not having to begin a project from scratch in the case of accidental deletion, for instance – or to satisfy industry or legal compliance regulations.
These backups are taken at pre-determined time intervals, typically once a day during non-working hours, and stored on a backup server. Often backups are assigned a value called a retention, which allows an organization to keep certain backups for a longer period of time. For instance, a business may decide it’s necessary to keep daily backups for a total of 30 days, after which storage concerns dictate they drop off the server on day 31. However, regulations or corporate policies may require keeping certain backups longer, so organizations often designate a monthly or a yearly backup with an extended retention of one or even up to seven years.
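A retention scheme like the one just described can be expressed as a short rule. This sketch uses the example values from the text (30-day dailies, one-year monthlies, seven-year yearlies) and assumes, purely for illustration, that the first-of-month backup serves as the monthly and the January 1 backup as the yearly:

```python
from datetime import date

# Sketch of a daily/monthly/yearly retention check. Policy values mirror
# the example above; which backup counts as "monthly" or "yearly" is an
# assumption made here for illustration.

def keep_backup(taken: date, today: date) -> bool:
    age_days = (today - taken).days
    if age_days <= 30:                      # every daily kept 30 days
        return True
    if taken.day == 1 and age_days <= 365:  # first-of-month kept 1 year
        return True
    if taken.month == 1 and taken.day == 1 and age_days <= 7 * 365:
        return True                         # Jan 1 backup kept 7 years
    return False

today = date(2021, 6, 15)
print(keep_backup(date(2021, 6, 1), today))   # recent daily -> True
print(keep_backup(date(2021, 4, 20), today))  # 56-day-old daily -> False
print(keep_backup(date(2020, 9, 1), today))   # monthly within a year -> True
```

Real backup products implement far richer schedules, but the principle is the same: the retention rule, not the operator’s memory, decides what survives past day 31.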
Recently, backup servers have been targeted by ransomware actors, who study an organization’s environment with a specific eye toward its backup services. It’s therefore extremely important to have a backup for the backup. One of the preferred methods is a secondary cloud copy of the backup server. Since the cloud copy sits on a separate network, it adds a layer of security, making it more difficult for attackers to reach the separate cloud network and target the secondary backup copy.
In most cases, backups like those discussed above have recovery times of hours for a localized power outage or even days for a flooded server room, for example. For an HR system, this RTO may be acceptable. For a point-of-sale system, this could mean significant lost revenue.
When a backup’s RTO and RPO values do not meet the needs for recovering a company’s critical systems (OLTP servers, for instance), high-availability replication is an effective alternative for ensuring required operational performance levels are met. It accomplishes this by keeping an exact copy of critical servers, maintained by real-time, byte-level replication, which remains powered off until needed.
When that time comes, a failover procedure is initiated and the copy assumes the role of the production system. The failover process typically completes within seconds or minutes, depending on server configuration and network latency. In cases of hardware failure or data center disasters, high-availability replication can stave off a data loss disaster.
However, since replication is real-time, the standby copy can be corrupted if the primary is attacked by ransomware. Therefore, system snapshots may be required to maintain clean point-in-time copies of the system. Snapshots are typically non-intrusive, do not noticeably delay replication and provide a failover with a better RPO than backup.
As with backup, an off-site cloud solution can step in if on-site servers are out of commission. Latency can lengthen recovery slightly as the off-site cloud boots up, but the time to recovery still feels like a blip to users or customers.
For some organizations there may be no data critical enough to warrant this high-availability architecture. For others, all data may be considered essential. For most, the reality will fall somewhere in the middle. For companies bound by regulatory or corporate retention requirements, a combination of high-availability replication and backup will likely exist for the same server.
Ensuring resilience against ransomware
In a blended backup/high-availability strategy, what matters most is deciding which systems are protected by which before the worst happens. Whether handling backup for your own organization or for clients, it’s important to have a well-tested backup plan in place that takes into account RTOs based on acceptable amounts of downtime for data and applications.
Cybersecurity analysts are charting a rise both in ransomware incidents and in the amounts cybercriminals demand from businesses to restore their data. That’s bad news in itself, but what’s often overlooked are the additional ways – beyond payments victims may or may not choose to make – that victims pay for these attacks.
Our latest threat report found the average ransomware payment peaked in September 2020 at more than $230,000. But the ransom alone doesn’t tell the whole story. To fill it in, we conducted another study to tally and quantify the collateral damage from surging ransomware incidents and rising extortion amounts.
These are some of the effects inflating the price tag of an attack, which we call The Hidden Costs of Ransomware.
1. Lost productivity
Our survey data found that hours of lost productivity from a ransomware incident were closely related to the length of time to discovery of the attack. Generally, faster detection meant limiting the spread of the infection and less time spent on remediation. In other words, the further ransomware spreads the longer it takes to eradicate. Unfortunately, almost half (49%) of respondents to our survey reported being unaware of the infection for more than 24 hours.
A third of incidents were reportedly remediated in 1-3 hours, while 17 percent required 3-5 days of effort. We attempted to quantify these lost hours based on hours spent on remediation (easily measurable) and the opportunity costs from diverting resources from IT teams’ “blue sky” responsibilities (tougher to measure).
Factoring in varying costs of IT resources, we determined low/high cost estimates for the hours of remediation reported by survey respondents. These ran from $300/$750 for three hours of remediation to $4,000/$10,000 for five workdays of remediation. (A full breakdown is available in the report.)
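Those low/high figures imply hourly IT rates of roughly $100 and $250, assuming an eight-hour workday. A quick sketch of that arithmetic (the rates are inferred from the examples above, not quoted from the report):

```python
# Sketch: lost-productivity cost range from hours of remediation.
# The $100/$250 hourly rates are inferred from the $300/$750 and
# $4,000/$10,000 figures above, assuming an 8-hour workday.

LOW_RATE, HIGH_RATE = 100, 250  # assumed dollars per IT staff hour

def remediation_cost(hours: float) -> tuple[float, float]:
    """Return (low, high) cost estimates for a remediation effort."""
    return hours * LOW_RATE, hours * HIGH_RATE

print(remediation_cost(3))       # three hours of remediation
print(remediation_cost(5 * 8))   # five workdays of remediation
```

Note that this captures only the easily measurable side; the opportunity cost of diverting IT staff from their “blue sky” responsibilities comes on top of it.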
2. Downtime costs
Regardless of whether an organization decides to pay a ransom, how long does it take to return to normal operations?
In our study, businesses that didn’t pay ransoms recovered their data quicker than those that did. Specifically, 70 percent of companies that didn’t pay a ransom were able to recover their data within a business day, compared to 46 percent of those that did.
Presumably this has to do with whether a target had readily available backups, as well as time lost to back-and-forth with extortionists or to arranging a payment.
One of the most important factors in determining downtime costs is specifying the value of the data that’s become unavailable. Is it critical to conducting business operations? Or is it nice to have but not essential like marketing or prospecting data?
Determining data’s value helps businesses formulate their recovery time objectives (RTOs). For non-critical data and applications, a 24-hour recovery time may fall within the RTO. For mission-critical data, a 24-hour recovery may exceed the tolerable limit and help drive the cost of downtime higher than the ransom itself.
3. Impact on client operations
Nearly half (46%) of the businesses in our survey reported client operations being adversely affected by a ransomware incident at their own company. This could quickly sever business relationships that take a long time to build and result in the loss of anticipated revenue. But that may not even be the riskiest aspect of client operations being affected.
The implications of supply chain attacks, especially for MSPs, came into sharper focus last year following the SolarWinds attack. Were a cybercriminal to compromise a trusted supplier to distribute ransomware, rather than for surveillance as in that attack, the costs could be enormous.
MSPs should seriously consider the possibility of becoming the source for such a supply chain attack, especially those with clients in critical industries like energy, public utilities, defense and healthcare.
4. Brand and reputational damage
Consider the headlines and airtime generated by ransomware attacks against high-profile targets. A Google search of “Garmin ransomware,” for instance, returns more than 1 million results. While your organization may not be a global tech giant, it also likely doesn’t have the staying power of one.
In our study, 38 percent of businesses admitted their brand was harmed by a run-in with ransomware. Beyond lost customers, publicity issues could force businesses to enlist the services of expensive PR or communications firms to repair the damage.
Businesses with the resources to do so should consider themselves lucky, because the alternative is worse. Silence or an uncoordinated response to a ransomware attack – especially one that affects customers – can come off as unserious, callous or ineffective.
Reputational damage in an age of heightened sensitivity to cybersecurity incidents can have significant consequences. Our data shows that 61 percent of consumers switched some or all of their business to a competing brand in the last year, and 77 percent admit they withdraw their loyalty more quickly than they once did.
The list goes on…
By no means is this an exhaustive list of the hidden costs of ransomware. They extend to fines for breaches of compliance regulation, the rising costs of cybersecurity insurance and a host of other unforeseen consequences.
For the complete findings from our survey and our recommendations for avoiding these hidden costs, download the full report.
Cyber resilience refers to a business’s ability to mitigate damage to its systems, processes and even its reputation. It’s based on the principle that, in the real (and really connected) world, adverse events occur. This could be in the form of a user enabling a breach by providing sensitive information during a phishing attack, through a new threat known as a “zero day” being weaponized against a business, or an event of any complexity in between.
When it comes to building a cyber resilient business, technology is an important piece. But it’s not the only one. A well-rounded security strategy is also essential. People and processes are key ingredients when it comes to that.
Audit checklists are a great place to start when ensuring your business is taking a holistic approach to data security, and so is this revealing conversation between Channel E2E and MSP Alert editor Joe Panettieri and a product marketing expert at OpenText.
The two discuss how there’s no silver bullet for all the potential threats to your data security, but how adopting the right mindset can help organizations begin to think about security differently. Our experts cover the “train, block, protect, backup and recover” model and what solutions for each can look like as part of a real-life security stack.
The two touch on the importance of user security training, variables introduced by widespread remote workforces and how backup can undermine ransomware actors. Whether you’re designing a cybersecurity framework for your own business or putting one in place for clients, you won’t want to miss this conversation.
For many U.S. workers the switch to remote work is a permanent one. That means more high-stakes work is being conducted on self-configured home networks. For others, home networks are simply hosting more devices as smart doorbells, thermostats and refrigerators now connect to the internet.
Security experts warn that while the internet of things (IoT) isn’t inherently a bad thing, it does present concerns that must be considered. Many devices come pre-configured with inherently poor security. They often have weak or non-existent passwords set as the default.
As our guest and host Joe Panettieri discuss, these are issues that would be addressed on corporate networks by a professional IT administrator. The conversation covers the issues of IoT and home network security both from the perspective of the average family household and what the age of remote work means for employees working on their own networks.
Security intelligence director Grayson Milbourne brings a unique perspective to the podcast. Having held senior roles in both threat intelligence and product management, Milbourne is acutely aware of the threats security products come up against. He knows both the cyber threat landscape and the consumer internet security market, so he’s able to provide insightful advice for how tech-loving homeowners can keep personal networks powerful and protected.
Milbourne suggests problems of IoT and home network security could be addressed with a cybersecurity version of ENERGY STAR ratings. A program could formalize current IoT security best practices and incorporate them into a standard consumers recognize.
During this informative podcast, Panettieri and Milbourne discuss that idea and more cybersecurity topics related to IoT devices. They cover:
- The difference between device security and the security of the app used to control it
- How to leverage user reviews while researching IoT devices and what security concerns to check on before buying
- Privacy and data collection issues, including why one of the most common IoT devices may be among the most intrusive
- Configuring IoT devices to prevent them from joining rogue IoT zombie networks
Whether you’re an IT administrator trying to secure remote workers or just own a smart TV, there’s something in this conversation for you. Be sure to give it a listen.