IT Management Solutions protects its clients with Webroot® Business Endpoint Protection

A cyber resilience strategy

“I have used a lot of different security products over the years, and I get approached by a lot of vendors,” says Pedro Nuñez. As president and CEO of New England-based MSP IT Management Solutions, Nuñez is always on the lookout for products that go beyond a traditional security operations center.

That’s what led him to work with Webroot® Business Endpoint Protection.

“To make any kind of difference, you need a way to mitigate a security incident automatically.” It’s not enough to just monitor his clients’ networks and notify him if there’s a security incident. If that’s all a tool can do, it’s then up to his team to manage every incident manually – even the smallest ones.

Saving time

And with over 85 clients, Nuñez needs time to focus on the most serious threats. The automation that comes with Webroot and its integration with Blackpoint Cyber means his clients’ endpoints, networks and even IoT devices are monitored for any anomalies. Once something is noticed, there’s no delay in automatically hunting down the threat.

“We effectively save up to 40 help desk hours a week, sometimes more” with the managed detection and response from Webroot.

That means when there’s a persistent attack on a server or when a client falls victim to a phishing attack, he has a head start on tackling the problem.

Protection in practice

Recently, one of Nuñez’s clients, a municipality in Massachusetts, was targeted by a hacking group based in Romania. The municipality was particularly vulnerable because of its old, out-of-date systems.

“The city would have been overrun with ransomware, but we started getting alerts right away from Webroot and Blackpoint,” Nuñez remembers. Since there was no delay in responding to the attack, he was able to get the ransomware under control so it couldn’t take over.

Even though it was a persistent attack, the security controls held up. The incident created thousands of tasks on individual devices, and it took weeks to fully stop. But in the end, the city experienced virtually no downtime. “There are a lot of city systems that can’t afford to go down, so making it through the attack without downtime . . . was a major win,” says Nuñez.

Businesses make their own luck

The next town over was also hit, but their security didn’t hold up. Their data was stolen, and they ended up having to pay a ransom. Smiling, Nuñez says, “The city that was my client can consider themselves lucky. But really, it wasn’t luck.”

His hands-on approach combined with the right tools saved his client from suffering a major incident.

For IT Management Solutions, the next step is end user training. After all, Nuñez notes, if no one had clicked the malicious email, the ransomware attack could have been prevented.

Watch Pedro Nuñez, President and CEO of IT Management Solutions, talk about his approach to cybersecurity.

New Languages Added to Security Awareness Training (Nov. Update)

Updated November 23, 2021

Dutch, Spanish and French were just the beginning of expanded language offerings from Webroot Security Awareness Training, with German and Portuguese added as of November 2021! Stay tuned to learn about expansions to more languages in the future.

A Global Challenge

The steady stream of cyberattacks seen throughout 2019 turned into a torrent over the last year – ransomware, phishing scams and data breaches are now at an all-time high. Of course, the growing cybersecurity threat isn’t contained to just one country. The effects are being felt the world over.

The National Cybersecurity Agency of France (ANSSI) is trying to tackle the 255% surge in ransomware attacks reported in 2020. Meanwhile, Spain is trying to crack down on malicious actors operating inside the country.

And in a survey of workers in the U.S., Japan, Australia and throughout Europe, 54% said they spend more time working from home now than they did at the beginning of 2020. The blurred line between home life and work life leads to the use of improperly secured personal devices, with ramifications felt by small, medium and large businesses. Yet even with cyberattacks at an all-time high, 63% of companies have kept their cybersecurity training at the same level it was at the end of 2019.

Tackling Cyber Threats

Our networked world connects us to points all over, so it’s no wonder cybersecurity needs to be taken seriously across the globe. The fight against these threats is complicated, but most successful attacks share a common vector – the human factor.

Because of this shared element, security experts know where to focus their energy. In fact, research shows that Webroot® Security Awareness Training improves cyber resilience and helps defend against cyberattacks.

Expanded Offerings

The truly global nature of cyber threats is why Webroot is expanding its language offerings for our Security Awareness Training. This training helps employees keep security top of mind so businesses become more secure.

Now offered in Dutch, Spanish, French, German and Portuguese, our Security Awareness Training features native narration throughout. Other options on the market simply overlay translated captions on existing content, while our trainings convey important security information in an engaging experience.

Why Training is Critical

Often, attackers have a built-in advantage when they zero in on a target – they can practice. They can probe for different ways in and try a variety of tactics, like email attacks or SMS and voice phishing. And they only need to be successful once.

That’s why training is such a critical part of security. It levels the playing field by letting end users practice what they learn while they discover how to keep themselves and their business safe.

As workforces migrate from offices, workflows migrate to the cloud

In March of 2020 schools throughout the United Kingdom closed their doors to try to stem the spread of the coronavirus. In addition to disruptions to the lives of students and their families, the pandemic put unprecedented pressure on IT departments across the UK and wider world.

Notoriously strapped for resources, many schools’ IT departments found themselves without access to server rooms and with no way to troubleshoot for students and staff when grading, learning and teleconferencing applications encountered problems.

This situation, unfolding around the UK in 2020, is why CloudHappi began searching for a solution for its clients. CloudHappi is a London-based provider of IT solutions tailored for the education sector. Determined to provide the best learning experience possible for remote students, the company began exploring opportunities for shifting the IT burden from on-premise servers to the cloud.

Unfortunately, many of the earlier solutions CloudHappi explored took up to 15 days to perform a complete migration, an unacceptable timeline for schools looking to establish some sense of normalcy as soon as possible. After finding Carbonite and its server migration solution, however, it was able to perform a complete migration for its first school within a single day.  

As a result, IT operations for the school experienced fewer disruptions, applications were easy to access and unfortunate circumstances for students were made a little easier to handle.

Many reasons to migrate

Schools across the UK and United States are planning to open in the fall, notwithstanding uncertainty caused by the spread of the virus’s Delta variant. Vaccinations in much of the world are prompting workers to return to offices and life to start to resemble its pre-pandemic state in many ways.

But in other ways, it may never fully return. By some estimates, less than 35% of workers have returned to office spaces. Many companies don’t plan on requiring their workforces to come back at all. Some business leaders see remote work as a net positive, giving them access to larger talent pools, reducing pollution, freeing up time spent commuting for more productive tasks and cutting facilities costs.

Whether inspired by downsizing office space or not renewing leases at all, there’s a good chance this shift in the workforce will require many more migrations from on-premise servers to the cloud. Not unlike the UK schools, IT admins will require greater access to productivity solutions without the need for physical space in which to operate.

Aside from the flexibility of being able to access systems from anywhere, migrating to the cloud entails several knock-on benefits for businesses, whether MSPs or their clients.

  • Streamlined management – By offloading server management to a public cloud like Microsoft Azure or Amazon Web Services, businesses capitalize on all the economies of scale these companies have built over years of innovation and investment. Given the resources at their disposal, most cloud companies dwarf the capabilities of small IT teams.
  • Enhanced security – With well-developed security policies covering things like firewalls and open ports, and with security teams dedicated to uncovering and patching vulnerabilities, public cloud companies often offer better security coverage than small IT teams. Even though they are bigger targets than a self-managed small business, the resources available to them again give these companies the edge in terms of data security.
  • High availability – Migrating data to the cloud also makes high-availability data replication possible for businesses. While large public cloud operations are highly reliable, outages do happen. When they do, high-availability cloud architecture can quickly switch to an unaffected server containing a byte-by-byte replica if the original goes down (a minimal sketch of this failover logic follows this list). Without a high-availability solution, to use our example of schoolchildren in the UK, video conferencing software may become inoperable and students unable to learn together. For a business, losing access to certain applications because of a cloud outage can spell disaster. If email systems or customer account portals become inaccessible, the costs can mount quickly.
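
To make the failover idea in the high availability bullet concrete, here is a minimal Python sketch, assuming two hypothetical health-check URLs (primary.example.com and replica.example.com); a production deployment would rely on the cloud provider’s own load balancing and replication services rather than hand-rolled checks like these.

```python
import urllib.request

# Hypothetical endpoints for a primary server and its byte-level replica.
PRIMARY = "https://primary.example.com/health"
REPLICA = "https://replica.example.com/health"

def is_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers its health check in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def pick_active_endpoint() -> str:
    """Route to the primary while it is up; fail over to the replica otherwise."""
    if is_healthy(PRIMARY):
        return PRIMARY
    # The replica is kept in sync by byte-level replication, so it can
    # take over with little or no data loss.
    return REPLICA

if __name__ == "__main__":
    print("Active endpoint:", pick_active_endpoint())
```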

In a sense, COVID-19 accelerated computing trends by years. While much work had been moving to the cloud for some time before the pandemic hit, the sudden need for a distributed workforce heightened its importance overnight. Luckily, migrating offers significant benefits for all types of organizations and looks to be well suited for the workforce of the future.

To learn more about the benefits of migrating to the cloud, visit the Carbonite Migrate page here.

How Ransomware Sneaks In

Ransomware has officially made the mainstream. Dramatic headlines announce the latest attacks and news outlets highlight the staggeringly high ransoms businesses pay to retrieve their stolen data. And it’s no wonder why – ransomware attacks are on the rise and the average ransom payment has ballooned to over $200,000.

But the true cost of ransomware can go beyond the headline-grabbing payments. The hit to a business’s reputation can be long lasting, as can the effect of protracted downtime. And over 15% of businesses never retrieve their data. What’s more, some companies lose their data even though they pay a ransom.

That’s the bad news. The good news is that we’re gaining a better understanding of how ransomware attacks happen. Learning how ransomware sneaks into our personal and business lives is the key to protecting ourselves.

Risks to Small and Medium Businesses

In episode 1 of Carbonite + Webroot’s new series on ransomware, security experts, futurists and business leaders discuss the risks faced by small and medium businesses.

Before the latest surge of ransomware, some small and medium businesses could get away with thinking they weren’t a target. After all, the largest companies are the ones that can afford to pay the largest ransoms. But the truth is there are only so many Fortune 500 companies to prey on.

Now with so many new victims of ransomware, businesses are turning to cyber security experts and asking why they’re a target. The short answer is … they aren’t. Small businesses fall victim to ransomware because of misconfigured systems, lack of proper security and human error. In other words, attackers sneak in by focusing their attention on vulnerable systems. They look for things like outdated firewalls and outdated servers because those gaps in security make for easy targets.

Protecting Your Data

Jon Murchison, CEO of Blackpoint Cyber, succinctly sums up why attacks happen, “It’s bad IT hygiene.” He’s seen municipalities attacked repeatedly because of holes in their network. He once fought off six waves of attacks, crediting Webroot’s capacity to hunt down malware and his ability to respond in real time. Without that, he guarantees there would have been a mass ransom event.

That’s why investing in cyber security is so important. With the explosion of ransomware, businesses that don’t protect themselves can easily fall victim. By establishing strong security measures, you can keep your company out of the next ransomware headline.

Acknowledging the Threat

Dr. Kelley Misata, CEO & founder of Sightline Security, says it’s an exciting time for technology, with the proliferation of IoT and mobile devices. But she adds, “people aren’t realizing that by interacting with that technology, they are putting themselves at risk for a cyber security event to happen.”

Dr. Misata has dedicated her career to helping others understand cyber security and teaching them how to adopt best practices in their own lives. Because ransomware attackers look for the easiest target, she tells her clients that “it’s not just how they protect their businesses, it’s how they protect their lives, how they protect their customers, and how they protect those around them.” Ransomware doesn’t just sneak in through our work computers and business servers. If our mobile devices are vulnerable, attackers will break in that way.

First Step in Preventing Ransomware

The first step in preventing ransomware is knowing who it targets and how it sneaks in. Big businesses make headlines, but small and medium businesses are increasingly falling victim to ransomware. And more and more often, ransomware piggybacks on our personal devices to sneak into our business lives.

Taking all this together will help you to focus your efforts when you invest in cyber security. Dive into expert analysis on 2021’s ransomware surge in our YouTube series: Ransomware 2021.

Webroot top performer among security products in PassMark® Software testing

Webroot put forward another strong performance in its latest round of independent third-party testing, besting all competitors and taking home the highest overall score. In taking the highest score in the category for 2021, Webroot beat out competitors including BitDefender™, McAfee® and ESET® endpoint security solutions.

In the report, the company conducted objective testing of nine endpoint security products, including Webroot® Business Endpoint Protection. Tests measured performance in 15 categories including:

  • Installation size
  • Boot time
  • CPU usage during idle and scan
  • Memory usage during idle and initial scan
  • Memory usage during scheduled scan

Webroot stood out in several categories in addition to achieving the best overall score. Some categories were won by a wide margin.

Consider installation time for instance. Webroot completed installation in just over four seconds, while the next fastest installation time was more than 17 seconds and the average for the category was over 162 seconds.

According to PassMark, this metric is important because “the speed and ease of the installation process will strongly influence the user’s first impression of the security software.”

Installation size was a similar case. It is an important metric because as PassMark puts it, “In offering new features and functionality to users, security software products tend to increase in size with each new release.”

Webroot also took home top honors when it comes to memory usage. In both memory used while idle and during scan, Webroot was the least impactful to system resources.

Webroot’s strong performance in this test is no accident. By design, much of the “heavy lifting” of endpoint security is done in the cloud. This ensures the highest level of efficacy while also reducing the performance impact at the endpoint. Businesses should not need to sacrifice performance for efficacy.

Additionally, Webroot took the top spot in the categories of memory usage during initial scan, memory usage during scheduled scan, scheduled scan time and file compression and decompression.

PassMark® Software Pty Ltd specializes in “the development of high-quality performance benchmarking solutions as well as providing expert independent IT consultancy services to clients ranging from government organizations to major IT heavyweights.”

Redundancy for resilience: The importance of layered protection in the cloud

At Carbonite + Webroot, we’re always preaching about the importance of layering security solutions. Because here’s the truth: data’s always at risk. Whether from cybercriminals, everyday mishaps or mother nature, businesses can put up all the defenses they want but disaster only has to successfully strike once.

The global pandemic means more work is being conducted in the cloud, so this is no time to be lax with the security of cloud backups. Unless protection is redundant, organizations risk losing mission-critical data – for minutes, days or permanently depending on the disaster – and putting their survival at risk.

That’s why layered protection in the cloud is so critical to cyber resilience. Without it, any one failure can be catastrophic.

So, how’s it done?

Let’s start with endpoints

For organizations managing hundreds or thousands of endpoints, backing each up to the cloud is important for keeping employees productive in the case of hardware failure, device theft, damage or malicious insiders. It’s easy to see how a laptop can be damaged, so it’s obvious for most that files stored locally should be backed up to the cloud.

But it’s also important to recognize that work done in the cloud should also be backed up. For example, one of the world’s most popular productivity tools for office workers, Microsoft 365, increasingly carries out its core functions in the cloud. But it has some serious gaps in terms of backup capabilities.

The average endpoint user may not know or care where important work files are stored, so long as they’re there when needed. This makes it important that Microsoft 365 data is backed up to the cloud – regardless of whether updates are being made locally or through the application’s cloud capabilities.

Finally, working in the other direction, cloud-based cybersecurity offers another layer of data protection from the cloud. This method avoids the risk of endpoints relying on out-of-date file definitions of known-bad files, instead relying on near real-time threat telemetry from the cloud. This allows for the near real-time protection of all endpoints using the solution once a threat is identified.
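
A minimal sketch of that cloud-first detection model, assuming a purely hypothetical reputation endpoint (threat-intel.example.com) rather than any vendor’s actual API: the endpoint hashes a file locally and asks the cloud for a near real-time verdict instead of consulting a locally stored definition file.

```python
import hashlib
import json
import urllib.request

# Hypothetical cloud reputation endpoint; real products expose their own APIs.
REPUTATION_URL = "https://threat-intel.example.com/lookup"

def sha256_of(path: str) -> str:
    """Hash the file locally; only the hash leaves the endpoint."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def lookup_reputation(path: str) -> str:
    """Ask the cloud service for a near real-time verdict on the file hash."""
    payload = json.dumps({"sha256": sha256_of(path)}).encode()
    req = urllib.request.Request(
        REPUTATION_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp).get("verdict", "unknown")  # e.g. good / bad / unknown
```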

But must also include servers

It’s less obvious to many of us that servers are at risk of becoming ground zero for data loss as well. Hardware sometimes fails, power cords can be tripped over, or worse…natural disasters can strike data centers, wiping out servers through fires, floods or other types of damage.

What good are endpoints without the servers that feed them information? Cloud computing technology offers a handful of flexible opportunities for backing up data housed on servers.

On-premise servers – used to store data locally based on a business’s preference, regulatory needs or other reasons – can and should still be backed up to the cloud in case of a localized outage. Usually this entails concentrating data within a single point of storage (a “vault”) that’s then bulk uploaded. This duplicated data can then be accessed in the event a physical location loses power or a fiber optic cable is severed by construction work, for example.
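
As a rough illustration of the vault-then-bulk-upload pattern described above, here is a minimal Python sketch using the AWS SDK (boto3); the directory and bucket names are placeholders, and a real deployment would typically use the backup product’s own replication tooling.

```python
import pathlib
import boto3  # AWS SDK; other cloud SDKs work the same way conceptually

# Hypothetical local vault directory and destination bucket.
VAULT_DIR = pathlib.Path("/backups/vault")
BUCKET = "example-offsite-backups"

def upload_vault() -> None:
    """Bulk-upload everything concentrated in the local vault to cloud storage."""
    s3 = boto3.client("s3")
    for path in VAULT_DIR.rglob("*"):
        if path.is_file():
            key = str(path.relative_to(VAULT_DIR))
            s3.upload_file(str(path), BUCKET, key)
            print(f"uploaded {key}")

if __name__ == "__main__":
    upload_vault()
```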

Off-premise server banks also can and should be protected by cloud backups. Many of these servers may store their data in public clouds, which are normally but not always highly reliable. Public cloud outages do happen. When they do, it pays to have another cloud backup solution to failover to so the business can continue to run.

Whether this data is stored in the cloud permanently or migrated there when needed, redundancy is established when on- and off-premise server banks are backed up to the cloud.

Rounding out the redundancy is a disaster recovery as a service (DRaaS) solution. This form of high-availability replication anticipates a worst-case scenario for server data loss. With DRaaS, byte-level replication of changes on an organization’s systems is sent to the cloud. In the event of a disaster, you can fail over to that cloud replica quickly and keep the business running while the production environment is restored.

Note that DRaaS is not a replacement for backup. These are two different solutions that can work perfectly well alongside each other. Backup should apply to every server in an environment and offers long-term retention with flexible restore options. DRaaS would typically be layered on top of backup for the most mission-critical servers, resulting in options to either restore from backup or fail over directly and rapidly to another cloud, depending on the event that has rendered the production server or data inaccessible.

Maintain uptime, all the time

Threats to business data are all around us. Rates of ransomware are rising and remote workforces have ballooned since the outbreak of COVID-19. This is no time to trust a single cloud as an organizational backup strategy. No single point of failure should keep users from accessing business-critical data. Luckily, there are many options for designing layered backup across clouds.

What’s the difference between high availability and backup again?

It’s not just that they’re making headlines more often. Ransomware rates really are rising. Given the recent spate of high-profile attacks, it’s worth remembering the difference between standard backup and high-availability replication.

Our research suggests that the costs of ransomware for businesses can amount to much more than an extortion payment. They include lost hours of productivity, reputational damage, compliance fines and more. But maintaining access to critical data at all times can undermine ransomware actors’ leverage over an organization, reduce recovery time and earn the good graces of regulators and the public.

Ultimately, doing so comes down to answering the question: what data does my business simply need to back up, and what data can my business simply not do without? Knowing the difference helps to determine the Recovery Time Objective (RTO) for a given type of data or application.

A 24-hour recovery time may fall within the RTO for non-essential data and applications. For mission-critical data, on the other hand, a 24-hour recovery period may exceed the acceptable amount of time to be without access to data. It could drive up the cost of a data breach significantly, perhaps even higher than a ransomware payment.

It may also come down to the amount of changed data that can acceptably be lost. Knowing the acceptable Recovery Point Objective (RPO) can be as important as knowing the required RTO. For instance, a highly transactional system performing critical Online Transaction Processing (OLTP) could not afford the loss of data that occurs between backup cycles.
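
A quick back-of-the-envelope sketch in Python shows why a daily backup cycle alone can violate a tight RPO for an OLTP system; the workload figures and targets below are illustrative assumptions, not data from the text.

```python
# Rough worst-case exposure between backup cycles for a transactional system.
# All figures below are illustrative assumptions.

backup_interval_hours = 24          # one standard backup per day
transactions_per_hour = 5_000       # assumed OLTP workload
rpo_target_minutes = 15             # the most data loss the business will accept

worst_case_loss = backup_interval_hours * transactions_per_hour
rpo_as_transactions = int(rpo_target_minutes / 60 * transactions_per_hour)

print(f"Worst-case loss with daily backup: {worst_case_loss:,} transactions")
print(f"Acceptable loss at a {rpo_target_minutes}-minute RPO: {rpo_as_transactions:,} transactions")

if worst_case_loss > rpo_as_transactions:
    print("Daily backup alone cannot meet the RPO; consider high-availability replication.")
```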

Well-designed data backup plans tend to be a blend of both standard backup and high availability, so it helps to know the difference when determining which is the better fit for a given system, application or set of data.

Data backup

There are all sorts of good reasons to keep regular, reliable backups of business systems. These may concern the normal conveniences of document retention – not having to begin a project from scratch in the case of accidental deletion, for instance – or to satisfy industry or legal compliance regulations.

These backups are taken at pre-determined time intervals, typically once a day during non-working hours, and stored on a backup server. Often backups will be given an associated value called a retention. A retention allows an organization to keep certain backups for a longer period of time. For instance, a business may decide it’s necessary to keep daily backups for a total of 30 days. But due to storage concerns, they will drop off the server on day 31. However, regulations or corporate policies may require keeping certain backups longer, so organizations will often designate a monthly or a yearly backup with an extended retention of one or even up to seven years.
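
As one plausible way to implement the example retention policy above (daily backups kept for 30 days, plus designated backups kept for years), here is a minimal Python sketch; the choice of month-end backups for extended retention is an assumption for illustration.

```python
from datetime import date, timedelta

# Illustrative retention policy based on the example above:
# daily backups kept 30 days, month-end backups kept 7 years.
DAILY_RETENTION = timedelta(days=30)
MONTHLY_RETENTION = timedelta(days=365 * 7)

def is_month_end(d: date) -> bool:
    return (d + timedelta(days=1)).month != d.month

def should_keep(backup_date: date, today: date) -> bool:
    """Return True if a backup taken on backup_date is still within retention."""
    age = today - backup_date
    if age <= DAILY_RETENTION:
        return True                      # every backup kept for 30 days
    if is_month_end(backup_date):
        return age <= MONTHLY_RETENTION  # month-end backups kept much longer
    return False                         # everything else drops off the server

# Example: a daily backup from 45 days ago is pruned unless it was a month-end copy.
print(should_keep(date.today() - timedelta(days=45), date.today()))
```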

Recently, backup servers have been targeted by ransomware actors. Criminals will study an organization’s environment and specifically its backup services. Therefore, it’s extremely important to have a backup for the backup. One of the preferred methods is a secondary cloud copy of the backup server. Since the cloud copy sits on a separate network, it provides a layer of security, making it more difficult for attackers to reach the separate cloud network and target the secondary backup copy.

In most cases, backups like those discussed above have recovery times of hours for a localized power outage or even days for a flooded server room, for example. For an HR system, this RTO may be acceptable. For a point-of-sale system, this could mean significant lost revenue.

High availability

When a backup’s RTO and RPO time values do not meet the needs for recovering a company’s critical systems (OLTP servers, for instance), high-availability replication is an effective alternative for ensuring required operational performance levels are met. High-availability replication accomplishes this by keeping an exact copy of critical servers, maintained by real-time, byte-level replication, which remain powered off until needed. 

When that time comes, a failover procedure is initiated, and the copy assumes the role of the production system. The failover process typically occurs within a matter of seconds or minutes, depending upon the server configuration and network latency. In cases of hardware failure or data center disasters, high-availability replication can stave off a data loss disaster.

However, since replication is real-time, the replica can also be corrupted if the primary is attacked by ransomware. Therefore, system snapshots may be required to maintain clean point-in-time copies of the system. Snapshots are typically non-intrusive, do not noticeably delay replication and provide a failover with a better RPO than backup.
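
To illustrate how snapshots complement real-time replication, here is a minimal Python sketch of a rolling snapshot ring that lets you fall back to the most recent clean point-in-time copy; the snapshot payload and the cleanliness check are placeholders for whatever the storage platform actually provides.

```python
from collections import deque
from datetime import datetime, timezone
from typing import Callable, Optional, Tuple

# Keep a rolling set of point-in-time snapshots alongside real-time replication.
MAX_SNAPSHOTS = 24  # e.g. one snapshot per hour, retained for a day

snapshots: deque = deque(maxlen=MAX_SNAPSHOTS)

def take_snapshot(system_state: bytes) -> None:
    """Record a clean point-in-time copy; the oldest rolls off automatically."""
    snapshots.append((datetime.now(timezone.utc), system_state))

def latest_clean_snapshot(
    is_clean: Callable[[bytes], bool]
) -> Optional[Tuple[datetime, bytes]]:
    """Walk back from the newest snapshot to the most recent one not touched by ransomware."""
    for taken_at, state in reversed(snapshots):
        if is_clean(state):
            return taken_at, state
    return None
```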

Like with backup, an off-site cloud solution can step in if on-site servers are out of commission. Latency can lengthen recovery slightly as the off-site cloud boots up, but the time to recovery still feels like a blip to users or customers.

For some organizations there may be no data critical enough to warrant implementing this high-availability architecture. For others, all data may be considered essential. For most, the reality will fall somewhere in the middle. If companies are highly regulated or bound by specific corporate retention requirements, a combination of high-availability replication and backup will likely exist for the same server.

Ensuring resilience against ransomware

In a blended backup/high-availability strategy, what matters most is deciding which systems are protected by which approach before the worst happens. Whether handling backup for your own organization or for clients, it’s important to have a well-tested backup plan in place that takes into account RTOs based on acceptable amounts of downtime for data and applications.

4 ways ransomware can cost your business (in addition to extortion)

Cybersecurity analysts are charting a rise both in ransomware incidents and in the amounts cybercriminals are demanding from businesses to restore their data. That’s bad news in itself, but what’s often overlooked are the additional ways – beyond the payments victims may or may not choose to make – that victims pay for these attacks.

Our latest threat report found the average ransomware payment peaked in September 2020 at more than $230,000. But the ransom alone doesn’t tell the whole story. To tell it, we conducted another study to tally and quantify the collateral damage from surging ransomware incidents and rising extortion amounts.

These are some of the effects inflating the price tag of an attack, which we call The Hidden Costs of Ransomware.

1. Lost productivity

Our survey data found that hours of lost productivity from a ransomware incident were closely related to the length of time to discovery of the attack. Generally, faster detection meant limiting the spread of the infection and less time spent on remediation. In other words, the further ransomware spreads the longer it takes to eradicate. Unfortunately, almost half (49%) of respondents to our survey reported being unaware of the infection for more than 24 hours.

A third of incidents were reportedly remediated in 1-3 hours, while 17 percent required 3-5 days of effort. We attempted to quantify these lost hours based on hours spent on remediation (easily measurable) and the opportunity costs from diverting resources from IT teams’ “blue sky” responsibilities (tougher to measure).

Factoring in varying costs of IT resources, we determined low/high cost estimates for the hours of remediation reported by survey respondents. These ran from $300/$750 for three hours of remediation to $4,000/$10,000 for five workdays of remediation. (A full breakdown is available in the report.)
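
Those bands are straightforward arithmetic: the figures above imply hourly IT rates of roughly $100 (low) and $250 (high), which is an inference from the numbers rather than a separately published rate. A minimal Python sketch reproducing them:

```python
# Low/high remediation cost bands, using the hourly rates implied by the
# figures above ($100 and $250 per hour) as illustrative assumptions.
LOW_RATE = 100    # $ per hour of IT remediation effort (low estimate)
HIGH_RATE = 250   # $ per hour of IT remediation effort (high estimate)

def remediation_cost(hours: float) -> tuple:
    """Return (low, high) dollar estimates for a given number of remediation hours."""
    return hours * LOW_RATE, hours * HIGH_RATE

print(remediation_cost(3))       # three hours of remediation -> (300, 750)
print(remediation_cost(5 * 8))   # five workdays of remediation -> (4000, 10000)
```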

2. Downtime costs

Regardless of whether an organization decides to pay a ransom, how long does it take to return to normal operations?

In our study, businesses that didn’t pay ransoms recovered their data more quickly than those that did. Specifically, 70 percent of companies that didn’t pay a ransom were able to recover their data within a business day, compared to 46 percent that did.

Presumably this has to do with whether a target had readily available backups, as well as time lost to back and forth with extortionists or to making a payment.

One of the most important factors in determining downtime costs is specifying the value of the data that’s become unavailable. Is it critical to conducting business operations? Or is it nice to have but not essential like marketing or prospecting data?

Determining data’s value helps businesses formulate their recovery time objectives (RTOs). For non-critical data and applications, a 24-hour recovery time may fall within the RTO. For mission-critical data, a 24-hour recovery may exceed the tolerable limit and help drive the cost of downtime higher than the ransom itself.

3. Impact on client operations

Nearly half (46%) of the businesses in our survey reported client operations being adversely affected by a ransomware incident at their own company. This could quickly sever business relationships that take a long time to build and result in the loss of anticipated revenue. But that may not even be the riskiest aspect of client operations being affected.

The implications of supply chain attacks, especially for MSPs, came into sharper focus last year following the SolarWinds attack. Were a cybercriminal to compromise a trusted supplier to distribute ransomware, rather than for surveillance as in that attack, the costs could be enormous.

MSPs should seriously consider the possibility of becoming the source for such a supply chain attack, especially those with clients in critical industries like energy, public utilities, defense and healthcare.   

4. Brand and reputational damage

Consider the headlines and airtime generated by ransomware attacks against high-profile targets. A Google search of “Garmin ransomware,” for instance, returns more than 1 million results. While your organization may not be a global tech giant, it also likely doesn’t have the staying power of one.

In our study, 38 percent of businesses admitted their brand was harmed by a run-in with ransomware. Beyond lost customers, publicity issues could force businesses to enlist the services of expensive PR or communications firms to repair the damage.

Businesses with the resources to do so should consider themselves lucky, because the alternative is worse. Silence or an uncoordinated response to a ransomware attack – especially one that affects customers – can come off as unserious, callous or ineffective.

Reputational damage in an age of heightened sensitivity to cybersecurity incidents can have significant consequences. Our data shows that 61 percent of consumers switched some or all of their business to a competing brand in the last year, and 77 percent admit they withdraw their loyalty more quickly than they once did.

The list goes on…

By no means is this an exhaustive list of the hidden costs of ransomware. They extend to fines for breaches of compliance regulation, the rising costs of cybersecurity insurance and a host of other unforeseen consequences.

For the complete findings from our survey and our recommendations for avoiding these hidden costs, download the full report.

Download the eBook

Podcast: How to build a cyber resilient business

Cyber resilience refers to a business’s ability to mitigate damage to its systems, processes and even its reputation. It’s based on the principle that, in the real (and really connected) world, adverse events occur. This could be in the form of a user enabling a breach by providing sensitive information during a phishing attack, through a new threat known as a “zero day” being weaponized against a business, or an event of any complexity in between.

When it comes to building a cyber resilient business, technology is an important piece. But it’s not the only one. A well-rounded security strategy is also essential, and people and processes are key ingredients of that strategy.

Audit checklists are a great place to start when ensuring your business is taking a holistic approach to data security, and so is this revealing conversation between Channel E2E and MSP Alert editor Joe Panettieri and a product marketing expert at OpenText.

The two discuss how there’s no silver bullet for all the potential threats to your data security, but how adopting the right mindset can help organizations begin to think about security differently. Our experts cover the “train, block, protect, backup and recover” model and what solutions for each can look like as part of a real-life security stack.

The two touch on the importance of user security training, variables introduced by widespread remote workforces and how backup can undermine ransomware actors. Whether you’re designing a cybersecurity framework for your own business or putting one in place for clients, you won’t want to miss this conversation.

Podcast: Can we fix IoT security?

For many U.S. workers the switch to remote work is a permanent one. That means more high-stakes work is being conducted on self-configured home networks. For others, home networks are simply hosting more devices as smart doorbells, thermostats and refrigerators now connect to the internet.

Security experts warn that while the internet of things (IoT) isn’t inherently a bad thing, it does present concerns that must be considered. Many devices come pre-configured with inherently poor security. They often have weak or non-existent passwords set as the default.

As our guest and host Joe Panettieri discuss, these are issues that would be addressed on corporate networks by a professional IT administrator. The conversation covers the issues of IoT and home network security both from the perspective of the average family household and what the age of remote work means for employees working on their own networks.

Security intelligence director Grayson Milbourne brings a unique perspective to the podcast. Having held senior roles in both threat intelligence and product management, Milbourne is acutely aware of the threats security products come up against. He knows both the cyber threat landscape and the consumer internet security market, so he’s able to provide insightful advice for how tech-loving homeowners can keep personal networks powerful and protected.

Milbourne suggests problems of IoT and home network security could be addressed with a cybersecurity version of ENERGY STAR ratings. A program could formalize current IoT security best practices and incorporate them into a standard consumers recognize.  

During this informative podcast, Panettieri and Milbourne discuss that idea and more cybersecurity topics related to IoT devices. They cover:

  • The difference between device security and the security of the app used to control it
  • How to leverage user reviews while researching IoT devices and what security concerns to check on before buying
  • Privacy and data collection issues, including why one of the most common IoT devices may be among the most intrusive
  • Configuring IoT devices to prevent them from joining rogue IoT zombie networks