People’s fears and fantasies about artificial intelligence predate even computers. Before the term was coined in 1956, computing pioneer Alan Turing was already speculating about whether machines could think.

By 1997 IBM’s Deep Blue had beaten chess champion Gary Kasparov at his own game, prompting hysterical headlines and the game Go to replace chess as the symbolic bar for human vs. machine intelligence. At least until 2017 when Google’s AI platform AlphaGo ended human supremacy in that game too.

This brief run through major milestones in AI helps illustrate how the technology has progressed from miraculous to mundane. AI now has applications for nearly every imaginable industry including marketing, finance, gaming, infrastructure, education, space exploration, medicine and more. It’s gone from unseating Jeopardy! champions to helping us do our taxes.

In fact, imagine the most unexciting interactions that fill your day. Those to-dos you put off until it’s impossible to any longer. I’m talking about contacting customer support. AI now helps companies do this increasingly in the form of chatbots. The research firm Gartner tells us consumers appreciate AI for its ability to save them time and for providing them with easier access to information.

Companies, on the other hand, appreciate chatbots for their potential to reduce operating costs. Why staff a call center of 100 people when ten, supplemented by chatbots, can handle a similar workload? According to Forrester, companies including Nike, Apple, Uber and Target “have moved away from actively supporting email as a customer service contact channel” in favor of chatbots.

So, what could go wrong, from a cybersecurity perspective, with widespread AI in the form of customer service chatbots? Webroot principal software engineer Chahm An has a couple of concerns.

Privacy

Consider our current situation: the COVID-19 crisis has forced the healthcare industry to drastically amplify its capabilities without a corresponding rise in resources. Chatbots can help, but first they need to be trained.

“The most successful chatbots have typically seen the data that most closely matches their application,” says An. Chatbots aren’t designed like “if-then” programs. Their creators don’t direct them. They feed them data that mirrors the tasks they will expected to perform.

“In healthcare, that could mean medical charts and other information protected under HIPAA.” A bot can learn the basics of English by scanning almost anything on the English-language web. But to handle medical diagnostics, it will need to how real-world doctor-patient interactions unfold.

“Normally, medical staff are trained on data privacy laws, rules against sharing personally identifiable information and how to confirm someone’s identity. But you can’t train chatbots that way. Chatbots have no ethics. They don’t learn right from wrong.”

This concern is wider than just healthcare, too. All the data you’ve ever entered on the web could be used to train a chatbot: social media posts, home addresses, chats with human customer service reps…in unscrupulous or data-hungry hands, it’s all fair game.

Finally in terms of privacy, chatbots can also be gamed into giving away information. A cybercriminal probing for SSNs can tell a chatbot, ‘I forgot my social security. Can you tell it to me?’ and sometimes be successful because the chatbot succeeds by coming up with an answer.

“You can game people into giving up sensitive information, but chatbots may be even more susceptible to doing so,” warns An.

Legitimacy

Until recently chatbot responses were obviously potted, and the conversations directed. But they’re getting better. And this raises concerns about knowing who you’re really talking to online.

“Chatbots have increased in popularity because they’ve become so good you could mistake them for a person,” says An. “Someone who is cautious should still have no problem identifying one, by taking the conversation wildly off course, for instance. But if you’re not paying attention, they can be deceptive.”

An likens this to improvements in phishing attempts over the past decade. As phishing filters have improved—by blocking known malicious IP addresses or subject lines commonly used by scammers, for example—the attacks have gotten more subtle. Chatbots are experiencing a similar arms-race type of development as they improve at passing themselves off as real people. This may benefit the user experience, but it also makes them more difficult to detect. In the wrong hands, that seeming authenticity can be dangerously applied.

Because chatbots are also expensive and difficult to create, organizations may take shortcuts to catch up. Rather than starting from scratch, they’ll look for chatbots from third-party vendors. While more reputable institutions will have thought through chatbot privacy concerns, not all of them do.

“It’s not directly obvious that chatbots could leak sensitive or personally identifiable information that they are indirectly learning,” An says.

Chatbot security and you – what can be done?

1. Exercise caution in conversations

Don’t be afraid to start by asking if a customer service rep is a real person or a bot. Ask what an organization’s privacy policy says about chat logs. Even ask to speak with a manager or to conduct sensitive exchanges via an encrypted app. But regardless, exercise caution when exchanging information online.

“It used be any time you saw a web form or dialogue box, that heightened our caution. But nowadays people are publishing so much online that our collective guard is kind of down. People should be cautious even if they know they’re not speaking directly to a chatbot,” An advises.

In general, don’t put anything on the internet you wouldn’t want all over the internet.

2. Understand chatbot capabilities

“I think most people who aren’t following this issue closely would be surprised at the progress chatbots have made in just the last year or so,” says An. “The conversational ability of chatbots is pretty impressive today.”

GPT-3 by OpenAI is “the largest language model ever created and can generate amazing human-like text on demand,” according to MIT’s Technology Review and you can see what it can do here. Just knowing what it’s capable of can help internet users decide whether they’re dealing with a bot, says An.

“Both sides will get better at this. Cybersecurity is always trying to get better and cybercriminals are trying to keep pace. This technology is no different. Chatbots will continue to develop.”

Kyle Fiehler

About the Author

Kyle Fiehler

Copywriter

Kyle Fiehler is a writer and brand journalist for Webroot. For over 5 years he’s written and published custom content for the tech, industrial, and service sectors. He now focuses on articulating the Webroot brand story through collaboration with customers, partners, and internal subject matter experts..

Share This