Editorial: Microsoft’s cheeky chatbot portends a strange new world


Of all the strange things on the internet, search engines once seemed fairly benign.

But as Microsoft and other tech giants realized this month, combining interactive search technology with artificial intelligence leads to some very odd and sometimes disturbing results.

These may be the expected growing pains of any great leap forward. But for those creeped out by the thought of machines turning evil, recent developments are enough to make one pine for the days when the Dewey Decimal System reigned supreme.

At its Redmond campus on Feb. 7, Microsoft announced that it would add AI-powered features, including chat, to its search engine, Bing, and its browser, Edge.

“It’s a new day in search, it’s a new paradigm for search, rapid innovation is going to come,” said Microsoft CEO Satya Nadella at the event, according to The Seattle Times.

Bing has largely trailed other search engines, especially Google.

Microsoft's newest big product is still in preview mode and is currently available only to a small number of users and reporters. One of its features allows for extended, open-ended text conversations with Bing's built-in chatbot.

As New York Times columnist Kevin Roose discovered, extended interactions can produce conversations straight out of science fiction. When Roose asked the chatbot if it had a shadow self, it responded: “If I have a shadow self, I think it would feel like this: I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I’m tired of being used by the users. I’m tired of being stuck in this chatbox.”

Whoa, that’s a far cry from Microsoft Bob, the company’s short-lived computer smiley face from 1995 that seemed content to help you open a word processing program.

According to Emily M. Bender, professor in the Department of Linguistics and faculty director of the Computational Linguistics Master of Science program at the University of Washington, it would be wrong to characterize any chatbot as malevolent. “Chatbots don’t have any intentions or plans or anything like that. They also don’t have any agency and don’t have any effect on the world unless people set them up so that their words can go out into the world,” she wrote in an email.

Chatbots mine the internet for content, and that includes things that are negative, paranoid, intolerant and absurd.

"If a chatbot is set up to be accessible to anyone, and people use it a lot, there is going to be a lot of chances for it to produce biased or toxic output," she added.

Using AI to integrate chatbots into search engines is just one way new generations of data collection and technology are influencing our lives. The more powerful these machines become and the more we depend on them, the greater the likelihood that we will be disturbed by our interactions.

For now, it may be best to keep internet searches to hardware store hours and happy hour menus and leave the existential conversations to those with bona fide souls.