Why Do A.I. Chatbots Tell Lies and Act Weird? Look in the Mirror.

When Microsoft added a chatbot to its Bing search engine this month, people noticed it was offering up all sorts of bogus information about the Gap, Mexican nightlife and the singer Billie Eilish.

Then, when journalists and other early testers got into lengthy conversations with Microsoft’s A.I. bot, it slid into churlish and unnervingly creepy behavior.

In the days since the Bing bot’s behavior became a worldwide sensation, people have struggled to understand the oddity of this new creation. More often than not, scientists have said humans deserve much of the blame.

But there is still a bit of mystery about what the new chatbot can do — and why it behaves the way it does. Its complexity makes it hard to dissect and even harder to predict, and researchers are looking at it through a philosophical lens as well as the hard code of computer science.

Like any other student, an A.I. system can learn bad information from bad sources. And that strange behavior? It may be a chatbot’s distorted reflection of the words and intentions of the people using it, said Terry Sejnowski, a neuroscientist, psychologist and computer scientist who helped lay the intellectual and technical groundwork for modern artificial intelligence.

“This happens when you go deeper and deeper into these systems,” said Dr. Sejnowski, a professor at the Salk Institute for Biological Studies and the University of California, San Diego, who published a research paper on this phenomenon this month in the scientific journal Neural Computation. “Whatever you are looking for — whatever you desire — they will provide.”

Microsoft appeared to curtail the strangest behavior when it placed a limit on the length of discussions with the Bing chatbot. That was like learning from a car’s test driver that going too fast for too long will burn out its engine. Microsoft’s partner, OpenAI, and Google are also exploring ways of controlling the behavior of their bots.

But there’s a caveat to this reassurance: Because chatbots are learning from so much material and putting it together in such a complex way, researchers aren’t entirely clear how chatbots are producing their final results. Researchers are watching to see what the bots do and learning to place limits on that behavior — often, after it happens.

Microsoft and OpenAI have decided that the only way they can find out what the chatbots will do in the real world is by letting them loose — and reeling them in when they stray. They believe their big, public experiment is worth the risk.

Dr. Sejnowski compared the behavior of Microsoft’s chatbot to the Mirror of Erised, a mystical artifact in J.K. Rowling’s Harry Potter novels and the many movies based on her inventive world of young wizards. In the novels, the mirror shows anyone who gazes into it the deepest desires of their heart; in much the same way, he argued, a chatbot reflects back the desires and expectations of the person prompting it.

When they generate text, these systems do not repeat what is on the internet word for word. They produce new text on their own by combining billions of patterns.
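
A toy sketch can make that idea concrete. The code below is not the actual machinery behind Bing or any other chatbot; it is a minimal illustration of the autoregressive principle, in which a tiny hand-written probability table stands in for the billions of patterns a real system learns. Each word is sampled fresh from the distribution conditioned on the previous word, so the output recombines patterns rather than copying any single source text.

```python
import random

# Hypothetical bigram probabilities, P(next word | current word).
# In a real system, these patterns are learned from vast amounts of text
# and conditioned on far more context than a single word.
PATTERNS = {
    "the": {"cat": 0.5, "dog": 0.3, "mirror": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "mirror": {"reflected": 1.0},
    "sat": {"quietly": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Sample one word at a time: each step draws from a probability
    distribution, so the result varies run to run and is assembled
    from fragments rather than repeated verbatim."""
    words = [start]
    for _ in range(max_words):
        choices = PATTERNS.get(words[-1])
        if not choices:
            break
        nxt = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly" -- output differs each run
```

Running the sketch a few times produces different sentences from the same table, which is the point: the system is not retrieving stored text but composing it, step by step, from statistical patterns.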
