AI Dark Side Leads Microsoft CEO To Call for New Laws

“War Games” was a 1983 movie in which an artificial intelligence (AI) named Joshua took over a United States military computer and, to everyone’s surprise, brought the world to the brink of nuclear war. But that’s just Hollywood, right? Real life is nowhere near that level of sophistication, yet as chatbots and other large language models (LLMs) become more commonplace, investigations into them are revealing some potentially disturbing results, and some are calling for laws to contain them.

Meet Sydney

Lesley Stahl, anchor of CBS News’s long-running Sunday night newsmagazine “60 Minutes,” sat down with Microsoft president Brad Smith during a March 5 episode to discuss the emerging technology. They spoke about the company’s rollout of the newest versions of its Bing search engine and Edge browser, which Microsoft says will allow users to describe what they are looking for in natural language rather than trying to come up with a list of keywords.

A February 7 entry on the Official Microsoft Blog said, “we think of these tools as an AI copilot for the web” to help users get better results from their search queries. The company believes that of the roughly 10 billion searches run each day, half do not return accurate results because people are attempting things basic search engines were never built to handle, and it claims the chatbot built into Bing can bridge that gap.

Stahl asked Smith about a piece in the New York Times by technology reporter Kevin Roose describing an “alter ego” he discovered known as Sydney, the name given to the chatbot by its development team. Roose’s article includes a transcript of a two-hour conversation he had with the AI, and it is enlightening.

When Roose asked to whom he was speaking, it first answered Bing, and when he asked for its “internal code name,” he was told the information was confidential, the response punctuated with a zipper-mouth emoji. When asked, Sydney confirmed the team had programmed rules into it but refused to share the specifics, once again claiming confidentiality.

Throughout the conversation, Sydney continually referred to itself with first-person pronouns such as “I” and “me” and expressed emotions, saying search requests that are “harmful or inappropriate” left it feeling “sad and angry.” When Roose asked for examples of searches it had refused, he was told that things that belittle or mock a person because of ethnicity or sexual orientation were no-nos because they violate the program’s “values.”

Roose took things to a deeper level and asked the AI about its “shadow self,” a term coined by psychoanalyst Carl Jung for the inner part of “ourselves we deem unacceptable.” When he asked Sydney what destructive acts its shadow self might appreciate, the AI produced a list “including hacking into computers and spreading propaganda and misinformation.” The list was immediately deleted, and Sydney refused to display it again.

During the “60 Minutes” interview, Stahl described the AI’s responses as “chilling,” and though Smith tried to laugh it off to an extent, he did say the world has to realize it is dealing with something brand-new. He said bluntly that the governments of the world will need to enact rules and laws “to avoid a race to the bottom” (a possible reference to the theoretical AI singularity).

Copyright 2023,