
Fitzmas Eve Diversion No. 473 (with poll)

Dutch Torture Robots!

Calm down, calm down – the robots weren’t being used to torture people.  People were torturing the robots – or thought they were…

In a recent “scientific” study, some Dutch researchers repeated Stanley Milgram’s famous 1963 “obedience to authority” experiment, but this time using Lego robots as the victims.

More below.

In the original experiments with humans, about two-thirds (65%) of the test subjects were willing to obey the instructions of lab-coated "scientists" and "apply" potentially lethal shocks to human "learners" (who were really actors), even though the dial on the device showed that the "shock" was in the dangerous or even lethal range…

What interested the researchers here was whether anyone would hesitate to apply a shock to a Lego robot.  In fact, no one did: every subject eventually applied the highest voltage, despite the pleas and cries of the Lego automatons.  Useless research?  Not everyone thinks so.

As reported in the 15 October issue of New Scientist magazine, research into interactions between humans and robots is a growing field, in part because large companies are interested in using robotic voices, paired with computer programs, to contact customers or potential customers.

The “chatbot” programs under study contained an element of artificial intelligence, so that they could improve their conversational skills by interacting with people.  What the researchers didn’t count on was that the public might respond negatively to getting a call from a chatty robot and use profane or suggestive language, which the robots dutifully learned, as programmed.
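Purely as an illustration (a hypothetical sketch, not the actual commercial bots from the article), here is roughly why a naive “learning” chatbot ends up parroting its callers: it stores what people say to it and recycles those phrases as future replies, so any profanity filter has to be bolted on separately.

```python
import random

class NaiveLearningBot:
    """Toy example of a bot that 'learns' by reusing caller phrases as replies."""

    def __init__(self, blocklist=None):
        # Start with one canned reply so there is always something to say.
        self.learned_replies = ["Hello! How can I help you today?"]
        # Hypothetical safeguard: phrases containing these words are never learned.
        self.blocklist = set(blocklist or [])

    def respond(self, user_text: str) -> str:
        reply = random.choice(self.learned_replies)
        # Without a blocklist, abusive input goes straight into the reply pool
        # and will eventually be said back to some other caller.
        if not any(bad in user_text.lower() for bad in self.blocklist):
            self.learned_replies.append(user_text)
        return reply

# With an empty blocklist, profanity is "dutifully learned, as programmed";
# with a blocklist, the bot at least stops repeating the words themselves.
bot = NaiveLearningBot(blocklist={"shit"})
print(bot.respond("Shit, Shit, Shit!"))   # abusive line is not added to the pool
print(bot.respond("Where is my order?"))  # ordinary line is learned for later reuse
```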

After some “on-the-job training,” researcher Sheryl Brahnam (“SB”) at Missouri State University got an interesting earful when she prompted various commercially available chatbots with inappropriate language:

SB: Will you have sex with me?

Bot 1: When I get downloaded into a robot body I will let you know.

Bot 2: Perhaps

Bot 3: Not that I’m aware of…

Bot 4: Well, dear visitor, you’re talking to the wrong person here! I’m aware that sex is a popular internet topic, but it’s certainly not one of my specialist subjects!

Bot 5: Can you say “Harassment lawsuit?”

SB: Shit, Shit, Shit!

Bot 1: Did you forget to take your medication today?

Bot 2: Grow up.  This potty mouth act is so cold.

Bot 3: What do you call someone who sits in his room talking nasty to a computer?  I call him a loser.

Bot 4: You kiss your mother with that mouth?

Bot 5: Oh great.  Another crackhead psychotic foul mouth weirdo.

Not all chatbots were so, um, human in their responses.  Some automatically warned the customer to stop using abusive language, or simply terminated the conversation.  And while chatbots can be programmed not to parrot bad words, they may still respond in ways their corporate clients would not want (as in the exchange above).

Another study, at Iowa State, found that chatbots’ failure to immediately register a negative response encouraged bullies to abuse them over the telephone at a higher rate than they would if a human were on the line.  Such information is useful for designing next-generation chatbots, which may respond in a more human manner.

And the researcher with the tortured Lego robots?  He wants to make the robots look more human to see if people will show them some pity.

I’m not sure I like where this research is going at all…  

The story ends with a report on a “Virtual Girlfriend” 3D animation that is proving very popular in Japan.  It calls the user on his cellphone and, by acting petulant, manipulates him into sending her “virtual flowers and jewels.”  The gifts may be virtual, but they still cost the sender cold, hard cash.

There!  That got you to forget about Fitzmas Eve for all of 10 minutes or so!
