-----Original Message-----
From: DCHAS-L Discussion List [mailto:dchas-l@MED.CORNELL.EDU] On Behalf Of Stuart, Ralph
Sent: Sunday, March 06, 2016 2:58 PM
To: DCHAS-L@MED.CORNELL.EDU
Subject: [DCHAS-L] Would you trust a robot in an emergency?
There's a thought-provoking story about people's responses in emergency situations in the most recent CBC Spark podcast at:
http://www.cbc.ca/radio/spark/312-growth-and-the-start-up-economy-twitter-bot-art-and-more-1.3471294/would-you-trust-a-robot-in-an-emergency-1.3475216
Excerpt from the text summary:
So how much should we trust our technology, and how do we know when a piece of tech is no longer trustworthy?
A study from the Georgia Institute of Technology wanted to see where we draw those lines.
Dr. Ayanna Howard, a robotics engineer at Georgia Tech, and her colleagues Alan Wagner and Paul Robinette had participants follow a robot to a conference room, where they were asked to fill out a survey. In some cases the robot would go directly to the conference room; other times, Dr. Howard says, the researchers "...had the robot take them to a different room, kind of wandering. We had the robot do things like, as they followed them, the robot would just stop and point to the wall."
While the participants were in the room, the researchers filled the halls with smoke, which caused the fire alarms to go off. Participants then had the option to follow the robot, or to exit the building the way they came in.
Dr. Howard and her fellow researchers expected that about half of the participants would choose to follow the robot, "...but what happened in the study was... everyone followed the robot. It's astounding."
Despite having no indication that the robot knew where it was going, and even after seeing firsthand that it was flawed and could make mistakes, every single participant was willing to follow it.
Dr. Howard compares this behaviour to how we treat the GPS devices in cars. "When they first came out, you'd get a story once every couple of months about somebody who followed their system into the river... I know this is the wrong way, but maybe it knows that there's traffic the way that I normally go, so I'm just going to trust the technology, because I think that it must know what it's doing."
Dr. Howard says that the answer to this problem may be more transparency about how certain these robots are of their decisions: "Telling the user, look, I think I might be broken, I'm 50% sure I'm broken, and then you make the decision."
Ralph Stuart, CIH, CCHO
Chemical Hygiene Officer
Keene State College
ralph.stuart@keene.edu