THE ETHICS OF AI: (Part One) IoT & ROBOTICS

Jan 6, 2020

Part One: Is it Possible to Murder a Robot?

Artificial Intelligence (AI) is one of the most profound technologies of our time. It is in the “brains” of our autonomous vehicles, the systems that guard our homes, the advice on which stocks to trade, and the friendliness of our personal assistants. AI is transforming areas from education to retail, health care to finance, entertainment to national defense. Its potential benefits are virtually endless, which means its potential downsides will always be with us too. So, what do we do with this AI?

AI is defined as computer systems that perform tasks normally thought to require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AI systems can do these things, but their most striking feature is their ability to learn. So, if these systems are learning, who’s teaching them, and what are they being taught? And if these systems are learning and making decisions “autonomously,” will they become so autonomous that they defy humans?
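
To make “learning” concrete, here is a minimal, hypothetical sketch in Python: a toy spam filter that simply absorbs whatever patterns exist in the examples its human “teachers” have labeled. The data, labels, and spam-filter framing are invented for illustration, and the scikit-learn library is assumed to be available.

# A toy classifier that learns from human-labeled examples.
# The data, labels, and feature names are invented for this sketch.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hour_sent, message_length]; label: 1 = spam, 0 = not spam
X = [[2, 300], [3, 280], [14, 40], [15, 35], [9, 50], [1, 310]]
y = [1, 1, 0, 0, 0, 1]

model = DecisionTreeClassifier().fit(X, y)

# The model's "judgment" is only as good as the labels it was taught with.
print(model.predict([[2, 290]]))  # -> [1]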

Will the day come when you grab your coffee and coat, get ready to head to work, and the door won’t open? You try again. Nothing. So you ask your virtual assistant to open the door, please. And it responds… “I’m sorry, I’m afraid I can’t do that.”

Hitchbot: An example of how we treat robots, which says a lot about how we treat each other

It was no ordinary crime. Four years ago, a hitchhiker was “murdered” in Philadelphia. The victim was a robot called Hitchbot, an experiment designed to test the human-robot relationship, and its “death” raised questions about that very relationship. The robot looked like a small child, with tubular blue arms and a bucket on its head; “cuteness” was the aim. “It was extremely important that people would trust it and want to help it out,” said Dr. Zeller, who designed Hitchbot.

Hitchbot had a GPS receiver to track its journey, movable arms, and software that allowed it to communicate when asked questions. It could smile, it could wink, and it could move its thumb into a hitching position. Hitchbot was programmed so that its adventures could be tracked online and pictures could be posted on social media. It didn’t take long for Hitchbot to become a celebrity.

Hitchbot took several journeys: it was picked up by an elderly couple and taken camping in Nova Scotia; attended a wedding in British Columbia; visited Fenway Park for a baseball game; toured the Amsterdam canals; made a trip to Disney World; and, back in British Columbia, arrived in a traditional 50-foot cedar canoe with the Songhee Nation before being a guest of honor at a powwow, where it was given a name that translates as “Iron Woman.”

Hitchbot picked up thousands of fans, many of whom travelled miles to be the next person to give it a lift. Hitchbot was given its own social media accounts on Twitter, Facebook and Instagram. But then one day, images were posted of Hitchbot lying in the streets of Philly with its arms and legs ripped off and its head missing. Dr. Zeller said, “It affected thousands of people worldwide. Hitchbot had become an important symbol of trust. It was very sad and it hit us and the whole team more than I would have expected.”

Robotics

The partial meltdown of one of Three Mile Island’s nuclear reactors 40 years ago left areas of the facility radioactive. That disaster gave a group of young researchers the chance to boost the field of robotics by sending robots in to help clean up the damage. Now we’re talking about food delivery via autonomous vehicles on college campuses, which is to say robots with AI on wheels. George Mason University received 25 delivery robots that can haul up to 20 pounds each as they roll across campus at four miles per hour.

To navigate the campus, these robots rely on AI, ultrasonic sensors, and nine cameras. Two-way audio onboard allows users to communicate with human teleoperators who monitor the robots from afar and can take over the machine at any moment. The robots can cross streets, climb curbs, navigate around obstacles, and operate in rain and snow. To launch the delivery program, Starship Technologies partnered with Sodexo, a company that manages dining on campus. Jeff McKinley, Sodexo district manager for George Mason, told Digital Trends that the delivery robots are “at the forefront of changing trends.”
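
To make the navigation concrete, here is a simplified, hypothetical sketch in Python of the kind of control loop such a robot might run: halt when an ultrasonic reading says an obstacle is too close, and flag a human teleoperator when the situation is ambiguous. The sensor values are simulated, and none of these names come from Starship’s actual software.

import random

SAFE_DISTANCE_M = 0.5  # stop if anything is closer than this

def read_ultrasonic_m():
    # Stand-in for a real ultrasonic range reading, in meters.
    return random.uniform(0.1, 3.0)

def control_step():
    distance = read_ultrasonic_m()
    if distance < SAFE_DISTANCE_M:
        return "stop"              # obstacle too close: halt immediately
    if distance < 2 * SAFE_DISTANCE_M:
        return "ask_teleoperator"  # ambiguous: hand control to a human
    return "proceed"               # path looks clear

for _ in range(5):
    print(control_step())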

Robots and AI are the same conversation. With personal assistants, self-driving cars, and the emergence of companion robots, we can see how these devices are powered by AI, and their “relationship” to humans is becoming increasingly important to study. For instance, companion robots are being designed to assist with the daily living tasks of people who have Alzheimer’s disease. These robots are helping them get in and out of bed, reminding them to take medication, measuring their mood, calming them down, and providing regular updates to human caregivers.

“But,” says roboticist Professor Noel Sharkey, “we need to get over our obsession with treating machines as if they were human. People perceive robots as something between an animate and an inanimate object, and it has to do with our in-built anthropomorphism.” For instance, he warns, “We can’t be falling in love with them.”

“Intelligence”

Last year, at a conference on “intelligent” robotic systems, an autonomous drone using AI to navigate raced through a maze consisting of a complicated series of turns and gates. The drone destroyed its competition, completing the race course twice as fast as its nearest human competitor. 

AI-driven systems now routinely best humans. AlphaGo, a program built by DeepMind, went from learning the basics of the game Go to beating the world’s best human player in a little over three years. More recently, the AI AlphaStar, also by DeepMind, was able to beat a top player in the complex strategy video game “StarCraft II,” shutting out its human competitor five games to zero. 

AI is beating humans at games, but are these mathematical formulas expressed in computer programs “value-free tools” that can give us an accurate picture of social reality on which to base our decisions? For instance, autonomous cars are not simply programmed to drive; they are programmed to anticipate what a human driver would do. When platforms decide which news stories to present, they rely on algorithmic predictions: they attempt to predict what news each individual person is looking for, and then they apply decision rules designed to maximize user engagement.
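
A hypothetical sketch of that decision rule in Python makes the point: the stories shown are simply those with the highest predicted engagement, and choosing engagement as the objective is itself a value judgment. The story data and scores are invented for illustration.

stories = [
    {"title": "Local election results", "predicted_click_prob": 0.12},
    {"title": "Celebrity feud erupts",  "predicted_click_prob": 0.48},
    {"title": "New climate report",     "predicted_click_prob": 0.20},
]

def rank_by_engagement(candidates):
    # The "value" optimized here is engagement, not accuracy or importance.
    return sorted(candidates, key=lambda s: s["predicted_click_prob"], reverse=True)

for story in rank_by_engagement(stories)[:2]:
    print(story["title"])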

However, there are good reasons to make sure that the data sets on which decisions are made remain “blind,” so to speak, as in “justice is blind.” In the same way that it shouldn’t matter who stands before the judicial bench, there are instances in which the “judgments” of algorithmic systems must not be stacked, slanted, or biased toward one group or another.

Bias & Error

Navigating sidewalks, flying through mazes, and beating Go masters have interesting, useful, and beneficial implications; but recently the darker side of AI has been surfacing. Apple and Google have been accused of helping to “enforce gender apartheid” in Saudi Arabia by offering an app that allows men to track women. And there have been reports that a Chinese company owns the dating app Grindr, which caters to an LGBTQ audience; U.S. officials believe the Chinese government could end up exploiting users’ information.

In Arizona last year, a self-driving car with a backup driver struck and killed a pedestrian. Based on a video taken inside the car and records collected from Hulu, police said that the backup driver was streaming an episode of “The Voice” on her phone. The driver looked up a half-second before hitting the pedestrian. 

Even after this tragedy, Waymo, the self-driving car company spun out of Google, is pressing ahead. It plans to build a factory in Michigan, creating up to 400 jobs at what it describes as the world’s first plant “100 percent” dedicated to the mass production of autonomous vehicles. Waymo integrates its self-driving system into vehicles it buys from automakers and is currently testing autonomous Chrysler Pacifica minivans. Ford has also announced that it wants to bring self-driving cars to Washington, D.C.

Job Loss & Hacks

The rise of AI offers many benefits, but we also need to pay attention to its impact on jobs. Repetitive tasks that involve processing information, performing physical activities, or operating machinery will be the first to be replaced by AI, which means manufacturing jobs could be hit hard.

Beyond displacement, the devices themselves, and their connectivity to each other, are expected to boom in 2020. The Internet of Things (IoT) is here, and not all of these devices and systems have the same level of security. Hackers love to target IoT devices that don’t have security built in.

Need we be reminded of the 5 Biggest Data Breaches of the 21st Century?

1. Yahoo [2013-14] 3 billion user accounts: the attack compromised real names, email addresses, dates of birth, telephone numbers, and passwords.

2. Marriott [2014-18] 500 million customers: the attack compromised names, contact information, passport numbers, credit card numbers, and expiration dates.

3. Adult Friend Finder [October 2016] 412.2 million accounts: the attack compromised names, email addresses and passwords.

4. eBay [May 2014] 145 million users: the attack compromised names, addresses, dates of birth and encrypted passwords.

5. Equifax [July 2017] 143 million consumers: the attack compromised Social Security Numbers, birth dates, addresses, drivers’ license numbers, and credit card data.

In late March of 2019, Facebook admitted that it had left hundreds of millions of users’ passwords exposed in plain text, potentially visible to the company’s employees, yet another major privacy and security headache for a company already being scrutinized and penalized for mishandling people’s personal information.
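
The standard alternative to plain-text storage is to keep only a salted, slow hash of each password. Here is a minimal sketch in Python using the standard library; it is a generic illustration, not a description of Facebook’s systems.

import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest  # store these, never the password itself

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
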
Connectivity, and thus the threat of hacking, is so important that the NSA and US Cyber Command work side by side in the same building in Maryland. The NSA monitors foreign communications, while Cyber Command takes action in the digital realm. In Georgia, Gov. Brian Kemp was just informed that his state’s new digital voting system is considered vulnerable to hacking. Even Jeff Bezos’s phone was reportedly hacked in an operation linked to the Saudi government.

To continue reading, see Part Two.
