THE ETHICS OF AI: IoT & ROBOTICS (Part Two): Ethics & Culture

On January 31, 2020

Robotics and AI bring up questions of ethics. The ethics of AI can be illustrated with the example of Microsoft’s Tay. In the 16 hours between its first message and its last, Tay sent over 96,000 tweets. Like Siri and Alexa, Tay was designed to provide an interface for interaction. Microsoft built the bot with machine learning techniques trained on public data, then released Tay on Twitter so it could gain experience and, its creators hoped, learn and grow more intelligent. Within hours, the bot began tweeting sexist, racist, anti-Semitic content. The resulting wave of negative media attention prompted Microsoft to delete some of the tweets, take Tay offline permanently, and issue a public apology.

Tay’s brief appearance produced a cautionary tale with implications for the relationship between algorithms and culture. As a chatbot, Tay had to parse the textual speech of others and respond in kind. But what comes intuitively to humans turns out to be hard to teach a bot. The technical term for this is “natural language processing,” and it has proven difficult for bots because conversation doesn’t follow fixed rules; it’s unruly and contextual. Advances have come from applying machine learning algorithms. Like inductive reasoning in humans, machine learning works by developing generalizations from data: patterns in a data set give an AI system a way to better understand what is being said.
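To make that concrete, here is a minimal sketch of generalizing from data, written in Python with scikit-learn (the library and the toy sentences are my own choices; the post names neither). A bag-of-words classifier trained on a handful of labeled examples picks up word patterns and applies them to sentences it has never seen:

```python
# A minimal sketch of "developing generalizations from data."
# Library and examples are assumptions; no specific system is named in the post.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: the labels, not the algorithm, carry the values.
texts = [
    "you are wonderful, thanks for the help",
    "great point, I learned something today",
    "you people are worthless and stupid",
    "get lost, nobody wants you here",
]
labels = ["benign", "benign", "toxic", "toxic"]

# Count word occurrences, then fit a naive Bayes classifier on those counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The model now generalizes from word patterns to unseen sentences.
print(model.predict(["thanks, that was really helpful"]))      # likely 'benign'
print(model.predict(["nobody wants your worthless opinion"]))  # likely 'toxic'
```

Tay’s failure mode follows directly from this design: whatever patterns dominate the training data, benign or abusive, are exactly what the system generalizes.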

Facial recognition technologies have been reported to have difficulties with trans people and African-American faces. This points to a lack of diversity in the training data, which can cause accuracy problems. It also raises the issue of transparency: making the data sets available to the public or to independent overseers provides a chance to evaluate programs for bias, though that in turn raises privacy concerns.
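As a sketch of what such an evaluation might look like, here is a toy audit in Python (all names and data are hypothetical) that compares a system’s error rate across demographic groups. A large gap between groups is the kind of disparity that open data would let overseers catch:

```python
# A toy bias audit: compare error rates across groups.
# Group names, labels, and records are all hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy audit data: a large gap between groups is a red flag for bias.
audit = [
    ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"),
    ("group_b", "match", "no_match"),
    ("group_b", "no_match", "no_match"),
]
print(error_rates_by_group(audit))  # {'group_a': 0.0, 'group_b': 0.5}
```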

Machine learning builds on data sets, and the data sets are about people: their attributes, their preferences, their actions, and their interactions. This data documents people’s behavior in their everyday lives. Although Tay was active on Twitter for less than a day, that’s all it took for it to learn how to harass other users. Its behavior was modeled on past patterns of behavior on Twitter, but those patterns did not reflect the values of Microsoft.

Problems like these are driving the need to create AI systems that take culture into account, which matters for conflict resolution, prediction, and decision-making. People’s choices grow out of their environment, upbringing, and experience, so culturally aware systems are essential for making computations that policymakers, consumers, and citizens can rely on. If we want our AI systems to be in line with our reasoning and decisions, our expectations and desires, then we need to build our cultures into them.

The way to do that is with stories and analogy-based technology. Given a new problem, an AI system can use a human-like retrieval process to find a similar prior situation and compare how it applies. The process is called “analogical generalization,” and it learns by finding patterns across stories. Its advantage is that the number of training examples can be very small: even ten examples can be enough, where a deep learning system might need millions. The efficiency comes from using more “human-like representations” than machine learning usually employs. These representations capture intentions, reasons, and arguments, which are critical for building AI systems that can be trusted. Cultural products such as stories, religious texts, and folktales provide reliable source data. Material of this kind evolves over generations, providing a historical memory and a moral framework that help an AI system ground decisions in everyday life. Cultural narratives offer a moral compass through their events, actors, and motivations.
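Here is a deliberately tiny illustration in Python of the retrieval step. Real analogical-generalization systems align relational structure between cases; this sketch, with entirely made-up story encodings, only measures overlap between predicate sets:

```python
# A toy case library: each prior story reduced to a set of predicates.
# All encodings here are invented for illustration.
CASES = {
    "boy_who_cried_wolf": {"lies", "is_ignored", "comes_to_harm"},
    "good_samaritan": {"helps_stranger", "is_rewarded"},
}

def jaccard(a, b):
    """Overlap of two predicate sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

def retrieve(situation):
    """Return the prior story most similar to the new situation."""
    return max(CASES, key=lambda name: jaccard(CASES[name], situation))

# A new problem: an actor keeps lying and is then ignored.
print(retrieve({"lies", "is_ignored"}))  # 'boy_who_cried_wolf'
```

Even this crude matcher shows why so few examples suffice: each story is a rich, structured case to compare against, not one anonymous data point among millions.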

The pipeline for building a cultural model for moral decision-making would be (a toy sketch in code follows the list):

(1) Gather a set of representative cultural products

(2) Translate them into whatever natural-language form can currently be understood automatically

(3) Feed them to the analogical learning system
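A skeleton of that pipeline in Python, with every helper name invented for illustration (the post specifies no actual toolchain), might look like this:

```python
# A skeleton of the three-step pipeline above. All names are placeholders.

def load_corpus():
    """Step 1: gather representative cultural products.
    Inlined toy texts here; a real pipeline would read curated
    collections of stories, religious texts, and folktales."""
    return [
        "The boy lies about a wolf and is later ignored.",
        "A stranger helps a wounded traveler and is honored.",
    ]

def to_structured_form(text):
    """Step 2: translate a text into a machine-readable form.
    A real system would run natural language understanding here;
    this placeholder just lowercases and tokenizes."""
    return set(text.lower().rstrip(".").split())

class AnalogicalLearner:
    """Step 3: a stand-in for an analogical learning system that
    accumulates cases for later retrieval and generalization."""
    def __init__(self):
        self.cases = []

    def feed(self, case):
        self.cases.append(case)

learner = AnalogicalLearner()
for story in load_corpus():
    learner.feed(to_structured_form(story))
print(len(learner.cases))  # 2 cases ready for analogical matching
```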

Preparing the Way 

AI for IoT and AI for robotics rest on the same foundation of learning algorithms, and all of this innovation is coming. Regardless of the equipment their infrastructure currently runs on, high-tech communications companies want in on this transformation now. One IoT connectivity model is Senet’s Low Power Virtual Network. It offers participants across markets and industries a way to establish IoT revenue streams by monetizing their networks and services with very little capital and effort. With a cloud-based architecture purpose-built for Low Power Wide Area Network (LPWAN) service delivery, and Operations Support System (OSS) and Business Support System (BSS) technology built for IoT from the ground up, the approach allows rapid expansion of network coverage and of product and service offerings beyond traditional areas of operation. That unlocks regional and global expansion opportunities for existing communications companies and sidesteps the “we’re not ready yet” problem.

Conclusion

It turns out that, though robots are good at building cars, cleaning up nuclear waste, and winning games of strategy, they aren’t so good at hospitality. After opening to worldwide publicity in 2015, Japan’s Henn na, or “Strange,” Hotel, the world’s first all-robot hotel, is now laying off its droids. They annoyed guests with their inability to answer questions, appeared in rooms when not called for, and became foot hazards in the lobby when their batteries ran out. So maybe there are some things a robot can’t do.

China and the U.S. are currently the world’s leaders in advancing AI systems. In China today, when you walk out your front door, cameras immediately pick you up and log your whereabouts. Chinese consumers pay with digital currency and have their favorite snacks delivered to the office. AI improves and benefits their lives, but under the Chinese system of “social credit” earned for good behavior, it can also lead to public shaming if your score is low. The Chinese have an “ecology” of AI: WeChat, the social media app, talks to Taobao, the consumer platform, which talks to the bank, which has access to your social credit score. So we need people to oversee these systems for bias, and we all need to assure that these systems have transparency, or “open data”: if Taobao, the internet consumer giant, talks to the bank, then citizens need to know what information is being shared.

Everything is advancing so fast that everybody feels they’re missing the boat. But wait: maybe we need to think this through and slow down a little. Perhaps not everything needs an AI system. The Skoll Foundation, which focuses on social entrepreneurship to fix societal problems, says as much. While there is no doubt that AI algorithms could improve things like water efficiency in irrigation systems, the problems Sub-Saharan African farmers face are more fundamental, starting with having access to water at all. Likewise, using AI to develop accurate risk models for crop insurance won’t help much without adequate systems for distributing insurance. So perhaps basic functionality should come first, with AI applied afterward to augment what already works.

But there’s good and bad in all technology. Facial recognition software can be used for mass surveillance, or it can find missing children. New technology is always disruptive: it improves our lives but imposes difficulties and challenges. The idea is to maximize the good features. Policy, laws, self-regulation, and crowdsourcing are all ways to mitigate the downside, but tech is always dual in nature: it’s good and it’s bad.

Chinese citizens will gladly hand over their data so they can have their favorite tea delivered to the office; the downside is social credit, which can make you an outlier and put your safety in jeopardy. Transparency may be the best hedge against systemic maliciousness. But AI is advancing and the algorithms are learning, and if they keep getting better at what we do, the follow-up question is… then what will we do?
