The Bregman Leadership Podcast
Episode 177

Janelle Shane

You Look Like a Thing and I Love You

Why should we care about Artificial Intelligence (AI)? I wondered about that before I picked up Janelle Shane's book You Look Like a Thing and I Love You. Janelle, a research scientist and 2019 TED fellow, brought me on a humorous and humanizing journey through AI. AI is aptly named: the intelligence truly is artificial. It's not that AI will someday be smart enough to replace us; the danger is that it isn't smart enough to help us right now. Discover how AI can be a mirror of our flaws, how it can perpetuate biases, and how its struggles are really our own.

About

Website: aiweirdness.com
Book: You Look Like a Thing and I Love You
Bio: Janelle Shane has a PhD in electrical engineering and a master’s in physics. At aiweirdness.com, she writes about artificial intelligence and the hilarious and sometimes unsettling ways that algorithms get human things wrong. She was named one of Fast Company’s 100 Most Creative People in Business and is a 2019 TED Talks speaker. Her work has appeared in the New York Times, Slate, The New Yorker, The Atlantic, Popular Science, and more. She is almost certainly not a robot.

Transcript

This transcript has been lightly edited for clarity.

Peter: With us today is Janelle Shane. Her work has been featured in The New York Times, The Atlantic, Wired, Popular Science, and All Things Considered. She's a 2019 TED fellow, and she has most recently written the book You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place. She has an incredibly popular blog where she talks about funny AI things. Who knew that AI could be so funny? Well, read this book and you will have an idea, because it is super interesting. Janelle is an optics research scientist, artificial intelligence researcher, writer, and public speaker. Her blog is AI Weirdness, by the way. Janelle, welcome to the Bregman Leadership Podcast.

Janelle: Hey, thanks so much for having me on the show.

Peter: I want to know: what is AI? What is artificial intelligence?

Janelle: Well, that's the tricky thing, because what AI means can vary a lot depending on who you talk to. There's the science fiction writers' AI, and then there's the AI of people who do research, and then there's the AI of people who are trying to sell things, and they all have different definitions. So when I was writing the book, one of the things I had to sort out was: is the AI behind Siri the same as the AI that sorts your spam emails, and is that the same as C-3PO? I ended up going with a definition that machine learning researchers tend to use, a specific kind of algorithm, and I dive into that as the thing that computer scientists are calling AI these days.

Peter: So this is going to sound like a harsh question, and it really is not meant to be, but why should we care? I'm looking at the New York Times and I see an article on AI, which I happened to have read yesterday because I knew that you and I were going to be talking today, but if not for that, I would have just passed it over, because I figure AI is interesting insofar as I use it for Siri or for Alexa. Beyond that, if I'm not a coder, if I'm not a machine learning person, why do I care about AI?

Janelle: So this is one of those things: even as a consumer, or as somebody who's trying to decide whether your company or your group should use AI for something, we are being presented with a lot of kind of sketchy AI, AI applications that aren't very good, or that are never going to work, or that, if they're going to work, it's going to be via bias. Or we have other AI presented to us that on the surface looks like AI, but when you look under the hood and ask, okay, wait, how can it possibly be doing this really tough task that I know AI is not really capable of? Oh, it's the equivalent of a person in a robot suit: there is actually a remote worker clicking buttons, and that is their "AI." So we have to know enough about AI to see these things coming, to say, all right, that's a bad application, or to at least know why we're not using AI in a particular instance.

Peter: Is that why you wrote the book, in a sense, to help? Because it's not written for coders; it's written for lay people. It's written for people like me, at least: I enjoyed reading it, and it seemed like it was written for me. So are you writing it because this is the most accessible way to grasp an understanding of what AI is, in kind of a fun and interesting way, so that we're not out of the conversation?

Janelle: Yeah, I think it's really important. As consumers, even as voters, we do have some say in what AI we buy and how AI is used. And I really saw this when I was doing my blog and I would do these weird experiments. You know, I've trained an AI to try to generate new paint colors, and it would come up with something called, like, Stinky Bean or Horrible Gray, and it wouldn't know that these are bad names for paint colors. And I'd get people asking me, wait, but how is it making these mistakes? So I saw that there was a need to answer this question, and to answer it via stories, because there were a lot of stories out there about the AIs that we don't have, the AIs of the future, like WALL-E or Skynet, but there aren't very many about the AIs that we actually are working with today. So I wanted my book to be a way to get some of these sticky stories in your mind. When you think of AI, think of the algorithm that was asked to sort a list of numbers, but whose programmers technically asked it to eliminate the sorting errors. So the algorithm eliminated the entire list.
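[Editor's aside: here is a minimal Python sketch of the shortcut Janelle describes. The objective function and the candidate "solutions" are invented for illustration, not taken from the actual system she mentions.]

    # Toy version of "eliminate the sorting errors": the objective only
    # counts out-of-order pairs, so an empty list scores a perfect zero.

    def sorting_errors(lst):
        """Count adjacent pairs that are out of order."""
        return sum(1 for a, b in zip(lst, lst[1:]) if a > b)

    data = [3, 1, 4, 1, 5, 9, 2, 6]

    candidates = {
        "actually sort the list": sorted(data),
        "leave it alone": list(data),
        "delete the entire list": [],   # the degenerate shortcut
    }

    for name, solution in candidates.items():
        print(f"{name}: {sorting_errors(solution)} errors")
    # "delete the entire list" scores 0 errors without sorting anything:
    # the objective never said the numbers had to survive.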

Peter: Right. Well, so you come up with these five principles of AI weirdness, right? That the danger of AI is not that it's too smart, but that it's not smart enough. That it's got the approximate brainpower of a worm. That it doesn't really understand the problem you want it to solve. That it will do exactly what you tell it to, or at least it's going to try its best to. And that it'll take the path of least resistance. And I have to say, as I read through these, I'm thinking, I know some people who operate in this exact same way.

Janelle: Okay.

Peter: Right. Like, it's like super-literal people.

Janelle: [Inaudible]

Peter: Right. And that's what you're sort of saying: AI is super literal. It's not going to make inferences based on what you say; it's going to operate within the rules that you set up for it.

Janelle: Yeah. It's almost like working with one of these wish-granting items from fairy tales that will grant you your three wishes but will do exactly what you asked for. And if your wording is slightly off, then, you know, chaos.

Peter: Yeah. And I guess it's why we get frustrated. Like, I have said recently to my kids, as we've been using Alexa, I think Alexa's gotten dumber. Like, I think Alexa was smarter six months ago, and I don't know if they've changed the algorithm or something. But Alexa (I was about to say "she," because it's a voice, and this is sort of like "you look like a thing and I love you": we think of Alexa as a she) is making a ton of mistakes, and I wonder why. I wonder what that's about, and whether we're just imagining that, or whether we have an expectation that AI is going to be smarter than AI actually is.

Janelle: Yeah, I do think we in general have expectations of AI being really smart. Partly that's from science fiction, where the AIs are really smart and on the surface look like the AIs that companies are giving us. And also because the companies that develop these things and try to sell these things do have a motivation to make them seem really smart and competent. It's only when you start using them that you realize, oh, this doesn't understand what I'm saying, or it only understands very specific things that have been [inaudible].

Peter: And, you know, because AI sounds like a woman or a man (Alexa sounds like a woman; Siri, for me, is a Scottish man), because it actually sounds like you're talking to someone, you have an expectation that you're talking to someone, and that's the wrong expectation.

Janelle: Yeah, that’s true. You are talking to something much simpler than a person.

Peter: So give us an example from your research. I spoke with my kids again this morning about the robot getting from point A to point B; I love that particular experiment. But give us an example, that one or another one that comes to you, that reflects these five principles of AI weirdness.

Janelle: Yeah, the example of the traveling robot, I really do like that one, especially because when I was trying to put together a book, I wanted things that would stay true about AI despite how fast the field is moving. And this is one of those phenomena that I first saw in a paper from the 1990s (I'm sure it happened earlier than that), and then it happened again in 2018. What it is, is AI-controlled robots, basically, that refuse to walk, because walking is tough to do, and they will take any shortcuts they can possibly take to avoid having to figure out how to walk. So if you want a program that controls a robot that can walk from point A to point B, with traditional rules-based programming you would have a programmer write down step-by-step instructions on how to assemble the parts into a robot,

Janelle: then some kind of thing to make the legs move and make the robot walk. With AI, though, it is so different. With AI, you give it the goal, that it is supposed to get to point B, and then it has to figure out its own strategy via trial and error. And the way the AI tends to solve this problem is to figure out that it's much easier just to take these parts, assemble them into a really tall tower, and then fall down, so that the head of the robot lands at point B. Technically that solves the problem as you laid it out, and it was easier; the AI found this brilliant shortcut. It doesn't know that you don't want it to just flop over, that you're hoping it will be able to get beyond point B at some point. So yeah, this is a beautiful illustration of the way AI will solve a problem.
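[Editor's aside: a toy Python sketch of why trial and error lands on the falling-tower shortcut. The strategies and the numbers are invented for illustration; a real system would discover these behaviors through simulation, not from a hand-written table.]

    # Hypothetical strategies, scored by whether any body part reaches
    # point B and by how hard each one is to discover by trial and error.
    B = 10.0  # distance to point B

    strategies = {
        # name: (farthest point any body part reaches, rough "discovery cost")
        "learn to walk":          (10.0, 1000.0),
        "somersault repeatedly":  (10.0,  200.0),
        "stack parts, fall over": (10.0,    5.0),
    }

    viable = [name for name, (reach, _) in strategies.items() if reach >= B]

    # Trial and error tends to stumble on the cheapest viable solution first.
    print(min(viable, key=lambda name: strategies[name][1]))
    # -> "stack parts, fall over": it technically solves the problem as stated.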

Peter: You know, it doesn't see "robot"; it doesn't understand "robot." It just understands all of these various parts, and it doesn't necessarily know where they go, so it assembles them in whatever way works. Actually, at the point when I was reading this, my wife was in the room, and she said, oh, it's not very creative. And my daughter's view was, no, it's incredibly creative. Like, who would think of attaching all the pieces of a body in a straight line and then knocking it over, and that's how you get there faster? So in a way it's super creative, but despite us, not because of us.

Janelle: Yeah, exactly. We have these preconceived notions about how to get to point B, and in some situations it may be more efficient to do it differently, and AI is going to figure this out. Even if we give it a robot body, like some kind of humanoid body, often it won't use it to walk normally. It will start somersaulting toward point B, or it will run backwards, or it will use its hands as these kind of weird counterbalances, sticking them out at weird angles. And the way we set up the problem, we didn't tell it that it had to face forward. There are no obstacles to look for; it has no eyes; it maybe doesn't have to worry about getting tired. So if somersaulting gives it a speed or stability bonus, then it will do that.

Peter: Huh.

Janelle: Yeah. So creative. It will use every part of its environment to its advantage.

Peter: So that's an example of AI not being limited by the way we think as human beings. I was reading an article in the New York Times yesterday, as I mentioned, about AI, which I never would have read had you and I not been talking. It was talking about how AI has exposed bias: the programmers put in 100 words, including words like "baby," and 99 of the 100 words, so 99%, the AI affiliated with men. There was one word, "mom," which was affiliated with a woman. It's basically learning from Google Books, looking at the way that words have been used in the past, but that means it's looking at the biases that our culture has, and it's continuing those biases, because it doesn't know any better than to stop the biases. And then you talk about how AI can take biases out of the interview screening process, because it can expose them. I'm curious to get your thinking on this: how AI can free us from the kinds of biases or prejudices or constrained thinking that we have, and at the same time how it might actually perpetuate them, because it's not particularly smart.
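[Editor's aside: the mechanism Peter is describing, a model absorbing gendered associations from its training text, can be shown with a crude co-occurrence count. This tiny "corpus" is hypothetical, just to make the statistics visible.]

    # Count which words appear in the same sentence as "he" vs. "she".
    from collections import Counter

    sentences = [
        "he was a doctor",
        "she cared for the baby",
        "he was an engineer",
        "he was a doctor",
    ]

    cooc = Counter()
    for s in sentences:
        words = s.split()
        for w in words:
            for other in words:
                if other != w:
                    cooc[(w, other)] += 1

    for target in ("doctor", "baby"):
        print(target, "he:", cooc[(target, "he")], "she:", cooc[(target, "she")])
    # A model trained on this text links "doctor" with "he" purely because
    # that is what the data contains; it has no way to know better.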

Janelle: Yeah. So bias is not something new, but it is one of the things that comes out glaringly when you try to automate some of these decisions about loans or about parole, because now you have a system where you can run thousands and millions of test decisions and tweak one variable and say, okay, is there a correlation between the people who get loans and their gender or their race? Because there's not supposed to be a correlation; this is illegal in many places. But you can do these kinds of experiments on AI and find, oh no, it has come up with this correlation. And the responsible thing to do then, instead of saying, well, that's the algorithm, I'll just do what it says,

Janelle: would be to say, okay, hang on. That shouldn't be there; let us fix this. And there are different strategies for how to then fix an AI's decisions. One thing to do is say, okay, in a fair world, equal numbers of people from all these different categories would have gotten loans, so let us manually move some people over from category A to category B until the distribution matches what it would be in a fair world, and we'll train the AI on that. And that can work, in some cases.
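[Editor's aside: a sketch of the audit Janelle describes: run the same applicants through the model twice, flipping only one variable, and see how often the decision changes. The model interface and the feature names here are hypothetical.]

    def audit_flip_rate(model, applicants, attribute="gender"):
        """Fraction of applicants whose loan decision changes when only
        the protected attribute is flipped."""
        flips = 0
        for person in applicants:
            counterfactual = dict(person)
            counterfactual[attribute] = "F" if person[attribute] == "M" else "M"
            if model.predict(person) != model.predict(counterfactual):
                flips += 1
        return flips / len(applicants)

    # A flip rate well above zero means the attribute (or a proxy for it)
    # is driving decisions: the cue to rebalance the training data and
    # retrain, rather than shrugging and doing what the algorithm says.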

Peter: You talk about these four signs of AI doom, right? The problem is too hard; the problem is not what we thought it was; there are sneaky shortcuts; and the AI tried to learn from flawed data. When I think about that, I think these are also problems that humans run into, right? We get stuck when the problem's too hard, or it's not what we thought it was, or we try to take sneaky shortcuts that don't particularly work, or we try to learn from flawed data. I'm wondering what we can learn as human beings from what you have learned from AI. You've learned a lot about AI and the challenges that AI faces. How can that shed some light? And I don't know if you've thought much about this, but this is what I think about every day, right? How do we address the challenges that we face as human beings? I would love to have your experience with AI shed light for us on how to be better human beings.

Janelle: Well, yeah, this is one of those things about working with AI: it's sort of like holding a mirror up to us, because it is imitating human behavior as it sees it. And we can see some strange things reflected back at us. So there's the bias, for example. And then there's also, like, why does our society work this way? Why is "magic school bus" probable, but "zombie school bus" not so probable? It does tend to reflect back at us these things that we may take for granted until we see a computer imitate them. So that's kind of fun.

Peter: So it's like AI is holding up this mirror that shows us exactly what we're doing. Like, there's just no distortion in the mirror. And, like we were talking about with bias, it can help expose biases.

Janelle: Well, it reflects back what it can. And that's the other interesting thing: it's also a way to feel good about ourselves, when we see the things that we do effortlessly that AI struggles so much with. I mean, you see AI doing really well at chess or at games like Go, and that's because these are very specific, narrow domains with nicely set-out rules that the AI can systematically go in and try to solve. But it's really tough to get AI to, for example, fold laundry. That's a really tough problem, and we don't realize how amazing it is that we can pick up a shirt and figure out how to fold it, an arbitrary shirt with a different length and a different fabric, whereas the AI goes, whoa, whoa, I only learned how to fold this one size of polyester t-shirt, and it takes it half an hour to fold it. So that's another thing we see, where the AI can't do what we can do, and we don't realize how amazing the human brain really is until we see this much simpler system falling on its face as it tries to do our everyday stuff.

Peter: Huh. How do you even think of trying to get AI to fold a t-shirt? Like, what got you interested in this, and how do you even think about these experiments?

Janelle: Well, the t-shirt thing came to mind because there is a company in Japan that's been working on this problem. The company's called Mira Robotics, and they have a prototype that can fold a t-shirt. It takes a really long time to do it, and you have to have a human there to hand it the t-shirt in exactly the right position. It takes forever. But the goal was to build these sort of home helper bots, and that has proven to be really, really hard. Like Jetsons butler bots: something we'd really like, and something that we don't have. And so this particular company has come up with a solution they're working on instead, which is to have remote-controlled butler bots. This is, like I mentioned before, a person located hundreds or thousands of miles away who is remotely controlling this bot to fold your laundry or whatever. So it's an illustration of the desperate lengths people will go to, to come up with an AI solution for something where an AI solution is not going to work. And in some cases they only realize that they're in trouble after they've gotten going and find that they can't do it with AI.

Peter: Which is why human beings often get very frustrated at AI phone systems, for example, which are trying to act like human beings, but then we say things and they don't understand what it is that we're saying. It's like a setup: it raises our expectations and then it doesn't meet them, because it's trying to make us feel like we're talking to a human, and we're not.

Janelle: [Inaudible] and the way some companies have approached solving this particular kind of mismatch between expectations and reality is to swap in humans secretly when the AI begins to struggle. And that can be a really bad thing, actually, because then you have a frustrated customer who's talking to an employee, but the customer is already frustrated and doesn't realize that the employee is even a human being. And so that's a recipe for frustration all around, and for employee abuse. It would be so much better to have, like, a talk-to-a-human option, or some kind of indication, so you know what you're talking to, and when it's okay to mess around and when it's okay to get angry.

Peter: All right. Is there a way to help AI get smarter? Because I wonder whether this could shed light on how we can help human beings understand, you know, subtlety or intonation, or how to read between the lines, that kind of thing.

Janelle: Yeah. So the way to help AI get smarter at these kinds of projects is to do as much of its work for it as you possibly can. So rather than hoping your AI is going to be able to adapt to subtleties or to a free-form open conversation, maybe you restrict things down so your customers only have a few different options they can choose from, and these are all options that you know your AI is going to be able to handle. It's the sort of thing that AIs like Siri and Alexa are using, where they have some built-in functions that they can handle, and then if it's outside of those, they'll have maybe some kind of standard "I don't know" or "let me get you help" response.
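[Editor's aside: a minimal sketch of "do as much of its work for it as you can": restrict the conversation to a few known intents and hand everything else off rather than bluffing. The phrases and intent names are invented.]

    INTENTS = {
        "track my order": "order_status",
        "reset my password": "password_reset",
        "talk to a human": "human_handoff",
    }

    def route(utterance: str) -> str:
        """Match a few supported requests; never bluff on the rest."""
        text = utterance.lower()
        for phrase, intent in INTENTS.items():
            if phrase in text:
                return intent
        return "human_handoff"   # standard out-of-scope fallback

    print(route("Hi, can you track my order?"))    # order_status
    print(route("My emu got loose in the yard"))   # human_handoff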

Peter: Right. That reminds me, actually, I was just joking with a friend of mine about Amazon, because humans do this too. Amazon has their frequently asked questions at the bottom of a product page, and sometimes a question will say something like, will this work with blah, blah, blah? And someone will answer, "I don't know, I haven't tried it." You're like, why would someone answer "I don't know" to an FAQ question on Amazon? But it almost feels like maybe that's an AI, because why would a human being take the time to say "I don't know" to a question?

Janelle: Humans comment on the weirdest things when they don't know what's going on.

Peter: It's really true. And then you begin to have a greater respect for AI. How is that?

Janelle: I've worked with AIs that also bluff, right?

Peter: Right. Is that right?

Janelle: Yeah. There's a bot, Visual Chatbot: you can ask it questions about an image and then have a back-and-forth conversation with it. But since it was trained on the way that humans answered questions about images, it learned that, whatever happens, you don't express confusion, because that's just not a thing that happens.

Peter: Wow.

Janelle: So it will bluff. It will dig itself a hole if, you know, you give it a picture of Neptune and it tells you it's a red apple. Yep.

Peter: It won't say, "I don't know." It refuses to say, "I don't know."

Janelle: Exactly. And then, for the purposes of that conversation, Neptune is a red apple, and there is nothing that will convince it otherwise.

Peter: Well, it is such a mirror. It is so interesting to look at AI as a mirror for, you know, our strengths and our foibles. This might be a hard question to answer, but given the way that AI can't think, right, it just follows parameters that we give it: if I were an AI interviewing you, instead of a person interviewing you, what questions would I ask?

Janelle: I think you would ask some kind of average of all the questions that have been asked before, because to know what people ask other people, you have to be trained on examples of these questions. So it would probably be a somewhat out-of-context question for me about a book I didn't write.

Peter: Sure.

Janelle: Okay.

Peter: So, you talk about this in the book: AI-generated writing. How is AI-generated writing possible? You know, blogs that are written by AI?

Janelle: Yeah, so these are AIs that have basically learned to predict which letter comes next in a phrase. They've looked at a whole bunch of examples of human writing, and they know, for example, that if you have the letters T-H, then E is a likely thing to come next, and Q is not so likely. The really powerful ones now look at a lot of data and have a lot of memory, so they can actually do this over the course of sentences or paragraphs. They're calculating probable next words, but they really don't have any idea what they're saying. And so things can get kind of surreal sometimes.
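[Editor's aside: a bare-bones Python version of "predict which letter comes next," using trigram counts over a tiny sample text. Real systems do the same kind of thing with neural networks over far more context; this is only the principle.]

    from collections import Counter, defaultdict
    import random

    text = "the theory of the thing is that the theme is there"

    # Count which letter follows each two-letter context.
    counts = defaultdict(Counter)
    for i in range(len(text) - 2):
        counts[text[i:i + 2]][text[i + 2]] += 1

    print(counts["th"].most_common())  # 'e' dominates, just as Janelle says

    # Generate text by repeatedly sampling a probable next letter.
    out = "th"
    for _ in range(30):
        followers = counts.get(out[-2:])
        if not followers:
            break
        letters, weights = zip(*followers.items())
        out += random.choices(letters, weights=weights)[0]
    print(out)  # plausible letter sequences, with no idea what they mean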

Peter: Like your paint names. I want to read how you ended the book, because I just think it's really worth reading, and maybe you have a comment about it, but I think it's really powerful. Here's what you wrote at the end of your book: "As AI becomes even more capable, it still won't know what we want. It will still try to do what we want, but there will always be a potential disconnect between what we want AI to do and what we tell it to do. Will it get smart enough to understand us and our world as another human does, or even surpass us? Probably not in our lifetimes. For the foreseeable future, the danger will not be that AI is too smart, but that it's not smart enough. On the surface, AI will seem to understand more. It will be able to generate photorealistic scenes, maybe paint entire movie scenes with lush textures, maybe beat every computer game we can throw at it. But underneath that, it's all pattern matching."

Peter: "It only knows what it has seen, and seen enough times to make sense of. Our world is too complicated, too unexpected, too bizarre for an AI to have seen it all during training. The emus will get loose, the kids will start wearing cockroach costumes, and people will ask about giraffes even when there aren't any. AI will misunderstand us because it lacks the context to know what we really want it to do." And for me, this book really... it's like, I remember when I scuba dived for the first time, probably 30 years ago, and I was under the water, and I thought to myself, wow: just by learning how to breathe underwater, an entire world has opened up to me that I never had access to before. And one of the things that your book did for me is it opened up an entire world that not only did I not know about, but that I honestly wasn't particularly interested in. And your writing about it was so fun and interesting that it's opened me up to this whole world. So before we go, Janelle, I want you to tell us about giraffes. What is this thing about giraffes?

Janelle: Oh yeah. So here’s a giraffe that I-

Peter: And it's not small. So you're traveling right now, and that's a commitment, to travel with a giraffe. And it's actually very cute.

Janelle: Yeah, yeah. It's curled up in my suitcase. But the reason I bring it is that giraffes have become a bit of a running joke in machine learning circles. People have noticed that image recognition algorithms, if they look at ordinary pictures of, like, plain rocks, or some strange thing like the coat of arms of Savoy, things that are definitely not giraffes, sometimes the algorithms will still report, oh yes, there are two giraffes right here. And it's an interesting quirk that reveals how little these algorithms really understand of what they're seeing, and also how common giraffes are in the kinds of pictures that we train image recognition algorithms on, as opposed to, like, plain rocks. So as far as they know, giraffes are more likely than just an ordinary plain picture.
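[Editor's aside: a toy illustration of the class-prior effect Janelle mentions. If boring pictures are rare in a training set, a classifier leaning on those priors will guess "giraffe" long before it guesses "plain rock." The label counts are invented.]

    from collections import Counter

    training_labels = ["dog"] * 300 + ["giraffe"] * 120 + ["plain rock"] * 2
    prior = Counter(training_labels)
    total = sum(prior.values())

    for label, n in prior.most_common():
        print(f"{label}: {n / total:.1%} of training images")
    # Faced with an ambiguous image, a model that leans on these priors
    # considers "giraffe" far more plausible than "plain rock".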

Peter: So here's the thing, folks: you are now in on the joke. It is not a joke you would have been in on had you not listened to this podcast or read Janelle's book, and that's actually, again, how I feel about this book and our conversation: it lets me into a world I never would have been in otherwise. We've been talking with Janelle Shane. You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place is her book. Janelle, thank you so much for being on the Bregman Leadership Podcast.

Janelle: Hey, thank you so much. This was a lot of fun.

 

Comments

  1. Susanne says:

Fun and illuminating! Both reassuring and yet still frightening to think that immature programmers who are not interested in or aware of their own biases or cultural conditioning create the algorithms. Wow! Thanks for this happy encounter with AI as a mirror of our own expectations and foibles.
