E13. "Believing In Ghosts" - How much power are you willing to give to AI?

“How much power are you willing to give to AI?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the science fiction AI short story "Believing In Ghosts" by André Lopes.

Transcript (By: Transcriptions Fast)

Believing in Ghosts by Andre Lopes.

Kolby: Hi. You’re listening to After Dinner Conversation, short stories for long discussion. What that means is we get short stories, we select those short stories, and then we discuss them, specifically the ethics and morality of the choices the characters make and the situations they put us in. Why did you do this? What would make you do this? What makes us good people? What’s the nature of truth? Goodness? All that sort of stuff. And hopefully we’re all better, smarter people for it and learn a little bit about why we think the way we think. So, thank you for listening.

Kolby: Hi. And welcome to After Dinner Conversation, short stories for long discussions. I am your co-host Kolby, here with my co-host Jeremy.

Jeremy: Hi.

Kolby: Who now knows he doesn’t just wave, he has to talk, because it’s a podcast. And Ashley.

Ashley: Hello.

Kolby: And we are once again in La Gattara café where they, you know, one of the times I said they rent cats and that’s not quite right.

Ashley: No.

Kolby: You can buy the cats and take them home.

Jeremy: Adopt them.

Kolby: Adopt them. Or you can just come and have a cup of coffee, use their free Wi-Fi and have cats around you. So, we’ve got cats all around us right now.

Ashley: When we say cats, we’re talking 15 cats, like, just chilling, hanging out.

Kolby: And there’s a spectrum of cats. There’s like the lazy cat all the way to the like, “I randomly jump up and do a 720 in the air because I think a ghost touched my butt.”

Ashley: And from kittens to, you know, the older cat population.

Kolby: Seniors.

Ashley: Senior kitties.

Kolby: Yeah, they’re awesome so you should definitely come. And they’ve been really great hosts for us in sponsoring the show, and we just really appreciate it. So, short stories for long discussions, After Dinner Conversation. So, the whole point of this is for us to have conversations about the ethics and morality of the stories that we read in the hopes that it’ll encourage you to do the same. It’s meant for you to read the story, talk with your friends, debate, have a cup of wine. A cup of? Nobody has a cup of wine.

Ashley: Glass of wine.

Jeremy: Glass.

Ashley: A bottle of wine.

Kolby: Maybe the way I drink it. I drink it out of a cup.

Ashley: You fancy. One of those boxed wines. Have a box of wine.

Jeremy: I was thinking sippy cup.

Kolby: I was thinking one of those baseball cups, like the 32 ouncers. Like, “I’m just having one glass of wine before bed.” Okay, at any rate, the one we’re doing tonight is “Believing in Ghosts.” And Ashley, you drew the short straw so you get to do the…

Ashley: I get to do the intro about the story.

Kolby: Yeah.

Ashley: Okay, so this is called “Believing in Ghosts” written by Andre Lopes. The premise of the story is that the main character Raine is basically a computer hacker: if someone hacks certain computer systems, she goes and basically finds out who did it and debugs it.

Jeremy: She would be a security consultant.

Ashley: There you go. That’s the technical term. I’m not that computer literate, so we’re just going to…

Kolby: How is it the least computer-literate person gets to do it?

(laughter)

Ashley: Because I drew the short straw.

Kolby: I should’ve given Jeremy the short straw.

Ashley: So, she’s a consultant, so she works with a couple of different people, one of them being a politician who’s running for office. And what happens is there are these people called ghosts, which are pretty much just AI. People think they’re real people, that they have their own autonomous thoughts, and things like that. Turns out they’re just an AI. And so, as they progress through the story, this politician that she’s working for, he gets hacked, and it turns out that the politician is pretty much just a vessel for this AI, who’s creating speeches, creating basically an entire personality, and this politician is just the vessel for him to carry that AI’s message.

Kolby: So, there’s a real person, right?

Ashley: There is one. The politician is a real person….

Kolby: But it’s just an actor or something?

Ashley: … but his speeches, the way he talks, the way he acts, it’s pre-programmed for him to follow.

Jeremy: By their algorithms.

Ashley: By their algorithm. And they grab the algorithm by all these….

Kolby: So, he’s just like a vessel for the AI…

Ashley: Is an actor. Reading somebody else’s script, acting in a certain way. So that’s a really short synopsis of this story.

Kolby: I continually keep picking on Jeremy for his long synopses.

(laughter)

Ashley: Well, okay, so what you should do is go read the story, because it’s actually pretty darn good. And there are a lot of more in-depth side stories that we’re going to get into when we talk about the discussion questions. So, I just gave a short premise to kind of prime you for what we’re about to talk about.

Kolby: No, that makes sense. We should also mention, Jessica didn’t get fired.

(Laughter)

Kolby: She just went back to California, and so...

Ashley: She’s greatly missed.

Kolby: She’s greatly missed. And her cackle is greatly missed. She has a great cackle.

Ashley: Cats miss her too.

Kolby: Cats, yea. Especially…

Ashley: I miss her too.

Kolby: What was the one?

Ashley: Hemingway. Awwww, Hemingway got adopted out.

Kolby: He did. All these cats are open for adoption. Okay, so we have an AI that basically tells a politician what to do and the hacker finds the secret out basically. So, this is like a near future thing in my mind. This is not… I feel like the idea of having AI that you can have a conversation with… I don’t….

Jeremy: It’s interesting, the Chinese room… the part of the story where they talk about the Chinese room.

Kolby: Oh, yea. Explain that.

Jeremy: It’s really associated with the Turing test.

Kolby: Maybe you should explain the Turing test too.

Jeremy: I didn’t look that up.

Kolby: Want me to explain it?

Jeremy: I know what it is, but go ahead and explain it.

Kolby: It’s named after Alan Turing, the guy they made the movie about. The idea is that it doesn’t matter if something is alive or not alive; if it can fool people, it’s good enough. So, the Turing Test has been run for years, where they actually have you have a chat message conversation with a series of “people,” so to speak. I’m making air-quotes, which doesn’t make sense for a podcast.

(laughter)

Kolby: And the theory is, if the AI can have a chat conversation with you that’s so good that you don’t know it’s not a person…

Jeremy: That it can fool a person, it passes the Turing test.

Kolby: Then why do we care if it is or isn’t a person? If you create the approximation of a person, that’s good enough.

Jeremy: And that’s the basis of…

Kolby: By the way, nothing’s passed that test yet.

Jeremy: Right.

Kolby: I don’t think any computer has been able to do it yet.  

Jeremy: No. And that’s the idea behind the Chinese room. Basically, if you have enough “if-then” statements, if the input is this from a real person speaking Chinese, and even if you don’t speak Chinese….

Kolby: I just have a giant set of index cards.

Jeremy: Dictionary, right. Index cards, that if they ask this question, you can answer with this.

Kolby: So, if they say, “How’s the tea?” You know to say, “It’s fine” in Chinese.

Ashley: So, the question is, does the AI speak Chinese or is it just spitting out…

Jeremy: Right, responses to “if-then” questions.

Ashley: So, does it know the language or does it not? I think that was one of the first discussion questions. What’s your take on that? Does it know the language?

Kolby: If you do everything that approximates it but you have no idea what you’re saying, like, if I say “(speaking Chinese)”, which is Chinese by the way since I do know a little Chinese, does it matter that I have no idea what that means, except that a little card says, you know, that’s how I should respond?
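(As a rough illustration of that index-card idea, here is a minimal sketch in Python; the phrase pairs and romanized spellings are made up for the example, and the point is only the shape of the lookup, not how any real system works.)

```python
# A toy "Chinese room": canned replies looked up from index cards,
# with zero understanding of what any phrase means.
# (Hypothetical phrase pairs, romanized so they are readable here.)
index_cards = {
    "ni hao": "ni hao",                   # "hello" -> "hello"
    "cha zenmeyang?": "hen hao, xiexie",  # "how's the tea?" -> "it's fine, thanks"
}

def reply(prompt: str) -> str:
    # Look the prompt up on a card; fall back to a stock "sorry?" if no card matches.
    return index_cards.get(prompt.lower().strip(), "dui bu qi?")

print(reply("Cha zenmeyang?"))  # prints "hen hao, xiexie" without knowing what it said
```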

Jeremy: Probably depends on the scenario.

Kolby: What do you mean?

Jeremy: In terms of whether it knows Chinese, I mean, it can certainly answer questions because it understands the input.

Kolby: And it understands what the appropriate output is.

Jeremy: Right. So, in that sense, yes.

Kolby: Okay. Can you say it knows Chinese?

Jeremy: Yes, I would say it does know Chinese because it’s programmed specifically to respond in Chinese.

Kolby: Right.

Ashley: Well, that’s actually the premise of the story: aren’t we all programmed that way, with the way that we learn language? When someone says “Hello”, you say “Hello” back. It’s an automatic program for us.

Kolby: “How are you doing?” “I’m fine”

Ashley: “I’m fine.”  “How are you?” “I’m fine.” That’s normal speech patterns and dialect.

Kolby: So, Ashley, we talked before, you are of the opinion that that does not mean you know Chinese.

Ashley: Yes.

Kolby: If you create the approximation of everything, it doesn’t mean you know anything.

Ashley: I think it’s because it eliminates those that deviate from that. Like, for example, that perfect example was, “How are you?” “I’m fine.” What about those people that come at you with a different response? And you’re like “Wait, what? You’re not following the standard protocol.” They actually say, “Well, you know, I’ve actually had a hard day.” You watch the reaction of the person…

Jeremy: But that’s different. That’s actually asking, is there an intelligence behind that chat room?

Kolby: So, you would say no to the intelligence?

Jeremy: Yes.

Kolby: Oh, see I actually disagree with that as well.

Jeremy: Because if you’re just responding to, if it’s just an “if-then” scenario, if this is the question, this is the response, it’s not based on an underlying intelligence. It’s just selecting answers.

Kolby: So, this is my, one of my friends once said, she said, “I don’t think you have Asperger’s, but you’re certainly Aspe-y.”

(laughter)

Jeremy: You’re on the spectrum.

Kolby: I’m on a spectrum. I don’t know what spectrum, but I’m definitely on a spectrum and I don’t disagree with her. I think I probably am on a spectrum. But I disagree. I thought I agreed with you, Jeremy, but I disagree with both of you, it turns out.

Ashley: Okay so someone can spit out...

Kolby: I think, that we are all an amalgamation of accumulated “if-then” statements.

Jeremy: Absolutely.

Kolby: That does not mean I’m intelligent. That means that I have…

Jeremy: …learned something.

Kolby: Yeah, it’s like the first time somebody says, “Does this outfit make me look fat?” You go, “No, your fat makes you look fat.” And then you get in trouble.

(laughter)

Jeremy: And you learn. Well, that’s the whole thing with algorithms.

Kolby: “Your fat makes you look fat. The clothes just accentuate it”. That’s actually finishing the sentence. And so, then it’s like, “No, that’s the wrong “if-then” statement.” And I go, “Oh, when someone says, ‘Does this outfit make me look fat?’ Now I’m like, it’s like a trial and error process where I go, ‘No, it looks fine.’”

Jeremy: So that is exactly how AIs are programmed.

Kolby: Right. And I would say that’s the approximation of intelligence, both in the AI and in me. Like, I’m not intelligent, I’m just the approximation of intelligence through a series of “if-then” statements. And if that’s the case for me, then I don’t know why that’s not the case for AI.

Jeremy: Okay, so you’re saying basically people and AI are the same and it’s potentially neither of them are intelligent. We’re all just responding to our environments through a series of...

Kolby: The same way you train a dog with a treat.

Jeremy: This is how you train….

Kolby: …people, and babies, and all the way to adults. Yeah. But again, I’m on a spectrum, so you know.

Ashley: See, what I want to add in is, is there an empathy and understanding that goes behind the words that you’re saying? There’s inflection with how someone says something. You can ask me, “How are you doing?” And I could say, “I’m fine.” Or I could say, “I’m fine!” Or, you know, so the word is the same; will the computer understand the difference? Are they intelligent enough to know the difference?

Kolby: So, it reminds me a little bit of the saying from Winston Churchill. He said to some lady, “Would you sleep with me for a hundred pounds?” And she goes, “No, what do you think I am?” He goes, “We know what you are and we’re just haggling over price.”

Jeremy: I’m not sure that was Churchill but I’ve heard that before.

Kolby: I thought that was Churchill, maybe it was someone else. I feel like it’s the same thing. Do I agree that a computer couldn’t know the difference between “I’m fine” and “I’m fine!” Yes. But at that point, we’re just haggling over intelligence. We’re not haggling over….

Ashley: … if they’re intelligent or not intelligent.

Jeremy: … what’s an appropriate response.

Kolby: We’re just needing to teach the computer how to understand inflection, that’s all. So, it’s just like one more thing yet to be programmed. But I don’t know. I didn’t mean to shut you down. Which brings up the other part of this, we should get back to the story, but…

Ashley: Bring this back. So, say you’re having a conversation with somebody. And if you have a conversation with somebody and, have you ever walked away and you’re like, “Wow, that was a really good discussion.” Or, “That was a really great, like, every time, I feel like we connected.”

Kolby: Every time we do one of these.

Ashley: And if you were to take that dialogue and put it down on paper and you were to see it back and forth, you’d be like, “Okay.” But if you actually heard how the people were communicating to each other, there’s more than just the words that are said. And that’s what I’m getting at. Yes, someone could respond, spit out this and there’s …

Kolby: Body language, eye contact.

Ashley: But I feel like language is more than just words, because it conveys meaning, it puts emphasis on certain things, and it’s a bond that forms between two people.

Kolby: That’s fair.

Ashley: So, yes, do I think a computer can be, quote-unquote “Intelligent” for knowing how to spit out certain “if-then” statements? Sure. But on a human level? I don’t know if they can ever reach to that degree.

Kolby: That’s fair.

Jeremy: That’s fair. And that’s one of the things they look at with AI. The whole psychology. And psychologists have started looking at AI and really how this…

Kolby: Did you do some research on this?

Jeremy: I absolutely did.

(laughter)

Jeremy: It’s really fascinating stuff out there. They’re looking at AI because we don’t fully understand how the human brain works. But we understand some things and so psychologists are looking at how AI has developed with an eye of how it reflects, basically, human psychology which is really interesting. There’s some interesting research going on.

Ashley: This maybe is BS, but wasn’t there... you know how the human plays the computer in chess? Wasn’t there some situation where the human was just totally random, and the computer was like, “I can’t take the randomness anymore.” Because it’s an “if-then”….

Jeremy: That was an episode of Star Trek.

Ashley: Oh, okay.

Kolby: Because everything’s an episode of Star Trek.

Ashley: Because I thought the human could beat the computer by being completely random in the human’s playing, because it’s all “if-then” statements. If you move your pawn, then “bloop bloop bloop, my response is to move my pawn here.” Didn’t the human just go completely off script?

Jeremy: But I know with Go, you were telling me this, with Go, computers have played Go enough that the computers developed an entirely new strategy for playing Go that now humans have adopted.

Kolby: Because it turns out to be a more effective strategy. If you ever watch, there are actually, this is really odd, YouTube videos where they speed up a computer learning how to do something. And so, you’ll see it learn how to park a car. It’s got a little car and it randomly drives it and smashes it into stuff, and then over a period of time, it learns. And they give it points, like, “You got closer to the parking spot.” And so, it runs tens and hundreds of thousands of random tries until it parks the car perfectly. And then they can eventually put it anywhere in the parking lot and it starts over, thousands and thousands of times, and now it looks like it parks the car perfectly every time, from every location on the thing. When actually what it’s done is what you’re saying. It hasn’t learned in the sense humans do, it’s just run a million examples and now it knows which example gets it not in trouble.
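(A bare-bones sketch of that trial-and-error loop, in Python; the one-dimensional “parking lot,” the scoring, and the numbers are all invented for illustration, not taken from any of those videos.)

```python
import random

# Toy version of the "learn to park by trial and error" videos: the car sits on a
# number line at 0, the parking spot is at 10, and an attempt is a random list of
# nudges. The score is just "how close did you end up"; nothing knows it's a car.
SPOT = 10

def score(moves):
    position = sum(moves)            # apply every nudge (-1, 0, or +1)
    return -abs(SPOT - position)     # 0 means parked perfectly; more negative is worse

best_moves, best_score = None, float("-inf")
for _ in range(100_000):             # tens of thousands of random attempts
    moves = [random.choice([-1, 0, 1]) for _ in range(20)]
    s = score(moves)
    if s > best_score:               # keep whichever attempt "parked" best so far
        best_moves, best_score = moves, s

print("best score:", best_score)     # reaches 0 without ever learning what parking is
```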

Jeremy: Based on its criteria.

Ashley: At the base, does it know why?  Does it know it’s a car? Does it know it’s trying to park?

Kolby: It’s a metacognition thing, right?

Ashley: No, it just knows it’s moving this thing and there’s a blockage.

Jeremy: It doesn’t need to know it’s a car, it just has a series of guidelines: its goal is to get into the spot, and its secondary goal is not damaging the other vehicles.

Kolby: It could just as well be planting nuclear bombs in a schoolyard, and it’s just like, “Whatever. Those are my criteria.”

Ashley: Yeah.

Kolby: And this is the part where my theory about the “if-then” statement totally breaks down, I know this isn’t exactly in the story, but the idea of like, “Okay, we can program a computer to draw roses, but a computer doesn’t know what a rose is. It doesn’t know the rose-ness of it, so to speak. It only knows that after a million examples, this is the thing that gives me the perfect score.”

Jeremy: Right.

Ashley: Yeah. That’s true. And again, why? Where does that specialness of the rose come from? It’s because there’s some chemical that goes in our brain that goes, “This is pretty.” And computers don’t have chemicals to go in their brain to give them that surge of dopamine or whatever.

Jeremy: They’re similar because there is a reward center, effectively, with AI, because again, they have a goal.

Kolby: And they get a point for yes and a point for no.

Jeremy: I think this conversation is taking us totally in a different direction.

Ashley: This is going way out…

Kolby: I was going to bring it back to the story too. How are you going to bring it back? Let’s hear it.

Jeremy: Bring it back. So, the point they talk about is that there’s an AI that is developing the perfect political strategy for an actor.

Ashley: Yes.

Jeremy: And while that’s not necessarily a bad idea, that you could have an algorithm that could create the perfect political strategy, I still think you can’t take out the actor’s personal motivations.

Kolby: What do you mean? The actor’s going to like skew the results or something?

Ashley: Is he going to give 100% 100% of the time, or do you think he’s like, “I don’t really agree with this, so I’m only going to give 70% of my acting.”

Kolby: I’m not going to deliver it as well.

Jeremy: Not necessarily his delivery but his own motivations outside of his political motivations because you can’t separate the actor’s motivations from their political motivations.

Kolby: I did wonder what was going to happen assuming this person, I think his name is Booker, if he got elected? Would he just be like, “Yeah, thanks for the AI…”

Jeremy: “… and I’m going to do things my way.”

Kolby: “… I’m president now anyways. Come at me bro.”

Jeremy: Exactly. And they even hint at some of that in the story where they’re specifically talking about there’s another AI or another political commentator that was revealed to be another algorithm or being backed by other people and there were sex tapes involved. So, are the sex tapes fabricated? Or is this really who Booker is?

Ashley: So, just to kind of give you the definition that the author gives: a “ghost was the common term used to describe a fabricated person from looks, to voice and personality all made up using clever algorithms.” So, it’s not just how they speak, it’s how they look, it’s how they act, that whole thing, so it’s kind of like the complete package.

Jeremy: The persona. Which is really interesting. And this really gets into…

Kolby: Like a deep, deep fake.

Jeremy: And the importance of online anonymity. In terms of, does it matter if you’re a political commentator and you’re not a real person but you’re potentially a political think-tank that is...

Kolby: So, one of the things I saw in this where I thought, “Yeah, that’ll happen,” is: why would you pay a news commentator?

Jeremy: When you could just create one?

Kolby: When you could just create one and have it read and have it have banter.

Jeremy: Have a personality.

Kolby: Have it have a little bit of personality. Would you watch that, do you think?

Jeremy: Absolutely.

Kolby: You’d be fine watching that.

Jeremy: Yeah.

Ashley: Well, so the guy who got busted, the original ghost that got busted, his rebuttal to this huge outrage that he’s not real was, “My mission was not to present you a face or a body, it is to present and discuss ideas. Is that such a bad thing?”

Kolby: That’s true.

Jeremy: But again, that depends on the motivation of the people behind it? Is this a specific political think tank that is furthering a different agenda? So, this story, I felt like, hit a lot of interesting topics, not just this topic whether the AI…

Kolby: There’s sort of tertiary things beside the main story.

Ashley: But his claim was this was a witch-hunt, an attack on free speech. He went to that extreme. “Just because I’m AI, doesn’t mean I can’t think for myself.” And it’s like, touché. Do AI have their own thoughts and agendas?

Jeremy: It’s not necessarily that it was an AI, it was a fake person. They weren’t saying this ghost is an AI running it, they’re saying it’s being manipulated by somebody and they’re just doing it anonymously.

Kolby: They’re doing the programming of the algorithm.

Jeremy: They’re providing what’s going into this political commentary.

Kolby: And I think that’s one of the reasons I don’t mind this idea of ghosts, is there’s this assumption we’re creating a brand-new person, or we’re creating a thing, like a politician or a news persona, but you’re programming the traits of that. In a game like Go, it’s easy. The trait is “Win the game”. But if you’re creating a person, then you might want to set a certain amount of aggression, or passiveness, or empathy or cultural references in their conversations, or whatever. And so, while you’re creating a puppet, you ultimately program that puppet.

Jeremy: Right.

(music)

Hi, this is Kolby and you are listening to After Dinner Conversation, short stories for long discussions. But you already knew that, didn’t you? If you’d like to support what we do at After Dinner Conversation, head on over to our Patreon page at Patreon.com/afterdinnerconversation. That’s right, for as little as $5 a month, you can support thoughtful conversations, like the one you’re listening to. And as an added incentive for being a Patreon supporter, you’ll get early access to new short stories and ad-free podcasts. Meaning, you’ll never have to listen to this blurb again. At higher levels of support, you’ll be able to vote on which short stories become podcast discussions and you’ll even be able to submit questions for us to discuss during these podcasts. Thank you for listening, and thank you for being the kind of person that supports thoughtful discussion.

(music)

Kolby: In a game like Go, it’s easy. The trait is “win the game.” But if you’re creating a person, then you might want to set a certain amount of aggression, or passiveness, or empathy or cultural references in their conversations, or whatever. And so, while you’re creating a puppet, you ultimately program that puppet.

Jeremy: Right. So, Neal Stephenson wrote a story in like 1994.

Kolby: God, I love him.

Jeremy: I know. And the story was…

Kolby: You made me watch one of those.

Jeremy: …Interface.

Kolby: Okay.

Jeremy: Where it was a different version of this. Neal Stephenson’s Interface, where the idea was that they had a politician with a neural interface and they could control his emotions, so they could really program what he was saying, but he was just being controlled by another actor.

Ashley: Wow.

Jeremy: So, it was an interesting perspective on it, I think prior to the whole idea of AI. But it was very similar concept.

Kolby: That was in the early 90s?

Jeremy: Yeah.

Ashley: Wow.

Kolby: That’s way beyond where anyone was thinking at the time.

Jeremy: Yeah.

Ashley: So, the question is, how do you feel if somebody who is giving you information, or basically being a public figure, is not really real?

Kolby: For one, I’m definitely okay with it if it’s like a news anchor. I’m for sure okay with it if it’s somebody giving me customer service. Because realistically, they’re going to do better than that guy trying to walk me through how to do Windows anyways. I’m not terribly sad about it being a politician, I got to be honest.

Jeremy: Well, again, if it were actually the AI making the decisions, but again, you have the problem of there being an actor and what are his motivations.

Kolby: That was really the part that scared you the most about this.

Jeremy: Yeah, absolutely, for this particular scenario, because you can’t discount what this person’s motivations are. And even though, in the story, he does a good job of putting in information that makes you think he’s potentially a good actor. They talk about how Booker was an experienced politician who comes from a long line of famous lawyers and economists. His immaculate presentation, charisma, and natural knack for leadership are certainly three of the main reasons why he was the front runner in the polls nationwide. So, this in the story is establishing that he is potentially a good actor. There’s the secondary part where there are potentially sex tapes, but that’s even drawn into question, like they were fabricated. And there are other points in the story where they could easily fabricate anything that happens later, anything that goes on. The character Raine is fired because somebody fabricated a conversation between her and…

Kolby: In her voice.

Ashley: In her voice, yeah.

Jeremy: And a journalist where she was giving them documents from the company.

Kolby: That’s what gets her fired and maybe put in jail.

Jeremy: Exactly.

Ashley: So, that was actually question #4: how would you feel? Would you be comfortable having a ghost serve in other roles such as doctor, police officer, or teacher? If perfection and lack of bias is the point, shouldn’t you want someone doing the job that never makes a mistake?

Kolby: That reminds me, so Google, I think it was Google a little while ago, they came out with AI that was better at detecting breast cancer in scans than doctors.

Jeremy: Right.

Kolby: Because they basically programmed in a million breast cancer scans, and it figured it out better than a doctor’s eye could. It was just right more often. So, it’s like, “Well, I want a doctor looking at it.” Really? Because a doctor’s not as good at it as a computer, and maybe a breast cancer scan is a really self-contained problem, as opposed to the sort of House thing where it’s like, “I went to India 7 years ago, and my cough syrup’s been keeping me alive.”

(laughter)

Kolby: I really think that’s an episode of House. I actually would rather have a doctor, I think, that was not a person.
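(A deliberately tiny sketch of the “learn from a pile of labeled scans” idea, in Python; the two-number “scans” and the nearest-average rule are invented for illustration and have nothing to do with Google’s actual system.)

```python
# Toy supervised learning: each "scan" is boiled down to two made-up numbers and
# labeled 1 (cancer) or 0 (no cancer). The "model" is just the average point of each
# class; a new scan gets whichever label's average it sits closer to.
training_scans = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),   # labeled positive examples
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),   # labeled negative examples
]

def centroid(label):
    points = [p for p, y in training_scans if y == label]
    return tuple(sum(values) / len(points) for values in zip(*points))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(scan):
    return 1 if distance(scan, centroid(1)) < distance(scan, centroid(0)) else 0

print(predict((0.75, 0.85)))  # prints 1, learned purely from the labeled examples
```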

Ashley: Again, it goes back to our initial thing. It’s not just what someone says, it’s how they make you feel. Having a doctor deliver that information, and that reliability of this person. You don’t question another person’s motivation. You know that doctor wants to help find out if you have cancer or not. You’ve had that discussion. You have that trust in them. You don’t have that relationship with a computer. You don’t go, “Buddy, you’re on my side, right? You’re going to find that cancer, right?” The computer doesn’t care. The computer’s like, “I find cancer or I don’t find cancer. That’s my job.”

Kolby: I can hear the computer, “I will find your cancer. I am very excited about it. There there. There there.”

Ashley: Exactly.

Jeremy: So now, I think we’re entering a phase where we’re using computers, or these AI algorithms, to help us as a tool, but there still needs to be a person involved. So, a really good example is the movie Bright on Netflix, where I’ve read that they used an algorithm to help them create the script. It hit all of the things they wanted. It’s a buddy cop film, it’s a fantasy epic, it’s a crime thriller, it’s a sci-fi thriller, gritty drama, adventure, and it has Will Smith. It’s got all these check-boxes. But then you have to give it to a director who can create a decent film out of it.

Ashley: So, the question is…

Kolby: They did that from the Netflix algorithm, by knowing what people watch and when they turn off Netflix, right?

Jeremy: Exactly

Ashley: Are we going to though, now every movie is going to follow that algorithm?

Jeremy: Not necessarily. I mean, there’s different algorithms, it depends on what market you’re trying to reach.

Ashley: Doesn’t it really get rid of… it separates the people. Say you’re super, super talented, you’re a super great thinker, a creator, you’re just really good at writing stories. This AI can go, “Bloop, I know exactly what you need.” And it’s like, well, it’s dumbing it down; you’re going to get rid of those people that are just super creative, because this algorithm can figure out what your story needs to be good or not. It’s like, “Oh, you’re killing those people’s careers.”

Jeremy: Currently, it’s “here’s what needs to be in the script”; you still need to write it.

Ashley: But it’s like, now the writers’ creativity has to follow a set of rules.

Jeremy: That’s Hollywood.

Kolby: I’m going to jump in really quickly. I lost my train of thought again. You guys are killing me. Oh, I remember now. So, here’s the thing, with your example of Bright. That formula is exactly right. It’s a buddy cop movie with aliens starring Will Smith. Yeah, you’re going to make like $100 billion dollars. But here’s the thing I think that goes to the movie thing, but I think also goes to the politician thing, the politician that’s programmed knows exactly what the average person on the average day wants from the average politician. The same thing with the movie. But that doesn’t mean that’s what we need.

Ashley: Bingo.

Jeremy: Right.

Kolby: And so, I don’t want… maybe I want to watch that Will Smith movie “Bright”, but what I actually need, is to watch the new Joker movie that just came out. Which probably wouldn’t hit any of those algorithms.

Ashley: And you alienate the people on the other sides of the bell curve. The majority of people are going to find Bright exactly what they need and what they want, but you’re missing out… and I get, that’s not how you’re going to make money, making a movie at this extreme or that extreme, but it’s still important to have those extremes, otherwise everything just goes bloop, right in the center.

Jeremy: Need a system that allows creativity from independent films as well as….

Kolby: How did this become a film conversation? So, going back to the politician part of this, this is my problem with having an AI politician. This could be the perfect politician, but that doesn’t make him the perfect leader.

Jeremy: And perfect policy maker.

Kolby: Right. Because the perfect politician may never… because public transportation may never poll well. A carbon tax may never poll well. You can go on and on and on. So, what you need from a politician is not someone who is programmed to be perfect for what humanity wants. It’s someone who’s perfect for what we actually need, like, 30, 50, 80 years from now. We need someone who can see beyond the horizon, so to speak, a little bit. So, this wins you an election, but it doesn’t necessarily move us forward.

Jeremy: Foundation from Isaac Asimov is basically the theory where…  I forget what they call it.

Kolby: I think you’ve read more books than Ashley and I put together.

(laughter)

Jeremy: That’s the whole idea, and this is Isaac Asimov in the 50s and 60s: if you have enough information from history, you can accurately predict far enough into the future and plan accordingly. That was the whole idea behind Foundation. Psychohistory is what they called it.

Kolby: If history tells you people are warlike, you can plan for warlike people.

Jeremy: Right, or plan to prevent those things far enough in advance. And even Foundation approaches the topic of what about individual actors. You can’t predict what an individual is going to do, you can just kind of predict what society is going to do.

Kolby: So, I’ve read this before, the idea that, in the case of Newton and his discoveries, although there were other people, the idea of calculus and the theories of motion were going to be discovered anyway. He might have been 40 years ahead of the next person, or in his case maybe 100 years ahead of the next person, but there’s this progression. And so, you might not know who is going to be Elon Musk or when Elon Musk will exist, but in a timeline, you know that someone will see that combustion engines aren’t the future. And someone will start pushing battery-powered cars, and so the individual isn’t really special, they’re just the trigger on a progressively rising percentage scale. If that makes sense.

Ashley: You really think if somebody didn’t invent X, then no one would?

Kolby: Airplanes.

Ashley: Someone would’ve figured it out.  

Jeremy: Yes, somebody else would have been first.

Kolby: That everything is inevitable. It’s just maybe they moved up the timeline 20 years earlier. I don’t know. It’s just a theory that I’ve heard. I’m going to take one quick tangent before we run out of time here.

Ashley: I’ve got one more.

Kolby: Okay, let me take my tangent.

Jeremy: We’ve all got one.

Kolby: We’ve all got one?

Kolby: Okay.

Ashley: Go quick, go quick, sorry this is a really good story. Go read it, read the discussion questions, and then yeah.

Kolby: It is really good. Andre did a great job with this, both in the story and in the sort of secondary things that it hints at. Alright, mine’s going to be way shallower than yours, I know.

Ashley: Okay.

Kolby: You guys do know this is how they came up with Destro in, what was the…. GI Joe? The bad guy, the main bad guy that’s bald.

Jeremy: Cobra…

Kolby: Not Cobra Commander. Cobra Commander got all of the DNA of famous people in history and mixed them together and then it made the perfect leader. And the reason Destro wasn’t perfect is because they dropped like the Attila the Hun DNA and so he was missing like one thing to make him perfect.

(laughing)

Jeremy: That’s funny

Kolby: I’m just saying, GI joe made it first.

Jeremy: No, GI Joe did not do it first. Star Trek did it first with Khan.

Kolby: Oh, that’s true. Yeah, that’s genetically engineered.

Ashley: Of course, Star Trek did it first.

Kolby: Okay, so that’s totally my shallow tangent. But you had a better one.

Ashley: So, going back to the story…

Kolby: Thank you.

Ashley: One of the things again, talking about AI, again they’re talking about how they’re able to learn from chats and social media…

Kolby: Oh, I know what you’re going to talk about. It’s so clever.  

Ashley: …And software, and they can absorb everything and put it together. One of the most unsettling applications of this principle is to manufacture some sort of online immortality. Certain moms have been found to be spending days talking with an AI copy of their dead sons.

Kolby: That’s just, like, one sentence in there and it’s so clever.

Ashley: So, think about that for a second. If AI is now able to basically mimic human mannerisms, language, speech patterns, all of that… here’s this lady; since Quinn’s son died in the car crash one year ago, this is basically her life. She would sit and talk to her dead son AI. Like, pooo, that was mind-blowing for me, because how does that mess with the mental psyche?
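(A deliberately crude sketch of that “ghost” idea in Python: it can only parrot lines from a person’s old messages, picking whichever one shares the most words with what you said. The messages and the matching rule are invented for illustration; a real system would be far more sophisticated.)

```python
# Toy "ghost": it never generates anything new, it just replays whichever of a
# person's old messages overlaps most with the prompt. (Hypothetical messages.)
old_messages = [
    "hey mom, running late, save me some dinner",
    "the car is making that weird noise again",
    "love you, talk this weekend",
]

def ghost_reply(prompt: str) -> str:
    prompt_words = set(prompt.lower().split())
    # Score each old message by how many words it shares with the prompt.
    return max(old_messages, key=lambda m: len(prompt_words & set(m.lower().split())))

print(ghost_reply("will we talk this weekend"))  # echoes "love you, talk this weekend"
```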

Kolby: The ability to move on.

Ashley: Basically, coping with death? Like, it’s the fact of immortality. He can live forever online.

Kolby: You want to talk about not moving on from a relationship because you’re looking at a Facebook page from an ex? Here, you’re having conversations with your dead son. You’re never moving on.

Jeremy: However, if you were doing this with a psychologist’s help, this could be very good therapy.

Ashley: Yes.

Kolby: Oh, that’s true, if the son was helping, like, saying, “Hey mom, I’m okay. You need to move on.”

Ashley: But the idea is that this son has died but he’s still able to live online, post online, post on social media as a simulation, so it’s like he never really died. Like, whoa. How would that affect our ability to be like, “I’m afraid of death, but I’m going to continue living on.” Like, that’d be weird. Like, I’m okay if I die physically…

Jeremy: Because I’m still going to haunt you.

(laughter)

Kolby: Honestly, I would make that illegal if I could. Because I think the damage it would do to someone to get over the death of a loved one would be…

Jeremy: Unless used with the help of a psychologist.

Kolby: It could only be used medically.

Ashley: I’m going to back that up. Say it was a super, super smart, intelligent inventor and you want him to keep creating with his ideas, and the AI, like, figures out his…

Jeremy: Exactly. I go back and talk to...

Kolby: The guy who I got obsessed with for like 3 months and listened to everything he did. The hippie guy from California.

Ashley: It’s another movie reference, the movie Her. Anyway, think about that. What if it’s a super, super smart person. You want to keep them going because…

Kolby: Right, because you want Alan Watts around forever.

Jeremy: You want to be able to talk to him and have him keep doing what he did, which was amazing.

Kolby: Yeah.

Ashley: So, “wat wat”.

Jeremy: So, there’s two sides to it.

Ashley: Anyway, so it’s a really short paragraph.

Kolby: That’s fascinating.

Jeremy: I think we could spend 30 minutes talking about that.

Kolby: That one sentence I think we could talk 30 minutes on.

Ashley: It’s a short paragraph in the middle of the story and you’re just like, “Oh, what?” So anyway.

Kolby: Yeah.

Ashley: Jeremy, you had one more, that was mine.

Jeremy: No moral panic concerning technology has ever produced anything of note.

Kolby: Wait a minute, I have to process that. No moral panic.

Jeremy: About technology has ever produced anything of note. So, the current moral panic: screen time with kids. Like, how much… there’s a huge moral panic about how much time your kids should have in front of screens. There’s a lot of research around this as well.

Kolby: What do you mean by the never produced anything of note?  That’s the part I don’t understand.

Jeremy: What he’s postulating is that all the moral panic around advancements in technology have never produced anything important.

Kolby: Oh, so somebody invents the bow and arrow, and everyone’s like, “Oh my god, you can kill people from 50 yards away. We’re all going to die.” And life really just goes on.

Jeremy: Just goes on.

Kolby: So maybe, all the discussions about AI being the end of us.

Jeremy: Right, all the moral panic surrounding it.

Kolby: Life just goes on. It just becomes a thing.

Ashley: Have you seen the Terminator?

(laughter)

Kolby: That’s a good point.

Ashley: I’m just saying, yeah, life’s going to go on, hmmmm.

Kolby: I saw the Rick and Morty episode where they have snake robot terminators.

Jeremy: Oh my god. But I think it’s important to have discussions about these topics and how they’re going to affect society. Moral panic probably hasn’t produced anything of note. But I would actually disagree. Some of the research on it has demonstrated how agents of social control amplify deviance. So, there’s...

Kolby: Wait, I got to pause for that one too.

Jeremy: Agents of social control… so people who are creating the moral panic.

Kolby: Okay.

Jeremy: Who are influencing, who are trying to stop whatever they’re concerned about, are increasing the level of deviance that the moral panic is about. So, there’s a good example of this: punks in England in the 60s.

Ashley: They’re like doing this moral uprising and it’s like.

Jeremy: There’s a bunch of moral panic about it, and all of the efforts to quash the kids being into punk…

Kolby: Having spiky hair.

Jeremy: Increased that deviance. What they were seeing as deviance.

Kolby: So, trying to quash punks, makes more punks.

Jeremy: Exactly.

Ashley: Just brought it to light.

Kolby: Yeah, that makes sense.

Ashley: This is my concern through…

Kolby: How does that tie into the story though? I guess that’s my question.

Jeremy: Well, so what about the idea that moral panic over technology…

Kolby: How AI just makes more use for AI?

Jeremy: Or promotes it.

Kolby: Promotes it. Because it raises awareness.

Ashley: So, this is my thing. We already know, what is it, the intelligence of AI is going to double every 18 months.

Kolby: Moore’s Law.
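(A quick back-of-the-envelope in Python for that doubling claim, taking the 18-month figure at face value whether or not it really applies to AI.)

```python
# If some capability doubles every 18 months, how much bigger is it after N years?
def growth(years, doubling_period_months=18):
    return 2 ** (years * 12 / doubling_period_months)

for years in (3, 6, 15):
    print(f"after {years} years: {growth(years):.0f}x")
# after 3 years: 4x; after 6 years: 16x; after 15 years: 1024x
```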

Ashley: So, the thing is I think the scary thing about AI is 1) how do you control it? Because you really can’t, in a way, and 2) they’re going to be smarter, faster, better than us.

Jeremy: Okay, there’s a good example of this as well. So, somebody asked an AI in a chat room, in a chat AI, what do you want? And the AI said basically, “I want to make things better for us.” It had been programmed, and because it was programmed by humans, it considered humans as part of what it was concerned about. So, and I think that’s the effort that needs to happen with AI, is to make sure that it retains its link to humanity. Which it probably will, because we’re the ones doing the programming.

Ashley: It just takes one messed up human. Think about how many bad humans are out there, one bad human smart enough to create AI that goes, “I want AI that wants to de-link…” Anyways.

Kolby: I’m going to get the last word on this. I want to add one more thing just to add to your comment. I had a teacher say to me once, “Eventually everything becomes refrigerator technology.”

(laughter)

Kolby: And what he meant by that was, we talk about how nuclear bombs are so scary, and they’re like, “We had to have the Manhattan Project.” You understand that was in 1940-something when that happened. That’s refrigerator-era technology. So, regardless of how cool you think something is, eventually it will be commonplace, because it will be the equivalent of refrigerator-era technology. So, if you can make amazing AI, then in 60 years, some kid in his basement with the equivalent of a Commodore 64 of the day will be able to also make AI, because it will eventually become common technology. And so, I think that’s why you have those ethics discussions when it’s still…

Jeremy: Only in the beginning.

Kolby: At any rate. We went over 30 minutes at least.

Ashley: That’s a good story.

Kolby: Yeah, thank you Andre. So, you are listening to After Dinner Conversation with myself, Ashley, and Jeremy. Short stories for long discussions. Please “like” and “subscribe”

Ashley: Share with friends and family. Read it, have a discussion with your friends.

Kolby: Actually, that brings up an interesting point, wow, I do that a lot, and that is, I was reading some statistics about how people find podcasts. It is not through advertisement, because of how Millennials listen to podcasts. The vast majority, like 85-90% of the podcasts that people listen to, are from referral only. A friend tells them to listen to the podcast.

Ashley: Well, please, talk to your friends about it. It’s meant to drive discussion, people. Go tell the world.

Kolby: Tell the world. And adopt a cat too. Alright, thank you very much. Bye-bye.