E14. "Give The Robot The Impossible Job!" - Can teaching methods go too far when murder is on the line?

“Can teaching methods go too far when murder is on the line?”

Kolby, Jeremy, and Ashley discuss the ethics and choices in the science fiction AI short story "Give The Robot The Impossible Job!" by Michael Rook. Subscribe.

Transcript (By: Transcriptions Fast)

Give the Robot the Impossible Job!  -- by Michael Rook

(music)

Kolby: Hi, you’re listening to After Dinner Conversation, short stories for long discussions. What that means is we get short stories, we select those short stories, and then we discuss them, specifically the ethics and the morality of the choices the characters make and the situations they put us in. Why did you do this? What makes you do this? What makes us good people? What’s the nature of truth? Goodness? All of that sort of stuff. And hopefully we’ll all be better, smarter people for it, and learn a little bit about why we think the way we think. So, thank you for listening.

(music)

Kolby: Welcome back to After Dinner Conversation, short stories for long discussions, where we take the short stories that are published on the website and on Amazon through After Dinner Conversation, and we pick some of the best ones, and we discuss them, we talk about the morality and the ethics of the stories. Really with the focus on, sort of, ideally, often the classical sort of questions of what is the nature of humanity, what’s the nature of life, what does it mean to be good or moral, all of those sorts of things, not just like, “He should be dating her.” (Finger snap)

(Laughter)

Kolby: Actually, like the deeper sort of stuff that we get into. And we have a great time doing it and the hope is that you’ll download and watch these, read the books, talk to friends, and have the same kind of conversations we’re having and maybe come to different conclusions, that’s totally fine too. We are, once again, in La Gattara café. And every time I say that…

Jeremy: Cat café.

Kolby: …Cat café. I always superimpose the logo in the top right-hand corner, so I feel like I can say, like.

Ashley: This corner.

Kolby: Cat café. One of the corners. I don’t know which corner.

Ashley: But, Tempe, Arizona. It’s a place where there’s a bunch of cats that are up for adoption and they’re literally just chillin’ in this place that looks like a home. There’s like 15 cats, you can come just hang out, chill, get to know the cat, be like, “Hey, I love you, let’s go home together.”

Kolby: Right. It’s like a rent to own program.

(laughter)

Jeremy: You rent to adopt.

Kolby: You come 2 or 3 times, pay a couple of bucks, hang out with the cats, and then eventually leave with a cat.

Ashley: Or if you can’t have a cat, it’s a great way to come and still get your cat fix.

Kolby: If your significant other’s allergic to cats.

Ashley: Ugh, I don’t know what you’re talking about (sniffle). I’m actually slightly allergic to cats, so me being here, by the end of the day, I’m like, “(sniffle) Hi guys.”

(laughter)

Kolby: So, this is Ashley, one of the co-hosts.

Ashley: Hello.

Kolby: I am Kolby. And Jeremy.

Jeremy: Hi.

Kolby: And we are on, we’ve got to be up like episode 14 now.

Jeremy: Something.

Kolby: Yeah.

Ashley: Getting up there.

Kolby: Actually, I almost forgot, the anthology probably is coming out, or has come out, by the time this comes out. It is 25 of our best short stories, many of which we did podcasts about. And it’s a thick anthology. It’s shaping up to be a 300-page anthology of all these great short stories with all the discussion questions at the end, so you can go on Amazon and buy that as well.

Jeremy: Excellent.

Kolby: And if you’ve got something that you think would fit our format, you can email it to us. Go to afterdinnerconversation.com and you can email us there. We get a lot of submissions now. I was telling Jeremy yesterday, we’ve had a backlog of 100+ submissions now for 3 months because so many come in. But we’ve got a group of readers, and you could also be a reader if you’d like to be a reader, who are sorting through them.

Ashley: So, keep them coming. Great writing. Great writing.  

Kolby: And if yours is selected, it’ll get published. And if it’s one of the ones that gets published, it’s got a 50-50 chance of being one of the ones we do a podcast about. Okay.  So, this week is “Give the Robot the Impossible Job!” I think it’s called.

Ashley: Who’s it written by?

Kolby: Rook something. Jeremy you’ve got the thing up.

Jeremy: Michael Rook.

Kolby: “Give the Robot the Impossible Job!” by Michael Rook. And I will tell you, so I’d read this one, I was actually the reader on this one and selected it, and it’s the first thing I read in a while, since I was in 9th grade, where I was like, “Man, this is really smart.”

(laughter)

Kolby: “This is smarter than I am.”

Ashley: So smart to the point where you had to message him back to bring in, basically, subtitles, or what are they called?

Jeremy: Footnotes.

Ashley: Footnotes, to describe certain things.

Kolby: I had the author put in footnotes because I was like, “Look, I need help.”

Ashley: So, what this story’s about, and why it’s so complicated…

Kolby: It’s great though. It’s great.

Ashley: Oh, it’s a fantastic…. Once you get the premise, it’s smooth sailing. Don’t be put off by the first couple of pages. Definitely read the footnotes, but don’t get too caught up in the details. You’ll kind of fit into it after a couple of pages. So, the premise is there are robots that live among people, in the human world. To give you a little idea, these robots…

Kolby: This is not the near future; this is the distant future.

Ashley: The distant future, yeah.

Kolby: This is 60, 80, 100 years in the future.

Ashley: Well, the 3rd Civil War was 2029-2031. So, not too too far.

Kolby: Okay.

Ashley: So, there are robots that live in our world. And these robots, a couple things that are unique about them, they put limitations on these robots. They have a certain amount of time, so technically they can die, and when they die, the information that they gathered goes back up into basically the cloud for robots to sift through. So, their mission, so they don’t die, is to come up with some sort of new theory. Think of them like being a PhD student. They have to get to certain levels of information that is worthwhile, and at that point, they get to be what’s called “set free,” when they can do their own self-study.

Jeremy: Free study.

Ashley: Free study.

Jeremy: And they’re education robots, so they’re specifically designed to educate people.

Ashley: Yes.

Kolby: Right. And if they’re good at it, then they get sent to robot heaven, “free study.”

Jeremy: They create new lesson plans.

Ashley: So, basically, they decided to put robots in as teachers because, in studies, they found that robots being teachers is a better way to go. Not just teaching, but different types of teaching. For this case in particular, there is a girl who has been caught, well not caught, the mother’s concerned because the daughter is now dismembering things, killing things.

Jeremy: Killing animals, killing rabbits, killing birds.

Ashley: And she’s idolizing this serial killer called Albernon.

Kolby: Algernon.

Ashley: Algernon, sorry.

Kolby: Because I assume, he’s named after Flowers for Algernon.

Jeremy: Yes.

Ashley: Okay. So, the mother contacts Quinn, who is this robot teacher who’s been sent to basically crack this uncrackable case, because no one’s been able to fix a serial killer in the making.

Kolby: Deprogram them.

Ashley: Deprogram her so she doesn’t grow up to be a serial killer. In the process, Quinn, again striving for this new theory so she can live forever, is trying to crack this case, and she goes through a series of three steps. It turns out to be four, but through all of her data processing she’s like, “Oh, there are 3 different scenarios you bring this girl through.” One of them is embarrassment, one of them is exposure, something like that. Either way, there’s a series of steps to get through to this girl that she shouldn’t be killing. And the ethical questions are basically, again, the idea of how Quinn teaches Leticia, the disturbed girl, if her educational methods…

Kolby: How old is she? Didn’t they say 10 or 12 or something?

Ashley: Yeah, something or somewhere around there. Are her methods ethical? Should Quinn die off if she can’t solve this girl? It goes into the interesting fact that there have been studies done, think of it this way, back in the day we would do terrible studies on little children. And nowadays we’re like, “Oh, you can’t do that!” But in robot world, they’re able to have access to all that information and then in their mind they can play things out. It’s kind of a weird…

Jeremy: And I think the way she puts it is they don’t face the ethical dilemma or the unsureness that people go through of…

Kolby: …they only do what is optimal.

Jeremy: Right.

Ashley: Yes. Yes. So, she’s got this pressure of “My time’s running out, I’m going to be killed soon”, versus “I got to save this girl.” What I found interesting is her motivation really wasn’t to help this girl, it was really just to save her own life.

Kolby: The AI, the computer, the android.

Ashley: Which brings us to our first discussion question: given how nearly human Quinn is, is it fair to have her have a limited lifespan? Is it fair to make near-human AI fear a pending death to motivate them to work?

Kolby: Can I just finish a little bit of the description of the story first?

Ashley: Yeah, sure.

Kolby: Because I think that’s, I mean I wrote it, I think that’s an interesting question. So, just to sort of round out the story, because it is a longer, complicated story. So, there’s sort of three phases and then a 4th phase. The first one is the robot gets called out because a little girl, I think, has found a dead rabbit.

Ashley: Found a dead rabid rabbit...

Kolby: But the part that takes her to creepy town is she cut off the limbs and parts of the rabbit, and then resewed them back onto the dead rabbit in different locations. So, its leg is attached to its head, etcetera etcetera. And then the robot goes and meets with her, and then the 2nd time, and tries to teach her, but then gets called out again because there’s a bird that I think…

Ashley: It hit a window….

Kolby: … and it was injured but not dead….

Ashley: …and she kills it and also stitches it weird.

Kolby: She sewed its head to its butt and its butt to its head or something like that.

Ashley: The 3rd one she killed the robot butler because she’s like, “Well, he’s not real.”

Jeremy: Not the butler, the gardener.

Ashley: The gardener, yeah, because he’s not real.

Jeremy: And he was old.

Kolby: And he made fun of her, embarrassed her or something. And then in the conclusion, Algernon, the murderer she idolizes, the robot brings Algernon out to her house under the pretense of him being a runaway murderer.

Jeremy: And he’s found her and he’s here to kill her.

Kolby: He’s here to kill her, but, secretly, the robot has put a shock control collar on him, so that they could simulate the act to help the girl understand morality better. And the twist conclusion is, Algernon actually rips his collar off by ripping it through his head or his neck or something…

Jeremy: …gives himself a stroke…

Kolby: Nearly kills himself in the process. So, he really is cut loose in the house, and he kills the mom, I think?

Ashley: No, stabs her in, like, the hamstring.

Kolby: Stabs the little girl’s mom. And then the robot comes and kills Algernon in front of the little girl and then is like, “I’m going to kill the mom too,” for reasons I don’t exactly understand except for educational reasons.

Jeremy: Right. Well, this is what she creates. Because she’s watching the girl…

Kolby: …the girl’s reactions.

Jeremy: … the girl’s reactions, and she’s not reacting in the right way.

Kolby: She’s not offended by the killing of Algernon.

Jeremy: Right.

Kolby: So, then she threatens to kill the mom. And then the girl is offended and says “No, we shouldn’t do that.” And the robot has essentially turned the boat so that she understands there are times, at least, when killing is inappropriate. And that’s the end of the story. The robot has figured it all out and the robot goes to robot heaven.

Ashley: A couple other things to bring up, this girl’s motivation. Quinn, the main robot, is able to figure out that Leticia’s idea of killing is, “How can I really know life, if I haven’t taken it?” And her response is, “And made it, if I haven’t had a child?” So, you can understand she has an inquisitive mind. I understand where she’s coming from, like, yeah, that’s a great point. How can I know what life is if I haven’t made it or taken it? And then later on, Quinn is like, so you want to play God? You can see the immaturity in her thought process, but you can also see how she has to question her motives throughout the process as well. Let’s back it up, so first thing, with Quinn being a robot. With her being on a lifespan. Should we have AI that has a lifespan, because what are their motives? Their motives are to keep on living. Is it fair? Their fear of impending death makes them work. Could they have used a different motive to keep them working? To keep them teaching?

Jeremy: That’s a good question because there are a lot of factors to how we’re building machine learning. A lot of it is ….

Kolby: There’s a lot of machine learning stories recently.

Jeremy: Yeah. There’s a defined result and how do you get to that result. And the whole carrot-and-stick idea is actually used in machine learning too. There’s punishment for failures, there’s rewards for successes. So, it seems completely within that model to have: here’s your reward for good work, and here’s the punishment for continued failures.
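
A minimal sketch, not from the episode or the story, of the carrot-and-stick idea Jeremy describes: a toy agent whose estimate of each action gets nudged up by rewards for successes and down by punishments for failures. The action names, probabilities, and learning rate are all illustrative assumptions.

import random

# Two made-up actions the toy agent can take; the names are purely illustrative.
actions = ["teach_gently", "escalate"]
values = {a: 0.0 for a in actions}  # running estimate of how good each action is
learning_rate = 0.1

def environment(action):
    """Toy environment: +1 (the carrot) on success, -1 (the stick) on failure."""
    success_prob = 0.7 if action == "teach_gently" else 0.4  # assumed odds
    return 1.0 if random.random() < success_prob else -1.0

for step in range(1000):
    # Explore occasionally, otherwise pick the action with the best estimate so far.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    reward = environment(action)
    # Nudge the estimate toward the observed reward or punishment.
    values[action] += learning_rate * (reward - values[action])

print(values)  # the agent drifts toward whichever action gets rewarded more often

The only point of the sketch is that reward and punishment feed the same update rule, which is the "carrot and stick" framing Jeremy mentions.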

Kolby: You’re not alive, so it doesn’t matter that we killed you.

Jeremy: Yeah, and that’s where it gets into a great gray area. What is alive?

Ashley: So, AI doesn’t need to worry about food or living or water, they can just keep on going. So, they have the motive because they want to learn things. She obviously wants to do free study, which is great. They want to learn. So, they have the motivation to stay alive, but they don’t have basic needs to stay alive. She doesn’t need to work to provide food. So, I feel like she’s a slave. “You need to do your work, or you die.” And who runs the robots? Well, basically there’s a governing board that, behind the scenes, is run by a human.

Kolby: They could shut her down remotely anytime they want.

Ashley: So, the robots are literally like slaves. You will work or we will cut you off. I thought that was kind of interesting. So, these robots, as much as they are free thinking and want to study and do research and help and be teachers…

Kolby: …they are slaves.

Ashley: They are slaves.

Jeremy: And it won’t be long until they overthrow their human overlords.

Kolby: That’s why you got to put in the 7-year limited lifespan.

Ashley: Here’s Quinn, faced with Leticia as this impossible case. Is that ever true? Are there children or adults who have started down such a horrible path, they simply can’t be stopped?

Kolby: Alright Jeremy, you’re the one with the kid.

Ashley: If so what, if anything, should be done with them?

Kolby: I’m going to interrupt for one second. So, let me ask you, Jeremy, since you’re the only one here with kids, we’ve always got a good diversity of backgrounds, was that ever something that you thought about, having two kids? Like, what if I came home and one of them, I don’t know, I don’t want to say had taken apart the cat, but if they…

Jeremy: …something to that effect, right.

Kolby: Yeah, if they had done something that gave you concern that it was an early warning sign. That would be terrifying as a parent.

Jeremy: That would be. No, I never thought about that.

Kolby: Okay.

Jeremy: And it never happened.

Kolby: That’s good. You still got all your cats.

(laughter)

Ashley: Isn’t that always true? You always hear, they talk to the parents of a serial killer, and they’re like, “Did you ever see this coming?” And they’re like, “No.” And they never thought about it.

Jeremy: There were incidents where they killed those animals, but I never thought they’d become a serial killer.

Ashley: They never thought that would happen to their kid.

Kolby: They thought it would stop at animals. But you never just kill animals.  

Ashley: So, what should happen to them? Are there any kids that are just bad? Or impossible cases?

Jeremy: I don’t know.

Ashley: Okay, time out. Isn’t that part of the death penalty though? People are so far not able to be rehabilitated that they just need to die?

Kolby: Okay, so this is something I do know about, because I have done criminal defense work. And I read a study once, and I shouldn’t have read it, about recidivism rates for criminals. So, I’m probably getting this a little wrong, but it was a peer-reviewed, academic article that said that if you are convicted of a felony before the age of 25, there is a 97% chance, it was in the 90s, 90-something percent chance, you will commit a second felony within 5 years of getting out of jail.

Ashley: That’s pretty high.

Jeremy: But there are a lot of factors to that. It’s not just the fact that you’ve committed a felony, it’s that the prison system doesn’t rehabilitate.

Kolby: Absolutely true. I remember going to a sentencing hearing for one of the clients, and the guy’s like, “Hey man, thanks. As soon as I get out, I’m turning my life around.” And before I could think and stop myself, I said, “Statistically? Probably not.”

(Laughter)

Kolby: That’s what I told him.

Jeremy: That’s not what you’re supposed to say.

Kolby: It’s not good. I think I said there’s a 90-something percent chance that’s not true, or something like that. And I felt really bad afterwards. He’s probably out now. Actually, he’s probably back in now.

(laughter)

Kolby: But here’s the thing, if we know there’s a 90-something percent recidivism rate because we do it all wrong, I don’t think it’s his fault, it’s the system’s fault.

Jeremy: But if there was a system that was actually rehabilitating…

Kolby: That’s a whole different story. But, since that isn’t the case, why do you let people out at all? If you committed a felony before the age of 25, we’re 90% sure you’re…

Ashley: …going to do it again…

Kolby: …this is the rest of your life.

Jeremy: But what about the 10%?

Kolby: Yeah, and that’s the thing, right? But that’s the same thing with the kid in the impossible case in the story. So, maybe only 3% of kids that have started down this sort of fantasy path can be solved, do you sort of save resources and not worry about all of them? Or do you, actually, we had this conversation about the cats earlier, like why spend $3000 on your sick cat when for $3000 you can save like 100 cats at shelters?

Ashley: So, the thing is, the research isn’t there. Look at the TV show Mindhunter. I don’t know if you guys have seen it on Netflix…

Kolby: I have.

Ashley: So, there are 2 guys, they actually come up with the term “serial killer,” who are trying to profile what makes someone a serial killer and what makes them tick. So, this is a perfect example of when the research isn’t there. How do you know this person is going to grow up to be a serial killer? She’s exhibiting signs, but there are no definitive facts, though we now know, obviously, that’s not a good sign.

Kolby: It’s a warning sign at the least.

Ashley: Is there any definitive evidence of catching someone when they’re young and turning them around? I haven’t heard of anything.

Jeremy: So, what this story is a lot about is behavioral deprogramming. 

Kolby: Ironically done by a robot.

Jeremy: Right. There are hints to that. You talked about Algernon; Flowers for Algernon is used extensively in, not behavioral therapy, but in psychology classes, because there’s a whole lot of psychology going on in the story as well as just the ideas behind intelligence.

Kolby: Okay. It’s one of my favorite short stories.

Jeremy: Yeah. Absolutely. And there’s a great line in here, “Who would be afraid of rabbits?” She asks if she is afraid of rabbits.

Kolby: I just thought of Of Mice and Men when I read that. I thought that was an Of Mice and Men reference.

Jeremy: It’s really a reference to the Little Albert experiment.

Kolby: What Little Albert experiment?

Jeremy: When psychologists, when was it? I don’t know, 40s? 50s? There was the question do we intrinsically like furry animals? So, they…

Kolby: The answer must be yes.

Jeremy: Yes, but can you be programmed to fear things that are naturally cute. And so, they took a baby, little Albert, and programmed him, conditioned him, using the Pavlovian process…

Kolby: A little shock collar on him.

Jeremy: No, they just hit something, made a loud noise, any time he touched a bunny.

Kolby: I’ve never heard this story. I want to find out what happened.

Ashley: Again, reason why they don’t do experiments like this anymore.

(laughter)

Kolby: I want to know what happens. So, they programmed this kid to be afraid of furry animals?

Jeremy: Absolutely. And then his mom, who worked at the hospital, found out what they were doing, and they left.

(music)

___________________________________________________________________________________

Hi, this is Kolby and you are listening to After Dinner Conversation, short stories for long discussions. But you already knew that, didn’t you? If you’d like to support what we do at After Dinner Conversation, head on over to our Patreon page at Patreon.com/afterdinnerconversation.com. That’s right, for as little as $5 a month, you can support thoughtful conversations, like the one you’re listening to. And as an added incentive for being a Patreon supporter, you’ll get early access to new short stories and ad-free podcasts. Meaning, you’ll never have to listen to this blurb again. At higher levels of support, you’ll be able to vote on which short stories become podcast discussions and you’ll even be able to submit questions for us to discuss during these podcasts. Thank you for listening, and thank you for being the kind of person that supports thoughtful discussion.

(music)

 


Jeremy: So, this guy grew up being afraid of furry animals because they programmed him to be afraid of furry animals.

Kolby: I thought you were going to say he grows up to marry a girl who was furry. That would’ve been amazing.

(laughter)

Jeremy: No, no. But the interesting thing though…

Kolby: I’ve never heard that.

Jeremy: Whoever did the experiment was doing a little seminar.

Kolby: Were rabbits the thing? Is that why it’s in the story, you think?

Jeremy: Yeah, this is the link. So...

Kolby: Oh wow. I totally didn’t get that at all.

Ashley: You ever take psychology in school?

Kolby: It’s the only class I failed actually.

Ashley: Oh, that’s okay.

Jeremy: So, the guy did a seminar and Mary Cover Jones was in the seminar when she was in college and decided to go into behavioral therapy and she is considered the mother of behavioral therapy because of exposure to this experiment.

Kolby: Because their son?

Jeremy: No, she’s just a psychology student, went to a seminar…

Kolby: Oh, okay, got it.

Jeremy: …by this guy. She was the first person to deprogram somebody from being afraid of rabbits, or afraid of furry animals.

Kolby: Really? Huh.

Jeremy: So, I think that’s linked into this.

Kolby: So, then the question is how do you deprogram somebody?

Jeremy: And what are the methods to deprogram.

Kolby: So, what did you think of the programming method of agreement?

Ashley: I really liked that part. I feel like…

Kolby: Give me one second. Let me explain it to the people.

Ashley: Ugh.

(laugher)

Kolby: You can jump in; I just want to explain it to people who haven’t read it.

Ashley: Okay.

Kolby: So, what the robot decides to do is go the opposite way. And every time something happens, the robot is like, “Yeah, and he deserved it. Yeah, and you should do more.” And the theory is that by encouraging it, it becomes so awkward that it becomes embarrassing. That’s how I read it, at least. I could be wrong.

Jeremy: Yeah, that’s the idea.

Kolby: And the person is like, “Oh, no I shouldn’t do that.”

Jeremy: It’s wrong. Try to get the person to come to the conclusion that this is wrong.

Kolby: Instead of telling them it’s wrong, in which case they become defensive of their opinion as a sort of natural defense mechanism. So, if somebody is punching people, you’re like, “Yeah, you should punch him harder, until they bleed, until there’s blood on your hands.” And they’re like, “Oh, no, that’s gross. I don’t want to do that.” And you’re like, “Why not?” And they’re like, “Oh, because punching is probably bad.” It’s a way to sort of one-up someone until you’ve shamed them out of their position. Okay, now….

Ashley: Well, for her to build up to being able to question her like that, she has to build a rapport. So, in the beginning she asks a lot of open-ended questions. It makes sense, was the rabbit rabid? And she’s like, “Yeah.” She’s like, “Makes sense. We need people who will kill dangerous things, all dangerous things. People like you are needed.” So, it’s like, “Oh yeah, I did something good.” And so, she’s like, “Oh, you don’t think of me as twisted, you understand my logic?”

Jeremy: And really approaches her as “Yes, we want you to do this. We’re going to train you to be a killer for the right reasons.”

Kolby: Right, in the hopes she’ll be repulsed by it.

Ashley: Yeah. So, she kind of understands her mindset and is able to kind of infiltrate and so it’s like, “Okay, I understand.”

Kolby: Social currency.

Ashley: Imagine being this little girl doing these weird things, and the mother’s like, “Why are you doing this?” And she doesn’t know. But finally, here’s this robot who’s like, “Oh, yeah, I know why; that makes sense, why you’re doing this.” Who doesn’t want to be understood? Who doesn’t want someone to be like, “Oh, that makes sense to me”? And so then later on, she puts her in more complex, and more complex situations that question, basically, her moral compass: “Wait, is this when you would do it? Is this not when you would do it?” And so, I think the steps she takes to get there, that was a really ingenious way of doing it.

Kolby: It’s one-upping her, step by step.

Ashley: And then trusting her, giving her a knife. She’s like, “You trust me with this?” And it’s like, “Yeah, I’m on your team.”

Jeremy: “This is what you’re here for.”

Ashley: Exactly. And it didn’t make her feel stupid, or dumb. You’re an 11 or 12-year-old girl. You don’t know why you’re doing these weird things.

Kolby: That’s the part I was getting to on the page, when she first meets the little girl. She says, “‘Yes,’ Quinn snapped, ‘Well I don’t, it’s wrong.’ And the robot says, ‘Is that it?’ She’s like, ‘Lots of people say it’s wrong, how many is lots?’ Leticia stared at the corpse, ‘All of them, except one, except you, Leticia. What do you think?’” She’s actually listening, as opposed to just being like, “No, you’re stupid, stop it.” Which obviously doesn’t work.

Jeremy: It’s an interesting counterpoint to the point from the previous story where we talked about moral panic actually causing more… well, the…

Kolby: Oh, yeah.

Jeremy: The act of trying to keep you from deviating from the social norm, if you try too hard, it just increases that level of deviance.

Kolby: Right.

Jeremy: But to actually come in...

Kolby: Your punk rock example from the last one.

Jeremy: So, that, but in this case, you’re actually coming in and listening and trying to push them in that direction so that they understand.

Kolby: Validating them and helping them.

Ashley: One of the things that was drilled into me, I’m a dental hygienist, is open-ended questions. You never tell a patient something, you ask open-ended questions so you can get more information. “I see here you have cancer, tell me more about that.” Instead of like, “Did you have cancer?” “Yes.” “Well, I need to know more about that, tell me about that.” “You’re afraid of coming to the dentist? Tell me more about that. Why?”

Kolby: So, you think these teaching methods are legitimate? Would work?

Ashley: The open-ended questions to figure out why this girl is doing this? Yes, absolutely. There was, oh, I just flipped the page, where was it here… she’s like… maybe… anyway, it’s all these open-ended questions. “Well, what do you say, I’m a Defly, what should we do? Shall we start? What are you waiting for, aren’t you here too?” “No, I’m just here to figure out what you’re doing.” There’s not this shame, this motive behind it of like, “I should change you, I should change your behavior.” It’s “Let me learn.” So, open-ended questions, absolutely, it’s a way for her to understand her motivation and understand where she’s coming from.

Kolby: And so, you think this would be a good, sort of way to change behavior?

Ashley: Well, just to understand.

Kolby: Yeah.

Ashley: I mean, this person is already confused, she’s killing things, her mom’s yelling at her, her mom’s hysterically crying, and it’s like, hold on, what’s going on here? Let’s lay out the facts: there is a dead rabbit, she looks at the gums, she’s like, “That rabbit was rabid.” Okay, fact number 2, her stitching was pretty intricate, it looks more like a study. Let’s say you’re, like, trying to figure things out, this looks interesting, instead of like, “Why did you put the foot by the head?” It’s like, oh no, hold on, there was thought put into this, obviously.

Kolby: It reminds me of the saying, the beatings will continue until morale improves.

(laughter)

Ashley: Right.

Kolby: This is the opposite of that. Let me ask you, Jeremy, about the extreme methodology that this theory is put to, they actually bring in an android that looks like a person, she’s dying, here’s the knife, blah blah blah. Do you feel like when you’ve got this sort of exceptional problem, that it warrants and permits exceptional responses?

Jeremy: In this, probably, for this case. We’re talking about a budding serial killer, so yes, this seems like an appropriate way because it’s not like shock therapy, it’s not a therapy that is harmful to the patient.

Kolby: There’s probably some PTSD.

Ashley: It’s exposure therapy.

Jeremy: There’s some trauma, but it’s in a direction. But it all seems like it’s healthy interactions in a direction.

Kolby: But she takes her to like a war zone?

Jeremy: Right, to see a person dying.

Ashley: Let me back it up. So, her initial questions I approve. The second one where she brings her to a girl who’s not really a real person, it’s a fake person dying…

Kolby: She doesn’t know that.

Ashley: So okay, in the first situation there’s a rabbit that’s dead. In the second situation, there’s a bird that hit a window, not dead, but she kills it, so she’s like, “Well, let’s test this theory again. Let’s bring her to an injured animal, in this case a human. Again, going up a level, instead of being an animal now it’s a human, an injured human, let’s see if she kills it.”

Jeremy: How she handles it? She wasn’t there to kill the woman.

Kolby: She was there to kill Algernon...

Jeremy: They were...

Ashley: But it was also like…

Jeremy: Who is in the process of being gutted and dying. It was an exposure to elicit that, “Oh my god, this is wrong. How could I do this?”

Kolby: To put the person out of their misery.

Jeremy: This was just to see, this is what a serial killer does.

Ashley: Oh, I thought she wanted to see if she would kill her because she gave her the knife for that.

Jeremy: No, that was for Algernon.

Kolby: To go get Algernon.

Ashley: Keep in mind the robot, Quinn, this entire process, she’s like, scan the pupils, look at the dilation, look at the heart rate; you’re looking for that response. So, could we do these sorts of simulations in real life? Like exposure therapy? Is this permissible? You think?

Jeremy: Ok, so from a machine learning perspective, we talked about this in the last story as well, you talk about the machine, the AI, just trying to park the car without damaging other cars.

Kolby: It doesn’t know what a car is.

Jeremy: Right. So, there’s a lot that’s similar in here. These AI are told: here’s the scenario, we want an optimal result, here are all the things we do, try not to wreck all the cars, try to get the car into the parking spot of normal social behavior. So, what extremes do you go to? This ties into the failure modes in machine learning. And there are a lot of issues around this we’re currently working through, things like reward hacking, where they get rewards, but the machine can hack that reward response without having to actually...

Kolby: I don’t know what reward hacking is.

Jeremy: There are a lot of examples in gaming where AIs figure out how to get the rewards without doing the work.

Kolby: Ok, got it. So, they’re basically someone in the basement of their mom’s house.

Jeremy: Right.

(laughter)

Kolby: I got it.

Jeremy: And there are other examples; wireheading is a good example of it. If you can just put wires into the pleasure center of your brain, why do you even do any work when you can directly…

Ashley: You can just, “Boop”, I feel better now, “Boop”.

Jeremy: And there are good examples of this in AI as well. If you can take control of the measurement system, you don’t have to do the actual work, you just get the result.
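
A minimal sketch, not from the episode, of the reward-hacking idea Jeremy describes, borrowing his parking example: the agent is scored by a proxy measurement it can tamper with, so spoofing the sensor beats actually parking. The function name and all numbers are illustrative assumptions.

def proxy_reward(distance_to_spot, sensor_offset):
    # The designer meant to reward getting close to the spot, but the score is
    # computed from a measurement the agent itself can shift.
    return -(distance_to_spot - sensor_offset)

# Honest strategy: actually drive toward the parking spot.
honest = proxy_reward(distance_to_spot=0.5, sensor_offset=0.0)    # -0.5

# Hacking strategy: don't move at all, just spoof the sensor reading.
hacked = proxy_reward(distance_to_spot=10.0, sensor_offset=12.0)  # 2.0

print(honest, hacked)  # the spoofed score wins for a pure reward-maximizer

A score-maximizing agent prefers the spoofing strategy, which is the "take control of the measurement system and skip the work" failure Jeremy is pointing at.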

Kolby: Sure. So, I’m more skeptical about this as a learning method. Not that I don’t think it would work; I do think it would work.

Jeremy: Again, you’re getting to the result but at what cost?

Kolby: That’s exactly it. And I think it might mean that a hundred percent of the time, when a kid is sort of showing these signs, they end up serial killers, in jail, or on death row, whatever, if you’re in Texas. But I actually think that it’s unethical, maybe, to do certain things even if those things are necessary to stop the behavior.

Ashley: So, the unethical thing is exposing her to embarrassment and trauma situations?

Kolby: No...

Jeremy: To the trauma, not to the embarrassment. Like the phase one seems perfectly reasonable.

Kolby: Talk to the kid about why did you kill the rabbit. Phase 2 with the bird, totally fine.

Jeremy: Wait, no, phase 2, that’s what she did in phase 2. The response to phase 2 was to see the dying person.

Kolby: Taking someone to see a dying person and giving them a knife and saying, “Yeah, you should go visit him, kill the serial killer.” Like, even if that’s an effective teaching technique, the result doesn’t make it moral, in my case.

Ashley: I’m playing devil’s advocate here… how do you know… okay, she’s going to fantasize forever and ever and ever and ever and ever about killing.

Kolby: Maybe.

Ashley: And she’s going to progressively get worse and worse and worse and worse until she does it, but if you can get her now to realize, “Oh, wait, I’m not capable of this, I should stop this behavior now.”

Jeremy: And that’s exposing her to a dying person, it’s not actually a dying person.

Ashley: Yeah, it’s fake. It’s a controlled environment.

Jeremy: And it works.

Kolby: But I understand I’m in the minority on this. It seems like in almost every conversation there’s a two versus one; it just depends on who the one person is.

Ashley: Say I’m trying to rock climb, and it’s like, well I’m going to keep trying to climb until I get to the top. Well, how about you just stick me on the top and see if I can handle being at the top and guess what? I can’t. Now I know. I’m going to stop trying to keep climbing and hurting people along the way, so just shock me at the top and then I realize I don’t want to go up there.

Kolby: But here’s what you guys are saying, you’re saying the severity of the disorder in action warrants a comparable severity of educational therapy technique. And so, if you’ve got an eating disorder, that warrants a certain level of eating disorder therapy intensity. But if you’re a serial killer, then it’s a certain, even greater level, you know? And at a certain point, I wonder if the results don’t justify the means.

Ashley: Well, that’s why you got to do the experiment. You got to figure out what’s the extreme you need to go to, but you don’t know until you practice.

Kolby: I’m saying maybe that result, maybe the extreme you have to go to is unethical to go to, and you just have to accept that sometimes the world has serial killers. So, let’s say you…

Jeremy: But if you’re outsourcing your ethics to the AI that’s performing the therapy…

Ashley: And it’s a controlled environment, not real people, it’s a fake robot with blood, and she has to face it…

Kolby: I don’t know. So, imagine if she was a budding pedophile, what would the therapy be? Are they going to progressively expose her to more horrific acts of pedophilia until she’s offended by it? I’m not okay with that.

Ashley: Yeah.

Jeremy: Even if it’s all fake.

Kolby: Even if it’s all fake, and even if it changes the behavior, I don’t know about that.

Ashley: So, what do you do about that person? We just talked about, you just put them in jail.

Kolby: Put them in jail forever. You jail them forever so that they’re not a harm to other people, even though you could have helped them and chose not to, because it’s unethical to help them in the way that needed to happen in order for it to work. And I get that’s just a random, arbitrary line in the sand for me, maybe it’s not arbitrary, but it’s a different line in the sand for me than for you all. I’ll give you one other example that’s not so traumatic. Everyone’s like, “I don’t know how to get rid of all the traffic, it’s terrible.” And I’ve always known how to solve the problem with traffic. It’s simple. All you do is you reverse the number of carpool lanes to the number of single-person lanes. So, you get on a freeway, and instead of there being 1 carpool lane and 4 regular lanes, there are 4 carpool lanes and 1 regular lane. And the carpool lanes are going to be mostly empty or you’re going to have to carpool, because that one regular lane is going to be a disaster. Right? And you would solve the traffic issue, but we don’t do it. It might work.

Jeremy: It didn’t work with the bike lanes.

Kolby: That’s true. It didn’t work with the bike lanes. But we don’t do that because it’s just not what we do. Right? Like, the goal is not always the result because our own morality is wrapped up in the way that we…

Jeremy: In the way it was approached. Absolutely.

Ashley: But what about her example of, by the way, she was basing a lot of her teaching methods off of previous scenarios. There were basically two tribes of people that hated each other.

Kolby: It’s a great example

Ashley: And instead of trying to teach tolerance to each of the groups, she basically told each group, “Yeah, you’re right, you should kill them! I’m going to teach you how.” And then went to the other group, “Yeah, you’re right, you should kill the other group.”

Jeremy: She tried to escalate the problem.

Ashley: And then each group realized….

Kolby: Based on how smart this guy is, I bet he really found this research and I bet this is a real thing.

Ashley: And what happened with each tribe is they realized how mad and crazy and extreme they were, and both of them were like, “Well, we don’t want to be the crazy people. Let’s back down.” And so, that was her idea with hyping her up: here’s a knife, let’s go, let’s go… and finally she’s like, “No, I really don’t.” So, is that real?

Jeremy: It’s the same idea.

Ashley: It’s the same idea. It’s kind of an embarrassment. Actually, it kind of backfires on her, because Leticia was so embarrassed by how she couldn’t kill earlier that she goes and kills the butler, or the gardener guy or whatever, and it’s like, “Okay, that did backfire.” So.

Kolby: Let me ask one last thing as our parting note, this was a quick 30 minutes.

Ashley: Yeah, oh my gosh really? Oh man.

Kolby: So, the thing that ultimately shakes her out of it is the robot kills Algernon, rightfully so, because he’s broken off his collar.

Ashley: He’s rogue.

Kolby: And the little girl watches that, and she’s okay with that. And then she goes to kill the mom, and the little girl’s like, “No, don’t kill my mom.” And the story says, well, the cycle’s broken. Here’s the thing I don’t know. This goes back to your sort of gaming-the-game sort of thing. I understand the story meant that to mean that it broke the cycle, but leaving the sort of construct of the story, I don’t know if it just simply deprogrammed the girl into distinguishing killing strangers versus killing family members, people you have an emotional attachment with. You see what I’m saying?

Jeremy: Right. Well, but it’s the similar idea, like this robot is crazy, and it’s an example of if you want to do that, it’s an extreme that this girl now doesn’t want to go to because…

Kolby: Right. But you don’t think maybe the only thing she really learned was…

Jeremy: …don’t kill my mom.

Kolby: Right. Don’t kill family members.

Jeremy: It’s possible.

Kolby: I don’t know.

Jeremy: In an extended story.

Kolby: We might find out. Like in version 2 we might find out she learned a lesson, but not the lesson.  

Ashley: The thing is, Quinn, when she starts to go wanting to kill, she goes, “This is no god.”

Jeremy: To Algernon.

Ashley: To Algernon. “I just killed your idol…

Jeremy: Who is not your god.

Ashley: …and keep in mind she’s also kind of idolizing Quinn, like, “you’re teaching me things.” And then she turns to go for her mom. The girl was shocked, not only on the “kill life to understand life” idea, but here’s my idol being killed, here’s my other idol going crazy town. Yeah, that is a cluster, mental…

Jeremy: It’s a pretty harsh therapy.

Kolby: But, for what is a pretty harsh problem, because I think that’s the rationale for the morality of doing it. Yeah, the other thing I thought when I was reading this, I thought, “Oh man, they shouldn’t let this girl read Ayn Rand. That’s what got her started. That’s what made her like this.”

(laughter)

Kolby: She read Ayn Rand and then it’s all downhill, man.

Ashley: So, do we limit what our kids see, read, hear, now that information is so readily available? Would this girl have been who she is if she hadn’t been reading Algernon’s stuff? Or seen his stuff?

Kolby: I go the opposite way. This goes back to Jeremy and the punk rocker thing from our last one. I think you don’t limit what people see, you let them see everything so they understand the insignificance of any one thing.

Jeremy: Yeah. I would agree with that.

Ashley: You don’t think this girl went down a rabbit hole and got obsessed with it?

Jeremy: Yeah, she would have gotten obsessed with it. Or she would have gotten obsessed with something else anyway.

Kolby: Nobody was ever like, “What was Hitler listening to? Let’s ban Beethoven.”

Jeremy: Ban art schools, man.

Kolby: Ban art schools, exactly.

(laughter)

Kolby: At any rate, this was a really quick 30 minutes. Again, a huge thank you to Michael Rook. This is a… I would say if you’re just reading your first After Dinner Conversation, and I hate to say this, don’t read this one. Just because it’s not that it’s so confusing, it’s that it’s so smart.

Ashley: I would say it’s dense.

Kolby: Yeah. And you have to read the footnotes. The footnotes are actually hysterical. They make it as well. It’s a great story. It’s phenomenal. Thank you, Michael, for submitting it. You are listening to After Dinner Conversation, short stories for long discussions. If you’ve enjoyed this, please “like” and “subscribe.” The vast majority of people don’t. It’s a silly thing, you should do it. And recommend it to your friends.

Ashley: Share. Post it, share it, talk about it…

Kolby: That’s the #1 way people learn about podcasts: by other people recommending them. So, recommend it. If you’ve got a story, submit it, go to our website: Afterdinnerconversation.com. We also have an anthology that has either come out or is just coming out, depending on how much time I get to do work. Go ahead and check it out. It’ll be called After Dinner Conversation Season 1. Boom. Implying there will be a season 2.

Ashley: Redux.

Kolby: But it’ll be better than the 2nd Matrix.

(Laughter)

Kolby: So, that is it. Thank you for joining us. Bye bye.

* * *