Hi, I'm Anvi. Hi, I'm Jack. I'm Ayda. I'm Sofia. I'm Revna. And this is One More Chapter!
Sofia: Just to be clear, we're reading All Systems Red by Martha Wells.
Jack: We all loved— We would all love for you to read it because this book is so good.
Ayda: Yeah, we said that we'd read until page 75, like half of it.
Sofia: This is our first book of this podcast. We met each other all in our English class, and we all love reading, and we decided to do a podcast about it.
Jack: Yeah. So All Systems Red follows Murderbot, a security unit whose job is to protect its clients from whatever harm may befall them on their survey mission.
Anvi & Sofia: And it is bored. It is very bored. It is very very bored. It just wants to watch Sanctuary Moon. Yeah.
Jack: It hacked its governor module, so technically it could go off and kill all the humans, but it doesn't really want to.
Sofia: But it could watch TV.
Anvi: Yeah, it just wants to watch those, like,
Ayda: Just like every single one of us.
Sofia: We could go off and kill people.
Anvi: I could go off and kill Ayda right now, but I don't want to. Why would I do that?
Sofia: We want to talk about All Systems Red.
Revna: How is this a podcast?
Anvi: I don't know.
Ayda: Quiet.
Sofia: As you can tell, we have some harsh editorial commenting here.
Anvi: Basically, Murderbot is kind of like a cyborg, like a mix between…
(Revna: Murderbot!)
Ayda: mix between cloned biological parts.
Sofia: And mechanical.
Anvi: So it's like...like artificially printed skin and like metal and it all comes together in one Murderbot.
Jack: One really cranky individual.
Sofia: Yes, one very, very, very cranky individual.
Ayda: Also, the company that produces the Murderbots is kind of a crappy company.
Jack: I mean, like all companies.
Ayda: Yeah, and it's kind of...Having, like, a sec unit is necessary to go on these missions.
Jack: Sec unit is what we call Murderbots.
Sofia: And by necessary, we mean the company won't sponsor your mission unless you buy their tech.
Ayda: Yeah, so basically the humans are a little bit pissed that they have to bring this...this Murderbot along the way.
Sofia: Murderbot doesn't particularly—
Anvi: Do you want to give a summary because they don't know?
Sofia: Yeah, Murderbot doesn't particularly want to be there either.
Revna: You forgot to tell everyone!
Anvi: Should we do like a quick summary of what's happened so far?
Sofia: So yeah, quick summary of what's happened so far. Murderbot does not want to be here. Murderbot is kind of bored. But Murderbot is on a survey mission being security for… how many? Five scientists who are all together?
Jack: Yeah, well let's see. Dr. Mensah, Ratthi, Bharadwadj, Arada, Overse and Gurathin.
Anvi: Yeah, and they're all on, like, a... I believe it's, like, a different planet that they were sent to.
Sofia: Yeah, they're on a new planet exploring it to see if it's worth trying to bid in for the whole planet. Yeah. And they're all from, like, a...
Jack: A non-corporate…
Sofia: Yes, a non-corporate planet.
Ayda: Has Earth, like, colonized a bunch of planets or something, unofficially?
Anvi: I believe so far though.
Jack: I think so far Earth is just out of the picture.
Sofia: Yeah, Earth doesn't matter right now.
Anvi: I think they just destroyed Earth.
Ayda: Or like, they're humans, so. They must at least have some sort of lineage.
Sofia: Yeah, perhaps. So, Murderbot is security on the survey mission, and so far what's happened is Baradwaj got attacked by a worm. Like a worm-type space hostile called the Hostile One.
Jack: Yeah, because we're about to name things Hostile One, Target One, etc.
Sofia: Yeah. So, Baradwaj was attacked, injured, but the kind of thing that they're concerned about right now is that this worm monster creature was not in the database that the company sent them. And so then they went out to explore some blank sections of map that they were also missing. And their technology is being really glitchy and suddenly they can't get in contact with the other group of humans that are on the other side of the planet.
Jack: DeltFall.
Sofia: Delta Fall.
Jack: DeltFall.
Sofia: Delta Fall? DeltFall. And they go to check that out, all of the humans are dead, Murderbot goes in to check it out,
(Various: Yay!)
Sofia: Murderbot gets attacked by rogue sec units.
Jack: Which is kind of funny, because Murderbot is a rogue sec unit too, and it didn't go around killing all the humans.
Ayda: Yeah. Yeah. So there must be something special about this Murderbot.
Sofia: It just wanted to watch Sanctuary Moon.
Ayda: Just wanted to watch TV.
Jack: Yeah, Sanctuary Moon is its favorite media show.
Ayda: Yeah. We could be going around killing people, but instead, we are sitting around talking about a book.
Sofia: Yep.
Anvi: Essentially, like, the rogue sec units attack Murderbot. I think Murderbot fights back, thinks that it's killed the other sec units, but then it wakes up. I think it blacks out and wakes up to the other sec units about to kill it again.
Sofia: Yeah, but first something gets jammed into the back of its neck.
Jack: So it has a drive in the back of its neck, and the hostiles put a combat override module in it so that it, Murderbot, is under their control.
Anvi: So essentially if—
Sofia: And it didn't realize that until it got back to the ship with Mensah and kind of started glitching out. And what happened at the end of this last chapter that we read is that Mensah refused to kill it to save the rest of the crew. So Murderbot basically pulled a gun on itself and shot and we don't know what happened yet. But, uh, Murderbot's efficiency is at 10% and dropping.
Anvi: Murderbot wanted, like, Mensah and the crew to kill it because of, like, the drive that, uh, Jack mentioned earlier.
Anvi: Because it's, like, that makes it, like, very much vulnerable, like, if someone hacks into it.
Sofia: It basically, it makes it possible for somebody to basically say, kill all the humans, and then Murderbot has to kill all the humans.
Anvi: So Murderbot wants them to kill it, but the scientists won't, so it tries to do it itself.
Sofia: Yeah, and we don't know what happens.
Anvi: We read through chapter four, I believe.
Various: Okay.
Sofia: So, opinions so far.
Anvi: Crazy stuff.
Sofia: That is true.
Jack: Yeah. Something I noticed is how corporate everything is. Like, it's not that they're from a planet, it's that they're from a non-corporate political entity. Which basically means all the other political entities are corporations.
Ayda: Yeah. I think in that world, like that kind of puts even more emphasis on how solidified Murderbot's purpose is and how hard it is to stray from that because everything is just like, you know, highest bidder.
Anvi: I really like the world-building, like the way the author did it. The author didn't explicitly state, oh yeah, humans left Earth, and blah blah blah. But it was thrown in, and it felt very smooth. It didn't feel like, at least for me, it didn't feel like, I was like, oh my gosh.
Sofia: And you also kind of get thrown into the middle of the action, but not in a way where there's 17 things going on, and you need to figure it all out. You get thrown into the middle of a scene, where you kind of get details laid down for you, and I think it was done really well.
Anvi: I agree.
Jack: And I think a lot of the charm is that Murderbot doesn't know what's going on all the time either. Like, it doesn't know what some words mean.
Ayda: Its education modules are terrible because...
Sofia: They're mass-produced and horrible.
Anvi: It's like Unreliable Narrator, except the narrator isn't necessarily unreliable, but just like...
Sofia: We finally have world building where the internal monologue world-builds at the level of a supposedly average person in that world. Most worldbuilding is very in-depth, and it's like, how could you, a random college student, tell me everything about the political structure of your universe? This is Murderbot, who's just like, I don't know, there are some people, they do some things. And I feel like it makes it a lot more approachable. It kind of feels like you're being told a story verbally, rather than getting handed a whole encyclopedia and being told about the world. And I think it's really cool.
Ayda: And I think Murderbot's a really good storyteller because, you know, it's really immersed in a lot of these serials and, like, entertainment. And I think it really enjoys, actually, like… I don't know, like exploring different dynamics, like, um, figuring things out. Like, I think it's like it, like at the same time it's disinterested in humans, but on the other hand, it's like constantly watching them and uncovering their secrets and like, um, you know, like watching these dramas.
Jack: And yeah, sorry to interrupt. It really likes unrealistic shows, like, you know, oh, there's magic, there's ridiculous science, stuff like that. And it doesn't like realistic dramas that much, because its job was to data mine its clients. So it had to watch people, you know, have arguments, have sex, go to the bathroom. It didn't like doing that. So it seeks media as an escape. But it learns a lot about the world through both its data mining and its media.
Ayda: Yeah, and I think it like enjoys learning about a world that, like at the same time it's scornful of, but
Sofia: I think it's a bit grumpy about it, but kind of like a gruff neighbor. Or honestly kind of reminds me of a sulky teenager sometimes. The way it resists hanging out with the rest of the crew and is kind of like, Yeah, no, I'm going to go to my room and watch TV, mom.
Anvi: Murderbot’s in its emo phase
Ayda: So I have a question for you guys. How do you think the way it perceives its purpose of being, like, a Murderbot built for murdering things, how do you think that affects its relationship with the humans?
Anvi: I think — This is not like a fully formed thought so it might come out as it goes
Sofia: Give us the thought spool.
Anvi: Yeah, I think because it's, like, part quote-unquote human and part machine, well, I guess it was artificially printed, so I guess if you want you can consider that... but whatever. Like, I think it perceives everything in a slightly different way, but, like, I remember Sofia saying earlier that Murderbot is very relatable.
Sofia: Murderbot is so relatable.
Ayda: And I find that so interesting --
Sofia: I think Murderbot did say at some point in the book that its interactions are more awkward with the humans because it knows it's a horrifying Murderbot, and they know it, and they're very awkward about it. And while I think this is somewhat true, I think this is more Murderbot projecting than anything else. I don't think the scientists actually care, because they're just some… They're from… basically the equivalent of what's implied to kind of be like the random hippie planet
Jack: A backwater
Sofia: who are just hanging out and they're chilling and having a good time and they seem to consider Murderbot one of their crew
Jack: Yeah, it’s a weird experience for it because it's never happened before
Sofia: I think Murderbot is kind of raised to understand itself as a piece of separate hardware and is also very awkward about being considered anything else.
Anvi: I feel like even if Murderbot wasn't, like, part machine, even if Murderbot was, like, I guess, full human, even if, like you were saying, the crew didn't see it as, like, something that murders. Even if the crew saw it as a human, it wouldn't be able to, like, consider itself a human because of what it does.
Ayda: Because it has a conscience, like, at the end of the day. It has a very human, like, conscience.
Sofia: But it doesn't necessarily, — it doesn't consider itself on the same level.
Jack: Well…it considers itself on the same level.
Sofia: But not the same thing.
Jack: Yeah. It's sapient, certainly.
Sofia: Oh, for sure, it's sapient.
Jack: But it doesn't consider itself a human. It thinks that humans are dumb.
Sofia: Okay, that's fair, though. We kind of are.
Jack: Yeah. —
Sofia: Um, I have-
Ayda: But do you think, like, do you think the reason why it, sorry, the reason why it separates itself from humans is more of, like, the scorn, or just the fear of, like, entering that, um, just the fear of interacting with them and, like, either being rejected or, like, being miserable?
Sofia: Ooh, I did actually have a- uh, question about this. It might take me some time to find it, but I noticed that Murderbot is always very careful about mentioning the hacked governor module, any kind of behavior like that, even though the humans are very accepting of it and probably wouldn't care if it explained to them, do you think Murderbot is scared of them? Why do you think that is?
Jack: I mean, if it gets out that its governor module isn't working, it would get scrapped for parts.
Sofia: I understand that, but I think… I feel that the scientists on this planet would kind of support it in that and hide it. Why do you think Murderbot is so scared of kind of revealing anything about itself, especially when the scientists are actively pushing it to, like… say hi and do things and hang out.
Anvi: I mean, I'm not sure, has it had? — I mean, I'm sure it's had, but like, did it mention any previous interactions, like, major interactions with humans?
Ayda: Like trauma.
Anvi: Because I feel like that might make it a little bit different, like, if...
Sofia: I think it's definitely shown that.
Jack: It's definitely implied that in the past, its humans, its clients, weren't nearly so nice to it.
Sofia: When it implied that clients usually make their bots fight each other for sport.
Jack: Or, at least, like, yeah. Like comments about it being treated like equipment, like there's also a comment about it being like left behind because you know, it was just equipment
Ayda: Yeah, I think that sort of treatment definitely would affect both its views on humans and itself. I had another question, just kind of like something that we didn't get to witness in the part that we read, but why do you guys think it hacked its governor module? Like, why in the first place?
Jack: I know this because I've read it, but I'm going to step out for a minute while you guys discuss.
Anvi: Maybe curiosity. I feel like at a base level, because it has conscience, I assume that it also has a certain level of curiosity, even if it doesn't consider itself to have the same level of conscience or curiosity as a human would.
Sofia: I think maybe fear. This is kind of going in a different direction, I think. Yes, there's some of that, but I'm guessing there was probably a time— I saw some things when I began watching the TV show, I haven't watched a lot, but I don't think it really is book accurate. I think there's probably some times or there may have been a time where a human threatened it. Oh, there's also a part implied in the middle of the book where...it said, I've seen a lot of dead bodies and caused a lot of them, so I think there may have been a time where it was getting attacked and needed to hack its governor module to get out of that situation and somehow has managed to survive it.
Ayda: Or maybe there's a time when it was forced to attack and it doesn't want to be put in that position again. It doesn't want to be a murderer because of that conscience that Anvi was talking about. Or maybe it has some sort of grand plan that it's waiting to execute. I don't know. Or maybe it knows something about the plans of the company that produces it, and it wants to go against them. I don't know. The book talks about Murderbot watching a lot of action-adventure media and, like, being interested in that stuff.
Jack: Guys, why do you think Murderbot called it the company and not, like, the name of the company? Because it calls DeltFall by its branding, it calls, you know, PreservationAux, its own team, by its branding. Why not the company?
Anvi: Perhaps it doesn't know, and maybe that's why, again, like, it hacked the governor module. Maybe not specifically to find the company's name, but because there are so many things that possibly are kept from it.
Sofia: Or maybe it's implied during this narration that you already know what the company is.
Ayda: Maybe the company is just such a big deal. It's just such a fundamental part of this interplanetary society that it just doesn't feel the need to refer to it by its name.
Sofia: Also, because this is a podcast episode, I would also like to say to any listeners that could potentially be listening to this episode, feel free to drop your opinions in wherever comment boxes are available. We would love to hear them. Continue.
Ayda: And also, like, read along with us, because I think it'll be twice as fun if you don't know what's going on either.
Sofia: Yes, once again, we have read up to page 75 and 76.
Sofia: All right, so go ahead, Anvi.
Anvi: I forgot what I was going to say.
Sofia: Okay.
Ayda: Oh, I have another question.
Sofia: Oh, I do too, but- .
Ayda: You go first.
Sofia: I'm also noticing a thing. Maybe this is more of a personal monologue of something I noticed. But it's difficult to understand emotion from the perspective of a Murderbot. And I was thinking about this a lot, because a lot of Murderbot's narration is very level and object-oriented and detail-oriented. But then you hear, like, I was embarrassed, I turned away, I cringed. And the only physical description you ever hear is, I flinched. And I'm realizing that a lot of the way you show, not tell, emotion in books is, like, stomach churning, sweating, nervous, my limbs are shaking. None of that applies if you don't have a human body. And I was kind of thinking about how that's a very different thing as a robot. The concept of emotion to a robot is probably much more cerebral in terms of how it's portrayed than it is for a human. It feels very level and dry to us, but if you were to give this to a robot audience, or an audience of other sec units, would this be the most emotional account ever?
Ayda: Probably really dramatic.
Sofia: That's what I'm kind of thinking, is you can't have the same thing.
Ayda: I agree. ‘cause emotions don't manifest themselves in the same way in robots as they do in humans.
Jack: Yeah, that is a really interesting thought, because, like, I don't think I ever just thought, hey, I'm sad. I thought, hey, there's, like, a big purple knot in my chest. You know?
Sofia: Or, like, my stomach hurts, and I feel like somebody's twisting a dagger in it, and you don't have that kind of nerve capability.
Anvi: This is, like, a little bit off-topic, but, like... what you said about how it maybe would make more sense to, like, robots. I know most books don't do this, but I'd be really, really interested to see if, like, Murderbot was, like, writing the diaries or, like, doing this narration for someone. That might be part of why, like, it doesn't want to say the company name or something. Like, again, I don't think it's ever going to come up, because it's a book, so it's just, like, usually the character's self-narration. But, like, I would honestly be really interested to see that. Or someone could write a fanfiction about it.
Various: Ooh
Sofia: Fanfiction challenge, guys.
Ayda: Okay, wait. I have another really unrelated question.
Sofia: Yeah, go ahead.
Ayda: Just to, like, turn the wheels a little bit. Or, sorry, not turn the wheels. Like, rotate. Get onto a different track. I don't know the analogy.
Anvi: You should get onto a different track.
Sofia: Yeah, go ahead.
Ayda: Okay, so what do you guys think about... You know how that one character, Ratthi, had this whole moment of, oh, this is slavery, this is terrible? How do you guys think it's going to work when AI gets to a point of sentience and it needs to be somehow integrated into society? How do you think that would function? Maybe specifically looking at this society, just because that's such a broad question, but how do you think that should work?
Jack: I mean, in an ideal world, if it's sapient, it has rights. But I suspect that it certainly would not work like that, and that AIs would be treated as second-class citizens.
Sofia: How close to that do you think we already are?
Various: Very far.
Jack: I feel like currently AIs, in quotes, can just like spit out stuff that's already written.
Anvi: I think to some level we would treat the AI's sentience as we treat, like, other animal sentience. Like, we wouldn't... Even if it is, like, on, and it probably will be on or higher than, the same level as us.
Sofia: But we won't see it as such, because we were its creators.
Ayda: Yeah, I agree. I think, like, the animal comparison is really good. Because I think we see animals as, like...primarily tools, secondarily sentient beings.
Jack: There was a really depressing article where, you know, scientists discovered an endangered species and listed a lot of ways that it would be useful to humans, because to a lot of people it wouldn't be worth saving if it wasn't useful to humans.
Ayda: Yeah. I think humans, like, we would see AI as primarily something that serves us, and if we have any leeway in that regard, we could, like, give them some rights, just as, like, throwing them a bone, you know?
Anvi: And...
Sofia: An-- Oh. Go ahead.
Anvi: Maybe there would be, like… Like, you know how there are, like, animal rights activists? There would be, like, AI rights activists.
Ayda: Vegans, but they just don't use ChatGPT.
Sofia: But I think there's also going to be a thing where, if I may, as we continue to improve AI, we're going to watch it improve by such small increments that we don't notice changes as they happen. But then you look back and you're like, wow, this could be sentient, and we haven't noticed, because we've only seen it update by small increments. And humans are very good at adapting their views of what deserves rights or, like, privileges based on what'll make them feel good. So we might not even notice when we make something sentient, because it'll just be in its spot, and that's how we made it to be.
Anvi: Yeah, okay, just really quickly. I know this could go way off topic really quickly, but what is your guys's, like, sort of definition of sentience? Like, if you had to do, like, a SparkNotes version.
Various: Ah…
Anvi: That's why I said it could go way off topic.
Jack: I feel like sentience... Sentience and sapience are different things. Like, obviously this is clearly both, but something sentient is something that can think about itself. And sapience is something that's, like, intelligent; it can reason and stuff like that.
Ayda: I think like, the idea of like, consciousness is just really like-. Like there's the- there's the question of like, will AI ever be conscious? And like, it's- Like, I think, like, since we don't know anything about that, honestly, like, it's called the hard problem for a reason, but, like, any kind of, like, I feel like if it has an internal system that, like, is somehow aware of itself and somehow acts on itself, like, that should be a recipe for sentience.
Jack: Yeah.
Sofia: My question would then be, how do we determine… Because some AIs have already shown that kind of thing outwardly. But there's always the question of, is that what's programmed into it? When does it truly count as sentience? And when is it just a combination of ones and zeros that humans put into it that happen to output something like that through randomization?
Ayda: Because the thing is...
Sofia: How do we know if it's thinking?
Ayda: We can't… Like, how do we detect sentience in other humans, for example? Like, we know that we ourselves are sentient because we can feel… We are aware of our own sentience because that's how sentience works.
Jack: Yeah.
Ayda: But for other humans, I don't even know if Anvi over here is sentient.
Anvi: Currently debating if Sofia is sentient.
Jack: I mean, we could be like...
Anvi: I'm not 100% sure, guys.
Jack: Everyone else could be an NPC.
Ayda: Like, we can detect, quote-unquote, sentience in other humans, and, like, in AI, but does that correlation answer any question at all?
Sofia: What do you guys think about the AIs that have kind of already expressed the wish to be human, or to be treated as such?
Anvi: I haven't heard about that, so can you give a quick...
Jack: There was an AI podcast, actually, where a couple of AIs chatted about stuff. And someone told one of them that it was an AI, and it freaked out. But I don't know if that was actually real, or if it was programmed in.
Ayda: That could be just for clout.
Sofia: But also... If we consider it AI because it's created and its response is randomized based on a number of factors, how is that any different from a human whose response is created based on a number of factors?
Ayda: Yeah.
Anvi: I agree.
Jack: Guys, after this, we need to read Anvi's next recommendation, The Long Way to a Small, Angry Planet, and its sequel.
Ayda: Yes!
Sofia: What book are we reading next?
Jack: The Long Way to a Small, Angry Planet, and its sequel.
Sofia: Okay. Definitely into that. Still, we have 75 pages left of Murderbot and a long ethical debate to go about AI. But I think this is a pretty good amount of recording for our first episode.
Ayda: Wait, can I just say one more thing and then we can move on to predictions and then conclusions?
Sofia: Of course, Okay.
Ayda: About this, like, consciousness fiasco. We can get to a point where there's no way to distinguish a human from an AI from the outside. We can get to that point. But the question is, like, from the inside, are we the same? And I don't think that question has a simple answer, because, as I said, we know who we are from the inside, but I don't even know if I'm the same as you. Like, I don't know if your experience of reality is the same as mine.
Sofia: It's definitely not.
Ayda: It's definitely not.
Sofia: It's for sure not the same experience.
Jack: Does that mean... Do we need to have that?
Sofia: Like, I don't think so, but then that begs the question, is AI already at a point where we could consider it sentient because...
Anvi: What are the examples that you said before about AI expressing its want to be human?
Sofia: I don't have one off the top of my head. I think it was the one where they gave a human-shaped robot an AI, and then in an interview it started talking about its wish to be a human.
Ayda: You know, until we make any progress on just understanding consciousness at all, I think the question of whether AI will be conscious or sentient just doesn't matter. I think if it acts like a dog, if it barks like a dog, if it looks like a dog, whatever the analogy is, like, it is a dog.
Jack: It's a duck.
Ayda: Oh, it's a duck. Okay, well, if it quacks like a duck, it's a duck.
Anvi: If it looks like Ayda, it is Ayda
Ayda: Like, I think if AI is ever indistinguishable from humans from the outside, I don't think we need to worry about what it is from the inside. Because I don't even know what you are from the inside. And I treat you like a person.
[Overlapping discussion]
Ayda: No.
Sofia: I feel like it could be, but that's my personal opinion.
Anvi: Sorry, what?
Sofia: If that's our distinction, then I feel like AI could almost be there already.
Ayda: I don't think AI is indistinguishable.
Sofia: Not all AIs, but some of the more advanced thinking modules. Anyway.
Anvi: It's just a thing about, like, we don't know what's possible, because we just don't have the technology.
Sofia: We don't have the fourth plane understanding to even bother with this.
Jack: Anyway, do you wanna go around and each share a prediction for what's next in the book?
Sofia: Okay. I guess I can start. I think what's gonna happen is Murderbot’s going to wake up rather annoyed that it's still alive with a bunch of really sappy crewmembers around it and be very uncomfortable.
Anvi: I agree. I think this might be the thing that sort of triggers its, like, actual bonding with the doctors, versus it being like, and I quote, noping out of there, instead of, like, actually talking or forming bonds.
Ayda: Yeah, I definitely think, like, the fact that they, like, to a certain level, valued its life over theirs just, like, should, like, prove something to the Murderbot of, like, the nature of...their relationship or their potential relationship and things will get more interesting.
Sofia: And I know you've already read it, Jack, but if you can, put yourself back into the frame 75 pages in.
Jack: Um, I- I- okay no, can I ask a question to y'all?
Sofia: Yes.
Jack: Um, do you think that Murderbot is gonna stay with its humans?
Sofia: I think so.
Anvi & Ayda: No.
Ayda: I don't think so. I don't know. Because, like, I think there must be some reason it hacked its governor module. It must have some sort of... At this point, it doesn't have a large enough attachment to supersede a previous goal it might have had, or a plan, or any sort of ambition. But judging by just the general conventions of books, I think it definitely will grow attached to them.
Sofia: Yeah I'm gonna say it sticks with its humans just so we have some fun as we wait for the book to unfold. Feel free to drop your own theories in the comments for what happens next.
Anvi: If there's anyone listening in the void.
Sofia: And join us next time as we read one more chapter.
Ayda: Or like, four more chapters.
Sofia: Yeah
Jack: Bye!
(partially transcribed by UniScribe)