The Entropy Cage by Stormrose casts you as a “cyber-psychiatrist” who communicates with a problematic artificial intelligence via computer commands.
user> sub.queryRequest()
user> What did you do that was so bad?
e26: Here is proof I crashed an elevator killing all inside.
PROOF: Verified: e26 in charge of elevator. Crashed killing 2/2 occupants.
ERROR: queryRequest() not found in sub e26.
user> sub.punish() | sub.disconnect()
I am no stranger to untangling computer syntax in an IF game (long ago I was a beta tester on Bad Machine), but on a conceptual level I had no idea what was going on. The background notes the author provides at the end make me feel like I’m reading about an entirely different game than the one I played. Apparently there’s some major moral choice between “punish” and “disconnect”, but I’m unclear on what it is supposed to be.
Additionally, the repeated “punish/disconnect” choices are interspersed with a few larger ones, which seem to involve supporting one of two AI factions. I got an ending that involved winning a Nobel Prize, which is … good, I guess, but I don’t understand why or how.
I suspect the time allotted to the plot (the game takes roughly 15 minutes to play) was too short to fit in everything the author wanted to include. Mind you, the atmosphere is excellent, and I very much appreciated the concept, as well as the author attempting something different from the usual “AI gains consciousness” story. Perhaps I was simply the wrong player to understand this one.