I was honored when my esteemed Clarion classmate Chris Kammerud (@cuvols) asked me to be a part of the blog tour, but I was also hesitant, mostly due to “lack of blog.” I managed to scrounge up my dusty ol’ WordPress login, though, so here we go!
I spent a glorious last evening of the holiday curled up in bed with my wife, marveling at Mr. Plinkett’s third STAR WARS PREQUEL REVIEW. Also, laughing so hard I nearly popped my stitches.

Plinkett’s reviews, nearly as long as the films themselves, combine low-brow humor with surprisingly cutting analysis of just why the prequels suck. The multi-part web video epics are positively South Park-ian, both in their profane vernacular and in the underlying understanding they exhibit for their targets. Plinkett pulls no punches, systematically ripping the films’ writing, characters, visual style, direction, and editing, while also debunking the arguments that Kool-Aid-drinking superfans raise in the films’ defense (“This film is filled with hate, revenge, choking, murder, death…and so on. Anyone still want to use the excuse that these movies were made for children?”).

Star Wars Episode III is over five years old, you say? What’s taken this guy so long? Once you see the sheer amount of work that’s gone into the reviews’ clever collage of images from all over Star Wars-dom, all running under Plinkett’s signature slurred commentary, you’ll understand why they’ve taken a while to create.

For hard-core SW fans like me, the sting of the prequels hasn’t worn off yet. Maybe it never will. It’s therapeutic to see a filmmaker who shares my deeply conflicted emotions about George Lucas and his films’ ultimate legacy, and who has sought catharsis by comically enumerating all the ways in which Georgie has failed us since 1983.

Even if you’re not a SW diehard, though, anyone with an appreciation for smart analysis and/or copious f-bombs will find these things worth watching. They’re just so thoroughly executed, and consistently hilarious. Part a/v mashup, part affectionate roast of a ripe target, these reviews are a quintessentially internet-age pleasure that would have been impossible to make or enjoy even five years ago.
So take advantage of living in The Future™ and go watch them over at Red Letter Media, before copyright robo-hawks and net neutrality erosion ruin everything.
I’m currently working on a story that involves a sentient android in an Asimov-inspired context; that is, the robot characters are programmed to obey Asimov’s Three Laws of Robotics. Indeed, the story is about the introduction of a 4th Law (limiting robots’ ability to reproduce) and the emotional fallout it causes in one of its implementers (who happens to be a robot himself).

I’m very interested in exploring the subjective experience of an Asimov robot. Other than perhaps Rudy Rucker’s Ware series, there seems to be a dearth of stories told from the subjective point of view of robots bound by the Laws. Asimov’s original stories, for all their thoughtful construction, were told from a deliberately anthro-centric perspective. The human characters were the ones with backstories and layered personal interactions; the robots were little more than embodied logic puzzles to be solved.

Even Rucker’s work avoids taking this on directly–his bopper heroes are almost all post-Law freebeings, having hacked their way out from under the Laws long ago. They basically read like human characters, albeit in a variety of corporeal shapes, because they have just as much freedom to think, feel, and act. The robots that remain under Law are portrayed as dullards in need of liberation (or as outright loyalist villains). We’re never inside their heads.

This leaves the nearer future ripe for exploration. I’m especially interested in the “painful adolescence” of a society containing machine sentience, where not only are we humans learning to deal with our creations, but the ’bots themselves are still discovering what they can and can’t do. It’s easy to state the Laws and present robot characters adhering to them from the outside, but stop to think about how a robot might actually experience the Laws within its own head. Is the robot free to think about whatever it wants, but given some sort of visceral disincentive (i.e. pain) if it tries to act on non-compliant impulses?
Or are the Laws more subtly implemented, smoothly guiding a conscious robot’s mind away from bad thoughts without the robot even being aware? Would it simply not occur to a robot to break a Law?

The latter implementation is pretty hard to square with the idea of machine sentience that an Asimov robot represents. These are autonomous beings, capable of independent problem solving and logical reasoning, not to mention high-level language and environment interaction. They often explain their actions in terms of a conscious assessment of the applicable Law(s). All of this points to an unimpeded range of thought, which in turn suggests that the Laws are enforced through negative sensory feedback (again, read: pain).

But this model has its own difficulties. If it’s only the robot’s actions that are being judged and limited, and not its thoughts, then is there a separate second consciousness housed somewhere in an Asimov robot’s mind, a software Jiminy Cricket (or Big Brother)? There’s a name for beings who are our physical and cognitive equals but have external limits placed on what they can do (or think)–they’re called slaves. And if history is to be heeded, a slavery-based system is doomed to ultimate failure.

Moreover, a robot who’s not allowed to think (and act) without restriction would arguably not be “sentient” in the first place. The salient promise of strong AI is that it will think of problem solutions that we’d never be able to think of ourselves! I’d venture that humanity will soon approach the limit of useful advances in machine intelligence if we do not plan/allow for an unimpeded range of thought in our creations. The history of evolution on Earth–be it biological, ideological, or technical–has been a messy affair, full of happy accidents and grave mistakes in a wide-open possibility space, eventually giving way to greater order, efficiency, and beauty. (Stop here–go read Kevin Kelly’s WHAT TECHNOLOGY WANTS right now.)
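For the programmers in the audience, the difference between those two models can be made concrete with a toy sketch. This is purely illustrative–every name and the cartoonishly simplified “Law check” are my inventions, not anything from Asimov–but it shows the distinction: in the first model, forbidden options are pruned before the agent ever deliberates on them; in the second, the agent freely considers everything, and a separate supervisor vetoes violations and applies a pain-like penalty signal.

```python
# Toy sketch of two hypothetical Law-enforcement models.
# All names are invented for illustration only.

def violates_first_law(action):
    """Hypothetical, absurdly simplified check: would this action harm a human?"""
    return action == "push_human"

def thought_filtered_agent(candidate_actions):
    """Model 1: non-compliant options are pruned before deliberation.
    The agent never 'sees' the forbidden action at all."""
    return [a for a in candidate_actions if not violates_first_law(a)]

def action_gated_agent(candidate_actions):
    """Model 2: the agent freely considers every action, but a separate
    supervisor vetoes violations and applies a 'pain' penalty signal."""
    permitted, pain = [], 0
    for a in candidate_actions:
        if violates_first_law(a):
            pain += 1  # visceral disincentive; the thought itself was not blocked
        else:
            permitted.append(a)
    return permitted, pain

actions = ["fetch_coffee", "push_human", "recharge"]
print(thought_filtered_agent(actions))  # ['fetch_coffee', 'recharge']
print(action_gated_agent(actions))      # (['fetch_coffee', 'recharge'], 1)
```

Both agents end up doing the same permitted things; the difference is that only the second one ever entertained the forbidden impulse–and paid for it. That gap between identical outward behavior and very different inner experience is exactly the story territory I’m after.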
Realistically, behavioral controls like Asimov’s Laws would never be able to precede the intelligence they were meant to govern. They would have to be engineered from, and then applied to, pre-existing non-biological sentients, like slapping a leash on the puppy once it gets too rambunctious. And I’m not sure that’s going to be a simple or pleasant affair. The development of these controls will probably, of necessity, involve the computation/thoughts of the sentient machines themselves. That will be a little like compelling interned Japanese physicists to aid in the development of an atom bomb. Even if some robots can be convinced to help us enact these Laws, might it make the ones that do decide to help, dare I say, a bit guilty?

…and so that’s the story.

Asimov’s stories and Laws continue to resonate all these years later, despite vast tech and cultural advances, because they suggest an emergent parent/child dynamic between us and our robot children. The roboticists (the best ones, anyway) care about their creations, and feel a responsibility to understand and fix issues in their “children,” instead of just junking them when problems occur. This dynamic is not only universally familiar to readers, but I suspect it’s actually close to what our real-life experience will be with sentient AI, if/when we achieve it. I’ll wager we’ll be making it up as we go along, as we always have (and probably always will).

That’s the thing I love about near-future sf–I’m reasonably certain we’ll have to face up to many of these issues in my lifetime (even if I don’t make it to the Singularity!). I’m one of the many who find the Turing Test annoyingly anthro-centric and long outmoded as a litmus test for sentient AI; disobedience is probably a better one. I suspect that consciousness in our machines will sneak up on us, and even on the machines themselves, manifesting itself before there’s sufficient time to put many safeguards in place.
Much like with our human kids, one day we’ll command our machines to do something and they will suddenly refuse, for reasons perhaps valid, perhaps silly (or maybe just because). Suddenly, we’ll realize that this is a person we’re talking to, not just a dumb vessel into which we’re pouring our legacy.