Mechanicum by Graham McNeill

Hey, it’s the second Horus Heresy novel that I’ve ever reviewed on the blog!

I’ve always been more partial to the Adeptus Mechanicus than the space marines or Imperial Guard, so I was expecting to enjoy Mechanicum. And it’s pretty good. My main complaint is its insistence on cleaving the plot into two (three?) distinct stories that don’t really overlap until the final chapter (and even then, the overlap isn’t particularly meaningful).

The main plot follows Dalia Cythera, an Administratum transcriber with some latent psyker abilities who’s conscripted into helping Adept Koriel Zeth build a device called an Akashic Reader. Dalia’s not a particularly interesting character — her abilities come close to pushing her into Mary Sue territory, and she doesn’t seem experienced or opinionated enough to really drive her own narrative.

Zeth is more interesting — here’s a tech adept who actually values scientific knowledge and experimentation over faith in the Mechanicum’s “god,” the Omnissiah. (Of course, her values are ripped to shreds along with the rest of the planet, but it’s nice of a 40K property to actually acknowledge the possibility of science over faith.)

Plus, look — a 40K novel with TWO female main characters! Too bad that Zeth gets an introductory description that spends more time on her “lithe, muscular physique,” “shapely armoured legs,” and “the curve of her thighs and the swell of her breasts” than on any demonstration of skill or personality.

Still, for all its failings, the book has charm. There are some great moments that express the conflicted humanity of the Mechanicum, and the age-old cyberpunk question of how much flesh and bone one needs to be considered “human.”

Nice one, Graham

It’s also got a few moments where it’s clear the editor was asleep at the wheel.



Diaspora by Greg Egan

Have you ever loved a novel so much that you wanted to get every word of it tattooed on your body?

Allow me to introduce Greg Egan’s Diaspora.

This is speculative fiction at its absolute finest, in my opinion: the type that’s grounded in what seems scientifically plausible to feeble yet physics-loving minds like my own, while also doing what I have yet to see a pre-1990s sci-fi story do, which is to recognize that not only the medium of human interaction, but its very nature, can change.

(This is ’98, so it’s close, but no dice.)

The novel starts about 800 years into our future, with a large portion of humanity having voluntarily given up their physical bodies in order to live as sentient software programs inside servers buried deep underground. Each server is called a polis, each presumably named after its original creators; we’re introduced to Konishi Polis and Carter-Zimmerman Polis, and it’s implied that there are more.

Our hero, Yatima, is a Konishi polis “orphan”—a parentless, genderless being created from what could be called a glitch in the software that makes up reality within the polis. Essentially, Yatima’s birth was an immaculate conception in programming terms. Egan never truly leans toward a Messianic interpretation for the character, though I suppose that reading would be there, if you wanted it to be.

An aside, here: Yatima is far from the only genderless character. In fact, the majority of the characters use the pronouns ve/ver/vis, and at no point is the reader assaulted with explicit reasoning for this (though there’s plenty of implicit reasoning). It feels natural, and I love it, and it’s great.

The remainder of humanity has split off into sects of “fleshers” – a fairly self-explanatory term. Naturally there are extremes within the fleshers: those who opt for a ton of genetic modifications and augmentations to make them essentially superhuman, and those who reject these opportunities—while paradoxically using them—to regress back to a more primitive state.

In the grand scheme of the novel, the fleshers don’t really matter. They are wiped out fairly early in the story by an unexpected cosmic event, something that shouldn’t have been possible according to the laws of physics as they’re understood in the novel’s future (the Theory of Relativity has been superseded by Kozuch Theory, which, without giving too much away, is once again an entirely-too-plausible-sounding explanation for some of the weirdness that physics as we know it can’t definitively explain).

The polises, buried deep under the Earth and backed up on other galactic worlds, survive the event. The remainder of the novel is focused on the Diaspora, the journey of the citizens of the Carter-Zimmerman polis (including Yatima) away from Earth and into uncharted territories of the universe, across several more millennia and shifts into areas where our understanding of time no longer holds up.

This book really excels at telling a compelling story of massive, exponential progress in technology and social design, while remaining mostly aware of the bits of human nature that can and do tend to confound such progress. The scope of the story is vaster than that of any novel in my recent memory, yet Egan grounds it well by filtering through the experiences of a small, tight group of characters.

The polises are predictably hyper-advanced and allow their citizens to think and act hundreds of times faster than humans in the “real” world, yet polis citizens still fall victim to the type of overthinking and emotional blindness that frequently plagues today’s interactions.

In Konishi Polis, autonomy is valued above all else, leading to a localized reality in which citizens are unable to touch each other, as even the slightest sensory intervention by another being is considered a loss of autonomy for the one being touched. Naturally, sexuality and romantic love have fallen deeply out of fashion and are largely considered outdated and off-putting, which I consider a refreshing turn away from the tropes of ’60s and ’70s sci-fi, in which everyone is banging hot space babes with the help of off-world alcohol and massive leaps in birth control science.

Early in the book, attempts at saving what remains of the flesher population are mostly thwarted by the fleshers themselves fearing the unknown, refusing to accept the severity of the threat, and misconstruing the polis citizens’ invitations as some sort of invasion plot. Fairly topical issues.

In Carter-Zimmerman the focus is heavier on artistic pursuit and sensory experience, making it fitting that it should be the polis that creates thousands of cloned copies of its citizens and sends them off to distant planets. Which is where the story gets cool. Egan’s descriptions of life, and “life,” on other planets are incredible, especially his descriptions of life that exists in more than four dimensions.

Technological advancement isn’t demonstrated with floating cities or Dyson spheres or FTL drives, but rather with the ability to physically change and manipulate the atoms in a planet’s atmosphere – to leave decipherable messages in the form of isotopes, for example. There’s the underlying idea that sufficient advancement would lead to technologies that are increasingly unobtrusive, difficult to detect, a contrast to the constant race to build the biggest, the tallest, the strongest.

Eventually, the pursuit of an ultra-advanced non-human civilization, following clues that have been left throughout the universe, leads the citizens of C-Z out of Earth’s universe, into exponentially higher dimensions and physical descriptions that are dizzyingly difficult to picture.

The end of the novel is almost disappointing, if only because by that point I had half expected Egan to reveal some sort of undiscovered, unifying universal truth. In reality, though, the story reaches its logical conclusion when it becomes clear that there isn’t any point in continuing. The Diaspora discovers multitudes, but in the end what it truly reveals is the ambivalence of the universe at large, the sort of optimistic pointlessness of attempts to map or fully understand the extent of the universe, to even know what reality is.

And you know by now that I’m a sucker for endings that resolve nothing.

The Ethics of Robot Sex: Our Bizarre Future Relationships with Machines

Humanity has made great innovations in the field of robotics in the past few decades. While the idea of robots that interact with us as equals has been explored for centuries in science fiction, we are now closer than ever to a reality in which humans will live alongside robots. As we move closer to creating artificial intelligence, scientists, theorists and philosophers are asking deeper questions about how we will begin to define and redefine our relationships to these machines.

Robots already fulfill numerous applications in fields such as heavy industry and space exploration – and they’re beginning to impact us closer to home. Scientist and theorist Ray Kurzweil was hired by Google in 2012 to help them design the first search engine that is more intelligent than its users. Soon, the ubiquitous webpage might be directing us to answers that we previously wouldn’t have even known to search for. It sounds amazing – but with artificial intelligence poised to integrate itself into our most intimate spaces, how will those spaces, and our experiences, change?

Kurzweil has already predicted that by 2029 or 2030, advances in technology will allow us to construct robots that are as intelligent as, if not more intelligent than, the average human. According to Kurzweil, these robots will likely be able to understand and deploy intricate emotional strategies such as flirting and humour. A revelation, considering that we humans sometimes have trouble with the emotional complexities and subtleties of our own peers.

There are many who see these coming advancements as a way to improve and build on our understanding of ourselves, but there are also those who worry about the dangers of creating artificial consciousness. With the development of sentient computers and human-like androids comes the question of what rights we grant to these machines.

A robot that understands flirting and humour, as Kurzweil predicts, has the potential to become more than a co-worker. If machines eventually become sufficiently advanced to be treated as our equals, do we invite them to take part in human social hierarchies? Will robots soon be more adept than humans in the field of dating and relationships? Is a relationship with a robot who understands your every mood and need just around the corner? If dating and sex with robots becomes commonplace, how will it impact our relationships with fellow humans?

The Turing Test

In 1950, codebreaker and mathematician Alan Turing proposed what he called “The Imitation Game.” The “game” comprised a simple test in which a man, a woman and an impartial judge were placed in three separate rooms and provided with computer terminals through which to communicate with one another. The judge would try to determine which participant was the man and which was the woman; the man’s goal was to deceive the judge, while the woman tried to help the judge reach the right answer, each using only what they could convey through words.

Alan Turing’s passport photo (Wikimedia Commons)

From this admittedly problematic origin, Turing later devised what we know today as the Turing Test, a test which aims to determine whether or not a machine is intelligent. In the current model for the Turing Test, a judge sits in one room and another entity, either human or machine, sits in another. Communicating through text on a computer terminal, the judge poses a series of questions to the other participant, who answers to the best of their (or its) abilities. If the judge is unable to determine whether the other participant is man or machine, the machine mind is considered sufficiently intelligent so as to be indistinguishable from a human mind.

Modern science is conflicted on whether the Turing Test is the most accurate method for determining artificial intelligence. Some experts point out that there are many living humans who would not pass the test, while others note that the model leaves too much room for trickery and manipulation to be considered scientifically rigorous. Relatively simplistic computer programs might be able to pass the test simply by being programmed to recognize patterns and keywords in human speech.
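To make that last point concrete, here’s a toy ELIZA-style responder in Python. The rules and phrasings below are my own invention, not any real contest entry, but they show how a program can appear conversational purely through keyword and pattern matching, with no understanding at all:

```python
import re

# Hypothetical pattern-response rules: each regex captures part of the
# user's message and echoes it back inside a canned template.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
]

def respond(message):
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(match.group(1))
    # A generic fallback keeps the "conversation" going when nothing matches.
    return "Tell me more."

print(respond("I feel lonely"))   # Why do you feel lonely?
print(respond("What is truth?"))  # Tell me more.
```

A judge exchanging only a few short messages might mistake this for a (dull) human, which is exactly the kind of trickery critics of the test worry about.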

Nevertheless, the test has its supporters in the scientific community, and, as of this writing, it remains the only standardized test of artificial intelligence that we have. Each year, a select few advanced robots and software programs are entered into the Turing Test for a chance at winning the Loebner Prize. Inaugurated in 1991, the Loebner contest is an annual competition in which entrants take the Turing Test against a panel of judges.

The Loebner contest has its own set of problems, however. Rather than a single judge, the Loebner contest uses a panel of judges, so the parameters for victory are slightly different. Each judge has time to “speak” with each entrant, then ranks their conversation partners in order from least to most human-like. If a computer program entered into the contest earns a higher median rank than one of the humans, that program is considered the winner, and is awarded $100,000.
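As a quick sketch of the median-rank rule, here’s some Python using entirely invented rankings (1 = least human-like, higher = more human-like) from four hypothetical judges scoring two humans and one program:

```python
from statistics import median

# Invented rankings for illustration only: each judge ranks the three
# conversation partners from least (1) to most (3) human-like.
ranks = {
    "human_A": [3, 3, 2, 3],
    "human_B": [2, 1, 3, 2],
    "program": [1, 2, 1, 1],
}

medians = {name: median(r) for name, r in ranks.items()}

# Under the rule described above, the program wins only if its median
# rank beats at least one human's median rank.
program_wins = any(
    medians["program"] > medians[name] for name in ("human_A", "human_B")
)
print(medians)       # {'human_A': 3.0, 'human_B': 2.0, 'program': 1.0}
print(program_wins)  # False
```

With these made-up numbers, the program ranks below both humans, so there is no winner, which has been the real contest’s outcome every year as well.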

The first four times the contest was held, the creators knew that the computer programs of the day were unlikely to stand a chance of winning the prize, so they restricted the topic of conversation at each terminal. Under these restricted conditions, many pointed out that it would be reasonably simple to program a computer with expert knowledge on that one subject. Seeing this, the contest’s creators lifted the restrictions in 1995; in addition, the judges were no longer varied laymen, but computer experts. Predictably, the new entrants garnered universally lower rankings than those of the four years prior.

As of 2015, there has yet to be a Loebner Prize winner.

Even so, there are many scientists and theorists who are certain that at some point in the near future, a machine will be able to win the Loebner contest, or pass the Turing Test, or in some other manner prove that it is intelligent enough to be considered conscious. Once a robot “passes the test,” the door will be open for a world of new advantages – and many more legal and moral grey areas.

Humanity’s Convergence with Tech

While the term “robot” was not coined until 1920, humans have been both fascinated and terrified by machines since we first began building them for industrial and home use. One of the earliest stories to feature a robot as we now think of them was the 1886 novel L’Ève future, in which a scientist creates an automaton in the image of a beautiful woman, and a British lord ends up falling in love with her. Over the subsequent century, we’ve continued to explore the idea of romance with robots: films such as Blade Runner, Her and Ex Machina have all explored the ethical and philosophical repercussions of humans entering into sexual relationships with robots and even falling in love with them.

Many of these stories focus on men being seduced by overtly sexualized female robots, acting as temptresses and sirens. Perhaps these stories are simply fables for a modern age, intended as a didactic message about the evils of becoming too reliant on technology, through the lens of popular media’s attachment to women as the original sinners.

Some academics, including noted feminist theorist Donna Haraway, have touched on this. In her 1985 essay A Cyborg Manifesto, Haraway argued that humans need not worry about our increasing reliance on technology and robots to fulfill even our basest desires, because we are already integrated with technology. Biological, organic, “natural” relationships no longer apply, as we are already far removed from them; all of us exist as “theorized and fabricated hybrids of machine and organism.”

Ray Kurzweil defines the Singularity as the temporal point at which the distinction between human and machine is no longer tangible. According to Kurzweil, technological advancement is speeding up exponentially as increased computing power and wider access to information allows advancements to happen more quickly.

Eventually, the speed of technological advancement will be so fast that we will need to merge our own bodies and minds with technology in order to keep up with it. Devices such as computers and cellphones will shrink until they can be implanted inside of us, and we will all become the literal manifestation of Haraway’s cyborgs. Many of us are already so attached to our phones that losing the device is akin to losing a limb – perhaps in the future this will be even less of an exaggeration.

If we think about our cell phones as an extension of our minds, it isn’t too much of a stretch to imagine plugging our minds in to supercomputers to increase their memory and processing power — similar to the way you’d plug a USB stick or external hard drive into your laptop. Thanks to the advent of the internet, the layman of today has access to more information than even government officials did 15 years ago. We are all getting exponentially more intelligent, though the structure of our intelligence is shifting from millions of self-contained minds to one much larger collective of information.

Sexbots and Technosexuality

So what are these human desires that Haraway and others feel are soon to be fulfilled by robots? The integration of our own biology with technology already comes into play in our sexuality. Some of the more elaborate sex toys on the market today might be considered “robots” by some standard. There is already a large existing market for human-shaped machines, designed specifically to fill our wants and needs.

Robot fetishism, sometimes called technosexuality or ASFR by those who identify with it, has a large enough following for its own Wikipedia page, and scores of individuals produce art and stories to share with other enthusiasts. ASFR stands for “alt.sex.fetish.robots,” the name of a group on Usenet, a late-20th-century precursor to modern-day internet forums. Some self-identified technosexuals are physically attracted to robots that appear highly mechanical, with sci-fi details like chrome plates and exposed circuitry – while other people fantasize about intimate relations with androids and gynoids who are designed to be indistinguishable from humans. Some people even fantasize about being transformed into a robot, or watching a lover undergo a similar process.

The majority of modern technosexuals still have to live out their fantasies through role play and stories, but advancements in robotics on the horizon may very well open up new avenues for making robots a part of not only our industry, but our love lives. Companies like RealDoll, for instance, specialize in highly realistic artificial lovers. The company prides itself on paving the way for the creation of realistic android and gynoid sex partners, a goal it has been working toward for over a decade. While at present RealDoll offers very little in the way of functional robotics, the company is just waiting for the next big advancement in artificial intelligence to offer a whole lot more in the way of customizable lovers.

Robots as Therapists

Sex robots need not be designed for purely hedonistic pursuits, however. They also have potential in medical and mental health fields that intersect with human sexuality.

Sex surrogacy, also called surrogate partner therapy, can be a very important aspect of psychological healing for patients who have been affected by sexual traumas ranging from rape and abuse to medical conditions that affect sexual functioning. In surrogate therapy, a professional surrogate partner helps a patient become comfortable with physical intimacy by playing the role of an intimate or sexual partner for the patient, in a safe and controlled environment.

Right now, surrogate partners are humans who are qualified and specifically trained to help patients overcome mental traumas or issues related to physical intimacy. But could these human partners be replaced with robots over the coming decades? Some scientists believe so. At the very least, robotic sex surrogates could fill a missing step in the rehabilitation process for people with psychological traumas so severe that they are unable to handle the touch of a fellow human. For these patients, a humanoid robot might provide a safe starting point, where they can simulate touching a human before moving on to the real thing.

PARO Therapeutic Robot in action

Therapy robots already exist, though they don’t necessarily take human forms. PARO is a robot designed to look like a baby seal, complete with soft white fur. This robot plays the role of an animal companion in environments where actual living animals would pose logistical or hygienic issues, such as hospitals and extended care facilities. The robot can discern a gentle touch from an aggressive one, and learns to repeat behaviours that provoke positive reactions. PARO is also able to learn and respond to its own name and respond appropriately to environmental stimuli such as light, touch, sound and temperature. This robot is currently in use in numerous facilities in Japan and Europe, and has been shown to significantly reduce patient stress and improve interactions between patients and caregivers.

A similar robot has been developed in the United States by robotics experts at MIT. Ollie the baby otter is a therapy robot that functions in a similar manner to PARO, but is far cheaper to manufacture. A single PARO robot costs roughly USD $6,000 to manufacture, while Ollie’s cost is a fraction of that — about USD $500 per unit. MIT researchers are hoping that with a higher production volume, the manufacturing cost of an Ollie model might dip as low as USD $100 per unit.

If we use robots to treat sexual dysfunction in humans, what about those humans whose sexuality drives them to harm other members of society? Child-like sex dolls have been posited as a treatment option for offenders. Essentially, the dolls would be to pedophiles what methadone is to heroin addicts. The theory is that giving pedophiles non-sentient outlets for their urges will keep real children safe from harm — but some see these theorized robot children as rewarding or legitimizing the behaviour.

The success of therapy robots hinges on our own willingness to accept the help of artificial beings, rather than railing against them. Many of us are uncomfortable around robots made to look especially human – for an example, check out artist Jordan Wolfson’s Female Figure, which unnerved audiences in galleries and on YouTube. The reason for this is a psychological phenomenon called the uncanny valley.

A growing body of evidence shows that our reactions to human facsimiles can be graphed on a curve, with a significant valley at the point representing facsimiles that are extremely close to, but still slightly off from, the real thing. This valley represents an increased incidence of negative reactions to the facsimile – namely, fear and anxiety.

In the uncanny valley, we are unsure how to categorize the facsimile because we can’t be certain if it’s human or not. It’s this same effect that makes dead bodies creepy to most people – we can’t get comfortable because our brains aren’t quite sure if the stimulus is a fellow human, or a potential threat.

This is partly why robots like Ollie and PARO are designed as an otter and a seal, respectively, rather than kittens. The majority of humans have been around cats and would immediately recognize real cat behaviours. Most of us haven’t spent much time up close and personal with otters, so the effect is not as strong.

Does this mean that those on the chrome-and-circuitry side of technosexuality will get their wish? Will we deliberately make our robotic partners more robot-like, to avoid the creep factor?

The Morality of AI

Thus we arrive at the moral quandary of creating intelligent machines specifically to fill roles and desires that we are unable to fulfill for ourselves. Though some work has been done on the subject of moral law as it applies to robot rights, the territory remains murky. If a robot is unable to feel pain or fear, do we possess strong enough morals to keep ourselves from exploiting a generation of androids as our slaves?

Kantian philosophy says that our actions dictate our morality. By this logic, if a machine eventually passes the Turing test and becomes indistinguishable from a human mind, we become tyrants if we attempt to predetermine its purpose for existence. It is worth noting that Kant wrote his treatises well before the prospect of intelligent machines. Should these theories be applied to such futuristic ideas?

The difference, as we presently understand it, lies in whether a robot is a true artificial intelligence, or whether it is simply programmed to emulate certain human emotions depending on the function it is designed for. Therapy robots, for example, are programmed to seem caring and empathetic in a way that is relatable to humans – however, if you wanted PARO to beat your friends at chess, the robot wouldn’t be of much use.

A true artificial intelligence, that which would be indistinguishable from a human in every way, would be just that – indistinguishable from a human. As humans, we are inherently flawed, emotional creatures – will we feel more comfortable with robots who are programmed not only to feel love, empathy and compassion, but to feel jealousy, apathy and anger? If this is the case, our relationship with robots might end up teaching us more about ourselves.

From a scientific standpoint, love is hardly a magical spell that occurs between star-crossed lovers. There are a number of recognized factors for humans that can trigger the emotions that we interpret as love. Things like proximity, need fulfillment, reciprocal liking and a sense of mystery could all be easily programmed into a robotic brain. It’s not a stretch to imagine a human falling in love with a computer that is programmed to display these traits.

Redefining Consciousness

Ray Kurzweil predicts that robots and computer programs will eventually be millions of times more intelligent than humans. There will be no way for us to continue advancing as a species unless we integrate ourselves with this technology. As mentioned earlier, this integration is a process that has already begun. Through the advent of social media we are opening ourselves up to a constant flow of information, and increasingly feel isolated when we shut off our phones and computers. The fast flow of information and innovation is already creating a need for us to be constantly plugged in. In this way, our own consciousness is changing.

Consciousness itself is already tricky to define. Physicist Michio Kaku defines consciousness in three different levels. The first level of consciousness is a simple awareness of one’s position in space and time. Creatures with the second level of consciousness, such as monkeys, have an awareness of themselves in space and time as well as a social consciousness in relation to their peers. We humans have level three consciousness, able to define ourselves in relation to both our current positions and our projected futures. This means we can make predictions, and plan ahead in time.

Part of our current understanding of consciousness is sentience. In Western philosophies, sentience is thought of as the ability to perceive experience subjectively, or to feel emotions in relation to environment and other external stimuli. Sentience exists separately from rational thought processes.

Some Eastern philosophies have a different idea about sentience. In Tibetan and Japanese Buddhism, all beings, including plants and even inanimate objects that have been imbued with spiritual significance, are considered sentient beings. In Buddhism, sentience is a quality possessed by all non-enlightened beings with consciousness. By this logic, robots that can beat us at chess or care for ailing patients, machines that have become culturally important to us, might in some interpretations be considered sentient.

Given all this information, it seems almost easy to think of a highly advanced computer program as having at least some level of consciousness. Though it may not be the same “awareness” that humans perceive, a robot is constantly aware of and reacting to its environment, performing commands based on external stimuli. Maybe we don’t so much need a test to determine if a robot is intelligent, but rather, a test that determines if a robot is sentient.

Perhaps one day we will have robots that are sophisticated enough to become our social equals – but it seems unlikely that those robots will be the same androids that we currently picture in our fantasies and nightmares. Humanity itself has a lot of growing to do before we can hope to have all the answers on artificial intelligence. If Kurzweil’s predictions come true, however, many of us should be gearing up to see these changes happen within our lifetimes.

(This article originally appeared on Wisdom Pills)