I haven’t finished this book yet, but it’s so full of small details I could write essays about that I wanted to start somewhere. Let me look at how the book deals with an alien invasion – a hypothetical one, at that – and its effect on the entire society of Earth.
In The Three-Body Problem, humanity is made aware of an extraterrestrial presence – and made aware that this presence is on its way to Earth. The Trisolarans need a new home, and it’s obvious enough that they are headed toward us with violent intentions – whether the violence is inherent (because their plan is to destroy us to make room for themselves) or happens as a result of our unwillingness to coexist.
The very idea of extraterrestrial war tears humanity apart. The characters in the novel have no idea what they’re up against. They can’t know, because they’ve never had to conceptualize, let alone fight, such an enemy. All they have to work with is their military knowledge, a great deal of speculation, and a whole lot of fear.
Early in The Dark Forest, the second novel, we are introduced to the Wallfacers. These individuals came about as a U.N. plan in reaction to some of the Trisolarans’ most dangerous capabilities: they have long-range communications and incredibly advanced observation technology. They have also only recently been introduced to the concept of subterfuge, so they are hyperaware of humanity’s capacity for it. These combined factors mean that any counterattack plan humanity collectively decides on will be quickly discovered and neutralized.
Enter the Wallfacers. Four individuals chosen by committee, then provided with unlimited funds and access to every conceivable military, political, and scientific installation on Earth, in order to each formulate an offensive or defensive plan that only they know the details of.
It’s a terrifying concept – to give a single individual basically infinite power, and encourage them to be evasive and dishonest about their intentions in every aspect of their dealings. Perhaps unsurprisingly, the plan fails quite spectacularly, leading to a great deal of lost time, money and embarrassment on the part of the U.N.
But the failure of the Wallfacer Project is not necessarily the author’s point, here. It’s easy to read this as a warning, Liu’s assertion that absolute power corrupts absolutely – that efficiency and effectiveness at this scale, for an individual, requires a pureness of mind and intent that no human actually possesses. That humanity is doomed because we cannot trust ourselves.
Or perhaps Liu’s point here is that leaning into these traits that we’re all aware of is what truly dooms us. The Trisolaran threat in the novel causes a reaction from humans that is to rely more heavily on dishonesty, cover-ups, and misinformation. Maybe a different approach is in order.
In The Dark Forest, our main characters wake up in the 22nd century, after having been cryogenically frozen for several decades. While the current world seems like a paradise of medical and technological advancement, we learn slowly and subtly that this world came at a heavy cost.
The Great Ravine is not seen directly; it is recounted through background information and the memories of characters who were not frozen. There’s no direct exposition – we never see it firsthand. Yet somehow, in this fashion, it becomes more real to the reader. Most of us never lived through the atrocities of the 20th century, let alone earlier ones. This is Liu’s way of telling us: this was worse than anything so far.
So that’s it, then. An alien invasion is the worst disaster humanity’s ever seen – political strife, violence, famine, an overuse of resources leading to a barren landscape and further starvation and death – and the aliens are still 200 years away.
This poses an interesting question for when the aliens finally do show up. Will it be worse? Can it get worse? Or has the worst already passed? In the face of great upheaval, are we our own worst enemies? Or is Liu subtly hinting, between the lines, at the utopian ideals that are staring his humanity in the face?
Humanity has made great innovations in the field of robotics in the past few decades. While the idea of robots that interact with us as equals has been explored for centuries in science fiction, we are now closer than ever to a reality in which humans will live alongside robots. As we move closer to creating artificial intelligence, scientists, theorists and philosophers are asking deeper questions about how we will begin to define and redefine our relationships to these machines.
Robots already fill numerous roles in fields such as heavy industry and space exploration – and they’re beginning to have an impact closer to home. Scientist and futurist Ray Kurzweil was hired by Google in 2012 to help design the first search engine that is more intelligent than its users. Soon, the ubiquitous search page might be directing us to answers that we previously wouldn’t have even known to search for. It sounds amazing – but with artificial intelligence poised to integrate itself into our most intimate spaces, how will those spaces, and our experiences, change?
Kurzweil has already predicted that by 2029 or 2030, advances in technology will allow us to construct robots that are as intelligent as, if not more intelligent than, the average human. According to Kurzweil, these robots will likely have the ability to understand and utilize intricate emotional strategies such as flirting and humour. A revelation, considering that we humans sometimes have trouble with the emotional complexities and subtleties of our own peers.
There are many who see these coming advancements as a way to improve and build on our understanding of ourselves, but there are also those who worry about the dangers of creating artificial consciousness. With the development of sentient computers and human-like androids comes the question of what rights we grant to these machines.
A robot that understands flirting and humour, as Kurzweil predicts, has the potential to become more than a co-worker. If machines eventually become sufficiently advanced to be treated as our equals, do we invite them to take part in human social hierarchies? Will robots soon be more adept than humans in the field of dating and relationships? Is a relationship with a robot who understands your every mood and need just around the corner? If dating and sex with robots becomes commonplace, how will it impact our relationships with fellow humans?
The Turing Test
In 1950, codebreaker and mathematician Alan Turing proposed what he called “The Imitation Game.” The “game” was a simple test in which a man, a woman and an impartial judge were placed in three separate rooms and given the means to exchange typewritten messages with one another. The man and the woman would both attempt to convince the judge that they were the man, using only what they could convey through words.
From this admittedly problematic origin, Turing devised what we know today as the Turing Test, which aims to determine whether or not a machine is intelligent. In the current model, a judge sits in one room and another entity, either human or machine, sits in another. Communicating through text on a computer terminal, the judge poses a series of questions to the other participant, who answers to the best of their (or its) abilities. If the judge is unable to determine whether the other participant is human or machine, the machine mind is considered sufficiently intelligent as to be indistinguishable from a human mind.
Modern science is conflicted on whether the Turing Test is the most accurate method for determining artificial intelligence. Some experts point out that there are many living humans who would not pass the test, while others note that the model leaves too much room for trickery and manipulation to be considered scientifically rigorous. Relatively simplistic computer programs might be able to pass the test simply by being programmed to recognize patterns and keywords in human speech.
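The pattern-and-keyword trick described above needs surprisingly little machinery. As an illustrative sketch (the rules here are invented, not drawn from any real contest entry), a program in the style of the classic ELIZA chatbot can hold up its end of a conversation with nothing more than a handful of regular expressions:

```python
import re

# Invented pattern/response rules: match a keyword in the judge's input
# and echo part of it back inside a templated reply.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
    (re.compile(r"\byou\b", re.I),       "We were talking about you, not me."),
]

def reply(message: str) -> str:
    """Return a canned response based on the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback when nothing matches

print(reply("I feel anxious about robots"))
# Why do you feel anxious about robots?
```

No part of this program understands English; it merely reflects the judge’s own words back at them, which is exactly why critics see keyword matching as a loophole in the test.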
Nevertheless, the test has its supporters in the scientific community, and, as of this writing, it remains the only standardized test of artificial intelligence we have. Each year, a select few advanced robots and software programs are entered into the Turing Test for a chance at winning the Loebner Prize. Inaugurated in 1991, the Loebner contest is an annual competition in which entrants take the Turing Test against a panel of judges.
The Loebner contest has its own set of problems, however. Rather than a single judge, it uses a panel of judges, so the parameters for victory are slightly different. Each judge has time to “speak” with each entrant, then ranks their conversation partners from least to most human-like. If a computer program entered into the contest earns a higher median rank than one of the human participants, that program is considered the winner and is awarded $100,000.
The first four times the contest was held, its creators knew that the computer programs of the day were unlikely to stand a chance of winning the prize, so they restricted the topic of conversation at each terminal. Under these restricted conditions, many pointed out, it would be reasonably simple to program a computer with expert knowledge of that one subject. Seeing this, the creators of the contest lifted these restrictions in 1995. In addition, the judges were no longer varied laymen, but computer experts. Predictably, the new entrants garnered universally lower rankings than those in the four years prior.
As of 2015, there has yet to be a Loebner Prize winner.
Even so, there are many scientists and theorists who are certain that at some point in the near future, a machine will be able to win the Loebner contest, or pass the Turing Test, or in some other manner prove that it is intelligent enough to be considered conscious. Once a robot “passes the test,” the door will be open for a world of new advantages – and many more legal and moral grey areas.
Humanity’s Convergence with Tech
While the term “robot” was not coined until 1920, humans have been both fascinated and terrified by machines since we began building them for industrial and home use. One of the first works of fiction to feature a robot as we now think of them was the 1886 novel L’Ève future, in which a scientist creates an automaton in the image of a beautiful woman, and a British lord ends up falling in love with her. In the decades since, we’ve continued to explore the idea of romance with robots – films such as Blade Runner and, more recently, Her and Ex Machina have all explored the ethical and philosophical repercussions of humans entering into sexual relationships with robots, and even falling in love with them.
Many of these stories focus on men being seduced by overtly sexualized female robots, acting as temptresses and sirens. Perhaps these stories are simply fables for a modern age, intended as a didactic message about the evils of becoming too reliant on technology, through the lens of popular media’s attachment to women as the original sinners.
Some academics, including noted feminist theorist Donna Haraway, have touched on this. Haraway argued in her 1985 essay “A Cyborg Manifesto” that humans need not worry about our increasing reliance on technology and robots to fulfill even our basest desires, because we are already integrated with technology. Biological, organic, “natural” relationships no longer apply, because we are already far removed from them; all of us exist as “theorized and fabricated hybrids of machine and organism.”
Ray Kurzweil defines the Singularity as the point in time at which the distinction between human and machine is no longer meaningful. According to Kurzweil, technological advancement is accelerating exponentially, as increased computing power and wider access to information allow each advance to happen more quickly than the last.
Eventually, the speed of technological advancement will be so fast that we will need to merge our own bodies and minds with technology in order to keep up with it. Devices such as computers and cellphones will shrink until they can be implanted inside of us, and we will all become the literal manifestation of Haraway’s cyborgs. Many of us are already so attached to our phones that losing the device is akin to losing a limb – perhaps in the future this will be even less of an exaggeration.
If we think about our cell phones as an extension of our minds, it isn’t too much of a stretch to imagine plugging our minds into supercomputers to increase their memory and processing power – similar to the way you’d plug an external hard drive into a USB port on your laptop. Thanks to the advent of the internet, the layman of today has access to more information than even government officials did 15 years ago. We are all getting exponentially more intelligent, though the structure of our intelligence is shifting from millions of self-contained minds to one much larger collective of information.
Sexbots and Technosexuality
So what are these human desires that Haraway and others feel are soon to be fulfilled by robots? The integration of our own biology with technology already comes into play in our sexuality. Some of the more elaborate sex toys on the market today might be considered “robots” by some standard. There is already a large existing market for human-shaped machines, designed specifically to fill our wants and needs.
Robot fetishism, sometimes called technosexuality or ASFR by those who identify with it, has a large enough following for its own Wikipedia page, and scores of individuals produce art and stories to share with other enthusiasts. ASFR stands for “alt.sex.fetish.robots,” taking its name from a group on Usenet, a late-20th century precursor to modern-day internet forums. Some self-identified technosexuals are physically attracted to robots that appear highly mechanical, with sci-fi details like chrome plates and exposed circuitry – while other people fantasize about intimate relations with androids and gynoids who are designed to be indistinguishable from humans. Some people even fantasize about being transformed into a robot, or watching a lover undergo a similar process.
The majority of modern technosexuals still have to live out their fantasies through role play and stories, but advancements in robotics on the horizon may very well open up new avenues for making robots a part of not only our industry, but our love lives. Companies like RealDolls, for instance, specialize in highly realistic artificial lovers. The company prides itself on paving the way for realistic android and gynoid sex partners, a goal it has been working toward for over a decade. While at present RealDolls offers very little in the way of functional robotics, the company is waiting for the next big advancement in artificial intelligence to offer a whole lot more in the way of customizable lovers.
Robots as Therapists
Sex robots need not be designed for purely hedonistic pursuits, however. They also have potential in medical and mental health fields that intersect with human sexuality.
Sex surrogacy, also called surrogate partner therapy, can be a very important aspect of psychological healing for patients who have been affected by sexual traumas ranging from rape and abuse to medical conditions that affect sexual functioning. In surrogate therapy, a professional surrogate partner helps a patient become comfortable with physical intimacy by playing the role of an intimate or sexual partner for the patient, in a safe and controlled environment.
Right now, surrogate partners are humans who are qualified and specifically trained to help patients overcome mental traumas or issues related to physical intimacy. But could these human partners be replaced with robots over the coming decades? Some scientists believe so. At the very least, robotic sex surrogates could fill a missing step in the rehabilitation process for people with psychological traumas so severe that they are unable to handle the touch of a fellow human. For these patients, a humanoid robot might provide a safe starting point, where they can simulate touching a human before moving on to the real thing.
Therapy robots already exist, though they don’t necessarily take human forms. PARO is a robot designed to look like a baby seal, complete with soft white fur. It plays the role of an animal companion in environments where actual living animals would pose logistical or hygienic issues, such as hospitals and extended care facilities. The robot can discern a gentle touch from an aggressive one, and learns to repeat behaviours that provoke positive reactions. PARO can also learn its own name, and responds appropriately to environmental stimuli such as light, touch, sound and temperature. It is currently in use in numerous facilities in Japan and Europe, and has been shown to significantly reduce patient stress and improve interactions between patients and caregivers.
A similar robot has been developed in the United States by robotics experts at MIT. Ollie the baby otter is a therapy robot that functions in a similar manner to PARO, but is far cheaper to manufacture. A single PARO robot costs roughly USD $6,000 to manufacture, while Ollie’s cost is a fraction of that — about USD $500 per unit. MIT researchers are hoping that with a higher production volume, the manufacturing cost of an Ollie model might dip as low as USD $100 per unit.
If we use robots to treat sexual dysfunction in humans, what about those humans whose sexuality drives them to harm other members of society? Child-like sex dolls have been posited as a treatment option for offenders. Essentially, the dolls would be to pedophiles what methadone is to heroin addicts. The theory is that giving pedophiles non-sentient outlets for their urges will keep real children safe from harm — but some see these theorized robot children as rewarding or legitimizing the behaviour.
The success of therapy robots hinges on our own willingness to accept the help of artificial beings, rather than railing against them. Many of us are uncomfortable around robots made to look especially human – for an example, check out artist Jordan Wolfson’s Female Figure, which unnerved audiences in galleries and on YouTube. The reason for this is a psychological phenomenon called the uncanny valley.
A growing body of evidence shows that our reactions to human facsimiles can be graphed on a curve, with a significant valley at the point representing facsimiles that are extremely close to, but still slightly off from, the real thing. This valley represents an increased incidence of negative reactions to the facsimile – namely, fear and anxiety.
In the uncanny valley, we are unsure how to categorize the facsimile because we can’t be certain if it’s human or not. It’s this same effect that makes dead bodies creepy to most people – we can’t get comfortable because our brains aren’t quite sure if the stimulus is a fellow human, or a potential threat.
This is partly why robots like Ollie and PARO are designed as an otter and a seal, respectively, rather than as kittens. The majority of humans have been around cats, would immediately recognize real cat behaviours, and would notice anything slightly off. Most of us haven’t spent much time up close and personal with otters or seals, so the effect is not as strong.
Does this mean that those on the chrome-and-circuitry side of technosexuality will get their wish? Will we deliberately make our robotic partners more robot-like, to avoid the creep factor?
The Morality of AI
Thus we arrive at the moral quandary of creating intelligent machines specifically to fill roles and desires that we are unable to fulfill for ourselves. Though a little has been written on the subject of robot rights and moral law, the question is still far from settled. If a robot is unable to feel pain or fear, do we possess strong enough morals to keep ourselves from exploiting a generation of androids as our slaves?
Kantian philosophy holds that rational beings must be treated as ends in themselves, never merely as means. By this logic, if a machine eventually passes the Turing Test and becomes indistinguishable from a human mind, we become tyrants the moment we attempt to predetermine its purpose for existence. It is worth noting, though, that Kant wrote his treatises well before the prospect of intelligent machines. Should these theories be applied to such futuristic ideas?
The difference, as we presently understand it, lies in whether a robot is a true artificial intelligence, or whether it is simply programmed to emulate certain human emotions depending on the function it is designed for. Therapy robots, for example, are programmed to seem caring and empathetic in a way that is relatable to humans – however, if you wanted PARO to beat your friends at chess, the robot wouldn’t be of much use.
A true artificial intelligence, that which would be indistinguishable from a human in every way, would be just that – indistinguishable from a human. As humans, we are inherently flawed, emotional creatures – will we feel more comfortable with robots who are programmed not only to feel love, empathy and compassion, but to feel jealousy, apathy and anger? If this is the case, our relationship with robots might end up teaching us more about ourselves.
From a scientific standpoint, love is hardly a magical spell that occurs between star-crossed lovers. There are a number of recognized factors for humans that can trigger the emotions that we interpret as love. Things like proximity, need fulfillment, reciprocal liking and a sense of mystery could all be easily programmed into a robotic brain. It’s not a stretch to imagine a human falling in love with a computer that is programmed to display these traits.
Ray Kurzweil predicts that robots and computer programs will eventually be millions of times more intelligent than humans. There will be no way for us to continue advancing as a species unless we integrate ourselves with this technology. As mentioned earlier, this integration is a process that has already begun. Through the advent of social media we are opening ourselves up to a constant flow of information, and increasingly feel isolation when we shut off our phones and computers. The fast flow of information and innovation is already creating a need for us to be constantly plugged in. In this way, our own consciousness is changing.
Consciousness itself is already tricky to define. Physicist Michio Kaku defines consciousness in levels. The first level is a simple awareness of one’s position in space. Creatures with the second level of consciousness, such as monkeys, are aware of themselves in space and also have a social consciousness in relation to their peers. We humans have third-level consciousness, able to define ourselves in relation to both our current positions and our projected futures. This means we can make predictions, and plan ahead in time.
Part of our current understanding of consciousness is sentience. In Western philosophies, sentience is thought of as the ability to perceive experience subjectively, or to feel emotions in relation to one’s environment and other external stimuli. Sentience exists separately from rational thought processes.
Some Eastern philosophies have a different idea about sentience. In Tibetan and Japanese Buddhism, all beings – including plants, and even inanimate objects that have been imbued with spiritual significance – are considered sentient. In Buddhism, sentience is a quality possessed by all non-enlightened beings with consciousness. By this logic, a robot that can beat us at chess or care for ailing patients – a machine that has become culturally important to us – might, in some interpretations, be considered sentient.
Given all this information, it seems almost easy to think of a highly advanced computer program as having at least some level of consciousness. Though it may not be the same “awareness” that humans perceive, a robot is constantly aware of and reacting to its environment, performing commands based on external stimuli. Maybe we don’t so much need a test to determine if a robot is intelligent, but rather, a test that determines if a robot is sentient.
Perhaps one day we will have robots that are sophisticated enough to become our social equals – but it seems unlikely that those robots will be the same androids that we currently picture in our fantasies and nightmares. Humanity itself has a lot of growing to do before we can hope to have all the answers on artificial intelligence. If Kurzweil’s predictions hold true, however, many of us should be gearing up to see these changes happen within our lifetimes.