Sex, Robots, and the Talmud

Aug 15, 2014 • Toys

Roboutina by Paul Vera-Broadbent

The Roxxxy robot companion raises profound questions about our relationships to one another and to technology. Roxxxy is, not unexpectedly, a robot designed for sex. There’s an obvious conversation we could have here, one which starts from people’s squick factor about this robot. Some of that conversation may be relatively nuanced — we could talk about the ways in which interacting with a sexual robot encourages people to view other people as machines which are ultimately there to serve their own needs, “objectifying” them in the literal as well as the metaphorical sense. I think we can all run through that conversation in our heads by now, and it’s not going to tell us anything spectacularly new. But there’s another lens we can use to look at this device, one which I think leads down an unexpected but interesting path, because it actually raises many of the core questions about the development of technology, only in a more intimate way than you might expect.

Sex toys and sex bots as a technology

Let me give the simplest argument for why this can be a positive technology: it gives people an opportunity to explore their own sexuality in a safe environment. The Roxxxy series really sits at the intersection of two technologies of this sort. One is the sex toy, which (especially in the space of male sex toys) is basically a technology for exploring the space of physical stimulation; the other is the technology of the “dating sim,” the use of increasingly sophisticated computer simulations of personal interaction to let people explore human relationships. (This, in turn, intersects both with other technologies — think of the increasingly complex simulated personalities of characters in computer games — and with the narrative arts, from the novel to the role-playing game, which contain at their heart an exploration of human psychology and of how one might deal with a variety of situations.)

This technology doesn’t fulfill a radically new role in society, of course; long before there were technologies which explored this space, this range of needs was satisfied by (human) sex workers. As with these technologies, sex work spans a fairly broad range of these ends: from simple physical stimulation, to the ability to explore sexuality in a controlled environment, one which is artificially bounded by the transaction. And this, in turn, stretches out into sex therapy, and from there into general psychotherapy, which in much of its practice is about the ability to explore the space of one’s emotional states in a controlled environment, again bounded by the nature of the transaction: the therapist is someone you can talk about yourself with, freed both of the obligation to reciprocate by discussing them in turn, and of the further obligations and consequences which would come from revealing your inner emotional state to a friend. While we sometimes laugh about how much of psychotherapy is really just the therapist listening and asking the occasional leading question, that’s actually a profoundly powerful thing, as it gives people a chance to think about themselves in a way which they ordinarily wouldn’t — a chance created by the “safe space” of the therapeutic environment. Likewise, I suspect that much of the discomfort people often have with the idea of therapy comes from the way it fiscalizes an ordinarily personal transaction — but that very fiscalization, the replacement of social obligation with the suddenly sharp walls of “oh yes, this is business,” is precisely what lets therapy work.

We can therefore consider Roxxxy (and her male partner, Rocky) as part of a spectrum of assistive and therapeutic technologies. Clearly, this device is nowhere near the capability of a human at this point, any more than ELIZA is capable of replacing a psychotherapist; Roxxxy neither physically nor emotionally provides a decent simulation. However, you can imagine cases where she would nonetheless be useful, especially for people who are unwilling or unable (for reasons ranging from shyness, to fear, to simple lack of money) to go to a sex worker instead.

This lets us rethink one of the criticisms of this technology, namely that it might become a crutch: that is, that through overuse people might become dependent on it for their sexual interaction, and never acquire the ability to have sexual relationships with people. In fact, the phrase “become a crutch” is perfectly apt, because the crutch is the prototypical assistive technology. In general, if you are using a crutch, the choice is not between using one and walking around without one: the choice is between using one and not walking around at all, and so a crutch is a magical device which opens up a big chunk of the world to you. It’s certainly possible to overuse one — to spend too much time on a crutch after an injury so that it doesn’t properly heal, for example, and so develop a dependency on it — but this doesn’t make crutches a bad idea. In fact, without them, the injury would likely never heal properly at all.

Likewise, if we consider these devices in the spectrum of assistive technologies, we realize that their customers are people who want to explore some aspect of their sexuality which (for one reason or another) they don’t feel that they can explore with a person, even in the “controlled environment” of a sex worker. Now, since this is an extremely primitive technology at this point, it’s obviously a poor substitute, but I think that this context makes it much more interesting for us to explore where such technologies might go in the future, and what the consequences might be.

The different future directions of this technology

First of all, there are actually two major paths forward for such technology, which correspond to the two major directions in artificial intelligence research: either trying to become more human, or not trying that at all. The context for this is that, even more than AI, these technologies are attempting to mimic humans, not only in broad situations but in situations where we are naturally hyper-attuned to the things which make people people. That means that the uncanny valley is going to be broader and deeper than usual, and it’s not clear just how good the technologies will have to get before they can be genuinely assistive.

Now, as I mentioned before, sexbots are really the intersection of two different technologies, physical stimulation and personality simulation, and the question of “seeming human” is somewhat different in each case. Personality simulation, in particular, likely requires that we actually make it to the other side of the uncanny valley before it can become what we might call a “transitional” assistive technology, i.e., one whose purpose is to provide temporary support and assistance during a healing process, so that its user ultimately ceases to need it. (The analogue of an under-the-arm crutch, which is meant to be a temporary thing while a leg injury heals, but which you would never use in the long term.) On the other hand, it’s less clear what the requirements would be on this as a “substitutive” assistive technology, one which is meant to help with a permanent situation and which is not meant to ease you into anything else, but rather to improve your life in as many ways as possible in its own right. (The analogue of, say, a good electric wheelchair.)

The corresponding “two paths forward” in artificial intelligence (AI) research are attempting to make an AI which feels really human (in the style of Asimov’s robots), versus an AI which is meant to work with humans but which makes no attempt to seem human itself. The prototypical example of the latter is a search engine: this is a system which has many of the core behaviors of AI (it’s meant to understand what you mean when you say something; to understand all sorts of documents and resources out in the world, what they mean, how trustworthy they are, and so on; and to understand what might be relevant to you and when), and which acts as an augmentation to the human mind — essentially, solving the problem of your not knowing something by giving you transparent and instant access to the sum total of human knowledge. However, it makes no attempt to engage you in conversation, it has no notion of self, and it doesn’t attempt to project a personality. There’s no obvious fashion in which adding such behavior would make search engines more helpful or otherwise better at what they do; these are AIs designed to work with humans, rather than ones designed to be a rough substitute for a human.
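To make the contrast concrete, here is a minimal sketch of an AI in the search-engine mold: it models relevance and returns answers, with no dialogue state, no persona, and no notion of self. (The documents and the deliberately crude word-overlap scoring are invented for illustration; real search engines are vastly more sophisticated.)

```python
# A toy "non-humanlike AI" in the search-engine mold: it models
# relevance, not personality. No conversation, no self.
from collections import Counter

# Invented stand-in corpus, for illustration only.
documents = {
    "doc1": "robots and the ethics of artificial companions",
    "doc2": "a recipe for chicken soup",
    "doc3": "a history of artificial intelligence research",
}

def score(query: str, text: str) -> int:
    """Crude relevance signal: count of words shared between query and text."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum(min(q[w], t[w]) for w in q)

def search(query: str) -> list[str]:
    # Rank documents by relevance and return them: an answer, not a dialogue.
    return sorted(documents, key=lambda d: score(query, documents[d]), reverse=True)

print(search("artificial intelligence ethics"))  # -> ['doc1', 'doc3', 'doc2']
```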

A similar split applies to this sexbot technology. Physical stimulation devices are an easy place to see this: there are some which attempt to directly mimic human physiology (e.g., realistic dildos, or the various masturbatory sleeves which are “modeled after” people, often in a loose sense of the word) and others which make no such attempt whatsoever (e.g., the wand-massager style of vibrator, or masturbatory sleeves such as the Tenga egg, which make no pretense of looking human, but instead focus on being the best sleeves they can be). Both of these spaces are healthy: invention and improvement happen frequently, and both are quite popular with their users.

The situation with personality simulation is less clear. For obvious reasons, it would only work as a transitional assistive technology if it successfully managed to be “more human than human,” or at least human enough that the things you learn from it transfer directly to humans; but there may also be an entire niche of personality simulation which makes no attempt to be human. It may turn out that some people are simply xenophiles and like whatever it is that comes out of that direction: only time and experiment will tell.

Sexbots and the economics of sex work

To go a bit further, I would like to explore the limits of the “more human than human” branch of this technology, and some of the consequences — economic, social, and ethical — of improving technology in this way.

To start with the economic, we should consider that all of these technologies occupy a space which overlaps with human work, namely sex work. This brings up the question of what niches the technology and the humans might find themselves occupying in the future, and how they will relate to one another. It’s hard to imagine, right now, any of these humaniform technologies being serious competition for (i.e., a potential substitute for) human sex workers; to my eyes, even the most advanced of these (such as the RealDoll) seems closer to a substitute for necrophilia. However, technology advances, and we should consider it possible that, within our lifetimes, we will have some kind of robot which physically and behaviorally simulates a human well enough to be on the other side of the uncanny valley.

What would such a device be like, and how would it compare to a human? It would have to look at least superficially human, enough not to trigger the gut “eew, this is dead and it’s moving!” response. This might not necessarily mean exact replication: for example, it might be blue-skinned. (That might actually be an effective approach: it turns out that accurately capturing the appearance of skin (visually, texturally, kinetically, etc.) is one of the key requirements for crossing the uncanny valley. It might be possible to short-circuit this by deliberately taking a non-human approach to skin simulation, which doesn’t have to be as accurate, while keeping enough of the device’s other aspects similar enough to a human that it still passes our “human” filter, and therefore appeals to more than just those with decidedly nonhuman tastes.)

The blue skin example really highlights how the broader aspects of this technology would likely develop. To create a true simulation of humanity in all its aspects is likely nearly impossible, and would frankly be of dubious value: if it’s exactly like a person, why not just have a person? We’ve known how to make those for millennia. Instead, we should expect that the limit of this technology would mimic some things very precisely while completely punting on others, and would simultaneously add and focus on things a human can’t do, rather than trying to be perfect at the things it will never do well.

Some of this might take on the form of unusual physical abilities; freed of the constraints of biology, you can assume that these sexbots will have remarkable combinations and options of genitals, secondary sexual characteristics, the ability to hang upside-down by their toes, and so on. More interestingly, we might see a further development of the “personality options” which we see in the RoxxxyGold: would you like this device to play-act a harsh matron? A naïve ingenue? A reluctant housewife? Would you like it to simulate your ex? The person you have a crush on? This can go wrong in a few ways, which I’ll come back to in a bit.

You can already see from this that these robotic sex workers — I think that’s really the only appropriate term, at this point — are potentially addressing the same market as human sex workers. There isn’t perfect overlap: I suspect that there will always be a healthy market for people who want a human, not a robot, for such purposes, just as there will be some who are dedicated devotees of the more unusual capabilities of robots. However, the economic consequences of this could be profound, as it essentially introduces automation into a market which was previously quite immune to it, greatly expanding the effective worker pool.

It’s not really meaningful to think of sex work as a single economic market, of course. A streetwalker in a depressed neighborhood and an escort in a global capital aren’t serving the same people, providing the same service, or considering switching to one another’s jobs. However, we could imagine robot workers joining each of these markets.

The effect on the highest-end market will likely be the least. Escorts who are expected to appear at state events would be hard to replace with robots without it being very publicly noticed, and being noticed is precisely one of the things you often don’t want in that case. Even for non-public-facing work, this niche is generally occupied by the most skilled and educated people in the market, who are the best positioned to compete directly with any attempt at automation.

If you step away from that very specialized subset, the next major tier consists of sex workers who provide a “safe space” in the sense discussed above. This set doesn’t simply include people selling sex, by the way: it also includes sex therapists, strippers, pro doms, and a wide range of other people with extremely different jobs from one another, all of which involve letting people explore their sexuality in a controlled space. (In fact, the arguments in this section apply equally well to people whose jobs don’t include sex at all, such as psychotherapists; this is really about work that’s focused on helping people explore themselves.) These people would only face real competition from robots which have genuinely solved the personality simulation problem, which is likely to be by far the hardest of these problems to solve; that suggests serious competition will not reach this group for a long time to come. However, if it ultimately does, it will be competition of the most complicated kind, since a robot capable of competing here is capable of being a full participant in human relationships — something I’ll come back to in a moment.

Further down in the market, the effects of robots will depend greatly on price. If humans can systematically undercut robots — if, say, the technology is such that you’re buying rather than renting, and buying a sexbot costs the equivalent of thousands of dollars today — then the effect will necessarily be limited. However, as with any manufactured good, there are economies of scale, so we should expect production costs to decrease steadily. This could create tremendous economic pressure on the people least able to respond to it, as it would provide real and potentially cut-rate competition. The situation could be further exacerbated if we continue to use legal methods to pressure human sex workers: if humans are not only subject to tremendous economic pressures, but their competition is legal while they are kept in the shadows, they will find themselves in even more desperate situations. When you consider that these tiers of sex work are often jobs of last resort, the consequences for the people displaced could be even more profound than those of automation elsewhere in the modern economy: where, exactly, would they go next?
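To see how directly price drives this, here is a back-of-the-envelope break-even calculation. Every number in it is an invented placeholder, not a real price; only the shape of the arithmetic matters.

```python
# Back-of-the-envelope: when does buying a (hypothetical) sexbot become
# cheaper than paying a human per session? All figures are invented.
robot_price = 5000.0    # assumed up-front purchase price
robot_upkeep = 2.0      # assumed per-use maintenance cost
human_session = 150.0   # assumed per-session price of a human worker

# Number of uses at which the purchase pays for itself:
break_even = robot_price / (human_session - robot_upkeep)
print(f"break-even after ~{break_even:.0f} uses")  # ~34

# Economies of scale: halving the production cost halves the threshold,
# which is why falling prices translate directly into competitive pressure.
print(f"at half price: ~{robot_price / 2 / (human_session - robot_upkeep):.0f} uses")  # ~17
```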

I think the best answer to this is also the best answer to the legal problems facing human sex workers today: organization. That provides the tools to change the laws, to fight labor abuses, and to provide systems of mutual support when the world changes. (In this context, I’m also thinking about Marc Levinson’s The Box, a history of the development of containerized shipping. One of the most striking things about this history, which Levinson charts in detail, is how differently dockworkers on the East and West coasts of the U.S. fared when new technologies made many of their old jobs obsolete. The key difference was that in the west, the unions were well organized, worked together, and negotiated a meaningful transition strategy with the shipping companies, while in the east, the unions were too busy being at each other’s throats, so instead of negotiating, the shipping companies simply bypassed them, with results which were not great for anyone involved.)

The moral and ethical limits of robots

There is one further subject which I want to delve into, and that has to do with the ethical consequences of improved personality simulation technology in particular. There are really two interesting questions here: one having to do with misuse of the technology, and the other having to do with the intrinsic moral values of the technology itself.

Let’s begin with misuse. Earlier, I mentioned the various roles which RoxxxyGold’s distant descendants could role-play: but what about other ones? For example, what about an unwilling partner? What about a small child?

We could make the argument (one which has been made in other contexts) that this is still a substitutive, rather than transitional, technology: someone who is role-playing sex with a minor with a robot is not, therefore, out having sex with a real minor. However, this argument loses a lot of weight when the activity being substituted is not merely unusual or distasteful but actively dangerous to society as a whole. The earlier discussion, which I largely skipped, about how this could train people to treat others — which I would now rephrase as “what if someone is using this as a transitional technology, and is going to end up doing this to real people?” — becomes extremely relevant.

Fortunately, this is actually not a new problem, and in fact we have entire systems of laws and ethics designed to deal with it. The laws around “simulated child pornography,” for example, attempt to directly address this question by banning writing and images which could serve as a transition to actual sexual assault of children. (These laws also happen to be excellent models of how not to write laws, but that’s a separate article.) I think it is reasonable to assume that, if such technologies became available — and in fact, probably long before they ever passed the uncanny valley and became widely available — we would establish similar social and legal norms barring their use for such purposes.

However, there are other kinds of misuse which might not be as easily banned by law. For example, consider the person who wants to murder a simulated human. They want a full personality simulation here — to hear the robot begging for her life, to hear terror, the whole nine yards. Instinctively, we might say that we want to ban such a thing. But all of the problems which make the “simulated child pornography” laws such a disaster come back with even greater force here. What, exactly, do you ban? Do you ban any simulation of murder? (Does that make people dying in movies illegal?) Do you ban destroying a robot? (Does that mean that if you buy a robot, you can never get rid of it?) Do you ban torturing a robot? (How do you define torture? Do you have a legal definition of harm to a robot?)

Let us think, for a moment, about why this example disturbs us so much. Part of it is that the user may be practicing to be a serial killer, which is alarming in its own right; but we already have laws against murder, and if anything this could be a way to detect the early warning signs of a potential murderer. But even in its own right, I think we would all find watching the scene I described above profoundly disturbing, because we would be seeing something which is, in fundamental ways, hard to distinguish from a human being murdered.

In fact, there are deep roots in our cognitive psychology which explain why this would be the case. Humans are anthropomorphizers par excellence, and this has important survival value for us. Consider Heider and Simmel’s famous circles-and-triangles experiment: humans shown an animation of a bunch of circles and triangles moving on a screen are quick to ascribe “motivation” to them, to talk about the circle “trying to open a door,” and so on. At the level of ascribing animate intent to objects, there is an obvious reason this sort of error developed: the hominids who mistook a rock for a lion looked foolish, whereas the ones who mistook a lion for a rock were not, in fact, anybody’s ancestors. It’s better to err on the side of accidentally thinking that something is alive. There is a similar issue with our ascription of moral standing: it is better to err and assume that something is human, and that to torture it is wrong, than to err the other way and say, “oh, these aren’t really people; it’s OK if we kill them.” Our profound intuition that something which seems human enough should be treated as human is rooted very deeply in our psyches, and with good reason. (And when it fails — when we make a collective decision that some groups of people are not human — the results are horrifying.)
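The evolutionary logic here is simply asymmetric error costs, and a toy calculation makes it vivid. The numbers below are entirely invented; the only thing that matters is that one mistake is cheap and the other is catastrophic.

```python
# Toy expected-cost model of the "rock vs. lion" asymmetry.
# All numbers are invented; only the asymmetry between them matters.
p_lion = 0.01            # prior probability the ambiguous shape is a lion
cost_false_alarm = 1     # fleeing from a rock: a little wasted effort
cost_missed_lion = 1000  # ignoring a lion: possibly fatal

# Expected cost of each policy when faced with an ambiguous shape:
always_flee = (1 - p_lion) * cost_false_alarm  # treat everything as a lion
never_flee = p_lion * cost_missed_lion         # treat everything as a rock

print(f"always assume lion: {always_flee:.2f}")  # 0.99
print(f"always assume rock: {never_flee:.2f}")   # 10.00
# Even at a 1% prior, over-attributing agency is the cheaper error,
# which is the same logic applied above to ascribing moral standing.
```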

The nature of the gut reaction you probably had to my description of a “simulated” murder above may be an indication that the correct way to socially and legally prohibit such activity is, in fact, to follow that rule: to ascribe to these robots the same degree of humanness which our intuitions tell us they might have. In a way, this is like the kosher law which prohibits eating chicken together with milk. While the underlying Biblical prohibition is specific to red meat — “do not cook a kid in its mother’s milk” — the law was extended to chicken as well in order to “create a fence around the Torah”: to prevent people from sinning accidentally, in case they confuse red meat with chicken in a mixed dish and thereby break the underlying law. If we would consider putting a fence around something as minor as dietary restrictions, then how much more should we do so when the stakes are human life? If it is illegal and immoral to torture and murder a person, then we should protect ourselves from error by considering it illegal and immoral to torture and murder someone who may or may not be a person. This protects us from two errors at once: the error of harming a “real” person whom we have mistaken for a robot, and the error of the robot turning out to be a person after all.

When we get to this sort of moral argument, of course, we then realize that it applies to more than simply murder. If we may not murder a robot, because the robot is too similar to a human for us to feel comfortable doing so, then may we beat one? May we enslave one? If we do not follow this guideline, and instead our legal or moral strictures depend on our saying that the robot has no “self,” and therefore no personhood which can be violated, then we are betting everything on our having judged that correctly, at all times: that is, on our having correctly identified them as being sub-human. If we argue that a robot is an item which can be owned, and that a robot cannot own property, including itself, then we may wish to recall the (in)famous ruling of a 16th-century Languedocian court that a prostitute cannot be raped, because her body is a public commodity and therefore she has no ownership interest in it. Would the same argument apply here?

This comes back to another issue I mentioned earlier: the robot which becomes a serious competitor to those who provide psychological comfort and safety, whether psychotherapists or sex workers. By its nature, this is a robot whose principal purpose is to enter into human relationships with people. Yes, these relationships are limited by construction, in much the same way that the therapist-client relationship is limited by construction: but is that limit essential, or an accident of the kinds of relationship we are thinking of? Our earlier comparison to prostitution made us think about short-term relationships between a human and one of these robots, but if you think about the technology even as it exists today — when someone owns this robot outright and keeps it, rather than renting it — it may also touch on long-term relationships, which are no longer bounded in the same obvious way. You might say that, even if the simulation were perfect, this could never be real competition for a “real” relationship, because it started off with the purchase of someone designed to order — and is quite possibly reprehensible for that reason. How does it compare, however, to mail-order arranged marriages? What about more traditional sorts of arranged marriage? How does the relationship itself, ultimately, compare to a real one?

I suspect that there is no good answer which we will ever be able to give to this question: the boundaries will be profoundly fuzzy, and they will not be fuzzy because of a lack of understanding on our part, but because they genuinely are fuzzy, just as the boundaries between different kinds of relationships with humans are fuzzy today. The only sharp line which I could imagine drawing is to say that humans are people, and that these robots are not, no matter what: and that line legitimizes all of the things mentioned above. I cannot, in my stomach, convince myself that a law which establishes the non-humanity of some group which looks human to me is not profoundly immoral. (I can only suggest Lester del Rey’s 1938 short story “Helen O’Loy,” and say that the question is not new, nor is it any easier to answer today.)

The lifecycle of sexual objects

I would like to close this out by considering one last thing, which is the question of how robots are made. In particular, I would like to bring up an important question raised by two books: Ted Chiang’s novella The Lifecycle of Software Objects, and Charles Stross’ novel Saturn’s Children.

Chiang was writing about the question of human-like artificial intelligence, and he raised the rather profound point (he has a technical background) that ultimately, if you want something to understand the world the way a human does, there is almost certainly only one way to do it: raise it like a human. He described the process by which an artificial intelligence could gradually acquire a mind and a self in our sense, and this is the process of it steadily playing, interacting with people, learning, and growing, in much the way that a human child would. There is reason to believe that he is right: simulating humanity, such as the ability to really understand what people are saying in sentences, tends to require a fairly profound model of what a human mind is thinking. My favorite example of this is how even understanding pronouns can require a deep understanding of psychology. Consider the dialogue:

Woman: I’m leaving you.
Man: … Who is he?

You probably had no difficulty understanding what the man meant by his sentence. But explain to me what “he” refers to in the second sentence! In order to do that, you had to “get yourself into his mind,” to imagine what another person might have been thinking. To do that, at some point you are going to need to experiment with different mind-states yourself, and the learning process for that will almost certainly involve experiencing them.
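As a minimal sketch of why this is hard for a machine, consider a deliberately naive coreference heuristic: scan backward through the text for the most recent entity whose gender matches the pronoun. (The entity list and gender tags below are my own illustrative scaffolding, not any real NLP library’s output.)

```python
# A deliberately naive pronoun resolver: look backward through the text
# for the most recent mentioned entity whose gender matches the pronoun.

# The only entities actually mentioned in the dialogue, in order.
entities = [("Woman", "female"), ("Man", "male")]

def resolve(pronoun_gender: str, speaker: str):
    """Return the most recent non-speaker entity matching the pronoun, if any."""
    for name, gender in reversed(entities):
        if gender == pronoun_gender and name != speaker:
            return name
    return None

# The man asks "Who is he?" -- a male pronoun, spoken by the man himself.
print(resolve("male", speaker="Man"))  # -> None
# "he" has no antecedent anywhere in the text. Resolving it means inferring
# an unmentioned third person from the speakers' states of mind, which is
# exactly the theory-of-mind step described above.
```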

Stross came to the same conclusion that Chiang did, but his conclusion came with a warning. His novel is about a world in which humanity has become extinct, but the robots which we created have in their turn created a successful civilization of their own. His protagonist is a permanent misfit in this world: a sexbot, designed to please a race which no longer exists. The novel is about many things, but one of the key questions it ultimately touches on is how you train a robot to do something. Like Chiang, he concludes that you do so in the same way that you would train a human. And then he points out: the sexbot was created to be a sex slave. Consider how one trains a human to do that. If we try to build a slave — not just a sex slave, but anything designed to be subservient to people’s needs — which is also “more human than human,” then this is what we would be signing up for.

The technologies we are looking at today are still very, very far from realistic personality simulation. We will have to deal with the questions of physical stimulation long before that — and in fact, we already are, and in that context these devices seem to be an unalloyed good, a set of technologies which simply make people’s lives better. But as the technologies increasingly broach the question of the boundaries of humanity, we will find ourselves encountering more and more of these ethical dilemmas. In these cases, I think there is only one thing we can do and still preserve our own humanity: if there is a doubt in our heart as to whether something is or is not human, we should always assume that it is.

Header image by Paul Vera-Broadbent. A version of this piece originally appeared on Google+ on June 14, 2014, and was reprinted with permission.