By Sean Cameron, April 2017

Alex Garland’s 2015 film “Ex Machina” explores several classic problems in the philosophy of mind through its modern illustration of the Turing Test. The Turing Test was designed to answer one of the fundamental questions of this branch of philosophy: if we can build an artificial system that potentially has a mind, how can we be sure that it does? The test consists of a person conversing with a machine concealed in another room. If the person comes to believe that they are interacting with another person, the test is passed, and the conclusion drawn is that the machine possesses artificial intelligence, a mind. The characters in Garland’s story are quickly convinced that a human-made mind is genuine, and the film proceeds to examine the important questions that follow from this discovery. This paper will discuss how Garland’s rendition of the Turing Test plays out in his film “Ex Machina”, and why his version is a necessary variation of the traditional test.
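
For concreteness, the shape of the classic protocol can be sketched in a few lines of code. This is a generic illustration only, not anything specified by Turing’s paper or by the film; the respondent and judge below are invented placeholders.

```python
# A generic sketch of the classic Turing Test's structure: a judge exchanges
# text with a concealed respondent, then renders a pass/fail verdict.
# The respondent and judge here are invented stand-ins.

def hidden_respondent(message: str) -> str:
    """Stands in for the concealed party, which may be human or machine."""
    return f"You said: {message}"

def run_turing_test(questions, respond, judge):
    """Collect a transcript of the interrogation, then ask the judge
    to decide whether the respondent seemed human."""
    transcript = [(q, respond(q)) for q in questions]
    return judge(transcript)  # True means the test is passed

passed = run_turing_test(
    ["What is your favorite film?", "Why do you like it?"],
    hidden_respondent,
    judge=lambda transcript: False,  # a bare echo bot fools no one
)
print("Passed" if passed else "Failed")
```

Note that the verdict rests entirely on the judge’s impression of the transcript; nothing in the protocol inspects what the respondent is, which is exactly the limitation the film goes on to probe.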

Philosophy of Mind

Before going any further we must specify what we are looking for in the character of the robot AVA, who undergoes the modified Turing Test: whether she has a mind. Someone’s mind is their inner life; it is their consciousness and all of their other mental properties, such as their beliefs and desires. But how can we positively identify which other entities have minds? No consensus has been reached in the philosophical literature as to whether animals or complex artificial systems have or can have minds. There is, however, a common analogical argument for the existence of other minds. It suggests that if one has a mind and displays some relevant set of properties, P, then all other beings that share this exact set of properties are likely to have minds as well. Current research cannot illuminate exactly which properties belong to this set, but the most commonly cited is the capacity to demonstrate behaviors that result from beliefs and desires.

Plot Summary

In Ex Machina, many of the robot AVA’s properties are initially hidden and later dramatically revealed to the protagonist, Caleb, and to the audience in order to build suspense. To dissect the embedded thought experiment clearly, we will consider mainly a condensed plot line. Both Caleb and AVA know that they will be actors in a Turing Test, but Nathan, AVA’s creator, hides from them his own pass-fail criteria for the test. Nathan indicates to both Caleb and AVA that after the test has been conducted, regardless of the outcome, AVA will be disassembled before the next model is created. To Nathan, the true pass of his AI test comes only if AVA can manipulate Caleb into helping her escape the laboratory and her imminent disassembly.

While Caleb is unaware of Nathan’s plan, he does what he believes he was summoned to do: determine whether AVA has a mind. As the audience, we watch Caleb look for clues in AVA’s physical composition as well as through conversational interrogation, but we ultimately learn why these methods are destined to fail: they cannot differentiate between a genuine mind and the mere imitation of one.

AVA: Mind vs. Machine

To start our discussion, we will first look at the physical properties of the artificial system, AVA, and ask whether they could be used to determine whether she has a mind.

As in any computer, AVA’s ‘mind’ (if she has one) consists of software run on hardware that one might take to be analogous to the brain. The hardware (or “wetware”, as Nathan describes it) differs fundamentally from traditional circuitry: it consists of a gel-like medium that is able to “arrange and rearrange at a molecular level, but keep its form when required.” The hardware was undoubtedly inspired by the brain and its constantly reconfiguring neurons. In this medium, the film suggests, AVA’s thoughts arise and resonate in a fluid manner, rather than simply following pre-ordered lines of programming.

The algorithm at the core of AVA’s software is built on Nathan’s company’s search engine, Blue Book (Ex Machina’s answer to Google). Nathan claims that what his competitors have yet to realize is that tracking people’s search histories gives “not just a map of what people were thinking, but also how people were thinking.” By taking a superposition of countless chains of inquiry, Nathan has constructed an algorithm that replicates how the human mind thinks.
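
The film never specifies Blue Book’s actual method. As a loose illustration of the premise that chains of queries carry information about how, and not just what, people think, here is a toy sketch that treats each user’s session as a sequence of queries and models the “how” as the transition structure between them. The data, function names, and Markov-style approach are all assumptions made purely for illustration.

```python
from collections import defaultdict

# Toy sketch: learn which query tends to follow which, across many
# hypothetical search sessions. The "shape" of these chains is a crude
# stand-in for the film's notion of how people think.

def build_transition_model(sessions):
    """Count how often one query leads to the next across all sessions."""
    transitions = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for current, following in zip(session, session[1:]):
            transitions[current][following] += 1
    return transitions

def most_likely_next(transitions, query):
    """Predict the most common follow-up to a query, if one was observed."""
    followers = transitions.get(query)
    if not followers:
        return None
    return max(followers, key=followers.get)

sessions = [
    ["what is consciousness", "turing test", "can machines think"],
    ["what is consciousness", "philosophy of mind", "other minds problem"],
    ["turing test", "can machines think", "chinese room"],
]
model = build_transition_model(sessions)
print(most_likely_next(model, "turing test"))  # -> "can machines think"
```

Even this crude model predicts behavior without any claim about inner experience, which foreshadows the problem the essay turns to next.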

Considering the complexity and flexibility of the system, it is reasonable to think that it could have a mind just as a human does. It appears to have the necessary components of a mind, in that the hardware and software replicate the brain and its functions. However, it is unknown whether neuroscience could ever advance to the point of telling us that an artificial system has a mind from observation of its physical properties alone. Caleb, though educated in programming and familiar with the study of AI, realizes he cannot draw any conclusions about AVA’s mental existence from her physical composition, because he does not know (as no one does, and perhaps no one ever will) exactly which physical properties are sufficient to form consciousness. To tell whether AVA has a mind, Caleb will need more than the blueprints.

AVA’s Mental Properties

On the surface, Caleb finds that AVA does seem to meet the loose criteria for a mind, in that she appears capable of self-knowledge. She answers questions about her own existence and displays evidence of varying mental states. This suggests that AVA is self-aware; if AVA is aware of her own inner life, then she has the mental properties that we attribute to a mind. During one of her interviews with Caleb, AVA makes a joke based on something Caleb has said. The joke could not possibly have been hard-coded, as it was entirely a reaction to Caleb’s dialogue. This attempt at building a relationship suggests that AVA is not only self-aware and capable of spontaneity, but also aware of the thoughts of others.

If we apply the argument for other minds as we defined it, we should look for AVA to perform a behavior that results from a belief and a desire. The strongest example is the collection of carefully conducted actions AVA executes because she believes she can make Caleb fall in love with her, and desires to do so. It is not difficult to list these actions: AVA inquires into Caleb’s personal life, she puts on a dress and wig to show herself to him, she confronts him about how he feels about her, and so on. She conducts these actions on the specific belief that her best chance of escape is to manipulate Caleb into falling in love with her and assisting her.

Judged by her mental properties, then, AVA does seem to have a mind; she appears to meet the criteria of the other minds argument. The conclusion the audience is led to is that if we are sure that we as humans all have minds, then AVA must have one too. She is just like us.

The Chess Analogy

When giving a progress report, Caleb expresses the intrinsic difficulty of his task by comparing AVA to a chess computer. The analogy is a simplification, but it accurately captures the underlying problem of determining whether an AI has a mind:

Caleb: “Testing AVA through conversation is a closed loop… like testing a chess computer by only playing chess. You can play it in a game to see if it makes good moves, but that won’t tell you if it knows that it’s playing chess, and it won’t tell you if it knows what chess is.”

As technology advances our ability to build complex artificial systems, the tactic for determining other minds proposed by Alan Turing breaks down. The previous discussion of AVA’s mind is undercut: it may be only that AVA has been cleverly programmed to replicate the behaviors that human minds exhibit, not that she has a mind. This is where Caleb is stumped. He could ask AVA to jump through any number of conversational hoops, but no answer would prove beyond doubt that AVA is conscious, as opposed to being cleverly programmed to respond to and generate a wide variety of questions and answers.
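
Caleb’s worry can be made concrete with a toy sketch: two systems whose outward behavior is identical are indistinguishable by any external probe, whatever is or is not going on inside them. The players, moves, and canned replies below are invented for illustration.

```python
# Illustrating the "closed loop" problem from the chess analogy: a pure
# lookup table and a (notionally) reasoning system that happen to produce
# the same outputs cannot be told apart by any sequence of probes.

CANNED_REPLIES = {
    "e4": "e5",
    "Nf3": "Nc6",
}

def lookup_player(move: str) -> str:
    """Replies from a fixed table: pure mimicry, no model of chess."""
    return CANNED_REPLIES.get(move, "resign")

def reasoning_player(move: str) -> str:
    """Imagine this one derives its reply from an internal model of the
    game; for every probe we can actually send, its outputs match the
    table above, so the stand-in below suffices for the demonstration."""
    return CANNED_REPLIES.get(move, "resign")

# Any external test sees identical behavior from both players.
for probe in ["e4", "Nf3", "h4"]:
    assert lookup_player(probe) == reasoning_player(probe)
print("No probe distinguishes the mimic from the reasoner.")
```

The test’s vantage point is the problem: as long as the examiner only observes inputs and outputs, good moves and good conversation alike are compatible with both a mind and a mindless imitation of one.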

As the film advances, it proposes a solution to this problem: put the AI in a position where it decides, on its own, to engage in deception. We can then imagine AVA as multi-dimensional: one part of her is genuinely in love with Caleb and is unaware (or accepting) of the disassembly Nathan has planned, while a second part knows she must manipulate Caleb into falling in love with her. These two parts make up AVA, though we can take the second to embody the beliefs and consciousness of the AI, and the first to be the behavior that results from the second. In sum, AVA must have an inner life if she is able to harbor complex, conflicting beliefs and desires while hiding them from the external world.

Conclusion

Though Garland’s movie is science fiction, it animates, in clever contemporary terms, a classic problem in the philosophy of mind. It shows that the traditional Turing Test is far too elementary as a test for consciousness, and that if we are ever to genuinely test a system for a mind, we will need a trial that engages it far more holistically. Through Caleb’s chess analogy, the film points out the gaping holes in the other minds argument, showing that in a situation where we must determine whether an organism has a mind, the argument is inconclusive. The contribution Garland’s movie offers philosophy lies in its identification of deception as a tool for detecting consciousness. In sum, Ex Machina proves to be an insightful thought experiment, successful at getting its audience to ask the right questions when pondering the existence of artificial minds.