Wednesday, June 3, 2015

Programming a Ghost

The worth of anything can be measured by its tendency to inspire thought. In other words, if it can make me think, then it has to be good! Fair enough, right? Ghost in the Shell: Stand Alone Complex is one of the few anime that have provoked a good deal of thought in me. In fact, no other anime has inspired me to think about such worthwhile topics as consciousness, perception, metaphysics, artificial intelligence, and programming. Lately, Ghost in the Shell: Stand Alone Complex has brought yet another idea to my mind: the idea of the ghost. For those unfamiliar with the series, a person's personality, thoughts, behaviors, attitudes, ideas, and so on are referred to as the person's ghost. In a nutshell, a person's ghost is their consciousness (I'll use the words "ghost" and "consciousness" interchangeably in this post). The ghost is one of the most referenced themes in the series: the plot continually returns to it, and the characters speculate about it at length. It's even speculated that a ghost could be created by, or emerge from, self-modifying and self-improving artificial intelligence.

So, can a ghost be programmed, or emerge from an existing program? Before we can determine this, we must know how we determine the existence of a ghost in a person. Well, it's pretty obvious that people have a consciousness, right? We see a person, we hear them talk, we listen to them as they express themselves, we observe them as their ideas, opinions, and attitudes change. It's pretty clear that people are conscious. People are even conscious of themselves. They are self-aware. Can a program become self-aware?

Now that we know how to determine whether someone has a ghost, can we tell whether a machine has a ghost? Enter the Turing test. A Turing test measures a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Under the criterion used in recent contests (based on a prediction Turing himself made), a machine passes if it convinces its judges that it is human at least 30% of the time across a series of five-minute conversations. Has a machine ever passed a Turing test by exhibiting intelligent human behavior? Most machines are quite primitive in their ability to act intelligently (compared to an intelligent human). Then again, most people don't act in a reasonably intelligent manner (compared to an intelligent human), so perhaps most machines can perform in ways indistinguishable from those of most humans. Defining intelligent human behavior is another ordeal altogether, but I trust the people who run Turing tests to know what they are doing. Okay, now that we've got that resolved: the answer to the question above is yes, a machine has passed a Turing test.
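To make that pass criterion concrete, here's a minimal Python sketch of the scoring rule as I've described it. The 30% threshold and five-minute rounds come from the contest rules mentioned above; the function name and the example data are just made up for illustration.

    # Toy scoring rule for the pass criterion described above: the machine
    # "passes" if it convinces its judges that it is human in at least 30%
    # of the five-minute rounds. An illustration, not a real benchmark.
    def passes_turing_test(judge_verdicts, threshold=0.30):
        """judge_verdicts: one boolean per round, True if the judge
        believed the machine was human after that conversation."""
        if not judge_verdicts:
            return False
        fooled = sum(1 for fooled_judge in judge_verdicts if fooled_judge)
        return fooled / len(judge_verdicts) >= threshold

    # Ten rounds, judges fooled in four of them: 40% >= 30%, so it "passes".
    rounds = [True, False, False, True, False, True, False, False, True, False]
    print(passes_turing_test(rounds))  # True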

So, a machine can appear to be a person by appearing to have a ghost, a consciousness. It should be noted that appearing to have a ghost and actually having a ghost are two completely different things. However, the fact that a machine can appear to have a ghost makes it indistinguishable, to an observer, from actually having one. Let us also note that the machine appeared to have a ghost during a Turing test, and Turing tests are made up of many short, five-minute rounds. I suspect the machine would be less likely to appear to have a ghost if the rounds were longer, if the questions became more complex, or if the environment were altogether different. Additionally, appearance is subjective: the machine could appear human to one person and non-human to another. And it also matters how we define what a reasonably intelligent person would do in a given situation. Context matters.

All in all, even considering my reservations about the Turing test, the fact that a machine has passed a Turing test is a great feat of programming, and perhaps we are well on our way to seeing the emergence of a ghost within a machine. But, for a moment, let us consider how difficult it would be to program a human consciousness.

A comparison may be made between basic brain activity and basic computer operations. The brain is made up of tens of billions of neurons (current estimates put the figure around 86 billion). Neurons are a lot like on/off switches: they require just the right amount and kind of chemical signal before they "fire" an electrical charge down the axon to the axon terminals. And that's all basic brain activity is, an enormous series of neurons getting triggered and releasing their chemicals, which, performed en masse, forms human thought. Basic computer operations work much the same way. The most basic machine language is binary: a long sequence of 1's and 0's, where 1 represents "on" and 0 represents "off". They either "fire" or they don't. See where I'm going with this? The most basic machine language seems to mimic the most basic brain function. Since this basic brain function, performed in aggregate, forms human consciousness, does that mean a sufficiently sophisticated binary program could form a consciousness equivalent to a human's?
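If you want to see the "neuron as on/off switch" idea in code, here's a toy threshold unit in Python, roughly in the spirit of a McCulloch-Pitts neuron. The weights and threshold are arbitrary numbers I picked for the example; real neurons are, of course, vastly more complicated.

    # A toy "neuron": it fires (outputs 1) only when the combined incoming
    # signal crosses a threshold, otherwise it stays off (outputs 0).
    def fires(inputs, weights, threshold):
        signal = sum(value * weight for value, weight in zip(inputs, weights))
        return 1 if signal >= threshold else 0

    weights = [0.6, 0.9, 0.4]  # arbitrary connection strengths

    print(fires([1, 0, 1], weights, threshold=1.0))  # 1 -- enough signal, it fires
    print(fires([0, 0, 1], weights, threshold=1.0))  # 0 -- not enough, it stays off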

Let's say my comparison above holds weight: what would it take to program a consciousness? Well, the code would likely need to work something like the human brain. It would need to be sequential, self-modifying, and a little illogical. Would simply programming a personality using 86 billion lines of code work? Would the code become self-aware? That's hard, if not impossible, to tell. Could the code be programmed to continually expand itself by incorporating new data into its structure? Self-modifying code is already possible, but could code continually self-modify to the same extent as a human brain? Would all this self-modification lead to the emergence of a ghost? Possibly, but, sadly, we may never know.
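As a rough illustration of what I mean by code that incorporates new data into its structure, here's a tiny Python sketch of a program that changes its own future behavior at runtime. The class and rule names are invented for the example, and it's obviously nowhere near a brain; it just shows the mechanism in miniature.

    # A program that modifies its own rule set as new "experiences" come in,
    # so that its future behavior depends on what it has absorbed.
    class GhostCandidate:
        def __init__(self):
            self.rules = {}  # stimulus -> learned response

        def respond(self, stimulus):
            return self.rules.get(stimulus, "no reaction")

        def incorporate(self, stimulus, response):
            # Self-modification of a sort: adding or overwriting a rule
            # changes how the program will respond from now on.
            self.rules[stimulus] = response

    ghost = GhostCandidate()
    print(ghost.respond("greeting"))        # no reaction
    ghost.incorporate("greeting", "hello")  # a new experience rewrites a rule
    print(ghost.respond("greeting"))        # hello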

Due to the limits of human perception, we may never be able to tell whether a machine truly possesses a consciousness or whether a machine is self-aware. Yes, we can run Turing tests ad infinitum, and the machine may always convince the human that it is a human instead of a machine, but, as mentioned earlier, this doesn't mean that the machine has a consciousness. It just means that the machine is sophisticated enough to trick a human, and that's it.

Though we may never know for certain whether a machine has a ghost, we may still highly suspect it. Electronic computers have come a long way since their appearance on the scene in the early 1940s. And some computers are now sophisticated enough to convince a human that they do have a ghost, even though they may not have one. Yes, I just admitted that I think the machine that passed the Turing test may have a consciousness. Indeed, who's to say the machines we currently rely upon don't already have ghosts? After all, we as humans can't really know whether a machine is truly conscious of itself. Nevertheless, it's quite fun to speculate on. I predict that, as the years go on, machines will convince us that they are conscious of themselves. What then? The political, social, and economic repercussions would be enormous. But I won't talk about that now. For now, let's just humble ourselves with the fact that we may never know whether a machine is conscious. If it's any comfort, you can look forward to the possibility that, if machines do develop consciousness, it will happen in the near future. The future is always closer than you think.

Anyway, just something to think about.

 
