Sir Winston Churchill often spoke of World War 2 as the “Wizard War”. Both the Allies and Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US Military developed the ENIAC, or Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, the ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).
As time marched forward, computers became smaller and faster. The invention of the transistor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims about how computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the perception that when computers became faster, as they surely would in the future, they would be able to think like humans do.
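ELIZA’s apparent conversational ability, for instance, came from nothing more than keyword pattern matching. The minimal Python sketch below is written in that spirit (the rules are invented for illustration and are not the original DOCTOR script); a handful of patterns is enough to produce replies that feel conversational while encoding no understanding at all:

```python
import re

# Toy rules in the spirit of ELIZA: match a keyword pattern and echo part of
# the user's input back as a question. These rules are illustrative only.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned reply by pattern matching; there is no model of meaning."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # default deflection when nothing matches

print(respond("I am worried about my project."))
# -> Why do you say you are worried about my project?
```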
But it soon became clear that computers would not come to think like humans simply by getting faster. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.
Neural Networks
As AI faded into the sunset in the late 1980s, neural network researchers were finally able to get some much-needed funding. Neural networks had been around since the 1960s, but had been actively squelched by AI researchers. Starved of resources, not much was heard of neural nets until it became obvious that AI was not living up to the hype. Unlike the computers that original AI was built on, neural networks do not have a processor or a central place to store memory.

Neural networks are not programmed like a computer. They are connected in a way that gives them the ability to learn from their inputs. In this way, they are similar to a mammalian brain. After all, in the big picture a brain is just a bunch of neurons connected together in highly specific patterns. The resemblance of neural networks to brains gained them the attention of those disillusioned with computer-based AI.
In the mid-1980s, researchers Terrence Sejnowski and Charles Rosenberg built a neural network called NETtalk that was able, on the surface at least, to learn to read. It did this by learning to map patterns of letters to spoken language. After a little training, it had learned to speak individual words. NETtalk was hailed as a triumph of human ingenuity, capturing news headlines around the world. But from an engineering point of view, what it did was not difficult at all. It did not understand anything; it just matched patterns with sounds. It did learn, however, which is something computer-based AI had much difficulty with.
Eventually, neural networks suffered a fate similar to that of computer-based AI: a lot of hype and interest, only to fade after they were unable to produce what people expected.
A New Century
The transition into the 21st century saw little progress in AI. In 1997, IBM’s Deep Blue made brief headlines when it beat [Garry Kasparov] at his own game in a series of chess matches. But Deep Blue did not win because it was intelligent. It won because it was simply faster. Deep Blue understood chess no more than a calculator understands math.

Modern times have seen much of the same approach to AI. Google is using neural networks combined with a hierarchical structure and has made some interesting discoveries. One of them is a process called Inceptionism. Neural networks are promising, but they still show no clear path to a true artificial intelligence.
IBM’s Watson was able to best some of Jeopardy’s top players. It’s easy to think of Watson as ‘smart’, but nothing could be further from the truth. Watson retrieves its answers by searching terabytes of information very quickly. It has no ability to actually understand what it’s saying.
One can argue that the process of trying to create AI over the years has influenced how we define it, even to this day. Although we all agree on what the term “artificial” means, defining what “intelligence” actually is adds another layer to the puzzle. Looking at how intelligence was defined in the past will give us some insight into how we have failed to achieve it.
Alan Turing and the Chinese Room
Alan Turing, the father of modern computing, developed a simple test to determine whether a computer was intelligent. It’s known as the Turing Test, and it goes something like this: if a computer can converse with a human such that the human thinks he or she is conversing with another human, then the computer has imitated a human and can be said to possess intelligence. The ELIZA program mentioned above fooled a handful of people with this test. Turing’s definition of intelligence is behavior based, and it was accepted for many years. This would change in 1980, when John Searle put forth his Chinese Room argument.
Consider an English-speaking man locked in a room. In the room is a desk, and on that desk is a large book. The book is written in English and contains instructions on how to manipulate Chinese characters. He doesn’t know what any of it means, but he’s able to follow the instructions. Someone then slips a piece of paper under the door. On the paper is a story and questions about the story, all written in Chinese. The man doesn’t understand a word of it, but he is able to use his book to manipulate the Chinese characters. He fills in the answers using his book and passes the paper back under the door.
The Chinese-speaking person on the other side reads the answers and determines that they are all correct. She comes to the conclusion that the man in the room understands Chinese. It’s obvious to us, however, that the man does not understand Chinese. So what’s the point of the thought experiment?
The man is a processor. The book is a program. The paper under the door is the input. The processor applies the program to the input and produces an output. This simple thought experiment shows that a computer can never be considered intelligent, as it can never understand what it’s doing. It’s just following instructions. The intelligence lies with the author of the book or the programmer, not with the man or the processor.
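In computing terms the whole room collapses to a few lines of code. The sketch below is only an illustration of Searle’s point (the lookup table, the questions, and the function name are invented for this example; a real ‘book’ would be unimaginably larger), but it makes plain where the apparent understanding actually lives:

```python
# A hypothetical "rule book": input slips mapped to correct responses.
# The contents are made up for illustration; a real book would cover far more.
RULE_BOOK = {
    "故事里的男孩去了哪里？": "他去了市场。",   # "Where did the boy go?" -> "He went to the market."
    "男孩买了什么？": "他买了三个苹果。",       # "What did the boy buy?" -> "He bought three apples."
}

def chinese_room(slip_of_paper: str) -> str:
    """Follow the book mechanically and return whatever it dictates."""
    return RULE_BOOK.get(slip_of_paper, "请再问一遍。")  # "Please ask again."

# The answer can be perfectly correct even though neither this function nor the
# machine running it has any notion of what the symbols mean. Whatever
# intelligence is on display belongs to whoever wrote the rule book.
print(chinese_room("男孩买了什么？"))  # -> 他买了三个苹果。
```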
A New Definition of Intelligence
Throughout mankind’s pursuit of AI, we have been, and still are, looking to behavior as the definition of intelligence. But John Searle has shown us how a computer can produce intelligent behavior and still not be intelligent. How can the man or the processor be intelligent if it does not understand what it’s doing?
All of the above has been said to draw a clear line between behavior and understanding. Intelligence simply cannot be defined by behavior. Behavior is a manifestation of intelligence, and nothing more. Imagine lying still in a dark room. You can think, and are therefore intelligent. But you’re not producing any behavior.
Intelligence should be defined by the ability to understand. [Jeff Hawkins], author of On Intelligence, has developed a way to do this with prediction. He calls it the Memory Prediction Framework. Imagine a system that is constantly trying to predict what will happen next. When a prediction is met, the function is satisfied. When a prediction is not met, focus is pointed at the anomaly until it can be predicted. For example, you hear the jingle of your pet’s collar while you’re sitting at your desk. You turn to the door, predicting you will see your pet walk in. As long as this prediction is met, everything is normal. It is likely you’re unaware of doing this. But if the prediction is violated, it brings the scenario into focus, and you will investigate to find out why you didn’t see your pet walk in.
This process of constantly trying to predict your environment allows you to understand it. Prediction is the essence of intelligence, not behavior. If we can program a computer or neural network to follow the prediction paradigm, it can truly understand its environment. And it is this understanding that will make the machine intelligent.
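As a rough illustration of that predict-and-check loop (a toy sketch only; the SequencePredictor class and the event strings are invented here and are nothing close to a full implementation of Hawkins’ memory prediction framework), consider a program that remembers which event usually follows the current one, predicts it, and flags any observation that violates the prediction:

```python
from collections import defaultdict

class SequencePredictor:
    """Toy predict-and-check loop: learn which event tends to follow which,
    predict the next event, and flag anything that violates the prediction."""

    def __init__(self):
        # transitions[a][b] counts how often event b has followed event a
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def predict(self, event):
        """Return the most frequent follower of `event`, or None if unknown."""
        followers = self.transitions[event]
        return max(followers, key=followers.get) if followers else None

    def observe(self, event):
        if self.previous is not None:
            expected = self.predict(self.previous)
            if expected is not None and expected != event:
                # Prediction violated: this is where attention would focus.
                print(f"surprise: expected {expected!r}, got {event!r}")
            self.transitions[self.previous][event] += 1
        self.previous = event

predictor = SequencePredictor()
for event in ["collar jingle", "pet at door", "collar jingle", "pet at door",
              "collar jingle", "nothing there"]:
    predictor.observe(event)
# Only the final observation violates the learned prediction and gets flagged.
```

The pet-and-collar example above maps onto the ‘surprise’ branch: the violated prediction is what drags the scenario into focus and drives further learning.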
So now it’s your turn. How would you define the ‘intelligence’ in AI?

Will Sweatman 01 Dec, 2015
Source: http://hackaday.com/2015/12/01/a-short-history-of-ai-and-why-its-heading-in-the-wrong-direction/