Deep learning and the great AI revolution have finally arrived. Deep neural networks keep making incredible leaps in their ability to surprise us, beating humans at tasks we thought machines could never do better than us, such as recognising images and playing Go.
Or so we would like to think.
Recent progress in machine learning, and in deep learning in particular, has been hailed as a new dawn for real artificial intelligence. AlphaGo’s win over world champion Go player Lee Sedol surprised many, since Go is a far more complex game than chess. However, whether this is a real qualitative step towards creating true, “hard” artificial intelligence remains to be seen.
Our brain was shaped by millions of years of evolution, and every life form is endowed with incredible capabilities that our most advanced technology cannot yet reproduce; in fact, many modern inventions have been inspired by biology. Yet the brain is the product of natural selection, and nature is dumb: it works through random genetic mutations.
Those random, unintelligent mutations, though, get a “pass” or “fail” depending on how well they serve the organism in the environment in which they occur. After thousands of years, the best mutations win. Natural selection, though dumb, has time on its side.
Over millions of years and millions of generations, new and better organisms evolved; intelligence is just a by-product of this evolution, and the process through which nature accomplishes these results is accidental. Natural selection is a dumb process, and no serious scientist would consider nature intelligent, or evolution an intrinsically intelligent design.
AlphaGo, and much of modern so-called “deep learning”, works in a similar fashion, despite the hype. AlphaGo learned the rules and improved its ability by playing thousands of games; deep learning systems work by analysing very large amounts of data. Nonetheless, there is a lot of excitement in the scientific community about deep learning, and sometimes even fear of it becoming too intelligent.
To be fair, deep learning does work in a more intelligent way than nature, and its progress is not entirely random. Artificial neural networks (ANNs) are built by “tuning” their weights, the parameters that allow the ANN to learn, through some form of optimisation, such as gradient descent with back-propagation. Still, the process is quite mechanical and not very complex, and without large amounts of data, much like nature without long stretches of time, many ANNs would be unable to learn much.
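To make the mechanical nature of this “tuning” concrete, here is a minimal sketch in Python/NumPy (not taken from any particular system) of gradient descent fitting a single-neuron model; the synthetic data, the learning rate of 0.1 and the variable names are all illustrative assumptions:

```python
import numpy as np

# Toy data: learn y = 2x + 1 from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=100)

# A single "neuron": prediction = w * x + b.
w, b = 0.0, 0.0
learning_rate = 0.1

for step in range(500):
    y_pred = w * x + b
    error = y_pred - y
    loss = np.mean(error ** 2)        # mean squared error
    # Gradients of the loss with respect to the weights
    # (back-propagation in its simplest possible form).
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Gradient descent: nudge each weight against its gradient.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, loss={loss:.4f}")
```

Even this toy example shows the pattern: the weights improve only because the gradient of the loss, averaged over the data, says which way to nudge them. No understanding is involved at any point.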
Like the “swot” in a class who does well only by spending all of his time studying, as opposed to the truly intelligent student who understands the subject, quickly grasps the basic concepts and can even generalise them, the ANN can achieve impressive results, but only thanks to large amounts of data, and its understanding remains quite limited.
Reza Ghanadan, a DARPA (Defense Advanced Research Projects Agency) program manager, noted that even a small change in a task often requires programmers to create an entirely new machine-teaching process, and said: “If you slightly tweak a few rules of the game Go, for example, the machine won’t be able to generalise from what it already knows. Programmers would need to start from scratch and re-load a data set on the order of tens of millions of possible moves to account for the updated rules”.
Several times, life on Earth was almost swept away by cataclysmic events, because evolution had created living things that were very well adapted to life on Earth but could not adapt to slightly altered circumstances. Similarly, ANNs can produce machines that accomplish very difficult tasks, like beating the world Go champion, but that fail quickly once the external circumstances are slightly changed.
In the paper “Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images”, Anh Nguyen, Jason Yosinski and Jeff Clune showed how easily deep neural networks can be fooled: images that look like pure noise or meaningless patterns to a human are classified by the network, with very high confidence, as familiar objects. Related work on adversarial examples shows the mirror image of this effect: imperceptible but deliberate changes to an image can lead a deep neural network to completely misclassify it. In both cases the pixels are manipulated using the network’s own gradients, steering it towards completely erroneous yet highly confident predictions.
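The idea behind such gradient-guided fooling can be sketched in a few lines. The following Python/NumPy toy stands in for the technique the paper describes: instead of a real trained deep network it uses a fixed linear-softmax “classifier”, and the model, step size and class count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained classifier: a fixed linear layer + softmax
# over 3 classes. (A real attack would target a trained deep net.)
W = rng.normal(size=(3, 64))

def predict(x):
    logits = W @ x
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

# Start from an "image" of pure noise and follow the gradient that
# increases the network's confidence in an arbitrary target class.
x = rng.normal(size=64) * 0.01
target = 2
for _ in range(200):
    p = predict(x)
    # Gradient of log p[target] w.r.t. the input for a linear-softmax
    # model: the target row of W minus the probability-weighted rows.
    grad = W[target] - p @ W
    x += 0.05 * grad                    # gradient *ascent* on the input

print(f"confidence in target class: {predict(x)[target]:.3f}")
```

Starting from noise, gradient ascent on the input quickly drives the model to near-certainty about an image that means nothing, which is precisely the failure mode the paper documents in real deep networks.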
The above paper, at the very least, shows that deep learning, however effective, definitely works differently from the human brain. While very good at solving specific problems, deep learning is still quite far from anything resembling what we think of as intelligence; rather, it seems closer to a “slightly better than random” natural-selection system, taking advantage of large amounts of data and, by working very fast, of long periods of virtual time. While it is understandable to be excited by recent advances, we should also realise that we are still far from creating a really intelligent machine. Real learning does not reside in “winning” a game, but in being able to use knowledge to adapt, and we are still quite far from achieving that.