
Deep Learning: Making AI Smarter

Persontyle Deep Learning Blog March 2014
By Ali Syed and Dr. Zacharias Voulgaris

Recent advances in making machines more intelligent – able to see, speak and even think like us – have pointed the way to a new era in Artificial Intelligence (AI). Much of this progress is due to breakthroughs in deep learning, a set of algorithms that allow machines to recognize objects and make sense of what they perceive. As they say, AI is finally getting smart with deep learning.

You might be wondering what’s so different about deep learning compared with machine learning as we have always known it. In fact, the distinction is easy to grasp. Deep learning is different because it enables representation learning, i.e. learning feature representations automatically instead of having to define them manually based on expert knowledge. How is this possible? All you need is large amounts of data (which we now have) and powerful computers (Moore’s law on steroids, e.g. GPUs); with these, you can build systems that learn the appropriate data representations on their own.

Machine learning is a very effective technique, but applying it to problems at scale usually means spending ages manually designing (yes, manually) the input features that feed the appetite of the learning algorithms. Researchers (including three of the leading AI experts, Geoff Hinton, Yann LeCun and Yoshua Bengio) have developed deep learning algorithms that can automatically learn feature representations from unlabeled data, thus avoiding all that endless feature engineering.
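To make "learning representations from unlabeled data" concrete, here is a toy sketch of an autoencoder – a network trained to reconstruct its own input through a narrow bottleneck, so the bottleneck is forced to become a compact learned representation. This is a deliberately minimal, hand-rolled illustration (a linear autoencoder with a single code unit, trained by stochastic gradient descent), not any specific published system; all names and values in it are our own choices.

```python
import random

random.seed(0)

# Unlabeled 2-D data that secretly lies on a 1-D line (x2 = 2 * x1).
# No labels and no hand-designed features: the network must discover
# the structure on its own.
data = [(t, 2.0 * t) for t in (random.uniform(-1, 1) for _ in range(200))]

# A linear autoencoder with one hidden "code" unit:
# 2 inputs -> 1 code number -> 2 reconstructed outputs.
w_enc = [random.uniform(-0.5, 0.5) for _ in range(2)]  # encoder weights
w_dec = [random.uniform(-0.5, 0.5) for _ in range(2)]  # decoder weights

lr = 0.05
for _ in range(200):                                   # epochs of SGD
    for x in data:
        h = w_enc[0] * x[0] + w_enc[1] * x[1]          # encode: 2 numbers -> 1
        recon = [w_dec[0] * h, w_dec[1] * h]           # decode: 1 number -> 2
        err = [recon[i] - x[i] for i in range(2)]      # reconstruction error
        g_h = err[0] * w_dec[0] + err[1] * w_dec[1]    # gradient at the code
        for i in range(2):
            w_dec[i] -= lr * err[i] * h                # update decoder
            w_enc[i] -= lr * g_h * x[i]                # update encoder

def mse():
    """Mean squared reconstruction error over the dataset."""
    total = 0.0
    for x in data:
        h = w_enc[0] * x[0] + w_enc[1] * x[1]
        total += (w_dec[0] * h - x[0]) ** 2 + (w_dec[1] * h - x[1]) ** 2
    return total / len(data)

print("reconstruction MSE:", mse())
```

After training, the single code unit summarizes each 2-D point with one number, and the reconstruction error is close to zero – the network has found the one-dimensional structure of the data without anyone telling it what to look for.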

“You have to realize that deep learning — I hope you will forgive me for saying this — is really a conspiracy between Geoff Hinton and myself and Yoshua Bengio, from the University of Montreal” — Yann LeCun

Deep learning is about learning multiple levels of representation and abstraction that help to make sense of data such as images, sound, and text. At the moment, most deep learning algorithms are based on building massive artificial neural networks that are loosely inspired by how our brain works.

“Deep learning methods aim at learning feature hierarchies with features from higher levels of the hierarchy formed by the composition of lower level features. Automatically learning features at multiple levels of abstraction allows a system to learn complex functions mapping the input to the output directly from data, without depending completely on human-crafted features.” — Yoshua Bengio
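The composition Bengio describes – higher-level features built out of lower-level ones – can be shown with a tiny two-layer network. In this sketch the weights are hand-set for clarity (a real deep learning system would learn them from data): layer 1 computes two simple features of the raw input, and layer 2 combines them into XOR, a function that no single layer-1 unit could express on its own.

```python
def step(z):
    """Threshold activation: the unit fires (1) when its input exceeds 0."""
    return 1 if z > 0 else 0

def neuron(weights, bias, inputs):
    """A single unit: weighted sum of its inputs, then a nonlinearity."""
    return step(sum(w * i for w, i in zip(weights, inputs)) + bias)

def network(x1, x2):
    # Layer 1: two low-level features computed directly from the raw input.
    h_or  = neuron([1, 1], -0.5, [x1, x2])   # fires if either input is on
    h_and = neuron([1, 1], -1.5, [x1, x2])   # fires only if both are on
    # Layer 2: a higher-level feature composed from the layer-1 features.
    # "or, but not and" is exclusive-or -- not computable by one linear unit.
    return neuron([1, -2], -0.5, [h_or, h_and])

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", network(x1, x2))  # prints the XOR truth table
```

Stacking more layers repeats this trick: each level reuses the features below it, which is what lets deep networks map raw pixels or audio samples to abstract concepts.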

For more about deep learning algorithms, see, for example:

– The monograph/review paper Learning Deep Architectures for AI by Yoshua Bengio

– The ICML 2009 Workshop on Learning Feature Hierarchies webpage, which has a list of references

– The LISA public wiki, which has a reading list and a bibliography

– Geoff Hinton’s readings from last year’s NIPS tutorial

Watch this hilarious clip titled “The Deep Learning Saga”, on “how the brain works, according to Geoff Hinton”. It was created by Yoshua Bengio with the help of Olivier Delalleau, and the complicity of Andrew Ng, Yann LeCun, Marc’Aurelio Ranzato, and Honglak Lee, and was presented at the NIPS 2010 workshops banquet for the Deep Learning and Unsupervised Feature Learning workshop.

“We ceased to be the lunatic fringe. We’re now the lunatic core.” — Geoff Hinton

Deep learning has attracted a lot of interest not only from the academic world but also from industry. Just a few decades ago the deep learning movement was an outlier in the world of academia, but now deep learning researchers have the attention of the biggest names on the internet – to the extent that some of them are being paid what a top NFL quarterback prospect earns.

“Last year, the cost of a top, world-class deep learning expert was about the same as a top NFL quarterback prospect. The cost of that talent is pretty remarkable.” — Peter Lee, Microsoft Research

Top tech companies like Microsoft, Facebook, and Google pay lots of money to have deep learning experts work for them, even part-time. In addition, DeepMind, a UK start-up specializing in combining the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms, was recently acquired by Google for 500 million dollars. This goes to show that the improvement in classification performance brought by this new class of learning algorithms is more than a matter of scientific significance: it translates into better user experiences and the possibility of developing brain-inspired intelligent machines.

Jeff Hawkins, the ingenious entrepreneur, co-founder of Numenta and deep thinker on all things technological, foresaw the rise of brain-inspired machine intelligence. In his milestone TED talk he outlined his ideas and their impact on the way new technology will be designed.

Jeff Hawkins founded Numenta in 2005 and brought in Palm veteran Donna Dubinsky as CEO. Numenta, Jeff stresses, has nothing to do with the field traditionally known as artificial intelligence. What he has in mind is far more supple and elegant: “Numenta isn’t just making another gadget. It’s attempting to fuse silicon and grey matter to produce the ultimate intelligent machine.” For a detailed explanation of the motivations, hopes and fears around the Numenta Platform for Intelligent Computing (NuPIC) project, see Jeff’s Introduction to NuPIC.

Deep Learning Community

One of the key insights behind machine learning, and deep learning in particular, is that different learners working together can achieve a generalization that none of them could accomplish individually. In a deep learning system, and in artificial neural networks in general, these learners are called neurons. Perhaps we can be like those neurons and work together to learn from each other, like a deep neural network. One way to make this happen is through Persontyle’s latest initiative, the Deep Learning London Meetup, a forum for enthusiasts of all levels, dedicated to the dissemination of deep learning knowledge and practices.

This community is part of a wider initiative for people to learn the art and science of turning data into intelligent predictions and insights. The vision is for this group to be an open platform for people to share knowledge, practices, research, applications and critiques of deep learning.

Specifically, in this group you will have the chance to meet some of the experts in the field, such as Dr. Piotr Mirowski from Microsoft, who will give a talk on auto-encoders on March 26th, and the aforementioned Jeff Hawkins, who on April 15th will give an overview of the theory behind the NuPIC codebase.

The best part is that you not only get to meet the experts but also get to ask them questions.

Join the Deep Learning London community.  We are in the process of bootstrapping and are on the lookout for interesting speakers and topics. If interested, get in touch.
