
Tuesday, 27 December 2016

Deep learning as new electronics


It is hard to imagine modern life without electronics: radios, TVs, microwaves, mobile phones and many more gadgets. Dumb or smart, they are all based on the principles of semiconductors and electromagnetism. We now take these devices for granted, without worrying about the underlying laws of physics. Most people do not care about the circuits that run in the chips and carry out most of the devices' functions.

For the past 5 years, a new breed of human-like functionality has emerged through advances in a new field called deep learning: self-driving cars, voice commands on mobile phones, translation across hundreds of language pairs, and new kinds of art. In 2016, ten years after its revival, deep learning has taken over the Internet. People use deep learning-powered products in daily life without worrying about how the underlying neural nets work.

These two fields free us from many physical and psychological constraints:

  • Electronic devices give us freedom of communication over distance, a new kind of experience with augmented reality, and much more.
  • Deep learning frees us from making tedious and error-prone decisions (e.g., driving a car), and gives us freedom of information access (personalization), of the hands (e.g., voice commands), of finance (automated trading), of feature extraction (through representation learning), and much more.
It is worth noting that electronics and deep learning are different in principle:
  • Electronic devices are designed with great precision for specific functions in mind. Imprecision comes only from the quantum uncertainty principle and thermal fluctuations.
  • Neural nets, on the other hand, are designed to learn to perform functions on their own, with data (and sometimes model) uncertainty built in.
However, it is also striking that they are quite similar in many ways.

Super-city of interconnected simple parts

Modern electronic devices are truly super-cities built out of just a few kinds of primitive building blocks. The same holds for deep neural nets:
  • Electronic primitives: resistor, capacitor, transistor, coil, diode, logic gate and switch.
  • Neural net primitives: integrate-and-fire neuron, multiplicative gating, differentiable logic gate, switch and attention module. Interestingly, one of the most recent ideas is the "Highway network", which borrows the observation that highway traffic is free of traffic lights (a minimal sketch of a highway layer follows these lists).
These primitives are connected in graphs:
  • Electronic devices work by moving electrons in the correct order and number. The force that makes them move is a potential difference. A circuit design captures all the necessary information.
  • In neural nets, activation plays the role of the electric current. The main difference is that the magnitude of the "current" in neural nets can be learnt. A computational graph is all that is needed for model execution.
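To make the gating primitive concrete, here is a minimal NumPy sketch of a highway layer: a transform gate T decides, per unit, how much of the transformed signal passes through and how much of the input is carried over untouched. The weights and toy data are illustrative, not from any trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: y = T(x) * H(x) + (1 - T(x)) * x."""
    H = np.tanh(x @ W_h + b_h)    # candidate transformation
    T = sigmoid(x @ W_t + b_t)    # transform gate in (0, 1)
    return T * H + (1.0 - T) * x  # gated mix of transform and carry

# Toy usage: 4 examples, 8 features; weights would normally be learned.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W_h, b_h = 0.1 * rng.normal(size=(8, 8)), np.zeros(8)
W_t, b_t = 0.1 * rng.normal(size=(8, 8)), np.zeros(8) - 2.0  # negative bias favours carrying the input
print(highway_layer(x, W_h, b_h, W_t, b_t).shape)  # (4, 8)
```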
Not just analogy: A two-way relationship
  • Electronics → deep learning: At present, advances in electronics have given a huge boost to the efficiency of deep learning, with GPUs, TPUs and other initiatives. It is interesting to ask whether we can learn from electronics in designing deep nets. For example, will something analogous to integrated circuits emerge in deep architectures?
  • Deep learning → electronics: I predict that the reverse will soon hold true: deep learning will play a great role in improving the efficiency and functionality of electronic devices. Stay tuned.

Sunday, 25 December 2016

Making a dent in machine learning, or how to play a fast ball game


Neil Lawrence had an interesting observation about the current state of machine learning, and linked it to fast ball games:
“[…] the dynamics of the game will evolve. In the long run, the right way of playing football is to position yourself intelligently and to wait for the ball to come to you. You’ll need to run up and down a bit, either to respond to how the play is evolving or to get out of the way of the scrum when it looks like it might flatten you.”
Neil Lawrence is known for his work on Gaussian Processes and is a proponent of data efficiency. He was a professor at the University of Sheffield and is now with Amazon. Apparently the strategy works: the ball has come to him.

I once heard about a professor who said he would go to top conferences just to learn what others were busy doing, and then try to do something else.

I also read somewhere, from a top physicist, that students who applied to work with him often expressed the wish to study shiny-and-clean fields; other fields were too messy and seemed unsexy. The professor insisted that the messy fields were exactly the best ones to work on.

In "Letters to a young scientist", Edward Osborne Wilson told his life story. He spent his entire life cataloging ants since childhood, right at the time where ant ecology wasn't a shiny field. He is considered as father of biodiversity.

Wondering what to do in deep learning now?

It is an extremely fast ball game with thousands of top players. You will either be crushed, with your ideas scooped weekly, or run out of steam pretty quickly.

It looks like most of the low-hanging fruit has been picked.

Then ask yourself: what is your unique position? What strengths and advantages do you have that others do not? Can you move faster than others? It may be access to data, access to expertise in your neighbourhood, or angles borrowed from outside the field. Sometimes digging up old ideas is highly beneficial, too.

Alternatively, just calm down and do boring-but-important stuff. Important problems are like the goal areas in ball games. The ball will surely come.

30 years of a Swiss army knife: Restricted Boltzmann machines


I read somewhere, though I cannot recall exactly who said it, that in the ancient world 30 years was long enough for a new generation to settle down with a new system, regime or ideology. As we are only a few days away from 2017, I would like to look back at the history of a 30-year-old model which has captured my research attention for the past 10 years.

To some of you, the restricted Boltzmann machine (RBM) may be a familiar name, especially if you have followed the current deep learning literature from the beginning. But the RBM has also passed its prime, so you may only have heard about it in passing.

I was attracted to the RBM for several reasons. First, when I was studying conditional random fields in 2004 and looking for a fast way to train models with arbitrary structures, Contrastive Divergence (CD) appeared to be an interesting candidate. While CD is a generic technique, it was derived especially for RBMs. Second, RBM has "Boltzmann" in its name, which is kind of interesting, because physicists are kind of sexy :)

Needless to say, another big reason is that the RBM, together with its cousin the autoencoder, is a building block of the unsupervised deep nets that started the current revolution -- deep learning.

The greatest reason is that I think the RBM is one of the most important classes of data models known to date, perhaps comparable in usefulness to PCA for dimensionality reduction and k-means for clustering.

RBMs were first introduced in 1986 by Paul Smolensky under the name Harmonium, in the classic two-volume book known as PDP (Parallel Distributed Processing), co-edited by Rumelhart and McClelland. They were subsequently popularised by Geoff Hinton in the 2000s, especially in 2001 with the introduction of Contrastive Divergence (CD), and in 2006 with the introduction of a deep version known as Deep Belief Nets (DBN).

Statistically, the RBM is a probabilistic model of data, i.e., it assigns a probability (or density) to multivariate data. Initially, RBMs were limited to binary data (the Bernoulli-Bernoulli RBM), but they were subsequently extended to Gaussian data (the Gaussian-Bernoulli RBM) and to mixed types (the Mixed-variate RBM, or Thurstonian Boltzmann machine).

[Figure: the bipartite structure of an RBM. Source: http://deeplearning.net/tutorial/_images/rbm.png]

The RBM is a special case of the Boltzmann machine, which is in turn a special case of a Markov random field. It has two layers: one for the observed data, the other for the latent representation. Thanks to its special bipartite structure, MCMC inference can be implemented in a block-wise fashion. Learning is relatively fast with CD or its persistent version. Estimating the latent representation is very fast, requiring only a single matrix operation. The RBM is also a powerful model in the sense that it can represent any distribution given enough hidden units. As a Markov random field, it has a log-linear parameterization, which makes it easy to incorporate a variety of domain knowledge.
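For the curious, below is a minimal NumPy sketch of CD-1 learning for a Bernoulli-Bernoulli RBM. It shows how the bipartite structure allows block-wise Gibbs sampling, and how the latent representation comes from a single matrix operation; sizes, learning rate and data are toy values.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 update for a Bernoulli-Bernoulli RBM.
    v0: batch of binary visible vectors, shape (n, n_vis)."""
    # Up-pass: posterior of the hidden layer given data (one matrix op).
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Down-pass then up-pass: one step of block-wise Gibbs sampling.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Contrastive divergence: data statistics minus reconstruction statistics.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

# Toy run: 100 random 6-bit vectors, 4 hidden units.
n_vis, n_hid = 6, 4
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)
data = (rng.random((100, n_vis)) < 0.5).astype(float)
for _ in range(50):
    cd1_update(data, W, b, c)
```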

With all of these advantages, RBMs have been used successfully in many applications, ranging from density modelling, feature extraction, dimensionality reduction and clustering to topic modelling, imputation, classification, retrieval and anomaly detection.

A biased selection of developments
  • 1986: first introduced as Harmonium.
  • 2001: fast approximate biased learning introduced as Contrastive Divergence (CD)
  • 2004: generalized Harmonium introduced
  • 2006: used successfully in Deep Belief Networks
  • 2007: demonstrated with great success on a very large-scale task within the Netflix challenge
  • 2007: temporal RBM
  • 2008: recurrent temporal RBM
  • 2008: classification RBM
  • 2008: persistent CD introduced, essentially a variant of Younes' algorithm.
  • 2008: convolutional RBMs
  • 2008: universality property proved
  • 2009: topic models with Replicated Softmax
  • 2009: matrix modelling with non i.i.d. RBMs, ordinal data, semi-restricted RBM
  • 2009: implicit mixtures of RBMs
  • 2010: factored high-order RBM
  • 2010: mean-covariance RBM
  • 2010: rectifier linear units RBM
  • 2010: deep BM
  • 2011: mixed-variate RBM
  • 2012: a proper modeling of ordinal matrix data
  • 2013: Thurstonian BM for joint modeling of most known data types
  • 2013: nonnegative RBMs for parts-based representation
  • 2015: trained with graph priors, demonstrating better generalization
  • 2015: extended to tensor-objects
  • 2016: infinite RBM
In short, most of the work has been on extending the representational power of RBMs to suit problem structures. The rest is about analysing the theoretical properties of RBMs, building deep nets out of RBMs, and improving training speed & accuracy. For the past few years, research on RBMs has slowed down significantly, mostly because of the superb accuracy of supervised deep nets and the ease of deploying deterministic nets on large-scale problems.

Some of our own work

Tuesday, 20 December 2016

Everything old is new again: Nested sequential models


Recently, multi-layer RNN architectures have been demonstrated to work better than single-layer versions. Google's Neural Machine Translation system, for example, has 8 layers of LSTMs as of Dec 2016.

The idea goes back to the earlier days of multi-layer HMMs in the 1990s, which are special cases of Dynamic Bayesian Networks. These were then followed by multi-layer Conditional Random Fields (CRFs), which are special cases of Dynamic CRFs.

The idea is that higher layers represent more abstract semantics. In temporal sequences, one would expect the "clock" of the upper layers to run slower than that of the lower layers. But most existing work has to design the temporal resolution explicitly by hand, as in the sketch below.
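To illustrate the hand-designed alternative, here is a toy NumPy sketch of a two-layer vanilla RNN in which the upper layer's clock simply ticks once every k steps. The resolution k is fixed by hand, which is exactly the limitation noted above; all weights are random placeholders.

```python
import numpy as np

def two_speed_rnn(xs, k=4, d=16, seed=0):
    """Two-layer vanilla RNN whose upper layer runs k times slower:
    it updates only every k-th step and holds its state otherwise."""
    rng = np.random.default_rng(seed)
    Wx  = 0.1 * rng.normal(size=(xs.shape[1], d))
    Wh1 = 0.1 * rng.normal(size=(d, d))
    W12 = 0.1 * rng.normal(size=(d, d))
    Wh2 = 0.1 * rng.normal(size=(d, d))
    h1, h2 = np.zeros(d), np.zeros(d)
    for t, x in enumerate(xs):
        h1 = np.tanh(x @ Wx + h1 @ Wh1)        # lower layer: ticks every step
        if (t + 1) % k == 0:                   # upper layer: ticks every k steps
            h2 = np.tanh(h1 @ W12 + h2 @ Wh2)  # abstracts a k-step chunk
    return h2

xs = np.random.default_rng(1).normal(size=(12, 8))  # 12 steps, 8 features
print(two_speed_rnn(xs).shape)  # (16,)
```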

Learning the temporal resolution automatically is an attractive idea. In 1998, the Hierarchical HMM was introduced, in which a parent state is assumed to generate a child sequence, and each child in turn generates a grandchild subsequence, and so forth. The network becomes nested. Learning and inference cost cubic time, which is prohibitive for long sequences.

A CRF counterpart, known as the Hierarchical Semi-Markov CRF, was introduced by us in 2008.

Both HHMMs and HSCRFs are members of the Stochastic Context-Free Grammar family, which is known for its cubic time complexity. Not only are they slow, HHMMs and HSCRFs are also hopeless on large-scale tasks that require many bits to represent the world.

Given the recent successes of RNNs (mostly LSTMs and GRUs) on sequential tasks, one naturally asks whether we can achieve the same feat as in HHMMs, that is, a hierarchy learnt automatically from data. This proved to be a difficult task until very recently. Check this paper by Bengio's group for more detail. I'm very curious to see how the idea plays out in practice. Let's wait and see.

Work by us:
  • Hierarchical semi-Markov conditional random fields for deep recursive sequential data, Truyen Tran, Dinh Phung, Hung Bui, Svetha Venkatesh,  Artificial Intelligence, 2017. (Extension of the NIPS'08 paper).
  • MCMC for Hierarchical Semi-Markov Conditional Random Fields, Truyen Tran, Dinh Q. Phung, Svetha Venkatesh and Hung H. Bui. In NIPS'09 Workshop on Deep Learning for Speech Recognition and Related Applications. December, 2009, Whistler, BC, Canada.
  • Hierarchical Semi-Markov Conditional Random Fields for Recursive Sequential Data, Truyen Tran, Dinh Q. Phung, Hung H. Bui, and Svetha Venkatesh. In Proc. of 21st Annual Conference on Neural Information Processing Systems, Dec 2008, Vancouver, Canada. 
  • AdaBoost.MRF: Boosted Markov random forests and application to multilevel activity recognition, Truyen Tran, Dinh Quoc Phung, Hung Hai Bui, and Svetha Venkatesh. In Proc. of  IEEE Conference on Computer Vision and Pattern Recognition, volume Volume 2, pages 1686-1693, New York, USA, June 2006.

Monday, 19 December 2016

Everything old is new again: Deep statistical relational learning


In the age of combinatorial innovation, old things are given a shiny new face, even when nothing really new happens. The same holds for Statistical Relational Learning (SRL) -- a sub-field of machine learning for characterizing the relational structure of the world.

Starting in the late 1990s, SRL went through a fruitful period of about 10 years and reached its peak in 2007 with the publication of a book titled "Introduction to Statistical Relational Learning", co-edited by Lise Getoor and the late Ben Taskar (who died unexpectedly in 2013, at the age of 36, at his academic peak). Many significant models appeared in the first half of the 2000s, including Conditional Random Fields (CRFs, 2001), Relational Markov Networks (2002) and Markov Logic Networks (2006). Despite being more powerful than non-relational alternatives, SRL still relies on manual feature engineering, which will soon reach the limit of its utility.

Developed rather in parallel is Deep Learning (DL), whose current wave officially started in 2006 with the publication of Deep Belief Networks in Science. Deep learning is concerned with learning data abstractions (aka features), favoring end-to-end learning through multiple steps of non-linear computation.

Combinatorial thinking naturally leads to the question of whether these two sub-fields can work together. The answer is a big YES, because SRL and DL are rather complementary. For example, in the past 3 years there have been lots of papers marrying CRFs and deep nets. While CRFs offer a semi-formal framework for joint learning and inference, deep nets offer learning of features (with feedforward nets), deterministic dynamics (with recurrent nets), and translation invariance (with convolutional nets). The marriage would be a happy one. But like any marriage of convenience, it won't go very far. Some genuine blending is needed.

Our recent work, Column Networks, scheduled to appear in AAAI'17, blends SRL and DL even further, so that learning and inference can be carried out naturally. The term "column" refers to the famous columnar structure of the neo-cortex in mammals. Interestingly, Column Networks share design features with all three main deep net architectures:

  • A column is a feedforward net,
  • Parameters are tied across layers, which is essentially the idea behind recurrent nets,
  • The network between columns is designed so that the multi-relations between columns are invariant across columns, hence the translation invariance property of convolutional nets.
As Column Networks are very generic, expect more to come in the next few months (a rough sketch of one update step is given below). Stay tuned.
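Below is a rough NumPy sketch of one mini-layer update in the spirit of Column Networks, simplified to a single relation type; the paper's exact parameterization differs. It is only meant to show the three design features at once: each column is a feedforward step, the same W and V are reused at every layer (tying, as in recurrent nets), and the same neighbourhood mixing applies to every column (invariance, as in convolutional nets).

```python
import numpy as np

def column_step(H, neighbors, W, V, b):
    """One mini-layer: each column mixes its own state with the mean
    of its neighbours' states, then applies a ReLU."""
    H_new = np.empty_like(H)
    for i, nbrs in enumerate(neighbors):
        ctx = H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
        H_new[i] = np.maximum(0.0, b + H[i] @ W + ctx @ V)
    return H_new

# Toy graph: 3 entities (columns) on a ring, 5-dimensional states.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 5))
neighbors = [[1, 2], [0, 2], [0, 1]]
W = 0.1 * rng.normal(size=(5, 5))
V = 0.1 * rng.normal(size=(5, 5))
b = np.zeros(5)
for _ in range(4):  # stack mini-layers with tied parameters
    H = column_step(H, neighbors, W, V, b)
print(H.shape)  # (3, 5)
```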

Updated references

  • Column Networks for Collective Classification, T Pham, T Tran, D Phung, S Venkatesh, AAAI'17.
  • Graph-induced restricted Boltzmann machines for document modeling, Tu D. Nguyen, Truyen Tran, D. Phung, and S. Venkatesh, Information Sciences. doi: 10.1016/j.ins.2015.08.023
  • Neural Choice by Elimination via Highway Networks, Truyen Tran, Dinh Phung and Svetha Venkatesh,  PAKDD workshop on Biologically Inspired Techniques for Data Mining (BDM'16), April 19-22 2016, Auckland, NZ.
  • Predicting delays in software projects using networked classification, Morakot Choetkiertikul, Hoa Khanh Dam, Truyen Tran, Aditya Ghose, 30th IEEE/ACM International Conference on Automated Software Engineering, November 9–13, 2015, Lincoln, Nebraska, USA.
  • Learning vector representation of medical objects via EMR-driven nonnegative restricted Boltzmann machines (e-NRBM), Truyen Tran, Tu D. Nguyen, D. Phung, and S. Venkatesh, Journal of Biomedical Informatics, 2015, pii: S1532-0464(15)00014-3. doi: 10.1016/j.jbi.2015.01.012. 
  • Cumulative Restricted Boltzmann Machines for Ordinal Matrix Data Analysis, Truyen Tran, Dinh Phung and Svetha Venkatesh, in Proc. of. the 4th Asian Conference on Machine Learning (ACML2012), Singapore, Nov 2012.
  • Ordinal Boltzmann Machines for Collaborative Filtering. Truyen Tran, Dinh Q. Phung and Svetha Venkatesh. In Proc. of 25th Conference on Uncertainty in Artificial Intelligence, June, 2009, Montreal, Canada. 
  • Hierarchical Semi-Markov Conditional Random Fields for Recursive Sequential Data, Truyen Tran, Dinh Q. Phung, Hung H. Bui, and Svetha Venkatesh. In Proc. of 21st Annual Conference on Neural Information Processing Systems, Dec 2008, Vancouver, Canada.
  • AdaBoost.MRF: Boosted Markov random forests and application to multilevel activity recognition, Truyen Tran, Dinh Quoc Phung, Hung Hai Bui, and Svetha Venkatesh. In Proc. of  IEEE Conference on Computer Vision and Pattern Recognition, volume Volume 2, pages 1686-1693, New York, USA, June 2006.

Machine learning four years after the turning point


In May 2012 I wrote a note titled "Machine at its turning point", arguing that in the new wave of machine learning we no longer need to worry about having a convex loss, but can rather be happy with non-convex ones. At the time I did not know about AlexNet and its record-breaking result on the ImageNet benchmark, which was published 7 months later at NIPS'12.

AlexNet was truly a turning point for machine learning. It declared the victory of deep neural nets over the alternatives, which were combinations of clever manual feature engineering with variants of SVMs or random forests. AlexNet is remarkable in many ways: Dropout, rectified linear units, end-to-end training on massive data with GPUs, data augmentation, and carefully designed convolutional nets.

It was the year that Yann LeCun posted his complaints about the computer vision community, then quickly retracted his boycott amid the aftershock of AlexNet.

Recently, an interesting comment has been floating around: in machine learning we ask what we can do for neural networks, while in applied domains we ask what neural networks can do for X. And the list of Xs keeps growing, from cognitive to non-cognitive domains. Andrew Ng made an interesting point that any domain where humans can map A to B well in under a second is ripe for machine automation.

This year also marks the 10th anniversary of Deep Belief Nets, the model that announced the beginning of the current wave of neural nets. Early this year, DeepMind's AlphaGo defeated one of the best Go champions 4 to 1, officially ending human superiority in this ancient game. AlphaGo is a mixture of convolutional nets, which read board positions and evaluate moves, and Monte Carlo tree search.

Many things have changed since 2012. It is clear that supervised learning works without pre-training if we have sufficient labels. Unsupervised learning, after an initial burst with Boltzmann machines and autoencoders, failed to deliver. There have been interesting new developments, however, with the Variational Autoencoder (VAE) and Generative Adversarial Nets (GANs), both introduced in 2014. At this point, the GAN is the best technique for generating faithful images; Yann LeCun considers it one of the best ideas of recent years.

The machine learning community has witnessed mini-cycles of 10-15 years: neural networks, graphical models, kernel methods, statistical relational learning and, currently, deep learning. So what is in store for deep learning? If we take 2006 as the beginning of the current wave, then it is already 10 years old, enough for a mini-cycle. But if we take 2012 as the true landmark, then we have 6 more years to count.

Like other methodologies, deep learning will eventually morph into something else in 5 years' time. We may call it by other names. With programming becoming reasonably effortless, and with powerful CPUs/GPUs designed specifically for deep learning, the low-hanging fruit will soon be picked.

Practice-wise, just as feature engineering was the unsung hero of machine learning prior to 2012, architecture engineering is at the core of deep learning these days.

It is also time for the hardcore topics: data efficiency, statistics, geometry, information theory, Bayesian methods and other "serious" subjects. As with any major progress in science and engineering, nothing really happens overnight. At this point, deep learning is already mixed with graphical models, planning, inference, symbolic reasoning, memory, execution and Bayesian methods, among other things. All together, something fancy will happen, just as I noted about Conditional Random Fields years ago: it is the combination of incremental innovations that pushes the boundary of a field to a critical point. This concurs with the idea of emergent intelligence, where human intelligence is really the emergent product of many small advances over our ape ancestors.

For a more comprehensive review, see my recent tutorials at AI'16 on the topic. Some incremental innovations were produced at PRaDA (Deakin University) and are listed below.

Work by us:
  • Multilevel Anomaly Detection for Mixed Data, K Do, T Tran, S Venkatesh, arXiv preprint arXiv:1610.06249.
  • A deep learning model for estimating story points, M Choetkiertikul, HK Dam, T Tran, T Pham, A Ghose, T Menzies, arXiv preprint arXiv:1609.00489.
  • Deepr: A Convolutional Net for Medical Records, Phuoc Nguyen, Truyen Tran, Nilmini Wickramasinghe, Svetha Venkatesh, To appear in IEEE Journal of Biomedical and Health Informatics.
  • Column Networks for Collective Classification, T Pham, T Tran, D Phung, S Venkatesh, AAAI'17
  • DeepSoft: A vision for a deep model of software, Hoa Khanh Dam, Truyen Tran, John Grundy and Aditya Ghose, FSE VaR 2016.
  • Faster Training of Very Deep Networks Via p-Norm Gates, Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh, ICPR'16.
  • Hierarchical semi-Markov conditional random fields for deep recursive sequential data, Truyen Tran, Dinh Phung, Hung Bui, Svetha Venkatesh, To appear in Artificial Intelligence.
  • DeepCare: A Deep Dynamic Memory Model for Predictive Medicine, Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh, PAKDD'16, Auckland, NZ, April 2016. 
  • Neural Choice by Elimination via Highway Networks, Truyen Tran, Dinh Phung and Svetha Venkatesh,  PAKDD workshop on Biologically Inspired Techniques for Data Mining (BDM'16), April 19-22 2016, Auckland, NZ.
  • Tensor-variate Restricted Boltzmann Machines, Tu D. Nguyen, Truyen Tran, D. Phung, and S. Venkatesh, AAAI 2015
  • Thurstonian Boltzmann machines: Learning from multiple inequalities, Truyen Tran, D. Phung, and S. Venkatesh, In Proc. of 30th International Conference in Machine Learning (ICML’13), Atlanta, USA, June, 2013.

Sunday, 18 December 2016

Caring deeper: motif detection from medical records using convolutional nets


In the previous post, I introduced DeepCare, an LSTM-based model for (potentially very long) medical records with irregular timing and treatments.

Here I introduce another deep net called Deepr, a CNN-based model for relatively short medical records. Its main purpose is to learn to discover medical motifs that lead to some future event (e.g., death).

Unlike DeepCare, which assumes clear temporal dynamics in the medical record, Deepr requires only repeated short patterns (motifs) over the data sequence. Time gaps are discretized into symbols, which are treated in the same way as diagnoses, procedures and medications. All symbols are then arranged into a sequence; symbols that co-occur are randomly ordered.
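Below is a toy Python sketch of this preprocessing step; the gap bins and the diagnosis codes are made up for illustration and are not those used in the paper.

```python
import random

def gap_symbol(days):
    """Discretize an inter-visit gap into a symbol (bins are illustrative)."""
    if days < 30: return "GAP<1m"
    if days < 90: return "GAP1-3m"
    if days < 365: return "GAP3-12m"
    return "GAP>1y"

def record_to_sequence(visits, rng=random.Random(0)):
    """visits: list of (day, [codes]) sorted by time. Codes within one
    visit co-occur, so they are randomly ordered; time gaps become
    ordinary tokens in the same vocabulary."""
    seq, prev_day = [], None
    for day, codes in visits:
        if prev_day is not None:
            seq.append(gap_symbol(day - prev_day))
        codes = list(codes)
        rng.shuffle(codes)  # no natural order within a visit
        seq.extend(codes)
        prev_day = day
    return seq

visits = [(0, ["E11", "I10"]), (45, ["N18"]), (450, ["I10", "Z95"])]
print(record_to_sequence(visits))
# e.g. ['I10', 'E11', 'GAP1-3m', 'N18', 'GAP>1y', 'Z95', 'I10']
```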

Once Deepr has been trained, the motif segments in a record that respond strongly to an outcome can be detected.

Note that Deepr can be used in other situations where irregular time gaps and discrete data are present.

Updated references

  • Deepr: A Convolutional Net for Medical Records, Phuoc Nguyen, Truyen Tran, Nilmini Wickramasinghe, Svetha Venkatesh, IEEE Journal of Biomedical and Health Informatics, 2017.
  • DeepCare: A Deep Dynamic Memory Model for Predictive Medicine, Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh, PAKDD'16, Auckland, NZ, April 2016.

Saturday, 17 December 2016

Caring deeply: Intervened long short-term memory for medical records


In the US, healthcare expenditure accounts for approximately 18% of GDP, almost twice as much as in Australia. And the percentage keeps growing. One possible explanation is that after having cheap, accessible everything, what people want to spend more and more on is their own health.

Given this big fat cake, it is no surprise that healthcare is the next target of the current AI wave. At present, startups pop up every week, all hoping to claim a big share.

Central to modern healthcare systems are Electronic Medical Records (EMRs), the personal database of every encounter with the healthcare system, usually consisting of information about diseases, treatments, billing, measurements, social care and more. EMRs carry the promise of modern healthcare: improved efficiency, accessibility and personalized medicine.

We will focus our attention on predictive medicine, a new approach that is not just about diagnosis (what is happening now) but also about prognosis (what will happen if we do X). Not surprisingly, to predict the future we need to study the past. Ultimately, we end up modeling the entire health trajectory since birth (where the data is available).

Two things that make EMRs a modeling challenge are:

  • Data are episodic. Data is recorded only when the patient turns up at a clinic or hospital. There are time gaps in between, and the gaps are irregular.
  • There is "care" in healthcare, that is, interventions by clinicians. Treatments disrupt the natural course of the health trajectory. Treatments are supposed to lessen or eliminate the illness, but medical errors also occur, making the illness worse.

Our recent model, DeepCare, is a deep architecture that directly models the effects of irregular time gaps and of treatments. It modifies the gates of the popular Long Short-Term Memory (LSTM). "Memory" plays a great role here because the records contain weak and irrelevant information, and we do not know which is which! The LSTM is great because it can decide to ignore or keep new information, as well as to forget or keep the old illness memory. A rough sketch of the time-gap idea follows.
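Below is a toy NumPy sketch of the time-gap idea: an otherwise standard LSTM step whose forget gate is damped as the gap since the previous visit grows, so old illness memory decays over long absences. The decay function and all sizes are illustrative assumptions; DeepCare's actual formulation, including how treatments modify the gates, is in the paper.

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def time_aware_lstm_step(x, dt, h, c, P):
    """One LSTM step with a time-damped forget gate.
    x: input vector, dt: elapsed time since the previous record."""
    z = np.concatenate([x, h])
    i = sigmoid(z @ P["Wi"] + P["bi"])  # input gate
    f = sigmoid(z @ P["Wf"] + P["bf"])  # forget gate
    f = f / (1.0 + np.log(1.0 + dt))    # decay memory over the gap (assumed form)
    o = sigmoid(z @ P["Wo"] + P["bo"])  # output gate
    c = f * c + i * np.tanh(z @ P["Wc"] + P["bc"])
    return o * np.tanh(c), c

# Toy run: 8 input features, 16 hidden units, two visits 1 and 6 months apart.
rng = np.random.default_rng(0)
dx, dh = 8, 16
P = {f"W{g}": 0.1 * rng.normal(size=(dx + dh, dh)) for g in "ifoc"}
P.update({f"b{g}": np.zeros(dh) for g in "ifoc"})
h, c = np.zeros(dh), np.zeros(dh)
for dt in (1.0, 6.0):
    h, c = time_aware_lstm_step(rng.normal(size=dx), dt, h, c, P)
```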

What can DeepCare do? Think of treatment recommendation, disease progression prediction, readmission prediction, attributing past illness to future events, and more. Check out the paper here.

Updated references

  • DeepCare: A Deep Dynamic Memory Model for Predictive Medicine, Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh, PAKDD'16, Auckland, NZ, April 2016.
  • Deepr: A Convolutional Net for Medical Records, Phuoc Nguyen, Truyen Tran, Nilmini Wickramasinghe, Svetha Venkatesh, IEEE Journal of Biomedical and Health Informatics, 2017.



Saturday, 10 December 2016

Tutorial on deep learning and applications in non-cognitive domains


In Dec 2016, I delivered a tutorial on deep learning and its applications in non-cognitive domains at AusDM'16. The materials are accessible here.