Saturday 25 February 2017

Column bundle: a single model for multiple multi-X


Supervised machine learning has a few recurring concepts: data instance, feature set and label. Often, a data instance has one feature set and one label. But there are situations where you have multi-[X], where X = instance, view (feature subset), or label. For example, in multiple instance learning, you have more than one instance but only one label.

Things get interesting when you have multiple instances, multiple views and multiple labels at the same time. For example, a video clip can be considered as a set of video segments (instances), each of which has several views (audio, visual frames and perhaps textual subtitles), and the clip as a whole carries many tags (labels).
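To make this setting concrete, here is a minimal Python sketch of what a single such data point could look like. The class and field names (MultiXClip, instances, labels) are illustrative only, not taken from any paper.

from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class MultiXClip:
    """One data point in the multi-instance, multi-view, multi-label setting."""
    # Instances (video segments); each instance holds one feature vector per view.
    instances: List[Dict[str, np.ndarray]] = field(default_factory=list)
    # Labels (tags) attached to the clip as a whole.
    labels: List[str] = field(default_factory=list)


# A toy clip: two segments, three views per segment, two tags for the whole clip.
clip = MultiXClip(
    instances=[
        {"audio": np.random.randn(128),
         "visual": np.random.randn(2048),
         "subtitle": np.random.randn(300)},
        {"audio": np.random.randn(128),
         "visual": np.random.randn(2048),
         "subtitle": np.random.randn(300)},
    ],
    labels=["music", "concert"],
)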

Enter Column Bundle (CLB), the latest invention in my group.

CLB borrows the concept of columns in the neocortex. In the brain, neurons are arranged in thin mini-columns, each of which is thought to cover a small sensory area called a receptive field. Mini-columns are bundled into super-columns, which are inter-connected to form the entire neocortex. In our previous work, this cute concept was exploited to build a network of columns for collective classification. For CLB, columns are arranged in a special way:

  • There is one central column that serves as the main processing unit (CPU).
  • There are input mini-columns that read the multiple input parts (Input).
  • There are output mini-columns that generate the labels (Output).
  • Mini-columns are connected only to the central column.
Columns are recurrent neural nets with skip-connections (e.g., Highway Net, Residual Net or LSTM). Input parts can be instances or views; the only difference lies in the feature mapping, where different views are first mapped into the same space.
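To give a feel for the wiring, below is a rough PyTorch sketch of this layout. It is illustrative only: the GRU cells, the hidden size, the simple averaging over input mini-columns and the per-label sigmoid readouts are assumptions made for the sketch, not the exact CLB formulation.

import torch
import torch.nn as nn


class ColumnBundleSketch(nn.Module):
    """Illustrative Column Bundle-style layout (not the exact CLB model)."""

    def __init__(self, view_dims, n_labels, hidden=64, n_steps=3):
        super().__init__()
        self.hidden = hidden
        self.n_steps = n_steps
        # Different views are first mapped into the same space.
        self.view_maps = nn.ModuleList(nn.Linear(d, hidden) for d in view_dims)
        # Input mini-columns, the central column and output mini-columns are
        # recurrent cells with gating (GRU here, standing in for Highway/LSTM).
        self.input_cols = nn.ModuleList(nn.GRUCell(hidden, hidden) for _ in view_dims)
        self.central_col = nn.GRUCell(hidden, hidden)
        self.output_cols = nn.ModuleList(nn.GRUCell(hidden, hidden) for _ in range(n_labels))
        self.readouts = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_labels))

    def forward(self, parts):
        # parts: list of tensors, one per input part, each of shape (batch, view_dim)
        batch, device = parts[0].size(0), parts[0].device
        h_in = [torch.zeros(batch, self.hidden, device=device) for _ in self.input_cols]
        h_c = torch.zeros(batch, self.hidden, device=device)
        h_out = [torch.zeros(batch, self.hidden, device=device) for _ in self.output_cols]

        for _ in range(self.n_steps):
            # Input mini-columns read their own part, conditioned on the central state.
            h_in = [col(self.view_maps[i](parts[i]) + h_c, h_in[i])
                    for i, col in enumerate(self.input_cols)]
            # The central column aggregates the input mini-columns (mean as a simple choice).
            h_c = self.central_col(torch.stack(h_in).mean(dim=0), h_c)
            # Output mini-columns read only from the central column.
            h_out = [col(h_c, h_out[k]) for k, col in enumerate(self.output_cols)]

        # One sigmoid per label, i.e., multi-label prediction.
        return torch.cat([torch.sigmoid(r(h)) for r, h in zip(self.readouts, h_out)], dim=1)


# Example: three views (say audio, visual and subtitle embeddings) and five tags.
model = ColumnBundleSketch(view_dims=[128, 2048, 300], n_labels=5)
probs = model([torch.randn(4, 128), torch.randn(4, 2048), torch.randn(4, 300)])  # (4, 5)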

In a sense, it looks like a neural computer without RAM.


Sunday 19 February 2017

Living in the future: AI for healthcare



In a not-so-distant future, it will be routine to chat with a machine and receive medical advice from it. In fact, many of us have already done this - seeking advice from healthcare sites, asking questions online and being pointed to known answers by algorithms. The current wave of AI will only accelerate this trend.

Medicine is by and large a discipline of information, where knowledge is highly asymmetric between doctors and patients. Doctors do the job well because humans are all alike, so cases can be documented in medical textbooks, and findings can be shared in journal articles and validated by others. In other words, medical knowledge is statistical, leading to so-called evidence-based medicine (EBM). And this is exactly why the current breed of machine learning - deep learning - will do well in the majority of cases.

Predictive medicine

In Yann LeCun's words, the future of AI rests on predictive learning, which is basically another way of saying unsupervised learning. Technically, this is the capability to fill in missing slots. For those familiar with probabilistic graphical models, it is akin to computing a pseudo-likelihood, i.e., estimating the values of some variables given the rest.
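For concreteness, the pseudo-likelihood of a joint model over variables x_1, ..., x_D is the product of the conditionals of each variable given all the others - each factor is exactly one "fill in the missing slot" query:

\mathrm{PL}(x_1, \dots, x_D) \;=\; \prod_{d=1}^{D} p\!\left(x_d \mid x_1, \dots, x_{d-1}, x_{d+1}, \dots, x_D\right)

Maximizing this product rather than the full likelihood sidesteps the intractable normalization constant, which is why it is a popular surrogate in graphical models.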

A significant part of medicine is inherently predictive. One part is diagnosis - finding out what is happening now; the other is prognosis - figuring out what will happen if an action is taken (or not taken). While it is fair to say diagnosis is quite advanced, prognosis has a long way to go.

To my surprise as a machine learning practitioner, doctors are unreasonably poor at predicting the future, especially when it comes to mental health and genomics. Doctors are, however, excellent at explaining results after the fact. In machine learning terms, their models can practically fit anything but do not generalize well. This must come from the know-it-all culture, where medical knowledge is limited to only a handful of people, and doctors are obliged to explain what has happened to the poor patients.

Physical health

The human body is a physical (and to some extent, statistical) system, and hence it follows physical laws. Physiological processes can, in theory, be fully understood and predicted - at least in a closed environment. What is hard to predict are the (results of) interactions with the open environment: viral infections and car accidents, for example, are hardly predictable. Hence, physical health is predictable only up to an accuracy limit, beyond which computers have no hope of predicting. So don't expect performance close to what we have seen in object recognition.

Mental health

Mental health is hard. No one can really tell what happens inside your brain, even with it opened up. With tens of billions of neurons and hundreds of trillions of connections between them giving rise to mental processes, the complexity of the brain is beyond human reach at present. But mental health never goes alone: it goes hand-in-hand with physical health. A poor physical condition is likely to worsen a mental condition, and vice versa.

A good sign is that mental health is going computational. There is an emerging field called Computational Psychiatry, and its community is surprisingly open to new technological ideas.

The future

AI is also eating into healthcare, with hundreds of startups popping up each month around the world. So what can we expect within the next five years?
  • Medical imaging diagnosis. This is perhaps the most ready space, due to the availability of affordable imaging options (CT scan, ultrasound, fMRI, etc.) and recent advances in computer vision, thanks to convolutional nets. One interesting form is microscopy imaging diagnosis, since getting images from microscopes can be quite cheap. Another is facial diagnosis - it turns out that many diseases manifest through facial expression.
  • Medical text will be better understood. There are several types of text: doctor narratives in medical records, user-generated medical text online, social health platforms, and medical research articles. This field will take more time to take off, but given the high concentration of NLP talent at present, we have reason to hope.
  • Cheap, fast sequencing techniques. Sequencing cost has recently come down to the historic milestone of $1,000, and we have reason to believe it will drop to $100 in the not-too-distant future. For example, nanopore sequencing is emerging, and sequencing based on signal processing will improve significantly.
  • Faster and better understanding of genomics. Once sequencing reaches a critical mass, our understanding of it will be accelerated by AI. Check out, for example, the work of this Toronto professor, Brendan Frey.
  • Clinical data sharing will remain a bottleneck for years to come. Unless we have access to massive clinical databases, things will move very slowly in clinical settings. But machine learning will have to work in data-efficient regimes, too.

Beyond five years, it is far more difficult to predict. Some of the following are still in the realm of sci-fi.
  • Automation of drug discovery. Drug chemical and biological properties will be estimated accurately by machines. The search for a drug with a desired function will be accelerated a hundredfold.
  • A full dialog system for diagnosis and treatment recommendation. You won't need to see a doctor and pay $100 for a mere 10-minute consultation; you will get a thorough consultation for free.
  • M-health, with remote robotic surgery.
  • Brain-machine interfacing, where humans will rely on machines for high-bandwidth communication. This idea is from my favorite technologist, Elon Musk.
  • Nano chips will enter the body in the millions, kill the nasty bugs, repair the damage and get out without being rejected by the immune system. This idea is from the 2005 book The Singularity Is Near by my favorite futurist, Ray Kurzweil.
  • Robot doctors will be licensed, just like self-driving cars are now.
  • Patients will be in control. No more know-it-all doctors. Patients will have full knowledge of their own health. This implies that things must be explainable, and patients must be educated about their own biology and mental state.

However, like everything else, this is easier imagined than done. Don't forget that AI in Medicine (AIIM) is a very old journal, and nothing truly magical has happened yet.

What we do

At PRaDA (Deakin University, Australia), we have our own share in this space. Some of our most recent contributions (as of 07/09/2018) are:
  • Healthcare processes as sequence of sets (2018), where we model the dynamic interaction between diseases and treatments.
  • Healthcare as Turing computational (2018), where we show that health processes can be modeled as a probabilistic Turing machine. Also see here.
  • Drug multiple repurposing (2018), where we predict the effect of a drug on multiple targets (proteins and diseases).
  • Predicting drug response from molecular structure (2017), where we use molecular structure to compute a drug representation, which is then used to predict its bioactivity against a disease. UPDATE (07/09/2018): a new version is here.
  • Attend to temporal ICU risk (2017), where we figure out a way to deal with ICU time series, which are irregular and mostly missing. The work will be in the public domain soon.
  • Matrix-LSTM (2017) for EEG, where we capture the tensor-like nature of EEG signals over time.
  • DeepCare (2016), where we model the course of health trajectory, which is occasionally intervened at irregular time.
  • Deepr (2016), where we aim to discover explainable predictive motifs through CNN.
  • Anomaly detection (2016), where we discover outliers in healthcare data, which is inherently mixed-type.
  • Stable risk discovery through Autoencoder (2016), where we discover structure among risk factors.
  • Generating stable prediction rules (2016), where we demonstrate that simple, and statistically stable rules can be uncovered from lots of administrative data for preterm-birth prediction at 25 weeks of gestation.
  • eNRBM (2015): understanding the group formation of medical concepts through competitive learning and prior medical knowledge.