Monday, 19 December 2016

Machine learning four years after the turning point


In May 2012 I wrote a note titled "Machine learning at its turning point", arguing that a new wave of machine learning had arrived: we no longer need to insist on a convex loss, and should instead be happy to work with non-convex ones. At the time I did not know about AlexNet and its record-breaking result on the ImageNet benchmark, which was published seven months later at NIPS'12.

AlexNet was truly a turning point for machine learning. It declared the victory of deep neural nets over the then-dominant alternatives, which were combinations of clever manual feature engineering with variants of SVMs or random forests. AlexNet is remarkable in many ways: Dropout, rectified linear units (ReLU), end-to-end training on massive data with GPUs, data augmentation, and a carefully designed convolutional net.
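To make that recipe concrete, here is a toy convolutional net in the AlexNet spirit, written as a minimal PyTorch sketch. This is my own illustration, not the original code: the layer sizes are made up, and the 2012 net was far larger, split across two GPUs, and trained with heavy data augmentation.

```python
# A toy AlexNet-flavoured net (a sketch, not the 2012 architecture):
# convolution + ReLU for features, pooling for downsampling,
# and Dropout for regularisation. All sizes are illustrative.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4), nn.ReLU(),   # large first-layer filters, as in AlexNet
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5),                      # the Dropout trick mentioned above
    nn.Linear(192 * 12 * 12, 1000),       # 12x12 feature maps here; 1000 ImageNet classes
)

x = torch.randn(1, 3, 224, 224)           # one fake 224x224 RGB image
print(net(x).shape)                       # torch.Size([1, 1000])
```

Dropout and ReLU, so exotic in 2012, are now one-liners in any framework.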

It was also the year that Yann LeCun posted his complaints about the computer vision community, though he quickly retracted his boycott in the aftershock of AlexNet.

Recently, an interesting comment has been floating around: in machine learning, we ask what we can do for neural networks, while in applied domains, we ask what neural networks can do for X. And the list of Xs keeps growing, from cognitive domains to non-cognitive ones. Andrew Ng made an interesting point: any mapping from A to B that a human can do well in less than a second is ripe for machine automation.

This year also marks ten years since Deep Belief Nets, the model that announced the beginning of the current wave of neural nets. Early this year, DeepMind's AlphaGo defeated Lee Sedol, one of the best Go players in the world, 4 to 1, officially ending human superiority at this ancient game. AlphaGo mixes convolutional nets, which read board positions and evaluate moves, with randomised tree search over candidate moves.
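The division of labour is easier to see in code. Below is a deliberately crude sketch of the idea, my own simplification and not DeepMind's method: a policy net proposes candidate moves, a value net scores the resulting positions, and a search, here reduced to a single ply on a made-up 9x9 board, picks the best. The real system searches many plies with Monte Carlo tree search and rollouts.

```python
# A highly simplified sketch of the AlphaGo idea (illustrative only):
# policy net proposes moves, value net evaluates positions, shallow search decides.
import torch
import torch.nn as nn

BOARD = 9  # toy board; the real game is 19x19

policy_net = nn.Sequential(   # board planes -> probability over moves
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * BOARD * BOARD, BOARD * BOARD),
    nn.Softmax(dim=-1),
)
value_net = nn.Sequential(    # board planes -> scalar win estimate in [-1, 1]
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * BOARD * BOARD, 1), nn.Tanh(),
)

def pick_move(board: torch.Tensor, top_k: int = 5) -> int:
    """One-ply lookahead: try the policy net's top-k moves and keep the one
    the value net likes best. Real MCTS searches far deeper than this."""
    with torch.no_grad():
        probs = policy_net(board.unsqueeze(0)).squeeze(0)
        best_move, best_value = -1, -float("inf")
        for move in probs.topk(top_k).indices.tolist():
            nxt = board.clone()
            nxt[0, move // BOARD, move % BOARD] = 1.0  # naively "play a stone"
            v = value_net(nxt.unsqueeze(0)).item()
            if v > best_value:
                best_move, best_value = move, v
        return best_move

empty = torch.zeros(1, BOARD, BOARD)  # single input plane, empty board
print("suggested move index:", pick_move(empty))
```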

Many things have changed since 2012. It is clear that supervised learning works without pre-training, provided we have sufficient labels. Unsupervised learning, after an initial burst with Boltzmann machines and autoencoders, failed to deliver. There are interesting new developments, however, with the Variational Autoencoder (VAE) and Generative Adversarial Nets (GAN), both introduced in 2014. At this point, GAN is the best technique for generating faithful images; Yann LeCun considers it one of the best ideas in recent years.
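The adversarial game at the heart of GAN fits in a few lines. Here is a minimal sketch on toy 1-D data, an illustration of the idea rather than the recipe from the 2014 paper; all sizes and learning rates are arbitrary. The generator maps noise to samples, the discriminator separates real from fake, and each is trained against the other.

```python
# A minimal GAN on toy 1-D data (illustrative sketch).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "data": Gaussian around 2.0
    fake = G(torch.randn(64, 8))            # generator samples from noise

    # Train D to label real as 1 and fake as 0 (fake detached so G is untouched)
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train G to make D label its samples as 1
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # drifts toward 2.0
```

After a couple of thousand steps the generated mean should drift towards the data mean; getting GANs to do this reliably on real images is, of course, much harder.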

The machine learning community has witnessed mini-cycles of 10-15 years: neural networks, graphical models, kernel methods, statistical relational learning and, currently, deep learning. So what is next for deep learning? If we take 2006 as the beginning of the current deep learning wave, then it has already run 10 years, enough for a mini-cycle. But if we take 2012 as the true landmark, then we have six more years to count.

Like other methodologies, deep learning will eventually morph into something else in five years' time; we may call it by other names. With programming becoming reasonably effortless, and with powerful CPUs/GPUs designed specifically for deep learning, the low-hanging fruit will soon be picked.

Practice-wise, just as feature engineering was the unsung hero of machine learning prior to 2012, architecture engineering is at the core of deep learning these days.

It is also time for the hardcore topics: data efficiency, statistics, geometry, information theory, Bayesian methods and other "serious" subjects. As with any major progress in science and engineering, nothing really happens overnight. At this point, deep learning is already being mixed with graphical models, planning, inference, symbolic reasoning, memory and execution, among other things. Altogether, something remarkable will happen, just as I noted about Conditional Random Fields years ago: it is the combination of incremental innovations that pushes the boundary of a field to a critical point. It also concurs with the idea of emergent intelligence, where human intelligence is really the emerging product of many small advances over apes.

For a more comprehensive review, see my recent tutorials at AI'16 on the topic. Some of the incremental innovations, produced at PRaDA (Deakin University), are listed below.

Work by us:
  • Multilevel Anomaly Detection for Mixed Data, K. Do, T. Tran, S. Venkatesh, arXiv preprint arXiv:1610.06249.
  • A Deep Learning Model for Estimating Story Points, M. Choetkiertikul, H. K. Dam, T. Tran, T. Pham, A. Ghose, T. Menzies, arXiv preprint arXiv:1609.00489.
  • Deepr: A Convolutional Net for Medical Records, P. Nguyen, T. Tran, N. Wickramasinghe, S. Venkatesh, to appear in IEEE Journal of Biomedical and Health Informatics.
  • Column Networks for Collective Classification, T. Pham, T. Tran, D. Phung, S. Venkatesh, AAAI'17.
  • DeepSoft: A Vision for a Deep Model of Software, H. K. Dam, T. Tran, J. Grundy, A. Ghose, FSE VaR 2016.
  • Faster Training of Very Deep Networks via p-Norm Gates, T. Pham, T. Tran, D. Phung, S. Venkatesh, ICPR'16.
  • Hierarchical Semi-Markov Conditional Random Fields for Deep Recursive Sequential Data, T. Tran, D. Phung, H. Bui, S. Venkatesh, to appear in Artificial Intelligence.
  • DeepCare: A Deep Dynamic Memory Model for Predictive Medicine, T. Pham, T. Tran, D. Phung, S. Venkatesh, PAKDD'16, Auckland, NZ, April 2016.
  • Neural Choice by Elimination via Highway Networks, T. Tran, D. Phung, S. Venkatesh, PAKDD Workshop on Biologically Inspired Techniques for Data Mining (BDM'16), April 19-22, 2016, Auckland, NZ.
  • Tensor-variate Restricted Boltzmann Machines, T. D. Nguyen, T. Tran, D. Phung, S. Venkatesh, AAAI'15.
  • Thurstonian Boltzmann Machines: Learning from Multiple Inequalities, T. Tran, D. Phung, S. Venkatesh, Proc. of the 30th International Conference on Machine Learning (ICML'13), Atlanta, USA, June 2013.
