AI Machine Learning & Data Science Research

Toward a Turing Machine? Microsoft & Harvard Propose Neural Networks That Discover Learning Algorithms Themselves

A research team from Microsoft and Harvard University demonstrates that neural networks can discover succinct learning algorithms on their own in polynomial time, and presents an architecture that combines recurrent weight-sharing across layers with convolutional weight-sharing within layers to reduce the trainable parameter count, even for networks with trillions of nodes, down to a constant.

Speaking at the London Mathematical Society in 1947, Alan Turing seemed to anticipate the current state of machine learning research: “What we want is a machine that can learn from experience . . . like a pupil who had learnt much from his master, but had added much more by his own work.”

Although neural networks (NNs) have demonstrated impressive learning power in recent years, they still fail to outperform human-designed learning algorithms. A question naturally arises: Can NNs be made to discover efficient learning algorithms on their own?

In the new paper Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms, a research team from Microsoft and Harvard University demonstrates that NNs can discover succinct learning algorithms on their own in polynomial time, and presents an architecture that combines recurrent weight-sharing across layers with convolutional weight-sharing within layers to reduce models' trainable parameter count, even for networks with trillions of nodes, down to a constant.

The team’s proposed neural network architecture comprises a dense first layer of size linear in m (the number of samples) and d (the dimension of the input). This layer’s output is fed into an RCNN (with recurrent weight-sharing across depth and convolutional weight-sharing across width), and the RCNN’s final outputs are then passed through a pixel-wise NN and summed to produce a scalar prediction.

The team’s key contribution is the design of this simple recurrent convolutional neural network (RCNN) architecture. By combining recurrent weight-sharing across layers with convolutional weight-sharing within each layer, it reduces the number of free weights in the convolutional filter to a handful, even a constant, while those few shared weights still determine the activations of a very wide and deep network.
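To make the weight-sharing idea concrete, here is a minimal NumPy sketch of the forward pass described above: a dense first layer of size linear in m and d, a recurrent convolutional stage that reuses one constant-size filter at every depth step, and a pixel-wise map summed to a scalar. All sizes, the tanh nonlinearity, and the per-position linear output map are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (not from the paper).
m, d = 8, 16          # number of samples, input dimension
width = 32            # width of the recurrent convolutional stage
depth = 10            # number of recurrent (weight-shared) steps
k = 3                 # constant-size convolutional filter

# Dense first layer: size linear in m and d.
W_in = rng.standard_normal((width, m * d)) / np.sqrt(m * d)

# One filter reused at every depth step: recurrent sharing across
# depth, convolutional sharing across width. Its k weights are the
# only parameters of the entire deep stage, however large depth is.
w_conv = rng.standard_normal(k) / np.sqrt(k)

# Pixel-wise map (here a per-position linear weight) followed by a sum.
w_out = rng.standard_normal(width) / np.sqrt(width)

def conv_same(h, w):
    """'same'-padded 1-D convolution: every position shares the filter w."""
    return np.convolve(h, w, mode="same")

def forward(samples):
    """samples: (m, d) array -> scalar prediction."""
    h = np.tanh(W_in @ samples.reshape(-1))   # dense first layer
    for _ in range(depth):                    # recurrent weight-sharing:
        h = np.tanh(conv_same(h, w_conv))     # the same filter at each layer
    return float(w_out @ h)                   # pixel-wise map + sum

x = rng.standard_normal((m, d))
y = forward(x)
print(y)
```

Note that the deep recurrent stage contributes only k = 3 parameters regardless of its depth or width, which is the sense in which the filter size can stay constant while the network itself is very wide and deep.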

Overall, the study demonstrates that a simple NN architecture can effectively achieve Turing-optimality, meaning it learns as well as any bounded learning algorithm. The researchers believe that reducing the size of the dense parameters to depend on the algorithm’s memory usage rather than the training sample size, and using stochastic gradient descent (SGD) beyond memorization, could make the architecture even more concise and natural. In future work, they plan to explore other combinations of architectures, initializations, and learning rates to improve understanding of which are Turing-optimal.

The paper Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms is on arXiv.


Author : Hecate He | Editor : Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
