Learn about Neural Turing Machines (NTMs)

Introduction

In the early years of artificial intelligence, neural networks were considered largely unpromising. From the 1950s until the late 1980s, the field was dominated by a symbolic approach that tried to explain how information processing systems, like the human brain, function by manipulating symbols, structures, and rules. In 1986, symbolic AI was challenged by a serious alternative theory, then called Parallel Distributed Processing and today more commonly known as connectionism. You may not be familiar with the approach by name, but you are probably familiar with its most famous modeling technique: artificial neural networks.

Two criticisms have been made of neural networks as tools for understanding intelligence.

• First, neural networks take fixed-size inputs, which makes it hard for them to handle problems whose inputs vary in size.

• Second, neural networks seem unable to bind values to specific locations within a data structure. This ability to write to and read from memory is fundamental to both brains and computers.

What are some answers to these two criticisms?

Recurrent neural networks (RNNs) address the first criticism. When processing variable-size input, such as handwritten text, an RNN consumes a fixed-size input at each time step, for as many time steps as the input requires. In response to the second criticism, Graves et al. augment a neural network with an external memory that it learns to read from and write to. These systems are known as Neural Turing Machines (NTMs).
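To make the first point concrete, here is a minimal sketch of a vanilla RNN in Python with NumPy (illustrative only, not code from any paper): the same fixed-size weights are applied once per time step, so sequences of any length can be processed.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def run_rnn(sequence):
    """Process a sequence of any length with the same fixed-size weights."""
    h = np.zeros(hidden_size)
    for x in sequence:                          # one fixed-size input per step
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # standard vanilla-RNN update
    return h                                    # final state summarizes the input

short_seq = rng.normal(size=(3, input_size))    # a 3-step input
long_seq = rng.normal(size=(10, input_size))    # a 10-step input
print(run_rnn(short_seq).shape, run_rnn(long_seq).shape)  # both (8,)
```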

A Neural Turing Machine (NTM) is a recurrent neural network model of a Turing machine, published by Alex Graves et al. in 2014. NTMs combine the fuzzy pattern-matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end to end, making it possible to optimize them using gradient descent. An NTM with a long short-term memory (LSTM) controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.
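The core of this differentiability is the "blurry" read: rather than selecting a single memory slot, a head attends to every slot with normalized weights and returns their weighted sum. A minimal sketch, with random numbers standing in for controller outputs and variable names of my own choosing:

```python
import numpy as np

N, M = 128, 20                   # memory: N slots, each a vector of width M
memory = np.random.randn(N, M)
controller_logits = np.random.randn(N)  # emitted by the controller in a real NTM
w = np.exp(controller_logits)
w /= w.sum()                     # softmax: attention weights that sum to 1
read_vector = w @ memory         # r = sum_i w(i) * memory(i), shape (M,)
```

Because the read vector is a smooth function of both the weights and the memory, gradients flow through it just as they do through any other network layer.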

What is the architecture of a Neural Turing Machine?

Fundamentally, an NTM is composed of a neural network, called the controller, and a 2D matrix called the memory bank. At each time step, the neural network receives some input from the outside world and sends some output back out. It can also read from and write to locations in the memory matrix. Graves et al. call the components that perform these memory operations "heads", borrowing the term from the traditional Turing machine. The dotted line in the image below marks the boundary of the system: everything inside it is the NTM, and everything outside is the external world.


Image source: Rylan Schaeffer, Harvard University
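To ground the picture, here is a hedged sketch of two pieces of the head mechanics from the paper: content-based addressing and the erase-then-add write. The location-based shift addressing is omitted, and all names and values are illustrative, not the authors' code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_weights(memory, key, beta):
    """Focus on memory rows similar to the controller's key (cosine similarity)."""
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    return softmax(beta * sims)   # beta sharpens or flattens the focus

def write(memory, w, erase, add):
    """Graves et al.'s write: a weighted erase followed by a weighted add."""
    memory = memory * (1.0 - np.outer(w, erase))
    return memory + np.outer(w, add)

N, M = 128, 20
memory = np.random.randn(N, M) * 0.1
key = np.random.randn(M)          # in a real NTM, key and beta come from the controller
w = content_weights(memory, key, beta=5.0)
memory = write(memory, w, erase=np.full(M, 0.5), add=np.random.randn(M))
```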

What are some practical problems with NTMs?

NTMs tend to be very numerically unstable. This is partly due to the tasks they were designed for: when a network is learning an algorithm, a mistake is not a small error but a wrong answer, since an incorrectly learned algorithm produces incorrect output. In practice, this means NTMs have difficulty figuring out the required algorithm during training. Given enough data and time, ordinary neural networks generally converge to an answer; Neural Turing Machines frequently get stuck. For example, they often learn to simply repeat one value over and over. The underlying reason is that learning to use memory is hard.

On top of learning to remember what is needed to solve a task, they must also learn not to forget it accidentally later on, which adds another layer of difficulty. To overcome this, you need the intelligent optimization techniques commonly used for recurrent neural networks, and in practice you have to throw everything at them to get them to work. As a result, they are challenging to use routinely.
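As one concrete example, the sketch below shows gradient norm clipping in PyTorch, a standard stabilizer for recurrent training; the stand-in LSTM controller, hyperparameters, and training loop are assumptions for illustration, not the original setup.

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=8, hidden_size=100)    # stand-in controller
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def training_step(inputs, targets):
    optimizer.zero_grad()
    outputs, _ = model(inputs)
    loss = loss_fn(outputs, targets)
    loss.backward()
    # Rescale gradients whose global norm exceeds 10 so that a single huge
    # error (common when an algorithm is learned wrongly) cannot blow up
    # the weights.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    optimizer.step()
    return loss.item()

# Example call with dummy data: sequence length 5, batch 2, feature size 8.
x = torch.randn(5, 2, 8)
y = torch.randn(5, 2, 100)
print(training_step(x, y))
```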

Wrap up

Neural Turing Machines owe their name to Alan Turing's classic model of computation. By marrying that model with neural networks, NTMs let machine learning systems learn algorithms capable of accessing external memories.

Thanks to Alan Turing, who made huge contributions to the fields of cognitive science, computer science, artificial intelligence, and artificial life.

We hope you found the article informative. For more such articles delivered directly to your mailbox, subscribe.
Happy Learning!