Tuesday, February 14, 2023

Researchers Discover a More Flexible Approach to Machine Learning | Quanta Magazine

By Steve Nadis | February 7, 2023

While the algorithms at the heart of traditional networks are set during training, when these systems are fed reams of data to calibrate the best values for their weights, liquid neural nets are more adaptable. “They’re able to change their underlying equations based on the input they observe,” specifically changing how quickly neurons respond, said Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Laboratory.
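As a concrete sketch of what that means (this is an editorial illustration with made-up parameter names, not the group's published code), a liquid time-constant neuron can be written as an ordinary differential equation whose decay rate is itself a function of the input, so the dynamics the network obeys shift with what it sees:

```python
import numpy as np

def ltc_dxdt(x, I, w_tau, A, W, b):
    """Right-hand side of a liquid time-constant (LTC) style ODE,
    dx/dt = -(w_tau + f(x, I)) * x + f(x, I) * A.
    Because the gate f depends on the current input I, the effective
    decay rate, i.e. how quickly each neuron responds, shifts as the
    input changes. Parameter names here are illustrative."""
    f = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([x, I]) + b)))  # input-dependent gate
    return -(w_tau + f) * x + f * A

# Toy usage: 3 neurons driven by 2 inputs.
rng = np.random.default_rng(0)
x = np.zeros(3)
I = np.array([0.5, -0.2])
W, b = rng.normal(size=(3, 5)), np.zeros(3)
print(ltc_dxdt(x, I, w_tau=0.5, A=1.0, W=W, b=b))
```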

One early test to showcase this ability involved attempting to steer an autonomous car. A conventional neural network could only analyze visual data from the car’s camera at fixed intervals. The liquid network — consisting of 19 neurons and 253 synapses (making it minuscule by the standards of machine learning) — could be much more responsive. “Our model can sample more frequently, for instance when the road is twisty,” said Rus, a co-author of this and several other papers on liquid networks.

The model successfully kept the car on track, but it had one flaw, Mathias Lechner said: “It was really slow.” The problem stemmed from the nonlinear equations representing the synapses and neurons — equations that usually cannot be solved without repeated calculations on a computer, which goes through multiple iterations before eventually converging on a solution. This job is typically delegated to dedicated software packages called solvers, which would need to be applied separately to every synapse and neuron.
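To see where the cost comes from, consider what a solver does for even a single neuron. The sketch below uses fixed-step explicit Euler integration, a much cruder scheme than the adaptive solvers actually used, and invented parameters; the point it illustrates is that the nonlinear right-hand side must be re-evaluated many times per output sample, and that work repeats for every neuron and synapse:

```python
import numpy as np

def f_gate(x, i, w=1.0, b=1.0):
    """Illustrative sigmoid nonlinearity coupling state x and input i
    (a stand-in for the learned synapse/neuron nonlinearity)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b * i)))

def solve_step_euler(x, i, dt, n_substeps, w_tau=0.5, A=1.0):
    """Advance dx/dt = -(w_tau + f) * x + f * A by dt using many small
    Euler substeps. Real solvers (e.g. adaptive Runge-Kutta) are smarter,
    but the flavor is the same: iterate until the solution converges."""
    h = dt / n_substeps
    for _ in range(n_substeps):
        f = f_gate(x, i)
        x = x + h * (-(w_tau + f) * x + f * A)
    return x

x = 0.0
for i in [0.2, 0.9, 0.4]:                       # a toy input stream
    x = solve_step_euler(x, i, dt=1.0, n_substeps=100)  # 100 evaluations per sample
    print(round(x, 4))
```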

In a paper last year, the team revealed a new liquid neural network that got around that bottleneck. This network relied on the same type of equations, but the key advance was a discovery by Ramin Hasani that these equations didn’t need to be solved through arduous computer calculations. Instead, the network could function using an almost exact, or “closed-form,” solution that could, in principle, be worked out with pencil and paper. Typically, these nonlinear equations do not have closed-form solutions, but Hasani hit upon an approximate solution that was good enough to use.

“Having a closed-form solution means you have an equation for which you can plug in the values for its parameters and do the basic math, and you get an answer,” Rus said. “You get an answer in a single shot,” rather than letting a computer grind away until deciding it’s close enough. That cuts computational time and energy, speeding up the process considerably.
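In code, the difference is that the update collapses to a single expression. The sketch below loosely follows the shape of the closed-form approximation the group published, an input-gated exponential relaxation toward a target state; the exact published formula differs in detail, and the parameters here are again made up:

```python
import numpy as np

def f_gate(x, i, w=1.0, b=1.0):
    """Illustrative sigmoid gate (a stand-in for the learned nonlinearity)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b * i)))

def closed_form_step(x0, i, t, w_tau=0.5, A=1.0):
    """Single-shot state update: plug in the parameters, evaluate once.
    Modeled on the group's closed-form approximation -- an input-gated
    exponential decay toward A -- but simplified for illustration."""
    f = f_gate(x0, i)
    return (x0 - A) * np.exp(-(w_tau + f) * t) * f_gate(-x0, -i) + A

x = 0.0
for i in [0.2, 0.9, 0.4]:              # same toy input stream as above
    x = closed_form_step(x, i, t=1.0)  # one evaluation replaces the solver loop
    print(round(x, 4))
```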

“Their method is beating the competition by several orders of magnitude without sacrificing accuracy,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign.

As well as being speedier, Hasani said, their newest networks are also unusually stable, meaning the system can handle enormous inputs without going haywire. “The main contribution here is that stability and other nice properties are baked into these systems by their sheer structure,” said Sriram Sankaranarayanan, a computer scientist at the University of Colorado, Boulder. Liquid networks seem to operate in what he called “the sweet spot: They are complex enough to allow interesting things to happen, but not so complex as to lead to chaotic behavior.”

At the moment, the MIT group is testing their latest network on an autonomous aerial drone. Though the drone was trained to navigate in a forest, they’ve moved it to the urban environment of Cambridge to see how it handles novel conditions. Lechner called the preliminary results encouraging.

Beyond refining the current model, the team is also working to improve their network’s architecture. The next step, Lechner said, “is to figure out how many, or how few, neurons we actually need to perform a given task.” The group also wants to devise an optimal way of connecting neurons. Currently, every neuron links to every other neuron, but that’s not how it works in C. elegans, where synaptic connections are more selective. Through further studies of the roundworm’s wiring system, they hope to determine which neurons in their system should be coupled together.
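As a generic illustration of what selective wiring means computationally (not the group's method, which would draw on the worm's actual connectome rather than randomness), a binary mask over the weight matrix prunes a fully connected layer down to a chosen set of synapses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 19                                # neuron count from the driving demo
dense_w = rng.normal(size=(n, n))     # fully connected: every neuron to every neuron

# Keep roughly 253 of the n*n possible synapses, as in the car experiment.
# A random mask stands in here for the selective, connectome-derived
# wiring the group is actually after.
mask = rng.random((n, n)) < 253 / (n * n)
sparse_w = dense_w * mask
print(int(mask.sum()), "synapses kept out of", n * n)
```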

Apart from applications like autonomous driving and flight, liquid networks seem well suited to the analysis of electric power grids, financial transactions, weather and other phenomena that fluctuate over time. In addition, Hasani said, the latest version of liquid networks can be used “to perform brain activity simulations at a scale not realizable before.”

Mitra is particularly intrigued by this possibility. “In a way, it’s kind of poetic, showing that this research may be coming full circle,” he said. “Neural networks are developing to the point that the very ideas we’ve drawn from nature may soon help us understand nature better.”
