Thanks to a $1.95 million grant from the Air Force Office of Scientific Research, Markos Katsoulakis and Luc Rey-Bellet, both professors in the mathematics and statistics department at the University of Massachusetts Amherst, and Paul Dupuis, of Brown University, will spend the next four years developing a new approach to machine learning that extends beyond the traditional reliance on big data.
Traditional machine learning relies on enormous caches of data that an algorithm can sift through in order to “train” itself to accomplish a task, resulting in a data-based mathematical model. But what about situations for which there is very little data, or when generating enough data is prohibitively expensive? One possible, emerging remedy, often referred to as scientific machine learning, is to build into the algorithms the expert knowledge of physical principles and rules developed over years of scientific research.
There is great interest in scientific machine learning across a wide variety of applied fields and industries, including medicine, engineering, manufacturing and the sciences, but one of the key challenges is ensuring that the algorithmic predictions are reliable.
This is where Katsoulakis and Rey-Bellet come in; together they bring a new perspective to scientific machine learning, one focused on “divergences.” “The mathematical concept of ‘divergence,’” says Rey-Bellet, “is a way to quantify the gap between what the machine learning algorithm predicts and the actual, experimental data.” He adds that “divergences allow researchers to test different machine learning algorithms and find the ones that yield the best results.”
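To make the idea concrete, a standard and widely used example of a divergence is the Kullback-Leibler divergence, which scores how far a model distribution differs from the distribution that actually produced the data. The formula below is offered purely as a familiar illustration, with p and q denoting the probability densities of the data and of the model; it is not claimed to be the particular divergence the team is developing.

```latex
% Kullback-Leibler divergence between the data distribution P (density p)
% and a model distribution Q (density q); a standard illustrative example only.
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \int p(x)\,\log\frac{p(x)}{q(x)}\,dx,
\qquad
D_{\mathrm{KL}}(P \,\|\, Q) \,\ge\, 0, \quad \text{with equality exactly when } P = Q.
```

The smaller the divergence, the more closely the model's predictions track the experimental data, which is what makes divergences a natural yardstick for comparing machine learning algorithms.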
The team proposes a new class of divergences, which involve two fictional, competing agents—“players”—that play a “game” against each other. The first player proposes a new machine learning model, which simulates a real-life scenario; the other player can reject the proposal if the model’s predictions don’t match the available real-life experimental data closely enough. The game continues until the players find an algorithm that satisfies them both. But these players have a trick up their sleeves: “a key new mathematical feature in our divergences allows the players to ‘know their physics,’” says Katsoulakis. “More intelligent players compete more efficiently, learn faster from each other and need less data to train, but still remain open to learning new physics.”
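This game has a familiar mathematical counterpart in the variational, or dual, representations of divergences, in which the divergence equals the best score an adversarial “test function” can achieve against the model. The sketch below uses the classical Donsker-Varadhan formula for the Kullback-Leibler divergence as the illustration; the notation (the model family M and the test-function class Γ) is ours, and the new divergences the team proposes are not reproduced here.

```latex
% Donsker-Varadhan variational formula: a divergence written as a two-player game.
% The adversary picks a test function g to expose mismatches between data P and model Q;
% the modeler then picks Q from a family M so that the adversary's best score is small.
D_{\mathrm{KL}}(P \,\|\, Q)
  \;=\; \sup_{g}\Big\{ \mathbb{E}_{P}\big[g(X)\big] \;-\; \log \mathbb{E}_{Q}\big[e^{\,g(X)}\big] \Big\},
\qquad
Q^{\star} \;=\; \operatorname*{arg\,min}_{Q \in \mathcal{M}} \;
  \sup_{g \in \Gamma}\Big\{ \mathbb{E}_{P}\big[g(X)\big] - \log \mathbb{E}_{Q}\big[e^{\,g(X)}\big] \Big\}.
% The exact identity takes the supremum over all bounded test functions; restricting g to a
% smaller class Γ yields a cheaper surrogate objective for the game on the right.
```

One plausible reading of players that “know their physics,” under this interpretation, is that the model family M and the test-function class Γ are restricted to choices consistent with known physical laws, so the game explores fewer candidates and needs less data; the article does not spell out the team's specific construction.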
Katsoulakis says, “This is an exciting time to be a mathematician,” and adds that “applied mathematics, statistics, computer science and disciplinary research can complement one another and address these fundamental issues in scientific machine learning in the years to come.” Rey-Bellet adds a final thought: “For centuries physics has been a primary source of inspiration for all the mathematical sciences. In the past few years, machine learning has started playing a similar role and is bringing a remarkable influx of new ideas to the world of mathematics.”