Fermi’s New Paradox: If AI Analysts Are So Obvious, Where is Everybody?

Over the lunch-hour din of the cafeteria, there was a shimmer in the air—a sense that something great was being discovered. This was different from a normal lunchtime in 1950 at the Los Alamos laboratory, where the Cold War had mobilized the West’s brightest minds. Normally, one could expect breakthroughs in particle physics or in fusion power, but there was a buzz on this day that could not be attributed to the Chicken Kiev.

Four great minds were at work solving one of life’s great existential questions: Are we alone in the universe? This question led to more questions, as the Manhattan Project alums scribbled calculations on napkins: How many stars are in the universe? How many are like the Earth’s sun? How many have planets? How many have planets that are old enough to transmit information as far as the Earth?
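The napkin math anticipated what Frank Drake would later formalize as the Drake equation: a chain of multiplied estimates. A minimal sketch in Python, in which every input is an illustrative guess rather than a measured value:

```python
# Drake-style back-of-the-envelope estimate: multiply a chain of guesses.
# All values below are made-up illustrations, not accepted figures.
rate_star_formation = 2.0    # new stars formed per year in the galaxy
frac_with_planets = 0.5      # fraction of stars that host planets
habitable_per_system = 1.0   # habitable planets per such system
frac_life = 0.1              # fraction of those where life appears
frac_intelligent = 0.01      # fraction of those that evolve intelligence
frac_communicating = 0.1     # fraction of those that transmit signals
lifetime_years = 10_000      # years a civilization keeps transmitting

n_civilizations = (rate_star_formation * frac_with_planets
                   * habitable_per_system * frac_life
                   * frac_intelligent * frac_communicating
                   * lifetime_years)
print(n_civilizations)  # → 1.0 with these inputs: one civilization
```

The structure, not the inputs, is the point: each factor is uncertain, yet multiplying even conservative guesses across billions of stars makes a non-zero answer hard to avoid.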

It was a frenzy. A crowd began to gather, as many were aware of the recent reports of UFO sightings nearby. Finally, as the calculations poured forth and the empty Coca-Cola bottles piled up, the math became obvious: There must be intelligent extraterrestrial life somewhere in outer space; the sheer vastness of the universe seemed to guarantee it.

The hand of Enrico Fermi, an Italian-American physicist from the University of Chicago, slammed the table—BAM! The “architect of the nuclear age” was troubled, though, because the conclusion did not make sense. “But where is everybody?” he pondered.

The contradiction between the math and the lack of evidence—if the conclusion that intelligent extraterrestrial life exists is so obvious, then where is it?—has become known as the Fermi Paradox.

A Computerized Buffett on Every Team—An Investor’s Dream

Just as those scientists looked to the stars for signs of intelligent life, investing has for decades looked to computers and quantitative methods for signs of artificial intelligence that can help make smarter decisions. But after decades of experimentation and development, finance is now confronted with a similar paradox.

There is a persistent dream of putting an AI-driven version of Warren Buffett on every investment team—one with all the positive qualities but none of the negative biases and behavioral errors that come pre-installed in humans. The excitement of building such a revolutionary computer-based system to pick investments has driven billions of dollars of investment into developing systems and hiring big-brained PhDs. The share of job openings in finance that are computer- or math-driven has nearly quadrupled since the Great Financial Crisis.

But despite all the investments made, decades of academic papers produced, computer systems developed, and fortunes made in quant investing, the vast majority of actively managed assets are still non-quantitative in nature.

Traditional active managers will tell you that quantitative techniques are too short-term, and they will question whether a broadly diversified quantitative portfolio can really know anything about the “risk” of an individual company. Quantitative practitioners will fire back with a long-dated backtest, or with logic derived from (perhaps flawed) statistical techniques, and ask, “Isn’t it obvious that quantitative techniques are superior to anecdotal, heuristic-driven investing?”

The two schools of thought are seemingly opposed and have spent the better part of decades without reconciliation. Sure, some quantitative techniques have permeated into risk management or screening for stocks, but there is no AI analyst working side by side with humans to make investment decisions better. Why not?

Combining human-driven investment research with assistance from a junior AI researcher would leverage the best of both worlds. A team like that would combine the long-term, complex thinking of a human with the unbiased, quantitative, evidence-based decision-making of AI.

Combining humans with AI to perform investment research seems like such an obvious goal, and the resources being thrown at the problem are vast. So where are the AI investment analysts? To resolve this version of the Fermi Paradox, we need to rethink how finance approaches the use of AI.

The Goal of Embedding AI Has Failed Because the Aim is Misguided

In a classic scene from the movie Jurassic Park, the mathematician Ian Malcolm muses that the scientists “were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

This is emblematic of the state of AI research, particularly in its application to quantitative finance. Everyone is so eager to demonstrate they are “state of the art” that little thought goes into applying AI in the right way.

The search trends in the graph below demonstrate the fashion for doing something “fancy,” rather than building something transformative in the right manner.

In quantitative finance, this trend has manifested itself in the overuse (and potentially misuse) of alternative data combined with machine learning. Rather than thinking about the longer-term solutions to the problem, participants in the field are rushing to outperform each other using niche data to perform task-specific solutions.

As a result, the alpha itself is fleeting and the applications don’t generalize across a broad spectrum of investment problems. Additionally, the industry is laden with tales of well-intentioned tools that fail to get adopted into the traditional investment workflow.

Aligning AI with How Investors Think is the Key to Progress

If one stops to think about what makes a great investor, it’s not typically a niche, task-specific process that differentiates the legends from the temporarily lucky.

Because markets are complex systems whose dancing landscapes are constantly changing, the best investors are generalists by nature; they take mental models and are able to apply them over and over again. They don’t merely learn facts; rather, they learn models and systems so as to build a toolkit in order to pick the best tool for the job at hand.

The computational complexity is low because the objective is to handicap all possible outcomes—to discount what the market already implies, not to forecast. The best investors look for investments that present asymmetric payouts, weighing probabilities in a folksy, back-of-the-envelope manner.
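That kind of probabilistic handicapping can be made concrete with a toy expected-value calculation. The scenario names, probabilities, and returns below are entirely hypothetical; the point is the structure of the reasoning, not the numbers:

```python
# Toy back-of-the-envelope handicapping of an asymmetric payoff.
# Each scenario: (label, probability, return if it happens).
# All figures are hypothetical illustrations.
scenarios = [
    ("thesis plays out", 0.30, 1.50),   # big upside, modest probability
    ("muddle through",   0.50, 0.05),   # most likely, roughly flat
    ("thesis fails",     0.20, -0.40),  # capped downside
]

# Probability-weighted average return across all scenarios.
expected_return = sum(p * r for _, p, r in scenarios)
print(f"expected return: {expected_return:+.2%}")  # prints: expected return: +39.50%
```

The attraction of the asymmetric bet is visible in the arithmetic: even though the bullish case is the least likely single outcome after “muddle through,” its payoff dominates the probability-weighted sum, while the loss in the failure case is bounded.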

To build AI that can successfully be implemented in the investment process, we must align the design of the machine with the cognitive tasks of great investors.

Our team at UBS Asset Management, called Quantitative Evidence & Data Science, or QED, has taken the approach of focusing on investor workflows as a guiding principle. Essentially, we want to understand what investors actually do, so that we can help them make better decisions.

In the next several years, QED will be spending more and more time focusing on how to generalize these workflows and to combine them with heuristics to form investment conclusions. Our goal is to create a form of Artificial General Intelligence (AGI) that can apply reasoning to identify and apply mental models hidden in novel problems and then, ultimately, make an investment recommendation. In the next year, we will focus on aligning our machines with real investment workflows so that the AGI can make real investment recommendations.

This may seem an audacious goal. But the process of getting there is the best way for us to help drive the application of science to the fundamental investment process. As we solve problems in the path towards AGI, we can directly apply the solutions to investment workflows.

Finding AI: The Human Plus AGI Analyst Team of the Future

Does this mean that QED is trying to disintermediate human financial analysts? Not at all. In Philip K. Dick’s Do Androids Dream of Electric Sheep?—which is the basis for the classic film Blade Runner—humans apply the Voigt-Kampff test to potential replicants (AIs) to determine whether they are human or AI.

The test presents disturbing images to the subject: If the subject shows empathy, they are human; if no empathy is witnessed, the subject is an AI. Empathy is the secret weapon of human analysts, and because human goals—like saving for retirement, or investing in a climate-aware manner—are the raison d’être for investing, we will always need real people in the loop.

While QED’s goal is to develop an AGI, it is doing so in the context of having an empathic human working alongside a machine agent to produce better client outcomes.

The benefits of an AI/human partnership to client outcomes are clear and should motivate us to pursue this opportunity. The effort to build a successful integration of AI into the investment process doesn’t need to yield inconclusive results like the Fermi Paradox. Finance must align the design of AI with how investors think, and embed it in an empathic human partnership. Otherwise, the efforts are in danger of becoming just a fancy tool that operates at the periphery, and we’ll all be left to ponder: if it was so obvious, then where are all the AI analysts?

Bryan Cross is the head of UBS Asset Management’s Quantitative Evidence and Data Science team (QED). To read more on how QED functions inside of UBS AM, click here. Bryan also joined the Waters Wavelength Podcast to talk about a range of topics in the field of quantitative finance.
