MEDIA

Matthias Knab, Opalesque for New Managers:

 

Volker Dischler and Julian Moore have founded Cognitive Trading to leverage complexity theory to multiply the power of artificial intelligence.

 

Dischler has 25 years' experience in the financial industry. He studied business administration at the University of Cologne and wrote his diploma thesis on financial forecasting and trading systems based on artificial intelligence. Before founding Quant Trading in 2005, he implemented hybrid (ANN, GA, fuzzy logic) trading systems for Goldman Sachs, launched the technical trading systems & managed futures division at Portfolio Concept, and worked in Equity Derivatives Trading at WestLB. He was also responsible for project management of AI-based investment strategies in a collaboration with Siemens-Nixdorf.

 

AI practitioners and investors at the NextGen Alpha AI Investor Conference on March 30th

Volker Dischler and Julian Moore will be speaking at the NextGen Alpha AI Investor Conference on March 30th at the Frankfurt Airport Sheraton hotel (see here for more details). The conference offers an exchange between investment management professionals who employ AI in their products and professional investors, covering AI developments in fund management and their potential future impact on the investment management industry.

Opalesque's Matthias Knab had the opportunity to interview the two founders about their approach and investment philosophy.

 

What is the role of artificial intelligence (AI) / machine learning (ML) in your approach?

The essence of a successful fund is its ability to forecast the future on behalf of its clients. As a physicist I'm used to making hypotheses and developing models of the world that I can test against experiments. A scientist is happy as long as theory and experiment can co-evolve; it's not a disaster if an experiment proves a hypothesis wrong.

 

The principle of Occam's razor is usually - and incorrectly - taken to mean that the simplest hypothesis is the most likely to be true, but really it's a pragmatic principle. It says that the most efficient way to explore the space of solutions is to start with the simplest ideas first, and to introduce more factors only if you need them.
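This pragmatic reading of Occam's razor can be sketched in code. The example below is a hypothetical illustration, not the founders' actual method: it fits polynomials of increasing degree to data that is truly linear, and keeps a more complex model only if it clearly improves held-out error.

```python
import numpy as np

# Generate noisy data from a truly linear process.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = 2.0 * x - 1.0 + 0.05 * rng.standard_normal(60)

# Random train/validation split.
idx = rng.permutation(60)
train, val = idx[:40], idx[40:]

def val_error(degree):
    """Mean squared validation error of a polynomial fit of the given degree."""
    coeffs = np.polyfit(x[train], y[train], degree)
    return np.mean((np.polyval(coeffs, x[val]) - y[val]) ** 2)

# Start with the simplest hypothesis (a constant) and only accept a more
# complex model if it reduces validation error by a clear margin.
best_degree, best_err = 0, val_error(0)
for degree in range(1, 8):
    err = val_error(degree)
    if err < 0.9 * best_err:
        best_degree, best_err = degree, err

print(best_degree)  # a low-degree model should win on this data
```

The acceptance margin (here 10%) is the code's version of "only introduce more factors if you need them": extra polynomial terms are free to fit the noise, but they are rejected unless they clearly earn their keep.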

 

Unfortunately, in complex systems like financial markets there are too many variables, too many interacting agents and too much fluidity. If your model is simple and understandable, it doesn't agree with reality; but if you try to make it more realistic, you have too many possibilities to choose from.

 

The great advantage of deep machine learning is that one doesn't have to construct explicit hypotheses and models in order to create predictions, which means one can use more data and more variables. Such systems work with the information they are given and, speaking very loosely, optimize themselves. Deep machine learning allows you to cut off larger chunks with Occam's razor.

 

Our approach is to take this unique characteristic of deep machine learning and to apply it in a way that recognizes the other great truth of modern science: that not everything is always predictable! We have developed, and are developing further, a number of approaches that allow us to apply deep machine learning where and when it will be most effective, and to tell us when not to waste our resources.

 

 

Would you say that AI / ML are a major source of alpha for your approach?

Yes, of course deep machine learning is a major source of alpha, but the key factor is the augmentation of deep machine learning with insights from Complexity Theory.

 

 

Where would you see limits for AI / ML, and are those limits possibly only temporary?

The current limitations on deep machine learning are generally the availability and cost of computing resources and good data. Good systems make efficient use of the data they have, but bad data is worse than no data. Bad data actively misleads! Good data is data that means something, and I suppose you could say that part of our approach is a machine-driven recognition of what makes good data. It could be called meta-learning, but I wouldn't go that far yet!
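The claim that bad data is worse than no data can be demonstrated numerically. The sketch below is a hypothetical illustration (not the firm's approach): estimating a slope from a small clean sample beats estimating it from a larger sample padded with systematically corrupted observations, e.g. a data feed with an inverted sign convention.

```python
import numpy as np

rng = np.random.default_rng(1)
true_slope = 1.5

def sample(n):
    """Draw n noisy observations of a linear relationship."""
    x = rng.uniform(-1.0, 1.0, n)
    y = true_slope * x + 0.1 * rng.standard_normal(n)
    return x, y

# 100 clean observations.
x_clean, y_clean = sample(100)

# The same 100 plus 100 "bad" observations with their sign flipped -
# systematically misleading, not merely noisy.
x_extra, y_extra = sample(100)
x_mixed = np.concatenate([x_clean, x_extra])
y_mixed = np.concatenate([y_clean, -y_extra])

slope_clean = np.polyfit(x_clean, y_clean, 1)[0]  # close to the true slope
slope_mixed = np.polyfit(x_mixed, y_mixed, 1)[0]  # dragged toward zero

print(abs(slope_clean - true_slope) < abs(slope_mixed - true_slope))
```

Doubling the sample size made the estimate worse, because the corruption pulls the fit in a consistent wrong direction rather than averaging out - which is what "bad data actively misleads" means in practice.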

 

One limitation that won't go away is the effect of feedback. Our hybrid approach has the potential to be genuinely disruptive in the sense that it could, perhaps, drive the formation of a new attractor for market behaviour. I wouldn't call it a probability, but it is a possibility that we want to be prepared for.

 

You could call it the White Rabbit effect. When anyone says to you, "Don't think of a white rabbit!" white rabbits are all you can think of. If our approach says, "This is where the opportunity is!" and says it reliably, that is where the opportunity will be!

 

But, to be more serious for a moment, we recognize that we must remain connected to fundamentals even while we are leveraging advanced deep machine learning and Complexity Theory to open up the dynamics.

 

 

Where do you see the role of humans in investing say 10 years from now?

We would say that our trading system is necessarily based on price.

The role of people will increasingly be that of the arbiters of value to which price is servant.

Price must follow value that humans determine.

 

Volker Dischler & Julian Moore