Machine Learning in Finance Workshop 2017

Abstracts

Bruno Dupire (Bloomberg)

Title: Understanding What the Machine Understands

Abstract: Machine Learning is a potent paradigm, but it is often perceived as an opaque methodology, and this black-box aspect prevents many potential users from trusting it. We show how to gain insight into the inner workings of various learning techniques by presenting visualizations that shed light on the learning process and interaction tools that improve the progress of the algorithm. We demonstrate these ideas on various financial examples such as the pricing of illiquid assets, surprise extraction, sentiment analysis, dividend classification, optimal VWAP replication, default prediction… We also show that it is possible to “interview” a neural net in order to both accelerate its training and make it produce unexpected suggestions.
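
As a rough sketch of what one such visualization of the learning process might look like (none of the code below is from the talk; the toy payoff-like target, network size, and checkpoints are illustrative assumptions), one can record a small network's fitted function at several stages of training and plot the snapshots:

    # Minimal sketch (not from the talk): watch a small network learn a toy,
    # payoff-like function by recording its fitted curve at a few checkpoints.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    x = rng.uniform(-2.0, 2.0, size=(500, 1))
    y = np.maximum(x[:, 0], 0.0) + 0.05 * rng.standard_normal(500)  # noisy call-style payoff

    grid = np.linspace(-2.0, 2.0, 101).reshape(-1, 1)
    model = MLPRegressor(hidden_layer_sizes=(32, 32), learning_rate_init=0.01)

    snapshots = {}
    for step in range(1, 201):
        model.partial_fit(x, y)                    # one optimization pass at a time
        if step in (1, 10, 50, 200):
            snapshots[step] = model.predict(grid)  # fitted curve at this stage

    # Plotting each snapshot against `grid` shows how the learned function evolves.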

Andrew Gelman (Columbia Political Science & Statistics)

Title: What Can We Learn from Data?

Abstract: The standard framework for statistical inference leads to estimates that are horribly biased and noisy for many important examples. And these problems all become even worse as we study subtle and interesting new questions. Methods such as significance testing are intended to protect us from hasty conclusions, but they have backfired: over and over again, people think they have learned from data but they have not. How can we have any confidence in what we think we've learned from data? One appealing strategy is replication and external validation, but this can be difficult in the real world of social science. We discuss statistical methods for actually learning from data without getting fooled.
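
As a concrete illustration of how a significance filter can backfire (the simulation below is not from the talk; the effect size and noise level are illustrative assumptions), a toy Monte Carlo shows that when a true effect is small and noisy, the estimates that survive the filter are badly inflated and can even have the wrong sign:

    # Toy simulation (not from the talk): when the true effect is small relative
    # to the noise, estimates that clear a significance threshold are badly inflated.
    import numpy as np

    rng = np.random.default_rng(1)
    true_effect = 0.1        # small true effect
    se = 1.0                 # standard error of each study's estimate
    n_studies = 100_000

    estimates = true_effect + se * rng.standard_normal(n_studies)
    significant = np.abs(estimates) > 1.96 * se    # two-sided 5% test

    print("share of studies that are significant:", significant.mean())
    print("mean estimate among significant studies:", estimates[significant].mean())
    print("share of significant estimates with the wrong sign:",
          (estimates[significant] < 0).mean())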

Michael Kearns (CIS, University of Pennsylvania)

Title: Trading Without Regret

Abstract: No-regret learning is a collection of tools designed to give provable performance guarantees in the absence of any statistical or other assumptions on the data (!), and thus stands in stark contrast to most classical modeling approaches. With origins stretching back to the 1950s, the field has yielded a rich body of algorithms and analyses that covers problems ranging from forecasting from expert advice to online convex optimization. I will survey the field, with special emphasis on applications to quantitative finance problems, including portfolio construction and inventory risk.
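
As a minimal sketch of one classical algorithm from this literature, the multiplicative-weights (Hedge) method for forecasting from expert advice is shown below; the learning rate and the loss sequence are illustrative assumptions, not anything specific to the talk:

    # Minimal sketch: multiplicative-weights (Hedge) aggregation of expert advice.
    # Regret against the best single expert grows like sqrt(T log N) with no
    # statistical assumptions on the loss sequence.
    import numpy as np

    def hedge(losses, eta=0.1):
        """losses: (T, N) array of per-round losses in [0, 1] for N experts."""
        T, N = losses.shape
        weights = np.ones(N)
        total_loss = 0.0
        for t in range(T):
            p = weights / weights.sum()           # current distribution over experts
            total_loss += p @ losses[t]           # expected loss this round
            weights *= np.exp(-eta * losses[t])   # downweight experts that did badly
        regret = total_loss - losses.sum(axis=0).min()
        return total_loss, regret

    # Illustrative data: 5 "experts" with arbitrary bounded losses.
    rng = np.random.default_rng(0)
    loss_seq = rng.uniform(0.0, 1.0, size=(1000, 5))
    print(hedge(loss_seq))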

Jonathan Larkin (Quantopian)

Title: Herding Robotic Cats: Constructing a Single Portfolio from Hundreds of Thousands of Autonomous Strategies

Abstract: Many multi-strategy and multi-manager investment managers face a common problem: how to implement a single portfolio, subject to a single investment mandate, when the underlying strategies are autonomous, private, and independent. This talk demonstrates a framework to solve this problem.
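
The talk's framework is not spelled out in the abstract; purely as a toy illustration of the problem shape (many autonomous return streams in, one portfolio out), the sketch below combines strategies with inverse-volatility weights under a single gross capital budget. All names and parameters are assumptions:

    # Toy illustration only (the talk's actual framework is not described in the
    # abstract): combine many autonomous strategy return streams into one portfolio
    # using inverse-volatility weights scaled to a single gross capital budget.
    import numpy as np

    def combine_strategies(strategy_returns, gross_budget=1.0):
        """strategy_returns: (T, K) array of historical returns for K strategies."""
        vol = strategy_returns.std(axis=0)
        raw = 1.0 / np.maximum(vol, 1e-8)         # allocate more to steadier strategies
        weights = gross_budget * raw / raw.sum()  # respect one overall capital budget
        portfolio = strategy_returns @ weights    # single combined return stream
        return weights, portfolio

    rng = np.random.default_rng(0)
    rets = 0.001 * rng.standard_normal((252, 500))   # 500 hypothetical strategies
    weights, portfolio = combine_strategies(rets)
    print(weights.shape, portfolio.mean(), portfolio.std())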

Harry Mamaysky (Columbia GSB)

Title: How News and Its Context Drive Risk and Returns Around the World

Abstract: We develop a novel methodology for classifying the context of news articles to predict risk and return in stock markets. For a set of 52 developed and emerging market economies, we show that a parsimonious summary of news, including context-specific sentiment, predicts future country-level market outcomes, as measured by returns, volatilities, or drawdowns. Our approach avoids data mining biases that may occur when relying on particular word combinations to detect changes in risk. The effect of present news on future market outcomes differs by news category, as well as across emerging and developed markets. Importantly, news stories about emerging markets contain more incremental information – relative to known predictors of future returns – than do news stories about developed markets. We also find evidence of regime shifts in the relationship between future market outcomes and news. Out-of-sample testing confirms the efficacy of our approach for forecasting country-level market outcomes. (Joint work with Charles Calomiris)
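
As a simplified sketch of the general idea of context-specific news sentiment (this is not the paper's methodology; the topic and sentiment word lists are illustrative assumptions), one can bucket each article into a coarse topic, score a word-list sentiment, and aggregate by country and topic:

    # Illustrative sketch only (not the paper's methodology): assign each article a
    # coarse topic, score a word-list sentiment, and aggregate by country and topic.
    # All word lists below are assumptions made purely for illustration.
    from collections import defaultdict

    TOPIC_WORDS = {"credit": {"debt", "default", "loan"},
                   "markets": {"equity", "rally", "selloff"}}
    POSITIVE = {"growth", "rally", "improve"}
    NEGATIVE = {"default", "selloff", "crisis"}

    def article_features(text):
        words = set(text.lower().split())
        topic = max(TOPIC_WORDS, key=lambda t: len(words & TOPIC_WORDS[t]))
        sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
        return topic, sentiment

    def aggregate(articles):
        """articles: iterable of (country, text) pairs -> {(country, topic): mean sentiment}."""
        sums, counts = defaultdict(float), defaultdict(int)
        for country, text in articles:
            topic, score = article_features(text)
            sums[(country, topic)] += score
            counts[(country, topic)] += 1
        return {key: sums[key] / counts[key] for key in sums}

    print(aggregate([("BR", "sovereign debt default fears spark selloff"),
                     ("US", "equity rally on growth data")]))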

Stefano Pasquali (BlackRock)

Title: Unified Liquidity Risk Management Framework: Where and Why Use Machine Learning?

Abstract: We will present an overview of several research ideas for applying machine learning in finance. We will describe a holistic liquidity risk management framework in which machine learning is under investigation for some of the components, and frame the problem as a complete multi-period optimization based on risk, transaction cost, and redemption modeling. In particular, as a concrete example of where advanced machine learning tools can deliver promising out-of-sample results, we will present the first research outcomes on fund-flow forecasting using neural networks to estimate conditional extreme value distributions.
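
As a minimal sketch of the kind of model the abstract alludes to (assumptions: a generalized Pareto tail for redemption exceedances, a small PyTorch network, and synthetic data; this is not the presented implementation), a network can map fund features to the conditional scale and shape of an extreme value distribution and be fit by maximizing the conditional likelihood:

    # Minimal sketch (assumptions: generalized Pareto tail, PyTorch, synthetic data):
    # a small network maps fund features to the conditional scale and shape of a GPD
    # for redemption exceedances, fit by maximizing the conditional likelihood.
    import torch
    import torch.nn as nn

    class ConditionalGPD(nn.Module):
        def __init__(self, n_features):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, 2))    # outputs (log sigma, raw xi)

        def forward(self, x):
            out = self.net(x)
            sigma = torch.exp(out[:, 0])                  # scale > 0
            xi = nn.functional.softplus(out[:, 1])        # keep shape > 0 for simplicity
            return sigma, xi

    def gpd_nll(y, sigma, xi):
        """Negative log-likelihood of exceedances y > 0 under GPD(sigma, xi)."""
        return (torch.log(sigma) + (1.0 / xi + 1.0) * torch.log1p(xi * y / sigma)).mean()

    # Synthetic example: tails are heavier when the first feature is large.
    torch.manual_seed(0)
    x = torch.randn(2000, 4)
    true_sigma = torch.exp(0.5 * x[:, 0])
    y = true_sigma * (torch.rand(2000).pow(-0.2) - 1.0) / 0.2   # GPD(shape 0.2) draws

    model = ConditionalGPD(4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        sigma, xi = model(x)
        loss = gpd_nll(y, sigma, xi)
        loss.backward()
        opt.step()
    print("final negative log-likelihood:", float(loss))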

Markus Pelger (Stanford MS&E)

Title: Estimating Latent Asset Pricing Factors from Large-Dimensional Data

Abstract: We develop an estimator for latent factors in a large-dimensional panel of financial data that can explain expected excess returns. Statistical factor analysis based on Principal Component Analysis (PCA) has problems identifying factors with a small variance that are important for asset pricing. Our estimator searches for factors with a high Sharpe ratio that can explain both the expected return and the covariance structure. We derive the statistical properties of the new estimator based on new results from random matrix theory and show that our estimator can find asset-pricing factors that cannot be detected with PCA, even if a large amount of data is available. Applying the approach to portfolio and stock data, we find factors with Sharpe ratios more than twice as large as those based on conventional PCA. Our factors accommodate a large set of anomalies better than notable four- and five-factor alternative models.
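
As a minimal sketch of the core idea (not the paper's exact estimator or calibration), one can extract factors from the second-moment matrix with the sample means overweighted by a tuning parameter gamma, so that a high-Sharpe, low-variance factor is not drowned out as it can be under plain covariance-based PCA; the value of gamma and the synthetic panel below are assumptions:

    # Minimal sketch of the core idea (not the paper's exact estimator or calibration):
    # extract factors from the second-moment matrix with the sample means overweighted
    # by a tuning parameter gamma, so a high-Sharpe, low-variance factor is not drowned
    # out as it can be under plain covariance-based PCA. gamma and the data are assumed.
    import numpy as np

    def mean_weighted_pca(returns, n_factors, gamma=10.0):
        """returns: (T, N) panel of excess returns; extracts n_factors factors."""
        T, _ = returns.shape
        mu = returns.mean(axis=0, keepdims=True)              # (1, N) sample means
        M = returns.T @ returns / T + gamma * (mu.T @ mu)     # mean-overweighted moments
        _, eigvecs = np.linalg.eigh(M)
        loadings = eigvecs[:, ::-1][:, :n_factors]            # top eigenvectors
        factors = returns @ loadings                          # implied factor returns
        return loadings, factors

    # Synthetic panel with one weak (low-variance, high-mean) common factor.
    rng = np.random.default_rng(0)
    T, N = 500, 300
    f = 0.1 + 0.1 * rng.standard_normal(T)
    returns = np.outer(f, np.ones(N)) + rng.standard_normal((T, N))
    _, factors = mean_weighted_pca(returns, n_factors=1)
    print("|correlation| with the true factor:",              # eigenvector sign is arbitrary
          abs(np.corrcoef(f, factors[:, 0])[0, 1]))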

Roberto Rigobon (Sloan MIT)

Title: The Billion Prices Project: Using Small Data to Improve Big Data

Abstract: Coming soon!

Mayur Thakur (Goldman Sachs)

Title: Surveillance Development: A Case Study

Abstract: A “surveillance” is a mathematical model, implemented in code, that takes as input a large amount of data (for example, hundreds of millions of trades and billions of market data points) and identifies those parts of the data that look suspicious. In the above example, the surveillance model may identify, say, a hundred trades as suspicious. We will present one specific surveillance model and our experience implementing it in production. Through this example we will present the key technical challenges and a surveillance architecture that we have developed. This architecture lets us ingest and store billions of rows of data per day, is easy to update, and scales to hundreds of concurrently running jobs and dozens of users running ad hoc queries.
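
The specific surveillance model presented is not described in the abstract; purely as a toy illustration of the input/output shape (a large table of trades plus market data in, a small set of flagged trades out), the sketch below flags trades whose execution price deviates sharply from the market mid. Column names and the threshold are assumptions:

    # Toy illustration only (the talk's actual model is not described in the abstract):
    # take a large table of trades plus a market reference price and flag the small
    # subset whose execution price deviates sharply from the market. Column names and
    # the threshold are assumptions made purely for illustration.
    import numpy as np
    import pandas as pd

    def flag_suspicious(trades, threshold=10.0):
        """trades: DataFrame with 'symbol', 'price', and 'mid' (market mid at trade time)."""
        deviation = (trades["price"] - trades["mid"]).abs() / trades["mid"]
        scale = deviation.groupby(trades["symbol"]).transform("median").clip(lower=1e-6)
        return trades[deviation > threshold * scale]   # trades far outside each symbol's norm

    # Synthetic example: a million trades, a handful executed far off the market.
    rng = np.random.default_rng(0)
    n = 1_000_000
    trades = pd.DataFrame({
        "symbol": rng.choice(["AAA", "BBB", "CCC"], size=n),
        "mid": 100.0 + 0.01 * rng.standard_normal(n).cumsum(),
    })
    trades["price"] = trades["mid"] * (1 + 0.0005 * rng.standard_normal(n))
    trades.loc[rng.choice(n, size=20, replace=False), "price"] *= 1.05  # planted outliers
    print(len(flag_suspicious(trades)), "trades flagged for review")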

