7th Annual Bloomberg-Columbia Machine Learning in Finance Workshop 2021
The workshop is organized by:
Due to the current pandemic situation, the 7th annual Bloomberg-Columbia Machine Learning in Finance conference on September 17th, 2021 will be conducted as follows:
(a) Talks will be pre-recorded and broadcast via Zoom on the day of the conference.
(b) The organizers will introduce the speaker before each talk.
(c) A live Q&A session (10-15 minutes) will be conducted via Zoom after each talk, moderated by the organizers.
Model risk and Machine Learning for Finance
Classical mathematical finance has usually been based on well-understood parsimonious models, together with an understanding of the associated model risk. In the academic literature, this has been a particular focus since the 2007-08 financial crisis. Recently, machine learning methods have been hailed as a way of avoiding many of the problems that arise from using parsimonious models. In this talk, we will consider how model risk appears in machine-learning systems, what existing methods there are to manage and avoid it, and where more research is needed.
Samuel Cohen is Associate Professor in the Mathematical Institute at the University of Oxford, and a fellow of the Alan Turing Institute, where he coordinates the Machine Learning for Finance theme. He has worked on a variety of topics connected with finance, statistics and data science, stochastic analysis, and optimal control.
The Use of Synthetic Data to Determine Capital Adequacy
Machine learning tools have been developed to generate synthetic data that is indistinguishable from available historical data. In this paper, we investigate whether these tools can be used for stress testing. In particular, we test whether synthetic data can be used to provide reliable risk measures when the confidence levels are very high. Our results are encouraging and suggest that synthetic data produced from the most recent 250 days of historical data are potentially useful for determining regulatory market risk capital requirements.
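As an illustrative sketch only (not the paper's method or data): once a large pool of synthetic returns is available, the high-confidence risk measures reduce to tail statistics of that pool. Here the "synthetic" sample is simply a heavy-tailed Student-t draw standing in for the output of a trained generator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained generator: heavy-tailed Student-t draws. In the
# setup described above, these would instead be sampled from a generative
# model trained on the most recent 250 days of historical returns.
synthetic_returns = rng.standard_t(df=4, size=100_000) * 0.01

def value_at_risk(returns, confidence=0.999):
    """Loss threshold exceeded with probability 1 - confidence."""
    return -np.quantile(returns, 1.0 - confidence)

def expected_shortfall(returns, confidence=0.999):
    """Average loss in the tail beyond the VaR threshold."""
    var = value_at_risk(returns, confidence)
    losses = -returns
    return losses[losses >= var].mean()

var_999 = value_at_risk(synthetic_returns)
es_999 = expected_shortfall(synthetic_returns)
print(var_999, es_999)
```

The point of the synthetic pool is that it can be made much larger than 250 historical observations, so quantiles at the 99.9% level rest on many tail points rather than a handful.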
John Hull is the Maple Financial Professor of Derivatives and Risk Management at the Joseph L. Rotman School of Management, University of Toronto. His research and teaching have been in the machine learning area in the last few years. He has written four books: "Machine Learning in Business: An Introduction to the World of Data Science" (now in its 3rd edition); "Risk Management and Financial Institutions" (now in its 5th edition); "Options, Futures, and Other Derivatives" (now in its 11th edition); and "Fundamentals of Futures and Options Markets" (now in its 9th edition). The books have been translated into many languages and are widely used by practicing managers as well as in the classroom. In 2016, he was awarded the title of University Professor (an honor granted to only 2% of faculty at the University of Toronto). He has acted as a consultant to many financial institutions throughout the world and has won many teaching awards, including the University of Toronto's prestigious Northrop Frye award. Dr. Hull is academic director of FinHub, Rotman's financial innovation lab.
Deep learning the limit order book: what can machines learn, and what can we learn from them?
Markets are world-wide information processing systems where humans and machines interact dynamically, searching for an agreement on the price of things. The Limit Order Book (LOB) is a self-organizing complex mechanism where a transaction price emerges from the interaction of many agents. The actuation of the process can be at very high speed, but the orders reflect operations and computations that span a very broad range of time-scales, from nanoseconds to several years. Given its relatively simple and mechanistic nature, the LOB appears to be an ideal candidate for the use of Artificial Intelligence tools to automatically learn properties, find patterns and devise strategies. The literature is starting to report some success stories in this domain, and appropriate methodologies and architectures are emerging. However, markets are complex systems and their modeling is an extremely challenging task. In this talk, I will present some results for deep learning and deep reinforcement learning of the LOB dynamics. I will also discuss general perspectives about the increasingly complex implications of automated modeling of complex systems and their use for the automation of the services industry.
T. Aste (2021). "What machines can learn about our complex world, and what can we learn from them?" Available at SSRN 3797711: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3797711
A. Briola, J. Turiel, R. Marcaccioli, T. Aste (2021). "Deep Reinforcement Learning for Active High Frequency Trading." Available at https://arxiv.org/abs/2101.07107
A. Briola, J. Turiel, T. Aste (2020). "Deep Learning Modeling of the Limit Order Book: A Comparative Perspective." Available at SSRN 3714230: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3714230
Tomaso Aste is Professor of Complexity Science in the UCL Computer Science Department. A trained physicist, he has contributed substantially to research in financial systems modeling, complex structures analysis, and machine learning. Prof. Aste is passionate about investigating the interplay between technology, society and finance. He is founder and Head of the Financial Computing and Analytics Group at UCL, co-founder and Scientific Director of the UCL Centre for Blockchain Technologies, Member of the Board of the ESRC LSE-UCL Systemic Risk Centre, and Member of the Board of the Whitechapel Think Tank. He collaborates on FinTech topics with the Financial Conduct Authority, the Bank of England, HMRC and the All-Party Parliamentary Group. He is leading a FinTech training initiative for central bankers and regulators across South America. He is an advisor and consultant for several financial companies, banks, FinTech firms, and digital-economy start-ups. He created four Master's programmes at UCL which offer a wide range of courses, from risk management to the digital economy.
New frontiers in deep learning and quantitative finance: an overview
The field of artificial intelligence has been revolutionized in recent years by the staggering successes of deep and reinforcement learning in computer vision, natural language processing and strategy learning. The impact on several industries has been substantial and has introduced a new paradigm of automation based on data and learned systems. Recently, a substantial amount of research has been dedicated to automation and optimization problems in quantitative finance, with the promise of revolutionizing its computational and fidelity aspects. In my talk I will give an overview of some of the most promising applications of deep and reinforcement learning to finance from the practitioner's perspective, highlighting the current status of progress, discussing remaining limitations and bottlenecks, and connecting the classical and formal approaches of quantitative finance with the ones brought by artificial intelligence.
Giovanni Faonte is Head of Quantitative Finance Research at Goldman Sachs RDE-CoreAI division, a recently founded team aiming to investigate and apply deep learning techniques in the context of quantitative finance. At Goldman, he has also been involved in applications of deep neural architectures to NLP and anomaly detection. He holds a PhD in Mathematics from Yale University and has been a (post-graduate) Research Fellow at Kavli IPMU and the Max Planck Institute for Mathematics, where his research focused on applications of homotopy theory to algebraic geometry. Prior to Goldman, he worked as a quantitative risk manager and as a machine learning engineer developing a perception stack for autonomous driving systems based on deep neural networks.
An Optimal Control Strategy for Execution of Large Stock Orders Using LSTMs
We consider the problem of executing large stock orders in a limit-order book. Transaction costs are incurred because the large stock orders consume order-book liquidity beyond the best bid/ask. We model these transaction costs as a function that is proportional to overall trading volume and convex in the size of our trade. We use a long short-term memory (LSTM) neural network to solve an optimization that minimizes the total transaction costs accumulated when executing a large order in a series of smaller sub-orders (i.e., the optimal control strategy). We evaluate this policy relative to industry-standard benchmarks such as time-weighted average price (TWAP) and volume-weighted average price (VWAP). The application that we consider is the liquidation of a large position in individual stocks, which we execute over the course of a trading day. Our studies using recent market data show that an LSTM strategy can out-perform TWAP- and VWAP-based strategies when the size of the trade is very large.
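A minimal sketch of the benchmark comparison described above, under an assumed quadratic temporary-impact cost (the talk's actual cost function and LSTM policy are not reproduced here). With a cost convex in child-order size and inversely scaled by bucket volume, a VWAP-style schedule that follows the intraday volume profile already beats TWAP, which is the bar an LSTM policy would need to clear:

```python
import numpy as np

def execution_cost(trades, market_volume, eta=0.1):
    """Toy temporary-impact cost: convex (quadratic) in each child order and
    scaled by the market volume available in that bucket."""
    return float(np.sum(eta * trades**2 / market_volume))

total_shares = 1_000_000
n_slices = 13  # half-hour buckets in a 6.5-hour trading day

# TWAP: equal-sized child orders in every bucket.
twap = np.full(n_slices, total_shares / n_slices)

# VWAP-style schedule following an assumed U-shaped intraday volume profile.
profile = np.array([1.6, 1.2, 1.0, 0.9, 0.8, 0.8, 0.8,
                    0.8, 0.9, 1.0, 1.2, 1.4, 1.6])
market_volume = 5_000_000 * profile / profile.sum()
vwap = total_shares * profile / profile.sum()

cost_twap = execution_cost(twap, market_volume)
cost_vwap = execution_cost(vwap, market_volume)
print(cost_twap, cost_vwap)  # VWAP is cheaper under this convex cost
```

Under this cost model, the static schedule minimizing total cost subject to full execution is exactly proportional to volume, which is why VWAP is a strong benchmark; a learned policy can improve on it only by conditioning on realized market state during the day.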
Dr Brian Healy is an Industry Professor of Financial Technology at UCL and holds a research position at the Institute for Computational & Mathematical Engineering at Stanford University. He has over 20 years' experience in quantitative finance as a quantitative analyst and option trader, and is CEO of Decision Science and a founding partner of AB Quantitative Investments.
eXplainable AI in Credit Risk Management
Artificial Intelligence (AI) has created the single biggest technology revolution the world has ever seen. For the finance sector, it provides great opportunities to enhance customer experience, democratize financial services, ensure consumer protection and significantly improve risk management. While it is easier than ever to run state-of-the-art machine learning models, designing and implementing systems that support real-world finance applications remains challenging. In large part this is due to the lack of transparency and explainability, which in turn represent important factors in establishing reliable technology. Research on this topic with a specific focus on applications in credit risk management has been limited. In this paper, we apply different advanced post-hoc, model-agnostic explainability techniques to machine learning (ML)-based credit scoring models applied to loan performance data. We present multiple comparison scenarios and discuss in detail the practical challenges associated with the implementation of these state-of-the-art eXplainable AI (XAI) methods. This is joint work with Prof. Joerg Osterrieder, ZHAW and University of Twente, and Prof. Ali Hirsa, Columbia University.
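To make "post-hoc, model-agnostic" concrete, here is a hedged sketch of permutation importance, one of the simplest techniques in that family (the paper's own XAI methods are not reproduced here), applied to a toy credit-scoring function; all data and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy loan data: two informative features and one pure-noise feature.
n = 2000
income, utilization, noise = rng.normal(size=(3, n))
logits = 2.0 * income - 1.5 * utilization
default = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)
X = np.column_stack([income, utilization, noise])

# Stand-in "black box": a fixed scoring function (a trained model in practice).
def model_predict(X):
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1])))

def log_loss(y, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def permutation_importance(X, y, n_repeats=10):
    """Model-agnostic importance: how much does shuffling one column hurt?"""
    baseline = log_loss(y, model_predict(X))
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += log_loss(y, model_predict(Xp)) - baseline
    return scores / n_repeats

importance = permutation_importance(X, default)
print(importance)  # the noise feature scores (near) zero
```

Because the technique only queries the model's predictions, it applies unchanged to any credit-scoring model, which is exactly the "model-agnostic" property the abstract refers to.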
Branka Hadji Misheva is a researcher at ZHAW Zurich University of Applied Sciences, working on AI applications in finance, XAI methods, network models and fintech risk management. She holds a PhD in Economics and Management of Technology, with a specific focus on network models as they apply to the operation and performance of P2P lending platforms, from the University of Pavia, Italy. In her position at ZHAW, she leads several research and innovation projects on Artificial Intelligence and Machine Learning for Credit Risk Management. She is the author of 10 research papers in the fields of credit risk modeling, graph theory, the predictive performance of scoring models, lead behavior in crypto markets, and explainable AI models for credit risk management.
Why and how systematic strategies decay
Systematic strategies are known to decay out of sample. Two competing explanations have been proposed: arbitrage and overfitting. In order to pin down which of the two forces is more relevant, we have reproduced a large number of stock anomalies proposed in the academic literature and set out to determine the characteristics that explain their decay out of sample. We use the cross-section of stock anomalies to test variables that proxy various aspects of arbitrage and overfitting. Our study suggests that, while some arbitrage-related variables are statistically significant, it is the overfitting variables that explain a larger part of the cross-sectional variance of Sharpe decay across strategies.
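The cross-sectional test described above can be sketched as an OLS regression of Sharpe decay on candidate proxies. The data below are synthetic and purely illustrative, with coefficients chosen by hand (this is not the paper's data or result):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic cross-section of 500 "anomalies": each has one arbitrage proxy
# and one overfitting proxy, and a Sharpe decay generated so that the
# overfitting proxy contributes more, purely for illustration.
n = 500
arb = rng.normal(size=n)      # e.g. a post-publication capital-inflow proxy
overfit = rng.normal(size=n)  # e.g. an in-sample selection-intensity proxy
decay = 0.2 * arb + 0.5 * overfit + 0.1 * rng.normal(size=n)

# Cross-sectional OLS: decay ~ intercept + arb + overfit.
X = np.column_stack([np.ones(n), arb, overfit])
beta, *_ = np.linalg.lstsq(X, decay, rcond=None)
print(beta)  # approximately [0, 0.2, 0.5]
```

The relative magnitudes of the fitted slopes (and their share of explained variance) are what adjudicates between the two explanations in a regression of this shape.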
Adam Rej is Head of Macro Research at Capital Fund Management. He completed his PhD at the Max Planck Institute for Gravitational Physics in Potsdam and held postdoctoral positions at Imperial College London, the Institute for Advanced Study in Princeton, and the École Normale Supérieure in Paris. He joined CFM in 2014 and focuses on systematic global macro research.
Improving bond trading workflow by learning to rank RFQs
The work of a bond trader involves repeatedly responding to requests for quotes (RFQs) as quickly as possible and with the best price, sometimes dealing with up to 10,000 orders or RFQs each day. Working with an ever-increasing volume and under challenging time constraints, many traders are finding it difficult to keep up, missing opportunities and not working as efficiently as they could. As a result, many are looking to technology to help optimize this process. Currently, hard-coded rules are used to help automate the process of routing trades and responding to RFQs. However, this approach is brittle and difficult to scale. Incorporating machine learning can offer more efficient tools for improving workflows. In this talk, we will discuss a recently developed machine learning model that suggests which orders or RFQs the trader should work on first. With the help of these suggestions, traders can focus their time more efficiently on high-value decisions.
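A minimal sketch of the ranking idea, assuming a pairwise (RankNet-style) objective and a linear scorer; the feature names, labels and training setup below are all illustrative and are not Bloomberg's model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy RFQ features: e.g. [notional, client tier, minutes until quote expires].
n = 400
X = rng.normal(size=(n, 3))
true_w = np.array([1.0, 0.5, -2.0])         # urgency dominates in this toy setup
y = X @ true_w + 0.1 * rng.normal(size=n)   # latent "work on this first" signal

# Pairwise training of a linear scorer: for each sampled pair, push the score
# of the higher-priority RFQ above the other one (logistic pairwise loss).
w = np.zeros(3)
lr = 0.01
for _ in range(20_000):
    i, j = rng.integers(n, size=2)
    if y[i] == y[j]:
        continue
    if y[i] < y[j]:
        i, j = j, i                       # ensure i is the preferred RFQ
    margin = (X[i] - X[j]) @ w
    p = 1 / (1 + np.exp(-margin))         # P(model ranks i above j)
    w += lr * (1 - p) * (X[i] - X[j])     # gradient ascent on pairwise log-lik

# Rank an incoming batch of RFQs: highest score = work on it first.
queue = np.argsort(-(X @ w))
print(w / np.linalg.norm(w))
```

A pairwise objective only needs relative preferences ("the trader worked A before B"), which is the kind of label a trading workflow produces naturally, rather than absolute priority scores.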
Andy Almonte is the team lead of the AI Signal Extraction Team at Bloomberg. Over the course of his 15+ year career at Bloomberg, he has led and worked on engineering teams in London and New York, where he now leads the Signal Extraction team in the AI Group within Bloomberg's Engineering department. This team builds applied machine learning solutions for the company's enterprise trading products: TOMS and AIM. Andy previously led the AI Market Analytics engineering team, which develops software that uses machine learning and natural language processing to solve problems across several Bloomberg products, including news summarization, document clustering, and social media analytics. He graduated with a Bachelor's degree in computer engineering from Stony Brook University in 2003 and also holds a Master's degree in computer science from the Polytechnic Institute of New York University.
Generative Adversarial Networks and their applications in Finance
In 2014, Ian Goodfellow et al. presented a new two-part Deep Learning architecture called the Generative Adversarial Network (GAN). The network is made of two subnetworks, a generator and a discriminator, that compete against each other. GANs have shown great success in generating realistic pictures of human faces, among other applications. Despite great progress in using them for financial time series and finance applications, many challenges remain. A real zoo of GANs has evolved, and we give a short overview of their properties and of which ones could be used for financial applications. Furthermore, we analyze a variation of a conditional GAN that directly incorporates some of the key distributional properties that are needed for a realistic simulation of financial markets. Our GAN is based on an additional subnetwork which is used to directly model some relevant characteristics. For numerical illustrations, we implement this model on liquid futures in a high-frequency intraday setting, and conduct a thorough statistical performance analysis, with a particular focus on the tail properties of those time series. An outlook on potential applications as well as open research questions is given. This is joint work with Prof. Ali Hirsa and Weilong Fu from Columbia University as well as Dr. Branka Hadji Misheva from Zurich University of Applied Sciences.
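The adversarial loop can be sketched in a few lines with scalar "networks". This toy (a linear generator and a logistic discriminator on one-dimensional Gaussian data) illustrates only the competing gradient updates and is far from a realistic market simulator, or from the conditional GAN of the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def sample_real(n):
    # Toy "real data": one-dimensional Gaussian, standing in for market data.
    return rng.normal(2.0, 0.5, size=n)

# Smallest possible versions of the two competing subnetworks:
# generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.01, 256

for step in range(10_000):
    z = rng.normal(size=batch)
    real, fake = sample_real(batch), a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

fake_sample = a * rng.normal(size=10_000) + b
print(np.mean(fake_sample))  # compare with the real mean of 2.0
```

Note that a linear discriminator can essentially only match the mean of the data; matching tails and other distributional properties requires richer subnetworks, which hints at why the talk adds a dedicated subnetwork for the relevant characteristics.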
Joerg Osterrieder is Associate Professor for Artificial Intelligence and Finance at the University of Twente (Netherlands), senior advisor to ING Group and member of Kickstart AI, a Dutch National Initiative on Artificial Intelligence as well as Professor of Finance and Risk Modelling at the ZHAW School of Engineering (Switzerland). He has been working in the area of financial statistics, quantitative finance, algorithmic trading, and digitisation of the finance industry for more than 15 years.
Joerg is the Action Chair of the European COST Action 19130 Fintech and Artificial Intelligence in Finance, an interdisciplinary research network combining 200+ researchers from 38 European countries as well as five international partner countries. He is the director of studies for an executive education course on "Big Data Analytics, Blockchain and Distributed Ledger" and has been the main organizer of an annual research conference series on Artificial Intelligence in Industry and Finance since 2016. He is a founding associate editor of Digital Finance, an editor of Frontiers in Artificial Intelligence in Finance, and a frequent reviewer for academic journals.
In addition, he serves as an expert reviewer for the European Commission's "Executive Agency for Small & Medium-sized Enterprises" and "European Innovation Council Accelerator Pilot" programs. Previously he worked as an executive director at Goldman Sachs and Merrill Lynch, as a quantitative analyst at AHL, and as a member of the senior management at Credit Suisse Group. Joerg is also active at the intersection of academia and industry, focusing on the transfer of research results to the financial services sector in order to implement practical solutions.