Here is a link to my SSRN author profile and my arXiv.org profile.
Below you will find my working and published papers. Click on a title to see the abstract and a link to the full paper.
Operations Research, Forthcoming
We define and develop an approach for risk budgeting allocation — a risk diversification portfolio strategy — where risk is measured using a dynamic time-consistent risk measure. For this, we introduce a notion of dynamic risk contributions that generalise the classical Euler contributions and which allow us to obtain dynamic risk contributions in a recursive manner. We prove that, for the class of dynamic coherent distortion risk measures, the risk allocation problem may be recast as a sequence of strictly convex optimisation problems. Moreover, we show that any self-financing dynamic risk budgeting strategy with initial wealth of 1 is a scaled version of the unique solution of the sequence of convex optimisation problems. Furthermore, we develop an actor-critic approach, leveraging the elicitability of dynamic risk measures, to solve for risk budgeting strategies using deep learning.
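A minimal static analogue of the risk budgeting problem, sketched here under the assumption that risk is measured by portfolio volatility (the paper's dynamic, time-consistent setting generalises this): the budgeting weights solve a strictly convex problem, after which Euler risk contributions match the prescribed budgets. The covariance matrix and budgets below are toy inputs.

```python
import numpy as np
from scipy.optimize import minimize

# Static risk budgeting sketch: with risk sqrt(x' S x), solve the strictly
# convex problem  min_x sqrt(x' S x) - sum_i b_i log x_i,  then rescale so
# the weights sum to one (initial wealth of 1).
def risk_budget_weights(S, b):
    n = len(b)
    obj = lambda x: np.sqrt(x @ S @ x) - b @ np.log(x)
    res = minimize(obj, np.full(n, 1.0 / n),
                   bounds=[(1e-9, None)] * n, method="L-BFGS-B")
    x = res.x
    return x / x.sum()

S = np.array([[0.04, 0.01], [0.01, 0.09]])   # toy covariance (assumption)
b = np.array([0.5, 0.5])                     # equal risk budgets
w = risk_budget_weights(S, b)
# Euler contributions RC_i = w_i (S w)_i / sqrt(w' S w) are proportional to b_i
rc = w * (S @ w) / np.sqrt(w @ S @ w)
```

The first-order conditions of the log-penalised problem force each asset's marginal risk times its weight to equal its budget, which is why the normalised contributions recover `b`.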
Scandinavian Actuarial Journal, Forthcoming
We study a reinsurer who faces multiple sources of model uncertainty. The reinsurer offers contracts to n insurers whose claims follow compound Poisson processes representing both idiosyncratic and systemic sources of loss. As the reinsurer is uncertain about the insurers’ claim severity distributions and frequencies, they design reinsurance contracts that maximise their expected wealth subject to an entropy penalty. Insurers meanwhile seek to maximise their expected utility without ambiguity. We solve this continuous-time Stackelberg game for general reinsurance contracts and find that the reinsurer prices under a distortion of the barycentre of the insurers’ models. We apply our results to proportional reinsurance and excess-of-loss reinsurance contracts, and illustrate the solutions numerically. Furthermore, we solve the related problem where the reinsurer maximises, still under ambiguity, their expected utility and compare the solutions.
Quantitative Finance, Forthcoming
This paper introduces a new approach for generating sequences of implied volatility (IV) surfaces across multiple assets that is faithful to historical prices. We do so using a combination of functional data analysis and neural stochastic differential equations (SDEs) combined with a probability integral transform penalty to reduce model misspecification. We demonstrate that learning the joint dynamics of IV surfaces and prices produces market scenarios that are consistent with historical features and lie within the sub-manifold of surfaces that are free of static arbitrage.
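The probability integral transform (PIT) idea behind the misspecification penalty can be sketched in a few lines: passing realised values through a well-specified model's CDF yields Uniform(0,1) samples, so a distance between the empirical PIT distribution and the uniform serves as a penalty. This is a schematic version; the paper's exact penalty may differ.

```python
import numpy as np
from scipy.stats import norm

# PIT penalty sketch: small when the model CDF matches the data-generating
# distribution, large under misspecification.
def pit_penalty(samples, model_cdf):
    u = np.sort(model_cdf(samples))                    # PIT values
    grid = (np.arange(1, len(u) + 1) - 0.5) / len(u)   # uniform order stats
    return np.mean((u - grid) ** 2)                    # Cramer-von-Mises-type

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
good = pit_penalty(x, norm(0, 1).cdf)   # correctly specified model
bad = pit_penalty(x, norm(1, 1).cdf)    # shifted (misspecified) model
```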
SIAM J. Financial Mathematics, Forthcoming
When an investor is faced with the option to purchase additional information regarding an asset price, how much should she pay? To address this question, we solve for the indifference price of information in a setting where a trader maximizes her expected utility of terminal wealth over a finite time horizon. If she does not purchase the information, then she solves a partial information stochastic control problem, while, if she does purchase the information, then she pays a cost and receives partial information about the asset’s trajectory. We further demonstrate that when the investor can purchase the information at any stopping time prior to the end of the trading horizon, she chooses to do so at a deterministic time.
Mathematics of Operations Research, Forthcoming
Based on the concept of law-invariant convex risk measures, we introduce the notion of distributional convex risk measures and employ them to define distributional dynamic risk measures. We then apply these dynamic risk measures to investigate Markov decision processes, incorporating latent costs, random actions, and weakly continuous transition kernels. Furthermore, the proposed dynamic risk measures allow risk aversion to change dynamically. Under mild assumptions, we derive a dynamic programming principle and show the existence of an optimal policy in both finite and infinite time horizons. Moreover, we provide a sufficient condition for the optimality of deterministic actions. For illustration, we conclude the paper with examples from optimal liquidation with limit order books and autonomous driving.
J. Commodity Markets, vol 35 (2024)
One approach to reducing greenhouse gas (GHG) emissions is to incentivize carbon capturing and carbon reducing projects while simultaneously penalising excess GHG output. In this work, we present a novel market framework and characterise the optimal behaviour of GHG offset credit (OC) market participants in both single-player and two-player settings. The single-player setting is posed as an optimal stopping and control problem, while the two-player setting is posed as an optimal stopping and mixed-Nash equilibrium problem. We demonstrate the importance of acting optimally using numerical solutions and Monte Carlo simulations and explore the differences between homogeneous and heterogeneous players. In both settings, we find that market participants benefit from optimal OC trading and OC generation.
SIAM J. Control, 62(2), 982-1005
Given an n-dimensional stochastic process X driven by P-Brownian motions and Poisson random measures, we seek the probability measure Q, with minimal relative entropy to P, such that the Q-expectations of some terminal and running costs are constrained. We prove existence and uniqueness of the optimal probability measure, derive the explicit form of the measure change, and characterise the optimal drift and compensator adjustments under the optimal measure. We provide an analytical solution for Value-at-Risk (quantile) constraints, discuss how to perturb a Brownian motion to have arbitrary variance, and show that pinned measures arise as a limiting case of optimal measures. The results are illustrated in a risk management setting — including an algorithm to simulate under the optimal measure — where an agent seeks to answer the question: what dynamics are induced by a perturbation of the Value-at-Risk and the average time spent below a barrier on the reference process?
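A static, sample-based analogue of the constrained minimal-entropy problem is the classical exponential tilt: among measures Q with E_Q[f(X)] = c, the one closest to P in relative entropy has density proportional to exp(λ f(X)), with λ chosen to hit the constraint. The sketch below is illustrative and leaves out the dynamic drift/compensator characterisation that is the paper's main content.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal-relative-entropy reweighting subject to E_Q[f(X)] = c: exponential
# tilting with the multiplier found by root search.
def tilt_weights(f_vals, c):
    def weights(lam):
        z = lam * f_vals
        w = np.exp(z - z.max())      # stabilised exponentials
        return w / w.sum()
    moment = lambda lam: weights(lam) @ f_vals - c
    lam = brentq(moment, -50.0, 50.0)
    return weights(lam)

rng = np.random.default_rng(1)
x = rng.normal(size=10000)
w = tilt_weights(x, 0.5)   # tilt the mean of a standard normal sample up to 0.5
```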
Applied Mathematical Finance, 2023, v20(3) pg 153-174
The objectives of option hedging/trading extend beyond mere protection against downside risks; the desire to seek gains also drives agents’ strategies. In this study, we showcase the potential of robust risk-aware reinforcement learning (RL) in mitigating the risks associated with path-dependent financial derivatives. We accomplish this by leveraging a policy gradient approach that optimises robust risk-aware performance criteria. We specifically apply this methodology to the hedging of barrier options, and highlight how the optimal hedging strategy undergoes distortions as the agent moves from being risk-averse to risk-seeking, as well as how the agent robustifies their strategy. We further investigate the performance of the hedge when the data generating process (DGP) varies from the training DGP, and demonstrate that the robust strategies outperform the non-robust ones.
SIAM J. Financial Mathematics, 2024, v15(1), pg 26-53
We study optimal control in models with latent factors where the agent controls the distribution over actions, rather than actions themselves, in both discrete and continuous time. To encourage exploration of the state space, we reward exploration with Tsallis entropy and derive the optimal distribution over states – which we prove is q-Gaussian distributed with location characterized through the solution of an FBSΔE and FBSDE in discrete and continuous time, respectively. We discuss the relation between the solutions of the optimal exploration problems and the standard dynamic optimal control solution. Finally, we develop the optimal policy in a model-agnostic setting along the lines of soft Q-learning. The approach may be applied in, e.g., developing more robust statistical arbitrage trading strategies.
Insurance: Mathematics & Economics, v114, Pages 56-78
Stress testing, and in particular, reverse stress testing, is a prominent exercise in risk management practice. Reverse stress testing, in contrast to (forward) stress testing, aims to find an alternative but plausible model such that under that alternative model, specific adverse stresses (i.e. constraints) are satisfied. Here, we propose a reverse stress testing framework for dynamic models. Specifically, we consider a compound Poisson process over a finite time horizon and stresses composed of expected values of functions applied to the process at the terminal time. We then define the stressed model as the probability measure under which the process satisfies the constraints and which minimizes the Kullback-Leibler divergence to the reference compound Poisson model.
We solve this optimization problem, prove existence and uniqueness of the stressed probability measure, and provide a characterization of the Radon-Nikodym derivative from the reference model to the stressed model. We find that under the stressed measure, the intensity and the severity distribution of the process depend on time and the state space. We illustrate the dynamic stress testing by considering stresses on VaR and both VaR and CVaR jointly and provide illustrations of how the stochastic process is altered under these stresses. We generalize the framework to multivariate compound Poisson processes and stresses at times other than the terminal time. We illustrate the applicability of our framework by considering “what if” scenarios, where we answer the question: What is the severity of a stress on a portfolio component at an earlier time such that the aggregate portfolio exceeds a risk threshold at the terminal time? Moreover, for general constraints, we provide a simulation algorithm to simulate sample paths under the stressed measure.
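Since the stressed measure makes the intensity depend on time and the state, sample paths can no longer be drawn with a constant-rate Poisson clock; thinning handles this. The sketch below simulates a compound Poisson-type path with a state- and time-dependent intensity via Ogata thinning; the intensity and severity are illustrative stand-ins, not the paper's stressed dynamics.

```python
import numpy as np

# Ogata thinning: propose arrivals at a dominating rate lam_max, accept each
# with probability lam(t, X_t)/lam_max, and add a severity draw on acceptance.
def simulate_thinned(T, lam_fn, sev_fn, lam_max, rng):
    t, x, jumps = 0.0, 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)              # candidate arrival
        if t > T:
            break
        if rng.uniform() < lam_fn(t, x) / lam_max:       # thinning step
            j = sev_fn(t, x, rng)
            x += j
            jumps.append((t, j))
    return x, jumps

rng = np.random.default_rng(2)
lam = lambda t, x: 1.0 + 0.5 * np.tanh(x - t)   # toy state-dependent intensity
sev = lambda t, x, rng: rng.exponential(1.0)    # toy severity
xT, jumps = simulate_thinned(10.0, lam, sev, lam_max=1.5, rng=rng)
```

With a constant intensity the scheme reduces to an ordinary compound Poisson simulator, which gives a quick sanity check.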
SIAM J. Financial Mathematics, (2023), vol 14(4) pg. 1249--1289
We propose a novel framework to solve risk-sensitive reinforcement learning (RL) problems where the agent optimises time-consistent dynamic spectral risk measures. Based on the notion of conditional elicitability, our methodology constructs (strictly consistent) scoring functions that are used as penalizers in the estimation procedure. Our contribution is threefold: we (i) devise an efficient approach to estimate a class of dynamic spectral risk measures with deep neural networks, (ii) prove that these dynamic spectral risk measures may be approximated to any arbitrary accuracy using deep neural networks, and (iii) develop a risk-sensitive actor-critic algorithm that uses full episodes and does not require any additional nested transitions. We compare our conceptually improved reinforcement learning algorithm with the nested simulation approach and illustrate its performance in two settings: statistical arbitrage and portfolio allocation on both simulated and real data.
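Elicitability, the ingredient that makes scoring functions usable as penalisers here, can be seen in its simplest instance: VaR at level α is the minimiser of the strictly consistent pinball score, so it can be estimated by empirical score minimisation. This one-dimensional toy (a grid search on Gaussian data) only illustrates the principle; the paper constructs strictly consistent scores for full dynamic spectral risk measures.

```python
import numpy as np

# Pinball score S(v, y) = (1{y <= v} - alpha)(v - y); its minimiser over v
# is the alpha-quantile (VaR) of y's distribution.
def pinball_score(v, y, alpha):
    return np.mean(((y <= v).astype(float) - alpha) * (v - y))

rng = np.random.default_rng(4)
y = rng.normal(size=20000)
grid = np.linspace(-3, 3, 601)
scores = [pinball_score(v, y, 0.9) for v in grid]
var_hat = grid[int(np.argmin(scores))]   # score-minimising VaR estimate
```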
Proceedings of the 63rd ISI World Statistics Congress, 2023
[ arXiv ]
We introduce a distributional method for learning the optimal policy in risk-averse Markov decision processes with finite state and action spaces, latent costs, and stationary dynamics. We assume sequential observations of states, actions, and costs and assess the performance of a policy using dynamic risk measures constructed from nested Kusuoka-type conditional risk mappings. For such performance criteria, randomized policies may outperform deterministic policies; therefore, the candidate policies lie in the d-dimensional simplex where d is the cardinality of the action space. Existing risk-averse reinforcement learning methods seldom consider randomized policies, and naïve extensions to the current setting suffer from the curse of dimensionality. By exploiting certain structures embedded in the corresponding dynamic programming principle, we propose a distributional learning method for seeking the optimal policy. The conditional distribution of the value function is cast into a specific type of function, chosen with the ease of risk-averse optimization in mind. We use a deep neural network to approximate said function, illustrate that the proposed method avoids the curse of dimensionality in the exploration phase, and explore the method’s performance with a wide range of model parameters that are picked randomly.
Mathematical Finance, 34 (2), 557-587
We develop an approach for solving time-consistent risk-sensitive stochastic optimization problems using model-free reinforcement learning (RL). Specifically, we assume agents assess the risk of a sequence of random variables using dynamic convex risk measures. We employ a time-consistent dynamic programming principle to determine the value of a particular policy, and develop policy gradient update rules that aid in obtaining optimal policies. We further develop an actor-critic style algorithm using neural networks to optimize over policies. Finally, we demonstrate the performance and flexibility of our approach by applying it to three optimization problems: statistical arbitrage trading strategies, obstacle avoidance robot control, and financial hedging.
SIAM J. Financial Mathematics, 2023, vol 14(4) 1175--1214
[ arXiv ][ SSRN ][ SIAM ][ github ]
We study the problem of active portfolio management where an investor aims to outperform a benchmark strategy’s risk profile while not deviating too far from it. Specifically, an investor considers alternative strategies whose terminal wealth lie within a Wasserstein ball surrounding a benchmark’s — being distributionally close — and that have a specified dependence/copula — tying state-by-state outcomes — to it. The investor then chooses the alternative strategy that minimises a distortion risk measure of terminal wealth. In a general (complete) market model, we prove that an optimal dynamic strategy exists and provide its characterisation through the notion of isotonic projections.
We further propose a simulation approach to calculate the optimal strategy’s terminal wealth, making our approach applicable to a wide range of market models. Finally, we illustrate how investors with different copula and risk preferences invest and improve upon the benchmark using the Tail Value-at-Risk, inverse S-shaped, and lower- and upper-tail distortion risk measures as examples. We find that investors’ optimal terminal wealth distribution has larger probability masses in regions that reduce their risk measure relative to the benchmark while preserving the benchmark’s structure.
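The isotonic projection used in the characterisation is, in its simplest form, the L2 projection of a vector onto the cone of non-decreasing vectors, computable by the pool-adjacent-violators algorithm (PAVA). The sketch below uses uniform weights for simplicity; the paper's projection operates on quantile functions rather than plain vectors.

```python
import numpy as np

# Pool-adjacent-violators: merge adjacent blocks (tracking sums and counts)
# whenever their means violate the non-decreasing ordering, then expand each
# block back to its constituent entries.
def isotonic_projection(y):
    blocks = []                       # each block is [sum, count]
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)       # block mean repeated over the block
    return np.array(out)
```

For example, projecting `[3, 1, 2]` onto the non-decreasing cone pools all three entries into their mean, giving `[2, 2, 2]`.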
SIAM J. Financial Mathematics, (2023), Vol. 14(4), pg.1004-1027
We propose a hybrid method for generating arbitrage-free implied volatility (IV) surfaces consistent with historical data by combining model-free Variational Autoencoders (VAEs) with continuous time stochastic differential equation (SDE) driven models. We focus on two classes of SDE models: regime switching models and Lévy additive processes. By projecting historical surfaces onto the space of SDE model parameters, we obtain a distribution on the parameter subspace faithful to the data on which we then train a VAE. Arbitrage-free IV surfaces are then generated by sampling from the posterior distribution on the latent space, decoding to obtain SDE model parameters, and finally mapping those parameters to IV surfaces.
Applied Mathematical Finance, 29(1), 62-78.
Model-free learning for multi-agent stochastic games is an active area of research. Existing reinforcement learning algorithms, however, are often restricted to zero-sum games, and are applicable only in small state-action spaces or other simplified settings. Here, we develop a new data efficient Deep-Q-learning methodology for model-free learning of Nash equilibria for general-sum stochastic games. The algorithm uses a local linear-quadratic expansion of the stochastic game, which leads to analytically solvable optimal actions. The expansion is parametrized by deep neural networks to give it sufficient flexibility to learn the environment without the need to experience all state-action pairs. We study symmetry properties of the algorithm stemming from label-invariant stochastic games and as a proof of concept, apply our algorithm to learning optimal trading strategies in competitive electronic markets.
SIAM Journal on Financial Mathematics, 13(3), 944-968.
Trading frictions are stochastic and, in many instances, fast mean-reverting. Here, we study how to optimally trade in a market with stochastic price impact and analyse approximations to the resulting optimal control problem using singular perturbation methods. We prove, by constructing sub- and super-solutions, that the approximations are accurate to the specified order. Finally, we perform some numerical experiments to illustrate the effect that stochastic trading frictions have on optimal trading.
in Machine Learning and Data Sciences for Financial Markets: A Guide to Contemporary Practices. Cambridge University Press
We employ reinforcement learning (RL) techniques to devise statistical arbitrage strategies in electronic markets. In particular, double deep Q network learning (DDQN) and a new variant of reinforced deep Markov models (RDMMs) are used to derive the optimal strategies for an agent who trades in a foreign exchange (FX) triplet. An FX triplet consists of three currency pairs where the exchange rate of one pair is redundant because, by no-arbitrage, it is determined by the exchange rates of the other two pairs. We use simulations of a co-integrated model of exchange rates to implement the strategies and show their financial performance.
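The no-arbitrage redundancy in an FX triplet is a one-line identity: with, say, EURUSD and USDJPY quoted, EURJPY is determined as their product, and any sustained deviation from this implied cross rate is the kind of signal the statistical arbitrage strategies trade on. The quotes below are made-up numbers for illustration.

```python
# Triplet consistency: the cross pair must equal the product of the other two
# exchange rates; the residual measures mispricing relative to no-arbitrage.
def triplet_mispricing(eurusd, usdjpy, eurjpy):
    implied = eurusd * usdjpy
    return eurjpy - implied   # positive: cross pair rich vs the implied rate

m = triplet_mispricing(1.10, 150.0, 165.30)   # implied cross is 165.0
```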
Mathematical Finance, 32(3), 779-824.
SREC markets are market-based systems designed to incentivize solar energy generation. A regulatory body imposes a lower bound on the amount of energy each regulated firm must generate via solar means, providing them with a certificate for each MWh generated. Regulated firms seek to navigate the market to minimize the cost imposed on them, by modulating their SREC generation and trading activities. As such, the SREC market can be viewed through the lens of a large stochastic game with heterogeneous agents, where agents interact through the market price of the certificates. We study this stochastic game by solving the mean-field game (MFG) limit with sub-populations of heterogeneous agents. Our market participants optimize costs accounting for trading frictions, cost of generation, SREC penalty, and generation uncertainty. Using techniques from variational analysis, we characterize firms’ optimal controls as the solution of a new class of McKean-Vlasov FBSDE and determine the equilibrium SREC price. We further prove that the MFG strategy has the ε-Nash property for the finite player game. Finally, we numerically solve the MV-FBSDEs and conclude by demonstrating how firms behave in equilibrium using simulated examples.
Finance and Stochastics, 26, 103–129 (2022)
[ FS ]
At the heart of financial mathematics lie stochastic optimisation problems. Traditional approaches to solving such problems, while applicable to broad classes of models, require specifying a model to complete the analysis and obtain implementable results. Even then, the curse of dimensionality challenges the viability of conventional methods to settings of practical relevance. In contrast, machine learning, and reinforcement learning (RL) particularly, promises to learn from data and overcome the curse of dimensionality simultaneously. This article touches on several approaches in the extant literature that are well positioned to merge our traditional techniques with RL.
SIAM J. Financial Mathematics, vol 13, 213-226
[ arXiv ] [ SIFIN ] [ github ]
We present a reinforcement learning (RL) approach for robust optimisation of risk-aware performance criteria. To allow agents to express a wide variety of risk-reward profiles, we assess the value of a policy using rank dependent expected utility (RDEU). RDEU allows the agent to seek gains, while simultaneously protecting themselves against downside events. To robustify optimal policies against model uncertainty, we assess a policy not by its distribution, but rather, by the worst possible distribution that lies within a Wasserstein ball around it. Thus, our problem formulation may be viewed as an actor choosing a policy (the outer problem), and the adversary then acting to worsen the performance of that strategy (the inner problem). We develop explicit policy gradient formulae for the inner and outer problems, and demonstrate their efficacy on three prototypical financial problems: robust portfolio allocation, optimising a benchmark, and statistical arbitrage.
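On a finite sample, RDEU reduces to a distortion-weighted average of sorted utilities: sort the outcomes, apply the utility, and weight each order statistic by an increment of the distortion function. The convention below (ascending sort with CDF increments, so a convex distortion overweights the best outcomes) is one common sample formula; the utility and distortion are illustrative stand-ins for the agent's actual preferences.

```python
import numpy as np

# Sample RDEU: sum_i u(y_(i)) [ g(i/n) - g((i-1)/n) ] with y sorted ascending.
def rdeu(sample, utility, g):
    y = np.sort(utility(np.asarray(sample)))
    n = len(y)
    p = np.arange(n + 1) / n
    w = g(p[1:]) - g(p[:-1])       # distortion increments
    return w @ y

# With the identity distortion g(p) = p this is just the sample mean; a convex
# g (here p^2) tilts weight toward the upper tail.
val = rdeu(np.array([0.0, 1.0, 2.0, 3.0]), lambda y: y, lambda p: p ** 2)
```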
International Journal of Theoretical and Applied Finance (IJTAF) 24, no. 06n07 (2021): 1-37.
Latency (i.e., time delay) in electronic markets affects the efficacy of liquidity taking strategies. During the time liquidity takers process information and send marketable limit orders (MLOs) to the exchange, the limit order book (LOB) might undergo updates, so there is no guarantee that MLOs are filled. We develop a latency-optimal trading strategy that improves the marksmanship of liquidity takers. The interaction between the LOB and MLOs is modelled as a marked point process. Each MLO specifies a price limit so the order can receive worse prices and quantities than those the liquidity taker targets if the updates in the LOB are against the interest of the trader. In our model, the liquidity taker balances the tradeoff between missing trades and the costs of walking the book. We employ techniques of variational analysis to obtain the optimal price limit of each MLO the agent sends. The price limit of an MLO is characterized as the solution to a new class of forward-backward stochastic differential equations (FBSDEs) driven by random measures. We prove the existence and uniqueness of the solution to the FBSDE and numerically solve it to illustrate the performance of the latency-optimal strategies.
2022, The Astrophysical Journal, 926 51
High-resolution spectroscopic surveys of the Milky Way have entered the Big Data regime and have opened avenues for solving outstanding questions in Galactic Archaeology. However, exploiting their full potential is limited by complex systematics, whose characterization has not received much attention in modern spectroscopic analyses. In this work, we present a novel method to disentangle the component of spectral data space intrinsic to the stars from that due to systematics. Using functional principal component analysis on a sample of 18,933 giant spectra from APOGEE, we find that the intrinsic structure above the level of observational uncertainties requires ≈10 Functional Principal Components (FPCs). Our FPCs can reduce the dimensionality of spectra, remove systematics, and impute masked wavelengths, thereby enabling accurate studies of stellar populations. To demonstrate the applicability of our FPCs, we use them to infer stellar parameters and abundances of 28 giants in the open cluster M67. We employ Sequential Neural Likelihood, a simulation-based Bayesian inference method that learns likelihood functions using neural density estimators, to incorporate non-Gaussian effects in spectral likelihoods. By hierarchically combining the inferred abundances, we limit the spread of the following elements in M67: Fe≲0.02 dex; C≲0.03 dex; O,Mg,Si,Ni≲0.04 dex; Ca≲0.05 dex; N,Al≲0.07 dex (at 68% confidence). Our constraints suggest a lack of self-pollution by core-collapse supernovae in M67, which has promising implications for the future of chemical tagging to understand the star formation history and dynamical evolution of the Milky Way.
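On a common wavelength grid, the functional PCA step amounts to centring the sample of spectra, taking an SVD, and reconstructing each spectrum from its leading components — the paper finds roughly 10 suffice above the noise level for APOGEE. The synthetic low-rank "spectra" below are purely illustrative.

```python
import numpy as np

# FPCA sketch on gridded spectra: project onto the leading k right-singular
# vectors of the centred data matrix and reconstruct.
def fpca_reconstruct(X, k):
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu + (Xc @ Vt[:k].T) @ Vt[:k]

rng = np.random.default_rng(5)
scores = rng.normal(size=(200, 3))
basis = rng.normal(size=(3, 50))
X = scores @ basis + 0.01 * rng.normal(size=(200, 50))  # rank-3 signal + noise
err3 = np.mean((X - fpca_reconstruct(X, 3)) ** 2)       # enough components
err1 = np.mean((X - fpca_reconstruct(X, 1)) ** 2)       # too few components
```

Once the components capture the intrinsic structure, the residual is at the noise floor — the same diagnostic used to decide how many FPCs the real spectra require.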
Applied Mathematical Finance, v28(4), 2021, 361-380
Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution uses stringent model assumptions and applies continuous-time stochastic control to solve the resulting problems. Here, we instead take a model-free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected neural network trained using Experience Replay and Double DQN with input features given by the current state of the limit order book, other trading signals, and available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action. We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most stocks using the measures of (i) mean and median out-performance, (ii) probability of out-performance, and (iii) gain-loss ratios.
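The Double DQN target used in training decouples action selection from evaluation: the online network picks the argmax action at the next state, while the target network evaluates it, which reduces the overestimation bias of vanilla Q-learning. The Q-values and rewards below are toy numbers standing in for network outputs.

```python
import numpy as np

# Double DQN target: y = r + gamma * (1 - done) * Q_target(s', argmax_a Q_online(s', a))
def double_dqn_target(r, gamma, q_online_next, q_target_next, done):
    a_star = np.argmax(q_online_next, axis=1)               # online net selects
    bootstrap = q_target_next[np.arange(len(r)), a_star]    # target net evaluates
    return r + gamma * (1.0 - done) * bootstrap

r = np.array([1.0, 0.5])
q_online_next = np.array([[0.2, 0.9], [1.0, 0.1]])
q_target_next = np.array([[0.3, 0.7], [0.8, 0.4]])
done = np.array([0.0, 1.0])         # second transition is terminal
y = double_dqn_target(r, 0.99, q_online_next, q_target_next, done)
```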
Automatica, Volume 139, May 2022, 110177
[ arXiv ] [Automatica]
We study a general class of entropy-regularized multi-variate LQG mean field games (MFGs) in continuous time with K distinct sub-populations of agents. We extend the notion of actions to action distributions (exploratory actions), and explicitly derive the optimal action distributions for individual agents in the limiting MFG. We demonstrate that the optimal set of action distributions yields an ϵ-Nash equilibrium for the finite-population entropy-regularized MFG. Furthermore, we compare the resulting solutions with those of classical LQG MFGs and establish the equivalence of their existence.
Probability Surveys 18: 132-178 (2021)
We present an overview of the broad class of financial models in which the prices of assets are Lévy-Ito processes driven by an n-dimensional Brownian motion and an independent Poisson random measure. The Poisson random measure is associated with an n-dimensional Lévy process. Each model consists of a pricing kernel, a money market account, and one or more risky assets. We show how the excess rate of return above the interest rate can be calculated for risky assets in such models, thus showing the relationship between risk and return when asset prices have jumps. The framework is applied to a variety of asset classes, allowing one to construct new models as well as interesting generalizations of familiar models.
Quantitative Finance, 1-23.
We address a portfolio selection problem that combines active (outperformance) and passive (tracking) objectives. We assume a general semimartingale market model where the assets’ growth rate processes are driven by a latent factor. Using techniques from convex analysis, we obtain a closed-form solution for the optimal portfolio and provide a theorem establishing its uniqueness. The motivation for incorporating latent factors is to achieve improved growth rate estimation, an otherwise notoriously difficult task. To this end, we focus on a model where growth rates are driven by an unobservable Markov chain. The solution in this case requires a filtering step to obtain posterior probabilities for the state of the Markov chain from asset price information, which are subsequently used to find the optimal allocation. We show the optimal strategy is the posterior average of the optimal strategies the investor would have held in each state assuming the Markov chain remains in that state. Finally, we implement a number of historical backtests to demonstrate the performance of the optimal portfolio.
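One step of the filtering-then-averaging logic can be sketched directly: propagate the posterior over the latent chain through the transition matrix, update it with the likelihood of the observed return (Bayes), and take the posterior-weighted average of the per-state optimal allocations. All numerical inputs below — transition matrix, likelihoods, per-state strategies — are hypothetical illustrations, not the paper's calibrated model.

```python
import numpy as np

# One Bayes/transition update of the posterior over the latent Markov chain.
def filter_update(pi, P, likelihoods):
    pred = pi @ P                  # propagate through the chain
    post = pred * likelihoods      # Bayes update with return likelihoods
    return post / post.sum()

pi = np.array([0.5, 0.5])                     # prior over {bull, bear}
P = np.array([[0.95, 0.05], [0.10, 0.90]])    # transition matrix (assumed)
lik = np.array([0.8, 0.2])                    # likelihood of observed return
pi_new = filter_update(pi, P, lik)
w_states = np.array([0.9, 0.1])               # per-state optimal allocations
w = pi_new @ w_states                         # posterior-average strategy
```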
Systems & Control Letters. v142 (2020), 104734
We develop a convex analysis approach for solving LQG optimal control problems and apply it to major–minor (MM) LQG mean–field game (MFG) systems. The approach retrieves the best response strategies for the major agent and all minor agents that attain an ϵ-Nash equilibrium. An important and distinctive advantage of this approach is that, unlike the classical approach in the literature, we are able to avoid imposing assumptions on the evolution of the mean–field. In particular, this provides a tool for dealing with complex and non–standard systems.
SIAM J. Financial Mathematics, 11(3), 690-719.
We develop the optimal trading strategy for a Foreign Exchange (FX) broker who must liquidate a large position in an illiquid currency pair. To maximise revenues, the broker considers trading in a currency triplet which consists of the illiquid pair and two other liquid currency pairs. The liquid pairs in the triplet are chosen so that one of the pairs is redundant. The broker is risk-neutral and accounts for model ambiguity in the FX rates to make her strategy robust to model misspecification. When the broker is ambiguity neutral (averse), the trading strategy in each pair is independent (dependent) of the inventory in the other two pairs in the triplet. We employ simulations to illustrate how the robust strategies perform. For a range of ambiguity aversion parameters, we find the mean Profit and Loss (P&L) of the strategy increases and the standard deviation of the P&L decreases as ambiguity aversion increases.
Applied Mathematical Finance, 27(1-2), 99-131.
SREC markets are a relatively novel market-based system to incentivize the production of energy from solar means. A regulator imposes a floor on the amount of energy each regulated firm must generate from solar power in a given period and provides them with certificates for each generated MWh. Firms offset these certificates against the floor and pay a penalty for any lacking certificates. Certificates are tradable assets, allowing firms to purchase/sell them freely. In this work, we formulate a stochastic control problem for generating and trading in SREC markets from a regulated firm’s perspective. We account for generation and trading costs, the impact both have on SREC prices, provide a characterization of the optimal strategy, and develop a numerical algorithm to solve this control problem. Through numerical experiments, we explore how a firm who acts optimally behaves under various conditions. We find that an optimal firm’s generation and trading behaviour can be separated into various regimes, based on the marginal benefit of obtaining an additional SREC, and validate our theoretical characterization of the optimal strategy. We also conduct parameter sensitivity experiments and compare the optimal strategy to other candidate strategies.
Mathematical Finance, 30(3), 833-868.
A risk-averse agent hedges her exposure to a non-tradable risk factor $U$ using a correlated traded asset and accounts for the impact of her trades on both factors. The effect of the agent’s trades on $U$ is referred to as cross-impact. By solving the agent’s stochastic control problem, we obtain a closed-form expression for the optimal strategy when the agent holds a linear position in $U$. When the exposure to the non-tradable risk factor $\psi(U_T)$ is non-linear, we provide an approximation to the optimal strategy in closed-form, and prove that the value function is correctly approximated by this strategy when cross-impact and risk-aversion are small. We further prove that when $\psi(U_T)$ is non-linear, the approximate optimal strategy can be written in terms of the optimal strategy for a linear exposure with the size of the position changing dynamically according to the exposure’s “Delta” under a particular probability measure.
Mathematical Finance, 30(3), 995-1034.
Even when confronted with the same data, agents often disagree on a model of the real-world. Here, we address the question of how interacting heterogeneous agents, who disagree on what model the real-world follows, optimize their trading actions. The market has latent factors that drive prices, and agents account for the permanent impact they have on prices. This leads to a large stochastic game, where each agent’s performance criterion is computed under a different probability measure. We analyse the mean-field game (MFG) limit of the stochastic game and show that the Nash equilibrium is given by the solution to a non-standard vector-valued forward-backward stochastic differential equation. Under some mild assumptions, we construct the solution in terms of expectations of the filtered states. We prove the MFG strategy forms an ϵ-Nash equilibrium for the finite player game. Lastly, we present a least-squares Monte Carlo based algorithm for computing the optimal control and illustrate the results through simulation in a market where agents disagree on the model.
SIAM J. Financial Mathematics, 11(1), 201-239.
We develop a mixed least squares Monte Carlo-partial differential equation (LSMC-PDE) method for pricing Bermudan style options on assets whose volatility is stochastic. The algorithm is formulated for an arbitrary number of assets and driving processes and its probabilistic convergence is established. Afterwards, we discuss two methods to greatly improve the algorithm’s complexity. Our numerical examples focus on the single (2D) and multi-dimensional (4D) Heston model and we compare our hybrid algorithm with classical LSMC approaches. In both cases, we see that the time zero price of the hybrid algorithm has far lower variance than traditional LSMC. Moreover, for the 2D example we see that the optimal exercise boundaries for the hybrid algorithm are significantly more accurate compared to full LSMC when using a finite difference approach as a benchmark.
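As a rough illustration of the least-squares Monte Carlo backbone that the hybrid LSMC-PDE method builds on, here is a minimal pure-LSMC Bermudan put pricer. It is a sketch only: constant volatility stands in for the Heston dynamics treated in the paper, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model/contract parameters (not from the paper)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate GBM paths (a constant-volatility stand-in for Heston)
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

# Backward induction: regress continuation value on a polynomial basis of S,
# using only in-the-money paths, and exercise where intrinsic value is larger
cashflow = np.maximum(K - S[:, -1], 0.0)   # Bermudan put payoff at maturity
for t in range(n_steps - 1, 0, -1):
    cashflow *= disc
    itm = (K - S[:, t]) > 0
    if itm.sum() > 0:
        coeffs = np.polyfit(S[itm, t], cashflow[itm], deg=3)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = K - S[itm, t]
        ex_now = exercise > continuation
        idx = np.where(itm)[0][ex_now]
        cashflow[idx] = exercise[ex_now]

price = disc * cashflow.mean()
print(f"LSMC Bermudan put price: {price:.3f}")
```

In the paper's hybrid scheme, the regression step above is replaced by a PDE solve in the volatility dimension, which is what drives the variance reduction reported in the abstract.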
Applied Mathematical Finance, 27(1-2), 67-98
We model the trading strategy of an investor who spoofs the limit order book (LOB) to increase the revenue obtained from selling a position in a security. The strategy employs, in addition to sell limit orders (LOs) and sell market orders (MOs), a large number of spoof buy LOs to manipulate the volume imbalance of the LOB. Spoofing is illegal, so the strategy trades off the gains that originate from spoofing against the expected financial losses due to a fine imposed by the financial authorities. As the expected value of the fine increases, the investor relies less on spoofing, and if the expected fine is large enough, it is optimal for the investor not to spoof the LOB because the fine outweighs the benefits from spoofing. The arrival rate of buy MOs increases because other traders believe that the spoofed buy-heavy LOB shows the true supply of liquidity and interpret this imbalance as upward pressure on prices. When the fine is low, our results show that spoofing considerably increases the revenues from liquidating a position. The PnL of the spoof strategy is higher than that of a no-spoof strategy for two reasons. First, the investor employs fewer MOs to draw the inventory to zero and benefits from roundtrip trades, which stem from spoof buy LOs that are ‘inadvertently’ filled and subsequently unwound with sell LOs. Second, the midprice trends upward when the book is buy-heavy; therefore, as time evolves, the spoofer sells the asset at better prices (on average).
SIAM J. Financial Mathematics, 10(3), 790–814
We consider an agent who takes a short position in a contingent claim and employs limit orders (LOs) and market orders (MOs) to trade in the underlying asset to maximize expected utility of terminal wealth. The agent solves a combined optimal stopping and control problem where trading has frictions: MOs (executed by the agent and other traders) have permanent price impact and pay exchange fees, and LOs earn the spread (relative to the midprice of the asset) and pay no exchange fees. We show how the agent replicates the payoff of the claim and also speculates in the asset to maximize expected utility of terminal wealth. In the strategy, MOs are used to keep the inventory on target, to replicate the payoff, and LOs are employed to build the inventory at favorable prices and boost expected terminal wealth by executing roundtrip trades that earn the spread. We calibrate the model to the E-mini contract that tracks the S\&P500 index, provide numerical examples of the performance of the strategy, and prove that our scheme converges to the viscosity solution of the dynamic programming equation.
Applied Mathematical Finance, 2019, vol 26(2), 153-185
[ SSRN ] [ final version: AMF ]
Algorithmic trading strategies for execution often focus on the individual agent who is liquidating/acquiring shares.
When generalized to multiple agents, the resulting stochastic game is notoriously difficult to solve in closed-form. Here, we circumvent the difficulties by investigating a mean-field game framework containing (i) a major agent who is liquidating a large number of shares, (ii) a number of minor agents (high-frequency traders (HFTs)) who detect and trade against the liquidator, and (iii) noise traders who buy and sell for exogenous reasons. Our setup accounts for permanent price impact stemming from all trader types inducing an interaction between major and minor agents. Both optimizing agents trade against noise traders as well as one another. This stochastic dynamic game contains couplings in the price and trade dynamics, and we use a mean-field game approach to solve the problem. We obtain a set of decentralized feedback trading strategies for the major and minor agents, and express the solution explicitly in terms of a deterministic fixed point problem. For a finite $N$ population of HFTs, the set of major-minor agent mean-field game strategies is shown to have an $\epsilon_N$-Nash equilibrium property where $\epsilon_N\to0$ as $N\to\infty$.
SIAM Newsletter, Mar 2019
[ Article ]
Algorithms designed for automated trading on financial markets have existed for at least two decades but became ubiquitous with the creation of electronic exchanges. Because of their lightning-fast reaction times and ability to process huge quantities of data in real time, such algorithms are preferable to manual traders for intra-day trading.
Due to the speed and volume of information, trading decisions must be made without human intervention and designers must be conscious of market complexities. As all models are merely approximations, an ideal algorithm should learn from its environment and dynamically adapt its strategy. Some of the earliest mathematical work in algorithmic trading focused on the execution problem [1], but researchers have since devoted much time to areas like market-making, statistical arbitrage, and optimal tracking of stochastic targets [3]…
Energy Economics, 2019, vol 79, 3-20
[ SSRN ]
We derive an investor’s optimal trading strategy for electricity contracts traded in two locations joined by an interconnector. The investor employs a price model which includes the impact of her own trades. The investor’s trades have a permanent impact on prices because her trading activity affects the demand of contracts in both locations. Additionally, the investor receives prices which are worse than the quoted prices as a result of the elasticity of liquidity provision of contracts. Furthermore, the investor is ambiguity averse, so she acknowledges that her model of prices may be misspecified and considers other models when devising her trading strategy. We show that as the investor’s degree of ambiguity aversion increases, her trading activity decreases in both locations, and thus her inventory exposure also decreases. Finally, we show that there is a range of ambiguity aversion parameters where the Sharpe ratio of the trading strategy increases when ambiguity aversion increases.
SIAM Review, 2018, Vol. 60, No. 3 : pp. 673-703
We develop a high frequency (HF) trading strategy where the HF trader uses her superior speed to process information and to post limit sell and buy orders. By introducing a multifactor mutually exciting process, we allow for feedback effects in market buy and sell orders and the shape of the limit order book (LOB). Our model accounts for the arrival of market orders that influence activity, trigger one-sided and two-sided clustering of trades, and induce temporary changes in the shape of the LOB. We also model the impact that market orders have on the short-term drift of the midprice (short-term-alpha). We show that HF traders who do not include predictors of short-term-alpha in their strategies are driven out of the market because they are adversely selected by better-informed traders and because they are not able to profit from directional strategies.
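A minimal sketch of the building block behind the mutually exciting order-flow model: simulating a univariate Hawkes process with an exponential kernel via Ogata's thinning algorithm. The parameter values are hypothetical and the single-factor setup is a simplification of the multifactor process used in the paper.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=1):
    """Simulate event times of a Hawkes process with intensity
    lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i))
    using Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        past = np.asarray(events)
        # Intensity decays between events, so its current value is an upper bound
        lam_bar = mu + alpha * np.exp(-beta * (t - past)).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + alpha * np.exp(-beta * (t - past)).sum()
        if rng.uniform() <= lam_t / lam_bar:   # accept with prob lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

# Branching ratio alpha/beta < 1 ensures stationarity; the long-run event
# rate approaches mu / (1 - alpha/beta)
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, T=500.0)
print(len(times) / 500.0)
```

Clustering of trades, as described in the abstract, corresponds to the bursts of closely spaced events this process generates after each arrival.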
Applied Mathematical Finance, (2018) 25(3), 268-294.
Portfolio management problems are often divided into two types: active and passive, where the objective is to outperform and track a preselected benchmark, respectively. Here, we formulate and solve a dynamic asset allocation problem that combines these two objectives in a unified framework. We look to maximize the expected growth rate differential between the wealth of the investor’s portfolio and that of a performance benchmark while penalizing risk-weighted deviations from a given tracking portfolio. Using stochastic control techniques, we provide explicit closed-form expressions for the optimal allocation and we show how the optimal strategy can be related to the growth optimal portfolio. The admissible benchmarks encompass the class of functionally generated portfolios (FGPs), which include the market portfolio, as the only requirement is that they depend only on the prevailing asset values. The passive component of the problem allows the investor to leverage the relative arbitrage properties of certain FGPs and achieve outperformance in a risk-adjusted sense without requiring the difficult task of estimating asset growth rates. Finally, some numerical experiments are presented to illustrate the risk-reward profile of the optimal allocation.
Mathematical Finance (2019) 29(3), 735-772.
[ PDF ]
Alpha signals for statistical arbitrage strategies are often driven by latent factors. This paper analyses how to optimally trade with latent factors that cause prices to jump and diffuse. Moreover, we account for the effect of the trader’s actions on quoted prices and the prices they receive from trading. Under fairly general assumptions, we demonstrate how the trader can learn the posterior distribution over the latent states, and explicitly solve the latent optimal trading problem. To illustrate the efficacy of the optimal strategy, we demonstrate its performance through simulations and compare it to strategies which ignore learning in the latent factors.
Journal of Energy Markets (2018) 11(4), 51-73
We introduce a new approach to incorporate uncertainty into the decision to invest in a commodity reserve. The investment is an irreversible one-off capital expenditure, after which the investor receives a stream of cashflow from extracting the commodity and selling it on the spot market. The investor is exposed to price uncertainty and uncertainty in the amount of available resources in the reserves (i.e. technical uncertainty). She does, however, learn about the reserve levels through time, which is a key determinant in the decision to invest. To model the reserve level uncertainty and how she learns about the estimates of the commodity in the reserve, we adopt a continuous-time Markov chain model to value the option to invest in the reserve and investigate the value that learning has prior to investment.
Mathematics and Financial Economics, (2019) 13(1), 1-30.
[ PDF ]
We examine the Foreign Exchange (FX) spot price spreads with and without Last Look on the transaction. We assume that brokers are risk-neutral and they quote spreads so that losses to latency arbitrageurs (LAs) are recovered from other traders in the FX market. These losses are reduced if the broker can reject, ex-post, loss-making trades by enforcing the Last Look option which is a feature of some trading venues in FX markets. For a given rejection threshold the risk-neutral broker quotes a spread to the market so that her expected profits are zero. When there is only one venue, we find that the Last Look option reduces quoted spreads. However, if there are two venues we show that the market reaches an equilibrium where traders have no incentive to migrate. The equilibrium can be reached with both venues coexisting, or with only one venue surviving. Moreover, when one venue enforces Last Look and the other one does not, counterintuitively, it may be the case that the Last Look venue quotes larger spreads.
Int. J. Theoretical and Applied Finance, (2018) v21(03), 1850025
We develop a trading strategy which employs limit and market orders in a multi-asset economy where the assets are not only correlated, but can also be structurally dependent. To model the structural dependence, the midprice processes follow a multivariate reflected Brownian motion on the closure of a no-arbitrage region which is dictated by the assets’ bid-ask spreads. We provide a formal framework for such an economy and solve for the value function and optimal control for an investor who takes positions in these assets. The optimal strategy exhibits two dominant features which depend on how far the vector of midprices is from the no-arbitrage bounds. When midprices are sufficiently far from the no-arbitrage edges, the strategy behaves as that of a market maker who posts buy and sell limit orders. And when the midprice vector is close to the edge of the no-arbitrage region, the strategy executes a combination of market orders and limit orders to profit from statistical arbitrages. Moreover, we discuss a numerical scheme to solve for the value function and optimal control, and perform a simulation study to discuss the main characteristics of the optimal strategy.
Applied Mathematical Finance (2018) 25(1), 1-35.
[ PDF ]
We use high-frequency data from the Nasdaq exchange to build a measure of volume order imbalance in the limit order book (LOB). We show that our measure is a good predictor of the sign of the next market order (MO), i.e. buy or sell, and also helps to predict price changes immediately after the arrival of an MO. Based on these empirical findings, we introduce and calibrate a Markov chain modulated pure jump model of price, spread, LO and MO arrivals, and order imbalance. As an application of the model, we pose and solve a stochastic control problem for an agent who maximizes terminal wealth, subject to inventory penalties, by executing roundtrip trades using LOs. We use in-sample-data (January to June 2014) to calibrate the model to ten equities traded in the Nasdaq exchange, and use out-of-sample data (July to December 2014) to test the performance of the strategy. We show that introducing our volume imbalance measure into the optimisation problem considerably boosts the profits of the strategy. Profits increase because employing our imbalance measure reduces adverse selection costs and positions LOs in the book to take advantage of favorable price movements.
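A simple best-quote version of the volume imbalance measure can be sketched as follows; this is an illustrative stand-in for the calibrated measure in the paper, and the snapshot volumes are made up.

```python
import numpy as np

def order_imbalance(bid_volumes, ask_volumes):
    """Volume imbalance rho in [-1, 1]: positive values indicate a
    buy-heavy book, so the next market order is more likely a buy."""
    vb, va = float(np.sum(bid_volumes)), float(np.sum(ask_volumes))
    return (vb - va) / (vb + va)

# Toy LOB snapshot: volumes resting at the top two bid and ask levels
rho = order_imbalance(bid_volumes=[800, 500], ask_volumes=[200, 100])
print(rho)  # 0.625 -- buy-heavy book
```

In the paper, this kind of signal modulates a Markov chain driving price, spread, and order arrivals, which is what lets the optimal LO placement avoid adverse selection.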
Mathematical Finance, (2019) 29(2), 542-567.
[ PDF ]
Executing a basket of co-integrated assets is an important task facing investors. Here, we show how to do this accounting for the informational advantage gained from assets within and outside the basket, as well as for the permanent price impact of market orders (MOs) from all market participants, and the temporary impact that the agent’s MOs have on prices. The execution problem is posed as an optimal stochastic control problem and we demonstrate that, under some mild conditions, the value function admits a closed-form solution, and prove a verification theorem. Furthermore, we use data of five stocks traded in the Nasdaq exchange to estimate the model parameters and use simulations to illustrate the performance of the strategy. As an example, the agent liquidates a portfolio consisting of shares in INTC and SMH. We show that including the information provided by three additional assets (FARO, NTAP, ORCL) considerably improves the strategy’s performance; for the portfolio we execute, it outperforms the multi-asset version of Almgren-Chriss by approximately 4 to 4.5 basis points.
Int. J. Theoretical and Applied Finance, 2017, v 20 (7), 1750044
[ PDF ]
Real-option valuation traditionally is concerned with investment under conditions of project-value uncertainty, while assuming that the agent has perfect confidence in a specific model. However, agents generally do not have perfect confidence in their models, and this ambiguity affects their decisions. Moreover, real investments are not spanned by tradable assets and generate inherently incomplete markets. In this work, we account for an agent’s aversion to model ambiguity and address market incompleteness through the notion of robust indifference prices. We derive analytical results for the perpetual option to invest and the linear complementarity problem that the finite time problem satisfies. We find that ambiguity aversion has dual effects that are similar to, but distinct from, those of risk aversion. In particular, agents are found to exercise options earlier or later than their ambiguity-neutral counterparts, depending on whether the ambiguity stems from uncertainty in the investment or in a hedging asset.
in High-Performance Computing in Finance: Problems, Methods, and Solutions
Portfolio Liquidation and Ambiguity Aversion
We consider an optimal execution problem where an agent holds a position in an asset which must be liquidated (using limit orders) before a terminal horizon. Beginning with a standard model for the trading dynamics, we analyse how the acknowledgement of model misspecification affects the agent’s optimal trading strategy. The three possible sources of misspecification in this context are: (i) the arrival rate of market orders, (ii) the fill probability of limit orders, and (iii) the dynamics of the asset price. We show that ambiguity aversion with respect to each factor of the model has a similar effect on the optimal strategy, but the magnitude of the effect depends on time and inventory position in different ways depending on the source of uncertainty. In addition, we allow the agent to employ market orders to further increase the strategy’s profitability and show the effect of ambiguity aversion on the shape of the optimal impulse region. In some cases we have a closed-form expression for the optimal trading strategy, which significantly enhances the efficiency with which the strategy can be executed in real time.
Int. J. Theoretical Applied Finanance, 2016: 19(6), 1650038
[ PDF ]
We assume that the drift in the returns of asset prices consists of an idiosyncratic component and a common component given by a co-integration factor. We analyze the optimal investment strategy for an agent who maximizes expected utility of wealth by dynamically trading in these assets. The optimal solution is constructed explicitly in closed-form and is shown to be affine in the co-integration factor. We calibrate the model to three assets traded on the Nasdaq exchange (Google, Facebook, and Amazon) and employ simulations to showcase the strategy’s performance.
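The co-integration factor driving the common drift component can be estimated from data with a simple regression; the sketch below simulates two co-integrated series and recovers the coefficient by OLS (Engle-Granger step 1). All dynamics and parameter values here are illustrative, not the calibration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate two co-integrated log-price series: x2 = gamma * x1 + eps,
# with eps a mean-reverting AR(1) residual (purely illustrative numbers)
n, gamma, phi = 2000, 1.5, 0.95
x1 = np.cumsum(0.01 * rng.standard_normal(n))      # random-walk common trend
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = phi * eps[t - 1] + 0.02 * rng.standard_normal()
x2 = gamma * x1 + eps

# OLS estimate of the co-integration coefficient
gamma_hat = np.polyfit(x1, x2, deg=1)[0]
spread = x2 - gamma_hat * x1   # the stationary factor a strategy would trade on
print(f"gamma_hat = {gamma_hat:.2f}")
```

The paper's optimal investment is affine in precisely this kind of factor, which is why a reliable estimate of the co-integration relationship matters in practice.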
SIAM Journal of Financial Mathematics, 2016 (7) 1-33
[ PDF ]
Agents who acknowledge that their models are incorrectly specified are said to be ambiguity averse, and this affects the prices they are willing to trade at. Models for prices of commodities attempt to capture three stylized features: a seasonal trend, moderate deviations (a diffusive factor), and large deviations (a jump factor), with the latter two mean-reverting to the seasonal trend. Here we model ambiguity by allowing the agent to consider a class of models absolutely continuous w.r.t. their reference model, but penalize candidate models that are far from it. The buyer (seller) of a forward contract introduces a negative (positive) drift in the dynamics of the spot price, and enhances downward (upward) jumps, so the prices they are willing to trade at are lower (higher) than the forward price under $P$. When ambiguity-averse buyers and sellers employ the same reference measure they cannot trade, because the seller requires more than what the buyer is willing to pay. Finally, we observe that when ambiguity-averse agents price options written on the commodity forward, the effect of ambiguity aversion is strongest when the option is at-the-money, and weaker when it is deep in-the-money or deep out-of-the-money.
Mathematics and Financial Economics, 10(3): 339-364.
[ PDF ]
We provide an explicit closed-form strategy for an investor who executes a large order when market order-flow from all agents, including the investor’s own trades, has a permanent price impact. The strategy is found in closed-form when the permanent and temporary price impacts are linear in the market’s and investor’s rates of trading. We do this under very general assumptions about the stochastic process followed by the order-flow of the market. The optimal strategy consists of an Almgren-Chriss execution strategy adjusted by a weighted-average of the future expected net order-flow (given by the difference of the market’s rate of buy and sell market orders) over the execution trading horizon and proportional to the ratio of permanent to temporary linear impacts. We use historical data to calibrate the model to five Nasdaq traded stocks (FARO, SMH, NTAP, ORCL, INTC) and use simulations to show how the strategy performs.
[ PDF ]
We show how to optimally take positions in the limit order book by placing limit orders at-the-touch when the midprice of the asset is affected by the trading activity of the market. The midprice dynamics have a short-term-alpha component which reflects how instantaneous net order-flow, the difference between the number of market buy and market sell orders, affects the asset’s drift. If net-order flow is positive (negative), so short-term-alpha is positive (negative), the strategy may even withdraw from the sell (buy) side of the limit order book to take advantage of inventory appreciation (depreciation) and to protect the trading strategy from adverse selection costs.
SIAM J. Finan. Math., 7(1), 760–785
[ PDF ]
We provide two explicit closed-form optimal execution strategies to target VWAP. We do this under very general assumptions about the stochastic process followed by the volume traded in the market, and the agent’s orders have both temporary and permanent impact on the midprice. The strategies that target VWAP are found in closed-form. One strategy consists of TWAP adjusted upward by a fraction of instantaneous order-flow and adjusted downward by the average order-flow that is expected over the remaining life of the strategy. The other strategy consists of the Almgren-Chriss execution strategy adjusted by the expected volume and net order-flow during the remaining life of the strategy. We calibrate model parameters to five stocks traded in Nasdaq (FARO, SMH, NTAP, ORCL, INTC) and use simulations to show that the strategies target VWAP very closely and on average outperform the target by between 0.10 and 8 basis points.
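The structure of the first strategy described above, a TWAP base rate adjusted by order-flow terms, can be sketched as a simple trading-rate rule. The functional form and the constant `k` (playing the role of the ratio of permanent to temporary impact) are illustrative assumptions, not the paper's closed-form coefficients.

```python
def vwap_target_rate(q_remaining, t, T, mu_t, expected_avg_mu, k=0.5):
    """Illustrative trading rate for targeting VWAP: the TWAP rate,
    adjusted up by a fraction of instantaneous net order-flow mu_t and
    down by the average order-flow expected over the remaining horizon.
    k is a hypothetical impact-ratio constant."""
    twap = q_remaining / (T - t)
    return twap + k * (mu_t - expected_avg_mu)

# Toy usage: 50,000 shares left over a unit horizon, current order-flow
# running 200 shares/unit-time above its expected remaining-life average
rate = vwap_target_rate(q_remaining=50_000, t=0.0, T=1.0,
                        mu_t=1200.0, expected_avg_mu=1000.0)
print(rate)  # 50100.0
```

Intuitively, when the market is currently trading faster than it is expected to on average, the agent speeds up relative to TWAP so her executed volume tracks the market's.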
Quantitative Finance, Vol. 15, No. 8, 1279–1291
[ PDF ]
We develop an optimal execution policy for an investor seeking to execute a large order using limit and market orders. The investor solves the optimal policy considering different restrictions on volume of both types of orders and depth at which limit orders are posted. As a particular example we show how the execution policies perform when targeting the volume schedule of the time-weighted-average-price (TWAP). The different strategies considered by the investor always outperform TWAP with an average savings per share of about two to three times the spread. This improvement over TWAP is due to the strategies benefiting from the optimal mix of limit orders, which earn the spread, and market orders, which keep the investor’s inventory schedule on target.
Chapter in Handbook of Multi-Commodity Markets and Products: Structuring, Trading and Risk Management
[ PDF ]
[ Handbook of Multi-Commodity Markets and Products: Structuring, Trading and Risk Management]
We show how to value a storage facility using Least Squares Monte Carlo (LSMC). We present a toy model to understand how to employ the LSMC algorithm and then show how to incorporate realistic constraints in the valuation including: the maximum capacity of the storage, injection and withdrawal rates and costs, and market constraints such as bid-ask spread in the spot market and transaction costs.
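Before layering on LSMC and price uncertainty, the storage problem itself is a small dynamic program. The sketch below values a toy facility by backward induction against a known deterministic price path, with unit injection/withdrawal and no costs; prices and capacity are made-up numbers, and the chapter's LSMC treatment replaces the known prices with simulated stochastic ones.

```python
import numpy as np

# Hypothetical daily prices and storage capacity (toy example)
prices = np.array([20.0, 25.0, 18.0, 30.0, 22.0])
cap = 2
n = len(prices)

# V[t, s]: value from day t onward holding s units in storage
V = np.zeros((n + 1, cap + 1))
for t in range(n - 1, -1, -1):
    for s in range(cap + 1):
        actions = [V[t + 1, s]]                          # do nothing
        if s < cap:
            actions.append(-prices[t] + V[t + 1, s + 1])  # inject (buy one unit)
        if s > 0:
            actions.append(prices[t] + V[t + 1, s - 1])   # withdraw (sell one unit)
        V[t, s] = max(actions)

print(V[0, 0])  # 17.0: buy@20 sell@25, then buy@18 sell@30
```

Adding the constraints listed in the abstract (rate limits, injection/withdrawal costs, bid-ask spreads) amounts to modifying the action set and the cash flows in the inner loop.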
Int. J. Theoretical and Applied Finance, 19 (04), 1650028
[ PDF ]
We propose a model where an algorithmic trader takes a view on the distribution of prices at a future date and then decides how to trade in the direction of her predictions using the optimal mix of market and limit orders. As time goes by, the trader learns from changes in prices and updates her predictions to tweak her strategy. Compared to a trader that cannot learn from market dynamics or form a view of the market, the algorithmic trader’s profits are higher and more certain. Even though the trader executes a strategy based on a directional view, the sources of profits are both from making the spread as well as capital appreciation of inventories. Higher volatility of prices considerably impairs the trader’s ability to learn from price innovations, but this adverse effect can be circumvented by learning from a collection of assets that co-move.
An accelerated share repurchase (ASR) allows a firm to repurchase a significant portion of its shares immediately, while shifting the burden of reducing the impact and uncertainty in the trade to an intermediary. The intermediary must then purchase the shares from the market over several days, weeks, or as much as several months. Some contracts allow the intermediary to specify when the repurchase ends, at which point the corporation and the intermediary exchange the difference between the arrival price and the TWAP over the trading period plus a spread. Hence, the intermediary effectively has an American option embedded within an optimal execution problem. As a result, the firm receives a discounted spread relative to the no-early-exercise case. In this work, we address the intermediary’s optimal execution and exit strategy taking into account the impact that trading has on the market. We demonstrate that it is optimal to exercise when the TWAP exceeds $\zeta(t)\,S_t$, where $S_t$ is the fundamental price of the asset and $\zeta(t)$ is deterministic. Moreover, we develop a dimensional reduction of the stochastic control and stopping problem and implement an efficient numerical scheme to compute the optimal trading and exit strategies.
SIAM Financial Mathematics, 8(1): 635–671.
[ PDF ]
Because algorithmic traders acknowledge that their models are incorrectly specified, we allow for ambiguity in their choices to make their models robust to misspecification. We show how to include misspecification to: (i) the arrival rate of market orders (MOs), (ii) the fill probability of limit orders, and (iii) the dynamics of the midprice of the asset they trade. In the context of market making, we demonstrate that market makers (MMs) adjust their quotes to reduce inventory risk and adverse selection costs. Moreover, robust market making increases the strategy’s Sharpe ratio and allows the MM to fine-tune the tradeoff between the mean and the standard deviation of profits. Our framework adopts a robust optimal control approach and we provide existence and uniqueness results for the robust optimal strategies as well as a verification theorem. The behavior of the ambiguity-averse MM generalizes that of a risk-averse MM, and the two coincide in only one circumstance.
Energy Economics, Volume 45, September 2014, Pages 155-165
[ PDF ]
Canadian oil sands hold the third largest recognized oil deposit in the world. While the rapidly expanding oil sands industry in western Canada has driven economic growth, the extraction of the oil comes at a significant environmental cost. It is believed that government policies have failed to keep up with the rapid oil sands expansion, creating serious challenges in managing the environmental impacts. This paper presents a practical, yet financially sound, real options model to evaluate the rate of oil sands expansion, under different environmental cost scenarios resulting from governmental policies, while accounting for oil price uncertainty and managerial flexibilities. Our model considers a multi-plant/multi-agent setting, in which labor costs increase for all agents and impact their optimal strategies, as new plants come online. Our results show that a stricter environmental cost scenario delays investment, but leads to a higher rate of expansion once investment begins. Once constructed, a plant is highly unlikely to shut down. Our model can be used by government policy makers, to gauge the impact of policy strategies on the oil sands expansion rate, and by oil companies, to evaluate expansion strategies based on assumptions regarding market and taxation costs.
RISK, July 2014
[ PDF ]
Agents often wish to limit the price they pay for an asset. If they are acquiring a large number of shares, they must balance the risk of trading slowly (to limit price impact) with the risk of future uncertainty in prices. Here, we address the optimal acquisition problem for an agent who is unwilling to pay more than a specified price for an asset while they are subject to market impact and price uncertainty. The problem is posed as an optimal stochastic control and we provide an analytical closed-form solution for the perpetual case as well as a dimensionally reduced PDE for the general case. The optimal speed of trading is found to no longer be deterministic and instead depends on the fundamental price of the asset. Moreover, we demonstrate that a price limiter constraint significantly reduces the conditional tail expectation of the acquisition costs.
Mathematical Finance, Vol. 25(3), 576-611
[ PDF ]
We propose risk measures to assess the performance of High Frequency (HF) trading strategies that seek to maximize profits from making the realized spread where the holding period is extremely short (fractions of a second, seconds or at most minutes). The HF trader is risk-neutral and maximizes expected terminal wealth but is constrained by both capital and the amount of inventory that she can hold at any time. The risk measures enable the HF trader to fine tune her strategies by trading off different measures of inventory risk, which also proxy for capital risk, against expected profits. The dynamics of the midprice of the asset are driven by information flows which are impounded in the midprice by market participants who update their quotes in the limit order book. Furthermore, the midprice also exhibits stochastic jumps as a consequence of the arrival of market orders that have an impact on prices which can give rise to market momentum (expected prices to trend up or down).
Quantitative Finance, 14(2) pg. 369-382
[ PDF ]
Guaranteed withdrawal benefits (GWBs) are long term contracts which provide investors with equity participation while guaranteeing them a secured income stream. Due to the long investment horizons involved, stochastic volatility and stochastic interest rates are important factors to include in their valuation. Moreover, investors are typically allowed to participate in a mixed fund composed of both equity and fixed-income securities. Here, we develop an efficient method for valuing these path-dependent products through re-writing the problem in the form of an Asian styled claim and a dimensionally reduced PDE. The PDE is then solved using an Alternating Direction Implicit (ADI) method. Furthermore, we derive an analytical closed form approximation and compare this approximation with the PDE results and find excellent agreement. We illustrate the various effects of the parameters on the valuation through numerical experiments and discuss their financial implications.
SIAM Journal of Financial Mathematics, 5.1 (2014): 415-444.
[ PDF ]
We develop a High Frequency (HF) trading strategy where the HF trader uses her superior speed to process information and to post limit sell and buy orders. We introduce a multi-factor self-exciting process which allows for feedback effects in market buy and sell orders and the shape of the limit order book (LOB). The model accounts for arrival of market orders that influence activity, trigger one-sided and two-sided clustering of trades, and induce temporary changes in the shape of the LOB. The resulting strategy outperforms the Poisson strategy where the trader does not distinguish between influential and non-influential events.
Theory Probab. Appl., 58(3), 493–502
[ PDF ]
In this paper we consider a connection between the famous Skorohod embedding problem and the Shiryaev inverse problem for the first hitting time distribution of a Brownian motion: given a probability distribution, F, find a boundary such that the first hitting time distribution is F. By randomizing the initial state of the process we show that the inverse problem becomes analytically tractable. The randomization of the initial state allows us to significantly extend the class of target distributions in the case of a linear boundary and moreover allows us to establish a connection with the Skorohod embedding problem.
Quantitative Finance, 14(2) pg. 259-270
[ PDF ]
The role that clustering in activity and/or severity plays in catastrophe modeling and derivative valuation is a key aspect that has been overlooked in the recent literature. Here, we propose two marked point processes to account for these features. The first approach assumes the points are driven by a stochastic hazard rate modulated by a Markov chain while marks are drawn from a regime-specific distribution. In the second approach, the points are driven by a self-exciting process while marks are drawn from a fixed distribution. Within this context, we provide a unified approach to efficiently value catastrophe options — such as those embedded in catastrophe bonds — and show that our results are within the 95% confidence interval computed using Monte Carlo simulations. Our approach is based on deriving the valuation PIDE and utilizes transforms to provide semi-analytical closed form solutions. This contrasts with most prior works where the valuation formulae require computing several infinite sums together with numerical integration.
Commodities, Energy and Environmental Finance, vol. 74, chap. Incorporating Managerial Information into Real Option Valuation, pp. 213–238. Springer (2015)
[ PDF ]
Real options analysis (ROA) is widely recognized as a superior method for valuing projects with managerial flexibilities. Yet, its adoption remains limited due to varied difficulties in its implementation. In this work, we propose a real options approach that utilizes managerial cash-flow estimates to value early-stage project investments. By introducing a sector indicator process which drives the project value, we are able to match arbitrary managerial cash-flow distributions. This sector indicator allows us to value managerial flexibilities and obtain hedges in an easy-to-implement manner. Our approach to ROA is consistent with financial theory, requires minimal subjective input of model parameters, and bridges the gap between theoretical ROA frameworks and practice.
Applied Mathematical Finance, 20 (6) pg. 512-547
[ PDF ]
Algorithmic Trading (AT) and High Frequency (HF) trading, which are responsible for over 70% of US stock trading volume, have greatly changed the microstructure dynamics of tick-by-tick stock data. In this paper we employ a hidden Markov model to examine how the intra-day dynamics of the stock market have changed, and how to use this information to develop trading strategies at ultra-high frequencies. In particular, we show how to employ our model to submit limit orders to profit from the bid-ask spread, and we also provide evidence of how HF traders may profit from liquidity incentives (liquidity rebates). We use data from February 2001 and February 2008 to show that while in 2001 the intra-day states with the shortest average durations were also the ones with very few trades, in February 2008 the vast majority of trades took place in the states with the shortest average durations. Moreover, in 2008 the fastest states have the smallest price impact as measured by the volatility of price innovations.
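For readers unfamiliar with the machinery, the workhorse computation behind fitting such a hidden Markov model is the forward algorithm, which evaluates the likelihood of an observation sequence. A minimal sketch for a discrete-observation HMM (a toy two-state example, not the paper's calibrated intra-day model):

```python
import numpy as np

def hmm_forward_loglik(pi0, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm.
    pi0: (K,)  initial state distribution
    A:   (K,K) transition matrix, A[i, j] = P(next = j | current = i)
    B:   (K,M) emission matrix,   B[i, m] = P(obs = m | state = i)
    """
    alpha = pi0 * B[:, obs[0]]            # joint prob. of state and first obs
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                  # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik
```

In practice the forward pass is combined with a backward pass inside the Baum-Welch (EM) algorithm to estimate A and B from the trade data.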
European Journal of Finance, 19 (7-8) pg. 625-644
[ PDF ]
In this work we are concerned with valuing the option to invest in a project when the project value and the investment cost are both mean-reverting. Previous works on stochastic project and investment cost concentrate on geometric Brownian motions (GBMs) for driving the factors. However, when the project involved is linked to commodities, mean-reverting assumptions are more meaningful. Here, we introduce a model and prove that the optimal exercise strategy is not a function of the ratio of the project value to the investment V/I — contrary to the GBM case. We also demonstrate that the limiting trigger curve as maturity approaches traces out a non-linear curve in the (V,I) plane and derive its explicit form. Finally, we numerically investigate the finite-horizon problem using the Fourier space time-stepping algorithm of Jaimungal & Surkov (2009). Numerically, the optimal exercise policies are found to be approximately linear in V/I; however, contrary to the GBM case they are not described by a curve of the form V*/I* = c(t). The option price behavior as well as the trigger curve behavior nicely generalize earlier one-factor model results.
SIAM Journal on Financial Mathematics (2) pp. 665-691 (2015)
[ PDF ]
Using spectral decomposition techniques and singular perturbation theory, we develop a systematic method to approximate the prices of a variety of options in a fast mean-reverting stochastic volatility setting. Four examples are provided in order to demonstrate the versatility of our method. These include: European options, up-and-out options, double-barrier knock-out options, and options which pay a rebate upon hitting a boundary. For European options, our method is shown to produce option price approximations which are equivalent to those developed in Fouque, Papanicolaou, and Sircar (2000).
Int. J. Theor. Appl. Finan. 16, 1350034 (2013)
Multi-factor interest rate models are widely used in practice. Quite often, contingent claims with early exercise features are valued by resorting to trees, finite-difference schemes and Monte Carlo simulations. However, when jumps are present these methods are less accurate and/or efficient. In this work we develop an algorithm based on a sequence of measure changes coupled with Fourier transform solutions of the pricing partial integro-differential equation to solve the pricing problem. The method, coined the irFST method, also neatly computes option sensitivities. Furthermore, we develop closed form formulae for accrual swaps and accrual range notes under our multi-factor jump-diffusion model. We demonstrate the versatility and precision of the method through numerical experiments on European, Bermudan and callable bond options, (accrual) swaps and range notes.
[ PDF ]
In this article we study a problem related to the first passage and inverse first passage time problems for Brownian motions originally formulated by Jackson, Kreinin and Zhang (2009). Specifically, define $\tau_X = \inf\{t>0:W_t + X \le b(t) \}$ where $W_t$ is a standard Brownian motion, then given a boundary function $b:[0,\infty) \to \mathbb{R}$ and a target measure $\mu$ on $[0,\infty)$, we seek the random variable $X$ such that the law of $\tau_X$ is given by $\mu$. We characterize the solutions, prove uniqueness and existence and provide several key examples associated with the linear boundary.
ECML-PKDD 2009, LNAI 5781, pp. 628-643, 2009.
[ PDF ]
Kernel-based Copula Processes (KCPs), a new versatile tool for analyzing multiple time-series, are proposed here as a unifying framework to model the interdependency across multiple time-series and the long-range dependency within an individual time-series. KCPs build on the celebrated theory of copulas, which allows for the modeling of complex interdependence structures, while leveraging the power of kernel methods for efficient learning and parsimonious model specification. Specifically, KCPs can be viewed as a generalization of Gaussian processes enabling non-Gaussian predictions to be made. Such non-Gaussian features are extremely important in a variety of application areas. As one application, we consider temperature series from weather stations across the US. Not only are KCPs found to model the heteroskedasticity of the individual temperature changes well, they also successfully discover the interdependencies among different stations. Such results are beneficial for weather derivatives trading and risk management, for example.
Mathematical Finance, Vol. 22 (1), pp. 57-81, 2012
[ PDF ]
It is well known that purely structural models of default cannot explain short-term credit spreads, while purely intensity-based models of default lead to completely unpredictable default events. Here we introduce a hybrid model of default in which a firm enters distress upon a non-tradable credit worthiness index (CWI) hitting a critical level. Upon distress, the firm defaults at the next arrival of a Poisson process. To value defaultable bonds and CDSs we introduce the concept of robust indifference pricing, which differs from the usual indifference valuation paradigm by the inclusion of model uncertainty. To account for model uncertainty, the embedded optimization problems are modified to include a minimization over a set of candidate measures equivalent to the estimated reference measure. With this new model and pricing paradigm, we succeed in determining corporate bond spreads and CDS spreads and find that model uncertainty plays a similar, but distinct, role to risk aversion. In particular, model uncertainty allows for significant short-term spreads.
[ PDF ]
The first passage time problem for Brownian motions hitting a barrier has been extensively studied in the literature. In particular, many incarnations of integral equations which link the density of the hitting time to the equation for the barrier itself have appeared. Most interestingly, Peskir (2002b) demonstrates that a master integral equation can be used to generate a countable number of new equations via differentiation or integration by parts. In this article, we generalize Peskir’s results and provide a more powerful unifying framework for generating integral equations through a new class of martingales. We obtain a continuum of Volterra type integral equations of the first kind and prove uniqueness for a subclass. Furthermore, through the integral equations, we demonstrate how certain functional transforms of the boundary affect the density function. Finally, we demonstrate a fundamental connection between the Volterra integral equations and a class of Fredholm integral equations.
Insurance: Mathematics and Economics, 46(1), pg. 52-66.
[ PDF ]
In this paper, we extend the Cramér-Lundberg insurance risk model perturbed by diffusion to incorporate stochastic volatility and study the resulting Gerber-Shiu expected discounted penalty (EDP) function. Under the assumption that volatility is driven by an underlying Ornstein-Uhlenbeck (OU) process, we derive the integro-differential equation which the EDP function satisfies. Not surprisingly, no closed-form solution exists; however, assuming the driving OU process is fast mean-reverting, we apply singular perturbation theory to obtain an asymptotic expansion of the solution. Two integro-differential equations for the first two terms in this expansion are obtained and explicitly solved. When the claim size distribution is of phase-type, the asymptotic results simplify even further and we succeed in estimating the error of the approximation. Hyper-exponential and mixed-Erlang distributed claims are considered in some detail.
SIAM Journal on Financial Mathematics (2) pp.464-487
Energy commodities, such as oil, gas and electricity, lack the liquidity of equity markets, have large costs associated with storage, exhibit high volatilities and can have significant spikes in prices. Furthermore, and possibly most importantly, commodities tend to revert to long run equilibrium prices. Many complex commodity contingent claims exist in the markets, such as swing and interruptible options; however, the current method of valuation relies heavily on Monte Carlo simulations and tree-based methods. In this article, we develop a new framework for dealing with mean-reverting jump-diffusion (and pure jump) models by working in Fourier space. The method is based on the Fourier space time stepping algorithm of Jackson, Jaimungal, and Surkov (2008), but is tailored for mean-reverting models. We demonstrate the utility of the method by applying it to the valuation of European, American and barrier options on a single underlier, European and Bermudan spread options on two-dimensional underliers, and swing options.
Risk, July, 2009, pp. 78-83.
Diverse finite-difference schemes for solving pricing problems with Lévy underliers have been used in the literature. Invariably, the integral and diffusive terms are treated asymmetrically, large jumps are truncated, the methods are difficult to extend to higher dimensions and cannot easily incorporate regime switching or stochastic volatility. We present a new efficient approach which switches between Fourier and real space as time propagates backwards. We dub this method Fourier Space Time-Stepping (FST). The FST method applies to regime-switching Lévy models and is applicable to a wide class of path-dependent options (such as Bermudan, barrier, shout and catastrophe linked options) and options on multiple assets.
Journal of Computational Finance, Vol 12 Issue 2, pp. 1-29.
Jump-diffusion and Lévy models have been widely used to partially alleviate some of the biases inherent in the classical Black-Scholes-Merton model. Unfortunately, the resulting pricing problem requires solving a more difficult partial integro-differential equation (PIDE), and although several approaches for solving the PIDE have been suggested in the literature, none are entirely satisfactory. All treat the integral and diffusive terms asymmetrically, truncate large jumps and are difficult to extend to higher dimensions. We present a new, efficient algorithm, based on transform methods, which symmetrically treats the diffusive and integral terms, is applicable to a wide class of path-dependent options (such as Bermudan, barrier, and shout options) and options on multiple assets, and naturally extends to regime-switching Lévy models. We present a concise study of the precision and convergence properties of our algorithm for several classes of options and Lévy models and demonstrate that the algorithm is second-order in space and first-order in time for path-dependent options.
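The core of the FST idea — stepping the value function through Fourier space using the characteristic exponent of the driving process — can be sketched in a few lines. The example below prices a European call under geometric Brownian motion (a Lévy model with no jumps, so a single step suffices); it is an illustrative simplification, not the paper's implementation, and the grid sizes and parameters are hypothetical.

```python
import numpy as np

def fst_european_call(S0, K, r, sigma, T, N=2**12, x_max=7.5):
    """Price a European call by one Fourier-space time step.
    The log-price grid is centred at log(S0); the discounted value is
    obtained by multiplying the payoff's FFT by exp(psi(w) * T)."""
    x = np.linspace(-x_max, x_max, N, endpoint=False)  # log-moneyness grid
    dx = x[1] - x[0]
    w = 2 * np.pi * np.fft.fftfreq(N, d=dx)            # angular frequencies
    payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
    # characteristic exponent of the risk-neutral log-price, incl. discounting
    psi = 1j * w * (r - 0.5 * sigma**2) - 0.5 * (sigma * w)**2 - r
    v = np.real(np.fft.ifft(np.fft.fft(payoff) * np.exp(psi * T)))
    return np.interp(0.0, x, v)                        # value at S = S0
```

For a jump model one simply adds the jump component to psi; for Bermudan or barrier features the step is repeated between monitoring dates with the appropriate intervention applied in real space.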
Applied Mathematical Finance, vol. 15, Issue 5&6, pp. 449-477.
[ PDF ]
It is well known that stochastic volatility is an essential feature of commodity spot prices. By using methods of singular perturbation theory, we obtain approximate but explicit closed form pricing equations for forward contracts and options on single- and two-name forward prices. The expansion methodology is based on a fast mean-reverting stochastic volatility driving factor, and leads to pricing results in terms of constant volatility prices, their Delta’s and their Delta-Gamma’s. The stochastic volatility corrections lead to efficient calibration and sensitivity calculations.
Proceedings of the 4th IASTED International Conference on Financial Engineering and Applications.
[ PDF ]
Functional Principal Component Analysis (FPCA) provides a powerful and natural way to model functional financial data sets (such as collections of time-indexed futures and interest rate yield curves). However, FPCA assumes each sample curve is drawn from an independent and identical distribution. This assumption is axiomatically inconsistent with financial data; rather, samples are often interlinked by an underlying temporal dynamical process. We present a new modeling approach using vector autoregression (VAR) to drive the weights of the principal components. In this novel process, the temporal dynamics are first learned and then the principal components extracted. We dub this method the VAR-FPCA. We apply our method to the NYMEX light sweet crude oil futures curves and demonstrate that it offers significant advantages over conventional FPCA in applications such as statistical arbitrage and risk management.
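A minimal sketch of the VAR-FPCA pipeline — extract principal components from a panel of discretised curves, then fit a VAR(1) to the component scores — might look as follows. This is an illustration under simplifying assumptions (plain SVD-based PCA, ordinary least squares), not the paper's implementation.

```python
import numpy as np

def fpca_var(curves, n_pc=3):
    """Sketch of a VAR-FPCA fit.
    curves: (T, G) array of T time-indexed curves on a G-point grid.
    Returns the mean curve, the principal component curves, the score
    time series, and the least-squares VAR(1) coefficient matrix A
    satisfying scores[t] ~ scores[t-1] @ A."""
    mean = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    scores = (U * s)[:, :n_pc]          # time series of PC weights
    comps = Vt[:n_pc]                   # principal component curves
    A, *_ = np.linalg.lstsq(scores[:-1], scores[1:], rcond=None)
    return mean, comps, scores, A
```

The fitted A then drives forecasts of the scores, and hence of the whole futures curve, via curves_hat[t+1] = mean + (scores[t] @ A) @ comps.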
Proceedings of the 4th IASTED International Conference on Financial Engineering and Applications.
[ PDF ]
Although jump-diffusion and Lévy models have been widely used in industry, the resulting pricing partial integro-differential equation poses various difficulties for valuation. Diverse finite-difference schemes for solving the problem have been introduced in the literature. Invariably, the integral and diffusive terms are treated asymmetrically, large jumps are truncated and the methods are difficult to extend to higher dimensions. We present a new efficient transform approach for regime-switching Lévy models which is applicable to a wide class of path-dependent options (such as Bermudan, barrier, and shout options) and options on multiple assets.
Int. J. of Theoretical and Applied Finance, vol 10(7), pg. 1111-1135.
[ PDF ]
In this article, we construct forward price curves and value a class of two asset exchange options for energy commodities. We model the spot prices using an affine two-factor mean-reverting process with and without jumps. Within this modeling framework, we obtain closed form results for the forward prices in terms of elementary functions. Through measure changes induced by the forward price process, we further obtain closed form pricing equations for spread options on the forward prices. For completeness, we address both an Actuarial and a risk-neutral approach to the valuation problem. Finally, we provide a calibration procedure and calibrate our model to the NYMEX Light Sweet Crude Oil spot and futures data, allowing us to extract the implied market prices of risk for this commodity.
Insurance: Mathematics and Economics (2006), vol. 38(3), pg. 469-483
[ PDF ]
We analyze the pricing and hedging of catastrophe put options under stochastic interest rates with losses generated by a compound Poisson process. Asset prices are modeled through a jump-diffusion process which is correlated to the loss process. We obtain explicit closed form formulae for the price of the option, and the hedging parameters Delta, Gamma and Rho. The effects of stochastic interest rates and the variance of the loss process on the option's price are illustrated through numerical experiments. Furthermore, we carry out a simulation analysis to hedge a short position in the catastrophe put option by using a Delta-Gamma-Rho neutral self-financing portfolio. We find that accounting for stochastic interest rates, through Rho hedging, can significantly reduce the expected conditional loss of the hedged portfolio.
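The greek-matching step of such a Delta-Gamma-Rho neutral hedge reduces to a small linear system: choose positions in three hedging instruments whose combined greeks offset those of the short option position. A generic sketch, with all greek values hypothetical:

```python
import numpy as np

def greek_neutral_hedge(option_greeks, instrument_greeks):
    """Solve for positions w in three hedging instruments so that
    instrument_greeks @ w = option_greeks, i.e. the hedge replicates the
    (Delta, Gamma, Rho) of the position being hedged.
    option_greeks:     (3,)  greeks of the short option position
    instrument_greeks: (3,3) column j holds instrument j's greeks."""
    return np.linalg.solve(instrument_greeks, option_greeks)

# hypothetical greeks: columns = underlying forward, a vanilla option, a bond
G = np.array([[0.60, 0.40, 0.00],    # Delta row
              [0.02, 0.05, 0.00],    # Gamma row
              [0.30, 0.10, 1.00]])   # Rho row
w = greek_neutral_hedge(np.array([0.50, 0.03, 0.20]), G)
```

In the paper's simulation study the greeks come from the closed form formulae above and the system is re-solved as the portfolio is rebalanced.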
Insurance: Mathematics and Economics (2005), vol. 36(3), pg. 329-346
[ PDF ]
We investigate the pricing problem for pure endowment contracts whose life contingent payment is linked to the performance of a tradable risky asset or index. The heavy tailed nature of asset return distributions is incorporated into the problem by modeling the price process of the risky asset as a finite variation Lévy process. We price the contract through the principle of equivalent utility. Under the assumption of exponential utility, we determine the optimal investment strategy and show that the indifference price solves a non-linear partial integro-differential equation (PIDE). We solve the PIDE in the limit of zero risk aversion, and obtain the unique risk-neutral equivalent martingale measure dictated by indifference pricing. In addition, through an explicit-implicit finite difference discretization of the PIDE we numerically explore the effects of the jump activity rate, jump sizes and jump skewness on the pricing and the hedging of these contracts.
Quantitative Finance (2003) vol 3(2) 145-154
[ PDF ]
We introduce a pricing model for equity options in which sample paths follow a variance-gamma (VG) jump model whose parameters evolve according to a two-state Markov chain. As in GARCH-type models, jump sizes are positively correlated to volatility. The model is capable of justifying the observed implied volatility skews for options at all maturities. Furthermore, the term structure of implied VG kurtosis is an increasing function of the time to maturity, in agreement with empirical evidence. Explicit pricing formulae, extending the known VG formulae, for European options are derived. In addition, a resummation algorithm, based on the method of lines, which greatly reduces the algorithmic complexity of the pricing formulae, is introduced. This algorithm is also the basis of approximate numerical schemes for American and Bermudan options, for which a state dependent exercise boundary can be computed.
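Without the regime-switching layer, a variance-gamma path can be simulated by Brownian subordination: evaluate an arithmetic Brownian motion at a gamma-distributed business time. A minimal sketch with hypothetical parameters (this is the standard VG construction, not the paper's pricing algorithm):

```python
import numpy as np

def simulate_vg(theta, sigma, nu, T, n_steps, n_paths, rng):
    """Terminal VG log-returns X_T via Brownian subordination:
    X_t = theta * G_t + sigma * W(G_t), where G is a gamma process with
    unit mean rate and variance rate nu."""
    dt = T / n_steps
    # gamma time increments: shape dt/nu, scale nu => mean dt, variance nu*dt
    dG = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))
    dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal((n_paths, n_steps))
    return dX.sum(axis=1)

rng = np.random.default_rng(1)
XT = simulate_vg(theta=-0.1, sigma=0.2, nu=0.3, T=1.0,
                 n_steps=100, n_paths=50_000, rng=rng)
```

The sample moments match the known VG moments, E[X_T] = theta*T and Var[X_T] = (sigma^2 + nu*theta^2)*T, which is a convenient sanity check on any implementation.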
RISK, Feb. issue, pg. 65-70
[ PDF ]
The variance gamma jump model is known to describe the volatility smile for short-dated options accurately. However, implementation for exotic path-dependent options can prove difficult. Here, Claudio Albanese, Sebastian Jaimungal and Dmitri Rubisov use the method of lines to develop an alternative approach, allowing prices to be calculated in a more straightforward manner, either analytically or through numerical integration.
[ arXiv ]
In a reinforcement learning (RL) setting, the agent’s optimal strategy heavily depends on her risk preferences and the underlying model dynamics of the training environment. These two aspects influence the agent’s ability to make well-informed and time-consistent decisions when facing testing environments. In this work, we devise a framework to solve robust risk-aware RL problems where we simultaneously account for environmental uncertainty and risk with a class of dynamic robust distortion risk measures. Robustness is introduced by considering all models within a Wasserstein ball around a reference model. We estimate such dynamic robust risk measures using neural networks by making use of strictly consistent scoring functions, derive policy gradient formulae using the quantile representation of distortion risk measures, and construct an actor-critic algorithm to solve this class of robust risk-aware RL problems. We demonstrate the performance of our algorithm on a portfolio allocation example.
We consider the problem where an agent aims to combine the views and insights of different experts’ models. Specifically, each expert proposes a diffusion process over a finite time horizon. The agent then combines the experts’ models by minimising the weighted Kullback-Leibler divergence to each of the experts’ models. We show existence and uniqueness of the barycentre model and prove an explicit representation of the Radon-Nikodym derivative relative to the average drift model. We further allow the agent to include their own constraints, which results in an optimal model that can be seen as a distortion of the experts’ barycentre model to incorporate the agent’s constraints. Two deep learning algorithms are proposed to find the optimal drift of the combined model, allowing for efficient simulations. The first algorithm aims at learning the optimal drift by matching the change of measure, whereas the second algorithm leverages the notion of elicitability to directly estimate the value function. The paper concludes with an extended application to combining implied volatility smile models that were estimated on different datasets.
We study the perfect information Nash equilibrium between a broker and her clients — an informed trader and an uninformed trader. In our model, the broker trades in the lit exchange where trades have instantaneous and transient price impact with exponential resilience, while both clients trade with the broker. The informed trader and the broker maximise expected wealth subject to inventory penalties, while the uninformed trader is not strategic and sends the broker random buy and sell orders. We characterise the Nash equilibrium of the trading strategies in terms of the solution to a coupled system of forward-backward stochastic differential equations (FBSDEs). We solve this system explicitly and study the effect of information on the trading strategies of the broker and the informed trader.
[ arXiv ]
We investigate sample-based learning of conditional distributions on multi-dimensional unit boxes, allowing for different dimensions of the feature and target spaces. Our approach involves clustering data near varying query points in the feature space to create empirical measures in the target space. We employ two distinct clustering schemes: one based on a fixed-radius ball and the other on nearest neighbors. We establish upper bounds for the convergence rates of both methods and, from these bounds, deduce optimal configurations for the radius and the number of neighbors. We propose to incorporate the nearest neighbors method into neural network training, as our empirical analysis indicates it has better performance in practice. For efficiency, our training process utilizes approximate nearest neighbors search with random binary space partitioning. Additionally, we employ the Sinkhorn algorithm and a sparsity-enforced transport plan. Our empirical findings demonstrate that, with a suitably designed structure, the neural network has the ability to adapt to a suitable level of Lipschitz continuity locally. For reproducibility, our code is available at this https URL.
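The nearest-neighbours construction of an empirical conditional measure can be sketched directly: collect the target values attached to the k nearest feature points of the query and treat them as atoms of the conditional distribution. A simplified illustration with uniform weights, Euclidean distance, and hypothetical data (the paper's training pipeline with approximate search and Sinkhorn transport is not reproduced here):

```python
import numpy as np

def knn_conditional_sample(X, Y, x_query, k):
    """Empirical conditional distribution of Y given X near x_query:
    the Y-values of the k nearest neighbours of x_query in feature space,
    interpreted as equally weighted atoms of a measure."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argpartition(d, k)[:k]      # indices of the k smallest distances
    return Y[idx]

rng = np.random.default_rng(0)
X = rng.uniform(size=(20_000, 1))                  # features on the unit box
Y = X[:, 0] + 0.1 * rng.standard_normal(20_000)    # targets with known mean
atoms = knn_conditional_sample(X, Y, np.array([0.5]), k=200)
```

Any statistic of the conditional law (mean, quantiles, a Wasserstein distance to another measure) can then be read off the returned atoms.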
[ arXiv ]
This paper proposes a novel framework for identifying an agent’s risk aversion using interactive questioning. Our study is conducted in two scenarios: a one-period case and an infinite horizon case. In the one-period case, we assume that the agent’s risk aversion is characterized by a cost function of the state and a distortion risk measure. In the infinite horizon case, we model risk aversion with an additional component, a discount factor. Assuming access to a finite set of candidates containing the agent’s true risk aversion, we show that asking the agent to demonstrate her optimal policies in various environments, which may depend on her previous answers, is an effective means of identifying the agent’s risk aversion. Specifically, we prove that the agent’s risk aversion can be identified as the number of questions tends to infinity when the questions are randomly designed. We also develop an algorithm for designing optimal questions and provide empirical evidence that our method learns risk aversion significantly faster than randomly designed questions in simulations. Our framework has important applications in robo-advising and provides a new approach for identifying an agent’s risk preferences.
[ arXiv ]
Principal-agent games are a growing area of research which focuses on the optimal behaviour of a principal and an agent, with the former contracting work from the latter in return for a monetary award. While this field canonically considers a single agent, settings where a principal contracts multiple agents, or even a continuum of agents, are growing in prominence and pose interesting and realistic problems. Here, agents form a Nash equilibrium among themselves, and a Stackelberg equilibrium between themselves as a collective and the principal. We apply this framework to the problem of implementing emissions markets. We do so while incorporating market clearing as well as agent heterogeneity, and distinguish ourselves from the extant literature by adopting the probabilistic approach to MFGs rather than the analytic approach, as the former lends itself more naturally to our problem. For a given market design, we find the Nash equilibrium among agents using techniques from mean field games. We then provide preliminary results for the optimal market design from the perspective of the regulator, who aims to maximize revenue and overall environmental benefit.
[ arXiv ]
In many stochastic games stemming from financial models, the environment evolves with latent factors and there may be common noise across agents’ states. Two classic examples are: (i) multi-agent trading on electronic exchanges, and (ii) systemic risk induced through inter-bank lending/borrowing. Moreover, agents’ actions often affect the environment, and some agents may be small while others are large. Hence one sub-population may act as minor agents, while another may act as major agents. To capture the essence of such problems, here, we introduce a general class of non-cooperative heterogeneous stochastic games with one major agent and a large population of minor agents where agents interact with an observed common process impacted by the mean field. A latent Markov chain and a latent Wiener process (common noise) modulate the common process, and agents cannot observe them. We use filtering techniques coupled with a convex analysis approach to (i) solve the mean field game limit of the problem, (ii) demonstrate that the best response strategies generate an ϵ-Nash equilibrium for finite populations, and (iii) obtain explicit characterisations of the best response strategies.
[ arXiv ]
Financial markets are often driven by latent factors which traders cannot observe. Here, we address an algorithmic trading problem with collections of heterogeneous agents who aim to perform statistical arbitrage, where all agents filter the latent states of the world, and their trading actions have permanent and temporary price impact. This leads to a large stochastic game with heterogeneous agents. We solve the stochastic game by investigating its mean-field game (MFG) limit, with sub-populations of heterogeneous agents, and, using a convex analysis approach, we show that the solution is characterized by a vector-valued forward-backward stochastic differential equation (FBSDE). We demonstrate that the FBSDE admits a unique solution, obtain it in closed-form, and characterize the optimal behaviour of the agents in the MFG equilibrium. Moreover, we prove the MFG equilibrium provides an $\epsilon$-Nash equilibrium for the finite player game. We conclude by illustrating the behaviour of agents using the optimal MFG strategy through simulated examples.
This article explores the optimisation of trading strategies in Constant Function Market Makers (CFMMs) and centralised exchanges. We develop a model that accounts for the interaction between these two markets, estimating the conditional dependence between variables using the concept of conditional elicitability. Furthermore, we pose an optimal execution problem where the agent hides their orders by controlling the rate at which they trade. We do so without approximating the market dynamics. The resulting dynamic programming equation is not analytically tractable, therefore, we employ the deep Galerkin method to solve it. Finally, we conduct numerical experiments and illustrate that the optimal strategy is not prone to price slippage and outperforms naive strategies.
[ arXiv ]
Here, we develop a deep learning algorithm for solving Principal-Agent (PA) mean field games with market-clearing conditions — a class of problems that have thus far not been studied and one that poses difficulties for standard numerical methods. We use an actor-critic approach to optimization, where the agents form a Nash equilibrium according to the principal’s penalty function, and the principal evaluates the resulting equilibrium. The inner problem’s Nash equilibrium is obtained using a variant of the deep backward stochastic differential equation (BSDE) method modified for McKean-Vlasov forward-backward SDEs that includes dependence on the distribution over both the forward and backward processes. The outer problem’s loss is further approximated by a neural net by sampling over the space of penalty functions. We apply our approach to a stylized PA problem arising in Renewable Energy Certificate (REC) markets, where agents may rent clean energy production capacity, trade RECs, and expand their long-term capacity to navigate the market at maximum profit. Our numerical results illustrate the efficacy of the algorithm and lead to interesting insights into the nature of optimal PA interactions in the mean-field limit of these markets.
[ SSRN ]
This paper studies a general framework for mean-field games (MFGs) with ambiguity averse players based on the probabilistic framework of Carmona & Delarue (2013). We develop a framework for MFGs where the agents protect themselves from model ambiguity by considering a class of candidate models under which they compute their performance criteria. By proving a version of the stochastic maximum principle, which accounts for model ambiguity, we characterize the optimal controls through a forward-backward stochastic differential equation and establish a relationship between the finite player game and the MFG. We also demonstrate that the resulting strategy for the finite player game leads to an ε-Nash equilibrium. Explicit solutions are derived in the case of the linear-quadratic MFG.
[ PDF ]
Interbank borrowing and lending may induce systemic risk into financial markets. A simple model of this is to assume that log-monetary reserves are coupled, and that banks can also borrow/lend from/to a central bank. When all banks optimize their cost of borrowing and lending, this leads to a stochastic game which, as Carmona et al. (2015) show, induces some stability in the market. All models, however, have error in them, and here we account for model uncertainty (aka, ambiguity aversion) by recasting the problem as a robust stochastic game. We succeed in providing a strategy which leads to a Nash equilibrium for the finite game, and also study the mean-field game limit. To this end, we prove that an ε-Nash equilibrium exists, and a verification theorem is shown to hold for convex-concave cost functions. Moreover, we show that when firms are ambiguity-averse, default probabilities can be reduced relative to their ambiguity-neutral counterparts.