
Projects

Substrate is a proprietary trading initiative focused on the systematic trading of Bittensor's subnet tokens, a new class of cryptoassets tied to decentralized machine learning networks. Subnet tokens are digital assets within the dTAO protocol, where each subnet functions like an individual "stock" in a broader decentralized economy, with prices set by an automated market maker (AMM) mechanism.
We’re currently in a live pilot phase, actively trading with real capital and collaborating closely with private counterparties. Our system spans the full research-to-execution stack, including:
Alpha generation for momentum-style forecasting across subnet markets
Backtesting and simulation with slippage-aware models
Portfolio optimization and risk management guided by crypto factor models, with a focus on maintaining market-neutral exposure
Data infrastructure with an in-house pipeline that aggregates on-chain and API-based telemetry
Trade execution layer mimicking brokerage functionality (e.g., limit orders, stop-loss orders) over Bittensor's buy/sell-only primitives (sketched below)
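As a minimal sketch of the execution-layer idea: a resting limit order can be emulated by polling the subnet price and firing the underlying buy primitive once the limit is crossed. The `get_price` and `buy` callables are hypothetical stand-ins for the actual subnet interface.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LimitBuy:
    """Hypothetical resting limit-buy order on a subnet token."""
    netuid: int        # subnet identifier
    size_tao: float    # order size in TAO
    limit: float       # limit price (TAO per alpha token)

def poll_limit_buy(order: LimitBuy,
                   get_price: Callable[[int], float],
                   buy: Callable[[int, float], None]) -> bool:
    """One polling step: fire the buy primitive once the limit is crossed."""
    if get_price(order.netuid) <= order.limit:
        buy(order.netuid, order.size_tao)
        return True   # filled
    return False      # still resting
```

Stop-loss orders follow the same pattern with the inequality reversed against the sell primitive.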
OpTrade: A Comprehensive Toolkit for Quantitative Options Trading Research and Strategy Development
Xavier Mootoo
OpTrade is an open-source toolkit for quantitative options trading research. It includes a comprehensive data pipeline built on ThetaData's API, providing minute-level data with intelligent contract selection based on moneyness and volatility-scaled strikes. Market environments can be precisely tuned via fundamental parameters (e.g., P/E) and factor modeling (e.g., Fama-French exposures). OpTrade also includes a full experimentation framework with PyTorch/scikit-learn models featuring specialized option featurization (e.g., order book imbalance, quote spreads), time-to-expiration features for theta decay, and datetime features for seasonality patterns.
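To illustrate the idea of volatility-scaled strike selection (an illustrative sketch, not OpTrade's actual interface): the candidate strike band widens with implied volatility and time to expiration.

```python
import math

def strike_band(spot: float, iv: float, days_to_exp: float,
                n_std: float = 1.0) -> tuple[float, float]:
    """Return a (low, high) strike band of +/- n_std standard deviations,
    scaling the width by implied volatility and time to expiration."""
    width = spot * iv * math.sqrt(days_to_exp / 365.0) * n_std
    return spot - width, spot + width

# e.g. spot=100, 25% IV, 30 days to expiry: band of roughly +/- 7.2
lo, hi = strike_band(100.0, 0.25, 30.0)
```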
Pricing Worst-Of Options via Control Variate–Enhanced Adaptive Monte Carlo and Machine Learning
A Monte Carlo simulation framework for pricing exotic worst-of basket options. The toolkit supports multi-asset payoff structures with path-independent, European-style settlement, and emphasizes asset correlation modeling through Cholesky-decomposed log-normal simulations. The pricing engine includes a novel, indicator-based control variate method that selectively reduces variance by conditioning on the worst-performing asset within the basket.
Designed for quantitative derivatives research, the framework allows users to generate synthetic market scenarios, evaluate basket option payouts under realistic joint dynamics, and experiment with payoff sensitivity to volatility, correlation, and strike parameters. The simulation pipeline is fully modular, enabling integration with broader alpha research workflows.
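A condensed sketch of the core pricing step, assuming terminal log-normal (GBM) dynamics under a flat rate; the indicator-based control variate is omitted for brevity, and all names and parameters are illustrative.

```python
import numpy as np

def price_worst_of_call(s0, vols, corr, K, r, T, n_paths=100_000, seed=0):
    """Monte Carlo price of a European worst-of call on a basket.

    Simulates correlated terminal prices via Cholesky-decomposed
    Gaussian draws under log-normal (GBM) dynamics, then discounts
    the payoff max(min_i S_i(T) - K, 0).
    """
    rng = np.random.default_rng(seed)
    s0, vols = np.asarray(s0, float), np.asarray(vols, float)
    L = np.linalg.cholesky(np.asarray(corr, float))
    z = rng.standard_normal((n_paths, len(s0))) @ L.T   # correlated normals
    drift = (r - 0.5 * vols**2) * T
    st = s0 * np.exp(drift + vols * np.sqrt(T) * z)     # terminal prices
    payoff = np.maximum(st.min(axis=1) - K, 0.0)        # worst-of payoff
    return np.exp(-r * T) * payoff.mean()

# e.g. two assets with 60% correlation
px = price_worst_of_call(s0=[100, 100], vols=[0.2, 0.3],
                         corr=[[1.0, 0.6], [0.6, 1.0]],
                         K=95.0, r=0.03, T=1.0)
```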
EMForecaster: A Deep Learning Framework for Time Series Forecasting in Wireless Networks with Distribution-Free Uncertainty Quantification
EMForecaster is a new deep learning architecture for electromagnetic field exposure forecasting that uses multi-scale temporal pattern processing through patching, reversible instance normalization, and dimension mixing. It incorporates conformal prediction for uncertainty quantification with a guaranteed coverage rate of 1 − α and introduces a "Trade-off Score" metric to balance forecast trustworthiness and prediction interval width. Performance significantly exceeds current solutions: in point forecasting, EMForecaster is 53.97% better than the Transformer architecture and 38.44% better than the average baseline, while in conformal forecasting it achieves the best balance between prediction interval width and coverage, improving 24.73% over the average baseline and 49.17% over the Transformer.
Xavier Mootoo, Hina Tabassum, Luca Chiaraviglio
IEEE Transactions on Network Science and Engineering (under review)
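The coverage guarantee follows the standard split conformal recipe; the sketch below shows the generic mechanism for any point forecaster and is not drawn from the EMForecaster codebase.

```python
import numpy as np

def conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split conformal prediction: absolute-residual scores on a held-out
    calibration set yield intervals with ~(1 - alpha) marginal coverage."""
    scores = np.abs(cal_true - cal_pred)            # nonconformity scores
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n    # finite-sample correction
    q = np.quantile(scores, min(q_level, 1.0))
    return test_pred - q, test_pred + q             # prediction band
```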
Stochastic Sparse Sampling: A Framework for Variable-Length Medical Time Series Classification
Xavier Mootoo, Alan A. Díaz-Montiel, Milad Lankarany, Hina Tabassum
Full Version: IEEE Transactions on Neural Networks and Learning Systems (under review)
Workshop: NeurIPS 2024 Workshop on Time Series in the Age of Large Models
Stochastic Sparse Sampling (SSS) introduces a novel framework for variable-length time series classification, specifically designed for medical applications where sequence length varies among patients and events. SSS handles variable-length sequences by sparsely sampling fixed windows to generate local predictions, which are then aggregated and calibrated into a global prediction. We benchmark SSS on the task of seizure onset zone localization, identifying seizure-inducing brain regions from variable-length electrophysiological data. Our experiments on the Epilepsy iEEG Multicenter Dataset show SSS outperforms state-of-the-art baselines across most centers and excels on all unseen, out-of-distribution centers, while providing valuable post-hoc insights through visualization of temporally averaged local predictions.
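The core mechanism (sparsely sampling fixed windows, scoring each locally, then aggregating) can be sketched as follows; the sampling rate and plain mean aggregator are illustrative simplifications of the calibrated aggregation used in the paper.

```python
import numpy as np

def sss_predict(series, model, window: int, rate: float = 0.3,
                rng=None) -> float:
    """Stochastic sparse sampling sketch: sparsely sample fixed-length
    windows from a variable-length series, score each with a local
    classifier, and aggregate into a global probability."""
    rng = rng or np.random.default_rng()
    starts = np.arange(len(series) - window + 1)
    k = max(1, int(rate * len(starts)))                   # sparse subset
    picks = rng.choice(starts, size=k, replace=False)
    local = [model(series[s:s + window]) for s in picks]  # local predictions
    return float(np.mean(local))                          # simple aggregation
```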
T-VICReg: Self-Supervised Learning for Time Series with Partial Temporal Invariance, Noncontrastive and Augmentation-Free
Xavier Mootoo, Alan A. Díaz-Montiel, Milad Lankarany
T-CAIREM Machine Learning Internship 2023, University of Toronto
We propose Temporal Variance-Invariance-Covariance Regularization (T-VICReg), a self-supervised learning method for time series that enables partial invariance to temporal translations. By integrating past and future representations into current-time representations, T-VICReg improves state transition detection across various tasks. On the OpenNeuro dataset ds003029 of iEEG signals from epilepsy patients, our method achieved 92.92% and 89.26% Top-1 accuracy on binary and multiclass seizure detection tasks, outperforming supervised baselines (89.23% and 84.07%, respectively). T-VICReg operates without contrastive learning or augmentations and works with both continuous and discrete time series.
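For reference, the VICReg-style objective that T-VICReg builds on combines invariance, variance, and covariance terms; a simplified PyTorch sketch is below, where in the temporal setting the two views z_a and z_b would come from temporally shifted windows rather than augmentations. Coefficients are illustrative.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                sim_w=25.0, var_w=25.0, cov_w=1.0) -> torch.Tensor:
    """VICReg-style objective on two embedding batches of shape (N, D):
    invariance (MSE), variance (hinge on per-dimension std), and
    covariance (penalizing off-diagonal covariance entries)."""
    n, d = z_a.shape
    inv = F.mse_loss(z_a, z_b)                              # invariance
    std_a = torch.sqrt(z_a.var(dim=0) + 1e-4)
    std_b = torch.sqrt(z_b.var(dim=0) + 1e-4)
    var = torch.relu(1 - std_a).mean() + torch.relu(1 - std_b).mean()
    za, zb = z_a - z_a.mean(dim=0), z_b - z_b.mean(dim=0)
    cov_a = (za.T @ za) / (n - 1)
    cov_b = (zb.T @ zb) / (n - 1)
    off = lambda c: (c - torch.diag(torch.diag(c))).pow(2).sum() / d
    cov = off(cov_a) + off(cov_b)                           # covariance
    return sim_w * inv + var_w * var + cov_w * cov
```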
QoS-Aware Deep Unsupervised Learning for STAR-RIS Assisted Networks: A Novel Differentiable Projection Framework
Mehrazin Alizadeh, Xavier Mootoo, Hina Tabassum
IEEE Wireless Communications Letters 2024
We present a novel unsupervised learning framework for solving constrained optimization problems with non-convex constraints, featuring a differentiable projection function that maps outputs to the feasible solution space with zero constraint violation. Our approach combines a custom neural network, this projection function, and unsupervised training to optimize the primary objective. Evaluated on wireless network optimization, our method outperforms genetic algorithms and existing projection-based approaches in achieved performance, computational efficiency, and convergence speed while maintaining perfect constraint satisfaction. This generalizable framework provides a promising direction for tackling complex optimization problems where traditional methods struggle with constraint enforcement and solution quality.
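To give the flavor of a differentiable projection with zero constraint violation, consider a simple convex case, a total-power budget, where the projection reduces to a smooth rescaling. This toy example only illustrates the general idea; the paper's projection handles the non-convex constraints arising in STAR-RIS assisted networks.

```python
import torch

def project_power(x: torch.Tensor, p_max: float) -> torch.Tensor:
    """Differentiable projection enforcing a total-power constraint
    ||x||^2 <= p_max with zero violation: rescale only when the raw
    network output exceeds the budget. Fully differentiable, so it can
    sit as the final layer during unsupervised training."""
    power = x.pow(2).sum().clamp_min(1e-12)          # avoid divide-by-zero
    scale = torch.clamp(torch.sqrt(p_max / power), max=1.0)
    return x * scale
```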