Chaospy - Toolbox for performing uncertainty quantification.
Providing reproducibility in deep learning frameworks
Low-variance, efficient and unbiased gradient estimation for optimizing models with binary latent variables. (ICLR 2019)
The SAG (Stochastic Average Gradient) and SAGA (an unbiased, variance-reduced variant of SAG) solvers are optimization algorithms used primarily in machine learning, notably for logistic regression and linear support vector machines (SVMs) in libraries such as scikit-learn. They are designed to be efficient on large datasets with many samples and features.
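A minimal sketch of the variance-reduction idea behind SAGA (not scikit-learn's implementation; the function name `saga_logreg` and all hyperparameters are illustrative assumptions):

```python
import numpy as np

def saga_logreg(X, y, lr=0.05, reg=1e-3, epochs=50, seed=0):
    """SAGA sketch for l2-regularized logistic regression, labels y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grads = np.zeros((n, d))  # table of last-seen per-sample gradients
    avg = np.zeros(d)         # running average of the gradient table
    for _ in range(epochs * n):
        i = rng.integers(n)
        margin = y[i] * (X[i] @ w)
        g = -y[i] * X[i] / (1.0 + np.exp(margin)) + reg * w
        # SAGA step: unbiased estimate with reduced variance
        w -= lr * (g - grads[i] + avg)
        avg += (g - grads[i]) / n
        grads[i] = g
    return w
```

The `g - grads[i] + avg` correction keeps the update unbiased while its variance shrinks as the stored gradients converge, which is what lets SAGA use a constant step size.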
Framework to model two-stage stochastic unit commitment optimization problems.
In this paper, we propose Filter Gradient Descent (FGD), an efficient stochastic optimization algorithm that forms a consistent estimate of the local gradient by solving an adaptive filtering problem with different filter designs.
Code for the ICML 2024 paper "Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models".
PyTorch implementation for "Differentiable Antithetic Sampling for Variance Reduction in Stochastic Variational Inference" (https://arxiv.org/abs/1810.02555).
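For context, plain antithetic sampling (the classical technique the paper builds a differentiable variant of) can be sketched as follows; the helper names are ours, not the paper's:

```python
import numpy as np

def plain_mc(f, n, rng):
    """Plain Monte Carlo estimate of E[f(Z)], Z ~ N(0, 1)."""
    z = rng.standard_normal(n)
    return f(z).mean()

def antithetic_mc(f, n, rng):
    """Pair each draw z with -z; for monotone f the pair is negatively
    correlated, so the paired average has lower variance per sample."""
    z = rng.standard_normal(n // 2)
    return 0.5 * (f(z) + f(-z)).mean()
```

Both estimators use the same number of function evaluations; the antithetic one simply reuses each normal draw twice with opposite sign.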
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. NeurIPS, 2022
Reproduced PyTorch implementation for ICML 2017 Paper "Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning."
Implementation and brief comparison of various first-order and proximal gradient methods, including their convergence rates.
Reproducible experiments on reducing the variance of effect estimates: plain diff, CUPED, VWE, and their combination.
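The CUPED adjustment mentioned above can be sketched in a few lines (the function name `cuped_adjust` is ours; the formula is the standard one, using a pre-experiment covariate `x`):

```python
import numpy as np

def cuped_adjust(y, x):
    """CUPED: subtract the part of y explained by the pre-experiment
    covariate x. theta = cov(y, x) / var(x); the mean of y is preserved."""
    theta = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())
```

Because the covariate is centered, the adjustment changes only the variance of the metric, not its mean, so treatment-effect estimates stay unbiased while confidence intervals tighten.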
📈 Apply financial engineering techniques to option pricing using Monte Carlo simulations and the Black-Scholes model with clear, documented Python code.
Numerical integration of SDEs with variance reduction methods for Monte Carlo simulation
Optimizer reducing gradient variance across augmentations to improve generalization
Monte Carlo option pricing framework with variance reduction and interactive Streamlit dashboard
Modern A/B experimentation utilities: CUPED/CUPAC hooks, triggered analysis, SRM, switchback helpers, and power sims.
Advanced Monte Carlo simulation framework for derivative pricing with variance reduction techniques. Features GBM, Jump Diffusion, and Heston models. Implemented in Python.
Option Pricing with Monte Carlo Simulation — A Python library implementing Black–Scholes analytic pricing, Monte Carlo simulations (with variance reduction, quasi-MC), and advanced derivatives such as Asian, Barrier, and American options. Includes performance acceleration using Numba and comprehensive documentation with visualizations.
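As a generic illustration of the control-variate technique such pricing libraries use (a sketch, not any listed library's API; `price_call_cv` is a hypothetical name), the terminal stock price under GBM has the known mean `s0 * exp(r * t)` and so can serve as a control variate for a European call:

```python
import numpy as np

def price_call_cv(s0, k, r, sigma, t, n=100_000, seed=0):
    """European call under GBM, priced by Monte Carlo with the terminal
    price S_T as a control variate (E[S_T] = s0 * exp(r * t) is known)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)
    control = st
    # optimal coefficient: beta = cov(payoff, control) / var(control)
    beta = np.cov(payoff, control, ddof=1)[0, 1] / np.var(control, ddof=1)
    return payoff.mean() - beta * (control.mean() - s0 * np.exp(r * t))
```

Subtracting `beta * (control.mean() - known_mean)` cancels the sampling noise shared between the payoff and the terminal price, typically cutting the standard error severalfold relative to plain Monte Carlo at the same sample count.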