
Welcome to IgMin Research – an Open Access journal uniting Biology, Medicine, and Engineering. We’re dedicated to advancing global knowledge and fostering collaboration across scientific fields.

Our purpose is to inspire collaborative dialogue among scientific fields and foster accelerated knowledge development.



General-science Group Research Article | Article ID: igmin339

Malliavin Calculus as Stochastic Backpropagation for Gaussian Latent Models: A Variance-Optimal Hybrid Framework

Mathematics | DOI: 10.61927/igmin339

Affiliation

    Kevin D Oden & Associates, San Francisco, CA, USA


Abstract

We establish a rigorous connection between pathwise (reparameterization) and score-function (Malliavin) gradient estimators by showing that both arise from the Malliavin integration-by-parts identity. Building on this equivalence, we introduce a unified, variance-aware hybrid estimator that adaptively combines pathwise and Malliavin gradients using their empirical covariance structure. The connection is established explicitly for Gaussian (and, more generally, exponential-family) latent variable models, where integration-by-parts identities admit closed-form representations. The resulting formulation provides a principled understanding of stochastic backpropagation and provably achieves minimum variance among all unbiased linear combinations of the two estimators, with closed-form finite-sample convergence bounds. We demonstrate a 9% variance reduction on VAEs (CIFAR-10) and up to 35% on strongly coupled synthetic problems. Exploratory policy gradient experiments reveal that non-stationary optimization landscapes present challenges for the hybrid approach, highlighting important directions for future work. Overall, this work positions Malliavin calculus as a conceptually unifying and practically interpretable framework for stochastic gradient estimation, clarifying when hybrid approaches provide tangible benefits and when they face inherent limitations.
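The core idea of the abstract can be sketched in a toy setting. The following is a minimal illustration, not the paper's code: for a one-dimensional Gaussian latent z ~ N(mu, sigma^2) and an illustrative objective f(z) = sin(z) (my choice, not from the paper), both the pathwise and the score-function estimators are unbiased for d/dmu E[f(z)], any convex combination of them remains unbiased, and the combination weight that minimizes variance follows from their empirical covariance structure.

```python
# Sketch (assumed setup): variance-optimal hybrid of the pathwise and
# score-function gradient estimators for d/dmu E_{z ~ N(mu, sigma^2)}[f(z)].
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 0.5, 1.0, 100_000

f  = np.sin          # illustrative objective (hypothetical choice)
df = np.cos          # its derivative, needed by the pathwise estimator

eps = rng.standard_normal(n)
z = mu + sigma * eps  # reparameterization: z = mu + sigma * eps

# Per-sample estimates; both are unbiased for d/dmu E[f(z)].
g_pw = df(z)                          # pathwise: f'(mu + sigma*eps)
g_sf = f(z) * (z - mu) / sigma**2     # score function: f(z) * d/dmu log p(z)

# Any combination w*g_pw + (1-w)*g_sf is still unbiased; minimizing
# Var over w gives w* = (Var(sf) - Cov) / (Var(pw) + Var(sf) - 2*Cov).
var_pw, var_sf = g_pw.var(), g_sf.var()
cov = np.cov(g_pw, g_sf)[0, 1]
w = (var_sf - cov) / (var_pw + var_sf - 2 * cov)

g_hybrid = w * g_pw + (1 - w) * g_sf

# Closed-form check: d/dmu E[sin(z)] = cos(mu) * exp(-sigma^2 / 2).
exact = np.cos(mu) * np.exp(-sigma**2 / 2)
print(f"w* = {w:.3f}")
print(f"pathwise  mean {g_pw.mean():+.4f}  var {var_pw:.4f}")
print(f"score-fn  mean {g_sf.mean():+.4f}  var {var_sf:.4f}")
print(f"hybrid    mean {g_hybrid.mean():+.4f}  var {g_hybrid.var():.4f}")
print(f"exact gradient {exact:+.4f}")
```

In this toy problem the pathwise term already has low variance, so w* sits near 1; the paper's reported gains concern settings (e.g. strongly coupled latents) where neither estimator dominates and the covariance-weighted combination pays off.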


Similar Articles

Sorption-based Spectrophotometric Assay for Lead(II) with Immobilized Azo Ligand
Ashirov Mansur Allanazarovich, Yusupova Mavluda Rajabboyevna, Takhirov Yuldash Rajabovich, Smanova Zulaykho Asanaliyevna and Avazyazov Mukhammad Akbarovich
DOI: 10.61927/igmin283
On how Doping with Atoms of Gadolinium and Scandium affects the Surface Structure of Silicon
Egamberdiev BE, Daliev Kh S, Khamidjonov I Kh, Norkulov Sh B and Erugliev UK
DOI: 10.61927/igmin206
Melanocytic Nevi Classification using Transfer Learning
Uma Mahesh RN, Harsha Jain HJ, Hemanth Kumar CS, Shreyash Umrao and Mohith DL
DOI: 10.61927/igmin307
A New Modification of Classification of Traumatic Patients with Pelvic Fracture
Zahra Sedghazar, Ruba Altahla, Somayeh Sadeghi, Maria Karaminasian, Wan Li, Li Jun and Jamal Alshorman
DOI: 10.61927/igmin297

Why publish with us?

  • Global Visibility – Indexed in major databases

  • Fast Peer Review – Decision within 14–21 days

  • Open Access – Maximize readership and citation

  • Multidisciplinary Scope – Biology, Medicine and Engineering

  • Editorial Board Excellence – Global experts involved

  • University Library Indexing – Via OCLC

  • Permanent Archiving – CrossRef DOI

  • APC – Affordable article processing charges, with discounts

  • Citation – High Citation Potential

Submit Your Article
