Bohrer Paper Session Talks


Name: Hanjia Gao
Title: Statistical Inference for Time Series via Sample Splitting
Abstract: Sample splitting has found widespread application in a variety of contemporary statistical problems, such as post-selection inference, conformal prediction, and high-dimensional inference. Its effectiveness often relies on the assumption of independent and identically distributed (iid) data generation processes, but there has been limited exploration of its suitability for handling dependent data. In this presentation, we introduce a novel approach to statistical inference for time series data that integrates sample splitting and self-normalization. We illustrate this new methodology using two specific problems: dimension-agnostic change point testing for multivariate time series, and one-sample and two-sample testing for a functional parameter in Hilbert space-valued time series. We will present both asymptotic theory and numerical results to demonstrate its broad applicability in inferring parameters of low, high, and infinite dimensions.
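
A minimal univariate sketch of the self-normalization ingredient (the talk combines it with sample splitting to reach high- and infinite-dimensional parameters; the AR(1) data and the approximate 5% critical value of 28.31, as tabulated in the self-normalization literature, are illustrative):

    import numpy as np

    def sn_mean_stat(x, mu0):
        # Self-normalized statistic for H0: E[X_t] = mu0 (Lobato 2001; Shao 2010).
        # Bandwidth-free: the normalizer is built from recursive sample means.
        x = np.asarray(x, dtype=float)
        n = len(x)
        xbar = x.mean()
        rec_means = np.cumsum(x) / np.arange(1, n + 1)   # xbar_1, ..., xbar_n
        t = np.arange(1, n + 1)
        V = np.sum((t * (rec_means - xbar)) ** 2) / n**2
        return n * (xbar - mu0) ** 2 / V

    rng = np.random.default_rng(0)
    x = np.zeros(500)
    for i in range(1, 500):                               # AR(1) series, mean 0
        x[i] = 0.5 * x[i - 1] + rng.standard_normal()
    stat = sn_mean_stat(x, mu0=0.0)
    print(stat, "reject at 5%:", stat > 28.31)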


Name: Rentian Yao
Title: Optimization over probability distributions with proximal gradient descent
Abstract: Optimization in the space of probability distributions has broad applications in statistics. Unlike many recent studies that address this optimization problem via Wasserstein gradient flow, we consider a KL (proximal) gradient descent algorithm, motivated by non-parametric maximum likelihood estimation (NPMLE), Bayesian posterior computation, and trajectory inference. The algorithm discretizes a continuous-time gradient flow relative to the Kullback–Leibler divergence for minimizing a convex objective functional. We demonstrate that the implicit discretization scheme converges to a global optimum at a polynomial rate, covering applications such as computing the NPMLE. Moreover, if the objective functional is strongly convex, for example, when the objective is itself a KL divergence as in Bayesian posterior computation, the implicit scheme exhibits globally exponential convergence. We implement the implicit discretization via normalizing flows. When the objective functional is multivariate and jointly convex, as in trajectory inference, we propose the coordinate KL gradient descent algorithm as an explicit discretization of the coordinate KL divergence gradient flow. We establish a polynomial convergence rate for the explicit scheme and implement the algorithm via a particle method.
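
As one concrete instance, the sketch below runs an explicit entropic (KL mirror-descent) step for the NPMLE of a Gaussian location mixture on a fixed grid; the grid support, step size, and the explicit (rather than implicit/proximal) update are simplifying assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.choice([-2.0, 2.0], size=400) + rng.standard_normal(400)

    grid = np.linspace(-4, 4, 81)                # support grid for the mixing measure
    L = np.exp(-0.5 * (x[:, None] - grid[None, :]) ** 2)   # likelihood matrix
    rho = np.full(len(grid), 1.0 / len(grid))    # uniform initialization

    eta = 1.0
    for _ in range(200):
        mix = L @ rho                            # mixture density at each data point
        grad = -(L / mix[:, None]).mean(axis=0)  # first variation of the objective
        rho *= np.exp(-eta * grad)               # entropic (KL) gradient step
        rho /= rho.sum()

    print("mass near the true atoms +/-2:", rho[np.abs(np.abs(grid) - 2) < 0.3].sum())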


Name: Zhe Chen
Title: Manipulating a Continuous Instrumental Variable: Algorithm, Partial Identification Bounds, and Inference under Randomization and Biased Randomization Assumptions
Abstract: An instrumental variable (IV) can be thought of as a random nudge, or encouragement, towards accepting a treatment. With a continuous IV, Baiocchi et al. (2010) propose a novel method that “strengthens” the original, possibly weak, IV using a design technique called “non-bipartite matching.” Their key insight is to shift focus from the entire study cohort to a possibly smaller cohort that is amenable to being paired with a larger separation in the IV dose, thus inducing a higher compliance rate. Three elements in a causal analysis change as one switches from one IV-based matched design to another. First, the study cohort changes; in this article, we show this change can be avoided using a non-bipartite, template matching algorithm. Second, the compliance rate changes. Third, the latent complier subgroup changes, as a person’s latent principal stratum status in a matched design is defined with respect to the two observed IV doses within each pair. This third element is of particular relevance because the usual effect ratio estimand concerns the treatment effect among the complier subgroup, so the causal estimand is dictated by the IV-based matched design. In this article, we study partial identification bounds for the sample average treatment effect (SATE) in an IV-based matched cohort study. Unlike the effect ratio estimand, the SATE estimand does not depend on who is matched to whom in the design, although a strengthened-IV design has the potential to narrow its partial identification bounds. We derive valid statistical inference for the partial identification bounds under a randomization assumption and an IV-dose-dependent, biased randomization scheme in a matched-pair design. We apply the proposed study design and inferential methods to a study of the effect of neonatal intensive care units on the mortality rate of premature babies.
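
The design step can be pictured with a generic maximum-weight non-bipartite matching, sketched below (assuming the networkx package is available); the weight trading dose separation against covariate distance, and the constant lam, are illustrative stand-ins, not the algorithm of Baiocchi et al. (2010) or the template matching variant studied here:

    import itertools
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)
    n = 40
    dose = rng.uniform(0, 10, n)              # continuous IV dose
    xcov = rng.standard_normal((n, 2))        # covariates to balance within pairs

    lam = 2.0                                 # balance-vs-strength trade-off
    G = nx.Graph()
    for i, j in itertools.combinations(range(n), 2):
        w = abs(dose[i] - dose[j]) - lam * np.linalg.norm(xcov[i] - xcov[j])
        G.add_edge(i, j, weight=w)

    pairs = nx.max_weight_matching(G, maxcardinality=True)
    seps = [abs(dose[i] - dose[j]) for i, j in pairs]
    print(len(pairs), "pairs; mean within-pair dose separation:", np.mean(seps))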


Name: Yi Zhang
Title: Another look at bandwidth-free inference: a sample splitting approach
Abstract: Bandwidth-free tests for a multi-dimensional parameter have attracted considerable attention in the econometrics and statistics literature. These tests are convenient to implement due to their tuning-parameter-free nature and possess more accurate size than traditional heteroskedasticity and autocorrelation consistent (HAC)-based approaches. However, when the sample size is small to moderate, these bandwidth-free tests exhibit large size distortion when both the dimension of the parameter and the magnitude of temporal dependence are moderate, making them unreliable in practice. In this paper, we propose a sample splitting-based approach that reduces the dimension of the parameter to one for the subsequent bandwidth-free inference. Our SS–SN (sample splitting plus self-normalization) idea is broadly applicable to many testing problems for time series, including mean testing, testing for zero autocorrelation, and testing for a change point in a multivariate mean, among others. Specifically, we propose two types of SS–SN test statistics, derive their limiting distributions under both the null and the alternative, and show their effectiveness in alleviating size distortion via simulations. In addition, we obtain the limiting distributions of both SS–SN test statistics in the multivariate mean testing problem when the dimension is allowed to diverge.
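
A minimal sketch of the SS–SN idea for multivariate mean testing, under simplifying assumptions (a single split, the first-half sample mean as the projection direction, and the same self-normalizer as in the first sketch above):

    import numpy as np

    def ss_sn_mean_stat(X, mu0):
        n, d = X.shape
        half = n // 2
        v = X[:half].mean(axis=0) - mu0            # direction from the first half
        v /= np.linalg.norm(v) + 1e-12
        y = (X[half:] - mu0) @ v                    # 1-D series with mean 0 under H0
        m = len(y)
        ybar = y.mean()
        rec = np.cumsum(y) / np.arange(1, m + 1)
        t = np.arange(1, m + 1)
        V = np.sum((t * (rec - ybar)) ** 2) / m**2
        return m * ybar**2 / V                      # same limit as the 1-D SN test

    rng = np.random.default_rng(3)
    X = rng.standard_normal((600, 10)) + 0.2        # true mean 0.2 per coordinate
    print("SS-SN statistic:", ss_sn_mean_stat(X, np.zeros(10)))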


Name: Linjun Huang
Title: Minimizing Convex Functionals over Space of Probability Measures via KL Divergence Gradient Flow
Abstract: Motivated by the computation of the non-parametric maximum likelihood estimator (NPMLE) and the Bayesian posterior in statistics, this paper explores the problem of convex optimization over the space of all probability distributions. We introduce an implicit scheme, called the implicit KL proximal descent (IKLPD) algorithm, for discretizing a continuous-time gradient flow relative to the Kullback–Leibler (KL) divergence for minimizing a convex target functional. We show that IKLPD converges to a global optimum at a polynomial rate from any initialization; moreover, if the objective functional is strongly convex relative to the KL divergence, for example, when the target functional itself is a KL divergence as in the context of Bayesian posterior computation, IKLPD exhibits globally exponential convergence. Computationally, we propose a numerical method based on normalizing flow to realize IKLPD. Conversely, our numerical method can also be viewed as a new approach that sequentially trains a normalizing flow for minimizing a convex functional with a strong theoretical guarantee.
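
When the target functional is itself a KL divergence, each IKLPD step has a closed form on a finite state space, which makes the exponential convergence easy to see numerically; the finite support and step size below are illustrative (the paper realizes IKLPD in continuous spaces via normalizing flows):

    import numpy as np

    rng = np.random.default_rng(4)
    k = 50
    pi = rng.dirichlet(np.ones(k))       # target distribution on k states
    rho = np.full(k, 1.0 / k)            # initialization

    eta = 1.0
    for it in range(6):
        # argmin_rho KL(rho||pi) + (1/eta) KL(rho||rho_k)
        #   has solution rho proportional to rho_k^(1/(1+eta)) * pi^(eta/(1+eta))
        rho = rho ** (1 / (1 + eta)) * pi ** (eta / (1 + eta))
        rho /= rho.sum()
        print(f"iter {it}: KL(rho||pi) = {np.sum(rho * np.log(rho / pi)):.3e}")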


Name: Zhiyuan Yu
Title: Sampling from the Random Linear Model via Stochastic Localization Up to the AMP Threshold
Abstract: The Approximate Message Passing (AMP) algorithm has garnered significant attention in recent years for solving linear inverse problems, particularly in Bayesian inference for high-dimensional models. In this paper, we consider sampling from the posterior in the linear inverse problem with an i.i.d. random design matrix. We develop a sampling algorithm that integrates the AMP algorithm with stochastic localization. We prove convergence, in smoothed KL divergence, of the distribution of the samples generated by our algorithm to the target distribution, whenever the noise variance $\Delta$ is below $\Delta_{\rm AMP}$, the computational threshold for mean estimation introduced by Barbier et al. (2020).
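
The stochastic localization mechanism can be isolated in one dimension, where the posterior mean is available exactly; in the paper that mean is the intractable posterior mean of the high-dimensional linear model and is approximated by AMP. A toy sketch with a three-atom target:

    import numpy as np

    rng = np.random.default_rng(5)
    atoms = np.array([-1.0, 0.5, 2.0])    # support of a toy 1-D target
    probs = np.array([0.3, 0.5, 0.2])

    def posterior_mean(y, t):
        # E[X | y_t = y] for the observation process y_t = t*X + B_t
        logw = np.log(probs) + atoms * y - 0.5 * t * atoms**2
        w = np.exp(logw - logw.max())
        return float(w @ atoms / w.sum())

    def sl_sample(T=50.0, dt=0.02):
        # Euler-Maruyama for dy_t = E[X|y_t] dt + dB_t; then y_T / T ~ target
        y, t = 0.0, 0.0
        while t < T:
            y += posterior_mean(y, t) * dt + np.sqrt(dt) * rng.standard_normal()
            t += dt
        return y / T

    draws = np.array([sl_sample() for _ in range(200)])
    print([float(np.mean(np.abs(draws - a) < 0.5)) for a in atoms])  # vs probs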


Name: Heman Leung
Title: Online GMM for Time Series
Abstract: Time series data are inherently serially dependent and sequential. In this talk, I will present the online generalized method of moments (OGMM), an estimation and inference framework that exploits the sequential nature of time series and accounts for serial dependence. Statistically, we show that OGMM estimators are consistent and asymptotically efficient. Computationally, we discuss O(1)-time and O(1)-space updates of the weighting matrix for optimal estimation and inference. Since many existing online estimation algorithms can be cast in terms of OGMM, we discuss connections to and differences from existing methods. Finally, we show encouraging finite-sample results in simulations and applications.
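
A sketch of the O(1)-space recursion behind the weighting matrix, assuming (for simplicity) serially uncorrelated moment conditions; the OGMM in the talk additionally updates a long-run variance online to account for autocorrelation:

    import numpy as np

    class OnlineMomentCov:
        # Welford-style recursive mean/covariance of the moment conditions;
        # each update costs O(dim^2) time and space, independent of t.
        def __init__(self, dim):
            self.t = 0
            self.mean = np.zeros(dim)
            self.S = np.zeros((dim, dim))

        def update(self, g):
            self.t += 1
            delta = g - self.mean
            self.mean += delta / self.t
            self.S += np.outer(delta, g - self.mean)

        def weighting(self):                 # W_t = (sample covariance)^(-1)
            return np.linalg.inv(self.S / self.t + 1e-8 * np.eye(len(self.mean)))

    rng = np.random.default_rng(6)
    om = OnlineMomentCov(2)
    for g in rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 2.0]], size=10_000):
        om.update(g)
    print(om.weighting())                    # approx. inverse of the true covariance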


Name: Arghya Chakraborty
Abstract: Post-marketing surveillance for drug and vaccine safety is of paramount importance, since the detection of rare adverse events is often impossible during phase 3 clinical trials owing to limited sample size. We consider in this context the problem of testing whether a drug of interest poses an elevated risk of an adverse event relative to a baseline drug. As opposed to clinical trials, we have no control over the arrival of the data, which typically consist of the drug names, the patient exposure times for the drug, and the patient demographics. We propose a testing procedure that controls the false alarm probability and aims to detect an elevated risk as early as possible, while relaxing many of the modeling assumptions in the existing literature. Our approach also automatically adjusts for population heterogeneity and potential confounding by conditioning on the data within each stratum. We present a simulation study and apply the procedure to a real-world example, examining whether a certain drug leads to an elevated risk of acute myocardial infarction, and compare its performance with existing procedures in the literature.
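
A generic Wald SPRT conveys the sequential flavor by conditioning on each adverse event's source; the relative risk under the alternative (RR1), the error rates, and the single-stratum setup below are illustrative stand-ins, not the proposed procedure:

    import numpy as np

    def sprt_surveillance(is_drug_A, frac_exposure_A, RR1=2.0, alpha=0.05, beta=0.10):
        # Under H0 an event comes from drug A with prob. p0 = its exposure share;
        # under relative risk RR1 that probability tilts to p1. Wald thresholds.
        p0 = frac_exposure_A
        p1 = RR1 * p0 / (RR1 * p0 + (1 - p0))
        up, low = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
        llr = 0.0
        for n, a in enumerate(is_drug_A, start=1):
            llr += np.log(p1 / p0) if a else np.log((1 - p1) / (1 - p0))
            if llr >= up:
                return "signal: elevated risk", n
            if llr <= low:
                return "stop: no elevated risk", n
        return "continue monitoring", len(is_drug_A)

    rng = np.random.default_rng(7)
    events = rng.random(500) < 2 / 3     # equal exposure, true relative risk = 2
    print(sprt_surveillance(events, frac_exposure_A=0.5))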


Name: Peng Xu
Title: Generative Quantum Machine Learning via Denoising Diffusion Probabilistic Models
Abstract: Deep generative models are a key enabling technology for computer vision, text generation, and large language models. Denoising diffusion probabilistic models (DDPMs) have recently gained much attention due to their ability to generate diverse and high-quality samples in many computer vision tasks, as well as their flexible model architectures and relatively simple training scheme. Quantum generative models, empowered by entanglement and superposition, have brought new insight to learning classical and quantum data. Inspired by the classical counterpart, we propose the quantum denoising diffusion probabilistic model (QuDDPM) to enable efficiently trainable generative learning of quantum data. QuDDPM adopts sufficient layers of circuits to guarantee expressivity, while introducing multiple intermediate training tasks as interpolations between the target distribution and noise to avoid barren plateaus and guarantee efficient training. We provide bounds on the learning error and demonstrate QuDDPM’s capability in learning correlated quantum noise models, quantum many-body phases, and the topological structure of quantum data. The results provide a paradigm for versatile and efficient quantum generative learning.
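
A classical analogue of the interpolation idea: a DDPM forward process defines a sequence of intermediate distributions between data and pure noise, and QuDDPM's intermediate training tasks play the analogous role for quantum states. The schedule below is illustrative only:

    import numpy as np

    rng = np.random.default_rng(8)
    x0 = rng.choice([-2.0, 2.0], size=1000) + 0.1 * rng.standard_normal(1000)

    T = 10
    betas = np.linspace(1e-3, 0.5, T)          # illustrative noise schedule
    alpha_bar = np.cumprod(1.0 - betas)

    # Each t gives one "intermediate task": match the partially noised data.
    for t in range(T):
        xt = (np.sqrt(alpha_bar[t]) * x0
              + np.sqrt(1 - alpha_bar[t]) * rng.standard_normal(1000))
        print(f"t={t+1}: E|x|={np.abs(xt).mean():.2f}, var={xt.var():.2f}")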


Name: Sophie Larsen
Title: Immune history influences SARS-CoV-2 booster impacts: the role of efficacy and redundancy
Abstract: Given the continued emergence of SARS-CoV-2 variants of concern as well as unprecedented vaccine development, it is crucial to understand the effect of updated vaccine formulations at the population level. While bivalent formulations have higher efficacy in vaccine trials, diversity in immune history could lead to different transmission dynamics, especially in settings with a high degree of natural immunity. Known socioeconomic disparities in key metrics such as vaccine coverage, social distancing, and access to healthcare have likely shaped the development and distribution of this immune landscape. Yet little has been done to investigate the population-level impact of booster formulation in the context of host heterogeneity. Using two complementary mathematical models that capture host demographics and immune histories over time, we investigated the potential impacts of bivalent and monovalent boosters in low- and middle-income countries (LMICs). These models allowed us to test the role of natural immunity and cross-protection in determining the optimal booster strategy. Our results show that to avert deaths from a new variant in populations with high immune history, it is more important that a booster is implemented than which booster is implemented (bivalent vs. monovalent). However, in populations with low pre-existing immunity, bivalent boosters can become optimal. These findings suggest that for many LMICs, where acquiring a new vaccine stock may be economically prohibitive, monovalent boosters can still be implemented as long as pre-existing immunity is high.
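
A deliberately minimal sketch of the kind of comparison such models enable: two immune-history strata and an assumed efficacy per booster formulation. All parameter values are placeholders, not the talk's fitted models:

    import numpy as np
    from scipy.integrate import solve_ivp

    beta, gamma = 0.6, 0.1                     # transmission and recovery rates
    ve = {"monovalent": 0.5, "bivalent": 0.7}  # assumed booster efficacies

    def rhs(t, y, eff, prior_prot):
        S1, I1, R1, S2, I2, R2 = y             # stratum 1: high immunity; 2: low
        lam = beta * (I1 + I2)                 # force of infection, well-mixed
        dS1 = -lam * S1 * (1 - prior_prot) * (1 - eff)
        dS2 = -lam * S2 * (1 - eff)
        return [dS1, -dS1 - gamma * I1, gamma * I1,
                dS2, -dS2 - gamma * I2, gamma * I2]

    y0 = [0.69, 0.005, 0.0, 0.30, 0.005, 0.0]
    for name, eff in ve.items():
        sol = solve_ivp(rhs, (0, 300), y0, args=(eff, 0.6))
        print(name, "final attack rate:", round(sol.y[2, -1] + sol.y[5, -1], 3))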


Name: Theren Williams
Title: A Study on Restricted HMMs for Latent Class Attribute Transitions
Abstract: Across most fields and practices, professionals use various Cognitive Diagnostic Models (CDMs) to understand the underlying attribute profiles of a given subject pool. In some settings, subjects may record information over time, such as in a study with repeated visits or a pre/post assessment pairing. In such cases of longitudinal data, CDM development requires that potential transitions from one profile to another be accounted for. These changes may occur freely or may be bound by a given set of rules, known or unknown. Existing work leverages Hidden Markov Models (HMMs) to model the transitions between attribute profiles over time. Our model builds on this framework, adapting a classical transition model into a more general latent class model framework. We briefly share identifiability conditions that accommodate a general set of constraints on attribute transitions. Additionally, we present a Markov chain Monte Carlo (MCMC) simulation study and a practical application of our model, and discuss its potential implications in psychometrics.
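
A small sketch of one restricted transition structure, using a monotone "no forgetting" rule as the example constraint (the framework in the talk accommodates a general set of constraints, known or unknown):

    import numpy as np

    rng = np.random.default_rng(9)
    K = 3                                       # number of binary attributes
    profiles = [tuple(int(b) for b in np.binary_repr(s, K)) for s in range(2**K)]

    def allowed(a, b):
        # Example rule: an attribute, once mastered, is never lost.
        return all(x <= y for x, y in zip(a, b))

    P = np.zeros((2**K, 2**K))                  # restricted transition matrix
    for i, a in enumerate(profiles):
        ok = [j for j, b in enumerate(profiles) if allowed(a, b)]
        P[i, ok] = 1.0 / len(ok)                # uniform over allowed targets

    state = 0                                   # start with no attributes mastered
    for t in range(4):                          # one learner across 4 time points
        print(f"time {t}: profile {profiles[state]}")
        state = rng.choice(2**K, p=P[state])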


Name: David Kim
Title: Asymptotically Valid Permutation Tests under Strong Mixing Conditions
Abstract: The two-sample permutation test yields exact level tests for the null hypothesis that the two samples share the same distribution. However, permutation tests have usually been studied only for independent samples, which can be an unrealistic assumption in econometrics. Moreover, permutation tests may fail to be asymptotically valid under the weaker null that concerns only parameters. Here we introduce a novel, asymptotically valid two-sample permutation test for parameters θ of P and Q, where the observations within each sample need not be independent: they are only assumed to be strongly mixing, together with mild moment conditions. We achieve this by applying a recent functional central limit theorem (FCLT) for non-stationary processes and self-normalizing the test statistic. In particular, the approach extends to tests for causal estimands such as the average treatment effect (ATE) under dependence. We present a Monte Carlo simulation study to assess the performance of the new test, together with simulations for a block-wise permutation test whose properties we plan to investigate in future work.
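
A sketch of a block-wise permutation test for equal means, permuting blocks of consecutive observations to preserve short-range dependence; this generic construction omits the self-normalization that the talk uses to secure asymptotic validity under strong mixing:

    import numpy as np

    def block_perm_test(x, y, block_len=10, n_perm=999, seed=0):
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([x, y])
        n_blocks_x = len(x) // block_len
        blocks = pooled[: (len(pooled) // block_len) * block_len]
        blocks = blocks.reshape(-1, block_len)
        obs = abs(x.mean() - y.mean())
        count = 0
        for _ in range(n_perm):
            idx = rng.permutation(len(blocks))
            xb = blocks[idx[:n_blocks_x]].ravel()
            yb = blocks[idx[n_blocks_x:]].ravel()
            count += abs(xb.mean() - yb.mean()) >= obs
        return (1 + count) / (1 + n_perm)

    rng = np.random.default_rng(10)
    def ar1(n, mu):                             # AR(1) series with mean mu
        z = np.zeros(n)
        for i in range(1, n):
            z[i] = 0.4 * z[i - 1] + rng.standard_normal()
        return z + mu
    print("p-value under H0:", block_perm_test(ar1(300, 0.0), ar1(300, 0.0)))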


Name: Kaihong Zhang
Title: Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions
Abstract: We study the asymptotic error of score-based diffusion model sampling in large-sample scenarios from a non-parametric statistics perspective. We show that a kernel-based score estimator achieves an optimal mean square error of $\widetilde{O}\left(n^{-1} t^{-\frac{d+2}{2}}(t^{\frac{d}{2}} \vee 1)\right)$ for the score function of $p_0*\mathcal{N}(0,t\boldsymbol{I}_d)$, where $n$ and $d$ represent the sample size and the dimension, $t$ is bounded above and below by polynomials of $n$, and $p_0$ is an arbitrary sub-Gaussian distribution. As a consequence, this yields an $\widetilde{O}\left(n^{-1/2} t^{-\frac{d}{4}}\right)$ upper bound for the total variation error of the distribution of the sample generated by the diffusion model under a mere sub-Gaussian assumption. If, in addition, $p_0$ belongs to the nonparametric family of the $\beta$-Sobolev space with $\beta\le 2$, by adopting an early stopping strategy, we obtain that the diffusion model is nearly (up to log factors) minimax optimal. This removes the crucial lower bound assumption on $p_0$ in previous proofs of the minimax optimality of the diffusion model for nonparametric families.
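
The kernel-based score estimator has a simple closed form; the sketch below evaluates it for $p_t = p_0*\mathcal{N}(0,t\boldsymbol{I}_d)$ with a Gaussian $p_0$, chosen so the answer can be checked exactly (the choices of $p_0$, $t$, and $n$ are illustrative):

    import numpy as np

    def kernel_score(x, samples, t):
        # With X_i ~ p_0, the smoothed density p_t(x) = mean_i phi_t(x - X_i)
        # has score grad log p_t(x) = (sum_i w_i X_i - x) / t, w_i prop. to
        # phi_t(x - X_i), where phi_t is the N(0, t I) density.
        d2 = np.sum((x[None, :] - samples) ** 2, axis=1)
        logw = -d2 / (2 * t)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        return (w @ samples - x) / t

    rng = np.random.default_rng(11)
    X = rng.standard_normal((2000, 2))     # p_0 = N(0, I), hence sub-Gaussian
    xq, t = np.array([1.0, -0.5]), 0.5
    # True score of p_t = N(0, (1 + t) I) is -x / (1 + t).
    print(kernel_score(xq, X, t), -xq / (1 + t))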


Name: ByeongJip Kim
Title: A Hypothesis Test for the Comparison of Two Multilayer Networks: a Kernel-based Approach
Abstract: We present an innovative hypothesis testing procedure for whether two multilayer networks come from the same population. The population consists of multilayer networks that share the same number of layers, although the node sets may differ across layers within each network. Link types, however, are known and comparable across distinct multilayer networks. In our analysis, a link probability is determined by a latent probability density. We derive a test statistic and establish its asymptotics under the null hypothesis. Furthermore, we show that our testing rule has size equal to the significance level asymptotically, and that the power function of our test approaches 1 as the network size diverges to infinity.
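
As a generic stand-in for the kernel-based comparison (the talk's statistic and asymptotics are tailored to multilayer networks), a maximum mean discrepancy (MMD) permutation test applied to per-network summary vectors:

    import numpy as np

    def mmd_perm_test(A, B, bw=1.0, n_perm=499, seed=0):
        # A, B: one feature vector per network (e.g., layer-wise edge densities).
        rng = np.random.default_rng(seed)
        Z = np.vstack([A, B])
        n, m = len(A), len(B)
        D = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
        K = np.exp(-D / (2 * bw**2))               # Gaussian kernel matrix

        def mmd(idx):
            a, b = idx[:n], idx[n:]
            return (K[np.ix_(a, a)].mean() + K[np.ix_(b, b)].mean()
                    - 2 * K[np.ix_(a, b)].mean())

        obs = mmd(np.arange(n + m))
        count = sum(mmd(rng.permutation(n + m)) >= obs for _ in range(n_perm))
        return (1 + count) / (1 + n_perm)

    rng = np.random.default_rng(12)
    A = rng.standard_normal((30, 5))
    B = rng.standard_normal((30, 5)) + 0.8
    print("p-value:", mmd_perm_test(A, B))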


Student Awards