Weekly Colloquia

Guest speakers discuss trends and research in the field of Applied Mathematics and Statistics.

The colloquium takes place at 3pm on Fridays in Chauvenet Hall room 143. Attendance is in person or via Zoom. Please contact Samy Wu Fung at swufung@mines.edu or Daniel McKenzie at dmckenzie@mines.edu for further information and the Zoom link.

Fall 2025

November 14: Dr. Laurel Ohm, University of Wisconsin-Madison

Title: Dynamics of an elastic filament in 3D Stokes Flow

Abstract: Many fundamental biophysical processes, from cell division to cellular motility, involve dynamics of thin structures immersed in a very viscous fluid. We will introduce a family of frameworks for modeling the dynamics of a thin elastic filament immersed in 3D Stokes flow as a curve evolution. In the simplest framework, the coupling between the filament and the surrounding fluid is approximated by a local operator known as resistive force theory. In the most detailed framework, the 3D fluid is coupled to the quasi-1D filament dynamics via a novel type of angle-averaged Neumann-to-Dirichlet operator. In each case, we will mention what we can say from a PDE perspective about immersed filament dynamics.
November 21: Dr. Andrew Yarger, Purdue University

Title: Asymmetries in multivariate spatial and spatio-temporal covariances

Abstract: Asymmetries in dependence may exist in both multivariate spatial and spatio-temporal environmental data. Flexible covariance models are needed to capture these features, and most existing approaches are based on shift or delay-based asymmetries. We first introduce an approach to incorporate asymmetry in multivariate spatial models through the spectral representation of the covariance. Examples are discussed for multivariate Matérn, squared-exponential, and Cauchy covariance models. We then show how these models can be used straightforwardly to introduce asymmetries in space-time covariance functions.
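For reference, the (univariate) Matérn covariance named above is commonly written in standard notation (smoothness ν, range ρ, variance σ²; this is the textbook form, not necessarily the exact parameterization used in the talk) as

$$C(h) = \sigma^2 \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{\sqrt{2\nu}\,h}{\rho}\right)^{\nu} K_\nu\!\left(\frac{\sqrt{2\nu}\,h}{\rho}\right),$$

where $K_\nu$ is the modified Bessel function of the second kind; the multivariate and asymmetric extensions discussed in the talk build on this family.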
Fall 2025 (Past Talks)
August 29: Dr. Nicholas Danes, Colorado School of Mines

Title: HPC and other Research Computing Resources at Mines

Abstract: Colorado School of Mines offers a robust ecosystem of computational resources through its Research Computing (RC) team, supporting a wide range of scientific and engineering research. Nicholas Danes, Computational Scientist on the RC team, will provide an overview of RC resources available to AMS at Mines.

Topics will include:

- Overview of RC Services, including on-premise and off-premise computing, data storage, and consultations
- Wendian HPC Cluster – Mines’ on-campus system for high-throughput computing, ideal for simulation, modeling, and statistical analysis
- NSF ACCESS – Free national-scale computing resources for academic research
- Google Cloud & AWS – Commercial platforms offering scalable, on-demand infrastructure

The session will feature live demos of Wendian and NSF ACCESS systems, along with practical guidance on how to get started and integrate these tools into research workflows.
September 5: Dr. Samiran Ghosh, UTSPH Houston

Title: Variable Selection in the Presence of Missing Data: A Solution via Multiple Imputation and Penalized Regression

Abstract: Missing data is ubiquitous, although the mechanisms driving the missingness patterns may vary. We propose a one-step solution that integrates Multiple Imputation with Penalized Grouped Variable Selection for identifying important features in regression settings, under the assumption that data are Missing at Random (MAR). The performance of the proposed methodology is evaluated through extensive simulations and a real-world dataset.
September 12: Dr. Adrienne Marshall, Colorado School of Mines

Title: From Descriptive Statistics to Machine Learning: Applications of Applied Math and Statistics in Snow Hydrology and Climate

Abstract: Mountainous snowpacks are essential to providing water resources, sustaining aquatic and terrestrial ecosystem function, and moderating climate feedbacks. Yet snow is also sensitive to climate change. Understanding the impacts of climate change on snow over the historic record through future projections is therefore key to identifying climate impacts and potential adaptation strategies. We have a suite of observational and modeled snow data products that facilitate scientific inquiry in this area, but heterogeneous spatial and temporal scales of these data products necessitate creative quantitative approaches to their application. Here, I will discuss past and ongoing research that uses applied math and statistics to investigate the impacts of climate change on snow. Mathematical approaches range in complexity from the creative use of descriptive statistics to new applications of machine learning. Applications span questions about climate impacts on snow drought frequency, impacts of changing snowfall intensity, and changes in snow accumulation and melt dynamics in burned forests. I aim to illustrate the ways that math can be applied in snow and climate studies, and point towards future opportunities for collaboration and synergy among these disciplines.  
September 19: Dr. Stephen Becker, CU Boulder

Title: Randomization methods for big data

Abstract: In this era of big data, we must adapt our algorithms to handle large datasets. One obvious issue is that the number of floating-point operations (flops) increases as the input size increases, but there are many less obvious issues as well, such as the increased communication cost of moving data between different levels of computer memory. Randomization is increasingly being used to alleviate some of these issues, as those familiar with random mini-batch sampling in machine learning are well aware. This talk goes into some specific examples of using randomization to improve algorithms. We focus on special classes of structured random dimensionality reduction, including the count sketch, TensorSketch, Kronecker fast Johnson-Lindenstrauss sketch, and pre-conditioned sampling. These randomized techniques can then be applied, for example, to speeding up the classical Lloyd's algorithm for K-means and to computing tensor decompositions. If time permits, we will also show extensions to optimization, including a gradient-free method that uses random finite differences and a method for solving semi-definite programs in an optimal low-memory fashion.
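As a concrete illustration of one of the sketches named above, here is a minimal NumPy count sketch — a toy sketch of the general idea, not code from the talk; the function name and problem sizes are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_sketch(A, m):
    """Apply an m x n CountSketch to A (n x d) without forming the sketch matrix.

    Each row j of A is hashed to one bucket h(j) and added with a random
    sign s(j), so the cost is O(nnz(A)) regardless of m.
    """
    n = A.shape[0]
    h = rng.integers(0, m, size=n)        # one hash bucket per input row
    s = rng.choice([-1.0, 1.0], size=n)   # one random sign per input row
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, h, s[:, None] * A)      # scatter-add the signed rows
    return SA

# E[(SA)^T (SA)] = A^T A, so the sketch preserves inner products on average:
A = rng.standard_normal((10_000, 20))
SA = count_sketch(A, m=500)
print(np.linalg.norm(A.T @ A - SA.T @ SA) / np.linalg.norm(A.T @ A))
```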
October 3: Dr. Matthew Hofkes, Senior Data Scientist with nVoq Incorporated

Title: Bringing LLMs to Market 

Abstract: Large Language Models (LLMs) are used across industries to more accurately and efficiently accomplish a wide range of tasks. One industry where LLMs are being actively tested and implemented is healthcare.

My research over the last six months has focused on developing products that perform audits of clinical documentation, create summaries of clinician visits, and pre-fill lengthy medical forms. This talk will provide an overview of the process required to bring a product from inception to market, including choosing a base model, fine-tuning, applying Quantized Low-Rank Adaptation (QLoRA), prompt engineering, Retrieval-Augmented Generation (RAG), and data creation.
October 17: Dr. Andreas Fichtner, ETH Zurich

Title: Strategies to accelerate waveform modelling and inversion

Abstract: Simulations of wave propagation are the central element of numerous applications, including acoustics, medical ultrasound, non-destructive testing, tsunami warning, seismic imaging, and many others. Although computational resources have increased substantially in recent years, many wave simulations still cannot be performed at the frequencies where we have data or would like to compute predictions. In this context, we present strategies to reduce computational requirements of numerical wave simulations by an order of magnitude or more.


The first pair of strategies combines stochastic mini-batch optimisation and wavefield-adapted spectral-element meshes. While the former exploits data redundancies, the latter reduces the cost of wavefield simulations by lowering the effective dimension of the numerical grid. For whole-Earth seismic inversion, this approach reduces the cost of a parameter update to around 0.6 % of a standard update. These developments allowed us to construct the global-scale model REVEAL, which constrains Earth structure with unprecedented detail by assimilating more than 6 million three-component seismic recordings from nearly 2500 earthquakes.


In a more experimental approach we combine standard numerical wave simulations with machine learning to design a neural dispersion corrector. Taking low-cost/low-accuracy numerical simulations as input, the network corrects numerical dispersion errors to produce high-accuracy simulation results that would have been more expensive to compute directly. Preliminary tests suggest that savings in computational cost for 3-D elastic wave simulations are at least a factor of 10.
October 24: Antony Sikorski - Grad Student Colloquium
October 31: Rachel Bertaud - Grad Student Colloquium
November 7: Daniel Ramirez - Grad Student Colloquium
Spring 2025
January 31: Dr. Dane Taylor, University of Wyoming

Title: Consensus processes over networks: Past, present, and future

Abstract: Models for consensus---that is, the reaching of agreement---have been developed to study, e.g., group decisions over social networks, the collective movement of animals/agents, and the training of decentralized machine-learning (ML) algorithms on dispersed data. This talk will explore the important role of network structure in shaping consensus by considering three extensions of a linear consensus model. First, I will show that the presence of network community structure may or may not impose a bottleneck for consensus ML, which we analyze using random matrix theory [arxiv.org/abs/2011.01334]. Next, I will model and study collective decisions in human-AI teams by formulating consensus over interconnected networks and use spectral perturbation theory to predict the effects of asymmetric coupling between humans and AI agents [doi.org/10.1109/TNSE.2023.3325278]. Time permitting, I will formulate a consensus model with higher-order, multiway interactions using simplicial complexes (i.e., as opposed to graphs) and use algebraic topology to study how homology influences dynamics [doi.org/10.1063/5.0080370].
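For readers unfamiliar with the model class, linear consensus dynamics are typically written (standard notation; a generic form rather than the speaker's exact setup) as

$$\dot{x} = -Lx, \qquad L = D - A,$$

where $A$ is the network's adjacency matrix, $D$ the diagonal degree matrix, and $L$ the graph Laplacian; on a connected network, $x(t)$ converges to agreement across nodes.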
February 7: Dr. Christian Parkinson, Michigan State University

Title: Compartmental Models for Epidemiology with Noncompliant Behavior

Abstract: We formulate and analyze ODE and PDE models for epidemiology which incorporate human behavioral concerns. Specifically, we assume that as a disease spreads and a governing body implements non-pharmaceutical intervention methods, there is a portion of the population that does not comply with these mandates and that this noncompliance has a nontrivial effect on the spread of the disease. Borrowing from social contagion theory, we then allow this noncompliance to spread parallel to the disease. We derive reproductive ratios and large-time asymptotics for our models, and then analyze them through the lens of optimal control to account for policy maker decisions. We demonstrate the behavior of all of our models with simulations.
March 28: Monique Chyba, University of Hawaii at Manoa

Title: A Tour of Controlled Dynamical Systems

Abstract: In this talk, we explore the interplay between geometry, dynamical systems, and control theory, which has driven significant advances in these fields. Control theory seeks to influence the behavior of dynamical systems to achieve desired objectives, often by optimizing a prescribed cost. Geometric optimal control is deeply connected to sub-Riemannian geometry, which, for instance, plays an important role in studying optimal strokes for micro-swimmers.

Control systems can be either continuous or discrete, depending on the application. However, modeling with dynamical systems is often more effective when incorporating both continuous and discrete states. This approach allows for the representation of local dynamics coupled into a global system, where interactions may involve discrete transitions. We will highlight key features of such systems and discuss various applications.
April 4: Dr. Wolfgang Bangerth, Colorado State University

Title: On the notion of "information" in inverse problems

Abstract: Inverse problems are ones where one would like to reconstruct a spatially variable function from measurements of a system in which this function appears as a coefficient or right-hand side. Examples are biomedical imaging and seismic imaging of the earth.

In many inverse problems, practitioners have an intuitive notion of how much one "knows" about the coefficient in different parts of the domain -- that is, that there is a spatially variable "amount of information". For example, in seismic imaging, we can only know about those parts of the earth that are traversed by seismic waves on their way from earthquake to receiving seismometer, whereas we cannot know about places not on these raypaths.

Despite the fact that this concept of "information" is intuitive to understand, there are no accepted quantitative measures of information in inverse problems. I will present an approach to define such a spatially variable "information density" and illustrate both how it can be computed practically, as well as used in applications. The approach is based on techniques borrowed from Bayesian inverse problems, and an approximation of the covariance matrix using the Cramer-Rao bound.
April 18: Dr. Deena R. Schmidt, University of Nevada, Reno

Title: Stochastic network models in biology

Abstract: Many biological systems in nature can be represented as a dynamic model on a network. Examples include gene regulatory systems, neuronal networks, social networks, epidemics spreading within a population described by a contact network, and many others. A fundamental question when studying a biological process represented as a dynamic model on a network is to what extent the network structure is contributing to the observed dynamics. I will give a brief introduction to network modeling in biology along with an overview of my work that addresses this question. I will then focus on two recent projects. The first project investigates the spread of norovirus (stomach flu) within a local population using a stochastic adaptive network model. The second project looks at mammalian sleep-wake regulation at different stages of development using an integrate-and-fire neuronal network model. If time allows, I will also discuss a few related projects with current students.
April 25: Jess Ellis Hagman, Colorado State University

Title: Centering students' identities to create critical transformations: An example centering multilingual students in college algebra.

Abstract: One way to critically transform undergraduate mathematics is to center aspects of students' identities to inform the structure of the mathematical spaces and systems that they are in. In this talk, I first draw on multiple large scale NSF projects seeking to improve introductory college math programs to propose a process for achieving critical transformations. I then use recent efforts at CSU to improve instruction for multilingual students as an example of this process, sharing how we used research on multilingual students' experiences to inform the instructional practices I used to teach a large (n=80), College Algebra class with a subsection restricted for multilingual students (n=14). I end by drawing on surveys and interviews from a subset of students in this class to discuss what did and did not land for students in this class. I hope this presentation can serve to (1) share practices that can support multilingual students and also many other students, and (2) exemplify how centering students' social identities can be used to design mathematics spaces to support more students to feel good mathematically.
May 2: Dr. Daniel Sanz-Alonso, University of Chicago

Title: Ensemble Kalman Methods and Structured Operator Estimation 

Abstract: Data assimilation is concerned with estimating the state of a dynamical system from partial observations. In applications such as numerical weather prediction where the state is high dimensional and the dynamics are expensive to simulate, ensemble Kalman filters are often the method of choice. In this talk, I will present new results on structured covariance operator estimation that help explain why these algorithms can be effective even when deployed with a small ensemble size. Our theory also explains the importance of using covariance localization in ensemble Kalman methods for global data assimilation.
Fall 2024
November 22: Ziyu Li - Grad Student Colloquium
November 15: Chenlu Shi, New Jersey Institute of Technology

Title: Space-Filling Designs for Computer Experiments

Abstract: Computer simulations are essential tools for studying complex systems across various fields in natural and social sciences. However, the complexity of the models behind these simulations often leads to high computational costs. A useful approach to address this is to build a statistical surrogate model based on a set of data generated by running a computer model - computer experiment. Space-filling designs are widely recognized as effective designs for such experiments. This talk will provide an overview of space-filling designs, with a focus on a specific type known as strong orthogonal arrays. These designs are particularly attractive due to their space-filling properties in lower-dimensional projections of the input space. We will introduce this class of designs and share our recent advancements in this area.
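As a small, generic illustration of a space-filling design — a Latin hypercube, which is simpler than the strong orthogonal arrays discussed in the talk — SciPy's quasi-Monte Carlo module can generate and score one. This is my own example, not the speaker's code.

```python
from scipy.stats import qmc

# Latin hypercube sample: a basic space-filling design on [0, 1]^d.
sampler = qmc.LatinHypercube(d=3, seed=0)
X = sampler.random(n=20)       # 20 design points over 3 inputs

# Lower discrepancy indicates points that fill the space more evenly.
print(qmc.discrepancy(X))
```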

Join in person in room CH143 or through Zoom: https://mines.zoom.us/j/99293132717
November 8: Professor Albert Berahas, University of Michigan

*Postponed until Spring 2025*

Title: Next Generation Algorithms for Stochastic Optimization with Constraints

Abstract: Constrained optimization problems arise in a plethora of applications in science and engineering. More often than not, real-world optimization problems are contaminated with stochasticity. Stochastic gradient and related methods for solving stochastic unconstrained optimization problems have been studied extensively in recent years. It has been shown that such algorithms and much of their convergence and complexity guarantees extend in straightforward ways when one considers problems involving simple constraints, such as when one can perform projections onto the feasible region of the problem. However, settings with general nonlinear constraints have received less attention. Many of the approaches proposed for solving such problems resort to using penalty or (augmented) Lagrangian methods, which are often not the most effective strategies. In this work, we propose and analyze stochastic optimization algorithms for deterministically constrained problems based on the sequential quadratic optimization (commonly known as SQP) methodology. We discuss the rationale behind our proposed techniques, convergence in expectation, and complexity guarantees for our algorithms. Additionally, we present numerical experiments that we have performed. This is joint work with Raghu Bollapragada, Frank E. Curtis, Michael O'Neill, Daniel P. Robinson, Jiahao Shi, and Baoyu Zhou.

Bio: Albert S. Berahas is an Assistant Professor in the Industrial and Operations Engineering department at the University of Michigan. Before joining the University of Michigan, he was a Postdoctoral Research Fellow in the Industrial and Systems Engineering department at Lehigh University working with Professors Katya Scheinberg, Frank Curtis and Martin Takáč. Prior to that appointment, he was a Postdoctoral Research Fellow in the Industrial Engineering and Management Sciences department at Northwestern University working with Professor Jorge Nocedal. Berahas completed his PhD studies in the Engineering Sciences and Applied Mathematics (ESAM) department at Northwestern University in 2018, advised by Professor Jorge Nocedal. He received his undergraduate degree in Operations Research and Industrial Engineering (ORIE) from Cornell University in 2009, and in 2012 obtained an MS degree in Applied Mathematics from Northwestern University. Berahas’ research broadly focuses on designing, developing and analyzing algorithms for solving large scale nonlinear optimization problems. Specifically, he is interested in and has explored several sub-fields of nonlinear optimization such as: (i) general nonlinear optimization algorithms, (ii) optimization algorithms for machine learning, (iii) constrained optimization, (iv) stochastic optimization, (v) derivative-free optimization, and (vi) distributed optimization. Berahas served as the vice-chair of the Nonlinear Optimization cluster for the INFORMS Optimization Society (2020-2022), the chair of the Nonlinear Optimization cluster for the INFORMS Optimization Society Conference (2021-2022), the co-chair of the Nonlinear Optimization cluster for the ICCOPT 2022 conference (2021-2022), and the president of the INFORMS Junior Faculty Interest Group (JFIG) (2023-2024).
November 1: Juliette Mukangango - Grad Student Colloquium
October 25: Professor Jocelyn Chi, University of Colorado Boulder

Title: Randomized Kaczmarz Method for Linear Discriminant Analysis

Abstract: We present the randomized Kaczmarz method for linear discriminant analysis (rkLDA), an iterative randomized approach to binary-class Gaussian model linear discriminant analysis (LDA) for very large data. We harness a least squares formulation and mobilize the stochastic gradient descent framework to obtain a randomized classifier that can achieve accuracy comparable to that of full-data LDA. We present an analysis of the expected change in the LDA predictions if one employs the rkLDA solution in lieu of the full-data least squares solution, accounting for both the Gaussian modeling assumptions on the data and algorithmic randomness. Our analysis shows how the expected change depends on quantities inherent in the data, such as the scaled condition number and Frobenius norm of the input data, how well the linear model fits the data, and choices made in the randomized algorithm. Our experiments demonstrate that rkLDA can offer a viable alternative to full-data LDA across a range of step sizes and iteration counts.
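For context, a minimal generic randomized Kaczmarz iteration (with Strohmer-Vershynin row sampling) looks like the following. This is a toy sketch of the underlying solver, not the authors' rkLDA implementation, and the problem sizes are arbitrary.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=20_000, seed=0):
    """Solve a consistent system A x = b by randomized Kaczmarz.

    Each step samples row i with probability ||a_i||^2 / ||A||_F^2 and
    projects the current iterate onto the hyperplane a_i^T x = b_i.
    """
    rng = np.random.default_rng(seed)
    row_norms2 = np.einsum("ij,ij->i", A, A)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

# Sanity check on a consistent overdetermined system:
rng = np.random.default_rng(1)
A = rng.standard_normal((2_000, 50))
x_true = rng.standard_normal(50)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))
```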
October 18: Dr. Stephen Kissler, University of Colorado Boulder

Title: Modeling infectious disease dynamics across scales

Abstract: Infectious disease outbreaks are inherently multi-scale processes: what happens inside an individual body impacts what happens in a community, which in turn impacts what happens in a nation and the world. Linking these scales has proven challenging, both due to a lack of theory and an absence of adequate data. This data/theory landscape is rapidly changing, however, due in part to the renewed interest in cross-scale disease dynamics spurred by the COVID-19 pandemic. I will discuss our recent and ongoing work to characterize the within-host dynamics of acute viral infections and to link these with interpersonal contact patterns and population-level estimates of disease burden. I will touch upon the power and pitfalls of biological data collection, the beauty of statistical modeling, and the remarkable story of a hundred-year-old mathematical model that has only recently become appreciated for its full complexity.

Bio: Stephen Kissler is an assistant professor of Computer Science and an affiliate of the BioFrontiers institute at the University of Colorado Boulder. A Colorado native, he earned his bachelor’s and master’s degrees in Applied Mathematics before moving to the University of Cambridge to complete his PhD in the same subject. He worked for four years as a postdoctoral fellow at the Harvard T.H. Chan School of Public Health before returning to Colorado. He has been interested in infectious diseases since “before they were cool”, and his work remains dedicated to pandemic preparedness and resilience against respiratory viral infections using mathematical modeling.
October 11: Isabella Chittumuri - Grad Student Colloquium
September 27: Professor Yawen Guan, Colorado State University

Title: Spatial Confounding: The Myths and Remedies

Abstract: A key task in environmental and epidemiological studies is estimating the effect of an exposure variable on a response variable using spatially correlated observational data. However, adding a spatial random effect to account for spatial dependence can sometimes distort effect estimates, leading to differing inference results from models with and without spatial random effects. Spatial confounding, the underlying issue, has recently gained attention, yet there is no consensus on its definition. Two primary definitions have emerged: confounding due to "location" and confounding caused by an unmeasured variable. How to handle spatial confounding, and whether it should be adjusted at all, remains an ongoing debate.

In this talk, I will provide an overview of the challenges posed by spatial confounding, review current deconfounding methods, and introduce a new framework for adjusting for spatial confounding. Our approach represents spatial confounding through a spectral decomposition, which intuitively breaks down confounding into different spatial scales. Within this framework, we define the necessary assumptions to estimate the exposure effect and propose a series of confounder adjustment methods, ranging from parametric adjustments using the Matérn coherence function to more flexible semi-parametric methods based on smoothing splines. These methods are applied to both areal and geostatistical data, demonstrated through simulations and real-world datasets.
September 20: Professor Denis Silantyev, University of Colorado at Colorado Springs

Title: Generalized Constantin-Lax-Majda Equation: Collapse vs. Blow Up and Global Existence

Abstract: We investigate the behavior of the generalized Constantin-Lax-Majda (CLM) equation, a 1D model for the advection and stretching of vorticity in a 3D incompressible Euler fluid. As in the Euler equations, the vortex stretching term is quadratic in vorticity and therefore destabilizing, with the potential to generate singular behavior, while the advection term does not cause any growth of vorticity and provides a stabilizing effect.

We study the influence of a parameter a, which controls the strength of advection, in distinguishing finite-time singularity formation (collapse, or blow-up) from global existence of solutions. We find a new critical value a_c = 0.6890665337007457..., such that for a < a_c there are self-similar collapsing solutions whose spatial extent shrinks to zero, while for a_c < a ≤ 1 the solution exists globally. We identify the leading-order complex singularity for general values of a, which controls the leading-order behavior of the collapsing solution. We also rederive a known exact collapsing solution for a = 0, and we find a new exact analytical collapsing solution at a = 1/2.
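For reference, the generalized CLM equation is commonly written (standard notation; H denotes the Hilbert transform) as

$$\omega_t + a\,u\,\omega_x = u_x\,\omega, \qquad u_x = H\omega,$$

where the parameter $a$ sets the relative strength of the advection term discussed above.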
September 6: Debkumar De

Title: From Data to Decisions: The Use of Statistics in Finance

Abstract: I will discuss the application of statistics and data science in the finance industry. My presentation is geared towards students and will focus on descriptive explanations rather than technical details. Additionally, I plan to discuss risk models used to manage investments in the equity market.
August 30: Professor Daniel McKenzie, Colorado School of Mines

Title: Decision-focused learning: How to differentiate the solution of an optimization problem.

Abstract: Many real-world problems can be distilled into an optimization problem, for which many good algorithms exist. However, it is often the case that certain key parameters in the optimization problem are not observed directly. Instead, one can observe large amounts of data that are correlated with these parameters, but in ways that are not easy to describe. This raises the possibility of combining machine learning---to predict the unknown parameters---with optimization---to solve the problem of interest. This combination is sometimes called decision-focused learning. In this talk I'll give an introduction to this field as well as describe some recent work done by myself and Dr. Samy Wu Fung.
August 23: Yifan Wu, postdoctoral researcher in Electrical Engineering at Mines

Title: Gridless Harmonics Estimation with Multi-frequency Measurements based on Convex Optimization

Abstract: Harmonics estimation plays a crucial role in various applications, including array processing, wireless sensing, source localization, and remote sensing. In array processing applications, harmonics usually refer to the direction-of-arrival (DOA) of the source, a parameter that appears in the exponent of the complex sinusoid. Traditional harmonics estimation methods typically rely on single-frequency measurements, which are limited to narrowband signals with frequencies concentrated around a specific point. In this presentation, I will introduce our recent advancements in multi-frequency gridless DOA estimation. By leveraging a multi-frequency measurement model, our approach effectively addresses wideband signals with dispersed frequencies. Unlike conventional grid-based methods that suffer from discretization errors due to grid mismatch, our technique avoids such errors by operating in a gridless framework. The problem is initially formulated as an atomic norm minimization (ANM) problem, which can be equivalently expressed as a semidefinite program (SDP). We provide conditions for the optimality of the SDP, allowing for its verification through a computable metric. Additionally, we present a fast version of the SDP to enhance computational efficiency and extend the method to non-uniform setups. Importantly, the multi-frequency setup enables us to resolve more DOAs (harmonics) than the number of physical sensors. Numerical experiments demonstrate the superiority of the proposed method.
Spring 2024
January 12: Ryan Peterson - Graduate Student Colloquium

Title: Spatial Statistical Data Fusion with LatticeKrig
January 19: Krishna Balasubramanian

Title: High-dimensional scaling limits of least-squares online SGD and its fluctuations

Abstract: Stochastic Gradient Descent (SGD) is widely used in modern data science. Existing analyses of SGD have predominantly focused on the fixed-dimensional setting. In order to perform high-dimensional statistical inference with such algorithms, it is important to study the dynamics of SGD when both the dimension and the number of iterations go to infinity at a proportional rate. In this talk, I will discuss such high-dimensional limit theorems for the online least-squares SGD iterates for solving over-parameterized linear regression. Specifically, focusing on the double-asymptotic setting (i.e., when both the dimensionality and the number of iterations tend to infinity), I will present the mean-field limit (in the form of an infinite-dimensional ODE) and fluctuations (in the form of an infinite-dimensional SDE) for the online least-squares SGD iterates, highlighting certain phase transitions. A direct consequence of the result is obtaining explicit expressions for the mean-squared estimation/prediction errors and their fluctuations under high-dimensional scalings.
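In standard notation (mine, not necessarily the speaker's exact setup), the online least-squares SGD iterate for streaming data $(x_k, y_k)$ with step size $\eta$ is

$$\theta_{k+1} = \theta_k + \eta\,\bigl(y_k - \langle x_k, \theta_k\rangle\bigr)\,x_k,$$

and the talk concerns the regime in which the dimension of $\theta$ and the iteration count $k$ grow proportionally.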
February 2: Mahadevan Ganesh

Title: Time- and Frequency-Domain Wave Propagation Models: Reformulations, Algorithms, Analyses, and Simulations

Abstract: Efficient simulation of wave propagation induced by multiple structures is fundamental for numerous applications. Robust mathematical modeling of the underlying time-dependent physical process is crucial for designing high-order computational methods for multiple scattering simulations. Development of related algorithms and analyses is based on celebrated continuous mathematical equations in either the time- or frequency-domain, with the latter involving mathematical manipulations. Consequently, the meaning of the term "multiple scattering" varies depending on the context in which it is used. Physics literature suggests that the continuous frequency-domain (FD) multiple scattering model is a purely mathematical construct, and that in the time-domain (TD), multiple scattering becomes a definite physical phenomenon.

In recent years there has been substantial development of computational multiple scattering algorithms in the FD. In the context of computational multiple scattering, it is important to ensure that the simulated solutions represent the definite physical multiple scattering process. In this talk, we describe our recent contributions to the development of high-order wave propagation computational models in both the time- and frequency-domains, and we argue that spectrally accurate FD scattering algorithms are crucial for efficient and practical simulation of physically appropriate TD multiple scattering phenomena in unbounded regions with multiple structures.
February 9: Matt Hofkes - Graduate Student Colloquium
February 16: Brandon Knutson - Graduate Student Colloquium
February 23: Julia Arciero

Title: Modeling oxygen transport and flow regulation in the human retina

Abstract: Impairments in retinal blood flow and oxygenation have been shown to contribute to the progression of glaucoma. In this study, a theoretical model of the retina is used to predict retinal blood flow and oxyhemoglobin saturation at differing levels of capillary density and autoregulation capacity as intraluminal pressure, oxygen demand, or intraocular pressure are varied. The model includes a heterogeneous representation of retinal arterioles and a compartmental representation of capillaries and venules. A Green's function method is used to model oxygen transport in the arterioles, and a Krogh cylinder model is used in the capillaries and venules. Model results predict that increased intraocular pressure and impaired blood flow regulation can each cause decreased tissue oxygenation. Under baseline conditions of a capillary density of 500 mm⁻², an autoregulation plateau is predicted for incoming intraluminal pressures in the range of 32-40 mmHg. Decreasing capillary density or increasing intraocular pressure leads to a loss of the autoregulation plateau in that pressure range. If the patient has no ability to regulate flow, the autoregulation plateau disappears entirely. Ultimately, the model framework presented in this study will allow for future comparisons to sector-specific clinical data to help assess the potential role of impaired blood flow regulation in ocular disease.
March 1: Brennan Sprinkle & Dorit Hammerling

Title: Why applied math and statistics work so well together: Detecting, localizing and quantifying methane emissions on oil and gas facilities.

Abstract: Methane, the main component of natural gas, is the second-largest contributor to climate change after carbon dioxide. Methane has a higher heat-trapping potential but shorter lifetime than carbon dioxide, and therefore, rapid reduction of methane emissions can have quick and large climate change mitigation impacts. Reducing emissions from the oil and gas supply chain, which account for approximately 14% of total methane emissions, turns out to be a particularly promising avenue in part due to rapid development in continuous emission monitoring technology. We will present a fast method for the modeling and simulation of methane emission dispersion, and how we use these simulations as a critical building block within a statistical framework for quick emission detection and localization using continuous methane concentration data. In particular, we will highlight the importance of combining approaches from applied math and scientific computing with modern statistics and data science to furnish a practical method for rapid emission detection on oil and gas production facilities. We'll end by discussing some open questions and ongoing challenges with this work and opportunities to get involved.
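As background (my addition, not necessarily the speakers' model), a classical building block for dispersion modeling of this kind is the steady-state Gaussian plume, which for a point source of strength Q at effective height H, wind speed u along x, and spread parameters σ_y, σ_z reads

$$C(x,y,z) = \frac{Q}{2\pi u \sigma_y \sigma_z} \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)\left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right) + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right].$$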
March 8: Jeff Anderson

Title: Ensemble Kalman Filters for Data Assimilation: An Overview and Future Directions

Abstract: The development of numerical weather prediction was a great computational and scientific achievement of the last century. Models of the PDEs that govern fluid flow and a vast observing network are required for these predictions. A third vital component is data assimilation (DA) that combines observations with predictions from previous times to produce initial conditions for subsequent predictions.

Ensemble Kalman filters are DA algorithms that use a set of predictions to estimate the PDF of the model state given observations. They are used for weather, but also for many other geophysical systems, and for applications like disease transmission. They can be extended to estimate model parameters, guide model improvement, evaluate observation quality, and design future observing systems.

Basic Kalman and ensemble Kalman filter algorithms are reviewed, followed by a discussion of some heuristic extensions, like localization, that are required for application to large models. Recent work to extend ensemble filters to strongly non-Gaussian nonlinear problems will be discussed. These extensions are particularly beneficial when applying filters for quantities like rainfall or tracer concentration, where the appropriate priors can be best represented by mixed PDFs: PDFs that are a sum of continuous and discrete functions.
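For reference, the Kalman analysis step being reviewed can be written in standard notation as

$$x^a = x^f + K\bigl(y - Hx^f\bigr), \qquad K = P^f H^\top \bigl(HP^f H^\top + R\bigr)^{-1},$$

where $x^f$ is the forecast, $y$ the observations with error covariance $R$, and $H$ the observation operator; ensemble Kalman filters replace the forecast covariance $P^f$ with a sample covariance computed from the ensemble of predictions.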
March 15: No Colloquium
March 22: No Colloquium due to Spring Break
March 29: John Schreck

Title: Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation for Earth System Science Applications

Abstract: Uncertainty quantification is crucial for reliable weather and climate modeling but challenging to achieve. In this seminar, I will demonstrate evidential deep learning, combining probabilistic modeling with deep neural networks, as an effective technique for predictive modeling and calibrated uncertainty estimation. Through atmospheric science classification and regression tasks, we show evidential models attaining predictive accuracy comparable to standard methods while robustly quantifying uncertainty. Uncertainty decomposition provides insights into aleatoric (data variability) and epistemic (model limitations) components. Gaining insights into these distinct uncertainty sources is paramount for enhancing model reliability, utility, and efficiency. We compare the uncertainty metrics derived from evidential neural networks to those obtained from calibrated ensembles, with evidential networks resulting in significant computational savings. Analyses reveal links between uncertainties and underlying meteorological processes, facilitating interpretation. This study establishes deep evidential networks as an adaptable tool for augmenting neural network predictions across geoscience disciplines, overcoming limitations of prevailing approaches. With the ability to produce trustworthy uncertainties alongside predictions, evidential learning has the potential to transform weather and climate modeling, aiding critical analysis and decision-making under uncertainty.
April 5: Jennifer Mueller

Title: Electrical impedance tomography for pulmonary imaging: from inverse problems to the clinic

Abstract: Electrical impedance tomography (EIT) is a non-invasive, non-ionizing imaging technique that produces real-time functional images of ventilation and pulmonary perfusion at the bedside. The inverse problem of EIT is to determine the spatially varying conductivity, which arises as a coefficient in the generalized Laplace equation, from measurements of the voltages that arise on electrodes on the surface of the body from applied currents on those same electrodes. The mathematical problem is also known as the Calderón problem of reconstructing the conductivity coefficient from the Neumann-to-Dirichlet map. My lab at CSU is focused on collaborations with engineers and physicians to develop EIT technology for clinical use. In this talk, I will discuss the mathematics of the inverse problem of forming real-time images as well as clinical applications in collaboration with Children's Hospital Colorado and Anschutz Hospital in Aurora. Results from patient data with chronic and critical lung disease will be shown and discussed.
April 12: No Colloquium due to E-Days
April 19: Brian Reich

Title: Bayesian computational methods for spatial models with intractable likelihoods

Abstract: Extreme value analysis is critical for understanding the effects of climate change. Exploiting the spatiotemporal structure of climate data can improve estimates by borrowing strength across nearby locations and provide estimates of the probability of simultaneous extreme events. A fundamental probability model for spatially-dependent extremes is the max-stable process. While this model is theoretically justified, it leads to an intractable likelihood function. We propose to use deep learning to overcome this computational challenge. The approximation is based on simulating millions of draws from the prior and the data-generating process, then using deep learning density regression to approximate the posterior distribution. We verify through extensive simulation experiments that this approach leads to reliable Bayesian inference, and discuss extensions to other spatial processes with intractable likelihoods, including the autologistic model for binary data and the SIR model for the spread of an infectious disease.
April 26: Doug Nychka

Title: Hybrid L1 and L2 Smoothing

Abstract: Spline smoothing, and more generally Gaussian process smoothing, has become a successful methodology for estimating a smooth trend or surface from noisy data. Similarly, the LASSO and related L1 penalties have become important tools for variable selection and also admit a Bayesian version based on the Laplace distribution. This project combines these two approaches as a method to detect discontinuous behavior in an otherwise smooth signal.

Every day the Foothills Facility of Denver Water filters more than 250 million gallons of water for the metropolitan area. This process runs continuously and is monitored across an array of filters, each the size of a small swimming pool, at 5-minute intervals. It is important to be able to detect anomalous behavior in a filter in a prompt manner or to process past measurements to determine trends. The anomalies take the form of discontinuities or appear as step changes in the smooth filtering cycle. This application is the motivation for a mixed smoothing approach where normal operation is captured by a smoothing spline and the anomalies by basis function coefficients determined by an L1 penalty.

As part of this research, a frequentist penalty method is compared against its equivalent Bayesian hierarchical model (BHM) based on Gaussian processes and a Laplace prior for the anomaly coefficients. This talk will discuss some of the challenges in implementing both models. Specifically, we study how to choose penalty parameters for the frequentist model and how to formulate the BHM in a way that the MCMC sampling algorithm mixes efficiently. Both approaches appear to classify anomalies in the filter cycles well, with the spline model being much faster and the BHM providing measures of uncertainty in the detected anomalies. The similarity between these frequentist and Bayesian models relies on the correspondence between splines and Gaussian processes. This was first described by Grace Wahba, a long-time faculty member of the UW statistics department, and George Kimeldorf. Some background on this connection will be given as part of developing the Bayesian model.
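One schematic way to write such a hybrid penalty (my notation, not necessarily the exact formulation in the talk) is

$$\min_{f,\,c}\; \sum_i \Bigl(y_i - f(x_i) - \sum_j c_j\,\phi_j(x_i)\Bigr)^2 + \lambda_1 \int f''(t)^2\,dt + \lambda_2 \sum_j |c_j|,$$

where the L2 roughness penalty on the smooth trend $f$ captures the normal filter cycle and the L1 penalty on the basis coefficients $c$ selects a sparse set of step-like anomalies.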
May 3: Raul Perez Pelaez
May 10: No Colloquium