# Weekly Colloquia

Guest speakers from around the world discuss trends and research in the field of Applied Mathematics and Statistics.

Colloquium takes place at 3 p.m. on Fridays in Chauvenet Hall room 143. Attendance is in person or via Zoom. Please contact Samy Wu Fung at swufung@mines.edu or Daniel McKenzie at dmckenzie@mines.edu for further information, including the Zoom link and password.

## Fall 2024

TBD | TBD |

## Spring 2024

January 12 | Ryan Peterson - Graduate Student Colloquium Title: Spatial Statistical Data Fusion with LatticeKrig |

January 19 | Krishna Balasubramanian Title: High-dimensional scaling limits of least-square online SGD and its fluctuations. Abstract: Stochastic Gradient Descent (SGD) is widely used in modern data science. Existing analyses of SGD have predominantly focused on the fixed-dimensional setting. In order to perform high-dimensional statistical inference with such algorithms, it is important to study the dynamics of SGD when both the dimension and the iterations go to infinity at a proportional rate. In this talk, I will discuss such high-dimensional limit theorems for the online least-squares SGD iterates for solving over-parameterized linear regression. Specifically, focusing on the double-asymptotic setting (i.e., when both the dimensionality and iterations tend to infinity), I will present the mean-field limit (in the form of an infinite-dimensional ODE) and fluctuations (in the form of infinite-dimensional SDEs) for the online least-squares SGD iterates, highlighting certain phase transitions. A direct consequence of the result is obtaining explicit expressions for the mean-squared estimation/prediction errors and their fluctuations under high-dimensional scalings. |
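For readers unfamiliar with the object being studied, here is a minimal fixed-dimensional sketch of online least-squares SGD: one streaming observation per step, one gradient step on its squared loss. The talk analyzes the limits of iterates like these when dimension and iterations grow together; the dimension, step size, and noise level below are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_steps, lr = 20, 5000, 0.05
theta_true = rng.normal(size=d)       # unknown regression coefficients
theta = np.zeros(d)                   # SGD iterate

for _ in range(n_steps):
    # One streaming observation: y = <x, theta_true> + noise.
    x = rng.normal(size=d) / np.sqrt(d)
    y = x @ theta_true + 0.01 * rng.normal()
    # Online SGD step on the single-sample squared loss (y - <x, theta>)^2.
    theta += lr * (y - x @ theta) * x
```

With a small step size the iterate drifts toward `theta_true`; the high-dimensional theory in the talk characterizes exactly this drift and its fluctuations as ODE/SDE limits.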

February 2 | Mahadevan Ganesh Title: Time- and Frequency-Domain Wave Propagation Models: Reformulations, Algorithms, Analyses, and Simulations Abstract: Efficient simulation of wave propagation induced by multiple structures is fundamental for numerous applications. Robust mathematical modeling of the underlying time-dependent physical process is crucial for designing high-order computational methods for the multiple scattering simulations. The development of related algorithms and analyses is based on celebrated continuous mathematical equations either in the time- or frequency-domain, with the latter involving mathematical manipulations. Consequently, the meaning of the term "multiple scattering" varies depending on the context in which it is used. Physics literature suggests that the continuous frequency-domain (FD) multiple scattering model is a purely mathematical construct, and that in the time-domain (TD), multiple scattering becomes a definite physical phenomenon. In recent years there has been substantial development of computational multiple scattering algorithms in the FD. In the context of computational multiple scattering, it is important to ensure that the simulated solutions represent the definite physical multiple scattering process. In this talk, we describe our recent contributions to the development of high-order wave propagation computational models in both time- and frequency-domains, and we argue that spectrally accurate FD scattering algorithms are crucial for efficient and practical simulation of physically appropriate TD multiple scattering phenomena in unbounded regions with multiple structures. |

February 9 | Matt Hofkes - Graduate Student Colloquium |

February 16 | Brandon Knutson - Graduate Student Colloquium |

February 23 | Julia Arciero Title: Modeling oxygen transport and flow regulation in the human retina Abstract: Impairments in retinal blood flow and oxygenation have been shown to contribute to the progression of glaucoma. In this study, a theoretical model of the retina is used to predict retinal blood flow and oxyhemoglobin saturation at differing levels of capillary density and autoregulation capacity as intraluminal pressure, oxygen demand, or intraocular pressure are varied. The model includes a heterogeneous representation of retinal arterioles and a compartmental representation of capillaries and venules. A Green’s function method is used to model oxygen transport in the arterioles, and a Krogh cylinder model is used in the capillaries and venules. Model results predict that increased intraocular pressure and impaired blood flow regulation can each cause decreased tissue oxygenation. Under baseline conditions of a capillary density of 500 mm⁻², an autoregulation plateau is predicted for incoming intraluminal pressures in the range of 32-40 mmHg. Decreasing capillary density or increasing intraocular pressure leads to a loss in the autoregulation plateau in that pressure range. If the patient has no ability to regulate flow, the autoregulation plateau disappears entirely. Ultimately, the model framework presented in this study will allow for future comparisons to sectorial-specific clinical data to help assess the potential role of impaired blood flow regulation in ocular disease. |
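The Krogh cylinder model mentioned in the abstract has a classical closed-form steady state: radial diffusion with constant consumption M, diffusion coefficient K, a prescribed pressure at the capillary wall, and zero flux at the outer tissue radius. The sketch below evaluates that textbook solution; all parameter values are arbitrary illustrative choices, not values from the speaker's model.

```python
import numpy as np

# Illustrative (made-up) parameters: capillary radius Rc, tissue radius Rt,
# consumption rate M, Krogh diffusion coefficient K, capillary pressure Pc.
Rc, Rt, M, K, Pc = 3.0, 30.0, 1.0e-3, 1.0, 40.0

def krogh_pressure(r):
    """Steady-state O2 partial pressure in the Krogh cylinder: solves
    (K/r) d/dr(r dP/dr) = M with P(Rc) = Pc and zero flux at r = Rt."""
    return Pc + M / (4 * K) * (r**2 - Rc**2) - M * Rt**2 / (2 * K) * np.log(r / Rc)

r = np.linspace(Rc, Rt, 200)
P = krogh_pressure(r)   # decreases monotonically away from the capillary
```

The profile starts at `Pc` on the capillary wall, decreases with radius, and flattens at the outer radius where the zero-flux condition holds.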

March 1 | Brennan Sprinkle & Dorit Hammerling Title: Why applied math and statistics work so well together: Detecting, localizing and quantifying methane emissions on oil and gas facilities. Abstract: Methane, the main component of natural gas, is the second-largest contributor to climate change after carbon dioxide. Methane has a higher heat-trapping potential but shorter lifetime than carbon dioxide, and therefore, rapid reduction of methane emissions can have quick and large climate change mitigation impacts. Reducing emissions from the oil and gas supply chain, which account for approximately 14% of total methane emissions, turns out to be a particularly promising avenue in part due to rapid development in continuous emission monitoring technology. We will present a fast method for the modeling and simulation of methane emission dispersion, and how we use these simulations as a critical building block within a statistical framework for quick emission detection and localization using continuous methane concentration data. In particular, we will highlight the importance of combining approaches from applied math and scientific computing with modern statistics and data science to furnish a practical method for rapid emission detection on oil and gas production facilities. We'll end by discussing some open questions and ongoing challenges with this work and opportunities to get involved. |

March 8 | Jeff Anderson Title: Ensemble Kalman Filters for Data Assimilation: An Overview and Future Directions Abstract: The development of numerical weather prediction was a great computational and scientific achievement of the last century. Models of the PDEs that govern fluid flow and a vast observing network are required for these predictions. A third vital component is data assimilation (DA) that combines observations with predictions from previous times to produce initial conditions for subsequent predictions. Ensemble Kalman filters are DA algorithms that use a set of predictions to estimate the PDF of the model state given observations. They are used for weather, but also for many other geophysical systems, and for applications like disease transmission. They can be extended to estimate model parameters, guide model improvement, evaluate observation quality and design future observing systems. Basic Kalman and ensemble Kalman filter algorithms are reviewed, followed by a discussion of some heuristic extensions like localization that are required for application to large models. Recent work to extend ensemble filters to strongly non-Gaussian nonlinear problems will be discussed. These extensions are particularly beneficial when applying filters for quantities like rainfall or tracer concentration where the appropriate priors can be best represented by mixed PDFs; PDFs that are a sum of continuous and discrete functions. |
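The basic ensemble Kalman filter update reviewed in the talk can be sketched in a few lines: a perturbed-observation analysis step for a toy two-variable state where only the first component is observed. The state, covariances, and observation error below are made-up illustrative numbers, not from any geophysical system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior ensemble for a 2-component state; only the first component is observed.
n_ens = 500
truth = np.array([1.0, -1.0])
prior_cov = np.array([[4.0, 2.0], [2.0, 4.0]])
ens = rng.multivariate_normal([0.0, 0.0], prior_cov, size=n_ens)

obs_err = 0.5
H = np.array([1.0, 0.0])                        # observation operator
y_obs = truth @ H + obs_err * rng.normal()

hx = ens @ H                                    # ensemble in observation space
anom = ens - ens.mean(axis=0)
P_xh = anom.T @ (hx - hx.mean()) / (n_ens - 1)  # cross-covariance with obs
gain = P_xh / (hx.var(ddof=1) + obs_err**2)     # Kalman gain (length-2 vector)

# Perturbed-observation update: each member assimilates a jittered observation.
innov = y_obs + obs_err * rng.normal(size=n_ens) - hx
analysis = ens + np.outer(innov, gain)
```

The cross-covariance spreads the observed information to the unobserved second component, and the analysis ensemble tightens around the observation, which is the essence of the method before localization and the non-Gaussian extensions the talk discusses.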

March 15 | No Colloquium |

March 22 | No Colloquium Due To Spring Break |

March 29 | John Schreck Title: Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation for Earth System Science Applications Abstract: Uncertainty quantification is crucial for reliable weather and climate modeling but challenging to achieve. In this seminar, I will demonstrate evidential deep learning, combining probabilistic modeling with deep neural networks, as an effective technique for predictive modeling and calibrated uncertainty estimation. Through atmospheric science classification and regression tasks, we show evidential models attaining predictive accuracy comparable to standard methods while robustly quantifying uncertainty. Uncertainty decomposition provides insights into aleatoric (data variability) and epistemic (model limitations) components. Gaining insights into these distinct uncertainty sources is paramount for enhancing model reliability, utility, and efficiency. We compare the uncertainty metrics derived from evidential neural networks to those obtained from calibrated ensembles, with evidential networks resulting in significant computational savings. Analyses reveal links between uncertainties and underlying meteorological processes, facilitating interpretation. This study establishes deep evidential networks as an adaptable tool for augmenting neural network predictions across geoscience disciplines, overcoming limitations of prevailing approaches. With the ability to produce trustworthy uncertainties alongside predictions, evidential learning has the potential to transform weather and climate modeling, aiding critical analysis and decision-making under uncertainty. |

April 5 | Jennifer Mueller Title: Electrical impedance tomography for pulmonary imaging: from inverse problems to the clinic Abstract: Electrical impedance tomography (EIT) is a non-invasive, non-ionizing imaging technique that produces real-time functional images of ventilation and pulmonary perfusion at the bedside. The inverse problem of EIT is to determine the spatially varying conductivity, which arises as a coefficient in the generalized Laplace equation, from measurements of the voltages that arise on electrodes on the surface of the body from applied currents on those same electrodes. The mathematical problem is also known as the Calderon problem of reconstructing the conductivity coefficient from the Neumann-to-Dirichlet map. My lab at CSU is focused on collaborations with engineers and physicians to develop EIT technology for clinical use. In this talk, I will discuss the mathematics of the inverse problem of forming real-time images as well as clinical applications in collaboration with Children's Hospital Colorado and Anschutz Hospital in Aurora. Results from patient data with chronic and critical lung disease will be shown and discussed. |

April 12 | No Colloquium Due To E-Days |

April 19 | Brian Reich Title: Bayesian computational methods for spatial models with intractable likelihoods Abstract: Extreme value analysis is critical for understanding the effects of climate change. Exploiting the spatiotemporal structure of climate data can improve estimates by borrowing strength across nearby locations and provide estimates of the probability of simultaneous extreme events. A fundamental probability model for spatially dependent extremes is the max-stable process. While this model is theoretically justified, it leads to an intractable likelihood function. We propose to use deep learning to overcome this computational challenge. The approximation is based on simulating millions of draws from the prior and then the data-generating process, and then using deep learning density regression to approximate the posterior distribution. We verify through extensive simulation experiments that this approach leads to reliable Bayesian inference, and discuss extensions to other spatial processes with intractable likelihoods including the autologistic model for binary data and SIR model for the spread of an infectious disease. |

April 26 | Doug Nychka Title: Hybrid L1 and L2 Smoothing Abstract: Spline smoothing, and more generally Gaussian process smoothing, has become a successful methodology for estimating a smooth trend or surface from noisy data. Similarly, the LASSO and related L1 penalties have become important tools for variable selection and also admit a Bayesian version based on the Laplace distribution. This project combines these two approaches as a method to detect discontinuous behavior in an otherwise smooth signal. Every day the Foothills Facility of Denver Water filters more than 250 million gallons of water for the metropolitan area. This process runs continuously and is monitored across an array of filters, each the size of a small swimming pool, at 5-minute intervals. It is important to be able to detect anomalous behavior in a filter in a prompt manner or to process past measurements to determine trends. The anomalies take the form of discontinuities or appear as step changes in the smooth filtering cycle. This application is the motivation for a mixed smoothing approach where normal operation is captured by a smoothing spline and the anomalies by basis function coefficients determined by an L1 penalty. As part of this research a frequentist penalty method is compared against its equivalent Bayesian hierarchical model (BHM) based on Gaussian processes and a Laplace prior for the anomaly coefficients. This talk will discuss some of the challenges in implementing both models. Specifically, we study how to choose penalty parameters for the frequentist model and how to formulate the BHM in a way that the MCMC sampling algorithm mixes efficiently. Both approaches appear to classify anomalies in the filter cycles well, with the spline model being much faster but the BHM providing measures of uncertainty in the detected anomalies. The similarities between these frequentist and Bayesian models rely on the correspondence between splines and Gaussian processes. This was first described by Grace Wahba, a long-time faculty member of the UW statistics department, and George Kimeldorf. Some background for this connection will be given as part of developing the Bayesian model. |
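The mixed L1/L2 penalty described in the abstract can be sketched in a toy form: a smooth signal plus sparse anomalies plus noise, fit by alternating a ridge/spline-type solve (the L2 part) with soft-thresholding (the L1 part). Everything below is an illustrative assumption — synthetic data, made-up penalty weights, and point-spike anomalies rather than the step changes in the Denver Water application.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy signal: smooth trend plus two sparse anomalies plus noise.
n = 200
t = np.linspace(0, 1, n)
anomaly_true = np.zeros(n)
anomaly_true[[50, 120]] = 2.0
y = np.sin(2 * np.pi * t) + anomaly_true + 0.05 * rng.normal(size=n)

# Second-difference penalty matrix for the spline-like L2 part.
D2 = np.diff(np.eye(n), n=2, axis=0)
lam2, lam1 = 100.0, 0.5
A = np.eye(n) + lam2 * D2.T @ D2

s = np.zeros(n)   # smooth component
a = np.zeros(n)   # sparse anomaly component
for _ in range(25):
    # L2 step: smooth fit to the de-anomalized data (penalized least squares).
    s = np.linalg.solve(A, y - a)
    # L1 step: soft-threshold the residual to isolate sparse anomalies.
    r = y - s
    a = np.sign(r) * np.maximum(np.abs(r) - lam1, 0.0)
```

Each step solves one of the two sub-problems exactly, so the alternation monotonically decreases the combined objective; the recovered `a` is nonzero essentially only at the planted anomaly locations.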

May 3 | Raul Perez Pelaez |

May 10 | No Colloquium |

## Fall 2023

September 1 | Laura Albright - Graduate Student Colloquium |

September 5 (Tuesday) at 3:30 pm | Sophie Marbach Title: The Countoscope: Counting particles in boxes to probe individual and collective dynamics Abstract: Any imaging technique is limited by its field of view. As objects or particles move in and out of the observation field, tracking their motion, especially over long periods, becomes challenging. Here, we shift this paradigm by introducing a technique that deliberately harvests the limited field of view. We divide images into observation boxes and count the particles in each box. By analyzing the statistical properties of the number of particles, with varying observation box sizes, we show that we can infer the kinetic properties of the particles, such as their diffusion coefficient, without relying on particle tracking. We use a combination of experiments on colloidal suspensions, simulations with fluctuating hydrodynamics and analytical theory to support our findings. By investigating suspensions with increasing packing fraction, we show how box counting can probe, beyond the self-diffusion coefficient, hydrodynamic and steric effects, and collective motion. We extend our technique to various suspensions, such as ions or active particles. The "Countoscope" offers the unique possibility to systematically link individual and collective behavior, opening up broad soft matter and statistical physics perspectives. |

September 15 | Rob Webber Title: Rocket-propelled Cholesky: Addressing the challenges of large-scale kernel computations. Abstract: Kernel methods are used for prediction and clustering in many data science and scientific computing applications, but applying kernel methods to a large number of data points N is expensive due to the high cost of manipulating the N x N kernel matrix. A basic approach for speeding up kernel computations is low-rank approximation, in which we replace the kernel matrix A with a factorized approximation that can be stored and manipulated more cheaply. When the kernel matrix A has rapidly decaying eigenvalues, mathematical existence proofs guarantee that A can be accurately approximated using a constant number of columns (without ever looking at the full matrix). Nevertheless, for a long time designing a practical and provably justified algorithm to select the appropriate columns proved challenging. Recently, we introduced RPCholesky ("randomly pivoted" or "rocket-propelled" Cholesky), a natural algorithm for approximating an N x N positive semidefinite matrix using k adaptively sampled columns. RPCholesky can be implemented with just a few lines of code; it requires only (k + 1) N entry evaluations and O(k^2 N) additional arithmetic operations. In experiments, RPCholesky matches or improves on the performance of alternative algorithms for low-rank psd approximation. Moreover, RPCholesky provably achieves near-optimal approximation guarantees. The simplicity, effectiveness, and robustness of this algorithm strongly support its use for large-scale kernel computations. |

September 22 | Fatemeh Pourahmadian Title: Recent progress in inverse elastic scattering Abstract: The first part of this talk highlights recent laboratory applications of sampling approaches to inverse scattering with a particular focus on the linear sampling method and its generalized form for reconstruction from noisy measurements. For this purpose, I leverage two types of experiments; the first setup is designed to mimic fracking with the aim of seismic sensing of evolving fractures in rock, while the other setup pertains to laser ultrasonic testing for characterization of additively manufactured components. I will also include some preliminary results on our recent efforts to potentially augment and automate the inversion process via deep learning. This would set the stage for the second part of this talk which is mostly theoretical and dedicated to differential evolution indicators for imaging evolving processes in unknown environments. |

September 29 | Ernest Ryu Title: Toward a Grand Unified Theory of Accelerations in Optimization and Machine Learning Abstract: Momentum-based acceleration of first-order optimization methods, first introduced by Nesterov, has been foundational to the theory and practice of large-scale optimization and machine learning. However, finding a fundamental understanding of such acceleration remains a long-standing open problem. In the past few years, several new acceleration mechanisms, distinct from Nesterov’s, have been discovered, and the similarities and dissimilarities among these new acceleration phenomena hint at a promising avenue of attack for the open problem. In this talk, we discuss the envisioned goal of developing a mathematical theory unifying the collection of acceleration mechanisms and the challenges that are to be overcome. |

October 6 | Emily King Title: Interpretable, Explainable, and Adversarial AI: Data Science Buzzwords and You (Mathematicians) Abstract: Many state-of-the-art methods in machine learning are black boxes which do not allow humans to understand how decisions are made. In a number of applications, like medicine and atmospheric science, researchers do not trust such black boxes. Explainable AI can be thought of as attempts to open the black box of neural networks, while interpretable AI focuses on creating white boxes. Adversarial attacks are small perturbations of data, often images, that cause a neural network to misclassify the data. Such attacks are potentially very dangerous when applied to technology like self-driving cars. After a gentle introduction to these topics and data science in general, a sampling of methods from geometry, linear algebra, and harmonic analysis to attack these issues will be presented. |
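A minimal illustration of the "small perturbation" idea behind adversarial attacks, using a bare linear classifier rather than any method from the talk: for f(x) = sign(w @ x), the gradient-sign (FGSM-style) perturbation is exactly eps * sign(w), and a small L-infinity budget can flip the decision. All numbers are made up for illustration.

```python
import numpy as np

# Linear classifier f(x) = sign(w @ x). For a positively classified input,
# the loss gradient w.r.t. x is -w, so the gradient-sign step is
# x_adv = x + eps * sign(-w) = x - eps * sign(w).
w = np.array([3.0, -4.0, 2.0, 1.0])
x = np.array([0.1, -0.1, 0.2, 0.1])   # classified positive: w @ x = 1.2 > 0
eps = 0.13                             # small L-infinity perturbation budget
x_adv = x - eps * np.sign(w)           # one gradient-sign step

margin_before = w @ x                  # 1.2
margin_after = w @ x_adv               # 1.2 - eps * ||w||_1 = 1.2 - 1.3 < 0
```

Each coordinate moves by at most `eps`, yet the margin drops by `eps` times the L1 norm of `w`, which is why high-dimensional models with large-norm weight vectors are so vulnerable to imperceptible perturbations.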

October 13 | No Colloquium |

October 20 | Kate Bubar Title: Fundamental limits to the effectiveness of traveler screening with molecular tests Abstract: Screening airport travelers during an emerging infectious disease outbreak is a common approach to limit the geographical spread of infection. Previous modeling work has explored the effectiveness of screening travelers for symptoms or exposure risk for a variety of pathogens. We developed a probabilistic modelling framework to build on this past work via three main contributions: (1) estimating the effectiveness of screening with molecular tests (e.g., PCR or rapid tests), (2) integrating important heterogeneities in individuals’ detectability and infectiousness with temporal within-host viral kinetics models, and (3) quantifying the fundamental limits of traveler screening. In this talk, I will describe the relevant biological and epidemiological background, our modelling approach and analysis, and the implications for public health policy makers. |

October 26 | Matt Picklo - Graduate Student Colloquium |

October 27 | Megan Wawro Title: The Inquiry-Oriented Linear Algebra Project Abstract: The goal of the Inquiry-Oriented Linear Algebra (IOLA) project is to promote a research-based, student-centered approach to the teaching and learning of introductory linear algebra at the university level. Based on the instructional design theory of Realistic Mathematics Education, the IOLA curricular materials build from a set of experientially real tasks that allow for active student engagement in the guided reinvention of key mathematical ideas through student and instructor inquiry. The online instructional support materials include various resources such as rationales for task design, implementation suggestions, and examples of typical student work. In this talk, I will share some IOLA tasks and associated examples of student reasoning, as well as some guiding principles for inquiry-oriented instruction. |

November 3 | Elizabeth Newman Title: Training Made Easy: Harnessing Structure and Curvature Information to Train Neural Networks Efficiently Abstract: Deep neural networks (DNNs) have achieved inarguable success as high-dimensional function approximators in countless applications. However, this success comes at a significant hidden expense: the cost of training. Typically, the training problem is posed as a stochastic optimization problem with respect to the learnable DNN weights. With millions of weights, a non-convex objective function, and many hyperparameters to tune, solving the training problem well is no easy task. In this talk, we will present new algorithms that make DNN training easier by exploiting common structure, automating hyperparameter tuning, and computing curvature information efficiently. We will first focus on training separable DNNs; that is, architectures for which the weights of the final layer are applied linearly. We will leverage this structure first in a deterministic setting by eliminating the linear weights through variable projection (i.e., partial optimization). Then, we will extend to a stochastic setting using a powerful iterative sampling approach, which notably incorporates automatic regularization parameter selection. Time-permitting, we will discuss up-and-coming work that introduces a memory and computationally efficient Gauss-Newton optimizer for training large-scale DNN models rapidly. Throughout the talk, we will demonstrate the efficacy of these approaches through numerical examples. |

November 10 | Michael Ivanitskiy - Graduate Student Colloquium |

November 17 | Andrew Zammit Mangion Title: Bayesian Neural Networks for Spatial Process Modelling Abstract: Statistical models for spatial processes play a central role in statistical analyses of spatial data. Yet, it is the simple, interpretable, and well understood models that are routinely employed even though, as is revealed through prior and posterior predictive checks, these can poorly characterise the spatial heterogeneity in the underlying process of interest. In this talk I will propose a new, flexible class of spatial-process models, which I refer to as spatial Bayesian neural networks (SBNNs). An SBNN leverages the representational capacity of a Bayesian neural network; it is tailored to a spatial setting by incorporating a spatial "embedding layer" into the network and, possibly, spatially-varying network parameters. An SBNN is calibrated by matching its finite-dimensional distribution at locations on a fine gridding of space to that of a target process of interest. That process could be one that is easy to simulate from, or one from which many realisations are available. I will be formulating several variants of SBNNs, most of which are able to match the finite-dimensional distribution of the target process at the selected grid better than conventional BNNs of similar complexity. I will show that a single SBNN can remarkably be used to represent a variety of spatial processes often used in practice, such as Gaussian processes and lognormal processes. I will briefly discuss the tools that could be used to make inference with SBNNs, and will conclude with a discussion on their advantages and limitations. |

November 24 | No Colloquium |

November 30 | Heather Zinn Brooks Title: The Mathematics of Opinion Dynamics Abstract: Given the large audience and the ease of sharing content, the shifts in opinion driven by online interaction have important implications for interpersonal interactions, public opinion, voting, and policy. There is a critical and growing demand to understand the mechanisms behind the spread of content online. While the majority of the research on online content focuses on these phenomena from an empirical or computational perspective, mechanistic mathematical modeling also has an important role to play. Mathematical models can help develop a theory to understand the mechanisms underpinning the spread of content and diffusion of information. These models provide an excellent framework because they are often relatively simple models with surprisingly rich dynamics. In this talk, I’ll introduce you to a variety of mathematical models for opinion dynamics, and I’ll highlight some particular problems that we study in my research group. |

December 1 | Graduate Student Colloquium |

December 8 | No Colloquium |

December 15 | No Colloquium |

## Spring 2023

January 13 | Siting Liu Title: An Inverse Problem in Mean Field Games from Partial Boundary Measurement Abstract: In this talk, we consider a novel inverse problem in mean-field games (MFG). We aim to recover the MFG model parameters that govern the underlying interactions among the population based on a limited set of noisy partial observations of the population dynamics under the limited aperture. Due to its severe ill-posedness, obtaining a good quality reconstruction is very difficult. Nonetheless, it is vital to recover the model parameters stably and efficiently to uncover the underlying causes of population dynamics for practical needs. Our work focuses on the simultaneous recovery of running cost and interaction energy in the MFG equations from a finite number of boundary measurements of population profile and boundary movement. To achieve this goal, we formalize the inverse problem as a constrained optimization problem of a least squares residual functional under suitable norms. We then develop a fast and robust operator splitting algorithm to solve the optimization using techniques including harmonic extensions, three-operator splitting scheme, and primal-dual hybrid gradient method. Numerical experiments illustrate the effectiveness and robustness of the algorithm. This is based on joint work with Yat Tin Chow, Samy Wu Fung, Levon Nurbekyan, and Stanley J. Osher. |

January 20 | Lyndsay Shand (In-Person) Title: An autotuning approach to DOE’s earth system model Abstract: Calibration is the science (and art) of matching the model to observed data. Global Climate Model (GCM) calibration is a tedious, time-consuming, multi-step process done by hand, and it involves both high-dimensional input and output spaces. Many rigorous calibration methods have been proposed in both the statistical and climate literature, but many are not practical to implement. In this talk, I will demonstrate a promising and practical calibration approach on the atmosphere-only model of the Department of Energy’s Energy Exascale Earth System Model (E3SM). We generate a set of designed ensemble runs that span our input parameter space and fit a surrogate model using polynomial chaos expansions on a reduced space. We then use the surrogate in an optimization scheme to identify input parameter sets that best match our simulated output to observations. Finally, we run E3SM with the optimal parameter set and compare prediction results across 44 spatial fields to the hand-tuned optimal parameter set chosen by experts. This flexible approach is straightforward to implement and performs as well as or better than the expert-chosen tuning parameters while considering high-dimensional output, and it operates in a fraction of the time. |

January 27 | Shelby Stowe - Graduate Student Colloquium |

February 3 | Dave Montgomery - Graduate Student Colloquium |

February 10 | Nathan Lenssen Title: What will the weather be next year? How we can (and can’t) predict the chaotic evolution of the climate system. Abstract: The Earth’s atmosphere and ocean are coupled chaotic nonlinear dynamical systems that drive the weather and long-term climate patterns we experience. Variability in the climate, or changes in the distribution of weather, can lead to increased chances of climate extremes such as drought and weather extremes such as hurricanes. The El Niño-Southern Oscillation (ENSO) is the dominant source of subseasonal, seasonal, and multi-year climate variability, driving changes in climate and weather worldwide. ENSO is also the dominant source of theoretical predictability on these timescales, giving us the opportunity to predict the weather in the coming months and years. This seminar will discuss the history of modeling and predicting ENSO as a dynamical system, from its conception in the 1980s up to the state of the art. We will show results from ongoing research into predicting ENSO up to 2 years in advance. Open questions in climate prediction will be discussed with an emphasis on possible applications for applied mathematics, statistics, and machine learning. |

February 17 | Matthias Katzfuss Title: Scalable Gaussian-Process Inference via Sparse Inverse Cholesky Factorization. Abstract: Gaussian processes (GPs) are popular, flexible, and interpretable probabilistic models for functions in geospatial analysis, computer-model emulation, and machine learning. However, direct application of GPs involves dense covariance matrices and is computationally infeasible for large datasets. We consider a framework for fast GP inference based on the so-called Vecchia approximation, which implies a sparse Cholesky factor of the inverse covariance matrix. The approximation can be written in closed form and computed in parallel, and it includes many popular existing approximations as special cases. We discuss various applications and extensions of the framework, including high-dimensional inference and variational approximations for latent GPs. |

February 24 | Dan Cooley (In-Person) Title: Transformed-Linear Methods for Multivariate Extremes and Application to Climate Abstract: Statistical methods for extremes are widely used in climate science. Distributions like the generalized extreme value and generalized Pareto are familiar tools used by climate scientists to describe the extreme behavior of univariate data. Multivariate (and spatial and time series) extremes largely focuses on accurately capturing the tail dependence of several variables. Multivariate extremes models can be complicated and can be difficult to fit in high dimensions. In this talk, we will use methods from classical (non-extreme) statistics as inspiration to create sensible methods for multivariate extremes. Many classical statistical methods (e.g., PCA, spatial, factor analysis, and time series) employ the covariance matrix to learn about dependence, to construct models, or to perform prediction. Most familiar statistics methods are linear. However, extremal dependence is poorly described by the covariance matrix. Linear methods have not been widely employed for extremes, and are difficult to implement for data that are non-negative. In this talk, we will introduce transformed linear methods for extremes. By using the tail pairwise dependence matrix (TPDM) in place of the covariance matrix, and by employing transformed linear operations, extreme analogues can be developed for familiar linear statistical methods. Here, we will focus on developing transformed linear time series models to capture dependence in the upper tail. These models are extremal analogues to familiar ARMA models. We apply these models to perform attribution for seasonal wildfire conditions. To focus on change in fire risk due to climate, we model the fire weather index (FWI) time series. We use our fitted model to perform an attribution study. 
According to our fitted model, the 2020 Colorado fire season is many times more likely to occur under recent climate than under the climate of 50 years ago. If time allows, we will also present results from a PCA analysis of CONUS extreme precipitation. |
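For readers unfamiliar with the transformed-linear construction: a softplus-type transform maps the real line onto (0, ∞), and linear operations are carried out in the transformed space so that results stay positive. A minimal sketch (our own illustration, not the speaker's code):

```python
import numpy as np

def t(y):
    """Softplus transform: maps R onto (0, inf)."""
    return np.log1p(np.exp(y))

def t_inv(x):
    """Inverse softplus: maps (0, inf) back to R."""
    return np.log(np.expm1(x))

def tadd(x1, x2):
    """Transformed-linear addition: stays in (0, inf)."""
    return t(t_inv(x1) + t_inv(x2))

def tscale(a, x):
    """Transformed-linear scalar multiplication."""
    return t(a * t_inv(x))
```

For large arguments the transform is close to the identity, so transformed-linear operations on large (extreme) values behave like ordinary linear ones.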

March 3 | No Colloquium |

March 10 | Steve Pankavich Title: Kinetic Models of Collisionless Plasmas Abstract: Collisionless plasmas arise in a variety of settings, ranging from magnetically confined plasmas used to study thermonuclear energy to space plasmas in planetary magnetospheres and solar winds. The two fundamental models that describe such phenomena are systems of nonlinear partial differential equations known as the Vlasov-Maxwell (VM) and Vlasov-Poisson (VP) systems. We will derive these kinetic models and discuss the possibility of shocks arising from a continuous distribution of particles. In the process of this investigation, it will be important to delineate the difference between the mathematical formulation of a shock and related phenomena often described by physicists. Finally, we will describe the stability and instability of velocity-dependent steady states in plasmas and discuss some recent computational results regarding their behavior. |
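For reference, one common normalization of the Vlasov-Poisson system (electrostatic, single species) reads:

```latex
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f = 0,
\qquad
E = -\nabla_x \phi,
\qquad
-\Delta_x \phi = \rho(t,x) = \int f(t,x,v)\,dv,
```

where f(t, x, v) is the phase-space particle density; the Vlasov-Maxwell system replaces the Poisson coupling with the full Maxwell equations.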

March 17 | Christian Parkinson (Remote) Title: The Hamilton-Jacobi Formulation of Optimal Path Planning for Autonomous Vehicles Abstract: We present a partial-differential-equation-based optimal path planning framework for simple self-driving cars. This formulation relies on optimal control theory, dynamic programming, and a Hamilton-Jacobi-Bellman equation, and thus provides an interpretable alternative to black-box machine learning algorithms. We design grid-based numerical methods used to resolve the solution to the Hamilton-Jacobi-Bellman equation and generate optimal trajectories. We then describe how efficient and scalable algorithms for solutions of high dimensional Hamilton-Jacobi equations can be developed to solve similar problems in higher dimensions and in nearly real-time. We demonstrate our methods with several examples. |
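The talk's grid-based methods target general Hamilton-Jacobi-Bellman equations; as a self-contained toy (ours, not the speaker's), here is a fast-sweeping solver for the eikonal equation |∇u| = 1, a static HJB special case whose solution is the distance to a source:

```python
import numpy as np

def eikonal_fast_sweep(n, h, src, n_sweeps=8):
    """Solve |grad u| = 1 on an n x n grid with u(src) = 0 via
    Gauss-Seidel sweeps in alternating orderings (fast sweeping)."""
    BIG = 1e10
    u = np.full((n, n), BIG)
    u[src] = 0.0
    for _ in range(n_sweeps):
        for di, dj in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            for i in range(n)[::di]:
                for j in range(n)[::dj]:
                    if (i, j) == src:
                        continue
                    # upwind neighbor values in each grid direction
                    a = min(u[i - 1, j] if i > 0 else BIG,
                            u[i + 1, j] if i < n - 1 else BIG)
                    b = min(u[i, j - 1] if j > 0 else BIG,
                            u[i, j + 1] if j < n - 1 else BIG)
                    if abs(a - b) >= h:
                        new = min(a, b) + h
                    else:  # two-sided quadratic update
                        new = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                    u[i, j] = min(u[i, j], new)
    return u
```

Optimal trajectories are then recovered by descending the gradient of u from any starting point, which is the dynamic-programming step the abstract alludes to.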

March 24 | No Colloquium |

March 31 | Tusharkanti Ghosh (In-Person) Title: Bayesian Hierarchical Hidden Markov Models for Identifying Differentially-Methylated Cytosines from Bisulfite-Sequencing Data Abstract: DNA methylation is a crucial epigenetic mechanism for controlling gene expression, silencing, and genomic imprinting in living cells, and aberrant methylation has been associated with a variety of important biological processes and diseases, including ageing and cancer. Recent developments in ultra-high throughput sequencing technologies and the massive accumulation of sequencing data bring the hope of understanding the workings of DNA methylation at a molecular level; however, these data pose significant challenges in modelling and analysis. In this talk, I discuss how we developed a Bayesian statistical framework and methodology for the identification of differential patterns of DNA methylation between different groups of cells, focusing on a study of human ageing. Our approach develops and extends a class of Bayesian hierarchical hidden Markov models (HHMMs) that can accommodate various degrees of dependence among the sequence-level measurements, both within and across the sequences, and provides the ability to select, among competing alternative models, the most appropriate one for a specific methylation data set. Our proposed methodology to determine differentially-methylated cytosines (DMCs) is implemented through a fast and efficient hybrid Markov chain Monte Carlo algorithm, and we demonstrate how it significantly improves correct prediction rates, with a reduced false discovery rate, compared to several existing methods for DMC detection. |
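The hierarchical HMMs in the talk are considerably richer, but they build on the basic forward recursion for HMM likelihoods, sketched here in generic form (a textbook illustration, not the speaker's model):

```python
import numpy as np

def hmm_forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a basic HMM.

    pi : (S,) initial state distribution
    A  : (S, S) transitions, A[i, j] = P(s_t = j | s_{t-1} = i)
    B  : (S, K) emissions,   B[i, k] = P(o_t = k | s_t = i)
    obs: sequence of symbols in {0, ..., K-1}
    Uses the scaled forward recursion to avoid underflow.
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik
```

In a methylation setting the hidden states would encode methylation status along the sequence, with emissions modeling the bisulfite read counts.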

April 7 | Olivia Walch (In-Person) Title: Circadian Interventions in Shift Workers: Translating math to the real world Abstract: Shift workers experience profound circadian disruption due to the nature of their work, which often has them on the clock at times when their internal clock is sending a strong, sleep-promoting signal. This constant disruption of their sleep and circadian rhythms can put them at risk of injury and the development of long-term chronic disease. Mathematical models can be used to generate recommendations for shift workers that move their internal clock state to better align with their work schedules, promote overall sleep, promote alertness at key times, or achieve other desired outcomes. Yet for these schedules to have a positive effect in the real world, they need to be acceptable to the shift workers themselves. In this talk, I will survey the types of schedules an algorithm may recommend to a shift worker, how those schedules can collide with the preferences of the real people being asked to follow them, and how math can be used to arrive at new schedules that take these human factors into account. |

April 14 | Michael Ivanitskiy - Graduate Student Colloquium |

April 21 | Keaton Hamm (In-Person) Title: Optimal Transport Based Manifold Learning Abstract: We will discuss the use of optimal transport in the setting of nonlinear dimensionality reduction and applications to image data. We illustrate the idea with an algorithm called Wasserstein Isometric Mapping (Wassmap), which works for data that can be viewed as a set of probability measures in Wasserstein space. The algorithm provides a low-dimensional, approximately isometric embedding. We show that the algorithm is able to exactly recover parameters of some image manifolds, including those generated by translations or dilations of a fixed generating measure. We will discuss computational speedups to the algorithm such as use of linearized optimal transport or the Nyström method. Testing of the proposed algorithms on various image data manifolds shows that Wassmap yields good embeddings compared with other global and local techniques. |

April 28 | Elizabeth Barnes (In-Person) Title: Explainable AI for Climate Science: Opening the Black Box to Reveal Planet Earth Abstract: Earth’s climate is chaotic and noisy. Finding usable signals amidst all of the noise can be challenging: be it predicting if it will rain, knowing which direction a hurricane will go, understanding the implications of melting Arctic ice, or detecting the impacts of human-induced climate warming. Here, I will demonstrate how explainable artificial intelligence (XAI) techniques can sift through vast amounts of climate data and push the bounds of scientific discovery: allowing scientists to ask “why?” but now with the power of machine learning. |

May 5 | Rebecca Morrison (In-Person) |

##### Fall 2022

August 26 | Geovani Nunes Grapiglia (Remote) Title: On the Worst-Case Complexity of Non-Monotone Line-Search Methods Abstract: Non-monotone line-search methods form an important class of iterative methods for non-convex unconstrained optimization problems. For the algorithms in this class, the non-monotonicity is controlled by a sequence of non-negative parameters. We prove complexity bounds to achieve approximate first-order optimality even when this sequence is not summable. As a by-product, we obtain a global convergence result that covers many existing non-monotone line-search methods. Our generalized results allow more freedom for the development of new algorithms. As an example, we design a non-monotone scheme related to the Metropolis rule. Preliminary numerical experiments suggest that the new method is suitable for non-convex problems with many non-global local minimizers. |

September 2 | Deepanshu Verma (Remote) Title: Advances and Challenges in Solving High-Dimensional HJB Equations Arising in Optimal Control Abstract: We present a neural network approach for approximately solving high-dimensional stochastic as well as deterministic control problems. Our network design and the training problem leverage insights from optimal control theory. We approximate the value function of the control problem using a neural network and use the Pontryagin maximum principle and the dynamic programming principle to express the optimal control (and therefore the sampling) in terms of the value function. Our training loss consists of a weighted sum of the objective functional of the control problem and penalty terms that enforce the Hamilton-Jacobi-Bellman equations along the sampled trajectories. As a result, we can obtain the value function in the regions of the state space traveled by optimal trajectories, avoiding the curse of dimensionality. Importantly, training is self-supervised in that it does not require solutions of the control problem. Our approach for stochastic control problems reduces to the method of characteristics as the system dynamics become deterministic. In our numerical experiments, we compare our method to existing solvers for a more general class of semi-linear PDEs. Using a two-dimensional toy problem, we demonstrate the importance of the PMP to inform the sampling. For a 100-dimensional benchmark problem, we demonstrate that our approach improves accuracy and time-to-solution. Finally, we consider a PDE-based dynamical system to demonstrate the scalability of our approach. |

September 9 | Brandon Amos (Remote) Title: Differentiable optimization-based modeling for machine learning Abstract: This talk tours the foundations and applications of optimization-based models for machine learning. Optimization is a widely-used modeling paradigm for solving non-trivial reasoning operations and brings precise domain-specific modeling priors into end-to-end machine learning pipelines that are otherwise typically large parameterized black-box functions. We will discuss how to integrate optimization as a differentiable layer and start simple with constrained, continuous, convex problems in Euclidean spaces. We will then move on to active research topics that expand beyond these core components into non-convex, non-Euclidean, discrete, and combinatorial spaces. Throughout all of these, we will consider applications in control, reinforcement learning, and vision. |
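The simplest convex case can be made concrete (our illustration): for the layer x*(θ) = argmin_x ½xᵀQx − θᵀx, differentiating the optimality condition Qx* − θ = 0 implicitly gives the layer's Jacobian in closed form, so gradients can flow through the argmin:

```python
import numpy as np

def qp_layer(Q, theta):
    """Forward pass of an unconstrained quadratic 'layer':
    x*(theta) = argmin_x 0.5 x'Qx - theta'x = Q^{-1} theta."""
    return np.linalg.solve(Q, theta)

def qp_layer_jacobian(Q, theta):
    """Implicit differentiation: the optimality condition
    Q x* - theta = 0 gives dx*/dtheta = Q^{-1}."""
    return np.linalg.inv(Q)
```

Adding constraints replaces the linear optimality condition with the KKT system, but the recipe (differentiate the optimality conditions, solve a linear system) stays the same.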

September 16 | Kyri Baker (In-Person) Title: Chance Constraints for Smart Buildings and Smarter Grids Abstract: Evolving energy systems are introducing heightened levels of stress on the electric power grid. Fluctuating renewable energy sources, dynamic electricity pricing, and new loads such as plug-in electric vehicles are transforming the operation of the grid, from the high-voltage transmission grid down to individual buildings. Grid overvoltages, instabilities, and overloading issues are increasing, but stochastic predictive optimization and control can help alleviate these undesirable conditions. Optimization techniques leveraging chance (probabilistic) constraints will be presented in this talk. Different ways to incorporate chance constraints into optimization problems, including distributionally robust and joint chance constraint reformulations, will be presented. Applications in smart buildings and distribution grids with high integration of solar energy are shown to benefit from chance constrained optimization formulations, reducing grid voltage issues, conserving energy, and allowing buildings and the grid to interact in new ways. |
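For a Gaussian uncertain constraint row a ~ N(ā, Σ), the individual chance constraint P(aᵀx ≤ b) ≥ 1 − ε has the classic deterministic second-order-cone reformulation āᵀx + Φ⁻¹(1 − ε)·√(xᵀΣx) ≤ b. A minimal sketch (our illustration with ε = 0.05 and the normal quantile hard-coded):

```python
import numpy as np

PHI_INV_95 = 1.6448536  # standard normal 95% quantile, Phi^{-1}(0.95)

def soc_lhs(x, a_bar, Sigma):
    """Left-hand side of the deterministic reformulation of the
    Gaussian chance constraint P(a'x <= b) >= 0.95 with a ~ N(a_bar, Sigma):
        a_bar'x + Phi^{-1}(0.95) * sqrt(x' Sigma x) <= b."""
    return a_bar @ x + PHI_INV_95 * np.sqrt(x @ Sigma @ x)
```

Joint and distributionally robust variants, as discussed in the talk, tighten or generalize this single-constraint building block.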

September 23 | Willy Hereman (In-Person) Title: Symbolic computation of solitary wave solutions and solitons through homogenization of degree Abstract: Hirota's method is an effective method to find soliton solutions of completely integrable nonlinear PDEs, including the famous Korteweg-de Vries (KdV) equation. Hirota's approach requires a change of the dependent variable (a.k.a. Hirota's transformation) so that the resulting equation can be written in bilinear form using the Hirota operators. Solitons are then computed using a perturbation scheme that terminates after a finite number of steps. It will be shown that the Hirota transformations are crucial for obtaining PDEs that are homogeneous of degree (in the new dependent variables). The actual recasting into bilinear form which assumes a quadratic equation (or a tricky decoupling into such equations) is not required to compute solitary wave solutions or solitons. To illustrate this idea, soliton solutions of a class of fifth-order KdV equations (due to Lax, Sawada-Kotera, and Kaup-Kupershmidt) will be computed with a straightforward recursive algorithm involving linear and nonlinear operators. Although it circumvents bilinear forms, this method can still be viewed as a simplified version of Hirota's method. Homogenization of degree also allows one to find solitary wave solutions of nonlinear PDEs that are either not completely integrable or for which the bilinear form is unknown. A couple of such examples will also be shown. |
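For the KdV case mentioned, Hirota's transformation and the resulting bilinear form are, in the standard normalization:

```latex
u_t + 6 u u_x + u_{xxx} = 0,
\qquad
u = 2\,\partial_x^2 \log F,
\qquad
\left( D_x D_t + D_x^4 \right) F \cdot F = 0,
```

where D denotes the Hirota bilinear operators; the one-soliton solution corresponds to F = 1 + e^{kx - k^3 t + \delta}.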

September 30 | No Colloquium |

October 7 | Graduate Student Colloquium Cancelled |

October 14 | Philipp Witte (In-Person) Title: SciAI4Industry: Solving PDEs for industry-scale problems with deep learning Abstract: Solving partial differential equations with deep learning makes it possible to reduce simulation times by multiple orders of magnitude and unlock scientific methods that rely on large numbers of sequential simulations, such as optimization and uncertainty quantification. One of the big challenges of adopting scientific AI for industrial applications such as reservoir simulations is that neural networks for solving large-scale PDEs exceed the memory capabilities of current GPUs. In this talk, we discuss current approaches to parallelism in deep learning and why tensor parallelism is the most promising approach to scaling scientific AI to commercial-scale problems. While implementing tensor parallelism for neural networks is more intrusive than other forms of parallelism such as data or pipeline parallelism, we show how parallel communication primitives can be implemented through linear operators and integrated into deep learning frameworks with automatic differentiation. In our examples, we show that tensor parallelism for scientific AI enables us to train large-scale 3D simulators for solving the Navier-Stokes equation and for modeling subsurface CO2 flow in a real-world carbon capture & storage (CCS) scenario. |

October 21 | Levon Nurbekyan (In-Person) Title: Efficient natural gradient method for large-scale optimization problems Abstract: Large-scale optimization is at the forefront of modern data science, scientific computing, and applied mathematics, with areas of interest including high-dimensional PDEs, inverse problems, machine learning, etc. First-order methods are workhorses for large-scale optimization due to their modest computational cost and simplicity of implementation. Nevertheless, these methods are often agnostic to the structural properties of the problem under consideration and suffer from slow convergence, becoming trapped in bad local minima, etc. Natural gradient descent is an acceleration technique in optimization that takes advantage of the problem’s geometric structure and preconditions the objective function’s gradient by a suitable “natural” metric. Hence, parameter-update directions correspond to the steepest descent on a corresponding “natural” manifold instead of the Euclidean parameter space, rendering the descent direction invariant to parametrization. Despite its success in statistical inference and machine learning, the natural gradient descent method is far from a mainstream computational technique due to the computational complexity of calculating and inverting the preconditioning matrix. This work aims at a unified computational framework for streamlining the computation of a general natural gradient flow via the systematic application of efficient tools from numerical linear algebra. We obtain efficient and robust numerical methods for natural gradient flows without directly calculating, storing, or inverting the dense preconditioning matrix. We treat Euclidean, Wasserstein, Sobolev, and Fisher–Rao natural gradients in a single framework for a general loss function. |
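As a toy instance (ours, not the speaker's framework): fitting a univariate Gaussian by natural-gradient ascent, where the Fisher information diag(1/σ², 2/σ²) is known in closed form, so preconditioning by its inverse is just a coordinate-wise rescaling:

```python
import numpy as np

def natural_gradient_gaussian(x, steps=100, eta=0.5):
    """Fit N(mu, sigma^2) by natural-gradient ascent on the average
    log-likelihood. The Fisher information for (mu, sigma) is
    diag(1/sigma^2, 2/sigma^2); multiplying the gradient by its
    inverse adapts each step to the local information geometry."""
    mu, sigma = 0.0, 1.0
    for _ in range(steps):
        r = x - mu
        g_mu = r.mean() / sigma**2                     # d loglik / d mu
        g_sigma = (r**2).mean() / sigma**3 - 1.0 / sigma  # d loglik / d sigma
        # natural gradient: precondition by the inverse Fisher matrix
        mu += eta * sigma**2 * g_mu
        sigma += eta * (sigma**2 / 2.0) * g_sigma
    return mu, sigma
```

In general models the Fisher (or Wasserstein, Sobolev, etc.) preconditioner is dense, which is exactly the computational bottleneck the talk addresses.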

October 28 | Alejandro Caballero, Graduate Student Colloquium Title: Solving the 2-D Elastic Radiative Transfer Equations Abstract: The radiative transfer equations are a coupled system of integro-partial differential equations that describe the propagation of energy as a function of space, time, and angular directions. They find applications in geophysics, optics, atmospheric sciences, medical imaging, astrophysics, underwater acoustics, and other fields. In this talk I will discuss how we develop the elastic formulation as an extension of the acoustic formulation using integral equations. The elastic formulation is of interest because in many applications energy changes not only propagation direction but also mode of propagation (P or S waves). I will show some numerical results which highlight the applicability of the algorithm, as well as benchmarking of the results against expressions that have been derived theoretically before. |

November 4 | Brennan Sprinkle (In-Person) Title: Two open questions in fluid dynamics: the enhanced traction of microscopic flat tires and the reverse motion of a reverse sprinkler Abstract: In this talk I'll present two recent projects that have opened more questions than they have answered. First, I'll discuss the rolling of active Pickering emulsions - small droplets (~10-100 um) covered in smaller (~1 um) active particles that can be controlled/rolled by an external, rotating magnetic field. Curiously, these droplets roll much faster when they are soft versus rigid. I'll describe experiments done by collaborators in Chemical Engineering and numerical simulations that I developed to study this behavior, but I'll stop short of presenting a hydrodynamic model. Second, I'll talk about a classic question in fluid dynamics (first posed by Richard Feynman) concerning the reversibility of hydromechanical sprinklers (lawn sprinklers) that auto-rotate while ejecting fluid: what happens to these sprinklers when they suck fluid in? The question is surprisingly subtle, and I'll present experiments done by collaborators at NYU as well as a mathematical model that resolves some aspects of it. I'll also present some preliminary 2D numerical simulations of sprinklers and discuss why there may be more to the story. |

November 11 | Soraya Terrab, Graduate Student Colloquium Title: Learning Convolutional Filters Abstract: Filters are key post-processing tools that are used to reduce error, remove spurious oscillations, and improve the accuracy of numerical solutions. We are interested in developing geometrically-flexible, physically-relevant filters. In this talk, I will present our recent, ongoing work on learning nonlinear convolutional kernel filters. I will first introduce standard filters and our group's work on Smoothness-Increasing Accuracy-Conserving (SIAC) filters. I will next present the optimization problem we aim to solve with moment constraints, the data used to train the convolutional filter, the architecture of the convolutional neural network, and results on test data. To conclude, I will share how we apply our filter across scales as well as initial results on multi-resolution filters. |

November 18 | Indranil Mukhopadhyay Title: Pseudotime reconstruction and downstream spatio-temporal analysis based on single cell RNA-seq data Abstract: Dynamic regulation of gene expression is often governed by progression through transient cell states. Bulk RNA-seq analysis only detects average changes in expression levels and is unable to identify these dynamics. Single cell RNA-seq (scRNA-seq) presents an unprecedented opportunity that helps in placing cells on a hypothetical time trajectory that reflects the gradual transition of their transcriptomes. This continuum trajectory (pseudotime) may reveal the developmental pathway and provide information on dynamic transcriptomic changes. Existing approaches depend heavily on reducing the huge dimension to very low-dimensional subspaces and may lead to loss of information. We propose PseudoGA, a genetic algorithm based approach to order cells, assuming gene expressions vary according to a smooth curve along the pseudotime trajectory. Our method shows higher accuracy in simulated and real datasets. The generality of the assumption behind PseudoGA and its lack of dependence on any dimensionality reduction technique make it a robust choice for pseudotime estimation from scRNA-seq data. We use a resampling technique when applying PseudoGA to large scRNA-seq data, and PseudoGA is adaptable to parallel computing. Pseudotime reconstruction opens a broad area of research. Once cells are ordered according to pseudotime, we explore gene expression patterns that vary over both time (i.e., pseudotime) and space. |

November 25 | No Colloquium |

December 2 | Weiqi Chu Title: A mean-field opinion model on hypergraphs: from modeling to inference Abstract: The perspectives and opinions of people change and spread through social interactions on a daily basis. In the study of opinion dynamics on networks, one often models entities as nodes and their social relationships as edges, and examines how opinions evolve as dynamical processes on networks, including graphs, hypergraphs, multi-layer networks, etc. In this talk, I will introduce a model of opinion dynamics and derive its mean-field limit, where the opinion density satisfies a kinetic equation of Kac type. We prove properties of the solution of this equation, including nonnegativity, conservativity, and steady-state convergence. The parameters of such opinion models play a nontrivial role in shaping the dynamics. However, in reality, these parameters often cannot be measured directly. In the second part of the talk, I will approach the problem from an `inverse' perspective and present how to infer the interaction kernel from limited partial observations. I will provide sufficient measurement conditions for two scenarios under which the kernel can be reconstructed uniquely. I will also provide a numerical algorithm for the inference when the data set has only a limited number of data points. |
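The kinetic hypergraph models in the talk are far more general, but the simplest graph opinion model conveys the basic setup (a classical DeGroot-type toy, not the speaker's model): each node repeatedly replaces its opinion with a weighted average of its neighbors' opinions:

```python
import numpy as np

def degroot(W, x0, steps=500):
    """DeGroot opinion dynamics: iterate x <- W x, where W is a
    row-stochastic weight matrix, so each node averages its
    neighbors' current opinions."""
    x = x0.copy()
    for _ in range(steps):
        x = W @ x
    return x
```

For a connected, aperiodic network this iteration converges to consensus; kinetic and hypergraph models replace this averaging with richer (and possibly non-consensus) interaction kernels.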

December 9 | Alexander Pak (In-Person) Title: Coarse-Graining as a Hypothesis Testing Framework to Bridge the Microscale to Macroscale Abstract: Understanding the connection between the microscopic structure and macroscopic behavior of self-assembled soft and biological matter has led to numerous advances in energy, sustainability, and healthcare. Many such systems exhibit hierarchical structures undergoing morphological transitions, often under out-of-equilibrium conditions. However, it remains largely unknown how collective molecular reorganization may be modulated, which, in turn, may regulate macroscopic functionality. In this talk, I will explore this theme in the context of biomolecular assembly through the lens of molecular simulations. Throughout, I will describe our systematic coarse-graining strategies, which emulate the behavior of these systems under reduced representations (i.e., degrees of freedom), and explore how coarse-grained models can be leveraged to hypothesize and test connections between different length- and time-scales. I will share vignettes spanning from lipid morphogenesis to viral infection (e.g., for HIV-1 and SARS-CoV-2). The insights from these studies reveal the importance of a dynamical perspective on structure-function relationships and highlight the utility of multiscale simulations. |

December 16 | No Colloquium |

December 23 | No Colloquium |

December 30 | No Colloquium |

##### Spring 2022

January 21 | Information session on organizations within AMS helping to make a difference |

January 28 | Paul Martin, Applied Mathematics and Statistics, Colorado School of Mines Solving Laplace's equation in a tube: how hard can it be? The title problem arises in classical fluid dynamics, and in steady-state diffusion and wave problems. It is almost trivial when there is nothing in the tube apart from flowing fluid, but it becomes much more interesting when the tube contains an obstacle. A related problem is: if I send a wave down a tube, how much of it is reflected by the obstacle? I shall discuss properties of the solution, and methods for approximating the solution. |

February 4 | Federico Municchi, Research Associate in Computational Fluid Dynamics, Colorado School of Mines Combining phase field and geometric algorithms for the numerical simulation of multiphase flows Phase field methods are gaining momentum in science and engineering to model multicomponent and multiphase systems thanks to their thermodynamically consistent formulation and the general smoothness of the resulting fields. In fact, they provide a framework to include complex physical processes (such as phase change) and result in fewer spurious oscillations when dealing with surface tension, compared to other methods like the volume of fluid. The Cahn-Hilliard equation is the principal governing equation in the phase field method, as it results from a minimization of the free energy functional and thus includes all the relevant physical phenomena such as phase change and surface tension forces. However, its solution is not straightforward, as it is a fourth-order nonlinear partial differential equation. A number of explicit methods have been proposed in the literature, together with an implicit mixed formulation. Segregated implicit algorithms are seldom used due to stability issues. In this work, we present a novel segregated algorithm for the solution of the Cahn-Hilliard equation based on the incomplete block Schur preconditioning technique. Performance and accuracy of the algorithm are compared against a block-coupled mixed formulation and the standard volume-of-fluid method for a number of cases. We also illustrate several applications of the method to multiphase flows with phase change, where the Cahn-Hilliard equation is coupled with the Navier-Stokes equations and the energy conservation equation. In this circumstance, geometric algorithms are integrated with the phase field method to preserve the sharpness of the interface when required. |
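For reference, the Cahn-Hilliard equation in the mixed form (a pair of second-order equations) that segregated and block-coupled solvers typically discretize:

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \left( M \, \nabla \mu \right),
\qquad
\mu = f'(c) - \kappa \, \nabla^2 c,
```

with phase field c, mobility M, double-well free energy f (e.g. f(c) = \tfrac{1}{4}(c^2 - 1)^2), and gradient-energy coefficient \kappa; eliminating the chemical potential \mu recovers the fourth-order form mentioned in the abstract.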

February 11 | Ebru Bozdag, Department of Geophysics, Colorado School of Mines Journey to the center of the Earth and Mars: Seismology with big & small data and high-performance computing Seismic waves generated by passive sources such as earthquakes and ambient noise are our primary tools to probe Earth's interior. Improving the resolution of the seismic models of deep Earth's interior is crucial to understand the dynamics of the mantle (from ~30 km to 2900 km depth) and the core (from 2900 km to 6371 km depth), which directly control, for instance, plate tectonics and volcanic activity at the surface, and the generation of Earth's magnetic field, respectively. Meanwhile, the detailed shallower crustal structure is essential for seismic hazard assessment, better modeling earthquakes and nuclear explosions, and oil and mineral explorations. Advances in computational power and the availability of high-quality seismic data from dense seismic networks and emerging instruments offer excellent opportunities to refine our understanding of Earth's multi-scale structure and dynamics from the surface to the core. We are at a stage where we need to take the full complexity of wave propagation into account and avoid commonly used approximations to the wave equation and corrections in seismic tomography. Imaging Earth's interior globally with full-waveform inversion has been one of the most extreme projects in seismology in terms of computational requirements and available data that can potentially be assimilated in seismic inversions. While we need to tackle computational and "big data" challenges to better harness the available resources on Earth, we have "small data" challenges on other planetary bodies such as Mars, where we now have the first radially symmetric models constrained by seismic waves generated by marsquakes as part of the Mars InSight mission. I will talk about advances in the theory, computations, and data in exploring the multi-scale interiors of Earth and Mars.
I will also talk about our recent efforts to address computational and data challenges and discuss future directions in the context of global seismology. |

February 18 | Eileen Martin, Colorado School of Mines Moving less data in correlation- and convolution-based analyses When analyzing the relationships between multiple streams of time-series data or between images, we often calculate crosscorrelations, convolutions or deconvolutions to explore potential time-lagged or space-lagged similarities between them. However, denser/larger sensor networks are leading to larger datasets, and naively calculating correlations or convolutions often requires significant data movement (quadratic, if naively looking at relationships between all data snapshots). This is particularly problematic in ambient noise interferometry, a method by which Green’s functions of a PDE system (such as the heat equation or a wave equation) are estimated by crosscorrelations across all sensor pairs in a dense sensor network recording randomly distributed sources of energy (heat sources or vibration sources). In this talk I will show some new algorithms to calculate array-wide correlations that take advantage of lossy data compression to reduce data movement and computational costs by performing crosscorrelations directly on compressed data. These methods can apply to crosscorrelation of any time-series data. Often, seismologists use the results of crosscorrelating ambient seismic noise as an input to a few types of array beamforming methods to characterize Earth materials (similar to beamforming used in wireless communications and astronomy). In fact, we can calculate the final beamforming results directly from the ambient seismic noise with new linear algorithms that only implicitly calculate crosscorrelations. |
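The compression-domain algorithms in the talk go further, but the baseline they accelerate is ordinary FFT-based crosscorrelation, sketched here (a standard textbook computation, not the speaker's code):

```python
import numpy as np

def xcorr_fft(a, b):
    """Circular cross-correlation via the FFT:
    c[k] = sum_n b[n] * a[(n - k) mod N],
    so a peak at lag k means b looks like a delayed by k samples."""
    return np.real(np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))))
```

Computing all pairs of correlations across a dense array this way is what produces the quadratic data movement mentioned above.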

February 25 | Nancy Rodriguez, CU Boulder |

March 4 | Graduate Student Colloquium Title: Multiwavelets and Machine Learning-based Discontinuity and Edge Detection Presenter: Soraya Terrab Abstract: Spurious oscillations, such as Gibbs phenomenon, are artifacts that occur in the numerical computation of PDEs, affecting the accuracy of approximations and creating non-physical effects. These oscillations need to be identified and eliminated in order to maintain physical relevance and accuracy in the numerical approximations. Identifying the nonphysical oscillations requires having reliable discontinuity detection methods. In this work, we take advantage of the theory behind multi-resolution wavelet analysis as well as machine learning to identify and limit troubled cells, or discontinuous cells, in the numerical approximation. By extracting the fine details through multi-resolution analysis in the multiwavelet approach, we can analyze the global information in the domain and apply theoretical thresholding and outlier detection to identify cells that are troubled. Additionally, we have trained classifiers on smooth and discontinuous data, enabling a machine learning solution to discontinuity detection. The ideas from discontinuity detection are not limited to numerical solutions to PDEs; we can also apply these methods for the detection of edges in images. While typical edge detection methods include partial derivative operators, continuous wavelet or shearlet transforms, segmentation, or high-order and variable-order total variation, machine learning has only recently been explored as an edge detection tool for image processing [Wen et al. J. Sci. Comput. (2020)]. For this reason, we have been interested in using this imaging application to compare the multi-resolution wavelet and machine learning-based discontinuity detection methods in two-dimensional, static image data.
In its simplest zero-degree multiwavelet construction, our discontinuity detection method results in a Haar wavelet-based detection of edges in images. We will present these initial results along with machine learning-based edge detection and will compare the two discontinuity detection approaches in computational cost and accuracy. ——————————————— Title: Leveraging multiple continuous monitoring sensors for emission identification and localization on oil and gas facilities Presenter: Will Daniels Abstract: Methane, the primary component of natural gas, is a greenhouse gas with about 85 times the global warming potential of carbon dioxide over a 20-year timespan. This makes reducing methane emissions a vital tool for combating climate change. Oil and gas facilities are a promising avenue for reducing emissions, as leaks from these facilities can be mitigated if addressed quickly. To better alert oil and gas operators to emissions on their facilities, we developed a framework to identify when a methane emission is occurring and where it is coming from. This framework utilizes continuous monitoring sensors placed around the perimeter of the facility, but these sensors only observe ambient methane concentrations at their location and do not directly provide information about when and where an emission is occurring. Our framework turns these observations into a location estimate via the following steps. First, we identify spikes in the observations and perform local regression on non-spike data to estimate the methane background. Second, we simulate methane concentrations at the sensor locations from all potential leak sources separately. Third, we pattern match the simulated and observed concentrations, giving more weight to sources whose simulated concentrations more closely match observations. Finally, we synthesize this information across all sensors on the facility to provide a single location estimate with uncertainty.
Here we discuss our framework in more detail and demonstrate its effectiveness under real-world conditions. |
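The zero-degree (Haar) construction mentioned in the first talk is simple enough to sketch: Haar detail coefficients are scaled differences of adjacent cell values, so they are near zero where the signal is smooth and large across a jump, and thresholding them flags troubled cells. A minimal one-dimensional illustration (the function names, signal, and threshold are our own, not the speaker's):

```python
import numpy as np

def haar_details(f):
    """One level of the Haar wavelet transform: detail coefficients are
    scaled differences of adjacent samples within each coarse cell."""
    f = np.asarray(f, dtype=float)
    return (f[1::2] - f[0::2]) / np.sqrt(2.0)

def flag_troubled_cells(f, tau=0.5):
    """Return indices of coarse cells whose Haar detail exceeds tau."""
    d = haar_details(f)
    return np.where(np.abs(d) > tau)[0]

# Piecewise-smooth signal with one jump.
x = np.linspace(0, 1, 64, endpoint=False)
f = np.sin(2 * np.pi * x)
f[33:] += 2.0   # jump inside coarse cell 16 (samples 32 and 33)

flags = flag_troubled_cells(f)
```

Here only the cell straddling the jump exceeds the threshold; smooth variation of the sine produces details an order of magnitude smaller.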

March 11 | Grad Student Colloquium Title: The Radiative Transfer Equations: What are they, why are they important, and how do we solve them? Presenter: Alejandro Jaimes Abstract: In this talk I will discuss the radiative transfer equations (RTE) in acoustic media. The RTE describe the angular spatio-temporal distribution of energy density in scattering media, and have found applications in areas such as geophysics, acoustics, astrophysics, atmospheric sciences, and optics. The RTE take the form of an integro-partial differential equation, which has motivated the development of numerical techniques such as the discontinuous Galerkin method and particle swarm optimization. I will first introduce the one-dimensional formulation of the RTE and then generalize it to two and three dimensions. Through this generalization, I will discuss the complications that arise when dealing with two- or three-dimensional scattering. I will then briefly discuss four standard approaches to solving the RTE: spherical harmonics, discretization methods, iteration methods, and Monte Carlo techniques. I will show results from a numerical algorithm constructed by combining ideas from the iteration and discretization methods and, if time allows, show some results of solving the RTE with physics-informed neural networks. |

March 18 | Suzanne Sindi, UC Merced Title: A Chemical Master Equation Model for Prion Aggregate Infectivity Shows Prion Strains Differ by Nucleus Size Abstract: Prion proteins are responsible for a variety of neurodegenerative diseases in mammals such as Creutzfeldt-Jakob disease in humans and “mad-cow” disease in cattle. While these diseases are fatal to mammals, a host of harmless phenotypes have been associated with prion proteins in S. cerevisiae, making yeast an ideal model organism for prion diseases. Most mathematical approaches to modeling prion dynamics have focused either on the protein dynamics in isolation, apart from a changing cellular environment, or on prion dynamics in a population of cells by considering the “average” behavior. However, such models have been unable to recapitulate in vivo properties of yeast prion strains, including rates of appearance during seeding experiments. The common assumption in modeling prion phenotypes is that the only limiting event is the establishment of a stable prion aggregate of minimal size. We show this model is inconsistent with seeding experiments. We then develop a minimal model of prion phenotype appearance: the first successful amplification of an aggregate. Formally, we develop a chemical master equation of prion aggregate dynamics through conversion (polymerization) and fragmentation under the assumption of a minimal stable size. We frame amplification as a first-arrival time process that must occur on a time-scale consistent with the yeast cell cycle. This model, and subsequent experiments, then establish for the first time that two standard yeast prion strains have different minimally stable aggregate sizes. This suggests a novel approach (albeit entirely theoretical) for managing prion diseases: shifting prion strains towards larger nucleus sizes. |

April 1 | Andee Kaplan, Colorado State University Title: A Practical Approach to Proper Inference with Linked Data Abstract: Entity resolution (ER), comprising record linkage and de-duplication, is the process of merging noisy databases in the absence of unique identifiers to remove duplicate entities. One major challenge of analysis with linked data is identifying a representative record among determined matches to pass to an inferential or predictive task, referred to as the downstream task. Additionally, incorporating uncertainty from ER in the downstream task is critical to ensure proper inference. To bridge the gap between ER and the downstream task in an analysis pipeline, we propose five methods to choose a representative (or canonical) record from linked data, referred to as canonicalization. Our methods are scalable in the number of records, appropriate in general data scenarios, and provide natural error propagation via a Bayesian canonicalization stage. In this talk, the proposed methodology is evaluated on three simulated data sets and one application — determining the relationship between demographic information and party affiliation in voter registration data from the North Carolina State Board of Elections. We first perform Bayesian ER and evaluate our proposed methods for canonicalization before considering the downstream tasks of linear and logistic regression. Bayesian canonicalization methods are empirically shown to improve downstream inference in both settings through prediction and coverage. |

April 8 | Prof. Snigdhansu (Ansu) Chatterjee, University of Minnesota Title: Nonparametric Hypothesis Testing in High Dimensions Abstract: High-dimensional data, where the dimension of the feature space is much larger than the sample size, arise in a number of statistical applications. In this context, we present the generalized multivariate sign transformation, defined as a vector divided by its norm. For different choices of the norm function, the resulting transformed vector adapts to certain geometrical features of the data distribution. We obtain one-sample and two-sample testing procedures for mean vectors of high-dimensional data using these generalized sign vectors. These tests are based on U-statistics using kernel inner products, do not require prohibitive assumptions, and are amenable to a fast randomization-based implementation. Theoretical developments, simulated data and real data examples are discussed. |
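The sign transformation the abstract describes is a one-liner for the Euclidean norm: each observation is divided by its length, mapping it onto the unit sphere. A toy sketch of the transform and a crude cross-inner-product statistic built from it (the statistic, data, and constants are our simplification for illustration, not the speaker's U-statistics):

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_sign(X):
    """Generalized multivariate sign (Euclidean norm): each row of X
    divided by its norm, so every transformed row lies on the unit sphere."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def mean_sign_stat(X, Y):
    """Average inner product <s(X_i), s(Y_j)> over all pairs; a crude
    stand-in for the kernel-inner-product statistics in the talk.
    Values far from zero suggest a shared nonzero mean direction."""
    SX, SY = spatial_sign(X), spatial_sign(Y)
    return SX.sum(axis=0) @ SY.sum(axis=0) / (len(X) * len(Y))

# High-dimensional toy data: dimension p far exceeds sample size n.
n, p = 20, 500
X = rng.normal(size=(n, p))            # mean zero
Y = rng.normal(size=(n, p)) + 0.3      # mean shifted in every coordinate
```

Even with p = 500 and n = 20, the statistic for the shifted sample clearly exceeds that of the null sample, illustrating why sign vectors remain informative when p >> n.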

April 15 | Tammy Kolda, Mathematical Consultant Title: Tensor Moments of Gaussian Mixture Models Abstract: Gaussian mixture models (GMMs) are fundamental tools in statistical and data sciences that are useful for clustering, anomaly detection, density estimation, etc. We are interested in high-dimensional problems (e.g., many features) and a potentially massive number of data points. One way to compute the parameters of a GMM is via the method of moments, which compares the sample and model moments. The first moment is the mean; the second (centered) moment is the covariance. We are interested in third, fourth, and even higher-order moments. The d-th moment of an n-dimensional random variable is a symmetric d-way tensor (multidimensional array) of size n x n x ... x n (d times), so working with moments is assumed to be prohibitively expensive in both storage and time for d > 2 and larger values of n. In this talk, we show that the estimation of the model parameters can be accomplished without explicit formation of the model or sample moments. In fact, the cost per iteration for the method of moments is of the same order as that of expectation maximization (EM), making the method of moments competitive. Along the way, we show how to concisely describe the moments of Gaussians and GMMs using tools from algebraic geometry, enumerative combinatorics, and multilinear algebra. Numerical results validate and illustrate the efficiency of our approaches. |
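The storage argument can be made concrete for d = 3: the contraction of the third sample moment with a direction u never requires forming the n x n x n tensor, since ⟨M3, u⊗u⊗u⟩ = mean((x·u)³). A small sketch verifying the identity (variable names and sizes are ours; n is kept tiny so the explicit tensor is affordable for comparison):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 200, 4                      # samples, dimension (small on purpose)
X = rng.normal(size=(N, n))
u = rng.normal(size=n)

# Explicit third sample moment: a symmetric n x n x n tensor, O(n^3) storage.
M3 = np.einsum('ia,ib,ic->abc', X, X, X) / N

# The same contraction computed two ways:
explicit = np.einsum('abc,a,b,c->', M3, u, u, u)   # touches the full tensor
implicit = np.mean((X @ u) ** 3)                   # O(N n) work, no tensor
```

The implicit form is exact, not an approximation, which is what makes moment-based fitting feasible for large n.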

April 22 | |

April 29 | Ishani Roy, Serein Title: Using Data to Circumvent Biases Abstract: Did you know that in 2013 the US Food and Drug Administration (FDA) recommended cutting the dose of Ambien in half for women, but not men, after driving-simulation studies indicated that women metabolise the drug at a slower rate? The FDA report came after 20 years of incorrect and dangerous prescribing of Ambien to women. Action was taken only after more than 700 reports of motor vehicle crashes associated with Ambien use had put the lives of many women, their children, and other drivers on the road at risk. Biases affect not only recruitment, team morale, and productivity; they also affect how we design products and grow a business. In this talk I will speak about how unconscious biases may affect inclusion, and how data and research can be used to measure and monitor exclusion in order to circumvent biases. |

##### Fall 2021

August 27 | Monique Chyba Epidemiological Modeling and COVID-19 Heterogeneity in an Island-Chain Environment SARS-CoV-2 (COVID-19) has impacted not only health, but also the economy and how we live daily life. On January 30, 2020 the World Health Organization (WHO) declared a global health emergency. COVID-19 was officially named on February 11, as it continued to spread across Asia and Europe. Mathematicians have found themselves in the front seat of this race against COVID-19. However, there are still many unanswered questions and challenges regarding the outcomes of several models as well as their limitations. It is unclear at this time if there is a "better" model, and while most of the challenges in epidemiological forecasting come from incomplete data and the impossibility of modeling people's behavior, there is still the question of which model to use when, and for what purpose. Throughout the current COVID-19 pandemic, most results and forecasts have come from a single model rather than a combination. We consider what can be learned from running both compartment and agent-based models side-by-side, taking and applying the best of each model using the measured data. We will also discuss how the Hawaiian Islands provide a unique opportunity to study heterogeneity and demographics in a controlled environment, due to the geographically closed borders and mostly uniform pandemic-induced governmental controls and restrictions. |
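For readers unfamiliar with the compartment models contrasted with agent-based models above, the canonical example is the SIR system, in which susceptible (S), infected (I), and recovered (R) counts evolve by mass-action transmission and recovery. A minimal forward-Euler sketch (parameter values are illustrative only, not fitted to any data):

```python
import numpy as np

def sir_step(S, I, R, beta, gamma, dt):
    """One forward-Euler step of the SIR compartment model.
    beta: transmission rate, gamma: recovery rate."""
    N = S + I + R
    dS = -beta * S * I / N            # new infections leave S
    dI = beta * S * I / N - gamma * I # ...enter I, recoveries leave I
    dR = gamma * I                    # ...and enter R
    return S + dS * dt, I + dI * dt, R + dR * dt

# Population of 1000 with 10 initial infections; R0 = beta/gamma = 3.
S, I, R = 990.0, 10.0, 0.0
for _ in range(1000):                 # integrate to t = 100
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, dt=0.1)
```

The total population is conserved by construction, a sanity check that distinguishes a coding error from a modeling choice.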

September 3 | Math and Social Justice: Sara Clifton, St. Olaf College Modeling the leaky pipeline in hierarchical professions Women constitute approximately 50% of the population and have been an active part of the U.S. workforce for over half a century. Yet women continue to be poorly represented in leadership positions within business, government, medical, and academic hierarchies. As of 2018, less than 5% of Fortune 500 chief executive officers are female, 20% of the U.S. Congress is female, and 34% of practicing physicians are female. The decreasing representation of women at increasing levels of power within hierarchical professions has been called the “leaky pipeline” effect, but the main cause of this phenomenon remains contentious. Using a mathematical model of gender dynamics within professional hierarchies and a new database of gender fractionation over time, we quantify the impact of the two major decision-makers in the ascension of people through hierarchies: those applying for promotion and those who grant promotion. We quantify the degree of homophily (self-seeking) and gender bias in a wide range of professional hierarchies and demonstrate that intervention may be required to reach gender parity in some fields. We also preview an in-progress effort to extend the model to quantify racial bias and homophily in professional hierarchies. |

September 10 | Samy Wu Fung Efficient Training of Infinite-Depth Neural Networks via Jacobian-Free Backpropagation A promising trend in deep learning replaces fixed-depth models with approximations of the limit as network depth approaches infinity. This approach uses a portion of network weights to prescribe behavior by defining a limit condition. This makes network depth implicit, varying based on the provided data and an error tolerance. Moreover, existing implicit models can be implemented and trained with fixed memory costs in exchange for additional computational costs. In particular, backpropagation through implicit-depth models requires solving a Jacobian-based equation arising from the implicit function theorem. We propose a new Jacobian-free backpropagation (JFB) scheme that circumvents the need to solve Jacobian-based equations while maintaining fixed memory costs. This makes implicit-depth models much cheaper to train and easy to implement. Numerical experiments on classification, CT reconstruction, and traffic modeling are provided. |
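The core idea is compact: the forward pass iterates a layer to its fixed point z* = f(z*, x), and the exact backward pass would solve a linear system involving (I - ∂f/∂z)ᵀ; JFB replaces that inverse with the identity, so the backward pass costs one vector-Jacobian product of f. A toy numpy sketch of this approximation (the layer, loss, and contraction scaling are our illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Small weights make f a contraction, so fixed-point iteration converges.
W = 0.3 * rng.normal(size=(d, d)) / np.sqrt(d)
U = rng.normal(size=(d, d))
x = rng.normal(size=d)

def f(z):
    """Implicit layer: its fixed point plays the role of infinite depth."""
    return np.tanh(W @ z + U @ x)

# Forward pass: iterate to the fixed point z* = f(z*).
z = np.zeros(d)
for _ in range(100):
    z = f(z)

# Loss l(z*) = 0.5 ||z*||^2, so dl/dz* = z*.
v = z
# JFB backward pass: treat (I - df/dz)^{-T} as the identity and backprop
# v through a single application of f; no Jacobian linear solve needed.
pre = W @ z + U @ x
grad_W = np.outer((1 - np.tanh(pre) ** 2) * v, z)   # JFB estimate of dl/dW
```

Memory stays fixed because only the fixed point, not the iteration history, is needed for the backward pass.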

September 17 | AMS Graduate Student Colloquium Dave Montgomery Title: Parallelization of a Navier-Stokes solver for applications in extravascular injury modeling Blood flow is governed by the incompressible Navier-Stokes equations, a set of non-linear equations that are regarded as computationally expensive to solve. Since the blood coagulation process happens over the time scale of tens of minutes, parallelization techniques are necessary to minimize overall computation time. We will present a method for decomposing the H-shaped extravascular injury domain so that the Navier-Stokes equations can be solved in parallel on multiple cores using distributed memory. |

September 24 | Math and Social Justice: Emma Pierson (Microsoft Research) on "Data science for social equality" |

October 1 | AMS Graduate Student Colloquium Laura Albrecht Title: A spatio-temporal model to estimate West Nile Virus cases in Ontario Abstract: West Nile virus is the most common mosquito-borne disease in North America and the leading cause of viral encephalitis. West Nile virus is primarily transmitted between birds and mosquitoes, while humans are incidental, dead-end hosts. We develop a Poisson spatio-temporal model to investigate how human West Nile virus case counts vary with respect to mosquito abundance and infection rates, bird abundance, and other environmental covariates. We use a Bayesian paradigm to fit our model to data from 2010-2019 in Ontario, Canada. |

October 8 | Research Open House Doug Nychka: Deep learning a statistical model Eileen Martin: Green’s function estimation with non-ideal noise Samy Wu Fung: Solving High-Dimensional Optimal Control Problems with Deep Learning Paul Martin: Generation of internal waves in the ocean |

October 15 | AMS Graduate Student Colloquium |

October 22 | Beth Malmskog, Colorado College Colorado in Context: Using Mathematics to Detect and Prevent Gerrymandering in Colorado and Beyond Gerrymandering is the process of manipulating the boundaries of electoral districts for political gain. This is considered by many to be deeply unfair, but it has been common practice in states across the country for more than 200 years. This talk will introduce a mathematical/statistical technique called ensemble analysis in the context of electoral boundaries, and describe how this perspective has become central to the national conversation about fair redistricting. I will share the big picture ideas, recent progress, and the work that our group is doing here in Colorado. |

October 29 | Math and Social Justice: Veronica Ciocanel (Duke University) on "Analyzing Racial Equity and Bias of Federal Judges through Inferred Sentencing Records" |

November 5 | Hannah Director, Mines Title: Identification and Uncertainty of Sea Ice Leads Abstract: Sea ice has substantial effects on the climate of Polar regions and the Earth overall. For example, open ocean tends to absorb heat from solar radiation while ice covered surfaces tend to reflect radiation. Large, narrow cracks in the ice’s surface, called leads, affect how sea ice grows and melts. Information about when and where leads form is needed to understand sea ice behavior and feedbacks between the ocean and atmosphere. To develop this understanding, scientists need an efficient way to identify sea ice leads in observational data and climate model output. For sea ice, remote sensing data and climate model output provide gridded fields showing the proportion of area in each grid box that is ice-covered. This granular identification, however, does not directly identify leads as distinct and coherent features. We introduce a likelihood-based method to efficiently identify sea ice leads from data of this form. Our method also provides uncertainty estimates of the presence and location of leads. We apply this identification method to high-resolution model output to assess the frequency of lead formation, structure of typical leads, and environmental conditions when leads form. |

November 12 | Derek Onken, Eli Lilly Title: A Neural Network Approach for Real-Time High-Dimensional Optimal Control Abstract: Optimal control (OC) problems aim to find an optimal policy that controls given dynamics over a period of time. For systems with a high-dimensional state (for example, systems with many centrally controlled agents), OC problems can be difficult to solve globally. We propose a neural network approach for solving such problems. When trained offline in a semi-global manner, the model is robust to shocks or disturbances that may occur in real-time deployment (e.g., wind interference). Our unsupervised approach is grid-free and scales efficiently to dimensions where grids become impractical or infeasible. We demonstrate the effectiveness of our approach on several multi-agent collision-avoidance problems in up to 150 dimensions. |

November 19 | Math and Social Justice: Jonathan Mattingly (Duke University) on "Fairness in Redistricting" The US political system is built on representatives chosen by geographically localized regions. This presents the government with the problem of designing these districts. Every ten years, the US census counts the population and new political districts must be drawn. The practice of harnessing this administrative process for partisan political gain is often referred to as gerrymandering. How does one identify and understand gerrymandering? Can we really recognize gerrymandering when we see it? If one party wins over 50% of the vote, is it fair that it wins less than 50% of the seats? What do we mean by fair? How can math help illuminate these questions? How does the geopolitical geometry of the state (where which groups live and the shape of the state) inform these answers? For me, these questions began with an undergraduate research program project in 2013 and have led me to testify in two cases: Common Cause v. Rucho (which went to the US Supreme Court) and Common Cause v. Lewis. This work has partially resulted in the redrawing of the NC State Legislative district maps and NC congressional maps. The resulting new maps will be used in our upcoming 2020 elections. In the remedy phase of North Carolina v. Covington, Greg Herschlag from the Duke group addressed the question of whether attempts to satisfy the VRA alone explain the observed level of political packing and cracking. This is a story of interaction between lawyers, mathematicians, and policy advocates. The legal discussion has been increasingly informed by the mathematical framework, and the mathematics has been pushed to better incorporate the policy. The back and forth has been important in finding ways to effectively inform policy makers and courts of the insights the analyses provide.
The problem of understanding gerrymandering has also prompted the development of a number of new computational algorithms which come with new mathematical questions. The next round of redistricting analysis will necessarily need to be more refined and nuanced. There is also the opportunity to be less reactive. There are opportunities to try to influence the process by which new maps are drawn before turning to the courts. There is also the possibility to direct the conversation by showing the effect of more fully considering factors such as communities of interest, incumbency, or proposed procedural elements of laws. This presentation reflects joint work with Gregory Herschlag and a number of other researchers, including many undergraduates, graduate students, and a few high school students. |

December 3 | Zachary Kilpatrick How heterogeneity shapes the efficiency of collective decisions and foraging Many organisms regularly make decisions regarding foraging, home-site selection, mating, and danger avoidance in groups ranging from hundreds up to millions of individuals. These decisions involve evidence-accumulation processes by individuals and information exchange within the group. Moreover, these decisions take place in complex, dynamic, and spatially structured environments, which shape the flow of information between group mates. We will present a statistical inference model for framing evidence accumulation and belief sharing in groups and some examples of how interactions shape decision efficiency in groups. Our canonical model is of Bayesian agents deciding between two equally likely options by accumulating evidence to a threshold. When neighbors only share their decisions with each other, groups comprised of individuals with a distribution of decision thresholds make more efficient decisions than homogeneous ones. We then turn our attention to specific examples of collective decision making in foraging animal groups like honey bees. For honey bees, spatial heterogeneity resulting from confinement to a hive bottlenecks communication, but creates an effective colony-level signal-detection mechanism whereby recruitment to low quality objectives is blocked. Heterogeneity in communication, on the other hand, hobbles the foraging efficiency of small groups. |

##### Spring 2021

January 29 | Book Club "Factfulness: 10 Reasons We’re Wrong about the World – and Why Things are Getting Better" (2018), Hans Rosling Chapters 1-3 |

February 5 | Michelle McCarthy, Boston University Title: Mathematical modeling of neuronal rhythms: from physiology to function Abstract: Brain rhythms are a ubiquitous feature of brain dynamics, tightly correlated with neuronal activity underlying such basic functions as cognition, emotions, movement, and sleep. Moreover, abnormal rhythmic activity is associated with brain dysfunction and altered brain states. Identifying the neuronal units and network structures that create, sustain and modulate brain rhythms is fundamental to identifying both their function and dysfunction in mediating behavioral output. Experimental studies of brain rhythms are limited by the inability to isolate large ensembles of neurons and their interconnections during active brain states. However, mathematical models have been used extensively to study network dynamics of the brain and to give insight into the determinants and functions of brain oscillations during various cognitive and behavioral states. Here I will give a brief introduction to the field of study of rhythmic brain activity and the mathematical formulations underlying biophysical neuronal network models. Existing mathematical models of brain development, sleep and neurodegenerative disease will be used to demonstrate how neuronal models of rhythmic dynamics can be used to explore the link between brain physiology and functional network dynamics. |

February 12 | Daniel Nordman, Iowa State Title: Within-sample prediction of a number of future events Abstract: The talk overviews a prediction problem encountered in reliability engineering, where a need arises to predict the number of future events (e.g., failures) among a cohort of units associated with a time-to-event process. Examples include the prediction of warranty returns or the prediction of the number of future product failures that could cause serious harm. Important decisions, such as a product recall, are often based on such predictions. Data, typically right-censored, are used to estimate the parameters of a time-to-event distribution. This distribution can then be used to predict the number of events over future periods of time. Because all units belong to the same data set, either by providing information (i.e., observed event times) or by becoming the subject of prediction (i.e., censored event times), such predictions are called within-sample predictions and differ from other prediction problems considered in most of the literature. A standard plug-in (also known as estimative) prediction approach is shown to be invalid for this problem (i.e., even for large amounts of data, the method fails to have correct coverage probability). However, a commonly used prediction calibration method is shown to be asymptotically correct for within-sample predictions, and two alternative predictive-distribution-based methods are presented that perform better than the calibration method. |

February 19 Special Time of 1:00 PM | Olivia Prosper, University of Tennessee Title: Modeling malaria parasite dynamics within the mosquito Abstract: The malaria parasite Plasmodium falciparum requires a vertebrate host and a female Anopheles mosquito to complete a full life cycle, with sexual reproduction occurring in the mosquito. While parasite dynamics within the vertebrate host, such as humans, have been extensively studied, less is understood about dynamics within the mosquito, a critical component of malaria transmission dynamics. This sexual stage of the parasite life cycle allows for the production of genetically novel parasites. In the meantime, a mosquito’s biology creates bottlenecks in the infecting parasites’ development. We developed a two-stage stochastic model of the generation of parasite diversity within a mosquito and were able to demonstrate the importance of heterogeneity in parasite dynamics across a population of mosquitoes on estimates of parasite diversity. A key epidemiological parameter related to the timing of onward transmission from mosquito to vertebrate host is the extrinsic incubation period (EIP). Using simple models of within-mosquito parasite dynamics fitted to empirical data, we investigated factors influencing the EIP. |

February 26 | Book Club "Factfulness: 10 Reasons We’re Wrong about the World – and Why Things are Getting Better" (2018), Hans Rosling Chapters 4-6 |

March 12 | Lise-Marie Imbert-Gerard, University of Arizona Title: Wave propagation in inhomogeneous media: An introduction to Generalized Plane Waves Abstract: Trefftz methods rely, in broad terms, on the idea of approximating solutions to Partial Differential Equations (PDEs) using basis functions which are exact solutions of the PDE, making explicit use of information about the ambient medium. But wave propagation problems in inhomogeneous media are modeled by PDEs with variable coefficients, and in general no exact solutions are available. Generalized Plane Waves (GPWs) are functions that have been introduced, in the case of the Helmholtz equation with variable coefficients, to address this problem: they are not exact solutions to the PDE but are instead constructed locally as high order approximate solutions. We will discuss the origin, the construction, and the properties of GPWs. The construction process introduces a consistency error, requiring a specific analysis. |

March 19 | Ethan Anderes, UC Davis Title: Gravitational wave and lensing inference from the CMB polarization Abstract: In the last decade cosmologists have spent a considerable amount of effort mapping the radially-projected large-scale mass distribution in the universe by measuring the distortion it imprints on the CMB. Indeed, all the major surveys of the CMB produce estimated maps of the projected gravitational potential generated by mass density fluctuations over the sky. These maps contain a wealth of cosmological information and, as such, are an important data product of CMB experiments. However, the most profound impact from CMB lensing studies may not come from measuring the lensing effect, per se, but rather from our ability to remove it, a process called delensing. This is because lensing and the emission of millimeter-wavelength radiation from the interstellar medium in our own galaxy are the two dominant sources of foreground contamination for primordial gravitational wave signals in the CMB polarization. As such, delensing, i.e. the process of removing the lensing contaminants, and our ability to either model or remove galactic foreground emission set the noise floor on upcoming gravitational wave science. In this talk we will present a complete Bayesian solution for simultaneous inference of lensing, delensing and gravitational wave signals in the CMB polarization as characterized by the tensor-to-scalar ratio r parameter. Our solution relies crucially on a physically motivated re-parameterization of the CMB polarization which is designed specifically, along with the design of the Gibbs Markov chain itself, to result in an efficient Gibbs sampler---in terms of mixing time and the computational cost of each step---of the Bayesian posterior. This re-parameterization also takes advantage of a newly developed lensing algorithm, which we term LenseFlow, that lenses a map by solving a system of ordinary differential equations.
This description has conceptual advantages, such as allowing us to give a simple non-perturbative proof that the lensing determinant is equal to unity in the weak-lensing regime. The algorithm itself maintains this property even on pixelized maps, which is crucial for our purposes and unique to LenseFlow among the lensing algorithms we have tested. It also has other useful properties: it can be trivially inverted (i.e. delensed) for the same computational cost as the forward operation, and it can be used for fast and exact likelihood gradients with respect to the lensing potential. Incidentally, the ODEs for calculating these derivatives are exactly analogous to the backpropagation techniques used in deep neural networks, but are derived in this case entirely from ODE theory. |

March 26 | Book Club "Factfulness: 10 Reasons We’re Wrong about the World – and Why Things are Getting Better" (2018), Hans Rosling Chapters 7-9 |

April 9 | Andrew Zammit-Mangion, University of Wollongong Title: Statistical Machine Learning for Spatio-Temporal Forecasting Abstract: Conventional spatio-temporal statistical models are well-suited for modelling and forecasting using data collected over short time horizons. However, they are generally time-consuming to fit, and often do not realistically encapsulate temporally-varying dynamics. Here, we tackle these two issues by using a deep convolutional neural network (CNN) in a hierarchical statistical framework, where the CNN is designed to extract process dynamics from the process' most recent behaviour. Once the CNN is fitted, probabilistic forecasting can be done extremely quickly online using an ensemble Kalman filter with no requirement for repeated parameter estimation. We conduct an experiment where we train the model using 13 years of daily sea-surface temperature data in the North Atlantic Ocean. Forecasts are seen to be accurate and calibrated. We show the versatility of the approach by successfully producing 10-minute nowcasts of weather radar reflectivities in Sydney using the same model that was trained on daily sea-surface temperature data in the North Atlantic Ocean. This is joint work with Christopher Wikle, University of Missouri. |
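The ensemble Kalman filter analysis step behind the fast online forecasting is compact enough to sketch. Below is a generic stochastic (perturbed-observations) EnKF update, not the speakers' implementation; the shapes, names, and toy numbers are our own:

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(E, y, H, R):
    """Stochastic EnKF analysis step.
    E: (n, N) forecast ensemble (n state dims, N members)
    y: (m,) observation; H: (m, n) observation operator
    R: (m, m) observation-error covariance."""
    n, N = E.shape
    A = E - E.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Perturbed observations keep the analysis spread statistically correct.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return E + K @ (Y - H @ E)

# Toy example: biased, spread-out forecast pulled toward an observation at 0.
n, N, m = 3, 50, 3
E = 5.0 + 2.0 * rng.normal(size=(n, N))
Ea = enkf_update(E, y=np.zeros(m), H=np.eye(m), R=0.1 * np.eye(m))
```

Because the CNN supplies the dynamics, only this cheap linear-algebra update runs online, which is what makes repeated refitting unnecessary.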

April 16 | Diogo Bolster University of Notre Dame Title: Incomplete mixing in reactive systems - from Lab to Field scale Abstract: In order for two items to react they must physically come into contact with one another. In the lab we often measure reaction rates by forcing two species to continuously mix together. However, in real systems such forced mixing mechanisms may often not exist and so a natural question arises: How do we take measurements from our well mixed laboratory experiments and use them to make meaningful predictions at scales of interest? In this talk we propose a novel modeling framework that aims precisely to do this. To show its applicability we will discuss it as related to a few examples: (i) mixing driven reactions in a quasi-well-mixed systems (ii) mixing driven reactions in a porous column experiment and (iii) mixing in a highly heterogeneous aquifer with a broad range of velocity and spatial scales. While this work was originally motivated by chemical reactions in porous media, the modeling framework is much more general than this and should be applicable to a broad range of problems. Also, the term reaction, as defined within our framework, can loosely be defined as an event where two items come together to produce something else; it is not in any way limited to purely chemical reactions. |

April 23 | Kiona Ogle Northern Arizona University Title: A Bayesian approach to quantifying time-scales of influence and ecological memory Abstract: Many time-varying ecological processes are influenced by both concurrent and antecedent (past) conditions; in some cases, antecedent conditions may outweigh concurrent influences. The time-scales over which environmental conditions influence processes of interest (e.g., photosynthesis, carbon and water fluxes, tree growth, ecosystem productivity) are not well understood, motivating our development and application of the stochastic antecedent modeling (SAM) approach. The SAM approach is applied to ecological time-series data within a Bayesian statistical framework to quantify ecological memory. We use “memory” to broadly describe time-scales of influence, including the importance of antecedent conditions experienced at different times into the past, potentially revealing lagged responses. The coupled Bayesian-SAM approach, however, can lead to computational inefficiencies, and we describe reparameterization “solutions” to address such issues. To illustrate, we apply the approach to responses operating at distinctly different time-scales: annual tree growth (e.g., tree-ring widths) and sub-daily plant physiological responses (e.g., indices of stomatal behavior). Our Bayesian-SAM applications to tree growth in arid and semi-arid regions have identified particular seasons or months during which climatic conditions (e.g., precipitation or temperature) are most influential to subsequent tree growth; in many cases, conditions experienced 2-4 years ago continue to influence growth. The analysis has also revealed novel, multi-day lagged responses of plant physiological behavior to soil and atmospheric moisture conditions. In general, the Bayesian-SAM approach has demonstrated that ecological memory is an important process governing plant and ecosystem responses to environmental perturbations. |
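The lag structure at the heart of the SAM approach can be illustrated with a small sketch: an "antecedent" covariate is a weighted sum of current and past environmental conditions. In a full Bayesian fit the weights would be given a prior (commonly Dirichlet) and estimated from data; the fixed-weight function below is purely illustrative, not the authors' implementation.

```python
import numpy as np

def antecedent_covariate(series, weights):
    """Antecedent covariate: a weighted sum of current and past values,
    with weights[j] applied at lag j (weights are assumed to sum to 1)."""
    lags = len(weights)
    out = np.full(len(series), np.nan)              # undefined until enough history exists
    for t in range(lags - 1, len(series)):
        past = series[t - lags + 1 : t + 1][::-1]   # values at lag 0, 1, ..., lags-1
        out[t] = np.dot(weights, past)
    return out
```

Estimated weights then directly answer the "ecological memory" question: a large weight at lag 2 years, say, means conditions two years ago still shape current growth.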

April 30 | Book Club "Factfulness: 10 Reasons We’re Wrong about the World – and Why Things are Getting Better" (2018), Hans Rosling Chapters 9-10 + Factfulness Rules of Thumb |

##### Fall 2020

September 18 | Zachary J. Grant Oak Ridge National Lab Analysis and Development of Strong Stability Preserving Time Stepping Schemes High order spatial discretizations with monotonicity properties are often desirable for the solution of hyperbolic partial differential equations. These methods can advantageously be coupled with high order strong stability preserving (SSP) time-stepping schemes to accurately evolve solutions forward in time while preserving convex functionals that are satisfied by design of the spatial discretization. The search for high order SSP time-stepping methods with large allowable strong stability coefficients has been an active area of research over the last three decades. In this talk I will review the foundations of SSP time-stepping schemes, including how to analyze a given scheme and how to optimally build a method that allows the largest effective stable time step. We will then discuss some extensions of SSP methods in recent years and some ongoing research problems in the field, and show the need for the SSP property through simple yet demonstrative examples. |
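For a concrete instance of the schemes this talk surveys, the classic third-order SSP Runge-Kutta method of Shu and Osher is written as convex combinations of forward Euler steps, which is exactly why it inherits forward Euler's monotonicity properties. A minimal sketch (the talk covers far more general methods):

```python
import numpy as np

def ssprk3_step(f, u, dt):
    """One step of the classic third-order SSP Runge-Kutta scheme (Shu-Osher form).
    Each stage is a convex combination of forward-Euler steps, so any convex
    functional preserved by forward Euler under a time-step restriction is
    preserved here under the same restriction (SSP coefficient c = 1)."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))
```

The "effective stable time step" mentioned in the abstract weighs the SSP coefficient against the number of stages, which is what the optimization searches trade off.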

September 25 | Book Club “Weapons of Math Destruction” Chapters 1-4 |

October 2 | |

October 16 | Minah Oh James Madison University Fourier Finite Element Methods and Multigrid for Axisymmetric H(div) Problems An axisymmetric problem is a problem defined on a three-dimensional (3D) axisymmetric domain, and it appears in numerous applications. An axisymmetric problem can be reduced to a sequence of two-dimensional (2D) problems by using cylindrical coordinates and a Fourier series decomposition. Fourier Finite Element Methods (Fourier-FEMs) can be used to approximate each Fourier mode of the solution by using a suitable FEM. Such dimension reduction is an attractive feature considering computation time, but the resulting 2D problems are posed in weighted function spaces where the weight function is the radial component r. Furthermore, the grad, curl, and div operators appearing in these weighted problems are quite different from the standard ones, so the analysis of such weighted problems requires special attention. Multigrid is an effective iterative method that can be used to solve large matrix systems arising from FEMs. In this talk, I will present a multigrid algorithm that can be applied to weighted H(div) problems that arise after performing a dimension reduction to an axisymmetric H(div) problem. Theoretical results that show the uniform convergence of the multigrid V-cycle with respect to the mesh size will be presented as well as numerical results. |
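To fix ideas on the multigrid machinery behind this talk, here is a sketch of the simplest possible setting: one two-grid V-cycle for the 1D Poisson problem with weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation, and an exact coarse solve. This is only a toy illustration of the V-cycle structure, not the weighted H(div) algorithm of the talk.

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 1D finite-difference Laplacian on n interior points."""
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def two_grid_cycle(u, f, h, nu=3, omega=2/3):
    """One two-grid V-cycle for -u'' = f with homogeneous Dirichlet BCs.
    Requires len(f) odd, so the coarse grid has (len(f) - 1) // 2 points."""
    n = len(f)
    A = poisson_matrix(n, h)
    D = 2.0 / h**2                                              # Jacobi diagonal
    for _ in range(nu):                                         # pre-smoothing
        u = u + omega * (f - A @ u) / D
    r = f - A @ u
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]    # full-weighting restriction
    ec = np.linalg.solve(poisson_matrix(len(rc), 2 * h), rc)    # exact coarse solve
    e = np.zeros(n)                                             # linear-interpolation prolongation
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    u = u + e                                                   # coarse-grid correction
    for _ in range(nu):                                         # post-smoothing
        u = u + omega * (f - A @ u) / D
    return u
```

The uniform-convergence results in the talk say, roughly, that the residual reduction factor of such a cycle stays bounded away from 1 as h shrinks; establishing that in the r-weighted spaces is the hard part.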

October 30 | Book Club: “Weapons of Math Destruction” Chapters 5 - 7 |

November 6 | Mokshay Madiman University of Delaware Concentration of information for log-concave distributions In 2011, S. Bobkov and the speaker showed that for a random vector X in R^n drawn from a log-concave density f=e^{-V}, the information content per coordinate, namely V(X)/n, is highly concentrated about its mean. The result demonstrated that high-dimensional log-concave measures are in a sense close to uniform distributions on the annulus between two nested convex sets (generalizing the well-known fact that the standard Gaussian measure is concentrated on a thin spherical annulus). We present recent work that obtains an optimal concentration bound in this setting, using a much simplified proof. Applications that motivated the development of these results include high-dimensional convex geometry, random matrix theory, and shape-constrained density estimation. The talk is based on joint works with Sergey Bobkov (University of Minnesota), Matthieu Fradelizi (Université Paris Est), and Liyao Wang. |
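The phenomenon in this abstract is easy to see numerically in the Gaussian special case: for the standard Gaussian in R^n, V(x) = |x|^2/2 + (n/2) log(2π), so V(X)/n has mean (1 + log 2π)/2 and standard deviation 1/sqrt(2n). A quick Monte Carlo check (an illustration of the concentration, not of the talk's optimal bounds):

```python
import numpy as np

rng = np.random.default_rng(1)
n, samples = 1000, 2000
X = rng.standard_normal((samples, n))
# Information content per coordinate, V(X)/n, for the standard Gaussian:
info = (X**2).sum(axis=1) / (2 * n) + 0.5 * np.log(2 * np.pi)
mean_theory = 0.5 + 0.5 * np.log(2 * np.pi)
# With n = 1000, the std of V(X)/n is 1/sqrt(2n), about 0.022: the
# distribution is sharply concentrated around its mean.
print(info.mean(), info.std())
```

Geometrically this is exactly the thin-shell statement quoted in the abstract: |X| concentrates near sqrt(n), i.e. the Gaussian lives on a thin spherical annulus.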

November 20 | Ayaboe Edoh Air Force Research Laboratory (Edwards AFB) Balancing Numerical Dispersion, Dissipation, and Aliasing for Time-Accurate Simulations The investigation of unsteady flow phenomena calls for improved time-accurate simulation capabilities. The numerical errors that degrade solution accuracy and robustness can be broadly categorized in terms of dispersion, dissipation, and aliasing. Their presence is a consequence of discretizing the continuous governing equations, and their impact may be felt at all scales (albeit to varying degrees). The task of constructing an effective numerical method may therefore be interpreted as reducing the influence of these errors over as broad a range of scales as possible. Here, a concerted assembly of scheme components is chosen relative to a target aliasing limit. High-order and optimized finite difference stencils are employed in order to achieve accuracy; meanwhile, split representations of nonlinear transport terms are used to greatly improve robustness. Finally, tunable and scale-discriminant artificial-dissipation methods are incorporated for de-aliasing purposes and as a means of further enhancing both accuracy and stability. The proposed framework is motivated by the need to devise a numerical format capable of mitigating discretization effects in Large-Eddy Simulations. |
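Dispersion error of a finite-difference stencil is commonly quantified through its modified wavenumber: exact differentiation of exp(ikx) gives k'h = kh, and a stencil's deviation from that line is its dispersion error. The sketch below compares standard 2nd- and 4th-order central differences; it illustrates the standard analysis, not the optimized stencils of the talk.

```python
import numpy as np

# Modified wavenumbers k'h of central first-derivative stencils as a
# function of the scaled wavenumber kh (grid cutoff at kh = pi).
kh = np.linspace(0.01, 3.0, 200)
mod2 = np.sin(kh)                              # 2nd-order central difference
mod4 = (8 * np.sin(kh) - np.sin(2 * kh)) / 6   # 4th-order central difference
err2 = np.abs(mod2 - kh)                       # dispersion error vs. exact kh
err4 = np.abs(mod4 - kh)
# Both stencils lose accuracy toward the grid cutoff, but the higher-order
# stencil tracks the exact line over a wider range of resolved scales.
```

"Optimized" stencils trade formal order for a modified wavenumber that hugs the exact line further toward the cutoff, which is one of the balancing acts the abstract describes.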

December 4 | Book Club “Weapons of Math Destruction” Chapters 8-10 |

##### Spring 2020

January 24 | Mevin Hooten Colorado State University Running on empty: Recharge dynamics from animal movement data |

February 14 | Mark Risser Lawrence Berkeley National Laboratory Bayesian inference for high-dimensional nonstationary Gaussian processes |

February 21 | Donna Calhoun Boise State University A fully unsplit wave propagation algorithm for shallow water flows on GPUs |

February 28 | Matthias Katzfuss Texas A&M Gaussian-Process Approximations for Big Data |

March 20 | Nancy Rodriguez |

April 3 | Dan Nordman |

April 10 | Grady Wright |

April 24 | Feng Bao |