# Colloquia

The Applied Mathematics and Statistics Colloquium takes place at 3 p.m. on Fridays in Chauvenet Hall room 143. Attendance is in person or via Zoom. Please contact Samy Wu Fung at swufung@mines.edu or Daniel McKenzie at dmckenzie@mines.edu for further information and the Zoom link and password.

## Fall 2022

August 26 | Geovani Nunes Grapiglia (Remote) Title: On the Worst-Case Complexity of Non-Monotone Line-Search Methods Abstract: Non-monotone line-search methods form an important class of iterative methods for non-convex unconstrained optimization problems. For the algorithms in this class, the non-monotonicity is controlled by a sequence of non-negative parameters. We prove complexity bounds for achieving approximate first-order optimality even when this sequence is not summable. As a by-product, we obtain a global convergence result that covers many existing non-monotone line-search methods. Our generalized results allow more freedom for the development of new algorithms. As an example, we design a non-monotone scheme related to the Metropolis rule. Preliminary numerical experiments suggest that the new method is suitable for non-convex problems with many non-global local minimizers. |
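The acceptance rule described above can be sketched as a relaxed Armijo backtracking line search, where a non-negative slack η_k permits temporary increases in the objective. A minimal Python illustration; the quadratic test function, the geometric η-schedule, and all parameter values are hypothetical, not taken from the talk:

```python
import numpy as np

def nonmonotone_backtracking(f, grad, x, eta, c=1e-4, shrink=0.5, t0=1.0):
    """One non-monotone Armijo step along the steepest-descent direction:
    accept f(x + t*d) <= f(x) + eta + c*t*<grad, d>, where the slack
    eta >= 0 relaxes strict monotone decrease."""
    g = grad(x)
    d = -g                      # steepest-descent direction
    t = t0
    fx = f(x)
    while f(x + t * d) > fx + eta + c * t * g.dot(d):
        t *= shrink             # backtrack until the relaxed condition holds
    return x + t * d

# Hypothetical quadratic test problem with a geometric eta-schedule.
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x = np.array([2.0, -1.0])
for k in range(50):
    x = nonmonotone_backtracking(f, grad, x, eta=0.9 ** k)
```

Because the slack only loosens the acceptance test, the backtracking loop still terminates for any η_k ≥ 0; the talk's contribution concerns complexity bounds when the η-sequence is not summable.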

September 2 | Deepanshu Verma (Remote) Title: Advances and Challenges in Solving High-Dimensional HJB Equations Arising in Optimal Control Abstract: We present a neural network approach for approximately solving high-dimensional stochastic as well as deterministic control problems. Our network design and the training problem leverage insights from optimal control theory. We approximate the value function of the control problem using a neural network and use the Pontryagin maximum principle (PMP) and the dynamic programming principle to express the optimal control (and therefore the sampling) in terms of the value function. Our training loss consists of a weighted sum of the objective functional of the control problem and penalty terms that enforce the Hamilton-Jacobi-Bellman equations along the sampled trajectories. As a result, we can obtain the value function in the regions of the state space traveled by optimal trajectories and avoid the curse of dimensionality. Importantly, training is self-supervised in that it does not require solutions of the control problem. Our approach for stochastic control problems reduces to the method of characteristics as the system dynamics become deterministic. In our numerical experiments, we compare our method to existing solvers for a more general class of semi-linear PDEs. Using a two-dimensional toy problem, we demonstrate the importance of the PMP in informing the sampling. For a 100-dimensional benchmark problem, we demonstrate that our approach improves accuracy and time-to-solution. Finally, we consider a PDE-based dynamical system to demonstrate the scalability of our approach. |

September 9 | Brandon Amos (Remote) Title: Differentiable optimization-based modeling for machine learning Abstract: This talk tours the foundations and applications of optimization-based models for machine learning. Optimization is a widely used modeling paradigm for solving non-trivial reasoning operations and brings precise domain-specific modeling priors into end-to-end machine learning pipelines that are otherwise typically large parameterized black-box functions. We will discuss how to integrate optimization as a differentiable layer and start simple with constrained, continuous, convex problems in Euclidean spaces. We will then move on to active research topics that expand beyond these core components into non-convex, non-Euclidean, discrete, and combinatorial spaces. Throughout, we will consider applications in control, reinforcement learning, and vision. |

September 16 | Kyri Baker (In-Person) Title: Chance Constraints for Smart Buildings and Smarter Grids Abstract: Evolving energy systems are introducing heightened levels of stress on the electric power grid. Fluctuating renewable energy sources, dynamic electricity pricing, and new loads such as plug-in electric vehicles are transforming the operation of the grid, from the high-voltage transmission grid down to individual buildings. Grid overvoltages, instabilities, and overloading issues are increasing, but stochastic predictive optimization and control can help alleviate these undesirable conditions. Optimization techniques leveraging chance (probabilistic) constraints will be presented in this talk. Different ways to incorporate chance constraints into optimization problems, including distributionally robust and joint chance constraint reformulations, will be presented. Applications in smart buildings and distribution grids with high integration of solar energy are shown to benefit from chance constrained optimization formulations, reducing grid voltage issues, conserving energy, and allowing buildings and the grid to interact in new ways. |
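For a single linear constraint with Gaussian uncertainty, a chance constraint has a standard deterministic reformulation that conveys the basic idea; a minimal sketch, with hypothetical toy numbers (the talk treats far more general joint and distributionally robust formulations):

```python
import numpy as np
from statistics import NormalDist

def gaussian_chance_ok(x, mu, Sigma, b, eps=0.05):
    """Deterministic check of the individual chance constraint
    P(a^T x <= b) >= 1 - eps for a ~ N(mu, Sigma), via the reformulation
    mu^T x + Phi^{-1}(1 - eps) * sqrt(x^T Sigma x) <= b."""
    z = NormalDist().inv_cdf(1 - eps)   # safety factor Phi^{-1}(1 - eps)
    return float(mu @ x + z * np.sqrt(x @ Sigma @ x)) <= b

# Hypothetical two-component uncertain load vector a and decision x.
mu = np.array([1.0, 1.0])
Sigma = 0.01 * np.eye(2)
x = np.array([0.4, 0.4])
ok = gaussian_chance_ok(x, mu, Sigma, b=1.0)   # nominal 0.8 plus margin ~ 0.093
```

The probabilistic requirement becomes a deterministic margin term, which is what makes such constraints tractable inside standard optimization solvers.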

September 23 | Willy Hereman (In-Person) Title: Symbolic computation of solitary wave solutions and solitons through homogenization of degree Abstract: Hirota's method is an effective method to find soliton solutions of completely integrable nonlinear PDEs, including the famous Korteweg-de Vries (KdV) equation. Hirota's approach requires a change of the dependent variable (a.k.a. Hirota's transformation) so that the resulting equation can be written in bilinear form using the Hirota operators. Solitons are then computed using a perturbation scheme that terminates after a finite number of steps. It will be shown that the Hirota transformations are crucial for obtaining PDEs that are homogeneous of degree (in the new dependent variables). The actual recasting into bilinear form, which assumes a quadratic equation (or a tricky decoupling into such equations), is not required to compute solitary wave solutions or solitons. To illustrate this idea, soliton solutions of a class of fifth-order KdV equations (due to Lax, Sawada-Kotera, and Kaup-Kupershmidt) will be computed with a straightforward recursive algorithm involving linear and nonlinear operators. Although it circumvents bilinear forms, this method can still be viewed as a simplified version of Hirota's method. Homogenization of degree also allows one to find solitary wave solutions of nonlinear PDEs that are either not completely integrable or for which the bilinear form is unknown. A couple of such examples will also be shown. |

September 30 | No Colloquium |

October 7 | Graduate Student Colloquium (Cancelled) |

October 14 | Philipp Witte (In-Person) Title: SciAI4Industry: Solving PDEs for industry-scale problems with deep learning Abstract: Solving partial differential equations with deep learning makes it possible to reduce simulation times by multiple orders of magnitude and unlock scientific methods that rely on large numbers of sequential simulations, such as optimization and uncertainty quantification. One of the big challenges of adopting scientific AI for industrial applications such as reservoir simulations is that neural networks for solving large-scale PDEs exceed the memory capabilities of current GPUs. In this talk, we discuss current approaches to parallelism in deep learning and why tensor parallelism is the most promising approach to scaling scientific AI to commercial-scale problems. While implementing tensor parallelism for neural networks is more intrusive than other forms of parallelism such as data or pipeline parallelism, we show how parallel communication primitives can be implemented through linear operators and integrated into deep learning frameworks with automatic differentiation. In our examples, we show that tensor parallelism for scientific AI enables us to train large-scale 3D simulators for solving the Navier-Stokes equations and for modeling subsurface CO2 flow in a real-world carbon capture & storage (CCS) scenario. |

October 21 | Levon Nurbekyan (In-Person) Title: Efficient natural gradient method for large-scale optimization problems Abstract: Large-scale optimization is at the forefront of modern data science, scientific computing, and applied mathematics, with areas of interest including high-dimensional PDEs, inverse problems, machine learning, etc. First-order methods are workhorses for large-scale optimization due to their modest computational cost and simplicity of implementation. Nevertheless, these methods are often agnostic to the structural properties of the problem under consideration and suffer from slow convergence, becoming trapped in bad local minima, etc. Natural gradient descent is an acceleration technique in optimization that takes advantage of the problem’s geometric structure and preconditions the objective function’s gradient by a suitable “natural” metric. Parameter update directions then correspond to the steepest descent on a corresponding “natural” manifold instead of the Euclidean parameter space, rendering a parametrization-invariant descent direction on that manifold. Despite its success in statistical inference and machine learning, natural gradient descent is far from a mainstream computational technique due to the computational complexity of calculating and inverting the preconditioning matrix. This work aims at a unified computational framework that streamlines the computation of a general natural gradient flow via the systematic application of efficient tools from numerical linear algebra. We obtain efficient and robust numerical methods for natural gradient flows without directly calculating, storing, or inverting the dense preconditioning matrix. We treat Euclidean, Wasserstein, Sobolev, and Fisher–Rao natural gradients in a single framework for a general loss function. |
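The "without inverting the preconditioning matrix" idea can be illustrated with a matrix-free conjugate gradient solve, which needs only products v ↦ M v with the natural metric M; the damped Fisher-like metric below is a hypothetical stand-in, not the framework from the talk:

```python
import numpy as np

def cg_solve(matvec, b, tol=1e-10, maxiter=200):
    """Conjugate gradients on M d = b using only products v -> M v,
    so the dense metric M is never formed, stored, or inverted."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r.dot(r)
    for _ in range(maxiter):
        Mp = matvec(p)
        alpha = rs / p.dot(Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r.dot(r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Hypothetical damped Fisher-like metric M = J^T J + damping * I, given
# only through its matrix-vector product.
J = np.random.default_rng(0).standard_normal((30, 5))
metric = lambda v: J.T @ (J @ v) + 1e-3 * v
grad = np.ones(5)
nat_grad = cg_solve(metric, grad)   # natural gradient direction M^{-1} grad
```

Each iteration costs one metric-vector product, which is the kind of primitive that efficient natural gradient schemes are built around.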

October 28 | Alejandro Caballero, Graduate Student Colloquium Title: Solving the 2-D Elastic Radiative Transfer Equations Abstract: The radiative transfer equations are a coupled system of integro-partial differential equations that describe the propagation of energy as a function of space, time, and angular direction. They find applications in geophysics, optics, atmospheric sciences, medical imaging, astrophysics, underwater acoustics, and other fields. In this talk I will discuss how we develop the elastic formulation as an extension of the acoustic formulation using integral equations. The elastic formulation is of interest because in many applications energy changes not only its propagation direction but also its mode of propagation (P or S waves). I will show some numerical results that highlight the applicability of the algorithm, and benchmark the results against previously derived theoretical expressions. |

November 4 | Brennan Sprinkle (In-Person) Title: Two open questions in fluid dynamics: the enhanced traction of microscopic flat tires and the reverse motion of a reverse sprinkler Abstract: In this talk I'll present two recent projects that have opened more questions than they have answered. First, I'll discuss the rolling of active Pickering emulsions: small droplets (~10-100 µm) covered in smaller (~1 µm) active particles that can be controlled and rolled by an external, rotating magnetic field. Curiously, these droplets roll much faster when they are soft than when they are rigid. I'll describe experiments done by collaborators in Chemical Engineering and numerical simulations that I developed to study this behavior, but I'll stop short of presenting a hydrodynamic model. Second, I'll talk about a classic question in fluid dynamics (first posed by Richard Feynman) concerning the reversibility of hydromechanical sprinklers (lawn sprinklers) that auto-rotate while ejecting fluid: what happens to these sprinklers when they suck fluid in? The question is surprisingly subtle, and I'll present experiments done by collaborators at NYU as well as a mathematical model that resolves some aspects of it. I'll also present some preliminary 2-D numerical simulations of sprinklers and discuss why there may be more to the story. |

November 11 | Soraya Terrab, Graduate Student Colloquium Title: Learning Convolutional Filters Abstract: Filters are key post-processing tools used to reduce error, remove spurious oscillations, and improve the accuracy of numerical solutions. We are interested in developing geometrically flexible, physically relevant filters. In this talk, I will present our recent, ongoing work on learning nonlinear convolutional kernel filters. I will first introduce standard filters and our group's work on Smoothness-Increasing Accuracy-Conserving (SIAC) filters. I will next present the optimization problem we aim to solve with moment constraints, the data used to train the convolutional filter, the architecture of the convolutional neural network, and results on test data. To conclude, I will share how we apply our filter across scales as well as initial results on multi-resolution filters. |

November 18 | Indranil Mukhopadhyay Title: Pseudotime reconstruction and downstream spatio-temporal analysis based on single cell RNA-seq data Abstract: Dynamic regulation of gene expression is often governed by progression through transient cell states. Bulk RNA-seq analysis only detects average changes in expression levels and is unable to identify these dynamics. Single cell RNA-seq (scRNA-seq) presents an unprecedented opportunity to place cells on a hypothetical time trajectory that reflects the gradual transition of their transcriptomes. This continuum trajectory (pseudotime) may reveal the developmental pathway and provide information on dynamic transcriptomic changes. Existing approaches depend heavily on reducing the huge dimension to very low-dimensional subspaces and may lead to loss of information. We propose PseudoGA, a genetic algorithm based approach that orders cells under the assumption that gene expression varies according to a smooth curve along the pseudotime trajectory. Our method shows higher accuracy in simulated and real datasets. The generality of the assumption behind PseudoGA and its independence from dimensionality reduction techniques make it a robust choice for pseudotime estimation from scRNA-seq data. We use a resampling technique when applying PseudoGA to large scRNA-seq data, and PseudoGA is adaptable to parallel computing. Pseudotime reconstruction opens a broad area of research: once cells are ordered according to pseudotime, we can explore gene expression patterns that vary over both time (i.e., pseudotime) and space. |

November 25 | No Colloquium |

December 2 | Weiqi Chu Title: A mean-field opinion model on hypergraphs: from modeling to inference Abstract: The perspectives and opinions of people change and spread through social interactions on a daily basis. In the study of opinion dynamics on networks, one often models entities as nodes and their social relationships as edges, and examines how opinions evolve as dynamical processes on networks, including graphs, hypergraphs, multi-layer networks, etc. In this talk, I will introduce a model of opinion dynamics and derive its mean-field limit, where the opinion density satisfies a kinetic equation of Kac type. We prove properties of the solution of this equation, including nonnegativity, conservativity, and steady-state convergence. The parameters of such opinion models play a nontrivial role in shaping the dynamics; in reality, however, these parameters often cannot be measured directly. In the second part of the talk, I will approach the problem from an "inverse" perspective and present how to infer the interaction kernel from limited partial observations. I will provide sufficient conditions on the measurements for two scenarios, under which one is able to reconstruct the kernel uniquely. I will also provide a numerical algorithm for the inference when the data set has only a limited number of data points. |

December 9 | Alexander Pak (In-Person) Title: Coarse-Graining as a Hypothesis Testing Framework to Bridge the Microscale to Macroscale Abstract: Understanding the connection between the microscopic structure and macroscopic behavior of self-assembled soft and biological matter has led to numerous advances in energy, sustainability, and healthcare. Many such systems exhibit hierarchical structures undergoing morphological transitions, often under out-of-equilibrium conditions. However, it remains largely unknown how collective molecular reorganization may be modulated, which, in turn, may regulate macroscopic functionality. In this talk, I will explore this theme in the context of biomolecular assembly through the lens of molecular simulations. Throughout, I will describe our systematic coarse-graining strategies, which emulate the behavior of these systems under reduced representations (i.e., degrees of freedom), and explore how coarse-grained models can be leveraged to hypothesize and test connections between different length- and time-scales. I will share vignettes spanning from lipid morphogenesis to viral infection (e.g., for HIV-1 and SARS-CoV-2). The insights from these studies reveal the importance of a dynamical perspective on structure-function relationships and highlight the utility of multiscale simulations. |

December 16 | No Colloquium |

December 23 | No Colloquium |

December 30 | No Colloquium |

## Spring 2022

January 21 | Information session on organizations within AMS helping to make a difference |

January 28 | Paul Martin, Applied Mathematics and Statistics, Colorado School of Mines Title: Solving Laplace's equation in a tube: how hard can it be? Abstract: The title problem arises in classical fluid dynamics, and in steady-state diffusion and wave problems. It is almost trivial when there is nothing in the tube apart from flowing fluid, but it becomes much more interesting when the tube contains an obstacle. A related problem is: if I send a wave down a tube, how much of it is reflected by the obstacle? I shall discuss properties of the solution, and methods for approximating the solution. |

February 4 | Federico Municchi, Research Associate in Computational Fluid Dynamics, Colorado School of Mines Title: Combining phase field and geometric algorithms for the numerical simulation of multiphase flows Abstract: Phase field methods are gaining momentum in science and engineering for modeling multicomponent and multiphase systems, thanks to their thermodynamically consistent formulation and the general smoothness of the resulting fields. In fact, they provide a framework to include complex physical processes (such as phase change) and result in fewer spurious oscillations when dealing with surface tension, compared to other methods like the volume of fluid. The Cahn-Hilliard equation is the principal governing equation in the phase field method, as it results from a minimization of the free energy functional and thus includes all the relevant physical phenomena such as phase change and surface tension forces. However, its solution is not straightforward, as it is a fourth-order nonlinear partial differential equation. A number of explicit methods have been proposed in the literature, together with an implicit mixed formulation; segregated implicit algorithms are seldom used due to stability issues. In this work, we present a novel segregated algorithm for the solution of the Cahn-Hilliard equation based on the incomplete block Schur preconditioning technique. Performance and accuracy of the algorithm are compared against a block-coupled mixed formulation and the standard volume of fluid method for a number of cases. We also illustrate several applications of the method to multiphase flows with phase change, where the Cahn-Hilliard equation is coupled with the Navier-Stokes equations and the energy conservation equation. In this circumstance, geometric algorithms are integrated with the phase field method to preserve the sharpness of the interface when required. |

February 11 | Ebru Bozdag, Department of Geophysics, Colorado School of Mines Title: Journey to the center of the Earth and Mars: Seismology with big & small data and high-performance computing Abstract: Seismic waves generated by passive sources such as earthquakes and ambient noise are our primary tools to probe Earth's interior. Improving the resolution of seismic models of the deep interior is crucial to understanding the dynamics of the mantle (from ~30 km to 2900 km depth) and the core (from 2900 km to 6371 km depth), which directly control, for instance, plate tectonics and volcanic activity at the surface and the generation of Earth's magnetic field, respectively. Meanwhile, the detailed shallower crustal structure is essential for seismic hazard assessment, better modeling of earthquakes and nuclear explosions, and oil and mineral exploration. Advances in computational power and the availability of high-quality seismic data from dense seismic networks and emerging instruments offer excellent opportunities to refine our understanding of Earth's multi-scale structure and dynamics from the surface to the core. We are at a stage where we need to take the full complexity of wave propagation into account and avoid commonly used approximations to the wave equation and corrections in seismic tomography. Imaging Earth's interior globally with full-waveform inversion has been one of the most extreme projects in seismology in terms of computational requirements and the available data that can potentially be assimilated in seismic inversions. While we need to tackle computational and "big data" challenges to better harness the available resources on Earth, we have "small data" challenges on other planetary bodies such as Mars, where we now have the first radially symmetric models constrained by seismic waves generated by marsquakes as part of the Mars InSight mission. I will talk about advances in theory, computations, and data in exploring the multi-scale interiors of Earth and Mars. I will also talk about our recent efforts to address computational and data challenges and discuss future directions in the context of global seismology. |

February 18 | Eileen Martin, Colorado School of Mines Title: Moving less data in correlation- and convolution-based analyses Abstract: When analyzing the relationships between multiple streams of time-series data or between images, we often calculate crosscorrelations, convolutions, or deconvolutions to explore potential time-lagged or space-lagged similarities between them. However, denser and larger sensor networks are leading to larger datasets, and naively calculating correlations or convolutions often requires significant data movement (quadratic, if naively looking at relationships between all data snapshots). This is particularly problematic in ambient noise interferometry, a method by which Green’s functions of a PDE system (such as the heat equation or a wave equation) are estimated by crosscorrelations across all sensor pairs in a dense sensor network recording randomly distributed sources of energy (heat sources or vibration sources). In this talk I will show some new algorithms to calculate array-wide correlations that take advantage of lossy data compression to reduce data movement and computational costs by performing crosscorrelations directly on compressed data. These methods can apply to crosscorrelation of any time-series data. Often, seismologists use the results of crosscorrelating ambient seismic noise as an input to a few types of array beamforming methods to characterize Earth materials (similar to beamforming used in wireless communications and astronomy). In fact, we can calculate the final beamforming results directly from the ambient seismic noise with new linear algorithms that only implicitly calculate crosscorrelations. |
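The kernel being accelerated here is the pairwise crosscorrelation, conventionally computed in the frequency domain; a standard FFT-based sketch (on uncompressed data, unlike the talk's compressed-domain algorithms):

```python
import numpy as np

def xcorr_fft(a, b):
    """Full cross-correlation of two real signals via the FFT: O(n log n)
    instead of the O(n^2) sliding dot product. Output entry k corresponds
    to lag k - (len(b) - 1)."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()      # next power of two >= n
    A = np.fft.rfft(a, nfft)
    B = np.fft.rfft(b[::-1], nfft)        # correlation = convolution with reversed b
    return np.fft.irfft(A * B, nfft)[:n]

# Two impulse "recordings": a's event precedes b's by one sample, so the
# correlation peaks at lag -1.
a = np.array([0.0, 1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 0.0])
c = xcorr_fft(a, b)
```

In interferometry this is repeated over all sensor pairs, which is why data movement, rather than the FFTs themselves, dominates the cost at array scale.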

February 25 | Nancy Rodriguez, CU Boulder |

March 4 | Graduate Student Colloquium Title: Multiwavelets and Machine Learning-based Discontinuity and Edge Detection Presenter: Soraya Terrab Abstract: Spurious oscillations, such as the Gibbs phenomenon, are artifacts that occur in the numerical computation of PDEs, affect the accuracy of approximations, and create non-physical effects. These oscillations need to be identified and eliminated in order to maintain physical relevance and accuracy in the numerical approximations. Identifying the nonphysical oscillations requires reliable discontinuity detection methods. In this work, we take advantage of the theory behind multi-resolution wavelet analysis as well as machine learning to identify and limit troubled cells, or discontinuous cells, in the numerical approximation. By extracting the fine details through multi-resolution analysis in the multiwavelet approach, we can analyze the global information in the domain and apply theoretical thresholding and outlier detection to identify troubled cells. Additionally, we have trained classifiers on smooth and discontinuous data, enabling a machine learning solution to discontinuity detection. The ideas from discontinuity detection are not limited to numerical solutions of PDEs; we can also apply these methods to the detection of edges in images. While typical edge detection methods include partial derivative operators, continuous wavelet or shearlet transforms, segmentation, or high-order and variable-order total variation, machine learning has only recently been explored as an edge detection tool for image processing [Wen et al. J. Sci. Comput. (2020)]. For this reason, we have been interested in using this imaging application to compare the multi-resolution wavelet and machine learning-based discontinuity detection methods on two-dimensional, static image data. In its simplest zero-degree multiwavelet construction, our discontinuity detection method results in a Haar wavelet-based detection of edges in images. We will present these initial results along with machine learning-based edge detection and will compare the two discontinuity detection approaches in computational cost and accuracy. ——————————————— Title: Leveraging multiple continuous monitoring sensors for emission identification and localization on oil and gas facilities Presenter: Will Daniels Abstract: Methane, the primary component of natural gas, is a greenhouse gas with about 85 times the global warming potential of carbon dioxide over a 20-year timespan. This makes reducing methane emissions a vital tool for combating climate change. Oil and gas facilities are a promising avenue for reducing emissions, as leaks from these facilities can be mitigated if addressed quickly. To better alert oil and gas operators to emissions on their facilities, we developed a framework to identify when a methane emission is occurring and where it is coming from. This framework utilizes continuous monitoring sensors placed around the perimeter of the facility, but these sensors only observe ambient methane concentrations at their location and do not directly provide information about when and where an emission is occurring. Our framework turns these observations into a location estimate via the following steps. First, we identify spikes in the observations and perform local regression on non-spike data to estimate the methane background. Second, we simulate methane concentrations at the sensor locations from all potential leak sources separately. Third, we pattern-match the simulated and observed concentrations, giving more weight to sources whose simulated concentrations more closely match observations. Finally, we synthesize this information across all sensors on the facility to provide a single location estimate with uncertainty. Here we discuss our framework in more detail and demonstrate its effectiveness under real-world conditions. |
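The zero-degree (Haar) construction mentioned in the first abstract reduces to thresholding scaled pairwise differences; a minimal sketch, where the test signal and the hard threshold are hypothetical stand-ins for the thresholding and outlier detection described above:

```python
import numpy as np

def haar_detail(signal):
    """One level of Haar detail coefficients: scaled differences of adjacent
    pairs, which are near zero on smooth data and large at a discontinuity."""
    s = np.asarray(signal, dtype=float)
    return (s[0::2] - s[1::2]) / np.sqrt(2.0)

def flag_troubled_cells(signal, thresh=0.5):
    """Indices of pairs whose detail coefficient exceeds a (hypothetical)
    hard threshold; a stand-in for the outlier detection in the talk."""
    return np.flatnonzero(np.abs(haar_detail(signal)) > thresh)

# Piecewise-constant signal of even length with a jump between samples 6 and 7:
# only the pair straddling the jump is flagged.
sig = np.r_[np.zeros(7), np.ones(9)]
cells = flag_troubled_cells(sig)
```

On image data the same differencing is applied along rows and columns, which is the sense in which the zero-degree detector becomes a Haar-based edge detector.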

March 11 | Graduate Student Colloquium Title: The Radiative Transfer Equations: What they are, why they are important, and how we solve them Presenter: Alejandro Jaimes Abstract: In this talk I will discuss the radiative transfer equations (RTE) in acoustic media. The RTE describe the angular spatio-temporal distribution of energy density in scattering media and have found applications in areas such as geophysics, acoustics, astrophysics, atmospheric sciences, and optics. The RTE take the form of an integro-partial differential equation, which has motivated the development of numerical techniques such as the discontinuous Galerkin method and particle swarm optimization. I will first introduce the one-dimensional formulation of the RTE and then generalize it to two and three dimensions. Through this generalization, I will discuss the complications that arise when dealing with 2-D or 3-D scattering. I will then briefly discuss four standard approaches to solving the RTE: spherical harmonics, discretization methods, iteration methods, and Monte Carlo techniques. I will show results of a numerical algorithm that I construct by mixing ideas from the iteration and discretization methods and, if time allows, show some results of solving the RTE with physics-informed neural networks. |

March 18 | Suzanne Sindi, UC Merced Title: A Chemical Master Equation Model for Prion Aggregate Infectivity Shows Prion Strains Differ by Nucleus Size Abstract: Prion proteins are responsible for a variety of neurodegenerative diseases in mammals such as Creutzfeldt-Jakob disease in humans and “mad-cow” disease in cattle. While these diseases are fatal to mammals, a host of harmless phenotypes have been associated with prion proteins in S. cerevisiae, making yeast an ideal model organism for prion diseases. Most mathematical approaches to modeling prion dynamics have focused either on the protein dynamics in isolation, absent a changing cellular environment, or on modeling prion dynamics in a population of cells by considering the “average” behavior. However, such models have been unable to recapitulate in vivo properties of yeast prion strains, including rates of appearance during seeding experiments. The common assumption in prion phenotypes is that the only limiting event is the establishment of a stable prion aggregate of minimal size. We show this model is inconsistent with seeding experiments. We then develop a minimal model of prion phenotype appearance: the first successful amplification of an aggregate. Formally, we develop a chemical master equation of prion aggregate dynamics through conversion (polymerization) and fragmentation under the assumption of a minimal stable size. We frame amplification as a first-arrival-time process that must occur on a time scale consistent with the yeast cell cycle. This model, and subsequent experiments, establish for the first time that two standard yeast prion strains have different minimally stable aggregate sizes. This suggests a novel approach (albeit entirely theoretical) for managing prion diseases: shifting prion strains towards larger nucleus sizes. |

April 1 | Andee Kaplan, Colorado State University Title: A Practical Approach to Proper Inference with Linked Data Abstract: Entity resolution (ER), comprising record linkage and de-duplication, is the process of merging noisy databases in the absence of unique identifiers to remove duplicate entities. One major challenge of analysis with linked data is identifying a representative record among determined matches to pass to an inferential or predictive task, referred to as the downstream task. Additionally, incorporating uncertainty from ER in the downstream task is critical to ensure proper inference. To bridge the gap between ER and the downstream task in an analysis pipeline, we propose five methods to choose a representative (or canonical) record from linked data, referred to as canonicalization. Our methods are scalable in the number of records, appropriate in general data scenarios, and provide natural error propagation via a Bayesian canonicalization stage. In this talk, the proposed methodology is evaluated on three simulated data sets and one application: determining the relationship between demographic information and party affiliation in voter registration data from the North Carolina State Board of Elections. We first perform Bayesian ER and evaluate our proposed methods for canonicalization before considering the downstream tasks of linear and logistic regression. Bayesian canonicalization methods are empirically shown to improve downstream inference in both settings through prediction and coverage. |

April 8 | Snigdhansu (Ansu) Chatterjee, University of Minnesota Title: Nonparametric Hypothesis Testing in High Dimensions Abstract: High-dimensional data, where the dimension of the feature space is much larger than the sample size, arise in a number of statistical applications. In this context, we present the generalized multivariate sign transformation, defined as a vector divided by its norm. For different choices of the norm function, the resulting transformed vector adapts to certain geometrical features of the data distribution. We obtain one-sample and two-sample testing procedures for mean vectors of high-dimensional data using these generalized sign vectors. These tests are based on U-statistics using kernel inner products, do not require prohibitive assumptions, and are amenable to a fast randomization-based implementation. Theoretical developments, simulated data and real data examples are discussed. |
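The transform at the heart of this abstract is simple to state: a vector is mapped to its direction under a chosen norm. A minimal sketch (illustrative only, not the speaker's implementation; the kernel statistic below is a simplified stand-in for the U-statistics discussed in the talk):

```python
import numpy as np

def generalized_sign(x, norm=np.linalg.norm):
    """Generalized multivariate sign: map x to its direction, x / ||x||.

    The Euclidean norm gives the classical spatial sign; other norm
    choices adapt the transform to different geometries of the data.
    """
    n = norm(x)
    return x / n if n > 0 else x

# Toy one-sample flavor: inner products of sign vectors across independent
# observations concentrate near 0 when the mean is zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1000))                  # 50 samples in dimension 1000
S = np.array([generalized_sign(x) for x in X])
stat = (S @ S.T)[np.triu_indices(50, k=1)].mean()
```

A mean shift common to all observations would push the pairwise inner products, and hence `stat`, away from zero, which is the intuition behind the sign-based tests.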

April 15 | Tammy Kolda, Mathematical Consultant Title: Tensor Moments of Gaussian Mixture Models Abstract: Gaussian mixture models (GMMs) are fundamental tools in the statistical and data sciences that are useful for clustering, anomaly detection, density estimation, etc. We are interested in high-dimensional problems (e.g., many features) and a potentially massive number of data points. One way to compute the parameters of a GMM is via the method of moments, which compares the sample and model moments. The first moment is the mean; the second (centered) moment is the covariance. We are interested in third, fourth, and even higher-order moments. The d-th moment of an n-dimensional random variable is a symmetric d-way tensor (multidimensional array) of size n x n x ... x n (d times), so working with moments is generally assumed to be prohibitively expensive in both storage and time for d > 2 and larger values of n. In this talk, we show that the estimation of the model parameters can be accomplished without explicit formation of the model or sample moments. In fact, the cost per iteration for the method of moments is of the same order as that of expectation maximization (EM), making the method of moments competitive. Along the way, we show how to concisely describe the moments of Gaussians and GMMs using tools from algebraic geometry, enumerative combinatorics, and multilinear algebra. Numerical results validate and illustrate the efficiency of our approaches. |
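The computational point — that moment contractions can be formed without ever materializing the d-way moment tensor — can be illustrated in a few lines (a hedged sketch of my own; the factorization shown is standard algebra, not the talk's specific algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 4, 100_000                     # dimension n, sample size N
X = rng.normal(size=(N, n)) @ np.diag([1.0, 2.0, 0.5, 1.5])

# Explicit 3rd sample moment: a symmetric n x n x n tensor, O(n^3) storage.
M3 = np.einsum('ki,kj,kl->ijl', X, X, X) / N

# The contraction M3(v, v, .) that iterative solvers need can be formed
# implicitly: M3(v, v, .) = (1/N) X^T ((X v)^2), costing only O(N n).
v = rng.normal(size=n)
implicit = X.T @ ((X @ v) ** 2) / N
explicit = np.einsum('ijl,i,j->l', M3, v, v)   # same vector, O(n^3) route
```

For large `n` only the implicit route is feasible, which is the spirit of the "no explicit formation of moments" claim in the abstract.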

April 22 | |

April 29 | Ishani Roy, Serein Title: Using Data to Circumvent Biases Abstract: Did you know that in 2013 the US Food and Drug Administration (FDA) recommended cutting the dose of the sleep medication Ambien in half for women, but not men, after the results of driving-simulation studies indicated that women metabolize the drug at a slower rate? The FDA report came after 20 years of incorrect and dangerous prescribing of Ambien to women. Action was taken only after more than 700 reports of motor vehicle crashes associated with Ambien use had put the lives of many women, their children, and other drivers on the road at risk. Biases not only affect recruitment, team morale, and productivity; they also affect how we design products and grow a business. In this talk I will speak about how unconscious biases may affect inclusion, and how data and research can be used to measure and monitor exclusion and so circumvent biases. |

## Fall 2021

August 27 | Monique Chyba Title: Epidemiological Modeling and COVID-19 Heterogeneity in an Island Chain Environment Abstract: SARS-CoV-2 (COVID-19) has impacted not only health but also the economy and how we live daily life. On January 30, 2020, the World Health Organization (WHO) declared a global health emergency. COVID-19 was officially named on February 11 as it continued to spread across Asia and Europe. Mathematicians have found themselves in the front seat of this race against COVID-19. However, there are still many unanswered questions and challenges regarding the outcomes of several models, as well as their limitations. It is unclear at this time whether there is a "better" model, and while most of the challenges in epidemiological forecasting come from incomplete data and the impossibility of modeling people's behavior, there is still the question of which model to use when and for what purpose. Throughout the current COVID-19 pandemic, most results and forecasts have come from a single model rather than a combination. We consider what can be learned from running compartment and agent-based models side by side, taking and applying the best of each model to the measured data. We will also discuss how the Hawaiian Islands provide a unique opportunity to study heterogeneity and demographics in a controlled environment, due to the geographically closed borders and mostly uniform pandemic-induced governmental controls and restrictions. |

September 3 | Math and Social Justice: Sara Clifton, St. Olaf College Modeling the leaky pipeline in hierarchical professions Women constitute approximately 50% of the population and have been an active part of the U.S. workforce for over half a century. Yet women continue to be poorly represented in leadership positions within business, government, medical, and academic hierarchies. As of 2018, less than 5% of Fortune 500 chief executive officers are female, 20% of the U.S. Congress is female, and 34% of practicing physicians are female. The decreasing representation of women at increasing levels of power within hierarchical professions has been called the “leaky pipeline” effect, but the main cause of this phenomenon remains contentious. Using a mathematical model of gender dynamics within professional hierarchies and a new database of gender fractionation over time, we quantify the impact of the two major decision-makers in the ascension of people through hierarchies: those applying for promotion and those who grant promotion. We quantify the degree of homophily (self-seeking) and gender bias in a wide range of professional hierarchies and demonstrate that intervention may be required to reach gender parity in some fields. We also preview an in-progress effort to extend the model to quantify racial bias and homophily in professional hierarchies. |

September 10 | Samy Wu Fung Title: Efficient Training of Infinite-Depth Neural Networks via Jacobian-Free Backpropagation Abstract: A promising trend in deep learning replaces fixed-depth models with approximations of the limit as network depth approaches infinity. This approach uses a portion of the network weights to prescribe behavior by defining a limit condition. This makes network depth implicit, varying based on the provided data and an error tolerance. Moreover, existing implicit models can be implemented and trained with fixed memory costs in exchange for additional computational costs. In particular, backpropagation through implicit-depth models requires solving a Jacobian-based equation arising from the implicit function theorem. We propose a new Jacobian-free backpropagation (JFB) scheme that circumvents the need to solve Jacobian-based equations while maintaining fixed memory costs. This makes implicit-depth models much cheaper to train and easy to implement. Numerical experiments on classification, CT reconstruction, and traffic prediction are provided. |
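The backward-pass shortcut can be sketched on a toy implicit layer z* = tanh(Wz* + Ux). This is a hand-rolled NumPy illustration, not the authors' code; the names and sizes are made up, and the final comment states the descent-direction property that the JFB analysis establishes under a contraction assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
W = 0.25 * rng.normal(size=(d, d)) / np.sqrt(d)  # small weights => contraction
U = rng.normal(size=(d, d))
x = rng.normal(size=d)

def T(z):                     # layer map; its fixed point defines the network
    return np.tanh(W @ z + U @ x)

# Forward pass: iterate to the fixed point z* = T(z*) ("implicit depth").
z = np.zeros(d)
for _ in range(200):
    z = T(z)

# Backward pass for a loss L(z*) = 0.5 ||z*||^2. The exact adjoint solves a
# linear system with the Jacobian J = dT/dz; JFB simply replaces
# (I - J)^{-T} by the identity: no linear solve, fixed memory cost.
g = z.copy()                                        # dL/dz*
J = np.diag(1.0 - np.tanh(W @ z + U @ x) ** 2) @ W  # dT/dz at z*
adj_exact = np.linalg.solve(np.eye(d) - J.T, g)     # implicit-function adjoint
adj_jfb = g                                         # Jacobian-free shortcut
# When T is a contraction, <adj_jfb, adj_exact> > 0, so the JFB update
# remains a descent direction even though it skips the solve.
```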

September 17 | AMS Graduate Student Colloquium Dave Montgomery Title: Parallelization of a Navier-Stokes solver for applications in extravascular injury modeling Blood flow is governed by the incompressible Navier-Stokes equations, a set of non-linear equations that are regarded as computationally expensive to solve. Since the blood coagulation process happens over the time scale of tens of minutes, parallelization techniques are necessary to minimize overall computation time. We will present a method for decomposing the H-shaped extravascular injury domain so that the Navier-Stokes equations can be solved in parallel on multiple cores using distributed memory. |

September 24 | Math and Social Justice: Emma Pierson (Microsoft Research) on "Data science for social equality" |

October 1 | AMS Graduate Student Colloquium Laura Albrecht Title: A spatio-temporal model to estimate West Nile Virus cases in Ontario Abstract: West Nile virus is the most common mosquito-borne disease in North America and the leading cause of viral encephalitis. West Nile virus is primarily transmitted between birds and mosquitoes, while humans are incidental, dead-end hosts. We develop a Poisson spatio-temporal model to investigate how human West Nile virus case counts vary with respect to mosquito abundance and infection rates, bird abundance, and other environmental covariates. We use a Bayesian paradigm to fit our model to data from 2010-2019 in Ontario, Canada. |

October 8 | Research Open House Doug Nychka: Deep learning a statistical model Eileen Martin: Green’s function estimation with non-ideal noise Samy Wu Fung: Solving High-Dimensional Optimal Control Problems with Deep Learning Paul Martin: Generation of internal waves in the ocean |

October 15 | AMS Graduate Student Colloquium |

October 22 | Beth Malmskog, Colorado College Colorado in Context: Using Mathematics to Detect and Prevent Gerrymandering in Colorado and Beyond Gerrymandering is the process of manipulating the boundaries of electoral districts for political gain. This is considered by many to be deeply unfair, but it has been common practice in states across the country for more than 200 years. This talk will introduce a mathematical/statistical technique called ensemble analysis in the context of electoral boundaries, and describe how this perspective has become central to the national conversation about fair redistricting. I will share the big picture ideas, recent progress, and the work that our group is doing here in Colorado. |

October 29 | Math and Social Justice: Veronica Ciocanel (Duke University) on "Analyzing Racial Equity and Bias of Federal Judges through Inferred Sentencing Records" |

November 5 | Hannah Director, Mines Title: Identification and Uncertainty of Sea Ice Leads Abstract: Sea ice has substantial effects on the climate of Polar regions and the Earth overall. For example, open ocean tends to absorb heat from solar radiation while ice covered surfaces tend to reflect radiation. Large, narrow cracks in the ice’s surface, called leads, affect how sea ice grows and melts. Information about when and where leads form is needed to understand sea ice behavior and feedbacks between the ocean and atmosphere. To develop this understanding, scientists need an efficient way to identify sea ice leads in observational data and climate model output. For sea ice, remote sensing data and climate model output provide gridded fields showing the proportion of area in each grid box that is ice-covered. This granular identification, however, does not directly identify leads as distinct and coherent features. We introduce a likelihood-based method to efficiently identify sea ice leads from data of this form. Our method also provides uncertainty estimates of the presence and location of leads. We apply this identification method to high-resolution model output to assess the frequency of lead formation, structure of typical leads, and environmental conditions when leads form. |

November 12 | Derek Onken, Eli Lilly Title: A Neural Network Approach for Real-Time High-Dimensional Optimal Control Abstract: Optimal control (OC) problems aim to find an optimal policy that controls given dynamics over a period of time. For systems with high-dimensional state (for example, systems with many centrally controlled agents), OC problems can be difficult to solve globally. We propose a neural network approach for solving such problems. When trained offline in a semi-global manner, the model is robust to shocks or disturbances that may occur in real-time deployment (e.g., wind interference). Our unsupervised approach is grid-free and scales efficiently to dimensions where grids become impractical or infeasible. We demonstrate the effectiveness of our approach on several multi-agent collision-avoidance problems in up to 150 dimensions. |

November 19 | Math and Social Justice: Jonathan Mattingly (Duke University) on "Fairness in Redistricting" The US political system is built on representatives chosen by geographically localized regions. This presents the government with the problem of designing these districts. Every ten years, the US census counts the population and new political districts must be drawn. The practice of harnessing this administrative process for partisan political gain is often referred to as gerrymandering. How does one identify and understand gerrymandering? Can we really recognize gerrymandering when we see it? If one party wins over 50% of the vote, is it fair that it wins less than 50% of the seats? What do we mean by fair? How can math help illuminate these questions? How does the geopolitical geometry of the state (where which groups live and the shape of the state) inform these answers? For me, these questions began with an undergraduate research program project in 2013 and have led me to testify in two cases: Common Cause v. Rucho (which went to the US Supreme Court) and Common Cause v. Lewis. This work has contributed in part to the redrawing of the NC state legislative district maps and NC congressional maps. The resulting new maps will be used in our upcoming 2020 elections. In the remedy phase of North Carolina v. Covington, Greg Herschlag from the Duke group addressed the question of whether attempts to satisfy the VRA alone explained the observed level of political packing and cracking. This is a story of interaction between lawyers, mathematicians, and policy advocates. The legal discussion has been increasingly informed by the mathematical framework, and the mathematics has been pushed to better incorporate the policy questions. The back and forth has been important in finding ways to effectively inform policy makers and courts of the insight the analyses provide.
The problem of understanding gerrymandering has also prompted the development of a number of new computational algorithms, which come with new mathematical questions. The next round of redistricting analysis will necessarily need to be more refined and nuanced. There is also the opportunity to be less reactive: there are opportunities to influence the process by which new maps are drawn before turning to the courts, and to direct the conversation by showing the effect of more fully considering factors such as communities of interest, incumbency, or proposed procedural elements of laws. This presentation reflects joint work with Gregory Herschlag and a number of other researchers, including many undergraduates, graduate students, and a few high school students. |

December 3 | Zachary Kilpatrick Title: How heterogeneity shapes the efficiency of collective decisions and foraging Abstract: Many organisms regularly make decisions regarding foraging, home-site selection, mating, and danger avoidance in groups ranging from hundreds up to millions of individuals. These decisions involve evidence-accumulation processes by individuals and information exchange within the group. Moreover, these decisions take place in complex, dynamic, and spatially structured environments, which shape the flow of information between group mates. We will present a statistical inference model for framing evidence accumulation and belief sharing in groups, and some examples of how interactions shape decision efficiency in groups. Our canonical model is of Bayesian agents deciding between two equally likely options by accumulating evidence to a threshold. When neighbors only share their decisions with each other, groups composed of individuals with a distribution of decision thresholds make more efficient decisions than homogeneous ones. We then turn our attention to specific examples of collective decision making in foraging animal groups like honey bees. For honey bees, spatial heterogeneity resulting from confinement to a hive bottlenecks communication, but creates an effective colony-level signal-detection mechanism whereby recruitment to low-quality objectives is blocked. Heterogeneity in communication, on the other hand, hobbles the foraging efficiency of small groups. |
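The canonical model described above — agents accumulating noisy evidence to a threshold, with the first decider determining the group choice — can be caricatured in a few lines. This is my toy simulation, not the speaker's model; the drift, noise, and threshold values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def group_decision(thresholds, drift=0.5, sigma=1.0, tmax=1000):
    """Agents accumulate noisy evidence for the (true) positive option;
    the first agent to cross +/- its own threshold decides for the group.
    Returns (group_choice_correct, decision_time)."""
    y = np.zeros(thresholds.size)
    for t in range(1, tmax + 1):
        y += drift + sigma * rng.normal(size=y.size)
        hit = np.flatnonzero(np.abs(y) >= thresholds)
        if hit.size:
            first = hit[np.argmax(np.abs(y[hit]) - thresholds[hit])]
            return bool(y[first] > 0), t
    return bool(y[np.argmax(np.abs(y))] > 0), tmax

# Heterogeneous thresholds: hasty agents decide early, cautious ones late.
thresholds = np.array([1.0, 2.0, 3.0])
runs = [group_decision(thresholds) for _ in range(500)]
accuracy = np.mean([correct for correct, _ in runs])
mean_time = np.mean([t for _, t in runs])
```

Comparing `accuracy` and `mean_time` across homogeneous and heterogeneous threshold vectors is the kind of speed-accuracy trade-off the talk quantifies in a proper Bayesian setting.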

## Spring 2021

January 29 | Book Club "Factfulness: 10 Reasons We’re Wrong about the World – and Why Things are Getting Better" (2018), Hans Rosling Chapters 1-3 |

February 5 | Michelle McCarthy, Boston University Title: Mathematical modeling of neuronal rhythms: from physiology to function Abstract: Brain rhythms are a ubiquitous feature of brain dynamics, tightly correlated with neuronal activity underlying such basic functions as cognition, emotions, movement, and sleep. Moreover, abnormal rhythmic activity is associated with brain dysfunction and altered brain states. Identifying the neuronal units and network structures that create, sustain, and modulate brain rhythms is fundamental to identifying both their function and dysfunction in mediating behavioral output. Experimental studies of brain rhythms are limited by the inability to isolate large ensembles of neurons and their interconnections during active brain states. However, mathematical models have been used extensively to study network dynamics of the brain and to give insight into the determinants and functions of brain oscillations during various cognitive and behavioral states. Here I will give a brief introduction to the field of study of rhythmic brain activity and the mathematical formulations underlying biophysical neuronal network models. Existing mathematical models of brain development, sleep, and neurodegenerative disease will be used to demonstrate how neuronal models of rhythmic dynamics can be used to explore the link between brain physiology and functional network dynamics. |

February 12 | Daniel Nordman, Iowa State University Title: Within-sample prediction of a number of future events Abstract: The talk overviews a prediction problem encountered in reliability engineering, where a need arises to predict the number of future events (e.g., failures) among a cohort of units associated with a time-to-event process. Examples include the prediction of warranty returns or the prediction of the number of future product failures that could cause serious harm. Important decisions, such as a product recall, are often based on such predictions. Data, typically right-censored, are used to estimate the parameters of a time-to-event distribution. This distribution can then be used to predict the number of events over future periods of time. Because all units belong to the same data set, either by providing information (i.e., observed event times) or by becoming the subject of prediction (i.e., censored event times), such predictions are called within-sample predictions and differ from the prediction problems considered in most of the literature. A standard plug-in (also known as estimative) prediction approach is shown to be invalid for this problem (i.e., even for large amounts of data, the method fails to have the correct coverage probability). However, a commonly used prediction calibration method is shown to be asymptotically correct for within-sample predictions, and two alternative predictive-distribution-based methods are presented that perform better than the calibration method. |
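The flavor of the plug-in ("estimative") approach the talk critiques can be sketched with an exponential lifetime model (my own toy numbers; the memoryless conditional probability used below is special to the exponential distribution):

```python
import numpy as np

rng = np.random.default_rng(4)
lam_true, n, c = 0.01, 200, 60.0          # hazard, cohort size, censoring time
t = rng.exponential(1 / lam_true, size=n)
observed = t <= c                          # failures seen during [0, c]
time_at_risk = np.where(observed, t, c)

# Plug-in step: exponential MLE from right-censored data
# (events divided by total time at risk).
lam_hat = observed.sum() / time_at_risk.sum()

# Each surviving unit fails in the next window w with probability
# 1 - exp(-lam_hat * w), so the predicted future count is binomial.
w = 30.0
m = int((~observed).sum())                 # units still at risk
p_hat = 1 - np.exp(-lam_hat * w)
draws = rng.binomial(m, p_hat, size=20_000)
lo, hi = np.quantile(draws, [0.025, 0.975])
# The talk's point: this interval treats lam_hat as known, ignoring its
# sampling error, which is why the plug-in method undercovers and why
# calibration or predictive-distribution methods are needed.
```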

February 19 Special Time of 1:00 PM | Olivia Prosper, University of Tennessee Title: Modeling malaria parasite dynamics within the mosquito Abstract: The malaria parasite Plasmodium falciparum requires a vertebrate host and a female Anopheles mosquito to complete a full life cycle, with sexual reproduction occurring in the mosquito. While parasite dynamics within the vertebrate host, such as humans, have been extensively studied, less is understood about dynamics within the mosquito, a critical component of malaria transmission dynamics. This sexual stage of the parasite life cycle allows for the production of genetically novel parasites. Meanwhile, the mosquito’s biology creates bottlenecks in the infecting parasites’ development. We developed a two-stage stochastic model of the generation of parasite diversity within a mosquito and demonstrated the impact of heterogeneity in parasite dynamics across a population of mosquitoes on estimates of parasite diversity. A key epidemiological parameter related to the timing of onward transmission from mosquito to vertebrate host is the extrinsic incubation period (EIP). Using simple models of within-mosquito parasite dynamics fitted to empirical data, we investigated factors influencing the EIP. |

February 26 | Book Club "Factfulness: 10 Reasons We’re Wrong about the World – and Why Things are Getting Better" (2018), Hans Rosling Chapters 4-6 |

March 12 | Lise-Marie Imbert-Gerard, University of Arizona Title: Wave propagation in inhomogeneous media: An introduction to Generalized Plane Waves Abstract: Trefftz methods rely, in broad terms, on the idea of approximating solutions to partial differential equations (PDEs) using basis functions which are exact solutions of the PDE, making explicit use of information about the ambient medium. But wave propagation problems in inhomogeneous media are modeled by PDEs with variable coefficients, and in general no exact solutions are available. Generalized Plane Waves (GPWs) are functions that have been introduced, in the case of the Helmholtz equation with variable coefficients, to address this problem: they are not exact solutions to the PDE but are instead constructed locally as high-order approximate solutions. We will discuss the origin, the construction, and the properties of GPWs. The construction process introduces a consistency error, requiring a specific analysis. |
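In broad strokes (my paraphrase, with the order bookkeeping simplified), a GPW replaces the linear phase of a classical plane wave by a locally fitted polynomial:

```latex
% A classical plane wave u(x) = e^{i\kappa\, d\cdot x} solves
% -\Delta u - \kappa^2 u = 0 only for constant \kappa.
% GPW ansatz at a construction point x_0, for variable \kappa(x):
\varphi(x) = \exp\big(P(x)\big), \qquad P \ \text{a complex polynomial},
% with the coefficients of P computed locally so that the PDE residual
% vanishes to high order at x_0:
\big(-\Delta - \kappa^2(x)\big)\,\varphi(x) = O\big(|x - x_0|^{q}\big),
% giving an approximate, rather than exact, Trefftz basis function.
```

The mismatch on the right-hand side is precisely the consistency error mentioned at the end of the abstract.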

March 19 | Ethan Anderes, UC Davis Title: Gravitational wave and lensing inference from the CMB polarization Abstract: In the last decade cosmologists have spent a considerable amount of effort mapping the radially-projected large-scale mass distribution in the universe by measuring the distortion it imprints on the CMB. Indeed, all the major surveys of the CMB produce estimated maps of the projected gravitational potential generated by mass density fluctuations over the sky. These maps contain a wealth of cosmological information and, as such, are an important data product of CMB experiments. However, the most profound impact from CMB lensing studies may not come from measuring the lensing effect, per se, but rather from our ability to remove it, a process called delensing. This is due to the fact that lensing and the emission of millimeter-wavelength radiation from the interstellar medium in our own galaxy are the two dominant sources of foreground contaminants for primordial gravitational wave signals in the CMB polarization. As such, delensing, i.e. the process of removing the lensing contaminants, together with our ability to either model or remove galactic foreground emission, sets the noise floor on upcoming gravitational wave science. In this talk we will present a complete Bayesian solution for simultaneous inference of lensing, delensing and gravitational wave signals in the CMB polarization as characterized by the tensor-to-scalar ratio r parameter. Our solution relies crucially on a physically motivated re-parameterization of the CMB polarization which is designed specifically, along with the design of the Gibbs Markov chain itself, to result in an efficient Gibbs sampler---in terms of mixing time and the computational cost of each step---of the Bayesian posterior. This re-parameterization also takes advantage of a newly developed lensing algorithm, which we term LenseFlow, that lenses a map by solving a system of ordinary differential equations.
This description has conceptual advantages, such as allowing us to give a simple non-perturbative proof that the lensing determinant is equal to unity in the weak-lensing regime. The algorithm itself maintains this property even on pixelized maps, which is crucial for our purposes and unique to LenseFlow as compared to other lensing algorithms we have tested. It also has other useful properties such as that it can be trivially inverted (i.e. delensing) for the same computational cost as the forward operation, and can be used for fast and exact likelihood gradients with respect to the lensing potential. Incidentally, the ODEs for calculating these derivatives are exactly analogous to the backpropagation techniques used in deep neural networks but are derived in this case completely from ODE theory. |

March 26 | Book Club "Factfulness: 10 Reasons We’re Wrong about the World – and Why Things are Getting Better" (2018), Hans Rosling Chapters 7-9 |

April 9 | Andrew Zammit-Mangion, University of Wollongong Title: Statistical Machine Learning for Spatio-Temporal Forecasting Abstract: Conventional spatio-temporal statistical models are well-suited for modelling and forecasting using data collected over short time horizons. However, they are generally time-consuming to fit, and often do not realistically encapsulate temporally-varying dynamics. Here, we tackle these two issues by using a deep convolutional neural network (CNN) in a hierarchical statistical framework, where the CNN is designed to extract process dynamics from the process' most recent behaviour. Once the CNN is fitted, probabilistic forecasting can be done extremely quickly online using an ensemble Kalman filter with no requirement for repeated parameter estimation. We conduct an experiment where we train the model using 13 years of daily sea-surface temperature data in the North Atlantic Ocean. Forecasts are seen to be accurate and calibrated. We show the versatility of the approach by successfully producing 10-minute nowcasts of weather radar reflectivities in Sydney using the same model that was trained on daily sea-surface temperature data in the North Atlantic Ocean. This is joint work with Christopher Wikle, University of Missouri. |
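The fast online forecasting step rests on a standard ensemble Kalman filter update. A minimal stochastic-EnKF analysis step looks roughly like this (a generic sketch with an assumed linear observation operator and made-up dimensions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(5)
d, m, Ne = 10, 4, 50                 # state dim, obs dim, ensemble size
H = rng.normal(size=(m, d))          # assumed linear observation operator
R = 0.5 * np.eye(m)                  # observation-error covariance

def enkf_analysis(ens, y):
    """Stochastic EnKF analysis: shift each forecast member toward a
    perturbed copy of the observation y using the ensemble Kalman gain."""
    X = ens - ens.mean(axis=0)
    P = X.T @ X / (Ne - 1)                          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain (d x m)
    y_pert = y + rng.multivariate_normal(np.zeros(m), R, size=Ne)
    return ens + (y_pert - ens @ H.T) @ K.T

truth = rng.normal(size=d)
forecast = truth + rng.normal(size=(Ne, d))         # spread-1 forecast ensemble
analysis = enkf_analysis(forecast, H @ truth)
```

In the talk's framework the forecast ensemble would come from propagating members through the CNN-extracted dynamics, so no likelihood re-fitting is needed at forecast time.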

April 16 | Diogo Bolster, University of Notre Dame Title: Incomplete mixing in reactive systems - from Lab to Field scale Abstract: In order for two items to react, they must physically come into contact with one another. In the lab we often measure reaction rates by forcing two species to continuously mix together. However, in real systems such forced mixing mechanisms may often not exist, and so a natural question arises: How do we take measurements from our well-mixed laboratory experiments and use them to make meaningful predictions at scales of interest? In this talk we propose a novel modeling framework that aims precisely to do this. To show its applicability we will discuss a few examples: (i) mixing-driven reactions in quasi-well-mixed systems, (ii) mixing-driven reactions in a porous column experiment, and (iii) mixing in a highly heterogeneous aquifer with a broad range of velocity and spatial scales. While this work was originally motivated by chemical reactions in porous media, the modeling framework is much more general than this and should be applicable to a broad range of problems. Also, the term reaction, as defined within our framework, can loosely be defined as an event where two items come together to produce something else; it is not in any way limited to purely chemical reactions. |

April 23 | Kiona Ogle, Northern Arizona University Title: A Bayesian approach to quantifying time-scales of influence and ecological memory Abstract: Many time-varying ecological processes are influenced by both concurrent and antecedent (past) conditions; in some cases, antecedent conditions may outweigh concurrent influences. The time-scales over which environmental conditions influence processes of interest (e.g., photosynthesis, carbon and water fluxes, tree growth, ecosystem productivity) are not well understood, motivating our development and application of the stochastic antecedent modeling (SAM) approach. The SAM approach is applied to ecological time-series data within a Bayesian statistical framework to quantify ecological memory. We use “memory” to broadly describe time-scales of influence, including the importance of antecedent conditions experienced at different times in the past, potentially revealing lagged responses. The coupled Bayesian-SAM approach, however, can lead to computational inefficiencies, and we describe reparameterization “solutions” to address such issues. To illustrate, we apply the approach to responses operating at distinctly different time-scales: annual tree growth (e.g., tree-ring widths) and sub-daily plant physiological responses (e.g., indices of stomatal behavior). Our Bayesian-SAM applications to tree growth in arid and semi-arid regions have identified particular seasons or months during which climatic conditions (e.g., precipitation or temperature) are most influential for subsequent tree growth; in many cases, conditions experienced 2-4 years ago continue to influence growth. The analysis has also revealed novel, multi-day lagged responses of plant physiological behavior to soil and atmospheric moisture conditions. In general, the Bayesian-SAM approach has demonstrated that ecological memory is an important process governing plant and ecosystem responses to environmental perturbations. |

April 30 | Book Club "Factfulness: 10 Reasons We’re Wrong about the World – and Why Things are Getting Better" (2018), Hans Rosling Chapters 9-10 + Factfulness Rules of Thumb |

## Fall 2020

September 18 | Zachary J. Grant, Oak Ridge National Lab Title: Analysis and Development of Strong Stability Preserving Time Stepping Schemes Abstract: High-order spatial discretizations with monotonicity properties are often desirable for the solution of hyperbolic partial differential equations. These methods can advantageously be coupled with high-order strong stability preserving (SSP) time-stepping schemes to accurately evolve solutions forward in time while preserving convex functionals that are satisfied by the design of the spatial discretization. The search for high-order SSP time-stepping methods with a large allowable strong stability coefficient has been an active area of research over the last three decades. In this talk I will review the foundations of SSP time-stepping schemes: how to analyze a given scheme, and how to optimally build a method that allows the largest effective stable time step. We will then discuss some extensions of SSP methods in recent years and some ongoing research problems in the field, and demonstrate the need for the SSP property through simple yet illustrative examples. |
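A concrete instance of the class of schemes discussed here is the classical Shu-Osher SSP-RK3 method, whose stages are convex combinations of forward Euler steps (a standard textbook scheme, shown on a scalar test problem of my choosing):

```python
import numpy as np

def euler(u, f, dt):
    """One forward Euler step."""
    return u + dt * f(u)

def ssprk3(u, f, dt):
    """Shu-Osher SSP-RK3: every stage is a convex combination of forward
    Euler steps, so any convex functional (e.g. TVD, positivity) preserved
    by forward Euler under dt <= dt_FE is preserved at the same step size."""
    u1 = euler(u, f, dt)
    u2 = 0.75 * u + 0.25 * euler(u1, f, dt)
    return u / 3.0 + (2.0 / 3.0) * euler(u2, f, dt)

# Accuracy check on u' = -u, u(0) = 1, integrated to t = 1.
u, dt = 1.0, 0.001
for _ in range(1000):
    u = ssprk3(u, lambda v: -v, dt)
```

The convex-combination structure is exactly what makes the SSP coefficient analysis mentioned in the abstract possible: the scheme inherits whatever monotonicity property forward Euler has.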

September 25 | Book Club “Weapons of Math Destruction” Chapters 1-4 |

October 2 | |

October 16 | Minah Oh, James Madison University Title: Fourier Finite Element Methods and Multigrid for Axisymmetric H(div) Problems Abstract: An axisymmetric problem is a problem defined on a three-dimensional (3D) axisymmetric domain, and such problems appear in numerous applications. An axisymmetric problem can be reduced to a sequence of two-dimensional (2D) problems by using cylindrical coordinates and a Fourier series decomposition. Fourier Finite Element Methods (Fourier-FEMs) can be used to approximate each Fourier mode of the solution by using a suitable FEM. Such dimension reduction is an attractive feature considering computation time, but the resulting 2D problems are posed in weighted function spaces where the weight function is the radial component r. Furthermore, the grad, curl, and div operators appearing in these weighted problems are quite different from the standard ones, so the analysis of such weighted problems requires special attention. Multigrid is an effective iterative method that can be used to solve large matrix systems arising from FEMs. In this talk, I will present a multigrid algorithm that can be applied to weighted H(div) problems that arise after performing a dimension reduction on an axisymmetric H(div) problem. Theoretical results showing the uniform convergence of the multigrid V-cycle with respect to mesh size will be presented, as well as numerical results. |

October 30 | Book Club: “Weapons of Math Destruction” Chapters 5-7 |

November 6 | Mokshay Madiman University of Delaware Concentration of information for log-concave distributions In 2011, S. Bobkov and the speaker showed that for a random vector X in R^n drawn from a log-concave density f=e^{-V}, the information content per coordinate, namely V(X)/n, is highly concentrated about its mean. The result demonstrated that high-dimensional log-concave measures are in a sense close to uniform distributions on the annulus between 2 nested convex sets (generalizing the well-known fact that the standard Gaussian measure is concentrated on a thin spherical annulus). We present recent work that obtains an optimal concentration bound in this setting, using a much simplified proof. Applications that motivated the development of these results include high-dimensional convex geometry, random matrix theory, and shape-constrained density estimation. The talk is based on joint work with Sergey Bobkov (University of Minnesota), Matthieu Fradelizi (Université Paris Est), and Liyao Wang. |
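The concentration phenomenon in the abstract is easy to observe numerically in the simplest log-concave example, the standard Gaussian (a Monte Carlo illustration, not the general result of the talk): the information content per coordinate V(X)/n = -log f(X)/n has fluctuations of order 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 1000, 2000
X = rng.standard_normal((samples, n))
# information content per coordinate, V(X)/n = -log f(X)/n, for the
# standard Gaussian density f = e^{-V} in R^n
info = 0.5 * np.log(2.0 * np.pi) + (X**2).sum(axis=1) / (2.0 * n)
print(info.mean())   # close to the entropy per coordinate, 0.5*log(2*pi*e)
print(info.std())    # of order 1/sqrt(2n): the fluctuations vanish as n grows
```

Rerunning with larger n shrinks the standard deviation like 1/sqrt(2n), which is the Gaussian case of the concentration the talk quantifies for all log-concave densities.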

November 20 | Ayaboe Edoh Edwards AFRL Balancing Numerical Dispersion, Dissipation, and Aliasing for Time-Accurate Simulations The investigation of unsteady flow phenomena calls for improved time-accurate simulation capabilities. Numerical errors responsible for affecting solution accuracy and robustness can be broadly categorized in terms of dispersion, dissipation, and aliasing. Their presence is a consequence of discretizing the continuous governing equations, and their impact may be felt at all scales (albeit to varying degrees). The task of constructing an effective numerical method may therefore be interpreted in terms of reducing the influence of these errors over as broad a range of scales as possible. Here, a concerted assembly of scheme components is chosen relative to a target aliasing limit. High-order and optimized finite difference stencils are employed in order to achieve accuracy; meanwhile, split representations for nonlinear transport terms are used in order to greatly improve robustness. Finally, tunable and scale-discriminant artificial-dissipation methods are incorporated for de-aliasing purposes and as a means of further enhancing both accuracy and stability. The proposed framework is motivated by the need to devise a numerical format capable of mitigating discretization effects in Large-Eddy Simulations. |
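Dispersion and dissipation errors of finite difference stencils, as discussed in the abstract, are commonly quantified by modified-wavenumber analysis. As a small sketch (generic central stencils, not the AFRL scheme itself): central first-derivative stencils have purely real modified wavenumbers, so they are dispersive but add no dissipation, and raising the order widens the range of well-resolved scales.

```python
import numpy as np

k = np.linspace(0.01, np.pi, 200)                 # scaled wavenumber k*dx
# modified wavenumbers of central first-derivative stencils; both are
# purely real, i.e. dispersion error only, no numerical dissipation
k2 = np.sin(k)                                    # 2nd order: (u[j+1]-u[j-1])/(2*dx)
k4 = (8.0 * np.sin(k) - np.sin(2.0 * k)) / 6.0    # 4th order
err2 = np.abs(k2 - k) / k                         # relative dispersion error
err4 = np.abs(k4 - k) / k
print(bool((err4 <= err2 + 1e-12).all()))         # higher order resolves more scales
```

Upwind-biased stencils, by contrast, produce a complex modified wavenumber whose imaginary part acts as dissipation; the tunable, scale-discriminant artificial dissipation in the talk targets that imaginary part at the poorly resolved scales only.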

December 4 | Book Club “Weapons of Math Destruction” Chapters 8-10 |

##### Spring 2020

January 24 | Mevin Hooten Colorado State University Running on empty: Recharge dynamics from animal movement data |

February 14 | Mark Risser Lawrence Berkeley National Laboratory Bayesian inference for high-dimensional nonstationary Gaussian processes |

February 21 | Donna Calhoun Boise State University A fully unsplit wave propagation algorithm for shallow water flows on GPUs |

February 28 | Matthias Katzfuss Texas A&M Gaussian-Process Approximations for Big Data |

March 20 | Nancy Rodriguez |

April 3 | Dan Nordman |

April 10 | Grady Wright |

April 24 | Feng Bao |

##### Fall 2019

August 23 | Chris Elvidge NOAA and Mines' Payne Institute for Public Policy VIIRS Data Gems From the Nights |

September 13 | Cynthia Phillips Sandia National Laboratory Advanced Data Structures for National Cyber Security |

September 20 | Will Kleiber University of Colorado - Boulder Mixed Graphical-Basis Models for Large Nonstationary and Multivariate Spatial Data Problems |

October 4 | Igor Cialenco Illinois Institute of Technology Adaptive Robust Control Under Model Uncertainty |

October 18 | Tathagata Bandyopadhyay Indian Institute of Management Ahmedabad Inference Problems in Binary Regression Model with Misclassified Responses Video |

October 25 | Daniel Forger University of Michigan Math, Music and the Mind; Analysis of the performed Trio Sonatas of J.S. Bach |

November 8 | Daniel Larremore University of Colorado - Boulder Complex Networks & Malaria: From Evolution to Epidemiology Video |

November 22 | Marisa Eisenberg University of Michigan |

December 3 | Russell Cummings United States Air Force Academy The DoD High Performance Computing Modernization Program’s Hypersonic Vehicle Simulation Institute: Objectives and Progress -A Mechanical Engineering Seminar- |

##### Spring 2019

January 25 | Steve Sain Jupiter Intelligence Data Science @ Jupiter |

February 1 | Xingping Sun Missouri State University Kernel Based Monte Carlo Approximation Methods |

February 8 | Mandy Hering Baylor University Fault Detection and Attribution for a Complex Decentralized Wastewater Treatment Facility |

February 22 | Bailey K. Fosdick Colorado State University Inference for Network Regressions with Exchangeable Errors |

March 8 | Radu Cascaval University of Colorado - Colorado Springs The Mathematics of (Spatial) Mobility |

March 15 | Amneet Bhalla San Diego State University A Robust and Efficient Wave-Structure Interaction Solver for High Density Ratio Multiphase Flows Video |

March 22 | Robert Lund Clemson University Stationary Count Time Series |

April 5 | Hua Wang Colorado School of Mines Learning Sparsity-Induced Models for Understanding Imaging Genetics Data Video |

April 26 | Wen Zhou Colorado State University Estimation and Inference of Heteroskedasticity Models with Latent Semiparametric Factors for Multivariate Time Series Video |

May 3 | Olivier Pinaud Colorado State University Time Reversal by Time-dependent Perturbations Video |

##### Fall 2018

August 31 | Michael Wakin Colorado School of Mines Modal Analysis from Random and Compressed Samples Video |

September 14 | Michael Scheuerer National Oceanic and Atmospheric Administration (NOAA) Generating Calibrated Ensembles of Physically Realistic, High-Resolution Precipitation Forecast Fields based on GEFS Model Output Video |

September 28 | Kathryn Colborn CU Denver, Anschutz Medical Campus Spatio-Temporal Modelling of Malaria Incidence for Early Epidemic Detection in Mozambique Video |

October 12 | Philippe Naveau Laboratoire des Sciences du Climat et de l'Environnement, IPSL-CNRS, France Analysis of Extreme Climate Events by Combining Multivariate Extreme Values Theory and Causality Theory Video |

October 26 | Carrie Manore Los Alamos National Laboratory Modeling Disease Risk with Social and Environmental Drivers and Non-traditional Data Sources |

November 2 | Jon Trevelyan Durham University, UK Enriched Simulations in Computational Mechanics Video |

November 9 | Sarah Olson Worcester Polytechnic Institute Modeling Cell Motility: From Agent Based Models to Continuous Approximations Video |

November 30 | Elwin van't Wout Pontificia Universidad Católica de Chile Efficient Numerical Simulations of Wave Propagation Phenomena Video |

December 7 | Bruce Bugbee National Renewable Energy Laboratory (NREL) |

##### Spring 2018

March 2 | Grant Brown University of Iowa Biostatistics Working with Approximate Bayesian Computation in Stochastic Compartmental Models Video |

March 9 | Victoria Booth University of Michigan Mathematics Neuromodulation of Neural Network Dynamics Video |

March 23 | Daniel Appelö University of Colorado Applied Math What’s New with the Wave Equation? Video |

April 6 | Grad Student Showcase Video |

April 20 | Jem Corcoran University of Colorado Applied Math A Birth-and-Death Process for the Discretization of Continuous Attributes in Bayesian Network Structure Recovery Video |

May 4 | Ian Sloan University of New South Wales Mathematics Sparse Approximation and the Cosmic Microwave Background Video |

##### Fall 2017

August 25 | Zachary Kilpatrick University of Colorado Boulder, Department of Applied Mathematics Evidence accumulation in changing environments: Neurons, organisms, and groups |

September 8 | Lincoln Carr Colorado School of Mines, Department of Physics Many-Body Quantum Chaos of Ultracold Atoms in a Quantum Ratchet Video |

September 22 | Joe Guinness North Carolina State University, Department of Statistics A General Framework for Vecchia Approximations of Gaussian Processes Video |

October 13 | Eliot Fried Okinawa Institute of Science and Technology, Mathematics, Mechanics, and Materials Unit Shape Selection Induced by Competition Between Surface and Line Energy |

October 20 | Arthur Sherman National Institutes of Health Diabetes Pathogenesis as a Threshold-Crossing Process Video |

November 3 | Adrianna Gillman Rice University, Department of Computational and Applied Mathematics Fast Direct Solvers for Boundary Integral Equations Video |

November 17 | Laura Miller University of North Carolina at Chapel Hill, Departments of Mathematics and Biology Using Computational Fluid Dynamics to Understand the Neuromechanics of Jellyfish Swimming Video |

December 1 | AMS Graduate Student Showcase Video |

##### Spring 2017

January 13 | Roger Ghanem University of Southern California, Department of Aerospace and Mechanical Engineering Uncertainty quantification at the interface of computing and everything else Special joint colloquium with Department of Mechanical Engineering Video |

January 27 | Wolfgang Bangerth Colorado State University, Department of Mathematics Simulating complex flows in the Earth mantle Video |

February 10 | Chris Mast Mercer, Actuary and Employee Benefits Consultant Actuarial problems in employer-sponsored healthcare Video |

February 24 | Natasha Flyer National Center for Atmospheric Research, Computational Math Group Bengt Fornberg University of Colorado Boulder, Department of Applied Mathematics Radial basis functions: Freedom from meshes in scientific computing Video |

March 10 | Michael Sprague National Renewable Energy Laboratory, Computational Science Center A computational model for a dilute biomass suspension undergoing mixing and settling Video |

March 24 | Randall J. LeVeque University of Washington, Department of Applied Mathematics Generating random earthquakes for probabilistic hazard assessment Special joint colloquium with US Geological Survey Video |

April 7 | Fred J. Hickernell Illinois Institute of Technology, Department of Applied Mathematics Think like an applied mathematician and a statistician Video |

April 14 | Ian Sloan University of New South Wales, School of Mathematics How high is high dimensional? Video |

April 21 | Mark Embree Virginia Tech, Department of Mathematics Using interpolatory approximations to learn from an instrumented building Video |

April 28 | James A. Warren National Institute of Standards and Technology, Material Measurement Laboratory The Materials Genome Initiative: NIST, data, and open science Video Special joint colloquium with Department of Metallurgical and Materials Engineering |

May 5 | Jessica F. Ellis Colorado State University, Department of Mathematics The features of college calculus programs: An overview of the MAA two calculus projects' main findings Video |

##### Fall 2016

September 2 | Stephen Becker University of Colorado Boulder, Department of Applied Mathematics Subsampling large datasets via random mixing Video |

September 16 | Art Owen Stanford University, Department of Statistics Permutation p-value approximation via generalized Stolarsky invariance Video |

September 30 | Stefan Wild Argonne National Laboratory, Mathematics and Computer Science Division Beyond the black box in derivative-free and simulation-based optimization Video |

October 14 | Erica Graham Bryn Mawr College, Department of Mathematics Modeling physiological and pathological mechanisms in ovulation Video |

October 28 | Jim Koehler Google Boulder, Principal Statistician Statistical methods supporting Google's ad business |

November 11 | Dennis Cook University of Minnesota, School of Statistics An Introduction to envelopes: Methods for improving efficiency in multivariate statistics Video |

December 2 | Howard Elman University of Maryland, Department of Computer Science Efficient computational methods for parameterized partial differential equations Video |