Fall 2024 Applied and Computational Mathematics (ACM) Seminar

Host: Zecheng Zhang.

Room: Love 0231 or online.

Zoom Link: available upon request.

Time: 3:05 pm to 4:05 pm, weekly on Tuesdays; see below for the detailed schedule.

Date Speaker Abstract of the Talk
Aug 27 Zecheng Zhang Introduction to the ACM seminar for Fall 2024, speaker information, and what to expect.
Sept 3 Jingmin Sun (CMU) In this talk, we develop and explore a foundational model for solving Partial Differential Equations (PDEs), with an emphasis on building a versatile, robust framework capable of addressing multiple PDE operators simultaneously. Our objective is to create a single model that not only handles various operators but also generalizes to new, unseen physical phenomena in a "zero-shot" manner; we present numerical examples demonstrating this capability. We introduce LeMON, a Learning to Learn Multi-Operator Network pipeline, which integrates both pre-training and fine-tuning processes. Furthermore, we investigate a new scaling law specifically tailored to PDE foundation models, providing insights into optimizing model performance as it scales. Finally, we improve the LeMON pipeline by integrating LoRA and meta-learning strategies.
Sept 10 Bryce Morsky (FSU) The replicator equation is widely used in modelling biological, economic, and social systems. It traditionally assumes proportional selection and that replicators earn mean payoffs. In this talk, I will present extensions that feature payoff distributions and truncation selection, where only replicators with fitness above a threshold survive and reproduce. We can distinguish between two types of truncation selection: replicators below a fixed fitness threshold are culled, or a bottom proportion of the population is culled. I will present analyses of these equations, comparing them to the standard replicator equation.
Sept 17 Wenqi Cui (Caltech and NYU) This talk will describe how to bridge the gap between learning and safety-critical constraints through structured neural networks guided by control theory and the physics of energy systems. Using Lyapunov theory, I will show how we can extract stabilizing controller structures for transient stability problems, and show how to parameterize the structures by neural networks. Then I will further show how we can achieve provable guarantees on steady-state optimal resource allocation and adapt to time-varying loads and renewables. The extension of the framework to broader networked systems will also be discussed.
Sept 24 Rishi Sonthalia (Boston College) A fundamental problem in machine learning is understanding the effect of early stopping and mini-batching on the parameters obtained and the generalization capabilities of the trained model. Even for linear models, the effect is not fully understood for arbitrary learning rates and data. We analyze the dynamics of discrete full-batch gradient descent for linear regression. With minimal assumptions, we characterize the trajectory of the parameters and the expected excess risk. Using this characterization, we show that when training with a learning rate schedule $\eta_k$ and a finite time horizon $T$, the early stopped solution $\beta_T$ is equivalent to the minimum norm solution for a generalized ridge regularized problem. We also prove that early stopping is beneficial for generic data with arbitrary covariance spectrum and various learning rate schedules. We provide an estimate for the optimal stopping time and empirically demonstrate the accuracy of our estimate. We also study the discrete dynamics of mini-batch gradient descent with random reshuffling for least squares regression. We show that the error dynamics and generalization error depend on a sample cross-covariance matrix $\mathbf{Z}$ between the original features $\mathbf{X}$ and a set of new features $\widetilde{\mathbf{X}}$, in which each feature is modified by the mini-batches that appear before it during the learning process in an averaged way.
Oct 1 Reserved TBA
Oct 8 TBA TBA
Oct 15 TBA TBA
Oct 29 Hannah Lu (UT Austin) TBA
Nov 5 Fengjiao Liu (FSU) TBA
Nov 12 Sui Tang (UCSB) TBA
Nov 19 Weiqi Chu (UMass) TBA
Dec 3 TBA TBA
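For readers unfamiliar with it, the replicator equation referenced in the Sept 10 abstract has the following standard form (textbook notation, not taken from the talk):

$$\dot{x}_i = x_i\left(f_i(x) - \bar{f}(x)\right), \qquad \bar{f}(x) = \sum_j x_j f_j(x),$$

where $x_i$ is the frequency of type $i$ and $f_i$ its fitness, so types with above-average fitness grow in proportion to their advantage. The truncation-selection variants discussed in the talk replace this proportional-selection rule with one in which only replicators above a fitness threshold (fixed, or set by a culled proportion) survive and reproduce.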
Please contact me if you are interested in giving a talk.
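The early-stopping result described in the Sept 24 abstract can be illustrated with a minimal numerical sketch (my own illustration, not code from the talk): for least squares with zero initialization and constant learning rate, $T$ steps of full-batch gradient descent apply a spectral shrinkage filter in the SVD basis of the data, which is one concrete sense in which the early-stopped solution behaves like a (generalized) ridge-regularized one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 50, 10, 200
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Constant step size below 2 / s_max^2, so the iteration is stable.
eta = 1.0 / np.linalg.norm(X, 2) ** 2

# T steps of full-batch gradient descent on 0.5 * ||X b - y||^2, b_0 = 0.
beta = np.zeros(d)
for _ in range(T):
    beta -= eta * X.T @ (X @ beta - y)

# Closed form: in the SVD basis X = U S V^T, the iterate applies the
# shrinkage filter (1 - (1 - eta s^2)^T) / s to the coefficients U^T y.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
filt = (1.0 - (1.0 - eta * s**2) ** T) / s
beta_closed = Vt.T @ (filt * (U.T @ y))

assert np.allclose(beta, beta_closed)
```

Each singular direction is shrunk by a factor that interpolates between 0 (at $T=0$) and the least-squares solution (as $T \to \infty$), with poorly-conditioned directions shrunk most, just as ridge regularization would do; the talk's result makes this correspondence exact for general learning rate schedules.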