The aforementioned algorithm does not work when the matrix of the system, A(u)x = b(u), is rank deficient and some evaluations cause errors. We next present an algorithm for solving black box linear systems where the entries are polynomials over a field and the matrix of the system is rank deficient.
Our algorithm for parametric linear system solving can be specialized to the
recovery of a vector of rational functions from its evaluations. We view the
recovery of a vector of polynomials as the decoding of interleaved Reed-Solomon
codes. It is known that interleaved schemes improve the error correction
capabilities of codes when errors occur in bursts. If the
evaluations of the polynomials in the vector are performed one at a
time and the errors are dependent, then errors are likely to occur
in consecutive evaluations (bursts). We show how to encode and transmit data (polynomials) on
a burst channel so that the data (polynomials) transmitted can be recovered
with fewer evaluations than is required by Generalized Welch/Berlekamp decoding.
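To make the decoding concrete, here is a minimal Welch/Berlekamp-style sketch over a prime field. All parameters (p, k, e, the evaluation points, the injected error) are illustrative choices of mine, and it decodes a single polynomial rather than the interleaved scheme of the talk: we recover a polynomial of degree less than k from n = k + 2e evaluations, at most e of which are faulty.

```python
# Toy Welch/Berlekamp-style decoder over GF(p); all parameters illustrative.
p = 101

def solve_mod(A, b):
    """Gaussian elimination over GF(p) for a square nonsingular system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], p - 2, p)           # inverse via Fermat's little theorem
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                fac = M[r][c]
                M[r] = [(v - fac * w) % p for v, w in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def poly_div(num, den):
    """Exact division of polynomials over GF(p); coefficients low-to-high."""
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    inv = pow(den[-1], p - 2, p)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] * inv % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % p
    return q

k, e = 3, 1                        # deg f < k, at most e faulty evaluations
f = [3, 7, 2]                      # f(x) = 3 + 7x + 2x^2
xs = list(range(k + 2 * e))        # n = k + 2e evaluation points
ys = [sum(c * x**j for j, c in enumerate(f)) % p for x in xs]
ys[2] = 99                         # inject a single evaluation error

# Linearize N(x_i) = y_i * E(x_i), with deg N < k + e and E monic of degree e:
# unknowns are the coefficients of N and the e low coefficients of E.
A = [[pow(x, j, p) for j in range(k + e)] +
     [(-y * pow(x, j, p)) % p for j in range(e)] for x, y in zip(xs, ys)]
b = [y * pow(x, e, p) % p for x, y in zip(xs, ys)]
sol = solve_mod(A, b)
N, E = sol[:k + e], sol[k + e:] + [1]
recovered = poly_div(N, E)[:k]     # f = N / E
print(recovered)                   # [3, 7, 2]
```

The error locator E vanishes exactly at the corrupted point, so dividing N by E restores the original coefficients.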
Video recording is available here.
Slides of the talk are available here.
1) to present a differential algebra method to test structural identifiability based on the structure of the characteristic set of the differential ideal generated by the polynomials defining the model,
2) to explain the important role played by the model initial conditions in the characteristic set and the role of a system-theoretic property called accessibility, crucial to correctly check identifiability,
3) to illustrate how one can combine our structural identifiability test with practical identifiability approaches in order to calculate either all the multiple parameter solutions of a locally identifiable model or the analytic relations among the infinitely many solutions of a nonidentifiable model. These different solutions describe the observable input-output behaviour equivalently, but they generally yield different dynamic behaviours of the unmeasured variables, whose prediction is often the main goal of mathematical modeling.
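As a toy illustration of point 3 (a minimal example of my own, not one of DAISY's models): for the scalar model x' = -a x, x(0) = x0, y = k x, the output y(t) = k x0 exp(-a t) determines only a and the product k*x0, so distinct parameter sets reproduce identical observations while predicting different unmeasured states.

```python
import math

# Toy nonidentifiable model (illustrative, not from the talk):
# state x' = -a x with x(0) = x0, observed output y = k x.
def y(t, k, x0, a): return k * x0 * math.exp(-a * t)
def x(t, x0, a): return x0 * math.exp(-a * t)

ts = [0.1 * i for i in range(50)]
# Two parameter sets with the same observable input-output behaviour...
same_output = all(abs(y(t, 2.0, 1.0, 0.5) - y(t, 1.0, 2.0, 0.5)) < 1e-12 for t in ts)
# ...but different trajectories of the unmeasured state x:
diff_state = any(abs(x(t, 1.0, 0.5) - x(t, 2.0, 0.5)) > 0.5 for t in ts)
print(same_output, diff_state)    # True True
```

Only a structural analysis (such as the characteristic-set test above) reveals which parameter combinations are actually determined by the data.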
The relevance of structural identifiability analysis in biological
modeling is shown by some recent examples including HIV and
oncological models. Structural identifiability is tested with our
freely available software DAISY (Differential Algebra for
Identifiability of SYstems).
Video recording is available here.
Slides of the talk are available here.
In this talk, we place cryo-EM reconstruction inside a general framework for statistical estimation under noisy group actions: orbit retrieval. We show a tight relation between the sample complexity of orbit retrieval and the invariant theory of the underlying symmetry group. On the algebra side, this motivates apparently new questions in invariant theory and computational algebraic geometry, to which we offer partial algorithmic answers in the general case.
Work-in-progress extending basic orbit retrieval is also presented,
leading to specially structured problems in tensor decomposition and
non-convex optimization. For the cryo-EM case, we validate our
methodology on real data, in the context of ab initio modeling.
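A toy instance of estimation under a group action (my own illustration, far simpler than the cryo-EM setting): the cyclic group acts on discrete signals by shifts, and the circular autocorrelation is a polynomial invariant of that action, so it can only ever identify a signal up to shift.

```python
# Circular autocorrelation: a shift-invariant polynomial of the signal.
def autocorr(x):
    n = len(x)
    return [sum(x[i] * x[(i + s) % n] for i in range(n)) for s in range(n)]

x = [1.0, 2.0, 0.5, -1.0]
shifted = x[2:] + x[:2]           # apply a group element (cyclic shift by 2)
print(autocorr(x) == autocorr(shifted))   # True: constant on the orbit
```

Which invariants suffice to pin down the orbit, and from how many noisy samples, is exactly the kind of question the sample-complexity/invariant-theory relation above addresses.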
Joint work with Amit Singer's group and Afonso Bandeira's group.
Video recording is available here.
Slides of the talk are available here.
I will sketch the background, before explaining Esterov's
classification and ongoing work identifying some of the
imprimitive Galois groups for polynomial systems.
Video recording is available here.
Depending on time, I will conclude with specific examples associated
with gene regulatory networks.
Video recording is available here.
This is based on joint work with T. Boege, A. D'Ali, and B. Sturmfels.
Video recording is available here.
Algebraic statistics studies statistical models through the lens
of algebra, geometry, and combinatorics. From model selection to
inference, this interdisciplinary field has seen applications in
a wide range of statistical procedures. In this talk, I will
review maximum likelihood estimation and the maximum likelihood
degree (ML degree) for discrete models. In particular, I will
explain how the ML degree measures the algebraic complexity
of the MLE point estimate. Next, I will
show how the monodromy homotopy method can be used to determine
a point estimate. Moreover, if time permits, I will also discuss
ongoing work regarding the algebraic degree of GMM. There will
be illustrative examples throughout the talk, and a background
in algebra will not be assumed.
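As a tiny concrete instance (my own toy example, not from the talk): the 2x2 independence model has ML degree 1, so its MLE is a rational function of the data, namely the normalized product of row and column marginals.

```python
# MLE for the 2x2 independence model (ML degree 1): the point estimate
# is the rational function (row marginal) * (column marginal) / n^2.
u = [[10, 20], [30, 40]]                       # observed counts u_ij
n = sum(map(sum, u))                           # sample size, here 100
row = [sum(r) for r in u]                      # row marginals
col = [sum(u[i][j] for i in range(2)) for j in range(2)]   # column marginals
mle = [[row[i] * col[j] / n**2 for j in range(2)] for i in range(2)]
print(mle[0][0])                               # 0.12 = 30 * 40 / 100^2
print(round(sum(map(sum, mle)), 12))           # 1.0: probabilities sum to one
```

For models of higher ML degree there is no such closed form, which is where numerical methods such as monodromy homotopies come in.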
Video recording is available here.
It was successfully applied by Sylvester and Macaulay to checking the consistency of polynomial equations over the complex numbers, reducing it to that of linear equations, and by Kolchin to differential equations, reducing it to that of polynomial equations.
In this talk, we share our ongoing effort to apply the technique to polynomial equations/inequalities over the real numbers, reducing it to that of linear equations/inequalities.
This is a joint work with Brittany Riggs.
Video recording is available here.
1) We prove that using random sparse and structured sampling enables rank-r approximation of the average input matrix having numerical rank r.
2) We prove that Gaussian elimination with no pivoting (GENP) is numerically safe for the average nonsingular and well-conditioned matrix preprocessed with a nonsingular and well-conditioned f-circulant or another structured multiplier. This can be an attractive alternative to the customary Gaussian elimination with partial pivoting (GEPP).
3) By using structured matrices of a large family we compress large-scale neural networks while retaining high accuracy. The results of our extensive experiments are in good accordance with our theoretical study.
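Item 2 can be sketched on a toy instance (illustrative only: a plain circulant multiplier and a 2x2 matrix, not the paper's f-circulant analysis). GENP breaks down on A because its leading pivot is zero, yet succeeds on C A, and C A x = C b has the same solution x as A x = b for nonsingular C.

```python
def genp_solve(A, b):
    """Gaussian elimination with no pivoting; raises on a zero pivot."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        if M[k][k] == 0:
            raise ZeroDivisionError("zero pivot at step %d" % k)
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            M[r] = [v - fac * w for v, w in zip(M[r], M[k])]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

A = [[0.0, 1.0], [1.0, 0.0]]      # GENP fails here: A[0][0] == 0
b = [1.0, 2.0]
C = [[1.0, 2.0], [2.0, 1.0]]      # circulant multiplier generated by (1, 2)
CA = [[sum(C[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
Cb = [sum(C[i][k] * b[k] for k in range(2)) for i in range(2)]
print(genp_solve(CA, Cb))         # [2.0, 1.0], which also solves A x = b
```

The point of the structured multiplier is that this preprocessing costs only near-linear time, unlike multiplication by a dense random matrix.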
On the surface, these problems are entirely unrelated. But they all share an important similarity from a computational point of view: in the worst case, they require quadratic running time, but in many cases the size of the output is significantly less than the worst-case bound suggests. This happens because of overlapping matches in pattern matching, repeated sums in X+Y, or colliding terms in polynomial multiplication. So it is reasonable to seek output-sensitive algorithms, whose running time decreases when the output size is small.
As it turns out, the key to solving all three problems comes from
recent advances in sparse polynomial interpolation, that is,
discovering a formula for an unknown polynomial after sampling it
at a small number of chosen points. We will outline the connection
between each of these important problems and sparse interpolation,
and present a randomized algorithm to compute the answer to all
three in "nearly linear" expected running time.
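A minimal illustration of the output-size phenomenon (illustrative numbers of my own): for X = Y = {0, ..., n-1}, the n^2 pairwise sums collapse to only 2n - 1 distinct values, so an output-sensitive algorithm has far less to compute than the worst-case bound suggests.

```python
# Repeated sums in the X + Y problem: quadratically many pairs,
# linearly many distinct values when X and Y are arithmetic progressions.
n = 200
X = Y = list(range(n))
sums = {a + b for a in X for b in Y}
print(len(sums), n * n)           # 399 distinct sums out of 40000 pairs
```

The same collapse occurs for colliding terms in sparse polynomial multiplication, which is what the sparse-interpolation machinery exploits.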
Based on our previous work on computing the real/complex roots of a (dense) univariate polynomial, we propose an algorithm that, for a given integer L, returns disjoint disks D_i in the complex plane and corresponding positive integers m_i such that each disk D_i is centered at the real axis, has radius less than 2^{-L}, and contains exactly m_i roots of f counted with multiplicity. The sum of all m_i is upper bounded by 2k and, in addition, each real root of f is contained in one of the disks. Hence, our algorithm computes (absolute) L-bit approximations of all real roots but may also return approximations of some non-real roots with small imaginary parts.
We show that the bit complexity of our algorithm is polynomial in k and log n, and even near-linear in L and τ, where
2^{-τ} and 2^{τ} constitute lower and upper bounds on the absolute values of the non-zero coefficients of f, respectively.
If f has only simple real roots, our algorithm can also be used to isolate all real roots, in which case the bit complexity is polynomial
in k and log n, and near-linear in τ and -log(s), where s denotes the separation of f.
For the problem of isolating the real roots of a very sparse polynomial f with integer coefficients,
our method performs near-optimally.
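For intuition only, here is a naive bisection sketch under strong assumptions of my own (simple real roots, pairwise separation greater than 1, all roots in the scanned range); it is not the certified algorithm of the talk, but it shows what "isolating intervals refined to width 2^{-L}" means.

```python
# Isolate and refine the simple real roots of f(x) = x^3 - 4x + 1
# to interval width at most 2^-L (toy example, naive sign-change scan).
def f(x): return x**3 - 4*x + 1

L = 20
roots = []
for a in range(-8, 8):            # scan unit intervals for sign changes
    lo, hi = float(a), float(a + 1)
    if f(lo) * f(hi) < 0:
        while hi - lo > 2.0**-L:  # bisect down to width 2^-L
            mid = (lo + hi) / 2
            lo, hi = (lo, mid) if f(lo) * f(mid) < 0 else (mid, hi)
        roots.append((lo, hi))
print(len(roots))                 # 3 isolating intervals
```

The certified algorithm replaces the sign-change scan with rigorous root counting in disks and achieves the complexity bounds stated above.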
Systems of polynomial equations with
parameters arise in many fields, such as geometric computing,
flexibility of molecules, chemical reactions, game theory, operations
research, and differential equations. Here we will focus on recent work
with Bela Palancz and Joseph Awange on image analysis. Given a point
cloud created by a laser scan, we want to discern underlying shapes.
This entails separating "inliers" from "outliers" by the maximization of
the likelihood function of a Gaussian mixture distribution. We will
talk about trying to solve the resulting polynomial system with
resultants and with Groebner bases.
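As a toy illustration of the resultant approach (my own minimal example, not the image-analysis system of the talk): the resultant of two univariate polynomials, computed as the determinant of their Sylvester matrix, vanishes exactly when they share a root, which is what lets resultants eliminate variables from polynomial systems.

```python
# Resultant via the Sylvester matrix (coefficient lists, highest degree first).
def sylvester(f, g):
    m, n = len(f) - 1, len(g) - 1
    S = [[0] * (m + n) for _ in range(m + n)]
    for i in range(n):
        S[i][i:i + m + 1] = f     # n shifted copies of f
    for i in range(m):
        S[n + i][i:i + n + 1] = g # m shifted copies of g
    return S

def det(M):
    """Laplace expansion along the first row (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

f = [1, -3, 2]                         # x^2 - 3x + 2 = (x - 1)(x - 2)
print(det(sylvester(f, [1, 0, -1])))   # 0: shares the root x = 1 with x^2 - 1
print(det(sylvester(f, [1, 0, -9])))   # 40: no common root with x^2 - 9
```

Groebner bases play the analogous elimination role for systems in several variables.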
This is an applied talk, with no mathematical theory beyond the senior undergraduate level. Video recording is available here.
The seminar activities are partially supported by the National Science Foundation.