1) We prove that using random sparse and structured sampling enables rank-r approximation of the average input matrix having numerical rank r.
2) We prove that Gaussian elimination with no pivoting (GENP) is numerically safe for the average nonsingular and well-conditioned matrix preprocessed with a nonsingular and well-conditioned f-circulant or another structured multiplier. This can be an attractive alternative to the customary Gaussian elimination with partial pivoting (GEPP).
3) By using structured matrices from a large family, we compress large-scale neural networks while retaining high accuracy. The results of our extensive experiments are in good accordance with those of our theoretical study.
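To make item 2 concrete, here is a minimal NumPy sketch, not the paper's algorithm: we draw a random dense matrix as a stand-in for an "average" well-conditioned input, multiply it by a random circulant (the f = 1 case), and run Gaussian elimination with no pivoting on the product. The function name `genp` and all parameters are our own illustrative choices.

```python
import numpy as np

def genp(A):
    """Gaussian elimination with no pivoting: A = L U.
    Fails (divides by a tiny pivot) when pivoting would be needed."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k+1:, k] = A[k+1:, k] / A[k, k]            # multipliers below the pivot
        A[k+1:, k:] -= np.outer(L[k+1:, k], A[k, k:])  # eliminate column k
    return L, np.triu(A)

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))                      # random stand-in for an "average" input

# random circulant multiplier C: column j is the first column c shifted cyclically by j
c = rng.standard_normal(n)
C = np.column_stack([np.roll(c, j) for j in range(n)])

L, U = genp(A @ C)                                   # preprocess, then eliminate without pivoting
err = np.linalg.norm(L @ U - A @ C) / np.linalg.norm(A @ C)
```

For a generic random input the relative residual `err` stays near machine precision, i.e. no pivoting is needed after the circulant preprocessing; a circulant multiplier is attractive in practice because the product A @ C can be applied via FFTs in O(n^2 log n) rather than O(n^3) time.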
On the surface, these problems are entirely unrelated. But they all share an important similarity from a computational point of view: in the worst case, they require quadratic running time, but in many cases the size of the output is significantly less than the worst-case bound suggests. This happens because of overlapping matches in pattern matching, repeated sums in X+Y, or colliding terms in polynomial multiplication. So it is reasonable to seek output-sensitive algorithms, whose running time decreases when the output size is small.
As it turns out, the key to solving all three problems comes from
recent advances in sparse polynomial interpolation, that is,
discovering a formula for an unknown polynomial after sampling it
at a small number of chosen points. We will outline the connection
between each of these important problems and sparse interpolation,
and present a randomized algorithm to compute the answer to all
three in "nearly linear" expected running time.
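To give a flavor of sparse interpolation, here is a toy Prony-style sketch in the spirit of the Ben-Or-Tiwari algorithm, assuming the number of terms t is known and working in floating point (production algorithms use modular arithmetic and more careful term detection). The name `sparse_interpolate` and the evaluation base `omega` are our own illustrative choices.

```python
import numpy as np

def sparse_interpolate(f, t, omega=2.0):
    """Recover the t terms c_j * x**e_j of a black-box polynomial f
    from 2t evaluations at powers of omega (Prony / Ben-Or-Tiwari style)."""
    a = np.array([f(omega**i) for i in range(2 * t)])
    # the sequence a_i satisfies a t-term linear recurrence; solve a Hankel
    # system for its coefficients lam_0, ..., lam_{t-1}
    H = np.array([[a[i + j] for j in range(t)] for i in range(t)])
    lam = np.linalg.solve(H, -a[t:2 * t])
    # the recurrence's characteristic polynomial has roots omega**e_j
    roots = np.roots(np.concatenate(([1.0], lam[::-1])))
    exps = np.rint(np.log(roots.real) / np.log(omega)).astype(int)
    # recover the coefficients from a t x t Vandermonde system
    V = np.array([[float(omega)**(e * i) for e in exps] for i in range(t)])
    coeffs = np.linalg.solve(V, a[:t])
    return sorted(zip(exps, coeffs))

# example: f(x) = 3x^7 + 5x^2 - 1 has t = 3 terms
terms = sparse_interpolate(lambda x: 3 * x**7 + 5 * x**2 - 1, 3)
```

The point of the example is the sample count: 2t evaluations suffice regardless of the degree, which is what makes the running time depend on the (small) output size rather than on the dense degree bound.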
Based on our previous work on computing the real/complex roots of a (dense) univariate polynomial, we propose an algorithm that, for a given integer L, returns disjoint disks D_{i} in the complex plane and corresponding positive integers m_{i} such that each disk D_{i} is centered on the real axis, has radius less than 2^{-L}, and contains exactly m_{i} roots of f counted with multiplicity. The sum of all m_{i} is bounded above by 2k and, in addition, each real root of f is contained in one of the disks. Hence, our algorithm computes (absolute) L-bit approximations of all real roots but may also return approximations of some non-real roots with small imaginary parts.
We show that the bit complexity of our algorithm is polynomial in k and log n, and even near-linear in L and τ, where
2^{-τ} and 2^{τ} constitute lower and upper bounds on the absolute values of the non-zero coefficients of f, respectively.
If f has only simple real roots, our algorithm can also be used to isolate all real roots, in which case the bit complexity is polynomial
in k and log n, and near-linear in τ and -log(s), where s denotes the separation of f.
For the problem of isolating the real roots of a very sparse polynomial f with integer coefficients,
our method performs near-optimally.
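As a toy illustration of the output contract (not of the sparse algorithm above), the following sketch uses NumPy's dense root finder to produce disks of radius less than 2^{-L} centered on the real axis that cover all real roots and possibly some nearly-real ones; `real_root_disks` is our own illustrative name.

```python
import numpy as np

def real_root_disks(coeffs, L):
    """Return (center, radius, multiplicity) disks on the real axis with
    radius < 2**-L that together contain every real root of the polynomial."""
    roots = np.roots(coeffs)                         # all complex roots, dense method
    radius = 2.0 ** (-L)
    near_real = roots[np.abs(roots.imag) < radius]   # real and nearly-real roots
    disks = []                                       # list of (center, multiplicity)
    for r in near_real:
        for i, (c, m) in enumerate(disks):
            if abs(r.real - c) < radius:             # merge roots closer than the radius
                disks[i] = (c, m + 1)
                break
        else:
            disks.append((r.real, 1))
    return [(c, radius, m) for c, m in disks]

# f(x) = x^5 - 3x + 1 is sparse (3 terms) and has exactly three real roots
disks = real_root_disks([1, 0, 0, 0, -3, 1], L=20)
```

Note the contract matches the abstract: each disk carries a multiplicity count, and a non-real root whose imaginary part is below 2^{-L} would legitimately be reported as well.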
Abstract: Systems of polynomial equations with
parameters arise in many fields, such as geometric computing,
flexibility of molecules, chemical reactions, game theory, operations
research, and differential equations. Here we will focus on recent work
with Bela Palancz and Joseph Awange on image analysis. Given a point
cloud created by a laser scan, we want to discern underlying shapes.
This entails separating "inliers" from "outliers" by the maximization of
the likelihood function of a Gaussian mixture distribution. We will
talk about trying to solve the resulting polynomial system with
resultants and with Groebner bases.
This is an applied talk with no mathematical theory beyond the senior undergraduate level.
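For readers unfamiliar with the elimination step, here is a toy SymPy example, unrelated to the image-analysis system itself: a lex Groebner basis of a two-equation system eliminates one variable, leaving a univariate polynomial whose roots can then be back-substituted.

```python
import sympy as sp

x, y = sp.symbols('x y')

# toy system: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2;
# a lex Groebner basis with x > y eliminates x from the last basis element
G = sp.groebner([x**2 + y**2 - 4, y - x**2], x, y, order='lex')
elim = G.exprs[-1]          # univariate in y; solve it, then back-substitute for x
```

The same elimination idea scales (with considerably more effort) to the parametric systems arising from maximizing a Gaussian-mixture likelihood; a resultant of the two input polynomials with respect to x yields the same univariate eliminant.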