The Complexity of Accurate Floating Point Computation
University of California at Berkeley
Abstract: We consider the complexity of accurately evaluating floating point expressions, by which we mean computing them with some guaranteed relative accuracy, and identify a class of expressions for which this can be done in time polynomial in the size of the expression and the data. We also exhibit a simple expression outside this class for which we have strong evidence that accurate evaluation cannot be done in polynomial time. We extend these results to accurate versions of certain matrix computations, including the inverse, the LU decomposition, and the SVD.
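As a concrete illustration (not from the abstract itself) of what "guaranteed relative accuracy" means and why it is nontrivial: evaluating exp(x) - 1 naively for tiny x suffers catastrophic cancellation, losing most significant digits even though each individual operation is correctly rounded, while the library routine expm1 evaluates the same expression to full relative accuracy.

```python
import math

# Illustrative sketch: "accurate" means small *relative* error.
x = 1e-12

# Naive evaluation: exp(x) rounds to a double very close to 1, and the
# subtraction cancels almost all the significant digits of the result.
naive = math.exp(x) - 1.0       # relative error around 1e-4

# Accurate evaluation of the same expression via a dedicated routine.
accurate = math.expm1(x)        # relative error around 1e-16

rel_err_naive = abs(naive - accurate) / accurate
```

Here `accurate` agrees with the true value x + x**2/2 + ... to nearly machine precision, while `naive` has only a handful of correct digits.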
If time permits, we will also present a recent result showing that the complexity of approximate condition estimation is as large as that of "verifying" matrix multiplication. Together with the widely believed conjecture that verifying A*B = 0 cannot be done more cheaply than multiplying A and B, this implies that all fast condition estimators in widespread use have counterexamples, i.e., matrices for which their estimates are arbitrarily wrong.
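For context on the "verifying" side (an illustrative aside, not part of the talk): Freivalds' classical randomized algorithm can test A*B = 0 in O(n^2) time per round with one-sided error, so the conjecture is naturally read as concerning guaranteed verification. A minimal NumPy sketch of the randomized check, using integer-valued matrices so the arithmetic is exact:

```python
import numpy as np

def freivalds_zero_check(A, B, rounds=64, rng=None):
    """Randomized test of whether A @ B == 0, in O(rounds * n^2) time.

    Each round draws a random 0/1 vector x and tests A @ (B @ x) == 0.
    If A @ B != 0, a single round misses with probability at most 1/2,
    so a "True" answer is wrong with probability at most 2**-rounds.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = B.shape[1]
    for _ in range(rounds):
        x = rng.integers(0, 2, size=n).astype(float)
        if np.any(A @ (B @ x) != 0):
            return False  # x is a certificate that A @ B != 0
    return True  # A @ B == 0 with high probability

rng = np.random.default_rng(0)
A = rng.integers(0, 5, size=(20, 20)).astype(float)

# Product is exactly zero: the check always passes.
zero_case = freivalds_zero_check(A, np.zeros((20, 20)), rng=rng)

# Product is nonzero (ones @ ones = n * ones): the check detects it.
ones = np.ones((20, 20))
nonzero_case = freivalds_zero_check(ones, ones, rng=rng)
```

The point of the sketch is the cost gap: each round costs two matrix-vector products, O(n^2), versus the O(n^3)-ish cost of forming A @ B outright.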
This is joint work with Plamen Koev and Ben Diament.