Hemant Tyagi (INRIA Lille), November 20, 2020
The problem of learning a d-variate function f from its samples in a compact domain of R^d is a classical one that has been studied extensively in statistics and numerical analysis.
In general, if we only make smoothness assumptions on f, then the number of samples needed for a reliable approximation of f grows exponentially with d. This is the well-known curse of dimensionality, and a common way to bypass it is to make additional structural assumptions on f. One such class of functions is that of sparse additive models, where f can be written as a sum of a sparse number of univariate, bivariate, ..., r-variate functions, with r ≪ d. We will focus on such models assuming black-box access to f and derive randomized algorithms that query f at a carefully constructed set of points and learn f with a sample complexity depending only mildly on d. Existing literature for such models is predominantly for r = 2. This is joint work with Jan Vybíral.
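As a minimal illustration of the model class (not the authors' algorithm), the sketch below builds a sparse additive function of d = 20 variables from s = 3 univariate components; the active coordinates and component functions are hypothetical choices for the example. Querying such a black box at points that vary only one coordinate isolates that coordinate's contribution, which hints at why carefully constructed query points can recover the sparse structure with few samples.

```python
import numpy as np

# Hypothetical sparse additive model: only 3 of the 20 coordinates matter.
d = 20
active = [2, 7, 15]                      # assumed active coordinates
components = [np.sin, np.cos, np.tanh]   # assumed univariate components

def f(x):
    """Black-box oracle: sum of univariate functions of a few coordinates."""
    return sum(g(x[i]) for i, g in zip(active, components))

# A query that is zero everywhere except one coordinate exposes only that
# coordinate's component (the inactive/other components contribute constants,
# here 0 since sin(0) = tanh(0) = 0).
x = np.zeros(d)
x[7] = 0.5
print(f(x))   # equals cos(0.5), since only coordinate 7 was perturbed
```

Here s queries per coordinate direction would already distinguish active from inactive variables, independently of how large d is; the algorithms in the talk refine this idea with randomized, carefully structured query sets.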