Workshop On Local Algorithms (WOLA) 2022

Local algorithms – that is, algorithms that compute and make decisions on parts of the output considering only a portion of the input – have been studied in a number of areas in theoretical computer science and mathematics. Some of the related areas include sublinear-time algorithms, distributed algorithms, streaming algorithms, (massively) parallel algorithms, inference in large networks, and graphical models. These communities have similar goals but a variety of approaches, techniques, and methods.

This workshop is aimed at fostering dialogue and cross-pollination of ideas between the various communities. The workshop will feature a small number of longer spotlight talks that, in part, survey approaches by various communities, as well as a poster session focused on recent, exciting results.

This workshop is part of the TUgbOAT project, which has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 772346).

The workshop is also an integral part of the Simons Semester.

WOLA has been organized since 2016. Previous WOLA webpages can be found at: 2016, 2018, 2019, 2020, 2021.

———————————————

When: June 25th – 27th, 2022
Where: The workshop will be held in hybrid mode:

  1. in-person at BUW Aula (University of Warsaw Old Library) at Krakowskie Przedmieście 26/28 in Warsaw and
  2. at Zoom meeting:  https://zoom.us/j/98206494746?pwd=Qy83V0VjdWJLT1lROHY5dVh6WG9SUT09; Meeting ID: 982 0649 4746; Passcode: 975518

Program Committee:

Noga Alon (Princeton University)
Robert Krauthgamer (Weizmann Institute of Science)
Sepideh Mahabadi (Microsoft Research)
Krzysztof Onak (chair, Boston University)
Noga Ron-Zewi (University of Haifa)
Jukka Suomela (Aalto University)

Local organizers:

Piotr Sankowski, University of Warsaw & IDEAS NCBR

———————————————

JUNIOR-SENIOR LUNCHES

The idea of these meetings is that several junior participants meet a senior participant for an informal discussion. This is a great opportunity to learn more about the community, career choices and possibilities, and many other aspects of life in academia and beyond. In particular, we are asking senior participants to sign up early as the number of slots for junior participants is limited by the number of senior participants. Thank you for providing this opportunity to more junior members of the community.

———————————————

SCHEDULE

Saturday (June 25, 2022)

  • 2:30pm – 2:40pm – Opening remarks
  • 2:40pm – 3:00pm – Tamalika Mukherjee “Differentially private sublinear algorithms”
  • 3:00pm – 3:30pm – Slobodan Mitrović “Faster algorithms for (1+ε)-approximate maximum matchings”

3:30pm – 4:00pm – coffee break

  • 4:00pm – 5:00pm – SPOTLIGHT: Vincent Cohen-Addad “Recent Progress on Local Clustering Algorithms”
  • 5:50pm – 8:20pm – Evening tour of Warsaw by bus and tasting of Polish specialties

———————————————

Sunday (June 26, 2022)

  • 10:00am – 11:00am – SPOTLIGHT: Rotem Oshman “Distributed Property Testing”
  • 11:00am – 11:30am – Guy Even “An Extendable Data Structure for Incremental Stable Perfect Hashing”
  • 11:30am – 11:40am – Community announcements
  • 11:40am – 12:15pm – Bernhard Haeupler “TCS: Powered by Neurodiversity & Exceptional Brains” with discussion

12:15pm – 2:15pm – lunch break

  • 2:15pm – 2:45pm – Jane Lange “Properly learning monotone functions via local corrections”
  • 2:45pm – 3:15pm – “Connections between different types of local algorithms (success stories and unexplored connections)”

Please contribute!

  • 3:15pm – 4:15pm – Open problem session
  • 4:15pm – 4:35pm – Zihan Tan “Query Complexity of the Metric Steiner Tree Problem”
  • 4:35pm – 4:55pm – Tal Wagner “Generalization Bounds for Data-Driven Numerical Linear Algebra”

———————————————

Monday (June 27, 2022)

  • 9:00am – 9:30am – Hendrik Fichtenberger “From Round-Adaptive Queries to Multi-Pass Streaming”
  • 9:30am – 10:00am – Ravi Kumar
  • 10:00am – 11:00am – SPOTLIGHT: Michael Kapralov “Simulating Random Walks in Random Streams”

11:00am – 11:30am – coffee break

  • 11:30am – 12:00pm – Christian Sohler “A Sublinear Local Access Implementation for the Chinese Restaurant Process”
  • 12:00pm – 12:30pm – Runtian Ren “Min-cost perfect matching with concave delays”

12:30pm – 2:30pm – lunch break

  • 2:30pm – 3:30pm – SPOTLIGHT: Bernhard Haeupler “The Quest for Universally-Optimal Distributed Algorithms”
  • 3:30pm – 3:50pm – Maryam Aliakbarpour “Estimation of Entropy in Constant Space”
  • 3:50pm – 4:10pm – Rajarshi Bhattacharjee “Sublinear Time Eigenvalue Approximation via Random Sampling”
  • 4:10pm – 4:30pm – Minshen Zhu “Exponential Lower Bounds for Locally Decodable and Correctable Codes for Insertions and Deletions”
  • 4:30pm – 4:50pm – Omri Ben-Eliezer “Exponentially Better Approximation in Tree-Based Nearest Neighbors” 

———————————————

Spotlight talks

Michael Kapralov (EPFL)

Title: Simulating Random Walks in Random Streams

I will talk about the random order graph streaming model, where a worst-case graph is presented to the algorithm as a randomly ordered stream of edges. The random order assumption enables the use of statistical estimation primitives, which turns out to be a surprisingly powerful approach: several fundamental graph problems, such as matching size estimation, component counting, and the evaluation of bounded degree constant query testable properties, have recently been shown to admit remarkably space-efficient random order streaming algorithms.

The main results of this talk are a space efficient single pass random order streaming algorithm for simulating nearly independent random walks that start at uniformly random vertices of an undirected graph, and a strong lower bound ruling out a similar result for directed graphs.

==========

Vincent Cohen-Addad (Google Research)

Title: Recent Progress on Local Clustering Algorithms

Clustering algorithms are at the heart of numerous modern unsupervised machine learning and data mining applications. To achieve scalability, locality is key: We look for algorithms that can construct a clustering by processing parts of the input independently. We will review recent progress made on this domain for various clustering objectives, such as k-means and k-median, and correlation clustering. We will discuss several potential research directions.

==========

Rotem Oshman (Tel Aviv University)

Distributed Property Testing

In this talk I will give an overview of some recent work on distributed property testing in the CONGEST model, a distributed network model with bounded communication bandwidth. The talk will focus on two problems: subgraph-freeness and uniformity testing. In the subgraph-freeness problem, we wish to check whether the network graph contains a copy of some fixed constant-sized subgraph H; this problem has received significant attention, and efficient algorithms are known for some specific subgraphs, but many questions remain open, even in the exact (non-property testing) case. In the uniformity-testing problem, each network node receives a small number of samples from some unknown distribution mu, and our goal is to check whether mu is the uniform distribution or whether it is far from uniform (in L1 distance). I will describe recent upper and lower bounds on the problem, and discuss open problems.

==========

Bernhard Haeupler (CMU)

The Quest for Universally-Optimal Distributed Algorithms

Many distributed optimization algorithms achieve an existentially-optimal round complexity (of ~O(sqrt(n) + D)), i.e., there exists some pathological worst-case topology on which no algorithm can be faster. However, most networks of interest allow for exponentially faster algorithms. This motivates two questions:

1. What network topology parameters determine the complexity of distributed optimization?
2. Are there universally-optimal algorithms that are as fast as possible on every single topology?

This talk provides an overview of the freshly-completed 6-year program that resolves these 25-year-old open problems for a wide class of global network optimization problems, including MST, (1 + epsilon)-min cut, various approximate shortest path problems, sub-graph connectivity, etc. We provide several equivalent graph parameters that are tight universal lower bounds for the above problems, fully characterizing their inherent complexity. We also give the first universally-optimal algorithms approximately achieving this complexity on every topology.

The quest for universally-optimal distributed algorithms required novel techniques that also answer fundamental (open) questions in seemingly unrelated fields, such as, network information theory, approximation algorithms, (oblivious) packet routing, (algorithmic & topological) graph theory, and metric embeddings. Generally, the problems addressed in these fields explicitly or implicitly ask to jointly optimize ℓ∞ & ℓ1 parameters such as congestion & dilation, communication rate & delay, capacities & diameters of subnetworks, or the makespan of packet routings. In particular, results obtained on the way include the following firsts: (Congestion+Dilation)-Competitive Oblivious Routing, Network Coding Gaps for Completion-Times, Hop-Constrained Expanders & Expander Decompositions, Bi-Criteria (Online / Demand-Robust) Approximation Algorithms for many Diameter-Constrained Network Design Problems (e.g., (Group) Steiner Tree/Forest), Makespan-Competitive (Compact and Distributed) Routing Tables, and (Probabilistic) Tree Embeddings for Hop-Constrained Distances.

Joint work with M. Ghaffari, G. Zuzic, D.E. Hershkowitz, D. Wajc, J. Li, H. Raecke, and T. Izumi.

———————————————

Contributed talks

Slobodan Mitrović (UC Davis)

Faster algorithms for (1+ε)-approximate maximum matchings

In this talk I will present a recent result on computing (1+ε)-approximate maximum matchings. I will outline a deterministic approach that solves this problem in poly(1/ε) semi-streaming passes. This algorithm exponentially improves on the well-known randomized exp(O(1/ε))-pass algorithm from the seminal work by McGregor’05 and the recent deterministic algorithm by Tirodkar’18.

I will also briefly touch on how these ideas extend to other models of computation. In particular, they yield a poly(log n, 1/ε)-round algorithm for computing (1+ε)-approximate maximum matchings in CONGEST. In terms of the dependence on 1/ε, this exponentially improves on the state-of-the-art result by Lotker, Patt-Shamir, and Pettie’15.

This is based on joint work with Manuela Fischer and Jara Uitto.

==========

Guy Even (TAU)

An Extendable Data Structure for Incremental Stable Perfect Hashing

We consider the problem of dynamically assigning n_t elements unique hashcodes in the range [(1+o(1))n_t]. This problem is known as perfect hashing and is considered a fundamental building block in the design of more involved data structures.

The challenge we address is that of designing a data structure that meets several, seemingly opposing, requirements:

(1) the range and the space of the data structure must be, at all times, proportional to the current cardinality n_t of the input set, and

(2) the hashcodes must be stable while an element is continuously in the set.

A simple argument shows that these two desiderata are impossible to achieve when arbitrary deletions and insertions are allowed.

In this paper, we show that one can achieve these requirements when only insertions occur and, more generally, when the hashcode range and the space are allowed to grow as a function of the maximum cardinality of the set until time t. The data structure executes all operations in worst case constant time with high probability and requires space that is within a constant factor of the lower bound.

Applications include:

(1) A hash table design that does not need to move elements as its size grows.

(2) A compact dynamic dictionary with constant time operations requiring only O(log log n) bits per element above the lower bound. This construction applies both to the extendable and non-extendable settings.

Joint work with Ioana Bercea. Accepted to STOC 2022.

==========

Rajarshi Bhattacharjee (University of Massachusetts Amherst)

Sublinear Time Eigenvalue Approximation via Random Sampling

We study the problem of approximating the eigenspectrum of a symmetric matrix A with bounded entries. We present a simple sublinear time algorithm that approximates all eigenvalues of A up to additive error εn using those of a randomly sampled O((log n/ε)^3) × O((log n/ε)^3) principal submatrix. Our result can be viewed as a concentration bound on the full eigenspectrum of a random submatrix, significantly extending known bounds on just the top eigenvalue (the spectral norm). When A is sparse and rows of A can be efficiently sampled with probabilities proportional to their sparsity, we present an improved error bound of ε·nnz(A), where nnz(A) is the number of non-zero entries in A. Our results are the first to take advantage of the sparsity of A. From a technical perspective, our bounds require several new eigenvalue concentration and perturbation bounds for matrices with bounded entries.
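
As a rough illustration of the sampling idea (a minimal sketch of uniform principal-submatrix sampling, not the paper's algorithm; the function name and the toy matrix are ours), one can estimate the spectrum from a random s × s principal submatrix and rescale its eigenvalues by n/s:

```python
import numpy as np

def sampled_eigenvalues(A, s, rng=None):
    """Estimate the eigenvalues of a symmetric matrix A with bounded entries
    from a uniformly random s x s principal submatrix, rescaled by n/s."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    idx = rng.choice(n, size=s, replace=False)   # uniform row/column sample
    sub = A[np.ix_(idx, idx)]                    # principal submatrix
    return (n / s) * np.linalg.eigvalsh(sub)     # ascending, rescaled

# Toy check: the all-ones matrix has eigenvalues {n, 0, ..., 0}; any s x s
# principal submatrix is all-ones with top eigenvalue s, so the rescaled
# estimate recovers n exactly.
n = 2000
A = np.ones((n, n))
est = sampled_eigenvalues(A, s=200, rng=0)
print(round(est[-1]))  # → 2000
```

The additive error guarantee in the abstract says that for general bounded-entry matrices the rescaled estimates are within εn of the true eigenvalues once s = O((log n/ε)^3).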

==========

Christian Sohler (Universität zu Köln)

A Sublinear Local Access Implementation for the Chinese Restaurant Process

The Chinese restaurant process is a stochastic process that groups objects that arrive sequentially into a variable number of classes, such that within each class objects are cyclically ordered. The process is closely related to the Dirichlet process and can also be described in terms of a Chinese restaurant, where customers arrive one by one and either sit down next to a randomly chosen customer at one of the existing tables or open a new table. The full state of the process after n steps is given by a permutation of the n objects and cannot be represented in sublinear space. In particular, if we only need specific information about a few objects or classes it would be preferable to obtain the answers without simulating the process completely.
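
For intuition, here is a naive full simulation of the classical process (concentration parameter alpha; our own sketch, which tracks table membership but not the within-table cyclic order). It instantiates the entire state in linear space, which is exactly what the talk's local access implementation avoids:

```python
import random

def chinese_restaurant_process(n, alpha=1.0, seed=0):
    """Naive full simulation: customer t joins an existing table with
    probability t/(t+alpha) (choosing a seated customer uniformly, so a
    table is picked proportionally to its size), else opens a new table."""
    rng = random.Random(seed)
    tables = []  # tables[i] = list of customers at table i (membership only)
    for t in range(n):
        if rng.random() < alpha / (t + alpha):
            tables.append([t])            # open a new table
        else:
            pick = rng.randrange(t)       # uniformly random seated customer
            for tab in tables:            # linear scan: fine for a sketch
                if pick in tab:
                    tab.append(t)
                    break
    return tables

tables = chinese_restaurant_process(1000)
# Table count varies (about log n in expectation for alpha = 1);
# the total number of customers is always 1000.
print(len(tables), sum(len(t) for t in tables))
```

Every query answered by the local access implementation (class affiliation, class sizes, founding times) could in principle be read off such a fully materialized state; the point of the paper is to answer them with polylogarithmic work instead.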

A recent line of research attempts to provide access to huge random objects without fully instantiating them. Such local access implementations provide answers to a sequence of queries about the random object, which follow the same distribution as if the object was fully generated. In this paper, we provide a local access implementation for a generalization of the Chinese restaurant process described above. Our implementation can be used to answer any sequence of adaptive queries about class affiliation of objects, number and sizes of classes at any time, position of elements within a class, or founding time of a class. The running time per query is polylogarithmic in the total size of the object, with high probability. The Chinese restaurant process is related to the computation of a preferential attachment tree in the sense that both represent a “rich-get-richer” phenomenon. Therefore, our approach relies on some ideas from the recent local access implementation for preferential attachment trees by Even et al.

A novel ingredient in our implementation is to embed the process in continuous time, in which the evolution of the different classes becomes stochastically independent. This independence is used to keep the probabilistic structure manageable even if many queries have already been answered. As similar embeddings are available for a wide range of urn processes, we believe that this approach is also interesting beyond the process considered in this paper.

Joint work with Peter Mörters and Stefan Walzer.

==========

Hendrik Fichtenberger (Google Research)

From Round-Adaptive Queries to Multi-Pass Streaming

We investigate the relationship between two popular computational models of sublinear algorithms, query algorithms and streaming algorithms. Query algorithms can probe the input arbitrarily, and their complexity is measured by the number of queries they require to compute the output. On the other hand, streaming algorithms make one or more passes on the whole input in a fixed order, and their complexity is measured by the maximum amount of space they use during the whole computation. While query complexity and space complexity may seem like orthogonal definitions, we present a reduction that relates the adaptivity of a query algorithm to the pass complexity of a corresponding streaming algorithm, and it is applicable to all algorithms in standard sublinear-time graph query models, e.g., the (augmented) general model. We show that this transformation can be used to improve state-of-the-art results for subgraph counting in streams.

==========

Maryam Aliakbarpour (BU/Northeastern)

Estimation of Entropy in Constant Space

Recent work of Acharya et al. (NeurIPS 2019) showed how to estimate the entropy of a distribution D over an alphabet of size k up to ±ε additive error by streaming over (k/ε^3) · polylog(1/ε) i.i.d. samples and using only O(1) words of memory. In this work, we give a new constant memory scheme that reduces the sample complexity to (k/ε^2) · polylog(1/ε). We conjecture that this is optimal up to polylog(1/ε) factors.
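
For context, the quantity being estimated is the Shannon entropy H(D) = −Σ_x p(x) log p(x). A plug-in estimate from i.i.d. samples (our own illustration, not the constant-memory scheme: it uses memory linear in the observed support, which is precisely what the streaming algorithms avoid) looks like:

```python
import math
from collections import Counter

def empirical_entropy(samples):
    """Plug-in estimate of H(D) = -sum_x p(x) log p(x) (in nats) from
    i.i.d. samples, using the empirical frequencies as p."""
    n = len(samples)
    counts = Counter(samples)  # memory linear in the observed support
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# The uniform distribution over k symbols has entropy log k; with an exactly
# uniform empirical distribution the plug-in estimate matches it.
k = 8
samples = [i % k for i in range(8000)]
print(empirical_entropy(samples), math.log(k))  # both ≈ log 8 ≈ 2.0794
```

The challenge addressed in the talk is achieving additive error ±ε with only O(1) words of memory, where even the `Counter` above is unaffordable.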

==========

Tamalika Mukherjee (Purdue University)

Differentially private sublinear algorithms

Differential privacy is a desirable feature of algorithms manipulating individuals’ private data and has become a standard requirement in public data analysis. Modern algorithms often work with large data sets, and therefore, another desirable feature is that they run in time that is sublinear in the size of the input. In this talk, I will present our recent work on differentially-private sublinear algorithms for several combinatorial problems, such as clustering and estimation of graph parameters. I will conclude with several broad open problems.

==========

Jane Lange (MIT)

Properly learning monotone functions via local correction

We give a $2^{\tilde{O}(\sqrt{n}/\epsilon)}$-time algorithm for properly learning monotone Boolean functions under the uniform distribution over $\{0,1\}^n$. Our algorithm is robust to adversarial label noise and has a running time nearly matching that of the state-of-the-art improper learning algorithm of Bshouty and Tamon [BT96] and an information-theoretic lower bound of [BCO+15]. Prior to this work, no proper learning algorithm with running time smaller than $2^{\Omega(n)}$ was known to exist.

The core of our proper learner is a local computation algorithm for sorting binary labels on a poset. Our algorithm is built on a body of work on distributed greedy graph algorithms; specifically we rely on a recent work of Ghaffari and Uitto [GU19], which gives an efficient algorithm for computing maximal matchings in a graph in the LCA model of [RTVX11,ARVX11]. The applications of our local sorting algorithm extend beyond learning on the Boolean cube: we also give a tolerant tester for Boolean functions over general posets that distinguishes functions that are $\epsilon/3$-close to monotone from those that are $\epsilon$-far.

Previous tolerant testers for the Boolean cube only distinguished between $\epsilon/\Omega(\sqrt{n})$-close and $\epsilon$-far.

==========

Tal Wagner (Microsoft Research Redmond)

Generalization Bounds for Data-Driven Numerical Linear Algebra

Data-driven algorithms can adapt their internal structure or parameters to inputs from unknown application-specific distributions, by learning from a training sample of inputs. Several recent works have applied this approach to problems in numerical linear algebra, obtaining significant empirical gains in performance. However, no theoretical explanation for their success was known. In this work we prove generalization bounds for those algorithms, within the PAC-learning framework for data-driven algorithm selection proposed by Gupta and Roughgarden (ITCS 2016, SICOMP 2017). Our main results are closely matching upper and lower bounds on the fat shattering dimension of the learning-based low rank approximation algorithm of Indyk, Vakilian and Yuan (NeurIPS 2019), a data-driven variant of the CountSketch-based 2-pass streaming algorithm of Clarkson and Woodruff (STOC 2013). Our techniques are general, and provide generalization bounds for many other recently proposed data-driven algorithms in numerical linear algebra, covering both sketching-based and multigrid-based methods. This considerably broadens the class of data-driven algorithms for which a PAC-learning analysis is available.

Joint work with Peter Bartlett and Piotr Indyk.

==========

Zihan Tan (University of Chicago; Rutgers University)

Query Complexity of the Metric Steiner Tree Problem

In the metric Steiner Tree problem, we are given an n by n metric w on a set of points along with a set of k terminals, and the goal is to find a tree of minimum cost that contains all terminals. This is a well-known NP-hard problem and much of the previous work has focused on understanding its polynomial-time approximability. In this talk, I will present some results on the query complexity of the Steiner Tree problem. Specifically, if we desire an \alpha-approximate estimate of the Steiner Tree cost, how many entries need to be queried in the metric w?
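
As a baseline illustrating the question (a classical folklore bound, not a result of the talk): a minimum spanning tree over the terminals alone is a 2-approximation to the Steiner tree cost and queries only O(k^2) of the n^2 entries of w. A minimal sketch, with the function name and toy metric our own:

```python
def mst_cost_on_terminals(w, terminals):
    """2-approximation to the metric Steiner tree cost: Prim's algorithm on
    the terminals alone, touching only terminal-terminal entries of w."""
    terminals = list(terminals)
    in_tree = {terminals[0]}
    rest = set(terminals[1:])
    cost = 0.0
    while rest:
        # cheapest edge crossing the cut; queries only O(k^2) entries total
        u, v = min(((a, b) for a in in_tree for b in rest),
                   key=lambda e: w[e[0]][e[1]])
        cost += w[u][v]
        in_tree.add(v)
        rest.remove(v)
    return cost

# Toy metric: 4 points on a line, w[i][j] = |i - j|, terminals at 0 and 3.
n = 4
w = [[abs(i - j) for j in range(n)] for i in range(n)]
print(mst_cost_on_terminals(w, [0, 3]))  # → 3.0
```

The talk asks how much further the query count can be pushed below k^2 (or traded off against the approximation factor α), which this naive baseline does not address.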

==========

Omri Ben-Eliezer (MIT)

Exponentially Better Approximation in Tree-Based Nearest Neighbors

We revisit the performance of tree-based algorithms for approximate near neighbor search in high dimensions. While these algorithms have been popular in practice for decades, from the theoretical perspective they are often considered inferior compared to other classes of algorithms, most notably locality sensitive hashing.

In this talk we demonstrate that for some tree-based algorithms, one can get far better theoretical guarantees than previously thought. We propose and analyze a new data-dependent variant of the spill tree [Liu et al., 2004], a well known tree-based algorithm which allows overlap between different cells in its space partition. The (very simple) analysis, based on standard near-orthogonality results for random inner products, gives an approximation factor of c = O(sqrt{log log n}) for the Euclidean (c,r)-near neighbor search problem. This is an exponential improvement over the existing analysis of spill trees, which achieves approximation O(log n) under essentially the same query time and space constraints.
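
A minimal sketch of the underlying data structure (our own simplified variant with random median splits, an overlap parameter tau, and a "defeatist" query that never backtracks; this is not the data-dependent construction analyzed in the talk):

```python
import numpy as np

def build_spill_tree(points, idx, tau, leaf_size, rng):
    """Spill tree sketch: split along a random direction at the median;
    points within tau of the split go to *both* children, so a query can
    follow a single root-to-leaf path without backtracking."""
    if len(idx) <= leaf_size:
        return ("leaf", idx)
    d = rng.standard_normal(points.shape[1])
    d /= np.linalg.norm(d)
    proj = points[idx] @ d
    m = np.median(proj)
    left = idx[proj <= m + tau]    # overlapping ("spill") regions
    right = idx[proj >= m - tau]
    if len(left) == len(idx) or len(right) == len(idx):
        return ("leaf", idx)       # no progress: stop splitting
    return ("node", d, m,
            build_spill_tree(points, left, tau, leaf_size, rng),
            build_spill_tree(points, right, tau, leaf_size, rng))

def query(tree, points, q):
    """Defeatist search: descend one path, then scan the leaf."""
    while tree[0] == "node":
        _, d, m, lt, rt = tree
        tree = lt if q @ d <= m else rt
    leaf = tree[1]
    return leaf[np.argmin(np.linalg.norm(points[leaf] - q, axis=1))]

rng = np.random.default_rng(0)
pts = rng.standard_normal((500, 8))
tree = build_spill_tree(pts, np.arange(500), tau=0.3, leaf_size=16, rng=rng)
print(int(query(tree, pts, pts[7])))  # → 7 (a data point finds itself)
```

The overlap guarantees that any data point lies in the leaf its own query path reaches, which is why no backtracking is needed; the talk's contribution is showing that a data-dependent choice of splits yields an O(sqrt(log log n)) approximation factor for such single-path queries.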

==========

Minshen Zhu (Purdue University)

Exponential Lower Bounds for Locally Decodable and Correctable Codes for Insertions and Deletions

Locally Decodable Codes (LDCs) are error-correcting codes for which individual message symbols can be quickly recovered despite errors in the codeword. LDCs for Hamming errors have been studied extensively in the past few decades, where a major goal is to understand the amount of redundancy that is necessary and sufficient to decode from large amounts of error, with small query complexity.

In this work, we study LDCs for insertion and deletion errors, called Insdel LDCs. Their study was initiated by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), who gave a reduction from Hamming LDCs to Insdel LDCs with a small blowup in the code parameters. On the other hand, the only known lower bounds for Insdel LDCs come from those for Hamming LDCs, thus there is no separation between them. Here we prove new, strong lower bounds for the existence of Insdel LDCs. In particular, we show that 2-query linear Insdel LDCs do not exist, and give an exponential lower bound for the length of all q-query Insdel LDCs with constant q. For q≥3 our bounds are exponential in the existing lower bounds for Hamming LDCs. Furthermore, our exponential lower bounds continue to hold for adaptive decoders, and even in private-key settings where the encoder and decoder share secret randomness. This exhibits a strict separation between Hamming LDCs and Insdel LDCs.

Our strong lower bounds also hold for the related notion of Insdel LCCs (except in the private-key setting), due to an analogue to the Insdel notions of a reduction from Hamming LCCs to LDCs. Our techniques are based on a delicate design and analysis of hard distributions of insertion and deletion errors, which depart significantly from typical techniques used in analyzing Hamming LDCs.

==========

Runtian Ren (University of Warsaw)

Min-cost perfect matching with concave delays

We consider the problem of online min-cost perfect matching with concave delays, focusing on the single metric point case. Specifically, requests arrive in an online fashion at a single location. The algorithm must then choose between matching a pair of requests or delaying them to be matched later on. The cost is defined by a concave function on the delay. Given linear or even convex delay functions, matching any two available requests is trivially optimal. However, this does not extend to concave delays. We solve this by providing an O(1)-competitive algorithm that is defined through a series of delay counters. The idea of such an algorithm can be extended to deal with the bichromatic case, wherein requests have polarities and only opposite polarities may be matched.

———————————————

Registration closed