Papers made digestible
We investigate the problem of unconstrained combinatorial multi-armed bandits
with full-bandit feedback and stochastic rewards for submodular maximization.
Previous works investigate the same problem assuming a submodular and monotone
reward function. In this work, we study a more general problem in which the
reward function is not necessarily monotone and submodularity is assumed only
in expectation. We propose the Randomized Greedy Learning (RGL) algorithm and
theoretically prove that it achieves a $\frac{1}{2}$-regret upper bound of
$\tilde{\mathcal{O}}(n T^{\frac{2}{3}})$ for horizon $T$ and number of arms
$n$. We also show in experiments that RGL empirically outperforms other
full-bandit variants in submodular and non-submodular settings.
Authors: Fares Fourati, Vaneet Aggarwal, Christopher John Quinn, Mohamed-Slim Alouini.
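As context for the $\frac{1}{2}$ guarantee, the classical randomized double greedy achieves a $\frac{1}{2}$-approximation for unconstrained non-monotone submodular maximization in the offline, exact-value setting; RGL adapts this template to noisy full-bandit feedback. A minimal sketch of the offline routine (function and variable names are illustrative, not taken from the paper):

```python
import random

def randomized_double_greedy(f, ground_set, rng=random.Random(0)):
    """Offline randomized double greedy for unconstrained non-monotone
    submodular maximization (illustrative sketch; RGL replaces the exact
    oracle f with estimates built from noisy full-bandit feedback)."""
    X, Y = set(), set(ground_set)
    for u in ground_set:
        a = f(X | {u}) - f(X)      # marginal gain of adding u to X
        b = f(Y - {u}) - f(Y)      # marginal gain of removing u from Y
        a_plus, b_plus = max(a, 0.0), max(b, 0.0)
        # Add u with probability proportional to its positive gain.
        p = 1.0 if a_plus + b_plus == 0 else a_plus / (a_plus + b_plus)
        if rng.random() < p:
            X.add(u)
        else:
            Y.discard(u)
    return X  # X == Y after the final iteration
```

On a symmetric submodular function such as a cut function, the returned set achieves at least half the optimum in expectation.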
We consider the R\'enyi-$\alpha$ tripartite information $I_3^{(\alpha)}$ of
three adjacent subsystems in the stationary state emerging after global
quenches in noninteracting spin chains from both homogeneous and bipartite
states. We identify settings in which $I_3^{(\alpha)}$ remains nonzero even in
the limit of infinite lengths and develop a field-theory description. We map
the calculation into a Riemann-Hilbert problem with a piecewise constant matrix
for a doubly connected domain. We find an explicit solution for $\alpha=2$ and
an implicit one for $\alpha>2$. In the latter case, we develop a rapidly
convergent perturbation theory that we use to derive analytic formulae
approximating $I_3^{(\alpha)}$ with outstanding accuracy.
Authors: Vanja Marić, Maurizio Fagotti.
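For reference, the Rényi-$\alpha$ tripartite information is built from subsystem Rényi entropies (a standard definition; the paper's sign and normalization conventions may differ):

$$I_3^{(\alpha)}(A,B,C) = S_\alpha(A) + S_\alpha(B) + S_\alpha(C) - S_\alpha(A\cup B) - S_\alpha(A\cup C) - S_\alpha(B\cup C) + S_\alpha(A\cup B\cup C),$$

where $S_\alpha(X) = \frac{1}{1-\alpha}\log\operatorname{tr}\rho_X^{\alpha}$ is the Rényi entropy of the reduced density matrix $\rho_X$.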
Given a complete Riemannian manifold $\mathcal{M}\subset\mathbb{R}^d$ which
is a Lipschitz neighbourhood retract of dimension $m+n$, of class
$C^{3,\beta}$, without boundary and an oriented, closed submanifold $\Gamma
\subset \mathcal M$ of dimension $m-1$, of class $C^{3,\alpha}$ with
$\alpha<\beta$, which is a boundary in integral homology, we construct a
complete metric space $\mathcal{B}$ of $C^{3,\alpha}$-perturbations of $\Gamma$
inside $\mathcal{M}$ with the following property. For the typical element
$b\in\mathcal B$, in the sense of Baire categories, every $m$-dimensional
integral current in $\mathcal{M}$ which solves the corresponding Plateau
problem has an open dense set of boundary points with density $1/2$. We deduce
that the typical element $b\in\mathcal{B}$ admits a unique solution to the
Plateau problem. Moreover, we prove that, in a complete metric space of integral
currents without boundary in $\mathbb{R}^{m+n}$, metrized by the flat norm, the
typical boundary admits a unique solution to the Plateau problem.
Authors: Gianmarco Caldini, Andrea Marchese, Andrea Merlo, Simone Steinbrüchel.
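For context, the flat norm that metrizes the space of integral currents can be written as (a standard definition from geometric measure theory; the paper's precise conventions may differ):

$$\mathbb{F}(T) = \inf\left\{\mathbf{M}(T - \partial S) + \mathbf{M}(S) : S \text{ an integral } (m+1)\text{-current}\right\},$$

where $\mathbf{M}$ denotes the mass of a current; Baire typicality of boundaries is understood with respect to the topology this norm induces.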
We present speculative sampling, an algorithm for accelerating transformer
decoding by enabling the generation of multiple tokens from each transformer
call. Our algorithm relies on the observation that the latency of parallel
scoring of short continuations, generated by a faster but less powerful draft
model, is comparable to that of sampling a single token from the larger target
model. This is combined with a novel modified rejection sampling scheme which
preserves the distribution of the target model within hardware numerics. We
benchmark speculative sampling with Chinchilla, a 70 billion parameter language
model, achieving a 2-2.5x decoding speedup in a distributed setup, without
compromising the sample quality or making modifications to the model itself.
Authors: Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, John Jumper.
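The modified rejection step can be illustrated over explicit next-token probability vectors (a sketch assuming the full target distribution $p$ and draft distribution $q$ are available; in practice these come from model logits, and the function names below are illustrative):

```python
import numpy as np

def speculative_accept(p_target, p_draft, x, rng):
    """One step of the modified rejection rule used in speculative sampling.
    Accept the draft token x with probability min(1, p_target[x]/p_draft[x]);
    on rejection, resample from the residual max(0, p_target - p_draft),
    renormalized. The returned token is distributed exactly as p_target."""
    if rng.random() < min(1.0, p_target[x] / p_draft[x]):
        return x, True  # draft token accepted
    residual = np.maximum(p_target - p_draft, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p_target), p=residual), False
```

Because the accepted-or-resampled token is marginally distributed as the target, running this rule over a short draft continuation yields several tokens per target-model call without changing sample quality.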
Diffusion-based generative models have shown great potential for image
synthesis, but there is a lack of research on the security and privacy risks
they may pose. In this paper, we investigate the vulnerability of diffusion
models to Membership Inference Attacks (MIAs), a common privacy concern. Our
results indicate that existing MIAs designed for GANs or VAEs are largely
ineffective on diffusion models, either due to inapplicable scenarios (e.g.,
requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer
distances between synthetic images and member images). To address this gap, we
propose Step-wise Error Comparing Membership Inference (SecMI), a black-box MIA
that infers memberships by assessing the matching of forward process posterior
estimation at each timestep. SecMI follows the common overfitting assumption in
MIA, whereby member samples normally have smaller estimation errors than
hold-out samples. We consider both standard diffusion models, e.g., DDPM, and
text-to-image diffusion models, e.g., Stable Diffusion. Experimental results
demonstrate that our method precisely infers membership with high confidence in
both scenarios across six different datasets.
Authors: Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu.
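The core quantity, a per-timestep estimation error, can be sketched as follows (an illustrative simplification: `predict_noise` stands in for the diffusion model's black-box noise predictor $\epsilon_\theta(x_t, t)$, and all names are assumptions, not the paper's API):

```python
import numpy as np

def stepwise_error(predict_noise, x0, t, alpha_bar_t, rng):
    """Illustrative per-timestep error in the spirit of SecMI: diffuse a
    candidate sample x0 to timestep t with the forward process, then score
    how well the model's noise prediction matches the injected noise."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
    return float(np.mean((predict_noise(x_t, t) - eps) ** 2))

def infer_membership(errors, threshold):
    # Overfitting assumption: members accumulate smaller t-errors.
    return sum(errors) < threshold
```

Summing such errors over timesteps and thresholding gives a membership decision: training members, which the model fits more closely, tend to score lower than hold-out samples.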