Papers made digestible
We study the problem of universal coding under side-channel attacks posed and investigated by Santoso and Oohama (2021). They proposed a theoretical security model for the Shannon cipher system under side-channel attacks, where the adversary is allowed not only to collect ciphertexts by eavesdropping on the public communication channel, but also to collect the physical information leaked by the devices on which the cipher system is implemented, such as running time, power consumption, and electromagnetic radiation. For any distribution of the plaintext, any noisy channel through which the adversary observes a corrupted version of the key, and any measurement device used to collect the physical information, we derive an achievable rate region for reliability and security such that, if we compress the ciphertext using an affine encoder with a rate within the achievable rate region, then: (1) anyone with the secret key will be able to decrypt and decode the ciphertext correctly, but (2) any adversary who obtains the ciphertext together with the side physical information will not be able to obtain any information about the hidden source, as long as the leaked physical information is encoded with a rate within the rate region.
Authors: Yasutada Oohama, Bagus Santoso.
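The key primitive here is compressing the ciphertext with an affine encoder. As a minimal sketch (our illustration, not the authors' construction; the block length, output length, and the choice of GF(2) are assumptions made for concreteness), the snippet below implements an affine map $\varphi(x) = Ax + b$ over GF(2) with a randomly drawn matrix $A$ and offset $b$, where the coding rate is the ratio of output to input length.

```python
import numpy as np

def make_affine_encoder(n, m, seed=0):
    """Random affine encoder over GF(2): maps length-n bit vectors to
    length-m bit vectors via phi(x) = A x + b (mod 2). Rate is m / n.
    Illustrative only; the paper's encoder is chosen to meet its rate region."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)  # random binary matrix
    b = rng.integers(0, 2, size=m, dtype=np.uint8)       # random binary offset
    def phi(x):
        x = np.asarray(x, dtype=np.uint8)
        return (A @ x + b) % 2
    return phi

# Example: compress a 16-bit ciphertext block to 10 bits (rate 10/16).
phi = make_affine_encoder(n=16, m=10)
ciphertext = np.random.default_rng(1).integers(0, 2, size=16, dtype=np.uint8)
print(phi(ciphertext))
```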
In this work, we demonstrate how to reliably estimate epistemic uncertainty
while maintaining the flexibility needed to capture complicated aleatoric
distributions. To this end, we propose an ensemble of Normalizing Flows (NF),
which are state-of-the-art in modeling aleatoric uncertainty. The ensembles are
created via sets of fixed dropout masks, making them less expensive than
creating separate NF models. We demonstrate how to leverage the unique
structure of NFs, namely their base distributions, to estimate aleatoric uncertainty without
relying on samples, provide a comprehensive set of baselines, and derive
unbiased estimates for differential entropy. The methods were applied to a
variety of experiments, commonly used to benchmark aleatoric and epistemic
uncertainty estimation: 1D sinusoidal data, 2D windy grid-world ($\it{Wet
Chicken}$), $\it{Pendulum}$, and $\it{Hopper}$. In these experiments, we set up an active learning framework and evaluate each model's capability at measuring aleatoric and epistemic uncertainty. The results show the advantages of using NF ensembles in capturing complicated aleatoric distributions while maintaining accurate epistemic uncertainty estimates.
Authors: Lucas Berry, David Meger.
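To make the fixed-dropout-mask idea concrete, here is a minimal numpy sketch (our illustration, not the authors' code, and applied to a plain MLP rather than a flow): a single network becomes a K-member ensemble by sampling K binary masks once and reusing them at every forward pass, so disagreement across masks serves as an epistemic signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared network: a tiny MLP with a single hidden layer (weights untrained,
# purely to illustrate the mechanism).
W1 = rng.normal(0, 1, size=(1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, size=(64, 1)); b2 = np.zeros(1)

# Sample K fixed dropout masks ONCE; each mask defines one ensemble member.
K, keep = 5, 0.8
masks = rng.random((K, 64)) < keep

def forward(x, mask):
    """Forward pass of one ensemble member (one fixed mask)."""
    h = np.tanh(x @ W1 + b1) * mask / keep  # inverted-dropout scaling
    return h @ W2 + b2

x = np.linspace(-3, 3, 7).reshape(-1, 1)
preds = np.stack([forward(x, m) for m in masks])  # shape (K, N, 1)

# Disagreement across the fixed-mask members: an epistemic signal.
epistemic = preds.var(axis=0).squeeze()
print(epistemic.round(4))
```

Because the masks are fixed rather than resampled, only one set of weights is stored and trained, which is what makes this cheaper than training separate NF models.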
Multi-objective Bayesian optimization aims to find the Pareto front of
optimal trade-offs between a set of expensive objectives while collecting as
few samples as possible. In some cases, it is possible to evaluate the
objectives separately, and a different latency or evaluation cost can be
associated with each objective. This presents an opportunity to learn the
Pareto front faster by evaluating the cheaper objectives more frequently. We
propose a scalarization-based knowledge gradient acquisition function which
accounts for the different evaluation costs of the objectives. We prove
consistency of the algorithm and show empirically that it significantly
outperforms a benchmark algorithm which always evaluates both objectives.
Authors: Jack M. Buckingham, Sebastian Rojas Gonzalez, Juergen Branke.
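As a rough illustration of cost-aware selection (a toy sketch under our own assumptions, not the authors' knowledge gradient acquisition), the snippet below fits an independent GP per objective, scores each candidate-objective pair by a simple predictive-uncertainty proxy under random scalarization weights, and divides by that objective's evaluation cost, so the cheaper objective is queried more often.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
costs = [1.0, 5.0]                       # assumed: objective 0 is 5x cheaper
funcs = [lambda x: np.sin(3 * x).ravel(),          # toy objective 0
         lambda x: ((x - 0.5) ** 2).ravel()]       # toy objective 1

# Separate data sets per objective, since objectives are evaluated separately.
X0 = rng.random((4, 1))
data = [[X0.copy(), funcs[i](X0)] for i in range(2)]
cand = np.linspace(0, 1, 101).reshape(-1, 1)

for step in range(10):
    w = rng.dirichlet([1.0, 1.0])        # random scalarization weights
    best_score, best_i, best_x = -np.inf, None, None
    for i in range(2):
        Xi, yi = data[i]
        gp = GaussianProcessRegressor(kernel=RBF(0.2)).fit(Xi, yi)
        _, std = gp.predict(cand, return_std=True)
        score = w[i] * std / costs[i]    # predictive uncertainty per unit cost
        j = int(np.argmax(score))
        if score[j] > best_score:
            best_score, best_i, best_x = score[j], i, cand[j:j + 1]
    # Evaluate ONLY the selected objective at the selected point.
    Xi, yi = data[best_i]
    data[best_i] = [np.vstack([Xi, best_x]),
                    np.append(yi, funcs[best_i](best_x))]
    print(f"step {step}: evaluated objective {best_i} at x = {best_x.ravel()[0]:.2f}")
```

The uncertainty-per-cost score stands in for the paper's knowledge gradient; the decoupled bookkeeping (one data set per objective) is the part the abstract's "evaluate the objectives separately" setting requires.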
Understanding the extent to which the perceptual world can be recovered from
language is a fundamental problem in cognitive science. We reformulate this
problem as that of distilling psychophysical information from text and show how
this can be done by combining large language models (LLMs) with a classic
psychophysical method based on similarity judgments. Specifically, we use the
prompt auto-completion functionality of GPT-3, a state-of-the-art LLM, to elicit
similarity scores between stimuli and then apply multidimensional scaling to
uncover their underlying psychological space. We test our approach on six
perceptual domains and show that the elicited judgments strongly correlate with
human data and successfully recover well-known psychophysical structures such
as the color wheel and pitch spiral. We also explore meaningful divergences
between LLM and human representations. Our work showcases how combining
state-of-the-art machine models with well-known cognitive paradigms can shed
new light on fundamental questions in perception and language research.
Authors: Raja Marjieh, Ilia Sucholutsky, Pol van Rijn, Nori Jacoby, Thomas L. Griffiths.
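The recovery step is ordinary multidimensional scaling applied to an LLM-derived dissimilarity matrix. Below is a minimal sketch of that pipeline (our illustration; `llm_similarity` is a hypothetical stub standing in for a GPT-3 prompt completion, here replaced by synthetic circular similarities so the example runs offline): judgments with a circular structure, like hues on the color wheel, should embed as a ring.

```python
import numpy as np
from sklearn.manifold import MDS

hues = np.linspace(0, 2 * np.pi, 12, endpoint=False)  # 12 color-like stimuli

def llm_similarity(i, j):
    """Hypothetical stand-in for a GPT-3 similarity prompt such as
    'How similar are color A and color B on a scale from 0 to 1?'.
    Here: synthetic similarity decaying with angular (wrap-around) distance."""
    d = np.abs(hues[i] - hues[j])
    d = min(d, 2 * np.pi - d)
    return np.exp(-d)

n = len(hues)
S = np.array([[llm_similarity(i, j) for j in range(n)] for i in range(n)])
D = 1.0 - S                              # similarities -> dissimilarities

# Classic psychophysics step: embed the judgments in 2D with metric MDS.
emb = MDS(n_components=2, dissimilarity="precomputed",
          random_state=0).fit_transform(D)
print(emb.round(2))                      # points should lie on a ring
```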
By a $z$-coloring of a graph $G$ we mean any proper vertex coloring
consisting of the color classes $C_1, \ldots, C_k$ such that $(i)$ for any two
colors $i$ and $j$ with $1 \leq i < j \leq k$, any vertex of color $j$ is
adjacent to a vertex of color $i$, $(ii)$ there exists a set $\{u_1, \ldots,
u_k\}$ of vertices of $G$ such that $u_j \in C_j$ for any $j \in \{1, \ldots,
k\}$ and $u_k$ is adjacent to $u_j$ for each $1 \leq j \leq k$ with $j \not=k$,
and $(iii)$ for each $i$ and $j$ with $i \not= j$, the vertex $u_j$ has a
neighbor in $C_i$. Denote by $z(G)$ the maximum number of colors used in any
$z$-coloring of $G$. Denote the Grundy and {\rm b}-chromatic number of $G$ by
$\Gamma(G)$ and ${\rm b}(G)$, respectively. The $z$-coloring is an improvement
over both the Grundy coloring and the b-coloring of graphs. We prove that $z(G)$ is much
better than $\min\{\Gamma(G), {\rm b}(G)\}$ for infinitely many graphs $G$ by
obtaining an infinite sequence $\{G_n\}_{n=3}^{\infty}$ of graphs such that
$z(G_n)=n$ but $\Gamma(G_n)={\rm b}(G_n)=2n-1$ for each $n\geq 3$. We show that
acyclic graphs are $z$-monotonic and $z$-continuous. We then prove that deciding
whether $z(G)=\Delta(G)+1$ is $NP$-complete, even for bipartite graphs $G$. We
finally prove that recognizing graphs $G$ satisfying $z(G)=\chi(G)$ is
$coNP$-complete, improving a previous result for the Grundy number.
Authors: Abbas Khaleghi, Manouchehr Zaker.
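Since the definition packs three conditions together, a small checker helps internalize it. The sketch below (our illustration, following the definition given in the abstract) verifies whether a given proper coloring of a graph, together with a chosen system of vertices $u_1, \ldots, u_k$, satisfies conditions $(i)$-$(iii)$ of a $z$-coloring.

```python
from itertools import combinations

def is_z_coloring(adj, classes, u):
    """Check conditions (i)-(iii) of a z-coloring.
    adj: dict vertex -> set of neighbors; classes: list of color classes
    C_1..C_k (sets of vertices); u: list [u_1, ..., u_k]."""
    k = len(classes)
    # Proper coloring: no edge inside a color class.
    for C in classes:
        if any(w in adj[v] for v, w in combinations(C, 2)):
            return False
    # (i) every vertex of color j is adjacent to some vertex of each color i < j.
    for j in range(1, k):
        for v in classes[j]:
            if any(not (adj[v] & classes[i]) for i in range(j)):
                return False
    # (ii) u_j lies in C_j, and u_k is adjacent to every other u_j.
    if any(u[j] not in classes[j] for j in range(k)):
        return False
    if any(u[j] not in adj[u[k - 1]] for j in range(k - 1)):
        return False
    # (iii) each u_j has a neighbor in every other color class C_i.
    for j in range(k):
        if any(i != j and not (adj[u[j]] & classes[i]) for i in range(k)):
            return False
    return True

# Example: the 4-cycle 1-2-3-4-1 with classes {1,3}, {2,4} and u = (1, 2).
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_z_coloring(adj, [{1, 3}, {2, 4}], [1, 2]))  # True
```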