Commit 4237e78f by Zidong Du


parent 28788039
......@@ -4,10 +4,9 @@
%\section{Agent Capacity vs. Compositionality}
%\label{ssec:exp}
We explore the relationship between agent capacity and the compositionality of
symbolic language that emerged in our natural referential game.
For each configuration of
vocabulary size, we train the speaker-listener agents until a symbolic
language emerges, while varying the agent capacity, i.e., the hidden layer size
($h_{size}$), from 6 to 100.
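The sweep described above can be sketched as follows. This is an illustrative
stub, not the paper's training code: the helper names (`run_sweep`,
`train_pair`) and the particular grid of swept values (beyond the stated 6 to
100 range for $h_{size}$ and the $|V|=4$ example below) are assumptions.

```python
# Hypothetical sketch of the capacity sweep: for each vocabulary size,
# train a speaker-listener pair at every hidden-layer size h_size.
HIDDEN_SIZES = [6, 10, 20, 40, 60, 80, 100]  # h_size swept from 6 to 100
VOCAB_SIZES = [4, 6, 8, 10]                  # |V| configurations (assumed)

def run_sweep(train_pair):
    """train_pair(h_size, vocab_size) -> record of the emerged language."""
    results = {}
    for v in VOCAB_SIZES:
        for h in HIDDEN_SIZES:
            # Each (|V|, h_size) configuration yields one emerged language.
            results[(v, h)] = train_pair(h, v)
    return results

# With a stub trainer, the sweep covers every configuration exactly once:
records = run_sweep(lambda h, v: {"h_size": h, "vocab": v})
```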
......@@ -47,7 +46,8 @@ Taking vocabulary size $|V|=4$ as an example, symbolic languages with
compositionality $MIS>0.99$ account for only around 10\% of all the emerged
symbolic languages when $h_{size}<20$; the ratio drops to 0\%$\sim$5\% when
$h_{size}$ increases to 40, and to around 3\% when $h_{size}$ goes beyond 40.
Similar results are observed for $MIS>0.9$.
Notably, when $h_{size}$ is large enough (e.g., $>40$), highly compositional
symbolic language hardly emerges in a natural referential game, since the
easy-to-emerge, less compositional symbolic language is already sufficient in
referential game scenarios.
......@@ -105,12 +105,12 @@ Figure~\ref{fig:bench}.
Figure~\ref{fig:exp3} reports the accuracy of the Listener, i.e., the ratio of
correctly predicted symbols spoken by the Speaker ($t=\hat{t}$), as it varies
with the training iterations under different agent capacities.
Figure~\ref{fig:exp3} (a) shows that when $h_{size}$ equals to 1, the agent capacity is
too low to handle languages. Figure~\ref{fig:exp3} (b) shows that when $h_{size}$
equals 2, the agent can only learn $LA$, whose compositionality (i.e.,
\emph{MIS}) is the highest among the three languages. Combining these two
observations, we can infer that a language with lower compositionality requires
a higher agent capacity (i.e., $h_{size}$) to ensure successful communication.
Figure~\ref{fig:exp3} (c) to (h) show that a higher agent capacity speeds up
training for all three languages, but the improvement differs markedly across
languages.
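The Listener accuracy reported in Figure~\ref{fig:exp3} is simply the fraction
of rounds in which the Listener's prediction $\hat{t}$ matches the Speaker's
target $t$. A minimal sketch (the function name is illustrative, not from the
paper's code):

```python
# Minimal sketch of the Listener-accuracy measure: the fraction of rounds
# where the Listener's prediction t_hat equals the Speaker's target t.
def listener_accuracy(targets, predictions):
    assert len(targets) == len(predictions) and targets
    correct = sum(t == p for t, p in zip(targets, predictions))
    return correct / len(targets)
```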
Evidently, a language with lower compositionality also requires a higher agent
......
......@@ -13,14 +13,18 @@ reinforcement learning~\cite{}.
%the environment setting.
The quality of emergent symbolic language is typically measured by its
\emph{compositionality}.
Compositionality is a principle that determines
whether the meaning of a complex expression (e.g., a phrase), assembled out of a
given set of simple components (e.g., symbols), can be determined by its
constituent components and the rule combining them~\cite{}.
\note{For example, the expression ``AAAI is a conference'' consists of two
meaningful words, ``AAAI'' and ``conference'', and a rule for definition
(``is'').
Compositionality is considered to be a source of the productivity,
systematicity, and learnability of language, and the reason why a language with finite
vocabulary can express infinite concepts.}
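The distinction above can be made concrete with a toy example (not from the
paper): a compositional language names a two-attribute object by combining a
symbol per attribute under a fixed rule, whereas a holistic language assigns
arbitrary whole-object names from which the parts cannot be recovered.

```python
# Toy compositional language: each attribute value has its own symbol, and
# the name of a (color, shape) object is just their concatenation -- the
# meaning of the whole follows from the parts plus one combination rule.
COLOR = {"red": "a", "blue": "b"}
SHAPE = {"circle": "x", "square": "y"}

def compositional_name(color, shape):
    return COLOR[color] + SHAPE[shape]

# Toy holistic (non-compositional) language: arbitrary whole-object names,
# with no systematic relation between shared attributes and shared symbols.
HOLISTIC = {("red", "circle"): "q", ("red", "square"): "a",
            ("blue", "circle"): "y", ("blue", "square"): "b"}
```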
%More recently, measuring the compositionality \note{xxxxx}.}
%It
......@@ -57,19 +61,25 @@ environment and agents are sufficient for achieving high compositionality.
In this paper, we present the first work to achieve highly compositional
symbolic language without any deliberately handcrafted induction. The key
observation is that the internal \emph{agent capacity} plays a crucial role in
the compositionality of symbolic language; we obtain it by analyzing the
compositionality after removing the inductions in
the most widely-used listener-speaker referential game framework.
Concretely, the relationship between the agent capacity and the compositionality
of symbolic language is characterized with a novel mutual information-based
metric for compositionality.
%both theoretically and experimentally.
%theoretically
Regarding the theoretical analysis, we propose
%use the \note{Markov Series Channel (MSC)~\cite{} to model the language
% transmission process and}
a novel mutual information-based metric to measure the compositionality quantitatively.
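The exact definition of the $MIS$ metric is not given in this excerpt; the
generic mutual-information computation such a metric builds on, between an
object attribute and an emitted symbol, can be sketched as follows
(illustrative code, not the paper's):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(A; S) in bits, estimated from a list of (attribute, symbol)
    co-occurrences. A generic building block: a mutual information-based
    compositionality metric like MIS (definition not shown in this excerpt)
    would aggregate such terms over attributes and symbol positions."""
    n = len(pairs)
    pa, ps = Counter(), Counter()
    pas = Counter(pairs)
    for a, s in pairs:
        pa[a] += 1
        ps[s] += 1
    mi = 0.0
    for (a, s), c in pas.items():
        p_joint = c / n
        # p_joint * n * n / (pa * ps) = p(a,s) / (p(a) * p(s))
        mi += p_joint * log2(p_joint * n * n / (pa[a] * ps[s]))
    return mi

# A perfectly aligned attribute-symbol mapping over two equiprobable values
# carries 1 bit; an attribute independent of the symbol carries ~0 bits.
aligned = [("red", "a"), ("blue", "b")] * 50
independent = [("red", "a"), ("red", "b"), ("blue", "a"), ("blue", "b")] * 25
```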
%experimentally
Regarding the experimental validation, we explore the relationship between agent
capacity and the compositionality of symbolic language that emerged
\emph{naturally} in our experiments.
%two different dedicated experiments, i.e., \note{XXX and XXX, are utilized for XXX}.
%Regarding the experimental validation, it is conducted on a listener-speaker
%referential game framework with eliminated unnatural inductions.
Both the theoretical analysis and experimental results lead to a counter-intuitive
......
......@@ -45,11 +45,6 @@ $t={[0,0,1],[0,1,0]}$ would be equal to $\hat{t}=[0,0,0,0,0,1]$ if they both mea
circle''.
\begin{figure*}[t]
\centering
\includegraphics[width=1.8\columnwidth]{fig/Figure3_The_architecture_of_agents.pdf}
......@@ -57,6 +52,13 @@ circle''.
\label{fig:agents}
\end{figure*}
\subsection{Agent architecture}
\label{ssec:agent}
Figure~\ref{fig:agents} shows the architecture of the constructed agents,
including the Speaker $S$ and Listener $L$.
......