Commit 250ea04e by Zidong Du
parents 808b18e4 29d96540
......@@ -179,20 +179,20 @@ The emergence of symbolic languages with high compositionality has
attracted extensive attention from a broad range of communities. Existing
studies achieve high compositionality through \emph{deliberately handcrafted}
inductions (e.g., additional rewards, constructed
loss functions and structural input data) in multi-agent learning, which are unnatural.
Yet, few studies investigate the emergence of symbolic language with high
compositionality \emph{naturally}, i.e., without deliberately handcrafted
inductions.
In this paper, \note{we are the first to successfully achieve high compositional
symbolic language} in a \emph{natural} manner without handcrafted inductions.
Initially, by investigating the emergent
language after removing the \emph{deliberately handcrafted}
inductions, we observe the difficulty in naturally generating high compositional
language.
%the agent capacity plays a key role in compositionality.
Further, we reveal and characterize the \note{quantitative relationship}
between the agent capacity and the compositionality of emergent language, with
a novel mutual information-based metric for a more reasonable measurement of compositionality.
% both theoretically and experimentally.
%The theoretical analysis is built on the MSC
......@@ -203,9 +203,9 @@ a novel mutual information-based metric for more reasonable measuring the compos
%with eliminated external environment factors.
%With a novel mutual information-based metric for the compositionality,
The experimental results lead to a counter-intuitive conclusion that lower agent
capacity facilitates the emergence of language with higher
compositionality. \note{Based on our conclusion, we can obtain a more
compositional language with a higher probability.}
% The natural emergence of symbolic languages with high compositionality has
......
......@@ -13,6 +13,8 @@
bibsource = {dblp computer science bibliography, https://dblp.org}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Related Work%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@inproceedings{kottur-etal-2017-natural,
title = "Natural Language Does Not Emerge {`}Naturally{'} in Multi-Agent Dialog",
author = "Kottur, Satwik and
......@@ -92,4 +94,64 @@
author={Chaabouni, Rahma and Kharitonov, Eugene and Bouchacourt, Diane and Dupoux, Emmanuel and Baroni, Marco},
journal={arXiv preprint arXiv:2004.09124},
year={2020}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@article{DBLP:journals/corr/LazaridouPB16b,
author = {Angeliki Lazaridou and
Alexander Peysakhovich and
Marco Baroni},
title = {Multi-Agent Cooperation and the Emergence of (Natural) Language},
journal = {CoRR},
volume = {abs/1612.07182},
year = {2016},
url = {http://arxiv.org/abs/1612.07182},
archivePrefix = {arXiv},
eprint = {1612.07182},
timestamp = {Mon, 13 Aug 2018 16:47:57 +0200},
biburl = {https://dblp.org/rec/journals/corr/LazaridouPB16b.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{bogin2018emergence,
title={Emergence of Communication in an Interactive World with Consistent Speakers},
author={Bogin, Ben and Geva, Mor and Berant, Jonathan},
  journal={arXiv preprint arXiv:1809.00549},
year={2018}
}
@inproceedings{jaques2019social,
title={Social influence as intrinsic motivation for multi-agent deep reinforcement learning},
author={Jaques, Natasha and Lazaridou, Angeliki and Hughes, Edward and Gulcehre, Caglar and Ortega, Pedro and Strouse, DJ and Leibo, Joel Z and De Freitas, Nando},
booktitle={International Conference on Machine Learning},
pages={3040--3049},
year={2019},
organization={PMLR}
}
@article{mul2019mastering,
title={Mastering emergent language: learning to guide in simulated navigation},
author={Mul, Mathijs and Bouchacourt, Diane and Bruni, Elia},
journal={arXiv preprint arXiv:1908.05135},
year={2019}
}
@inproceedings{kharitonov2019egg,
title={EGG: a toolkit for research on Emergence of lanGuage in Games},
author={Kharitonov, Eugene and Chaabouni, Rahma and Bouchacourt, Diane and Baroni, Marco},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations},
pages={55--60},
year={2019}
}
@article{labash2020perspective,
title={Perspective taking in deep reinforcement learning agents},
author={Labash, Aqeel and Aru, Jaan and Matiisen, Tambet and Tampuu, Ardi and Vicente, Raul},
journal={Frontiers in Computational Neuroscience},
volume={14},
year={2020},
publisher={Frontiers Media SA}
}
......@@ -98,7 +98,7 @@ Notably, when $h_{size}$ is large enough (e.g., $>40$), high compositional
symbolic language is hard to emerge in a natural referential game, because
easy-to-emerge low compositional symbolic language is already sufficient in
referential game scenarios.
On the other hand, agents are forced to use compositionality to express
more meanings, due to the constraint of low capacity.
......@@ -107,8 +107,8 @@ Additionally, we also perform $\chi^2$ test to check the statistical
significance between the high compositionality and agent
capacity. Table~\ref{tab:exp10} reports the $\chi^2$ test results for
$\mathit{MIS}>0.99$ and $\mathit{MIS}>0.9$, respectively. It can be observed that
for different vocabulary sizes, the p-value is always less than 0.05, which means
that the high compositionality is statistically significantly related to agent
capacity.
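As a sanity check, the $\chi^2$ test on a 2$\times$2 contingency table (high vs. low compositionality against low vs. high capacity) can be reproduced in a few lines of pure Python; note the counts below are hypothetical placeholders for illustration, not the paper's data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (df = 1, no Yates
    correction) for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    observed = [a, b, c, d]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # For df = 1 the chi-square survival function is erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: rows = low/high capacity, columns = MIS > 0.99 or not
stat, p = chi2_2x2(30, 70, 5, 95)
print(f"chi2 = {stat:.2f}, p = {p:.2e}")
```

A p-value below 0.05 here would indicate a significant association between capacity and high compositionality, matching the reading of Table~\ref{tab:exp10}.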
......@@ -142,10 +142,10 @@ We further breakdown the learning process to investigate the language teaching
scenario, where the Speaker teaches the Listener its fixed symbolic language.
We define three symbolic languages in different compositionality for Speaker to
teach, i.e., high (LA, $\mathit{MIS}=1$), mediate (LB, $\mathit{MIS}=0.83$), low (LC, $\mathit{MIS}=0.41$), see
Figure~\ref{fig:bench}.
Figure~\ref{fig:exp3} reports the accuracy of Listener, i.e., the ratio of the correctly
predicted symbols spoken by Speaker ($t=\hat{t}$), which varies with the
training iterations under different agent capacities.
Figure~\ref{fig:exp3} (a) shows that when $h_{size}$ equals 1, the agent capacity is
too low to handle languages. Figure~\ref{fig:exp3} (b) shows that when $h_{size}$
......@@ -153,7 +153,7 @@ equals 2, the agent can only learn $LA$ whose compositionality (i.e., \emph{MIS})
is the highest among all three languages. Combining these two observations, we can infer that
a language with lower compositionality requires higher agent capacity (i.e., larger
$h_{size}$) to ensure successful communication.
Additionally, Figure~\ref{fig:exp3} (c)$\sim$(h) show that
higher agent capacity leads to a faster training process for all three languages, but the
improvement differs considerably across languages. It is obvious that a language with lower compositionality also requires higher agent
capacity to train faster.
......
\section{Introduction}
\label{sec:introduction}
The emergence of language has always been an important issue,
which attracts attention from a broad range of communities,
including philology~\cite{}, biology~\cite{}, and computer
science~\cite{}. Especially in computer science, recent efforts have explored
the emergent language in virtual multi-agent environments, where
agents are trained to communicate with neural-network-based methods such as deep
reinforcement learning~\cite{}.
%Such works can be roughly classified into two categories,
......@@ -13,7 +13,7 @@ reinforcement learning~\cite{}.
%the environment setting.
The quality of emergent language is typically measured by its
\emph{compositionality}.
Compositionality is a principle that determines
whether the meaning of a complex expression (e.g., a phrase), which is assembled out of a
......@@ -72,14 +72,16 @@ vocabulary can express almost infinite concepts.}
\end{table*}
Prior studies focus on achieving high compositional symbolic language
through \emph{deliberately handcrafted} inductions, e.g., additional rewards~\cite{},
constructed loss functions~\cite{}, structural input data~\cite{},
memoryless~\cite{}, and ease-of-teaching~\cite{}.
\note{Such optimization methodologies are driven by the difficulty of generating high compositional symbolic language without induction in an existing multi-agent environment.}
Figure~\ref{fig:induction} reports the compositionality when training two agents
in the widely-used listener-speaker referential game~\cite{} for emerging 100
languages, and it can be observed that \note{the compositionality
of emergent language is seldom high (e.g., $<5\%$ for compositionality $>0.99$)
without any induction. Moreover, varying
the vocabulary size does not affect the compositionality notably.}
Though such unnatural inductions are useful, they prevent us from better understanding the mystery of
the emergence of language and even intelligence among our pre-human ancestors.
......@@ -89,14 +91,14 @@ In other words, it is never clear whether \emph{natural}
environment and agents are sufficient for achieving high compositionality.
This paper is the first one to achieve high compositional
language without any deliberately handcrafted induction. The key observation
is that the internal \emph{agent capacity} plays a crucial role in the
compositionality of emergent language.
%by thoroughly
%analyzing the compositionality after removing the inductions in
%the most widely-used listener-speaker referential game framework.
Concretely, the relationship between the agent capacity and the compositionality
of emergent language is characterized, with a novel mutual information-based
metric for the compositionality.
%both theoretically and experimentally.
%theoretically
......@@ -111,11 +113,11 @@ capacity and the compositionality of symbolic language that emerged
%two different dedicated experiments, i.e., \note{XXX and XXX, are utilized for XXX}.
%Regarding the experimental validation, it is conducted on a listener-speaker
%referential game framework with eliminated unnatural inductions.
Both the theoretical analysis and experimental results lead to a counter-intuitive
conclusion that \emph{lower agent capacity facilitates the emergence of language
with higher compositionality}. \note{Therefore, by only reducing the agent capacity
in such a natural environment, we
can generate a more compositional language with a higher probability.}
%Prior studies focus on investigating how to affect the
......
\section{Related Works}
\label{sec:relatedwork}
%external environmental factors
Previous works focus on the external environmental factors that impact the
compositionality of emerged symbolic language.
Significant works studying the external environmental factors that affect the compositionality of emergent language are summarized in Table~\ref{tab:rel}.
For example, ~\citet{kirby2015compression} explored how the pressures for expressivity and compressibility lead to structured language.
~\citet{kottur-etal-2017-natural} constrained the vocabulary size and whether the listener has memory to coax the compositionality of the emergent language.
~\citet{lazaridou2018emergence} showed that the degree of structure found in the input data affects the emergence of the symbolic language.
~\citet{li2019ease} studied how the ease-of-teaching pressure impacts the iterated language in the population regime.
~\citet{evtimova2018emergent} designed novel multi-modal scenarios, in which the speaker and the listener access different modalities of the input object, to explore language emergence.
Such factors are deliberately designed, which are too ideal to be true in
the real world.
In this paper, these handcrafted inductions above are all removed, and the high compositional language is induced only by the agent capacity.
......@@ -33,18 +33,18 @@ proposed~\cite{kottur-etal-2017-natural,choi2018compositional,lazaridou2018emerg
%either speakers or listeners. They can not measure the degree of \emph{bilateral}
%understanding between speakers and listeners, i.e., the concept-symbol mapping
%consistency between speakers and listeners.
At the initial stage, many studies only analyzed the language compositionality qualitatively.
For example, ~\citet{choi2018compositional} printed the agent messages with the letters `abcd' at some training rounds, and directly analyzed the compositionality of these messages.
~\citet{kottur-etal-2017-natural} introduced the dialog tree to show the evolution of language compositionality during the training process.
Later, some quantitative metrics were explored.
The topographic similarity~\cite{lazaridou2018emergence} is introduced to measure the correlation between the distances of all possible pairs of meanings and the distances of the corresponding pairs of signals.
\citet{chaabouni2020compositionality} proposed the positional disentanglement, which measures whether symbols in a specific position relate to the specific attribute of the input object.
From Table~\ref{tab:rel}, most metrics are proposed from the sight of the speaker. In our view, human beings developed language based on both the speaker and the listener. Only the research of \cite{choi2018compositional} in Table~\ref{tab:rel} qualitatively considered the sights of both the speaker and the listener. In this paper, we propose a novel quantitative metric from both the speaker's sight and the listener's sight.
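For concreteness, the topographic similarity mentioned above is commonly computed as the Spearman correlation between pairwise distances in meaning space and in message space; a minimal pure-Python sketch follows (the toy language is illustrative only, and the sketch assumes non-constant distance lists):

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def rank_avg(xs):
    """Ranks with ties assigned their average rank (0-based)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2
        i = j + 1
    return ranks

def spearman(xs, ys):
    # Pearson correlation on average ranks; assumes non-constant inputs
    rx, ry = rank_avg(xs), rank_avg(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def topo_similarity(language):
    """language: dict mapping meaning tuples to message tuples."""
    d_meaning, d_message = [], []
    for m1, m2 in combinations(language, 2):
        d_meaning.append(hamming(m1, m2))
        d_message.append(hamming(language[m1], language[m2]))
    return spearman(d_meaning, d_message)

# A perfectly compositional toy language scores 1.0
lang = {("red", "circle"): ("a", "x"), ("red", "square"): ("a", "y"),
        ("blue", "circle"): ("b", "x"), ("blue", "square"): ("b", "y")}
print(topo_similarity(lang))  # 1.0
```

As the example in the paper's critique suggests, this score depends only on the speaker's mapping, which is exactly the unilaterality that MIS is designed to address.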
In conclusion, the previous works coaxed compositional language with carefully designed handcrafted inductions,
and the metric from the sight of both the speaker and the listener is still lacking.
In this paper, we remove all the handcrafted inductions in Table~\ref{tab:rel},
and use the minimized induction based on theoretical analysis.
Moreover, we propose a novel quantitative metric, which is more appropriate than the previous metrics based only on the speaker's sight.
......@@ -62,32 +62,15 @@ Before going to the detail of the training algorithms, we first introduce the en
\subsection{Environment setup}
\label{ssec:env}
Figure~\ref{fig:game} shows the entire environment used in this study,
i.e., a commonly used referential game. Roughly, the referential game requires the speaker and listener to work cooperatively to accomplish a certain task.
In this paper, the task is to have the listener agent reconstruct the object
that the speaker claims to have seen, only through their emerged communication protocol. The success in this game indicates that symbolic language has emerged between speaker and listener.
\textbf{Game rules} In our referential game, agents follow these rules to finish the game in a cooperative manner. In each round, upon receiving an input object $t$, Speaker $S$ speaks a symbol sequence $s$ to Listener $L$; Listener $L$ reconstructs the predicted result $\hat{t}$ based on the heard sequence $s$; if $t=\hat{t}$, the agents win this game and receive positive rewards ($r(t,\hat{t})=1$); otherwise the agents fail this game and receive negative rewards ($r(t,\hat{t})=-1$).
Precisely, during the game, Speaker $S$ receives an input object $t$, which is an expression with two words from the vocabulary set $V$, i.e., two one-hot vectors representing shape and color, respectively. Based on $t$, Speaker $S$ speaks a symbol sequence $s$, which similarly contains two words from $V$. Listener $L$ receives $s$ and outputs the predicted result $\hat{t}$, a single word (one-hot vector) selected from the Cartesian product $V\times V$, which represents all the meanings of two combined words from $V$. Please note that since $t$ and $\hat{t}$ have different lengths, we say $t=\hat{t}$ if $t$ expresses the same meaning as $\hat{t}$, e.g., ``red circle''.
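The round structure described above can be sketched as follows; the two stand-in policies implement a trivially compositional identity language and are illustrative assumptions, not the paper's trained networks:

```python
def play_round(t, speak, listen):
    """One round of the referential game: Speaker encodes object t into a
    symbol sequence s, Listener decodes s into a prediction t_hat, and both
    agents receive the shared reward r(t, t_hat)."""
    s = speak(t)                      # speaker policy: object -> symbols
    t_hat = listen(s)                 # listener policy: symbols -> object
    reward = 1 if t_hat == t else -1  # r(t, t_hat) as defined above
    return t_hat, reward

# Stand-in policies: a trivially compositional identity language
speak = lambda t: list(t)       # ("red", "circle") -> ["red", "circle"]
listen = lambda s: tuple(s)

t_hat, reward = play_round(("red", "circle"), speak, listen)
print(t_hat, reward)  # ('red', 'circle') 1
```

In the actual setup both policies are neural networks trained with the reinforcement-learning procedure described below, and the reward is the only learning signal.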
......@@ -134,8 +117,8 @@ expected reward$ J(\theta_S, \theta_L)$ by fixing the parameter $\theta_S$ and
adjusting the parameter $\theta_L$.
Additionally, to avoid the handcrafted induction on emergent language, we only
use the predicted result $\hat{t}$ of the listener agent as the
evidence for whether to give positive rewards. Then, the gradients of the
expected reward $ J(\theta_S, \theta_L)$ can be calculated as follows:
\begin{align}
\nabla_{\theta^S} J &= \mathbb{E}_{\pi^S, \pi^L} \left[ r(\hat{t}, t) \cdot
......
\section{Mutual Information Similarity (MIS)}\label{sec:mis}
In this section, we propose the \emph{Mutual Information Similarity (MIS)} as a metric of compositionality and give a thorough theoretical analysis.
MIS is the similarity between an identity matrix and the mutual information matrix of concepts and symbols.
\begin{figure}[t]
......@@ -38,7 +38,7 @@ R\left(c_0,s_0\right) & R\left(c_0,s_1\right)\\
R\left(c_1,s_0\right) & R\left(c_1,s_1\right)
\end{pmatrix}
\end{equation}
Each column of $M$ corresponds to the semantic information carried by one symbol. In a perfectly compositional language, each symbol represents one specific concept exclusively. Therefore, the similarity between the columns of $M$ and a one-hot vector is aligned with the compositionality of the emergent language.
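A plausible construction of the matrix $M$ from observed concept-symbol co-occurrences is sketched below; the toy data and the plain $I(c_i;s_j)$ entries are illustrative assumptions (the paper's exact normalization follows its MIS formula):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(C; S) in bits from a list of (concept, symbol) observations."""
    n = len(pairs)
    joint = Counter(pairs)
    c_cnt = Counter(c for c, _ in pairs)
    s_cnt = Counter(s for _, s in pairs)
    return sum((k / n) * math.log2(k * n / (c_cnt[c] * s_cnt[s]))
               for (c, s), k in joint.items())

def mi_matrix(observations):
    """observations: list of (concept_tuple, symbol_tuple) pairs.
    Entry [i][j] holds I(concept_i; symbol_j)."""
    n_c = len(observations[0][0])
    n_s = len(observations[0][1])
    return [[mutual_information([(c[i], s[j]) for c, s in observations])
             for j in range(n_s)]
            for i in range(n_c)]

# Toy data for a perfectly compositional language:
# symbol 0 encodes the color concept, symbol 1 encodes the shape concept
data = [((color, shape), (color, shape)) for color in "rb" for shape in "cs"]
M = mi_matrix(data)
print(M)  # [[1.0, 0.0], [0.0, 1.0]]
```

For this perfectly compositional toy language each column of $M$ is exactly one-hot, which is the situation the similarity in MIS is designed to detect.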
\begin{figure}[t]
\centering \includegraphics[width=0.99\columnwidth]{fig/Figure6_Compostionality_of_symbolic_language.pdf}
......@@ -67,5 +67,5 @@ following formula:
\end{aligned}\end{equation}
MIS is a bilateral metric. Unilateral metrics, e.g., \emph{topographic similarity (topo)}~\cite{} and \emph{posdis}~\cite{}, only take the policy of the speaker into consideration. We provide an example to illustrate the inadequacy of unilateral metrics, shown in Figure~\ref{fig:unilateral}. In this example, the speaker only uses $s_1$ to represent the shape. From the perspective of the speaker, the language is perfectly compositional (i.e., both topo and posdis are 1). However, the listener cannot distinguish the shape based only on $s_1$, revealing the non-compositionality of this language. The bilateral metric MIS addresses such defects by taking the policy of the listener into account, thus $\mathit{MIS} < 1$.