haoyifan / AAAI21_Emergent_language · Commits

Commit f9368418
Authored Sep 10, 2020 by Qi Guo
Merge branch 'master' of http://62.234.201.16/hao/AAAI21_Emergent_language
Parents: 0750d107, 37c67ee2
Showing 4 changed files with 40 additions and 28 deletions (+40 −28)
AAAI2021/fig/Figure1. motivation.pdf    +0 −0
AAAI2021/tex/experiments.tex            +8 −8
AAAI2021/tex/introduction.tex           +25 −15
AAAI2021/tex/theory.tex                 +7 −5
AAAI2021/fig/Figure1. motivation.pdf (0 → 100644)
File added
AAAI2021/tex/experiments.tex
@@ -4,10 +4,9 @@
%\section{Agent Capacity vs. Compositionality}
%\label{ssec:exp}
We examine the relationship between agent capacity and the compositionality of
symbolic language that emerged in our natural referential game with various
vocabulary sizes.
For each configuration of
We explore the relationship between agent capacity and the compositionality of
symbolic language that emerged in our natural referential game.
For various configurations of
vocabulary size, we train the speaker-listener agents to evolve a symbolic
language while varying the agent capacity, i.e., the hidden layer size
($h_{size}$), from 6 to 100.
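A minimal sketch of how such a configuration sweep could be organized is given below; only the $h_{size}$ range (6 to 100) comes from the text, while the vocabulary sizes and the train_referential_game() helper are hypothetical placeholders, not the authors' training code.

# Illustrative sweep over (vocabulary size, hidden layer size) configurations.
# Only the h_size range (6 to 100) is taken from the text above; vocab_sizes and
# train_referential_game() are assumed stand-ins for one speaker-listener run.
from itertools import product

vocab_sizes = [4, 6, 8, 10]      # assumed example values for |V|
hidden_sizes = range(6, 101)     # agent capacity h_size varied from 6 to 100

def train_referential_game(vocab_size, h_size):
    # Stand-in: a real run would train the agents in the referential game and
    # return the MIS compositionality of the emerged symbolic language.
    return 0.0

results = {(v, h): train_referential_game(v, h)
           for v, h in product(vocab_sizes, hidden_sizes)}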
@@ -47,7 +46,8 @@ Taking vocabulary size $|V|=4$ as an example, symbolic languages with
compositionality $MIS>0.99$ account for only around 10\% of all the emerged symbolic
languages when $h_{size}<20$; the ratio drops to 0\%$\sim$5\% when $h_{size}$
increases to 40; and it falls to around 3\% when $h_{size}$ goes beyond 40.
Especially, when $h_size$ is large enough (e.g., $>40$), high compositional
Using a threshold of $MIS>0.9$ reports similar results.
Notably, when $h_{size}$ is large enough (e.g., $>40$), highly compositional
symbolic language is hard to emerge in a natural referential game, because an
easy-to-emerge, less compositional symbolic language is already sufficient in
referential game scenarios.
@@ -105,12 +105,12 @@ Figure~\ref{fig:bench}.
Figure~\ref{fig:exp3} reports the accuracy of the Listener, i.e., the ratio of
correctly predicted symbols spoken by the Speaker ($t=\hat{t}$), which varies with the
training iterations under different agent capacities.
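Read concretely, this accuracy is a per-symbol match rate between the Speaker's symbols and the Listener's predictions; the sketch below (function name and toy data are assumptions, not the paper's evaluation code) illustrates the computation.

# Hypothetical sketch: accuracy as the fraction of symbols for which the
# Listener's prediction t_hat matches the Speaker's symbol t.
def listener_accuracy(t, t_hat):
    assert len(t) == len(t_hat)
    return sum(a == b for a, b in zip(t, t_hat)) / len(t)

# Toy example: 3 of 4 symbols recovered correctly -> accuracy 0.75
print(listener_accuracy([2, 0, 1, 3], [2, 0, 1, 1]))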
Figure~\ref{fig:exp3}(a) shows that when $h_size$ equals 1, the agent capacity is
too low to handle languages. Figure~\ref{fig:exp3}(b) shows that when $h_size$
Figure~\ref{fig:exp3}(a) shows that when $h_{size}$ equals 1, the agent capacity is
too low to handle languages. Figure~\ref{fig:exp3}(b) shows that when $h_{size}$
equals 2, the agent can only learn $LA$, whose compositionality (i.e., \emph{MIS})
is the highest among the three languages. Combining these two observations, we can infer
that a language with lower compositionality requires a higher agent capacity to communicate
successfully (i.e., a larger $h_size$). Figure~\ref{fig:exp3}(c) to (h) show that the
successfully (i.e., a larger $h_{size}$). Figure~\ref{fig:exp3}(c) to (h) show that the
higher agent capacity causes a faster training process for all three languages, but the
improvement for different languages is quite different.
It is obvious that a language with lower compositionality also requires a higher agent
AAAI2021/tex/introduction.tex
@@ -13,14 +13,18 @@ reinforcement learning~\cite{}.
%the environment setting.
The quality of emergent symbolic language is typically measured by its \emph{compositionality}.
The quality of emergent symbolic language is typically measured by its \emph{compositionality}.
Compositionality is a principle that determines
whether the meaning of a complex expression (e.g., phrase), which is assembled out of a
given set of simple components (e.g., symbols), can be determined by its
constituent components and the rules that combine them~\cite{}.
constituent components and the rule combining them~\cite{}.
\note{For example, the expression ``AAAI is a conference'' consists of two
meaningful words ``AAAI'' and ``conference'', and a rule for definition (``is'').
More recently, measuring the compositionality \note{xxxxx}.
}
Compositionality is considered to be a source of the productivity,
systematicity, and learnability of language, and the reason why a language with finite
vocabulary can express almost infinite concepts.
}
%More recently, measuring the compositionality \note{xxxxx}.}
%It
@@ -35,14 +39,14 @@ More recently, measuring the compositionality \note{xxxxx}.}
%
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{fig/occupy}
\caption{\rmk{compositionality.}}
\includegraphics[width=0.99\columnwidth]{fig/Figure1_motivation.pdf}
\caption{}
\label{fig:induction}
\end{figure}
Prior studies focus on achieving highly compositional symbolic language
through \emph{deliberately handcrafted} inductions, e.g., small vocabulary
sizes~\cite{}, memoryless~\cite{},
carefully constructed rewards~\cite{}, and
sizes~\cite{}, memorylessness~\cite{},
additional rewards~\cite{}, constructed loss functions~\cite{}, and
ease-of-teaching~\cite{}.
\note{The possible intuition is that highly compositional symbolic
language cannot emerge without such inductions in existing multi-agent environments.}
Figure~\ref{fig:induction}
reports the compositionality when training two agents in the widely-used
@@ -57,19 +61,25 @@ environment and agents are sufficient for achieving high compositionality.
In this paper, we are the first to achieve highly compositional
symbolic language without any deliberately handcrafted induction. The key observation
is that the internal \emph{agent capacity} plays a crucial role in the compositionality
of symbolic language,
by thoroughly analyzing the compositionality after removing the inductions in
is that the internal \emph{agent capacity} plays a crucial role in the
compositionality of symbolic language,
by
%thoroughly
analyzing the compositionality after removing the inductions in
the most widely-used listener-speaker referential game framework.
Concretely, the relationship between the agent capacity and the compositionality
of symbolic language is characterized both theoretically and experimentally.
of symbolic language is characterized, with a novel mutual information-based
metric for the compositionality.
%both theoretically and experimentally.
%theoretically
Regarding the theoretical analysis, we use the \note{Markov Series Channel (MSC)~\cite{}
to model the language transmission process and a novel mutual information-based metric
to measure the compositionality quantitatively}.
Regarding the theoretical analysis, we propose
%use the \note{Markov Series Channel (MSC)~\cite{} to model the language
% transmission process and}
a novel mutual information-based metric to measure the compositionality quantitatively.
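The excerpt does not define MIS itself, so the sketch below only illustrates how an empirical mutual information between one concept attribute and one message symbol could be computed; the function, the toy data, and the idea of aggregating such terms into a single score are assumptions, not the paper's metric.

# Illustration only: empirical mutual information (in bits) between a concept
# attribute and a message symbol, the kind of quantity a mutual information-based
# compositionality metric could aggregate. Not the paper's MIS definition.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy emerged language: the first symbol perfectly tracks the shape attribute,
# so their mutual information equals the shape entropy (log2(3) bits here).
shapes   = [0, 0, 1, 1, 2, 2]
symbol_0 = [3, 3, 1, 1, 0, 0]
print(mutual_information(shapes, symbol_0))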
%experimentally
Regarding the experimental validation, two different dedicated experiments, i.e.,
\note{XXX and XXX, are utilized for XXX}.
Regarding the experimental validation, we explore the relationship between agent
capacity and the compositionality of symbolic language that emerged \emph{naturally}
in our experiments.
%two different dedicated experiments, i.e., \note{XXX and XXX, are utilized for XXX}.
%Regarding the experimental validation, it is conducted on a listener-speaker
%referential game framework with eliminated unnatural inductions.
Both the theoretical analysis and experimental results lead to a counter-intuitive
AAAI2021/tex/theory.tex
@@ -45,11 +45,6 @@ $t={[0,0,1],[0,1,0]}$ would be equal to $\hat{t}=[0,0,0,0,0,1]$ if they both mea
circle''.
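To make the two target encodings concrete, the sketch below maps per-attribute one-hot vectors to a single one-hot over all attribute combinations; the attribute sizes (3, 2) and the helper function are assumptions chosen only so that the flat vector has six entries, matching the $\hat{t}$ example above.

# Illustrative sketch: per-attribute one-hot vectors versus one flat one-hot over
# the product space. Attribute sizes (3, 2) are assumed so the flat vector has
# 6 entries, as in the example above.
import math

def flatten_one_hots(one_hots, sizes):
    index = 0
    for vec, size in zip(one_hots, sizes):
        index = index * size + vec.index(1)   # row-major index over combinations
    flat = [0] * math.prod(sizes)
    flat[index] = 1
    return flat

# Attribute values (2, 1) map to flat index 5, i.e. [0, 0, 0, 0, 0, 1].
print(flatten_one_hots([[0, 0, 1], [0, 1]], sizes=(3, 2)))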
\subsection{Agent architecture}
\label{ssec:agent}
\begin{figure*}[t]
\centering
\includegraphics[width=1.8\columnwidth]{fig/Figure3_The_architecture_of_agents.pdf}
@@ -57,6 +52,13 @@ circle''.
\label{fig:agents}
\end{figure*}
\subsection{Agent architecture}
\label{ssec:agent}
Figure~\ref{fig:agents} shows the architecture of the constructed agents,
including the Speaker $S$ and Listener $L$.
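As a hedged sketch of such a speaker-listener pair with a single hidden layer of size $h_{size}$, the PyTorch code below may help fix ideas; the input/output dimensions, message length, and layer types are assumptions for illustration, since this excerpt only names the Speaker $S$, the Listener $L$, and $h_{size}$.

# Hedged sketch only: a speaker-listener pair with one hidden layer of size h_size.
# Dimensions, message length, and layer choices are assumed, not taken from the
# paper's agent-architecture figure.
import torch
import torch.nn as nn

class Speaker(nn.Module):
    def __init__(self, n_concepts, vocab_size, msg_len, h_size):
        super().__init__()
        self.hidden = nn.Linear(n_concepts, h_size)
        self.out = nn.Linear(h_size, vocab_size * msg_len)
        self.vocab_size, self.msg_len = vocab_size, msg_len

    def forward(self, concept_one_hot):
        h = torch.relu(self.hidden(concept_one_hot))
        # Logits over the vocabulary for each of the msg_len symbol slots.
        return self.out(h).view(-1, self.msg_len, self.vocab_size)

class Listener(nn.Module):
    def __init__(self, vocab_size, msg_len, n_concepts, h_size):
        super().__init__()
        self.hidden = nn.Linear(vocab_size * msg_len, h_size)
        self.out = nn.Linear(h_size, n_concepts)

    def forward(self, message_one_hot):
        h = torch.relu(self.hidden(message_one_hot))
        return self.out(h)  # logits over candidate targets

# Example instantiation with assumed sizes: 6 concepts, |V|=4, 2-symbol messages.
S, L = Speaker(6, 4, 2, h_size=20), Listener(4, 2, 6, h_size=20)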