% Commit aee4d501, Sep 08, 2020, by Zidong Du: "add Makefile" (parent 6b1f7234)
% File: AAAI2021/tex/introduction.tex (+95, -0)
\section{Introduction}
\label{sec:introduction}
The emergence of human language has always been an important and controversial
issue. This problem attracts attention from a broad range of communities,
including philology, biology, and computer science. In computer science,
researchers induce and analyze emergent language in multi-agent systems by
setting up communication scenarios, such as referential games and
communication-action policies.

Compositionality is a widely used metric for evaluating emergent language. It
is a concept in the philosophy of language~[1], which describes and quantifies
how complex expressions can be assembled out of simpler parts~[2]. For
example, Figure~\ref{fig:symbols}(a) shows a perfectly compositional language
(with maximum compositionality): each shape is represented by a unique value
of symbol $s_0$, and each color is represented by a unique value of symbol
$s_1$. Figure~\ref{fig:symbols}(b) shows a language with low compositionality,
in which colors and shapes are ambiguous if we extract information from a
single symbol only.
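To make the distinction concrete, the following toy sketch (ours, not the paper's code) encodes the two languages of Figure 1 as lookup tables and checks whether a single symbol suffices to identify a single attribute; the specific scrambled code assignment is an illustrative assumption.

```python
# Illustrative sketch of a perfectly compositional language vs. a
# low-compositionality one, using the shapes/colors of Figure 1.
from itertools import product

shapes = ["circle", "square"]
colors = ["red", "blue", "green"]

# Perfectly compositional: s0 depends only on shape, s1 only on color.
compositional = {(sh, co): (sh[0], co[0]) for sh, co in product(shapes, colors)}

# Low compositionality: an arbitrary bijection in which each symbol mixes
# both attributes, so neither symbol alone identifies shape or color.
pairs = list(product(shapes, colors))
codes = [("a", "x"), ("b", "x"), ("a", "y"), ("b", "y"), ("a", "z"), ("b", "z")]
scrambled = dict(zip(pairs, reversed(codes)))

def symbol_identifies(lang, sym_idx, attr_idx):
    # True iff symbol `sym_idx` alone determines attribute `attr_idx`.
    mapping = {}
    for obj, msg in lang.items():
        mapping.setdefault(msg[sym_idx], set()).add(obj[attr_idx])
    return all(len(values) == 1 for values in mapping.values())

print(symbol_identifies(compositional, 0, 0))  # True: s0 determines shape
print(symbol_identifies(scrambled, 0, 0))      # False: s0 is ambiguous
```

In the compositional table each symbol pins down exactly one attribute; in the scrambled table no single symbol does, which is precisely the ambiguity the figure depicts.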
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{fig/occupy}
\caption{(a): The correspondence between symbol sequences ($s_0$, $s_1$) and
(shape, color) pairs in a perfectly compositional language, with $s_0, s_1 \in
\{\text{a, b, c}\}$, shape $\in \{\text{circle, square}\}$, and color $\in
\{\text{red, blue, green}\}$; (b): the correspondence between symbol sequences
($s_0$, $s_1$) and (shape, color) pairs in a language with low
compositionality.}
\label{fig:symbols}
\end{figure}
Prior studies focus on investigating what affects the compositionality of
emergent language. Researchers have found that various environmental pressures
affect compositionality, e.g., small vocabulary sizes~[3],
memorylessness~[4], carefully constructed rewards~[5], and
ease-of-teaching~[6]. However, these works without exception consider only
\emph{nurture}~[7] (i.e., environmental factors), rather than \emph{nature}
(i.e., hereditary factors of the agents), when inducing or exploring emergent
language. Moreover, some environmental pressures, such as regarding entropy as
an additional reward term, may be too idealized to exist in the real world.
In contrast to prior work, we investigate the compositionality of emergent
language from a new perspective, i.e., agent capacity. Different from previous
work that considers only external environmental factors, we study the impact
of agents' internal capacity on the compositionality of emergent language.
Specifically, we first analyze the correlation between agent capacity and
compositionality theoretically, and propose a novel metric to evaluate
compositionality quantitatively. Then, on the basis of the theoretical
analysis and the proposed metric, we verify the relationship between agent
capacity and compositionality experimentally.
Theoretically, on the basis of mutual information theory~[8], we analyze the
correlation between the compositionality of the emergent language and the
complexity of the semantic information carried by a symbol. Such semantic
information can be characterized in neural network-based agents and requires a
certain capacity (i.e., a certain number of neurons in the hidden layer).
Specifically, we use the Markov Series Channel (MSC)~[9] to model the language
transmission process, and use the probability distributions of symbols and
concepts to model the policies of agents. After modeling, we use the mutual
information matrix $MRI^{B}$ to represent the semantic information
quantitatively, where each column of $MRI^{B}$ corresponds to the information
carried by one symbol. We find that for a perfectly compositional language,
each column of the matrix should be a one-hot vector, because each symbol then
exclusively transmits information about a single concept. Therefore, a higher
average similarity between the columns of $MRI^{B}$ and one-hot vectors
indicates that the emergent language is more compositional (i.e., the
compositionality is higher). We propose the metric \emph{MIS} to measure
compositionality by calculating such average similarity quantitatively.
Moreover, a lower MIS indicates that the emergent language tends to deliver
semantic information about more concepts in each symbol, so that the
complexity of the semantic information carried by one symbol tends to be
higher. As a result, higher agent capacity is required to characterize this
more complex semantic information when MIS (i.e., compositionality) is lower.
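For concreteness, one plausible instantiation of such an average similarity (assuming cosine similarity against the best-matching one-hot vector; the exact definition of MIS is not fixed here, and this form is only illustrative) is
\[
\mathit{MIS} \;=\; \frac{1}{m}\sum_{j=1}^{m}\;\max_{i}\;\frac{MRI^{B}_{ij}}{\bigl\|MRI^{B}_{\cdot j}\bigr\|_{2}},
\]
where $MRI^{B}_{\cdot j}$ denotes the $j$-th column of $MRI^{B}$ and $m$ is the number of symbols; each term attains its maximum value of $1$ exactly when the corresponding column has a single nonzero entry, i.e., when the symbol carries information about only one concept.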
Experimentally, we verify the relationship between agent capacity and
compositionality, showing that a low-bilateral (i.e., low-compositionality)
language requires a model with higher capacity to emerge. We build a
listener-speaker referential game as our experimental framework and train
agents with the correctness of the listener's predicted output as the only
criterion. This criterion does not impose any environmental pressure on the
agents; therefore, we can study the impact of capacity on compositionality
free of the influence of environmental pressures. Moreover, to study this
impact under a more ``natural'' environment, the speaker and listener are
individual agents, i.e., disconnected models that do not share parameters. Our
results suggest that by restricting the number of neurons in a model, the
emergent languages tend to have higher bilaterality, and thus higher
compositionality.
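The structure of one round of the referential game can be sketched as follows. This is a hand-coded toy (the paper's agents are trained neural networks, and the perfectly compositional speaker here is our illustrative assumption), using the symbol and attribute sets of Figure 1; it shows how listener correctness serves as the only success criterion.

```python
# Toy sketch of one round of a listener-speaker referential game.
SHAPES = ["circle", "square"]
COLORS = ["red", "blue", "green"]
SYMBOLS = ["a", "b", "c"]

def speaker(obj):
    # A hand-coded, perfectly compositional speaker: s0 encodes the shape,
    # s1 encodes the color.
    shape, color = obj
    return (SYMBOLS[SHAPES.index(shape)], SYMBOLS[COLORS.index(color)])

def listener(msg):
    # The listener decodes the message back into a (shape, color) pair.
    s0, s1 = msg
    return (SHAPES[SYMBOLS.index(s0)], COLORS[SYMBOLS.index(s1)])

def reward(obj):
    # The only training signal: did the listener reconstruct the object?
    return 1.0 if listener(speaker(obj)) == obj else 0.0

objs = [(s, c) for s in SHAPES for c in COLORS]
print(sum(reward(o) for o in objs) / len(objs))  # 1.0 for this speaker
```

In the actual experiments both agents are separate neural networks trained from scratch, and the question is which kind of language (high or low compositionality) emerges when the listener's reconstruction accuracy is the only pressure and model capacity is varied.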
This paper makes the following contributions: