haoyifan / AAAI21_Emergent_language

Commit 0673ccbb authored May 29, 2020 by Zidong Du
Parent: 7a3fbf89
Showing 1 changed file (NIPS2020/main.tex) with 41 additions and 2 deletions
@@ -25,7 +25,7 @@
\usepackage{amsfonts}         % blackboard math symbols
\usepackage{nicefrac}         % compact symbols for 1/2, etc.
\usepackage{microtype}        % microtypography
\usepackage[pdftex]{graphicx}
%%Added by Du, Zidong
\usepackage{ifthen}
@@ -168,17 +168,56 @@
systems. However, even with multi-agent systems, previous works at root are
training one brain to communicate among connected sensors (agents), as they combine
the neural networks of all agents in the training process.
\begin{figure}
  \centering
  \fbox{\rule[-.5cm]{0.0cm}{4cm}\includegraphics{fig/occupy.pdf}\rule[-.5cm]{0.5cm}{0.0cm}}
  \caption{(a) A referential game example~\cite{??}. (b) Training procedure.}
  \label{fig:rg}
\end{figure}
Roughly, to evolve a symbolic language, previous works force the agents to
accomplish a given target through cooperation, which requires communication
among the agents. These works can be classified into two categories based on
their environment settings: \emph{referential games} and \emph{multi-agent
reinforcement learning systems} (MARL).
%referential game
In \emph{referential games}, agents are divided into a \emph{sender} and a
\emph{receiver}, for speaking and listening, respectively. As in the
referential game example shown in Figure~\ref{fig:rg}, one agent (Agent A)
sends a description of a target picture to another agent (Agent B), who must
identify the target picture from a set of pictures~\cite{??}. However, in
training, \note{xxxxxx}.
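% The sender/receiver protocol above can be sketched in a few lines of
% Python. This is an illustrative toy, not the paper's setup: the objects,
% the shape-based message space, and the function names are all assumptions.

```python
# Hypothetical one-round referential game: a sender describes a target
# object with a discrete symbol; a receiver must pick the target out of a
# candidate set. In a trained system both mappings would be learned
# networks; here they are hand-written lookups for clarity.
import random

def sender(target):
    """Agent A: map the target object to a discrete symbol (its shape)."""
    return target["shape"]

def receiver(message, candidates):
    """Agent B: pick the candidate whose shape matches the symbol."""
    for i, cand in enumerate(candidates):
        if cand["shape"] == message:
            return i
    return random.randrange(len(candidates))  # fall back to a random guess

def play_round(candidates, target_idx):
    message = sender(candidates[target_idx])   # Agent A speaks
    guess = receiver(message, candidates)      # Agent B listens and picks
    return guess == target_idx                 # success = reward signal

candidates = [{"shape": "circle"}, {"shape": "square"}, {"shape": "triangle"}]
assert play_round(candidates, target_idx=1)    # receiver finds the square
```

% The boolean returned by play_round is exactly the (sparse) reward that
% training would back-propagate through both agents.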
%marl
In \emph{MARL}, agents are placed in a virtual environment to cooperate in a
continuous action space. To generate symbolic language, agents share the model
parameters and/or environment information. Therefore, these agents can be
regarded as different sensors connected to one huge brain, not as separate,
individual brains.
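% The "one brain" observation can be made concrete with a minimal sketch
% (illustrative only; the class and variable names are assumptions, not the
% paper's method): when agents share one parameter set, an update made
% through any agent changes the policy of all of them.

```python
# Two nominally separate agents holding a reference to the same weight
# table. A gradient step applied through agent A also moves agent B's
# policy, so the pair behaves as a single learner with two sensors.
shared_weights = {"w": 0.0}  # one parameter set shared by all agents

class Agent:
    def __init__(self, weights):
        self.weights = weights          # a reference, not a copy

    def act(self, observation):
        return self.weights["w"] * observation

    def update(self, grad, lr=0.1):
        self.weights["w"] -= lr * grad  # writes through to every agent

a, b = Agent(shared_weights), Agent(shared_weights)
a.update(grad=-1.0)           # only agent A takes a gradient step ...
assert b.act(1.0) == 0.1      # ... yet agent B's behavior changed too
```

% Truly individual agents would instead each own a private copy of the
% weights, so learning in one could never silently alter the other.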
Moreover, beyond the individual-agents issue, two more flaws exist in previous works.
%intention
First, intention is not considered in the cooperation among agents. Previous
works always allocate each agent a fixed role, either sender or receiver, forcing
communication without considering the agents' intentions.
%
Second,
In this paper, to achieve the natural emergence of symbolic language among
individual agents, we propose a novel Self-grounding-Introspection-Cooperation
(SIC) model. There are four key differences between the SIC model and previous
works. First, to the best of our knowledge, SIC is the first work to generate
symbolic language among \emph{individual} agents. Second, SIC is the first to
achieve the natural emergence of symbolic language.
\section{The Self-grounding-Introspection-Cooperation Model}

\section{Experiments}

\subsection{Methodology}