Commit 30e61641 by Zidong Du


parent 6225c8f6
......@@ -54,7 +54,7 @@
{
%%writing mode
\newcommand{\rmk}[1]{\textcolor{red}{--[#1]--}}%%communication and remarks
\newcommand{\del}[1]{\textcolor{gray}{\sout{#1}}}%%delete the text
\newcommand{\rmkdone}[1]{\textcolor{gray}{--[#1]--}}%%done for remarks
\newcommand{\add}[1]{\textcolor{PineGreen}{#1}}%%add
......@@ -143,17 +143,66 @@ and the emergence of intelligence in individual human.
\section{Introduction}
%Symbolic Language is important
Symbolic language, a language that uses symbols and characters to represent
concepts for communication, is widely taken as a key factor in the emergence
and evolution of human intelligence. \rmk{some claims.} Despite many studies,
the emergence of symbolic language among our pre-human ancestors remains a
mystery, accompanied by many controversial theories and hypotheses. This old
problem has been a long-standing challenge in many fields, including artificial
intelligence in computer science.
%Recent effort for emergence is not correct
Recent efforts in computer science have started to explore the emergence of
symbolic language among multiple agents in a virtual environment by leveraging
neural-network-based methods, i.e., deep reinforcement learning. In these
works, researchers put multiple agents into a situation where the agents have
to communicate with each other to achieve a pre-defined goal cooperatively.
Researchers hope that the agents will evolve a stable communication protocol
(e.g., a symbolic language) among themselves through this compulsory
cooperation via communication. These works can be roughly classified into two
categories, referential games and multi-agent reinforcement learning (MARL),
based on the environment setting.
However, previous works, whether based on referential games or on multi-agent
reinforcement learning, ignore the independence of agents in training and
inference. Their agents usually share one or more of the model parameters, loss
functions, and observations of the environment, and can thus be taken as one
huge brain with multiple connected sensors (agents). In other words, previous
works did not truly achieve the emergence of symbolic language among
\emph{multiple agents}.
%difference from group intelligence, population intelligence, community intelligence
Besides, previous works also fail to achieve \emph{natural emergence}. In
referential games, agents act in the pre-set role of sender or receiver, for
speaking or listening, respectively. In MARL, agents \rmk{how}
%In this paper, we proposed a SIC model
In this paper, to achieve the natural emergence of symbolic language among
individual agents, we propose a novel three-step model---the
Self-grounding-Introspection-Cooperation (SIC) model. In the first step of the
SIC model, \emph{Self-grounding}, each agent is trained in the virtual
environment to perform tasks successfully through a self-playing process. In
the second step, \emph{Introspection}, each agent stops acting on the observed
target and instead self-examines the consistency between its spoken words and
its heard words through a talking-to-oneself process. In the third step,
\emph{Cooperation}, multiple agents are placed in the same environment and
required to perform a certain task together, where the symbolic language
appears. With the SIC model, we achieve the goal of natural emergence of
symbolic language among multiple agents. Experimental results show that
\rmk{xxxx} To the best of our knowledge, we are the first to \rmk{xxx} This
work presents an attempt to shed some light on the origin of human language and
the emergence of intelligence in individual humans.
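The three steps above can be sketched in code. Everything below is an
illustrative toy, assumed for exposition; the class and method names
(\texttt{Agent}, \texttt{self\_ground}, \texttt{introspect},
\texttt{cooperate}) are our own placeholders, not the actual implementation.

```python
# Toy sketch of the three SIC phases: Self-grounding, Introspection,
# Cooperation. All names are illustrative assumptions, not the paper's code.

class Agent:
    def __init__(self, name):
        self.name = name
        self.vocab = {}                      # observation -> symbol mapping

    # Phase 1: Self-grounding -- each agent trains alone in the environment.
    def self_ground(self, observations):
        for i, obs in enumerate(observations):
            self.vocab[obs] = f"s{i}"        # stand-in for a learned grounding

    def speak(self, obs):
        return self.vocab[obs]

    def understand(self, symbol):
        inverse = {v: k for k, v in self.vocab.items()}
        return inverse[symbol]

    # Phase 2: Introspection -- the agent talks to itself, checking that what
    # it says for a target is what it understands when hearing the same word.
    def introspect(self, observations):
        return all(self.understand(self.speak(o)) == o for o in observations)

# Phase 3: Cooperation -- independently trained agents are placed together;
# no parameters are shared, they interact only through symbols.
def cooperate(speaker, listener, target):
    return listener.understand(speaker.speak(target))

a, b = Agent("A"), Agent("B")
obs = ("red", "green", "blue")
for agent in (a, b):
    agent.self_ground(obs)
    assert agent.introspect(obs)
print(cooperate(a, b, "red"))  # "red" once both vocabularies align
```

In this toy, cooperation succeeds only because both agents happened to ground
observations the same way; in the real setting, aligning independently
grounded vocabularies is exactly what the Cooperation phase must achieve.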
%Our contribution
We make the following contributions.
First, we propose the SIC model, the first work to naturally generate a
symbolic language among multiple \emph{individual} agents.
Second, we confirm the necessity and importance of the environment, \rmk{xxxxxx}.
Third, we observe that the intention to cooperate is the basis for
communication, \rmk{xxxxx}.
%\begin{itemize}[topsep=0cm,itemsep=0cm,leftmargin=0.3cm]
%\item
%\item
%\item
%\end{itemize}
\section{Background and Motivation}
......@@ -162,6 +211,17 @@ related to the emergence symbolic language.
\subsection{Symbolic Language}
%Symbolic language,
%How to evaluate whether a language is a symbolic language
Symbolic languages use symbols and characters to represent concepts for
communication. Before diving into the question of emergence, we first examine
the measurement of symbolic language, i.e., whether a communication protocol
generated among agents can be taken as a symbolic language. \rmk{Add related
work, and our proposals.}
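One measurement commonly used in the emergent-communication literature (an
assumption on our part; this draft has not yet named its metric) is
\emph{topographic similarity}: the correlation between pairwise distances in
meaning space and in message space. A minimal sketch:

```python
# Hypothetical sketch of topographic similarity, a standard emergent-language
# metric (not necessarily the measurement this paper will propose): correlate
# pairwise distances between meanings with distances between messages.
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def topographic_similarity(meanings, messages):
    """Pearson correlation of pairwise meaning/message distance vectors."""
    pairs = list(combinations(range(len(meanings)), 2))
    d_mean = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    d_msg = [hamming(messages[i], messages[j]) for i, j in pairs]
    n = len(pairs)
    mm, ms = sum(d_mean) / n, sum(d_msg) / n
    cov = sum((a - mm) * (b - ms) for a, b in zip(d_mean, d_msg))
    var = (sum((a - mm) ** 2 for a in d_mean)
           * sum((b - ms) ** 2 for b in d_msg)) ** 0.5
    return cov / var if var else 0.0

# A perfectly compositional protocol maps similar meanings to similar messages:
meanings = [(0, 0), (0, 1), (1, 0), (1, 1)]
messages = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(topographic_similarity(meanings, messages))  # 1.0 for this mapping
```

A score near 1 suggests a compositional, symbol-like protocol; a score near 0
suggests an unstructured code.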
\subsection{Multi-agent Systems}
Recent works focus on the emergence of grounded symbolic language in
neural-network-based multi-agent systems. The grounded symbolic language, where
......@@ -193,23 +253,30 @@ In \emph{referential games}, agents are divided into \emph{sender} and
the referential game example shown in Figure~\ref{fig:rg}, one agent (Agent A)
sends a description of a target picture to another agent (Agent B), who must
identify the target picture from a set of pictures~\cite{??}. However, in
training, \note{xxxxxx}.
\rmk{Some other examples}
Moreover, most referential games do not consider the environment; agents only
send/receive symbols to finish a task together. \rmk{Some examples.}
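A referential game round can be made concrete with a toy sketch. This is our
own illustrative assumption of the setup, not any cited paper's exact
implementation; in real work the sender and receiver are trained networks.

```python
# Toy referential game: a sender describes a target; a receiver must pick it
# out of a set of candidates. Both functions are hand-written stand-ins for
# trained networks (an illustrative assumption).
import random

def sender(target):
    # Toy "message": emit one discrete symbol describing the target.
    return target["color"]

def receiver(message, candidates):
    # Pick the candidate whose attribute matches the received symbol.
    for i, c in enumerate(candidates):
        if c["color"] == message:
            return i
    return -1

random.seed(0)
candidates = [{"color": c} for c in ("red", "green", "blue")]
target_idx = random.randrange(len(candidates))
msg = sender(candidates[target_idx])
guess = receiver(msg, candidates)
print(guess == target_idx)  # True: the referential game round succeeded
```

Note how the environment plays no role here: success depends only on the
symbol channel, which is precisely the limitation discussed above.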
%marl
In \emph{MARL}, agents are placed in a virtual environment to cooperate in a
continuous action space. To generate symbolic language, the agents share model
parameters and/or environment information. Therefore, those agents can be
taken as different sensors connected to one huge brain, not separate,
individual brains. For example, \rmk{some examples}
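The "one huge brain" criticism can be made concrete with a minimal sketch
(names are illustrative assumptions): when agents share parameters, an update
made through one agent is instantly visible to all the others, so nothing is
ever communicated between genuinely separate learners.

```python
# Sketch of shared vs. independent parameters. With sharing, every agent holds
# a reference to the same weight store: one brain, many sensors.

shared_weights = {"w": 0.0}

class SharedAgent:
    def __init__(self):
        self.weights = shared_weights        # same dict object for every agent

class IndependentAgent:
    def __init__(self):
        self.weights = {"w": 0.0}            # private copy per agent

a1, a2 = SharedAgent(), SharedAgent()
a1.weights["w"] = 1.0
print(a2.weights["w"])  # 1.0 -- a2 "learned" from a1's update without any message

b1, b2 = IndependentAgent(), IndependentAgent()
b1.weights["w"] = 1.0
print(b2.weights["w"])  # 0.0 -- truly separate, individual brains
```

Independent agents can only align their behavior through the environment and
the symbol channel, which is the setting the SIC model targets.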
Moreover, beyond the individual-agents issue, two more common flaws exist in
previous works.
%intention
First, intention is not considered in the cooperation among agents. Previous
works always allocate each agent a role, either sender or receiver, forcing
communication without considering the agents' intention.
%
%the actions from other agents
Second, the actions of other agents are not considered in the cooperation
among agents. \note{Agents in previous works mainly focus on the targets and
the environment, without understanding the actions of other agents. Thus,
their cooperation can hardly be taken as true cooperation.}
\textbf{Our Proposal.} To achieve the natural emergence of symbolic language among
individual agents, we propose a novel Self-grounding-Introspection-Cooperation
(SIC) model. There are four key differences between the SIC model and previous
works.
......@@ -232,7 +299,9 @@ To achieve the goal of emerging symbolic language naturally among individual
agents, we decide to follow three key design principles.
\begin{itemize}[topsep=0cm,itemsep=0cm,leftmargin=0.3cm]
\item \textbf{Individual agents.}
\item \textbf{.}
\item \textbf{Complex environments.} As \note{mentioned by
Tomasello~\cite{??}, the outside environment is a key factor in the emergence
of human language and animal language, and cannot be ignored.}
\item \textbf{.}
\end{itemize}
......@@ -240,7 +309,7 @@ agents, we decide to follow three key design principles.
\begin{figure}
\centering
\fbox{\rule[-.5cm]{0.0cm}{4cm}
\includegraphics[width=.99\columnwidth]{fig/overall.png}
\rule[-.5cm]{0.5cm}{0.0cm}}
\caption{The processing flow of SIC model.}
\label{fig:sic}
......