Following Equation~\ref{eq:ri}, we can obtain the normalized mutual information matrix
\begin{equation}\label{eq:mri}
M =
\begin{pmatrix}
R\left(c_0,s_0\right) & R\left(c_0,s_1\right)\\
R\left(c_1,s_0\right) & R\left(c_1,s_1\right)
\end{pmatrix}
\end{equation}
Each column of $M$ corresponds to the semantic information carried by one symbol. In a perfectly compositional language, each symbol represents exactly one concept. Therefore, the closer each column of $M$ is to a one-hot vector, the more compositional the emergent language.
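As a minimal illustration of this idea, the sketch below scores each column of $M$ by the fraction of its mass concentrated on its dominant concept, so a perfectly one-hot column scores $1.0$. The entries $R(c_i, s_j)$ are assumed precomputed via Equation~\ref{eq:ri}; the specific scoring rule (column max over column sum) is our illustrative choice, not necessarily the measure used in the paper.

```python
def one_hot_similarity(M):
    """Score how close each column of M is to a one-hot vector.

    M is a list of rows, with M[i][j] = R(c_i, s_j): the normalized
    mutual information between concept c_i and symbol s_j.
    Returns one score per column (symbol); 1.0 means the symbol's
    information mass falls entirely on a single concept.
    """
    n_rows = len(M)
    n_cols = len(M[0])
    scores = []
    for j in range(n_cols):
        col = [M[i][j] for i in range(n_rows)]
        total = sum(col)
        scores.append(max(col) / total if total > 0 else 0.0)
    return scores

# Perfectly compositional: each symbol encodes exactly one concept.
M_comp = [[1.0, 0.0],
          [0.0, 1.0]]
# Entangled: each symbol spreads its information over both concepts.
M_mixed = [[0.5, 0.5],
           [0.5, 0.5]]

print(one_hot_similarity(M_comp))   # → [1.0, 1.0]
print(one_hot_similarity(M_mixed))  # → [0.5, 0.5]
```

Averaging these per-column scores gives a single scalar in $[1/|C|, 1]$ (for $|C|$ concepts) that increases with the compositionality of the language, under the assumptions stated above.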