@@ -90,6 +90,29 @@ We did not use pre-trained models in our study. Note that it is impossible to re
**8. What do your results tell us about the use of RL in macro placement?**
- In the majority of the cases we tested, the solutions produced by human experts and by SA are superior to those generated by the RL framework.
- Furthermore, in our experiments, SA produces better results than Circuit Training in nearly all cases, **using fewer computational resources**, across both benchmark sets that we studied.
<table>
<thead>
<tr>
<th>Testcases</th>
<th>Proxy cost</th>
<th>Wirelength (WL)</th>
</tr>
</thead>
<tbody>
<tr>
<td>ICCAD04 (IBM)</td>
<td>SA wins 17/17</td>
<td>SA wins 16/17 (HPWL)</td>
</tr>
<tr>
<td>Modern IC designs</td>
<td>SA wins 4/6</td>
<td>SA wins 5/6 (routed WL)</td>
</tr>
</tbody>
</table>
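As context for the "Wirelength (WL)" column above, the ICCAD04 (IBM) comparisons use half-perimeter wirelength (HPWL): for each net, the half-perimeter of the bounding box of its pin locations, summed over all nets. The sketch below is illustrative only; the coordinates are made up and are not taken from any benchmark.

```python
# Minimal HPWL sketch (illustrative data, not from any benchmark).

def net_hpwl(pins):
    """HPWL of one net: half-perimeter of the bounding box of its (x, y) pins."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_hpwl(nets):
    """Design-level HPWL: sum of per-net HPWL."""
    return sum(net_hpwl(pins) for pins in nets)

# Two toy nets: bounding boxes 3x2 and 1x4.
nets = [
    [(0, 0), (3, 2), (1, 1)],  # HPWL = 3 + 2 = 5
    [(5, 5), (6, 9)],          # HPWL = 1 + 4 = 5
]
print(total_hpwl(nets))  # → 10
```

Routed wirelength (used for the modern IC designs row) is measured after detailed routing and is generally larger than HPWL, which only lower-bounds it per net.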
**9. Did the work by Prof. David Pan show that Google open-source code was sufficient?**
...
...
@@ -122,6 +145,7 @@ The list of available [testcases](./Testcases) is as follows.
In the [Nature Paper](https://www.nature.com/articles/s41586-021-03544-w), the authors report results for an Ariane design with 133 memory (256x16, single-ported SRAM) macros. We observe that synthesizing from the available Ariane RTL in the [lowRISC](https://github.com/lowRISC/ariane) GitHub repository using 256x16 memories results in an Ariane design that has 136 memory macros. We outline the steps to instantiate the memories for Ariane 136 [here](./Testcases/ariane136/), and we show how we convert the Ariane 136 design to an Ariane 133 design that matches Google's memory macro count [here](./Testcases/ariane133/).
We provide the flop count, macro type, and macro count for all testcases in the following table.