Commit 28d40517 by sakundu

Updated README

Signed-off-by: sakundu <sakundu@ucsd.edu>
parent 6d2fd58a
@@ -87,7 +87,7 @@ We did not use pre-trained models in our study. Note that it is impossible to re
 - CMP: Innovus launched with 8 threads
 - AutoDMP: run on NVIDIA DGX-A100 machine with two GPU workers
-**8. What do your results tell us about the use of RL in macro placement?**
+**8. In your experiments, are Simulated Annealing (SA) and Reinforcement Learning (i.e., Circuit Training) given comparable runtime and computational resources?**
 - In the majority of cases we tested, the solutions produced by human experts and SA are superior to those generated by the RL framework.
 - Furthermore, in our experiments, SA produces better results than Circuit Training in nearly all cases, **using fewer computational resources**, across both benchmark sets that we studied.
@@ -103,13 +103,13 @@ We did not use pre-trained models in our study. Note that it is impossible to re
 <tbody>
 <tr>
 <td>ICCAD04 (IBM)</td>
-<td>SA wins 17/17</td>
-<td>SA wins 16/17 (HPWL)</td>
+<td>SA wins over CT 17/17</td>
+<td>SA wins over CT 16/17 (HPWL)</td>
 </tr>
 <tr>
 <td>Modern IC designs</td>
-<td>SA wins 4/6</td>
-<td>SA wins 5/6 (routed WL)</td>
+<td>SA wins over CT 4/6</td>
+<td>SA wins over CT 5/6 (routed WL)</td>
 </tr>
 </tbody>
 </table>
@@ -119,7 +119,7 @@ We did not use pre-trained models in our study. Note that it is impossible to re
 - No. The arXiv paper “Delving into Macro Placement with Reinforcement Learning” was published in September 2021, before the open-sourcing of Circuit Training. To our understanding, that work focused on the use of DREAMPlace instead of force-directed placement.
-**10. Did you replicate results from Stronger Baselines?**
+**10. Which conclusions did you confirm from the Nature paper and from Stronger Baselines?**
 - For the Nature paper: We confirmed that Circuit Training beats RePlAce **on modern testcases** with respect to both proxy cost and Nature Table 1 metrics. (Out of 6 head-to-head comparisons for each available metric, RePlAce wins only 3/6 routed wirelength comparisons and 2/6 total power comparisons.)
 - For Stronger Baselines: We confirmed that SA beats Circuit Training on ICCAD04 benchmarks. (Out of 17 head-to-head comparisons for each available metric, Circuit Training wins only 1/17 HPWL comparisons.)