@@ -2059,7 +2059,7 @@ The following table and screenshots show the CT result.
 <table>
 <thead>
 <tr>
-<th colspan="10"><p align="center"><a id="Ariane133_NG45_1.5ns_CT"></a>Ariane133-68-NG45 CT result for TCP:1.5ns [Corresponding CMP result <a href="#Ariane133_NG45_1.5ns_CMP">link</a>]</th>
+<th colspan="10"><p align="center"><a id="Ariane133_NG45_1.5ns_CT"></a>Ariane133-68-NG45 CT result for TCP:1.5ns [Corresponding CMP result <a href="#Ariane133_NG45_1.5ns_CMP">link</a>]</p></th>
 </tr>
 </thead>
 <tbody>
@@ -2134,7 +2134,7 @@ The following table and screenshots show the CT result.
 <table>
 <thead>
 <tr>
-<th colspan="10"><p align="center"><a id="Ariane133_NG45_1.3ns_CT"></a>Ariane133-68-NG45 CT result for TCP:1.3ns [Corresponding CMP result <a href="#Ariane133_NG45_1.3ns_CMP">link</a>]</th>
+<th colspan="10"><p align="center"><a id="Ariane133_NG45_1.3ns_CT"></a>Ariane133-68-NG45 CT result for TCP:1.3ns [Corresponding CMP result <a href="#Ariane133_NG45_1.3ns_CMP">link</a>]</p></th>
 </tr>
 </thead>
 <tbody>
@@ -2475,7 +2475,6 @@ We shared the Ariane133-NG45-68% protobuf netlist and clustered netlist with Goo
 </table>
-<p align="center">
 <table>
 <thead>
 <tr>
@@ -2507,7 +2506,6 @@ We shared the Ariane133-NG45-68% protobuf netlist and clustered netlist with Goo
@@ -2515,7 +2513,7 @@ We shared the Ariane133-NG45-68% protobuf netlist and clustered netlist with Goo
 </p>
-**October 8:**
+**October 9:**
 <a id="Question9"></a>
 **<span style="color:blue">Question 9.</span>** Are CT results stable? If not, how much does the outcome vary?
@@ -4033,7 +4031,7 @@ In the following table we report the Kendall rank correlation coefficient for pr
 <a id="MemPoolGroup_NG45_68"></a>
 **Circuit Training Baseline Result on “Our MemPool_Group-NanGate45_68”.**
 We have trained CT to generate a macro placement for the [MemPool Group design](../../Flows/NanGate45/mempool_group/). For this experiment we use the NanGate45 enablement; the initial canvas size is generated by setting utilization to 68%. We use the default hyperparameters used for Ariane, as we did when training CT for the bp_quad design. The number of hard macros in MemPool Group is 324, so we update [max_sequence_length](https://github.com/google-research/circuit_training/blob/6a76e327a70b5f0c9e3291b57c085688386da04e/circuit_training/learning/ppo_collect.py#L53) to 325 in [ppo_collect.py](https://github.com/google-research/circuit_training/blob/6a76e327a70b5f0c9e3291b57c085688386da04e/circuit_training/learning/ppo_collect.py#L53) and [sequence_length](https://github.com/google-research/circuit_training/blob/6a76e327a70b5f0c9e3291b57c085688386da04e/circuit_training/learning/train_ppo.py#L57) to 325 in [train_ppo.py](https://github.com/google-research/circuit_training/blob/6a76e327a70b5f0c9e3291b57c085688386da04e/circuit_training/learning/train_ppo.py#L57).
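For reference, the sketch below shows what the sequence-length edit described in the paragraph above amounts to, assuming both values are defined as absl integer flags as the linked lines suggest (the exact definitions and help strings in circuit_training may differ). The defaults are raised to the hard-macro count plus one, 324 + 1 = 325.

```python
# Sketch only, not verbatim from circuit_training: the two flag defaults
# after the edit described above.
from absl import flags

# circuit_training/learning/ppo_collect.py
flags.DEFINE_integer(
    'max_sequence_length', 325,  # hard-macro count (324) + 1
    'Maximum sequence length for PPO collection (assumed help text).')

# circuit_training/learning/train_ppo.py
flags.DEFINE_integer(
    'sequence_length', 325,  # hard-macro count (324) + 1
    'Sequence length for PPO training (assumed help text).')
```

If the scripts expose these values as command-line flags, passing `--max_sequence_length=325` and `--sequence_length=325` at invocation would be an alternative to editing the defaults in place.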
@@ -4212,7 +4210,7 @@ We have trained CT to generate a macro placement for the [MemPool Group design](
 </p>
 ## **Pinned (to bottom) question list:**
 **<span style="color:blue">[Question 1](#Question1).</span>** How does having an initial set of placement locations (from physical synthesis) affect the (relative) quality of the CT result?
 **<span style="color:blue">[Question 2](#Question2).</span>** How does utilization affect the (relative) performance of CT?
 **<span style="color:blue">[Question 3](#Question3).</span>** Is a testcase such as Ariane-133 “probative”, or do we need better testcases?