Next, prepare your trained policy and the testcase you want to evaluate on. Assuming you have trained models (usually found under `./logs`), copy the run folder into the `saved_model` folder, and make sure your testcase is under `./test`. The `ckptID` is the policy checkpoint ID saved after each training iteration.
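For example, assuming a run named `run_00` under `./logs` and the Ariane testcase (both names are placeholders for your own artifacts), the preparation might look like:

```sh
# Illustrative only: substitute your own run name and testcase directory.
mkdir -p ./saved_model
cp -r ./logs/run_00 ./saved_model/   # trained policy run folder
ls ./test/ariane                     # confirm the testcase is under ./test
```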
Finally, run the following command with the paths to the netlist file, the initial placement file, and the model run directory.
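A representative invocation is sketched below. The entry-point name `eval_ct.py` and the flag spellings (`--netlist`, `--plc`, `--rundir`, `--ckptID`) are assumptions based on the description above; check the script's `--help` output for the exact argument names.

```sh
# Sketch of the evaluation command (assumed entry point and flag names;
# all paths and IDs below are placeholders for your own files).
python eval_ct.py \
    --netlist ./test/ariane/netlist.pb.txt \
    --plc ./test/ariane/initial.plc \
    --rundir ./saved_model/run_00 \
    --ckptID 42
```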
The resulting placement will be stored under `CodeElements/EvalCT/` and named `eval_[RUN_DIR]_to_[TESTCASE].plc`.
## Trained Policy
We provide one of the runs we trained from scratch on Google's Ariane testcase. **This run is not representative of the full potential of Circuit Training**; we provide these trained weights only for testing purposes. Feel free to load any of your own trained weights instead. You will find a similar file structure under `./logs` after training.
## View Your Result
You can view the result by loading this placement file into the open-sourced Plc_client testbench and calling its `display_canvas` function, as sketched below.
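A minimal Python sketch follows. The module path, class name, and constructor arguments are assumptions about the open-sourced Plc_client testbench; adapt them to your checkout.

```python
# Minimal sketch: module path, class name, and method signatures are assumed;
# adjust to match your local copy of the Plc_client testbench.
from Plc_client import plc_client_os

NETLIST_FILE = "./test/ariane/netlist.pb.txt"  # testcase netlist (placeholder path)
EVAL_PLC = "./eval_run_00_to_ariane.plc"       # placement produced by EvalCT

# Build the placement object from the netlist, load the evaluated placement,
# then draw the canvas with the placed macros.
plc = plc_client_os.PlacementCost(NETLIST_FILE)
plc.restore_placement(EVAL_PLC)
plc.display_canvas()
```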