Unverified Commit 7ad47f7c by Yucheng Wang Committed by GitHub

Merge pull request #16 from TILOS-AI-Institute/plc_client-open-source

Plc client open source
parents 29a44d1c 19d47bef
# PlacementCost
## Quick Start
In the `MACROPLACEMENT/CodeElements` directory, run the following commands:
```shell
# Download plc_client from Google's circuit training
curl 'https://raw.githubusercontent.com/google-research/circuit_training/main/circuit_training/environment/plc_client.py' > ./Plc_client/plc_client.py
# Copy the placement cost binary to /usr/local/bin and make it executable.
sudo curl https://storage.googleapis.com/rl-infra-public/circuit-training/placement_cost/plc_wrapper_main \
  -o /usr/local/bin/plc_wrapper_main
sudo chmod 755 /usr/local/bin/plc_wrapper_main
# Run the plc testbench
python -m Plc_client.plc_client_os_test
```
## HPWL Computation
Given a net $i$, its wirelength can be computed as follows:
$$
HPWL(i) = W_{i\_{source}} \cdot \left[\max_{b \in i}(x_b) - \min_{b \in i}(x_b) + \max_{b \in i}(y_b) - \min_{b \in i}(y_b)\right]
$$
where $W_{i\_{source}}$ is the weight (defaulting to $1$) defined on the source pin of net $i$.
The total wirelength of the netlist is the sum over all $N_{netlist}$ nets:
$$
HPWL(netlist) = \sum_{i=1}^{N_{netlist}} W_{i\_{source}} \cdot \left[\max_{b \in i}(x_b) - \min_{b \in i}(x_b) + \max_{b \in i}(y_b) - \min_{b \in i}(y_b)\right]
$$
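As a cross-check, the formulas above can be sketched in plain Python. This is an illustrative sketch, not the `plc_wrapper_main` implementation: it assumes each net is given as its source-pin weight plus a list of `(x, y)` pin coordinates.

```python
# Illustrative HPWL computation (hypothetical input format: a net is a
# list of (x, y) pin coordinates; `weight` is the source-pin weight).

def hpwl(pins, weight=1.0):
    """Weighted half-perimeter wirelength of one net."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    # Half-perimeter of the pins' bounding box, scaled by the net weight.
    return weight * ((max(xs) - min(xs)) + (max(ys) - min(ys)))

def total_hpwl(nets):
    """Total netlist wirelength: sum of weighted HPWL over (weight, pins) pairs."""
    return sum(hpwl(pins, w) for w, pins in nets)
```

For example, a net with pins at `(0, 0)`, `(2, 3)`, and `(1, 1)` has a bounding box of width 2 and height 3, so its HPWL is 5.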
## Adjacency Matrix Computation
The adjacency matrix is represented as an array of $[N_{hardmacros} + N_{softmacros} + N_{ports}] \times [N_{hardmacros} + N_{softmacros} + N_{ports}]$ elements.
Each entry $(i, j)$ of the adjacency matrix represents the total number of connections between module $i$ and module $j$, counted over all of their corresponding pins.
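The exact counting convention used by `plc_wrapper_main` is precisely what the testcases in this effort probe; the sketch below uses one plausible convention (one connection per source-to-sink pin pair, counted symmetrically) and a hypothetical input format chosen only to illustrate the idea.

```python
def adjacency_matrix(num_modules, nets):
    """Builds a module-to-module connection-count matrix.

    `nets` is a list of nets, each given as the list of module indices its
    pins belong to, source module first (hypothetical input format).
    """
    adj = [[0] * num_modules for _ in range(num_modules)]
    for modules in nets:
        src = modules[0]
        for m in modules[1:]:
            if m != src:
                # One connection per source-to-sink pin pair, recorded
                # symmetrically since the matrix is undirected.
                adj[src][m] += 1
                adj[m][src] += 1
    return adj
```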
# PlacementCost
*Circuit Training Open Source* is an effort to open-source the entire framework for generating chip floor plans
with distributed deep reinforcement learning. This framework reproduces the
methodology published in the Nature 2021 paper:
*[A graph placement methodology for fast chip design.](https://www.nature.com/articles/s41586-021-03544-w)
Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Wenjie Jiang, Ebrahim
Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Azade Nazi,
Jiwoo Pak, Andy Tong, Kavya Srinivasa, William Hang, Emre Tuncer, Quoc V. Le,
James Laudon, Richard Ho, Roger Carpenter & Jeff Dean, 2021. Nature, 594(7862),
pp.207-212.
[[PDF]](https://www.nature.com/articles/s41586-021-03544-w.epdf?sharing_token=tYaxh2mR5EozfsSL0WHZLdRgN0jAjWel9jnR3ZoTv0PW0K0NmVrRsFPaMa9Y5We9O4Hqf_liatg-lvhiVcYpHL_YQpqkurA31sxqtmA-E1yNUWVMMVSBxWSp7ZFFIWawYQYnEXoBE4esRDSWqubhDFWUPyI5wK_5B_YIO-D_kS8%3D)*
## Quick Start
In the `MACROPLACEMENT` main directory, run the following commands:
```shell
# Copy the placement cost binary to /usr/local/bin and make it executable.
sudo curl https://storage.googleapis.com/rl-infra-public/circuit-training/placement_cost/plc_wrapper_main \
  -o /usr/local/bin/plc_wrapper_main
sudo chmod 755 /usr/local/bin/plc_wrapper_main
# Run the plc testbench
python -m Plc_client.plc_client_os_test
```
## What do we open-source here?
The current Circuit Training framework requires users to download an executable binary, <em>plc_wrapper_main</em>, in order to run. <em>plc_wrapper_main</em> is not open-sourced and is not documented anywhere publicly. [plc_client.py](https://github.com/google-research/circuit_training/blob/main/circuit_training/environment/plc_client.py) then communicates with this executable binary to obtain critical information: building a training environment, extracting a state observation, and even calling a force-directed placer. This is an attempt to open-source this last piece of the puzzle and, hopefully, provide a fully transparent framework to the public.
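For reference, the wire format that [plc_client.py](https://github.com/google-research/circuit_training/blob/main/circuit_training/environment/plc_client.py) sends to the binary over its `AF_UNIX` socket can be sketched as below. The snake_case-to-PascalCase conversion mirrors `plc_client.py`'s `__getattr__`; `'get_cost'` stands in here for any supported API method.

```python
import json

def to_pascal_case(name):
    """Mirrors plc_client.py's snake_case-to-PascalCase method-name mapping."""
    return name.replace('_', ' ').title().replace(' ', '')

# A request as plc_client.py would serialize it before writing it to the
# socket; plc_wrapper_main replies with a JSON-encoded return value.
request = json.dumps({'name': to_pascal_case('get_cost'), 'args': ()})
```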
## How do I open-source?
All current progress can be reviewed [here](https://github.com/TILOS-AI-Institute/MacroPlacement/blob/plc_client-open-source/Plc_client/plc_client_os.py). I have generated numerous testcases, ranging from a few macros to hundreds of different modules. The purpose of these testcases is to study the behavior of <em>plc_wrapper_main</em> in a scalable way. I have also set up a testbench to compare my results against those from [plc_client.py](https://github.com/google-research/circuit_training/blob/main/circuit_training/environment/plc_client.py).
## What is the end-goal?
The first (and current) step of this open-source effort is to reproduce, in the testbench, results matching those of Google's <em>plc_wrapper_main</em>. The final step will be to plug [plc_client_os.py](https://github.com/TILOS-AI-Institute/MacroPlacement/blob/plc_client-open-source/Plc_client/plc_client_os.py) into the Circuit Training framework and reproduce a quality training run.
## Reference
```
@article{mirhoseini2021graph,
  title={A graph placement methodology for fast chip design},
  author={Mirhoseini, Azalia and Goldie, Anna and Yazgan, Mustafa and Jiang, Joe
          Wenjie and Songhori, Ebrahim and Wang, Shen and Lee, Young-Joon and Johnson,
          Eric and Pathak, Omkar and Nazi, Azade and Pak, Jiwoo and Tong, Andy and
          Srinivasa, Kavya and Hang, William and Tuncer, Emre and V. Le, Quoc and
          Laudon, James and Ho, Richard and Carpenter, Roger and Dean, Jeff},
  journal={Nature},
  volume={594},
  number={7862},
  pages={207--212},
  year={2021},
  publisher={Nature Publishing Group}
}
```
```
@misc{CircuitTraining2021,
  title = {{Circuit Training}: An open-source framework for generating chip
           floor plans with distributed deep reinforcement learning.},
  author = {Guadarrama, Sergio and Yue, Summer and Boyd, Toby and Jiang, Joe
            Wenjie and Songhori, Ebrahim and Tam, Terence and Mirhoseini, Azalia},
  howpublished = {\url{https://github.com/google-research/circuit_training}},
  url = "https://github.com/google-research/circuit_training",
  year = 2021,
  note = "[Online; accessed 21-December-2021]"
}
```
# coding=utf-8
# Copyright 2021 The Circuit Training Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PlacementCost client class."""
import json
import socket
import subprocess
import tempfile
from typing import Any, Text
from absl import flags
from absl import logging
flags.DEFINE_string('plc_wrapper_main', 'plc_wrapper_main',
                    'Path to plc_wrapper_main binary.')
FLAGS = flags.FLAGS


class PlacementCost(object):
  """PlacementCost object wrapper."""

  BUFFER_LEN = 1024 * 1024
  MAX_RETRY = 10

  def __init__(self,
               netlist_file: Text,
               macro_macro_x_spacing: float = 0.0,
               macro_macro_y_spacing: float = 0.0) -> None:
    """Creates a PlacementCost client object.

    It creates a subprocess by calling plc_wrapper_main and communicates with
    it over an `AF_UNIX` channel.

    Args:
      netlist_file: Path to the netlist proto text file.
      macro_macro_x_spacing: Macro-to-macro x spacing in microns.
      macro_macro_y_spacing: Macro-to-macro y spacing in microns.
    """
    # if not FLAGS.plc_wrapper_main:
    #   raise ValueError('FLAGS.plc_wrapper_main should be specified.')
    self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    address = tempfile.NamedTemporaryFile().name
    self.sock.bind(address)
    self.sock.listen(1)
    args = [
        FLAGS.plc_wrapper_main,  #
        '--uid=',
        '--gid=',
        f'--pipe_address={address}',
        f'--netlist_file={netlist_file}',
        f'--macro_macro_x_spacing={macro_macro_x_spacing}',
        f'--macro_macro_y_spacing={macro_macro_y_spacing}',
    ]
    self.process = subprocess.Popen([str(a) for a in args])
    self.conn, _ = self.sock.accept()

  # See circuit_training/environment/plc_client_test.py for the supported APIs.
  def __getattr__(self, name) -> Any:
    # snake_case to PascalCase.
    name = name.replace('_', ' ').title().replace(' ', '')

    def f(*args) -> Any:
      json_args = json.dumps({'name': name, 'args': args})
      self.conn.send(json_args.encode('utf-8'))
      json_ret = b''
      retry = 0
      # The stream from the unix socket can be incomplete after a single call
      # to `recv` for large (200kb+) return values, e.g. GetMacroAdjacency. The
      # loop retries until the returned value is valid json. When the host is
      # under load ~10 retries have been needed. Adding a sleep did not seem to
      # make a difference, only added latency. b/210838186
      while True:
        part = self.conn.recv(PlacementCost.BUFFER_LEN)
        json_ret += part
        if len(part) < PlacementCost.BUFFER_LEN:
          json_str = json_ret.decode('utf-8')
          try:
            output = json.loads(json_str)
            break
          except json.decoder.JSONDecodeError as e:
            logging.warn('JSONDecode Error for %s \n %s', name, e)
            if retry < PlacementCost.MAX_RETRY:
              logging.info('Looking for more data for %s on connection:%s/%s',
                           name, retry, PlacementCost.MAX_RETRY)
              retry += 1
            else:
              raise e
      if isinstance(output, dict):
        if 'ok' in output and not output['ok']:  # Status::NotOk
          raise ValueError(
              f"Error in calling {name} with {args}: {output['message']}.")
        elif '__tuple__' in output:  # Tuple
          output = tuple(output['items'])
      return output

    return f

  def __del__(self) -> None:
    self.conn.close()
    self.process.kill()
    self.process.wait()
    self.sock.close()