Thanks: @HillZhang1999. Related issue: https://github.com/volcengine/verl/issues/189

vLLM may reject long sequences with the following error:

`(main_task pid=3523385) ValueError: max_num_batched_tokens (8192) is smaller than max_model_len (9216). This effectively limits the maximum sequence length to max_num_batched_tokens and makes vLLM reject longer sequences. Please increase max_num_batched_tokens or decrease max_model_len.`

To fix it, increase `max_num_batched_tokens` or decrease `max_model_len`. Note that when `enable_chunked_prefill` is activated, the issue above is only concealed, not resolved.
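As a rough illustration (not verl's own code), the constraint behind this error can be reproduced with vLLM's offline `LLM` API. The model name below is only a placeholder, and the keyword arguments are assumed to exist under these names in your installed vLLM version; in verl the corresponding values are set through the rollout configuration.

```python
# Minimal sketch of the vLLM constraint behind the error above (assumptions noted).
from vllm import LLM

llm = LLM(
    model="facebook/opt-125m",      # placeholder model, not a recommendation
    max_model_len=9216,
    max_num_batched_tokens=9216,    # >= max_model_len, so the ValueError is not raised
    # enable_chunked_prefill=True,  # alternatively, chunked prefill hides this check
)
```

The same idea applies when tuning a verl run: raise `max_num_batched_tokens` to at least the full sequence length (prompt plus response), or shorten `max_model_len`.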