[rollout]: fix incorrect response_attention_mask in vLLM rollout (#213)
This PR addresses issue https://github.com/volcengine/verl/issues/212. The changes include:

- read `eos_token_id` from `generation_config` to ensure alignment with vLLM
- modify the `get_eos_mask` function to accept both `int` and `list` types for the `eos_token` parameter (see the sketch below)
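To illustrate the second change, here is a minimal sketch of an `int`-or-`list` tolerant `get_eos_mask`. The tensor shape, default dtype, and exact masking semantics are assumptions for illustration rather than a copy of verl's implementation; `generation_config.eos_token_id` is the HuggingFace field the first bullet refers to, and it may itself be an `int` or a `list`.

```python
import torch


def get_eos_mask(response_id: torch.Tensor, eos_token, dtype=torch.int64):
    """Return a mask that is 1 up to and including the first EOS token, 0 afterwards.

    response_id: (batch, response_len) token ids sampled by vLLM
    eos_token:   a single EOS id (int) or several candidate ids (list),
                 e.g. the value of generation_config.eos_token_id
    """
    if isinstance(eos_token, int):
        eos_token = [eos_token]

    # positions where any of the EOS ids appear
    is_eos = torch.zeros_like(response_id, dtype=torch.bool)
    for token_id in eos_token:
        is_eos |= response_id.eq(token_id)

    # cumsum - is_eos is 0 up to and including the first EOS position and
    # positive after it, so eq(0) keeps everything through the first EOS
    is_eos = is_eos.long()
    return (torch.cumsum(is_eos, dim=-1) - is_eos).eq(0).to(dtype)
```

For example, with `response_id = [[0, 0, 2, 42, 3, 5, 1, 0, 0]]` and `eos_token=1`, the sketch above yields `[[1, 1, 1, 1, 1, 1, 1, 0, 0]]`, and passing `eos_token=[1, 2]` cuts the mask at the first occurrence of either id.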