- 03 Feb, 2025 3 commits
runnning -> running
Ikko Eltociear Ashimine committed
HL committed
HL committed
- 02 Feb, 2025 1 commit
Chujie Zheng committed
- 01 Feb, 2025 2 commits
since 'lighteval/MATH' is no longer available on huggingface.
HL committed
- As titled
Guangming Sheng committed
- 31 Jan, 2025 4 commits
HL committed
Xingyao Wang committed
Chujie Zheng committed
Co-authored-by: HL <linhaibin.eric@gmail.com>
dignfei committed
- 30 Jan, 2025 8 commits
HL committed
This is a follow-up to https://github.com/volcengine/verl/issues/151

## Motivation

Currently, adding a custom score function requires forking verl and updating `_select_rm_score_fn` to define your logic. This makes it harder to use verl as part of a larger application while staying up to date with upstream improvements. It would be convenient to let end users pass in the reward function they wish to use directly, without requiring them to clone or fork verl to do so.

## Design

This PR slightly modifies `main_ppo.py` to allow users to import a new function, `run_ppo`. `run_ppo` behaves very similarly to the existing `main`, with the important addition of a new `compute_score` argument. If passed in, this argument is used to compute the score of every generation; this is the change that lets users plug in their own reward logic.

The `compute_score` function is similar in shape to the existing `compute_score` for gsm8k and math. However, it takes a new `data_source` parameter so that the user can compute the score differently, if desired, depending on the task.

## Example Usage

This is a sample script showing how to use the new functionality. I have tested that this works.

```python
from verl.trainer.main_ppo import run_ppo
from omegaconf import OmegaConf

def custom_compute_score(data_source, solution_str, ground_truth):
    """Dummy compute_score function that rewards the model for generations of exactly 20 characters :)"""
    return -abs(len(solution_str) - 20)

config = OmegaConf.load("vendor/verl/verl/trainer/config/ppo_trainer.yaml")

# Update config as needed
config.data.train_files = "path/to/train.parquet"
config.data.val_files = "path/to/test.parquet"
# ...

run_ppo(config, custom_compute_score)
```

## Breaking changes

There are no breaking changes in this PR. It is still possible to call `python -m verl.trainer.main_ppo ...` as before (although if you want to pass in a custom compute_score you will need to use the new method described above).

## Possible future work

It would be great to move to [structured configs](https://omegaconf.readthedocs.io/en/2.1_branch/structured_config.html) as well, since they would give us typesafe, autocompletable configurations from Python. I thought about adding those changes here as well, but they would be much more extensive and I'm not sure whether there's interest from the project.
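Since the example above does not exercise the new `data_source` parameter, here is a hedged sketch of how it could be used to score different tasks differently; the `data_source` strings and scoring rules are illustrative, not verl's actual ones.

```python
def task_aware_compute_score(data_source, solution_str, ground_truth):
    # Dispatch on the dataset that produced this sample (strings are illustrative).
    if data_source == "openai/gsm8k":
        # e.g. reward answers that end with the expected number
        return float(solution_str.strip().endswith(str(ground_truth)))
    # Fallback: exact match for any other task
    return float(solution_str.strip() == str(ground_truth))
```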
Kyle Corbitt committed
Franz Srambical committed
Franz Srambical committed
## Summary

This PR enables using Liger Kernel's `_apply_liger_kernel_to_instance` to initialize an FSDP worker model.

## Main Changes

1. Added an option to use `liger_kernel.transformers.AutoLigerKernelForCausalLM` to load a model from pretrained, instead of the default `transformers.AutoModelForCausalLM`.
2. Added a test case using the configuration file `tests/e2e/run_qwen_gsm8k_model_rm_liger_kernel.sh`.

## Related Issue

#96

## TODO

#97: optimize the memory usage when computing entropy & log_probs
https://github.com/volcengine/verl/blob/6d96fda3d47f057caaa8f494ca7804181903e911/verl/workers/actor/dp_actor.py#L94-L106

---------

Signed-off-by: Hongpeng Guo <hpguo@anyscale.com>
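As a rough illustration of the new option (a sketch, not the worker's exact code; the flag name and the model path are assumptions):

```python
from transformers import AutoModelForCausalLM

use_liger = True  # assumed config flag controlling the new option
model_path = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative model

if use_liger:
    # Liger Kernel's drop-in AutoModel replacement loads the model with fused kernels applied.
    from liger_kernel.transformers import AutoLigerKernelForCausalLM
    model = AutoLigerKernelForCausalLM.from_pretrained(model_path)
else:
    model = AutoModelForCausalLM.from_pretrained(model_path)
```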
Hongpeng Guo committed
The logits are of shape `(bsz, response_length, vocab_size)`. This PR doesn't change any code execution, but explicitly documents the logits shape, making the code easier for readers to understand.

Signed-off-by: Hongpeng Guo <hpguo@anyscale.com>
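For concreteness, a tiny standalone illustration of that shape (generic tensor names, not verl's variables):

```python
import torch

bsz, response_length, vocab_size = 2, 8, 32000
logits = torch.randn(bsz, response_length, vocab_size)   # (bsz, response_length, vocab_size)
log_probs = torch.log_softmax(logits, dim=-1)            # same shape, normalized over the vocab
assert log_probs.shape == (bsz, response_length, vocab_size)
```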
Hongpeng Guo committed
Add contribution guide
Chi Zhang committed
Chi Zhang committed
- 29 Jan, 2025 3 commits
`token_level_rewards == (token_level_rewards * non_zero_mask)`
Franz Srambical committed
HL committed
HL committed
- 28 Jan, 2025 1 commit
- As titled
- Solved: #149

Waiting for testing from @chujiezheng

---------

Co-authored-by: Chi Zhang <zhangchi.usc1992@bytedance.com>
Guangming Sheng committed
- 27 Jan, 2025 12 commits
Guangming Sheng committed
HL committed
HL committed
- Add link to performance tuning
Chi Zhang committed
- The previous gradient accumulation value was computed from micro_batch_size, which is wrong when using dynamic_bsz (see the sketch after this list).
- Fix the CI script to avoid overlooking this issue.
- Change the vLLM stats log default value to True to disable the log.
- Check `self.config.actor.ppo_mini_batch_size % self.config.actor.ppo_micro_batch_size_per_gpu == 0` after normalization in fsdp_workers instead of in dp_actor and dp_critic.
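A minimal sketch of the fixed derivation, with hypothetical helper and argument names (this is not verl's actual code): under dynamic_bsz the number of micro-batches depends on the data, so gradient accumulation has to come from the actual split rather than from micro_batch_size.

```python
def split_by_token_budget(seq_lengths, max_token_len):
    """Greedily pack sequence lengths into micro-batches under a token budget."""
    batches, current, current_tokens = [], [], 0
    for n in seq_lengths:
        if current and current_tokens + n > max_token_len:
            batches.append(current)
            current, current_tokens = [], 0
        current.append(n)
        current_tokens += n
    if current:
        batches.append(current)
    return batches

def num_grad_accum_steps(seq_lengths, micro_batch_size_per_gpu, dynamic_bsz, max_token_len):
    if dynamic_bsz:
        # The micro-batch count is data dependent under a token budget, so gradient
        # accumulation must be derived from the actual split, not from micro_batch_size.
        return len(split_by_token_budget(seq_lengths, max_token_len))
    # Static case: derive it from the per-GPU micro batch size.
    assert len(seq_lengths) % micro_batch_size_per_gpu == 0
    return len(seq_lengths) // micro_batch_size_per_gpu

print(num_grad_accum_steps([900, 300, 512, 700], micro_batch_size_per_gpu=2,
                           dynamic_bsz=True, max_token_len=1024))  # -> 3
```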
Guangming Sheng committed
- As titled
Guangming Sheng committed
# Add Sequence Parallelism and Padding Removal to SFT Trainer

This PR adds sequence parallelism (SP) and padding removal optimizations to the SFT trainer, which can help improve training efficiency for large language models.

## Key Changes

### Core Features

1. **Sequence Parallelism**: Added support for sequence parallelism through the Ulysses framework
   - Configurable via the `ulysses_sequence_parallel_size` parameter
   - Properly handles data distribution across SP ranks
   - Maintains consistent loss computation across the distributed setup
2. **Padding Removal**: Added support for efficient handling of variable-length sequences
   - Enabled via the `use_remove_padding` flag (requires SP to be enabled)
   - Uses flash-attention's padding removal utilities
   - Handles proper re-padding and loss computation
3. **Training Improvements**:
   - Added label smoothing support to loss computation
   - Added a progress bar with epoch information
   - Added RoPE scaling configuration support
   - Improved error messages for batch size validation

### Testing

Added a comprehensive test suite (`test_trainer.py`) to verify:
- Forward pass consistency between the original and SP+rmpad implementations
- Loss computation correctness across the distributed setup
- Proper handling of micro-batches

### Example Usage

Added the example script `examples/sft/gsm8k/run_qwen_05_sp2.sh` demonstrating how to use the new features with the Qwen-2.5B model (see the config sketch after this description).

## Implementation Details

- Uses a device mesh for proper distributed training setup
- Handles data distribution, ensuring the same sequences within SP groups but different ones across DP groups
- Carefully manages backward pass timing with gradient checkpointing
- Maintains compatibility with existing FSDP features

## Testing Instructions

1. Run the example script with sequence parallelism:
```bash
bash examples/sft/gsm8k/run_qwen_05_sp2.sh <nproc_per_node> <save_path>
```
2. Run the test suite:
```bash
tests/sft/run_sft_sp_loss_match.sh
```

^^ This PR description was generated by [OpenHands](https://github.com/All-Hands-AI/OpenHands)

---------

Co-authored-by: Jiayi Pan <i@jiayipan.me>
Co-authored-by: openhands <openhands@all-hands.dev>
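A minimal configuration sketch under assumed key names and config path (the actual SFT trainer layout may differ): enabling Ulysses sequence parallelism and padding removal via OmegaConf overrides.

```python
from omegaconf import OmegaConf

# Hedged sketch: the config path and key placement are assumptions, not the trainer's verified layout.
config = OmegaConf.load("verl/trainer/config/sft_trainer.yaml")
config.ulysses_sequence_parallel_size = 2   # shard each sequence across 2 GPUs
config.use_remove_padding = True            # only meaningful when SP is enabled
```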
Xingyao Wang committed
We set `max_num_batched_tokens` in the `.rollout` config, but it wasn't actually being passed to vLLM, potentially leaving GPUs under-utilized. This PR:

- properly passes `max_num_batched_tokens` from the config to vLLM
- sets `disable_log_stats` to False, so vLLM performance information can be properly displayed (to spot issues)
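As a hedged illustration of the wiring (the surrounding code and the model name are assumptions; the two keyword arguments are standard vLLM engine args):

```python
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2.5-0.5B-Instruct",   # illustrative model
    max_num_batched_tokens=8192,          # forwarded from the .rollout config
    disable_log_stats=False,              # keep vLLM's performance stats visible
)
```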
Xingyao Wang committed
## Summary

This PR renames all micro_batch_size parameters to micro_batch_size_per_gpu.

**The core logic of setting batch sizes:**

- **All algorithmic metrics** (train batch size, ppo mini batch size) are global (from the perspective of the single controller) and are normalized in each Worker.
- **All performance-related parameters** (micro batch size, max token length in dynamic batch size) are local parameters that represent the data sizes per GPU (i.e., per Worker).

## Main Changes

1. Change the scripts and config, and delete the normalization for micro_bsz
2. Fix CI for SFT
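A small numeric sketch of that convention (values are illustrative, and the arithmetic is a simplified view of what the workers do):

```python
n_gpus = 8

# Algorithmic (global) sizes, as seen by the single controller
train_batch_size = 1024
ppo_mini_batch_size = 256

# Performance (local) size, already expressed per GPU and used as-is by each worker
ppo_micro_batch_size_per_gpu = 4

# Each worker normalizes the global sizes down to its own share
train_batch_size_per_gpu = train_batch_size // n_gpus                             # 128
ppo_mini_batch_size_per_gpu = ppo_mini_batch_size // n_gpus                       # 32
grad_accum_steps = ppo_mini_batch_size_per_gpu // ppo_micro_batch_size_per_gpu    # 8
```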
Guangming Sheng committed
HL committed
HL committed
- As titled
Guangming Sheng committed
- 26 Jan, 2025 2 commits
minor fix
Ikko Eltociear Ashimine committed
Guangming Sheng committed
- 25 Jan, 2025 1 commit
This PR adds support for LoRA (Low-Rank Adaptation) for efficient model fine-tuning.

### Changes

1. Added LoRA configuration support in trainer config
2. Modified FSDP wrapping policy to handle LoRA modules
3. Integrated with existing FSDP training infrastructure
4. Added peft dependency
5. Removed unused ring_attn_utils.py

### Features

- Configurable LoRA rank and alpha parameters
- Target module specification for selective adaptation
- Compatible with FSDP sharding strategy

### Testing

Tested with Qwen2.5-0.5B-Instruct model on GSM8K dataset using the provided example script.

### Dependencies

- Added `peft` package to requirements.txt

This PR is based on commit 902ddbe6 and has been merged with the latest upstream main branch.

---------

Co-authored-by: Jiayi Pan <i@jiayipan.me>
Co-authored-by: openhands <openhands@all-hands.dev>
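A hedged sketch of the underlying peft usage (the model name, rank/alpha values, and target modules are illustrative, and the trainer wires this through its own config rather than calling peft directly like this):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
lora_config = LoraConfig(
    r=16,                                 # LoRA rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # selective adaptation of attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # FSDP wrapping then happens on the peft model
model.print_trainable_parameters()
```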
Xingyao Wang committed
- 24 Jan, 2025 3 commits