Commit 3fc56504 by wyt2000

update model and inference.

parent 1cda58a7
*.slurm
submit.sh
ret_one
---
language:
- en
- zh
base_model: data/MiniCPM_quant_per_head_fp4_LSQ_after_rope_safesoft_lowrope_prune_fixed
tags:
- MiniCPM
- ModelBest
- THUNLP
- aimo
- generated_from_trainer
datasets:
- /lustre/S/wuyt/dataset/minicpm/Code-Math-QA-WizardLM-deepseekproof-Lean-Workbook-V3-MiniF2F-Valid-Diff-Prompt
model-index:
- name: Code-Math-QA-WizardLM-deepseekproof-Lean-Workbook-V3-MiniF2F-Valid-Diff-Prompt-minicpm-quant-per-head-fp4-LSQ-after-rope-safesoft-lowrope-rotamul-fixed-1022
results: []
---
<div align="center">
<h1>
MiniCPM
</h1>
</div>
# Code-Math-QA-WizardLM-deepseekproof-Lean-Workbook-V3-MiniF2F-Valid-Diff-Prompt-minicpm-quant-per-head-fp4-LSQ-after-rope-safesoft-lowrope-rotamul-fixed-1022
<p align="center">
<a href="https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4" target="_blank">MiniCPM Technical Report (Chinese)</a> | <a href="https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4" target="_blank">Technical Report (English)</a> |
<a href="https://github.com/OpenBMB/OmniLMM/" target="_blank">OmniLMM Multi-modal Model</a> |
<a href="https://luca.cn/" target="_blank">CPM-C ~100B Model Trial</a>
</p>
This model is a fine-tuned version of `data/MiniCPM_quant_per_head_fp4_LSQ_after_rope_safesoft_lowrope_prune_fixed` on the /lustre/S/wuyt/dataset/minicpm/Code-Math-QA-WizardLM-deepseekproof-Lean-Workbook-V3-MiniF2F-Valid-Diff-Prompt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6380
MiniCPM is a series of end-side large language models jointly open-sourced by ModelBest Inc. and TsinghuaNLP (the Tsinghua University NLP Lab). The main language model, MiniCPM-1B, has only 1.2B non-embedding parameters.
- After SFT, MiniCPM performs close to Mistral-7B on public comprehensive benchmarks, with stronger Chinese, mathematics, and coding ability, and overall surpasses models such as Llama2-13B, MPT-30B, and Falcon-40B.
- After DPO, MiniCPM-2B also outperforms many representative open-source models such as Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, and Zephyr-7B-alpha on MTBench, currently the benchmark closest to real user experience.
- MiniCPM-V, an end-side multimodal model built on MiniCPM-2B, achieves the best overall performance among models of the same scale, surpasses existing multimodal models built on Phi-2, and matches or even exceeds the 9.6B Qwen-VL-Chat on some benchmarks.
- After Int4 quantization, MiniCPM can be deployed and run on smartphones, with streaming output slightly faster than human speech. MiniCPM-V is also the first multimodal model to run on a smartphone.
- Parameter-efficient fine-tuning is possible on a single 1080/2080 GPU, full-parameter fine-tuning on a single 3090/4090, and continual training on a single machine, so the cost of secondary development is low.
## Model description
We fully open-source the MiniCPM-2B model parameters for academic research and limited commercial use, together with all checkpoints during training and most non-proprietary data for research into model mechanisms.
- **MiniCPM-2B-SFT/DPO**: instruction-tuned and human-preference-aligned models based on MiniCPM-2B.
- **MiniCPM-V**: a multimodal model based on MiniCPM-2B that outperforms multimodal models of the same parameter scale built on Phi-2.
- **MiniCPM-2B-SFT/DPO-Int4**: Int4-quantized versions of MiniCPM-2B-SFT/DPO.
- Smartphone applications built with MLC-LLM and LLMFarm; both the text and multimodal models can run inference on phones.
## Intended uses & limitations
More information needed
## Training and evaluation data
This checkpoint was fine-tuned on the /lustre/S/wuyt/dataset/minicpm/Code-Math-QA-WizardLM-deepseekproof-Lean-Workbook-V3-MiniF2F-Valid-Diff-Prompt dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
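For reference, these settings roughly correspond to the following `TrainingArguments`. This is a hedged sketch only: the actual training script and launcher are not part of this commit, and `output_dir` is a placeholder.

```python
# Hypothetical sketch: TrainingArguments mirroring the hyperparameters listed above.
# The real training script/launcher is not included in this commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="minicpm-quant-fp4-sft",   # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=4,        # 4 per device x 8 GPUs = 32 total train batch size
    per_device_eval_batch_size=8,         # 8 per device x 8 GPUs = 64 total eval batch size
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    seed=42,
    bf16=True,                            # assumed, since the exported checkpoint is bfloat16
)
```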
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7601 | 1.0 | 10999 | 0.6696 |
| 0.7265 | 2.0 | 21998 | 0.6447 |
| 0.6744 | 3.0 | 32997 | 0.6380 |
### Evaluation Results

Detailed evaluation results are available in the [GitHub repository](https://github.com/OpenBMB/MiniCPM?tab=readme-ov-file#%E8%AF%84%E6%B5%8B%E7%BB%93%E6%9E%9C) ([English README](https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md#evaluation-results)).

Note: we found that generation quality with Hugging Face `transformers` is slightly lower than with vLLM, so we recommend benchmarking with vLLM. We are investigating the cause.
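A minimal vLLM sketch of such a benchmarking setup is shown below; the local checkpoint path and sampling values are placeholders, and it assumes this custom quantized architecture is supported by your vLLM build.

```python
# Hedged sketch: benchmark generation with vLLM instead of Hugging Face generate().
# "path/to/this/checkpoint" is a placeholder; support for this custom architecture is assumed.
from vllm import LLM, SamplingParams

llm = LLM(model="path/to/this/checkpoint", trust_remote_code=True, dtype="bfloat16")
params = SamplingParams(temperature=0.8, top_p=0.8, max_tokens=512)
outputs = llm.generate(["### Problem: Write a Python program to calculate the 10th prime."], params)
print(outputs[0].outputs[0].text)
```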
### Framework versions

- Transformers 4.41.2
- PyTorch 2.4.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1

### Limitations
- Due to its limited size, the model may hallucinate. In particular, the DPO model tends to generate longer responses, which makes hallucinations more likely. We will continue to iterate on and improve MiniCPM.
- To keep the model general-purpose for academic research, we did not perform any identity training. Because part of the training data comes from the open-source ShareGPT corpus, the model may output identity information similar to the GPT series models.
- Due to its limited size, the model's output is strongly influenced by the prompt, so repeated attempts may produce inconsistent results.
- Due to its limited capacity, the model's knowledge recall is not always accurate. We plan to combine MiniCPM with RAG to strengthen its knowledge capabilities in the future.
## Model Download
| HuggingFace | ModelScope | WiseModel |
|-------------|------------|-----------|
|[sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)|[sft-bf16](https://modelscope.cn/models/OpenBMB/miniCPM-bf16)|[sft-bf16](https://wisemodel.cn/models/OpenBMB/miniCPM-bf16)
|[sft-fp32](https://huggingface.co/openbmb/MiniCPM-2B-sft-fp32)|[sft-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-sft-fp32)|[sft-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)
|[dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16)|[dpo-bf16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16/summary)|[dpo-bf16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-bf16)
|[dpo-fp16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp16)|[dpo-fp16](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16/)|[dpo-fp16](https://wisemodel.cn/models/OpenBMB/MiniCPM-2B-dpo-fp16)
|[dpo-fp32](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://modelscope.cn/models/OpenBMB/MiniCPM-2B-dpo-fp32)|[dpo-fp32](https://wisemodel.cn/models/OpenBMB/miniCPM-dpo-fp32)
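For example, one of the official checkpoints listed above can be fetched programmatically; this is a small sketch only (the repo id is taken from the table, and the files land in the local `huggingface_hub` cache).

```python
# Sketch: download one of the official MiniCPM checkpoints from the table above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("openbmb/MiniCPM-2B-sft-bf16")
print(local_dir)  # local cache directory containing the model files
```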
## Usage

* Install `transformers>=4.36.0` and `accelerate`, then run the code below.
* Note: the model's data type must be specified explicitly in `from_pretrained`, otherwise large numerical errors may occur.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)

path = 'openbmb/MiniCPM-2B-sft-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
# Specify torch_dtype explicitly to avoid large numerical errors (see the note above).
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda', trust_remote_code=True)

responds, history = model.chat(tokenizer, "山东省最高的山是哪座山, 它比黄山高还是矮?差距多少?", temperature=0.8, top_p=0.8)
print(responds)
```
* Expected output (in Chinese; it says Mount Tai, at 1,545 m, is Shandong's highest mountain, about 319 m lower than Huangshan at 1,864 m):
```shell
山东省最高的山是泰山,海拔1545米。
相对于黄山(海拔1864米),泰山海拔较低,相差约319米。
```
## License

#### Model License

* The code in this repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) license.
* Use of the MiniCPM model weights must follow the [General Model License (GML)](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md).
* The MiniCPM model weights are fully open for academic research.
* If you intend to use the model commercially, please contact cpm@modelbest.cn for written authorization; free commercial use is also permitted after registration.
#### Statement

* As a language model, MiniCPM generates content by learning from a large amount of text, but it cannot understand or express personal opinions or value judgments, and nothing it outputs represents the views or positions of the model developers.
* Users are therefore responsible for evaluating and verifying any content generated by MiniCPM before relying on it.
* We accept no liability for any problems arising from the use of the open-source MiniCPM model, including but not limited to data security issues, public-opinion risks, or any risks and problems caused by the model being misled, misused, disseminated, or otherwise improperly exploited.
## Citation

* If you find MiniCPM helpful for your work, please cite the [technical report](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20?pvs=4) ([Chinese version](https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a?pvs=4)).
```
@inproceedings{minicpm2024,
  title={MiniCPM: Unveiling the Potential of End-side Large Language Models},
  booktitle={OpenBMB Blog},
  year={2024}
}
```
{
"_name_or_path": "openbmb/CPM-2B",
"_name_or_path": "data/MiniCPM_quant_per_head_fp4_LSQ_after_rope_safesoft_lowrope_prune_fixed",
"architectures": [
"MiniCPMForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"auto_map": {
"AutoConfig": "configuration_minicpm.MiniCPMConfig",
"AutoModel": "modeling_minicpm.MiniCPMModel",
......@@ -11,22 +13,31 @@
"AutoModelForSequenceClassification": "modeling_minicpm.MiniCPMForSequenceClassification"
},
"bos_token_id": 1,
"dim_model_base": 256,
"eos_token_id": 2,
"head_w_quantbit": 4,
"head_x_quantbit": 8,
"hidden_act": "silu",
"hidden_size": 1536,
"initializer_range": 0.1,
"intermediate_size": 3840,
"kv_cache_quantbit": 4,
"linear_w_quantbit": 4,
"linear_x_quantbit": 8,
"lm_head_rank": 1024,
"max_position_embeddings": 4096,
"model_type": "minicpm",
"num_attention_heads": 24,
"num_hidden_layers": 52,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"scale_depth": 1.4,
"scale_emb": 12,
"torch_dtype": "bfloat16",
"transformers_version": "4.36.0",
"transformers_version": "4.41.2",
"use_cache": true,
"vocab_size": 73440,
"scale_emb": 12,
"dim_model_base": 256,
"scale_depth": 1.4
"vocab_size": 73440
}
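The quantization-related fields above (`*_quantbit`, `lm_head_rank`) are read by the custom modeling code. A small hedged sketch of inspecting them via `AutoConfig` (the checkpoint path is a placeholder):

```python
# Hedged sketch: inspect the quantization settings recorded in config.json.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("path/to/this/checkpoint", trust_remote_code=True)
print(cfg.linear_w_quantbit, cfg.linear_x_quantbit)  # 4-bit weights, 8-bit activations for linear layers
print(cfg.head_w_quantbit, cfg.head_x_quantbit)      # lm_head quantization
print(cfg.kv_cache_quantbit)                         # FP4 KV-cache quantization
```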
......@@ -176,7 +176,8 @@ class MiniCPMConfig(PretrainedConfig):
        )
        try:
            import flash_attn
-            self._attn_implementation = "flash_attention_2"
            # self._attn_implementation = "flash_attention_2"
            self._attn_implementation = "eager"
        except:
            pass
......
{
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "temperature": 0.8,
  "top_p": 0.8,
  "transformers_version": "4.41.2"
}
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
import os

# Load the fine-tuned quantized checkpoint from the current directory.
model_path = os.getcwd()
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map='auto',
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    attn_implementation="eager",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
tokenizer.pad_token = ''

# Greedy decoding for a single math/code prompt.
input_list = ["### Problem: Write a Python program to calculate the 10th prime."]
inputs = tokenizer(input_list, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=2048,
    num_return_sequences=1,
    do_sample=False,
)
print("response:", tokenizer.decode(outputs[0], skip_special_tokens=True))
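# Optional variant (not in the original script): decode only the newly generated tokens
# so the prompt is not repeated in the printed response.
prompt_len = inputs["input_ids"].shape[1]
print("response:", tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))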
/home/S/wuyt/lustre/model/aimo-progress-prize-trained-models/Code-Math-QA-WizardLM-deepseekproof-Lean-Workbook-V3-MiniF2F-Valid-Diff-Prompt-minicpm-quant-per-head-fp4-LSQ-after-rope-safesoft-lowrope-rotamul-fixed-1022/model.safetensors
\ No newline at end of file
......@@ -25,6 +25,7 @@ from typing import List, Optional, Tuple, Union, Dict
import torch
import torch.nn.functional as F
import torch.utils.checkpoint
import numpy as np
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
......@@ -48,7 +49,7 @@ from transformers.utils import (
replace_return_docstrings,
)
from transformers.utils.import_utils import is_torch_fx_available
-from .utils_quant import CLMLinear, activation_quant
from .utils_quant import CLMLinear, quant_fp4, dequant_fp4
from .configuration_minicpm import MiniCPMConfig
import re
......@@ -244,10 +245,12 @@ def apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=1):
    orig_dtype = k.dtype
    cos = cos[position_ids].unsqueeze(unsqueeze_dim)  # [bs, 1, seq_len, dim]
    sin = sin[position_ids].unsqueeze(unsqueeze_dim)  # [bs, 1, seq_len, dim]
-    q_fp32 = q.to(dtype=torch.float32, device=q.device)
-    k_fp32 = k.to(dtype=torch.float32, device=k.device)
-    q_embed = (q_fp32 * cos) + (rotate_half(q_fp32) * sin)
-    k_embed = (k_fp32 * cos) + (rotate_half(k_fp32) * sin)
    cos_bf16 = cos.to(dtype=torch.bfloat16, device=cos.device)
    sin_bf16 = sin.to(dtype=torch.bfloat16, device=sin.device)
    q_bf16 = q.to(dtype=torch.bfloat16, device=q.device)
    k_bf16 = k.to(dtype=torch.bfloat16, device=k.device)
    q_embed = (q_bf16 * cos_bf16) + (rotate_half(q_bf16) * sin_bf16)
    k_embed = (k_bf16 * cos_bf16) + (rotate_half(k_bf16) * sin_bf16)
    return q_embed.to(dtype=orig_dtype), k_embed.to(dtype=orig_dtype)
class MiniCPMMLP(nn.Module):
......@@ -256,9 +259,11 @@ class MiniCPMMLP(nn.Module):
        self.config = config
        self.hidden_size = config.hidden_size
        self.intermediate_size = config.intermediate_size
-        self.gate_proj = CLMLinear(self.hidden_size, self.intermediate_size, weight_bits=4, input_bits=8, bias=False)
-        self.up_proj = CLMLinear(self.hidden_size, self.intermediate_size, weight_bits=4, input_bits=8, bias=False)
-        self.down_proj = CLMLinear(self.intermediate_size, self.hidden_size, weight_bits=4, input_bits=8, bias=False)
        self.linear_w_quantbit = config.linear_w_quantbit
        self.linear_x_quantbit = config.linear_x_quantbit
        self.gate_proj = CLMLinear(self.hidden_size, self.intermediate_size, weight_bits=self.linear_w_quantbit, input_bits=self.linear_x_quantbit, bias=False)
        self.up_proj = CLMLinear(self.hidden_size, self.intermediate_size, weight_bits=self.linear_w_quantbit, input_bits=self.linear_x_quantbit, bias=False)
        self.down_proj = CLMLinear(self.intermediate_size, self.hidden_size, weight_bits=self.linear_w_quantbit, input_bits=self.linear_x_quantbit, bias=False)
        self.act_fn = ACT2FN[config.hidden_act]

    def forward(self, x):
......@@ -320,17 +325,23 @@ class MiniCPMAttention(nn.Module):
        self.max_position_embeddings = config.max_position_embeddings
        self.rope_theta = config.rope_theta
        self.is_causal = True
        self.linear_w_quantbit = config.linear_w_quantbit
        self.linear_x_quantbit = config.linear_x_quantbit
        self.kv_cache_quantbit = config.kv_cache_quantbit

        if (self.head_dim * self.num_heads) != self.hidden_size:
            raise ValueError(
                f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
                f" and `num_heads`: {self.num_heads})."
            )

-        self.q_proj = CLMLinear(self.hidden_size, self.num_heads * self.head_dim, weight_bits=4, input_bits=8, bias=config.attention_bias)
-        self.k_proj = CLMLinear(self.hidden_size, self.num_key_value_heads * self.head_dim, weight_bits=4, input_bits=8, bias=config.attention_bias)
-        self.v_proj = CLMLinear(self.hidden_size, self.num_key_value_heads * self.head_dim, weight_bits=4, input_bits=8, bias=config.attention_bias)
-        self.o_proj = CLMLinear(self.num_heads * self.head_dim, self.hidden_size, weight_bits=4, input_bits=8, bias=config.attention_bias)
        self.q_proj = CLMLinear(self.hidden_size, self.num_heads * self.head_dim, weight_bits=self.linear_w_quantbit, input_bits=self.linear_x_quantbit, bias=config.attention_bias)
        self.k_proj = CLMLinear(self.hidden_size, self.num_key_value_heads * self.head_dim, weight_bits=self.linear_w_quantbit, input_bits=self.linear_x_quantbit, bias=config.attention_bias)
        self.v_proj = CLMLinear(self.hidden_size, self.num_key_value_heads * self.head_dim, weight_bits=self.linear_w_quantbit, input_bits=self.linear_x_quantbit, bias=config.attention_bias)
        self.o_proj = CLMLinear(self.num_heads * self.head_dim, self.hidden_size, weight_bits=self.linear_w_quantbit, input_bits=self.linear_x_quantbit, bias=config.attention_bias)
        self.key_scales = None
        self.value_scales = None
        self._init_rope()

    def _init_rope(self):
......@@ -402,10 +413,7 @@ class MiniCPMAttention(nn.Module):
        key_states = self.k_proj(hidden_states)
        value_states = self.v_proj(hidden_states)
-        # kv 4bit quantization
-        key_states = key_states + (activation_quant(key_states, 4) - key_states).detach()
-        value_states = value_states + (activation_quant(value_states, 4) - value_states).detach()
        # kv fp4 quantization
        query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
        value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
......@@ -420,12 +428,26 @@ class MiniCPMAttention(nn.Module):
                )
            kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
        cos, sin = self.rotary_emb(value_states.to(torch.float32), seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)

        # save and load kv cache
        if past_key_value is not None:
            # quant before saving
            key_s, key_states, key_type = quant_fp4(key_states, self.kv_cache_quantbit)
            value_s, value_states, value_type = quant_fp4(value_states, self.kv_cache_quantbit)
            # save and load scales
            if len(past_key_value) <= self.layer_idx:
                self.key_scales = key_s
                self.value_scales = value_s
            else:
                self.key_scales = torch.cat((self.key_scales, key_s), dim=-2)
                self.value_scales = torch.cat((self.value_scales, value_s), dim=-2)
            # save and load states
            cache_kwargs = {"sin": sin, "cos": cos}  # Specific to RoPE models
            key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
            # dequant after loading
            key_states = dequant_fp4(self.key_scales, key_states, key_type)
            value_states = dequant_fp4(self.value_scales, value_states, value_type)

        key_states = repeat_kv(key_states, self.num_key_value_groups)
        value_states = repeat_kv(value_states, self.num_key_value_groups)
......@@ -445,7 +467,7 @@ class MiniCPMAttention(nn.Module):
            attn_weights = attn_weights + attention_mask

        # upcast attention to fp32
-        attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
        attn_weights = nn.functional.softmax(attn_weights, dim=-1).to(query_states.dtype)
        attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
        attn_output = torch.matmul(attn_weights, value_states)
......@@ -514,10 +536,6 @@ class MiniCPMFlashAttention2(MiniCPMAttention):
        key_states = self.k_proj(hidden_states)
        value_states = self.v_proj(hidden_states)
-        # kv 4bit quantization
-        key_states = key_states + (activation_quant(key_states, 4) - key_states).detach()
-        value_states = value_states + (activation_quant(value_states, 4) - value_states).detach()
        # Flash attention requires the input to have the shape
        # batch_size x seq_length x head_dim x hidden_dim
        # therefore we just need to keep the original shape
......@@ -531,9 +549,24 @@ class MiniCPMFlashAttention2(MiniCPMAttention):
        cos, sin = self.rotary_emb(value_states.to(torch.float32), seq_len=kv_seq_len)
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)

        # save and load kv cache
        if past_key_value is not None:
            # quant before saving
            key_s, key_states, key_type = quant_fp4(key_states, self.kv_cache_quantbit)
            value_s, value_states, value_type = quant_fp4(value_states, self.kv_cache_quantbit)
            # save and load scales
            if len(past_key_value) <= self.layer_idx:
                self.key_scales = key_s
                self.value_scales = value_s
            else:
                self.key_scales = torch.cat((self.key_scales, key_s), dim=-2)
                self.value_scales = torch.cat((self.value_scales, value_s), dim=-2)
            # save and load states
            cache_kwargs = {"sin": sin, "cos": cos}  # Specific to RoPE models
            key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
            # dequant after loading
            key_states = dequant_fp4(self.key_scales, key_states, key_type)
            value_states = dequant_fp4(self.value_scales, value_states, value_type)

        # TODO: These transpose are quite inefficient but Flash Attention requires the layout [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
        # to be able to avoid many of these transpose/reshape/view.
......@@ -713,10 +746,6 @@ class MiniCPMSdpaAttention(MiniCPMAttention):
        key_states = self.k_proj(hidden_states)
        value_states = self.v_proj(hidden_states)
-        # kv 4bit quantization
-        key_states = key_states + (activation_quant(key_states, 4) - key_states).detach()
-        value_states = value_states + (activation_quant(value_states, 4) - value_states).detach()
        query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
        key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
        value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
......@@ -728,9 +757,24 @@ class MiniCPMSdpaAttention(MiniCPMAttention):
        query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)

        # save and load kv cache
        if past_key_value is not None:
            # quant before saving
            key_s, key_states, key_type = quant_fp4(key_states, self.kv_cache_quantbit)
            value_s, value_states, value_type = quant_fp4(value_states, self.kv_cache_quantbit)
            # save and load scales
            if len(past_key_value) <= self.layer_idx:
                self.key_scales = key_s
                self.value_scales = value_s
            else:
                self.key_scales = torch.cat((self.key_scales, key_s), dim=-2)
                self.value_scales = torch.cat((self.value_scales, value_s), dim=-2)
            # save and load states
            cache_kwargs = {"sin": sin, "cos": cos}  # Specific to RoPE models
            key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
            # dequant after loading
            key_states = dequant_fp4(self.key_scales, key_states, key_type)
            value_states = dequant_fp4(self.value_scales, value_states, value_type)

        key_states = repeat_kv(key_states, self.num_key_value_groups)
        value_states = repeat_kv(value_states, self.num_key_value_groups)
......@@ -1131,16 +1175,31 @@ class MiniCPMModel(MiniCPMPreTrainedModel):
class MiniCPMForCausalLM(MiniCPMPreTrainedModel):
    _tied_weights_keys = ["lm_head.weight"]
    prune_token_path = "/lustre/S/huangdi/open_for_out/models/aimo-progress-prize-trained-models/MiniCPM_quant_per_head_fp4_LSQ_after_rope_safesoft_lowrope_prune_1021/vocal_prune/sft_MATH_used_token.npy"

    def __init__(self, config):
        super().__init__(config)
        self.model = MiniCPMModel(config)
        self.vocab_size = config.vocab_size
-        self.lm_head = CLMLinear(config.hidden_size, config.vocab_size, weight_bits=4, input_bits=8, bias=False)
        self.head_w_quantbit = config.head_w_quantbit
        self.head_x_quantbit = config.head_x_quantbit
        used_ids = np.load(self.prune_token_path)
        self.used_ids = torch.tensor(used_ids)
        self.original_vocab_size = config.vocab_size
        self.lm_head = CLMLinear(config.hidden_size, config.vocab_size, weight_bits=self.head_w_quantbit, input_bits=self.head_x_quantbit, bias=False)
        self.lm_head_prune = CLMLinear(config.hidden_size, len(used_ids), weight_bits=self.head_w_quantbit, input_bits=self.head_x_quantbit, bias=False)
        # Initialize weights and apply final processing
        self.post_init()

    def init_lm_head_prune(self, model_weight):
        self.lm_head_prune_weight = model_weight[self.used_ids]
        # self.lm_head_prune = nn.Linear(in_features, out_features, bias=False)
        # with torch.no_grad():
        #     self.lm_head_prune.weight.copy_(model_weight)

    def get_input_embeddings(self):
        return self.model.embed_tokens
......@@ -1219,12 +1278,23 @@ class MiniCPMForCausalLM(MiniCPMPreTrainedModel):
        )
        hidden_states = outputs[0]

        if self.config.pretraining_tp > 1:
            lm_head_slices = self.lm_head.weight.split(self.vocab_size // self.config.pretraining_tp, dim=0)
            logits = [F.linear(hidden_states, lm_head_slices[i]) for i in range(self.config.pretraining_tp)]
            logits = torch.cat(logits, dim=-1)
        else:
            logits = self.lm_head(hidden_states / (self.config.hidden_size / self.config.dim_model_base))
            # hidden_states = hidden_states.to(self.lm_head_prune.weight.device)
            # scaled_hidden_states = (hidden_states / (self.config.hidden_size / self.config.dim_model_base))
            # compressed_logits = self.lm_head_prune(scaled_hidden_states)
            # lm_head_prune_weight = self.lm_head.weight[self.used_ids]
            # compressed_logits = F.linear(scaled_hidden_states, self.lm_head_prune_weight)
            # logits = torch.full((compressed_logits.shape[0], compressed_logits.shape[1], self.original_vocab_size), float('-inf'),
            #                     dtype=compressed_logits.dtype, device=scaled_hidden_states.device)
            # logits[:, :, self.used_ids] = compressed_logits
        logits = logits.float()

        loss = None
......@@ -1364,7 +1434,7 @@ class MiniCPMForSequenceClassification(MiniCPMPreTrainedModel):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = MiniCPMModel(config)
-        self.score = CLMLinear(config.hidden_size, self.num_labels, weight_bits=4, input_bits=8, bias=False)
        self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
        # Initialize weights and apply final processing
        self.post_init()
......
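All three attention implementations above apply the same pattern: key/value states are fake-quantized to FP4 right before being written to the KV cache, the per-token scales are kept on the attention module (`self.key_scales` / `self.value_scales`) and concatenated along the sequence axis, and everything is dequantized immediately after the cache is read. Below is a minimal standalone sketch of that pattern, using the `quant_fp4`/`dequant_fp4` helpers from `utils_quant.py` and a plain Python list in place of the Hugging Face `Cache`; it is illustrative only and not part of the commit.

```python
# Hedged sketch of the quantize-on-write / dequantize-on-read KV-cache pattern used above.
import torch
from utils_quant import quant_fp4, dequant_fp4  # helpers introduced by this commit

quantbit = 4
cached_k, cached_scales = [], []  # stand-ins for the HF Cache and self.key_scales

def append_and_read_keys(k_new):
    """Quantize new key states before caching, then return the dequantized full sequence."""
    scale, k_q, k_dtype = quant_fp4(k_new, quantbit)   # per-(head, token) FP4 scales + values
    cached_k.append(k_q)
    cached_scales.append(scale)
    k_all = torch.cat(cached_k, dim=-2)                # concatenate along the sequence axis
    scale_all = torch.cat(cached_scales, dim=-2)
    return dequant_fp4(scale_all, k_all, k_dtype)      # full-precision view used by attention

k_prompt = torch.randn(1, 8, 4, 64)  # (batch, kv_heads, seq, head_dim)
k_step = torch.randn(1, 8, 1, 64)    # one decoding step
print(append_and_read_keys(k_prompt).shape)  # torch.Size([1, 8, 4, 64])
print(append_and_read_keys(k_step).shape)    # torch.Size([1, 8, 5, 64])
```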
/lustre/S/huangdi/open_for_out/models/MiniCPM_quant_qilei/pytorch_model.bin
\ No newline at end of file
......@@ -13,6 +13,7 @@
"rstrip": false,
"single_word": false
},
"pad_token": "</s>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
......
/lustre/S/huangdi/open_for_out/models/MiniCPM_quant_qilei/tokenizer.json
\ No newline at end of file
/home/S/wuyt/lustre/model/aimo-progress-prize-trained-models/Code-Math-QA-WizardLM-deepseekproof-Lean-Workbook-V3-MiniF2F-Valid-Diff-Prompt-minicpm-quant-per-head-fp4-LSQ-after-rope-safesoft-lowrope-rotamul-fixed-1022/tokenizer.json
\ No newline at end of file
/lustre/S/huangdi/open_for_out/models/MiniCPM_quant_qilei/tokenizer.model
\ No newline at end of file
/home/S/wuyt/lustre/model/aimo-progress-prize-trained-models/Code-Math-QA-WizardLM-deepseekproof-Lean-Workbook-V3-MiniF2F-Valid-Diff-Prompt-minicpm-quant-per-head-fp4-LSQ-after-rope-safesoft-lowrope-rotamul-fixed-1022/tokenizer.model
\ No newline at end of file
......@@ -28,11 +28,12 @@
}
},
"bos_token": "<s>",
"chat_template": "{% for message in messages %}{% if (message['role'] == 'system')%}{{ '' }}{% elif (message['role'] == 'user')%}{{ message['content'] }}{% elif (message['role'] == 'assistant')%}{{ message['content'] }}{% endif %}{% if loop.last and message['role'] == 'user' and add_generation_prompt %}{{ '' }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"model_max_length": 2048,
"pad_token": "</s>",
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
......
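The `chat_template` added above is intentionally minimal: system messages are dropped and user/assistant contents are concatenated verbatim, with no special tokens. A hedged sketch of applying it (the checkpoint path is a placeholder):

```python
# Hedged sketch: apply the chat template added in tokenizer_config.json.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this/checkpoint", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a math assistant."},
    {"role": "user", "content": "### Problem: Write a Python program to calculate the 10th prime."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # with this template, only the user content remains
```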
......@@ -2,26 +2,61 @@ import math
import torch
from torch import nn
def grad_scale(x, scale):
    y = x
    y_grad = x * scale
    return (y - y_grad).detach() + y_grad

def weight_quant(weight, num_bits=1):
    dtype = weight.dtype
    weight = weight.float()
    Qn = -2 ** (num_bits - 1)
    Qp = 2 ** (num_bits - 1) - 1
    s = Qp / weight.abs().mean().clamp(min=1e-5)
    result = (weight * s).round().clamp(Qn, Qp) / s
    return result.type(dtype)

def activation_quant(x, num_bits=8):
    dtype = x.dtype
    x = x.float()
    Qn = -2 ** (num_bits - 1)
    Qp = 2 ** (num_bits - 1) - 1
    s = Qp / x.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
    result = (x * s).round().clamp(Qn, Qp) / s
    return result.type(dtype)

def round_pass(x):
    y = x.round()
    y_grad = x
    return (y - y_grad).detach() + y_grad

class Quantizer(nn.Module):
    def __init__(self, num_bits, seq_len):
        super().__init__()
        self.thd_neg = - 2 ** (num_bits - 1)
        self.thd_pos = 2 ** (num_bits - 1) - 1
        self.s = torch.nn.Parameter(torch.ones(seq_len))

    def forward(self, x, input_idx):
        s = self.s[input_idx:input_idx + x.shape[1]]
        s_scale = s[None, :, None]
        x = x * s_scale
        x = round_pass(x)
        x = torch.clamp(x, self.thd_neg, self.thd_pos)
        return s, x
def get_scale_f32(src_amax, dst_max):
    scale = dst_max / src_amax.float()
    return scale

def round_to_FP4(input):
    # Round |input| onto the FP4 grid (2 mantissa bits, maximum representable value 6.0).
    dst_max = 6.0
    emax = 2
    emin = 0
    p = 2
    part = (2 - 2 ** (1 - p))
    ab = torch.where(torch.isinf(input) + torch.isnan(input), torch.ones_like(input) * dst_max, input)
    ab = torch.where(ab > dst_max, torch.ones_like(ab) * dst_max, ab)
    ab = torch.where(ab < 2.0 ** (emin) * 2 ** (-p), torch.zeros_like(ab), ab)
    E = torch.where(ab < 2 ** (emin), torch.ones_like(ab) * (emin), torch.floor(torch.log2(ab.float())))
    P = torch.round(ab * 2 ** (-E) * 2 ** (p - 1)) / 2 ** (p - 1)
    data = 2 ** E * P
    return data

def quant_fp4(data, num_bits):
    # Fake-quantize to FP4 with per-row scales; gradients pass straight through to `data`.
    sign = torch.sign(data)
    abs_data = torch.abs(data).float()
    amax, index = torch.max(abs_data, -1, True)
    qscale = get_scale_f32(amax, 6.0)
    quant_data = round_to_FP4(abs_data * qscale)
    quant_data = quant_data * sign
    quant_data = data + (quant_data - data).detach()
    return qscale, quant_data, data.dtype

def dequant_fp4(qscale, quant_data, target_type):
    return (quant_data / qscale).to(target_type)
class CLMLinear(nn.Linear):
......@@ -29,6 +64,7 @@ class CLMLinear(nn.Linear):
        *kargs,
        weight_bits=1,
        input_bits=8,
        seq_len=4096,
        **kwargs
    ):
        super(CLMLinear, self).__init__(*kargs, **kwargs)
......@@ -37,14 +73,37 @@ class CLMLinear(nn.Linear):
"""
self.weight_bits = weight_bits
self.input_bits = input_bits
self.seq_len = seq_len
self.activation_quant = Quantizer(input_bits, seq_len)
def forward(self, input):
if input.shape[1] != 1:
self.input_idx = 0
if input.shape[1] + self.input_idx <= self.seq_len:
input_s, tobe_dequant_input = self.activation_quant(input, self.input_idx)
self.input_idx = input.shape[1] + self.input_idx
else:
raise ValueError(f"input.shape[1]: {input.shape[1]}, self.input_idx: {self.input_idx}, self.seq_len: {self.seq_len}")
quant_input = input + (activation_quant(input, self.input_bits) - input).detach()
quant_weight = self.weight + (activation_quant(self.weight, self.weight_bits) - self.weight).detach()
weight_s, tobe_dequant_weight, _ = quant_fp4(self.weight, self.weight_bits)
out = nn.functional.linear(quant_input, quant_weight)
out = self.elementwise_multiply_and_div(tobe_dequant_input, tobe_dequant_weight,input_s,weight_s,operate_type = torch.bfloat16)
out = out.type(input.dtype)
if not self.bias is None:
out += self.bias.view(1, -1).expand_as(out)
return out
    def elementwise_multiply_and_div(self, A, B, C, D, operate_type=torch.bfloat16):
        # A: scaled activations, B: scaled weights, C: activation scales, D: weight scales.
        A = A.type(operate_type)
        B = B.type(operate_type)
        C = C.type(operate_type)
        D = D.type(operate_type)
        E = torch.matmul(C[:, None], D.T)   # outer product of activation and weight scales
        E = torch.clamp(E, min=1e-5)
        F = torch.matmul(A, B.T)            # matmul in the quantized domain
        result = F / E                      # rescale back to the real-valued output
        return result
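A short usage sketch of the helpers above, assuming the file is importable as `utils_quant` from the model directory; it is for illustration only and is not part of the commit.

```python
import torch
from utils_quant import CLMLinear, quant_fp4, dequant_fp4

torch.manual_seed(0)

# FP4 round trip: per-row scales map |x| into [0, 6] before snapping to the FP4 grid.
x = torch.randn(2, 8)
scale, x_q, orig_dtype = quant_fp4(x, num_bits=4)
x_hat = dequant_fp4(scale, x_q, orig_dtype)
print("max round-trip error:", (x - x_hat).abs().max().item())

# Quantized linear layer: FP4 weights plus 8-bit, per-position activation scales.
layer = CLMLinear(512, 256, weight_bits=4, input_bits=8, bias=False, seq_len=4096)
h = torch.randn(1, 16, 512)  # (batch, seq, hidden); seq > 1 resets the running activation index
y = layer(h)
print(y.shape)  # torch.Size([1, 16, 256])
```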