
Fine-tuning the MiniCPM model with LoRA (Low-Rank Adaptation)

Source: 年旅网


from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer, \
    DataCollatorForLanguageModeling
from peft import LoraConfig, TaskType, get_peft_model
import torch

# 1. Load the dataset
dataset = load_dataset(path="text", data_files="corpus.txt")
print("Training set size:", len(dataset["train"]))
print("Sample record:", dataset["train"][0])
print("Sample record:", dataset["train"][100])

# 2. Preprocess the data
path = 'openbmb/MiniCPM-2B-sft-bf16'
tokenizer = AutoTokenizer.from_pretrained(path)
tokenizer.pad_token = tokenizer.eos_token  # MiniCPM's tokenizer has no pad token by default


def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True, padding=True)


tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=["text"])
train_dataset = tokenized_dataset['train']

# 3. Configure LoRA via PEFT
peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM,
                         target_modules=["q_proj", "o_proj", "k_proj", "v_proj", "gate_proj", "up_proj", "down_proj"],
                         inference_mode=False, r=8,
                         lora_alpha=32, lora_dropout=0.1)
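The `r=8` in the config above is what makes LoRA cheap: instead of training a full weight update ΔW for each target module, LoRA trains two small factors A (r × d_in) and B (d_out × r). A minimal sketch of the resulting parameter count, using a hypothetical square projection of size d = 2048 (not necessarily MiniCPM's actual hidden size):

```python
# Sketch of the LoRA parameter saving for one weight matrix.
# d_in = d_out = 2048 and r = 8 are illustrative values, not MiniCPM's.
def lora_param_counts(d_in, d_out, r):
    """Trainable parameters: full fine-tuning vs. a rank-r LoRA adapter."""
    full = d_in * d_out          # every entry of W is trainable
    lora = r * (d_in + d_out)    # A is r x d_in, B is d_out x r
    return full, lora

full, lora = lora_param_counts(2048, 2048, r=8)
print(full, lora, f"{lora / full:.2%}")  # 4194304 32768 0.78%
```

This is roughly what `model.print_trainable_parameters()` later reports in aggregate: the adapter weights are well under 1% of the targeted matrices.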

# 4. Load the pretrained model
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map='cuda',
                                             trust_remote_code=True)

# 5. Wrap the model with the PEFT (LoRA) adapters
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# 6. Set the training arguments
training_args = TrainingArguments(
    output_dir="MiniCPM-2B-sft-bf16/peft",
    learning_rate=1e-3,
    per_device_train_batch_size=1,
    num_train_epochs=2,
    weight_decay=0.01,
    save_strategy="epoch",
)

# 7. Configure the data collator (mlm=False selects causal-LM labels)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
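With `mlm=False`, the collator prepares causal-LM labels: each label sequence is a copy of `input_ids` with padding positions set to -100 so the loss ignores them. A minimal plain-Python sketch of that masking step (`pad_id=0` is a hypothetical pad token id):

```python
# Sketch of causal-LM label masking as done by the collator with mlm=False:
# labels mirror input_ids, but pad positions become -100 (ignored by the loss).
def causal_lm_labels(input_ids, pad_id=0):
    return [tok if tok != pad_id else -100 for tok in input_ids]

print(causal_lm_labels([5, 17, 42, 0, 0]))  # [5, 17, 42, -100, -100]
```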

# 8. Train the model
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=None,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

trainer.train()
trainer.save_state()
trainer.save_model(output_dir="MiniCPM-2B-sft-bf16/peft")

