When I fine-tune LLaMA2 using LoRA on a V100 GPU, it takes two hours for one epoch. Is this normal?
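For reference, here is a minimal sketch of the kind of setup I mean, using Hugging Face `transformers` and `peft`. The model ID, LoRA rank, batch size, and sequence length below are illustrative placeholders rather than my exact configuration, and each of them strongly affects epoch time:

```python
# Minimal LoRA fine-tuning sketch (illustrative; hyperparameters are
# placeholders, not my exact setup).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # assuming the 7B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# fp16 rather than bf16: the V100 (compute capability 7.0) has no bfloat16 support.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

lora_config = LoraConfig(
    r=8,                                  # LoRA rank (placeholder)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train

dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]  # placeholder data

def tokenize(batch):
    # Sequence length has a large impact on per-step time.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,  # placeholder; sized to fit V100 memory
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        fp16=True,                      # V100 cannot use bf16
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even with a setup like this, I understand epoch time depends mainly on dataset size, sequence length, and effective batch size, which is why I'm not sure whether two hours is expected or a sign that something is misconfigured.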