# Gemma 2B Medical Summary (LoRA)
Fine-tuned with LoRA on a medical abstract → plain-language summary task.
## Training Details
- Base model: google/gemma-3-4b-it
- PEFT: LoRA (r=16, alpha=32); see the configuration sketch after this list
- Dataset: Cochrane Library abstracts
- Training samples: 1000
- Epochs: 3
- Loss: Composite (Relevance, Factuality, Readability)
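
For reference, the hyperparameters above correspond to a PEFT setup along these lines. This is a minimal sketch: `target_modules` and `lora_dropout` are assumptions, since the card does not list them.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# r and lora_alpha come from the training details above;
# target_modules and lora_dropout are assumed, not documented in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,                     # assumed value
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```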
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter from this repository.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it")
model = PeftModel.from_pretrained(base_model, "Cristian11212/gemma-2b-medical-summary-lora-20251102-150945")
tokenizer = AutoTokenizer.from_pretrained("Cristian11212/gemma-2b-medical-summary-lora-20251102-150945")
```
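
A minimal generation example follows. The prompt wording is an assumption (the card does not document the training prompt format), so adapt it to match your data; since the base model is instruction-tuned, `tokenizer.apply_chat_template` may also be appropriate.

```python
# Assumed prompt format: plain instruction followed by the abstract text.
abstract = "Background: This review assessed the effects of..."  # placeholder input
prompt = f"Summarize the following medical abstract in plain language:\n\n{abstract}"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
summary = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(summary)
```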
Generated with ❤️ using Claude Code