Gemma 2B Medical Summary (LoRA)

Fine-tuned with LoRA on a medical abstract → plain-language summary task.

Training Details

  • Base model: google/gemma-3-4b-it
  • PEFT: LoRA (r=16, alpha=32); see the configuration sketch after this list
  • Dataset: Cochrane Library abstracts
  • Training samples: 1000
  • Epochs: 3
  • Loss: Composite (Relevance, Factuality, Readability); see the sketch below
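
The LoRA settings above correspond to a PEFT configuration along these lines. This is a minimal sketch: r and alpha are taken from this card, while target_modules, lora_dropout, and task_type are assumptions based on common Gemma fine-tuning setups, not documented here.

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                       # from this card
    lora_alpha=32,              # from this card
    lora_dropout=0.05,          # assumption: not stated on the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)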

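The composite loss is only named above. As a hypothetical illustration of how three scalar terms could be combined, here is a weighted-sum sketch; the weights and the definitions of the individual terms are assumptions, not taken from this card.

# Hypothetical composite objective: a weighted sum of three scalar losses.
def composite_loss(relevance, factuality, readability, weights=(1.0, 1.0, 1.0)):
    w_rel, w_fact, w_read = weights  # assumption: equal weighting by default
    return w_rel * relevance + w_fact * factuality + w_read * readability
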
Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter weights on top of it.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it")
model = PeftModel.from_pretrained(base_model, "Cristian11212/gemma-2b-medical-summary-lora-20251102-150945")
tokenizer = AutoTokenizer.from_pretrained("Cristian11212/gemma-2b-medical-summary-lora-20251102-150945")
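
A minimal generation sketch follows. The plain-text prompt wording and the decoding settings are assumptions; the card does not document the prompt template used during training.

import torch

# Assumption: a simple instruction-style prompt; replace the placeholder
# with a real abstract.
prompt = "Summarize this medical abstract in plain language:\n\n<paste abstract here>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
# Drop the prompt tokens so only the generated summary is decoded.
summary = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(summary)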

Generated with ❤️ using Claude Code
