AI Codex
Foundation Models & LLMs · Developers

LoRA

Also: Low-Rank Adaptation

A parameter-efficient fine-tuning technique that freezes the pretrained weights and trains only small low-rank adapter matrices injected into selected layers, dramatically reducing the compute and storage needed for fine-tuning.
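The idea can be sketched in a few lines: a frozen weight matrix W is augmented with a low-rank update (alpha/r) * B @ A, and only A and B are trained. The class and parameter names below (`LoRALinear`, `r`, `alpha`) are illustrative, not a specific library's API; this is a minimal NumPy sketch, not a production implementation.

```python
import numpy as np

class LoRALinear:
    """A frozen linear layer plus a trainable low-rank update (illustrative sketch)."""
    def __init__(self, d_in, d_out, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight: never updated during fine-tuning.
        self.W = rng.normal(size=(d_out, d_in))
        # Trainable adapters: A starts small, B starts at zero so the
        # adapted layer initially matches the pretrained layer exactly.
        self.A = rng.normal(scale=0.01, size=(r, d_in))
        self.B = np.zeros((d_out, r))
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha/r) * B (A x); only A and B receive gradients.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

layer = LoRALinear(d_in=512, d_out=512, r=4)
x = np.ones(512)

# Before any training, the adapter contributes nothing (B is zero),
# so the output equals the frozen base layer's output.
assert np.allclose(layer.forward(x), layer.W @ x)

# Trainable parameters: r*(d_in + d_out) vs. d_in*d_out for full fine-tuning.
trainable = layer.A.size + layer.B.size   # 4*(512+512) = 4096
full = layer.W.size                       # 512*512 = 262144
```

The storage savings follow directly from the parameter count: with rank r much smaller than the layer dimensions, the adapters are a tiny fraction of the full weight matrix, and a fine-tuned variant can be shipped as just A and B.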