I’d like to fine-tune stabilityai/stable-diffusion-2-1-unclip, but the repo contains several sub-models, each with its own config.json. At a minimum I think I’d want to fine-tune the text and image encoders, but it’s not clear to me how.
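For context, here's a minimal sketch of what I've tried so far, assuming `StableUnCLIPImg2ImgPipeline` is the right entry point for this checkpoint and that freezing everything except the two encoders is a sensible starting point. I'm not sure this is the correct way to set up the trainable parameters:

```python
import torch
from diffusers import StableUnCLIPImg2ImgPipeline

# Load the full pipeline; each sub-model in the repo (unet, vae,
# text_encoder, image_encoder, ...) is loaded from its own subfolder.
pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip",
    torch_dtype=torch.float32,
)

# Freeze the sub-models I don't want to touch...
pipe.unet.requires_grad_(False)
pipe.vae.requires_grad_(False)

# ...and unfreeze only the two encoders.
pipe.text_encoder.requires_grad_(True)   # CLIPTextModel
pipe.image_encoder.requires_grad_(True)  # CLIPVisionModelWithProjection

# Optimize just the encoder parameters (lr is a placeholder guess).
params = list(pipe.text_encoder.parameters()) + list(pipe.image_encoder.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-5)
```

Is this the right general approach, or is there an existing training script that handles the unCLIP variant?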