Add library_name tag, link to paper and project page
#25, opened by nielsr (HF Staff)

README.md CHANGED
````diff
@@ -1,16 +1,22 @@
 ---
-thumbnail: "https://repository-images.githubusercontent.com/523487884/fdb03a69-8353-4387-b5fc-0d85f888a63f"
 datasets:
 - ChristophSchuhmann/improved_aesthetics_6plus
 license: creativeml-openrail-m
+library_name: diffusers
+pipeline_tag: image-to-image
 tags:
 - stable-diffusion
 - stable-diffusion-diffusers
 - image-to-image
+thumbnail: https://repository-images.githubusercontent.com/523487884/fdb03a69-8353-4387-b5fc-0d85f888a63f
 ---
 
 # Stable Diffusion Image Variations Model Card
 
+Model based on [Aligned Novel View Image and Geometry Synthesis via Cross-modal Attention Instillation](https://huggingface.co/papers/2506.11924)
+
+Project page: https://cvlab-kaist.github.io/MoAI/
+
 📣 V2 model released, and blurriness issues fixed! 📣
 
 🧨🎉 Image Variations is now natively supported in 🤗 Diffusers! 🎉🧨
````
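Merging the additions with the moved `thumbnail` field, the card's front matter after this change would read roughly as follows (reconstructed from the hunk above, nothing beyond it):

```yaml
---
datasets:
- ChristophSchuhmann/improved_aesthetics_6plus
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: image-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- image-to-image
thumbnail: https://repository-images.githubusercontent.com/523487884/fdb03a69-8353-4387-b5fc-0d85f888a63f
---
```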
````diff
@@ -33,6 +39,7 @@ Make sure you are using a version of Diffusers >=0.8.0 (for older version see th
 ```python
 from diffusers import StableDiffusionImageVariationPipeline
 from PIL import Image
+from torchvision import transforms
 
 device = "cuda:0"
 sd_pipe = StableDiffusionImageVariationPipeline.from_pretrained(
````
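The hunk above cuts off right after the `from_pretrained(` call. For context, here is a minimal sketch of how such a Diffusers image-variation snippet typically continues, and why the new `torchvision` import matters for preprocessing. The repo id, revision, file paths, and normalization constants below are illustrative assumptions, not part of this diff:

```python
from diffusers import StableDiffusionImageVariationPipeline
from PIL import Image
from torchvision import transforms

device = "cuda:0"

# Assumed repo id and revision for illustration; use the repository this card belongs to.
sd_pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers",  # assumption
    revision="v2.0",                             # assumption
)
sd_pipe = sd_pipe.to(device)

# Resize to the CLIP input resolution and normalize with CLIP image statistics
# (assumed preprocessing; this is what the torchvision import is used for).
im = Image.open("path/to/image.jpg")
tform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize(
        (224, 224),
        interpolation=transforms.InterpolationMode.BICUBIC,
        antialias=False,
    ),
    transforms.Normalize(
        [0.48145466, 0.4578275, 0.40821073],
        [0.26862954, 0.26130258, 0.27577711],
    ),
])
inp = tform(im).to(device).unsqueeze(0)

# Generate a variation conditioned on the image embedding.
out = sd_pipe(inp, guidance_scale=3)
out.images[0].save("result.jpg")
```

Passing an already-normalized tensor rather than a raw PIL image keeps the preprocessing explicit, which is exactly why `torchvision.transforms` is pulled into the imports.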
````diff
@@ -185,7 +192,7 @@ If you are using a diffusers version <0.8.0 there is no `StableDiffusionImageVar
 in this case you need to use an older revision (`2ddbd90b14bc5892c19925b15185e561bc8e5d0a`) in conjunction with the lambda-diffusers repo:
 
 
-First clone [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers) and install any requirements (in a virtual environment in the example below)
+First clone [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers) and install any requirements (in a virtual environment in the example below):\
 
 ```bash
 git clone https://github.com/LambdaLabsML/lambda-diffusers.git
````
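The `bash` block is truncated after the `git clone` line in this view. A typical virtual-environment setup for the cloned repo might look like the sketch below; everything after the `git clone` line is an assumption, not part of this diff:

```bash
git clone https://github.com/LambdaLabsML/lambda-diffusers.git
cd lambda-diffusers
# Create and activate an isolated environment, then install the repo's requirements.
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```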
````diff
@@ -202,6 +209,7 @@ from pathlib import Path
 from lambda_diffusers import StableDiffusionImageEmbedPipeline
 from PIL import Image
 import torch
+from torchvision import transforms
 
 device = "cuda" if torch.cuda.is_available() else "cpu"
 pipe = StableDiffusionImageEmbedPipeline.from_pretrained(
````
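As with the Diffusers snippet, this legacy example is cut off at `from_pretrained(`. Below is a hedged sketch of how the legacy lambda-diffusers pipeline is typically driven; the repo id, input/output paths, and the exact call and return conventions of `StableDiffusionImageEmbedPipeline` are assumptions based on the older revision referenced above, so treat the lambda-diffusers repo as the authoritative source:

```python
from pathlib import Path

import torch
from lambda_diffusers import StableDiffusionImageEmbedPipeline
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed repo id; the revision hash is the older one mentioned in the card.
pipe = StableDiffusionImageEmbedPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers",          # assumption
    revision="2ddbd90b14bc5892c19925b15185e561bc8e5d0a",  # revision from the card
)
pipe = pipe.to(device)

# Generate a handful of variations from one conditioning image.
im = Image.open("path/to/image.jpg")
num_samples = 4
out = pipe(num_samples * [im], guidance_scale=3.0)

# Save the generated samples (the "sample" output key is assumed for the legacy pipeline).
out_dir = Path("outputs/im2im")
out_dir.mkdir(exist_ok=True, parents=True)
for idx, image in enumerate(out["sample"]):
    image.save(out_dir / f"{idx:06}.jpg")
```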
|