Commit 0a3124b (parent: ac91a8c), committed by Atila

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
  ---
  # SDXL 1.0-base Model Card (Core ML, 4.04-bit for iOS)

- This model was generated by Hugging Face using [Apple’s repository](https://github.com/apple/ml-stable-diffusion) which has [ASCL](https://github.com/apple/ml-stable-diffusion/blob/main/LICENSE.md). This version contains Core ML weights with the `ORIGINAL` attention implementation, suitable for running on macOS GPUs.
+ This model was generated by Hugging Face using [Apple’s repository](https://github.com/apple/ml-stable-diffusion) which has [ASCL](https://github.com/apple/ml-stable-diffusion/blob/main/LICENSE.md).

  This version uses 4.04 mixed-bit palettization and generates images with a resolution of 768×768. It uses `SPLIT_EINSUM` attention and is intended for use in iOS/iPadOS 17 or better. It also uses a version of the VAE decoder [created by `@madebyollin`](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix), which allows running using 16-bit precision and helps keep the memory footprint contained.
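For context, here is a minimal sketch of how Core ML weights like these are typically loaded on-device with the Swift package from [Apple’s repository](https://github.com/apple/ml-stable-diffusion). The type and parameter names used below (`StableDiffusionXLPipeline`, `reduceMemory`, the resource-directory layout) are assumptions based on that package and may differ between versions; this is illustrative only and not part of the commit above.

```swift
import Foundation
import CoreGraphics
import CoreML
import StableDiffusion // Swift package from https://github.com/apple/ml-stable-diffusion

// Sketch: assumes the package exposes an SDXL pipeline named
// `StableDiffusionXLPipeline`; exact names may vary across versions.
func generateImage(resourcesAt resourceURL: URL, prompt: String) throws -> CGImage? {
    let mlConfig = MLModelConfiguration()
    // `SPLIT_EINSUM` attention targets the Neural Engine on iPhone/iPad.
    mlConfig.computeUnits = .cpuAndNeuralEngine

    let pipeline = try StableDiffusionXLPipeline(
        resourcesAt: resourceURL,   // directory holding the compiled .mlmodelc bundles
        configuration: mlConfig,
        reduceMemory: true          // helps on memory-constrained iOS/iPadOS devices
    )
    try pipeline.loadResources()

    var config = StableDiffusionXLPipeline.Configuration(prompt: prompt)
    config.stepCount = 20
    config.seed = 93

    // One entry per requested image; a nil entry means that image failed to generate.
    let images = try pipeline.generateImages(configuration: config) { _ in true }
    return images.compactMap { $0 }.first
}
```

If the Neural Engine path is unavailable on a given device, setting `computeUnits = .all` lets Core ML choose the compute units itself.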