Stable Diffusion

This is a copy of the /r/StableDiffusion wiki, to help people who need access to that information.

---

Howdy and welcome to r/stablediffusion! I'm u/Sandcheeze and I have collected these resources and links to help you enjoy Stable Diffusion, whether you are here for the first time or looking to add more customization to your image generations. If you'd like to show support, feel free to send us kind words or check out our Discord. Donations are appreciated, but not necessary, as you being a great part of the community is all we ask for.

*Note: The community resources provided here are not endorsed, vetted, nor provided by Stability AI.*

# Stable Diffusion

## [Local Installation](https://rentry.org/aqxqsu)
Active community repos/forks to install on your PC and keep it local.

## [Online Websites](https://rentry.org/tax8k)
Websites with usable Stable Diffusion right in your browser. No need to install anything.

## [Mobile Apps](https://rentry.org/hxutx)
Stable Diffusion on your mobile device.

# [Tutorials](https://rentry.org/6zfu3)
Learn how to improve your skills in using Stable Diffusion, whether you are a beginner or an expert.

# [Dream Booth](https://rentry.org/vza5s)
How to train a custom model, and resources on doing so.

# [Models](https://rentry.org/puuat)
Models specially trained towards certain subjects and/or styles.

# [Embeddings](https://rentry.org/6aq8q)
Tokens trained on specific subjects and/or styles.

# [Bots](https://rentry.org/36vtrs)
Bots you can self-host, or bots you can use directly on various websites and services such as Discord, Reddit, etc.

# [3rd Party Plugins](https://rentry.org/r44i3)
SD plugins for programs such as Discord, Photoshop, Krita, Blender, GIMP, etc.

# Other useful tools

* [Diffusion Toolkit](https://github.com/RupertAvery/DiffusionToolkit) - Image viewer/organizer that scans your generated images for PNG info.
* [Pixiz Morphing](https://en.pixiz.com/template/Morphing-transition-between-2-photos-4635) - Easily transition between 2 photos.
* [Bulk Image Resizing Made Easy 2.0](https://www.birme.net/?target_width=512&target_height=512)

# Community

## Games

* [PictionAIry](https://pictionairy.com/) : ([Video](https://www.youtube.com/watch?v=T2sNtJPqdNU)|2-6 Players) - The image guessing game where AI does the drawing!

## Podcasts

* [This is Not An AI Art Podcast](https://open.spotify.com/show/4RxBUvcx71dnOr1e1oYmvV?si=9b64502c9c344ee4) - Doug Smith talks about AI art and provides the prompts/workflow on [his site](https://hackmd.io/@dougbtv/HkAGcsEf2).

# Databases or Lists

* [AiArtApps](https://www.aiartapps.com/)
* [Stable Diffusion Akashic Records](https://github.com/Maks-s/sd-akashic)
* [Questianon's SD Updates 1](https://rentry.org/sdupdates)
* [Questianon's SD Updates 2](https://rentry.org/sdupdates2)
* [SW-Yw's Stable Diffusion Repo List](https://github.com/sw-yx/prompt-eng/blob/main/README.md#sd-distros)
* [Plonk's SD Model List (NSFW)](https://rentry.org/sdmodels)
* [Nightkall's Useful Lists](https://www.reddit.com/r/StableDiffusion/comments/xcrm4d/useful_prompt_engineering_tools_and_resources/)
* [Civitai](https://civitai.com/) - Website with a list of custom models.

**Still updating this with more links as I collect them all here.**

# FAQ

## How do I use Stable Diffusion?

* Check out our guides section above!

## Will it run on my machine?

* Stable Diffusion requires a GPU with 4GB+ of VRAM to run locally. However, much beefier graphics cards (10, 20, 30 series Nvidia cards) will be necessary to generate high-resolution or high-step images. Alternatively, anyone can run it online through **[DreamStudio](https://beta.dreamstudio.ai/dream)** or by hosting it on their own GPU compute cloud server.
* Only Nvidia cards are officially supported.
* AMD support is available **[here, unofficially](https://www.reddit.com/r/StableDiffusion/comments/wv3zam/comment/ild7yv3/?utm_source=share&utm_medium=web2x&context=3)**.
* Apple M1 chip support is available **[here, unofficially](https://www.reddit.com/r/StableDiffusion/comments/wx0tkn/stablediffusion_runs_on_m1_chips/)**.
* Intel-based Macs currently do not work with Stable Diffusion.

## How do I get a website or resource added here?

*If you have a suggestion for a website or a project to add to our list, or if you would like to contribute to the wiki, please don't hesitate to reach out to us via modmail or message me.*

61
14
huggingface.co

>This is a fine-tune of the FLUX.1-schnell model that has had the distillation trained out of it. FLUX.1-schnell is licensed under Apache 2.0, but it is a distilled model, meaning you cannot fine-tune it. However, it is an amazing model that can generate great images in 1-4 steps. This is an attempt to remove the distillation and create an open-source, permissively licensed model that can be fine-tuned.

7
0
github.com

GGUF diffusion model (FLUX) support added by @rupeshs.

Release: https://github.com/rupeshs/fastsdcpu/releases/tag/v1.0.0-beta.62
More details: https://github.com/rupeshs/fastsdcpu?tab=readme-ov-file#gguf-support

5
0
github.com

Release: https://github.com/bghira/SimpleTuner/releases/tag/v1.1
Quickstart: https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md

4
0
civitai.com

# Abstract

>In this work, we share the insights for achieving state-of-the-art quality in our text-to-image anime image generative model, called Illustrious. To achieve high resolution, dynamic color range images, and high restoration ability, we focus on three critical approaches for model improvement. First, we delve into the significance of the batch size and dropout control, which enables faster learning of controllable token based concept activations. Second, we increase the training resolution of images, affecting the accurate depiction of character anatomy in much higher resolution, extending its generation capability over 20MP with proper methods. Finally, we propose the refined multi-level captions, covering all tags and various natural language captions as a critical factor for model development. Through extensive analysis and experiments, Illustrious demonstrates state-of-the-art performance in terms of animation style, outperforming widely-used models in illustration domains, propelling easier customization and personalization with nature of open source. We plan to publicly release updated Illustrious model series sequentially as well as sustainable plans for improvements.

Paper: https://arxiv.org/abs/2409.19946
Model: https://civitai.com/models/795765/illustrious-xl
Official Release Page: https://huggingface.co/KBlueLeaf/kohaku-xl-beta5

![](https://i.imgur.com/rO5x4uK.png)
![](https://imgur.com/J0ILjiY.png)

2
0

# Abstract

> Recent controllable generation approaches such as FreeControl and Diffusion Self-guidance bring fine-grained spatial and appearance control to text-to-image (T2I) diffusion models without training auxiliary modules. However, these methods optimize the latent embedding for each type of score function with longer diffusion steps, making the generation process time-consuming and limiting their flexibility and use. This work presents Ctrl-X, a simple framework for T2I diffusion controlling structure and appearance without additional training or guidance. Ctrl-X designs feed-forward structure control to enable the structure alignment with a structure image and semantic-aware appearance transfer to facilitate the appearance transfer from a user-input image. Extensive qualitative and quantitative experiments illustrate the superior performance of Ctrl-X on various condition inputs and model checkpoints. In particular, Ctrl-X supports novel structure and appearance control with arbitrary condition images of any modality, exhibits superior image quality and appearance transfer compared to existing works, and provides instant plug-and-play functionality to any T2I and text-to-video (T2V) diffusion model. See our project page for an overview of the results: [this https URL](https://genforce.github.io/ctrl-x)

Paper: https://arxiv.org/abs/2406.07540
Code: https://github.com/genforce/ctrl-x
Project Page: https://genforce.github.io/ctrl-x/

![](https://imgur.com/4Q8CQDV.png)
![](https://imgur.com/cREsI4T.png)
![](https://imgur.com/I41Wjky.png)
![](https://imgur.com/4mp3XXn.png)
![](https://imgur.com/OkLaRLX.png)

18
0

I'd like to fine-tune a model that does img2img with a text prompt to guide the output. I think [img2img-turbo](https://github.com/GaParmar/img2img-turbo) might be the closest to what I'm after, though by default it uses a fixed prompt, which can be made variable with [some tweaking of the training code](https://github.com/GaParmar/img2img-turbo/issues/41). At the moment I only have access to 24GB of VRAM, which limits my options. My goal is to train a model to make specific text-based modifications to images, and I have plenty of before/after image pairs plus the modification text prompts to train on. Worst case, I can see whether reducing the image size during training makes it possible with my setup. Are there any other options available today?
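The "before/after pairs plus an edit instruction" setup described above is essentially what InstructPix2Pix-style models are trained on, so that family may be worth comparing against img2img-turbo. A minimal inference sketch with the diffusers library is shown below; the checkpoint is the public reference model, not anything from this post, and the file names and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Illustrative only: timbrooks/instruct-pix2pix is the public reference checkpoint,
# not a custom model trained on the poster's data.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

before = load_image("before.png")          # placeholder "before" image
edited = pipe(
    "make it look like winter",            # placeholder modification prompt
    image=before,
    num_inference_steps=20,
    image_guidance_scale=1.5,              # how closely to follow the input image
).images[0]
edited.save("after.png")
```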

7
1
https://youtu.be/skw9lPKFfcE

# Abstract

> Character video synthesis aims to produce realistic videos of animatable characters within lifelike scenes. As a fundamental problem in the computer vision and graphics community, 3D works typically require multi-view captures for per-case training, which severely limits their applicability of modeling arbitrary characters in a short time. Recent 2D methods break this limitation via pre-trained diffusion models, but they struggle for pose generality and scene interaction. To this end, we propose MIMO, a novel generalizable model which can not only synthesize character videos with controllable attributes (i.e., character, motion and scene) provided by simple user inputs, but also simultaneously achieve advanced scalability to arbitrary characters, generality to novel 3D motions, and applicability to interactive real-world scenes in a unified framework. The core idea is to encode the 2D video to compact spatial codes, considering the inherent 3D nature of video occurrence. Concretely, we lift the 2D frame pixels into 3D using monocular depth estimators, and decompose the video clip to three spatial components (i.e., main human, underlying scene, and floating occlusion) in hierarchical layers based on the 3D depth. These components are further encoded to canonical identity code, structured motion code and full scene code, which are utilized as control signals of synthesis process. This spatial decomposition strategy enables flexible user control, spatial motion expression, as well as 3D-aware synthesis for scene interactions. Experimental results demonstrate the proposed method's effectiveness and robustness.

Paper: http://arxiv.org/abs/2409.16160
Code: https://github.com/menyifang/MIMO (coming soon)
Project Page: https://menyifang.github.io/projects/MIMO/index.html

![](https://menyifang.github.io/projects/MIMO/assets/teaser_scene2.mp4)

10
0
https://www.youtube.com/watch?v=y80W3PjR0Gc

Release: https://github.com/invoke-ai/InvokeAI/releases/

13
0
huggingface.co

Qwen2-VL-7B-Captioner-Relaxed is an instruction-tuned version of Qwen2-VL-7B-Instruct, an advanced multimodal large language model. This fine-tuned version is based on a hand-curated dataset for text-to-image models, providing significantly more detailed descriptions of given images.
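A rough usage sketch with the transformers Qwen2-VL integration follows. The repo id, image path, prompt, and generation settings are assumptions for illustration only; check the model card for the exact recommended invocation.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Assumed repo id for illustration; substitute the actual Captioner-Relaxed checkpoint.
model_id = "Ertugrul/Qwen2-VL-7B-Captioner-Relaxed"

model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("sample.jpg")  # placeholder image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image in detail for a text-to-image prompt."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
caption = processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(caption)
```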

10
1

At the end of the day, my hardware is not really suited to SD; it works only through hacks like tiling in A1111. And while that's fine for my hobby experimenting, I would like other people, or even myself once I finally upgrade my desktop, to be able to recreate my images in better quality, as closely as possible (or even try to create variations). I already make sure to keep the "PNG info" metadata, which lists most parameters, so I assume the main variable left is the RNG source. Are any of the options hardware-independent? If not, are there any extensions that can provide a hardware-independent random number source?
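For what it's worth, A1111 exposes a "Random number generator source" option (GPU / CPU / NV) in its settings, and drawing the initial noise on the CPU is the usual way to make a seed reproduce across different hardware. A minimal sketch of the same idea in diffusers, where the model id, prompt, and seed are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder model/prompt/seed; the point is the CPU generator:
# initial latent noise drawn on the CPU does not depend on which GPU renders the image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator(device="cpu").manual_seed(1234)
image = pipe("a lighthouse at dusk", generator=generator,
             num_inference_steps=20, guidance_scale=7.5).images[0]
image.save("out.png")
```

Note that this only makes the noise source hardware-independent; small numerical differences between GPUs can still nudge the result slightly.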

13
7
https://imgur.com/a/Yyheica.mp4

# Abstract

>We propose the first video diffusion framework for reference-based lineart video colorization. Unlike previous works that rely solely on image generative models to colorize lineart frame by frame, our approach leverages a large-scale pretrained video diffusion model to generate colorized animation videos. This approach leads to more temporally consistent results and is better equipped to handle large motions. Firstly, we introduce Sketch-guided ControlNet which provides additional control to finetune an image-to-video diffusion model for controllable video synthesis, enabling the generation of animation videos conditioned on lineart. We then propose Reference Attention to facilitate the transfer of colors from the reference frame to other frames containing fast and expansive motions. Finally, we present a novel scheme for sequential sampling, incorporating the Overlapped Blending Module and Prev-Reference Attention, to extend the video diffusion model beyond its original fixed-length limitation for long video colorization. Both qualitative and quantitative results demonstrate that our method significantly outperforms state-of-the-art techniques in terms of frame and video quality, as well as temporal consistency. Moreover, our method is capable of generating high-quality, long temporal-consistent animation videos with large motions, which is not achievable in previous works. Our code and model are available at [this https URL](https://luckyhzt.github.io/lvcd).

Paper: https://arxiv.org/abs/2409.12960
Project Page: https://luckyhzt.github.io/lvcd
Code: (coming soon)
Supplementary Demo clips: https://luckyhzt.github.io/lvcd/supplementary/supplementary.html

![](https://luckyhzt.github.io/lvcd/videos/comp_3.webm)
![](https://luckyhzt.github.io/lvcd/supplementary/general/C007070%20L205-50.webm)
![](https://luckyhzt.github.io/lvcd/supplementary/general/C010040%20L125-59.webm)
![](https://luckyhzt.github.io/lvcd/supplementary/general/C002330%20L125-37.webm)

6
0
https://imgur.com/Z440Gbv.png

# Abstract

>In this work, we introduce OmniGen, a new diffusion model for unified image generation. Unlike popular diffusion models (e.g., Stable Diffusion), OmniGen no longer requires additional modules such as ControlNet or IP-Adapter to process diverse control conditions. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports other downstream tasks, such as image editing, subject-driven generation, and visual-conditional generation. Additionally, OmniGen can handle classical computer vision tasks by transforming them into image generation tasks, such as edge detection and human pose recognition. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional text encoders. Moreover, it is more user-friendly compared to existing diffusion models, enabling complex tasks to be accomplished through instructions without the need for extra preprocessing steps (e.g., human pose estimation), thereby significantly simplifying the workflow of image generation. 3) Knowledge Transfer: Through learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model's reasoning capabilities and potential applications of chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and there remain several unresolved issues. We will open-source the related resources at [this https URL](https://github.com/VectorSpaceLab/OmniGen) to foster advancements in this field.

Paper: https://arxiv.org/abs/2409.11340
Code: https://github.com/VectorSpaceLab/OmniGen (coming soon)

![](https://imgur.com/HZeG2oD.png)
![](https://i.imgur.com/l1nDu6J.png)
![](https://i.imgur.com/7RGMXSf.png)
![](https://i.imgur.com/U89rf0H.png)
![](https://i.imgur.com/ZfW2Zwk.png)
![](https://i.imgur.com/A8n7w5Z.png)
![](https://i.imgur.com/S9e8Us6.png)

15
7
https://files.catbox.moe/kihq0q.mp4

[CogVideoX-5b](https://huggingface.co/THUDM/CogVideoX-5b-I2V) · [Pallaidium](https://github.com/tin2tin/Pallaidium)
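For reference, the linked checkpoint can also be run outside Blender with the diffusers CogVideoX image-to-video pipeline. A rough sketch follows; the input image, prompt, and settings are placeholders, and Pallaidium's own invocation may differ.

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps fit the 5b model in consumer VRAM

image = load_image("start_frame.png")                    # placeholder first frame
frames = pipe(
    prompt="a slow cinematic pan across the scene",      # placeholder prompt
    image=image,
    num_frames=49,
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]
export_to_video(frames, "output.mp4", fps=8)
```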

7
0

This post is a developer diary, of sorts. I'm making an improved CLIP interrogator using nearest-neighbor decoding: https://huggingface.co/codeShare/JupyterNotebooks/blob/main/sd_token_similarity_calculator.ipynb , unlike the Pharmapsychotic model, aka the "vanilla" CLIP interrogator: https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator/discussions

It doesn't require a GPU to run, and it is super quick. The reason is that the text_encodings are calculated ahead of time. I have plans to make this a Huggingface module.

//----//

This post is going to be a bit haphazard, but that's the way things are before I get the Huggingface Gradio module up and running. Then it can be a fancy "feature" post, but I have no idea when I will be able to code that. So better to give an update on the ad-hoc solution I have now.

The NND method I'm using is described in this paper, which presents various ways to improve CLIP interrogators: https://arxiv.org/pdf/2303.03032

![](https://lemmy.world/pictrs/image/d499215c-248d-43b5-9c8c-8360581229da.png)

It's easier to just use the notebook than to follow this gibberish. We pre-encode a bunch of prompt items, then select the most similar one using the dot product. That's the TL;DR (see the sketch at the end of this post).

Right now the resources available are the ones you see in the image. I'll try to showcase it at some point. But really, I'm mostly building this tool because it is very convenient for myself, plus it's a fun challenge to use CLIP. It's more complicated than the regular CLIP interrogator, but we get a whole bunch of items to select from, and we can choose exactly "how similar" we want the result to be to the target image/text encoding.

The \{itemA|itemB|itemC\} format is used because it selects an item at random when used on the perchance text-to-image servers, on which I have a generator that uses the full dataset: https://perchance.org/fusion-ai-image-generator

NOTE: I've realized new users get errors when loading the fusion gen for the first time. It takes minutes to load a fraction of the sets from the perchance servers before this generator is "up and running", so to speak. I plan to migrate the database to a Huggingface repo to solve this: https://huggingface.co/datasets/codeShare/text-to-image-prompts

The \{itemA|itemB|itemC\} format is also a built-in random-selection feature in ComfyUI:

![](https://lemmy.world/pictrs/image/acdd7ee0-7a4e-425b-b0ec-56ffc85e73ab.png)

Source: https://blenderneko.github.io/ComfyUI-docs/Interface/Textprompts/#up-and-down-weighting

The links/resources posted here might be useful to someone in the meantime.

![](https://lemmy.world/pictrs/image/9230bde6-42a7-4f6e-881b-0af25fec4458.png)

You can find tons of strange modules on the Huggingface Spaces page: https://huggingface.co/spaces

text_encoding_converter (also in the NND notebook): https://huggingface.co/codeShare/JupyterNotebooks/blob/main/indexed_text_encoding_converter.ipynb

I'm using this to batch-process JSON files into JSON + text_encoding paired files. Really useful (for me at least) when building the interrogator. It runs on either the Colab GPU or on Kaggle for added speed: https://www.kaggle.com/

Here is the dataset folder: https://huggingface.co/datasets/codeShare/text-to-image-prompts

![](https://lemmy.world/pictrs/image/388bbdbd-8277-49c0-9e97-82534cd938c0.png)

Inside these folders you can see the auto-generated safetensor + JSON pairings in the "text" and "text_encodings" folders. The JSON file(s) of prompt items from which these were processed are in the "raw" folder.

![](https://lemmy.world/pictrs/image/68775dfb-d674-4673-a20b-78acd52ff593.png)

The text_encodings are stored as safetensors. These all represent 100K female first names, with 1K items in each file. By splitting the files this way, far less RAM / VRAM is used, as the 1K-item lists can be processed one at a time.

![](https://lemmy.world/pictrs/image/bfe821f0-4015-4c54-89b1-659262984991.png)

I can process roughly 50K text encodings in about the time it takes to write this post (currently processing a set of 100K female first names into text encodings for the NND CLIP interrogator).

EDIT: Here is the output, uploaded: https://huggingface.co/datasets/codeShare/text-to-image-prompts/tree/main/names/firstnames

I've updated the notebook to include a similarity search for ~100K female first names, 100K last names, and a randomized 36K mix of female first names + last names. It's a JSON + safetensor pairing with 1K items in each; inside the JSON is the name of the .safetensor file it corresponds to. This system is super quick :)!

I have plans to make the NND image interrogator a public resource on Huggingface later down the line, using these sets. I will likely use the repo for perchance imports as well: https://huggingface.co/datasets/codeShare/text-to-image-prompts

**Source for first names: https://huggingface.co/datasets/jbrazzy/baby_names** - a list of the most popular names given to people in the US, by year.

**Source for last names: https://github.com/Debdut/names.io** - an international list of pretty much all first names + last names in existence. It is a bit borked, as it is biased towards non-western names; I haven't been able to filter it by nationality, unfortunately.

//----//

The TL;DR: you can run a prompt, or an image, through CLIP to get its encoding, then sample the sets above (>400K items at the moment) to get prompt items similar to that thing.
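To make the "pre-encode items, then pick the nearest by dot product" idea concrete, here is a minimal sketch of that nearest-neighbor lookup using a plain transformers CLIP model. The item list and image path are placeholders; the actual notebook loads its pre-computed text_encodings from safetensor files instead of encoding them on the fly.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Placeholder prompt items; the notebook pre-computes these encodings once and saves them.
items = ["oil painting", "studio photograph", "anime screenshot", "pixel art"]

with torch.no_grad():
    text_emb = model.get_text_features(**processor(text=items, padding=True, return_tensors="pt"))
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

    image = Image.open("target.png")            # placeholder target image
    img_emb = model.get_image_features(**processor(images=image, return_tensors="pt"))
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)

# Dot product on normalized vectors = cosine similarity; the highest score wins.
scores = (img_emb @ text_emb.T).squeeze(0)
best = scores.argmax().item()
print(f"nearest item: {items[best]} ({scores[best].item():.3f})")
```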

5
0
github.com

### Highlights for 2024-09-13

Major refactor of [FLUX.1](https://blackforestlabs.ai/announcing-black-forest-labs/) support:
- Full **ControlNet** support, better **LoRA** support, full **prompt attention** implementation
- Faster execution, more flexible loading, additional quantization options, and more...
- Added **image-to-image**, **inpaint**, **outpaint**, **hires** modes
- Added workflow where FLUX can be used as a **refiner** for other models
- Since both *Optimum-Quanto* and *BitsAndBytes* libraries are limited in their platform support matrix, try enabling **NNCF** for quantization/compression on-the-fly!

A few image-related goodies...
- **Context-aware** resize that allows for *img2img/inpaint* even at massively different aspect ratios without distortions!
- **LUT color grading**: apply professional color grading to your images using industry-standard *.cube* LUTs!
- Auto **HDR** image creation for SD and SDXL with both 16-ch true-HDR and 8-ch HDR-effect images ;)

And a few video-related goodies...
- [CogVideoX](https://huggingface.co/THUDM/CogVideoX-5b) **2b** and **5b** variants with support for *text-to-video* and *video-to-video*!
- [AnimateDiff](https://github.com/guoyww/animatediff/) **prompt travel** and **long context windows**! Create videos that travel between different prompts and at long video lengths!

Plus tons of other items and fixes - see the [changelog](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md) for details. Examples:
- Built-in prompt enhancer, TAESD optimizations, new DC-Solver scheduler, global XYZ grid management, etc.
- Updates to ZLUDA, IPEX, OpenVINO...

4
1

This is an open-ended question. I'm not looking for a specific answer, just what people know about this topic. I've asked this question on the Huggingface Discord as well, but hey, asking on Lemmy is always good, right? No need to answer here. This is a repost, essentially, and might serve as an "update" of sorts to the previous post: https://lemmy.world/post/19509682

//---//

Question: the FLUX model uses a combination of CLIP + T5 to create a text_encoding. CLIP is capable of doing both image_encoding and text_encoding. The T5 model seems to be strictly text-to-text, so I can't use the T5 to create image_encodings. Right? https://huggingface.co/docs/transformers/model_doc/t5

But nonetheless, the T5 encoder is used in text-to-image generation. So surely there must be good uses for the T5 in creating a better CLIP interrogator? Ideas/examples on how to do this? I have 0% knowledge of the T5, so feel free to just send me a link someplace if you don't want to type out an essay.

//----//

For context: I'm making my own version of a CLIP interrogator: https://colab.research.google.com/#fileId=https%3A//huggingface.co/codeShare/JupyterNotebooks/blob/main/sd_token_similarity_calculator.ipynb

The key difference is that this one samples the CLIP-vit-large-patch14 tokens directly instead of using pre-written prompts. I text_encode the tokens individually and store them in a list for later use. I'm using the method shown in this paper, "NND - nearest neighbor decoding":

![](https://lemmy.world/pictrs/image/271638ba-0bc7-4a59-ae7f-476ec78fe2c9.jpeg)

Methods for making better CLIP interrogators: https://arxiv.org/pdf/2303.03032

T5 encoder paper: https://arxiv.org/pdf/1910.10683

Example from the notebook where I'm using the NND method on 49K CLIP tokens (Roman girl image):

![](https://lemmy.world/pictrs/image/ffd3cf0f-ab9d-4f5b-8a8e-f53105b6eed2.jpeg)

Most similar suffix tokens: "{vfx |cleanup |warcraft |defend |avatar |wall |blu |indigo |dfs |bluetooth |orian |alliance |defence |defenses |defense |guardians |descendants |navis |raid |avengersendgame }"

Most similar prefix tokens: "{imperi-|blue-|bluec-|war-|blau-|veer-|blu-|vau-|bloo-|taun-|kavan-|kair-|storm-|anarch-|purple-|honor-|spartan-|swar-|raun-|andor-}"
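On the "T5 can't make image encodings" point, a small sketch of the asymmetry (using a small T5 checkpoint as a stand-in for the T5-XXL that FLUX actually uses): CLIP exposes both text and image features in a shared space, while a T5 encoder only maps text to per-token hidden states, so there is nothing image-side to compare against.

```python
import torch
from transformers import CLIPModel, CLIPProcessor, T5EncoderModel, AutoTokenizer

# CLIP: text and images land in the same 768-dim space, so they can be compared directly.
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
with torch.no_grad():
    clip_text = clip.get_text_features(**clip_proc(text=["a roman girl"], return_tensors="pt"))
print(clip_text.shape)  # torch.Size([1, 768])

# T5 (small stand-in for the T5-XXL used by FLUX): encoder only, text in -> hidden states out.
t5_tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
t5 = T5EncoderModel.from_pretrained("google/flan-t5-small")
with torch.no_grad():
    t5_out = t5(**t5_tok("a roman girl", return_tensors="pt")).last_hidden_state
print(t5_out.shape)  # [1, seq_len, hidden]; there is no image tower to compare this against
```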

10
6

Created by me. Link: https://huggingface.co/codeShare/JupyterNotebooks/blob/main/sd_token_similarity_calculator.ipynb

# How does this work?

Similar vectors = similar output in the SD 1.5 / SDXL / FLUX model.

CLIP converts the prompt text to vectors ("tensors"), with float32 values usually ranging from -1 to 1. The dimensions are \[ 1x768 ] tensors for SD 1.5, and a \[ 1x768 , 1x1024 ] tensor pair for SDXL and FLUX. The SD models and FLUX then convert these vectors to an image.

This notebook takes an input string, tokenizes it and matches the first token against the 49407 token vectors in the vocab.json: [https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer](https://www.google.com/url?q=https%3A%2F%2Fhuggingface.co%2Fblack-forest-labs%2FFLUX.1-dev%2Ftree%2Fmain%2Ftokenizer)

It finds the "most similar tokens" in the list. Similarity is the theta angle between the token vectors.

![](https://lemmy.world/pictrs/image/44b04639-6212-4009-97b9-51b78580784f.jpeg)

The angle is calculated using cosine similarity, where 1 = 100% similarity (parallel vectors) and 0 = 0% similarity (perpendicular vectors). Negative similarity is also possible.

# How can I use it?

If you are bored of prompting "girl" and want something similar, you can run this notebook and use the "chick" token at 21.88% similarity, for example.

You can also run a mixed search, like "cute+girl"/2, where for example "kpop" has a 16.71% similarity.

There are some strange tokens the further down the list you go. Example: tokens similar to the token "pewdiepie</w>" (yes, this is an actual token that exists in CLIP):

![](https://lemmy.world/pictrs/image/a1cd284e-3341-4284-9949-5f8b58d3bd1f.jpeg)

Each of these corresponds to a unique 1x768 token vector. The higher the ID value, the less often the token appeared in the CLIP training data. To reiterate: this is the CLIP model training data, not the SD-model training data. So for certain models, tokens with high IDs can give very consistent results, if the SD model is trained to handle them. An example of this is anime models, where Japanese artist names can affect the output greatly. Tokens with high IDs will often give the "fun" output when used in very short prompts.

# What about token vector length?

If you are wondering about token magnitude: prompt weights like (banana:1.2) will scale the magnitude of the corresponding 1x768 tensor(s) by 1.2. So that's how prompt token magnitude works. Source: [https://huggingface.co/docs/diffusers/main/en/using-diffusers/weighted_prompts](https://www.google.com/url?q=https%3A%2F%2Fhuggingface.co%2Fdocs%2Fdiffusers%2Fmain%2Fen%2Fusing-diffusers%2Fweighted_prompts)

So, TL;DR: vector direction = "what to generate", vector magnitude = "prompt weights".

# How prompting works (technical summary)

1. There is no correct way to prompt.
2. Stable Diffusion reads your prompt left to right, one token at a time, finding association _from_ the previous token _to_ the current token _and to_ the image generated thus far (Cross Attention Rule).
3. Stable Diffusion is an optimization problem that seeks to maximize similarity to the prompt and minimize similarity to the negatives (Optimization Rule).

Reference material (covers all of SD, so not great source material really, but the info is there): https://youtu.be/sFztPP9qPRc?si=ge2Ty7wnpPGmB0gi

# The SD pipeline

For every step (20 in total by default) for SD 1.5:

1. Prompt text => (tokenizer)
2. => Nx768 token vectors => (CLIP model) =>
3. 1x768 encoding => (the SD model / Unet) =>
4. => _Desired_ image per Rule 3 => (sampler)
5. => Paint a section of the image => (image)

# Disclaimer / Trivia

This notebook should be seen as a "dictionary search tool" for the vocab.json, which is the same for SD 1.5, SDXL and FLUX. Feel free to verify this by checking the 'tokenizer' folder under each model. vocab.json in the FLUX model, for example (1 of 2 copies): https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer

I'm using Clip-vit-large-patch14, which is used in SD 1.5 and is one of the two tokenizers for SDXL and FLUX: https://huggingface.co/openai/clip-vit-large-patch14/blob/main/README.md

This set of tokens has dimension 1x768. SDXL and FLUX use an additional set of tokens of dimension 1x1024. These are not included in this notebook. Feel free to include them yourselves (I would appreciate that). To do so, you will have to download a FLUX and/or SDXL model, copy the 49407x1024 tensor list that is stored within the model, and then save it as a .pt file.

//---//

I am aware it is actually the 1x768 text_encoding being processed into an image for the SD models + FLUX. As such, I've included a text_encoding comparison at the bottom of the notebook.

I am also aware that SDXL and FLUX use additional encodings, which are not included in this notebook:

* Clip-vit-bigG for SDXL: https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k/blob/main/README.md
* And the T5 text encoder for FLUX. I have 0% understanding of the FLUX T5 text_encoder.

//---//

If you want them, feel free to include them yourself and share the results (cuz I probably won't) :)!

That being said, being an encoding, I reckon the CLIP Nx768 => 1x768 mapping should be "linear" (or whatever one might call it). So if you exchange a few tokens in the Nx768 for something similar, the resulting 1x768 ought to be kinda similar to the 1x768 we had earlier. Hopefully. I feel it's important to mention this, in case some wonder why the token-to-token similarity doesn't match the text-encoding to text-encoding similarity.

# Note regarding CLIP text encoding vs. token

*To make this disclaimer clear: token-to-token similarity is not the same as text_encoding similarity.*

I have to say this, since it will otherwise get (even more) confusing, as both the individual tokens and the text_encoding have dimensions 1x768. They are separate things, separate results, etc. As such, you will not get anything useful if you start comparing similarity between a token and a text_encoding. So don't do that :)!

# What about the CLIP image encoding?

The CLIP model can also do an image_encoding of an image, where the output will be a 1x768 tensor. These _can_ be compared with the text_encoding. Comparing the CLIP image_encoding with the CLIP text_encoding for a bunch of random prompts until you find the "highest similarity" is the method used in the CLIP interrogator: https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator

The list of random prompts for the CLIP interrogator can be found here, for reference: https://github.com/pharmapsychotic/clip-interrogator/tree/main/clip_interrogator/data

The CLIP image_encoding is not included in this notebook. If you spot errors / have ideas for improvements, feel free to fix the code in your own notebook and post the results. I'd appreciate that over people saying "your math is wrong you n00b!" with no constructive feedback.

//---//

Regarding output:

# What are the </w> symbols?
The whitespace symbol indicates whether the tokenized item ends with whitespace (the suffix "banana</w>" => "banana ") or not (the prefix "post" in "post-apocalyptic "). For ease of reference, I call them prefix-tokens and suffix-tokens.

Sidenote: prefix tokens have the unique property that they "mutate" suffix tokens.

Example: "photo of a #prefix#-banana", where #prefix# is a randomly selected prefix-token from the vocab.json. The hyphen "-" exists to guarantee that the tokenized text splits into the written #prefix# and #suffix# tokens respectively. The "-" hyphen symbol can be replaced by any other special character of your choosing. Capital letters work too, e.g. "photo of a #prefix#Abanana", since the capital letters A-Z are only listed once in the entire vocab.json. You can also choose to omit any separator and just rawdog it with the prompt "photo of a #prefix#banana", but know that this may, on occasion, be tokenized as completely different tokens with lower IDs.

Curiously, common NSFW terms found online have been purposefully fragmented in the CLIP model into separate #prefix# and #suffix# counterparts in the vocab.json, likely for PR reasons.

You can verify the results using this online tokenizer: https://sd-tokenizer.rocker.boo/

![](https://lemmy.world/pictrs/image/43467d75-7406-4a13-93ca-cdc469f944fc.jpeg)
![](https://lemmy.world/pictrs/image/c0411565-0cb3-47b1-a788-b368924d6f17.jpeg)
![](https://lemmy.world/pictrs/image/c27c6550-a88b-4543-9bd7-067dff016be2.jpeg)

# What are those gibberish tokens that show up?

The gibberish tokens like "ðŁĺħ\</w>" are actually emojis! Try writing some emojis in this online tokenizer to see the results: https://sd-tokenizer.rocker.boo/

It is a bit borked, as it can't process capital letters properly.

Also note that this is not reversible. If tokenization turns "😅" => ðŁĺħ</w>, then you can't prompt "ðŁĺħ" and expect to get the same result as the tokenized original emoji, "😅".

SD 1.5 models actually have training for emojis, but you have to set CLIP skip to 1 for this to work as intended. For example, this is the result from "photo of a 🧔🏻‍♂️":

![](https://lemmy.world/pictrs/image/e2b51aea-6960-4ad0-867e-8ce85f2bd51e.jpeg)

That concludes this tutorial on stuff you can do with the vocab.json. Anyway, have fun with the notebook. There might be some updates in the future with features not mentioned here.

//---//
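A condensed sketch of the token-to-token similarity lookup the notebook performs, using the CLIP ViT-L/14 token embedding table directly via transformers; the query word and top-k count are arbitrary choices for illustration.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# The vocab's token vectors: one 768-dim row per entry in vocab.json.
token_vectors = text_model.get_input_embeddings().weight.detach()  # [49408, 768]

query_id = tokenizer.encode("girl", add_special_tokens=False)[0]   # arbitrary query token
query = token_vectors[query_id]

# Cosine similarity = the angle between vectors, as described above.
sims = torch.nn.functional.cosine_similarity(query.unsqueeze(0), token_vectors, dim=-1)
top = sims.topk(10)
for score, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([idx])!r}  {score * 100:.2f}%")
```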

9
1
github.com

Release: https://github.com/bghira/SimpleTuner/releases/tag/v1.0

2
0