
Mo Di Diffusion

Mo Di Diffusion is a fine-tuned Stable Diffusion model by nitrosocke that generates images in a modern animated-film style. It was trained on screenshots from a popular animation studio, and the style is triggered by including the tokens modern disney style in your prompt.



This is the fine-tuned Stable Diffusion model trained on screenshots from a popular animation studio. Use the tokens modern disney style in your prompts for the effect.

If you enjoy my work, please consider supporting me on Patreon.

Samples rendered with the model:

- Videogame characters: Videogame Samples
- Animal characters: Animal Samples
- Cars and landscapes: Misc. Samples

Prompt and settings for Lara Croft:

modern disney lara croft
Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 3940025417, Size: 512x768

Prompt and settings for the Lion:

modern disney (baby lion)
Negative prompt: person human
Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 1355059992, Size: 512x512
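The settings above map directly onto diffusers pipeline arguments. Below is a minimal sketch of that mapping (the parameter names come from the StableDiffusionPipeline call signature; the commented-out call assumes a pipeline loaded as shown further down):

```python
import torch

# The Lara Croft settings expressed as pipeline keyword arguments.
settings = {
    "num_inference_steps": 50,  # Steps: 50
    "guidance_scale": 7.0,      # CFG scale: 7
    "width": 512,               # Size: 512x768
    "height": 768,
}

# The seed is supplied through a torch.Generator, not a keyword argument.
generator = torch.Generator().manual_seed(3940025417)

# With a loaded pipeline, the render call would look like:
# image = pipe("modern disney lara croft",
#              generator=generator, **settings).images[0]

# The same seed reproduces the same initial noise, hence the same image:
g1 = torch.Generator().manual_seed(3940025417)
g2 = torch.Generator().manual_seed(3940025417)
same = torch.equal(torch.randn(4, generator=g1),
                   torch.randn(4, generator=g2))
print(same)  # True
```

The negative prompt from the lion example is passed the same way, as a `negative_prompt="person human"` keyword.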

This model was trained with the diffusers-based DreamBooth training script by ShivamShrirao, using prior-preservation loss and the train-text-encoder flag, for 9,000 steps.
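For reference, a run of that kind with the diffusers DreamBooth script is typically launched roughly as follows. This is a configuration sketch, not the author's actual command: the base model, paths, and class prompt are illustrative placeholders, while the prior-preservation, text-encoder, and step-count flags mirror the description above.

```shell
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./training-images" \
  --class_data_dir="./class-images" \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --train_text_encoder \
  --instance_prompt="modern disney style" \
  --class_prompt="illustration style" \
  --resolution=512 \
  --max_train_steps=9000 \
  --output_dir="./mo-di-diffusion"
```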

🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the Stable Diffusion documentation.

You can also export the model to ONNX, MPS and/or FLAX/JAX.
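For plain PyTorch inference, the backend can be picked at runtime before moving the pipeline. A small sketch (the ONNX and FLAX/JAX export paths are separate workflows; this only covers device selection, and half precision is typically used only on CUDA):

```python
import torch

def pick_device() -> str:
    """Pick the best available torch backend for inference."""
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"   # Apple Silicon
    return "cpu"       # portable fallback

device = pick_device()
# With a loaded StableDiffusionPipeline this would be:
# pipe = pipe.to(device)
print(device)
```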

from diffusers import StableDiffusionPipeline
import torch

model_id = "nitrosocke/mo-di-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a magical princess with golden hair, modern disney style"
image = pipe(prompt).images[0]
image.save("./magical_princess.png")

Gradio & Colab

We also support a Gradio Web UI and a Colab notebook with Diffusers to run fine-tuned Stable Diffusion models: Open In Spaces | Open In Colab


This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

  1. You can’t use the model to deliberately produce or share illegal or harmful outputs or content
  2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
  3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M license with all your users (please read the license entirely and carefully)

Please read the full license here
