
Mo Di Diffusion

Mo Di Diffusion is a fine-tuned Stable Diffusion model by nitrosocke that generates images in a modern animated-movie style. Adding the tokens modern disney style to a prompt steers generations toward that look. The sections below cover sample prompts and settings, training details, usage with the 🧨 Diffusers library, and the model's license.



This is the fine-tuned Stable Diffusion model trained on screenshots from a popular animation studio. Use the tokens modern disney style in your prompts for the effect.

If you enjoy my work, please consider supporting me on Patreon.

Samples rendered with the model:

Videogame characters: Videogame Samples
Animal characters: Animal Samples
Cars and landscapes: Misc. Samples

Prompt and settings for Lara Croft:

modern disney lara croft
Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 3940025417, Size: 512x768

Prompt and settings for the Lion:

modern disney (baby lion)
Negative prompt: person human
Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 1355059992, Size: 512x512
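The flattened settings above translate into 🧨 Diffusers pipeline arguments roughly as follows. This is a sketch under the assumption that pipe is a loaded StableDiffusionPipeline (as in the Diffusers section further down); the actual generation call is left as a comment because it needs the model weights and a GPU.

```python
# Sketch: the sample settings above expressed as diffusers pipeline kwargs.
# Assumes `pipe` is a loaded StableDiffusionPipeline; the call itself is
# commented out since it requires the model weights and a GPU.
lara_croft = {
    "prompt": "modern disney lara croft",
    "num_inference_steps": 50,   # Steps: 50
    "guidance_scale": 7.0,       # CFG scale: 7
    "width": 512,                # Size: 512x768
    "height": 768,
}

baby_lion = {
    "prompt": "modern disney (baby lion)",
    "negative_prompt": "person human",  # Negative prompt
    "num_inference_steps": 50,
    "guidance_scale": 7.0,
    "width": 512,                # Size: 512x512
    "height": 512,
}

# generator = torch.Generator("cuda").manual_seed(3940025417)  # Seed
# image = pipe(**lara_croft, generator=generator).images[0]
```

The "Euler a" sampler corresponds to diffusers' EulerAncestralDiscreteScheduler, which you would set on the pipeline before calling it.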

This model was trained with the diffusers-based DreamBooth training script by ShivamShrirao, using prior-preservation loss and the train-text-encoder flag, for 9,000 steps.

🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the Stable Diffusion documentation.

You can also export the model to ONNX, MPS and/or FLAX/JAX.
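As a hedged illustration (an assumption on my part, not part of the original snippet): rather than hard-coding "cuda", the pipeline can be moved to whichever PyTorch backend is available, including Apple's MPS. The helper below only picks a device string; loading the pipeline is unchanged.

```python
# Sketch: pick whichever PyTorch backend is available -- CUDA, Apple's
# MPS, or plain CPU -- before moving the pipeline onto it.
try:
    import torch
except ImportError:  # lets the sketch run even where torch is absent
    torch = None

def pick_device() -> str:
    """Return the best available torch device string for the pipeline."""
    if torch is None:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

# pipe = pipe.to(pick_device())  # instead of hard-coding pipe.to("cuda")
```

Note that the float16 weights loaded in the snippet below generally require a GPU; on CPU you would load the model in its default precision.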

from diffusers import StableDiffusionPipeline
import torch

# Load the fine-tuned weights in half precision and move them to the GPU.
model_id = "nitrosocke/mo-di-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The "modern disney style" tokens trigger the fine-tuned style.
prompt = "a magical princess with golden hair, modern disney style"
image = pipe(prompt).images[0]

image.save("./magical_princess.png")

Gradio & Colab

We also support a Gradio web UI and a Colab notebook with Diffusers for running fine-tuned Stable Diffusion models: Open In Spaces | Open In Colab

License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

  1. You can’t use the model to deliberately produce or share illegal or harmful outputs or content
  2. The authors claim no rights over the outputs you generate; you are free to use them, but you are accountable for their use, which must not go against the provisions set in the license
  3. You may redistribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

Please read the full license here
