Stable Diffusion using Apple's Core ML implementation for optimal performance on Apple Silicon, in a native app that runs completely offline and allows you to load your own Stable Diffusion Core ML models.
With all the hype and controversy surrounding AI image generation, one thing’s for sure — the technology is here to stay, so we’d better learn to make the most of it. Stable Diffusion is an open-source AI model for generating image content, and there are plenty of intuitive GUIs available for it.
Mochi Diffusion is definitely one of the most user-friendly, while also being quite versatile and efficient. It’s a native app written in Swift, and it relies on Apple’s Core ML Stable Diffusion to ensure the best possible performance on Apple devices. What’s more, it also lets you select the Core ML models you would like to use.
Unlike some other Stable Diffusion GUIs, Mochi Diffusion doesn't come bundled with any AI models. Instead, you'll have to either download or convert the models you want to use. You can find detailed instructions on the project's GitHub page.
The upside is that you can download only the models you're interested in, which matters given how large they can be: you save both disk space and bandwidth. Once you've downloaded them, unpack the archive and place them in the working directory (you can check and change it from the app's preferences).
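The unpack-and-place step can be sketched in shell. Everything below is illustrative: the model name, the component file names, and the stand-in model directory are assumptions, and a temporary directory simulates the app's real working directory (check the actual path in the app's preferences).

```shell
# Illustrative sketch, not Mochi Diffusion's actual code or paths.
set -eu
WORKDIR="$(mktemp -d)"
MODELS_DIR="$WORKDIR/MochiDiffusion/models"   # stand-in for the app's model folder
mkdir -p "$MODELS_DIR"

# A converted Core ML model typically unpacks to a folder of .mlmodelc bundles
# (names below are examples of the usual components):
MODEL="$WORKDIR/stable-diffusion-2-1_split-einsum"
mkdir -p "$MODEL"
touch "$MODEL/TextEncoder.mlmodelc" "$MODEL/Unet.mlmodelc" "$MODEL/VAEDecoder.mlmodelc"

# Move the whole folder into the model directory; the app lists it by folder name.
mv "$MODEL" "$MODELS_DIR/"
ls "$MODELS_DIR"   # prints "stable-diffusion-2-1_split-einsum"
```

The key point is that each model lives in its own folder inside the working directory, and the app picks it up by that folder's name.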
If you’ve used other Stable Diffusion GUIs, everything will feel familiar. You’ll need to provide a series of prompts, set the number of batches and images to be generated, as well as the number of steps and guidance scale. There’s a lot of trial and error involved, so play around with them until you get it right.
You can save or share the created images, and an upscaler is included to increase their resolution. This can also be done for every generated image automatically, but it will increase memory usage.
Image generation is a resource-intensive process, which becomes especially relevant when you need multiple attempts to create an image that’s accurate to your prompt. Since Mochi Diffusion uses Apple’s own Core ML implementation of Stable Diffusion, it’s as fast as you can possibly expect on Apple Silicon Macs.
You can select your preferred compute unit depending on memory requirements, but make sure to also download the right Core ML models. See the instructions on the project’s GitHub page for more details.
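As a rough illustration of the pairing described on the project's GitHub page, here is a tiny helper mapping a compute unit to the model-conversion variant commonly recommended for it ("split-einsum" conversions target the Neural Engine, "original" conversions target CPU and GPU). The function name and the default for "all" are assumptions, not anything Mochi Diffusion exposes.

```shell
# Hypothetical helper; variant names follow the common Core ML SD conventions.
set -eu
pick_variant() {
  case "$1" in
    cpu_and_neural_engine) echo "split-einsum" ;;
    cpu_and_gpu)           echo "original" ;;
    all)                   echo "split-einsum" ;;  # assumption: favor the Neural Engine
    *)                     echo "unknown" ;;
  esac
}
pick_variant cpu_and_gpu   # prints "original"
```

In short: if you switch the compute unit, make sure the model you load was converted with the matching attention implementation, or generation may be slower than necessary.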
While it doesn't offer quite as many features as other Stable Diffusion GUIs, Mochi Diffusion is superior in many other respects. It runs faster than other implementations, you can choose the models you would like to use, and the user interface is very intuitive.
What's new in Mochi Diffusion 5.1:
- Fixed character counter to match the currently selected model (@gdbing)
- Added button to view models in Finder (@alexey-detr)
- Updated project website url
Mochi Diffusion 5.1
- Runs on: macOS 14.0 or later (Apple Silicon)
- File size: 69.5 MB
- Filename: MochiDiffusion_v5.1.dmg
- Main category: Graphics