MindsEye beta - AI art pilot

Come hang out with us on Twitter or Discord, check out the project's GitHub, and consider supporting it on Patreon.

MindsEye beta is a graphical user interface built to run multimodal AI art models for free from a Google Colab (CLIP Guided Diffusion, VQGAN+CLIP, and Latent Diffusion, with more coming soon), without needing to edit a single line of code or know any programming. We built this UI on top of great open source resources from the community, such as Disco Diffusion v5 and Hypertron v2.

Access MindsEye beta here

  • The Rocky Horror Picture Show as an impressionist painting
  • a feijoada ramen
  • an insect robot preparing a meal
  • the spirit of a tamagotchi wandering in the city of Tokyo
  • The Stephansdom Cathedral turned into a Giant MC Donalds
  • Elvira made of timber

(some AI art created with MindsEye)

How to use it:


After accessing it, you'll first land on a Google Colab, an environment provided by Google to run code for machine learning applications, such as this one! But don't worry, you will not need to touch a single line of code. The MindsEye application is just launched via Colab and lives in its own GUI. Follow the guides below for step-by-step guidance:

On Colab, just click "Run all" or press CTRL+F9

When prompted to connect to Google Drive, you can either accept or not - both work. If you do accept, all your creations and the AI models will be stored on your Drive. If you don't, you will lose any unsaved creations once the Colab session expires (which can take 1-2 hours of inactivity to happen).

After that, scroll to the bottom of the page; when it's done loading, you will find a link to click.

Once you do, just Click to Continue through the tunnel, and you'll reach the GUI!

Done! You've reached the GUI.

In the most minimal use-case, you can just type your prompt and click "Generate your piece!"

Otherwise you can:

Enhance your prompt by adding more sentences to it

Those are some prompt enhancers that the community likes using. There are plenty more that can improve your piece.
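For example, you could take one of the prompts from the gallery above and append enhancer phrases to it. The phrases below are common community choices, not an official or exhaustive list:

```text
base prompt:     an insect robot preparing a meal
enhanced prompt: an insect robot preparing a meal, highly detailed,
                 matte painting, trending on ArtStation
```

Each added phrase nudges the model toward a particular style or level of detail, so it's worth experimenting with different combinations.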

Change the model settings

For the image width and height: more than 512x512 will probably not work unless you pay for Google Colab Pro. The number of steps is a trade-off: the fewer the steps, the faster the image generates, but the worse the composition gets. The init image setting allows you to start not from a random image, but from an image you upload.

Change the generation settings (batch mode and save each frame)


Here you can save intermediary frames (the intermediate steps until the final image is generated) and set up a batch mode that will generate several images from the same prompt.

Advanced settings

The models have some advanced configurations. Feel free to click the question mark symbol on each to see what it means, and explore different values for your craft.

Coming soon:

  • Save/restore your own settings
  • Create 2D and 3D animations and not only still frames (from Disco Diffusion v5 and VQGAN Animations)
  • A gallery view on MindsEye itself (without needing to go to Colab)
  • Input audio and images instead of just text

F.A.Q.

Can I run it locally?

Yes. You don't have to, since it can run on Colab, but you could. You will need a powerful Nvidia GPU to get anything meaningful out of it (a GeForce RTX 2080 is considered a bare-minimum requirement), and I have not released easy-to-install local code just yet, so you may need to download the models and dependencies yourself. For those wishing to run Colab models locally, there's a great UI for Windows called Visions of Chaos from which you can run dozens of VQGAN+CLIP, Guided Diffusion and other models.

How is this different from NightCafe or SnowPixel?

In a sense, this is an open source and free alternative to NightCafe and SnowPixel. Both are great UIs for generating multimodal AI art, and it is awesome to have people trying to figure out a business model in this space, given how GPU-heavy these applications are and how pricey GPUs are. But MindsEye's approach is different because: 1. we are fully open source and just add a UI on top of great Colab notebooks such as Disco Diffusion and Hypertron, and 2. our goal is to provide the experience of running these models without thinking about credits or constraints. We also don't collect your prompts or results.

This AI generated art blew my mind. Where can I read more about it?

Check out our curated list of reads about the subject, from tutorials for absolute beginners all the way to in-depth explanations and courses, at https://multimodal.art, or hang out with us on Twitter or Discord.

And who are you?

My name is Apolinário. I created multimodal.art to be a portal with original art creations, news, curations, meta-curations, guides and tools (such as MindsEye) for the exciting field of multimodal AI-generated art. I think the potential of this technology hasn't yet reached all the people who would be excited by it, and I want to help bridge that gap.

And how can I help you do that?

Using MindsEye, providing feedback and sharing it with your friends already helps with the main goal of this project: making this technology more accessible to everybody. But if you want to contribute financially, you can participate in our Patreon. I will not lock MindsEye behind a paywall, but we will have exclusive chats and I will give priority to features requested by Patrons, and you'll be supporting the existence of the whole multimodal.art ecosystem.

Who owns the images produced by MindsEye?

Definitely not me! Probably you do. I say probably because the copyright discussion about art generated by AI is ongoing, so it may be the case that everything produced with this tech falls immediately into the public domain. In any case, it is either yours or in the public domain. I don't claim any ownership of what comes out of this user interface, nor do you have to ask me for anything. But if you want to post results on social media with the #mindseye hashtag and tag @multimodalart on Twitter and @multimodal.art on Instagram, I will be super happy.

Okay, reformulating, can I use the images commercially and/or sell NFTs with it?

Yes.

Do you capture our prompts or the images produced by MindsEye?

No.

Did you build this all yourself?

Absolutely not. I'm a very tiny atom on the shoulders of giants. This is just a UI on top of already existing great models.

The Disco Diffusion v5 model is by @somnai_dreams and @gandamu, based on the foundational work of @RiversHaveWings, with modifications by @danielrussruss, Dango233, @chigozienri, @zippy731, @softologyComAu and others.

The Hypertron v2 VQGAN model is by Philipuss, adapted from @RiversHaveWings, with modifications by @jbusted1, @softologyComAu and others. The original VQGAN+CLIP approach is by @advadnoun. CLIP and Guided Diffusion were originally released by OpenAI; VQGAN was released by CompVis Heidelberg.