- MindsEye Lite released*
- Centipede Diffusion v3 upgrade*
- Latent Princess Generator released*
- Multi-Modal-Comparators released*

\* code released
Gato 🐈: a scalable generalist agent that uses a single transformer with exactly the same weights to play Atari, follow text instructions, caption images, chat with people, control a real robot arm, and more: https://t.co/9Q7WsRBmIC
— DeepMind (@DeepMind) May 12, 2022
Paper: https://t.co/ecHZqzCSAm 1/ pic.twitter.com/cC8ukhw4at
CLIP-CLOP: CLIP-Guided Collage and Photomontage
— AK (@ak92501) May 9, 2022
abs: https://t.co/ECIrymHz2P pic.twitter.com/8hO288dD6M
I've released MindsEye Lite👁️🧠: a UI that runs multiple text-to-image models without Colabs or logins - directly on Hugging Face Spaces
— multimodal ai art (@multimodalart) May 13, 2022
Run Diffusion, DALLE replicas, VQGAN+CLIP. Try it out and consider sending it to someone who hasn't tried AI art yet! https://t.co/d94ECY5GJq pic.twitter.com/npSohbZcak
Centipede Diffusion V3 is out: with real-time mask drawing for inpainting and Real-ESRGAN upscaling. https://t.co/nugZLSx0Kr pic.twitter.com/DY8gxoehr9
— Zalring (@ZalringTW) May 13, 2022
Latent Princess Generator v1 released! 👸
— multimodal ai art (@multimodalart) May 14, 2022
In the last few days I've worked with Dango233 to release his newest work: a very specially flavored CLIP Guided Latent Diffusion approach
Results are superb - try it out on the Colab (soon on MindsEye) https://t.co/xg17IMxJcI pic.twitter.com/16DXLlXsmj
Been working on a new tool to facilitate quickly adding support for new CLIP perceptors to AI art colabs. The tool is modality agnostic (i.e. I'll be adding models for other modalities soon) and can "mock" the OpenAI CLIP API for "drop-in" support! https://t.co/bHwLQREiXc 1/2
— David Marx (@DigThatData) May 10, 2022
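The "drop-in" idea above is that existing notebooks keep calling the familiar OpenAI CLIP interface, `load(name)` returning a `(model, preprocess)` pair, while a registry swaps in whichever perceptor backs that name. A minimal sketch of that pattern follows; the names (`register`, `DummyPerceptor`) are illustrative placeholders, not Multi-Modal-Comparators' actual API:

```python
# Hypothetical sketch of "mocking" the OpenAI CLIP API: a registry maps a
# CLIP-style model name to a factory, and load() mimics clip.load()'s
# (model, preprocess) return shape so notebook code runs unchanged.
from typing import Callable, Dict, Tuple

_REGISTRY: Dict[str, Callable[[], Tuple[object, Callable]]] = {}

def register(name: str):
    """Decorator: register a perceptor factory under a CLIP-style name."""
    def wrap(factory):
        _REGISTRY[name] = factory
        return factory
    return wrap

def load(name: str):
    """Drop-in stand-in for clip.load(): returns (model, preprocess)."""
    if name not in _REGISTRY:
        raise KeyError(f"unknown perceptor: {name}")
    return _REGISTRY[name]()

class DummyPerceptor:
    """Toy model exposing the two methods guided-diffusion loops call."""
    def encode_text(self, prompts):
        # Toy "embedding": one number per prompt (its length).
        return [float(len(p)) for p in prompts]

    def encode_image(self, pixels):
        # Toy "embedding": sum of pixel values.
        return [float(sum(pixels))]

@register("ViT-B/32")
def _vit_b32():
    # Identity "preprocess" stands in for the real image transform.
    return DummyPerceptor(), lambda img: img

model, preprocess = load("ViT-B/32")
print(model.encode_text(["a cat"]))  # → [5.0]
```

Because the registry keys on names, adding support for a new perceptor (or a new modality entirely) means registering one more factory, with no changes to downstream notebook code.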
AIArt is a free and open-source AI art course by John Whitaker. Synchronous classes run on Twitch for the next few Saturdays at 4 PM UTC. All previous classes are recorded and available as Google Colabs via the GitHub link.