* CLIP-GEN models released
* DALLE-Mega mid-training checkpoint
* Centipede Diffusion Inpainting Upgrade
* DALL-E Flow released
* OpenCLIP ViT-B/16+ trained on LAION-400M
* Clip-Forge code released
CLIP-GEN model is out!
— multimodal ai art (@multimodalart) May 5, 2022
The model is small, so it doesn't compete with state-of-the-art models in either speed or quality, but it's still an interesting approach to keep an 👀 on
GitHub: https://t.co/ZoGaoMHgbr
Quick Colab I've assembled for you to try out: https://t.co/PWHluhY3Si pic.twitter.com/7YLuJhBWUO
Updated sample predictions: https://t.co/F089SJ5hrj pic.twitter.com/ABo2bqtKsX
— Boris Dayma 🥑 (@borisdayma) May 6, 2022
Centipede Diffusion V2 is out! Now with inpainting for Latent Diffusion and Disco Diffusion, and some other improvements.
— Zalring (@ZalringTW) May 4, 2022
Link : https://t.co/nugZLSx0Kr pic.twitter.com/c0txjnu601
adding some of my creations. prompt = 'A scientist comparing apples and oranges, by Norman Rockwell.' pic.twitter.com/tSTsJXgs9J
— Han Xiao (@hxiao) May 7, 2022
OpenCLIP concluded an amazing cycle of LAION-400M-trained CLIP model releases with a beefed-up version of CLIP ViT-B/16. Thanks for all this work, @wightmanr and team!
— multimodal ai art (@multimodalart) May 7, 2022
"The game Among Us in a favela" LAION 400M CLIP ViT-B 16+ Guided Diffusion https://t.co/pIzN8v7joE pic.twitter.com/eatyf3E6el
Excited to share Clip-Forge (CVPR 2022) with everyone. This is a very simple but effective method which tackles the problem of generating shapes from text when paired text and shape data are not available. https://t.co/TF35GPb9pN pic.twitter.com/YZ5tIw0Bm2
— Aditya Sanghi (@sanghiad) May 2, 2022
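The trick that lets Clip-Forge skip paired text-shape data is CLIP's shared text/image embedding space: the shape generator is trained conditioned on CLIP image embeddings of rendered shapes, and at inference a CLIP text embedding is swapped in. The sketch below only illustrates that inference path; `LatentFlow` and `ShapeDecoder` are hypothetical stand-in modules, not the authors' code, which lives in the linked repository.

```python
# Hypothetical sketch of the Clip-Forge inference path: a CLIP text embedding
# conditions a flow over shape latents, and a decoder turns the sampled latent
# into a 3D shape. LatentFlow and ShapeDecoder are placeholder stand-ins.
import torch
import torch.nn as nn
import clip  # OpenAI CLIP; any CLIP implementation with encode_text works

class LatentFlow(nn.Module):
    """Stand-in for the conditional normalizing flow over shape latents."""
    def __init__(self, latent_dim=128, cond_dim=512):
        super().__init__()
        self.net = nn.Linear(latent_dim + cond_dim, latent_dim)

    def sample(self, noise, condition):
        # Real flow: invert the learned transform conditioned on the embedding.
        return self.net(torch.cat([noise, condition], dim=-1))

class ShapeDecoder(nn.Module):
    """Stand-in for the shape autoencoder's decoder (e.g. an occupancy grid)."""
    def __init__(self, latent_dim=128, voxel_res=32):
        super().__init__()
        self.net = nn.Linear(latent_dim, voxel_res ** 3)
        self.voxel_res = voxel_res

    def forward(self, z):
        v = self.net(z)
        return v.view(-1, self.voxel_res, self.voxel_res, self.voxel_res)

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

flow = LatentFlow().to(device)
decoder = ShapeDecoder().to(device)

prompt = clip.tokenize(["a round chair"]).to(device)
with torch.no_grad():
    # Training conditions the flow on CLIP *image* embeddings of renderings;
    # the shared embedding space lets inference condition on text instead.
    cond = clip_model.encode_text(prompt).float()
    z = flow.sample(torch.randn(1, 128, device=device), cond)
    voxels = decoder(z)

print(voxels.shape)  # (1, 32, 32, 32) voxel grid in this toy sketch
```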
AIArt is a free and open-source AI art course by John Whitaker. There are synchronous classes on Twitch for the next few Saturdays at 4 PM UTC. All previous classes are recorded and available as Google Colab notebooks at the GitHub link.