* Code for CLIP-GEN released
* DallE-Mega early-training checkpoint
* StyleGAN XL large 1024px model
* StyleGAN Human + CLIP Colab
* OpenCLIP ViT-B/16 trained on LAION-400M
* Flamingo Visual Language Model code released
Whoa, DallE Mega is already insane at early training
You can try an early version of the checkpoint (~15% training) at its Hugging Face Spaces here 🤗 https://t.co/6OB1y7ng0n
(DallE Mega is @borisdayma DallE 1 replication effort but it is looking more like a DallE 1.8!) https://t.co/kNoc6VWQ5g pic.twitter.com/fHPvCMC9ma
— multimodal ai art (@multimodalart) April 27, 2022
`a calm landscape by Studio Ghibli | lofi hip hop radio` pic.twitter.com/BzV6KK33qN
— multimodal ai art (@multimodalart) April 26, 2022
"A crypto bro"
— Diego Porres (@PDillis) April 23, 2022
StyleGAN-Human + CLIP. Make of it as you will. pic.twitter.com/l9nfhU77uT
OpenCLIP released a new pre-trained ViT-B/16 model on the LAION-400M dataset! 🥳
Link: https://t.co/cMVef53Gl5
"A mecha robot in a favela by James Gurney"
Disco Diffusion 5.1 - OpenCLIP ViT-B/16 LAION-400M pic.twitter.com/CQHYtfKHP3
— multimodal ai art (@multimodalart) April 25, 2022
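If you want to try the new weights outside Disco Diffusion, here is a minimal sketch of loading them with the open_clip library and scoring an image against a prompt. The pretrained tag (`laion400m_e32`), the image filename, and the prompt are illustrative assumptions, so check the OpenCLIP repo for the exact name of the released checkpoint.

```python
# Minimal sketch: load the OpenCLIP ViT-B/16 LAION-400M weights and compute
# image-text similarity. The "laion400m_e32" tag is an assumption — check the
# open_clip repo for the exact name of the released checkpoint.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="laion400m_e32"
)
model.eval()

image = preprocess(Image.open("mecha_robot.png")).unsqueeze(0)  # any local image
text = open_clip.tokenize(["a mecha robot in a favela by James Gurney"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings, then take the cosine similarity
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    similarity = (image_features @ text_features.T).item()

print(f"CLIP similarity: {similarity:.3f}")
```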
This use-case is really cool pic.twitter.com/wIfmQhBMaS
— multimodal ai art (@multimodalart) April 28, 2022
AIArt is a free and open-source AI art course by John Whitaker. There are synchronous classes on Twitch over the next few Saturdays at 4 PM UTC. All previous classes are recorded and available as Google Colabs at the GitHub link. This Saturday (April 30th) will be the Diffusion class!