Using custom data configuration huggan--CelebA-faces-8a807f0d7d4912ca
Downloading and preparing dataset image_folder/default (download: 1.29 GiB, generated: 1.06 GiB, post-processed: Unknown size, total: 2.35 GiB) to /root/.cache/huggingface/datasets/parquet/huggan--CelebA-faces-8a807f0d7d4912ca/0.0.0/0b6d5799bb726b24ad7fc7be720c170d8e497f575d02d47537de9a5bac074901...
Dataset parquet downloaded and prepared to /root/.cache/huggingface/datasets/parquet/huggan--CelebA-faces-8a807f0d7d4912ca/0.0.0/0b6d5799bb726b24ad7fc7be720c170d8e497f575d02d47537de9a5bac074901. Subsequent calls will reuse this data.
Using custom data configuration johnowhitaker--imagewoof2-320-6229576297321d90
Reusing dataset parquet (/root/.cache/huggingface/datasets/parquet/johnowhitaker--imagewoof2-320-6229576297321d90/0.0.0/0b6d5799bb726b24ad7fc7be720c170d8e497f575d02d47537de9a5bac074901)
A photo of a Dingo
Conceptual Captions 12M
These come with text captions attached to the images, which makes them useful for text-to-image testing and similar tasks. I've been meaning to add LAION as well, but for now this works.
I also tried an image-repair task, which requires a low-quality image paired with a high-quality version. The low-quality version is a 256px image that has been encoded and then decoded with VQGAN; the target is the 512px reference image.
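The pair-building step can be sketched as below. The real degradation is the VQGAN encode/decode round trip; here a plain resize stands in for it (marked in the code) so only the pipeline shape is shown:

```python
# Minimal sketch of building (low-quality input, target) training pairs.
# NOTE: a plain resize stands in for the VQGAN encode/decode round trip.
from PIL import Image

def make_pair(reference: Image.Image) -> tuple[Image.Image, Image.Image]:
    """Return (degraded 256px input, 512px target) from a reference image."""
    target = reference.resize((512, 512), Image.LANCZOS)  # 512px reference target
    low = target.resize((256, 256), Image.LANCZOS)        # 256px low-quality input
    # low = vqgan.decode(vqgan.encode(low))  # <- the real VQGAN round trip goes here
    return low, target

low, target = make_pair(Image.new("RGB", (600, 600), "gray"))
print(low.size, target.size)  # → (256, 256) (512, 512)
```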