We have had no chance to publish these past weeks; managing a particular part of our intellectual property portfolio has consumed most of our time and pushed some announcements we would like to make into the medium term. We have also been awarded several grants for further development of that intellectual property; this is good news, but it delays the exploratory publishing work of this blog.
This post has nothing to do with any of that. During our design thinking creative workshops, we wanted to explore dream-like image generation with models such as OpenAI's CLIP (repository here, related article here). The main idea behind this procedure is an image-to-text and text-to-image relationship: the computer can generate a textual description from an image, or an image from a textual description. This is used in apps, made famous by social media users, such as https://www.wombo.art/. For example, if we input the seed word "Ostirion" into Wombo, this is the image we obtain, in a baroque style:
This is guided by human intent: we give the model the text and let it generate a single image. Dreams, human dreams, seem to be an amalgamation and processing of all the images we have seen during the day and all the "text" interpretations our brains assign to them, looping through a complex iterative cycle that generates the dream. To simulate this loop, we can now feed our Ostirion dream image into an artificial intelligence captioning system that performs the inverse task: assigning text to the picture. In this case, using the web tool at https://milhidaka.github.io/chainer-image-caption/, we obtain this automatic caption set:
The computer interprets this as "a cat sitting on top of a laptop computer". So let us close the loop and input this text back into Wombo to obtain this new "dream" image:
And this image back into the captioning system:
This yields the caption "a close up of a clock on a wall", which we feed into Wombo again for this image:
And we could keep the machine dreaming, jumping from concept to concept, simulating a human dream sequence. This is how the machine's imagining diverges; this time we have driven the loop manually. In a subsequent publication, we will automate this process and let the machine dream for multiple cycles. Is this creative thinking from the machine?
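The manual loop above alternates between a text-to-image model and an image-captioning model. As a rough sketch of what the automated version might look like, the snippet below wires two hypothetical stand-in functions together; `text_to_image` and `image_to_caption` are placeholders we invented for illustration, not the actual Wombo or chainer-image-caption APIs, and the simulated captions simply replay the two results shown in this post.

```python
# Minimal sketch of the automated "dream loop": text -> image -> text -> ...
# The model calls are hypothetical stubs. In a real implementation,
# text_to_image would call a CLIP-guided generator (such as the one
# behind Wombo) and image_to_caption would call a captioning model.

def text_to_image(prompt: str) -> str:
    """Stand-in for a text-to-image model; returns an image identifier."""
    return f"image({prompt})"

def image_to_caption(image: str) -> str:
    """Stand-in for an image-captioning model; returns a description."""
    # Simulated captions replaying the two steps shown in this post.
    simulated = {
        "image(Ostirion)": "a cat sitting on top of a laptop computer",
        "image(a cat sitting on top of a laptop computer)":
            "a close up of a clock on a wall",
    }
    return simulated.get(image, "an abstract dream image")

def dream_loop(seed: str, cycles: int) -> list:
    """Alternate text -> image -> caption for a number of cycles,
    returning the sequence of captions the machine 'dreams' through."""
    captions = [seed]
    for _ in range(cycles):
        image = text_to_image(captions[-1])
        captions.append(image_to_caption(image))
    return captions

print(dream_loop("Ostirion", 3))
```

With real models plugged into the two stubs, each cycle drifts to a new concept, which is exactly the divergence we observed by hand.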
Do not hesitate to contact us if you require quantitative model development, deployment, verification, or validation. We will also be glad to help you with your machine learning or artificial intelligence challenges applied to asset management, automation, or intelligence gathering from satellite, drone, or fixed-point imagery, or even dreams.