Over the past few years, I've made extensive use of ChatGPT and found it incredibly helpful for ideation, often providing insights and details I might not have considered. It's been a remarkable tool, and I eagerly anticipate its ongoing enhancements.
In addition, I've explored various Generative AI image tools, including MidJourney, Vizcom, Dall-E, and Adobe Firefly. The generative capabilities of these tools, especially Adobe's Firefly with its innovative fill features, have captivated my interest.
While assembling this portfolio, I noticed the absence of thumbnail images for my collection of Medium articles. This led me to employ Dall-E, where I generated images through specific prompts for each article.
The outcomes were impressively relevant and usable right from the first batch. However, Dall-E did produce some curious errors, notably in human anatomy representation. An amusing example is the image for the Social Interactions Use Case entry, where the grandmother appears to have three arms—two on her right and one on her left, as shown below.
On the "Non-Verbal Interactions" article , the wearer in the image appear to have two left hands, and the appearance of the hands looks different to me.
On the "Non-Verbal Interactions" article , the wearer in the image appear to have two left hands, and the appearance of the hands looks different to me.
I decided to retain these images as they were, not only because they portray the prompts richly and fittingly, but also because the errors themselves are fascinating. They serve as a reminder of the importance of human oversight when using Generative AI tools.