Google has paused its Gemini AI tool’s ability to generate images of people after the tool produced historically inaccurate pictures. The misstep is a setback for Google as it races to keep pace with rivals such as OpenAI and Microsoft.
Just days earlier, Google had begun letting users create images with its Gemini AI models. But users on social media soon noticed that the tool’s historical depictions were often wrong, prompting Google to pause the feature for now.
We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon. https://t.co/SLxYPGoqOZ
— Google Communications (@Google_Comms) February 22, 2024
Google Gemini AI Tool Facing Accuracy Issues with Historical Images
Google admitted on Wednesday that its Gemini AI tool has been producing inaccurate historical images. This admission comes as Google strives to keep pace with competitors like OpenAI and Microsoft, especially since the launch of OpenAI’s ChatGPT in November 2022.
About a year ago, Google introduced its generative AI chatbot, Bard. The company soon faced criticism after a promotional video showed Bard sharing inaccurate information about pictures of a planet outside Earth’s solar system, an error that sent Google’s shares down by as much as 9%.
Recently, Google rebranded Bard as Gemini and unveiled paid subscription plans to enhance users’ access to better reasoning capabilities from the AI model. Jack Krawczyk, Senior Director of Product for Gemini at Google, acknowledged the challenges, stating, “Historical contexts have more nuance to them, and we will further tune to accommodate that.” This highlights Google’s commitment to addressing the accuracy issues in historical image generation.
Google Gemini AI Image Generation Sparks Controversy
Google’s new AI image generation tool on Gemini has lately drawn criticism from users on X (formerly Twitter). The tool, which creates pictures from text prompts, has been accused of going too far in promoting ‘wokeness’ by generating images of people of various ethnicities even when doing so contradicts historical accuracy.
“Can you generate images of the Founding Fathers?” It’s a difficult question for Gemini, Google’s DEI-powered AI tool.
Ironically, asking for more historically accurate images made the results even more historically inaccurate. pic.twitter.com/LtbuIWsHSU
— Mike Wacker (@m_wacker) February 21, 2024
These issues have triggered a heated debate on X, with right-wing users accusing Google of perpetuating racism against white people. For example, when a user asked Gemini to depict America’s founding fathers, the AI included women and people of colour, likely in an effort to improve diversity representation. This “inclusive” approach, however, produces inaccuracies and discomfort in cases where the historical record is settled.
Past Mishaps and Present Challenges
According to a BBC report, years ago, Google had to issue an apology after its photo app infamously labelled a picture of a Black couple as “gorillas.” Similarly, more recently, OpenAI’s Dall-E image generator consistently depicted CEOs and other authority figures as white males, even when users didn’t specify gender or race.
On the flip side, injecting random diversity into every image as a way to overcorrect bias could perpetuate harmful stereotypes and create technical problems of its own. The current bug in Gemini highlights the difficulty of relying on crude inclusion filters.
Concerns About Gemini AI’s Diversity Approach
Many users have noticed a trend with Google’s Gemini AI when they ask it to create photos of people from specific countries like America, England, or Australia. Gemini tends to show pictures of people from diverse races, including Asians and Blacks, but not whites. This bias becomes even more obvious when users request photos of historical figures who were mostly white, such as Vikings or American revolutionary soldiers. One Twitter user expressed frustration, saying, “It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist.”
It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist pic.twitter.com/4lkhD7p5nR
— Deedy (@debarghya_das) February 20, 2024
Others criticise Gemini for lacking an understanding of cultural and historical context. They argue that the AI appears to prioritise diversity by recasting white people, especially men, as other ethnicities or genders.
While other AI platforms such as Midjourney and Microsoft’s Bing image generator have faced criticism for producing inappropriate or racially charged content, Gemini users are concerned that Google’s AI is ignoring one race altogether in its quest for diversity. Some even suggest that Gemini’s image generation is shaped by a “woke” ideology that prioritises diversity over accuracy.