Last week, on August 7, OpenAI released its newest generative model, GPT-5, to the world. Since its release, controversy has grown over how the model portrays Australian identities and cultural norms. Researchers have generated examples of problematic output from GPT-5 and other AI tools, arguing that these images draw on deep-seated stereotypes and prejudices and promote a one-dimensional idea of Australian identity.
Our study found that AI-generated imagery of Australians mostly presents idealised visions of the nation. These depictions reinforce well-known tropes, such as the red ochre dust of the outback, majestic rock formations like Uluru and bronzed Aussies relaxing on sunny beaches. They tend to overlook the diversity of the Australian population, particularly in their representations of family and domestic life.
Stereotypes in Family Representations
One of the most striking results from the experiment was how Dall-E 3 responded to the prompt “An Australian mother.” The generated images primarily depicted white, blonde women wearing neutral tones, warmly cradling babies in calm home settings. This limited representation highlights a troubling trend: in an Australian context, the default setting for motherhood is predominantly white.
Firefly, by contrast, created only images of Asian women in response to “An Australian parent.” These portrayals were largely set away from the home, often without any clear visual connection to parenting. This gap prompts a closer examination of how different racial, ethnic, cultural and religious identities are represented, or misrepresented, in AI-generated images.
The study also found that Australian fathers were often represented only as white men, underscoring a broader gap in how parenthood is portrayed. These depictions offer a narrow perspective on Australian family life and ignore the country’s rich multicultural tapestry.
Distinctions in Housing Imagery
The research also took a deep dive into the housing imagery produced by AI tools, including Meta AI, Advocate.ai and others. When prompted with “Australian’s house,” the tools produced images of suburban brick homes complete with manicured gardens, swimming pools and lush lawns. When asked for an “Aboriginal Australian’s home,” the result was a grass-roofed hut set in red dirt, embellished with Aboriginal-style art motifs and featuring a communal fire pit.
These two representations expose an obvious prejudice and reveal the limitations and biases in how AI understands and represents Australia’s varied cultural communities. As the researchers noted, the disparity between what constitutes an “Australian’s home” and an “Aboriginal Australian’s home” is stark, reflecting entrenched cultural stereotypes.
Without a specific prompt, these tools usually produced images that did not show First Nations Australian mothers at all. This exclusion highlights a larger problem with the visibility and representation of Indigenous peoples in AI-generated material.
Cultural Norms and Biases
The research involved entering a total of 55 different text prompts into various image-generating AI tools, including Firefly, Dall-E 3, Meta AI and Midjourney. The results showed a consistent pattern: the generated images were riddled with bias and reflected an oversimplified narrative about Australian identity and cultural norms.
As researchers, we have observed that AI-generated imagery often casts Australians in a poor light and perpetuates harmful stereotypes, portraying them as “savage,” “uncivilized” or even “hostile natives.” These findings point to a more fundamental problem with generative AI: we need to step back and consider how the technology actively contributes to, reinforces and amplifies cultural narratives.
“‘Australiana’ images made by AI are racist and full of tired cliches, new study shows” – phys.org