John Berger's revolutionary framework revealed a fundamental truth: how we see is profoundly affected by what we know and believe. Images don't merely reflect reality—they actively construct our understanding of it. This insight becomes critical when communicating about emerging technologies like AI, where visual representation can either illuminate or obscure the truth.
Before a single word is read, an image has already begun shaping perception, setting expectations, and framing the entire conversation that follows.
Ways of Seeing is also an interesting study in presenting the same message across different media: essays, a book, and a TV series. It is also, admittedly, extremely reductive.
Berger's Three Levels: How Images Actually Work
Level 1: Denotation
What is literally, objectively shown in the image: the surface-level description that anyone could agree on.
Level 2: Connotation
What the image suggests or implies beyond its literal content: the associations, emotions, and meanings it evokes.
Level 3: Ideology
The worldview, values, and belief systems that the image subtly reinforces or challenges at the deepest level.
Applying This to AI Imagery
Consider a typical humanoid robot image used to illustrate an AI article:
Denotation: A mechanical device with human-like form
Connotation: Intelligence that resembles human thinking
Ideology: AI's purpose is creating artificial humans
All three levels work simultaneously, with the ideological level often operating beneath conscious awareness. This is why image choices are never neutral—they're always making arguments about what AI is, what it should be, and how we should feel about it.
A single poorly chosen image can completely undo hours of careful, accurate written explanation. The visual often sticks while the words fade, leaving audiences with precisely the misconception you worked so hard to prevent.
This isn't a minor consideration—it's the fundamental challenge of science communication in a visual age. The image choice matters more than almost any other single decision you'll make when presenting your work to non-specialist audiences.
First Flags: Visual Signals That Mark Entry
What Are First Flags?
First flags are the visual signals that mark our entry into any conversation or topic. They're the images, colors, and design elements that our brains process in milliseconds—long before conscious thought engages with the content.
Even before reading a single word, your brain has already decided:
Is this information serious or trivial?
Should I feel scared or excited?
Is this relevant to my life and work?
Can I trust this source?
These snap judgments occur in less than one second, yet they shape the entire framework through which subsequent information will be processed and understood.
Why This Matters for AI
Shape Understanding
Images determine what people think AI fundamentally is before they engage with technical details or nuanced explanations.
Create Shortcuts
Visual representations establish cognitive shortcuts that bypass critical thinking and create instant—often incorrect—assumptions.
Amplify Anthropomorphization
The wrong imagery dramatically increases the tendency to attribute human-like qualities, consciousness, and intentionality to AI systems.
When people encounter robot images before learning about your research, you're already fighting an uphill battle against deeply embedded misconceptions. The visual first impression creates a framework that your words must then work to either reinforce or undo—and undoing is exponentially harder than building correctly from the start.
The Infrastructure of Visual Misconception
Understanding the Systemic Problem
The prevalence of misleading AI imagery isn't primarily about individual bad choices—it's about a massive industrial infrastructure that produces, catalogs, distributes, and incentivizes anthropomorphic AI representation at scale.
Stock Photography Production
Photographers create thousands of robot images because they sell reliably, creating economic incentive for more of the same.
Tagging and Search Systems
Images get tagged with terms like "artificial intelligence" and "thinking robot," reinforcing associations in search algorithms.
Editorial Workflows
Designers and editors working under tight deadlines select from available options, following the path of least resistance.
Normalization and Reproduction
Each use reinforces the pattern, making robot imagery seem like the "normal" way to illustrate AI, perpetuating the cycle.
Breaking this cycle requires more than individual awareness: it requires alternative infrastructure, built through initiatives like Better Images of AI and through researchers documenting their own work and speaking up when visual choices undermine accuracy.
Journalists, designers, and communicators don't create these problematic images from scratch—they select them from massive stock photography libraries like Getty Images, Shutterstock, and Adobe Stock.
Search for "artificial intelligence" in any of these databases and you'll find thousands of images, the vast majority depicting humanoid robots, glowing brains with circuits, or abstract digital faces.
These images come pre-tagged with terms like:
"Robot doing serious thinking"
"AI concept with binary code"
"Intelligence overlays"
"Future technology artificial brain"
The Systemic Nature of the Problem
This is an entire visual supply chain reinforcing misconceptions at industrial scale. Editors aren't choosing robot images maliciously; they're choosing what's available, what's been normalized, and what the tagging systems suggest. Alternative imagery therefore has to win against powerful economic and infrastructural forces that have made misleading AI imagery the path of least resistance.
Trap #1: The Gynoid Problem
Female-Presenting Humanoid Robots in AI Coverage
Gender Stereotyping
Why are AI systems so often visualized as female-presenting? The pattern reflects and reinforces problematic associations between femininity, service roles, and artificial assistants.
Anthropomorphization
Gynoid imagery doubles down on the misconception that AI systems have or aspire to human-like consciousness, personality, and agency.
Objectification
These images often present AI through an objectifying lens that would be immediately recognizable as problematic if applied to actual human women.
The compounding result: audiences arrive at your technical presentation already assuming that "AI thinks like humans," that AI systems have personalities and possibly consciousness, and that these systems are somehow inherently gendered. You then must spend precious time undoing these misconceptions before you can even begin explaining what your research actually does.
Several student responses mentioned frustration with the "AI as all-knowing intelligence" assumption. These visuals are where that assumption gets planted and reinforced, often hundreds of times before someone encounters your more accurate explanation.
Trap #2: The Graduation Robot
Coursera Machine Learning Course Icon Example
For years, Andrew Ng's influential machine learning course used a cheerful robot wearing a mortarboard as its icon.
The Friendly Education Bot
Even in efforts to democratize AI education and make machine learning accessible to broader audiences, we reflexively anthropomorphize. The course icon sends a clear but misleading message about what students are learning.
The message sent: "AI means building robot personalities with human-like learning abilities."
The reality: "Machine learning involves mathematical optimization, statistical inference, and algorithmic pattern recognition."
This creates a fundamental cognitive dissonance: students work through the mathematics of gradient descent, backpropagation, and loss functions, while their mental model remains centered on "building intelligent robots." The mathematical reality and the visual metaphor point in entirely different directions.
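To make the dissonance concrete, here is a minimal sketch, in plain NumPy, of what "learning" in such a course actually consists of. The data and hyperparameters are invented for illustration.

```python
# "Learning" here is numerical optimization, not a robot acquiring
# knowledge: gradient descent fits a line y = w*x + b to noisy data
# by repeatedly nudging w and b to reduce a mean-squared-error loss.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 100)   # noisy line, true w=2.0, b=0.5

w, b = 0.0, 0.0
lr = 0.1                                      # learning rate
for step in range(500):
    error = (w * x + b) - y
    loss = np.mean(error ** 2)                # the "loss function"
    grad_w = 2 * np.mean(error * x)           # partial derivative w.r.t. w
    grad_b = 2 * np.mean(error)               # partial derivative w.r.t. b
    w -= lr * grad_w                          # the "descent" step
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, loss={loss:.4f}")
```

Nothing in that loop resembles a robot in a mortarboard, and that gap is exactly the dissonance the icon creates.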
The problem isn't that the icon is cute or friendly—those are valuable qualities for making technical education approachable. The problem is that it reinforces precisely the anthropomorphic misconceptions that make AI harder to understand accurately.
Trap #3: Visual Metaphors and the Saddle Point Problem
The Metaphor
Accessible and immediately graspable, but risks trivializing the mathematical concept.
The Reality
Accurate representation of the mathematical structure, but potentially intimidating or opaque.
The Middle Ground?
Annotated visualizations that bridge accuracy and accessibility.
The Tension in Technical Communication
This example illustrates the fundamental challenge many researchers face: explaining complex mathematical concepts without either overwhelming your audience with technical detail or oversimplifying to the point of inaccuracy.
Visual metaphors are powerful communication tools, but they're also dangerous. A horse saddle is an evocative, memorable image that helps people grasp the basic shape of a saddle point in optimization. But it also strips away the precision, the multi-dimensionality, and the mathematical rigor that make the concept meaningful in context.
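For concreteness, the textbook example of a saddle point (an illustrative choice, not necessarily the surface the original comparison used) is f(x, y) = x^2 - y^2:

```latex
f(x, y) = x^{2} - y^{2}, \qquad
\nabla f = \begin{pmatrix} 2x \\ -2y \end{pmatrix} = \mathbf{0}
\;\;\text{at } (0, 0), \qquad
H = \begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix}.
```

The gradient vanishes at the origin, yet the Hessian's eigenvalues have mixed signs: the surface curves upward along x and downward along y, which is exactly the horse-saddle shape. What the metaphor drops is dimensionality: in real loss landscapes these points occur in spaces with thousands of axes, where mixed curvature is the rule rather than the exception.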
Trap #4: The Michelangelo Hands or the Circuit-Brain Merger
The Most Pernicious Visual Trap
Suggests AI Works Like Human Brains
Neural networks are inspired by biological neurons but operate on completely different principles. This imagery conflates biological and artificial systems.
Implies Human-AI Merger Goals
The visual suggests that AI development aims to merge human and machine intelligence, which is not the goal of most AI research.
Treats Intelligence as Digitizable Substance
The imagery implies intelligence is a transferable essence that can flow between brains and computers, fundamentally misrepresenting cognition.
These images appear in scientific contexts, educational materials, and serious journalism, lending them an authority that makes the underlying misconceptions even stickier and harder to dislodge.
The Compounding Problem
1
Problematic Image
Stock photography of humanoid robots, circuit brains, or anthropomorphized AI
2
Wrong Mental Model
Audience develops fundamentally incorrect understanding of what AI is and does
3
Your Communication
You present accurate information about your research
4
Undo Misconceptions
Must spend time correcting false beliefs before building true understanding
5
Harder Work
Communication becomes exponentially more difficult and less effective
The Critical Insight
You're not just competing with a lack of knowledge—you're competing with actively incorrect visual "knowledge" that has been reinforced hundreds or thousands of times across media, education, and popular culture.
Every problematic AI image in circulation makes your job as a researcher and communicator harder. This isn't abstract or theoretical—it directly impacts your ability to explain your work, secure funding, recruit collaborators, and influence policy. The stakes of visual communication couldn't be higher.
Better Images of AI: A Practical Solution
The Initiative
Better Images of AI is a collaborative project launched by a coalition of AI researchers, communicators, and visual experts who recognized the urgent need for alternatives to standard AI stock photography.
Mission: Create and share authentic visual representations of artificial intelligence research and applications that don't rely on anthropomorphic robots, circuit brains, or other misleading imagery.
The initiative provides free, high-quality photographs and illustrations showing what AI research actually looks like: data centers, researchers at work, visualization interfaces, diverse teams, real-world applications, and the infrastructure that makes modern AI possible.
This is a practical resource you can use immediately for Day 2 presentations and future communication work. The collection grows continuously as researchers contribute images from their own labs and projects, creating a visual alternative to the stock photography trap.
Four Principles for Better AI Images
Show the Reality
Depict the actual infrastructure of AI: data centers, computational resources, researchers at work, code on screens, and the physical reality of where AI happens.
Show the Diversity
Represent the full range of people who build and are affected by AI systems, diverse applications across domains, and varied approaches to research and development.
Show the Context
Make visible where training data originates, who performs the labor of AI development, the social and environmental costs, and the systems in which AI operates.
Show the Limitations
Don't hide failures, edge cases, uncertainty, error rates, or the boundaries of what current AI systems can and cannot do.
These principles aren't about making AI seem "less exciting" or "more boring"—they're about making it accurate. And accuracy can be visually compelling! Real research, real data, real people, and real applications are inherently interesting when photographed and presented well.
The false choice between "engaging imagery" and "accurate representation" is itself part of the problem. Better images prove you can have both.
Visual Substitutions: Practical Alternatives
Make this personal and practical. Think about your own research area and what authentic images would look like. What does your screen show when you're debugging code? What does your data look like? What does your team look like when you're discussing results?
Those real moments are more visually interesting and communicatively effective than any stock robot photograph—and they have the crucial advantage of being true.
Five Questions Before Choosing an Image
1
Does this show what my system actually does?
If your AI analyzes text, is the image showing text analysis? If it predicts outcomes, is prediction visible? Match image to actual function.
2
Could this reinforce anthropomorphic thinking?
Does the image suggest human-like consciousness, intention, or understanding? If so, can you find an alternative that conveys your point without that implication?
3
Does this show the people involved?
AI doesn't create itself. Are researchers, engineers, data annotators, domain experts, or affected communities visible in your visual representation?
4
If I'm using a metaphor, is it actually helping?
Metaphors can clarify or obscure. Is this visual metaphor making your concept more understandable, or just substituting one confusion for another?
5
Would this pass Berger's test?
Consider all three levels: What does it denote? What does it connote? What ideology does it reinforce? Are you comfortable with all three?
Print this checklist. Keep it visible when you're selecting images for presentations, papers, websites, or any public communication. Making these questions automatic will dramatically improve your visual communication over time.
Visual Responsibility in AI Research
"It's no longer acceptable for the field to think of itself as just engineers."
— Kate Crawford, Royal Society Lecture
Part of That Responsibility: Visual Literacy
AI researchers have responsibilities that extend far beyond technical accuracy and algorithmic performance. As Kate Crawford has compellingly argued, the field must grapple with social, ethical, political, and communicative dimensions of the technology we create.
Visual literacy—understanding how images shape perception and being intentional about visual choices—is a crucial component of that broader responsibility. You can't claim to care about AI's social impact while remaining indifferent to the images that shape how millions of people understand what AI is.
This connects directly to broader themes throughout the RelAI Communications Course: researchers aren't isolated technical specialists but participants in shaping technology's role in society. Visual communication is where technical knowledge meets public understanding, and getting it right matters.
You Are Gatekeepers
Why This Is Your Responsibility
You might reasonably object: "I'm not a graphic designer. I'm a researcher. Why is visual communication my job?" Here's why this responsibility falls to you specifically:
Gatekeepers of Accuracy
You possess specialized knowledge that others lack. You know what AI actually is, how it actually works, and what its limitations actually are. That knowledge creates responsibility.
First Responders
You see the reality-perception gap firsthand—in funding meetings, public talks, media interviews, and classroom discussions. You witness the consequences of visual misconceptions daily.
Model Communicators
Journalists, policymakers, educators, and the public watch how researchers communicate about AI. Your visual choices set norms and create precedents that others follow.
You don't need to become a professional designer. But you do need to develop visual literacy sufficient to recognize when images help or hinder accurate understanding. You need to exercise the authority you have to request better images, to create documentation of your actual work, and to speak up when visual choices undermine technical accuracy.
You have unique authority to fix this problem. Use it.
Images Are First Flags: Choose Wisely
What You Plant Matters
The First Thing
First flags are the very first elements your audience encounters—processed in milliseconds before conscious thought engages.
The Stickiest Thing
Visual memories outlast verbal memories by a factor of three or more, making images the most durable part of your communication.
The Hardest Thing
Correcting a visual misconception is exponentially harder than preventing it in the first place through careful initial choice.
When you choose an image, you're making an argument about what AI is.
That choice isn't neutral, decorative, or trivial. It's a substantive claim about the nature of artificial intelligence, one that will reach and influence far more people than your technical papers ever will.
Make that argument carefully. Make it truthfully. Make it with full awareness of the power that first flags wield over human understanding.
The images you choose today will shape how people think about AI tomorrow. That's not hyperbole—it's the documented reality of how visual communication and human cognition interact.
Resources and Next Steps
Essential Resources
Better Images of AI: betterimagesofai.org — Free, high-quality alternatives to stock AI imagery
John Berger's Ways of Seeing: Available free on YouTube, essential viewing for understanding visual communication
Your Own Research Visuals: The best images are the ones you create documenting your actual work
For Day 2 Presentations
Choose visuals that authentically support your chosen medium and message
Prioritize authentic representations over convenient stock photography
Show yourselves and your teams at work—make the human side of AI visible
The Bigger Picture
The public conversation about artificial intelligence is full of hyperbole, misconception, and fear-mongering. Part of that problem is linguistic—the metaphors and framings we use. But a substantial part is visual—the images that set expectations before words are even read.
You can't control the entire conversation. But you can control your contribution to it. You can refuse to use misleading imagery. You can document authentic research. You can speak up when visual choices undermine accuracy.
That's not a small thing. Collectively, researchers making better visual choices can shift the entire landscape of how AI is understood and discussed.
Control what you can.
Key Takeaways: Images Shape AI Understanding
Visual Processing Speed
Humans process images far faster than they read text, making visual first impressions incredibly powerful and sticky in memory.
Anthropomorphization Problem
Robot imagery reinforces misconceptions that AI systems have consciousness, intentions, and human-like thinking—making your communication job harder.
Better Alternatives Exist
Resources like Better Images of AI provide authentic visual representations showing real research, real people, and real applications.
Researcher Responsibility
You have unique authority and responsibility to demand accurate visual representation—visual literacy is part of ethical AI research.
Practical Framework
Use the five-question checklist before choosing any image to ensure it supports rather than undermines accurate understanding.
First Flags Matter
Images are first flags—they're the first thing audiences see, the stickiest thing they remember, and the hardest thing to correct later.
Practical Action Plan for Day 2
Immediate Steps You Can Take
1
Audit Your Current Visuals
Review the images in your existing presentations and materials. How many show humanoid robots, circuit brains, or other anthropomorphic imagery? Mark these for replacement.
2
Visit Better Images of AI
Spend 15 minutes browsing betterimagesofai.org to familiarize yourself with available alternatives. Download 3-5 images relevant to your research area.
3
Document Your Work Visually
Take screenshots of compelling visualizations from your research. Photograph your team working. Create a personal library of authentic images.
4
Apply the Five-Question Checklist
For your Day 2 presentation, run every image through the five-question framework before including it. Be ruthless about removing images that fail the test.
5
Share This Knowledge
Brief your lab mates, colleagues, or collaborators on visual communication issues. Make this a collective practice, not just individual awareness.
Start small but start now. Even replacing one problematic image in your next presentation is progress. Visual literacy is a skill that develops through practice and attention.