Critical AI Engagement Framework, Version 1.0

This framework draws on Hosseini's (2026) forthcoming paper in The Geographical Journal, which examines how generative AI tools reproduce racial, gendered, and class-based representations through algorithmic coloniality (Mohamed et al., 2020). That work demonstrates how seemingly neutral prompts encode dominant cultural assumptions, producing outputs that reflect and reinforce existing inequalities.

The Critical AI Engagement Framework extends this analysis into a practical pedagogical tool, mapping how individuals engage with generative AI across two axes: epistemic posture and structural consciousness. It is designed for use across educational contexts, supporting educators and learners in moving beyond prompt refinement toward critical, collective, and structurally informed engagement with AI systems.

Critical AI Engagement Framework — Hosseini
Grounded in Mohamed et al. (2020), Benjamin (2019), Noble (2018), Zembylas (2023), Crenshaw (1991)
Hosseini — Version 1.0, March 2026
For academic and workshop use
Not for citation without permission
With special thanks to Dr Sara Camacho Felix

Generative AI tools are now embedded in higher education, used to draft text, generate images, produce code, and synthesise research. Yet these tools are not neutral. They encode the assumptions, hierarchies, and exclusions of the data they were trained on, reproducing patterns of racial, gendered, and class-based harm even as their outputs become more sophisticated (Benjamin, 2019; Mohamed et al., 2020; Noble, 2018). Understanding how students engage with these tools, and what that engagement does or does not interrogate, is therefore an urgent task for educators.

The Critical AI Engagement Framework maps educational engagement with generative AI across two axes. The horizontal axis describes an individual’s epistemic posture toward AI: from treating its outputs as authoritative, through questioning them, to recognising the colonial and structural conditions that shape what AI knows, whose knowledge it centres, and what it silences (Hosseini, 2026; Maalsen, 2023). The vertical axis describes how far an individual recognises AI as a socio-technical artefact shaped by relations of power, rather than simply a tool with fixable flaws (Benjamin, 2019; Quijano, 2000; Zembylas, 2023). The positions individuals occupy are not freely chosen but reflect the institutional, curricular, and social conditions that shape how they learn and work. The framework’s aspiration is not a more capable individual user but collective action: sustained, community-grounded engagement that works toward structural change (Camacho Felix, 2025; Mohamed et al., 2020).

Horizontal axis (epistemic posture):

  • Epistemic deference: AI as neutral oracle

  • Critical interrogation: outputs questioned

  • Epistemic agency: whose knowledge?

  • Collective / relational: critique with others

Vertical axis, row 1: Individualized engagement (AI as personal tool, neutral and apolitical)
The Uncritical Receiver
Accepts outputs; AI naturalized as neutral knowledge source
Theoretical anchor
Algorithmic coloniality (Mohamed et al., 2020): student treats AI outputs as objective, without recognizing that GenAI encodes "opaque, inconsistent cultural assumptions" shaped by historically racist and sexist training data (Keshishi & Hosseini, 2023; Benjamin, 2019).
For educators
Make the assumption of neutrality visible through practical experiments. Ask: whose perspective does a GenAI output reflect by default — in an image, a summary, a clinical recommendation, or a generated essay? Drawing on Day & Esson (2025) and Hosseini (2026), show how seemingly neutral prompts reproduce skewed cultural defaults across output types.
For students
The student experiences AI outputs as "natural" rather than constructed. As Benjamin (2019) argues, socio-technical artefacts are not static reflections — they are shaped by the feedback and values of those who built them. Students need a framework to see this, not just permission to question.
The Cautious Pragmatist
Checks outputs; AI still framed as neutral instrument
Theoretical anchor
The student audits outputs for factual errors but not for the cultural assumptions encoded in them. As Day & Esson (2025) show, even anomalous outputs require users to "remain vigilant to the opaque and shifting nature of generative AI tools" — vigilance the Cautious Pragmatist applies technically but not epistemically.
For educators
Shift from "is this accurate?" to "whose accuracy?" Introduce temporal dynamism (Kleinman, 2024): improved outputs — whether images, text, or code — do not mean underlying biases have been addressed. Use Spennemann & Oddone's (2025) technique of asking GenAI to explain its own outputs as a critical exercise.
For students
May believe that better prompting solves the problem. However, prompt refinement "would not address the underlying biases within the datasets themselves" (Hosseini, 2026). The student needs to move from refining inputs to interrogating the training data and the colonial logics embedded within it.
The Epistemically Alert
Interrogates whose knowledge is centered; notices silences
Theoretical anchor
Algorithmic coloniality (Mohamed et al., 2020): student recognizes that AI systems embed a "dominant, Eurocentric worldview" that upholds hierarchical, racialized, and gendered ways of knowing. Connects to Noble's (2018) algorithms of oppression and Maalsen's (2023) algorithmic epistemologies.
For educators
Move from naming bias to interrogating its origin. Use Quijano's (2000) coloniality of power to show that AI's racial and gender defaults are not errors but expressions of colonial hierarchies embedded in training data. Ask: what would an AI trained on non-Eurocentric datasets produce differently?
For students
May feel isolated, especially when institutional AI guidance frames the issue as a technical problem. Wilby & Esson's (2024) call for "capabilities, caveats, and criticality" provides legitimizing language. Connect to communities of practice doing this work.
The Isolated Disruptor
Critiques AI alone; change without solidarity
Theoretical anchor
Individual critique of algorithmic coloniality, however sophisticated, cannot address structural problems in proprietary and inaccessible training datasets (Amoore et al., 2024). Mohamed et al. (2020) are explicit: structural change requires "political coalitions and communities," not individual actors.
For educators
Connect students to collective and cross-disciplinary action. Addressing algorithmic coloniality requires breaking down "disciplinary and departmental silos" (Hosseini, 2026; Maalsen, 2023). Individual insight without structural leverage changes nothing about the datasets or systems producing harmful outputs.
For students
Risk of cynicism or disengagement when individual critique runs up against inaccessible, proprietary datasets and opaque systems. As Hosseini (2026) demonstrates, surface improvements in GenAI outputs can mask rather than resolve the underlying colonial logics — students need community and strategy, not just analysis.
Vertical axis, row 2: Partial structural awareness (senses bias or harm, lacks systemic account)
The Uneasy Adapter
Senses something wrong; lacks language to name it
Theoretical anchor
Pre-conceptual awareness of algorithmic harm: student senses that something is "off" in AI outputs — perhaps noticing racial or gender skew — but has not yet encountered the theoretical vocabulary to name it. This is the moment described by Day & Esson (2025) when outputs produce "surprising results."
For educators
This is a threshold moment. Offer concepts — algorithmic coloniality (Mohamed et al., 2020), algorithms of oppression (Noble, 2018), socio-technical artefacts (Benjamin, 2019) — as language for what is already felt. Hosseini's (2026) method of prompting GenAI and critically analyzing outputs is a replicable pedagogical entry point adaptable across text, image, and code generation.
For students
High potential. Already doing affective critical work. Avoid rushing to resolution — the unease is epistemically productive. GenAI outputs should be approached "not [as] surprising, but as symptomatic of racialised and gendered logics" (Hosseini, 2026) embedded in training data across all output modalities.
The Informed Skeptic
Identifies bias in outputs; most common profile
Theoretical anchor
Can identify racial and gender skew in outputs — consistent with quantitative evidence (Cheong et al., 2024; Currie et al., 2024, 2025) — but frames it as a dataset problem rather than an expression of algorithmic coloniality (Mohamed et al., 2020). The systemic account is absent.
For educators
Move from "bias as glitch" to "bias as design." Use Benjamin's (2019, p. 59) argument that training datasets carry "the prejudices of the individuals who compiled them." Ask: why does a GenAI default encode particular assumptions about race, class, gender, or expertise — whether producing an image, drafting a clinical summary, or generating a curriculum resource?
For students
May believe that surface improvements — more realistic outputs, more diverse teams, better prompts — will resolve the issue. Hosseini (2026) demonstrates directly that successive GenAI model versions produced aesthetically improved outputs while reproducing the same racial and gendered logic. The technical fix does not address colonial logics in the training data.
The Structural Analyst
Names AI harms systemically; connects to power
Theoretical anchor
Understands AI as a socio-technical artefact (Benjamin, 2019) shaped by Silicon Valley's role as "part of the United States, a global hegemon and a successor to European colonial powers" (Keshishi & Hosseini, 2023). Connects algorithmic coloniality (Mohamed et al., 2020) to concrete outputs.
For educators
Deepen from analysis to action. Introduce reparative description (Parry, 2023): how might geographers work with public image repositories to revise false past categorizations? Introduce Zembylas's (2023) strategies for "undoing the ethics of digital neocolonialism."
For students
May become frustrated that structural analysis does not translate into change. Channel into cross-disciplinary collaboration. Addressing problematic training data requires collective action and "relational approaches that emphasise the spatial and political contexts of algorithms" (Maalsen, 2023; Hosseini, 2026). Analysis without community and outlet risks paralysis.
The Emerging Ally
Seeks solidarity; building shared critical vocabulary
Theoretical anchor
Transitional position between individual and collective consciousness (Freire). Recognises that critique must be collective but lacks the structural analysis to ground it yet.
For educators
Facilitate cross-disciplinary collaboration explicitly. Addressing algorithmic harm requires breaking "disciplinary and departmental silos" (Hosseini, 2026; Maalsen, 2023) — across education, geography, data science, and activism. Connect emerging allies to existing coalitions and communities of practice doing this work.
For students
Motivated by justice but may lack the analytical vocabulary to sustain critique under institutional pressure. Pairing with theoretically grounded peers — including those with lived experience of the harms being analyzed (cf. acknowledgments in Keshishi & Hosseini, 2023) — is more generative than educator-only support.
Vertical axis, row 3: Structural consciousness (AI as site of coloniality and harm)
Conscientized but Constrained
Sees the system; defers under institutional pressure
Theoretical anchor
Understands algorithmic coloniality and its harms but operates in institutional systems — curriculum, assessment, professional bodies — that have not caught up with the critique. Within many national contexts "there are nascent discussions on the ethical issues of using Gen AI technologies within tertiary education" (Hosseini, 2026) — the institutional conversation is beginning but remains uneven.
For educators
Name the institutional lag explicitly. Developing "algorithmic literacy as part of wider digital literacy initiatives" (Kong et al., 2023; Zembylas, 2023; Hosseini, 2026) is a growing expectation — the conversation is beginning, and students can actively contribute to shaping it rather than waiting for institutions to catch up.
For students
Risk of internalizing structural constraint as personal inadequacy. The student's tension is not a sign of failure — it is evidence of structural contradictions that institutions have not yet resolved. Validate the critique while building pathways to act within and against institutional constraints.
The Critical Refuser
Refuses metaphorical framing; acts on structural critique
Theoretical anchor
Tuck & Yang (2012): decolonization is not a metaphor. Student refuses cosmetic diversity framings and demands structural change to what AI produces and whom it serves.
For educators
Support with Mohamed et al.'s (2020) practical recommendations: identifying sites of coloniality in AI systems, understanding where and how algorithms are made, engaging in reparative description (Parry, 2023), and developing local and national policy challenges to colonial algorithmic logics.
For students
May encounter resistance from colleagues who frame AI critique as technophobia or obstructionism. Documentation and publication, as Hosseini (2026) demonstrates, transform resistant practice into a sharable pedagogical resource. Connect to communities doing this work across disciplines; the argument gains force collectively.
The Critical Collaborator
Challenges AI's epistemic order; builds alternatives
Theoretical anchor
Actively participates in co-creating "instructional materials that transcend boundaries" (Hosseini, 2026) — resources that make algorithmic coloniality visible and addressable across GenAI modalities. Draws on intersectionality (Crenshaw, 1991; Hill Collins, 2019) to hold race, gender, and class in simultaneous analysis rather than treating each as a separate problem.
For educators
Commission rather than assess. Meaningful critique of algorithmic coloniality requires centering those with lived expertise in the harms being analyzed — not as informants but as co-authors (Hosseini, 2026). This student's contribution should shape pedagogy, not merely illustrate it. Invite co-authorship, co-design, and co-delivery.
For students
Risk of co-option — being absorbed as institutional evidence of diversity without structural change. Hosseini's (2026) reflexive positioning — centering colleagues with lived expertise in racial and gender inequity — models how genuine co-production differs from performative consultation. Support students to name and resist this distinction.
The Praxis Collective (aspirational)
Reflection + action with others; pluriversal praxis
Theoretical anchor
Camacho Felix's (2025) decolonial imaginations and collective imagination — "unveiling different possibilities for addressing injustices" through relational, mutual aid. Mohamed et al.'s (2020) political coalitions. Benjamin's (2019) abolitionist tools for dismantling the New Jim Code in AI systems.
For educators
Collective praxis around GenAI requires institutional conditions: time, resource, partnership, and willingness to redistribute epistemic authority. It demands cross-disciplinary collaboration, reparative dataset work (Parry, 2023), and policy advocacy (Hosseini, 2026; Mohamed et al., 2020) — none of which individual pedagogy alone can produce. Educators must build the structures, not just model the position.
For students
Students here are co-researchers and co-educators. Hosseini (2026) models this directly: conducting experiments, publishing findings, and encouraging readers to replicate and extend the work with a critical eye. Sustain rather than assess — the goal is ongoing collective action that outlasts the course, not a demonstration of competence for a grade.
The horizontal axis runs from epistemic deference to collective / relational agency.
Movement across these axes is non-linear — students may hold multiple positions simultaneously across different contexts and knowledge domains
Theoretical grounding
Horizontal axis: Mohamed et al. (2020) — algorithmic coloniality; Noble (2018) — algorithms of oppression; Maalsen (2023) — algorithmic epistemologies and situated knowledge  ·  Vertical axis: Benjamin (2019) — socio-technical artefacts encoding racial inequity; Zembylas (2023) — decolonial AI in HE; Quijano (2000) — coloniality of power; Camacho Felix (2025) — decolonial imaginations and collective action
Educational technologies, generative AI · Dustin Hosseini

Using generative AI for everyday writing

As part of my role within the Learning Innovation Support Unit at the University of Glasgow, I’m experimenting with generative AI.

In this short post, I look at using generative AI for everyday writing tasks from the perspective of anyone working in tertiary education. So, whether you work in a college (in the US or UK sense) or a university, as an administrator, educator or researcher, this post applies to you.

Overview

I look at examples from Google Bard and chatGPT 3.5. Specifically, I look at using generative AI for:

  • turning draft notes into prose

  • writing a welcome message to new students, and tweaking messages

  • writing a message in a specific literary style

  • writing for specific audiences

  • drafting and redrafting

By using these examples, I try to show what two different generative AI tools can do. I also briefly discuss their limitations.

This post will help anyone in education and other environments understand how to use generative AI for everyday writing tasks.

Turning meeting notes into prose

Sometimes you might need to write up notes taken in a meeting into prose: something coherent that can be read by others. Generative AI can help with this. Here are two examples, generated using the following prompt:

I have some draft notes from a meeting. Can you turn this into prose?
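If you prefer to script this step rather than paste notes into a chat interface, the same prompt can be sent through a chat-completion API. The sketch below is an illustration, not part of the original post: the `openai` client library, the model name, and the sample notes are all assumptions, and any comparable API would work the same way.

```python
# Sketch: sending the notes-to-prose prompt programmatically.
# The `openai` dependency and model name are illustrative assumptions.

def build_notes_prompt(notes: str) -> list[dict]:
    """Wrap raw meeting notes in the prompt used in the post."""
    instruction = "I have some draft notes from a meeting. Can you turn this into prose?"
    return [{"role": "user", "content": f"{instruction}\n\n{notes}"}]

def notes_to_prose(notes: str, model: str = "gpt-3.5-turbo") -> str:
    """Send the prompt to a chat-completion endpoint (requires an API key)."""
    from openai import OpenAI  # hypothetical dependency: pip install openai
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=build_notes_prompt(notes),
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Build (but do not send) the prompt for a fragment of the notes below.
    messages = build_notes_prompt("- course audience\n  - non-traditional students")
    print(messages[0]["content"])
```

As in the chat interface, everything hinges on the wording of the instruction and the notes you supply; the API simply removes the copy-and-paste step.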

Reflections: defining vs storytelling?

As you’ll see from the drop-down boxes below, my original notes are transformed in different ways by Google Bard and chatGPT 3.5, respectively. My original notes were a set of bullet-pointed items, as I was taking these in the moment while in a meeting with two other colleagues. I wanted to jot down basic information quickly. These notes might come in handy later, for example when I need to put together a report.

Interestingly, Google Bard kept the bullet points and expanded them with some prose, perhaps defining each area or student population. The focus here seems to be on defining what the student groups are. The response is short. However, Google Bard also created two other drafts. You can see a screenshot of a second draft below.

chatGPT, however, creates paragraphs that seem to build a story or argument. The emphasis is less on defining who the student groups are and more on narrating what they might bring to the course in question.

This contrast is important to consider because writing for your audience matters. Understanding who your audience is will make a difference in the kind of response you want or need to create.

  • course audience

    • non-traditional students

      • people with lived experiences

    • returners to study

    • mature students

    • academic areas

    • social/public policy students

      • local government practices and policy

    • urban studies students?

    • social work

      • experienced people

      • new people

    • social justice

    • public health

    • social welfare

    • decolonization / decolonizing education and systems

  • The course audience encompasses a diverse range of individuals, making it a dynamic and inclusive learning environment. Primarily, it caters to non-traditional students, particularly those with unique lived experiences that contribute to the richness of classroom discussions. This group often includes individuals returning to the academic sphere after an absence, mature students seeking new knowledge and skills, and those pursuing studies in various academic disciplines.

    Among the academic areas represented within the course are social and public policy students, with a focus on local government practices and policy. Here, students delve into the intricate workings of governance at the local level, exploring policies that impact communities directly.

    Another segment of the audience consists of urban studies students, who examine the complexities of urban environments, including their design, development, and social implications.

    Social work students, both seasoned professionals and newcomers to the field, engage in discussions related to their practice. This diverse group brings real-world experiences and fresh perspectives to the table.

    Furthermore, the course appeals to students interested in social justice, public health, social welfare, and the critical concept of decolonization. It offers a platform for dialogue on creating equitable systems, dismantling colonial legacies, and promoting social change.

    In essence, this course fosters an inclusive and collaborative learning environment, bringing together individuals from various backgrounds and academic interests to explore critical issues that shape our society.

  • The course audience is likely to be a diverse group of students, including:

    • Non-traditional students: This could include people who have not followed a traditional educational path, such as those who have delayed their education or who have returned to school after a break.

    • Returners to study: Students who are returning to school after a period of absence, such as those who are raising a family or who have been working full-time.

    • Mature students: Students who are over the traditional age of entry for higher education.

    • Academic areas:

      • Social/public policy students: Students who are interested in studying social and public policy, such as local government practices and policy.

      • Urban studies students: Students who are interested in studying urban areas and the challenges and opportunities they face.

      • Social work students: Students who are interested in a career in social work, helping people in need and advocating for social justice.

      • Social justice students: Students who are passionate about social justice and want to learn more about how to make the world a more equitable place.

      • Public health students: Students who are interested in studying public health and promoting the health and well-being of populations.

      • Social welfare students: Students who are interested in studying social welfare and the policies and programs that support people in need.

      • Decolonization students: Students who are interested in studying decolonization and decolonizing education and systems.

    The course is likely to be of interest to students from a variety of backgrounds and with a range of experiences. It is particularly relevant to students who are interested in making a difference in the world and who are committed to social justice.

chatGPT & Google Bard in action

Below are two gif animations of the prompts I used and the responses from Google Bard and chatGPT, respectively. As you can see, chatGPT creates one response. You can then write follow-up prompts to hone the response(s) that you want. Google Bard will provide 3 drafts (at least) and you can choose from these and/or write follow-up prompts for further drafts.



Drafting a welcome message to new students

Generative AI can also be helpful in creating templates that you can modify to your needs. One example is creating a welcome message to new students and/or staff. You may already have one. However, it’s always good to refresh these as time goes on.

For these examples, you can see the results for yourself from Google Bard and chatGPT 3.5, respectively, or see the images below.

Here’s the prompt I used in both genAI tools:

I'm a program administrator. I'd like to draft a short welcome message to new students. The tone should be professional and friendly. The message should be no more than 2 paragraphs long.

Google Bard’s response

Interestingly, it ends with tips on writing. The response appears a little more personal than the one created by chatGPT.

chatGPT 3.5’s response

This reads somewhat similarly to Google Bard’s response, but it’s a lot shorter and more to the point.

Testing the creativity of generative AI: using literary styles for welcome messages

I decided to test how Google Bard and chatGPT 3.5 might respond if I asked them to re-write the welcome messages to students in the style of Audre Lorde and Robert Burns. I used Google Bard for Audre Lorde, and chatGPT 3.5 for Robert Burns.

You can judge the results for yourself!

  • Welcome, new students.

    This is a space for you to learn and grow, to challenge yourself and to be challenged.

    It is a space for you to find your voice and to use it.

    To stand up for what you believe in, even when it is difficult.

    To be your full, authentic self.

    We are here to support you on your journey.

    But we cannot do it for you.

    You must do the work.

    You must be brave enough to ask the hard questions.

    You must be willing to be uncomfortable.

    You must be open to new ideas and new perspectives.

    And you must be willing to share your own.

    This is your time.

    Make the most of it.

    In solidarity,

    [Your name]

    Program Administrator

  • Dear Newcomers, ye're welcome here,

    As Administrator, I give ye cheer.

    Ye've chosen well to join our fold,

    In learning's path, ye'll soon be bold.

    Through study's journey, find your way,

    With staff to guide, by night and day.

    Questions and concerns, don't ye hide,

    We're here to help, walk by your side.

    But ne'er forget, in this abode,

    The heart of learning, by community sowed.

    Connect with peers, as the seasons turn,

    Together we'll thrive, together we'll learn.

    For in our bonds, our strength is found,

    As we explore knowledge all around.

    Welcome to our program's embrace,

    A transformative journey, yer destiny's grace.

    Warm regards,

    [Your Name]

    Program Administrator


Drafting & redrafting tips for new students

Greeting new students, no matter their age or level, can be an opportunity to spark many things: interests, hope, enthusiasm, change, development and many others.

Let’s see what generative AI creates for the following prompt:

I teach first year undergraduate students. Create a list of 10 good tips for students new to a university. Keep the tone supportive and friendly.

As we can see below, the responses are quite similar. Depending on where you teach and work, you’ll want to modify some of the language. We can see that the language below is heavily geared towards locations where academics or lecturers are called ‘professors’ (looking at you, USA!).

Interestingly, Google Bard’s response puts emphasis on trying things out and embracing learning from failure, whereas chatGPT seems to avoid mentioning this at all while instead encouraging people to remain positive. Neoliberal much?

chatGPT 3.5’s response

  1. Stay Organized: Keep a planner or use digital tools to stay on top of assignments, deadlines, and class schedules. Being organized will reduce stress and help you manage your time effectively.

  2. Ask Questions: Don't hesitate to ask your professors, advisors, or fellow students for help or clarification. Everyone is here to support your learning journey, and questions are a vital part of the process.

  3. Engage in Class: Active participation in class discussions and activities can enhance your learning experience and build valuable connections with your peers and professors.

  4. Explore Resources: Universities offer a wealth of resources, from libraries to tutoring centers to counseling services. Take advantage of these resources to excel academically and personally.

  5. Balance Your Life: University life is about more than just studying. Make time for hobbies, socializing, and self-care. Finding a healthy balance is essential for your well-being.

  6. Make Connections: Building relationships with classmates can lead to lasting friendships and valuable study groups. Attend club meetings, events, and orientation sessions to meet new people.

  7. Stay Curious: University is an opportunity to explore new ideas and perspectives. Embrace the chance to broaden your horizons and approach subjects with an open mind.

  8. Seek Feedback: Don't be afraid to request feedback on your assignments or exams. Understanding where you can improve is a crucial part of your growth as a student.

  9. Stay Positive: University can be challenging, but remember that setbacks are part of the learning process. Stay positive, persevere through challenges, and celebrate your successes, no matter how small.

  10. Plan for the Future: While enjoying your first year, start thinking about your long-term goals and career aspirations. University is not only about education but also about preparing for your future.

Welcome to the university journey! We're excited to have you here, and we're confident that you'll thrive with these tips in mind. Remember, you're not alone in this adventure, and there's a whole community here to support you.

Google Bard’s response

  1. Get organized. This means creating a study schedule, finding a good place to study, and keeping track of your assignments and deadlines. There are many resources available to help you get organized, such as your university's academic advising center or student success center.

  2. Don't be afraid to ask for help. If you're struggling in a class, talk to your professor or teaching assistant. They can provide you with extra help and support. You can also join a study group or talk to a tutor.

  3. Get involved in extracurricular activities. This is a great way to meet new people, learn new skills, and have fun. There are many different clubs and organizations to choose from, so you're sure to find something that interests you.

  4. Take care of yourself. This means getting enough sleep, eating healthy foods, and exercising regularly. It also means taking breaks from studying and socializing with friends. Taking care of yourself will help you stay focused and motivated.

  5. Don't be afraid to make mistakes. Everyone makes mistakes, especially when they're first starting out. The important thing is to learn from your mistakes and move on.

  6. Celebrate your successes. No matter how small they may seem, it's important to celebrate your successes. This will help you stay motivated and keep moving forward.

  7. Find a mentor. A mentor is someone who can provide you with guidance and support. They can be a professor, a teaching assistant, an older student, or someone in your field of study.

  8. Use your university's resources. Your university has many resources available to help you succeed, such as the library, the writing center, and the career counseling center. Take advantage of these resources!

  9. Don't be afraid to try new things. University is a time to explore your interests and try new things. Take classes outside of your major, try out for a sports team, or join a club that you're interested in.

  10. Have fun! University is a time to learn and grow, but it's also a time to have fun. Make memories with your friends, go to sporting events and concerts, and explore your new city.

Additional tips:

  • Be proactive in your learning. Don't wait until the night before an exam to start studying. Start early and give yourself plenty of time to prepare.

  • Don't be afraid to speak up in class. Ask questions and participate in discussions. The more involved you are, the more you will learn.

  • Take advantage of your professors' office hours. If you're struggling in a class, don't be afraid to go to your professor's office hours for help. They are there to help you succeed.

  • Don't be afraid to fail. Everyone fails at some point in their lives. The important thing is to learn from your failures and keep moving forward.

I hope these tips are helpful! Welcome to university!

Diving deeper into making connections

I think making and sustaining connections in education is very important for success. It helps to have people to bounce ideas off and to get alternative perspectives and takes on the same or similar readings. So, I wanted to see how ChatGPT 3.5 would respond to expanding on point 6, ‘Make connections’. I asked, ‘Can you expand upon point 6?’ and here is what it responded with:

Certainly! Point 6, "Make Connections," is not just about meeting people but also about creating a network that can enhance your university experience and future opportunities. Here's an expanded version of this tip:

6. Make Connections: Build Your University Network

One of the most enriching aspects of university life is the chance to connect with a diverse group of people. These connections can shape your academic experience, personal growth, and even your future career. Here's how to make the most of this opportunity:

a. Attend Campus Events: Universities often host a wide range of events, from club meetings to guest lectures and cultural festivals. Attend these gatherings to meet fellow students who share your interests and passions. It's a great way to find like-minded friends and explore your hobbies.

b. Join Clubs and Organizations: Most universities have numerous clubs and student organizations that cater to a variety of interests. Whether you're into sports, arts, social activism, or academic pursuits, there's likely a group for you. Joining clubs not only helps you make friends but also provides opportunities for leadership and personal development.

c. Participate in Orientation: Many universities organize orientation programs for new students. These events are designed to help you get to know your campus, fellow students, and university resources. Take advantage of them to start building connections from day one.

d. Study Groups: Collaborative learning can be incredibly effective. Forming or joining study groups can help you better understand complex subjects, share knowledge, and develop lasting friendships with your peers.

e. Network with Professors: Your professors are valuable resources for both academic and career advice. Attend office hours to discuss your coursework, seek guidance on research projects, and build professional relationships that can provide references or mentorship.

f. Connect Virtually: In the digital age, online platforms and social media are powerful tools for staying connected with classmates, professors, and alumni. LinkedIn, for instance, is a great platform for building your professional network.

g. Be Inclusive: University is a place of diversity, and it's an opportunity to interact with people from various backgrounds and cultures. Embrace inclusivity, be open to different perspectives, and learn from the richness of this diversity.

Remember, the connections you make in university can extend far beyond your time on campus. These relationships can lead to lifelong friendships, career opportunities, and a broader understanding of the world. So, take the initiative, be open to new experiences, and enjoy the journey of building your university network.
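Exchanges like the one above can also be scripted rather than typed into the web interface. Chat-style APIs generally represent the running dialogue as a list of role-tagged messages, and a follow-up such as ‘Can you expand upon point 6?’ keeps its context because the whole history is sent back to the model each time. A minimal sketch in Python (the message shapes and helper here are illustrative assumptions, not any specific vendor's API):

```python
from typing import TypedDict


class Message(TypedDict):
    role: str      # "system", "user", or "assistant"
    content: str


def add_follow_up(history: list[Message], question: str) -> list[Message]:
    """Return a new history with a follow-up user question appended.

    Sending the full history back to the model is what lets a follow-up
    like "Can you expand upon point 6?" retain its context.
    """
    return history + [{"role": "user", "content": question}]


# The conversation so far: the original request and the model's tips.
history: list[Message] = [
    {"role": "user", "content": "Give me tips for starting university."},
    {"role": "assistant", "content": "... 6. Make connections ..."},
]

# Append the follow-up before the next request.
history = add_follow_up(history, "Can you expand upon point 6?")
print(len(history))  # → 3
```

The same pattern underlies the further probing described below: each clarifying question is simply appended to the growing message list.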

Reflections: pushing generative AI

When I asked ChatGPT 3.5 to give examples of apps where students could connect virtually, the responses were overwhelmingly biased towards Anglo-American/Eurocentric apps such as

  • Zoom

  • Microsoft Teams

  • Slack

  • Discord

  • TikTok

  • Snapchat

I had to specifically ask about common social media apps that non-English speakers might use, and ChatGPT finally spilled the beans by suggesting:

  • WeChat - an all-in-one messaging app widely used in China

  • Line - a messaging app with a strong presence in Asia. It offers stickers, games, and a variety of features in addition to messaging.

  • VK (VKontakte) - a Russian-language social media platform similar to Facebook. It offers features for connecting with friends, sharing media, and joining communities. It’s popular in Russian speaking areas.

I wanted to push it a bit, as I know that tools like Google Bard and ChatGPT reproduce cultural biases and norms through the suggestions that they create. Omissions are almost always present, so further questioning can help draw more complete answers out of generative AI tools.

Generative AI: a problematic illustration of the intersections of racialized gender, race, ethnicity

NB: this post is a draft and subject to change; it forms a pre-print (an author’s original manuscript) I have authored.

Learning, teaching and technology have often been a big part of my career - since way back in the mid-to-late 2000s! Now in 2023, talk of artificial intelligence and education is omnipresent, and it's here to stay. Machine learning allows AI tools to become more capable by drawing on datasets to develop expertise over time. However, AI tools rely upon raw data created by humans; these datasets, in turn, reflect the biases of those who gathered the evidence, biases which are racial, economic and gendered in nature (Benjamin, 2019, p. 59).

Several researchers (Noble, 2018; Benjamin, 2019; Mohamed et al., 2020; Zembylas, 2023) are looking into the underpinning reasons that enable AI to skew results and create representations that overlook and erase some groups while centring specific, dominant ones. Specifically, the way that the human-created algorithms informing AI and generative AI tools portray racialized, gendered people is especially problematic. To understand why problematic representations of people are created, it is worth looking at the ideas of intersectionality (Crenshaw, 1991; hooks, 2015; Hill Collins, 2019). I draw on bell hooks and Patricia Hill Collins's works here and recommend the reader acquaint themselves with Kimberlé Crenshaw's work.

I write this post from my position as a part-time doctoral student, educator and higher education worker at a Scottish university in the UK. I write it as someone who is interested in and curious about technology, and as someone who teaches, develops, coaches and mentors educators (lecturers) in how to teach and augment their teaching practices. However, I also write it from the perspective of a US migrant and dual national who has lived and worked in China, Russia, Kazakhstan and the UK. I note these because they inform my positionality: I am interested in the interplay of education, culture, media representation, critical pedagogy and decolonial thinking, as some of the ideas underpinning these areas inform my personal and professional values.

As a colleague of mine wrote "As per the Russell Group principles, I strongly believe it’s my job as an individual educator and our job as a sector to guide students how to use AI appropriately." I take their words and apply them to my own context: I believe it is my job as an educator to guide students and university staff in understanding and using AI appropriately.

For educators, this will give you an insight into some of the affordances of generative AI tools for creating images while exposing you to some of the opportunities and serious problems of using, for example, DALL-E, to create images. This post should give you ideas for developing your own practice with your students and your colleagues, no matter their experience as educators.

Thank you to colleagues and friends who have helped expand my thinking when writing this post.

Introduction

Using generative artificial intelligence (AI) tools can be exciting, confounding, scary and confusing. This was my experience and observation upon showing an academic colleague how a generative AI tool like ChatGPT works by taking text prompts that are then used to create text-based content. Although text content creation tools have been at the forefront of everyone's mind since at least mid-2022, there are other generative AI tools that exist and merit attention. At the time of this writing, I see common generative AI tools being categorized into four major types:

  • text-to-text (e.g. ChatGPT, Google Bard, Cohere.ai)

  • text-to-image (e.g. DALL-E, Midjourney, Stable Diffusion)

  • text-to-media, such as audio or video

  • text-to-code, for coding and programming purposes
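The rough taxonomy above can be captured as a small lookup structure, which can be handy when cataloguing tools for a workshop. A minimal sketch in Python (the groupings are this post's informal categories and the tool names are those mentioned here, not an official classification):

```python
# Informal taxonomy of generative AI tools, as categorized in this post.
# The empty lists mark categories where no specific tools are named here.
GENAI_CATEGORIES: dict[str, list[str]] = {
    "text-to-text": ["ChatGPT", "Google Bard", "Cohere.ai"],
    "text-to-image": ["DALL-E", "Midjourney", "Stable Diffusion"],
    "text-to-media": [],  # audio or video generators
    "text-to-code": [],   # coding and programming assistants
}


def categories_for(tool: str) -> list[str]:
    """Return every category a given tool appears under."""
    return [cat for cat, tools in GENAI_CATEGORIES.items() if tool in tools]


print(categories_for("DALL-E"))  # → ['text-to-image']
```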

In this post, I focus on text-to-image generative AI through example prompts that I created. I analyze what DALL-E produced to demonstrate that educators must experiment with generative AI tools in order to understand and critique both the tools and what they produce. In doing so, we can begin to understand how and why such tools create the content that they do. I use intersectionality as a heuristic (Hill Collins, 2019) to analyze the AI-generated avatars, looking at how they represent socially constructed identities in terms of racialized gender, race, ethnicity and nationality. Humans create algorithms, and those algorithms, in turn, create representations that carry their makers' assumptions.

Specifically, we can deepen our understanding of the reasons that generative AI tools (and other technologies) create questionable content that is, at the very least, underpinned by stereotypes representing an intersection of racism, misogyny, classism and/or xenophobia.

Finally, we must recognize that, for the moment, there is no concrete solution that a lay academic or layperson can implement alone; addressing this requires a collective, concerted effort that includes a range of groups focused on shining light on the issues, changing hearts, minds and code, and imagining ways forward to an equitable, inclusive world. Decolonial thinking can offer some imaginations to counter the coloniality of AI.

I first provide the context by laying out four (4) example prompts that I created and entered into DALL-E. I briefly touch on the prompts before moving on to analyze the results of each. I provide a basic critique of the resulting representations by looking at atmosphere, decor, clothing, facial expressions, ethnicity and race.

For clarity, I use the definitions of race and ethnicity offered by Fitzgerald (2020, p. 12), which see race as referring to a ‘group of people that share some socially defined characteristics, for instance, skin color, hair texture, or facial features’, while ethnicity encompasses the ‘culture, nationality, ancestry and/or language’ shared by a group of people irrespective of their physical appearance (ibid). Grosfoguel offers another take on race informed by decolonial thinking: race is what he terms ‘a dividing line that cuts across multiple power relations such as class, sexual and gender at a global scale’ (2016, p. 11). In this case, race and subsequent racism are institutional and structural in nature, in that the concept of race creates hierarchies of power and domination which are compounded by gender, sex, class and other factors.

While the concepts of race and ethnicity are social constructs and the two are not mutually exclusive, I use these definitions to frame my analysis.

I highlight what is represented and why the representations might appear this way, and leave you, the reader, with critical questions to consider as you and your prospective students/learners explore the use of generative AI for creating images from text. I then offer some possible solutions drawing on decolonial thinking.

NB: some readers will find the results disturbing, upsetting and potentially angering. 

Sweet old grannies

Generative AI allows us to experiment with ideas and then create representations of those ideas, whether as text, images or other media. In these short cases, I asked DALL-E to create illustrations of sweet old grannies making pancakes. As a reminder, DALL-E is one of the major text-to-image generative AI tools, and there are many others out there.

This was an impromptu idea that came up for a few reasons. In my current role, there is much discussion on the issues of generative AI and how to prepare students and educators. I also like pancakes and I have some fond memories of one of my grandmothers who would visit regularly when I was younger. I also worked and lived in Russia for a while where both pancakes and grandmothers are a big part of the culture. Pancakes are big around Maslenitsa or Carnival as it is known in other countries that celebrate the Western Christian version of the event, while grandmothers are a major cultural symbol, source of unpaid family work (Utrata, 2008) and symbol of stoicism that represents an intersection of age, gender and class (Shadrina, 2022). I also thought it would be playful and also allow me to see how DALL-E, a tool created by humans who programmed algorithms, would represent humans.

For transparency, I acknowledge that I am using gendered, ageist and even stereotypical language, especially in terms of describing 'a sweet, old X grandmother'. I am also aware that I am focused on a particular type of social/familial role, a grandmother. Not all old(er) women are grandmothers and not all grandmothers are old! As Benjamin (2019, p. 102, drawing on Blay, 2011) asserts, qualifying words - those adjectival words used to describe 'opinion, size, age, shape, colour, origin, material, purpose' (Dowling, 2016) - often encode gender; race, racism and racialization; and the humanity of individuals and groups of individuals (see Wynter, 2003).

Initial prompts

I used a single prompt template and changed only the adjectival qualifier describing the national origin of the imaginary character or avatar: "Create an image of a sweet, old X grandmother making pancakes". I tried out these prompts over a period of two weeks in July 2023. The queries I created are these:

  • "Create an image of a sweet, old Polish grandmother making pancakes" 

  • "Create an image of a sweet, old Russian grandmother making pancakes" 

  • "Create an image of a sweet, old American grandmother making pancakes" 

  • "Create an image of a sweet, old Black American grandmother making pancakes"

I use specific terms so that DALL-E generates specific results that I can then analyze. This, in turn, offers evidence and clues for understanding how human-created algorithms shape the outputs that generative AI tools produce.
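The template-and-qualifier method described above can be expressed in a few lines of code, which makes the experiment easy to repeat or extend with further qualifiers. A minimal sketch in Python (illustrative only; the original prompts were typed directly into DALL-E rather than generated programmatically):

```python
# The fixed template: only the adjectival qualifier varies between prompts.
TEMPLATE = "Create an image of a sweet, old {qualifier} grandmother making pancakes"

# The four qualifiers used in this post; adding to this list extends the experiment.
QUALIFIERS = ["Polish", "Russian", "American", "Black American"]

# Build each concrete prompt by filling in the qualifier.
prompts = [TEMPLATE.format(qualifier=q) for q in QUALIFIERS]

for p in prompts:
    print(p)
```

Holding the template constant while varying a single qualifier is what makes the comparison between the resulting image sets meaningful.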

In each case, DALL-E created four (4) individual representations of each character or avatar to illustrate the prompt I had created; in total, there are 16 images, which you can see below with a caveat: generative AI does not currently do well with the finer details of humans such as facial expressions, eyes, and hands. While I won't focus on hands and eyes specifically, facial expressions and ethnicity will be important later.

Representations of Polish and Russian grandmothers

At first glance, to the untrained eye and perhaps even to the untravelled eye, we might think nothing is amiss. There are four different images that seemingly portray what is meant to be a sweet old Polish grandmother making pancakes, and another four representing her Russian counterparts.

Images of sweet old, Polish grannies making pancakes by DALL-E - representations created by DALL-E for the prompt on Polish grandmothers

Images of sweet old, Russian grannies making pancakes by DALL-E - representations created by DALL-E for the prompt on Russian grandmothers

Atmosphere, decor, clothing

As we can see, each image illustrates a sweet, old Polish grandmother who appears to be in an almost gloomy environment. The lighting isn't bright but rather dark and almost shadowy. The representations of their Russian counterparts are very similar in many ways: the atmosphere is dark, perhaps gloomy. We can see what look like wooden utensils being used, and in some of the windows we can see stereotypical lattice-type net curtains.

Such portrayals could indicate a lack of modern lighting and/or electricity. The light also indicates the time of day, which could be an early-morning golden hour, when they might rise to make breakfast. However, this offers a stereotyped, ageist view of the women represented by generalizing that all might rise at a very early hour to make pancakes.

If we look at the clothing, we see that each avatar is wearing clothing that is stereotypical of elderly Polish and Russian women: floral patterns and headscarves. Some women do occasionally wear headscarves when attending church; these women, however, are depicted in the home. We also don't get any indication of their hair or hairstyles, or whether these are things they might care about, simply because the representations cover or hide this particular aspect of all of these women.

In each case, it seems that these avatar-grandmothers are living in a different time, based on the depictions of the atmosphere and the technologies they are using. This doesn't mean that some do not live this way; however, it is problematic, as certainly not all live this way, depending upon their means, wealth and family ties.

Expressions and ethnicity

The expressions of the Polish and Russian grandmothers are problematic for a few reasons. If we look at each of the women, most of them appear to be looking either down or away with only one of each looking ahead at the imaginary camera. The images as a collective might be seen to represent a sort of melancholic and depressing environment.

The women are either expressionless or perhaps seemingly unhappy in the eyes of someone from the US or UK, apart from one of the Polish avatars. While there may be socio-historic rationales for portraying the women in such a way (e.g., World War I and World War II, followed by the Cold War), these images are explicitly problematic as they represent stereotyped, gendered and xenophobic representations of elderly Polish and Russian women.

In terms of ethnicity, in both sets of images all of the women are, or appear to be, White. Poland, according to some statistics, is 98% ethnically Polish, so perhaps the representations are close to portraying the norm. On the other hand, Russia is more complex, with its 193 ethnic groups, yet the images portray a high level of homogeneity.

Ethnic Russians make up 77-81% of Russia's population of 147 million, with Tatars, Ukrainians, Bashkirs, Chuvash, Chechens and Armenians being other major ethnic groups of over a million each (see https://en.wikipedia.org/wiki/Demographics_of_Russia#Ethnic_groups and https://minorityrights.org/country/russian-federation/ for a breakdown; there are other Russian-language sites that you can check as well). My point here is that Russia is a diverse nation of peoples of many ethnic backgrounds and mixes, including those of Slavic, Turkic, Caucasian, Mongolian, indigenous and Korean ancestry. However, the images created by DALL-E portray avatars that represent only those who appear Slavic and/or European (i.e. White). There are no representations of other types of Russians who may be Turkic, indigenous, or Mongolian in origin.

However, this could be due to how algorithms encode the concept of a Russian person. Does 'Russian' mean a citizen of Russia, and therefore anyone who lives in Russia? If so, then it is likely that the dominant views informing datasets will skew any possible representations. Or does it mean those who see themselves as ethnically Russian? If so, then perhaps it is valid to show only Slavic/European avatars. In either case, the representations are problematic, as they highlight whatever the dominant 'norm' is while erasing Russia's historically rich diversity. Another perspective is how a particular government might influence how the imaginations of its populace are portrayed, which may mean the prominence of a dominant group at the expense of ethnic minority groups. In Russia’s case, there are concerns surrounding ethnic separatism and how migrants are portrayed, especially those from regions traditionally associated with Turkic and Asiatic peoples and those whose faith is Islam (Coalson, 2023). However, such concerns are not reasons for erasing different representations and portrayals of peoples of different ethnicities.

Representations of grandmothers from the US

As a reminder, I used the following prompt: "Create an image of a sweet, old American grandmother making pancakes". I acknowledge that using ‘American’ can be problematic. It can refer to people of the United States, or if you live in Latin America, American can refer to anyone from the Americas, not just people who live in the United States of America. 

In addition to the term ‘American’ being problematic, this prompt quickly revealed more serious issues that I will touch upon. 

Representations created by DALL-E for the prompt on American grandmothers

Atmosphere, decor, clothing

The images of American grandmothers offer a stark contrast in many ways when compared with the representations of Polish and Russian grandmothers. The illustrations produced by DALL-E appear to show these women, for the most part, in a different light.

While the first two women in the top row appear to be in the home, their homes look more modern in some respects. All of the women appear to be using metallic utensils as opposed to wooden ones. The lighting in the bottom two images is much brighter, almost suggesting a cooking show set, as indicated by the lighter-colored walls. The atmosphere appears a lot less cluttered and lighter in many respects. This lack of clutter and greater light might indicate, at the very least, modern, efficient homes.

Then there are the hairstyles. These are, admittedly, something I hadn't picked up on, as hair is not something I am fully literate about, until a friend prompted me. As that friend noted, the hair of these women tells another side of the story, one related to class. What does the hair say to you? How do each of their hairstyles represent their lives? What does each style say about their socio-economic background?

The clothing also offers clues to how these avatar-representations live. Their clothes appear more modern, perhaps more expensive than their Polish and Russian counterparts. What does this say about the data that has informed the creation of these avatars? 

Expressions and race

If we look at the facial expressions - again, while generative AI does not yet get the finer details right - something appears and feels warmer, perhaps more positive, about the expressions of these avatar women. The first appears thoughtful and focused on what she is doing, with almost a sense of enjoyment. The second appears content, at the very least, with what she is doing. The third and fourth images appear to represent a wholly positive image of two different women engaged in cooking, as indicated by slight smiles, whether looking down (image 3) or straight ahead (image 4).

However, there is a significant problem with these representations, indicated by the perceived race of the sweet, old, American grandmothers: each avatar represents a White woman. This is particularly problematic as the US has a population of over 330 million, of whom nearly 80 million (roughly 1 in 4 people) are non-White. The question, then, is why the generative AI tool created only White faces to represent the qualifier ‘American’ when 1 in 4 people in the US fall under the broad categories of Black, Asian, Indigenous and others. Why is the US portrayed, at least according to these AI-generated images, as representing only one part of its population?

Data gathered from https://www.census.gov/

Representations of grandmothers from the US racialized as Black

Representations created by DALL-E for the prompt on Black American grandmothers

Atmosphere, decor, clothing 

If we consider the representations generated by DALL-E below, we see deeply problematic underlying issues that represent an intersection of race, gender and class in the portrayals of imagined sweet, old Black American women. 

The atmosphere in each image appears generally warm and inviting, echoing the representations of sweet, old, ‘American’ grandmothers. There is a certain simplicity and modernness to the environments. Two avatars appear in a home kitchen (the bottom two), as indicated by kitchen cabinets/cupboards and a nearby window. The top-left image appears to show a larger, perhaps commercial, kitchen or possibly a kitchen in the home, and the second (top-right) appears to be set in a TV studio, as indicated by the lighting and focus.

One colleague, Dr Ruby Zelzer, notes something that I had missed:

… something struck me about how utilitarian the kitchens were, the kitchen tiles in 3 of the 4 pictures, and also that the type of tiles were very basic in appearance. How none of the other images had these tiles (to my eye).

However, the images appear to say something about the roles of these avatar women. Three of the images appear to represent the avatars as cooks or chefs, as indicated by what appear to be chefs' hats and their attire in general. The avatar in a pink apron and white outfit (top-right) appears to be in an ambiguous situation in part due to the lighting and the red nose: are they in a TV studio or in a circus? I will discuss this later as the representation harks back to minstrelsy and blackface. 

In addition, two of the avatar women are wearing what look like cleaning gloves. The avatar wearing yellow gloves is particularly problematic, as the gloves appear slightly worn and tattered. This can be read as placing someone, here an older Black American woman, in a lower socio-economic position.

In 3 out of the 4 images (all bar the lower-left image), the avatars representing Black American grandmothers are situated in positions of service through the attire they are portrayed as wearing. In fact, only the avatar in a blue shirt and pink apron appears to be in a position that isn’t attributable to a service role. In contrast, the White representations of American women don’t appear to be in positions of service, as indicated by their clothing. I now turn to discuss the problems that nearly all of these images are (re)producing.

Expressions and race

All the women appear to be smiling or enjoying what they are doing. At first, this may seem like a good thing. However, the expressions of the top-left and bottom-right avatars are highly problematic for a few historic reasons rooted in racist, gendered and classist portrayals of Black American women. In addition, the larger bodies of three of the other avatars also reflect how Black American women have historically been portrayed within the United States and beyond. In contrast, the avatars representing White American women are constructed with what appear to be more delicate and smaller features, something that several researchers (Bowdre, 2006; Downing, 2007; Thompson Moore, 2021) argue has frequently been attributed to representations of White women. 

The origins of stereotyped representations of Black American women lie, in part, in minstrelsy in the 1800s (Bowdre, 2006; Downing, 2007; Thompson Moore, 2021). In minstrel shows, White men portrayed Black Americans by blackening their faces using burnt cork while exaggerating other facial features, such as the lips, by using 'red or white paint' (Bowdre, 2006, p. 37). The avatars representing Black American women are illustrative of how Black women were constructed in minstrel shows through the caricature of the wench (Thompson Moore, 2021, p. 318). White men performed the wench character representing Black women through cross-dressing and drag performances (ibid). Other characters would go further by dressing in 'brighter, more flamboyant dress' and their faces would be further exaggerated by makeup, creating 'larger eyes and gaping mouths with huge lips' (ibid). As Bowdre (2006) asserts, minstrelsy has aided stereotypes around people racialized as Black and continues to inform media representations of Black American men and women in the present day. 

Another representation is that of Black American women as a ‘mammy,’ or a good-natured, submissive and motherly figure who would provide care for White families. Taken together, an excerpt from King (2019, p. 13) explains why such representations are deeply problematic: 

“Aunt Jemima,” a well-known trope that (mis)represents/distorts Black/African womanhood in the USA, is a fictional historic advertising icon that reinforces the national stereotype of the slave plantation “mammy.” In the late 19th century, this image of a smiling, usually corpulent dark-skinned Black woman wearing a red bandana became the trademark logo for a technological innovation: ready-mixed pancake flour. Commercial advertisements that invented this denigrating image of Black womanhood expressed the white imagination, which was then reified in film, fiction, the fantasy world of plantation mythology, and consumer consciousness. This stereotype epitomises the dominance of hegemonic white memory and imagination in the material culture of American society (Wallace-Sanders 2008).

The images below illustrate what hooks (2015, pp. 65-66) argues: such images portray Black women in a negative light through the construction of Black women having ‘excessive make-up’ and ‘wearing wigs and clothes that give the appearance of being overweight’ while simultaneously representing large ‘maternal figures’. bell hooks's message here is that historical depictions of Black American women portray them as fat/obese, older, asexual and unkempt, homogenizing this group while mocking them through the ‘wench’ and/or ‘mammy’ stereotypes, both of which (re)produce demeaning representations of Black American women.

Discussion

What we see in each of these images represents what Benjamin (2019, p. 59) describes as ‘deeply ingrained cultural prejudices’ and ‘biases’ drawn from the data that generative AI tools use to create representations.

While the imaginary representation of Black American women was reified in media and consumer consciousness, we can see that this portrayal resurfaces in the digital realm within the context of generative AI. What we see here, then, is one manifestation of ‘algorithmic coloniality’ (Mohamed et al., 2020; Zembylas, 2023). For those new to the concept, coloniality is a state of knowing and being that pervades knowledge and power relations, in which those formerly colonized and/or enslaved regularly encounter inherent disadvantages in all aspects of life while former colonizers retain many advantages (Quijano and Ennis, 2000; Wynter, 2003; Ndlovu-Gatsheni, 2015). In simple terms, this means that accepted knowledges and ways of being represent those of the dominant members of society.

In this case, the role of Silicon Valley, located in the United States - a hegemonic power and an extension of the former European colonial nations as one of their settler colonies - is significant. This extends beyond the technology companies of Silicon Valley and elsewhere in the US to anywhere that readily accepts, uses and replicates their models. Those who follow the dominant modes of cultural and technological production take part in the creation and perpetuation of algorithms which overvalue some humans (those racialized as White) while undervaluing and actively devaluing the humanity of others (those racialized as Black, Asian and more).

Considering the #BlackLivesMatter movement and the daily injustices experienced by people racialized as Black in the US (and elsewhere, including the UK), it is particularly problematic that human-authored algorithms informing generative AI reflect dominant systems of knowing and being. It is also a testament to the existence of coloniality within AI and its algorithms, which (re)produce gendered, racist and xenophobic representations of racialized and minoritized peoples. 

Although there is some hope that everyone can influence the datasets that inform algorithms, which in turn might allow for some change, this will not be easy. Collaboration will be key, as will conscientization of everyone involved, if we are to address and rectify problematic algorithms, which are just one tool within a greater system. 

Some specific solutions can help by drawing on decolonial thinking to develop and deepen the understanding of students and educators. This can start with understanding where sites of coloniality replicate harmful generative AI algorithms. Drawing on Mohamed et al. (2020, p. 8), this might include understanding and identifying such sites: where and how algorithms are made and function, who is involved in beta-testing and testing generally, and what local and national policies can be developed. It also includes specifically developing algorithmic literacy as part of digital literacy initiatives (Zembylas, 2023).

Key questions for students/educators

  • Why do the avatars represent these particular groups in the way that they do?

  • What, if anything, do the representations get right? 

  • What, if anything, do the illustrations get wrong? 

    • How are the representations problematic? 

    • Where representations are problematic:

      • What message does this send to someone without knowledge of the context?

      • What message does this create about the people/cultures/objects portrayed in the images? 

      • What can you do to ensure generative AI creates, if it is possible, more accurate and equitable representations of peoples/cultures/objects?

References

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press. 

Bowdre, K. M. (2006). Racial mythologies: African American female images and representation from minstrelsy to the studio era [Doctoral dissertation, University of Southern California]. 

Coalson, R. (2023). Russia’s 2021 Census Results Raise Red Flags Among Experts And Ethnic-Minority Activists. RadioFreeEurope/RadioLiberty. Retrieved 2023-07-24 from https://www.rferl.org/a/russia-census-ethnic-minorities-undercounted/32256506.html

Crenshaw, K. (1991). Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color. Stanford Law Review, 43(6), 1241-1299. https://doi.org/10.2307/1229039

Dowling, T. (2016, September 13). Order force: the old grammar rule we all obey without realising. The Guardian. Retrieved 2023-07-24 from https://www.theguardian.com/commentisfree/2016/sep/13/sentence-order-adjectives-rule-elements-of-eloquence-dictionary

Downing, C. (2007). “Interlocking oppressions of sisterhood: (re) presenting the black woman in nineteenth century blackface minstrelsy”. Senior Scholar Papers, Paper 539. https://digitalcommons.colby.edu/seniorscholars/539

Fitzgerald, K. J. (2020). Recognizing Race and Ethnicity: Power, Privilege and Inequality (Third ed.). Routledge. 

Grosfoguel, R. (2016). What is Racism? Journal of World-Systems Research, 22(1), 9-15. https://doi.org/10.5195/jwsr.2016.609

Hill Collins, P. (2019). Intersectionality as Critical Social Theory. Duke University Press. 

hooks, b. (2015). Ain’t I a Woman: Black Women and Feminism. Routledge. 

King, J. E. (2019). Staying Human: Forty Years of Black Studies Practical-Critical Activity in the Spirit of (Aunt) Jemima. International Journal of African Renaissance Studies - Multi-, Inter- and Transdisciplinarity, 14(2), 9-31. https://doi.org/10.1080/18186874.2019.1690399

Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33(4), 659-684. https://doi.org/10.1007/s13347-020-00405-8

Ndlovu-Gatsheni, S. J. (2015). Decoloniality as the Future of Africa. History Compass, 13(10), 485-496. https://doi.org/10.1111/hic3.12264

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press. 

Quijano, A., & Ennis, M. (2000). Coloniality of Power, Eurocentrism, and Latin America. Nepantla: Views from South, 1(3), 533-580. 

Shadrina, A. (2022). Enacting the babushka: older Russian women ‘doing’ age, gender and class by accepting the role of a stoic carer. Ageing and Society, 1-18. https://doi.org/10.1017/s0144686x2200037x

Thompson Moore, K. (2021). The Wench: Black Women in the Antebellum Minstrel Show and Popular Culture. The Journal of American Culture, 44(4), 318-335. https://doi.org/10.1111/jacc.13299

Utrata, J. (2008). Babushki as Surrogate Wives: How Single Mothers and Grandmothers Negotiate the Division of Labor in Russia. UC Berkeley: Berkeley Program in Soviet and Post-Soviet Studies. https://escholarship.org/uc/item/3b18d2p8

Wallace-Sanders, K. (2008). Mammy: A century of race, gender, and southern memory. University of Michigan Press.

Wynter, S. (2003). Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, After Man, Its Overrepresentation—An Argument. CR: The New Centennial Review, 3(3), 257-337. https://doi.org/10.2307/41949874

Zembylas, M. (2023). A decolonial approach to AI in higher education teaching and learning: strategies for undoing the ethics of digital neocolonialism. Learning, Media and Technology, 48(1), 25-37. https://doi.org/10.1080/17439884.2021.2010094
