Critical AI Engagement Framework, Version 1.0

This framework draws on Hosseini's (2026) forthcoming paper in The Geographical Journal, which examines how generative AI tools reproduce racial, gendered, and class-based representations through algorithmic coloniality (Mohamed et al., 2020). That work demonstrates how seemingly neutral prompts encode dominant cultural assumptions, producing outputs that reflect and reinforce existing inequalities.

The Critical AI Engagement Framework extends this analysis into a practical pedagogical tool, mapping how individuals engage with generative AI across two axes: epistemic posture and structural consciousness. It is designed for use across educational contexts, supporting educators and learners in moving beyond prompt refinement toward critical, collective, and structurally informed engagement with AI systems.

April 2026 update: the paper informing this framework has now been published and is available below.

Hosseini, D. D. 2026. “Generative AI: A Problematic Illustration of the Intersections of Race, Gender and Class.” The Geographical Journal 192, no. 2: e70085. https://doi.org/10.1111/geoj.70085.

Critical AI Engagement Framework
Grounded in Mohamed et al. (2020), Benjamin (2019), Noble (2018), Zembylas (2023), Crenshaw (1991)
Hosseini — Version 1.0, March 2026
For academic and workshop use
Not for citation without permission
Design developed with Claude Sonnet 4.6 (Anthropic, 2026)
With special thanks to Dr Sara Camacho Felix

Generative AI tools are now embedded in higher education, used to draft text, generate images, produce code, and synthesise research. Yet these tools are not neutral. They encode the assumptions, hierarchies, and exclusions of the data they were trained on, reproducing patterns of racial, gendered, and class-based harm even as their outputs become more sophisticated (Benjamin, 2019; Mohamed et al., 2020; Noble, 2018). Understanding how students engage with these tools, and what that engagement does or does not interrogate, is therefore an urgent task for educators.

The Critical AI Engagement Framework maps educational engagement with generative AI across two axes. The horizontal axis describes an individual’s epistemic posture toward AI: from treating its outputs as authoritative, through questioning them, to recognising the colonial and structural conditions that shape what AI knows, whose knowledge it centres, and what it silences (Hosseini, 2026; Maalsen, 2023). The vertical axis describes how far an individual recognises AI as a socio-technical artefact shaped by relations of power, rather than simply a tool with fixable flaws (Benjamin, 2019; Quijano, 2000; Zembylas, 2023). The positions individuals occupy are not freely chosen but reflect the institutional, curricular, and social conditions that shape how they learn and work. The framework’s aspiration is not a more capable individual user but collective action: sustained, community-grounded engagement that works toward structural change (Camacho Felix, 2025; Mohamed et al., 2020).

Horizontal axis (epistemic posture): epistemic deference (AI as neutral oracle) → critical interrogation (outputs questioned) → epistemic agency (whose knowledge?) → collective / relational (critique with others)

Row 1: Individualized engagement (AI as personal tool, neutral and apolitical)
The Uncritical Receiver
Accepts outputs; AI naturalized as neutral knowledge source
Theoretical anchor
Algorithmic coloniality (Mohamed et al., 2020): student treats AI outputs as objective, without recognizing that GenAI encodes "opaque, inconsistent cultural assumptions" shaped by historically racist and sexist training data (Keshishi & Hosseini, 2023; Benjamin, 2019).
For educators
Make the assumption of neutrality visible through practical experiments. Ask: whose perspective does a GenAI output reflect by default — in an image, a summary, a clinical recommendation, or a generated essay? Drawing on Day & Esson (2025) and Hosseini (2026), show how seemingly neutral prompts reproduce skewed cultural defaults across output types.
For students
The student experiences AI outputs as "natural" rather than constructed. As Benjamin (2019) argues, socio-technical artefacts are not static reflections — they are shaped by the feedback and values of those who built them. Students need a framework to see this, not just permission to question.
The Cautious Pragmatist
Checks outputs; AI still framed as neutral instrument
Theoretical anchor
The student audits outputs for factual errors but not for the cultural assumptions encoded in them. As Day & Esson (2025) show, even anomalous outputs require users to "remain vigilant to the opaque and shifting nature of generative AI tools" — vigilance the Cautious Pragmatist applies technically but not epistemically.
For educators
Shift from "is this accurate?" to "whose accuracy?" Introduce temporal dynamism (Kleinman, 2024): improved outputs — whether images, text, or code — do not mean underlying biases have been addressed. Use Spennemann & Oddone's (2025) technique of asking GenAI to explain its own outputs as a critical exercise.
For students
May believe that better prompting solves the problem. However, prompt refinement "would not address the underlying biases within the datasets themselves" (Hosseini, 2026). The student needs to move from refining inputs to interrogating the training data and the colonial logics embedded within it.
The Epistemically Alert
Interrogates whose knowledge is centered; notices silences
Theoretical anchor
Algorithmic coloniality (Mohamed et al., 2020): student recognizes that AI systems embed a "dominant, Eurocentric worldview" that upholds hierarchical, racialized, and gendered ways of knowing. Connects to Noble's (2018) algorithms of oppression and Maalsen's (2023) algorithmic epistemologies.
For educators
Move from naming bias to interrogating its origin. Use Quijano's (2000) coloniality of power to show that AI's racial and gender defaults are not errors but expressions of colonial hierarchies embedded in training data. Ask: what would an AI trained on non-Eurocentric datasets produce differently?
For students
May feel isolated, especially when institutional AI guidance frames the issue as a technical problem. Wilby & Esson's (2024) call for "capabilities, caveats, and criticality" provides legitimizing language. Connect to communities of practice doing this work.
The Isolated Disruptor
Critiques AI alone; change without solidarity
Theoretical anchor
Individual critique of algorithmic coloniality, however sophisticated, cannot address structural problems in proprietary and inaccessible training datasets (Amoore et al., 2024). Mohamed et al. (2020) are explicit: structural change requires "political coalitions and communities," not individual actors.
For educators
Connect students to collective and cross-disciplinary action. Addressing algorithmic coloniality requires breaking down "disciplinary and departmental silos" (Hosseini, 2026; Maalsen, 2023). Individual insight without structural leverage changes nothing about the datasets or systems producing harmful outputs.
For students
Risk of cynicism or disengagement when individual critique runs up against inaccessible, proprietary datasets and opaque systems. As Hosseini (2026) demonstrates, surface improvements in GenAI outputs can mask rather than resolve the underlying colonial logics — students need community and strategy, not just analysis.
Row 2: Partial structural awareness (senses bias or harm, lacks a systemic account)
The Uneasy Adapter
Senses something wrong; lacks language to name it
Theoretical anchor
Pre-conceptual awareness of algorithmic harm: student senses that something is "off" in AI outputs — perhaps noticing racial or gender skew — but has not yet encountered the theoretical vocabulary to name it. This is the moment described by Day & Esson (2025) when outputs produce "surprising results."
For educators
This is a threshold moment. Offer concepts — algorithmic coloniality (Mohamed et al., 2020), algorithms of oppression (Noble, 2018), socio-technical artefacts (Benjamin, 2019) — as language for what is already felt. Hosseini's (2026) method of prompting GenAI and critically analyzing outputs is a replicable pedagogical entry point adaptable across text, image, and code generation.
For students
High potential. Already doing affective critical work. Avoid rushing to resolution — the unease is epistemically productive. GenAI outputs should be approached "not [as] surprising, but as symptomatic of racialised and gendered logics" (Hosseini, 2026) embedded in training data across all output modalities.
The Informed Skeptic
Identifies bias in outputs; most common profile
Theoretical anchor
Can identify racial and gender skew in outputs — consistent with quantitative evidence (Cheong et al., 2024; Currie et al., 2024, 2025) — but frames it as a dataset problem rather than an expression of algorithmic coloniality (Mohamed et al., 2020). The systemic account is absent.
For educators
Move from "bias as glitch" to "bias as design." Use Benjamin's (2019, p. 59) argument that training datasets carry "the prejudices of the individuals who compiled them." Ask: why does a GenAI default encode particular assumptions about race, class, gender, or expertise — whether producing an image, drafting a clinical summary, or generating a curriculum resource?
For students
May believe that surface improvements — more realistic outputs, more diverse teams, better prompts — will resolve the issue. Hosseini (2026) demonstrates directly that successive GenAI model versions produced aesthetically improved outputs while reproducing the same racial and gendered logic. The technical fix does not address colonial logics in the training data.
The Structural Analyst
Names AI harms systemically; connects to power
Theoretical anchor
Understands AI as a socio-technical artefact (Benjamin, 2019) shaped by Silicon Valley's role as "part of the United States, a global hegemon and a successor to European colonial powers" (Keshishi & Hosseini, 2023). Connects algorithmic coloniality (Mohamed et al., 2020) to concrete outputs.
For educators
Deepen from analysis to action. Introduce reparative description (Parry, 2023): how might geographers work with public image repositories to revise false past categorizations? Introduce Zembylas's (2023) strategies for "undoing the ethics of digital neocolonialism."
For students
May become frustrated that structural analysis does not translate into change. Channel into cross-disciplinary collaboration. Addressing problematic training data requires collective action and "relational approaches that emphasise the spatial and political contexts of algorithms" (Maalsen, 2023; Hosseini, 2026). Analysis without community and outlet risks paralysis.
The Emerging Ally
Seeks solidarity; building shared critical vocabulary
Theoretical anchor
Transitional position between individual and collective consciousness (Freire). Recognises that critique must be collective but lacks the structural analysis to ground it yet.
For educators
Facilitate cross-disciplinary collaboration explicitly. Addressing algorithmic harm requires breaking "disciplinary and departmental silos" (Hosseini, 2026; Maalsen, 2023) — across education, geography, data science, and activism. Connect emerging allies to existing coalitions and communities of practice doing this work.
For students
Motivated by justice but may lack the analytical vocabulary to sustain critique under institutional pressure. Pairing with theoretically grounded peers — including those with lived experience of the harms being analyzed (cf. acknowledgments in Keshishi & Hosseini, 2023) — is more generative than educator-only support.
Row 3: Structural consciousness (AI as site of coloniality and harm)
Conscientized but Constrained
Sees the system; defers under institutional pressure
Theoretical anchor
Understands algorithmic coloniality and its harms but operates in institutional systems — curriculum, assessment, professional bodies — that have not caught up with the critique. Within many national contexts "there are nascent discussions on the ethical issues of using Gen AI technologies within tertiary education" (Hosseini, 2026) — the institutional conversation is beginning but remains uneven.
For educators
Name the institutional lag explicitly. Developing "algorithmic literacy as part of wider digital literacy initiatives" (Kong et al., 2023; Zembylas, 2023; Hosseini, 2026) is a growing expectation — the conversation is beginning, and students can actively contribute to shaping it rather than waiting for institutions to catch up.
For students
Risk of internalizing structural constraint as personal inadequacy. The student's tension is not a sign of failure — it is evidence of structural contradictions that institutions have not yet resolved. Validate the critique while building pathways to act within and against institutional constraints.
The Critical Refuser
Refuses metaphorical framing; acts on structural critique
Theoretical anchor
Tuck & Yang (2012): decolonization is not a metaphor. Student refuses cosmetic diversity framings and demands structural change to what AI produces and whom it serves.
For educators
Support with Mohamed et al.'s (2020) practical recommendations: identifying sites of coloniality in AI systems, understanding where and how algorithms are made, engaging in reparative description (Parry, 2023), and developing local and national policy challenges to colonial algorithmic logics.
For students
May encounter resistance from colleagues who frame AI critique as technophobia or obstructionism. Documentation and publication — as Hosseini (2026) demonstrates — transforms resistant practice into sharable pedagogical resource. Connect to communities doing this work across disciplines; the argument gains force collectively.
The Critical Collaborator
Challenges AI's epistemic order; builds alternatives
Theoretical anchor
Actively participates in co-creating "instructional materials that transcend boundaries" (Hosseini, 2026) — resources that make algorithmic coloniality visible and addressable across GenAI modalities. Draws on intersectionality (Crenshaw, 1991; Hill Collins, 2019) to hold race, gender, and class in simultaneous analysis rather than treating each as a separate problem.
For educators
Commission rather than assess. Meaningful critique of algorithmic coloniality requires centering those with lived expertise in the harms being analyzed — not as informants but as co-authors (Hosseini, 2026). This student's contribution should shape pedagogy, not merely illustrate it. Invite co-authorship, co-design, and co-delivery.
For students
Risk of co-option — being absorbed as institutional evidence of diversity without structural change. Hosseini's (2026) reflexive positioning — centering colleagues with lived expertise in racial and gender inequity — models how genuine co-production differs from performative consultation. Support students to name and resist this distinction.
The Praxis Collective (aspirational)
Reflection + action with others; pluriversal praxis
Theoretical anchor
Camacho Felix's (2025) decolonial imaginations and collective imagination — "unveiling different possibilities for addressing injustices" through relational, mutual aid. Mohamed et al.'s (2020) political coalitions. Benjamin's (2019) abolitionist tools for dismantling the New Jim Code in AI systems.
For educators
Collective praxis around GenAI requires institutional conditions: time, resource, partnership, and willingness to redistribute epistemic authority. It demands cross-disciplinary collaboration, reparative dataset work (Parry, 2023), and policy advocacy (Hosseini, 2026; Mohamed et al., 2020) — none of which individual pedagogy alone can produce. Educators must build the structures, not just model the position.
For students
Students here are co-researchers and co-educators. Hosseini (2026) models this directly: conducting experiments, publishing findings, and encouraging readers to replicate and extend the work with a critical eye. Sustain rather than assess — the goal is ongoing collective action that outlasts the course, not a demonstration of competence for a grade.
← epistemic deference · collective / relational agency →
Movement across these axes is non-linear — students may hold multiple positions simultaneously across different contexts and knowledge domains
Theoretical grounding
Horizontal axis: Mohamed et al. (2020) — algorithmic coloniality; Noble (2018) — algorithms of oppression; Maalsen (2023) — algorithmic epistemologies and situated knowledge  ·  Vertical axis: Benjamin (2019) — socio-technical artefacts encoding racial inequity; Zembylas (2023) — decolonial AI in HE; Quijano (2000) — coloniality of power; Camacho Felix (2025) — decolonial imaginations and collective action
Read More

Exploring ideas for decolonizing the curriculum using generative AI tools

In this post, I share some examples created by generative AI for decolonizing the curriculum. I also contextualize the examples by providing commentary from colleagues from the University of Glasgow Decolonising the Curriculum Community of Practice.

The master’s tools will never dismantle the master’s house.
— Audre Lorde


Decolonizing education is part of many university strategies, including at the university where I work. So it seemed natural to think about how generative AI tools might help university students and staff generate ideas for decolonizing the curriculum. However, we must remember that generative AI tools are created by those in nations that hold power over others. They are often built in former imperial nations that seek out and obtain cheaper labor in other parts of the world to train and ‘develop’ the tools further. Generative AI also has a significant environmental impact, which must be considered.


There are several caveats to using AI and generative AI generally, which I briefly outline below, drawing on Karen Hao’s article from July 2020:

  • ghost work

    • this is invisible labor provided by underpaid workers who are often in former US and UK colonies (among others)

  • beta testing

    • sometimes beta testing is used on more vulnerable groups; yes, this is unethical, but it does still happen

  • AI governance

    • think about who creates governance for AI; high-wealth nations and the Global North largely drive this at the expense of Global South nations

  • international social development

    • if we consider ‘AI for…’ initiatives, we have to consider who drives these and who the targets or recipients are

  • algorithmic discrimination and oppression

    • if we consider who creates algorithms, then we can begin to understand why some algorithms can portray racist, gendered, xenophobic imagery

Further reading

To understand the ethical issues of generative AI by using a decolonial lens, have a read of these:


Generative AI’s suggestions for decolonizing

For the following outputs, as shown in the GIF images below, I used the initial prompt:

I'm a lecturer and there is talk of decolonising the curriculum. I teach mathematics and statistics. What can I do to start decolonising my curriculum?

As we can see in the GIFs below, each generative AI tool appears to give some considered suggestions for how a lecturer in this particular area might go about decolonizing the curriculum they teach. Ideas such as incorporating more diverse views and Indigenous knowledges, and contextualizing what is being learned, are all general suggestions I might expect to find in a curriculum undergoing decolonization.

However, I wanted to see more detail and so I followed up with another prompt.

The follow-up prompt was designed to see what else generative AI might suggest. Interestingly, with insight from colleagues, plenty could be done and suggested to create a curriculum that undertakes decolonization within a specific context.

In this case, the lists seemed familiar and similar in some respects, and a bit different in others, in ways I couldn’t immediately pin down. The suggested names stem from ancient to modern times, albeit with a jump between the two! Some familiar names are there, but are there perhaps others that could be included?

Here is the prompt I used:

What are some prominent but overlooked non-Western scholars of mathematics and statistics?

Reflections from colleagues

Given that the example is from an area I’m not familiar with, I consulted some colleagues. Specifically, I consulted colleagues in the UofG Decolonising the Curriculum Community of Practice, who kindly provided their thoughts.

Soryia Siddique, whose background is in chemistry/pharmaceuticals/politics, provided the following:

My initial observation is that we ensure women of colour are represented in the materials. Perhaps a specific search around this.

BAME and Muslim women are underrepresented in many professions, including senior roles in Scotland, and are likely to experience systemic bias. Taking into consideration that Muslim women can experience racism, sexism, and Islamophobia. It is questionable whether media/society represents Muslim and BAME women's current and historical achievements.

They are also "missing” from Scotland’s media landscape.

In utilising AI, are we relying on data that is embedded in algorithmic bias and potentially perpetuating further inequality?

Soryia also suggested the following reading: The Movement to Decolonize AI: Centering Dignity Over Dependency from Stanford University’s Human-Centered Artificial Intelligence. It’s an interview with Sabelo Mhlambi, who describes the role of AI in colonization and how activists can counter this.

Samuel Skipsey, whose background is in physics and astronomy, also shared his thoughts:

The "list of important non-Westerners" is fairly comparable between the two - Bard is more biased towards historical examples and is pretty India-centric (with no Chinese or Japanese examples, notably), ChatGPT does a lot better at covering a wider baseline of "top hits" across the world (although given that "Nine Chapters on the Mathematical Art" doesn't have known authors - the tradition of the time it was written means that it probably had many contributions whose authorship is lost to history - I would quibble about it being a "scholar"). I note that this is still a Northern-Hemisphere centric list from both - although that's somewhat expected due to the problems citing material from pre-colonial Latin America, say. Still, it would have been nice to see some citation of contributions from Egypt, say.

In general, both lists are subsets of the list I would have produced by doing some Wikipedia diving.

The "advice on decolonising" is very high-level and tick-boxy from both. It feels like they're sourced from a web search (and, indeed, on an experimental search on DDG [DuckDuckGo] for "how can I decolonise my course" the first few hits all have a set of bullet points similar to those produced by the LLMs, which is unsurprising). To be fair to the LLMs, this is also basically what a lot of "how do I start decolonising" materials look like when produced by humans, so...

As Soryia notes, because the answers are quite generic there's a bunch of specific considerations that they don't touch on (they're not very intersectional - Hypatia turns up on both lists of non-Western scholars, doing a lot of heavy lifting as the only female name on either!)

Read More

Experimenting with generative AI: (re)designing courses and rubrics

In this post, I share some ideas for (re)creating courses and assessment rubrics as well as getting ideas for creative assessments using generative AI.

Experimenting for creating a course

I tried out Google Bard and chatGPT 3.5 to design courses and rubrics. In each case, being specific about what I wanted to see created was key. What this means is that when you are creating your prompt or query, you should be specific in terms of:

  • Context: e.g. state who you are or who you imagine yourself to be when creating the prompt

  • Audience: who is the audience of what you want to create? Students? Staff? Administrators? Management? The Public?

  • Purpose: in brief terms, what do you want to achieve?

  • Scope: similar to context, however, I see this as more focused, so ‘create a university level course on sociology’ is fine, but narrowing it down to ‘Year 1’, ‘Year 2’, etc. will focus the prompt and generate more tightly scoped examples.

  • Length: it’s always helpful to state the length of the proposed course or output. For example, are you asking for a draft of a 12-week course? A two-page maximum syllabus? A three-paragraph summary?
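
The five components above can be sketched as a small prompt-assembly helper. This is a minimal illustration of the structure only; the function name and wording are my own, not part of any real tool or API.

```python
# Illustrative sketch of the five-part prompt structure described above.
# All names here are my own; no real API is assumed.

def build_prompt(context, audience, purpose, scope, length):
    """Assemble a prompt from the five components:
    Context, Audience, Purpose, Scope, and Length."""
    parts = [
        f"Context: {context}",
        f"Audience: {audience}",
        f"Purpose: {purpose}",
        f"Scope: {scope}",
        f"Length: {length}",
    ]
    return "\n".join(parts)

# Example using the inorganic chemistry scenario from this post.
prompt = build_prompt(
    context="I am a lecturer who teaches university-level chemistry.",
    audience="Year 2 university students.",
    purpose="Create a new course on inorganic chemistry.",
    scope="A single 12-week course with 4 assignments.",
    length="A one-page course outline.",
)
print(prompt)
```

Writing the components out separately like this makes it easy to vary one element (say, Scope) while holding the others constant and comparing outputs.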

For this example, I used the following prompt…

I am a lecturer who teaches university-level chemistry. I wish to create a new course on inorganic chemistry for Year 2 university students. The course should be 12 weeks long and have 4 assignments. What might this look like?

Below are two GIFs showing chatGPT and Google Bard respectively.

NB: You may wish to select the images to see a larger version.

Brief reflections

I used a similar prompt for both generative AI tools. I decided to add an element of creativity, so I slightly changed the prompt when using Google Bard to get it to suggest creative assessments. I then went back to chatGPT to ask it to also suggest ideas for creative assessments within the context of this course.

They seem to produce similar results regarding this particular prompt. Both suggest an outline of a suggested course on inorganic chemistry; while Google Bard integrates the creative assessments into some of the topics, chatGPT predictably creates a list of suggested creative assessments as I had asked it after the initial prompt.

Interestingly, Google Bard also expands a bit at the end of the outline with further examples of non-written, creative assessments. chatGPT, on the other hand, gives some examples of ways of supporting learning and teaching after creating an example course outline. The creative assessments it lists are similar to Google Bard’s, although some differ, such as the quiz show example.

For transparency, I do not teach chemistry nor have I taught it. I have, however, supported those learning chemistry with their academic writing abilities, including writing lab reports and researching the topic. On the surface, the course looks coherent. However, I will leave that to those who teach chemistry!

What you can do

  • To replicate what I’ve done, copy and paste the prompt into your generative AI tool of choice.

  • Please note: you’ll likely get a slightly different response. I did not test each response again. That said, Google Bard automatically offers additional draft examples.


Creating assessment rubrics

Educators are often handed marking rubrics with little chance to develop or create their own. What this means is that when it comes to creating an assessment rubric, some educators may not have practical experience beyond what they have observed. In this case, generative AI can provide ideas and food for thought. This can be especially helpful for getting ideas for creative assessments that are still valid and rigorous while offering a suitable alternative to traditional assessments.

I ask generative AI tools to create assessment rubrics in the examples below. Remember: you need to give generative AI a context (e.g. you’re a lecturer teaching X), a specific request (e.g. you want to create an assessment rubric) and ensure the request has specific parameters (e.g. you provide your specific criteria for this rubric).

I am a lecturer. I wish to create a marking rubric for an essay-based assessment. The rubric should include the following criteria: criticality, academic rigor, references to research, style and formatting.

NB: You may wish to select the images to see a larger version.

Reflections

In both cases, I state my (imagined) role and the type of assessment I usually employ and ask the tools to suggest ideas with specific criteria included. In both cases, each generative AI tool creates a sample rubric based upon what I have asked it.

Both tools create a table in the form I would expect an assessment rubric to take. Each table includes the criteria and sample grade bands, with descriptor text that cross-references the criteria. What both generally do well is provide some sample descriptor text. However, you will need to tweak, modify and/or change the criteria to suit your specific, local context.
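
The table structure both tools produced can be sketched as a nested mapping of criteria to grade bands. This is an illustrative sketch only: the criteria come from my prompt, but the band names and descriptor text here are placeholders, not what either tool generated.

```python
# Sketch of a rubric as criteria crossed with grade bands,
# each cell holding descriptor text. Band names and descriptors
# are placeholders to be replaced with local wording.

criteria = ["criticality", "academic rigor",
            "references to research", "style and formatting"]
bands = ["excellent", "good", "satisfactory", "needs improvement"]

rubric = {
    criterion: {band: f"Descriptor for {criterion} at '{band}' level"
                for band in bands}
    for criterion in criteria
}

# Render as simple text for checking before tailoring to local needs.
for criterion, cells in rubric.items():
    print(criterion)
    for band, descriptor in cells.items():
        print(f"  {band}: {descriptor}")
```

Holding the rubric in a structure like this makes the tweaking step explicit: you edit descriptor cells one at a time rather than rewriting the whole table.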


Creating rubrics specific to your institution

If your institution often uses a general, overarching rubric, you can get generative AI to suggest sample rubrics based on it. This may, however, be difficult given how complex your institution’s rubric may be.

In the examples below, I ask chatGPT 3.5 and Google Bard respectively to create an example rubric based on Glasgow University’s 22-point marking system. This did, however, prove difficult!

Can you change the marking scale to a 22 point scale used at the University of Glasgow?

Reflections

The prompt above initially confused both generative AI tools. This could be because a 22-point scale differs from many scales out there. It could also be because I hadn’t provided specific context for the different bands. In this case, my suggestion is to ask chatGPT or Google Bard to create a rubric based on your marking criteria. You can then tailor the created sample rubric to your local needs.

As you can see, both tools got some areas right and others wrong.

What chatGPT did well:

  • it created a scale based on the criteria I provided

  • it included the marking bands, cross-referenced against the criteria

  • it included some basic descriptor text

What chatGPT can do better at:

  • the descriptor texts were wildly off compared with the example marking schemes

  • it struggled to capture the nuances between the marking bands

What Google Bard did well:

  • the descriptor text for each band more closely matches what I would expect to see

  • the marking bands are divided out nicely

  • the criteria are cross-referenced against marking bands

What Google Bard can do better at:

  • it’s hard to say what it can do better at right now given how it created a marking rubric based upon my query!

  • that said, the descriptor texts for each band would likely need some tweaking to match local styles


Getting ideas for creative assessments

As I noted earlier, you can use generative AI to get ideas for (more) creative assessments that aren’t traditional, written-based assignments. Traditional, written-only assignments are great for some things. However, there are other, more inclusive and creative ideas for assessments that you can use in your teaching, no matter the subject.

For this particular example, I draw upon my own area of expertise and subject area which lies at the intersections of education and sociology.

I teach a social sciences subject in university. Traditionally, we use written assessments such as essays and exams as assessments. What are some creative alternative assessments?

Reflections

In brief, similar to the first example on chemistry, both generative AI tools create a good range of creative and even collaborative assessments that you can use within your own context.

You may already use some of these, such as mind maps and portfolios. That said, there are a lot of good ideas that have been suggested that might be worth trying out. I would recommend co-creating these with students, especially if an idea appears new or innovative or out of your personal comfort zone as an educator. You may be surprised at how quickly your students take to becoming partners in learning and teaching.

Read More