Exploring ideas for decolonizing the curriculum using generative AI tools

The master’s tools will never dismantle the master’s house.
— Audre Lorde

In this post, I share some examples created by generative AI for decolonizing the curriculum. I also contextualize the examples by providing commentary from colleagues from the University of Glasgow Decolonising the Curriculum Community of Practice.

Decolonizing education is part of many university strategies, including at the university where I work. So it seemed natural to ask how generative AI tools might help university students and staff come up with ideas for decolonizing the curriculum. However, we must remember that generative AI tools are built by those in nations that hold power over others: they are often created in former imperial nations that seek out cheaper labor in other parts of the world to train and ‘develop’ the tools further. Generative AI also has a significant environmental impact, which must be considered.

AI and ethical considerations: coloniality of…

There are several caveats to using AI and generative AI generally, which I briefly outline below, drawing on Karen Hao’s article from July 2020:

  • ghost work

    • this is invisible labor provided by underpaid workers who are often in former US and UK colonies (among others)

  • beta testing

    • beta testing is sometimes carried out on more vulnerable groups; yes, this is unethical, but it still happens

  • AI governance

    • think about who creates governance for AI; high-wealth nations and the Global North largely drive this at the expense of Global South nations

  • international social development

    • if we consider ‘AI for…’ initiatives, we have to consider who drives these and who the targets or recipients are

  • algorithmic discrimination and oppression

    • if we consider who creates algorithms, then we can begin to understand why some algorithms can produce racist, gendered, and xenophobic imagery

Further reading

To understand the ethical issues of generative AI by using a decolonial lens, have a read of these:


Generative AI’s suggestions for decolonizing

For the following outputs, as shown in the GIF images below, I used the initial prompt:

I'm a lecturer and there is talk of decolonising the curriculum. I teach mathematics and statistics. What can I do to start decolonising my curriculum?

As we can see in the GIFs below, each generative AI tool appears to give some considered suggestions for how a lecturer in this particular area might go about decolonizing the curriculum they teach. Ideas such as incorporating more diverse views and Indigenous knowledges, and contextualizing what is being learned, are all general suggestions that I might expect to find in a curriculum undergoing decolonization.

However, I wanted to see more detail and so I followed up with another prompt.

The follow-up prompt was designed to see what else generative AI might suggest. Interestingly, as my colleagues’ reflections below show, there is plenty more that could be suggested for decolonizing a curriculum within a specific context.

In this case, the two lists seemed familiar and similar in some respects, yet a bit different in others in ways I couldn’t immediately pin down. The suggested names range from ancient to modern times, albeit with a jump between the two! Some familiar names are there, but are there perhaps others that could have been included?

Here is the prompt I used:

What are some prominent but overlooked non-Western scholars of mathematics and statistics?

Reflections from colleagues

Given that the example is from an area I’m not familiar with, I consulted some colleagues. Specifically, I asked colleagues in the UofG Decolonising the Curriculum Community of Practice, who kindly provided their thoughts.

Soryia Siddique, whose background is in chemistry/pharmaceuticals/politics, provided the following:

My initial observation is that we ensure women of colour are represented in the materials. Perhaps a specific search around this.

BAME and Muslim women are underrepresented in many professions, including senior roles in Scotland, and are likely to experience systemic bias, especially considering that Muslim women can experience racism, sexism, and Islamophobia. It is questionable whether media/society represents Muslim and BAME women's current and historical achievements.

They are also “missing” from Scotland’s media landscape.

In utilising AI, are we relying on data that is embedded in algorithmic bias and potentially perpetuating further inequality?

Soryia also suggested the following reading: The Movement to Decolonize AI: Centering Dignity Over Dependency from Stanford University’s Human-Centered Artificial Intelligence. It’s an interview with Sabelo Mhlambi, who describes the role of AI in colonization and how activists can counter this.

Samuel Skipsey, whose background is in physics and astronomy, also shared his thoughts:

The "list of important non-Westerners" is fairly comparable between the two - Bard is more biased towards historical examples and is pretty India-centric (with no Chinese or Japanese examples, notably), ChatGPT does a lot better at covering a wider baseline of "top hits" across the world (although given that "Nine Chapters on the Mathematical Art" doesn't have known authors - the tradition of the time it was written means that it probably had many contributions whose authorship is lost to history - I would quibble about it being a "scholar"). I note that this is still a Northern-Hemisphere centric list from both - although that's somewhat expected due to the problems citing material from pre-colonial Latin America, say. Still, it would have been nice to see some citation of contributions from Egypt, say.

In general, both lists are subsets of the list I would have produced by doing some Wikipedia diving.

The "advice on decolonising" is very high-level and tick-boxy from both. It feels like they're sourced from a web search (and, indeed, on an experimental search on DDG [DuckDuckGo] for "how can I decolonise my course" the first few hits all have a set of bullet points similar to those produced by the LLMs, which is unsurprising). To be fair to the LLMs, this is also basically what a lot of "how do I start decolonising" materials look like when produced by humans, so...

As Soryia notes, because the answers are quite generic there's a bunch of specific considerations that they don't touch on (they're not very intersectional - Hypatia turns up on both lists of non-Western scholars, doing a lot of heavy lifting as the only female name on either!)

Experimenting with generative AI: (re)designing courses and rubrics

In this post, I share some ideas for (re)creating courses and assessment rubrics as well as getting ideas for creative assessments using generative AI.

Experimenting for creating a course

I tried out Google Bard and ChatGPT 3.5 to design courses and rubrics. In each case, being specific about what I wanted created was key. What this means is that when you are writing your prompt or query, you should be specific in terms of the following (see the sketch after this list for one way to put these elements together):

  • Context: e.g. state who you are or who you imagine yourself to be when creating the prompt

  • Audience: who is the audience of what you want to create? Students? Staff? Administrators? Management? The Public?

  • Purpose: in brief terms, what do you want to achieve?

  • Scope: similar to context; however, I see this as more focused. So ‘create a university-level course on sociology’ is fine, but narrowing it down to ‘Year 1’, ‘Year 2’, etc. will focus the prompt and generate more tightly scoped examples.

  • Length: it’s always helpful to state the length of the proposed course or output. For example, are you asking for a draft of a 12-week course? A two-page maximum syllabus? A three-paragraph summary?
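To make these elements concrete, here is a minimal sketch (my own illustration, not a feature of either tool) that treats the five elements as fields of a small template and assembles them into a single prompt. The field values echo the chemistry prompt used below:

```python
# A minimal sketch: the five prompt elements as a small template.
# The example values are illustrative, echoing the chemistry prompt below.
from dataclasses import dataclass


@dataclass
class PromptSpec:
    context: str   # who you are, or imagine yourself to be
    audience: str  # who the output is for
    purpose: str   # what you want to achieve
    scope: str     # how narrowly the request is focused
    length: str    # the size of the output you expect

    def to_prompt(self) -> str:
        return (
            f"{self.context} {self.purpose} "
            f"The audience is {self.audience}. "
            f"The scope is {self.scope}. "
            f"Please produce {self.length}."
        )


spec = PromptSpec(
    context="I am a lecturer who teaches university-level chemistry.",
    purpose="I wish to create a new course on inorganic chemistry.",
    audience="Year 2 university students",
    scope="a 12-week course with 4 assignments",
    length="a one-page course outline",
)
print(spec.to_prompt())
```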

For this example, I used the following prompt…

I am a lecturer who teaches university-level chemistry. I wish to create a new course on inorganic chemistry for Year 2 university students. The course should be 12 weeks long and have 4 assignments. What might this look like?

Below are two GIFs showing ChatGPT and Google Bard respectively.

NB: You may wish to select the images to see a larger version.

Brief reflections

I used a similar prompt for both generative AI tools. I decided to add an element of creativity, so I slightly changed the prompt when using Google Bard to get it to suggest creative assessments. I then went back to ChatGPT to ask it to also suggest ideas for creative assessments within the context of this course.

The two tools produce similar results for this particular prompt. Both suggest an outline of a course on inorganic chemistry; while Google Bard integrates the creative assessments into some of the topics, ChatGPT predictably creates a list of suggested creative assessments, as I had asked it to after the initial prompt.

Interestingly, Google Bard also expands a bit at the end of the outline with further examples of non-written, creative assessments. ChatGPT, on the other hand, gives some examples of ways of supporting learning and teaching after creating an example course outline. The creative assessments it lists are broadly similar to Google Bard’s, though some differ, such as the quiz-show example.

For transparency, I do not teach chemistry nor have I taught it. I have, however, supported those learning chemistry with their academic writing abilities, including writing lab reports and researching the topic. On the surface, the course looks coherent. However, I will leave that to those who teach chemistry!

What you can do

  • To replicate what I’ve done, copy and paste the prompt into your generative AI tool of choice (or script it, as sketched below).

  • Please note: you’ll likely get a slightly different response. I did not test each response again. That said, Google Bard automatically offers additional draft examples.
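If you would rather script this than paste prompts into a web interface, something like the sketch below would work. It assumes the official OpenAI Python SDK (openai>=1.0) with an OPENAI_API_KEY environment variable set; the model name is only an example, and I use OpenAI here simply because it has a well-known Python SDK.

```python
# A hedged sketch: send the course-design prompt via the OpenAI Python SDK.
# Assumes `pip install openai` (v1.0+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I am a lecturer who teaches university-level chemistry. "
    "I wish to create a new course on inorganic chemistry for Year 2 "
    "university students. The course should be 12 weeks long and have "
    "4 assignments. What might this look like?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

As with the web interfaces, running this twice will almost certainly give two different course outlines.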


Creating assessment rubrics

Educators are often handed marking rubrics with little chance to develop or create their own. What this means is that when it comes to creating an assessment rubric, some educators may not have practical experience beyond what they have observed. In this case, generative AI can provide ideas and food for thought. This can be especially helpful for getting ideas for creative assessments that are still valid and rigorous while offering a suitable alternative to traditional assessments.

I ask generative AI tools to create assessment rubrics in the examples below. Remember: you need to give generative AI a context (e.g. you’re a lecturer teaching X), a specific request (e.g. you want to create an assessment rubric), and to ensure the request has specific parameters (e.g. you provide your specific criteria for this rubric).

I am a lecturer. I wish to create a marking rubric for an essay-based assessment. The rubric should include the following criteria: criticality, academic rigor, references to research, style and formatting.

NB: You may wish to select the images to see a larger version.

Reflections

In both cases, I state my (imagined) role and the type of assessment I usually employ, and ask the tools to suggest ideas with specific criteria included. Each generative AI tool then creates a sample rubric based upon what I have asked of it.

Both tools create the kind of table I would expect an assessment rubric to look like. Each table includes the criteria and sample grade bands, with descriptor text cross-referenced to the criteria. What both generally do well is provide some sample descriptor text. However, you will need to tweak, modify and/or change the criteria for your specific, local context.


Creating rubrics specific to your institution

If your institution often uses a general, overarching rubric, you can get generative AI to suggest sample rubrics based on it. This may, however, be difficult, depending on how complex your institution’s rubric is.

In the examples below, I ask ChatGPT 3.5 and Google Bard respectively to create an example rubric based on Glasgow University’s 22-point marking system. This did, however, prove difficult!

Can you change the marking scale to a 22 point scale used at the University of Glasgow?

Reflections

The prompt above initially confused both generative AI tools. This could be because a 22-point scale differs from many scales out there; it could also be because I hadn’t provided specific context for the different bands. In this case, my suggestion is to ask ChatGPT or Google Bard to create a rubric based on your own marking criteria, including the bands themselves. You can then tailor the created sample rubric to your local needs.
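To illustrate what providing that context might look like, here is a rough sketch that folds the marking bands into the prompt itself. The band names and descriptors below are illustrative placeholders of my own, not the University of Glasgow’s official wording; you would substitute your institution’s published scale:

```python
# A sketch of giving the model your marking bands up front, rather than
# expecting it to know an institution-specific scale. The two descriptors
# below are illustrative placeholders, not official University of Glasgow text.
bands = {
    "A1 (22 points)": "exemplary work that is outstanding on every criterion",
    "B2 (16 points)": "very good work with minor weaknesses",
    # ...continue with the remaining bands from your institution's scale
}

criteria = [
    "criticality",
    "academic rigor",
    "references to research",
    "style and formatting",
]

band_text = "\n".join(f"- {band}: {desc}" for band, desc in bands.items())
prompt = (
    "I am a lecturer. Create a marking rubric for an essay-based assessment "
    f"using these criteria: {', '.join(criteria)}. "
    f"Use the following marking scale:\n{band_text}"
)
print(prompt)
```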

As you can see, both tools got some areas right and others wrong.

What ChatGPT did well:

  • it created a scale based on the criteria I provided

  • it included the marking bands, cross-referenced against the criteria

  • it included some basic descriptor text

What ChatGPT can do better at:

  • the descriptor texts were wildly off compared with the example marking schemes

  • it struggled to capture the nuances between the marking bands

What Google Bard did well:

  • the descriptor text for each band more closely matches what I would expect to see

  • the marking bands are divided out nicely

  • the criteria are cross-referenced against marking bands

What Google Bard can do better at:

  • it’s hard to say what it can do better at right now given how it created a marking rubric based upon my query!

  • that said, the descriptor texts for each band would likely need some tweaking to match local styles


Getting ideas for creative assessments

As I noted earlier, you can use generative AI to get ideas for (more) creative assessments that aren’t traditional written assignments. Traditional, written-only assignments are great for some things. However, there are other, more inclusive and creative assessment ideas that you can use in your teaching, no matter the subject.

For this particular example, I draw upon my own subject area and expertise, which lie at the intersection of education and sociology.

I teach a social sciences subject in university. Traditionally, we use written assessments such as essays and exams as assessments. What are some creative alternative assessments?

Reflections

In brief, and similar to the first example on chemistry, both generative AI tools create a good range of creative and even collaborative assessments that you can use within your own context.

You may already use some of these, such as mind maps and portfolios. That said, there are a lot of good ideas that have been suggested that might be worth trying out. I would recommend co-creating these with students, especially if an idea appears new or innovative or out of your personal comfort zone as an educator. You may be surprised at how quickly your students take to becoming partners in learning and teaching.
