Zenobia Gawlikowska, senior software engineer and accessibility expert at Sanoma Learning, has witnessed many paradigm shifts in the tech industry throughout her career. Starting her programming journey as a teenager, Zenobia's first program was published in 1989, when she was only 14 years old. Over the decades, she has seen the emergence of various technologies, including the rise of the Internet. “Through the years, I've seen what turned out to be revolutionary and what did not. I certainly see AI as one of the things that is going to reshape the future. That's definitely one of them”, affirms Zenobia.
Zenobia's passion for accessibility led her to Sanoma Learning, where she focuses on implementing accessible and inclusive solutions for all students. “This is something that's important to me emotionally, so that my work has meaning and helps specific people overcome their difficulties.” As a transgender person, she is also deeply invested in the topic of diversity.
In this interview, Zenobia shares great insights into AI, accessibility, and diversity, contributing to the ongoing conversation about the future of education.
SL: In your opinion, how is AI shaping the future of education?
Zenobia: There are two very polarised opinions on this topic at the moment. Enthusiasts, including myself, see the possibilities of reshuffling and reshaping knowledge to suit students' learning needs. This is extremely important because people have different learning styles. AI allows us to create content that is truly tailored to each specific user. Personally, I find it beneficial as I’m always trying to learn new things.
As a practical example, I have a minor learning disability called dyscalculia*, and AI can help explain mathematics in different ways. This is very useful for me, and I can see it being useful for other neurodiverse conditions too, presenting content in alternative ways that suit individual learning needs.
However, there are also challenges with these tools. People tend to over-rely on AI, treating it like a search engine without questioning its output. AI hallucination is a major problem, and misuse of AI can amplify misinformation, which is challenging in our post-truth world. There are technical ways to mitigate this, but people may not be aware of them and might use AI naively, treating the information they receive as truth. AI is a starting point for discovery and questioning, not a source of absolute truth. It is indeed a powerful tool, but its use must always be overseen by humans and rely on our critical input.
*Dyscalculia: a cognitive disability affecting the ability to process numbers and sequences.
SL: At Sanoma Learning, inclusive learning and the ethical use of AI are high on the agenda. We aim to create a positive impact on society and ensure trustworthy content for our students and teachers. How does our responsible AI approach support students from different backgrounds?
Zenobia: Sanoma puts a very strong emphasis on high-quality content. This is our competitive advantage and the reason why a lot of thought has been put into generative AI. Here, we take time to analyze the potential consequences and implications of using AI. A lot can be achieved by processing human-generated content into different forms while ensuring it maintains the same quality. The source content is created by humans to ensure quality, and humans remain in control of its creation at all times, as the ultimate judges of what we publish.
SL: How can AI support the creation of more inclusive and accessible educational learning solutions?
Zenobia: The form in which learning content is presented can impact a student's ability to absorb it. Some students struggle enormously with attention span and overstimulation. While some conclude that it is necessary to eliminate all distractions, the other perspective is to use technology and knowledge about neurodiversity to help students focus better on their coursework.
For example, if you open a chat window and discuss a piece of knowledge, you can upload a document from the course and then discuss it as if it were a person. Some students might start talking about tangential topics, but this interface can be programmed to gently push them back towards the main topic. This helps them control problems with cognitive focus. This is just one example of how AI could help.
For students from different backgrounds, there are a lot of opportunities in individualized courses and learning paths. Instead of one course for everyone, there will be one course per person, suited to their individual learning skills and abilities. This is a fantastic development that will break many barriers to learning.
SL: What is bias in AI, and why is the discussion around this topic relevant? Do you have practical examples of how the lack of inclusion in technology has impacted you?
Zenobia: Bias reinforcement is the biggest challenge, not only in AI but in society in general. This bias didn't come from nowhere; it came from culture, from people, and it was encoded in all the millions of data points. This impacts how people are evaluated based on their gender, race, other characteristics, or the language they're using. The same knowledge could be seen completely differently. There are some ideas on how to tackle this issue, and the European Union Artificial Intelligence Act sets rules that prevent critical decisions from being made by AI without human assistance. AI definitely needs human oversight, because bias is by nature hidden.
Training AI only on typical situations is not inclusive of people whose situations are not typical. For instance, when I was traveling and had my body scanned at an airport, my body parameters were not what the machine was expecting, and it flagged me as a potential danger when that was not the case. Another example is how hard it is to change the gender marker in my country. I need to go through lots of trials and legal hurdles, and my medical data is attached to that gender marker. Even though I have been taking hormones for about five years and my body chemistry is now aligned with female levels, machines still evaluate my results using the male baseline, leading to incorrect diagnoses. My endocrinologist and I can correct this, but without human involvement, I would be stuck with automated misinterpretations.
The solution from a technological point of view would be to train the system on minority information. This is not only a problem for transgender people but also for people with disabilities, as disabilities are a minority issue too. It's important to actively combat this by training AI tools specifically on accessible code. At Sanoma Learning, we created documentation access in our AI tools via MCP (Model Context Protocol) servers*, which provide detailed instructions about the reputable sources AI agents should consider in their answers.
*MCP servers: Model Context Protocol tools that enable AI agents to access external systems.
SL: What is the importance of having a diverse team when developing AI-powered solutions?
Zenobia: Having diverse individuals brings diverse points of view, including things that a typical or cisgender person would never think about because they never had these problems. If you have never faced a situation, you cannot conceptualize it; you just don't see the world that way. Having different points of view makes the solution richer because it takes many more things into account. Although these points of view come from a minority, they are very important for making products accessible and inclusive.
SL: June is Pride Month, dedicated to celebrating LGBTQIA+ pride and reinforcing the fight for equal rights and opportunities. What does this date mean to you?
Zenobia: As a transgender person, it's important to me, especially in a year like this one where our rights are under attack in many countries. It's important to make a stand and show that we're here, and we're not going to go away: our rights matter, and we're not going to go back on defending them. This is a symbolic thing, but it is very relevant.
SL: What advice would you give to those working with AI who want to make the process and outcomes more inclusive?
Zenobia: To make it more inclusive, it's necessary to train models on inclusive data. For example, through prompt engineering, you can provide context and craft your prompts to take minority information into account. For accessibility and supporting neurodiverse students, it's absolutely crucial to get this right. You could also use AI to audit your writing for problems from that point of view, or to rephrase it.
Context also matters in my experience of using AI. You cannot just ask it to do something, because it will use generalized data from its training, which might not be relevant to what you're trying to accomplish. You need to select the source material very carefully, feed it into the AI project, and then, based on that project, which contains all of this base information, ask your questions and formulate your exact expectations.
This interview is part of a series of texts about AI published in June. Check our channels to find out more about AI in education and inclusive learning.