
Iris or High-Risk?

3 things that are likely to go wrong if Iris comes to my classroom 


Iris is a humanoid: a robot designed to look and act like a human teacher, which Makerslab Edutech recently launched in a Kerala school. It uses artificial intelligence (AI) to speak with students, respond to their doubts, and teach in multiple languages. The launch revives the "AI in the classroom: boon or curse?" debate. While the answer is more complicated than a simple 'good' or 'bad,' this article focuses on three concerns about having an AI teacher.


1. Privacy 


Students’ Academic Data

Makerslab boasts of Iris's ability to build a personalized learning path for each student, assessing their strengths and weaknesses and tailoring classroom content to each child's specific needs. But how does this work? For Iris to 'remember' these individual needs, it must store data on each student. Human teachers hold this information too, so what is the difference? Unfortunately, as with any software, an unauthorized third party may hack into Iris's storage system and gain access to the students' data.
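To make the concern concrete, the mechanics of a personalized learning path can be sketched in a few lines. This is a hypothetical illustration only; Makerslab has not published Iris's internals, and the class, field names, and averaging scheme below are assumptions. The point is structural: each student's record bundles performance data with identifying details, which is exactly what makes a breach consequential.

```python
# Hypothetical sketch of a personalized-learning store; NOT Iris's actual design.
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    name: str                                   # personally identifiable information
    scores: dict = field(default_factory=dict)  # topic -> running average score (0..1)

    def record(self, topic: str, score: float) -> None:
        """Keep a running average so the tutor 'remembers' strengths and weaknesses."""
        prev = self.scores.get(topic)
        self.scores[topic] = score if prev is None else (prev + score) / 2

    def weakest_topic(self) -> str:
        """The topic the tutor would tailor the next lesson toward."""
        return min(self.scores, key=self.scores.get)

profile = StudentProfile(name="A. Student")
profile.record("fractions", 0.4)
profile.record("geometry", 0.9)
print(profile.weakest_topic())  # -> fractions
```

Note that the name and the scores live in the same record: leaking one leaks the other, which is the crux of the privacy argument below.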


A breach of such data, covering areas of academic or personal strength and weakness, may seem insignificant, but it leads us to a murky ethical situation: this data about the students is currently being stored without their consent.

Well, what if they simply obtained the student’s consent? 


While obtaining consent may be easy on paper, it is tricky, especially with younger students, to explain the implications of a data breach (or of their data being stored alongside personally identifiable information like their names, even in the absence of a breach). Further, even when parents understand how this works and consent to it, it is hard to determine whether that consent reflects what the child would truly want.


Personally Identifiable Data

While it may seem unfair to withhold the benefits of a humanoid teacher over philosophical considerations, there are also practical threats to privacy. It is currently unclear whether Iris will take on all the responsibilities of a classroom teacher, but if we assume it will, it is plausible that Iris could also store personally identifiable information about students or their parents, such as email addresses, phone numbers, and home addresses, for record-keeping, for sending progress reports to parents, or for staying available to students after hours, as many teachers do.

A breach, in this case, could lead to huge concerns for privacy, especially if the data leaked also included personal information on the students’ likes, dislikes, strengths, weaknesses, etc.


2. Artificial Intelligence 

There have been huge concerns over the rise of AI, and one reason is its inability to think like humans. A lot of information and ways of thinking that seem like 'common sense' to us are actually quite uncommon among AI systems. These are thought patterns we have picked up over decades of existence and centuries of evolution, making many of them subconscious. That makes it hard for developers to program them into a system, or even to realize they must be taught, since they are so intuitive to us. And because we learned these patterns over such a long time, it could take decades for Iris to pick them up through machine learning.


It is also possible that, while this learning is happening, Iris's logic or thought patterns contain fallacies.

For example, an AI system tasked with ridding our world of cancer may conclude that having all organisms capable of developing cancer go extinct would effectively remove cancer from Earth! While this is technically the system carrying out the command it was given, it is clearly not what the command intended. Obviously, this is a very extreme example - Iris is unlikely to kill us all - but it does bring to light the sort of incorrect thinking patterns that AI systems can develop.

This can lead to Iris answering a student's query incorrectly (which younger students may be unable to recognize), or potentially making dangerous classroom decisions, like believing a student who says "I can go home alone safely" or responding to "How do I make a bomb?" with step-by-step instructions.


3. Personality Development

Children learn by imitation. This is hardly a concern for older students, but for younger ones, interacting with a humanoid teacher for a large part of the day may affect personality development negatively. Critical skills like empathy, sympathy, human connection, and communication require students to have adult models to follow. Without such a model for the 6-8 hour school day, the majority of a child's waking hours, the student may struggle in these areas, or worse, adopt mannerisms that make socializing harder: for example, the blunt honesty expected from a robot, which can damage a human's social well-being.


Moreover, while clear data on Iris's mannerisms isn't available yet, its way of communicating will likely differ from a human's. A lack of sensitivity or delicate phrasing when speaking to children can create feelings of rejection or inadequacy, which at such a young age can have long-term, highly damaging effects on self-esteem.


4. However,

As we discuss Iris's potential failings, it is important to remember that keeping AI out of classrooms entirely will not solve these problems. It would also deprive us, as a society, of the benefits of AI teachers and of the advances in other fields that will undoubtedly follow as AI develops in one. As previously discussed, machine learning works by taking feedback on actions and incorporating it until the system gets it right: Iris learns by remaining in the classroom.
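The feedback loop described here can be sketched minimally. The sketch below is an illustrative assumption, not Iris's actual training method: the system attempts an answer, a human supervisor supplies a correction, and the correction is stored so the next attempt is right.

```python
# Minimal sketch of learning from feedback; illustrative only, not Iris's real algorithm.
def learn_from_feedback(system: dict, question: str, answer: str, correct: str) -> None:
    """Store the supervisor's correction so the system answers correctly next time."""
    if answer != correct:
        system[question] = correct  # the 'feedback' step: overwrite the wrong answer

knowledge = {"capital of Kerala?": "Kochi"}  # an initial, mistaken belief
attempt = knowledge["capital of Kerala?"]
learn_from_feedback(knowledge, "capital of Kerala?", attempt, "Thiruvananthapuram")
print(knowledge["capital of Kerala?"])  # -> Thiruvananthapuram
```

The loop only improves while the system keeps receiving corrections, which is precisely the argument for keeping Iris in a classroom, supervised.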


So how do we prevent these disadvantages without sabotaging the future benefits of AI by not allowing Iris in the classroom?

I propose having Iris learn and help students under the guidance of a human teacher. This way we can reap the benefits Iris offers now, build a stronger AI with greater benefits for the future, and prevent the harms it may cause today.


This still leaves a pressing concern: which classrooms actually need Iris? Do private schools that already have enough human teachers require artificial intelligence? Would the technology serve the Indian economy better if allocated to public schools in rural areas facing teacher shortages? Iris's advantages and disadvantages depend not only on its functionality but also on where and how it is deployed. As we continue to develop AI, we must think about more than the technology itself: we must consider its impacts and how we can use it to help those most in need.



Written by Jhankaar Purohit




