“While educational technology can have positive impacts on academic achievement, a lack of attention to the complexities of educational settings and broader social contexts often causes initiatives to perform poorly and exacerbate existing inequities across lines of race, class, and gender” (Christopher Wegemer, “Brain-computer interfaces and education”)

Over reading break, I did the unthinkable — I took a break. A much-needed break. Yes, I did some work, but I took actual full days to do no work at all. That time off mostly consisted of sitting in my aunt and uncle’s hot tub, listening to podcasts about space. One morning, though, they were watching an episode of The Passionate Eye on CBC about neuro implants. The episode is titled “I Am Human,” but it is essentially about how neuroscientific advancements are allowing us to augment the physicality of what being human means (I highly recommend watching it when/if you have a free hour!).

Three people are profiled in the episode: 1) Anne, a woman with Parkinson’s disease who has had a deep brain stimulator implanted to mitigate her symptoms, 2) Stephen, a man who lost his sight and has received an implant that, paired with the Argus glasses, should help him “see” again, and 3) Bill, a man paralyzed from the neck down who receives an implant to help him move by thinking. These implants were designed not with the intention of “correcting” these humans’ humanity, but of offering them things (mainly independence) that would make them feel more human in a world that treats them as lesser because of their disabilities.

The ways we address difference often say a lot about how we feel about difference. In schools, how we address difference has a significant impact on the learners within our communities. How we approach difference can either affirm learners’ humanity or dehumanize them.

Having watched this episode so recently, I found that our discussion today about assistive technology in education got me thinking about what assistive or adaptive technology looks like now but, more than that, what it might look like in the future and how this might change (or reflect change in) what it means to educate and what it means to be human.

The episode focused on the amazing scientific discoveries that led to the possibility of neuro implants and prostheses, but it also talked about the future dangers of neurological implant technology. The impression that I got is that this type of technology could, essentially, become the next version of eugenics. Will we get to a place where we decide that people with disabilities will be forced to accept corrective technologies? What other dark possibilities are there for something the past couldn’t foresee that is now coming into existence without any established codes of ethics, regulations, or broad social understandings of how to use new technology equitably?

Thinking about such dark possibilities must occur alongside consideration of the positive ones. Look at how the Internet has turned out: we couldn’t predict what it would be like or how it would evolve, and we have managed it poorly, reacting to harms rather than predicting and pre-empting them.

Will assistive technology in the classroom always be external to students, or will it follow the direction of such medical applications of neuroscience and neuro implants or interfaces? Will it combine both — providing wearable technologies that still interface directly with brain activity? Will these become a critical component of assistive technologies in classrooms?

That would certainly make assistance easier. Bypassing the body to reach the brain directly would create a closer connection between the learner who needs assistance and both the material and the machine being accessed, allowing thought to translate directly into action.

But what would this direct connection between thought and action, without a translation through manual manipulation, mean for education?

One teacher posits that, essentially, it means we need to work more and more on children’s thinking — namely, teaching them how to intentionally control and direct their thinking so it isn’t haphazardly applied when they meet a brain-machine interface in the future.

More specifically, brain interfaces mean that we may be able to access and understand the emotional and cognitive states/functioning of our students better, which means understanding their needs better. Moreover, machine learning and artificial intelligence mean that we will likely be able to understand not only these states or this functioning but, more importantly, know how best to respond to them.

There are plenty of other possibilities. But, as with any other technology and its relationship with the classroom, caution is necessary. This is true not only because of the dark possibilities of new technology mentioned above, but because certain considerations need to be addressed before any technology is introduced to a classroom. Often, “technology is mistakenly treated as a panacea for educational challenges” (Wegemer). However, complex systems and relationships within classrooms — those of social class, gender, ability, etc. — influence the potential applications and efficacy of technology in the classroom. With technologies that are not well understood, whether publicly or by individual teachers specifically, enthusiasm often outpaces the knowledge needed to implement them properly. This can lead to poor outcomes.

Some of the questions that need to be asked, then, are not just “how can these neuro technologies be used in the classroom?” but, more importantly: Who will train teachers to use them effectively? How can they produce equitable outcomes? How will curricula make room for them? Who will be around to help when things go wrong? How will policy support their effective and equitable use? How will we develop a framework for ethical use? Who will enforce that ethical framework, and how? Will those who enthusiastically support such tech be willing to stop using it if its efficacy can’t be proven? (Wegemer) And so many others.

Perhaps the most important question is one that relates to the ethics discussed in “I Am Human” and the ethics of current use of assistive technology in classrooms: are we using assistive technology to create a more level playing field for the disadvantaged, or are we using it to erase difference by correcting deficiency?

Then another question comes to mind: if brain-machine interfaces are adopted and prove effective, will they make formal education obsolete?