Artificial Intelligence is reaching education; it is no longer science fiction. The use of large amounts of data (Big Data) to guide student learning is already part of real experiments. This is a pivotal moment in the history of education: who will control the data? What will it be used for? What benefits and what risks will it bring? Who will be the winners and losers? Many crucial questions are opening up. Here we will look at two cases that help to frame these questions and to venture possible answers.
The first path for Big Data is the AltSchool model, already reviewed in this blog. AltSchools are all the rage in the world of EdTech. Their creator is Max Ventilla, whose years at Google gave him a frontier vision of the use of predictive algorithms.
AltSchools are micro-schools, very small, with fewer than 150 students each. Their central ambition is to become the largest educational revolution of the 21st century. That is why they invest 10% of their budget (boosted by US$133 million in recently raised capital, with Mark Zuckerberg of Facebook as the main investor) in research and development.
What AltSchools do today is a laboratory for what, according to Ventilla, will be a new model of schooling and educational systems over the next 30 years.
What the AltSchool is experimenting with is the most extreme model of student observation in the service of personalized teaching. It includes tracking all of the students' interactions inside the schools with cameras and microphones, using software for body and facial movement tracking and voice recognition.
Monitoring the expressions on students' faces allows an analysis of each class: the classroom lighting changes automatically, giving a signal when the students' noise becomes too loud. The teacher is assisted by the machines in managing the class and is evaluated by the level of attention and interest of the students.
In parallel, the AltSchools maintain a learning "playlist" for each student, with various personalized digital activities.
Facial recognition based on footage of the students (see here the EngageSense system, its most advanced prototype) feeds data to a computer that uses algorithms to measure each student's engagement with the task and to suggest activities to teachers. To summarize (and simplify): if students stop paying attention, a computer notifies the teacher.
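The logic of such a notifier can be sketched as a simple threshold rule. This is a toy illustration, not EngageSense's actual algorithm; the scores, thresholds, and function name are all hypothetical:

```python
def should_alert(engagement_scores, threshold=0.5, min_fraction=0.4):
    """Hypothetical rule: flag the class for the teacher when the share
    of low-engagement students exceeds min_fraction.

    engagement_scores: per-student scores in [0, 1], as a camera-based
    system might produce from facial analysis.
    """
    low = sum(1 for score in engagement_scores if score < threshold)
    return low / len(engagement_scores) > min_fraction

# Three of six students score below 0.5, so 0.5 > 0.4 and the
# teacher would be notified.
alert = should_alert([0.9, 0.3, 0.2, 0.8, 0.4, 0.7])
```

The real system would of course derive the scores from video rather than take them as given; the point is only that a continuous stream of engagement measurements reduces, at the teacher's end, to a simple decision rule.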
This first model of experimental use of Big Data in education opens two big ethical questions: what happens to the privacy of the students? And who can access this frontier technology?
The first issue faces large legal gaps, as analyzed in this note. Big Data is almost completely unregulated all over the world. Who has the right to film and analyze the facial expressions of our children? What are the consequences of knowing that one is being filmed all the time? Isn't this educational model the threat of a Big Brother-style panopticon, of absolute surveillance that can change students' personalities and relationships and threaten their privacy?
The second question concerns equity. AltSchools are elite schools born in Silicon Valley: they cost roughly $30,000 per year to attend. If they really deliver results, if they achieve the personalization of teaching through machines, won't they widen the social gap through exponential educational changes that benefit the privileged?
The other path for Big Data in education has a very different orientation, but it shares the component of radical innovation made possible by algorithm-based advances in Artificial Intelligence. It involves using large amounts of data to analyze how an educational system functions, detect inequalities, and act to reduce them.
A revealing example of these possibilities was systematized by a group of Chilean researchers who won the IDB's "New debates, Data for development" contest. The study was titled "Supporting the formulation of public policies and decision making in education using techniques of massive data analysis: the case of Chile".
Using data science techniques, the researchers analyzed open data published by the Chilean government on the educational offering, social contexts, and various indicators of educational performance. One axis of the analysis was to study where students lived relative to the school they attended, and even to measure the time it took to get to school via different transportation routes.
A map of these characteristics is a powerful predictor of school dropout, since it captures the distance between each student and each school. With these data, the creation of new schools (or transport systems) could be planned in detail, addressing demands and injustices that would remain invisible without this level of disaggregated information.
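The basic building block of such a map is a distance calculation between each student's home and each school. A minimal sketch, assuming only geographic coordinates (the place names and coordinates below are made up for illustration; the actual study also used travel times along transport routes, which requires routing data beyond this example):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius in km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical example: a student's home and two candidate schools.
home = (-33.45, -70.66)
schools = {
    "School A": (-33.44, -70.65),
    "School B": (-33.52, -70.60),
}
nearest = min(schools, key=lambda s: haversine_km(*home, *schools[s]))
```

Aggregating such distances over an entire enrollment database is what turns individual records into a planning map: blocks where the nearest school is far away become candidates for new schools or transport routes.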
The study built a dropout prediction model that initially fed 127 attributes of students, schools, and the blocks where they live or are located into a machine learning algorithm. These attributes were narrowed down to 31, among which the significant variables included the vulnerability of the school and the students' coexistence, participation, self-esteem, and motivation.
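The reduction from 127 attributes to 31 is a feature-selection step. The study's exact method is not described here, so as a stand-in the sketch below ranks attributes by the absolute value of their correlation with the dropout label and keeps the top k; the tiny dataset and attribute names are invented for illustration:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def top_features(rows, labels, names, k):
    """Rank attributes by |correlation| with the label and keep the top k."""
    scores = {name: abs(pearson([row[i] for row in rows], labels))
              for i, name in enumerate(names)}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy data: six students, three made-up attributes, dropout label (1 = dropped out).
names = ["school_vulnerability", "motivation", "shoe_size"]
rows = [(0.9, 0.2, 7), (0.8, 0.3, 9), (0.2, 0.9, 8),
        (0.1, 0.8, 7), (0.7, 0.4, 9), (0.3, 0.7, 8)]
dropout = [1, 1, 0, 0, 1, 0]

selected = top_features(rows, dropout, names, k=2)
```

On this toy data the two informative attributes survive and the irrelevant one is dropped, mirroring (in miniature) how 127 attributes could be winnowed to the 31 that actually carry predictive signal before training the final model.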
In sum, this new generation of data use has immense potential both to personalize teaching and to map inequalities in such detail that it facilitates State action and directly attacks the factors that drive school dropout. The risks of the first model should not be underestimated: erasing students' privacy, having private life controlled by large corporations or by the State itself, aggravating social gaps through technological gaps. But its possibilities for personalizing education should also be studied rigorously in the coming years.
It is time to open up the questions, the debates, and the experiences, and to create a new educational discussion: the use of Big Data to promote equity and a passion for learning. The governments of Latin America cannot set these discussions aside to attend to emergencies: some of the answers to those very emergencies may be hidden here.