The state's charter school board approved an application on Monday from Unbound Academy to open a school with a two-hour per day academic curriculum set by AI.
Remember that one teacher who made going to school fun and inspired you to pursue your passions? Students at a new charter school in Arizona won’t, because they don’t get to have teachers. Instead, the two hours of academic instruction they receive each day—yes, just two hours—will be directed entirely by AI.
By a 4-3 margin, the Arizona State Board for Charter Schools on Monday approved an application from Unbound Academy to open a fully online school serving grades four through eight. Unbound already operates a private school that uses its AI-dependent “2hr Learning” model in Texas and is currently applying to open similar schools in Arkansas and Utah.
Under the 2hr Learning model, students spend just two hours a day using personalized learning programs from companies like IXL and Khan Academy. “As students work through lessons on subjects like math, reading, and science, the AI system will analyze their responses, time spent on tasks, and even emotional cues to optimize the difficulty and presentation of content,” according to Unbound’s charter school application in Arizona. “This ensures that each student is consistently challenged at their optimal level, preventing boredom or frustration.”
I have my doubts about this, but it's an interesting experiment and charter schools are great for that.
Also, the kids aren't just ignored for the rest of the school day. They spend most of their time being taught by humans.
Spending less time on traditional curriculum frees up the rest of students’ days for life-skill workshops that cover “financial literacy, public speaking, goal setting, entrepreneurship, critical thinking, and creative problem-solving,” according to the Arizona application.
Teachers are replaced by “guides” who lead those workshops.
Edit: One interesting possibility is that simply teaching young kids to interact with computers will in itself be beneficial for them. I introduced my friend's second grader to Minecraft and he learned a lot of very useful skills because of that. At the beginning he had to be taught to use a mouse, but he got the hang of it quickly and soon he wasn't just playing the game. He was reading the wiki, watching tutorials on YouTube, etc. That's learning how to learn, which is arguably more important than learning anything specific.
(Now he's a 5th grader who wants a 3D printer for Christmas and I suspect that that may somehow be related to Minecraft too. He's probably a little too young for the printer but I suppose it's better to start early than to start late. One of my adult friends started being taught how to program when he was nine and he has gone very far, although I'm sure that simply being the sort of person who is capable of learning to program at nine played a huge role in that.)
This charter school's software is probably not as interesting as Minecraft yet, but that might be the direction where things are headed. A personal tutor that's infinitely patient opens up interesting possibilities.
What's wrong with spellcheckers? The only problem I've had with them is that they flag technical terms as misspellings, but that's just a minor inconvenience.
(Thinking I have the spellchecker on when I don't and therefore leaving mistakes in can also be a problem, but it's not strictly the spellchecker's fault.)
Also, before anyone asks, I do find ChatGPT and similar software quite useful.
The reference to spellchecking was because, at its core, this is (very simplistically) how LLMs work as well: training on data and predicting the probability of the next word. For some purposes that works great most of the time; for others it's like using a screwdriver to drive in a nail. It might work sometimes, to some degree, but that's not what it's for.
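To make the "probability of the next word" idea concrete, here's a toy sketch using simple bigram counts. This is emphatically not how real LLMs are built (they use neural networks over subword tokens and far more context than one word), but it shows the core mechanic of predicting the next word from counts over training data:

```python
from collections import Counter, defaultdict

# Tiny "training corpus" for illustration only.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("on"))   # "the" follows "on" in every occurrence
print(predict_next("mat"))  # "and" is the only observed follower
```

A real model generalizes instead of memorizing counts, but the interface is the same: given the text so far, produce a probability distribution over what comes next.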
My opinion is that LLMs are being forced to be solutions in all sorts of places when we're still trying to figure out their best application. Doing this in a grade-school academic setting is probably not the best idea; experimental approaches like this should filter down from higher education once they work well. This is about money and someone looking for a simple answer instead of fixing the problem correctly.
The reference to spellchecking was because, at its core, this is (very simplistically) how LLMs work as well.
That's not wrong, but it is pedantic, contrary to popular usage, and irrelevant to the discussion of how LLMs might affect education. (I'm not saying you are pedantic, since you aren't the one who originally brought it up.) The whole discussion of spellcheckers is irrelevant.
LLMs are being forced to be solutions in all sorts of places when we’re still trying to figure out their best application
Education doesn't have a first-mover advantage, but people are excited about AI and I don't blame them. The risks of this particular attempt are quite low, so while I don't think I would send a kid to this school myself, I don't think parents who do are wrong.
filter down from higher education once they work well
I think this technology will be useful in elementary schools before it will be useful in higher education, because college students are more capable of learning without supervision.
Your last point is really the key one, isn't it? Is an LLM reliable enough to be put in charge of supervising a child's path of learning? I've messed around with local LLMs enough to know that I'd better double-check everything they give me, since their goal is to tell me what I want to hear, not what is factual.
In rereading that, it occurred to me that it was not very different from the worst of the teachers I had long ago... so take that as a warning, I guess.
Ah, I was thinking of autocorrect on PCs, which generally won't change what you wrote without your input. I swipe to type on my phone and the phone does often interpret my gestures as a word other than the one that I intended, but my gestures are so imprecise that I think the phone does a remarkably good job even if I do have to proofread afterwards.
I expect that the phones will do better once they have AI capable of noticing things the user clearly didn't intend to write.
I suppose they're AI in a very general sense, according to which even simple, deterministic programs such as a tic-tac-toe player count as AI, but they're not generative models like the software people usually have in mind when they say "AI".