
Artificial intelligence is no longer simply a tool; it is a theology. Its rapid deployment across education has taken on messianic urgency, promising salvation through optimization, personalization, and scalability. Its apostles – venture capitalists, platform architects, billionaires with philanthropic gloss – offer not mere solutions, but eschatologies. AI will remake learning. AI will fix inequity. AI will replace teachers. AI will displace the human.
These are not neutral propositions. The forces driving AI transformation in education serve markets, not communities. Their logic is extractive, their models disruptive in the Silicon Valley sense: disruption as demolition of precedent, not enhancement of wisdom. To remain in traditional education today is to place oneself in the devil’s path – standing against the full weight of transformation. To enter tech’s inner sanctum, to design for or profit from the system, is to join the devil’s right hand: powerful, paid, and compromised.
This essay defines that devil, excavates the terrain, and traces the ethical fracture at the heart of modern schooling: whether education will be a public covenant or a private commodity. To be clear, AI is not the devil; rather, the devil is in the details.
The Devil We Face: Techbro AI and Billionaire-Funded Reform
The devil is not AI itself, though many rightly condemn its moral, engineering, and environmental costs. The devil is the regime governing its deployment, and the intent behind that governance: billionaire-funded, platform-led, and obsessed with market logic.
Consider the collapse of AltSchool, a heavily hyped personalized learning experiment backed by Mark Zuckerberg, Laurene Powell Jobs, and other tech elites. Despite raising $174 million, it failed to scale or sustain its vision, closing its schools and quietly pivoting to software. AltSchool wasn’t merely a failed startup – it was a case study in hubris: data-first pedagogy built by technologists who neither trusted nor understood teachers.
AltSchool is emblematic of the broader movement to privatize education through AI-enhanced platforms. A 2024 report from Senator Bernie Sanders reveals a coordinated billionaire effort to defund public schools and expand voucher programs. The goal: erode civic education and replace it with market-driven alternatives – charters, micro-schools, software subscriptions.
The underlying ideology is clear: students are products, teachers are liabilities, and learning is a business model.
This AI-industrial complex operates with the same extractive principles as Uber or Facebook. It does not serve learning. It serves surveillance, scale, and speculative return.
Traditional Education as Resistance
Remaining within traditional public or independent education now requires deliberate resistance. It is no longer neutral. It is oppositional.
Schools, when rightly oriented, are not delivery systems for curriculum. They are moral communities. They form citizens, not just workers. They convene pluralism, not personalization. Traditional schools still carry the burden of these commitments, even as they are battered by funding cuts and media delegitimization.
This does not mean AI must be excluded. It means AI must never come first. OECD guidelines warn that AI in education must be human-centered, teacher-led, and equity-grounded. When integrated well – under the authority of educators – it can augment formative feedback, identify instructional gaps, and support multilingual access.
But integration is not leadership. AI cannot set the curriculum. AI cannot mediate values. AI cannot form conscience.
We already see what happens when it does. In Houston ISD, under a state takeover, AI-driven curriculum platforms produced plagiarized and inaccurate lessons, demoralizing teachers and sparking mass resignations. This is not modernization. It is institutional sabotage.
To remain in the path of the devil is to absorb his blows while defending education as a civic trust. The work is grueling. The cost is real. But so is the mandate.
The Safety of the Right Hand: Serving the Devil to Build Havens
Others choose to work from inside the regime – joining the platforms, taking the capital, building counter-models with the proceeds. This is the position of the devil’s right hand: powerful, well-funded, and often well-intentioned. But deeply entangled.
This is precisely what many billionaire reformers have done. Laurene Powell Jobs now funds XQ Super Schools, elite public-private hybrids with selective admission and tech-forward pedagogy. Elon Musk built Ad Astra for his own children, outside state oversight. Peter Thiel pays young people to drop out of school altogether.
These figures destabilize public systems while creating gated academies for their own. It is a pattern of elite exodus disguised as innovation.
Educators who join the AI-industrial complex – by writing prompts, building apps, or training models – often do so to gain financial security. Some use those resources to build safe-harbor schools of their own. But this strategy is not clean.
To build on techbro spoils is to risk importing their ideology. Without structural guardrails, even counter-models replicate the very inequities they claim to resist. Selective admissions. Algorithmic sorting. Over-surveillance. Privatized success.
The question is not whether you can build something better from inside. The question is whether you can do so without becoming the very thing you sought to replace.
AI as Tool, Not Master
AI has a place in education. But it must be a peripheral place. It must be a servant, not a sovereign.
AI literacy is critical: teachers and students must understand what AI is, how it works, where it fails, and what biases it encodes. Tools must be explainable, auditable, and transparent. Not just functional.
Research from Stanford and OECD confirms: AI used without pedagogical design undermines trust, displaces human insight, and reinforces inequality.
Generative AI may accelerate writing, summarizing, or grading – but it does not teach. Studies reveal that overreliance leads to surface-level thinking, intellectual passivity, and substitution of synthesis with sampling. It creates speed, not depth.
Worse, it encodes existing injustice. Data-driven learning systems often reproduce systemic bias – tracking students by proxy variables like zip code, device usage, or lexicon. Without intervention, AI doesn’t fix inequity. It hardens it.
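The mechanism is easy to illustrate. The sketch below is a hypothetical audit on synthetic data (the records, column names, and thresholds are illustrative assumptions, not drawn from any system cited here): if a "neutral" feature like zip code predicts a sensitive attribute far better than chance, then any model trained on that feature inherits the bias by proxy.

```python
# Minimal sketch: auditing a "neutral" feature for proxy bias.
# All records below are synthetic; names and values are illustrative only.
from collections import Counter, defaultdict

# Synthetic student records: (zip_code, family_income_band, placement_decision)
records = [
    ("77002", "low", "remedial"), ("77002", "low", "remedial"),
    ("77002", "low", "standard"), ("77005", "high", "advanced"),
    ("77005", "high", "advanced"), ("77005", "high", "standard"),
    ("77021", "low", "remedial"), ("77021", "low", "standard"),
    ("77401", "high", "advanced"), ("77401", "high", "advanced"),
]

def majority_class_accuracy(pairs):
    """Accuracy of always guessing the most common label overall (chance baseline)."""
    labels = [y for _, y in pairs]
    return Counter(labels).most_common(1)[0][1] / len(labels)

def proxy_accuracy(pairs):
    """Accuracy of guessing the label from the candidate proxy feature alone."""
    by_value = defaultdict(list)
    for x, y in pairs:
        by_value[x].append(y)
    correct = sum(Counter(ys).most_common(1)[0][1] for ys in by_value.values())
    return correct / len(pairs)

# Does zip code predict income band far better than the chance baseline?
pairs = [(zip_code, income) for zip_code, income, _ in records]
print(f"baseline: {majority_class_accuracy(pairs):.2f}, "
      f"zip-code proxy: {proxy_accuracy(pairs):.2f}")
# If the proxy accuracy sits far above the baseline, zip code is effectively a
# stand-in for income: a model trained on it sorts students by wealth under
# another name.
```

On the synthetic data above the proxy predicts income perfectly while the baseline is a coin flip; a real audit would run the same comparison across every input feature before a platform is allowed near placement or grading decisions.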
The only ethical use of AI in schools is one that is teacher-driven, community-accountable, and structurally transparent. Anything else is extraction disguised as personalization.
Implications: Stratification, Commodification, and Collapse
The implications of AI-first schooling are already visible.
Stratification is accelerating. Affluent families use AI to supplement tutors, consultants, and enrichment platforms. Poorer districts receive stripped-down platforms with minimal human contact. The result is a two-tier system: creative autonomy for the rich, automation for the rest. As Business Insider notes, AI reveals and widens existing cracks.
Commodification is rampant. Learning becomes engagement metrics. Curriculum becomes content. The student becomes the product. Scholar Kenneth Saltman calls this the "alienation of fact": knowledge detached from purpose, reduced to a data point.
Collapse follows. AI-first platforms repeatedly fail to deliver pedagogical value. AltSchool collapsed. Summit Learning has faced backlash over its depersonalized, screen-heavy models. Even the most capitalized projects stumble when they treat teachers as obsolete.
This is not an accident. It is a design failure rooted in ideology. When pedagogy is treated as product design, and learning as UX flow, the human heart of education dies.
Ethic of Use
The question is not whether to use AI. It is how, and under what terms.
For those staying in the path:
- Protect teacher autonomy.
- Advocate for human-first pedagogy.
- Refuse AI-centered curriculum design.
- Build AI literacy among students as a civic skill, not a shortcut.
For those in the right hand:
- Use capital to build transparent, democratic academies.
- Design governance models with community voice, not just founders.
- Center admissions on equity, not exclusivity.
- Use AI minimally, with human review, and open documentation.
For both:
- Establish school-level AI ethics boards.
- Require transparent disclosure of AI-generated content (one possible shape of such a disclosure is sketched after this list).
- Train all staff in the limitations and affordances of algorithmic tools.
- Ensure that culturally responsive pedagogy is not displaced by AI neutrality.
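What "transparent disclosure" looks like in practice is left open here; one minimal, hypothetical shape is a disclosure record attached to any AI-assisted artifact, naming the tool, the purpose, and the educator accountable for the final content. The field names and workflow below are illustrative assumptions, not a standard.

```python
# Minimal sketch of an AI-use disclosure record for school-produced materials.
# Field names and the review gate are illustrative assumptions, not a standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDisclosure:
    artifact: str            # e.g. "Grade 8 unit plan: Reconstruction"
    tool: str                # which model or platform was used
    purpose: str             # what the tool was asked to do
    human_reviewer: str      # the educator accountable for the final content
    reviewed: bool = False   # nothing ships until a human has signed off
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def publishable(self) -> bool:
        """A record is publishable only after a named human has reviewed it."""
        return self.reviewed and bool(self.human_reviewer)

# Example usage with hypothetical values
record = AIDisclosure(
    artifact="Grade 8 unit plan: Reconstruction",
    tool="generic LLM assistant",
    purpose="first-draft summaries of primary sources",
    human_reviewer="J. Alvarez",
)
record.reviewed = True
print(json.dumps(asdict(record), indent=2))
print("publishable:", record.publishable())
```

The point of the gate is governance, not technology: the record makes the human reviewer, not the model, the accountable author of anything that reaches students.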
This is not a rejection of AI. It is a rejection of its dominion.
The AI-industrial complex seeks to rewire education in its own image: fast, efficient, scalable, and empty. It promises personalization but delivers surveillance. It promises equity but delivers tracking. It promises wisdom but delivers speed.
If schools surrender to this theology, they will cease to be places of human formation. They will become nodes in a data economy.
Those who stay in the path do so with bloodied hands but clean hearts. Those who join the right hand must tread carefully, extracting without becoming extractive.
Education is not a market. It is a covenant. And the devil keeps no covenants.
References:
- Pasi, S. (2025, February 6). The human and environmental impact of artificial intelligence. Human Rights Research Center. https://www.humanrightsresearch.org/post/the-human-and-environmental-impact-of-artificial-intelligence
- Van Uffelen, N., Lauwaert, L., Coeckelbergh, M., & Kudina, O. (2024, December 19). Towards an environmental ethics of artificial intelligence. arXiv. https://arxiv.org/abs/2501.10390
- MIT News. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
- InsideHook. (2023, October 4). Tech billionaires wasted millions on failed education startup AltSchool. InsideHook. https://www.insidehook.com/culture/tech-billionaires-wasted-millions-on-failed-education-startup-altschool
- Sanders, B. (2024, March 26). Senator Sanders releases report on billionaire effort to dismantle public schools. U.S. Senate. https://www.sanders.senate.gov/press-releases/news-new-report-on-the-coordinated-effort-by-billionaires-to-dismantle-the-american-public-school-system/
- DiMaggio, A. (2020, October 5). Billionaires who aim to disrupt education may get a chance, even if Trump loses. Truthout. https://truthout.org/articles/billionaires-who-aim-to-disrupt-education-may-get-a-chance-even-if-trump-loses
- Paul, A. M. (2016, March 7). Learning differently. The New Yorker. https://www.newyorker.com/magazine/2016/03/07/altschools-disrupted-education
- Blackburn, J. (2024, May 3). AI is not going to fix HISD. Houston Chronicle. https://www.houstonchronicle.com/opinion/outlook/article/hisd-state-takeover-mike-miles-ai-prof-jim-20359937.php
- OECD. (2024). The potential impact of artificial intelligence on equity and inclusion in education. Organisation for Economic Co-operation and Development. https://www.oecd.org/en/publications/the-potential-impact-of-artificial-intelligence-on-equity-and-inclusion-in-education_15df715b-en.html
- Stanford News. (2024, September). Educating AI: What students and teachers need to know. Stanford University. https://news.stanford.edu/stories/2024/09/educating-ai
- Armstrong, M. (2024, March 18). The AI revolution is coming for your brain. Financial Times. https://www.ft.com/content/adb559da-1bdf-4645-aa3b-e179962171a1
- Business Insider. (2025, July 9). AI reveals how broken our education system is, economist says. https://www.businessinsider.com/ai-reveals-how-broken-our-education-system-is-economist-says-2025-7
- Saltman, K. (2021). The alienation of fact: The commodification of learning. ERIC. https://files.eric.ed.gov/fulltext/EJ1297432.pdf
- EdSurge. (2019, June 25). As demand for personalized learning grows, Summit Learning falters. https://www.edsurge.com/news/2019-06-25-as-demand-for-personalized-learning-grows-summit-learning-expands
- Wikipedia contributors. (n.d.). AI literacy. Wikipedia. Retrieved August 5, 2025, from https://en.wikipedia.org/wiki/AI_literacy
- Gillani, N., & Singh, R. (2023). AI in education: Challenges, risks, and ethical design [Preprint]. arXiv. https://arxiv.org/abs/2301.06102