
AI Solutions and Broken Promises

The Real Manufacturing Revolution in the Age of Dark Factories
The promise of artificial intelligence has always been the same: liberation from labor through the precision of machines. In the twentieth century, that vision took form in the assembly line, where repetition was mechanized but oversight remained human. In the twenty-first, the same rhetoric has been refitted for algorithms – factories that think, optimize, and never sleep. The dark factory has become the clearest expression of that ambition: a facility that operates without light because no human eyes are required.
China has already built such factories. In Shenzhen and Hangzhou, fully automated manufacturing parks run through the night, coordinated by industrial AI systems that balance energy loads, calibrate robotic arms, and route materials through digital twins of the production floor (Industrial Equipment News, 2024). These are not prototypes but mature systems, measured in months of uptime rather than daily shifts. They stand as monuments to a nation that treats automation not as a speculative investment but as national infrastructure.
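What a digital twin amounts to in practice can be stated plainly: a software mirror of the floor, updated from sensor events and consulted when work must be routed. The sketch below is a deliberately simplified illustration of that idea, not a description of any particular facility's system; the station names, fields, and routing rule are assumptions made for the example.

```python
# A minimal sketch of the digital-twin idea referenced above: a software mirror
# of the production floor, updated from sensor events and queried for routing.
# Station names, fields, and the routing rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class StationTwin:
    name: str
    queue_depth: int = 0      # jobs waiting at the physical station
    healthy: bool = True      # latest self-diagnostic result

@dataclass
class FloorTwin:
    stations: dict = field(default_factory=dict)

    def ingest(self, name, queue_depth, healthy):
        """Update the mirror from a (hypothetical) sensor or PLC event."""
        self.stations[name] = StationTwin(name, queue_depth, healthy)

    def route(self):
        """Pick the healthy station with the shortest queue for the next job."""
        candidates = [s for s in self.stations.values() if s.healthy]
        return min(candidates, key=lambda s: s.queue_depth).name

floor = FloorTwin()
floor.ingest("press_1", queue_depth=4, healthy=True)
floor.ingest("press_2", queue_depth=1, healthy=True)
floor.ingest("press_3", queue_depth=0, healthy=False)
print(floor.route())  # press_2
```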
The United States, by contrast, treats AI primarily as a consumer novelty rather than an industrial foundation. The country leads in software patents and venture capital but lags in physical deployment. Its factories remain dependent on human dexterity, inconsistent data standards, and a cultural memory of the worker as the moral center of industry (Brookings Institution, 2024). The result is paradoxical: a nation that speaks of AI as an unstoppable force while its manufacturing base remains tethered to twentieth-century rhythms.
As Narayanan and Kapoor (2024) observe, most AI success stories arise not from generalized intelligence but from narrow, structured environments. Manufacturing is one of the few places where those constraints naturally exist. Predictive maintenance, visual inspection, and production scheduling are quantifiable, data-rich tasks that suit algorithmic precision. Beyond these boundaries, the same systems collapse under ambiguity and noise. Thus, the apparent success of industrial AI reveals the inverse truth – AI performs best where the human element has already been minimized.
Yet even here, each innovation carries moral residue. Every new layer of automation replaces a form of human judgment once embedded in craft. The more the machine learns, the less society values the worker who once learned it. As Yampolskiy (2024) warns, the most advanced systems are not simply efficient but unexplainable – their logic inaccessible even to their creators. When control becomes statistical rather than moral, productivity gains obscure a loss of understanding.
The shift from illuminated to dark factory is not just an engineering milestone but a civic threshold. Can a society built on the ethic of labor adapt to an economy that no longer requires it? The answer will define not only the future of American manufacturing but the moral character of its republic.
The Hype Cycle and the Reality of AI Integration
Every industrial revolution begins with a vocabulary of miracles. Artificial intelligence has been no exception. Executives and policymakers describe it as a new electricity – a universal catalyst that will rewire every domain. Yet the pattern repeats: inflated promise, selective success, and the slow revelation of limits. In manufacturing, this cycle has inverted. AI has delivered its technical gains but failed its social ones.
As Narayanan and Kapoor (2024) demonstrate, most AI systems succeed only through extreme constraint. They thrive where data are abundant, inputs are uniform, and uncertainty is minimal. Industrial production meets these conditions perfectly. Temperature, torque, and vibration can all be expressed as numbers, and the relationship between them is stable. Within such parameters, machine learning excels – not by thinking creatively but by optimizing relentlessly. Predictive maintenance, defect detection, and logistics scheduling have matured from experiments into reliable infrastructure.
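To make the constraint concrete, the sketch below shows the scale of problem this paragraph describes: one sensor, a learned baseline, and a rule for flagging deviations. It is an illustrative toy rather than a production system; the vibration figures and the threshold are invented for the example.

```python
# Illustrative sketch only: a toy anomaly detector for a single vibration sensor,
# showing the kind of narrow, well-bounded problem the text describes.
# The readings and the threshold k are hypothetical.
from statistics import mean, stdev

def fit_baseline(readings):
    """Learn a simple baseline (mean and spread) from healthy-machine data."""
    return mean(readings), stdev(readings)

def flag_anomalies(readings, baseline, k=3.0):
    """Flag readings more than k standard deviations from the baseline mean."""
    mu, sigma = baseline
    return [(i, r) for i, r in enumerate(readings) if abs(r - mu) > k * sigma]

# Hypothetical vibration amplitudes (mm/s) from a healthy run, then a live run.
healthy = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.2, 2.1]
live    = [2.0, 2.1, 2.3, 4.8, 2.2, 5.1, 2.0]

baseline = fit_baseline(healthy)
print(flag_anomalies(live, baseline))  # [(3, 4.8), (5, 5.1)]
```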
This reliability has been mistaken for generality. The public reads industrial precision as proof of universal intelligence, when in truth it reflects the narrowness of the problem. What passes for learning is pattern recognition under laboratory control (Narayanan & Kapoor, 2024). When exposed to unstructured data or novel conditions, the same systems fail spectacularly. Manufacturing’s apparent success thus conceals the brittleness of AI itself – it functions perfectly only when the world is already simplified.
Yampolskiy (2024) identifies the deeper issue as unexplainability: the point at which system complexity exceeds human comprehension. In modern production environments, this is routine. Neural controllers constantly adjust parameters invisible to supervisors, producing efficiency gains whose origins no one can fully trace. Metrics improve, but reasoning disappears. The result is an illusion of mastery – an industry that performs flawlessly without knowing why.
Even this controlled success depends on human scaffolding. Every autonomous process relies on technicians to label data, calibrate sensors, and correct the model’s blind spots. The myth of self-sufficiency endures because these forms of hidden labor remain unseen. As McKinsey & Company (2024) reports, nearly half of all “autonomous” production incidents are resolved manually. The machine’s autonomy is conditional – a performance sustained by constant human maintenance.
Outside the factory, the gap between hype and reality widens. Large language and generative models, celebrated for versatility, collapse under manufacturing’s demand for precision. Their probabilistic reasoning is incompatible with tolerances measured in microns, not metaphors. Even minor sensor noise can cascade into failure, proving Yampolskiy’s (2024) thesis that unpredictability is a feature of intelligence itself, not a flaw.
The U.S. Department of Commerce (2025) estimates that more than 80 percent of American manufacturers remain at “Stage 2 automation,” dependent on basic robotics with limited AI integration. The lights-out factory, so often promised in white papers and campaign speeches, remains aspirational. The hype cycle persists because it serves two markets simultaneously: investors who trade in optimism and policymakers who legislate by narrative.
In truth, AI’s transformation of manufacturing is incremental, not revolutionary. Machines execute repetition; humans sustain the possibility of repetition. The miracle is not that factories can run themselves – it is that societies continue to believe doing so will make them whole.
China’s Dark Factories as a Proof of Concept
China’s dark factories are not speculative prototypes; they are the operational expression of state policy. Since the launch of Made in China 2025, automation has been treated as infrastructure – a national tool to secure technological sovereignty and maintain output as the workforce ages. The policy’s objective is simple: reduce dependence on foreign components, compress production cycles, and make scale itself a strategic advantage.
In cities such as Shenzhen, Suzhou, and Hangzhou, factories already operate with minimal human intervention. Robotic arms assemble components guided by real-time vision systems; sensors feed continuous data into national cloud platforms for optimization; maintenance schedules are generated algorithmically (Industrial Equipment News, 2024). These facilities run for months without stopping. They are designed for permanence, not shifts.
Foxconn’s transition illustrates the principle. Facing rising wages and labor unrest, the company replaced tens of thousands of assembly workers with automated lines coordinated by predictive analytics. Similar systems now produce automotive parts, medical equipment, and consumer electronics across coastal provinces. What unites them is vertical integration: the same entities design the software, own the hardware, and control the grid. McKinsey & Company (2024) notes that this coordination allows minor algorithmic improvements to compound across thousands of identical sites – a feedback loop of scale the United States cannot replicate.
Policy coherence enables this success. Government ministries offer direct incentives for automation – tax relief, credit access, and pilot zones where regulation bends to experimentation. Failure is treated as iteration. That attitude transforms risk into discovery and bureaucratic alignment into industrial velocity.
Socially, automation functions as demographic strategy. Machines replace a shrinking young labor force; retraining programs channel displaced workers into logistics, maintenance, and software support. Displacement is not denied – it is absorbed. By contrast, the United States frames automation as cultural loss, defining the factory worker as moral identity rather than variable input. In China, the state defines productivity as public good; in the United States, it remains a private pursuit.
The difference is not technological but structural. Chinese automation succeeds because its objectives – economic, political, and social – are synchronized. American industry operates through fragmented ownership, short-term capital, and a regulatory system built for the analog era (Brookings Institution, 2024; U.S. Department of Commerce, 2022). Where China integrates, the United States negotiates.
Manufacturing’s cybernetic evolution in China also reveals the narrow truth of AI’s competence. These plants function reliably because they are closed systems. Every variable is measurable, every deviation an immediate feedback signal. It is a controlled ecosystem that renders unpredictability impossible – a nation operating as one continuous experiment.
Narayanan and Kapoor (2024) argue that manufacturing remains AI’s most legitimate domain precisely because it excludes ambiguity. The lesson from China’s dark factories is not that AI has reached maturity but that its maturity requires an environment stripped of uncertainty and dissent. Efficiency, at scale, demands obedience.
The achievement is real; the implications are political. The model works because consent is implicit, coordination compulsory, and equity subordinate to outcome. For nations that equate democracy with deliberation, replicating such efficiency would require a redefinition of governance itself.
Why the United States Is Not Ready
The United States built the industrial world it now struggles to modernize. Its machines remain advanced, but its institutions remain analog – bound by habit, hierarchy, and nostalgia for a kind of labor that automation makes obsolete. The gap between technological potential and political capacity defines the nation’s current paralysis.
Infrastructure and Technological Fragmentation
Most American factories operate as islands. Decades of privatized modernization have produced incompatible standards, proprietary data formats, and aging control systems. The National Institute of Standards and Technology (2024) reports that over 60 percent of plants lack real-time data integration between the production floor and enterprise systems. This fragmentation prevents AI models from learning across facilities and isolates each factory as a local experiment. In China, by contrast, interoperability was built into national 5G and industrial IoT frameworks from the start (Industrial Equipment News, 2024). America’s problem is not invention but coordination.
Cultural and Workforce Inertia
The American factory still imagines itself as a social institution – a place of dignity and belonging. That memory resists automation. The Brookings Institution (2024) found that fewer than 20 percent of mid-sized manufacturers plan large-scale AI integration, citing cultural resistance and fears of community backlash. Blue-collar identity remains politically sacred, even as it becomes economically untenable. The moral weight of the worker slows the technical evolution of the factory.
Capital and Time Horizons
Dark factories require patient capital and long amortization cycles. American firms live quarter to quarter. The U.S. Department of Commerce (2025) estimates that half of the nation’s manufacturing base would require more than $800 billion in upgrades to achieve end-to-end automation. China funds such transitions through state-backed credit and policy continuity. The American market rewards volatility and treats infrastructure as liability. The result is innovation without implementation.
Fragmented Policy and Weak Governance
Industrial AI touches labor, energy, and cybersecurity, yet no single federal agency oversees their intersection. Regulation is reactive and dispersed across departments. The World Economic Forum (2024) ranks the United States twelfth globally in AI readiness for manufacturing governance – behind Singapore, Germany, and China. Without clear accountability, ethical oversight defaults to self-regulation. As Yampolskiy (2024) observes, this is “control without controller,” a condition where technology evolves faster than the institutions meant to contain it.
Education and Skill Deficit
Automation demands data-literate technicians and systems integrators – roles that the U.S. education system is ill-prepared to supply. Community colleges remain underfunded, and technical curricula lag behind industry demand. An MIT Technology Review (2024) survey found that 72 percent of manufacturing executives cite a shortage of AI-skilled labor as their primary barrier to adoption. The workforce remains aspirationally digital but operationally mechanical.
Structural Unreadiness
The American factory sits between two epochs: analog infrastructure below, digital ambition above. Its progress depends less on innovation than on integration – linking fragmented systems, bridging generational skill gaps, and reconciling cultural identity with automation’s logic. Until capital, governance, and labor align around a shared vision of modernization, the nation will continue to design intelligent machines for other countries to build and use.
The Uncontrollable and Unexplainable Machine
Artificial intelligence has not only automated production – it has altered the nature of control itself. The assembly line once represented mastery through visibility: every step could be traced, every error observed. Today’s machine-learning systems replace that transparency with opacity. They deliver precision without explanation. The outcome is measurable; the reasoning is not.
Yampolskiy (2024) calls this condition unexplainability – a threshold where system complexity surpasses human comprehension. In manufacturing, that threshold has already been crossed. Algorithms adjust torque, recalibrate temperatures, or halt operations based on patterns no engineer can interpret in real time. Managers witness results – fewer defects, faster cycles – but cannot reconstruct the logic that produced them. The factory now performs more than it understands.
Predictability, once the foundation of engineering, becomes probabilistic. Neural controllers detect micro-variations – fluctuations in humidity, material fatigue, sensor drift – that humans cannot see. These adjustments optimize yield but erode transparency. IEEE Spectrum (2023) reports that over 40 percent of industrial-AI deployments have faced explainability gaps severe enough to delay safety certification. The smarter the system, the less certain its rationale.
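The shape of that opacity can be sketched in a few lines. In the toy example below, a controller applies a correction computed from several sensor channels through learned weights; the correction itself is auditable, but the weighting is not. The channel names, weights, and readings are hypothetical, standing in for the output of a trained model.

```python
# Minimal sketch (hypothetical values throughout): a controller applies a correction
# computed from many sensor channels via learned weights. The correction is logged,
# but nothing in the log explains why the model weighted the channels as it did,
# which is the explainability gap described in the text.
SENSOR_CHANNELS = ["humidity", "spindle_temp", "vibration", "feed_rate", "tool_wear"]

# Stand-in for weights produced by a trained model; in practice these are opaque.
LEARNED_WEIGHTS = [0.8, -1.3, 2.1, 0.4, -0.6]

def setpoint_correction(deltas, weights=LEARNED_WEIGHTS):
    """Return a torque correction from per-channel deviations (weighted sum)."""
    return sum(w * d for w, d in zip(weights, deltas))

# Deviations from nominal, per channel (hypothetical).
deltas = [0.02, -0.15, 0.04, 0.00, 0.10]
correction = setpoint_correction(deltas)
print(f"apply torque correction: {correction:+.3f}")  # auditable output, opaque rationale
```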
Accountability dissolves with comprehension. When an automated process fails, causality becomes statistical rather than moral. Responsibility disperses across hardware, software, and data inputs. The operator becomes observer, the engineer becomes curator. Traditional error chains – human fault, procedural lapse – no longer apply. As Narayanan and Kapoor (2024) note, “AI systems rarely malfunction; they merely behave in ways we cannot decode.”
Verification systems have not kept pace. The National Institute of Standards and Technology (2024) confirms that current audit protocols can validate output accuracy but not process integrity. The algorithms governing production evolve faster than the standards that define compliance. Without mandated transparency, oversight becomes symbolic. Firms self-certify what cannot be independently verified.
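The gap can be stated in code. The audit sketched below scores outputs, comparing model verdicts against human inspection, and says nothing about process: what data trained the model, how it reached each verdict, or whether its behavior has drifted. The verdicts and names are illustrative, not drawn from any actual protocol.

```python
# Hedged sketch of the gap described above: an audit that can score outputs
# (predicted vs. inspected defects) but has no visibility into the process
# (model internals, training data, parameter drift). All names and data are illustrative.
def output_accuracy_audit(predicted, inspected):
    """Compare model verdicts against human inspection; return the agreement rate."""
    agree = sum(p == a for p, a in zip(predicted, inspected))
    return agree / len(inspected)

# Hypothetical pass/fail verdicts for ten parts.
predicted = ["pass", "pass", "fail", "pass", "pass", "fail", "pass", "pass", "pass", "fail"]
inspected = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "pass", "pass", "pass"]

print(f"output accuracy: {output_accuracy_audit(predicted, inspected):.0%}")  # 80%
# Nothing above can certify process integrity: why the model's verdicts diverged
# on two parts, what data it was trained on, or whether its behavior has drifted.
```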
In this environment, control itself becomes a performance. Engineers watch dashboards rather than machines, trusting graphs that summarize processes too complex to interrogate. The human role shifts from direction to reassurance – a sentinel ensuring the appearance of order. As Yampolskiy (2024) observes, “Control theory fails when the controller cannot interpret its own feedback.”
The unexplainable factory thus marks a civilizational turn. Humanity has constructed systems that embody precision but not meaning – machines that behave correctly for reasons no one can articulate. The risk is not rebellion but indifference: automation that functions flawlessly and answers to no one. The lights-out factory has achieved what philosophers once reserved for gods – action without accountability.
Broken Promises and Moral Debt
Artificial intelligence entered manufacturing under the banner of progress with empathy – machines would inherit the danger, humans the dignity. The rhetoric promised liberation through partnership, not replacement. The reality has inverted that equation. The machine absorbed the work, and the worker absorbed the consequence.
The Mirage of Inclusion
Industry leaders once spoke of “cobots” and “human-in-the-loop” systems as symbols of balance. In practice, these designs served as transitional scaffolding. Each generation of optimization reduced human discretion. Predictive analytics compressed reaction time; robotic handling replaced dexterity; algorithmic scheduling eliminated negotiation. What remains of the worker is not collaboration but supervision – an observer tasked with affirming the precision that displaced them (Yampolskiy, 2024).
The Economy of Asymmetry
Automation concentrates gain and externalizes loss. A Harvard Business Review (2025) analysis found that firms realizing over 30 percent productivity gains through AI reduced payrolls by nearly one-fifth, while executive compensation rose. Retraining budgets, once marketed as moral offsets, declined. Profit recirculated upward while responsibility flowed down. Efficiency became a closed loop: algorithms learn from production; corporations learn from profit.
The Loss of Ikigai
Ziesche and Yampolskiy (2025) describe a new moral risk – the erosion of ikigai, the sense of purpose derived from meaningful labor. For many, work has been not only livelihood but identity. As that foundation erodes, so too does the architecture of belonging. The psychological cost emerges as apathy, detachment, and distrust – a collective fatigue that no productivity metric records. The machine perfects output while hollowing the meaning of contribution.
Deferred Promises and Policy Illusions
Political rhetoric still offers reskilling as remedy, but the infrastructure of transition does not exist. The International Labour Organization (2024) reports that fewer than one in five displaced manufacturing workers in advanced economies have secured stable reemployment. In the United States, most retraining programs operate at local scale, underfunded and detached from national industrial planning. The moral promise of the “knowledge worker” future collapses against economic geography and the limits of human displacement.
The Debt of Progress
Each efficiency gain accrues a moral debt – a cost unpaid but accumulating. The automation dividend funds innovation while leaving the human residue of obsolescence unaddressed. Narayanan and Kapoor (2024) argue that AI’s failure is not technical but ethical: intelligence optimized for profit rather than shared welfare. A just economy would measure advancement not by throughput but by the degree to which progress remains participatory.
The dark factory, in this light, is less a triumph than a confession. It proves that technology can meet every metric of success while society drifts further from its own. The machines have delivered on every promise except the one that mattered most – that progress would still include us.
Disruption as a Feature, Not a Bug
Disruption has become the measure of innovation rather than its side effect. In the language of artificial intelligence, instability signals vitality. The dark factory is not a failure of capitalism but its most complete expression – production detached from people, value detached from work.
Industrial AI is built on continuous iteration. Every dataset improves the model, every optimization cycle renders another task unnecessary. The result is planned obsolescence disguised as progress. What once looked like natural attrition now reads as design. Yampolskiy (2024) describes this as “uncontrollability by architecture” – systems structured to evolve beyond the comprehension or restraint of their makers. Disruption has become a functional requirement, not a symptom to be cured.
The rhetoric of reshoring exemplifies this paradox. Policymakers celebrate the return of manufacturing as proof of national renewal, but much of what returns is empty of work. As The Economist (2024) notes, many of the factories built in the United States under the banner of revival are dark by intention – automated from inception and staffed by algorithms rather than citizens. The square footage increases, the payroll contracts. Industry reappears as symbol, not livelihood.
Automation also reshapes geography. The traditional factory anchored community; it created schools, unions, and neighborhoods. The dark factory requires none of these. It occupies physical space but generates no civic infrastructure. The OECD (2023) reports that regions with high automation adoption experience faster GDP growth but stagnant employment, a divergence that turns prosperity into abstraction. The machine stays; the people leave.
The cultural cost is harder to quantify. Innovation, once justified by creative destruction, now destroys faster than it creates. The promise of equilibrium has been replaced by a doctrine of permanent transition. Each technological wave justifies the next by producing the instability it claims to solve. Narayanan and Kapoor (2024) call this “the marketing of inevitability” – the idea that technological disruption is natural law rather than political choice.
This feedback loop benefits those who can absorb volatility. Investors profit from churn; communities collapse under it. The factory, once a stabilizing force, now operates as experiment. Its success is measured by speed of iteration, not social continuity. Disruption has become both method and metric – a self-reinforcing justification for instability.
The end of work is not announced through unemployment statistics but through normalization. Fewer people are needed, and fewer people notice. The dark factory does not signal an industrial apocalypse but a social reconfiguration – production without presence, output without obligation. What remains to be decided is whether a democracy that no longer depends on labor can still depend on loyalty.
Toward Transparent Automation Ethics
The next phase of automation will not be defined by what machines can do but by what governments choose to permit. Each generation of technology tests the moral bandwidth of its society. The question is not whether artificial intelligence can govern production, but whether democratic systems can still govern intelligence.
Efficiency has replaced virtue as the dominant measure of success. In industrial policy, ethics has become an appendix – an afterthought that follows growth rather than guides it. Nature Machine Intelligence (2022) notes that most national AI frameworks remain voluntary and lack legal enforcement. Corporations treat compliance as morality and performance as proof of virtue. The result is a structural imbalance between the citizen bound by law and the enterprise bound only by profitability.
This imbalance violates an older philosophical order. Plato warned in The Republic that a state fails when it confuses wealth with justice, writing that “where the law is subject to another authority and has none of its own, the collapse of the state is not far off” (Plato, trans. 1892). Centuries later, Quaker political ethics restated the same principle more directly – that governance must serve conscience over commerce. Both traditions insist that legitimate authority arises from moral restraint.
A representative government that allows policy to be purchased undermines that restraint. The Atlantic (2024) documents how lobbying expenditures by technology firms now exceed those of the defense industry. Subsidies flow without corresponding transparency, and ethics councils remain advisory rather than authoritative. When legislators accept compensation from the industries they regulate, the result is governance without representation – a democracy outsourced to the market.
Transparency must therefore become a civic function, not a corporate gesture. NIST (2025) has outlined technical standards for traceability and explainability, but without legislative mandate they remain aspirational. True transparency requires reciprocal visibility: citizens must be able to see how algorithms make decisions, and algorithms must reflect the values of those citizens through law.
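As a rough illustration of what such traceability might look like in practice, the sketch below records a single automated decision with a timestamp, model version, input digest, and confidence score. The field names and values are assumptions made for the example, not a schema drawn from NIST or any statute.

```python
# Illustrative only: one way a traceability record might look if transparency were
# mandated. Field names are assumptions, not any published standard or NIST schema.
import hashlib, json
from datetime import datetime, timezone

def trace_record(model_version, inputs, decision, confidence):
    """Build an auditable record of a single automated decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw inputs so the record is verifiable without exposing them.
        "input_digest": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "decision": decision,
        "confidence": confidence,
    }

record = trace_record(
    model_version="defect-detector-2.4.1",  # hypothetical
    inputs={"line": 7, "part_id": "A-1138", "image_ref": "cam3/0042.png"},
    decision="reject",
    confidence=0.93,
)
print(json.dumps(record, indent=2))
```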
Quaker discipline treats silence as a moral tool – a pause for reflection before action. That principle applies here. Before the next wave of industrial automation is legislated into permanence, governments must stop long enough to decide what kind of society automation is meant to serve. Plato’s philosopher-king and the Quaker conscience both converge on this point: power without reflection is efficient at producing injustice.
The republic’s responsibility is not to match the speed of machines but to ensure that its ethics keep pace with its ambition. A government that cannot explain its own algorithms has surrendered its mandate to govern. The dark factory may not need light, but the law that regulates it must.
Conclusion: The Dark Factory as Mirror
The dark factory is not only an industrial artifact – it is a mirror held to the moral architecture of modern civilization. Its silence reflects the balance between progress and principle, between what is possible and what is permitted. It shows the kind of society that builds machines without asking what they are for.
The story of automation is now complete in form if not in meaning. AI has fulfilled its engineering promise: factories that run continuously, algorithms that optimize without fatigue, systems that learn from their own data. Yet each success exposes a larger absence – a loss of clarity about who benefits and who belongs. The dark factory represents both the perfection of production and the eclipse of participation.
For the United States, this technology arrives as both opportunity and indictment. It reveals the power of invention without the structure of stewardship. The nation that once defined the modern factory now imports its logic from abroad, caught between nostalgia for labor and faith in innovation. The machines are competent, the institutions are not.
Ethical clarity is the next frontier. Quaker business traditions once insisted that integrity and community could not be separated from commerce (Burton & Sinnicks, 2021). Their model of stewardship over ownership offers an older and deeper form of sustainability – one measured not by profit but by conscience. Plato reached the same conclusion in The Republic: a society remains just only when its rulers are guided by wisdom rather than desire. Modern democracies inherited that premise but often forget it.
Sunde (2025) argues that the American constitutional design drew on Platonic ideals of virtue as the necessary counterpart to freedom. In that lineage, democracy is not rule by appetite but by reason, exercised through representation and restraint. When governments legislate for those who fund them rather than those who elect them, they reverse that equation. The republic ceases to be representative and becomes transactional.
The ethical lineage of both traditions is clear – technology without moral supervision becomes power without purpose. The factory, once a place of collective enterprise, now stands as a test of collective responsibility. Whether automation will serve the citizen or replace them depends on whether governance can still act in the interest of those it represents.
The dark factory operates perfectly in the absence of people. The danger is that democracy may learn to do the same. Progress without ethics is efficient but hollow. The machines will hum regardless; the question is whether we still deserve to call their output civilization.
References
Burton, N., & Sinnicks, M. (2021). Quaker business ethics as MacIntyrean tradition. Journal of Business Ethics, 176(3), 507–518. https://doi.org/10.1007/s10551-020-04706-y – open-access PDF via University of Reading repository: https://centaur.reading.ac.uk/98456/
Industrial Equipment News. (2024, May 13). The tech enabling China’s dark factories. https://www.ien.com/redzone/blog/22948773/the-tech-enabling-chinas-dark-factories
Choi, C. Q. (2023, April 13). 200-year-old math opens up AI’s mysterious black box. IEEE Spectrum. https://spectrum.ieee.org/black-box-ai
International Labour Organization. (2024). World employment and social outlook: Trends 2024. https://www.ilo.org/sites/default/files/wcmsp5/groups/public/%40dgreports/%40inst/documents/publication/wcms_908142.pdf
McKinsey & Company. (2024, February 21). Adopting AI in manufacturing at speed and scale: The 4IR push to stay competitive. https://www.mckinsey.com/capabilities/operations/our-insights/adopting-ai-at-speed-and-scale-the-4ir-push-to-stay-competitive
Armstrong, M., Autor, D., Reynolds, E., & Stansbury, A. (2024, September). Automation from the worker’s perspective: How can new technologies make jobs better? MIT Industrial Performance Center. https://ipc.mit.edu/wp-content/uploads/2024/09/Automation_from_the_Worker_s_Perspective-30-Sep-pub.pdf
Megas, K., Fagan, M., Cuthill, B., Hoehn, B., & Petrella, E. (2025, April). Summary report for “Workshop on updating manufacturer guidance for securable connected product development” (NIST Interagency Report IR 8562). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8562
Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press. https://press.princeton.edu/books/hardcover/9780691236429/ai-snake-oil
Porter, Z., Zimmermann, A., Morgan, P., McDermid, J., Lawton, T., & Habli, I. (2022). Distinguishing two features of accountability for AI technologies. Nature Machine Intelligence. https://eprints.whiterose.ac.uk/id/eprint/191455/1/Distinguishing_two_features_of_accountability_for_AI_technologies_Nature_Machine_Intelligence_.pdf
Organisation for Economic Co-operation and Development. (2023). Regions in Industrial Transition 2023: New Approaches to Persistent Problems. OECD Publishing. https://doi.org/10.1787/5604c2ab-en
Plato. (ca. 375 BCE/1892). The Republic. Macmillan and Co. https://archive.org/details/republicofplato13plat/page/n11/mode/2up?ref=ol
New York Yearly Meeting. (2020). Quaker practice / Faith & Practice. https://www.nyym.org/content/quaker-practice
Sunde, C. H. (2025). From Politeia to Republic: How Plato’s wisdom relates to American democracy [Manuscript]. Academia.edu. https://www.academia.edu/128226229/From_Politeia_to_Republic_How_Platos_Wisdom_Relates_to_American_Democracy
U.S. Department of Commerce. (2022). National strategy for advanced manufacturing (Final report). https://www.manufacturing.gov/sites/default/files/2022-10/FINAL%20National%20Strategy%20for%20Advanced%20Manufacturing%2010072022%20Approved%20for%20Release.pdf
Wilson, H. J., & Daugherty, P. R. (2025, January–February). The secret to successful AI-driven process redesign. Harvard Business Review. https://hbr.org/2025/01/the-secret-to-successful-ai-driven-process-redesign
World Economic Forum. (2024, December). Beyond cost: Country readiness for manufacturing and supply chains (White paper). https://www.weforum.org/publications/beyond-cost-country-readiness-for-the-future-of-manufacturing-and-supply-chains/
Yampolskiy, R. V. (2024). AI: Unexplainable, unpredictable, uncontrollable. Boca Raton, FL: CRC Press (Taylor & Francis Group). https://openlibrary.org/works/OL37584607W/AI?edition=key%3A/books/OL50651147M
Ziesche, S., & Yampolskiy, R. V. (2025). Considerations on the AI endgame: Ethics, risks, and computational frameworks. Boca Raton, FL: CRC Press (Taylor & Francis Group). https://openlibrary.org/works/OL42538590W/Considerations_on_the_AI_Endgame?edition=key%3A/books/OL57749539M