Too important to be private
In September 2008, Lehman Brothers collapsed. The fourth-largest investment bank in the United States - a firm whose financial instruments were woven into the balance sheets of banks, pension funds, and insurance companies across the planet - filed for bankruptcy and nearly took the global economy with it. Within weeks, the United States government committed over $700 billion in public funds to bail out the financial institutions whose reckless speculation had caused the crisis.
The homeowners who lost their houses did not receive a bailout. The workers who lost their jobs did not receive a bailout. The pension funds that evaporated did not receive a bailout. The banks did. The executives who presided over the catastrophe kept their bonuses. The firms that survived used the public's money to stabilize their balance sheets, then returned to profitability, then returned to the same practices that had caused the crisis. The losses were socialized. The gains remained private.
This was not corruption in the usual sense. It was the logical outcome of a principle that capitalism has operated on for decades: when a private entity becomes so systemically entangled that its collapse would destroy the economy, the public absorbs the cost. "Too big to fail." The diagnosis is correct. These institutions were too big to fail. Their collapse would have been catastrophic. Something had to be done.
The framework agrees with the diagnosis. It draws the opposite conclusion. If something is too big to fail, it is too big to be private.
The line moves
The traditional left position on nationalization is relatively simple: the means of production should be publicly owned. Factories, mines, railroads, utilities. The state takes ownership, the workers benefit, the extraction ends. This is correct as far as it goes. But it does not go far enough, because it implies a static boundary - a fixed list of things that should be nationalized. Steel: yes. Luxury handbags: no. Water: obviously. A bakery: obviously not.
The problem is that the boundary is not fixed. It moves.
In 1950, a three-person startup in a garage was not systemically important. In 1976, a three-person startup in a garage seeded the personal computing revolution and, later, the mobile one (Apple). In 2025, a three-person AI lab producing a model that powers medical diagnostics, financial trading, logistics optimization, and military targeting across a dozen countries is one of the most systemically consequential entities on earth. The lab has three employees. It might have a single product. And the failure of that product - or its capture by a hostile actor, or its weaponization against any population - would be systemically, and potentially civilizationally, catastrophic.
Size does not determine systemic importance. Consequence does. The question is not "how big is this company?" The question is "what happens if this company fails, or if the person who owns it decides to use it against the public interest?"
A thousand-employee luxury goods manufacturer can go bankrupt and the economy will barely notice. A twelve-person team running a model that half the country's hospitals depend on for diagnostic support cannot. The first company is private in a way that carries acceptable risk. The second is private in a way that is a structural danger. They have nothing in common except the legal form of their ownership.
The scale is no longer hypothetical. OpenAI's valuation reached $852 billion in early 2026, with 900 million weekly active users. Its February 2026 funding round of $110 billion - led by Amazon, SoftBank, and Nvidia - exceeded total annual American venture capital investment during any year prior to 2021. The Pentagon awarded OpenAI a $200 million military AI contract. Palantir's Maven Smart System is deployed across five combatant commands and NATO. When one AI company refused to remove safeguards from its models for mass surveillance or autonomous weapons (Anthropic), the Department of Defense designated it a supply-chain risk and the government stopped using its products across all agencies. The retaliatory response proved the dependency: the state had become so reliant on a private company's product that the company's ethical stance was treated as a national security threat. This is the systemic criticality the framework describes, and it is already here.
The framework calls this the dynamic nationalization threshold. Not a fixed line between "public" and "private" sectors. A moving boundary, determined by one thing: systemic criticality. When the output of an enterprise becomes so embedded in the functioning of the economy, the infrastructure, or the security of the state that its failure would constitute an unacceptable risk, private ownership of that enterprise is no longer a reasonable model. The enterprise must be absorbed into the commons because at a certain level of consequence, private ownership is a single point of failure controlled by a person or group whose interests may not align with the public's.
The trigger is not size. It is not revenue. It is not employee count. It is commodification and systemic integration. When a product or service has been commodified - when it is bought and sold as a standard input to other economic processes - and when it has become integrated into systems whose failure would cascade, the threshold has been crossed.
This is the inverse of the capitalist model. Capitalism waits for the systemically important thing to fail, then uses public money to rescue the private owner. The framework nationalizes before the failure, while the thing is still functioning, precisely because it is too important to risk.
What happens after
There is a version of this that people imagine and it looks like the Soviet Union. The state takes over the factory. The engineers are replaced by party appointees. Production targets are set by a committee in the capital. Quality drops. Innovation stops. The nationalized entity becomes a bloated, unresponsive arm of the state bureaucracy.
That version is the wrong one. It is also the version that failed, and the failure is well documented.
When an enterprise is nationalized under the dynamic threshold, the enterprise keeps running. The engineers keep engineering. The managers keep managing. What changes is governance - not operations.
The state adds three things. Oversight: the enterprise is subject to democratic accountability. Its activities are transparent. Its strategic decisions are reviewed against national objectives. Strategic direction: the enterprise's priorities can be aligned with public needs rather than shareholder returns. A nationalized pharmaceutical company can prioritize affordable drugs over profitable ones. A nationalized energy company can prioritize grid transition over quarterly earnings. Veto authority: the state can block decisions that conflict with the public interest. The sale of critical infrastructure to foreign entities. The shut-down of services to extract concessions. The deployment of technology in ways that violate the rights of the population.
What the state does not do is replace the management with politicians. This is the mechanism described in the previous piece - the structural separation between political roles and functional roles. The politician sets the direction: we need affordable medication, we need clean energy, we need reliable infrastructure. The functional implementor executes: here is how to manufacture the drug, here is the engineering plan for the grid transition, here is the maintenance schedule for the water system. The politician has oversight, and the implementor has autonomy. The two roles do not merge.
This is what went wrong in the Soviet Union, and it is what the framework is designed to prevent. When the party secretary of a steel plant also runs the steel plant, you get a person who makes political decisions about steel production and operational decisions about party priorities, and both decisions are worse for it. When the steel plant is run by an engineer under the strategic direction of an elected government, you get good steel and good politics. Scale this nationally - using technology to coordinate the supply chain across all of your steel plants - and you get good steel, good politics, and a surplus that flows back to the people and can be used to strengthen sovereignty.
The operational continuity principle is simple: nationalization changes who the enterprise answers to. It does not change who runs it. Not unless the people running it are failing, in which case the failure is an operational question, not a political one, and the replacement is based on competence, not loyalty.
Technology is not neutral
Everything above applies to enterprises in general - steel mills, banks, pharmaceutical companies, logistics networks, energy grids. But there is a category of nationalization that requires something more than governance. When the thing being nationalized is systemically important and technologically dangerous, the framework adds an additional layer: the question of whether the thing should exist at all.
Marx treated technology as neutral. The machine is not good or evil. Under capitalist relations it exploits; under socialist relations it liberates. The loom that grinds the factory worker into poverty under capital could free her from drudgery under socialism. The technology is the same. The relations of production determine its effect.
This was correct in the 19th century. It is partially correct now. A lathe is a lathe. An industrial robot that assembles cars is an industrial robot that assembles cars. Under socialist relations, the robot reduces working hours. Under capitalist relations, it eliminates jobs while concentrating the productivity gains in the hands of the owner. Marx was right about this, and the principle still holds for the vast majority of productive technology.
But some technologies have properties that make them dangerous regardless of who controls them. The machine is not always neutral. Some machines carry risks that follow from the technology itself, not from the relations of production. And those risks operate by the same reciprocal logic that governs everything else in this framework: the tool will turn.
The framework sorts technology into five categories.
Socializable technologies are the straightforward case. Task-specific AI that automates repetitive labour. Computational tools that augment human capability. Robotics that reduce physical drudgery. Under socialist relations, these are instruments of liberation. Socialize them. Deploy them. Use them to reduce working hours, improve healthcare delivery, optimize resource distribution, free people from work that no one should have to do. This is what Marx envisioned, and under the right relations, it works.
Conditionally deployable technologies are tools whose danger depends on context. A genetic editing tool can cure diseases, or create bioweapons. An industrial chemical process can produce fertilizer, or nerve agents. These require case-by-case analysis under the framework's proportionality logic - the same logic that governs how we fight. The question is always: what are the reciprocal consequences of deployment, and can they be structurally contained?
Deterrent technologies are tools whose existence is necessary for sovereignty but whose use is prohibited or constrained. Nuclear weapons are the primary example. In a world where adversaries possess nuclear capability, the absence of that capability creates an asymmetric vulnerability that invites imperial intervention. The framework permits acquisition. It prohibits first use. The weapon exists to ensure that any adversarial use would face reciprocal consequences. Possession is sovereignty. Use is a transgression. This distinction matters and a later piece in this series will address it in full.
Sovereignty tools are technologies used for foreign intelligence and national defence. Satellite intelligence. Signals intelligence. Cyber capability. The framework requires them. In a world where hostile powers conduct intelligence operations against you, the absence of your own capability is not restraint; it is a material vulnerability. Foreign intelligence collection is a tool of sovereignty, and sovereignty is the precondition for everything else the framework proposes.
Transgressions are technologies that must not be built, deployed, or acquired for domestic use. Not because they are morally wrong in some abstract sense, but because their reciprocal consequences are so severe and so probable that no material benefit justifies their existence.
Domestic mass surveillance is a transgression. A surveillance apparatus built to monitor a country's own population cannot exist without being turned against that population. Not "might be turned." Cannot exist without being turned. The institution that operates it develops an institutional interest in using it. The data it collects becomes a tool for whoever controls the state. The apparatus outlives the people who built it and serves the interests of the people who inherit it. This is the pattern of every surveillance state in history, and reciprocal materialism predicts it as a material certainty.
The distinction between this and nuclear weapons matters. A nuclear weapon can sit in a silo. Acquirement is not usage. A domestic surveillance network cannot sit in a silo. The moment it is built, it is collecting. The moment it is collecting, it is being used. The moment it is being used, it is being used against someone. And the someone changes depending on who holds power. Build the panopticon for criminals. It will be used against dissidents. Build it for dissidents. It will be used against rivals. Build it for rivals. It will be used against everyone. Acquirement is usage: the tool cannot exist without turning inward.
Likewise, AGI without demonstrated containment is a transgression for the same reason. An artificial general intelligence that cannot be reliably controlled is not a tool. It is a risk whose reciprocal consequences are existential regardless of the economic system that produces it. This is not a capitalist problem or a socialist problem. It is a material problem. The framework refuses the fantasy that the right relations of production somehow make an uncontrolled alien superintelligence safe. They do not. The risk is technological, not political.
The domestic-foreign boundary
There is a specific application of this technology framework that requires its own treatment, because it involves a distinction that most states blur and the framework insists must be absolute.
Foreign intelligence collection is permitted. Domestic surveillance is a transgression. Between these two lies a set of technologies - CCTV cameras, for instance - that can be used for either purpose. The framework draws a hard line.
CCTV in public spaces for investigative purposes - solving specific crimes, collecting evidence for specific investigations - is permitted. But under strict constraints. No AI-based identification or tracking. No facial recognition. No algorithmic analysis of movement patterns. No mass monitoring. Footage is used for specific investigations, with warrants, reviewed by independent oversight. The camera is a tool for solving a crime after it happens, not a tool for watching a population before it does anything.
This is not a soft guideline. It is a structural requirement. Any technology deployed domestically that cannot be structurally prevented from becoming mass surveillance must be evaluated as if it were mass surveillance. If you cannot guarantee that the tool will not be turned from investigation to monitoring, it belongs in the transgression category. The burden of proof falls on the technology, not on the population.
The reason is reciprocal. The state that builds the capacity to track its population will track its population. Not because the people who build it have bad intentions - they might have the best intentions in the world. But because the capacity, once built, outlasts the intention. The database survives the administrator. The algorithm survives the programmer. The infrastructure survives the government. And the next government, or the government after that, or the faction within the government that gains influence a decade from now, will use the infrastructure that exists because it exists and because they can.
This is not speculation. This is the history of every surveillance tool the series has examined. Built for colonial subjects, used against citizens. Built for foreign enemies, used against domestic dissidents. Built for criminals, used against journalists. The pattern does not break. The framework builds the prohibition into its architecture because the pattern cannot be managed, only prevented.
Not too big to fail - too important to be private
The capitalist model of "too big to fail" has a specific structure: the private entity grows until its failure would be systemic, then the public is held hostage to its survival. The entity knows this, and it behaves accordingly. It takes larger risks because it knows the downside is socialized. It extracts larger profits because it knows the state will not let it die. This is moral hazard at civilizational scale, and it is built into the structure of capitalism, not a bug of it.
The framework inverts this. The point of nationalization is not to rescue a failing entity. It is to absorb a successful one into the commons before it fails, before a single owner can use its systemic importance as leverage against the public. The water utility is nationalized not because it went bankrupt but because water is too important to depend on someone's quarterly earnings report. The energy grid is nationalized not because it failed but because energy access is a precondition for everything else in modern life - healthcare, food, transport, heating, communication - and a precondition that important cannot be subject to the profit motive of a private owner. Internet and mobile access are nationalized not because a few telecommunications companies are failing but because internet and communications are critical to the functioning of democratic processes and the flow of information. This does not prevent the formation of new competitors within the national framework; on the contrary, the national interest is best served by competition and the subsequent distribution of innovation among competitors.
The same logic extends to the new categories. A logistics network that delivers the majority of a country's consumer goods is systemically critical. A cloud computing provider that hosts the majority of a country's business infrastructure is systemically critical. An AI system that half the country's hospitals use for diagnostic support is systemically critical. The question is not whether these should be nationalized in principle. The question is whether the current owner's private interest can diverge from the public interest - and if the answer is yes, and if the consequence of that divergence is catastrophic, then private ownership is an unacceptable risk.
This is not hostility to small business. A bakery, a restaurant, a local construction firm, a freelance developer - these are not systemically critical. Their ownership structure is their own business. The framework has nothing to say about a person who bakes bread and sells it. The framework has everything to say about a person who controls the grain supply.
The dynamic threshold means the framework does not need to enumerate every industry that should be nationalized. It does not produce a list. It produces a test: is the thing systemically critical? Is its output commodified and integrated into systems whose failure would cascade? If yes, it crosses the threshold. If not, it does not. The test is reapplied continuously, because systemic criticality changes. An industry that was not critical a decade ago may be critical now. An enterprise that was not consequential last year may be consequential this year. The boundary is alive, and it moves with the material conditions.
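The test can be read as a decision procedure. A minimal sketch follows; the predicate names and the example inputs are illustrative assumptions, not part of the framework's text:

```python
from dataclasses import dataclass

@dataclass
class Enterprise:
    """Illustrative inputs to the threshold test (field names are assumptions)."""
    commodified: bool         # output bought and sold as a standard input
    cascade_on_failure: bool  # failure propagates through dependent systems

def crosses_threshold(e: Enterprise) -> bool:
    """Dynamic nationalization threshold: commodification AND systemic
    integration. Size, revenue, and headcount are deliberately absent."""
    return e.commodified and e.cascade_on_failure

# The test is reapplied continuously: the same enterprise can cross the
# threshold later without growing at all, because the boundary moves
# with material conditions.
ai_lab_2020 = Enterprise(commodified=False, cascade_on_failure=False)
ai_lab_2025 = Enterprise(commodified=True, cascade_on_failure=True)
print(crosses_threshold(ai_lab_2020))  # False
print(crosses_threshold(ai_lab_2025))  # True
```

Note what the function does not take: employee count, revenue, market share. A luxury-goods manufacturer with a thousand employees fails both predicates; a twelve-person diagnostic-model provider passes both.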
Chilean copper
In 1971, the Chilean congress voted unanimously - every party, left to right - to nationalize the country's copper mines. Copper was Chile's primary export, the backbone of its economy, responsible for the majority of its foreign exchange earnings. The mines were owned by two American corporations: Anaconda and Kennecott. The surplus they extracted left Chile entirely. The profit from Chilean labour, dug from Chilean soil, was deposited in American bank accounts, paid to American shareholders, and reinvested in American interests.
Salvador Allende, the democratically elected president, did not nationalize copper because he was a radical. He nationalized it because the test was straightforward: Copper was the single most systemically critical resource in the Chilean economy. Its failure - or its mismanagement, or its diversion to foreign interests at the expense of domestic needs - would be catastrophic for the country. And the private owners' interests were not aligned with Chile's. Their interest was to extract the maximum surplus and export it. Chile's interest was to retain that surplus and invest it in development.
The nationalization was correct under the dynamic threshold. Copper was commodified, systemically integrated, and controlled by foreign entities whose private interest conflicted directly with the public interest. Every indicator pointed to the same conclusion: this is too important to be private.
The operational implementation followed the continuity principle. The miners kept mining. The engineers kept engineering. What changed was the flow of surplus. Instead of leaving Chile, it stayed. Allende used the copper revenues to fund healthcare, education, and land reform. The nationalized mines continued to operate. They produced copper. The difference was that the people of Chile, rather than the shareholders of Anaconda, determined what happened with the value produced.
What happened next is a story about sovereignty, not about nationalization. The United States, acting through the CIA, destabilized the Chilean economy, funded opposition movements, and backed the military coup that ended in Allende's death and Pinochet's dictatorship. The nationalization was destroyed not because it failed - it was functioning - but because it threatened the interests of the imperial power that had been extracting Chile's resources. A later piece in this series will address what this teaches about sovereignty. Here, the relevant lesson is narrower: the decision to nationalize was correct. The test was met. The implementation worked. What broke was not the policy. What broke was the capacity to defend it.
AI labs and the commodification trigger
The most urgent contemporary application of the dynamic nationalization threshold is artificial intelligence.
In 2020, the major AI labs were research organizations. Some were non-profits. Their outputs were papers, models, experiments. Interesting, sometimes impressive, but not systemically integrated into anything. A lab could shut down and the economy would not notice.
By 2025, this is no longer true. AI models are embedded in healthcare diagnostics, legal research, financial trading, logistics optimization, education, military targeting, and the daily workflow of hundreds of millions of people. The output of a handful of labs - firms with hundreds of employees, not tens of thousands - has been commodified and woven into the infrastructure of multiple countries. The models are not experimental curiosities. They are productive forces, in the Marxist sense: instruments through which labour is organized and surplus is generated.
The commodification trigger has been pulled. The output is no longer research. It is a commodity - bought, sold, licensed, embedded in other products, integrated into systems whose failure would cascade across industries and borders.
The systemic integration test is met. If the leading AI providers were to fail simultaneously - or if their owners were to restrict access, manipulate outputs, sell the technology to a hostile power, or simply decide that their private interest no longer aligned with the public's - the consequences would propagate through every system that depends on their products. Healthcare, finance, logistics, defence, education. These are not hypothetical dependencies. They are current ones.
Under the dynamic threshold, the conclusion is straightforward: AI development at this scale is too important to be private. The question is not whether a government should have oversight of AI. The question is who makes the final decision about what the technology does, and the answer cannot be a private individual or a shareholder group whose interests may diverge from the public's at exactly the moment when the divergence matters most.
Nationalization of AI labs would follow the same operational continuity principle as any other nationalization. The researchers keep researching. The engineers keep engineering. The models keep running. What changes is governance: strategic direction aligned with national objectives rather than shareholder returns. Veto authority over deployments that threaten the public interest. Transparent development under democratic oversight rather than proprietary development under corporate secrecy, following strict containment requirements to prevent transgressions.
The objection will be: this kills innovation. The answer is: no, it doesn't. Innovation in AI is driven by talent, compute, and data. None of these are destroyed by public ownership. What is destroyed is the capacity of a private owner to capture the surplus generated by the innovation and to make unilateral decisions about deployment - particularly deployments that cross into transgression. That is the point.
The full technology spectrum
The five-category framework for technology requires a complete treatment beyond what the main body provides. Each category has distinct properties, evaluation criteria, and examples.
Socializable technologies are the largest category and the least controversial. These are tools whose primary function is productive - they augment human labour, automate drudgery, or enable capabilities that improve material conditions. Under capitalist relations, these technologies eliminate jobs and concentrate the productivity gains in the hands of owners. Under socialist relations, they reduce working hours and distribute the gains across the population.
Examples: industrial robotics, task-specific AI (language translation, medical imaging analysis, crop optimization), renewable energy generation technology, computational tools for scientific research, logistics optimization systems. These should be nationalized when they cross the systemic criticality threshold and socialized - made available as public goods - as broadly as possible. The goal is to use productive technology to free people from labour that no one should have to do, and to distribute the gains of automation across the working class rather than concentrating them in the owner class.
Conditionally deployable technologies require case-by-case analysis because their danger depends on context. These are dual-use technologies - tools that can serve liberation or destruction depending on implementation.
Examples: genetic editing (CRISPR - therapeutic use is socializable; weaponization is a transgression), industrial chemistry (fertilizer production is socializable; nerve agent production is a transgression), encryption technology (individual privacy protection is a right; state-imposed back doors are a transgression). The evaluation uses the framework's proportionality logic: what are the reciprocal consequences of deployment? Can those consequences be structurally contained? If containment is achievable and verifiable, deployment is permitted. If containment cannot be guaranteed, the technology slides toward the transgression category.
Deterrent technologies exist in a paradox: their value lies in their non-use. Nuclear weapons are the paradigmatic case. Biological weapons research (defensive, for vaccine development and threat assessment) is another. The framework permits acquisition because the asymmetry created by absence is itself a danger - a state without nuclear capability in a nuclear world is a state that can be blackmailed, invaded, or destroyed at will. But the framework prohibits use - first use is a transgression, an act whose reciprocal consequences are civilizational.
The distinction between deterrent and transgression is the acquirement-usage gap. Nuclear weapons can exist without being fired. The capability deters without the deployment. Domestic surveillance cannot exist without being used - acquirement is deployment. This is why nuclear weapons are deterrent and the panopticon is transgression, even though both are catastrophically dangerous.
Sovereignty tools are technologies whose purpose is the defence of the state's independence in a hostile international system. Foreign intelligence collection - satellite surveillance, signals intelligence, cyber espionage directed at foreign adversaries - is required. The framework does not function in a fantasy where hostile powers respect the sovereignty of socialist states. They do not. They never have. The capacity to know what your adversaries are doing, planning, and building is a precondition for defending against it.
The boundary is absolute: sovereignty tools are directed outward. The moment they are directed inward - the moment foreign intelligence infrastructure is used to monitor the domestic population - the technology has crossed from sovereignty tool to transgression. The structural separation between foreign intelligence and domestic governance must be architectural, not procedural. The people who collect foreign intelligence cannot have access to domestic data. The systems that process foreign signals cannot be connected to domestic networks. The wall must be physical. Legal walls are rewritten by the governments that pass through.
Transgressions are technologies whose reciprocal consequences are so severe that no deployment context justifies them. The category is small but absolute.
Domestic mass surveillance: any system designed to monitor a country's own population at scale. This includes facial recognition in public spaces, AI-based identification or tracking of individuals, mass communications interception directed domestically, social media monitoring by state agencies, and any algorithmic system that profiles citizens based on behaviours, movements, associations, or communications. The prohibition is structural: these systems must not be built. Not regulated. Not built.
AGI without demonstrated containment: an artificial general intelligence whose behaviour cannot be reliably predicted and controlled. This is not a question of who owns it. Under any economic system - capitalist, socialist, anything - an uncontrolled superintelligence is an existential risk. The framework does not pretend that socialist relations of production somehow neutralize a technology risk that is rooted in the technology itself. If you cannot demonstrate containment, you do not build it. Some will read this as forbidding AI entirely; that is a defensible socialist position. But the empirical case for targeted applications - in healthcare and similar national objectives - cannot be dismissed: a state that refuses the tools that improve working conditions will, by the same reciprocal logic, see working and living conditions decline, and eventually the state itself degenerates.
The category is deliberately narrow. Most technology is not transgressive. Most technology is socializable or conditionally deployable. The transgression category exists for the specific cases where reciprocal materialism predicts, with near certainty, that the technology will be turned against the population regardless of intent, governance, or economic system.
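The sorting logic above can be sketched as an ordered sequence of tests. The flag names and their ordering are an interpretive assumption, not the framework's own formalism; the acquirement-usage gap appears as the `can_exist_unused` flag:

```python
from enum import Enum

class Category(Enum):
    SOCIALIZABLE = "socializable"
    CONDITIONAL = "conditionally deployable"
    DETERRENT = "deterrent"
    SOVEREIGNTY = "sovereignty tool"
    TRANSGRESSION = "transgression"

def classify(dual_use: bool,
             containment_demonstrable: bool,
             can_exist_unused: bool,
             directed_outward: bool,
             deterrence_value: bool) -> Category:
    """Illustrative ordering of the framework's tests (flags are assumptions)."""
    # Transgression check first: if consequences cannot be contained AND
    # the technology cannot exist without being used (acquirement is usage),
    # it must not be built.
    if not containment_demonstrable and not can_exist_unused:
        return Category.TRANSGRESSION
    if deterrence_value and can_exist_unused:
        return Category.DETERRENT      # can sit in a silo; value is in non-use
    if directed_outward:
        return Category.SOVEREIGNTY    # foreign intelligence, outward only
    if dual_use:
        return Category.CONDITIONAL    # case-by-case proportionality analysis
    return Category.SOCIALIZABLE

# Why nuclear weapons and the panopticon land in different categories
# despite both being catastrophically dangerous:
nuclear = classify(dual_use=False, containment_demonstrable=False,
                   can_exist_unused=True, directed_outward=True,
                   deterrence_value=True)                    # DETERRENT
panopticon = classify(dual_use=False, containment_demonstrable=False,
                      can_exist_unused=False, directed_outward=False,
                      deterrence_value=False)                # TRANSGRESSION
```

The sketch also captures the containment slide described for conditionally deployable technologies: withdraw `containment_demonstrable` from a technology that cannot exist unused, and it falls into the transgression branch.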
The domestic-foreign surveillance boundary
The separation between foreign intelligence collection and domestic surveillance requires specific structural architecture. This is not a policy preference that can be implemented through legislation and goodwill. It is a design requirement that must be enforced through physical, technical, and institutional separation.
The physical separation: The agencies or bodies responsible for foreign intelligence collection and those responsible for domestic law enforcement must be structurally distinct. Different buildings. Different networks. Different personnel. No shared databases. No shared analysts. No secondments. No "coordination centres" that provide a bridge between the two. Every bridge is a leak point. Every leak point will be exploited.
The historical pattern is uniform. The NSA was authorized to collect foreign signals intelligence. PRISM collected the communications of American citizens. MI5 is a domestic security service; MI6 is a foreign intelligence service. The two have shared intelligence under every framework meant to separate them. The separation must be physical, not procedural. Procedures are written by the people who want access. Physical separation is harder to circumvent.
The CCTV constraints: Public-space camera systems used for crime investigation are permitted under the following hard constraints:
No artificial intelligence is applied to the footage. No facial recognition. No algorithmic pattern analysis. No behavioural prediction. No movement tracking. The camera records. A human investigator reviews footage for a specific investigation, under a specific warrant, reviewed by independent oversight. The footage is deleted after a defined retention period unless it is evidence in a criminal case.
This sounds restrictive, because it is. The restriction is the point. Every expansion of camera capability, every addition of AI analysis, every connection of camera networks to centralized databases moves the system from investigation tool to surveillance apparatus. The framework draws the line at investigation and defends it structurally because the line, once crossed, has never in history been walked back.
The technology containment test: Any technology deployed for domestic use that could, with modification or reinterpretation of its mandate, be used for mass surveillance must be evaluated as if it were already being used for mass surveillance. The question is not "is this technology currently being used to watch the population?" The question is "can this technology be used to watch the population?" If yes, it must be constrained to the point where it cannot, or it must not be deployed. Where such a technology already exists in a state outside the framework, it must be treated as a transgression, and the framework must reciprocate accordingly.
This is the precautionary application of reciprocal materialism. The tool will turn. It always turns. The question is not whether it will be misused but when. The framework does not trust future governments to exercise restraint with tools that enable control. It does not build the tools.
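The containment test described above is a decision procedure, and it can be sketched as one. The function and category names below are hypothetical illustrations, not part of the framework's text; the logic simply encodes the rule that capability, not current use, determines the verdict.

```python
# Illustrative sketch of the technology containment test as a decision
# procedure. The names and return values are hypothetical, chosen only
# to make the rule in the text explicit.

def containment_test(can_mass_surveil: bool, can_be_constrained: bool) -> str:
    """Evaluate a domestic technology as if it were already being misused.

    can_mass_surveil: could this technology, with modification or a
        reinterpreted mandate, be used to watch the population?
    can_be_constrained: can it be constrained to the point where it cannot?
    """
    if not can_mass_surveil:
        return "deploy"  # outside the surveillance question entirely
    if can_be_constrained:
        return "deploy-constrained"  # e.g. CCTV without AI, warrant-gated review
    return "prohibit"  # must not be built, regardless of current use

# A camera network with AI analytics bolted on cannot be constrained short
# of removing the AI, so the capability itself is prohibited:
print(containment_test(can_mass_surveil=True, can_be_constrained=False))  # prohibit
```

Note that "is it currently being misused?" never appears as an input: under the precautionary reading of reciprocal materialism, present restraint carries no weight.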
Nationalization in the tradition
Marx's position on the means of production was clear: private ownership of the instruments of production is the mechanism by which the capitalist class extracts surplus value from the working class. The abolition of private property in the means of production is the precondition for the abolition of exploitation. Capital, Volume I, presents this as the central contradiction of the capitalist mode of production: social production, private appropriation. The remedy follows from the diagnosis. If production is social - if it requires the coordinated labour of many - then the product of that labour should be socially owned.
Lenin, in State and Revolution (1917), went further. The revolutionary state must immediately seize large-scale industry, banking, transport, and communications - the "commanding heights" of the economy. The Bolsheviks nationalized banks within weeks of the revolution, followed by large industry, railways, and foreign trade. Lenin's justification was that these sectors determined the trajectory of the entire economy and could not be left in the hands of a hostile class during the transition to socialism.
The framework draws on both but departs from them in a specific way. Marx and Lenin both implied, to varying degrees, a comprehensive nationalization - all significant means of production, eventually, under the control of the working class via the state. The framework introduces a dynamic threshold that is deliberately selective. Not everything needs to be nationalized. Not everything should be. The bakery, the freelance developer, the local construction firm - these are not systemically critical. Their private ownership does not create the conditions for extraction at the level that demands state intervention.
What demands state intervention is systemic criticality - a property that is historically contingent, not permanent. The sectors Lenin identified (banking, heavy industry, transport) were systemically critical in 1917. Some remain critical. Others have been joined or superseded by sectors Lenin could not have anticipated: cloud computing, AI development, logistics platforms, pharmaceutical supply chains, semiconductor manufacturing. The framework's contribution is the recognition that the threshold is not fixed but dynamic, and that the test must be reapplied continuously as material conditions change.
China's industrial policy provides both a partial model and a cautionary lesson. The Chinese state retained control over banking, energy, telecoms, and heavy industry while permitting - and then actively encouraging - private enterprise in consumer goods, technology, and services. The model produced extraordinary economic growth. It also produced a billionaire class, integrated into the Communist Party of China, whose material interests now diverge from those of the working class. Selective nationalization without the framework's anti-ossification mechanisms - without multi-party competition, without term limits, without the political-functional separation, without the armed populace - produced exactly the dynamic that reciprocal materialism predicts. The party that nationalized selectively became a vehicle for the selective interests of the class that emerged around the private sector it permitted. The lesson is not that selective nationalization is wrong. The lesson is that selective nationalization without structural protections against class formation is simply a slower route to the same destination.
The framework also diverges from the command economy model - the comprehensive central planning of the Soviet type, where the state determines production targets, allocates inputs, sets prices, and manages operational detail. The command economy's failure is well documented: it could not process information efficiently enough to match supply to demand across millions of goods, it destroyed the incentive structures that drive adaptation and innovation, and it produced precisely the bureaucratic class formation that the framework's anti-ossification architecture is designed to prevent. The dynamic nationalization threshold, combined with the political-functional separation, avoids this by nationalizing what is systemically critical, preserving operational autonomy for functional implementors, and keeping the state's role focused on governance and strategic direction rather than operational management.
The framework's position is neither laissez-faire nor command economy. It is targeted intervention at the point of systemic criticality, governed by democratic accountability, constrained by reciprocal materialism, and structurally separated from the operational management of the nationalized entities. The state does not run the steel mill. The state ensures that the steel mill serves the public interest. The engineer runs the steel mill. The distinction is the framework's contribution.