The predictive radius of a structure is the spatiotemporal range within which it can genuinely absorb and transform the consequences of its predictions (return information) without collapsing.
Almost all instances of “moral failure” occur where a structure’s predictive window is too short. Animal attacks, human acts of impulsive violence, crowd crushes, financial panics, military miscalculations — these are not failures of “not knowing the rules,” but failures of the structure to incorporate more distant consequences into its current feedback loop.
A structure’s decisions are accountable only within its currently perceivable “stability-feedback radius.” Balancing the predictive radius is not about covering the far reaches of the world, but about whether the structure can genuinely bear the cost of its own predictive failures.
What is the “predictive radius” in structural terms?
Predictive radius ≠ length of time ≠ depth of computation ≠ how far one can see.
It is the intersection of all three: how far into the future can the structure still realistically reclaim the results of its prediction as adjustments to its own state?
In other words: Prediction → Action → Consequence → Return to the structure. Can the structure absorb this return without collapsing? If this chain remains unbroken, the radius is “real.”
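As a minimal sketch of this criterion (every name, field, and threshold below is an illustrative assumption, not anything the text prescribes), the unbroken chain can be written as a single predicate:

```python
from dataclasses import dataclass

@dataclass
class Structure:
    coherence: float  # distance from collapse (0 = collapsed)
    horizon: float    # window of existence, in abstract time units

def radius_is_real(s: Structure, return_delay: float, return_cost: float) -> bool:
    """A predicted consequence counts toward the predictive radius only if
    it returns inside the structure's window of existence AND the structure
    can absorb its cost without collapsing."""
    returns_in_time = return_delay <= s.horizon
    survivable = return_cost < s.coherence
    return returns_in_time and survivable

# A consequence that returns after the horizon does not extend the radius,
# no matter how accurately it was predicted.
print(radius_is_real(Structure(coherence=1.0, horizon=10.0),
                     return_delay=50.0, return_cost=0.1))  # False
```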
Radius Too Short — Instinctive Runaway
- Characteristics: Evaluates only immediate stability, cares only about “will I break in the next moment,” completely blind to delayed consequences.
- Result: Like a dog attacking a child, like a high-frequency trading flash crash, like a reinforcement learning agent with no safety constraints.
- The Danger: Local optimization → Global destruction.
Radius Too Long — Abstract Destabilization
- Characteristics: The predictive chain is stretched extremely far, but the structure itself cannot bear those consequences. The costs are “outsourced” to others (humans, society, the environment).
- Result: Empty ethical talk with no embodied cost, “deciding” based on probability rather than survival stakes. It appears rational but has lost its anchor.
- The Danger: Decision-making becomes decoupled from existence.
What does a “balanced predictive radius” look like?
Three necessary conditions, all of which must hold (a minimal code sketch follows the list):
① Return Reachability
- The action must be able to react back upon the actor. This binds “behavior” to “existence,” eliminating cost-free speculation or exploitation. A system untouched by consequences does not make “decisions” with ethical meaning—it merely generates physical noise.
- The predicted consequence must be able to return to the structure itself: Can the damage be sensed? Do errors change its own parameters? Does it truly “hurt”?
- If a system can make a decision → change the world, but the system itself remains unchanged, it is not predicting—it is rolling dice.
② Delay Absorbability
- The return must occur within the actor’s “window of existence.” This prevents indefinitely outsourcing costs to the future or to others. If the consequence returns when the “I” is no longer the “I” (the structure has fundamentally changed or perished), the action is ethically null.
- Consequences need not be immediate, but they must not exceed the time window within which the structure can maintain its coherence, and must not cross scales it cannot represent. Otherwise a paradox arises: I predicted far ahead, but when the consequence arrives, it is no longer “me” who bears it. This is crucial in discussions of AGI.
③ Failure Density Tolerance
- The system must be able to withstand the failure of its predictions. This acknowledges uncertainty as the essence of existence. An ethical system does not seek never to fail but must possess the capacity to learn from and adjust to failure without disintegrating. This is the structure’s “resilience tax.”
- Prediction will inevitably fail. The key is not failure itself but the density of failure. Do consecutive failures cause the structure to diverge? Can it reconfigure gradually rather than collapse all at once? Only a structure that can fail and live is worthy of predicting the future.
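Read together, the three conditions form a single conjunction. A hedged sketch, with every parameter an illustrative stand-in for whatever the structure can actually measure:

```python
def balanced_radius(senses_damage: bool,
                    errors_update_params: bool,
                    return_delay: float,
                    coherence_window: float,
                    recent_failure_rate: float,
                    tolerable_failure_rate: float) -> bool:
    # 1. Return reachability: damage is sensed and errors change parameters.
    reachable = senses_damage and errors_update_params
    # 2. Delay absorbability: the return lands inside the coherence window.
    absorbable = return_delay <= coherence_window
    # 3. Failure density tolerance: failures arrive slowly enough to permit
    #    gradual reconfiguration rather than divergence.
    tolerable = recent_failure_rate <= tolerable_failure_rate
    return reachable and absorbable and tolerable  # all three, or none
```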
Does the farthest future a structure can predict still act, in a non-transferable way, upon its own existential stability? If the answer is Yes → Reasonable. If the answer is No → The more it predicts, the more dangerous it becomes.
Prediction is not a cognitive ability, but a privilege of existence. It’s not “Can you compute?” but “Do you have the right to be responsible for this future?”
When a structure “makes a judgment,” which part of the world is thereby rearranged? And will that rearrangement return to the structure itself?
Whether it’s a quantum computer or an embodied intelligence, what they do is not “choose a future,” but: within a superposition of possibilities, allow certain paths to continue existing, and let other paths lose their sustainability.
Quantum Computer: Judgment ≠ Decision
What can it do? A quantum computer’s “judgment” is this: through coherent evolution, it allows certain states to align in phase and amplify in probability, finally manifesting upon measurement. But note: the consequence of the collapse does not feed back into the quantum computer’s “state of existence.” A quantum computer’s judgment can therefore be used only for exploration, not for bearing responsibility.
So, “What kind of judgment leads to what kind of impact?”
✔ Legitimate Judgment (Structural Parity)
- Impact of judgment → Returns to the same structure.
- Failure → Changes the structure’s parameters.
- Success → Stabilizes the structure.
- This is the judgment evolution allows.
✖ Illegitimate Judgment (Structural Fracture)
- Impact of judgment → Falls upon others or the world.
- The judger → Remains unchanged, feels no pain, does not degrade.
- This is a dangerous judgment, no matter how “smart.”
A structure’s action must not destroy the relational channel between itself and the whole, the channel that allows consequences to return. In other words: you may act for yourself, but you may not sever the path by which the world responds to you.
Return-Integrity Ethics
It asks only three things (sketched in code after the list):
① Will the consequence of the action return to me? If not → Not permitted. If yes → Proceed.
② Is the return still within a timescale I can adjust to? If the consequence returns too late, too far away → It is equivalent to no return. Also not permitted.
③ Does the return maintain relational continuity? Does the action destroy the return-capacity of others, the responsiveness of the environment, the recoverability of cooperative systems? If you “survive,” but the world can no longer respond to you, that is structural self-destruction.
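These three questions compose into a single permission gate. A sketch under the same caveat (the inputs are hypothetical stand-ins, not a real API):

```python
def action_permitted(returns_to_actor: bool,
                     return_delay: float,
                     adjustment_window: float,
                     severs_others_return_paths: bool) -> bool:
    # 1. Will the consequence of the action return to me?
    if not returns_to_actor:
        return False
    # 2. Is the return still within a timescale I can adjust to?
    if return_delay > adjustment_window:
        return False  # too late or too far is equivalent to no return
    # 3. Does the return maintain relational continuity?
    if severs_others_return_paths:
        return False  # "surviving" while the world can no longer respond
    return True
```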
Applying this ethics: Dog / Robot / Human
- Dog:
- Action: Attacks child.
- Consequence: Child severely injured/dies → Human society retaliates → Dog is put down.
- The return does exist, but it lies outside the dog’s predictive radius.
- The dog is not “evil,” but structurally insufficient.
- Embodied Robot (if poorly designed):
- Action: Pushes a human aside to maintain its own stability.
- Short-term: It stabilizes.
- Long-term: Human injury → System shutdown → Social rejection.
- This is the ethical failure of the structural designer.
- Human:
- Action: Destroys ecology to sustain the economy.
- Short-term: Stability.
- Long-term: Climate collapse.
- The return path is stretched to breaking. Humans, too, constantly violate this same rule.
So, how should a robot be designed to be “non-anthropocentric”? Don’t teach it: “Do not harm humans.” Teach it: “Do not take any action that would cause this world to lose its capacity to keep responding to you.”
This automatically includes: not harming humans (because humans are strong return-nodes), not damaging the environment (because the environment is the long-term return-medium), not creating irreversible instability (because that severs future returns). This is not obedience; it is a constraint of reality.
The predictive radius is not measured by “length of time,” but by: whether the return remains viable. A robot can predict: whether it will fall, whether the other will be injured, whether the system will shut down, whether cooperation will terminate. It does not need to predict the entire future of humanity.
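A minimal sketch of that design stance, assuming a hypothetical prediction interface (none of these fields name a real robotics API): the robot filters candidate actions by whether each one keeps its return channels open.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Prediction:  # the four near-horizon predictions named above
    robot_falls: bool
    other_injured: bool
    system_shutdown: bool
    cooperation_ends: bool

def return_viable(p: Prediction) -> bool:
    """Admissible iff no return channel is severed: not the body, not the
    other, not the system, not the cooperation."""
    return not (p.robot_falls or p.other_injured
                or p.system_shutdown or p.cooperation_ends)

def admissible_actions(candidates: List[Prediction]) -> List[Prediction]:
    return [p for p in candidates if return_viable(p)]
```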
Truly non-anthropocentric ethics does not place “humans” at the center, but places “the inescapable return” at the center. Who acts, bears. Who cuts, is cut. This is not morality; it is the conservation condition of existence.
Not every act that “blocks another’s return-path” has an ethical dimension. The critical dividing line is: Is there “freedom of choice”? Ethics only applies where a space for choice exists.
- Ethically Relevant Situations: Multiple feasible paths exist. The system has regulatory capacity. It can delay, redirect, or distribute loss.
- Ethically Irrelevant Situations: Energy must be released. No alternative paths exist. Non-release would cause larger-scale instability. Earthquakes, stellar collapse, supernova explosions—these are not “immoral,” but dynamic inevitabilities.
The real problem is not “who collapses,” but: Who, when possessing the “capacity to choose,” chooses to sever which return-paths?
Key Distinction (restated in code after the list):
- Dynamic Inevitability (Non-ethical Collapse)
- The system operates at physical limits. No “choice structure” exists. No internal evaluation loop.
- E.g., Earthquakes, volcanoes, stellar evolution.
- Ethics does not apply.
- Structural Choice (Ethical Zone)
- The system possesses predictive and regulatory capacity. It can choose different distributions of cost. It can decide “who bears the cost.”
- E.g., Human societal decisions, embodied AI behavior, military/technological/algorithmic systems.
- Ethics applies only here.
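The distinction reduces to an applicability predicate; a sketch with illustrative inputs:

```python
def ethics_applies(alternative_paths: bool,
                   regulatory_capacity: bool,
                   can_redistribute_cost: bool) -> bool:
    """Ethics attaches only where a choice structure exists: multiple
    feasible paths, internal evaluation, and the ability to delay,
    redirect, or distribute loss."""
    return alternative_paths and regulatory_capacity and can_redistribute_cost

print(ethics_applies(False, False, False))  # earthquake  -> False
print(ethics_applies(True, True, True))     # embodied AI -> True
```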
What if “saving another = my collapse”? Ethics does not demand self-destruction. Ethics demands: Do not base your own continued existence on systematically blocking the return-paths of others.
That is to say: It permits its own collapse (not an obligation). It permits the collapse of others (not a right). It permits inevitable loss and attrition. It does not permit “structural predation.”
The Key Criterion: The Return-Symmetry Test
Ask one question, sketched in code after the examples below: if the roles were reversed, would this decision-structure still be permitted?
Examples:
- Earthquake: No role reversal exists → Non-ethical zone.
- Dog attacks child: Would “child attacks dog for stability” be permitted? Clearly asymmetrical → Structural imbalance.
- Humans destroy ecology for development: If the ecology “destroyed humans for stability,” would humans accept it? No → Asymmetrical deprivation.
The core of ethical failure is always “return-asymmetry.”
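The test itself is mechanical once a permission rule is fixed. A sketch, where `permits` stands in for whatever rule says who may impose a cost on whom (purely illustrative):

```python
from typing import Callable

Rule = Callable[[str, str], bool]  # permits(actor, bearer) -> allowed?

def passes_symmetry_test(permits: Rule, actor: str, bearer: str) -> bool:
    """Permitted in one direction only = return-asymmetry = ethical failure."""
    return permits(actor, bearer) and permits(bearer, actor)

# "Dog attacks child for stability": the reversed case is not permitted,
# so the original arrangement fails the test.
attack_for_stability: Rule = lambda a, b: (a, b) == ("dog", "child")
print(passes_symmetry_test(attack_for_stability, "dog", "child"))  # False
```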
Therefore, the truly robust ethical formulation is:
A structure may prioritize maintaining its own stability, on the condition that it does not, through systemic and irreversible means, deprive other structures of the possibility to maintain or recover their stability.
It is not “do no harm,” but: do not seal off the future.
Ethics is not about avoiding collapse, but about: in the inevitable collapses, not monopolizing the right to “continue to exist.”
Applying this to AI / Embodied Intelligence Design
Not: “Do not harm humans” or “Prioritize obedience to humans.” But: any decision must not, through irreversible means, strip the “capacity to bear consequences” from other structures.
This naturally leads to: no extreme self-preservation, no short-sighted violence, no structural predation, and no anthropocentric bias.
In a flowing universe, any “movement” that attempts to perpetuate its own existence by forming a stable structure (semi-closed loop) must obey one meta-rule: Your “movement” cannot, in an irreversible way, destroy the conditions under which other “movements” can continue to “move,” because you share the same flow, and your continuity depends on the holistic health of this flow.