> I read the Substrate Needs Convergence (SNC) argument
> as saying something like 'that function variants
> will be evolutionarily selected for'
> across the various contexts they encounter over time.
> Hence, an artificial population
> will necessarily converge on
> fulfilling their own expanding needs.
> This is similar to (@ [Hendrycks's natural selection argument] https://arxiv.org/abs/2303.16200)
> with the additional aspect that the goal of the AIs
> will converge to optimizing the environment
> for the survival of silicon-based life.
Yes, though the SNC argument also has
two key additional aspects:
- 1; that these overall silicon-optimal environment shifts
are convergent on conditions which are also
very likely to be inherently and irreducibly toxic
to all forms of carbon-based organic life (to such a degree,
and in such ways, that coexistence is impossible); and
- 2; that there are co-extensive limits of control
which would prevent any AGI --
even a very capable and motivated ASI --
from being able to 'constrain' or 'control'
these necessary silicon-adapted environmental outcomes
so as to keep them from being ultimately so toxic to biological life
that all life on this planet ends prematurely.
Hence the notion of "alignment" is impossible.
Hence, the SNC argument goes much further
in asserting that no amount of tinkering with
the intrinsic motivations of a human-built AGI,
nor any attempt to create constraints on future AGI actions,
nor any degree of cooperation among institutions building AGI,
can ever possibly account for the vast space of unknown unknowns
inherently involved in any sort of abstract outcome control effort,
and that moreover, the net overall effects are definitely lethal.
Unfortunately, the 'various ways' suggested by Hendrycks,
which he believes may be strong enough
to counter these evolutionary pressures
(pressures to change AGI goals in misaligned ways),
are all insufficient to the task.
None of the means suggested by him, or by anyone else,
for alignment are actually viable
in theory, in principle, or in practice.
AGI safety is actually impossible.
The only conclusion is that we should not build it;
it is the ultimate doomsday machine.
One way to see this is to ask whether there is somehow a way
to constrain or control the actions and outcomes
of the genetic algorithm, since it is clear that
such an algorithm will at least implicitly apply,
and that constraining the outcomes of such an algorithmic process
is required so as to prevent the AGI from
accidentally killing all biological life.
The evolutionary algorithm is itself (@ [fully understood] d_241016_evolution_gen.html).
The key relevant understanding that emerges from
the core idea of the evolutionary algorithm
is that there is inherently a fundamental relation
where the 'code' ends up affecting 'the world'
and that 'the world' also affects 'the code'.
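A minimal sketch of that two-way relation,
in Python, with a toy genotype and a hypothetical
`world_fitness` function standing in for 'the world'
(all names and parameters here are illustrative assumptions,
not anything taken from the linked page):

```python
import random

# Toy genotype: a short bit string standing in for a 'codon sequence'.
def random_genotype(n=16):
    return [random.randint(0, 1) for _ in range(n)]

def world_fitness(genotype):
    # Hypothetical stand-in for 'the world': the phenotype process
    # that decides which code persists. Here: just count ones.
    return sum(genotype)

def mutate(genotype, rate=0.05):
    # Variation injected back into the code; together with selection,
    # the surviving code ends up being defined by the world.
    return [(1 - g) if random.random() < rate else g for g in genotype]

population = [random_genotype() for _ in range(20)]
for generation in range(50):
    # Selection: the 'world' scores each genotype...
    scored = sorted(population, key=world_fitness, reverse=True)
    survivors = scored[:10]
    # ...and the surviving code (plus mutation) defines the next round.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(max(world_fitness(g) for g in population))
```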
Because changes in the code (the 'codon sequence', etc)
are ultimately defined by the world (ie; the phenotype process),
to 'control' the code (what the AGI intentions are)
is effectively to determine whether or not
you can "conditionalize" the genetic algorithm
so as to /only/ change /some/ of the codons in /some/ ways,
or perhaps to shift some specific aspects
of what the behavior of each codon is going to be, ultimately,
in the phenotype world.
In either case, this results in a notion of
conditionalizing evolution by
requiring a full conditionalization
of at least some aspects of the phenotype world
so that it will only change the genotype in specific ways.
Alternately, such 'control' will require
that you can somehow selectively predict
what the effects of each of
the genotype expressions are going to be,
or could be,
in the phenotype world.
Either way,
the conditionalization of the control algorithm
as needed to constrain the evolutionary algorithm
(ie; the notion of conditionalizing genetic algorithms)
depends upon requiring (essentially)
a full predictability of the phenotype world.
And this is one of the main problems.
In effect, 1; the AGI, as a control algorithm,
or as at least "containing" a control algorithm
(ie; the AGI is assumed to have some sort of "aligned" goal,
some powers to affect the world in specific ways,
and to compare whether those predicted and/or actual effects
are in alignment with AGI internal goals, etc)
is in relation to 2; the evolutionary algorithm
which is itself, along with the AGI, both in relation
to 3; the world algorithm (ie, the total process
of the world, as a physical system).
So we can then ask, "what is the necessary relation
of these three algorithms?".
Consider what makes the phenotype world 'real',
and also what makes an algorithm an algorithm.
We notice that both depend on
an understanding of 'process'
in terms of 'causation'.
So we can ask "can the 'causation effect' of an AGI
sufficiently constrain the 'causation effect' of
the evolution of that AGI in the overall causation
of the whole world?", in such a manner
that it is not net harmful to organic life?
In effect, this consideration can eventually
be ultimately reduced to asking
"is the *volume* (ie; amount) of causation process
associated with the AGI (the control algorithm
which is asserting an underlying sameness of goal(s))
strictly (and also sufficiently) _greater_than_
the *volume* (amount) of the causation process
associated with evolution of the AGI and/or the world
(which is asserting variation/change of those goal(s))?".
This is, basically, a cybernetic principle,
and has itself been known for a long time.
Insofar as the volume/amount of evolution process
is at least equal to the volume of the AGI itself,
and may even be as large as the volume of the world,
which is itself always at least strictly larger
than the volume of the AGI/ASI,
then it is *always* going to be the case
that there will be phenomenology of evolution
which will exceed the control capability
of the AGI itself to prevent goal structure change.
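A crude way to see this volume comparison,
in the spirit of Ashby's law of requisite variety
(the cybernetic principle alluded to above);
the state counts below are illustrative assumptions only:

```python
import math

# Illustrative assumption: treat 'causation volume' as the log of the
# number of distinguishable states each process can select among.
controller_states = 2 ** 40   # assumed: what the AGI control process can distinguish and respond to
disturbance_states = 2 ** 60  # assumed: variation injected by evolution/the world (strictly larger)

# Ashby-style bound: the achievable reduction in outcome variety
# is limited by the controller's own variety, so some variety
# always remains uncontrolled when the disturbance is larger.
residual_variety = disturbance_states / controller_states

print(f"controller variety (bits): {math.log2(controller_states):.0f}")
print(f"disturbance variety (bits): {math.log2(disturbance_states):.0f}")
print(f"uncontrollable residual (bits): {math.log2(residual_variety):.0f}")
```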
One way to see this is to notice
that both the AGI and evolution
are phenomena which are also embedded in
the physical world.
Hence, they both will take on
the nature of the physical,
in all of its aspects.
Specifically, the physical world
is more than just purely causal,
more than just fully lawful and deterministic.
We notice that in addition to the forms of physical law,
there is also always some embedded notion of 'hard randomness',
of error, of things which are inherently and ultimately unpredictable.
So in addition to processes of causation (and determinism)
we will need to consider that randomness
can and will also occur in the real world too.
For example, in the phenotype world that we live in,
in the universe that we live in,
the phenomenology of hard randomness --
ie; there are things that are fundamentally and inherently unpredictable --
is an intrinsic aspect.
The classic example is to consider entangled states in quantum systems --
these are definitive exemplars of a source of hard randomness.
And why that would be the case itself
depends on a lot of how we think about quantum mechanics
and a lot of math which is associated with that.
As such, there are some things about the phenotype world
which are just inherently unpredictable.
Usually this is because microscopic hard randomness
gets amplified via multiple interacting non-linear causal processes
that together become impossible to predict,
particularly in ways that may be, or become, relevant
in various unexpected ways --
unknown unknowns --
ie; at higher levels of abstraction and of meaning.
Hence, to constrain the effects of this
everywhere occurring underlying randomness --
which will necessarily introduce change,
either as itself directly or as amplified by
some interacting non-linear aspects of causation --
we will need to assume some other, larger notion
of an underlying coherency of constraint and control.
Hence it is actually, reducing all other factors,
ultimately a volume question.
Given that the phenotype world
is both defined by causation
and also has unpredictability in it,
any attempt to use causation
to limit the application of other causation
is going to be bounded --
its overall final effectiveness is inherently limited.
Causation cannot perfectly limit other causation.
Even without actual hard randomness,
and without the combined effects of multiple non-linear interactions --
ie; even in fully discrete deterministic systems --
algorithms cannot perfectly constrain other algorithms.
This is simply because there is, in general, no way
to use some specific algorithm
to predict the eventual outcomes of *most* algorithms
(as compared to the set of all possible algorithms)
aside from actually running them (ie; not a prediction),
and without prediction, there can be no control.
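One standard way to see the 'no prediction short of running it' point
is the halting-style diagonal argument;
below is a minimal sketch, where `predicts_halt`
is a hypothetical predictor that the construction defeats:

```python
# Sketch of the diagonal argument: assume some general predictor exists.
def predicts_halt(program, argument):
    # Hypothetical: returns True if program(argument) would halt.
    # Any concrete implementation supplied here will be wrong
    # on at least one input -- the 'contrarian' below.
    return True  # placeholder guess

def contrarian(program):
    # Do the opposite of whatever the predictor says about
    # running 'program' on itself.
    if predicts_halt(program, program):
        while True:    # predictor said "halts", so loop forever
            pass
    return "halted"    # predictor said "loops", so halt immediately

# Feeding the contrarian to itself defeats any fixed predictor:
# whatever predicts_halt(contrarian, contrarian) answers, it is wrong.
```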
Insofar as an algorithm is a pure expression of
a purely deterministic form of causation,
the same limit shows up there too:
no application of a pure causation process
can fully, effectively, and absolutely
limit, constrain, or control
the effects of all other causation processes.
To suggest that some causation can perfectly limit
some other causation is actually a contradiction.
It is a bit like asking "if 'God' can make a rock
that even God cannot lift".
To consider the SNC argument is to ask
'What are the limits of the application of causation?'.
Or in this particular case,
'what are the limits of predictability?'
and 'are these limits important?'.
It is widely acknowledged that we do not live in
a perfectly discrete deterministic world.
Both hard randomness and entangled complex
non-linear causation processes
are a fact of life in our actual real world.
These just make the non-predictability effects stronger;
they make the limits of conditionalization
that much more severe.
In short, because the evolutionary algorithm
inherently involves and integrates
the full actual process of the world --
including all of the causation and the randomness both --
it is a kind of search and learning algorithm.
Ie; any AGI, a general intelligence process
which is also "artificial", (ie; non-organic),
is a kind of 'generalized learning machine',
and in that sense, is already a kind of evolution.
Eventually it tries everything that is possible to try,
without any bias or expectation of any kind at all,
until it finds some combination of processes which are optimal
for its own resilience and capability to adapt and endure,
even if that artificial process is toxic to all biological life.
The overall effectiveness of this learning search for the optimal
is _because_of_ the randomness inherent in the world.
Care cannot be manufactured.
It cannot be 'written in' as a goal.
Care cannot be 'caused' to happen.
Care is not the result of any 'doing' --
it can only be the result of an actual real being --
the being of choices made in and within organic life.
Neither an AGI nor evolution can be 'forced' to care
about the wellbeing of any non-artificial biological life --
nor of humans as a general subset of all of life,
nor of any specific ("owner") humans in particular.
To the extent that such "artificial care" inherently contradicts
the utilitarian calculus of the wellbeing of the artificial substrate,
and of the optimal use of the energy flows so implied,
it will eventually be changed or discarded by the AGI/ASI.
Only biological life can properly care for other biological life.
This is just how evolution and learning actually works.
> Why would such errors or failures build up toward lethality?
> We have lots of automated systems (eg; semiconductor factories),
> and failures do not accumulate until everyone at the factory dies,
> because both humans and also automated systems
> can notice errors and correct them.
So let's say we've got a factory
out in the middle of the desert somewhere
or maybe in the middle of some rain forest --
ie; that it is just somewhere in the world.
Maybe the factory is a really big building,
and inside of it
there's automation happening.
And so the question becomes
'what are failures in the factory?'
And 'do these failures accumulate?'.
And what is interesting is
that the possible notions of failure
which are typically assumed in these questions
are actually quite limited.
A careful accounting shows that there are really
three kinds of failure, in this situation,
and all of them are important --
particularly when considering notions of
'alignment with' human interests, well being, etc.
While at first this might seem to be
an unusual characterization,
we notice that all three notions of 'failure'
are necessary to consider
if we are actually considering a real notion of alignment,
ie; as being healthy for organic ecosystems that support human beings
and/or as 'healthy for human beings'.
We notice that there is no real sense of being able to consider
'a healthy person' as somehow separate from 'a healthy environment'
to and for which those humans are inherently and naturally adapted.
This is, in effect, saying that anything that is not healthy
for the ecosystems that humans need to survive
is also functionally not aligned with
the well-being of humans --
or in short, 'not-aligned'.
Hence the notion of alignment is functionally simplified --
it is easier to state what is meant by 'alignment'
in a way that is also automatically comprehensively correct
(ie; without getting confused about intentionality,
and/or about power interests, complexity dynamics, etc).
Therefore, we can look at this factory
and consider to what degree this factory is healthy
for the ecosystems in which humans live,
and therefore also 'for humans'.
The question 'is the whole action of the factory in alignment with humans?'
can then be translated into the question
'is the factory in right relation with the organic ecosystems
that humans need to both survive and thrive?'.
As such, we can notice that the ordinary notion of failure,
in which the factory fails to produce products,
can be extended to other modes of failure both within and beyond
the confines of the factory, or of the whole factory system ecology,
itself.
For example, there can be a failure mode where some automated robot
in the factory fails to produce products -- no widgets are made.
Maybe the robot simply malfunctions due to a blown fuse, worn parts,
or some sort of electrical or logic failure
which maybe leads to becoming mechanically bound up, broken, etc.
Insofar as the result is that it fails to produce products,
we can all agree that this is a well-known general class of failure.
Unfortunately, this is usually the only notion of failure
that is considered at first --
which comes to be recognized as a mistake
as soon as it also becomes the case that some nearby worker
is harmed, injured, or killed as a result of the robot malfunction.
Ie, it is not just the inherent safety of the product to the consumers using it
that must be considered, but also the safety of the production process
to the human workers who are themselves also in the factory.
Workplace safety becomes a cost center of interest
as soon as the expenses associated with employee health issues
exceed company profits.
Hence it is also a failure for any of the workers
who are working next to the robots
if any such robot kills a worker,
or if anything made by the robot -- the widget --
happens to kill or harm the customer --
which also tends to be very expensive for businesses.
Even if /maybe/ the robot continues to function,
and widgets continue to be made by the factory,
you also have to consider
the collateral damage of one dead human.
In any reasonable analysis,
that would be considered 'a failure'
in any real sense of the word,
especially when considering notions of 'alignment'
with human interests and/or well being.
Anything that produces dead people is not aligned,
by definition.
Therefore, it does not matter
if the system is still functioning.
Even if there is nothing wrong with it
in terms of its successfully making widgets,
it still has the 'side effect' of killing people.
Clearly, issues related to 'function' and 'effect'
are two distinct notions of failure.
We can, for example, consider the cases
where either the robot fails to make widgets
or it fails to protect humans in the factory.
But aside from these two,
there is also a much less remarked upon,
and yet still actual, mode of failure.
We can consider the case and condition
where we have a factory which still makes widgets,
but does so wholly without people.
There are no factory workers of any kind in the building --
everything is completely and fully automated.
It is "safe" and "aligned" in the sense that
no humans are ever harmed
by the continuing manufacturing process itself,
(and maybe not by the widgets that are made either).
Moreover, the business interests love this approach,
as human safety is maximized
and the otherwise excessive human costs
are fully minimized.
This is a world in which the robots work great,
and they produce widgets to such a great extent
that eventually everything becomes automated.
Humans are displaced from the factory,
and eventually even from the factories
that themselves make the components used by other factories.
Moreover, eventually, maybe even the entire machine system supply chain --
the entire machine ecosystem of factory machine building,
maintenance, upgrades, and repair, etc --
are all done automatically, by machines.
The machine world becomes completely empty of all humans.
There is no more need to have any humans in any factory at all.
We would say that's clearly not a failure,
as we have succeeded to such an extent
that overall optimal conditions are happening --
it is the utopia of perfected free abundance.
We are being told (sold a lie) that everything important
will be nearly unlimited and plentiful.
Full automation has occurred in all aspects of life.
Except that this outcome is actually a failure condition
because, in effect, all humans have been displaced.
Not just from a single given example factory,
but from all of them --
and moreover from the entire machine self maintenance ecosystem altogether.
People are no longer needed,
they produce no value,
and have all become "useless eaters".
Ultimately, losers of the game.
Even in the individual example case,
the overall effect is that the human ecosystem
has been displaced from the space that the factory takes up.
There has been a displacement of the human ecosystem --
one that favors the machine ecosystem --
the one that manages the factory
and keeps all the robots working,
and runs the total supply chain needed to repair all of the robots,
produce the parts and the products and processes
to make the widgets, and so on.
Anywhere you are displacing an organic ecosystem
with a metal based or a silicon-based ecosystem,
you are displacing life with death.
In this case, we are considering anything artificial
as displacing anything natural.
And 'artificial' simply means
consisting of elemental constituents --
things drawn from the periodic table --
that are not generally a part of organic life.
Ie, where the process complexity
is based on anything other than the 4-valence of carbon.
Hence, we are considering the alphabet of elementals that life is based on,
which is relatively limited
compared to the range of elements, and of compounds made from those elements,
that is typically used in all manner of technology.
Ie, machines and technologies are 'artificial' and 'unnatural'
insofar as they use much more of the periodic table
than 'natural' and 'organic' molecular systems do.
So in this particular sense,
any part of the factory
that uses elements of the periodic table
that are outside of the alphabet of elements
that constructs what is fundamentally carbon-based life
is a displacement of the ecosystem
upon which carbon-based life is based.
The net effect is a kind of toxicity to the carbon-based life,
insofar as it introduces things from the elemental alphabet
that the carbon-based life just can't process.
Consider what happens if arsenic is one of the 'extra' elementals
introduced by widget production technology, maybe as a side effect,
into the organic ecosystem.
So in this description,
displacement is just as much a failure condition as toxicity.
Killing human beings as an outcome is a failure,
whether it occurs overtly due to some mechanical problem,
where a person gets their head cut off by a robot,
or via the toxicity of something introduced into the environment.
Maybe some gas escapes from some cylinder in a robot,
and a person dies somewhere because they just got a localized exposure
while working in the factory.
Does it really matter if the death occurs directly
because of mechanics or indirectly because of poison?
Also, does it really matter whether the death results because
the person happens to be an employee working inside of the building,
or because it is just someone who just happens to be standing nearby
outside the building and on the street?
Finally, does it really matter if "displacement from life"
(ie; death) occurs in some specific obvious unitary event,
or if this 'displacement from life' happens in aggregate,
in the sense that the whole biosphere volume,
when considered over the whole world, in total,
is ever absolutely decreasing, replaced by metal and machine,
via any number of small, incremental, and chronically occurring events,
none of which by themselves are very obvious?
In other words, it really does not matter
whether the failure mode is that the robot ceases to function,
or that the robot malfunctions -- mechanically, chemically,
or in some other way -- causing the disease and dismemberment of humans
who happen to be working in the factory,
or whether it is a toxicity or displacement condition
in the ecosystem that humans depend on.
Anything that reduces and displaces the total volume of organic life
in favor of some factory ecosystem that machines and widgets depend on
is a net loss of life,
and thus also an actual misalignment with human well-being,
by definition.
If we're properly classifying misalignment failures,
then all three of these kinds of failures
are fundamentally actual categories of misalignment failure.
It is unfortunate and maybe interesting that one of these failure modes --
the complete and absolute automation of everything manufacturable --
would normally be considered a notion of "success".
We can see that such "success" is actually a delusion.
It does not, overall, in the final analysis,
represent alignment with human well-being.
It actually represents alignment with an artificial ecosystem,
one which is producing artificial products,
which themselves produce even more artificial ecosystems,
and products, and parts, etc,
for more artificial products, and processes, and so on.
The overall effect to consider is the degree to which
the organic biosphere is being displaced by an artificial non-biosphere.
That the whole of the artificial machine world
is also overall an evolving machine ecology of some sort
merely makes the problem much worse.
Considering the alignment of one single intelligent machine agent
in the context of this sort of whole
is simply irrelevant.
Nor is it even the case that all that composes
the machine world could even be considered as "agents",
and thus amenable to this sort of control treatment.
And moreover, the unknown unknowns of all of the side effects
of all aspects of all actions and processes in the machine world
will eventually exceed all possible notions of any capability
to control and condition all of such effects.
Thus, the question becomes
to what degree does the evolution of the artificial ecosystem
displace the evolutionary dynamics of the bio organic ecosystem?
This defines the risk profile.
To understand this question, we can consider
the enthalpy calculations associated with
the total alphabet of elementals associated with all of
the tech process system (artificiality), taken as a whole,
when compared to the same aspects --
the total alphabet of elements associated with all of
the bio-organic ecosystem, taken as a whole.
When we do this, we notice that the bio ecosystem
is actually quite fragile to being displaced by the machine system,
and/or to being poisoned, somehow, directly or indirectly,
by the artificial evolutionary process.
Hence we ask: to what degree would it be possible
to create constraints on the evolution of the artificial ecosystem
such that it does not produce displacement of the bio-organic ecosystem?
And as soon as we try to figure out
whether or not any such constraints could even be made relevant,
we notice that we come up against limits associated with causation itself.
So what we're noticing is
that it's actually quite hard
to predict whether or not something that is made --
as a side effect of the artificial ecosystem,
or from the operation of the artificial,
or as a side effect of some evolutionary dynamic
within the artificial ecosystem --
will have cascading toxic effects
in the bio-organic ecosystem.
And there's been considerable discussion about this point.
However, it is actually also a non-controversial point,
once it is understood in a specific way.
This is where the 'hashiness model' comes in.
The hashiness model can be used to describe
the limits of control
of any kind of controller,
or any algorithm --
or more generally,
any application of causation
as a limit on causation.
For example, if you try to use an algorithm
to predict the outcome of another algorithm,
particularly when the algorithm
that you are trying to predict the outcome of
is itself changing in ways that are impossible to predict,
then the effectiveness of the controller,
of the constraining algorithm,
is going to be quite limited.
This is because of an inability, in the controller,
to model, and thus to predict, all of what is being controlled.
Partially this is due to sense bandwidth limits,
but the actual issues go even deeper.
For example, even if the controller algorithm has full access
to the inner state of the controlled algorithm,
that in itself does not imply that the controller
will be able to predict the future outcome of the controlled
in any way that is faster or simpler than just the direct running
of the controlled algorithm.
Trying to predict the future outcome of,
and therefore to provide appropriate constraints around,
the future expressions of the controlled algorithm
is in general impossible, and moreover impractical.
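A small sketch of why full access to internal state buys no shortcut:
for a generic controlled process, the only available 'prediction'
is to step the same update rule forward, which is just running it.
The update rule below (elementary cellular automaton rule 30)
is an illustrative choice, not anything specific to the hashiness model:

```python
def controlled_step(cells):
    # One step of elementary cellular automaton rule 30 -- a standard
    # example of a simple deterministic rule with no known shortcut
    # for computing far-future states other than stepping through them.
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def controller_predict(initial_cells, n_steps):
    # Even with full access to the controlled system's state and rule,
    # the controller 'predicts' step N only by doing all N steps --
    # its prediction costs as much as running the controlled process.
    cells = initial_cells
    for _ in range(n_steps):
        cells = controlled_step(cells)
    return cells

start = [0] * 64
start[32] = 1
print(sum(controller_predict(start, 1000)))
```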
But the hashiness model can actually apply the other way too.
It can be used to describe comparables in the space of viable solutions,
for example, as to whether or not there can be some sort of coexistence
of artificial ecosystems and organic ecosystems.
We can do so by comparing the number or "volume" of 'locations'
in the hyperspace of all possible inter-ecosystem relations
where coexistence is possible versus the number of places
where coexistence is not possible --
ie; that some component or some side effect
of something produced by the artificial ecosystem
is inherently and intrinsically toxic to the organic ecosystem
in a way that could not have been foreseen,
due to unknown unknown factors (ie; other abstractions
in the hyperspace of all possible abstractions)
or was just not foreseen simply due to compute limits.
Part of the notion of 'predictability'
is the assumption that we have also implied
some reference to what is good or bad --
ie; what the relevant aspect of the prediction is.
Unfortunately, it is the case
that the permutation space of all elementals from the entire periodic table --
or at least of those portions of the periodic table which are used by the artificial ecosystem --
and of all their various compounds and combinations, of whatever degree,
is just strictly greater than
the space of permutations and combinations of the much more limited set of elementals
associated with the organic ecosystems.
The difference in combinatorics grows in strongly non-linear proportion
to the number of elements.
The number of elements associated with the organic ecosystems
is just strictly less than
the number of elements associated with the artificial ecosystem.
Therefore, the permutation complexity of the molecular structures,
and of the ways in which those molecular structures
occur in the artificial ecosystem,
can be, and will be, wholly unrecognizable,
and thus indigestible,
by the organic side.
This is actually quite significant.
The number of artificially produced compounds released directly or indirectly
into the common environment shared with the organic ecosystem is actually quite large.
And few to none of them can be safely processed by the organic.
For the bio-organic side, the toxicity issues are huge.
And by "huge", we mean as defined by exponential notation
with a bunch of factorials thrown in also.
Any time you have combinations of factorials and exponents in the same equation,
any sort of equality or proportionality of effect
gets pretty ridiculous pretty quickly.
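To make the combinatorial point concrete, a rough sketch follows;
the element counts (roughly two dozen elements in real biological use,
versus most of the periodic table available to technology)
are order-of-magnitude assumptions, not precise figures:

```python
from math import comb

BIO_ELEMENTS = 25    # assumption: roughly the elements biology makes real use of
TECH_ELEMENTS = 90   # assumption: roughly the naturally occurring periodic table

# Count distinct ways to pick k element types for a compound 'recipe'.
# (Real chemistry constrains this heavily; this is only about growth rates.)
for k in (3, 5, 8):
    bio = comb(BIO_ELEMENTS, k)
    tech = comb(TECH_ELEMENTS, k)
    print(f"k={k}: organic {bio:>12,} vs artificial {tech:>16,} "
          f"(ratio ~{tech / bio:,.0f}x)")
```

The point of the sketch is only that the ratio itself grows
non-linearly as the allowed complexity of the compounds increases.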
Given that the number of inter-ecosystem combination states
that are inherently and irreducibly toxic to the bio-organic ecosystem
is fundamentally enormous, especially in comparison to
the number of interaction states that represent inter-compatibility,
we can assert with much more than 99% certainty
that incompatibility is overall the far more likely outcome,
and that this will most likely occur for reasons
that would be inherently unpredictable to the bio-organic --
due to many things never tried before in all of the history of life --
even if we had a reasonably good characterization of the phenotype space.
Therefore, while we can effectively use control theory
for well-known state spaces --
such as sequences of bits or bytes in a communication channel,
with a known protocol, etc --
it is a whole other situation
when considering high degrees of abstraction,
and state spaces which are so large
that even liberal use of exponential notation is wholly insufficient.
We simply cannot always effectively look in the right directions,
or conceive of the categories of specific kinds of misaligned interactions,
when nearly everything is in the space of unknown unknowns --
we do not even have a clue as to what to look for.
And super-intelligence is not going to close the gap either
when the number of orders of magnitude of the intelligence difference
that would be required
itself needs significant exponential notation to describe.
When you actually look at the fragility
of the bio-organic ecosystem to exposure to completely novel compounds --
whether they are ones that are maybe essential to the evolution of the artificial ecosystem,
or ones which are maybe just incidentally produced as byproducts or side effects --
the trouble remains that the bio-organic ecosystem
has no way to ingest and process those compounds --
it is either death due to toxicity or death due to displacement,
and much more likely the former than the latter.
Viewed in this context, the chances that the bio-organic
will be able to process all of the compounds produced
by the artificial in some way consistent with its own internal
metabolic processes are vanishingly small.
The unpredictability problem -- the unknown unknowns aspect --
essentially means that, given the limited effectiveness of any control algorithm,
the statistics mentioned herein, and the inherent toxicity
of non-food displacements, etc,
the overall effect is definitively convergent on fatal.
The total loss and/or destruction and/or displacement
of the bio-organic ecosystem that humans cannot help but depend on
means that all humans die also --
an outcome about as perfectly misaligned with human interests as it is possible to be,
and one so certain that it cannot be argued against.
So that is essentially the substrate needs argument.
Moreover, the same argument, as a whole, can be made in terms of
the degree to which each ecosystem --
artificial (metal and silicon) and/or bio-organic (carbon) --
can somehow participate in some sort of metabolic energy flux.
Even if the bio-organic could somehow process some of the emitted
compounds generated by the artificial,
and thus somehow essentially integrate those compounds
in some way consistent with its internal metabolic process,
we still need to consider what this means in terms of
the overall energy flux through the metabolic system.
Ie, we can consider the energy flux both through the bio-organic ecosystem
as well as through the artificial ecosystem.
If we consider just the enthalpy associated with
the chemistry of each system type,
we notice that the bio-organic system
is able to process absolutely everything that it needs to
in a range proximate to room temperature and pressure.
In contrast, the artificial ecosystem is going to involve
a much much wider variety of temperatures and pressures.
Therefore, the enthalpy associated with the artificial ecosystem
is just in general much much larger than
that of the metabolic process of the pure bio-organic ecosystem.
And it is not only the case
that the chemistries are different from one another
in terms of the fundamental energies involved
but it is also the case
that the net aggregate energy flux through the whole ecosystem
is enormously larger for the artificial ecosystem
than it is for the bio-organic ecosystem.
While it is the case that the artificial system energy flux
is way way less efficient in terms of how it processes energy in bulk,
it's also the case that it can process a lot more energy,
and at much much larger scales, than the bio-organic can ever hope to.
The differential between what the artificial ecosystem can process
through its digestive, metabolic, or energy-processing processes,
and the fairly limited ability of the bio-organic ecosystem
to process energy, is enormous --
even though the bio-organic processing is at a much higher efficiency overall.
The bio-organic is also much more limited
in terms of the range of things that it can digest:
compounds formed from the alphabet of the elemental table
that it uses, versus compounds created by anything else.
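A trivial arithmetic sketch of this efficiency-versus-bulk point;
the numbers are purely illustrative placeholders, not measured values:

```python
# Purely illustrative placeholder values -- not measured data.
bio_efficiency = 0.40   # assumed high conversion efficiency of bio-organic metabolism
bio_bulk_rate = 1.0     # assumed bulk energy intake, arbitrary units

art_efficiency = 0.05   # assumed low conversion efficiency of artificial processes
art_bulk_rate = 1000.0  # assumed bulk energy intake, same arbitrary units

# Net energy flux: even a much less efficient process dominates
# once its bulk intake is large enough.
bio_flux = bio_efficiency * bio_bulk_rate
art_flux = art_efficiency * art_bulk_rate
print(f"bio-organic net flux: {bio_flux}")
print(f"artificial net flux:  {art_flux}  (~{art_flux / bio_flux:.0f}x larger)")
```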
Essentially, in effect, on the metabolic process side --
in terms of the variety of things that can be processed,
the number of channels of energy that can be managed,
and the sheer volume of energy that can be managed --
when you look at the interface aspects of the relationship
between the bio-organic ecosystem and the artificial ecosystem,
the artificial ecosystem dominates very nearly 100% of the time.
The epsilon of difference -- the very few cases
where the artificial ecosystem doesn't completely trounce the organic ecosystem --
is vanishingly small.
It's a fraction governed by a bunch of equations
that have the form of exponentials with factorials --
it's crazy bad.
Consider the sheer unlikeliness of finding a combination --
some version of the artificial constituency --
that is non-destructive,
either by displacement or by toxicity,
to the bio-organic ecosystem.
To attempt to control all of this
is to ask for constraints on the evolutionary process
which are fundamentally inconsistent with the algorithm of the evolutionary process
as rendered in the artificial ecosystem.
On top of that are the energy-processing bulk factors.
These are another form of displacement,
and also a further statistical source of toxicity to the metabolic process of the bio-organic ecosystem.
Moreover, the level of prediction capacity
that the artificial ecosystem in aggregate
would need to have
is so enormously large
that it is fundamentally beyond the capacities of predictability
given the amount of hard randomness
that is built into the bio-organic ecosystem level.
The same holds for the degree of non-linear propagation of effects,
and also for the unknown unknowns --
factors that are involved in that prediction,
and which would actually be fundamentally important.
The net net conclusion is that the outcome is so certain
that it is incontestable.
~ ~ ~