Wednesday, March 15, 2023

The Consistency Mystery


Sea Org members don't have time to think about such things, but there is a strange bias that affects every human even if they believe the opposite:

For some reason, the reality we inhabit appears absolutely consistent and perfectly integrated. Everything happens for a precise reason that affects everything else through an unbroken chain of logical connections, without a single flaw or glitch. There has never been the tiniest violation of the laws that govern our minds' reality.

The odd thing is that there are VASTLY more possible ways for minds to have completely absurd experiences. A randomly selected mind from the set of all possible minds should be experiencing constantly changing, meaningless hallucinations. Even its memories should be unstable.

L. Ron Hubbard talked about shared postulates, but didn't say how they worked or why they should work. It was a purely religious declaration, disguised as science.
However, it does appear that we are not randomly selected minds but a special case. Specifically, our perceptions are entirely constrained by the universe we appear to inhabit, which implies this universe is not an illusion of our minds; instead, our minds are fully part of the universe.

This fact has been used to argue that we are not "Boltzmann Brains".
In the distant future of our universe, an infinite number of Boltzmann Brains will briefly "pop" into existence in empty space before immediately disintegrating again (this seems to be inevitable). Almost all of these Brains will have completely absurd experiences. But we are almost certainly not part of that infinite group.

Mathematically, this implies that an even larger number of fully realized physical universes must also come into existence in the distant future, in the same way Boltzmann Brains do. Several ways this could happen have been suggested ("A Big Bang In A Little Room", Zeeya Merali, 2017), but there could be easier ways. We are actually probably part of such a "Boltzmann Universe".
In fact it may be EASIER for nature to create an entire universe that contains minds, than to create just a mind by itself.

Another way to look at it is to consider an endless string of random digits. Eventually, you will come across a substring that describes a "Boltzmann Mind", which will (almost certainly) have a very brief and absurd existence.
However, LONG before then, you will come across many encodings of mathematical equations that, when solved, fully describe entire universes. An equation that can generate countless minds is much shorter than a direct description of one mind.
In fact, the type of equation that generated our universe may be among the most "productive" such equations in all of reality.
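
To put rough numbers on this (a back-of-the-envelope sketch using the standard algorithmic-probability argument; the bit counts below are invented purely for illustration): the chance that a random binary string begins with a particular description p of length \ell(p) bits is

\[
\Pr[\text{string begins with } p] \;=\; 2^{-\ell(p)}
\]

So if an equation that generates a whole universe fits in roughly $10^{3}$ bits, while a direct bit-by-bit description of a single mind needs something like $10^{15}$ bits, the universe-generating equation is favored by a factor of about

\[
\frac{2^{-10^{3}}}{2^{-10^{15}}} \;=\; 2^{\,10^{15}-10^{3}},
\]

which is why "one short equation containing countless minds" wins so overwhelmingly over "one mind, described directly".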

If such simple equations are constantly "generating" new universes from within themselves, most universes should be expected to be relatively young. In fact, in almost all such universes the Big Bang would still be happening (an anthropic explanation for cosmological inflation). In most universes that contain observers, those observers would find themselves near the start of their history.
Just like we are.
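
A toy calculation makes the point (assuming, purely for illustration, that the total number of universes grows exponentially at some rate r; this is my assumption, not an established result): if $N(t) = N_0 e^{rt}$ universes exist at time t, then the fraction of them older than age T is

\[
\frac{N(t-T)}{N(t)} \;=\; \frac{N_0 e^{r(t-T)}}{N_0 e^{rt}} \;=\; e^{-rT},
\]

which shrinks fast as T grows. Under that assumption, a randomly chosen universe, and a randomly chosen observer within one, is overwhelmingly likely to sit near the beginning of its history.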

Of course, that also means that Sea Org members would be far more likely to find themselves near the start of their billion-year contracts . . .

Monday, March 13, 2023

Doom Sooner

https://www.lesswrong.com/posts/nFdqzdKKy8EyYqJ5d/doom-sooner

The paradox about life in the Sea Org (an organization with a billion-year contract) is how much they care about the here and now. They focus on the shortest possible timelines, to keep their low-level staff working around the clock. The sci-fi stops as soon as the Sea Org oath has been taken.
In fact, they want to get rid of the very concept of evolution and instead enter a timeless state of production, where the money keeps flowing uplines. Every day will be the same forever, like in the Tom Cruise film "Edge of Tomorrow".
For Sea Org staff members, progress is an illusion, at least at a personal level.

My own life has also been rather static, but now it seems there is a small chance that the real world may start changing in ways that would even affect life in Big Blue and Saint Hill . . .

I have always thought that software is fantastically weak and infuriatingly defective, in fact criminally so. Software has been made as evil as its makers can get away with. The way that things don't work is just part of how the world is endlessly malevolent in all directions.
That has made me more interested in ongoing AI research trends. Perhaps smarter computers could design better interfaces than human programmers are willing to do?
Well, that depends on whether these computers will be less or more evil than the human programmers who are tormenting us today . . .

The basic truth about any complex system is that it's always more complex than it seems. All projects take longer than planned (and not just when David Miscavige is involved). However, if you repeat a process enough, it can eventually be done faster and more reliably.
Multicellular life took eons to evolve. Animals took millions of centuries to develop intelligence. Primitive humans were stuck in the stone age for thousands of centuries.

Right now, society is about as dumb and inefficient as it can get away with. The most powerful force in the world is the implacable consensus that exists everywhere. There are too few geniuses to overcome the sometimes monstrously deliberate inefficiencies of life.
For those reasons, it seems probable that developing something as complex as Artificial Superintelligence will take several decades at least, and can only be done with a great deal of effort.
By this I mean that completely unexpected delays will arise that will keep slowing things down. Yet it's the only thing that might possibly save us, the closest thing to a magic genie.

All the posts on the LessWrong.com website make a powerful case that when the first AI does develop superintelligence, it will likely not be "well rounded", but hyper-focused on some inadequately defined goal. Having less general intelligence will not make it less dangerous. The threat range may be "smaller" but no less deadly.

What is the simplest way a brute-force AI could run amok? All it would take is one super-clever idea, like the simplest possible self-replicating nanobot, DNA-rewriting meta-viruses, or even social memes that manipulate personalities. We vastly underestimate how badly things could go wrong. Just dropping a test tube with bat guano can crash the world economy for three years, and cause me personally to lose $100,000 in life savings.

Open-ended software entities running on sufficiently powerful hardware are likely to be controlled by nations or large corporations. Due to their extreme cost and thanks to popular fears, it may be possible to impose worldwide restrictions on such projects. For example, they could only be allowed to run on a shared global network, with many "kill switches".

The real danger comes from smaller AI projects using cobbled-together supercomputers or rented CPU farms. These will also arrive sooner. No one is monitoring the research in places like North Korea, or even the Flag Land Base. (My opinion is that the world is full of evil people, but in self-righteous ways. The world is evil in ways that most people refuse to talk or think about.)
Any effort to anticipate how these projects might go wrong would generate new dangerous ideas of its own. There are a million ways the biosphere could be poisoned or society disrupted (even THIS extremely obscure blog post could be dangerous, though the expected cost of the extra extinction risk it adds could hardly be more than a few cents).

For that reason, smaller AI projects should also have mandatory oversight (without excessive costs being imposed), or else they shouldn't be allowed to benefit from any discoveries they make. Copyright and patents only work if most countries enforce them, so only a few countries would need to pass pro-alignment legislation to reduce the profit motive behind unmonitored research.
For AI to be controlled, the whole world would have to be open to full inspection for global safety risks, including areas that seemingly have nothing to do with AI. (I wrote an incredibly obscure novel about such inspectors. I've also been told that the female characters especially are written in a very unrealistic way, so it may not be too readable.) Global inspection would only be practical if violations of needlessly intrusive laws (like those against non-violent crimes) were not prosecuted as a result of the inspection process.

Again, the principle of mediocrity applies. There is a likely limit to how much damage early AI projects can do, unless we get very unlucky.
Perhaps we will be protected from an all-encompassing Singularity takeover by several pre-Singularity crises that help us prepare better. Of course millions of people would have to die first. I tend to think that is how it will go.

I also want to repeat my unpopular proposal not to rely on developing super-AI tools to solve the problem of human mortality for us, but to focus on that problem directly. (That includes minor philosophical questions like what should be the highest ethical principles across all reality.)

Anyway, the point of this post is very simple: We don't have to worry about the threat of artificially intelligent entities destroying humanity, not the least bit. Long before then, a vast array of semi-intelligent software will be able to obliterate the world just as thoroughly.
Should we manage to overcome that threat, the things we learn then will prepare us for a full AI supermind far better than anything we can imagine now.

IF this all goes right, then something like the Singularity might actually happen. It could lead to as many different outcomes as there are individuals.
Sea Org members might then find themselves locked in their current position, doing the same job twelve or fourteen hours a day, six days a week. Not just for a billion years, but effectively forever.

Does anyone have an online copy of the most recent "High Winds"?

High Winds is the official magazine of the Sea Org. No one outside the cult knows if it's still being published. The publication date of...