

[Epistemic status: a little out of my depth. There might be subtleties I'm missing.]

An oracle machine with a halting oracle is a type of hypercomputer that can "solve" the halting problem of any conventional Turing machine (by fiat; the known laws of physics do not permit such a thing). But an analogous oracle-machine halting problem then appears, undecidable by those same halting-oracle machines, so this doesn't get rid of undecidable problems.
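The reason the problem reappears at every level is that the diagonalization argument relativizes: it never uses any property of the machine beyond its ability to run the purported decider on itself. A toy Python sketch of the core move (the `diagonal_behavior` function and its string labels are illustrative, not from the original):

```python
def diagonal_behavior(claimed_answer: bool) -> str:
    """Behavior of the diagonal program, given a purported halting
    decider's verdict on that very program: do the opposite of
    whatever the decider predicts."""
    return "loops forever" if claimed_answer else "halts"

# Whatever the decider claims about the diagonal program, it is wrong.
for claimed in (True, False):
    actual = diagonal_behavior(claimed)
    decider_was_right = claimed == (actual == "halts")
    assert not decider_was_right
```

Because nothing here depends on whether the machines involved carry an oracle, the same contradiction rules out an nth-order oracle machine deciding nth-order halting.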

If we then suppose a second-order halting oracle, we can "solve" the oracle-machine halting problem, but a new, harder, second-order oracle-machine halting problem appears, and so on, up through the arithmetical hierarchy, to any order.
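In standard recursion-theoretic notation this tower is the iterated Turing jump: writing $\emptyset'$ for the ordinary halting problem,

```latex
\emptyset^{(0)} = \emptyset, \qquad \emptyset^{(n+1)} = \left(\emptyset^{(n)}\right)'
```

and by Post's theorem $\emptyset^{(n)}$ is $\Sigma^0_n$-complete for $n \geq 1$, so each new oracle decides the previous level's halting problem but not its own.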

Thus, we can never solve all undecidable problems, even with hypercomputers. Now, is your proposed hypercomputer model of equivalent power to a halting oracle machine? To the extent it is a well-defined model, I think so.

Lasers can be widened with optics, like curved reflectors. Fiber could potentially distribute an intense source to multiple endpoints, although UVC would require the use of special materials. I’m not an optics specialist either. I don’t know of quantum dots in the UVC range, maybe it hasn’t been done yet. For visible wavelengths they can be pretty bright, so maybe? I don’t think these alternatives exist yet, but so many approaches seem potentially viable that I’m not sure it will take ten years.

Forecasting is hard. Maybe conventional LEDs wouldn’t be that easy, but there may be other approaches superior to excimer lamps we could use for pathogen control. Only one of them has to work, making this a disjunctive claim. For example, frequency-doubling solid-state lasers can kick blue light up to the UVC range. Also, quantum dots can be tuned very precisely, even without changing the component materials.

As always, not investment advice. There are signs that a volatility spike is imminent, which often coincides with a market drop. I have reversed my usual short vol position and bought tail insurance (e.g. OTM puts). Remember vol can fall just as quickly. How long a spike lasts depends on how high it goes.

Can AI destroy modern civilization in the next 30 minutes?

Doubt it, but it might depend on how much of an overhang we have. My timelines aren't that short, but if there were an overhang and we were just a few breakthroughs away from recursive self-improvement, would the world look any different than it does now?

Can a single human being unilaterally decide to make that happen, right now, today?

Oh, good point. Pilots have intentionally crashed planes full of passengers. Kids have shot up schools, not expecting to come out alive. Murder-suicide is a thing humans have been known to do. There have been a number of well-documented close calls in the Cold War. As nuclear powers proliferate, MAD becomes more complicated.

It's still about #3 on my catastrophic risk list depending on how you count things. But the number of humans who could plausibly do this remains relatively small. How many human beings could plausibly bioengineer a pandemic? I think the number is greater, and increasing as biotech advances. Time is not the only factor in risk calculations.

And likely neither of these results in human extinction, but the pandemic scares me more. No, nuclear war wouldn't do it. That would require salted bombs, which have been theorized, but never deployed. Can't happen in the next 30 minutes. Fallout becomes survivable (if unhealthy) in a few days. Nobody is really interested in bombing New Zealand. They're too far away from everybody else to matter. Nuclear winter risk has been greatly exaggerated, and humans are more omnivorous than you'd think, especially with even simple technology helping to process food sources. Not to say that a nuclear war wouldn't be catastrophic, but there would be survivors. A lot of them.

A communicable disease that's too deadly (like SARS-1) tends to burn itself out before spreading much, but an engineered (or natural!) pandemic could plausibly thread the needle and become something at least as bad as smallpox. A highly contagious disease that doesn't kill outright but causes brain damage or sterility might be similarly devastating to civilization, without being so self-limiting. Even New Zealand might not be safe. A nuclear war ends. A pandemic festers. Outcomes could be worse, and it's more likely to happen, and becoming more likely to happen. It's #2 for me.

And #1 is an intelligence explosion. This is not just a catastrophic risk, but an existential one. An unaligned AI destroys all value, by default. It's not going to have a conscience unless we put one in. Nobody knows how to do that. And short of a collapse of civilization, an AI takeover seems inevitable in short order. We either figure out how to build one that's aligned before that happens, and it solves all the other solvable risks, or everybody dies.

Reminder that "The Merge" for Ethereum is coming up soon. There are bullish signs, like call-to-put ratios. Totally not advice, and please don't bet the farm; crypto is highly volatile.

Interest on these is now between nine and ten percent. If it was a good deal then, maybe it's an even better deal now. I bought another one (not investment advice). Also, someone pointed out that if you buy multiple smaller ones, you don't have to cash them all in at once.

How did you decide which posts to include?

Normal browser bookmarks do work. Use the link icon between the date and karma to get the URL for one.

I think I'm lacking some jargon here. What's a latent/patent in the context of a large language model? "patent" is ungoogleable if you're not talking about intellectual property law.

The Eyeronman link didn't seem very informative. No explanation of how it works. I already knew sensory substitution was a thing, but is this different somehow? Is there some neural net pre-digesting its outputs? Is it similarly a random-seeming mishmash? Are there any other examples of this kind of thing working for humans? Visually?

Would the mishmash from a smaller text model be any easier/faster for the human to learn?
