gilch

As of June 2024, I have signed no contracts or agreements whose existence I cannot mention.

Sequences

An Apprentice Experiment in Python Programming
Inefficient Markets


Comments

gilch177

I feel like this has come up before, but I'm not finding the post. You don't need the stick-on mirrors to eliminate the blind spot. I don't know why pointing side mirrors straight back is still so popular, but that's not the only way it's taught. I have since learned to set mine much wider.

This article explains the technique. (See the video.)

In a nutshell, while in the driver's seat, tilt your head to the left until it's almost touching your window, then from that perspective point the mirror straight back so you can just see the side of your car. (You might need a similar adjustment for the passenger's side, but those are often already wide-angle.) Now from your normal position, you can see your former "blind spot". When you need to see straight back in your side mirror (like when backing out), just tilt your head again. Remember that you also have a center mirror. You should be able to see passing cars in your center mirror, then in your side mirror, then in your peripheral vision, without ever turning your head or completely losing sight of them.

gilch227
  • It's not enough for a hypothesis to be consistent with the evidence; to count in favor, the evidence must be more consistent with the hypothesis than with its negation. How much more is how strong. (Likelihood ratios.)
  • Knowledge is probabilistic/uncertain (priors) and is updated based on the strength of the evidence. A lot of weak evidence can add up (or multiply, actually, unless you're using logarithms; see the sketch just after this list).
  • Your level of knowledge is usually not literally zero, even when uncertainty is very high, and you can start from there. (Upper/Lower bounds, Fermi estimates.) Don't say, "I don't know." You know a little.
  • A hypothesis can be made more ad-hoc to fit the evidence better, but this must lower its prior. (Occam's razor.)
    • The reverse of this also holds. Cutting out burdensome details makes the prior higher. Disjunctive claims get a higher prior, conjunctive claims lower.
    • Solomonoff's Lightsaber is the right way to think about this.
  • More direct evidence can "screen off" indirect evidence. If it's along the same causal chain, you're not allowed to count it twice.
  • Many so-called "logical fallacies" are correct Bayesian inferences.
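
A minimal sketch of that update rule, the odds form of Bayes' theorem, with made-up numbers purely for illustration: each piece of evidence multiplies the odds by its likelihood ratio, which becomes addition if you work in log odds.

```python
import math

def posterior_odds(prior_odds, likelihood_ratios):
    """Odds form of Bayes' theorem: multiply the prior odds by each likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def probability(odds):
    return odds / (1 + odds)

# Hypothetical example: prior odds of 1:9 against, then five independent pieces
# of weak evidence, each only twice as likely if the hypothesis is true.
prior = 1 / 9
weak_evidence = [2, 2, 2, 2, 2]
print(probability(posterior_odds(prior, weak_evidence)))  # ~0.78

# In log odds (decibels), the same update is a sum instead of a product.
decibels = 10 * math.log10(prior) + sum(10 * math.log10(lr) for lr in weak_evidence)
print(decibels)  # ~+5.5 dB in favor of the hypothesis
```
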
gilch270

French, but because my teacher tried to teach all of the days of the week at the same time, they still give me trouble.

They're named after the planets: Sun-day, Moon-day, Mars-day, Mercury-day, Jupiter-day, Venus-day, and Saturn-day.

It's easy to remember when you realize that the English names are just the equivalent Norse gods: Saturday, Sunday and Monday are obvious. Tyr's-day (god of combat, like Mars), Odin's-day (eloquent traveler god, like Mercury), Thor's-day (god of thunder and lightning, like Jupiter), and Freyja's-day (goddess of love, like Venus) are how we get the names Tuesday, Wednesday, Thursday, and Friday.

gilch210

Why is Google the biggest search engine even though it wasn't the first? It's because Google has a better signal-to-noise ratio than most search engines. PageRank cut through all the affiliate cruft when other search engines couldn't, and they've only continued to refine their algorithms.

But still, haven't you noticed that when Wikipedia comes up in a Google search, you click that first? Even when it's not the top result? I do. Sometimes it's not even the article I'm after, but its external links. And then I think to myself, "Why didn't I just search Wikipedia in the first place?". Why do we do that? Because we expect to find what we're looking for there. We've learned from experience that Wikipedia has a better signal-to-noise ratio than a Google search.

If LessWrong and Wikipedia came up in the first page of a Google search, I'd click LessWrong first. Wouldn't you? Not from any sense of community obligation (I'm a lurker), but because I expect a higher probability of good information here. LessWrong has a better signal-to-noise ratio than Wikipedia.

LessWrong doesn't specialize in recipes or maps. Likewise, there's a lot you can find through Google that's not on Wikipedia (and good luck finding it if Google can't!), but we still choose Wikipedia over Google's top hit when available. What is on LessWrong is insightful, especially in normally noisy areas of inquiry.

gilch20

I think #1 implies #2 pretty strongly, but OK, I was mostly with you until #4. Why is it that low? I think #3 implies #4, with high probability. Why don't you?

#5 and #6 don't seem like strong objections. Multiple scenarios could happen multiple times in the interval we are talking about. Only one of them has to deal the final blow, and even the blows we survive, we can't necessarily recover from, or recover from quickly. The weaker civilization gets, the less likely it is to survive the next blow.

We can hope that warning shots wake up the world enough to make further blows less likely, but consider that the opposite may be true. Damage leads to desperation, which leads to war, which leads to arms races, which lead to cutting corners on safety, which leads to the next blow. Or human manipulation/deception through AI leads to widespread mistrust, which prevents us from coordinating on our collective problems in time. Or AI success leads to dependence, which leads to reluctance to change course, which makes recovery harder. Or repeated survival leads to complacency until we boil the frog to death. Or some combination of these, or similar cascading failures. It depends on the nature of the scenario. There are lots of ways things could go wrong, many roads to ruin; disaster is disjunctive.
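
To make "disaster is disjunctive" concrete, here's a toy calculation; the per-scenario numbers are made up and only meant to show the shape of the arithmetic:

```python
# Hypothetical probabilities for several independent roads to ruin.
routes_to_ruin = [0.05, 0.05, 0.05, 0.05, 0.05]

p_all_avoided = 1.0
for p in routes_to_ruin:
    p_all_avoided *= (1 - p)

print(1 - p_all_avoided)  # ~0.23: five "unlikely" 5% risks combine into ~23%
```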

Would warnings even work? Those in the know are sounding the alarm already. Are we taking them seriously enough? If not, why do you expect this to change?

gilch20

I don't really have a problem with the term "intelligence" myself, but I see how it could carry anthropomorphic baggage for some people. However, I think the important parts are, in fact, analogous between AGI and humans. But I'm not attached to that particular word. One may as well say "competence" or "optimization power" without losing hold of the sense of "intelligence" we mean when we talk about AI.

In the study of human intelligence, it's useful to break down the g factor (what IQ tests purport to measure) into fluid and crystallized intelligence. The former is the processing power required to learn and act in novel situations; the latter is what has been learned, along with the ability to call upon and apply that knowledge.

"Cognitive skills" seems like a reasonably good framing for further discussion, but I think recent experience in the field contradicts your second problem, even given this framing. The Bitter Lesson says it well. Here are some relevant excerpts (it's worth a read and not that long).

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. [...] Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation.

[...] researchers always tried to make systems that worked the way the researchers thought their own minds worked---they tried to put that knowledge in their systems---but it proved ultimately counterproductive, and a colossal waste of researcher's time, when, through Moore's law, massive computation became available and a means was found to put it to good use.

[...] We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.

One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.

[...] the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds [...] these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. [...] We want AI agents that can discover like we can, not which contain what we have discovered.

Your conception of intelligence in the "cognitive skills" framing seems to be mainly about the crystallized sort: the knowledge and skills and the application thereof. You see how complex and multidimensional that is and object to the idea that collections of such should be well-ordered, making concepts like "smarter-than-human" if not wholly devoid of meaning, at least wrongheaded.

I agree that "competence" is ultimately a synonym for "skill", but you're neglecting the fluid intelligence. We already know how to give computers the only "cognitive skills" that matter: the ones that let you acquire all the others. The ability to learn, mainly. And that one can be brute-forced with more compute. All the complexity and multidimensionality you see arise when something profoundly simple, algorithms measured in mere kilobytes of source code, interacts with data from the complex and multidimensional real world.

In the idealized limit, what I call "intelligence" is AIXI. Though the explanation is long, the definition is not. It really is that simple. All else we call "intelligence" is mere approximation and optimization of that.
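
For reference, here is the AIXI equation in one common formulation (notation varies between sources; $m$ is the horizon, $U$ a universal Turing machine, and $\ell(q)$ the length of program $q$): the agent picks the action maximizing expected total future reward, summing over every program consistent with the interaction history so far, weighted by the simplicity prior $2^{-\ell(q)}$.

$$a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_t + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$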

gilch20

Strong ML engineering skills (you should have completed at least the equivalent of a course like ARENA).

What other courses would you consider equivalent?

gilch20

I don't know why we think we can colonize Mars when we can't even colonize Alaska. Alaska at least has oxygen. Where are the domed cities with climate control?

gilch50

Specifically, while the kugelblitz is a prediction of general relativity, quantum pair production from strong electric fields makes it infeasible in practice. Even quasars wouldn't be bright enough, and those are far beyond the energy level of a single Dyson sphere. This doesn't rule out primordial black holes forming at the time of the Big Bang, however.

It might still be possible to create micro black holes with particle accelerators, but how easy this is depends on some unanswered questions in physics. In theory, such an accelerator might need to be as much as a thousand light-years across, depending on achievable magnetic field strength. (Magnetars?) On the other hand, if compactified extra dimensions exist (as in string theory), the minimum required energy would be lower. A black hole that small would evaporate almost instantly, though. It's not clear whether it could be kept alive long enough to grow any bigger.
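
To put a number on "evaporate almost instantly": the standard Hawking evaporation estimate scales with the cube of the mass. A rough sketch (approximate constants; it ignores accretion and any corrections near the end of evaporation):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J s
c = 2.998e8       # speed of light, m/s

def hawking_evaporation_time(mass_kg):
    """Standard estimate: t = 5120 * pi * G**2 * M**3 / (hbar * c**4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(hawking_evaporation_time(1.0))  # ~8e-17 seconds for a 1 kg black hole
print(hawking_evaporation_time(1e6))  # ~84 seconds even for a thousand tonnes
```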

Answer by gilch231

What are efficient Dyson spheres probably made of?

There are many possible Dyson sphere designs, but they seem to fall into three broad categories: shells, orbital swarms, and bubbles. Solid shells are probably unrealistic. Known materials aren't strong enough. Orbital swarms are more realistic but suffer from some problems with self-occlusion and possibly collisions between modules. Limitations on available materials might still make this the best option, at least at first.

But efficient Dyson spheres are probably bubbles. Rather than being made of satellites, they're made of statites, that is, solar sails that don't orbit, but hover. Since both gravitational acceleration and radiant intensity follow the inverse square law, the same design would function at almost any altitude above the Sun, with some caveats. These could be packed much more closely together than the satellites of orbital swarms while maybe using less material. Eric Drexler proposed 100 nm thick aluminum films with some amount of supporting tensile structure. Something like that could be held open by spinning, even with no material compressive structure. Think about a dancer's dress being held open while pirouetting and you get the idea.
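
A quick back-of-the-envelope check on why statites work at any altitude, and why a ~100 nm film is in the right ballpark. This assumes a flat sail facing the Sun and approximate constants:

```python
import math

L_sun = 3.828e26  # solar luminosity, W
M_sun = 1.989e30  # solar mass, kg
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s

# Radiation pressure and gravity both fall off as 1/r^2, so the maximum areal
# density a statite can support is independent of its distance from the Sun.
sigma_absorbing = L_sun / (4 * math.pi * G * M_sun * c)  # kg/m^2, perfect absorber
print(sigma_absorbing * 1000)  # ~0.77 g/m^2; a perfect reflector can carry twice that

# A 100 nm aluminum film (density ~2700 kg/m^3) comes in under the limit:
print(2700 * 100e-9 * 1000)    # ~0.27 g/m^2, leaving margin for tensile structure
```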

The radiation needs to be mostly reflected downwards for the sails to hover, but it could still be focused on targets as long as the net forces keep the statites in place. Clever designs could probably approach 100% coverage.

What percent of the solar system can be converted into Dyson-sphere material? Are gas giants harvestable?

Eventually, almost all of it, but you don't need that much to get full coverage. Yes, they're harvestable; at the energy scales we're talking about, even stellar material is harvestable via star lifting. The Sun contains over 99% of the mass of the Solar System.

How long would it take to harvest that material?

I don't know, but I'll go with the 31 years and 85 days for an orbital swarm as a reasonable ballpark. Bubbles are a different design and may take even less material, but either way, we're talking about exponential growth in energy output that can be applied to the construction. At some point the energy matters more than the matter.
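
A toy sense of scale for that exponential growth, with purely hypothetical numbers: if captured energy is reinvested so collector area doubles on some fixed timescale, even a tiny seed swarm reaches full coverage in a few dozen doublings.

```python
import math

initial_coverage = 1e-9    # assumed fraction of solar output captured by the seed swarm
doubling_time_years = 1.0  # assumed doubling time for collector area

doublings_needed = math.log2(1 / initial_coverage)
print(doublings_needed * doubling_time_years)  # ~30 years under these assumptions
```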

What would the radius of a Dyson sphere be? (i.e. how far away from the sun is optimal). How thick?

I'd say as close to the Sun as the materials can withstand (because this takes less material), so probably well within the orbit of Mercury. Too much radiation and the modules would burn up. Station keeping becomes more difficult when you have to deal with variable Solar wind and coronal mass ejections, and these problems are more severe closer in.
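
For a rough sense of "as close as the materials can withstand," here's a crude blackbody estimate of equilibrium temperature versus distance. It ignores reflectivity and sail geometry, so treat the numbers as ballpark only:

```python
import math

L_sun = 3.828e26     # solar luminosity, W
sigma_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11        # astronomical unit, m

def equilibrium_temp(r_m):
    """Equilibrium temperature of a fully absorbing body radiating from its whole surface."""
    return (L_sun / (16 * math.pi * sigma_SB * r_m**2)) ** 0.25

print(equilibrium_temp(0.39 * AU))  # ~445 K near Mercury's orbit
print(equilibrium_temp(0.15 * AU))  # ~720 K; aluminum melts around 930 K
```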

The individual statite sails would be very thin. Maybe on the order of 100 nm for the material, although the tensile supports could be much thicker. I don't know how many sails an optimal statite module would use (maybe just 1). But the configuration required for focus and station keeping probably isn't perfectly flat, so a minimal bounding box around a module could be much thicker still.

An energy-efficient Dyson sphere probably looks like a Matrioshka brain, with outer layers collecting waste heat from the inner layers. Layers could be much farther apart than the size of individual modules.

If the sphere is (presumably) lots of small modules, how far apart are they?

Statites could theoretically be almost touching, especially with active station keeping, which is probably necessary anyway. What's going to move them? Solar wind variation? Micrometeoroid collisions? Gravitational interactions with other celestial bodies? Remember, statites work about the same regardless of altitude, so there can be layers with some amount of overlap.

"if an AI is moderately 'nice', leaves Earth alone but does end up converting the rest of the solar system into a Dyson sphere, how fucked is Earth?

Very, probably. And we wouldn't have to wait for the whole (non-Sun) Solar System to be converted before we're in serious trouble.
