wearsshoes

My current thoughts on the risks from SETI

Even granting that there are grabby aliens in your cosmic neighborhood (click here to chat with them*), I find the case for SETI-risk entirely unpersuasive (as in, trillionths of a percent plausible, or indistinguishable from cosmic background uncertainty). I'll summarize some of the arguments others have already made against it, along with some of my own. I think it is so implausible that I see no need to urge SETI to change their policy. [I'm throwing in a bunch of completely spitballed, mostly-meaningless felt-sense order-of-magnitude probability estimates.]

  • Parsability. As Ben points out, conveying meaning is hard. Language is highly arbitrary; are the aliens going to know enough about human languages to have a crack at composing bytestrings that compile into executable code? No chance if undirected transmission, 1% if a directed transmission intended to exploit my civilization in particular.

  • System complexity. Dweomite is correct that conveying meaning to computers is even harder. There is far too much flexibility, and far too many arbitrary and idiosyncratic choices made in computer architectures and programming languages. No chance if undirected, 10% if directed, conditioning on all above conditions being fulfilled.

  • Transmission fidelity. If you want to transmit encrypted messages or program code, you can't be dropping bits. Do you know what frequency I'm listening on, and what my sample depth is? The orbital period of my planet and the location of my telescope? What the interplanetary and terrestrial weather conditions that day are going to be, you being presumably light-years away or you'd have chosen a different attack vector? You want to mail me a bomb, but you're shipping it in parts, expecting all the pieces to get there, and asking me to build it myself as well? 0.01% chance if undirected, 1% if directed, conditioning on all above conditions being fulfilled.

  • Compute. As MichaelStJules's comment suggests, if the compute needed to reproduce powerful AI is anything like Ajeya's estimates, who cares if some random asshole runs the thing on their PC? No chance if undirected, 1% if directed, conditioning on all above conditions being fulfilled.

  • Information density. Sorry, how much training is your AI going to have to do in order to be functional? Do you have a model that can bootstrap itself up from as much data as you can send in an unbroken transmission? Are you going to be able to access the hardware necessary to obtain more information? See above objections. There are terabytes of SETI recordings, but probably at most megabytes of meaningful data in there. 1% chance if undirected, 100% if directed, conditioning on all above conditions being fulfilled.

  • Inflexible policy in the case of observed risk. If the first three lines look like an exploit, I'm not posting it on the internet. Likewise, if an alien virus I accidentally posted somehow does manage to infect a whole bunch of people's computers, I'm shutting off the radio telescope before you can start beaming down an entire AI, etc, etc. (I don't think you'd manage to target all architectures with a single transmission without being detected; even if your entire program was encrypted to the point of indistinguishability from entropy, the escape code and decrypter are going to have to look like legible information to anyone doing any amount of analysis.) Good luck social engineering me out of pragmatism, even if I wasn't listening to x-risk concerns before now. 1% chance if undirected, 10% if directed, conditioning on all above conditions being fulfilled.
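For what it's worth, the directed-case numbers above chain together, since each bullet was conditioned on the ones before it. A minimal sketch of the arithmetic, using nothing but the spitballed estimates from the list; note that the product deliberately omits the prior that any nearby civilization attempts this at all:

```python
import math

# Spitballed directed-transmission estimates from the bullets above,
# each conditional on the previous conditions being fulfilled.
directed_estimates = {
    "parsability": 0.01,            # 1%
    "system complexity": 0.10,      # 10%
    "transmission fidelity": 0.01,  # 1%
    "compute": 0.01,                # 1%
    "information density": 1.00,    # 100%
    "inflexible policy": 0.10,      # 10%
}

joint = math.prod(directed_estimates.values())
print(f"{joint:.0e}")  # on the order of 1e-08
```

In the undirected case several bullets assign "no chance," so the product is zero outright.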

So if you were an extraterrestrial civilization trying this strategy, most of the time you'd just end up accomplishing nothing, and if you even got close to accomplishing something, you'd more often be alerting neighboring civilizations about your hostile intentions than succeeding. Maybe you'd have a couple lucky successes. I hope you are traveling at a reasonable fraction of c, because if not you've just given your targets a lot of advance warning about any planned invasion.

I just don't think this one is worth anyone's time, sorry. I'd expect any extraterrestrial communications we receive to be at least superficially friendly, and intended to be clearly understood rather than accidentally executed, and the first sign of hostility to be something like a lethal gamma-ray burst. In the case that I did observe an attempt to execute this strategy, I'd be highly inclined to believe that the aliens already had us completely owned and were trolling us for lolz.

*Why exactly did you click on a spammy-looking link in a comment on the topic of arbitrary code execution?

We Choose To Align AI

Composer Christopher Tin has set JFK's "We Choose to go to the Moon" speech to music: https://www.youtube.com/watch?v=HBITb9Zz0rY. Solsticegoers may recognize the opening leitmotif as shared with Sogno Di Volare, another movement from the same work, an oratorio on the theme of flight, To Shiver the Sky.

What are some beautiful, rationalist artworks?

I've recently gained a better appreciation for how astonishingly good this work is at linear perspective, which had only come about in European art in the prior century. Many things about this painting are good (and some bad to my eye, like the messy color scheme), but those hexagonal details on the curved arches, rendered in perspective, are 100% Raphael showing off.

An aside, but linear perspective is the most rational part of art, in the older philosophical sense of rational; it's pretty much the only major part of classical art which descends from first principles rather than having an empirical basis.

Free Educational and Research Resources

Okay, this guy sold me as soon as I saw he had an episode on Doc Ing Hay's general store in rural Oregon. I stumbled upon this place once just passing through, at a convenient time to get a guided tour of the little museum they'd made out of it. There's not even a Wikipedia article on it yet, which gives me the impression that this podcaster is committed to both a broad and deep history of the Chinese experience.

Free Educational and Research Resources

Ah, you've got my directionality confused: the bias preventing me from judging the History of China podcast dispassionately is his inability to pronounce Chinese fluently. I'm in the weird position of being fluent enough in Chinese to be a little intolerant of English speakers with bad Chinese pronunciation, but not fluent enough to understand Chinese-language content. I will say, though, that the China History Podcast seems a little better on this very particular axis, and I think it would be unreasonable to expect much better. They definitely seem to have a lot of content, much of it relevant to the modern era!

Are there documentaries on rationality?

Latest update: I did not complete the documentary and have no plans to continue working on it in the near future. The 90 hours of footage that I shot is all archived for possible later use, and is partially available to the community upon request.

What are you looking for in a Less Wrong post?

A bulleted list of answers others have written:

  • Generates a new insight (TurnTrout)
  • Is good for something (adamzerner)
  • Shows its work (bvxn)
  • Ties up its loose ends (curi)
  • Resolves a disagreement (curi)
  • Shows effort (Alexei)
  • Well-written (Alexei)
  • Surprising (Alexei)
  • Credible (Alexei)
  • Summarizes work (me)

And certain topical interests for which LW is a hub:

  • Cognition
  • AI
  • Self-improvement

I'm throwing in that I like posts and comments that compress knowledge (such as this).

My further two cents are that what people answer here will be somewhat unrepresentative. The answers will be a certain set of ideal practices which your answerers may not actually implement, and even if they did, they might not represent the community at large. The honest answer to your question is probably data-driven: by scraping the site, you could generate a better predictive model of what content actually gets upvotes than people will tell you here.
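As a hypothetical illustration of what I mean by data-driven: even a single scraped feature per post gives you something to regress karma against. The numbers below are invented placeholders, not real LW data, and `predict_karma` is a toy name; a real analysis would scrape actual posts and use many more features.

```python
# Hypothetical (word_count, karma) pairs -- invented for illustration only.
post_data = [(500, 12), (1200, 35), (800, 20), (3000, 90), (150, 5)]

n = len(post_data)
mean_x = sum(x for x, _ in post_data) / n
mean_y = sum(y for _, y in post_data) / n

# Ordinary least squares with a single feature: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in post_data) / \
        sum((x - mean_x) ** 2 for x, _ in post_data)
intercept = mean_y - slope * mean_x

def predict_karma(word_count):
    """Toy karma prediction from post length alone."""
    return intercept + slope * word_count
```

The point isn't that word count drives karma; it's that a fitted model of real scraped data answers "what gets upvoted" more honestly than self-reports do.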

But nevertheless there is value to your question. The idealized picture you'll get is in fact the picture of the ideal you want. If you take onboard people's best-case answers, you'll make stuff that the most engaged people want the most of, and that will contribute to making a better community overall.

Free Educational and Research Resources

Thanks! I won't add these to the top list, but I hope people will scroll down to see the comments. I should mention that there are a whole bunch of Mike Duncan-inspired "History of X" podcasts, which are of varying quality. I wanted to get into the History of China dude, but I couldn't give him more than a few episodes due to wincing at his accent, so I never even got to judge his content. Unfortunately my Chinese isn't actually good enough to listen to podcasts in Chinese about Chinese history. History of Byzantium is supposedly also good.

Unifying the Simulacra Definitions

Zvi, thank you for writing this. I've been working through Baudrillard too and coming to the same conclusion: he is far more insight porn than philosophy, compared to famous scholars with similar metaphysics such as Foucault and Zizek. I've got a long post in the pipeline on this as well.

It's really frustrating that this community has spun up an elaborate schema based on a misinterpretation of a sophist, when the original conversants both admitted they had at that point only read the Wikipedia summary of the book. This feels like the opposite of quality scholarship. Not that this is entirely Benquo and jessicataylor's fault; it's more how the discussion ended up picking this up and running with it.

The rationalist community's reading of Baudrillard tries to put some sense back into what is fairly sophisticated. But the main mistake both groups make is assuming that Level 1 is some fallen ideal, rather than something progressively achieved. Baudrillard is making a hotter take, one which most rationalist discussion completely misses: that Level 1 has completely vanished and Level 2 is on its way out too. He thinks we live in a postmodern world (surprising to rationalists who haven't read the postmodernists: like most postmodernist scholars, he does not actually think this is a good thing) where meaning is composed wholly of simulacra that do not actually reference the real world our bodies live in, although he says the real world sure references them.

This misinterpretation of him is easy to make, partly because it sounds like he developed a philosophy out of being totally dissociated. He hated The Matrix, in which the Wachowskis referenced him, for this reason: in the Matrix, the virtual reality can be escaped.

My alternative proposal is to re-ground the discussion in a better take about power relations and social games, noticing which groups throughout history play these games and which don't. The basic conclusion is that people generally converse as if they were in the 2nd order (level), jump up in simulation order whenever their access to resources they don't produce is at stake, and jump down when they have a hand in producing resources. Global meaning has no particular order, contra Baudrillard's claim that it is of the 4th order.

More to come.

Free Educational and Research Resources

Added AskHistorians podcast! Mentioning Coursera inline.
