
At the time I took AlphaGo as a sign that Eliezer was more correct than Hanson w/r/t the whole AI-go-FOOM debate. I realize that's an old example which predates the last-four-years AI successes, but I updated pretty heavily on it at the time.

I'm going to suggest reading widely as another solution. I think it's dangerous to focus too much on one specific subgenre, or certain authors, or books from only one source (your library and Amazon do, in fact, filter your content for you, if not very tightly).

For me, the benefit of studying tropes is that it makes it easy to talk about the ways in which stories are story-like. In fact, to discuss what stories are like, this post used several links to tropes (specifically ones known to be wrong/misleading/inapplicable to reality). 

I think a few deep binges on TVTropes for media I liked really helped me get a lot better at media analysis very, very quickly. (Along with a certain anime analysis blog that mixed obvious-but-insightful cinematography commentary on framing, color, and lighting with more abstract analysis of mood, theme, character, and purpose -- both illustrated with links to screenshots, using media that was familiar and interesting to me.)

And putting word-handles on common story features makes it easy to spot them turning up in places they shouldn't -- like in your thinking about real-life situations.

However, you decided to define "intelligence" as "stuff like complex problem solving that's useful for achieving goals", which means that intentionality, consciousness, etc. are unconnected to it.

This is the relevant definition for AI notkilleveryoneism. 

There have to be some limits

Those limits don't have to be nearby, or look 'reasonable', or be inside what you can imagine. 

Part of the implicit background for the general AI safety argument is a sense for how minds could be, and that the space of possible minds is large and unaccountably alien. Eliezer spent some time trying to communicate this in the sequences: https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general, https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message. 

Early LessWrong was atheist, but everything on the internet around the time LW was taking off had a position in that debate. "...the defining feature of this period wasn’t just that there were a lot of atheism-focused things. It was how the religious-vs-atheist conflict subtly bled into everything." Or less subtly, in this case. 

I see it just as a product of the times. I certainly found the anti-theist content in Rationality: A to Z slightly jarring on a re-read -- on other topics, Eliezer is careful not to bring in the political issues of the day that could emotionally overshadow the more subtle points he's making about thought in general -- but he'll drop in extremely anti-religion jabs despite that. To me, that's just part of reading history.

Tying back to an example in the post: if we're using (7-bit) ASCII encoding, then the string "Mark Xu" takes up 49 bits. It's quite compressible, but that still leaves more than enough room for 24 bits of evidence to be completely reasonable.
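(A quick sanity check of that arithmetic, as a minimal Python sketch -- 7 characters, counting the space, at 7 bits each:)

```python
name = "Mark Xu"
bits = len(name) * 7  # 7 characters x 7 bits each in 7-bit ASCII
print(bits)           # 49
```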

This paper suggests that spoken language is consistently ~39 bits/second.

https://advances.sciencemag.org/content/5/9/eaaw2594

Where does the money go? Is it being sold at cost, or is there surplus?

If money is being made, will it support: 1. The authors? 2. LW hosting costs? 3. LW-adjacent charities like MIRI? 4. The editors/compilers/LW moderators?

EDIT: Was answered over on /r/slatestarcodex. tl;dr: one print run has been paid for at a loss; any (unlikely) profits go to supporting the LessWrong nonprofit organization.

If $\hat{f}$ and $\hat{g}$ are the Fourier transforms of $f$ and $g$, then $\widehat{f * g} = \hat{f} \cdot \hat{g}$. This is yet another case where you don't actually have to compute the convolution to get the thing. I don't actually use Fourier transforms or have any intuition about them, but for those who do, maybe this is useful?
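For the discrete case this is easy to check numerically. A minimal sketch, assuming NumPy (the example signals are arbitrary; the zero-padding makes the DFT's circular convolution agree with the linear one):

```python
import numpy as np

# Two arbitrary short signals, standing in for f and g.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 0.25, 2.0])

# Direct (linear) convolution.
direct = np.convolve(f, g)

# Via the convolution theorem: multiply the FFTs, then invert.
# Zero-pad both signals to the full output length first.
n = len(f) + len(g) - 1
via_fft = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)).real

print(np.allclose(direct, via_fft))  # True
```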


It's amazingly useful in signal processing, where you often care about the frequency domain because it's perceptually significant (e.g. the perceived pitch & timbre of a sound correspond to the fundamental frequency of the air vibrations & the other frequencies present. Sound too fizzy or harsh? Lowpass filter it. Too dull or muffled? Boost the higher frequencies, etc.). Although there it's used the other way around -- by doing the convolution, you don't have to compute the Fourier transforms.

If you have a signal $x$ and want to change its frequency distribution, what you do is construct a 'short' (finite support) function -- the convolution kernel -- whose frequency-domain transform would multiply to give the kind of frequency response you're after. Then you can convolve them in the time domain, and don't need to compute the Fourier/inverse-Fourier transforms at all.
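A minimal sketch of that workflow, assuming NumPy/SciPy (the sample rate, cutoff frequency, and tap count are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import firwin

fs = 44100  # sample rate, Hz

# Test signal: a 440 Hz tone plus some 'fizzy' 8 kHz content.
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 8000 * t)

# Windowed-sinc lowpass kernel with finite support (101 taps),
# designed so its frequency response rolls off around 2 kHz.
kernel = firwin(numtaps=101, cutoff=2000.0, fs=fs)

# The filtering itself is just convolution in the time domain.
y = np.convolve(x, kernel, mode='same')
```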

For example, in audio processing: many systems (IIRC, linear time-invariant ones) can be 'sampled' by taking an impulse response -- the output of the system when the input is an impulse (like the Dirac delta function, which is ∞ at 0 but 0 elsewhere -- or as close as you can physically construct). This impulse response can then impart the 'character' of the system via convolution -- this is how convolution reverbs add, as an audio effect, the sound of specific real-life resonant spaces to whatever audio signal you feed them ("this is your voice in the Notre Dame cathedral" style). There are also guitar amp/cab sims that work this way. This works because the Dirac delta is the identity under (continuous) convolution (and because these real physical things -- sound interacting with a space, speakers -- are linear & time-invariant).
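A toy convolution reverb along those lines, assuming NumPy/SciPy. The impulse response here is synthetic (exponentially decaying noise), purely as a stand-in; a real convolution reverb would load an IR recorded in an actual space:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(0)

# 'Dry' input signal: a half-second 440 Hz tone.
t = np.arange(fs // 2) / fs
dry = np.sin(2 * np.pi * 440 * t)

# Stand-in impulse response: one second of decaying noise,
# a crude approximation of a room's response to an impulse.
decay = np.exp(-4 * np.arange(fs) / fs)
ir = rng.standard_normal(fs) * decay

# The reverb effect is just convolution with the impulse response
# (fftconvolve uses the convolution theorem under the hood).
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))  # normalize to avoid clipping
```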

It also comes up in image processing: you can do a lot of basic image processing with a 2D discrete convolution kernel. Blurs/anti-aliasing/lowpass, image sharpening/highpass, and edge 'detection' can all be implemented this way.
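A sketch of the classic 3x3 kernels, assuming NumPy/SciPy (the test image is a toy placeholder):

```python
import numpy as np
from scipy.ndimage import convolve

# Toy grayscale image: a bright square on a dark background.
image = np.zeros((32, 32))
image[12:20, 12:20] = 1.0

# Box blur (lowpass): equal weights summing to 1.
blur = np.full((3, 3), 1 / 9)

# Sharpen (highpass boost): identity plus an edge-enhancing term.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])

# Laplacian-style edge 'detection': responds where intensity changes.
edges = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]])

blurred = convolve(image, blur)
sharpened = convolve(image, sharpen)
edge_map = convolve(image, edges)
```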

In my experience, stating things outright and giving examples helps with communication. You might not need a definition, but the relevant question is whether it would improve the text for other readers.
