sharps030

Strong Evidence is Common

Tying back to an example in the post: if we're using ASCII encoding, then the string "Mark Xu" takes up 49 bits (7 characters × 7 bits). It's quite compressible, but that still leaves more than enough room for 24 bits of evidence to be completely reasonable.
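
A quick sketch of that arithmetic in Python (this just restates the numbers above; nothing here is beyond the comment's own claim):

```python
# 7 ASCII characters at 7 bits each:
name = "Mark Xu"
print(len(name) * 7)  # 49 bits

# 24 bits of evidence is a likelihood ratio of about 16.7 million to 1:
print(2 ** 24)  # 16777216
```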

This paper suggests that spoken language consistently conveys ~39 bits/second:

https://advances.sciencemag.org/content/5/9/eaaw2594

The LessWrong 2018 Book is Available for Pre-order

Where does the money go? Is it being sold at cost, or is there surplus?

If money is being made, will it support:
1. The authors?
2. LW hosting costs?
3. LW-adjacent charities like MIRI?
4. The editors/compilers/LW moderators?

EDIT: Was answered over on /r/slatestarcodex. tl;dr: one print run has been paid for at a loss; any (unlikely) profits go to supporting the LessWrong nonprofit organization.

The central limit theorem in terms of convolutions

If $\hat{f}$ and $\hat{g}$ are the Fourier transforms of $f$ and $g$, then $\widehat{f \ast g} = \hat{f} \cdot \hat{g}$. This is yet another case where you don't actually have to compute the convolution to get the thing. I don't actually use Fourier transforms or have any intuition about them, but for those who do, maybe this is useful?
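
For anyone who wants to see the identity concretely, here's a minimal numerical check with numpy (my sketch, not from the original comment), using the discrete Fourier transform and circular convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
g = rng.standard_normal(8)
n = len(f)

# Circular convolution computed directly from the definition.
conv = np.array([sum(f[j] * g[(i - j) % n] for j in range(n)) for i in range(n)])

# The same thing via Fourier transforms: multiply pointwise, then invert.
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(conv, via_fft))  # True
```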


It's amazingly useful in signal processing, where you often care about the frequency domain because it's perceptually significant (e.g. the perceived pitch and timbre of a sound correspond to the fundamental frequency of the air vibrations and the other frequencies present. Sounds too fizzy or harsh? Lowpass filter it. Too dull or muffled? Boost the higher frequencies, etc.). Although there it's used the other way around -- by doing the convolution, you don't have to compute the Fourier transforms.

If you have a signal and want to change its frequency distribution, what you do is construct a 'short' (finite-support) function -- the convolution kernel -- whose frequency-domain transform would multiply with the signal's to give the kind of frequency response you're after. Then you can convolve the two in the time domain, and you don't need to compute the Fourier/inverse-Fourier transforms at all.
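
A rough sketch of that workflow (the sample rate, cutoff, kernel length, and window choice below are all arbitrary assumptions, not anything from the thread):

```python
import numpy as np

fs = 44100          # sample rate in Hz (assumed)
cutoff = 1000.0     # lowpass cutoff in Hz (assumed)
taps = 101          # kernel length, odd so it has a center sample

# Windowed-sinc design: truncated ideal lowpass, tamed by a Hamming window.
n = np.arange(taps) - (taps - 1) / 2
kernel = np.sinc(2 * cutoff / fs * n)
kernel *= np.hamming(taps)
kernel /= kernel.sum()  # normalize for unity gain at DC

# Apply it by time-domain convolution -- no FFT of the signal needed.
signal = np.random.default_rng(1).standard_normal(fs)  # 1 second of noise
filtered = np.convolve(signal, kernel, mode="same")
```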

For example, in audio processing. Many systems (IIRC, linear time-invariant ones) can be 'sampled' by taking an impulse response -- the output of the system when the input is an impulse (like the Dirac delta function, which is ∞ at 0 but 0 elsewhere -- or as close as you can physically construct). This impulse response can then impart the 'character' of the system via convolution. This is how convolution reverbs add, as an audio effect, the sound of specific real-life resonant spaces to whatever audio signal you feed them ("this is your voice in Notre-Dame cathedral" style). There are also guitar amp/cab sims that work this way. This works because the Dirac delta is the identity under (continuous) convolution (and because these real physical things, like sound interacting with a space, or a speaker, are linear and time-invariant).
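
A toy illustration of both points -- the unit impulse as the convolution identity, and 'applying' an impulse response -- with a synthetic decaying-noise tail standing in for a real recorded impulse response:

```python
import numpy as np

sig = np.random.default_rng(2).standard_normal(1000)

# A unit impulse (discrete stand-in for the Dirac delta) is the identity
# under convolution: convolving with it returns the signal unchanged.
delta = np.zeros(64)
delta[0] = 1.0
assert np.allclose(np.convolve(sig, delta)[: len(sig)], sig)

# Convolution 'reverb': a real impulse_response would be a recording of an
# impulse in an actual space; here it's just a toy exponentially-decaying
# noise tail.
impulse_response = np.random.default_rng(3).standard_normal(2000) * np.exp(
    -np.linspace(0, 8, 2000)
)
wet = np.convolve(sig, impulse_response)  # the signal 'in' that space
```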

It also comes up in image processing. You can do a lot of basic image processing with a 2D discrete convolution kernel: blurs/anti-aliasing/lowpass, image sharpening/highpass, and edge 'detection' can all be implemented this way.
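
For instance (a sketch with scipy; the specific 3×3 kernels below are the textbook box-blur, sharpen, and edge kernels, my choice rather than the comment's):

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.default_rng(4).random((64, 64))  # stand-in grayscale image

box_blur = np.ones((3, 3)) / 9.0                 # lowpass / blur
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)  # highpass boost
edges   = np.array([[-1, -1, -1],
                    [-1,  8, -1],
                    [-1, -1, -1]], dtype=float)  # edge 'detection'

blurred   = convolve2d(image, box_blur, mode="same", boundary="symm")
sharpened = convolve2d(image, sharpen,  mode="same", boundary="symm")
outlined  = convolve2d(image, edges,    mode="same", boundary="symm")
```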

Moral uncertainty: What kind of 'should' is involved?

In my experience, stating things outright and giving examples helps with communication. You might not need a definition, but the relevant question is whether it would improve the text for other readers.

Blackmail

"It's obviously bad. Think about it and you'll notice that. I could write a YA dystopian novel about how the consequences are bad." <-- isn't an argument, at all. It assumes bad consequences rather than demonstrating or explaining how the consequences would be bad. That section is there for other reasons, partially (I think?) to explain Zvi's emotional state and why he wrote the article, and why it has a certain tone.

Blackmail
"I am not sure why you pick on blackmail specifically."

This is in response to other writers, esp. Robin Hanson. That's why.

Blackmail
"This only looks at the effects on Alice and on Bob, as a simplification. But with blackmail, 'carrying out the threat' means telling other people information about Bob, and that is often useful for those other people."

When the public interest motivates the release of private info, it's called 'whistleblowing', and it is* legally protected and considered far more moral than blackmail. I think that contrast is helpful for understanding why usefulness to third parties isn't enough to make blackmail moral.

*in some jurisdictions, restrictions may apply, see your local legal code for a full list of terms & conditions.

I think you're right that it's not trivially negative-sum, because it can have positive outcomes for third parties. I still expect a world of legal blackmail to be worse.

Open Thread August 2018

If you're throwing your AI into a perfect, inescapable hole to die, and never again interacting with it, then what exact code you're running will never matter. If you observe it, though, then it can affect you. That's an output.

What are you planning to do with the filtered-in 'friendly' AIs? Run them in a different context? Trust them with access to resources? Then an unfriendly AI can propose you as a plausible hypothesis, predict your actions, and fake being friendly. It just has to consider that escape might be reachable, or that there might be things it doesn't know, or that sleeping for a few centuries and seeing if anything happens is an option-maximizing alternative to halting, etc. I don't know what you're selecting for -- suicidality, willingness to give up, halting within n operations -- but it's not friendliness.

5 general voting pathologies: lesser names of Moloch

https://www.lesserwrong.com/posts/D6trAzh6DApKPhbv4/a-voting-theory-primer-for-rationalists

The first link in this post should go ^ here to your voting theory primer. Instead, for me, it links here:

https://www.lesserwrong.com/posts/JewWDfLoxgFtJhNct/utility-versus-reward-function-partial-equivalence