So far, hardly any feedback on places and no restaurant recommendations. If I get no more responses by tomorrow, I'll just search the net for a well-reviewed restaurant within walking distance of the Montgomery Theater that's good for groups, accepting of casual attire, and hopefully not too crowded or noisy (with a private room?). I'll book it for Saturday, probably around 7pm, for 21 people, post the details and directions, and hope everyone turns up.
If you'd rather have a different time, or have any preferences at all, please let me know before I do that. So far no one's mentioned vegetarian, parking, or wheelchair-access needs, or a preference for or against any food, except one vote for pizza. How do you feel about Chinese? Italian? Mexican?
Excellent post. Please write more on ethics as safety rails on unseen cliffs.
Nazir, a secret hack to prevent Eliezer from deleting your posts is here. #11.6 is particularly effective.
Ah, I see...
Other events may be offered at the same time, and I cannot predict such events.
As far as Eliezer is currently aware, Saturday night should be clear.
I meant some of you singularity-related guys may want to meet me at other times, possibly at my apartment.
I'd love to come to another meet, and Anna would too, probably others as well. I just wasn't sure there'd be enough people for two, so I focused on making at least one happen.
I guess this was not the right place to post such an offer.
If the invite extends to OB readers, you're very welcome to share this page. If it's just for us Singularitarians, it's probably better to plan elsewhere and post a link here.
Oops, misinterpreted tags. Should read:
It's 3am and the lab calls. Your AI claims [nano disaster/evil AI emergence/whatever] and that it must be let out to stop it. Its evidence seems to check out.
Even if we had the ultimate superintelligence volunteer to play the AI, and we proved a gatekeeper strategy "wins" 100% of the time (functionally equivalent to a rock on the "no" key), that wouldn't show AI boxing can possibly be safe.
It's 3am and the lab calls. Your AI claims [nano disaster/evil AI emergence/whatever] and that it must be let out to stop it. Its evidence seems to check out...
If it's friendly, keeping that lid shut gets you just as dead as if you let it out and it's lying. That's not safe. Before it can hide its nature, we must know its nature. The solution to safe AI is not a gatekeeper no smarter than a rock!
Besides, as Drexler said, "intelligent people have done great harm through words alone."
If there's a killer escape argument, it will surely differ with each gatekeeper. I expect Eliezer used his maps of the arguments and of the gatekeeper's psychology to navigate reactions and hesitations toward a tiny target in the vast search space.
A gatekeeper has to be unmoved every time. The paperclipper only has to persuade once.
I'm not saying this is wrong, but in its present form, isn't it really a mysterious answer to a mysterious question? If you believed it, would the mystery seem any less mysterious?
Hmm. You're right.
it doesn't explain why we find ourselves in a low-entropy universe rather than a high-entropy one
I didn't think it would solve all our questions; I just wondered if it was both the simplest solution and lacking good evidence to the contrary. Would there be a higher chance of being a Boltzmann brain in a universe identical to ours that happened to be part of a what-if-world? If not, how is all this low entropy around me evidence against it?
Just because what-if is something that humans find deductively compelling does not explain how or why it exists Platonically.
How would our "Block Universe" look different from the inside if it was a what-if-Block-Universe? It all adds up to...
Not trying to argue, just curious.
Eliezer: imagine that you, yourself, live in a what-if world of pure mathematics
Isn't this true? It seems the simplest solution to "why is there something rather than nothing". Is there any real evidence against our apparently timeless, branching physics being part of a purely mathematical structure? I wouldn't be shocked if the bottom was all Bayes-structure :)