Ruby

LessWrong Team


I have signed no contracts or agreements whose existence I cannot mention.

Sequences

LW Team Updates & Announcements
Novum Organum


Comments

Ruby · 20

Dog: "Oh ho ho, I've played imaginary fetch before, don't you worry."

Ruby · 113

My regular policy is not to frontpage newsletters; however, I frontpaged this one because it's the first in the series and I think it's neat for more people to know this is a series Zvi intends to write.

Ruby · 50

Curated! I think it's generally great when people explain what they're doing and why in a way that's legible to those not working on it. Great because it lets others potentially get involved, build on it, expose flaws or omissions, etc. This one seems particularly clear and well written. While I haven't read all of the research, nor am I particularly qualified to comment on it, I like the idea of a principled/systematic approach behind it, in comparison to a lot of work that doesn't come from a deeper, bigger framework.

(While I'm here though, I'll add a link to Dmitry Vaintrob's comment that Jacob Hilton described as the "best critique of ARC's research agenda that I have read since we started working on heuristic explanations". Eliciting such feedback is the kind of good thing that comes out of writing up agendas – it's possible or likely Dmitry was already tracking the work and already had these critiques, but a post like this seems like a good way to propagate them and have a public back and forth.)

Roughly speaking, if the scalability of an algorithm depends on unknown empirical contingencies (such as how advanced AI systems generalize), then we try to make worst-case assumptions instead of attempting to extrapolate from today's systems.

I like this attitude. The usual human standard, often in alignment work too, is to argue why one's plan will work and find stories for that; adopting the opposite methodology, especially given the unknowns, is much needed in alignment work.

Overall, this is neat. Kudos to Jacob (and the rest of the team) for taking the time to put this all together. It doesn't seem all that quick to write, and I think it'd be easy to think they ought not to take time off from further object-level research to write it. Thanks!

Ruby · 72

Curated. I really like that even though LessWrong is 1.5 decades old now, with Bayesianism assumed as the background paradigm while people discuss everything else, we can nonetheless have good exploration of our fundamental epistemological beliefs.

The descriptions of unsolved problems, or at least of the incompleteness of Bayesianism, strike me as technically correct. Like others, I'm not convinced by Richard's favored approach, but it's interesting. In practice, I don't think these problems undermine the use of Bayesianism in typical LessWrong thought. For example, I never thought of credences as being applied rigorously to "propositions", but more to "hypotheses" or possibilities for how things are, which could already be framed as models. Context-dependent terms like "large" or quantities without explicit tolerances like "500ft" are the kind of things you taboo or reduce if necessary, either for your own reasoning or for a bet.

That said, I think the claims about the mistakes people make in doing Bayesianism, and their downstream consequences, are interesting. I'm reading a claim here I don't recall seeing before. Although we already knew that bounded reasoners aren't logically omniscient, Richard is adding a claim (if I'm understanding correctly) that this means that no matter how much strong evidence we technically have, we shouldn't have really high confidence in any domain that requires heavy processing of that evidence, because we're not that good at processing. I do think that leaves us with the question of judging when there's enough evidence to be conclusive without complicated processing.

Something I might like a bit more factored out is the distinction between the rigorous gold-standard epistemological framework and the manner in which we apply our epistemology day to day.

I fear this curation notice would be better if I'd read all the cited sources on critical rationalism, Knightian uncertainty, etc., and I've added them to my reading list. All in all, kudos for putting some attention on the fundamentals.

Ruby · 50

Welcome! Sounds like you're on the one hand at the start of a significant journey, but also that you've come a long distance already. I hope you find much helpful stuff on LessWrong.

I hadn't heard of Daniel Schmachtenberger, but I'm glad to have learned of him and his works. Thanks.

Ruby · 2 · -1

The actual reason why we lied in the second message was "we were in a rush and forgot." 

My recollection is that we sent the same message to the majority group because:

  1. Treating it differently would have required special-casing it, and that would have taken more effort.
  2. If selectors of different virtues had received different messages, we wouldn't have been able to properly compare their behavior.
  3. [At least in my mind], this was a game/test, and when playing games you lie to people in the context of the game to make things work. Alternatively, it's like how scientific experimenters mislead subjects for the sake of the study.

Ruby · 40

Money helps. I could probably buy a lot of dignity points for a billion dollars. With a trillion, variance definitely goes up because you could try crazy stuff that could backfire (true for a billion too), but the EV of such a world is better.

I don't think there's anything that's as simple as writing a check though.

US Congress gives money to specific things. I do not have a specific plan for a trillion dollars.

I'd bet against Terence Tao being some kind of amazing breakthrough researcher who changes the playing field.
