BrandonReinhart

Donation sent.

I've been very impressed with MIRI's output this year, to the extent I'm able to judge it. I don't have the domain-specific ability to evaluate the papers, but material is being produced at a sustained pace. I've also read much of the thinking around VAT, related open problems, and the definition of concepts like foreseen difficulties... the language and framework for carving up the AI safety problem have really moved forward.

Well, I totally missed the diaspora. I read Slate Star Codex (but not the comments) and had no idea people were posting things in other places. It surprises me that it even has a name, "rationalist diaspora." It seemed to me that people had run out of things to say or that the booster-rocket thing had played itself out. This is probably because I don't read Discussion, only Main, and as Main received fewer posts I stopped coming to Less Wrong. As "meet up in area X" took over the stream of content I unsubscribed from my RSS reader. Over the past few years the feeling of a community completely evaporated for me. Good to hear that there is something going on somewhere, but it still isn't clear where that is. So archiving LW and embracing the diaspora means, to me, so long and thanks for all the fish.

When you’re “up,” your current strategy is often weirdly entangled with your overall sense of resolve and commitment—we sometimes have a hard time critically and objectively evaluating parts C, D, and J because flaws in C, D, and J would threaten the whole edifice.

Aside 1: I run into many developers who aren't able to separate their ideas from their identity. It tends to make them worse at customer- and product-oriented thinking. In a high-bandwidth collaborative environment, it leads to an assortment of problems. They might not suggest an idea because they think the group will shoot it down and they will be perceived as a generator of poor ideas. Or they might not relinquish an idea that the group wants to modify, or work on an alternative to, because they feel that, too, is failure. Or they might not critically evaluate their own idea to the standard they would apply to any idea that didn't come from their own mind. Over time it can lead to selective sidelining of that person in a way that takes deliberate effort to undo.

The most effective collaborators are able to generate many ideas of varying initial quality and then work with the group to refine them or reject the ones that are problematic. They are able to do this without taking collateral damage to their egos. These collaborators see the ideas they generate as products separate from themselves, products meant to be improved through iteration by the group.

I've seen many cases where this entanglement of ego with idea generation gets fixed (through involvement of someone who identifies the problem and works with that person) and some cases where it doesn't get fixed (after several attempts, with bad outcomes).

I know this isn't directly related to the post, but it occurred to me when I read the quoted part above.

Aside 2: I have similar mood swings when I think about the rationalist community: "Less Wrong seems dead, there is no one to talk to," and then "Oh look, Anna has a new post, the world is great for rationalists." I think it's different from the work-related swings, but it was also brought to mind by the post.

I've always thought that "if I were to give, I should maximize the effectiveness of that giving," but I did not give much, nor did I consider myself an EA. I had a slight tinge of "not sure if EA is a thing I should advocate or adopt." I had the impression that my beliefs probably didn't overlap much with EAs' and that I needed to learn more about where those gaps were and why they existed.

Recently, through Robert Wiblin's Facebook, I have encountered more interesting arguments and content in EA. I had no concrete beliefs about EA, only vague impressions (not having had much time to research it in depth in the past). I had developed an impression that EA was about people maximizing giving to a self-sacrificial degree that I found uncomfortable. I have also repeatedly bounced off the animal activism - I have a hard time separating my pleasure in eating meat from my understanding of the ethical arguments. (So I figured I would be considered a lawful evil person by the average EA.)

However, now having read a few more things even just today, I feel like these are misplaced perceptions of the movement. Reading the 2014 summary, posted in a comment here by Tog, makes me think that:

  • EAs give in a pattern similar to how I would give. However, I personally weight the x-risk and teaching-rationality causes a bit higher than the mean.

  • EAs give about as much as I'd be willing to give before I run into egoist problems (where it becomes painful in a stupid way I need to work to correct). So 10% seems very reasonable to me. For whatever reason, I had thought that "EA" meant "works to give away most of what they earn and lives a spartan life." I think this comes from not knowing any EAs and instead reading 80,000 Hours and other resources without completely processing the message. Probably some selective reading going on, and I need to review how that happened.

  • The "donate to one charity" argument is so much easier for me to plan around.

Overall, I should have read the 2014 results much sooner; they helped me realize that my perspective is probably a lot closer to the average LWer's than I had thought. This makes me feel like taking further steps to learn more about EA and making concrete plans to give some specific amount from an EA perspective are things I should do. Which is weird, because I could have done all of that anyway, but I was letting myself bounce off the un-pleasurable conclusions of giving up meat eating or giving away a large portion of my income. Neither of which I have to do in the short term to give effectively or participate in the EA community. Derp.

I'm curious about the same thing as [deleted].

Furthermore, a hard-to-use text may be significantly less hard to use in the classroom, where you have peers, teachers, and other forms of guidance to help digest the material. Recommendations for specialists working at home or outside a classroom might not be the same as the recommendations you would give to someone taking a particular class at Berkeley or studying in some other environment where those resources are available.

A flat-out bad textbook might seem really good when it is something else, such as the teacher, the method, or the support, that makes the book work.

"A directed search of the space of diet configurations" just doesn't have the same ring to it.

Thanks for this. I hadn't seen someone pseudocode this out before. It helps illustrate that interesting problems lie in the scope above (callers to tdt_utility(), etc.) and below (the implementation of tdt(), etc.).

I wonder if there is a rationality exercise in 'write pseudocode for problem descriptions, explore the callers and implementations'.
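To make that exercise concrete, here is a minimal sketch of what the "caller above / implementation below" split might look like. All of the names here (tdt_utility, choose_action, the toy Newcomb-style payoffs) are hypothetical placeholders of my own, not the pseudocode from the post I'm replying to:

```python
# Minimal sketch of the "explore the callers and implementations" exercise.
# Everything here is an illustrative placeholder, not an actual TDT implementation.

from typing import Callable, Dict, List


def tdt_utility(action: str, world_model: Dict[str, float]) -> float:
    """The 'implementation below': score an action against a toy world model.

    A real decision theory would reason here about the agent's own decision
    procedure; this stub just looks up a payoff.
    """
    return world_model.get(action, 0.0)


def choose_action(actions: List[str],
                  utility: Callable[[str, Dict[str, float]], float],
                  world_model: Dict[str, float]) -> str:
    """The 'caller above': pick the action the utility function rates highest."""
    return max(actions, key=lambda a: utility(a, world_model))


if __name__ == "__main__":
    # Toy Newcomb-like payoffs, purely illustrative.
    world_model = {"one_box": 1_000_000.0, "two_box": 1_000.0}
    print(choose_action(["one_box", "two_box"], tdt_utility, world_model))
```

The exercise, as I imagine it, is noticing that the interesting problems live in replacing that stub and in asking where the caller's world model and action list come from in the first place.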

Doh, I have no idea why my hands type c-y-r instead of c-r-y, thanks.

Metaphysical terminology is a huge bag of stupid and abstraction, but what I mean by mysticism is something like 'characteristic of a metaphysical belief system.' The mysticism tag tells me that a concept is positing extra facts about how the world works in a way that isn't consistent with my more fundamental, empirical beliefs.

In my mind, I have 'WARNING!' tags (intentionally) attached to mysticism. So when I see something with the mysticism tag attached, I approach cautiously and with a big stick. Or, to save time or avoid the risk of being eaten, I often don't approach at all.

If I find that I have a metaphysical belief or if I detect that a fact/idea may be metaphysical, then I attach the mystical tag to it and go find my stick.

If something in my mind has the mysticism tag attached to it inappropriately, then I want to reclassify that thing -- slightly reduce the size of the tag or create a branch through more specific concept definition and separation.

So I don't really see value in attaching the mysticism tag to things that don't directly warrant it. What you call a mystical litany I'd call a mnemonic technique for reminding yourself of a useful process or dangerous bias. Religions have litanies, but litanies are not inherently religious concepts.

So no, I won't consider mysticism itself as a useful brain hack. Mysticism is allocated the purpose of 'warning sign.' It's not the only warning sign, but it's a useful one.
