
Interpreting Yudkowsky on Deep vs Shallow Knowledge

After first read-through of your post the main thing that stuck with me was this:

But the thing is… rereading part of the Sequences, I feel Yudkowsky was making points about deep knowledge all along? Even the quote I just used, which I interpreted in my rereading a couple of weeks ago as being about making predictions, now sounds like it’s about the sort of negative form of knowledge that forbids “perpetual motion machines”.


This gives me an icky feeling.


(low confidence in the following parts of this comment)


It makes me think of the Bible.  The "specifications" laid out in the Bible are loosey-goosey enough that believers can always reinterpret such-and-such verse to mean whatever newer evidence permits. (I want to stress that I'm not drawing a parallel between unthinking Christian believers and anyone changing their belief based upon new evidence! The parallel is between two instances of the difficult task of writing text designed to change future behavior.)

If it's so loosey-goosey, then what's it good for?

That's most definitely not to say that anything you can reinterpret in light of new evidence is full of shit. However, you've got to have a good and solid explanation for the discrepancy between your earlier and later interpretations.  The importance, and the difficulty, of producing this explanation probably depends on whether we're talking about a quantitative physics experiment or a complicated tome of reasoning, philosophy, and rhetoric. The complicated-tome case is important and hard because it's so very hard to convey our most complicated thoughts explicitly enough that they can't be interpreted in a multitude of ways.

I think producing the explanation of the discrepancy between earlier and later interpretations is likely full of cognitive booby traps.

What have your romantic experiences with non-EAs/non-Rationalists been like?
Answer by Dustin, Dec 05, 2021

Married for 20ish years to such a person. In fact, not only is she not anyone I'd call a rationalist, she's not even really interested in any of the works of the mind... philosophy, science, literature, etc.

It's been 95% happy times between us. For my part, I think we stick together because she makes me laugh, I'm very easygoing, and it's not of the utmost importance to me that my life partner is interested in the same things as I am.

All that being said, I wouldn't recommend it. I think we've lucked out so far in that we've been able to make it work as well as we have. If I was looking for a new partner I wouldn't count on such luck.

Looking for reasoned discussion on Geert Vanden Bossche's ideas? +6 Months on...

I do not think you can track the performance of the LW community on this over time in this manner.  You only had 5-ish commenters on the last post, so the best you could do is track those people's performance... but you can't even really do that, because you don't have any quantifiable data from them.

If you try to track performance with the type and quantity of data you get from free-form comments you will be almost guaranteed to extract the conclusion you already believed and not learn anything new.

If you want more useful data, maybe try a prediction market? But even that is just going to get you a small number of participants that won't tell you anything about the "LW community".

Use Tools For What They're For

If they are so confident that their vaccines are stellar successes, why did they specify in their contracts with European governments that they could not be held liable for side effects?


I mean, that's just good practice.  No one can be 100% sure of anything, and you always want to take as little liability as possible...particularly when the costs of taking less liability are low.

How will OpenAI + GitHub's Copilot affect programming?

I've been using GitHub's official Copilot plugin for PyCharm for a couple of weeks. Nice to see that they reached outside of their corporate parent to make this.

The plugin is a bit buggy, but very usable.  Copilot is a good assistant.

A Brief Introduction to Container Logistics

Ahh yes, 10% of the impact they were supposed to have.  Thanks for noting that.

A Brief Introduction to Container Logistics

Cool post with insights into an interesting industry!  

I've held positions with executive authority in multiple locations and multiple industries, and my best guess is that 90% of quick fixes had only 10% of the impact they were supposed to have.

Much of what humanity does is too complicated to completely understand and formalize into structures that enable the kind of analysis that lets you figure out the right "quick fix"... particularly for outsiders!  (Not to discount the very real benefits outside eyes can bring to a problem.)


edit: To expand, I'm not talking only about quick fixes created and implemented by me.  There's always someone coming in with some thing to fix all our problems, and we'll either see the shortcomings of the plan up front or try it out and be disappointed.

Disagreeables and Assessors: Two Intellectual Archetypes

I like this post a lot.

One thing I keep thinking about is this sentence: 

They're quick to call out bullshit and are excellent at coming up with innovative ideas. Unfortunately, they produce a whole lot of false positives; they're pretty overconfident and wrong a great deal of the time.

Are they excellent at coming up with innovative ideas?  

In the context of the framing you're using here: On the one hand, yes of course they are.  On the other hand, a stopped clock is excellent at being right twice a day.  I have a bit of a hard time differentiating the two hands here.  

I think maybe it comes down to what we mean by "excellent", and you get into that in your post.  It just feels wrong on a fundamental level to call the process by which these ideas are arrived at "excellent"... but I guess that's what a dirty Assessor would say!

Resurrecting all humans ever lived as a technical problem

Some random things I thought about while skimming the post, regarding generating all possible minds:

  1. Is my mind from 1 second or 1 year ago the same mind as my mind right now?  Are we interested in Archimedes's mind from any specific point in time, or at any point in his life?
  2. The state of the minds we're generating in our search.  Are we generating minds whose state is that of a mind that has experienced one billion years of torture?  Is that bad?  Stopping torture is good, no?
  3. How many minds will be generated that did not want to be resurrected?
  4. Some of this depends on the technical details of how traversing the space of possible minds works and also something like...can we generate a mind without "running" it?  If we can generate a mind, and the mind isn't running, are we OK with generating the tortured mind if we can inspect it in a not-running state?
  5. If we're just generating possible minds, how many will say "Yo, I'm Archimedes!" but actually have no relation to our historical Archimedes?

They don't make 'em like they used to

One thing I don't see mentioned too often in these discussions:  I often don't care if something doesn't last as long as it possibly could. I like new things, so I get them.  I don't want the ratty-looking stove I had 20 years ago. I want one with an induction cooktop, "smart" features, and a design that matches my current decor. Some people derisively call this "consumerism".  I call it a benefit of living in the modern age. (Note that I do not dismiss the downsides of consumerism as invalid.)

I'd be way happier with this state of affairs if it were much easier to recycle, resell, or re-gift older products. Living in a more rural area than most makes it way harder than it should be to do anything other than just throw the old product away.

Sometimes I purposefully choose the product that doesn't last as long, because the longer-lasting alternative carries too much of a cost premium. Boots and blue jeans are two items that come to mind.
