LESSWRONG

Garrett Baker

I have signed no contracts or agreements whose existence I cannot mention.

They thought they found in numbers, more than in fire, earth, or water, many resemblances to things which are and become; thus such and such an attribute of numbers is justice, another is soul and mind, another is opportunity, and so on; and again they saw in numbers the attributes and ratios of the musical scales. Since, then, all other things seemed in their whole nature to be assimilated to numbers, while numbers seemed to be the first things in the whole of nature, they supposed the elements of numbers to be the elements of all things, and the whole heaven to be a musical scale and a number.

Metaph. A. 5, 985 b 27–986 a 2.

Sequences

Isolating Vector Additions

Comments
Benito's Shortform Feed
Garrett Baker · 3d

Economics, magician, sigh, ???, ditto, disgusting, easy to digest, moloch, found a bug, yup that’s what I said, passes the sniff test, missing link, not missing link in argument chain, and marginally changed my mind.

plex's Shortform
Garrett Baker · 3d

This seems potentially useful if you find yourself regularly getting told you are manipulative or rude, but if not, I don’t see the value proposition.

I’m also skeptical that general, universal rules like this exist for everyone, or that “dissonance” is bad in every circumstance for everyone.

As a concrete example, sometimes my friends will tell me they are stupid or inadequate, and sometimes I know that’s just not true, so I tell them. This no doubt causes dissonance, but a good kind, because I know those people well enough to know they just aren’t thinking clearly (and if they were, my confident assertions would not much change their trajectory).

Less concretely, many people thrive in direct confrontation, and NVC seems opposed to that.

Underdog bias rules everything around me
Garrett Baker · 5d

The ancient Greeks had many tragic stories too, then: Prometheus Bound, for example.

Before LLM Psychosis, There Was Yes-Man Psychosis
Garrett Baker · 6d

It’s a good effort to fight against new, medically inaccurate terms that give the wrong impression of real medical phenomena, but sadly I think that ship has long since sailed on this one, and the term is already much more popularly watered down than what is described in this post (though even then, the term does refer to real & harmful psychological consequences of interacting with AIs).

Eli's shortform feed
Garrett Baker · 7d

I will note that historically, despite the positive externalities, fire insurance companies often put out not only their subscribers’ fires, but everyone’s.

Linch's Shortform
Garrett Baker · 8d

@Mo Putera has asked for concrete examples of 

over-use of over-complicated modeling tools which are claimed to be based on hard data and statistics, but the amount and quality of that data is far too small & weak to "buy" such a complicated tool

I think AI 2027 is a good example of this sort of thing. Similarly, the notorious Rethink Priorities welfare range estimates on animal welfare, and though I haven't thought deeply about it enough to be confident, GiveWell's famous giant spreadsheets (see the links in the last section) are the sort of thing I am very nervous about. I'll also point to Ajeya Cotra's bioanchors report. 

Linch's Shortform
Garrett Baker · 8d

I think we both agree insofar as we would give similar diagnoses, but we maybe disagree insofar as we would give different recommendations about what to change.

I would recommend LessWrongers read more history, do more formal math and physics, and make more mathematical arguments[1].

I would expect you would recommend LessWrongers spend more time looking at statistics (in particular, our world in data), spend more time forecasting, and make more mathematical models.

Is this accurate?


  1. This is not an exclusive list. I also think LessWrongers should read more textbooks about pretty much everything.

Linch's Shortform
Garrett Baker · 8d

I think the EA Forum is epistemically better than LessWrong in some key ways, especially outside of highly politicized topics. Notably, there is a higher appreciation of facts and factual corrections.

My problem with the EA Forum (or really EA-style reasoning as I've seen it) is the over-use of over-complicated modeling tools which are claimed to be based on hard data and statistics, when the amount and quality of that data is far too small & weak to "buy" such a complicated tool. So in some sense, perhaps, they move too far in the opposite direction. But I think the way EAs think about these things is the wrong way, even directionally, for LessWrong to adopt (though not for people in general).

I think this leads EAs to have a pretty big streetlight bias in their thinking, and the same with forecasters, and in particular EAs seem like they should focus more on bottleneck-style reasoning (eg focus on understanding & influencing a small number of key important factors).

See here for how, concretely, I think this cashes out into the different recommendations I'd give to LessWrong.

Banning Said Achmiz (and broader thoughts on moderation)
Garrett Baker · 8d

It’s not obvious to me that this is dumb. If two people are super angry at each other, that conversation seems likely to create more heat than light.

Yudkowsky on "Don't use p(doom)"
Garrett Baker · 8d

Why a single necessary and sufficient policy? What if the most realistic way of helping everyone is several policies that are by themselves insufficient, but together sufficient? Doesn't this focus us on dramatic actions unhelpfully, in the same way that a "pivotal act" arguably so focuses us?

I agree the phrasing here is maybe bad, but I think it's generally accepted that "X and Y" is a policy when "X" and "Y" are independently policies, so I would expect a set of policies which are together sufficient to be an appropriate answer.

Reply
Posts

1 · D0TheMath's Shortform (5y, 233 comments)
67 · What and Why: Developmental Interpretability of Reinforcement Learning (1y, 4 comments)
51 · On Complexity Science (1y, 19 comments)
52 · So You Created a Sociopath - New Book Announcement! (1y, 3 comments)
75 · Announcing Suffering For Good (1y, 5 comments)
40 · Neuroscience and Alignment (1y, 25 comments)
16 · Epoch wise critical periods, and singular learning theory (2y, 1 comment)
24 · A bet on critical periods in neural networks (2y, 1 comment)
27 · When and why should you use the Kelly criterion? (2y, 25 comments)
26 · Singular learning theory and bridging from ML to brain emulations (2y, 16 comments)
61 · My hopes for alignment: Singular learning theory and whole brain emulation (2y, 5 comments)