
Tyrrell_McAllister

Comments
Don't take the organizational chart literally
Tyrrell_McAllister · 2y · 61

I just saw this post today. I was a little worried that I'd somehow subconsciously stolen this concept and its name from you until I saw your link to my comment. At any rate you definitely described it more memorably than I did.

What Failure Looks Like: Distilling the Discussion
Tyrrell_McAllister · 3y · 20

> Giving up this new technology would be analogous to living like a quaker today

Perhaps you meant "Amish" or "Mennonite" rather than "quaker"?

A non-magical explanation of Jeffrey Epstein
Tyrrell_McAllister · 3y · 250

Nice article all around!

Another error that conspiracy theorists (CTists) make is to "take the org chart literally".

CTists attribute superhuman powers to the CIA, etc., because they suppose that decision-making in these organizations runs exactly as shown on the chart. Each box, they suppose, takes in direction from above and distributes it below just as infallibly as the lines connecting the boxes are drawn on the chart.

If you read org charts literally, it looks like leaders at the top have complete control over everything that their underlings do. So of course the leader can just order the underlings not to defect or leak or baulk at tasks that seem beyond the pale!

This overly literal reading of the org chart obscures the fact that all these people are self-interested agents, perhaps with only a nominal loyalty to the structure depicted on the chart. But many CTists miss this, because they read the org chart as if it were a flowchart documenting the dependencies among subroutines in a computer program.
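
To caricature the contrast (a purely illustrative Python sketch of the two mental models; the function names and the `loyalty` parameter are my own invention, not anything from the article):

```python
import random

# The literal-org-chart model: every box executes orders from above
# as infallibly as a subroutine executes a function call.
def underling_as_subroutine(order: str) -> str:
    return f"done: {order}"

# The self-interested-agent model: every box weighs an order against
# its own interests and may balk, leak, or quietly defect.
def underling_as_agent(order: str, loyalty: float = 0.8) -> str:
    if random.random() < loyalty:
        return f"done: {order}"
    return f"balked, leaked, or defected: {order}"
```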

Deflationism isn't the solution to philosophy's woes
Tyrrell_McAllister · 4y · 10

> LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.

LW should not be comparing itself to Plato. It's trying to do something different. The best of what Plato did is, for the most part, orthogonal to what LW does.

You can take the LW worldview fully on board and still learn a lot from Plato that will not in any way conflict with that worldview.

Or you may find Plato totally useless. But it won't be your adoption of the LW memeplex alone that determines which way you go.

To listen well, get curious
Tyrrell_McAllister · 5y · 30

Also, your empathy reassures them that you will be ready with truly helpful help if they do later want it.

Globally better means locally worse
Tyrrell_McAllister · 5y · 20

I agree that a rich person won't tolerate disposable products where more durable versions are available. Durability is a desirable thing, and people who can afford it will pay for it when it's an option.

But imagine a world where washing machines cost as much as they do in our world, but all washing machines inevitably break down after a couple years. Durable machines just aren't available.

Then, in that world, you have to be wealthier to maintain your washing-machine-owning status. People who couldn't afford to repurchase a machine every couple of years would learn to do without. But people who could afford it would consider it an acceptable cost of living in the style to which they have become accustomed.
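
A back-of-the-envelope comparison makes the squeeze concrete (a minimal sketch; all prices and lifespans here are assumed for illustration, not taken from the post):

```python
PRICE = 600  # assumed sticker price in dollars, identical in both worlds

durable_lifespan = 15    # years; assumed lifespan in our world
disposable_lifespan = 2  # years; assumed lifespan in the imagined world

# Annualized cost of staying in the washing-machine-owning class:
cost_durable = PRICE / durable_lifespan        # $40 per year
cost_disposable = PRICE / disposable_lifespan  # $300 per year

print(f"durable world:    ${cost_durable:.0f}/year")
print(f"disposable world: ${cost_disposable:.0f}/year")
```

Same sticker price, but the yearly cost of ownership, and hence the income floor for staying in the owner class, is several times higher in the disposable world.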

Msg Len
Tyrrell_McAllister · 5y · 30

Did you really need to say that you'd be brief? Wasn't it enough to say that you'd omit needless words? :)

Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’
Tyrrell_McAllister · 5y · 60

> It seems unlikely that joining a specific elite is terminally valuable as such, except to ephemeral subagents that were built for instrumental reasons to pursue it.

It seems quite likely that people seek to join whatever elite they can as a means to some more fundamental ends. Those of us who aren't driven to join the elite are probably satisfying our hunger to pursue those more fundamental ends in other ways.

For example, people might seek elite status in part to win security against bad fortune or against powerful enemies. But it might seem to you that there are other ways to be more secure against these things. It might even seem that being elite would leave you more exposed to such dangers.

For example, if you think that the main danger is unaligned AI, then you won't think of elite status as a safe haven, so you'll be less motivated to seek it. You'll find that sense of security in doing something else that seems to address that danger better.

On Destroying the World
Tyrrell_McAllister · 5y* · 230

> I've played a lot of role-playing games back in my day and often people write all kinds of things as flavour text. And none of it is meant to be taken literally.

This line gave me an important insight into how you were thinking.

The creators were thinking of it as a community trust-building exercise. But you thought that it was intended to be a role-playing game. So, for you, "cooperate" meant "make the game interesting and entertaining for everyone." That paints the risk of taking the site down in a very different light.

> And if there was a particular goal, instead of us being supposed to decide for ourselves what the goal was, then maybe it would have made sense to have been clear about it?

But the "role-playing game" glasses that you were wearing would have (understandably) made such a statement look like "flavor text".

Why is Bayesianism important for rationality?
Answer by Tyrrell_McAllister · Sep 01, 2020* · 80

I wrote a LessWrong post that addressed this: What Bayesianism Taught Me

Posts
24 · Good arguments against "cultural appropriation" · 7y · 12 comments
14 · Moving Factward · 7y · 11 comments
91 · Sam Harris and the Is–Ought Gap · 6y · 46 comments
18 · Intrinsic properties and Eliezer's metaethics · 8y · 27 comments
8 · Globally better means locally worse · 8y · 20 comments
23 · Buckets and memetic immune disorders · 8y · 2 comments
11 · Why is the A-Theory of Time Attractive? · 11y · 89 comments
7 · Rationality Quotes October 2014 · 11y · 238 comments
7 · Link: How Community Feedback Shapes User Behavior · 11y · 13 comments
16 · Rationality Quotes June 2014 · 11y · 283 comments
Wikitag Contributions
Updateless Decision Theory · 3y
Coherent Extrapolated Volition · 6y
Less Wrong/2007 Articles/Summaries · 8y · (+57/-5)
Less Wrong/2007 Articles/Summaries · 8y · (+52)
Updateless Decision Theory · 8y
Updateless Decision Theory · 8y · (+9/-18)
Screening Off (evidence) · 12y · (+199/-69)
Less Wrong/2008 Articles/Summaries · 13y · (+4/-3)
Less Wrong/2008 Articles/Summaries · 13y · (+401/-82)
Solomonoff induction · 13y · (+54/-69)