Does the Scott Alexander post lay this out? I am having difficulty finding it. 

He doesn’t really. Here’s the original article:

https://www.astralcodexten.com/p/mr-tries-the-safe-uncertainty-fallacy

Also there was a long follow-up where he insists 50% is the right answer, but it’s subscriber-only:

https://www.astralcodexten.com/p/but-seriously-are-bloxors-greeblic

I claim the problem is that our model is insufficient to capture our true beliefs.

There’s a difference in how we act between a coin flip (true 50/50) and “are bloxors greeblic?” (a question we have no info about).

For example, suppose a friend came and said, "Yes, I know this one, the answer is (heads|yes)." For the coin flip you'd say "Are you out of your mind?", but for bloxors you'd say "Ok, sure, you know better than me."

I’ve been idly pondering over this since Scott Alexander’s post. What is a better model?

One option would be to have another percentage — a meta-percentage, e.g. "What credence do I give to 'this is an accurate model of the world'?" For coin flips, you're 99.999% confident that 50% is a good model. For bloxors, you're ~0% confident that 50% is a good model.

I don’t love it, but it’s better than presuming anything at the base level, I think.
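One way to make the meta-percentage concrete is a two-level model: keep the 50% point estimate, but also track the probability k that an informant could actually know the answer. The function and numbers below are purely illustrative (a sketch of the idea, not a worked-out theory):

```python
def updated_credence(q, k):
    """Credence in 'yes' after a friend asserts 'yes'.

    q: prior credence in 'yes' (0.5 for both the coin and bloxors)
    k: probability the friend could actually know the answer;
       if they don't know, they assert 'yes' or 'no' at random.
    """
    p_says_yes_if_true = k + (1 - k) * 0.5   # friend says 'yes' when answer is yes
    p_says_yes_if_false = (1 - k) * 0.5      # friend says 'yes' when answer is no
    p_says_yes = q * p_says_yes_if_true + (1 - q) * p_says_yes_if_false
    return q * p_says_yes_if_true / p_says_yes   # Bayes' rule

# A future fair coin flip: nobody can know the outcome, so k ~ 0.
print(updated_credence(0.5, 0.0))   # 0.5 -- testimony moves us not at all
# Bloxors: the friend could plausibly know, say k ~ 0.9.
print(updated_credence(0.5, 0.9))   # 0.95 -- testimony moves us a lot
```

Both priors sit at exactly 50%, yet they respond very differently to the same testimony, which matches the "are you out of your mind?" vs "Ok, sure" reactions above.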

I don’t understand this.  Plus I suspect it was largely written by an LLM.  

First of all, where does this theory come from?  Did you invent it?  How much evidence does it have?

The rope analogy seems like it doesn’t offer much.    I don’t see any intuition-pumps that the rope gives you that simply talking about challenges and rewards wouldn’t.  Plus there’s so much in this that isn’t explained by the analogy, for example:

However, by taking on challenging projects that align with the employee's skills and interests and provide valuable outcomes (whether in the form of recognition, promotion, or personal satisfaction), the perceived value of the work can be maintained or enhanced.

What does that have to do with “Rope Management Theory”?  It just sounds like basic management advice.

Also, why is the reward called a breakfast?

The conclusion contains no information, just broad assertions about how important this thing is.  

Most of your arguments hinge on it being difficult to develop superintelligence. But superintelligence is not a prerequisite before AGI could destroy all humanity. This is easily provable by the fact that humans have the capability to destroy all humanity (nukes and bioterrorism are only two ways).

You may argue that if the AGI is only human-level, we can thwart it. But that doesn’t seem obvious to me, primarily because of an AGI’s ease of self-replication. Imagine a billion human-intelligence aliens suddenly pop up on the internet with the intent to destroy humanity. They’re not 100% certain to succeed, but it seems pretty likely to me that they would.

No. With unspecified units, that's saying (temperature - x) of sodium = 8 * (temperature - x) of water. For Celsius, x = 273.15.

I don’t think you understood my point, but I was a little wrong anyway. It turns out Bill Gates was close enough: https://en.wikipedia.org/wiki/TerraPower

Sodium offers a 785-Kelvin temperature range between its solid and gaseous states, nearly 8x that of water's 100-Kelvin range.

There are nuclear plant designs using natural convection with water for emergency cooling.

ok? Was he trying to compare with those designs? Or the ones that caused deaths?

Because when I look up my half-assed ideas they're often close to what people use today or what people on the cutting edge are researching.

This is poor evidence and exactly the same as people who have deja vu saying they can predict the future. You’ve probably been exposed to those ideas and forgot that you were exposed to them.

Because when I get to talk to people involved in things, I can tell how smart they are relative to me.

Also poor evidence. You’re trusting your gut? How do you know it’s right? Most people are biased to believe they’re smarter and you seem to place a lot of value on it, so I imagine it also applies here.

These are not disagreements among serious nuclear engineers. Gates just found a bunch of clowns instead.

Have you heard a respected physicist make this claim? Or is it just a judgment you’ve made? Because it’s sounding like a no-true-Scotsman argument to me.

This is getting a little nitpicky so I’ll back off here. Maybe you are as smart as you claim and Bill Gates is as dumb as you claim. So far none of the evidence you’ve provided moves me at all.

I actually think gates’ article was pretty reasonable and don’t think you should read as much into it as you are. To be fair, I’m not a physicist, and don’t know anything about this tech and very little about nuclear reactors in general, so I might phrase some of my objections as questions back to you.

Part of the reason I think it’s reasonable is that it’s marketing material more than anything, and if you give him the benefit of the doubt on his exact phrasing, or interpret in the context he means, then there’s rational explanations.

Gates is using unspecified temperature units and pressure, presumably Celsius at 1 bar. Divisions of temps in C aren't meaningful - does water have -3x the boiling point of ammonia?

Oh, why is absolute zero the relevant baseline at all? In this case, I think sodium’s range is ~8x higher than water’s when using normal outdoor temps as the baseline, which actually seems like a useful measurement in this application, no?
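To make the baseline argument concrete, here is the arithmetic with the figures quoted upthread (sodium liquid from roughly 98 °C to 883 °C, water from 0 °C to 100 °C at 1 bar). Ratios of temperature *ranges* don't depend on where zero sits, while ratios of individual temperatures do:

```python
NA_MELT, NA_BOIL = 98.0, 883.0    # sodium, degrees Celsius (approximate)
H2O_MELT, H2O_BOIL = 0.0, 100.0   # water at 1 bar, degrees Celsius

# Liquid-range ratio: the differences cancel any zero-point offset,
# so this number is the same whatever baseline you pick.
print((NA_BOIL - NA_MELT) / (H2O_BOIL - H2O_MELT))   # 7.85 -- the ~8x figure

# Ratios of individual boiling points depend entirely on the baseline:
for baseline in (0.0, 20.0, -273.15):   # freezing point, outdoor temp, absolute zero
    print((NA_BOIL - baseline) / (H2O_BOIL - baseline))
```

The last loop gives roughly 8.8x, 10.8x, and 3.1x respectively, which is why the same claim can look reasonable or meaningless depending on which comparison you think Gates intended.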

Unlike water, the sodium doesn’t need to be pumped, because as it gets hot, it rises, and as it rises, it cools off.

Water does that too. It's an almost universal property of liquids. You can do natural convection cooling with water.

I thought he was saying that, in this application, water would boil before it rises away to be cooled. Anyway, do most applications of water-based cooling in nuclear plants use pumps? Does Natrium?

The TerraPower Natrium design is much less safe than current reactors, and using sodium does nothing to improve safety.

I guess I don’t know your background, but why do you believe yourself over them? I’m not saying everyone should always trust the experts, but you should have reason to believe you’re better than them, at least.

If nothing else, a disagreement on what’s safe in an incredibly complicated system with lots of disagreement out there should probably not cause us to update negatively on Gates’ ability.

Is it possible that the disconnect is that you’re valuing technical ability over being good at people+management?  Most high-level executives don’t need to understand these things in detail, because they have other people they trust who do understand it.

PowerPoints need to be 5-word phrases because that’s how you should communicate with crowds.  And it’s not simply about reducing complexity to the lowest common denominator (though that is part of it).  It’s more that getting any team of more than a few people to do anything at all together means getting them all on the same page.  And simplicity increases the likelihood that everyone will have the same takeaways, and so the same goal.

Two motivated, intelligent individuals have some decently high chance of miscommunicating about meaningful topics — increase the number to 20 and now it’s almost guaranteed.  Put it in a presentation where everyone’s only half listening and things are even worse.

Relatedly, Bill Gates’ article wasn’t that bad.  Sure, there’s some inaccuracies if you’re reading it strictly.  But it’s not meant to be read strictly.  It’s basically marketing material aimed at a very large crowd, which, as discussed above, requires using phrasing that gets the point across, not phrasing that is scientifically accurate when dissected.

The whole point of capitalism is that the people who have and direct money are the ones who can make good decisions about how it should be used. When you see firsthand that high-level decision-making is a farce, where does that leave you?

A farce compared to what?  It seems like you’re comparing capitalism to some ideal that has never actually been realized.  And you can’t actually know if your ideal is feasible, or even better in practice, unless it’s been tried.

Oh, I actually think those studies are probably accurate for the thing they’re measuring, which is “short-term individual developer productivity”.  But they don’t really account for “long-term productivity” or “team productivity”, both of which I think benefit a lot from being in the office.   You get an uptick in people’s ability to focus, but a downtick in people’s ability to communicate, and both education and coordination are dependent on the latter.

As a counterpoint, consider that ~every major tech company is constantly pushing for people to be back in the office.  I know the Reddit groupthink about this is that managers are just being dumb, but I think it’s more likely that the individual devs don’t see the impact that working remotely is having on the productivity of the company over time.

Programmers don't become more productive when they move to Silicon Valley or Seattle or NYC.

Why do you believe this?   I definitely became a better developer when I moved to NYC.  How do you know that everyone else didn’t either?

 

correlation of wages with housing prices and with wealth is stronger.

I think this is just recursive?  Of course wages are higher in places with more wealth.  Higher wages cause more wealth.  And housing prices can follow, just because of supply and demand (there’s a higher supply of dollars, so people are willing to spend more of them).

——

Anyway, it’s incredibly effective to take very talented people, and put them together on the same team. They build on each other’s talents, and it becomes multiplicative in a lot of cases. I like the story about the IBM black team as an example (though admittedly it may be apocryphal).  

 

Achieving this requires gathering them, which requires paying them well, which means you need somewhere like SV or NYC where that can happen.  

 

I mostly don’t think remote work is nearly as effective.  It massively reduces the compounding effects of having smart people work closely together.  But this is just based on my personal experience so far.

The legalizer is the only thing here that isn’t inherently evil. The others may not be end-the-world kind of evil, but still evil.

To expound on the first two: they’re morally wrong because they’re lying. Your explanation of why they’re OK seems like some very short-sighted utilitarian thinking.
