jimrandomh

LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur. Currently working on auto-applying tags to LW posts with a language model.

Comments

jimrandomh · 12h · Moderator Comment · 62

LW gives authors the ability to moderate comments on their own posts (particularly non-frontpage posts) when they reach a karma threshold. It doesn't automatically remove that power when they fall back under the threshold, because this doesn't normally come up (the threshold is only 50 karma). In this case, however, I'm taking the can-moderate flag off the account, since they're well below the threshold and, in my opinion, abusing it. (They deleted this comment by me, which I undid, and this comment, which I did not undo.)

We are discussing in moderator-slack and may take other actions.
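For illustration, here's a minimal sketch of the grant-at-threshold behavior described above (hypothetical Python, not the actual LW codebase; the names and the bare-bones logic are mine):

```python
# Hypothetical sketch of a grant-only moderation flag: it gets set when an author
# crosses the karma threshold, and nothing in this path ever clears it automatically.
KARMA_THRESHOLD = 50

class Author:
    def __init__(self, karma: int = 0, can_moderate: bool = False):
        self.karma = karma
        self.can_moderate = can_moderate

def update_moderation_flag(author: Author) -> None:
    """Grant the can-moderate flag at the threshold; never revoke it here."""
    if author.karma >= KARMA_THRESHOLD:
        author.can_moderate = True
    # No else branch: falling back under the threshold leaves the flag set,
    # so removing it (as in this case) has to be a manual moderator action.
```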

Yeah, this is definitely a minimally-obfuscated autobiographical account, not hypothetical. It's also false; there were lots of replies, albeit mostly after Yarrow had already escalated (by posting about it on Dank EA Memes).

I don't think this was about pricing, but about keeping occasional bits of literal spam out of the site search. The fact that we use the same search both for users looking for content and for authors adding posts to Sequences is a historical accident, which makes for a few unfortunate edge cases.

Adam D'Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:

Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.

Was Sam Altman acting consistently with the OpenAI charter prior to the board firing him?

Short answer: No, and trying this does significant damage to people's health.

The prototypical bulimic goes through a cycle where they severely undereat overall, then occasionally experience (what feels from the inside like) a willpower failure which causes them to "binge", eating an enormous amount in a short time. They're then in a state where, if they let digestion run its course, they'd be sick from the excess; so they make themselves vomit, to prevent that.

I believe the "binge" state is actually hypoglycemia (aka low blood sugar), because, as a T1 diabetic, I've experienced it. Most people who talk about blood sugar in relation to appetite have never experienced blood sugar low enough to be actually dangerous; it's very distinctive, and it includes an overpowering compulsion to eat. It also can't be resolved faster than 15 minutes, because eating doesn't raise blood sugar; digestion does. That can lead to consuming thousands of calories of carbs at once (which would be fine if spaced out a little, but is harmful when concentrated into such a narrow time window).

The other important thing about hypoglycemia is that being hypoglycemic is proof that someone's fat cells aren't releasing enough stored energy to survive on. The binge-eating behavior is a biological safeguard that prevents people from starving themselves so much that they literally die.

It's an AWS firewall rule with bad defaults. We'll fix it soon, but in the meantime, you can scrape if you change your user agent to something other than wget/curl/etc. Please use your name/project in the user agent so we can identify you in logs if we need to, and rate-limit yourself conservatively.
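For example, something like this minimal Python sketch (the requests library, the user-agent string, and the five-second delay are illustrative choices on my part, not a required format):

```python
# Polite-scraper sketch: identify yourself in the User-Agent and rate-limit conservatively.
import time
import requests

USER_AGENT = "yourname-yourproject (you@example.com)"  # put your own name/project here
BASE_URL = "https://www.lesswrong.com"

session = requests.Session()
session.headers.update({"User-Agent": USER_AGENT})

def fetch(path: str) -> str:
    """Fetch one page, then sleep so the overall request rate stays low."""
    response = session.get(f"{BASE_URL}{path}", timeout=30)
    response.raise_for_status()
    time.sleep(5)  # conservative self-imposed rate limit
    return response.text

if __name__ == "__main__":
    print(fetch("/")[:200])
```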

I wrote about this previously here. I think you have to break it down by company; the answer for why they're not globally available is different for the different companies.

For Waymo, they have self-driving taxis in SF and Phoenix without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task "driving, but your eyes are laser rangefinders". The reason they haven't scaled to cover every city, or at least more cities, is unclear to me; the obvious possibilities are that the LIDAR sensors and onboard computers are impractically expensive, that they have a surprisingly high manual-override rate and there's a big unscalable call center somewhere, or that they're being cowardly and trying to maintain zero fatalities forever (at scales where a comparable fleet of human-driven taxis would definitely have some fatalities). In any case, I don't think the software/neural nets are likely to be the bottleneck.

For Tesla, until recently, they were using surprisingly-low-resolution cameras. So instead of the cognitive task of driving as a human would solve it, they substituted the harder task "driving with a vision impairment and no glasses". They did upgrade the cameras within the past year, but it's hard to tell how much of the customer feedback represents the current hardware version vs. past versions; sites like FSDBeta Community Tracker don't really distinguish. It also seems likely that their onboard GPUs are underpowered relative to the task.

As for Cruise, Comma.ai, and others--well, distance-to-AGI is measured only from the market leader, and just as GPT-4, Claude and Bard have a long tail of inferior models by other orgs trailing behind them, you also expect a long tail of self-driving systems with worse disengagement rates than the leaders.

It seems likely that all relevant groups are cowards, and none are willing to move forward without a more favorable political context. But there's another possibility not considered here: perhaps someone has already done a gene-drive mosquito release in secret, but we don't know about it because it didn't work. This might happen if local mosquito populations mix too slowly compared to how long it takes a gene-driven population to crash; or if the initial group all died out before they could mate; or if something in the biology of the gene-drive machinery didn't function as expected.

If that were the situation, then the world would have a different problem than the one we think it has: inability to share information about what the obstacle was and debug the solution.
