DPiepgrass

Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I'm wrong.

Software developer and EA with interests including programming language design, international auxiliary languages, rationalism, climate science and the psychology of its denial.

Looking for someone similar to myself to be my new best friend:

❖ Close friendship, preferably sharing a house
❖ Rationalist-appreciating epistemology; a love of accuracy and precision to the extent it is useful or important (but not excessively pedantic)
❖ Geeky, curious, and interested in improving the world
❖ Liberal/humanist values, such as a dislike of extreme inequality based on minor or irrelevant differences in starting points, and a like for ideas that may lead to solving such inequality. (OTOH, minor inequalities are certainly necessary and acceptable, and a high floor is clearly better than a low ceiling: an "equality" in which all are impoverished would be very bad)
❖ A love of freedom
❖ Utilitarian/consequentialist-leaning; preferably negative utilitarian
❖ High openness to experience: tolerance of ambiguity, low dogmatism, unconventionality, and again, intellectual curiosity
❖ I'm a nudist and would like someone who can participate at least sometimes
❖ Agnostic, atheist, or at least feeling doubts

Comments

I'm confused why this is so popular.

Sure, it appears to be espousing something important to me ("actually caring about stuff, and for the right reasons"). But specifically it appears to be about how non-serious people can become serious. Have you met non-serious people who long to be serious? People like that seem very rare to me. I've spent my life surrounded by people who work 9 to 5 and will not talk shop at 6; you do some work, then you stop working and enjoy life. 

some of the most serious people I know do their serious thing gratis and make their loot somewhere else.

Yeah, that's me. So it seems like the audience for the piece can't be the audience that the piece appears to be for. Unserious people are unserious because they don't care to be. Also:

For me, I felt like publishing in scientific journals required me to be dishonest.

...what?

it's wrong to assume that because a bunch of Nazis appeared, they were mostly there all along but hidden

I'd say it's wrong as an "assumption" but very good as a prior. (The prior also suggests new white supremacists were generated, as Duncan noted.) Unfortunately, good priors (as with bad priors) often don't have ready-made scientific studies to justify them, but like, it's pretty clear that gay and mildly autistic people were there all along, and I have no reason to think the same is not true of white supremacists, so the prior holds. I also agree that it has proven easy for some people to "take existing sentiments among millions of people and hone them", but you call them "elites", so I'd point out that some of those people spend much of their time hating on "elites" and "elitism"...

I think this post potentially does a very good job getting at the core of why things like Cryonics and Longtermism did not rapidly become mainstream

Could you elaborate?

I'd say (1) living in such a place just makes you much less likely to come out, even if you never move, (2) suspecting you can trust someone with a secret is not a good enough reason to tell the secret, and (3) even if you totally trust someone with your secret, you might not trust that he/she will keep the secret.

And I'd say Scott Alexander meets conservatives regularly―but they're so highbrow that he wasn't thinking of them as "conservatives" when he wrote that. He's an extra step removed from the typical Bush or MAGA supporter, so doesn't meet those. Or does he? Social Dark Matter theory suggests that he totally does.

that the person had behaved in actually bad and devious ways

"Devious" I get, but where did the word "bad" come from? (Do you appreciate the special power of the sex drive? I don't think it generalizes to other areas of life.)

Your general point is true, but it's not necessarily true (1) that a correct model can predict the timing of AGI, or (2) that the predictable precursors to disaster will occur before the practical c-risk (catastrophic-risk) point of no return. While I'm not as pessimistic as Eliezer, my mental model has these two limitations. My model does predict that, prior to disaster, a fairly safe, non-ASI AGI or pseudo-AGI (e.g. GPT6, a chatbot that can do a lot of office jobs and menial jobs pretty well) is likely to be invented before the really deadly one (if any[1]). But even if I predicted right, it probably won't make people take my c-risk concerns more seriously.

  1. ^

    technically I think AGI inevitably ends up deadly, but it could be deadly "in a good way"

Evolutionary Dynamics

The pressure to replace humans with AIs can be framed as a general trend from evolutionary dynamics. Selection pressures incentivize AIs to act selfishly and evade safety measures.

Seems like the wrong frame? Evolution is based on mutation, which AIs won't have. However, in the human world there is a similar and much faster dynamic based on the large natural variability between human agents (due to both genetic and environmental factors) which tends to cause people with certain characteristics to rise to the top (e.g. high intelligence, grit, power-seeking tendencies, narcissism, sociopathy). AIs and AGIs will have the additional characteristics of rapid training and being easy to copy, though I expect they'll have less variety than humans.

Given the exponential increase in microprocessor speeds, AIs could process information at a pace that far exceeds human neurons. Due to the scalability of computational resources, AI could collaborate with an unlimited number of other AIs and form an unprecedented collective intelligence.

Worth noting that this has been the case for a long time. Old AIs haven't so much been slow as undersized.

Not sure what I would add to "Suggestions" other than "don't build AGI" :D

Speaking of AGI, I'm often puzzled about the extent to which authors here say "AI" when they are talking about AGI. It seems to me that some observers think we're crazy for worrying that GPT5 is going to kill all humans, when in fact our main concern is not AI but AGI.

given intense economic pressure for better capabilities, we shall see a steady and continuous improvement, so the danger actually is in discontinuities that make it harder for humanity to react to changes, and therefore we should accelerate to reduce compute overhang

I don't feel like this is actually a counterargument? You could agree with both arguments, concluding that you shouldn't work for OpenAI but that an outfit better aligned with your values is okay.

I expect there are people who are aware that there was drama but don't know much about it and should be presented with details from safety-conscious people who closely examined what happened.

I think there may be merit in pointing EAs toward OpenAI safety-related work, because those positions will presumably be filled by someone, and I would prefer they be filled by someone (i) very competent and (ii) familiar with (and caring about) a wide range of AGI risks, which EA groups often discuss. However, anyone applying at OpenAI should be aware of the previous drama before applying. The current job listings don't communicate the gravity or nuance of the issue before job-seekers push the blue button leading to OpenAI's job listing.

I guess the card should be guarded, so that instead of just having a normal blue button, the user should expand some sort of 'additional details' subcard first. The user then sees some bullet points about the OpenAI drama and (preferably) expert concerns about working for OpenAI, each bullet point including a link to more details, followed by a secondary-styled button for the job application (typically, that would be a button with a white background and blue border). And of course you can do the same for any other job where the employer's interests don't seem well-aligned with humanity or otherwise don't have a good reputation.

Edit: actually, for cases this important, I'd like to replace 'View Job Details' with a "View Details" button that goes to a full page on 80000 Hours in order to highlight the relevant details more strongly, again with the real job link at the bottom.
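To make the idea concrete, here's a rough sketch of the guarded card in React/TypeScript. This is purely illustrative of what I mean; the component, props, and CSS class names are all made up and have nothing to do with 80000 Hours' actual code:

```tsx
// Illustrative sketch: a job card that hides the external application link
// behind an "Additional details" expander listing concerns about the employer.
import { useState } from "react";

type Concern = { summary: string; detailsUrl: string };

type GuardedJobCardProps = {
  title: string;
  employer: string;
  concerns: Concern[]; // e.g. bullet points about the OpenAI drama, with links
  applyUrl: string;    // the real external job listing
};

export function GuardedJobCard({ title, employer, concerns, applyUrl }: GuardedJobCardProps) {
  const [expanded, setExpanded] = useState(false);

  return (
    <div className="job-card">
      <h3>{title} at {employer}</h3>
      {!expanded ? (
        // No normal blue apply button; the user must open the details subcard first.
        <button onClick={() => setExpanded(true)}>Additional details</button>
      ) : (
        <div className="details-subcard">
          <ul>
            {concerns.map((c) => (
              <li key={c.detailsUrl}>
                {c.summary} <a href={c.detailsUrl}>more</a>
              </li>
            ))}
          </ul>
          {/* Secondary-styled button: white background, blue border */}
          <a className="button-secondary" href={applyUrl}>View job listing</a>
        </div>
      )}
    </div>
  );
}
```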

Hi Jasper! Don't worry, I definitely am not looking for rapid responses. I'm always busy anyway. And I wouldn't say there are in general 'easy or strong answers in how to persuade others'. I expect not to be able to persuade the majority of people on any given subject. But I always hope (and to some extent, expect) people in the ACX/LW/EA cluster to be persuadable based on evidence (more persuadable than my best friend whom I brought to the last meetup, who is more of an average person).

By the way, after writing my message I found out that I had a limited picture of Putin's demands for a peace deal―because I got that info from two mainstream media articles. When the excellent Anders Puck Nielsen got around to talking about Putin's speech, he noticed other very large demands such as "denazification" (apparently meaning the Zelensky administration must be replaced with a more Kremlin-friendly government) and demilitarization (🤦‍♂️).

Yeah, Oliver Stone and Steven Seagal somehow went pro-Putin, just as Dennis Rodman befriended Kim Jong Un. There are always some people who love authoritarians, totalitarians, "strong leaders" or whatever. I don't understand it, but at least the Kremlin's allies are few enough that they decided to be friends with North Korea and Iran, and to rely on a western spokesman who was convicted of a sex offense involving a minor. So they seem a bit desperate―on the other hand, China has acted like a friend to North Korea since forever.

I'm not suggesting all the mainstream media got it wrong―only that enough sources repeated the Kremlin's story enough times to leave me, as someone who wasn't following the first war closely, the impression that the fight was mainly a civil war involving Kremlin-supplied weapons. (In hindsight, I'm like "wait a minute, how could a ragtag group of rebels already know how to use tanks, heavy artillery systems and Buk missile launchers in the same year the war started?") So my complaint is about what seems like the most typical way that the war was described.

A video reminded me tonight that after the war in Ukraine started, Russia pulled lots of its troops from all other borders, including the borders with NATO, in order to send them to Ukraine, and after Finland joined NATO, Russia pulled troops (again?) from its border with Finland―indicating that Putin has no actual fear of a NATO invasion.
