I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.
Longer bio: www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team-page-under-construction#Ben_Pace___Benito
Here are some things I like about owning this space:
Perhaps of interest: when we were considering alternative locations, the main other places that had the properties of my first two bullets were educational religious spaces. The School of Religion, the School of Theology, and a strange surprise-Buddhist-temple that Habryka and I unexpectedly found ourselves in one evening (as the woman was showing us around the school-like building, she walked right past the temple doors, until I politely asked to look inside; she unlocked them to reveal a ~7k square foot room with a 40-foot ceiling, filled with golden statues, colorful ribbons hanging from the ceiling, ancient texts inscribed on rotating pillars, 300 folding chairs, and a big stage). These places had a lot of beauty. But one of them basically wasn't for sale, and the other two were only partially for sale (we couldn't have owned the whole property and would have had to share with some religious groups, which is not a total dealbreaker, but I strongly prefer having full ownership).
We also considered simply renting office space, which would have been much faster to get started with, and we were on the verge of going through with a deal last year. But at the last minute they explained that the elevator needed replacing and would be out of use for our first 2 months there (a pretty big obstruction to moving all of our heavy furniture up ~3 floors). They wouldn't negotiate at all on this, so we walked away. I actually heard (epistemic status: I assign 75% to this being true, and I have pinged the person who told me this to double-check) that the elevator only got fixed a month or two ago. Not having to deal with this sort of thing is part of the advantage of full ownership that I describe in the first bullet above.
- He cites ARC’s GPT-4 evaluation and LessWrong in his AI report, which has a large section on safety.
I wanted to double-check this.
The relevant section starts on page 94 ("Section 4: Safety"), and its sources cite around 10-15 LW posts, either for their technical research or for overviews of the field and the funding in it. (Make sure to drag up the sources section to view all the links.)
Throughout the presentation and news articles he also has a few other links to interviews with people on LW (Shane Legg, Sam Altman, Katja Grace).
Thanks! This is fairly tempting. I'm a bit concerned by:

> Some other explanation that's of this level of "very weird"
To be clear, if it were just the 4 hypotheses you mention, then I feel pretty good about this, and I'd just want to reflect over 200:1 versus 100:1.
Regarding the hypotheses, I'd probably want to determine now some set of LW posters to resolve it if we disagree. My first guess is that Oliver Habryka, Alyssa Vance, and Vaniver could be good, where if any of them think the bet resolves in your favor then it does.
I have not read this post, and I have not looked into whatever the report is, but I'm willing to take a 100:1 bet that there is no such non-human-originating craft. (By "craft" I mean anything actively designed by a technological species; I do not mean that simple biological matter could never have arrived on this planet via some natural process like an asteroid.) Operationalized: within the next 2 years there is no Metaculus community forecast (or Manifold market with a sensible operationalization and a reasonable number of players) that assigns over 50% probability to the craft being of non-human design.
(I am actually going to check that this post makes a claim like this now, before posting, in case I am off-topic. K, looks like I am broadly on-topic.)
"AI maniacs" is maybe a term that meets this goal? Mania is the opposite pole of depression; both involve false beliefs, just in opposite emotionally valenced directions, and I do think that just letting AI systems loose in the economy is the sort of thing a maniac in charge of a civilization would do.
The rest of my quick babble: "AI believers" "AI devotee" "AI fanatic" "AI true believer" "AI prophets" "AI ideologue" "AI apologist" "AI dogmatist" "AI propagandists" "AI priests".
I think I tend to base my level of alarm on the log of severity*probability, not on the raw value. Most of the work is in getting enough info to raise a problem to my attention as worth solving. "Oh no, my house has a decent >30% chance of flooding this week; better do something about it, and I'll likely enact some preventative measures whether it's 30% or 80%." The amount of work I put into solving it is not twice as much if my odds double; mostly there's a threshold around whether it's worth dealing with at all.
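To make the point above concrete, here's a minimal sketch (the function names, the threshold of 0, and the units are all my illustrative assumptions, not anything from the comment): on a log scale, doubling the probability adds a constant (log 2) to the alarm level rather than doubling it, which is why the main effect is crossing a "worth dealing with" threshold.

```python
import math

def alarm(severity: float, probability: float) -> float:
    """Alarm level as the log of expected loss (illustrative units)."""
    return math.log(severity * probability)

def worth_acting(severity: float, probability: float, threshold: float = 0.0) -> bool:
    """Act iff the log-scale alarm crosses a fixed threshold."""
    return alarm(severity, probability) > threshold

# Doubling the probability (30% -> 60%) raises alarm by log(2), not 2x:
bump = alarm(100.0, 0.60) - alarm(100.0, 0.30)
assert abs(bump - math.log(2)) < 1e-9

# Both probabilities clear the threshold, so the response is the same either way.
assert worth_acting(100.0, 0.30) and worth_acting(100.0, 0.60)
```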
Setting that aside, it reads to me like the frame-clash happening here is (loosely) between "50% extinction, 50% not-extinction" and "50% extinction, 50% utopia", where for the first gamble of course 1:1 odds on extinction is enough to raise it to "we need to solve this damn problem", but for the second gamble it's actually much more relevant whether it's a 1:1 or a 20:1 bet. I'm not sure which one is the relevant one for you two to consider.
(Strong-upvote, weak-disagree. I sadly don't have time right now to reflect and write why I disagree with this position but I hope someone else who disagrees does.)
For the record, our relationship to supporting events for this ecosystem is changing from something like "all of our resources are the same, here have my venue for free if you need it" to "markets and pricing are a great way for large masses of people to coordinate on the value of a good or service, let's coordinate substantially via trade".
For instance, during a previous cohort of SERI MATS scholars at the Lightcone Offices, I spent a couple of weeks of work adding a second floor, furnishing it and doing the interior design, and hiring another support person to the office team, and then later dealt with closing it down and downsizing when the demand went away. I did all of that for free; I was not paid a salary or anything by MATS. It was part of my Lightcone work, because we wanted to support mentorship happening in the AI alignment ecosystem. It's different this time around: they're paying us a substantial amount of money (well over $100k) for the use of 2.5 of our nicely furnished and designed buildings for 2 months, an amount that makes the trade pretty good for Lightcone (and I hope and expect to work hard and make it worthwhile for SERI MATS too!). The other workshops Habryka has mentioned elsethread will also mostly be paying trade partners (general pricing TBD as we get a better sense of the demand).
I bring this up because the extent to which funds for Lightcone are spent supporting SERI MATS in particular (and other teams/orgs/events) is (I suspect) much less than you are thinking.