Independent alignment researcher
I have downvoted my comment here, because I disagree with past me. Complex systems theory seems pretty cool from where I stand now, and I think past me has a few confusions about what complex systems theory even is.
I'm pretty skeptical they can achieve that right now using CoEm, given the limited progress I expect them to have made on it. And in my opinion, more important than being "slightly behind state of the art" is security culture: in the startup world it is commonly found that too-fast scaling degrades the founding culture. So a fear would be that fast scaling leads to worse info-sec.
However, I don't know to what extent this is an issue. I can certainly imagine a world where because of EA and LessWrong, many very mission-aligned hires are lining up in front of their door. I can also imagine a lot of other things, which is why I'm confused.
I agree with Conjecture's reply that this reads more like a hit piece than an even-handed evaluation.
I don't think your recommendations follow from your observations, and such strong claims surely don't follow from the actual evidence you provide. I feel like your criticisms can be summarized as the following:
1. Conjecture was publishing unfinished research directions for a while.
2. Conjecture does not publicly share details of their current CoEm research direction, and that research direction seems hard.
3. Conjecture told the government they were AI safety experts.
4. Some people (who?) say Conjecture's governance outreach may be net-negative and upsetting to politicians.
5. Conjecture's CEO Connor used to work on capabilities.
6. One time during college Connor said that he replicated GPT-2, then found out he had a bug in his code.
7. Connor has at times said that open-source models were good for alignment, then changed his mind.
8. Conjecture's infohazard policy can be overturned by Connor or their owners.
9. They're trying to scale when it is common wisdom for startups to stay small.
10. It is unclear how they will balance profit and altruistic motives.
11. Sometimes you talk with people (who?) who say they've had bad interactions with Conjecture staff or leadership when trying to tell them what they're doing wrong.
12. Conjecture seems like they don't talk with ML people.
I'm actually curious why they're doing 9, and would like further discussion of 10 and 8. But I don't think any of the other points matter, at least to the depth you've covered them here, and I don't know why you're spending so much time on stuff that doesn't matter or that you can't support. This could have been so much better if you had taken the research time spent on everything that wasn't 8, 9, or 10, used it to do analyses of 8, 9, and 10, and then actually had a conversation with Conjecture about your disagreements with them.
I especially don't think your arguments support your suggestions that:
1. People shouldn't work at Conjecture.
2. Conjecture should be more cautious when talking to media, because Connor seems unilateralist.
3. Conjecture should not receive more funding until they reach levels of organizational competence similar to OpenAI or Anthropic.
4. People should rethink whether they want to support Conjecture's work non-monetarily. For example, maybe think about not inviting them to table at EAG career fairs, not inviting Conjecture employees to events or workspaces, and not taking money from them when doing field-building.
(1) seems like a pretty strong claim, which is left unsupported. I know of many people who would be excited to work at Conjecture, and I don't think your points support the claim that they would be doing net-negative research given that they'd be doing alignment work at Conjecture.
For (2), I don't know why you're saying Connor is unilateralist. Are you saying this because he used to work on capabilities?
(3) is just absurd! OpenAI will perhaps be the most destructive organization to date. I do not think your above arguments make the case that Conjecture is less organizationally responsible than OpenAI. Even having an info-hazard document puts them leagues above both OpenAI and Anthropic in my book. And on top of that, their primary way of getting funded isn't building extremely large models... In what way do Anthropic or OpenAI have better corporate governance structures than Conjecture?
(4) is just... what? Ok, I've thought about it, and I've come to the conclusion that this makes no sense given your previous arguments. Maybe there's a case to be made here: if they are less organizationally competent than OpenAI, then yeah, you probably don't want to support their work. That seems pretty unlikely to me, though! And you definitely don't provide anything close to the level of analysis needed to support such a hypothesis.
Edit: I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with reporters about this type of stuff! So mostly I think they should continue doing what they're doing.
I'm not myself an expert on PR (I'm skeptical that anyone is), so maybe my impressions of the articles are naive and backwards in some way. If you think this is important, it would be good to explain somewhere why you think their media outreach is net-negative, ideally pointing to particular things you think they did wrong rather than making vague & menacing criticisms of unilateralism.
Oh yeah, good point about Germany. I'm still pretty skeptical of the claim. Even if the claim ended up being true, I'd be worried it's just because democracy is a pretty new concept, so we don't have as much data as we'd like. But I'd be far less worried about it being non-predictive than I am now.
The particular argument why democracies are so stable does not seem robust to the population wrongly believing a dictatorship would be better in their interests than the current situation. Voters can be arbitrarily wrong when they aren’t able to see the effects of their actions and then re-vote.
Oh also, I'd expect this analysis breaks down once the size of the essentials becomes large enough that people start advocating policies out of fashion rather than because the policy will actually have a positive effect on their lives if implemented. See The Myth of the Rational Voter; I expect that for a bunch of pro-Trump people this is exactly why they are pro-Trump (and similarly for a bunch of the pro-Biden people).
The relevant section of The Dictator's Handbook is the following:
Given the complexity of the trade-off between declining private rewards and increased societal rewards, it is useful to look at a simple graphical illustration, which, although based on specific numbers, reinforces the relationships highlighted throughout this book. Imagine a country of 100 people that initially has a government with two people in the winning coalition. With so few essentials and so many interchangeables, taxes will be high, people won’t work very hard, productivity will be low, and therefore the country’s total income will be small. Let’s suppose the country’s income is $100,000 and that half of it goes to the coalition and the other half is left to the people to feed, clothe, shelter themselves and to pay for everything else they can purchase. Ignoring the leader’s take, we assume the two coalition members get to split the $50,000 of government revenue, earning $25,000 a piece from the government plus their own untaxed income. We’ll assume they earn neither more nor less than anyone else based on whatever work they do outside the coalition.
Now we illustrate the consequences of enlarging the coalition. Figure 10.1 shows how the rewards directed towards those in the coalition (that is, private and public benefits) compare to the public rewards received by everyone as more people enter the coalition. Suppose that for each additional essential member of the winning coalition taxes decrease by half of 1 percent (so with three members the tax rate drops from 50 percent to 49.5 percent), and national income improves by 1 percent for each extra coalition member. Suppose also that spending on public goods increases by 2 percent for each added coalition member. As coalition size grows, tax rates drop, productivity increases, and the proportion of government revenue spent on public goods increases at the expense of private rewards. That is exactly the general pattern of change we explained in the previous chapters.
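The quoted toy model can be sketched in code. The exact functional forms behind Figure 10.1 aren't given in the excerpt, so the compounding-income and linear public-goods-share choices below are my own guesses at what the numbers describe:

```python
def private_reward_per_member(w, base_income=100_000):
    """Toy reading of the Figure 10.1 model from The Dictator's Handbook.

    w: winning-coalition size (the book's example starts at w=2).
    Assumed functional forms (my interpretation, not the book's exact curves):
      - tax rate falls 0.5 percentage points per member beyond 2
      - national income rises 1% (compounding) per member beyond 2
      - share of revenue spent on public goods rises 2 points per member beyond 2
    Returns the private reward each coalition member receives.
    """
    extra = w - 2
    tax_rate = 0.50 - 0.005 * extra
    income = base_income * 1.01 ** extra
    revenue = income * tax_rate
    public_goods_share = min(1.0, 0.02 * extra)
    return revenue * (1 - public_goods_share) / w

# At the book's starting point (w=2), the two essentials split
# the $50,000 of government revenue, $25,000 apiece:
print(private_reward_per_member(2))  # 25000.0
```

Under these assumptions, each member's private reward shrinks as the coalition grows, matching the book's claim that larger coalitions shift spending from private to public rewards.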
I can't find a section of the book saying how often democracies empirically backslide into dictatorships, but "literally zero" seems false. I don't know much about the history of democracy or dictatorships, but Germany was a democracy before it became Nazi Germany, which alone seems to falsify a claim of literally zero backsliding.
I like Voice Dream Reader. I don't know how the voice compares to Natural Reader's, but it does emphasize words and pronounce things differently based on context cues, though those cues are just punctuation like periods and commas.
I find I stay approximately as engaged listening via Voice Dream Reader as when listening to an audiobook or a person reading, but this could be an effect of having listened to several days' worth of content through it.
I'm pretty confident the primary labs keep track of the number of FLOPs used to train their models. I also don't know how such a tool would prevent us all from dying.
I responded to a very similar comment of yours on the EA Forum.
To respond to the new content: I don't know whether changing Conjecture's board once a certain valuation threshold is crossed would make the organization more robust. (Now that I think of it, I don't even know what you mean by "strong" or "robust" here. Depending on what you mean, I can see myself disagreeing about whether that even tracks positive qualities of a corporation.) You should justify claims like these, and at least include them in the original post. Is it sketchy that they don't have this?