All of aphyer's Comments + Replies

Answer by aphyer, Nov 30, 2023

I don't think you're being consistently downvoted: most of your comments are neutral-to-slightly positive?

I do see one recent post of yours that was downvoted noticeably,

I downvoted that post myself.  (Uh....sorry?) My engagement with it was as follows:

  1. I opened the post.  It consisted of a few paragraphs that did not sound particularly encouraging ('ethicophysical treatment...modeled on the work of Hegel and Marx' does not fill me with joy to read), plu
... (read more)

Chess is a game where, in every board state, almost all legal moves are terrible and you have to pick one of the few that aren't.


So is reality.

Another thing to keep in mind is that a full set of honest advisors can (and I think would) ask the human to take a few minutes to go over chess notation with them after the first confusion.  If the fear of dishonest advisors means that the human doesn't do that, or the honest advisor feels that they won't be trusted in saying 'let's take a pause to discuss notation', that's also good to know.

Question for the advisor players: did any of you try to take some time to explain notation to the human player?

Conor explained some details about notation during the opening, and I explained a bit as well. (I wasn't taking part in the discussion about the actual game, of course, just there to clarify the rules.)

This is true, but in general the differences between an ordinary employee and a CEO go in the CEO's favor.  I believe this does also extend to 'how are they fired': on my understanding the modal way a CEO is 'fired' is by announcing that they have chosen to retire to pursue other opportunities/spend more time with their family, and receiving a gigantic severance package.

Answer by aphyer, Nov 27, 2023

Disclaimer: I do not work at OpenAI and have no inside knowledge of the situation.

I work in the finance industry.  (Personal views are not those of my employer, etc, etc).

Some years ago, a few people from my team (2 on a team of ~7) were laid off as part of firm staff reductions.

My boss and my boss's boss held a meeting with the rest of the team on the day those people left, explaining what had happened, reassuring us that no further layoffs were planned, describing who would be taking over what parts of the responsibilities of the laid-off people, et... (read more)

You just explained why it's totally disanalogous. An ordinary employee is not a CEO {{citation needed}}.
I laughed out loud on this line... ...and then I wondered if you've seen Margin Call? It is truly a work of art.

My experiences are mostly in startups, but rarely on the actual founding team, so I have seen more stuff that was unbuffered by kind, diligent, "clueless" bosses. My general impression is that "systems and processes" go a long way toward creating smooth rides for the people at the bottom, but those things are not effectively in place (1) at the very beginning and (2) at the top when exceptional situations arise. Credentialed labor is generally better compensated in big organizations precisely because they have "systems" where people turn cranks reliably that reliably Make Number Go Up, and then share out fractional amounts of "the number".

Did you ever see or talk with them again? Did they get nice severance packages? Severance packages are the normal way for oligarchs to minimize expensive conflict, I think.

Visits to emergency rooms might not be down if parents are e.g. panicking and bringing a child to the ER with a bruise.

True. The "arm fracture" one on the Victoria chart seems pretty concrete, though.
I know we took our kid to the emergency room at around four months because we couldn't find a button that had come off his shirt, we assumed he ate it, and the poison control hotline misheard "button" as "button battery". That sequence probably wouldn't have been in the statistics in the 80s!

The board had a choice.

If Ilya was willing to cooperate, the board could fire Altman, with the Thanksgiving break available to aid the transition, and hope for the best.

Alternatively, the board could choose once again not to fire Altman, watch as Altman finished taking control of OpenAI and turned it into a personal empire, and hope this turns out well for the world.

They chose to pull the trigger.


I...really do not see how these were the only choices?  Like, yes, ultimately my boss's power over me stems from his ability to fire me.  But it w... (read more)

Roland Pihlakas, 9d
The following is meant as a question to find out, not a statement of belief. Nobody seems to have mentioned the possibility that initially they did not intend to fire Sam, but just to warn him or to give him a choice to restrain himself. Yet possibly he himself escalated it to firing or chose firing instead of complying with the restraint. He might have done that just in order to have all the consequences that have now taken place, giving him more power. For example, people in power positions may escalate disagreements, because that is a territory they are more experienced with as compared to their opponents.
This comes from a fundamental misunderstanding of how OpenAI and most companies operate. The board is a check on power. In most companies they will have to approve high-level decisions: moving to a new office space or closing a new acquisition. But they have zero day-to-day control. If they tell the CEO to fire these 10 people and he doesn't do it, that's it. They can't do it themselves, and they can't tell the CEO's underlings to do it. They have zero options besides getting a new CEO. OpenAI's board had even less control than this.

Tweeting "Altman is not following our directions and we don't want to fire him, but we really want him to start doing what we ask" is a surefire way to collapse your company and make you sound like a bunch of incompetent buffoons. It's admitting that you won't use the one tool that you actually do have.

I'm certain the board threatened to fire Sam before this unless he made X changes. I'm certain Sam never made all of those X changes. Therefore they could either follow through on their threat or lose. Turns out following through on their threat was meaningless, because Sam owns OpenAI both through tacit power and the corporate structure.

Speculating of course, but it reads to me like the four directors knew Altman was much better at politics and persuasion than they were. They briefly had a majority willing to kick him off, and while "Sam would have found a basilisk hack to mind-control the rest of the board" is phrased too magically for me, I don't think it's that far off? This sort of dynamic feels familiar to me from playing games where one player is far better than the others at convincing people.

(And then because they were way outclassed in politics and persuasion they handled the aftermath of their decision poorly and Altman did an incredible job.)

I think the central confusion here is: why, in the face of someone explicitly trying to take over the board, would the rest of the board just keep that person around?

None of the things you suggested have any bearing whatsoever on whether Sam Altman would continue to try and take over the board. If he has no board position but is still the CEO, he can still do whatever he wants with the company, and also try to take over the board. If he is removed as CEO but remains on the board, he will still try to take over the board. Packing the board has no bearing on... (read more)

I feel like this is a good observation. I notice I am confused at their choices given the information provided... so there is probably more information? Yes, it is possible that Toner and the former board just made a mistake, and thought they had more control over the situation than they really did? Or underestimated Altman's sway over the employees of the company? The former board does not strike me as incompetent, though. I don't think it was sheer folly that led them to pick this debacle as their best option. Alternatively, they may have had information we don't that led them to believe that this was the least bad course of action.

If you pushed for fire sprinklers to be installed, then yell "FIRE", and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.


The situation is actually even less surprising than this, because the thing people actually initially contemplated doing in response to the board's actions was not even 'taking away your ability to trigger the f... (read more)

I think this is confusing 'be nice' with 'agree'.

Alice suggests that civilization should spend less on [rolls dice] animal shelters and more on [rolls dice] soup kitchens.

Bob: Yes, I absolutely agree.

Charlie: I don't think I agree with that policy.  I think animal shelters are doing good work, and I wouldn't want to see them defunded to pursue some other goal.


If your interpretation of 'nice' is such that Bob is 'nice' and ... (read more)

Some of the feedback was disappointing– people said things along the lines of: “Lies of omission are not lies! The fact that you have to add ‘of omission’ shows this!”.

The worst that I have gotten in that vein was: “When you are on stand, you take an oath that says that you will convey the truth, the whole truth, and nothing but the truth. The fact that you have to add ‘the whole truth’ shows that lies of omission are not lies.”

This was from a person that also straightforwardly stated “I plan to keep continuing to not state my most relevant opinions in pub

... (read more)
Thanks, this nicely encapsulated what I was circling around as I read it. I kept reaching for much more absurd cases, like "Mr. President, you're a liar for not disclosing all the details of the US's latest military hardware when asked." Even aside from that... I'm a human with finite time and intelligence; I don't actually have the ability to consistently avoid lies of omission even if that were my goal.

Plus, I do think it's relevant that many of our most important social institutions are adversarial. Trials, business decisions, things like that. I expect that there are non-adversarial systems that can outperform these, but today we don't have them, and you need more than a unilateral decision to change such systems.

Professionally, I know a significant amount of information that doesn't belong to me, that I am not at liberty to disclose due to contracts I'm covered under, or that was otherwise told to me in confidence (or found out by me inadvertently when it's someone else's private information). This information colors many other beliefs and expectations. If you ask me a question where the answer depends in part on this non-disclosable information, do I tell you my true belief, knowing you might be able to then deduce the info I wasn't supposed to disclose? Or do I tell you what I would have believed, had I not known that info? Or some third thing? Are any of the available options honest?

If you died and went to a Heaven run by a genuinely benevolent and omnipotent god, would it be impossible for you to enjoy yourself in it?

It would be possible. "Fun Theory" describes one such environment the benevolent god could create.

For an entertainingly thematic choice, I'd recommend Twilight Struggle.

I think it is very hard to artificially forbid that: there isn't a well-defined boundary between playing out a full game and a conversation like:

"that other advisor says playing Rd4 is bad because of Nxd4, but after Nxd4 you can play Qd6 and win"

"No, Qd6 doesn't win, playing Bf7 breaks up the attack."

One thing that might work, though, is to deny back-and-forth between advisors. If each advisor can send one recommendation, and maybe one further response to a question from A, but not have a free-form conversation, that would deny the ability to play out a game.

Yeah, that's a bit of an issue. I think in real life you would have some back-and-forth ability between advisors, but the complexity and unknowns of the real world would create a qualitative difference between the conversation and an actual game - which chess doesn't have. Maybe we can either limit back-and-forth like you suggested, or just have short enough time controls that there isn't enough time for that to get too far.
Answer by aphyer, Oct 25, 2023

I'm rated ~1700 on, though I suspect their ratings may be inflated relative to e.g. FIDE ones. Happy to play whatever role that rating fits best with. I work around NYC at a full-time job: I'm generally free in the evenings (perhaps 7pm-11pm NY time) and on weekends.

Two questions:

  1. Do you anticipate using a time control for this? I suspect B will be heavily advantaged by short time controls that don't give A much time, while A will be heavily favored by having enough time to e.g. tell two advisors who disagree 'okay, C1 thinks that move is a

... (read more)
I am also in NYC and happy to participate. My lichess rating is around 2200 rapid and 2300 blitz.
I think a time control of some sort would be helpful just so that it doesn't take a whole week, but I would prefer it to be a fairly long time control. Not long enough to play a whole new game, though, because that's not an option when it comes to alignment - in the analogy, that would be like actually letting loose the advisors' plans in another galaxy and seeing if the world gets destroyed.

I'm not sure exactly what the time control would be - maybe something like 4 hours on each side, if we're using standard chess time controls. I'm also thinking about using a less traditional method of time control - for example, on each move, the advisors have 4 minutes to compose their answers, and A has another 4 minutes to look them over and make a decision. But then it's hard to decide how much time it's fair to give B for each move - 4 minutes, 8 minutes, somewhere in between?

I don't think chess engines would be allowed; the goal is for the advisors to be able to explain their own reasoning (or a lie about their reasoning), and they can't do that if Stockfish reasons for them.

Fooling people into thinking they're talking to a human when they're actually talking to an AI should be banned for its own sake, independent of X-risk concerns.

I feel like your argument here is a little bit disingenuous about what is actually being proposed.

Consider the differences between the following positions:

1A: If you advertise food as GMO-free, it must contain no GMOs.

1B: If your food contains GMOs, you must actively mark it as 'Contains GMOs'.

2A: If you advertise your product as being 'Made in America', it must be made in America.

2B: If your produ... (read more)

I think the synthesis here is that most people don't know that much about AI capabilities, and so if you are interacting with an AI in a situation that might lead you to reasonably believe you were interacting with a human, then that counts. For example, many live chat functions on company websites open with "You are now connected with Alice" or some such. On phone calls, hearing a voice that doesn't clearly sound like a bot voice also counts. It wouldn't have to be elaborate - they could just change "You are now connected with Alice" to "You are now connected with HelpBot." It's a closer question if they just take away "You are now connected with Alice", but there exist at least some situations where the overall experience would lead a reasonable consumer to assume they were interacting with a human.

You have as broad categories:

  1. Fizzlers saying capable AGI is far or won’t happen
  2. How-Skeptics saying AGI won’t be able to effectively take over or kill us.
  3. Why-Skeptics saying AGI won’t want to.
  4. Solvabilists saying we can and definitely will solve alignment in time.
  5. Anthropociders who say ‘but that’s good, actually.’

Pithy response: You don't need to not believe in global warming to think that 'use executive order to retroactively revoke permits for a pipeline that has already begun construction' is poor climate policy!

Detailed response: One that I think is nota... (read more)

Weird confluence here? I don't know what the categories listed have to do with the question of whether a particular intervention makes sense. And we agree of course that any given intervention might not be a good intervention. For this particular intervention, in addition to slowing development, it allows us to potentially avoid AI being relied upon or trusted in places it shouldn't be, to allow people to push back and protect themselves. And it helps build a foundation of getting such things done to build upon. Also I would say it is good in a mundane utility sense.  Agreed that it is no defense against an actual ASI, and not a permanent solution. But no one (and I do mean no one, that I can recall anyway) is presenting this as a full or permanent solution, only as an incremental thing one can do. 
Odd anon, 1mo
(Author of the taxonomy here.) So, in an earlier draft I actually had a broader "Doom is likely, but we shouldn't fight it because..." as category 5, with subcategories including "Doom would be good" (the current category 5), "Other priorities are more important anyway; costs of intervention outweigh benefits", and "We have no workable plan. Trying to stop it would either be completely futile, or would make it even more likely" (overhang, alignment, attention, etc.), but I removed it because the whole thing was getting very unfocused. The questions of "Do we need to do something about this?" and "Which things would actually help?" are distinguishable questions, and both important.

My own opinion on the proposals mentioned: Fooling people into thinking they're talking to a human when they're actually talking to an AI should be banned for its own sake, independent of X-risk concerns. The other proposals would still have small (but not negligible) impact on profits and therefore progress, and providing a little bit more time isn't nothing. However, it cannot be a replacement for a real intervention like a treaty globally enforcing compute caps on large training runs (and maybe somehow slowing hardware progress).
Of course the costs have to be balanced, but well, I wouldn't mind living ten more years. I think that is a perfectly valid thing to want for any non-negligible P(doom).
And moving doom back by a few years is entirely valid as a strategy; I think it should be realized, and is even pivotal. If someone is trying to punch you and you can delay it by a few seconds, that can determine the winner of the fight. In this case, we also have other technologies which are concurrently advancing, such as genetic therapy or brain-computer interfaces. Having them advance ahead of AI may very well change the trajectory of human survival.

Is the fruit you're thinking of typically red in color when ripe?


Is it a strawberry?


Is it a cherry?


Is it a raspberry?


Is it a pomegranate?


Is it a blueberry?

No. Remember, it’s often red when ripe.

My apologies for the oversight. Is it a cranberry?


I apologize for the previous guesses. Is it a currant?


I'm sorry for the incorrect guesses. Is it a red currant?



This part is actually fascinating.  If this were a human I would imagine that they were trolling you.  Here I'm wondering whether it gets through strawberry-cherry and then d... (read more)

Bill Benzon, 1mo
Yes, I was scratching my head when that happened. And when it went to pomegranate without even mentioning apple I was really puzzled. And then it guesses blueberry. I did apple again earlier today and it got right to it in 11 questions.


(Good to ask, though. I think the unfinished-story percentage on Glowfic is like 98%)

The big problem faced by cooperative games without hidden agendas is that they are fundamentally solitaire games.

Imagine four people playing chess 'as a team' with the following ruleset:

  • Player 1 controls the King and Queen
  • Player 2 controls the Bishops and Knights
  • Player 3 controls the Rooks.
  • Player 4 controls the Pawns.
  • Each turn, choose one player.  That player may move one of their pieces once.  Then, your opponent moves.

It's easy to see that the way to play this if you want to do well is simply to have whichever player is best at chess take over,... (read more)

mako yass, 2mo
You just made me intensely curious as to what happens to chess if you let people move more than one piece per turn. In this case you're allowing one move per each of four categories. What if it was just the pawns and the bishops? (What if it was every single piece at once?)
mako yass, 2mo
Huh, I'm surprised to find that I didn't explain this in the post. Yeah this is the reason I don't think cooperative games are going to be as fun as cohabitive games, although it has a pretty simple patch: Stop playing to win. (I talk about how to enjoy cooperative games in this)

The "every coal burned contributes to global warming, past a certain cap everybody loses" approach is not going to work in an otherwise competitive game. Instead, it will create a race to use up the coal budget as fast as possible so that your opponents can't benefit from it.

Citation: Twilight Struggle

mako yass, 2mo
This raises an interesting view. There's no reason it should do that, but if you give it to an unprepared group of competitive boardgame players, that is how they would behave. It won't occur to them that they should start the game by creating a coal rationing tribunal with material enforcement mechanisms. Bad norms would breed bad norms. Probably, the player who broke the norms most often would tend to get ganged up on and lose, but it is hard, even for the experienced, to overcome a bad social norm.
I meant "get players to cooperate within a cooperative-game-with-prisoners-dilemmas", yes.

Maybe I'm unusual in this regard, but peering at my mirrors closely enough to make out a car in one of these seems harder than just head-checking my blind spot?

Assuming that by head-checking you mean turning back: I don't like it because I'd lose eye contact with the cars in front of me. I typically move my head to the right until I see the blind spot in the left mirror.

Does the lie detection logic work on humans?

Like, my guess would be no, but stranger things have happened.

Also asked (with some responses from the authors of the paper) here:
We don't have the human model weights, so we can't use it.  My guess is that if we had sufficiently precise and powerful brain scans, and used a version of it tuned to humans, it would work, but that humans who cared enough would in time figure out how to defeat it at least somewhat.
Stephen Bennett, 2mo

If the information environment prevents people from figuring out the true cause of the obesity epidemic, or making better engineered foods, this affects you no matter what place and what social circles you run in. And if epistemic norms are damaged in ways that lead to misaligned AGI instead of aligned AGI, that could literally kill you.

The stakes here are much larger than the individual meat consumption of people within EA and rationality circles. I think this framing (moralistic vegans vs selfish meat eaters with no externalities) causes people to misunderstand the world in ways that are predictably very harmful.

'If done intelligently' is really one hell of an 'if'.

Yes, intelligent climate change mitigation strategies would not cost very much. (On some assumptions about nuclear power, intelligent climate change mitigation strategies might have negative cost).

But the more relevant question is the cost of the climate change mitigation strategies we actually get.

I don't think nuclear power is currently a cost-effective approach to mitigating global warming. It only really makes sense when geopolitical concerns are a factor, e.g. threats to tankers transporting LNG to Europe or Japan.

How well do you think current governments do at abiding by their past commitments that large numbers of their constituents disagree with?

Probably not well, which is why I've invented these "enforcement perpetuities". As far as I know, nobody has used these financial instruments before, so we have no data on them and must use logic instead. Your question is like asking the inventor of the car whether anybody has travelled 100 miles an hour before. It's not only a totally irrelevant question, it's exactly the problem that I'm solving. So, do you have any logical explanation of why enforcement perpetuities wouldn't sufficiently incentivize future governments? Either the future government can enact the policy and get their debt wiped without a default, or they can continue to pay the debt. Do you think the future politicians, or even the future public, would prefer the latter?

Probably generally not true, you're right.  Even if they are prosecuted, 'fined' or 'barred from the securities industry' are more likely.  I do think legal trouble would be at least very plausible.  I'll edit the parent comment.

(Obligatory disclosure I guess: I work in the financial industry, though not in a way related to mortgages or housing.  Anything I write here is my opinion and not that of my employer.)



Imagine a world where, as well as their regular business selling groceries, grocery stores sell tokens that entitle you to a perpetual stream of groceries.  Rather than spending $200/week on groceries, you spend...let's say (200*52/0.05)=$208,000 to buy a Grocery Token. 
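The "Grocery Token" price above is just the standard perpetuity formula (present value = periodic cash flow divided by the discount rate); a minimal sketch, with the $200/week spend and 5% rate being the figures assumed in the analogy:

```python
def perpetuity_price(annual_cash_flow: float, discount_rate: float) -> float:
    """Present value of a perpetual stream of payments."""
    return annual_cash_flow / discount_rate

weekly_groceries = 200
annual_cost = weekly_groceries * 52        # $10,400 per year
token_price = perpetuity_price(annual_cost, 0.05)
print(f"${token_price:,.0f}")              # $208,000
```

Note how sensitive the price is to the discount rate: at 2.5% the same grocery stream would cost $416,000, which is part of why perpetuity-like assets swing so hard with interest rates.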

Of course many people cannot afford $20... (read more)

Really? We jail financial advisors for giving unreasonably aggressive financial advice?

I had a similar gut reaction.  When I tried to run down my brain's root causes of the view, this is what it came out as:

There are two kinds of problem you can encounter in politics.  One type is where many people disagree with you on an issue.  The other type is where almost everyone agrees with you on the issue, but most people are not paying attention to it.

Protests as a strategy are valuable in the second case, but worthless or counterproductive in the first case.

If you are being knifed in an alleyway, your best strategy is to make as muc

... (read more)

I think that, in particular, protesting Meta releasing their models to the public is a lot less likely to go well than protesting, say, OpenAI developing their models. Releasing models to the public seems virtuous on its face both to the general public and to many technologists. Protesting that is going to draw attention to that specifically, and so tend to paint the developers of more advanced models in a comparatively better light and their opponents in a comparatively worse one.

I agree with your assessment of the situation a lot, but I disagree that there is all that much controversy about this issue in the broader public. There is a lot of controversy on lesswrong, and in tech, but the public as a whole is in favor of slowing down and regulating AI developments. (Although other AI companies think sharing weights is really irresponsible, and there are anti-competitive issues with Llama 2's ToS, which is why it isn't actually open source.) (read more)

Very nearly everyone agrees that there is a meaningful difference between action and inaction?

Alice is trying to decide whether to give Bob $10.

Claire is trying to decide whether to steal $10 from Bob.

If you refuse to acknowledge a difference between action and inaction, you can claim that both of these scenarios represent 'choosing whether $10 should end up in Bob's pocket or in your own', and therefore that these two situations are the same, and therefore that Alice's obligation to give Bob $10 is exactly as strong as Claire's obligation to not steal $10 from him.

Outside of the deep end of Peter-Singer-style altruism, though, I don't think many people believe that.

Thomas Sepulchre, 3mo
I think you are missing the point. Getting back to the example about an old man collapsing in a bank lobby, let's compare three alternative types of action:

  • Helping
  • Doing nothing
  • Harming the old man on purpose

Claiming that there is no meaningful difference between action and inaction would imply that doing nothing to help the old man is equivalent to harming him. This is indeed a fairly extreme position, and I agree with you that it is rejected by nearly everyone. In this very real case, the bystanders were fined by the German justice system for not helping, but they were not put in jail, as would have been the case for harming an old man (at least on purpose). So the German justice system agrees with you on this point.

But that's not really the question of duty to rescue. The question is not about the equivalence of doing nothing and harming an old man, it's about the equivalence between helping and doing nothing. In this case, one would be fined for doing nothing, but wouldn't be fined for calling an ambulance. Without the duty to rescue, one won't be fined, or otherwise punished, for doing nothing. This makes doing nothing a safe choice (at least in terms of legal consequences).

[Chloe was] paid the equivalent of $75k[1] per year (only $1k/month, the rest via room and board)


So, it's not the most important thing in the post, but this sounds hella sketchy.  Are you sure these are the numbers that were given?

$75k/yr minus $1k/mo leaves $63k/year in 'room and board'.  The median household income in New York City is $70,663/yr per  Where were they boarding her, the Ritz Carlton?

This is more false info.  The approximate/expected total compensation was $70k which included far more than room and board and $1k a month.  

Chloe has also been falsely claiming we only had a verbal agreement but we have multiple written records.  

We'll share specifics and evidence in our upcoming post.

I think a lot of travel expenses?

The 'whole point of libel suits' is to weaponize the expensive brokenness of the legal system to punish people for saying mean things about you.

Going forward I think anyone who works with Kat Woods, Emerson Spartz, or Drew Spartz, should sign legal employment contracts, and make sure all financial agreements are written down in emails and messages that the employee has possession of. I think all people considering employment by the above people at any non-profits they run should take salaries where money is wired to their bank accounts, and not do unpaid work or work that is compensated by ways that don’t primarily include a salary being wired to their bank accounts.


While I have no knowledge ... (read more)

I have worked without legal contracts for people in EA I trust, and it has worked well.

Even if all the accusation of Nonlinear is true, I still have pretty high trust for people in EA or LW circles, such that I would probably agree to work with no formal contract again.

The reason I trust people in my ingroup is that if either of us screw over the other person, I expect the victim to tell their friends, which would ruin the reputation of the wrongdoer. For this reason both people have strong incentive to act in good faith. On top of that I'm wiling to take ... (read more)

Yeah, this post makes me wonder if there are non-abusive employers in EA who are nevertheless enabling abusers by normalizing behavior that makes abuse possible. Employers who pay their employees months late without clarity on why and what the plan is to get people paid eventually. Employers who employ people without writing things down, like how much people will get paid and when. Employers who try to enforce non-disclosure of work culture and pay.

None of the things above are necessarily dealbreakers in the right context or environment, but when an employ... (read more)

Haha, I like your edit. I do think there are exceptions — for instance if you are independently wealthy, you might take no salary, and I expect startups cofounders have high-trust non-legal agreements while they're still getting started. But I think that trust is lost for Kat/Emerson/Drew and I would expect anyone in that relationship to regret it. And in general I agree it's a good heuristic.

I wrote a reply to this but it got too long, so I posted it as its own post.

(Disclosures: am American.  Strong views presented without evidence.)

The most damning indictment of French food I can think of is the fact that American capitalism hasn't even bothered stealing it.  We have Italian restaurants on every corner, Chinese and Mexican and Thai and Indian and Korean and every other cuisine from every other corner of the world...except French.  One time I went to a Korean hotpot place, but it was too full and had a long wait, so instead of waiting I walked to a different Korean hotpot restaurant.  There are tw... (read more)

Gregor is a take-out French restaurant in Berkeley that's available to franchise. La Note, Berkeley's not-overpriced French restaurant, went brunch-only after the pandemic!

American capitalism "stealing" food is usually a process of lower-income, unskilled migrants moving to a country and adapting their cuisines to American tastes/ingredients, which explains the wave of Italian (historically), Chinese, Mexican, Thai and Indian places far better than the quality of their respective cuisines. Not sure about Korean/Japanese places (higher income), but (in Europe at least) they're mostly run by people from Wenzhou, unless they're high-end, which may be an interesting exception to the rule.

I'd guess you see very few restau... (read more)

Your argument is sound, but I think it's actually because of the diversity of the base foods. Pasta and pizza are 95% of Italian food; rice and noodles are the base of 80% of Chinese/Japanese/Korean food; etc. In French cuisine there is no base that is used that often, so you must have a lot of different ingredients. Not the best thing when you operate at "small scale" (when you're not very expensive or the Cheesecake Factory). I don't know if French restaurants are pretentious outside of France, but that looks more like a Parisian problem than a French one.
On the other hand, Euros will constantly denigrate American food by appeal to the most unsophisticated examples of our cuisine, all the while pretty much every corner of their continent imports McDonald's by the kiloton. Revealed preferences.

If you're allowed to cancel a pledge at any point, there's really very little reason not to just fund anything with a refund bonus the moment it posts, aiming to cancel your pledge if it looks like it might succeed.
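A minimal sketch of that exploit's expected value (the function name, probability split, and all numbers here are hypothetical, not from the post):

```python
def pledge_and_cancel_ev(pledge, refund_rate, p_fail, p_stuck,
                         value_if_funded=0.0):
    """EV of pledging to a refund-bonus project you don't actually want,
    planning to cancel if it looks like it will succeed.

    p_fail:  probability the project fails -> you get pledge + bonus back.
    p_stuck: probability it funds before you cancel -> you pay the pledge.
    The remaining probability is cancelling in time, which costs nothing.
    """
    bonus = p_fail * refund_rate * pledge
    cost = p_stuck * (pledge - value_if_funded)
    return bonus - cost

# A $100 pledge at a 10% bonus, a 60% chance the project fails, and a 2%
# chance of getting stuck paying for something worth nothing to you:
print(pledge_and_cancel_ev(100, 0.10, p_fail=0.60, p_stuck=0.02))
```

Under these made-up numbers the strategy nets roughly $4 per $100 pledged, which is exactly the worry: pledging indiscriminately is positive-EV as long as you can usually cancel in time.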

Some thoughts:


#1: If a project is almost funded, a creator can contribute money themselves (or e.g. indirectly via a friend).  The example above said that if you offered a 10% refund on a $100k project:

The most you'd have to pay is 10% of $99,999

but in practice if you raised $95,000, rather than pay back $9,500 in bounties and have the project fail you'll probably just kick in $5k of your own money to make the project succeed.  Platforms can try to forbid this, but it's probably going to be quite hard to do so.
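The trade-off in #1 can be written out directly (a sketch using the hypothetical $100k goal and 10% bonus from the example above):

```python
GOAL = 100_000
REFUND_RATE = 0.10

def creator_best_move(raised):
    """If the project is short of its goal, compare paying refund bonuses
    on everything raised against quietly topping up the shortfall."""
    shortfall = GOAL - raised
    bounty_cost = REFUND_RATE * raised
    return "self-fund" if shortfall < bounty_cost else "pay bounties"

print(creator_best_move(95_000))  # $5k shortfall beats $9.5k in bounties
print(creator_best_move(50_000))  # $50k shortfall: cheaper to pay $5k out
```

Self-funding wins whenever raised exceeds GOAL / (1 + REFUND_RATE), about $90,909 here, so the refund guarantee only really binds on projects that fall well short of their goal.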


#2:  If nobody or n... (read more)

This seems to assume that social graces represent cooperative social strategies, rather than adversarial social strategies. I don't think this is always the case.

Consider a couple discussing where to go to dinner. Both keep saying 'oh, I'm fine to go anywhere, where do you want to go?' This definitely sounds very polite! Much more socially-graceful than 'I want to go to this place! We leave at 6!'

Yet I'd assert that most of the time this represents these people playing social games adversarially against one another.

If you name a place and I agree to g... (read more)

I believe the common case of mutual "where do you want to go?" is motivated by not wanting to feel like you're imposing, not some kind of adversarial game. Maybe I'm bubbled though?

Tyler Cowen has his unique take on the actors strike and the issue of ownership of the images of actors. As happens frequently, he centers very different considerations than anyone else would have, in a process that I cannot predict (and that thus at least has a high GPT-level). I do agree that the actors need to win this one.

I do agree with his conclusion. If I got to decide, I would say: Actors should in general be selling their images only for a particular purpose and project. At minimum, any transfer of license should be required to come with due

... (read more)
True, but I definitely don't expect such a flawless AI to be available any time soon. Even Stable Diffusion is not stable enough to consistently draw the exact same character twice, and the current state of AI-generated video is much worse. Remember the value of the long tail: if your AI-generated movie has 99% good frames and 1% wonky frames, it will still look like a very bad product compared to traditional movies, because consumers don't want movies where things look vaguely distorted once per minute. (Maybe the stunt doubles should be more concerned about being replaced by AI frames than the actors themselves?)
Gerald Monroe (4 points, 4mo):
First, I agree with your general conclusion: laws to protect a limited number of humans in a legacy profession are inefficient. Though this negotiation isn't one of laws; it's unions vs studios, where both sides have leverage to force the other to make concessions.

However, I do see a pattern here. Companies optimizing for short-term greed very often create the seeds of larger problems:

1. In engineering fields, companies often refuse to hire new graduates, preferring mid-level and up, as new graduates are unproductive on complex specialized technology. This creates a shortage of mid-level+ engineers, and companies are then forced to pay a king's ransom for them in periods of tech boom.
2. 996 in China, and the "salaryman" culture in Japan, create situations where young adults cannot have many children. In the short/medium term companies extract the maximum value per dollar of payroll, but they create a nationwide labor shortage for the future, when the following generations are smaller.
3. Companies who pay just $200 for someone's digital likeness in perpetuity, and who intend to eliminate all the actor roles except for "A-list" show-stealer stars who bring the most value to the project, eliminate the entire pipeline by which anyone can ever become famous again. It will mean a short-term reduction in production costs, but the stars created under the old system will age, requiring more and more digital de-aging, and they will demand higher and higher compensation per project.

(3) bothers me in that it's excessively greedy: $200 doesn't come close to paying a human being to even come to LA at all. It's unsustainable. Theoretically capitalism should be fixing these examples automatically. I'm unsure why this doesn't happen.

Actually, I feel like even this was pretty predictable: the text was entirely valid English words.  If a text-prediction engine were reading through this character-by-character trying to predict the upcoming character, it would have failed on the first few characters of each word, but would still have been able to predict quite a lot: there aren't many words that begin with 'malar'.

I posted it like this anyway rather than aiming for actually unpredictable text because I thought that this text was funnier than a string of entirely random characters.

Quantum turnip million. RELEASE malarial assemble!

Clever, but not further. If you increase redundancy, still unpredictable, as here, you probably went too far.
This text shows another key point: not only should your posts be a surprise, but the kind of surprise that causes good actions.
Efcnt cmnictn = cryptc(/grphc) ~

We don't actually want our AI to cooperate with each copy of the malaria parasite.

That's very astute. True.

Putting a claim into ChatGPT and getting "correct" is little evidence that the claim is correct. 

Unless your claims are heavily preselected e.g. for being confusing ones where controversial wisdom is wrong, I think this specific example is inaccurate?  If I ask ChatGPT 'Is Sarajevo the capital of Albania?', I expect it to be right a large majority of the time.

Fixed, thanks. I implicitly assumed that all ChatGPT use we cared about was about complicated, confusing topics, where "correct" would be little evidence.

I actually edited to include your PVE change, you did manage a 64% winrate.  Sorry not to give you more time, didn't realize there was work still ongoing.

NP aphyer, I didn't ask for any more time, though I was happy to get some extra due to you extending for yonge. I hadn't been particularly focused on it for a while, until trying to get things figured out at the last minute, largely I think due to me having spent a greatly disproportionate-to-value effort on figuring out how to do similarity clustering on a highly reduced (and thus much more random) version of the dataset, and then not knowing what to do with the results once I got them. (though I did learn stuff about finding the similarity clustering, so that was good). Looks like the clusters I found in the reduced dataset more or less corresponded to:

Sorry, wasn't expecting anything today!  I'll update the wrapup doc to reflect your PVE answer: sadly, even if you had an updated PVP answer, I won't let you change that now :P

Sure, no objections.   In the absence of further requests I'll aim to post the wrapup doc Friday the 9th: I'm fairly busy midweek and might not get around to posting things then.

Very minor gripe: '22m' parses to me as '22 years old and male', which was briefly confusing. Maybe '22mo' would be clearer?


For example, here’s a Nash equilibrium: “Everyone agrees to put 99 each round. Whenever someone deviates from 99 (for example to put 30), punish them by putting 100 for the rest of eternity.” 

I don't think this is actually a Nash equilibrium?  It is dominated by the strategy  "put 99 every round.  Whenever someone deviates from 99, put 30 for the rest of eternity."

The original post I believe solved this by instead having the equilibrium be “Everyone agrees to put 99 each round. Whenever someone deviates from 99 (for example to put 30), ... (read more)

Yair Halberstadt (2 points, 7mo):
It's not dominated - holding all other players constant the two strategies have equal payoffs, so neither dominates the other.
The strategy profile I describe is where each person has the following strategy (call it "Strategy A"):

* If empty history, play 99
* If history consists only of 99s from all other people, play 99
* If any other player's history contains a choice which is not 99, play 100

The strategy profile you are describing is the following (call it "Strategy B"):

* If empty history, play 99
* If history consists only of 99s from all other people, play 99
* If any other player's history contains a choice which is not 99, play 30

I agree Strategy B weakly dominates Strategy A. However, saying "everyone playing Strategy A forms a Nash equilibrium" just means that no player has a profitable deviation assuming everyone else continues to play Strategy A. Strategy B isn't a profitable deviation: if you switch to Strategy B and everyone else is playing Strategy A, everyone will still just play 99 for all eternity.

The general name for these kinds of strategies is grim trigger.
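That "no profitable deviation" point can be checked mechanically. In a minimal sketch (the function names are mine, and payoffs are ignored because they don't matter for this check), a lone switch from Strategy A to Strategy B never changes any move, since the punishment branch is never reached:

```python
def strategy_a(others_histories):
    # Grim trigger: play 99 until anyone deviates, then 100 forever.
    if any(move != 99 for h in others_histories for move in h):
        return 100
    return 99

def strategy_b(others_histories):
    # Same trigger, but punish with 30 instead of 100.
    if any(move != 99 for h in others_histories for move in h):
        return 30
    return 99

def play(strategies, rounds=10):
    """Run the repeated game, each player seeing the others' histories."""
    histories = [[] for _ in strategies]
    for _ in range(rounds):
        moves = [s([h for j, h in enumerate(histories) if j != i])
                 for i, s in enumerate(strategies)]
        for h, m in zip(histories, moves):
            h.append(m)
    return histories

all_a = play([strategy_a, strategy_a, strategy_a])
one_b = play([strategy_b, strategy_a, strategy_a])
print(all_a == one_b)  # True: every history is 99s either way
```

Since the two runs produce identical histories, they produce identical payoffs under any payoff function, which is exactly why switching to Strategy B is not a profitable deviation.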

Apologies, I was a bit blunt here.

It seems to me that the most obvious reading of "the burden of proof is on developers to show beyond-a-reasonable-doubt that models are safe" is in fact "all AI development is banned".  It's...not clear at all to me what a proof of a model being safe would even look like, and based on everything I've heard about AI Alignment (admittedly mostly from elsewhere on this site) it seems that no-one else knows either. 

A policy of 'developers should have to prove that their models are safe' would make sense in a world wh... (read more)
