Comments

RE: GPT getting dumber, that paper is horrendous.

The code-gen portion was completely thrown off by Markdown syntax (the authors mistook back-ticks for single quotes, afaict). I think the update to make there is that this is decent evidence of some RLHF on ChatGPT outputs. If you remember the "a human being will die if you don't reply with pure JSON" tweet, even that final JSON was wrapped in a markdown code fence. My modal guess is that markdown was inserted via a kludge to make the ChatGPT UX better, and then RLHF was done on that kludged output; code sections are often mislabeled as to what language they contain. My secondary guess is that the authors used an API which had this kludge added on top of it, such that GPT just wouldn't output plaintext code, though that is hard to square with there being any passing examples at all.

In the math portion they say GPT-4-0613 only averaged 3.8 CHARACTERS per response. Note that "[NO]" and "[YES]" both contain more than 3.8 characters. Note that GPT-4 hardly ever answers a query with a single word. Note that the paper's example answer for the primality question included ~1000 characters, so the remaining questions apparently averaged about 3 characters flat. Even if you think they only fucked up that data analysis: I also replicated GPT-4 failing to solve "large"-number primality, and am close to calling that a cherry-picked example. It is a legitimately difficult problem for GPT, and I agree that anyone who goes to ChatGPT to replicate will find the answer they get back is a coin flip at best. But we need to say it again for the kids in the back: the claim is that GPT-4 got 2% on yes/no questions. What do we call a process that gets 2% on coin-flip questions?
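To make the arithmetic behind that last question explicit, here is a minimal sketch of how unlikely a ≤2% score is for a pure coin-flip guesser; the question count N = 500 is a placeholder assumption for illustration, not a figure taken from the paper.

```python
from math import comb

# Hypothetical setup: N yes/no primality questions answered by independent
# fair coin flips. N = 500 is an assumed illustration value, not the
# paper's actual question count.
N = 500
threshold = int(0.02 * N)  # a "2% score" means at most 10 correct answers

# P(a fair-coin guesser scores <= 2%) under Binomial(N, 0.5)
p = sum(comb(N, k) for k in range(threshold + 1)) / 2**N
print(f"P(coin-flip guesser scores <= 2% on {N} questions) = {p:.3e}")
```

For any plausible N this probability is vanishingly small, which is the point of the rhetorical question: a score that far below chance looks like systematically inverted answers or a grading artifact, not random degradation.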

If you take the distance between the North and South pole and divide it by ten million: voilà, you have a meter!


NB: The circumference of the Earth is ~40,000 km, so this definition of a meter should instead reference the distance from the North (or South) Pole to the Equator, i.e. the quarter meridian of ~10,000 km; pole to pole is twice that.
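A quick arithmetic check with the rounded figures (assuming the conventional ~40,000 km circumference):

```python
# Rounded figures: Earth's circumference ~40,000 km, so the quarter
# meridian (pole to Equator) is ~10,000 km.
quarter_meridian_m = 10_000 * 1_000      # pole to Equator, in meters
print(quarter_meridian_m / 10_000_000)   # -> 1.0 (one meter, as defined)

pole_to_pole_m = 20_000 * 1_000          # pole to pole, in meters
print(pole_to_pole_m / 10_000_000)       # -> 2.0 (the quoted definition gives ~2 m)
```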

The problem with this is that you get whatever giant risks you aren’t measuring properly. That’s what happened at SVB, they bought tons of ‘safe’ assets while taking on a giant unsafe bet on interest rates because the system didn’t check for that. Also they cheated on the accounting, because the system allowed that too.

A very good example of Goodhart's Law/misalignment. Highlighting for the skimmers. Thanks for the write-up, Zvi!

Tidbit to make this comment useful: "duration" is the (negative) derivative of price with respect to yield - a bond with a duration of 10 will lose about 5% of its value (relative to par) after a 50 bp (0.5%) rate hike. So why do they call it duration? Well, suppose you buy a 10 year bond that pays 2% interest, and then tomorrow someone offers you a 3% 10 year bond. How much money do you have to pay to trade in yesterday's bond? Pretty much, you have to pay an extra 1% for each year of the bond's life!

This is probably dead obvious to everyone in finance, but I only got into finance by joining fintech as after a math undergrad, and it took me years to figure out why they called it duration when they are nice enough to call the second derivative "convexity".
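A minimal sketch of that intuition, with made-up but representative numbers (a hypothetical 10-year, 2% annual-coupon bond priced by discounting at a flat yield with annual compounding):

```python
def bond_price(face: float, coupon_rate: float, years: int, yld: float) -> float:
    """Present value of annual coupons plus principal at a flat yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yld) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yld) ** years
    return pv_coupons + pv_face

p_at_2 = bond_price(100, 0.02, 10, 0.02)  # priced at par when yield == coupon
p_at_3 = bond_price(100, 0.02, 10, 0.03)  # same bond after yields move to 3%

print(f"Price at 2% yield: {p_at_2:.2f}")   # ~100.00
print(f"Price at 3% yield: {p_at_3:.2f}")   # ~91.47
print(f"Drop for a 1% move: {p_at_2 - p_at_3:.2f}")
```

The ~8.5 point drop comes in a bit under the "one percent per year of life" heuristic precisely because of convexity, the second-derivative term mentioned above.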

New U.S. sanctions on Russia (70%): Scott holds, I sell to 60%.

This seems like a better sale than the sale on Russia going to war, by a substantial amount. So if I was being consistent I should have sold more here. Given that I was wrong about the chances of the war, the sale would have been bad, but I didn’t know that at the time. Therefore this still counts as a mistake not to sell more.

This seems like a conjunction fallacy. "US sanctions Russia" is very possible outside "Russia goes to war", even though "Russia goes to war" implies "US sanctions Russia" - so P(sanctions) should be at least P(war). You had 30% on "major flare up in Russia-Ukraine". Perhaps you are anchoring your relative sells or something?

I obviously agree that you know these things, and am only noting a self-flagellation that seemed unearned. Thanks for writing, Zvi!

What prompts maximize the chance of returning these tokens?

Idle speculation: cloneembedreportprint and similar tokens end up being encoded similarly to /EOF.
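A quick way to poke at that speculation (a sketch, assuming tiktoken's r50k_base encoding matches the GPT-2/3 BPE vocabulary these glitch tokens were observed in) is to look at which token ids the strings map to, with <|endoftext|> included for comparison:

```python
# Sketch: inspect token ids for a glitch-token candidate vs. the
# end-of-text marker. Assumes r50k_base is the relevant vocabulary.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")

for s in ["cloneembedreportprint", "<|endoftext|>"]:
    # allowed_special lets the end-of-text marker be encoded for comparison
    ids = enc.encode(s, allowed_special={"<|endoftext|>"})
    print(f"{s!r:>25} -> {ids}")
```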

I am sorry for insulting you. My experience in the rationality community is that many people choose abstinence from alcohol, which I can respect, but I forgot that in many social circles that choice likely leads to feelings of alienation. While I thought you were signaling in-group allegiance, I can see that you might not have that connection. I will attempt to model this better in the future, since it seems generalizable.

 

I'm still interested in whether the beet margarita with OJ was good~

I wish this post talked about object-level trade-offs. It did that somewhat with the reference to the importance of "have a decision theory that makes it easier to be traded with". However, the opening was extremely strong and was not supported:

I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world. Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.

What level of funding would make fraud worth it?

Edit to expand: I do not believe the answer is infinite. I believe the answer is possibly less than the amount I understand FTX has contributed (assuming they honor their commitments, which they may not be able to). I think this post gestures at trading off sacred values, in a way that feels like it signals for applause, without actually examining the trade.

Thanks for the feedback; I am new to writing in this style and may have erred too far towards deleting sentences while editing. But if you never cut too much, you're always too verbose, as they say. I particularly appreciate the point that, when talking about how I am updating, I should make clear where I am updating from.

For instance, regarding human-level intelligence, I was also describing the update relative to "me a year/month ago". I relistened to the Sam Harris/Yudkowsky podcast yesterday, and they detour for a solid 10 minutes on how "human-level" intelligence is a straw target. I think their arguments were persuasive, and that I would have endorsed them a year ago, but they don't really apply to GPT. I had pretty much concluded that the difference between a 150 IQ AI and a 350 IQ AI would just be a matter of scale. GPT as a simulator/platform seems to me like an existence proof for a not-artificially-handicapped human-level AI attractor state. Since I had previously thought the entire idea was a distraction, this is an update towards human-level AI.

The impact on AI timelines mostly follows from diversion of investment. I will think on if I have anything additional to add on that front.
