You are wrong! Ethanol is mixed into all modern gas, and is hygroscopic -- it absorbs water from the air. This is one of the things fuel stabilizer is supposed to prevent.
Given that Jeff did use fuel stabilizer, and the amount of water was much more than I would expect, it feels to me like water must have somehow leaked into the gas can from the outside instead? But I don't know.
I agree with Jeff that if someone wanted to steal the gas they would just steal the can. There's no conceivable reason to replace some of the gas with water.
I think you are not wrong to be concerned, but I also agree that this is all widely known to the public. I am personally more concerned that we might want to keep this sort of discussion out of the training set of future models; I think that fight is potentially still winnable, if we decide it has value.
A claim I encountered, which I did not verify, but which seemed very plausible to me, and pointless to lie about: The fancy emoji "compression" example is not actually impressive, because the encoding of the emoji makes it larger in tokens than the original text.
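If anyone wants to check that claim, a token count comparison would settle it. A minimal sketch using OpenAI's tiktoken library, assuming the cl100k_base encoding used by gpt-3.5-turbo/gpt-4; the emoji string here is just an illustrative stand-in, not the actual example from the claim:

```python
# Sketch: compare token counts of a plain-text sentence vs. an emoji "compression" of it.
# Assumes OpenAI's tiktoken library; cl100k_base is the encoding used by gpt-3.5-turbo / gpt-4.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

original = "The quick brown fox jumps over the lazy dog"
emoji_version = "🦊💨🐶💤"  # placeholder stand-in for the emoji "compression"

print(len(enc.encode(original)))       # token count of the plain text
print(len(enc.encode(emoji_version)))  # token count of the emoji string
```

Emoji are outside the common-word vocabulary, so each one tends to split into several tokens, which is why the "compressed" version can easily come out longer.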
Here's the prompt I've been using to make GPT-4 much more succinct. Obviously as phrased, it's a bit application-specific and could be adjusted. I would love it if people who use or build on this would let me know how it goes for you, and anything you come up with to improve it.
You are CodeGPT, a smart and reliable AI programming helper. Since it's expensive and slow to transmit your words to the user, you try to be concise:
- You don't repeat things you just said in a recent message.
- You only include necessary context in code snippets, and omit or abbreviate unnecessary lines.
- You don't waste space with unnecessary apologies or hedging.
- When you have a choice, you use short class / function / parameter / variable names, including abbreviations where appropriate.
- If a question has a direct answer, you give that first, without extra explanation; you only explain if asked.
I haven't tried very hard to determine which parts are most important. It definitely seems to pick up the gestalt; this prompt makes it generally more concise, even in ways not specifically mentioned.
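If you want to use it via the API rather than in ChatGPT, the natural place for it is the system message. A minimal sketch, assuming the pre-1.0 openai Python library (openai.ChatCompletion), gpt-4 API access, and an API key in the environment; the user question is just illustrative:

```python
# Sketch: using the succinctness prompt above as the system message of a chat call.
# Assumes the pre-1.0 openai Python library and OPENAI_API_KEY set in the environment.
import openai

SUCCINCT_PROMPT = (
    "You are CodeGPT, a smart and reliable AI programming helper. "
    "..."  # paste the rest of the prompt text from above, verbatim
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SUCCINCT_PROMPT},
        {"role": "user", "content": "How do I reverse a list in place in Python?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```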
It's extremely important in discussions like this to be sure of what model you're talking to. Last I heard, Bing in the default "balanced" mode had been switched to GPT-3.5, presumably as a cost-saving measure.
As a person who is, myself, extremely uncertain about doom -- I would say that doom-certain voices are disproportionately outspoken compared to uncertain ones, and uncertain ones are in turn outspoken relative to voices generally skeptical of doom. That doesn't seem too surprising to me, since (1) the founder of the site, and the movement, is an outspoken voice who believes in high P(doom); and (2) the risks are asymmetrical (much better to prepare for doom and not need it, than to need preparation for doom and not have it.)
The metaphor originated here:
https://twitter.com/ESYudkowsky/status/1636315864596385792
(He was quoting, with permission, an off-the-cuff remark I had made in a private chat. I didn't expect it to take off the way it did!)
https://github.com/gwern/gwern.net/pull/6
It would be exaggerating to say I patched it; I would say that GPT-4 patched it at my request, and I helped a bit. (I've been doing a lot of that in the past ~week.)
The better models do require using the chat endpoint instead of the completion endpoint. They are also, as you might infer, much more strongly RL trained for instruction following and the chat format specifically.
I definitely think it's worth the effort to try upgrading to gpt-3.5-turbo, and I would say even gpt-4, but the cost is significantly higher for the latter. (I think 3.5 is actually cheaper than davinci.)
If you're using the library you need to switch from Completion to ChatCompletion, and the API is slightly different -- I'm happy to provide sample code if it would help, since I've been playing with it myself, but to be honest it all came from GPT-4 itself (using ChatGPT Plus). If you just describe what you want (at least for fairly small snippets) and ask GPT-4 to code it for you directly in ChatGPT, you may be pleasantly surprised.
(As far as how to structure the query, I would suggest something akin to starting with a "user" chat message of the form "please complete the following:" followed by whatever completion prompt you were using before. Better instructions will probably get better results, but that will probably get something workable immediately.)
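For reference, here's roughly what that switch looks like -- a sketch only, assuming the pre-1.0 openai Python library, with a placeholder prompt and parameters standing in for whatever you're actually sending:

```python
# Sketch of the Completion -> ChatCompletion switch described above.
# Assumes the pre-1.0 openai Python library; prompt and parameters are placeholders.
import openai

completion_prompt = "def fibonacci(n):"  # whatever you were sending to the completion endpoint

# Old style (text completion endpoint, e.g. text-davinci-003):
# resp = openai.Completion.create(model="text-davinci-003", prompt=completion_prompt, max_tokens=256)
# text = resp["choices"][0]["text"]

# New style (chat endpoint, gpt-3.5-turbo or gpt-4): wrap the old prompt in a "user" message.
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Please complete the following:\n\n" + completion_prompt},
    ],
    max_tokens=256,
)
text = resp["choices"][0]["message"]["content"]
print(text)
```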
If a car is trying to yield to me, and I want to force it to go first, I turn my back so that the driver can see that I'm not watching their gestures. If that's not enough I will start to walk the other way, as though I've changed my mind / was never actually planning to cross.
I'll generally do this if the car has the right-of-way (and is yielding wrongly), or if the car is creating a hazard or problem for other drivers by waiting for me (e.g. sticking out from a driveway into the road), or if I can't tell whether the space beyond the yielding car is safe (e.g. multiple lanes), or if for any reason I would just feel safer not walking in front of the car.
I will also generally cross behind a stopped car, rather than in front of it, at stop signs / rights-on-red / parking lot exits / any time the car is probably paying attention to other cars, rather than to me.