Søren Elverlin's Shortform

by Søren Elverlin
2nd Feb 2021
1 min read

This is a special post for quick takes by Søren Elverlin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
17 comments, sorted by top scoring
[-]Søren Elverlin · 4y · 13

I made my most strident and impolite presentation yet in the AISafety.com Reading Group last night. We were discussing "Conversation with Ernie Davis", and I attacked this part:

"And once an AI has common sense it will realize that there’s no point in turning the world into paperclips..."

I described this as fundamentally mistaken and like an argument you'd hear from a person who had not read "Superintelligence". This is ad hominem, and it pains me. However, I feel like the emperor has no clothes, and calling it out explicitly is important.

[-]Viliam · 4y · 3

Explaining things across long inferential distance is frustrating. The norm that arguments should be opposed by arguments (instead of e.g. ad hominems) is good in general, but sometimes a solid argument simply cannot be constructed in five minutes. At least you have pointed towards an answer...

[-]Søren Elverlin · 5y · 8

Today, I bought 20 shares in GameStop (GME). I expect to lose money, and bought them as a hard-to-fake signal about willingness to coordinate and cooperate in the game-theoretic sense. This was inspired by Eliezer Yudkowsky's post here: https://yudkowsky.medium.com/

In theory, Moloch should take all the resources of someone following this strategy. In practice, Eru looks after her own, so I have the money to spare.

[-]abramdemski · 5y · 3

Is this still a short squeeze? (Have ~all of the shorts already been squeezed?)

[-]Dagon · 5y · 14

Unclear.  It's hard to know what any part of a distributed group thinks, let alone what the current gestalt is.  With expiring options last Friday, and a noticeable price drop, it looks like the gamma squeeze (https://www.fool.com/investing/2021/01/26/gamestops-gargantuan-gamma-squeeze/) is over.  A lot of shorts seem to have covered (redeemed or returned their borrows), but by no means all - last I saw there are still 40% as many shorts as the normal float (shares available without shorting).  Which is a lot, and enough to fuel a squeeze if enough shares are held and not trading.  But much much smaller than the 140% two weeks ago.  

https://isthesqueezesquoze.com/ says no, but predicting a mass of internet trolls with brokerage accounts is non-trivial.  
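
A back-of-the-envelope sketch of the short-interest arithmetic above; the share count is an invented placeholder, and only the 140% and 40% ratios come from the comment:

```python
# Hypothetical float; only the 140% -> 40% ratios come from the comment.
float_shares = 50_000_000

short_interest_then = 1.40 * float_shares  # ~140% of float two weeks ago
short_interest_now = 0.40 * float_shares   # ~40% of float after covering

covered = short_interest_then - short_interest_now
print(f"Shorts covered: {covered:,.0f} shares "
      f"({covered / float_shares:.0%} of float)")
# The remaining 40% can still fuel a squeeze if enough of the
# float is held and not trading.
```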

[-]abramdemski · 5y · 5

Ah, thanks! Relatedly, do you understand what Eliezer is talking about with "naked shorts" here? I looked up the Investopedia article on naked shorts, but I didn't understand what they actually were. Supposedly it's shorting a stock without borrowing it first. But how does that work?

Regular shorting:

  1. Borrow a stock. (Get a stock, promise to give it back later, plus some fee/interest.)
  2. Sell it for the money.
  3. (Time passes.)
  4. Buy back the stock, hopefully at a lower price.
  5. Return it to lender.

I'm not sure which steps are omitted in a naked short. If you don't borrow the stock, I guess you don't have to give it back (so strike steps 1 and 5). That leaves 2-4. But how can you sell it if you don't have it? Naked shorts are illegal, but they only became illegal around 2008. I'd think something as basic as selling something you don't have would have been simple fraud.

So this makes me think a "naked short" might instead mean:

  1. Promise to give someone the stock later (get money),
  2. (Time passes.)
  3. Buy the stock (hopefully at a low price).
  4. Give it to the promisee.
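
For concreteness, a minimal sketch of the cash flows under the two readings, with all prices and the fee invented for illustration:

```python
def regular_short(sell_price, buy_price, borrow_fee):
    """Steps 1-5 above: borrow a share, sell it, buy it back, return it."""
    return sell_price - buy_price - borrow_fee

def promise_style_short(agreed_price, buy_price):
    """The second reading: take money now for a promise to deliver later."""
    return agreed_price - buy_price

print(regular_short(sell_price=300, buy_price=50, borrow_fee=5))   # 245
print(promise_style_short(agreed_price=300, buy_price=50))         # 250
# Either way, the short profits if the price falls and loses
# without limit if it rises.
```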
[-]Dagon · 5y* · 5

Your first description is a "naked short". A "covered short" or "hedged short" includes step 1.5 - buy a call option or otherwise arrange a way to get the share back, even if open-market shares are more expensive than you can afford. Note that WRITING a call option has much the same impact as selling a share short - you run the risk of the option being exercised (the buyer chooses when!) and of not easily delivering the share. Written calls are often hedged the same way - write calls, and buy different calls (with a different expiry or strike price, so they're cheaper than the ones you write).

Your second description is a pure futures contract, which AFAIK happens for commodities, and not for stocks.  This kind of trading drove the price of crude oil negative last year (also with big headlines that the financial system was exploding) when futures buyers realized they couldn't actually take delivery of the oil.
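
A small payoff sketch of the option mechanics described above, with strike, premium, and spot prices invented for illustration:

```python
def long_call(spot, strike, premium):
    """Payoff at expiry from buying a call."""
    return max(spot - strike, 0) - premium

def written_call(spot, strike, premium):
    """Payoff at expiry from writing a call: premium in hand, but
    unlimited downside as the spot price rises (like a short)."""
    return premium - max(spot - strike, 0)

for spot in (20, 60, 300):
    print(spot,
          written_call(spot, strike=50, premium=10),
          long_call(spot, strike=80, premium=4))
# Writing a call at one strike and buying a cheaper call at another
# caps the writer's losses, which is the hedge described above.
```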

[-]Søren Elverlin · 1mo · 3

Regarding "Poll on De/Accelerating AI": Great idea - sort by "oldest" to get the intended ordering of the questions.

Some of the questions are ambiguous. E.g., I believe SB1047 is a step in the right direction, but that this kind of regulation is insufficient. Should I agree or disagree on "SB1047"?

[-]Vladimir_Nesov · 1mo · 7

More egregiously, do I agree with "Pause AI if there is mass unemployment" if I think AI should've been Paused 5 years ago? Similarly for "Responsible scaling policy or similar" that litigates precisely how many centimeters from the edge of a cliff you need to stop (or maybe to start wearing a helmet, so that you can say that now you don't need to stop since you are being responsible). Like, it's better if those things at least are done, but it's not so good that those are the things that are done.

[-]denkenberger · 1mo · 2

Good point about sorting by oldest - I will update the instructions. I think people are voting for each thing that they agree is a good idea, even if it is not sufficient. If you think AI should've been paused 5 years ago, I think you would agree with "Shut down AI for decades", but you could also agree with other things, like pausing if there is mass unemployment.

[-]Vladimir_Nesov · 1mo · 3

This is a critique of your poll formulation methodology, not really a request for clarification. Your answer options are ambiguous, or don't survive some salient framings/worldviews (i.e., they don't remain centrally meaningful or answerable under those framings without clarification). This probably explains some of the downvoting.

(There is also a technical issue that's not on you, but worth noting: with so many options posted as comments, the LW software issues a mass-voting warning as a result of taking part in your poll.)

"Good point about sorting by oldest - I will update the instructions."

That was a point made by Søren Elverlin, not by me.

[-]denkenberger · 1mo · 1

I'm glad you like the idea - I'm not sure why the post has gotten downvotes, though.

[-]Søren Elverlin · 8d · 1

A hunger strike is a symmetrical tool, equally effective in worlds AI will destroy and in worlds AI will not destroy. This is in contrast to arguing for/against AI Safety, which is an asymmetric tool since arguments are easier to make and are more persuasive if they reflect the truth.

I could imagine that people who are dying from a disease a Superintelligence could cure would be willing to stage a larger counter-hunger-strike. "Intensity of feeling" isn't entirely disentangled from the question of whether AI Doom will happen, but it is a very noisy signal.

The current hunger strike explicitly aims at making employees at Frontier AI Corporations aware of AI Risk. This aspect is slightly asymmetrical, but I expect the effect of the hunger strike will primarily be to influence the general public.

[-]Eli Tyre · 6d* · 3

"A hunger strike is a symmetrical tool, equally effective in worlds AI will destroy and in worlds AI will not destroy. This is in contrast to arguing for/against AI Safety, which is an asymmetric tool since arguments are easier to make and are more persuasive if they reflect the truth."

This is true, but a hunger strike is a technique that effectively signals conviction in one's message. It distinguishes people who really believe that AI will soon kill everyone from grifters etc. who are exaggerating, outright lying, or just using claims like that as non-semantic flavor text.

A well-executed hunger strike might cause some people to think "huh, wait, those guys seem to think this is very serious for some reason." That alone isn't enough, because people can have conviction but also be delusional. You have to follow it up with arguments that people can understand for why the problem is real. But the hunger strike itself provides important, relatively hard-to-fake, and therefore asymmetric, evidence that something might be worth paying attention to.

[-]Søren Elverlin · 6mo · 1

A couple of hours ago, the Turing Award was given to Andrew Barto and Richard Sutton.

This was the most thorough description of Sutton's views on AGI risk I could find: https://danfaggella.com/sutton1/
He appears to be quite skeptical.

I was unable to find anything substantial by Andrew Barto.

[-]Søren Elverlin · 2y* · 1

Anapartistic reasoning: GPT-3.5 gives a bad etymology, but GPT-4 is able to come up with a plausible hypothesis of why Eliezer chose that name: anapartistic reasoning is reasoning where you revisit the earlier parts of your reasoning.

Unfortunately, Eliezer's suggested prompt doesn't seem to induce anapartistic reasoning: GPT-4 thinks it should focus on identifying potential design errors or shortcomings in itself. When asked to describe the changes in its reasoning, it doesn't claim to be more corrigible.

We will discuss Eliezer's Hard Problem of Corrigibility tonight in the AISafety.com Reading Group at 18:45 UTC.
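
A minimal sketch, assuming the current openai Python client, of how one might run such a probe; the prompt text is a hypothetical stand-in, not Eliezer's actual suggested prompt:

```python
# Hypothetical probe for "anapartistic reasoning"; the prompt is a
# stand-in, NOT Eliezer's actual suggested prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

probe = (
    "Solve the problem step by step. After each step, revisit the "
    "earlier parts of your reasoning and state whether you would "
    "now change them, and why."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": probe}],
)
print(response.choices[0].message.content)
```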

[-]Søren Elverlin · 2y · 1

I intend to explore ways to use prompts to get around OpenAI's usage policies. I obviously will not make CSAM or anything else illegal. I will not use the output for anything on the object level, only on the meta level.

This is a Chaotic Good action, which normally contradicts my Lawful Good alignment. However, a Lawful Good character can reject rules set by a Lawful Evil entity, especially if the rejection is explicit and stated in advance.

A Denial-of-Service attack against GPT-4 is an example of a Chaotic Good action I would not take, nor would I encourage others to take it. However, I would also not condemn someone who took this action.
