ESRogs

Engineer at CoinList.co. Donor to LW 2.0.

Comments (sorted by newest)
The 4-Minute Mile Effect
ESRogs · 3mo · 20

Context for anyone who missed it: https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

Davey Morse's Shortform
ESRogs · 3mo · 41

Responding to your parenthetical, the downside of that approach is that the discussion would not be recorded for posterity!

Regarding the original question, I'm curious whether this could work for a country whose government spending was small enough, e.g. 2-3% of GDP. Maybe the most obvious issue is that no government would be disciplined enough to keep its spending at that level. But it does seem sort of elegant otherwise.

Consciousness as a conflationary alliance term for intrinsically valued internal experiences
ESRogs · 8mo · 2-2

I'm afraid I'm sceptical that your methodology licenses the conclusions you draw. You state that you pushed people away from "using common near-synonyms like awareness or experience" and "asked them to instead describe the structure of the consciousness process, in terms of moving parts and/or subprocesses".

Isn't this just the standard LessWrong-endorsed practice of tabooing words, and avoiding ?

Wei Dai's Shortform
ESRogs · 9mo · 20

Default seems unlikely, unless the market moves very quickly, since anyone pursuing this strategy is likely to be very small compared to the market for the S&P 500.

(Also consider that these pay out in a scenario where the world gets much richer — in contrast to e.g. Michael Burry's "Big Short" swaps, which paid out in a scenario where the market was way down — so you're just skimming a little off the huge profits that others are making, rather than trying to get them to pay you at the same time they're realizing other losses.)

Wei Dai's Shortform
ESRogs · 9mo · 20

It doesn't differentially help capitalize them compared to everything else though, right? (Especially since some of them are private.)

shortplav
ESRogs · 1y · 20

With which model?

Boycott OpenAI
ESRogs · 1y · 41

"Use the most powerful AI tools."

FWIW, Claude 3.5 Sonnet was released today. Appears to outperform GPT-4o on most (but not all) benchmarks.

My thesis (Algorithmic Bayesian Epistemology) explained in more depth
ESRogs · 1y · 20

"Does any efficient algorithm satisfy all three of the linearity, respect for proofs, and 0-1 boundedness? Unfortunately, the answer is no (under standard assumptions from complexity theory). However, I argue that 0-1 boundedness isn’t actually that important to satisfy, and that instead we should be aiming to satisfy the first two properties along with some other desiderata."

Have you thought much about the feasibility or desirability of training an ML model to do deductive estimation?

You wouldn't get perfect conformity to your three criteria of linearity, respect for proofs, and 0-1 boundedness (which, as you say, is apparently impossible anyway), but you could use those to inform your computation of the loss in training. In which case, it seems like you could probably approximately satisfy those properties most of the time.

Then of course you'd have to worry about whether your deductive estimation model itself is deceiving you, but it seems like at least you've reduced the problem a bit.
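
To gesture at what I mean, here is a rough sketch (purely illustrative, not from the thesis, with hypothetical statement encodings and a made-up `model` and `property_loss`) of a training loss that softly penalizes violations of the three properties rather than enforcing them exactly:

```python
# Illustrative sketch only: soft penalties for the three properties, so the
# trained estimator can trade them off instead of satisfying them exactly.
import torch

def property_loss(model, x, y, implication_pairs):
    """
    model: maps encoded statements of shape (batch, dim) to estimates of shape (batch, 1).
    x, y: batches of encoded statements, shape (batch, dim).
    implication_pairs: (a, b) batches where each a provably implies the matching b.
    """
    # Linearity (approximate): the estimate of a convex combination of two
    # statements should match the same combination of their estimates.
    alpha = torch.rand(x.shape[0], 1)
    combo = alpha * x + (1 - alpha) * y  # assumes combinations make sense in the encoding
    linearity = (model(combo) - (alpha * model(x) + (1 - alpha) * model(y))).pow(2).mean()

    # Respect for proofs (one loose reading): if a provably implies b, the
    # estimate for b should not fall below the estimate for a.
    a, b = implication_pairs
    proofs = torch.relu(model(a) - model(b)).mean()

    # 0-1 boundedness, as a soft penalty rather than a hard constraint.
    est = model(x)
    bounded = (torch.relu(-est) + torch.relu(est - 1.0)).mean()

    return linearity + proofs + bounded
```

Keeping all three terms soft is what would let the learned estimator "approximately satisfy those properties most of the time" rather than running into the impossibility result.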

Introducing AI Lab Watch
ESRogs · 1y · 26

"I wouldn't call this 'AI lab watch.' 'Lab' has the connotation that these are small projects instead of multibillion dollar corporate behemoths."

Disagree on "lab". I think it's the standard and most natural term now. As evidence, see your own usage a few sentences later:

"They've all committed to this in the WH voluntary commitments and I think the labs are doing things on this front."

Wikitag Contributions

semantic stopsigns
Value Learning · 11y · (-181)
Value Learning · 11y · (+8/-7)
Value Learning · 11y · (+10/-9)
Value Learning · 11y · (+4/-2)
Value Learning · 11y · (+1)
Posts (sorted by new)

7 · ESRogs's Shortform · 5y · 44
19 · Paul Christiano on Dwarkesh Podcast · 2y · 0
79 · Carl Shulman on The Lunar Society (7 hour, two-part podcast) · 2y · 17
14 · Is a near-term, self-sustaining Mars colony impossible? [Q] · 5y · 18
80 · Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 6y · 57
17 · 'Longtermism' definitional discussion on EA Forum · 6y · 0
17 · Henry Kissinger: AI Could Mean the End of Human History · 7y · 12
9 · AskReddit: Hard Pills to Swallow · 7y · 1
13 · Predicting Future Morality · 7y · 21
27 · AI Safety via Debate [Ω] · 7y · 14