All of NunoSempere's Comments + Replies

One particularity of Polymarket is that, as of the time of this market, you couldn't divide $1 into four shares and sell all of them for $1.09. If you could have, this problem wouldn't have existed, and the trade would have been a 9% return.

1wachichornia8m
Got it. Seems to me that it only works on liquid markets, right? If the spread is significant, you pay much more than what you can sell it for and hence don't get the $0.09 difference?

I don't have a link off the top of my head, but the trade would have been to sell one Yes share in each market. You can do this by splitting $1 into a Yes and a No share, and selling the Yes. Specifically, on Polymarket you achieve this by adding and then withdrawing liquidity (for a specific type of market called an "AMM", for "automated market maker", which was the only kind Polymarket supported at the time, though it has since added an order book).

By doing this, you earn $1.09 from the sale + $3 from the three events eventually, and t... (read more)
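The trade above can be sketched numerically. The $1.09 total comes from the comments here; the individual Yes prices below are hypothetical, chosen only so that they sum to $1.09:

```python
# Hedged sketch of the trade described above. The $1.09 total comes from
# the comment; the individual Yes prices are hypothetical.
yes_prices = [0.40, 0.30, 0.25, 0.14]  # sums to 1.09

# Mint a Yes+No pair in each of the four mutually exclusive markets ($1 each),
# then sell every Yes share at its current price.
cost = 4 * 1.00
sale_revenue = sum(yes_prices)

# Exactly one outcome occurs, so exactly three of the held No shares pay $1.
payout_at_resolution = 3 * 1.00

profit = sale_revenue + payout_at_resolution - cost
print(round(profit, 2))  # 0.09, regardless of which outcome occurs
```

The profit is locked in at the moment of sale, since the $3 payout does not depend on which of the four outcomes happens.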


The framework is AI strategy nearcasting: trying to answer key strategic questions about transformative AI, under the assumption that key events (e.g., the development of transformative AI) will happen in a world that is otherwise relatively similar to today’s.

Usage of "nearcasting" here feels pretty fake. "Nowcasting" is a thing because 538/meteorology/etc. has a track record of success in forecasting and decent feedback loops, and extrapolating those a bit seems neat. 

But as used in this case, feedback loops are poor, and it just feels like a differ... (read more)

See this comment: <https://www.lesswrong.com/posts/yCuzmCsE86BTu9PfA/there-are-no-coherence-theorems?commentId=v2mgDWqirqibHTmKb>

I am not defending the language of the OP's title, I am defending the content of the post.


You don't have strategic voting with probabilistic results. And the degree of strategic voting can also be mitigated.

1Noosphere894mo
Hm, I remember Wikipedia mentioned Hylland's theorem, which generalizes the Gibbard-Satterthwaite theorem to the probabilistic case, though Wikipedia might be wrong on that.

Copying my second response from the EA forum:

Like, I feel like with the same type of argument that is made in the post I could write a post saying "there are no voting impossibility theorems" and then go ahead and argue that the Arrow's Impossibility Theorem assumptions are not universally proven, and then accuse everyone who ever talked about voting impossibility theorems that they are making "an error" since "those things are not real theorems". And I think everyone working on voting-adjacent impossibility theorems would be pretty justifiedly annoyed by

... (read more)
1Noosphere894mo
Unfortunately, most democratic countries do use first past the post. The two things that are inevitable are Condorcet cycles and strategic voting (though Condorcet cycles are less of a problem as you scale up the population, and I have a sneaking suspicion that they go away entirely if we allow a real-numbered, infinite population.)
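The inevitability of Condorcet cycles can be illustrated with a minimal concrete example; the three rotated ballots below are the standard textbook construction, not anything from this thread:

```python
from itertools import combinations

# A minimal Condorcet cycle: three voters with rotated preference orders.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def pairwise_winner(x, y):
    """Return whichever of x and y a strict majority ranks higher."""
    x_wins = sum(b.index(x) < b.index(y) for b in ballots)
    return x if x_wins > len(ballots) / 2 else y

# A beats B, B beats C, and yet C beats A: no Condorcet winner exists.
results = {pair: pairwise_winner(*pair) for pair in combinations("ABC", 2)}
print(results)  # {('A', 'B'): 'A', ('A', 'C'): 'C', ('B', 'C'): 'B'}
```

Each candidate wins one pairwise contest 2-1 and loses another, so majority preference is cyclic even though every individual ballot is transitive.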
5habryka4mo
Sorry, this might have not been obvious, but I indeed think the voting impossibility theorems have holes in them because of the lotteries case and that's specifically why I chose that example.  I think that intellectual point matters, but I also think writing a post with the title "There are no voting impossibility theorems", defining "voting impossibility theorems" as "theorems that imply that all voting systems must make these known tradeoffs", and then citing everyone who ever talked about "voting impossibility theorems" as having made "an error" would just be pretty unproductive. I would make a post like the ones that Scott Garrabrant made being like "I think voting impossibility theorems don't account for these cases", and that seems great, and I have been glad about contributions of this type.

Copying my response from the EA forum:

(if this post is right)

The post does actually seem wrong though. 

Glad that I added the caveat.

Also, the title of "there are no coherence arguments" is just straightforwardly wrong. The theorems cited are of course real theorems, they are relevant to agents acting with a certain kind of coherence, and I don't really understand the semantic argument that is happening where it's trying to say that the cited theorems aren't talking about "coherence", when like, they clearly are.

Well, part of the semantic nuance is tha... (read more)

Well, part of the semantic nuance is that we don't care as much about the coherence theorems that do exist if they fail to apply to current and future machines.

The correct response to learning that some theorems do not apply as much to reality as you thought, surely mustn't be to change language so as to deny those theorems' existence. Insofar as this is what's going on, these are pretty bad norms of language in my opinion.

5habryka4mo
I do really want to put emphasis on the parenthetical remark "(at least in some situations, though they may not arise)". Katja is totally aware that the coherence arguments require a bunch of preconditions that are not guaranteed to be the case for all situations, or even any situation ever, and her post is about how there is still a relevant argument here.

I am also curious about the extent to which you are taking the Hoffman scaling laws as an assumption, rather than as something you can assign uncertainty over.

I thought this was great, cheers. 

Here:

Next, we estimate a sufficient horizon length, which I'll call the k-horizon, over which we expect the most complex reasoning to emerge during the transformative task. For the case of scientific research, we might reasonably take the k-horizon to roughly be the length of an average scientific paper, which is likely between 3,000 and 10,000 words. However, we can also explicitly model our uncertainty about the right choice for this parameter.
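A minimal sketch of explicitly modeling that uncertainty, assuming (purely for illustration) a log-uniform distribution over the 3,000-10,000 word range quoted above:

```python
import math
import random

# Sketch of modeling uncertainty over the k-horizon, assuming (purely for
# illustration) a log-uniform distribution over 3,000-10,000 words.
random.seed(0)
low, high = 3_000, 10_000
samples = sorted(
    math.exp(random.uniform(math.log(low), math.log(high)))
    for _ in range(100_000)
)

median = samples[len(samples) // 2]
p90 = samples[int(0.9 * len(samples))]
print(round(median), round(p90))  # median near the geometric mean, ~5,500 words
```

Any downstream estimate that depends on the k-horizon can then be propagated through these samples rather than through a point estimate.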

It's unclear whether the final paper would be the needed horizon length.

F... (read more)


But I think it is >30% likely that you can compensate for past over- or underestimations.

I'd bet against that at 1:5, i.e., against the proposition that the optimal forecast is not subject to your previous history

This is true in the abstract, but the physical world seems to be such that difficult computations are done for free by the physical substrate (e.g., when you throw a ball, this seems to happen instantaneously, rather than having to wait for a lengthy derivation of the path it traces). This suggests a correct bias in favor of low-complexity theories regardless of their computational cost, at least in physics.

Neat. I have some uncertainty about the evolutionary estimates you are relying on, per here. But neat.

Seems like this assumes an actual superintelligence, rather than near-term scarily capable successor of current ML systems.

8Thane Ruthenis10mo
Yup. The point is just that there's a level of intelligence past which, it seems, we can't do literally anything to get things back on track. Even if we have some theoretically-perfect tools for dealing with it, these tools' software and physical implementations are guaranteed to have some flaws that a sufficiently smart adversary will be able to arbitrarily exploit given any high-bandwidth access to them. And at that point, even some very benign-seeming interaction with it will be fatal. This transition point may be easy to miss, too — consider, e.g., the hypothesized sharp capabilities gain (https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/GNhMPAWcfBCASy8e6). A minimally dangerous system may scale to a maximally dangerous one in the blink of an eye, relatively speaking, and we need to be mindful of that. Besides, some of the currently-proposed alignment techniques may be able to deal with a minimally-dangerous system if it doesn't scale. But if it does, and if we let any degree of misalignment persist into that stage, not even the most ambitious solutions would help us then.

Why publish this publicly? Seems like it would improve optimality of training runs?

Good question. Some thoughts on why we did this:

  • Our results suggest we won't be caught off-guard by highly capable models that were trained for years in secret, which seems strategically relevant for those concerned with risks
  • We looked at whether there was any 'alpha' in these results by investigating the training durations of ML training runs, and found that models are typically trained for durations not far off from what our analysis suggests might be optimal (see a snapshot of the data here)
  • It independently seems highly likely that large training runs
... (read more)

Software: archivenow

Need: Archiving websites to the internet archive.

Other programs I've tried: The archive.org website, spn, various scripts, various extensions.

archivenow is trusty enough for my use case, and it feels like it fails less often than other alternatives. It was also easy enough to wrap into a bash script to process markdown files. spn is newer and has parallelism, but I'm not as familiar with it, and subjectively it feels like it fails a bit more.

See also: Gwern's setup.

Have prediction markets which pay $100 per share, but only pay out 1% of the time, chosen randomly. If the 1% case that happens, then also implement the policy under consideration.

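The incentive math for this design can be sketched with a small simulation. The payout and 1% rate are from the proposal above; the 0.30 outcome probability is purely illustrative:

```python
import random

# Sketch of the incentive math for the design above, with assumed numbers:
# shares pay $100, but only in a randomly chosen 1% of cases (in which the
# policy is also implemented). The 0.30 outcome probability is illustrative.
random.seed(0)
PAYOUT, IMPLEMENT_RATE, p_outcome = 100.0, 0.01, 0.30

trials = 200_000
total = 0.0
for _ in range(trials):
    if random.random() < IMPLEMENT_RATE:   # the 1% case: policy runs, market pays
        if random.random() < p_outcome:    # the bet's outcome occurs
            total += PAYOUT

# Expected value per share is 100 * 0.01 * 0.30 = $0.30, the same as an
# ordinary $1-per-share market priced at 30%.
print(round(total / trials, 2))
```

So prices can still be read as probabilities, while the policy under consideration is only actually implemented 1% of the time.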

The issue is that probabilities for something that will either happen or not don't really make sense in a literal way

 

This is just wrong/frequentist. Search for the "Bayesian" view of probability.

I thought this post was great; thanks for writing it.

3gilch1y
Thank you for putting a number on it. Here's another one: https://www.metaculus.com/questions/9552/maximum-price-for-ethereum-by-2023/. Current prediction is $5,180. As Ether is trading around $1,000 now, I'd say that's undervalued, even in the relatively short term of 2023. But how much can we trust these predictions?

The Litany of Might

 

I strive to take whatever steps may help me best to reach my goals,

I strive to be the very best at what I strive

 

There is no glory in bygone hopes,

There is no shame in aiming for the win,

There is no choice besides my very best,

to play my top moves and disregard the rest

2Yitz1y
lol well now it needs to be one!

I was assigning less than 3% probability to ~plagiarism being the case, mostly based on lsusr not mentioning it at all in the original post, plus people often seeing similarities where there are none. But it seems that I was wrong.

6Ben Pace1y
Oh! I thought it was some emoticon.

Curious if you know where those people come from?

Sure, see here: https://imgur.com/a/pMR7Qw4

I'm not sure to what extent there's a "forecasting scene", or who is part of it. 

There is a forecasting scene, made out of hobbyist forecasters and more hardcore prediction market players, and a bunch of researchers. The best prediction market people tend to have fairly sharp models of the world, particularly around elections. They also have a pretty high willingness to bet. 

4Austin Chen1y
I've been thinking for a while that maybe forecasting should have its own LessWrong instance, as a place to discuss and post essays (the way EA Forum and AI Alignment have their own instances); curious to get your thoughts on whether this would improve the forecasting scene by having a shared place to meet, or detract by making it harder for newcomers to hear about forecasting? I really, really wish crossposting and crosslinking was easier between different ForumMagnum instances...

I've become a bit discouraged by the lack of positive reception for my forecasting newsletter on LessWrong, where I've been publishing it since April 2020. For example, I thought that Forecasting Newsletter: Looking back at 2021 was excellent. It was very favorably reviewed by Scott Alexander here. I poured a bunch of myself into that newsletter. It got 18 karma.

I haven't bothered crossposting it to LW this month, but it continues on Substack and on the EA Forum.

3CitizenTen1y
Huh.  I found your forecasting newsletter via LessWrong, and then subscribed to the substack's RSS feed?  Which probably made me less likely to open it/see it in LessWrong?  Dunno.  Maybe your LessWrong traffic moved to substack?  (sample size=1)    
9Adam Zerner1y
This isn't a particularly informed or confident take, but forecasting strikes me as, I'm not sure what the right words are. Important? Useful? Cool? Impressive? But it doesn't seem to get nearly as much attention as it should. And so I too am sad to learn of the lack of engagement and positive reception. I just subscribed to the substack because it's something I'd like to keep my eye on.

Alas, that also makes me sad. I wonder whether this means something is going wrong in the basic attention-allocation system on the site. I've enjoyed every newsletter that I read, but I only noticed like 2-3 (and upvoted each of them correspondingly). 

Introspecting on my experience, I actually think the biggest difference for me would have been if you had given any of them a more evocative title that had captured the most important thing in that month's newsletter. I definitely feel a strong sense of boredom if I imagine clicking on "Forecasting Newsletter March 2021" instead of "Nuclear war forecasts & new forecasting platforms (Forecasting Newsletter Mar '21)".

4ryan_b1y
Well I liked the looking back post - though I have only just now noticed they are in a running sequence. Query - would you prefer to have engagement here, or at substack? Also, once again note to myself to be what-feels-from-the-inside like gushingly, disgustingly effusive but-in-fact-is just positive feedback at all.
4Pattern1y
I guess there's not a lot of clickthrough? Wait, the link is to the EA forum. Okay, still, that's weird.

That's sad. 

Looks like you're getting decent engagement on substack. Curious if you know where those people come from? I'm not sure to what extent there's a "forecasting scene", or who is part of it. 

Speaking as a non-forecast-specialized-person – I have a belief that it's good to have a forecasting scene that is developing tech/skills/infrastructure. But so far, whether rightly or wrongly, I've mostly thought of that as something that's good/virtuous for other people to do. A newsletter feels like something that makes sense to read if I want to ... (read more)

This was hilarious; very fun to read.

Odds are an alternative way of presenting probabilities: 50% corresponds to 1:1, 66.66..% corresponds to 1:2, 90% corresponds to 1:9, etc. 33.33..% corresponds to 2:1 odds, or, normalizing the first number to 1, 1:0.5 odds.

Log odds, or bits, are the logarithm of probabilities expressed as 1:x odds. In some cases, they can be a more natural way of thinking about probabilities (see, e.g., here).
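In the convention used above (probability p written as the odds 1 : p/(1-p)), the conversions can be sketched as:

```python
import math

# Helpers matching the convention above: probability p is written as the
# odds 1 : p/(1-p), and log odds ("bits") take log2 of that second number.
def odds(p):
    return p / (1 - p)

def bits(p):
    return math.log2(odds(p))

print(odds(0.5))            # 1.0 -> 1:1
print(round(odds(0.9)))     # 9   -> 1:9
print(bits(0.5))            # 0.0 bits
print(round(bits(0.9), 2))  # 3.17 bits
```

One convenience of bits is that updating on evidence becomes addition: multiplying the odds by a likelihood ratio adds its log2 to the running total.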

1Ian McKenzie1y
I think 75% is 1:3 rather than 1:2.

Couldn't they just get lower-interest loans elsewhere?

This doesn't mean necessarily that you shouldn't take the bet, but maybe that you should also take the loan.

3MichaelStJules1y
Ya, I was thinking this, too, but they could possibly get a lot of loans or much larger loans at lower interest rates, and it's not clear when they would start looking at this one as the next best to pursue. Maybe it's more time-efficient (more loaned money per hour spent setting up and dealing with) to take this kind of AI-bet loan, though, but $1000 is very low.
2jefftk1y
Thanks! I was confused because you also quoted my "I couldn't figure out how to get it to do focus-follows-mouse between the panes" bit

When I tried using tmux to get a similar effect the main problem I had was that if I tried to select text with the mouse it didn't understand my panes. And I couldn't figure out how to get it to do focus-follows-mouse between the panes

This works in Ubuntu 20.04:

# selection
## Source: https://www.rockyourcode.com/copy-and-paste-in-tmux/
set-option -g mouse on ## key line
set-option -s set-clipboard off
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel 'xclip -se c -i'

## Bonus: Pasting, vim bindings
bind P paste-buffer
bind-key -
... (read more)
2jefftk1y
Before I dig into this, does this also include focus following the mouse between panes?

Tmux only has C-b as its default prefix because it was developed inside Screen, which uses C-a. But the first thing that everybody does is remap it to C-a.

2jefftk1y
I like C-b, because C-a means "go to beginning of the current line" and I wasn't using C-b for anything. In screen I used to use C-^

Kinda annoying how this one was the most upvoted of my forecasting newsletters. I would also recommend Forecasting Newsletter: Looking back at 2021 and 2020: Forecasting in Review.


 

Right, just seed a prediction market maker using a logarithmic scoring rule, where the market's prior probability is given by Solomonoff. There is the small issue of choosing a Turing interpreter, but I think we just choose the Lisp language as a reasonably elegant one.
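The market maker mentioned here can be sketched with Hanson's logarithmic market scoring rule (LMSR). The liquidity parameter b and the two-outcome setup are illustrative assumptions; a Solomonoff prior would enter through the initial state, whereas this sketch starts uniform for simplicity:

```python
import math

# Minimal sketch of Hanson's logarithmic market scoring rule (LMSR).
# The liquidity parameter b and the two-outcome setup are illustrative.
def cost(q, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, i, b=100.0):
    """Instantaneous price of outcome i; prices sum to 1 across outcomes."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

q = [0.0, 0.0]          # nothing traded yet: this sketch starts at 50/50
print(price(q, 0))      # 0.5
# A trader buying 10 shares of outcome 0 pays the cost difference:
print(round(cost([10.0, 0.0]) - cost(q), 2))  # about 5.12
```

The subsidizer's worst-case loss is bounded by b * log(number of outcomes), which is what makes seeding such a market a well-defined expense.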

Something is less than three percent. Or, a heart emoji followed by a percentage sign.

My favourite interpretation is that the "<3" is a heart emoji. Then the "%" sign is supposed to remind us of the comment character in, e.g., LaTeX. But nothing follows the %-symbol, so the full meaning is "I love this, no comment."

5Ben Pace1y
I cannot tell what this means, but I like it.

Good idea. One could also go Turing machine by Turing machine, voting on the ones which would produce the most upvoted content. Then you can just read the binary output as html.

Prediction market with Solomonoff prior. The theoretically most efficient way to do anything.

(I wonder if we just found a solution to AI alignment. Start with the simplest machines, and let the prediction markets vote on whether they will destroy the world...)

Thank you for the kind comment, although you might want to refer to Thiel with more respect. He has done great things – terrible, yes, but great.

Although I enjoyed your joke, I find it highly unlikely that Mr. Davon would have changed his name just to win the election.

Because the space of possible things one could do is vast, and optimal actions are few and far between.

A few years later: Mars Emperor Tim Chu vows to colonize Andromeda. Prediction markets rose to 99% upon announcement, up from an early estimate of 0.5% (source: Metacortex.)

It's not a world war if China, India, Indonesia, Pakistan, Brazil, Nigeria, Bangladesh, Mexico, Japan, Ethiopia, the Philippines, Egypt, and Vietnam aren't fighting.

As a nitpick, see the Kuril Islands, which Japan recently called "ancestral territories".

fwiw I do think that it's a concern. But there is also an anti-inductiveness to close calls: the more you've had in the past, the more measures might have been implemented to avoid future ones.

So e.g., updating upwards on the Cuba missile crisis doesn't feel right, because the red phone got implemented after that.

2Quinn1y
yeah the bet pressured me to post it a little early. I'd be interested in elaboration of your view of comparative advantage shifting. You mean shifting more toward lucrative E2G opportunities? Shifting more away from capacity to make lucrative alignment contributions?