lol, I filed the same market on Manifold before scrolling down and seeing you already did.

Thanks!

  • The 0.75 to 0.95 vs. 0.75 to 0.9 discrepancy is strictly my transcription bug; I wasn't careful enough.
  • In general I wasn't auditing the code from the Jonas Moss comment; I just stepped through it looking at the functionality. I should've been more careful, given that I was going to make a claim about the conversion factor.
  • You're kind of right about the question "if it's a constant number of lines written exactly once, does it really count as boilerplate?" I can see how it feels a little dishonest of me to imply that the ratio is really 15:1. The example I was thinking of was the Biological Anchors Report ("Ajeya's Timelines"): those notebooks have lots of LOC in hidden cells, but the relative cost of those goes down as the length of the report goes up. All that considered, I could be updated to the view that the boilerplate point is moot for power users (who are probably able and willing to provide that boilerplate once per file), but I would still be excited about what is opened up for more casual users.
  • You're right (or your comment indirectly suggests) that Squiggle, having not yet provided a way to give non-default quantiles with the to syntax, hasn't done anything to show that it'd really beat hand-crafted Python functions at accomplishing this (see the sketch after this list).
  • Re the underlying Squiggle notebook concerning GiveDirectly and so on: I've flagged your comment to Sam (it's something else I haven't taken a close look at).
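On that non-default-quantiles point, here's a minimal sketch of the kind of hand-crafted Python function I have in mind. The function name is mine and I'm assuming scipy; as I understand it, Squiggle's to pins a lognormal to a default 90% interval, so the comparison is against letting the quantiles vary:

```python
import numpy as np
from scipy import stats

def lognormal_from_quantiles(x_lo, x_hi, p_lo=0.05, p_hi=0.95):
    """Return a frozen scipy lognormal whose p_lo/p_hi quantiles are x_lo/x_hi.

    If ln(X) ~ Normal(mu, sigma), then ln(x_p) = mu + sigma * z_p,
    so two quantiles pin down mu and sigma exactly.
    """
    z_lo, z_hi = stats.norm.ppf(p_lo), stats.norm.ppf(p_hi)
    sigma = (np.log(x_hi) - np.log(x_lo)) / (z_hi - z_lo)
    mu = np.log(x_lo) - sigma * z_lo
    return stats.lognorm(s=sigma, scale=np.exp(mu))

# e.g. treat "0.75 to 0.9" as a 5th-to-95th-percentile interval:
d = lognormal_from_quantiles(0.75, 0.9)
print(d.ppf([0.05, 0.95]))  # approximately [0.75, 0.9]
```

The point is just that two quantiles pin down a two-parameter lognormal exactly, so the hand-rolled version is short; whatever Squiggle ships for non-default quantiles has to beat roughly this.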
Answer by Quinn, Jul 14, 2022

Yes, the problem is real. I'd try your solution if it existed.

Optimal for me would be Emacs or VS Code keybindings, not the four fingers of tablet computing.

Unlikely; see here (Rohin wrote a TL;DR for the Alignment Newsletter; see the comment).

Some of what follows is similar to something I wrote on the EA Forum a month or so ago.

Returns on meatspace are counterfactually important to different people to different degrees. I think it's plausible that some people simply can't keep their eye on the ball if they're not getting consistent social rewards for trying to do the thing, or that the added bandwidth you get when you move from Discord to meatspace actually provides game-changing information.

I have written that if you're not the type who really needs to be in meatspace with their tribe, i.e., if you can cultivate and preserve agentiness online, it may be imperative for you to defect in the "everyone move to the Bay" game, specifically to guard against brain drain, because people who happen to live in non-Bay cities really do, I think, deserve access to agenty, ambitious people working on projects. An underrated movement-building theory of change is that someone fails the university entrance exam in Minneapolis, and we're there to support them.

However, I'm decreasingly interested in my hypothesis about why brain drain is even bad. I'm not sure the few agenty people working on cool projects in Philly are really doing all that much for the not-very-agenty sections of the movement that happen to live in Philly. That's a conclusion I really didn't want to draw, but I've had way too much of going to an ACX or EA meetup and meeting some nihilist-adjacent guy who informs me that, since free will is fake, trying to fix problems is pointless. I'm concluding that people have to want to cultivate ambition/agentiness and epistemics before I can add any value. I read this as a point against heeding the brain-drain concern.

There's a sense in which I can take PG's post about cities very seriously, conclude that the nihilist-adjacent guy is a property of Philly, and conclude that it's really important for me to try other cities, since what I'm bringing to Philly is being wasted and Philly isn't bringing a lot to me. There's another sense in which I take PG's post seriously but think Philly isn't unique among not-quite-top-5 US cities, and another sense in which I don't take PG's post seriously at all. The fourth sense, crucially, is that my personal exhaustion with the nihilist-adjacent guy doesn't actually bear on the value I can add by being there for someone when they flunk the university entrance exam (I want a Shapley points allocation for saving a billion lives, dammit!).

Another remark: a friend who used to live in the Bay once told me, "yeah, you meet people working on projects all the time, but so many of the projects are kinda dumb." So I may end up just as frustrated with the Bay as I am with Philly if I try living there. Uncertain.

Missed opportunities to build a predictive track record, and Trump

I was reminiscing about my prediction-market failures; the clearest "almost won a lot of mana dollars" (if Manifold Markets had existed back then) was this executive order. The campaign speeches made it fairly obvious, and I'm still salty about a few idiots telling me "stop being hysterical" when I pointed out, pre-inauguration, that he was exactly what it said on the tin, even though I remember that overall being a time when my epistemics were way worse than they are now.

However, there does seem to be a need for a word for "lack of shock, but failure to predict concretely." We were threat-modeling a ton of crazy stuff back then! So what if you can econo-splain "well, if you didn't predict concretely then you were, by definition, shocked"; the more useful and accurate description sounds more like "we were worried about various classes of populist atrocities, some of which would look hysterical in hindsight, and those crowded out the ability to write down detailed executive orders just to win the mana dollars / Bayes points / etc." The early onset of a populist swing is so anxiety-inducing and chaotic that I forgive myself for making an at least token attempt at security mindset by thinking about how bad it could get, but I shouldn't do so too quickly: a post-Manifold-Markets populist swing would give me a great opportunity to take things seriously and put a little of that anxiety to use.

So of course, what is the institutional role of Metaculus or Manifold in the leadup to January 6th, 2021, or things in that reference class? Again: "didn't write down a detailed description of what would happen, but isn't shocked when it does." It cost zero IQ points to observe, in the months leading up to the election, that the administration would be a sore loser in worlds where it lost. So why is it so subtle to leverage this observation into actual mana dollars or Metaculus ranking? This seems like an open problem to me.

Is there an EV monad? I'm inclined to think there is not, because EV(EV(X)) is a way simpler structure than a "flatmap" analogue.
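For concreteness, here's a minimal sketch (my own Python, names mine) of where the flatmap does live: finite distributions form a monad, and expectation is the map that collapses out of that structure entirely, which is one way of seeing why iterating EV is degenerate rather than monadic:

```python
from typing import Callable, List, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# A finite distribution as (outcome, probability) pairs summing to 1.
Dist = List[Tuple[A, float]]

def unit(x: A) -> "Dist[A]":
    """pure/return: the point mass at x."""
    return [(x, 1.0)]

def bind(d: "Dist[A]", f: "Callable[[A], Dist[B]]") -> "Dist[B]":
    """flatmap: push each outcome through f and reweight by its probability."""
    return [(y, p * q) for (x, p) in d for (y, q) in f(x)]

def ev(d: "Dist[float]") -> float:
    """Expectation: collapses a distribution to a bare number.
    It lands outside Dist, so ev(ev(d)) isn't even well-typed."""
    return sum(x * p for (x, p) in d)

die = [(i, 1 / 6) for i in range(1, 7)]
two_dice = bind(die, lambda a: bind(die, lambda b: unit(a + b)))
print(ev(two_dice))  # approximately 7.0
```

On this reading, EV(EV(X)) can only mean the tower property E[E[X|Y]] = E[X], which is already a collapse to plain numbers rather than a flatmap; the monadic structure belongs to distributions, not to expectations.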

I find myself, just as a random guy, deeply impressed by the operational competence of airports and hospitals. Any good books about that sort of thing?

In the FLI podcast debate, Stuart Russell outlined things like instrumental convergence and corrigibility (though these took a backseat to his own standard/non-standard-model approach), and he challenged Pinker to publish, in a journal, his reasons for not being compelled to panic, warning him that many people would emerge to tinker with and poke holes in his models.

The main thing I remember from that debate is that Pinker thinks the AI x-risk community is needlessly projecting "will to power" (in the Nietzschean sense) onto software artifacts.
