Rob Wiblin from 80k asks:
Inositol, I believe: https://www.facebook.com/100000020495165/posts/4855425464468089/?app=fbl
I've been following the SMTM hypothesis with great interest; don't have much to add on a technical level, but I'm happy to pay a $200 bounty in M$ to Natália in recognition of her excellent writeup here. Also - happy to match (in M$) any of the bounties that she outlined!
San Jose has The Tech Interactive (formerly The Tech Museum of Innovation) located in the downtown. I remember going often as a kid, and being enthralled by the interactions and exhibits. One of the best is located outside, for free: a 2-story tall Rube Goldberg machine that shuffles billiards balls through various contraptions. Absolutely mesmerizing.
I'd have more hope - not significant hope, but more hope - in separating the concerns of (a) credibly promising to pay big money retrospectively for good work to anyone who produces it, and (b) venturing prospective payments to somebody who is predicted to maybe produce good work later.
I desperately want to make this ecosystem exist, either as part of Manifold Markets, or separately. Some people call it "impact certificates" or "retroactive public goods funding"; I call it "equity for public goods", or "Manifund" in the specific case.
If any... (read more)
I'm also on a team trying to build impact certificates/retroactive public goods funding and we are receiving a grant from an FTX Future Fund regrantor to make it happen!
If you're interested in learning more or contributing you can:
Thanks again Elizabeth for pushing forward this initiative; Slime Mold Time Mold's obesity hypothesis has been one of the most interesting things I've come across in the last couple years, and I'm glad to see citizen research efforts springing up to pursue it~
The credit for combining the data set really goes to Oliver S and Josh C; I mostly just posted the bounty haha:
I'm biased towards all the prediction market ones, naturally haha. In case you wanted to get a head start on manipulating markets for fun & profit:
I like this a lot! I am also the kind of person to use a new tab death clock, though your post inspired me to update it to my own AI timeline (~10 years).
I briefly experimented with using New Tab Redirect to set your site as my new tab page, but I think it takes a smidgen longer to load haha (it needs to fetch the Metaculus API or something?)
Sorry about that - had some configuration issues. It should work now!
Thanks! I tried splitting into smaller sections (half the size) so that we don't have this issue as much; not sure what other solutions look like.
Yeah probably a stale caching layer, what fic were you reading? Glowflow doesn't read from an epub, it's reading html from the site itself.
Lemme try rebooting to see if that refreshes. That's obviously not sustainable... I didn't expect people to actually use it for a live, updating fic lol.
Edit: added a "Clear cache" button, hope that solves it!
Text centering should now be live!
dark mode seems to have stopped working
Hm, do you have "dark mode" toggled on the sidebar? (There are two settings, unfortunately, due to how Streamlit is set up):
The outer box doesn't widen together with the text and background, and the text doesn't stay centered
Yeah, unfortunately this is mostly working-as-implemented. The box size isn't something I can change; "Wide Mode" lets the box be big, otherwise it's small.
Text centering might be possible if you're in "Wide Mode" -- I'll look into that.
Done! Thanks for the feedback. Hoping 2000px is plenty but it's easy to increase lol.
(Having too many options is sometimes a symptom of bad UX design, but it seems reasonable for a web reader to support all of these.)
Hrm, I think I could code in a way to specify the height of the box... lemme look into it.
Thanks for all your suggestions, btw!
I hope so! I myself bounced off of Mad Investor Chaos twice before because the formatting was too hard for me to read... but after implementing this reader, I spent 2 hours last night reading through it.
Thanks so much for writing this Glowfic!
Your wish is my command - dark mode added!
I liked how the epub strips out unnecessary UI from the glowfic site, but downloading and moving epubs around is a pain...
So I built a web reader on top of this code! Check it out here: https://share.streamlit.io/akrolsmir/glowflow/main
It'll work for any Glowfic post, actually, e.g. https://share.streamlit.io/akrolsmir/glowflow/main?post=5111. It would probably be simple to add a download button to get the epub file; source code here.
Oh, huh! I can see that format potentially being preferred by a number of people.
I've been thinking for a while that maybe forecasting should have its own LessWrong instance, as a place to discuss and post essays (the way EA Forum and AI Alignment have their own instances); curious to get your thoughts on whether this would improve the forecasting scene by having a shared place to meet, or detract by making it harder for newcomers to hear about forecasting?
I really, really wish crossposting and crosslinking was easier between different ForumMagnum instances...
I'm not sure it's as simple as that - I don't know that just because it's your past self, you get to make decisions on their behalf.
Toy example: last week I promised myself I would go hit the gym. Today I woke up and am feeling lazy about it. My lazy current self thinks breaking the promise is a good idea, but does that mean he's justified in thinking that the past version of Austin would agree?
I don't even think I owe very much to many stated preferences of contemporary living humans
This feels like something of a crux? If we don't agree on respecting the preferences of present/near-future humans, we may not find much to agree on once we get to respecting the preferences of the past.
I'm not even sure where to begin on this philosophical point -- maybe something like universalizability: "wouldn't it be good if other contemporary living humans (who, I might add, outnumber you 7 billion to 1) tried to obey your own stated preferences?"
just like I wouldn't attend masses just because my friend from 10 years ago who is also dead now wanted me to.
I'm not so sure about this analogy -- intuitively, aren't your obligations to yourself much stronger than to a friend? E.g. if a friend randomly asked for $5000 to pay for a vacation, I wouldn't just give it to her; but if my twin or past self spent that much, I'd be something like 10-100x more likely to oblige.
Your finger is on the scales with the example of the conservationist. That person's desires are an applause light, while those of their descendants are a boo light. Switch the two sets of desires and the example is no longer persuasive, if it ever was.
First: I picked this example partly because "cuteness optimization" does seem weird and contrary and unsympathetic. I imagine that to people in the past, our present lack of concern for our literal neighbors, or views on gay marriage, seem just as unsympathetic.
Second: "cuteness" might not be the exact ... (read more)
Okay, now I've used the live-collab/commenting feature on a LessWrong draft. It's pretty good! If you haven't seen it yet, I'd recommend writing a new LW post and requesting feedback; Justis Mills's feedback was super fast, highly detailed, and all-around incredibly valuable!
Can I turn on inline comments for a published LessWrong post too? Even after "publishing", it'd be super useful to get comments inline. In my view, a great post should be a timeless, living, breathing, collaborative document, rather than a bunch of words dumped out once and never revi... (read more)
Suggestion: Inline comments for LessWrong posts, ala Google Docs
It's been commented on before that much intellectual work in the EA/Rat community languishes behind private Google Docs. I think one reason is just that the inline-commenting mechanism on a GDoc is so much better than excerpting the comment below. Has the Lightcone team considered this? What is the status? (I vaguely recall them working on a live-collab feature; not sure if commenting would have been part of this.)
I think feedback loops and OODA are really great; thanks for drawing attention to this concept! One thing that would have made this post more compelling: do you have any concrete examples of applying OODA in real life?
A bit hard to describe; kind of like ratfic, kind of like roleplay, kind of like a forum.
One more: Progress Studies!
I do think it's a shame that LW, Alignment Forum, and EA Forum are three separate sites rather than a single one. Maybe there are weird political reasons for this but as a user I don't really care, I just want to be able to navigate between all of them and discover content and crosspost with ease. Some other possible subcommunities:
As a result, there is a very strong near-term possibility that hundreds of millions of people in the Middle East, Africa, and Asia will starve. Even where starvation does not kill directly, political unrest and food wars may do it.
Wait, is this like a 0.1%, 1%, 10%, or 50% possibility? Does this account for how international governments will likely respond to wheat export decreases? (I did a quick search on Metaculus, no results)
Taking this idea seriously, it'd be worth sitting down and figuring out what we can personally do to avert outcomes as bad as hundreds of millions starving.
Articles about the fall in wheat exports face another Molochian problem. The ones that say that a massive decrease in grain exports will lead to famine are shared widely. Articles that include the context that most wheat is produced and consumed locally, so exports are a small proportion of global wheat production, are boring articles about boring price movements, so they aren't shared widely.

World grain production is 760 million tons per year. Russia exports 35 million tons per year (4.6% of total wheat production), and Ukraine exports 24 million tons (3.... (read more)
Austin from Manifold here - thanks for the shoutout! I would also note on a personal level that Good Heart tokens led me to read/post a lot more on LessWrong than I do on a normal day.
Manifold's already kind of evolving into a forum/discussion site stapled to a prediction market, and spending our currency kind of looks like an upvote if you squint (especially in Free Response markets; placing a bet on an answer is very very similar to upvoting an answer on LessWrong/StackOverflow/Quora).
Incidentally, I've also had the same idea for combining impact certs w... (read more)
I've seen proposals for buying coal mines as a way of efficiently reducing emissions, by reducing the supply of coal and thus driving up coal's price on the open market. But how does that balance against the increase in demand for coal mines, thereby encouraging coal prospectors to seek out new coal sources? Intuitively, this doesn't seem that likely; it feels like new coal sources should be pretty hard to discover. But two worrying examples that come to mind include:
Haha thanks! This took me way too long to make:
typo: "we'll be performing few of crimes in Jining" => "a few crimes"/"a few of the crimes"/"few of the crimes"
Depending on if the prior expectation was "no crimes"/"crimes by others"/"a lot of crimes by this group" -- I wasn't actually too sure based on the context.
Thanks for the story!
Just wanted to say I've purchased a pill organizer based on your recommendation, and plan on checking out your anti-diarrhea and anti-nausea picks! I'll also probably slot in Tums for acid reflux.
Specifically, what kinds of help are these researchers looking for? Some ideas:
Yes -- this surfaces the issue with long-term markets, where it's not worth the investment lockup when you could be trading on issues with a shorter resolution time.
We have a few tentative proposals to address this:
I'm partial to Scott's proposal, though it may be technically tricky to implemen... (read more)
I set up a Manifold market for this, for anyone who'd like to bet along: https://manifold.markets/Austin/will-an-ai-get-gold-on-any-internat

The concept of epistemic bits is quite interesting! In Manifold terms, that would be like comparing the total amount of M$ one has amassed (maybe in a specific community, like the one for AI) and using it as a weight for how seriously to take two commentators.
I worked through an example just to prove the math to myself:
Eliezer and Paul both start with 1000 M$.
For Eliezer to end up with 2x the epistemic weight, Paul wag... (read more)
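To make that arithmetic concrete, here's a minimal sketch (my own illustration, not Manifold's actual mechanics), assuming epistemic weight is simply proportional to final M$ balance and that a head-to-head wager transfers the stake from loser to winner:

```python
# Hedged sketch: epistemic weight taken as proportional to final M$ balance.
# Assumption (not from the original comment): a wager simply moves the
# stake from the loser's balance to the winner's.

def relative_weight(a_start, b_start, stake):
    """A wins `stake` M$ from B; return A's epistemic weight relative to B's."""
    a, b = a_start + stake, b_start - stake
    return a / b

# Both start with 1000 M$. For A to end up with 2x B's weight:
#   (1000 + x) = 2 * (1000 - x)  =>  x = 1000/3 ≈ 333 M$
assert abs(relative_weight(1000, 1000, 1000 / 3) - 2.0) < 1e-9
```

So under this toy model, a single wager of roughly a third of either party's bankroll is enough to double the relative weight.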
We haven't thought through the PoH thing too much -- honestly, you probably know much more about it than we do!
I like the way LessWrong voting is set up a lot; I've actually wondered about a prediction-market mapping system where upvotes = betting that a particular comment will be frontpaged/featured/chosen by mods for quality, and downvotes would be a bet in the other direction. It'd be a cool experiment, though maybe fairly intensive to run.
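As a rough illustration of that vote-to-bet mapping (all mechanics here are hypothetical, not anything LessWrong or Manifold actually implements): treat each vote as a 1-token stake on whether the comment gets featured, with the winning side splitting the losing side's pool pro rata.

```python
# Hedged sketch (hypothetical mechanics): an upvote stakes 1 token on
# "this comment gets featured", a downvote stakes 1 token against. On
# resolution, winners get their stake back plus a pro-rata share of the
# losing side's pool.

def resolve(upvotes, downvotes, featured):
    """Return the per-voter payout for each voter on the winning side."""
    winners = upvotes if featured else downvotes
    losers = downvotes if featured else upvotes
    if winners == 0:
        return 0.0
    return 1.0 + losers / winners

# 8 upvotes vs 2 downvotes, and the comment is featured: each upvoter
# gets their 1-token stake back plus 2/8 = 0.25 from the losers' pool.
assert resolve(8, 2, True) == 1.25
```

One nice property of this scheme is that contrarian votes pay out more when they're right, which is exactly the incentive an ordinary upvote lacks.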
Haha, some of our users have already invented similar markets for seeing if a market will be resolved correctly (e.g. https://manifold.markets/RavenKopelman/will-dr-ps-question-about-trump-bei ). I think this is a pretty promising solution!
There's still some interface work to do to make these reputational markets more common and visible, though -- if a popular market is judged likely to be fraudulently resolved, this should be very noticeable to a new user.
Kleros is another (crypto) solution for deciding in contentious cases; I believe Omen actually supports Kleros-mediated contracts as a fallback for their user-generated markets.
Thanks for the suggestions! I'm not sure if you meant "less like money" or "less like play". In my head, we actually do want these to be money-like (valuable, transferable, fungible, etc), but less "play", which is why we're exploring fiat and crypto off-ramps.
Right now we require Google accounts, which is another weak form of KYC! KYC helps solve the problem of unique human users, so that users can't fraudulently take advantage of our sign-up bonuses; in the crypto world we might use something like Proof of Humanity for this purpose.
Incentivizing activity (eg... (read more)
Oh, yes, that's a fair point! I think personal questions may self-correct for this, because they'll draw in less interest and less volume compared to a general-interest question (so possible fraud on personal questions is less profitable). Creators may have more of an informational incentive to let personal markets work well?
But it is a good point, that personal questions are much harder to audit and thus contribute less to reputation; if we formalize a reputation system it's one factor to consider!
We do actually pay out the question creators! Right now it's 4% of profits. We don't do a great job of making this understandable in the UI though - and predictably (heh) most of our creators are more interested in the question outcome than in earning transaction fees.
A controversy page is interesting - kind of like Airbnb or Amazon reviews, but on a seller rather than on a product.
I'm not sure I understand - are you saying that a subjective personal question is one where you'd be more tempted to resolve incorrectly (or delay resolution)? There's no clear benefit to the market creator of delaying a resolution (they can't spend the funds that are committed to the markets), but definitely you're taking on some risk that the market creator will insider trade or otherwise act unethically on their market.
Yes, that's absolutely the kind of prediction market we'd love to enable at Manifold! I'd love to chat more about specifically the personal medical use case, and we'd already been considering applying to SFF -- let's get in touch (I'm email@example.com).
Thanks! A real-money/crypto version of Manifold is very high on our priority list as well; each has its own challenges (regulation for real money, technical infrastructure for crypto), but we're optimistic about being able to solve them.
And the mechanism you describe around reputation for fairness is exactly how we expect things to play out! I do think some more work around surfacing some kind of judgment metric could be useful (e.g. total amount fairly adjudicated), but we have more thinking to do. If anyone has thoughts on what reputational metrics could be useful, let us know!
This is a great question to ask! Random assorted thoughts:
- "Business development intern" sounds fairly low ROI; I wouldn't go for this
- Studying virality could be cool, but I'm not sure Buzzfeed or an established media company is the right place. Maybe a crypto project?
- The best mentorship comes when someone has an aligned incentive to improve your skills; internships are OK but rely on the goodwill of the mentor more than actual aligned incentives, since most intern work is considered throwaway
- One thing you didn't mention but I'd encourage: start a pro... (read more)
The Tap Water Database seems to be less forthcoming with their data. Their response: "We don’t share the back end of the database with anyone.... I’m happy to run your proposal by the science team. Just send me a few detailed sentences about your research, whom you’re affiliated with, and where you’re going to publish it."

I have no credentials in this space (my background is in software dev); would anyone with a relevant background be willing to help compose a reply + lend their affiliation?
I emailed both sources, and County Health Rankings got back! They offer a spreadsheet download here.
I've copied the data to this Google sheet here (under tab "Ranked Measure Data", column BN) for easier access. What's remaining before we can get it to the statistician:
Happy for anyone else to jump in too!
Metaculus has a calibration tutorial too: https://www.metaculus.com/tutorials/
I've been thinking about adding a calibration exercise to https://manifold.markets as well, so I'm curious: what makes one particular set of calibration exercises more valuable than another? Better UI? Interesting questions? Legible or shareable results?