All of Austin Chen's Comments + Replies

2Sinclair Chen2mo
Inositol indeed. I don't know anyone else that's tried this. I'd only bet 55-65% that it works for any given person. But it's available over the counter and quite safe. I should probably get around to setting up a more rigorous experiment one of these days...
It’s Probably Not Lithium

I've been following the SMTM hypothesis with great interest; don't have much to add on a technical level, but I'm happy to pay a $200 bounty in M$ to Natália in recognition of her excellent writeup here.  Also - happy to match (in M$) any of the bounties that she outlined!

"Science Cathedrals"

San Jose has The Tech Interactive (formerly The Tech Museum of Innovation) located in the downtown. I remember going often as a kid, and being enthralled by the interactions and exhibits. One of the best is located outside, for free: a 2-story tall Rube Goldberg machine that shuffles billiards balls through various contraptions. Absolutely mesmerizing.

AGI Ruin: A List of Lethalities

I'd have more hope - not significant hope, but more hope - in separating the concerns of (a) credibly promising to pay big money retrospectively for good work to anyone who produces it, and (b) venturing prospective payments to somebody who is predicted to maybe produce good work later.

 

I desperately want to make this ecosystem exist, either as part of Manifold Markets, or separately. Some people call it "impact certificates" or "retroactive public goods funding"; I call it "equity for public goods", or "Manifund" in the specific case.

If any... (read more)

I'm also on a team trying to build impact certificates/retroactive public goods funding and we are receiving a grant from an FTX Future Fund regrantor to make it happen!

If you're interested in learning more or contributing you can:

  • Read about our ongoing $10,000 retro-funding contest (Austin is graciously contributing to the prize pool)
  • Submit an EA Forum Post to this retro-funding contest (before July 1st)
  • Join our Discord to chat/ask questions
  • Read/Comment on our lengthy informational EA forum post "Towards Impact Markets"
New Water Quality x Obesity Dataset Available

Thanks again Elizabeth for pushing forward this initiative; Slime Mold Time Mold's obesity hypothesis has been one of the most interesting things I've come across in the last couple years, and I'm glad to see citizen research efforts springing up to pursue it~

The credit for combining the data set really goes to Oliver S and Josh C; I mostly just posted the bounty haha:

Here's a List of Some of My Ideas for Blog Posts

I'm biased towards all the prediction market ones, naturally haha. In case you wanted to get a head start on manipulating markets for fun & profit:

The AI Countdown Clock

I like this a lot! I am also the kind of person to use a new tab death clock, though your post inspired me to update it to my own AI timeline (~10 years).

I briefly experimented with using New Tab Redirect to set your site as my new tab page, but I think it takes a smidgen longer to load haha (it needs to fetch the Metaculus API or something?)

2River Lewis3mo
I’m so glad the idea resonates with you! You’re exactly right about the slightly slow loading — it’s on my list to fix, but I wanted to get something out there. :)
ProjectLawful.com: Eliezer's latest story, past 1M words

Sorry about that - had some configuration issues. It should work now!

2gjm3mo
Yup, all working (and apparently up to date) now. Thanks!
ProjectLawful.com: Eliezer's latest story, past 1M words

Thanks! I tried splitting into smaller sections (half the size) so that we don't have this issue as much; not sure what other solutions look like.

2Raemon3mo
Ah thanks, that was helpful.
ProjectLawful.com: Eliezer's latest story, past 1M words

Yeah probably a stale caching layer, what fic were you reading? Glowflow doesn't read from an epub, it's reading html from the site itself.

Lemme try rebooting to see if that refreshes. That's obviously not sustainable... I didn't expect people to actually use it for a live, updating fic lol.

Edit: added a "Clear cache" button, hope that solves it!

7gjm3mo
Currently some way behind again; clicking "clear cache" does do something that looks like reloading but doesn't make it not be some way behind. [EDITED a couple of days later to add:] No change to what the reader is showing since then even though there have been updates on glowfic.com. The "clear cache" button doesn't appear to help. Forcibly reloading the page also doesn't appear to help.
2gjm3mo
I was, unsurprisingly given context :-), trying to read the latest Planecrash: https://share.streamlit.io/akrolsmir/glowflow/main?post=5930. Right now, what happens when I go there is that I get "Oh no. Error running app. If this keeps happening, please contact support." (In both Firefox and Chrome, on Linux. It was OK earlier today. I have Firefox configured to disable most Javascript but have whitelisted the domains I think the app actually needs, and again it was working OK earlier today, and Chrome doesn't have any of the JS-blocking stuff.) The only possibly-useful-looking thing in the Javascript console says "INITIAL -> (10, 0, undefined) -> ERROR".
ProjectLawful.com: Eliezer's latest story, past 1M words

dark mode seems to have stopped working

 

Hm, do you have "dark mode" toggled on the sidebar? (There's two settings unfortunately due to how Streamlit is set up):

 

The outer box doesn't widen together with the text and background, and the text doesn't stay centered

Yeah unfortunately this is mostly working-as-implemented. The box size isn't a thing I can change; "Wide Mode" lets the box be big, otherwise it's small.

Text centering might be possible if you're in "Wide Mode" -- I'll look into that.

1Austin Chen3mo
Text centering should now be live!
ProjectLawful.com: Eliezer's latest story, past 1M words

Done! Thanks for the feedback. Hoping 2000px is plenty but it's easy to increase lol.

(Having too many options is sometimes a symptom of bad UX design, but it seems reasonable for a web reader to support all of these.)

2gjm3mo
Nice! I see it's some way behind the current state of the story. Is that because there's some background thing that fetches only every so often, or because there's a caching layer that might be stale, or what? [EDITED to add:] I guess it's reading from the epub on Dropbox?
2Yoav Ravid3mo
This is how it looks now when I increase the width. The outer box doesn't widen together with the text and background, and the text doesn't stay centered. Also, dark mode seems to have stopped working.
ProjectLawful.com: Eliezer's latest story, past 1M words

Hrm, I think I could code in a way to specify the height of the box... lemme look into it.

Thanks for all your suggestions, btw!

2gjm3mo
I have a kinda-opposite problem. There's a "wide mode" but not a "tall mode", and for reasons beyond my understanding the box with the actual content in it is about half the height of my browser window. I can understand why you wouldn't make it full-width by default -- very wide text can be hard to read -- but I can't think of any reason for preferring less than (full height minus reasonable margins). Personally, for width I would prefer something intermediate between normal and "wide" (given the actual dimensions of my browser window, which occupies half of quite a large monitor) and for height I would prefer All The Pixels. Would it be difficult to have a "manual control" mode where you can set width, height and font size arbitrarily?
ProjectLawful.com: Eliezer's latest story, past 1M words

I hope so! I myself bounced off of Mad Investor Chaos twice before because the formatting was too hard for me to read... but after implementing this reader, spent 2 hours last night reading through it.

Thanks so much for writing this Glowfic!

8Yoav Ravid3mo
Awesome! Thanks! One more thing, currently the box doesn't fully fit on my screen vertically when it's open (I'm using a laptop that doesn't have a high resolution). Is there a way to make the open box fit on the screen?
ProjectLawful.com: Eliezer's latest story, past 1M words

I liked how the epub strips out unnecessary UI from the glowfic site, but downloading and moving epubs around is a pain...

So I built a web reader on top of this code! Check it out here: https://share.streamlit.io/akrolsmir/glowflow/main

It'll work for any Glowfic post actually, eg https://share.streamlit.io/akrolsmir/glowflow/main?post=5111

It would probably be simple to add a download button to get the epub file; source code here.

2Raemon3mo
This is pretty great. One remaining issue is that it seems harder to hold onto your place if you stop in the middle of a section and then reload the page.
2Yoav Ravid3mo
Nice! I'd love a download option, but for now I'll use your reader to read it. I saw there's an option to make the text window wider, which is nice. Is there a way to make the text background dark?

Oh, huh!  I can see that format potentially being preferred by a number of people.

Open & Welcome Thread - May 2022

I've been thinking for a while that maybe forecasting should have its own LessWrong instance, as a place to discuss and post essays (the way EA Forum and AI Alignment have their own instances); curious to get your thoughts on whether this would improve the forecasting scene by having a shared place to meet, or detract by making it harder for newcomers to hear about forecasting?

I really, really wish crossposting and crosslinking was easier between different ForumMagnum instances...

What We Owe the Past

I'm not sure it's as simple as that - I don't know that just because it's your past self, you get to make decisions on their behalf.

Toy example: last week I promised myself I would go hit the gym. Today I woke up and am feeling lazy about it. My lazy current self thinks breaking the promise is a good idea, but does that mean he's justified in thinking that the past version of Austin would agree?

1awenonian3mo
If you only kept promises when you want to, they wouldn't be promises. Does your current self really think that feeling lazy is a good reason to break the promise? I kinda expect toy-you would feel bad about breaking this promise, which, even if they do it, suggests they didn't think it was a good idea. If the gym was currently on fire, you'd probably feel more justified breaking the promise. But the promise is still broken. What's the difference in those two breaks, except that current you thinks "the gym is on fire" is a good reason, and "I'm feeling lazy" is a bad reason? You could think about this as "what would your past self say if you gave this excuse?" Which could be useful, but can only be judged based on what your current self thinks.
What We Owe the Past

I don't even think I owe very much to many stated preferences of contemporary living humans

This feels like something of a crux? Definitely, before we get into respecting the preferences of the past, if we don't agree on respecting the preferences of the present/near-future humans we may not find much to agree on.

I'm not even sure where to begin on this philosophical point -- maybe something like universalizability, like "wouldn't it be good if other contemporary living humans, who I might add outnumber you 7 billion to 1, tried to obey your own stated preferences?"

4Dagon3mo
Indeed - this is very likely a crux. I'd enjoy it if other humans obeyed my stated preferences, but I think I'd lose respect for them as agents (and making very specific object-level requests would show my disrespect for them as moral targets). Doing things that I project to improve overall quality of life for many people is good, IMO. Following arbitrary stated preferences is very rarely an effective way to do that. There are lots of cases where statements are a good hint to utility weightings, and lots of cases where the speaker is confused or misleading or time-inconsistent. Dead people's historical statements, always, are incorrect about what will improve their experienced universe.
What We Owe the Past

just like I wouldn't attend masses just because my friend from 10 years ago who is also dead now wanted me to.

I'm not so sure about this analogy -- intuitively, aren't your obligations to yourself much stronger than to a friend? E.g. if a friend randomly asked for $5000 to pay for a vacation I wouldn't just randomly give it to her; but if my twin or past self spent that much I'd be something like 10-100x more likely to oblige.

What We Owe the Past

Your finger is on the scales with the example of the conservationist. That person's desires are an applause light, while those of their descendants are a boo light. Switch the two sets of desires and the example is no longer persuasive, if it ever was.

 

First: I picked this example partly because "cuteness optimization" does seem weird and contrary and unsympathetic. I imagine that to people in the past, our present lack of concern for our literal neighbors, or views on gay marriage, seem just as unsympathetic.

Second: "cuteness" might not be the exact ... (read more)

Austin Chen's Shortform

Okay, now I've used the live-collab/commenting feature on a LessWrong draft. It's pretty good! If you haven't seen it yet, I'd recommend writing a new LW post and requesting feedback; Justis Mills's feedback was super fast, highly detailed, and all-around incredibly valuable!

Can I turn on inline comments for a published LessWrong post too? Even after "publishing" it'd be super useful to get the comments inline. In my view, a great post should be a timeless, living, breathing, collaborative document, rather than a bunch of words dumped out once and never revi... (read more)

1JustisMills4mo
Thank you!
Austin Chen's Shortform

Suggestion: Inline comments for LessWrong posts, a la Google Docs

It's been commented on before that much intellectual work in the EA/Rat community languishes behind private Google Docs. I think one reason is just that the inline-commenting mechanism on a GDoc is so much better than excerpting the comment below. Has the Lightcone team considered this/what is the status?

(I vaguely recall them working on a live-collab feature, not sure if commenting would have been part of this)

2Austin Chen4mo
Okay, now I've used the live-collab/commenting feature on a LessWrong draft. It's pretty good! If you haven't seen it yet, I'd recommend writing a new LW post and requesting feedback [https://www.lesswrong.com/posts/nsCwdYJEpmW5Hw5Xm/lesswrong-is-providing-feedback-and-proofreading-on-drafts#TL_DR]; Justis Mills's feedback was super fast, highly detailed, and all-around incredibly valuable! Can I turn on inline comments for a published LessWrong post too? Even after "publishing" it'd be super useful to get the comments inline. In my view, a great post should be a timeless, living, breathing, collaborative document, rather than a bunch of words dumped out once and never revisited. (There's value in the latter for eg news posts; but LW's focus is less on that.)
5ChristianKl4mo
If you go into a draft of a post and highlight text you can see a comment button. When I just tested it, clicking the button did nothing but at least the button to create comments is there.
My Superpower: OODA Loops

I think feedback loops and OODA are really great; thanks for drawing attention to this concept! One thing that would have made this post more compelling: do you have any concrete examples of applying OODA in real life?

General Thoughts on Less Wrong

A bit hard to describe; kind of like ratfic, kind of like roleplay, kind of like a forum.

https://luminousalicorn.tumblr.com/post/145319779970/what-is-a-glowfic

General Thoughts on Less Wrong

I do think it's a shame that LW, Alignment Forum, and EA Forum are three separate sites rather than a single one. Maybe there are weird political reasons for this but as a user I don't really care, I just want to be able to navigate between all of them and discover content and crosspost with ease. Some other possible subcommunities:

  • Forecasting and prediction (Especially if we could integrate prediction markets from Manifold!).
  • Tools for Thought slash  https://futureofcoding.org/. Feels like it should have a decent amount of audience overlap. I'm a bit
... (read more)
3ChristianKl4mo
You can actually see Alignment Forum content directly on LessWrong. The content you see there is a subset of the content you see on LessWrong.
2Austin Chen4mo
One more: Progress Studies!
2Chris_Leong4mo
What's Glowfic?
Moloch and the sandpile catastrophe

As a result, there is a very strong near-term possibility that hundreds of millions of people in the Middle East, Africa, and Asia will starve.  Even where starvation does not kill directly, political unrest and food wars may do it.

 

Wait, is this like a 0.1%, 1%, 10%, or 50% possibility? Does this account for how international governments will likely respond to wheat export decreases? (I did a quick search on Metaculus, no results)

Taking this idea seriously, it'd be worth sitting down and figuring out what we can personally do to avert outcomes as bad as hundreds of millions starving.

1Ericf4mo
Also, what is near-term? Supply chain disruptions are currently causing shortages, but new shipments from India and other exporters should arrive within a month or two, which probably isn't long enough for people to literally starve to death. (Since wheat isn't literally 100% of calories, I'd Fermi estimate that a poor resident of North Africa could physically survive for about 3 months with zero wheat consumption by being hungry and eating a little bit more of whatever other food is available.)

Articles about the fall in wheat exports face another Molochian problem. The ones that say that a massive decrease in grain exports will lead to famine are shared widely. Articles that include the context that most wheat is produced and consumed locally, so exports are a small proportion of global wheat production, are boring articles about boring price movements, so they aren't shared widely.

World grain production is 760 million tons per year. Russia exports 35 million tons per year (4.6% of total wheat production), and Ukraine exports 24 million tons (3.... (read more)
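The export shares above can be double-checked with quick arithmetic (treating the comment's 760-million-ton figure as the relevant production total):

```python
# Quick sanity check of the export percentages quoted above.
world_production = 760  # million tons per year, per the comment
russia_exports = 35     # million tons per year
ukraine_exports = 24    # million tons per year

assert round(100 * russia_exports / world_production, 1) == 4.6
assert round(100 * ukraine_exports / world_production, 1) == 3.2
# Combined, the two countries export under 8% of world production:
assert (russia_exports + ukraine_exports) / world_production < 0.08
```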

2Eric Raymond5mo
I don't know how to quantify the risk. I believe it's serious because (a) global economic analysts like Peter Zeihan say it's serious, and (b) it's difficult for me to construct a plausible scenario in which that many tons of wheat a month becoming unavailable doesn't cause famine somewhere. The risk would be especially acute in places like Egypt, Bangladesh, and Japan that combine a large population with a shortage of arable land - but while Japanese are wealthy enough to meet rising prices in the grain markets and get by, Egyptians and Bangladeshis are not.
6Vaniver5mo
I do think it'll be interesting to see if the sorts of things that ALLFED [https://allfed.info/] has been working on for bigger disasters will get deployed for this smaller disaster as well.
Taking Good Heart Tokens Seriously, So Help Me God

Austin from Manifold here - thanks for the shoutout! I would also note on a personal level that Good Heart tokens led me to read/post a lot more on LessWrong than I do on a normal day.

Manifold's already kind of evolving into a forum/discussion site stapled to a prediction market, and spending our currency kind of looks like an upvote if you squint (especially in Free Response markets; placing a bet on an answer is very very similar to upvoting an answer on LessWrong/StackOverflow/Quora).

Incidentally, I've also had the same idea for combining impact certs w... (read more)

Austin Chen's Shortform

I've seen proposals for buying coal mines as a way of efficiently reducing emissions, by reducing the supply of coal and thus driving up coal's price on the open market. But how does that balance against the increased demand for coal mines, which encourages coal prospectors to seek out new coal sources?

Intuitively, this doesn't seem that likely; it feels like new coal sources should be pretty hard to discover? But two worrying examples that come to mind include:

  • Discovery of fracking techniques, which lowered the cost of oil
  • (Apocryphal) the cobra effec
... (read more)
5Vaniver5mo
My sense is that coal mines 1) take a lot of money to make in the first place and 2) have poor future prospects. So the thing that happens if you buy a 40-year old coal mine with 10 years of coal left, and shut it down instead of operate it, is not that someone else just opens up a new coal mine with 50 years of life on it. [But this is probably more of a local effect than a global one-- people are actually opening new coal mines [https://www.reuters.com/world/china/worlds-coal-producers-now-planning-more-than-400-new-mines-research-2021-06-03/] somewhere.]
Manafold Markets is out of mana 🤭

Haha thanks! This took me way too long to make:

Anti-Corruption Market

typo: "we'll be performing few of crimes in Jining" => "a few crimes"/"a few of the crimes"/"few of the crimes"

Depending on if the prior expectation was "no crimes"/"crimes by others"/"a lot of crimes by this group" -- I wasn't actually too sure based on the context.

Thanks for the story!

1lsusr5mo
Fixed. Thanks.
Jetlag, Nausea, and Diarrhea are Largely Optional

Just wanted to say I've purchased a pill organizer based on your recommendation, and plan on checking out your anti-diarrhea and anti-nausea picks! I'll also probably slot in Tums for acid reflux.

AI researchers from Russia looking for a new home

Specifically, what kinds of help are these researchers looking for? Some ideas:

  • A place to stay for several months
  • Job openings in industry or academia
  • Visa sponsorship or immigration advice
  • Direct cash (how much, and for what general purposes?)
3AnonResearch5mo
I'm looking mostly for job openings and help with relocation; venture funding is also an option, i think.
IMO challenge bet with Eliezer

Yes -- this surfaces the issue with long-term markets, where it's not worth the investment lockup when you could be trading on issues with a shorter resolution time.

We have a few tentative proposals to address this:

  • Issue small interest-free loans on the first M$10 bet (Scott's proposal)
  • Margin loans against the value of your long-term bets
  • Set up derivative markets (e.g. "What will the IMO market be at on April 1st 2022")
  • Demurrage, where we charge a fee for cash balances

I'm partial to Scott's proposal, though it may be technically tricky to implemen... (read more)
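A minimal sketch of how the first proposal (interest-free loans on the first M$10 of a bet) might work; the function names and the repayment rule here are illustrative assumptions, not Manifold's actual implementation:

```python
LOAN_CAP = 10  # M$ fronted interest-free per bet (illustrative)

def place_bet(balance: float, stake: float) -> tuple[float, float]:
    """Return (new_balance, loan) after staking `stake` on a market.

    The first LOAN_CAP of the stake is fronted as an interest-free
    loan, so the user's cash balance only drops by the excess.
    """
    loan = min(stake, LOAN_CAP)
    return balance - (stake - loan), loan

def resolve_bet(balance: float, payout: float, loan: float) -> float:
    """On resolution, the loan is repaid out of the payout first."""
    return balance + max(payout - loan, 0)

# Betting M$25 only costs M$15 of cash up front:
bal, loan = place_bet(100, 25)
assert (bal, loan) == (85, 10)
```

The point of the loan is to reduce the opportunity cost of parking mana in a market that won't resolve for years, without changing the bet's eventual payout.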

IMO challenge bet with Eliezer

I set up a Manifold market for this, for anyone who'd like to bet along: https://manifold.markets/Austin/will-an-ai-get-gold-on-any-internat

The concept of epistemic bits is quite interesting! In Manifold terms, that would be like comparing the total amount of M$ one has amassed (maybe in a specific community like the one for AI) and using them as weights for how seriously to take two commentators.

I worked through an example just to prove the math to myself:

Eliezer and Paul both start with 1000 M$.

For Eliezer to end up with 2x the epistemic weight, Paul wag... (read more)
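One way to finish that arithmetic (a sketch assuming epistemic weight is simply proportional to final M$ balance; the truncated comment may set it up differently):

```python
# Eliezer and Paul both start with M$1000. If Paul wagers w and
# loses it to Eliezer, how big must w be for Eliezer to end up
# with 2x Paul's balance?  Solve 1000 + w == 2 * (1000 - w).
start = 1000
w = start / 3  # => w is about M$333.33

eliezer_final = start + w
paul_final = start - w
assert abs(eliezer_final - 2 * paul_final) < 1e-9
```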

6paulfchristiano6mo
I bet the market down from 58% to 40% which seems like a steal on the merits. Seems like it will mostly lock up my money though given the long resolution timeline, unless people eventually settle down to a significantly lower consensus.
Create a prediction market in two minutes on Manifold Markets

We haven't thought through the PoH thing too much -- honestly, you probably know much more about it than we do!

I like the way LessWrong voting is set up a lot; I've actually wondered about a prediction-market mapping system where upvotes = betting that a particular comment will be frontpaged/featured/chosen by mods for quality, and downvotes would be a bet in the other direction. It'd be a cool experiment, though maybe fairly intensive to run.

1acylhalide6mo
Nice I thought about betting on comments too once (with $$) but someone pointed out that the person betting will then write comments on the post (possibly using multiple accounts) to bias the mod or whatever procedure is used to resolve the bet. Maybe it can work with play money though. On betting on pred. markets with play money, I feel like giving experienced users more play money might ensure net prediction is better. I don't know too much about it but there's lots of articles on building in-game economies, you want there to be a skill progression but you also want it to seem fair, and control inflation, etc.
Create a prediction market in two minutes on Manifold Markets

Haha, some of our users have already invented similar markets for seeing if a market will be resolved correctly (e.g. https://manifold.markets/RavenKopelman/will-dr-ps-question-about-trump-bei ). I think this is a pretty promising solution!

There's still some interface work for making these reputational markets more common and visible, though -- if a popular market is judged likely to be fraudulently resolved, this should be very noticeable to a new user.

Kleros is another (crypto) solution for deciding in contentious cases; I believe Omen actually supports Kleros-mediated contracts as a fallback for their user-generated markets.

Create a prediction market in two minutes on Manifold Markets

Thanks for the suggestions! I'm not sure if you meant "less like money" or "less like play". In my head, we actually do want these to be money-like (valuable, transferable, fungible, etc), but less "play", which is why we're exploring fiat and crypto off-ramps.

Right now we require Google accounts, which is another weak form of KYC! KYC helps solve the problem of unique human users so users don't fraudulently take advantage of our sign-up bonuses; in the crypto world we might use something like Proof of Humanity for this purpose.

Incentivizing activity (eg... (read more)

1acylhalide6mo
Less like play :) Proof of humanity is super cool and something I am familiar with - which protocol do you plan to use? re: last para, I see. Lesswrong voting experiments with a system where experienced users get more weight, but I'm not sure how (or if) that maps to prediction markets.
Create a prediction market in two minutes on Manifold Markets

Oh, yes, that's a fair point! I think personal questions may self-correct for this, because they'll draw in less interest and less volume compared to a general-interest question (so possible fraud on personal questions is less profitable). Creators may have more of an informational incentive to let personal markets work well?

But it is a good point, that personal questions are much harder to audit and thus contribute less to reputation; if we formalize a reputation system it's one factor to consider!

Create a prediction market in two minutes on Manifold Markets

We do actually pay out the question creators! Right now it's 4% of profits. We don't do a great job of making this understandable in the UI though - and predictably (heh) most of our creators are more interested in the question outcome than in earning transaction fees.
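As a sketch of those fee mechanics (the 4% figure comes from the comment above; the rule that the fee applies only to profits, and everything else here, is an illustrative assumption):

```python
CREATOR_FEE = 0.04  # 4% of profits, per the comment above

def creator_fee(stake: float, payout: float) -> float:
    """Fee paid to the market creator out of a bettor's profit.

    Charged only on profit (payout above stake), never on losses.
    """
    profit = max(payout - stake, 0)
    return CREATOR_FEE * profit

assert creator_fee(stake=100, payout=150) == 2.0  # M$2 on M$50 profit
assert creator_fee(stake=100, payout=80) == 0.0   # no fee on a loss
```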

A controversy page is interesting - kind of like Airbnb or Amazon reviews, but on a seller rather than on a product.

Create a prediction market in two minutes on Manifold Markets

I'm not sure I understand - are you saying that a subjective personal question is one where you'd be more tempted to resolve incorrectly (or delay resolution)? There's no clear benefit to the market creator of delaying a resolution (they can't spend the funds that are committed to the markets), but definitely you're taking on some risk that the market creator will insider trade or otherwise act unethically on their market.

2Sherrinford6mo
Purely subjective personal questions are questions where others cannot check reliably whether you resolved in an "unfair" way. So reputation also does not work, at least it takes a lot of time. I edited the text of my first comment, using the words from Daniel's comment. Maybe it's easier to understand now.
Create a prediction market in two minutes on Manifold Markets

Yes, that's absolutely the kind of prediction market we'd love to enable at Manifold! I'd love to chat more about specifically the personal medical use case, and we'd already been considering applying to SFF -- let's get in touch (I'm akrolsmir@gmail.com).

Create a prediction market in two minutes on Manifold Markets

Thanks! A real-money/crypto version of Manifold is very high on our priorities as well; they do have their own challenges (regulation for real money, technical infrastructure for crypto), but we're optimistic about being able to solve them.

And the mechanism you describe around reputation for fairness is exactly how we expect things to play out! I do think some more work around surfacing some kind of judgment metric could be useful (eg total amount fairly adjudicated) but we have more thinking to do. If anyone has thoughts on what reputational metrics could be useful, let us know!

2philh6mo
This is one of those "could easily go wrong in any number of ways" ideas, but... You could plausibly have reputation encoded in other prediction markets. Like, I create a market "will X happen?" and people don't know how much to trust me. A trusted user could create markets for any or all of:

  • Will X happen? (Based on their own judgment, not mine.)
  • Will philh judge correctly whether X happened?
  • Conditional on X happening, will philh judge that X happened?
  • Conditional on X not happening, will philh judge that X didn't happen?

And people could look at those markets to guess how much they should trust me, and people who know something about me can play in them. Though that first one could also be done with the motivation of getting the profits from the question, where people will prefer to play in the trusted user's market instead of mine, which seems maybe not great.
4Daniel Kokotajlo6mo
I'm not sure a formal metric is necessary. Maybe you could just have a "controversy" page associated with each user, where people can complain about how particular questions were resolved, and e.g. post evidence like "An anonymous account bought $10k worth of No when the probability was at 92%, and then an hour later that day the question resolved Yes!" Someone who is really trying to scam people would probably pretty quickly accumulate a pretty damning controversy page that anyone could see at a glance was pretty damning. The exception to this would be "grey area" questions where it totally is subjective how it should go. For those questions they can make profit via anonymous accounts without anyone being able to tell what's happening. But hopefully this isn't a huge deal. For comparison, people will resolve many grey area questions in a biased way anyway, e.g. "Will Trump attempt to illegally hold on to power if he loses the 2020 election?" would probably be resolved positive if a Democrat created the question and negatively if a Republican did. If the amount of bias/noise introduced by illicit profit-making is no bigger than the "baseline" amount of bias/noise inherent in the system, then maybe it's not worth worrying about. Originally I was going to suggest paying the question creators 1% of the proceeds of each question. However I think that might not be necessary. They are getting rewarded by having their questions answered, after all.
What should a student who is already a decent programmer do over the summer?

This is a great question to ask! Random assorted thoughts:
- "Business development intern" sounds fairly low ROI, I wouldn't go for this
- Studying virality could be cool, but I'm not sure Buzzfeed or an established media company is the right place. Maybe a crypto project?
- The best mentorship comes when someone has an aligned incentive to improve your skills; internships are OK but do rely on goodwill of the mentor more than actual aligned incentives, since most intern work is considered throwaway
- One thing you didn't mention but I'd encourage: start a pro... (read more)

(briefly) RaDVaC and SMTM, two things we should be doing

The Tap Water Database seems to be less forthcoming with their data. Their response: "We don’t share the back end of the database with anyone.... I’m happy to run your proposal by the science team. Just send me a few detailed sentences about your research, whom you’re affiliated with, and where you’re going to publish it."

I have no credentials in this space (my background is in software dev); would anyone with a relevant background be willing to help compose a reply + lend their affiliation?

3Elizabeth7mo
Hi Austin- I keep holding off responding until I have a new plan, but now I'm swamped so that's going to take a bit. Thank you for trying!
(briefly) RaDVaC and SMTM, two things we should be doing

I emailed both sources, and County Health Rankings got back! They offer a spreadsheet download here.

I've copied the data to this Google sheet here (under tab "Ranked Measure Data", column BN) for easier access.  What's remaining before we can get it to the statistician:

  • Get access to the water database (they might charge for this? not super sure, I just pinged them again)
  • Line up counties to zip codes (I think this link should suffice)
  • Write a script to combine these into a single dataset keyed by zip code (I could probably do this)

Happy for anyone else to jump in too!
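The combining step in that list could be sketched like this; the FIPS codes, zip codes, and obesity figures below are made-up placeholders, and the real datasets will need cleaning first:

```python
# Sketch: join a county-level measure onto a county -> zip mapping.
# All values here are placeholder examples, not the real data.
obesity_by_county = {"06085": 18.2, "06001": 21.5}  # county FIPS -> %
county_by_zip = {"95110": "06085", "95112": "06085", "94601": "06001"}

# One entry per zip code, carrying that county's obesity measure:
obesity_by_zip = {
    z: obesity_by_county.get(fips) for z, fips in county_by_zip.items()
}
assert obesity_by_zip["95110"] == 18.2
assert obesity_by_zip["94601"] == 21.5
```

One caveat with this approach: zip codes can straddle county lines, so a real script would need a crosswalk with population weights rather than a one-to-one mapping.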

List of Probability Calibration Exercises

Metaculus has a calibration tutorial too: https://www.metaculus.com/tutorials/

I've been thinking about adding a calibration exercise to https://manifold.markets as well, so I'm curious: what makes one particular set of calibration exercises more valuable than another? Better UI? Interesting questions? Legible or shareable results?

2Isaac King7mo
Questions about a topic that I don't know about result in me just putting the max entropy distribution on that question, which is fine if it's rare, but leads to unhelpful results if they make up a large proportion of all the questions. Most calibration tests I found pulled from generic trivia categories such as sports, politics, celebrities, science, and geography. I didn't find many that were domain-specific, so that might be a good area to focus on. Some of them don't tell me what the right answers are at the end, or even which questions I got wrong, which I found unsatisfying. If there's a question that I marked as 95% and got wrong, I'd like to know what it was so that I can look into that topic further. It's easiest to get people to answer small numbers of questions (<50), but that leads to a lot of noise in the results. A perfectly calibrated human answering 25 questions at 70% confidence could easily get 80% or 60% of them right and show up as miscalibrated. Incorporating statistical techniques to prevent that would be good. (For example, calculate the standard deviation for that number of questions at that confidence level, and only tell the user that they're over/under confident if they fall outside it.) The fifth one in my list above does something neat where they say "Your chance of being well calibrated, relative to the null hypothesis, is X percent". I'm not sure how that's calculated though.
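Isaac's statistical suggestion can be sketched directly; this uses a simple normal approximation (plus or minus two standard deviations) rather than a proper binomial test:

```python
import math

def calibration_band(n: int, p: float, k: float = 2.0) -> tuple[float, float]:
    """Range of observed hit rates consistent with true calibration.

    For n questions answered at confidence p, the observed fraction
    correct has standard deviation sqrt(p * (1 - p) / n); flag a user
    as miscalibrated only if they fall outside +/- k standard deviations.
    """
    sd = math.sqrt(p * (1 - p) / n)
    return p - k * sd, p + k * sd

lo, hi = calibration_band(25, 0.70)
# 25 questions at 70%: getting 60% or 80% right is within the band,
# so neither result should be reported as miscalibration.
assert lo < 0.60 and 0.80 < hi
```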