Jalex Stark

Researcher. Currently my biggest interest is in interactive theorem proving and the sociological phenomena around it. See https://leanprover.zulipchat.com/#narrow/stream/113488-general and https://leanprover-community.github.io/

I work as a quant in NYC to stay sane, have access to mentors, and fund my research pursuits. If you're undecided about a quant job, PM me; I'm happy to talk. www.jalexstark.com

Brainstorming positive visions of AI

Your first sentence came off as quite patronizing, so I wasn't able to do a good-faith reading of the rest of your post.

On Option Paralysis - The Thing You Actually Do

Once you've got a method of exercise you actually do, you should apply some optimization pressure to making it more fun, safe, and rewarding.

Doing discourse better: Stuff I wish I knew

Kialo is one attempt to experiment along the forum-design dimensions discussed here.

(EDIT: I don't know how to make external links in the LW dialect of markdown.)

Why GPT wants to mesa-optimize & how we might change this

See GPT-f, which combines a transformer model (with pre-trained language weights?) with AlphaZero-style training to learn to prove theorems.

microCOVID.org: A tool to estimate COVID risk from common activities

Surviving COVID might cost a lot of QALYs from permanent lung and brain damage. It might also cost a lot of expected future earnings for the same reason.
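The comment's point can be sketched as arithmetic: extend a microCOVID-style risk estimate beyond mortality to include lasting morbidity. Every parameter below is a made-up placeholder for illustration, not an actual estimate of long-COVID risk.

```python
# Hypothetical back-of-the-envelope sketch: expected QALY loss from an
# activity whose infection risk is measured in microCOVIDs.
# 1 microCOVID = a one-in-a-million chance of contracting COVID.
# p_lasting_damage and qalys_lost are placeholder values, not estimates.

def expected_qaly_loss(microcovids, p_lasting_damage=0.05, qalys_lost=5.0):
    """Expected QALY loss = P(infection) * P(lasting damage | infection)
    * QALYs lost given lasting damage."""
    p_infection = microcovids / 1_000_000
    return p_infection * p_lasting_damage * qalys_lost

# A 10,000-microCOVID activity (a 1% infection chance) under these
# placeholder numbers costs 0.01 * 0.05 * 5 = 0.0025 expected QALYs.
loss = expected_qaly_loss(10_000)
```

The same multiplication would apply to expected earnings: swap `qalys_lost` for a dollar figure.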

Meaningful Rest

I am likely to remember this string for a while: "at those times I can’t do anything *but* my default policy".

Forecasting AI Progress: A Research Agenda

People who find this arXiv post interesting may also want to read or listen to this interview by Arden Koehler with Danny Hernandez, who works on the Foresight team at OpenAI.

https://80000hours.org/podcast/episodes/danny-hernandez-forecasting-ai-progress/

How much is known about the "inference rules" of logical induction?

I think short timescale behavior of logical induction is model-dependent. I'm not sure whether your first conjecture is true, and I'd guess that it's false in some models.

I find myself a little confused. Isn't it the case that the probability of a statement converges to 1 if and only if it is provable?
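In the notation of the logical induction paper (writing $\mathbb{P}_n$ for the inductor's belief state on day $n$, with provability relative to the base theory, e.g. PA), the equivalence being asked about would read:

```latex
\lim_{n \to \infty} \mathbb{P}_n(\phi) = 1
\quad \Longleftrightarrow \quad
\mathrm{PA} \vdash \phi
```

The paper's provability induction property gives the right-to-left direction (provable statements converge to probability 1); whether the left-to-right direction also holds is the part in question here.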

How much is known about the "inference rules" of logical induction?

Eigel is asking a specific (purely mathematical!) question about "logical induction", which is defined in the paper they linked to. Your comment seems to miss the question.

You talk mostly about community-level information gain. I'd like to add that the act of translation is a good way for an individual to generate new insights. In theorem-proving academia, there's a lot of juice to get out of shuttling information back and forth between people in math, physics, CS, and econ departments.