LESSWRONG
rotatingpaguro

Comments
rotatingpaguro's Shortform
rotatingpaguro · 8d

AI safety training applications are my intellectual clock.

In this period of my life I'm spending some time applying to various AI safety short courses and programs, like MATS. Apply to all the good acronyms, as they say. The applications often declare they take 1–2 hours to fill in; I invariably put in something more like 10. A hypothetical application might start by asking some euphemistic form of

> <what do you want to do with your life?>

Oh man, I don't even know what I ate for dinner yesterday. I pace back and forth through my study, then decide to postpone completing the application to the next day. I fall asleep, mulling over the meaning of life.

The next day:

> <is AI going to kill us all?>

...having a p(doom) has always struck me as a bad idea; turns out Yudkowsky agrees, but apparently also has a p(doom), which I think is decision-theoretically consistent with him believing everyone else to be an idiot, amongst other possible explanations; should I make up some bafflegab about decision theory? Maybe I should just say "maybe". This is the easy question.

> <do you know how to improve the state of the art on this thing below?>

Oh yes, I was chatting about this yesterday with von Neumann and my sister, she said to just use quantum computronium, and check the error bars.

I pace back and forth across the study, sit down, stand up again, pace, sit down, re-open the laptop, stand up, close the laptop, pace, open the laptop, sit down, up, feint pacing but then close the laptop before it can realize. I go to bed and think. After one hour, I conclude I'm going to have milk for breakfast tomorrow, as I have every day since I was 6, and fall asleep.

Next day. Answer the question. I improved the state of the art, yes, yes. Next question.

> <do you like your mom or your dad more?>

Fear leads to anger, anger leads to hate, hate leads to suffering, suffering leads to the dark side. So I guess mom? Perhaps I should get some sleep again, as night is the mother of counsel, and write the answer from scratch tomorrow.

Tomorrow:

> <please predict the future, explaining your reasoning (100 words, no cheating)>

Uhm, so I guess I was supposed to cheat on the other questions. Better go back and rewrite all the answers with cheating. Prompting is Turing-complete, so there should be some way to prompt GPT-5 to output my answers; would that count as cheating?

...aaand 10 hours have passed!

I already know about all these important issues before getting through the applications. But being asked about them in a bland anonymous form I fill in while bored has on me the psychological effect of a stranger stopping me on the road, grabbing my arms, and shouting "what is the meaning of life, QUICK" as they jolt me. In no other context am I compelled to answer such questions (in 100–200 words). So I take the occasion of each application deadline to sit down and think about my life.

The Most Common Bad Argument In These Parts
rotatingpaguro · 22d

...yeah ok my google-fu was grandma-level here, also I should just have asked a chatbot. Confirmed guf = guf as you know it.

The Most Common Bad Argument In These Parts
rotatingpaguro · 26d

What's a guf?

AI Safety Research Futarchy: Using Prediction Markets to Choose Research Projects for MARS
rotatingpaguro · 1mo

I was referring to the fact that you set LessWrong posts with karma thresholds as target metrics. In general, this kind of thing has the negative side effect of incentivizing the exploitation of loopholes in the LessWrong moderation protocol, karma system, and community norms to increase the karma of one's own posts. See Goodhart's law.

I do not think this is currently a problem. My superficial impression of your experiment is that it is good. However, this kind of thing could become a problem down the line if it becomes more common. It would be borne out as a mix of lower forum quality and increased moderation work.

AI Safety Research Futarchy: Using Prediction Markets to Choose Research Projects for MARS
rotatingpaguro · 1mo

LessWrong is increasingly being put under pressure, I hope it does not become a journal. I wish good luck to the admins.

AI #134: If Anyone Reads It
rotatingpaguro · 1mo

> Now that the value of OpenAI minus the nonprofit’s share has tripled to $500 billion, that is even more true. We are far closer to the end of the waterfall. The nonprofit’s net present value expected share of future profits has risen quite a lot. They must be compensated accordingly, as well as for the reduction in their control rights, and the attorneys general must ensure this.

I think this reasoning is flawed, but my understanding of economics is pretty limited so take my opinion with a grain of salt.

I think it's flawed in that investors may have priced in the fact that the fancy nonprofit, the AGI dream, and whatnot were mostly a dumb show. So 500 G$ is closer to the full value of OpenAI than to the value left over once the nonprofit's share, under the current setup interpreted to the letter, is taken out.

Chinese room AI to survive the inescapable end of compute governance
rotatingpaguro · 2mo

Comment to myself after a few months:

To make this the case, the new paradigm should, when properly studied and optimized, lead to more efficient AI systems than DL below the threshold where it stops scaling. The alternative paradigm should make it possible to reach the same level of performance with less compute. For example, imagine the new paradigm used statistical models with a training procedure close to kosher Bayesian inference, and thus had a near-guarantee of squeezing all the information out of the training data (within the capped intelligence of the model).

I now think that LLM pre-training is probably already pretty statistically efficient, so nope, can't do substantially better through the route in this specific example.

Contra Yudkowsky's Ideal Bayesian
rotatingpaguro · 2mo

A general way my mental model of how statistics works disagrees with what you write here is on whether the specific properties that are in different contexts required of estimators (calibration, unbiasedness, minimum variance, etc.) are the things we want. I think of them as proxies, and I think Goodhart's law applies: when you try to get the best estimator in one of these senses, you "pull the cover" and break some other property that you would actually care about on reflection but are not aware of.

(Not answering many points in your comment to cut it short, I prioritized this one.)
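
As a minimal sketch of the kind of conflict between estimator properties pointed at above (the example and the code are illustrative additions, not from the comment): for normal data, the textbook unbiased variance estimator (divide by n−1) is beaten on mean squared error by a deliberately biased one (divide by n+1), so optimizing for unbiasedness sacrifices MSE.

```python
import numpy as np

# Compare two variance estimators on simulated normal data:
# dividing the sum of squared deviations by n-1 (unbiased) vs. by n+1
# (biased, but minimum-MSE under normality).
rng = np.random.default_rng(0)
true_var = 4.0
n, reps = 10, 200_000

x = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
ss = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1)  # per-sample sum of squares

est_unbiased = ss / (n - 1)
est_min_mse = ss / (n + 1)

for name, est in [("divide by n-1", est_unbiased), ("divide by n+1", est_min_mse)]:
    bias = est.mean() - true_var
    mse = np.mean((est - true_var) ** 2)
    print(f"{name}: bias ~ {bias:+.3f}, MSE ~ {mse:.3f}")
```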

Contra Yudkowsky's Ideal Bayesian
rotatingpaguro · 2mo

Bayesian here. I'll lay down my thoughts after reading this post in no particular order. I'm not trying to construct a coherent argument for or against the thesis of your post, not due to lack of interest but due to lack of time; in general, though, it will be evident that I'm pro-Bayes:

  • I have the impression that Yudkowsky has lowered his level of dogmatism since then, as is common with aging. I've never read this explicitly discussed by him, but I'll cite one holistic piece of my evidence: that I've learned more about the limitations of Bayes from LessWrong and MIRI than from discussions with frequentists (I mean people preferring or mostly using frequentist stuff, or at least treating Bayes as a weird thing, not ideologues who refuse to use it). Going beyond Bayes seems like a central theme in Yudkowsky's work, though it's always framed as extending Bayes. So I'll take a stab at guessing what Yudkowsky thinks right now: that AIXI is the simplest complete idealized model of a Bayesian agent; that it is of course a model and not reality; and that general out-of-formal-math evidence aggregation points to Bayes being an important property of intelligence, agency, and knowledge that's going to stick around, like Newton is still used after Einstein. This amounts to saying that Bayes is a law, though he wouldn't today describe it with the biblical vibes he had when writing the Sequences.
  • I have empirically observed that Bayes is a good guide to finding good statistical models, and I venture that if you think otherwise you are not good enough at the craft. It took me years of usage and study to use it myself in that sense, rather than just using Bayesian stuff as pre-packaged tools, basically equivalent to frequentist stuff apart from idiosyncratic differences in convenience in each given problem.
  • I generally have the impression that the mathematical arguments you mention focus a lot on the details and miss the big picture. I don't mean that they are wrong; I trust that they are right. But the overall picture I'd draw out of them is that Bayes is the right intuition, it's substantially correct, though it's a simplified model of course and you can refine it in multiple directions.
  • Formally, a frequentist estimator is just any function. Though of course you'll require sensible properties of your estimators, there's no real rule about what a good frequentist estimator should be. You can ask it to be unbiased, or to minimize MSE, under i.i.d. repetition. What if i.i.d. repetition is not a reasonable assumption? What if unbiasedness is in contradiction with minimizing MSE? Bayes gives you a much, much smaller subset of stuff to pick from in a given problem, though one still large in absolute terms. That's the strength of the method: in practice, you should not need anything outside that much smaller subset of potential solutions for your inference problems. They are also valid as frequentist solutions, but this does not mean that Bayes is equivalent to frequentism, because the latter does not select those solutions so specifically.
  • OLS is basically Bayesian. If you don't like the improper prior, pick a proper, very diffuse one; this should not matter in practice (a minimal numerical sketch of this point appears after the list). If it happens to matter in some case, I bet the setup is artificial and contrived. OLS is not a general model of agency and intelligence; it's among the simplest regression models, and it need not work under extreme hypothetical scenarios, it needs to work for simple stuff. If I ran OLS and got beta_1 = 1'000'000'000'000, I would immediately think "I fucked up", unless I was already well aware of putting wackily unscaled values into the procedure, so a wide proper prior matches practical reasoning at an intuitive level. Which does not mean that Bayes is a good overall model of my practical reasoning at that point, which should point to "re-check the data and code", but I take it as a good sign for Bayes that it points in the right direction within the allowance of such a simplified model.
  • Thank you for the many pointers to the literature on this, this is the kind of post one gets back to in the future (even if you consider it a rush job).
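
To make the "OLS is basically Bayesian" bullet concrete, here is a minimal numerical sketch (the data, the prior scale, and the code are illustrative additions, not the commenter's): with a proper but very diffuse Gaussian prior on the coefficients, the posterior mean is a ridge estimate with a vanishing penalty and coincides with OLS to numerical precision.

```python
import numpy as np

# OLS vs. the posterior mean under a very diffuse proper Gaussian prior.
# Model: y | beta ~ N(X beta, sigma^2 I), prior: beta ~ N(0, tau^2 I).
# Posterior mean: (X'X + (sigma^2 / tau^2) I)^{-1} X'y, i.e. ridge with a tiny penalty.
rng = np.random.default_rng(1)
n, p, sigma = 200, 3, 1.0
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(0.0, sigma, size=n)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares

tau = 1e4                     # very diffuse prior scale
lam = sigma ** 2 / tau ** 2   # effective ridge penalty
beta_bayes = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("OLS:           ", np.round(beta_ols, 8))
print("diffuse prior: ", np.round(beta_bayes, 8))
print("max |diff|:    ", np.abs(beta_ols - beta_bayes).max())
```

The only design choice here is the prior scale tau; making it larger just drives the effective penalty sigma^2/tau^2 toward zero, so "this should not matter in practice" checks out in this simple case.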
AI #130: Talking Past The Sale
rotatingpaguro · 2mo

In the last few years, as I've read blogosphere takes on AI in the US vs. China, I've always felt confused by this kind of question (export to China or not? China this or that?). I really don't have an intuition for the right answer here. I've never thought about this deeply, so I'll take the occasion to write down some thoughts.

Conditional on the scenario where dangerous AI/the point of no return arrives in 2035 with AI development remaining free (i.e., not a scenario where it would have arrived earlier but was regulated away):

Considering the question Q = "Is China at the cutting edge with chips in 2035?":

Then I consider three policies and write down P(Q|do(Policy)):

P(Q|free chips trade with China) = 30%
P(Q|restrictions on exports to China of most powerful chips) = 50%
P(Q|block all chips exports to China) = 80%

I totally made up these percentages; I guess my brain simply generated three ~evenly-spaced numbers in (0, 100).

Then the next question would be: what difference does Q make? Does it make a difference if China is at the same level as the US?

The US is totally able to create the problem in the first place, from scratch, in a unipolar world. Would an actually multipolar world be even worse? Or would it make no difference, because the US is racing against itself? Or would it have the opposite effect, where the US is forced to actually sit down at the table?

Posts

5 · rotatingpaguro's Shortform · 8d · 1
-4 · Chinese room AI to survive the inescapable end of compute governance · 9mo · 1
10 · I want a good multi-LLM API-powered chatbot [Q] · 1y · 5
149 · At 87, Pearl is still able to change his mind · 2y · 15
20 · Contra LeCun on "Autoregressive LLMs are doomed" · 3y · 20
1 · Bayesian optimization to find molecules that bind to proteins · 3y · 0