LESSWRONG

ryan_b

Sequences
National Institute of Standards and Technology: AI Standards

Posts
6 · ryan_b's Shortform · 6y · 58

Comments
Sorted by Newest

Underdog bias rules everything around me
ryan_b · 21d · 20

Boiling this down for myself a bit, I want to frame this as a legibility problem: we can see our own limitations, but outsiders' successes are much more visible than their limitations.

Yudkowsky on "Don't use p(doom)"
ryan_b · 22d · 40

I'm inclined to look at the blunt limitations of bandwidth on this one. The first hurdle is that p(doom) can pass through tweets and shouted conversations at Bay Area house parties.

Yudkowsky on "Don't use p(doom)"
ryan_b · 23d · 42

I also think he objects to putting numbers on things, and I avoid doing it myself. A concrete example: I explicitly avoid putting numbers on things in LessWrong posts. The reason is straightforward: if a number appears anywhere in the post, about half of the conversation in the comments will be about that number, to the exclusion of the point of the post (or the lack of one, etc). So unless numbers are indeed the thing you want to be talking about, in the sense of detailed results of specific computations, they positively distract the audience from the rest of the post.

I focused on the communication aspect in my response, but I should probably also say that I don't really track what the number is when I actually go to the trouble of computing a prior, personally. The point of generating the number is to clarify the qualitative information, and the point remains the qualitative information after I get the number; I only really start paying attention to what the number is if it stays consistent enough across repeated generate-a-number moves that I recognize it as basically the same as the last few times. Even then, I am spending most of my effort on the qualitative level directly.

I make an analogy to computer programs: the sheer fact of successfully producing an output without errors weighs much more than whatever the value of the output is. The program remains our central concern, and continuing to improve it using known patterns and good practices for writing code is usually the most effective method. Taking the programming analogy one layer further, there's a significant chunk of time where you can be extremely confident the output is meaningless; suppose you haven't even completed what you already know to be minimum requirements, and compile the program anyway, just to test for errors so far. There's no point in running the program all the way to an output, because you know it would be meaningless. In the programming analogy, a focus on the value of the output is a kind of "premature optimization is the root of all evil" problem.
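
To make the compile-just-to-check-for-errors move concrete, here is a minimal Python sketch. The `estimate_doom` stub and the draft string are hypothetical, purely for illustration; the point is that we can verify a knowingly incomplete program is well-formed without ever running it to an output.

```python
# A deliberately incomplete program: the known minimum requirements are not met yet.
draft = """
def estimate_doom(evidence):
    # placeholder body; any output at this stage would be meaningless
    raise NotImplementedError
"""

try:
    # compile() checks that the source is well-formed without executing it,
    # which is all we care about at this point.
    compile(draft, "<draft>", "exec")
    print("no errors so far; running it for an output would still be pointless")
except SyntaxError as err:
    print(f"error worth fixing now: {err}")
```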

I do think this probably reflects the fact that Eliezer's time is mostly spent on poorly understood problems like AI, rather than on stable, well-understood domains where working with numbers is a much more reasonable prospect. But it still feels like, even in the case where I am trying to learn something that is well understood, just not by me, trying for a number feels opposite to the idea of hugging the query, somehow. Or in virtue language: how does the number cut the enemy?

Yudkowsky on "Don't use p(doom)"
ryan_b · 23d · 53

I can't speak for Eliezer, but I can make some short comments about why I am suspicious of thinking in terms of numbers too quickly. I warn you beforehand that my thoughts on the subject aren't very crisp (else, of course, I could put a number on them!).

Mostly I feel like emphasizing the numbers too much fails to respect the process by which we generate them in the first place. When I go as far as putting a number on it, the point is to clarify my beliefs on the subject; it is a summary statistic about my thoughts, not the output of a computation (I mean, it technically is, but not of a legible computation process we can inspect and maybe reverse). The goal of putting a number on it, whatever it may be, is not to manipulate the number with numerical calculations, any more than the goal of writing an essay is to grammatically manipulate the concluding sentence, in my view.

Through the summary statistic analogy, I think that I basically disagree with the idea of numbers providing a strong upside in clarity. While I agree that numbers as a format are generally clear, they are only clear as far as that number goes - they communicate very little about the process by which they were reached, which I claim is the key information we want to share.

Consider the arithmetic mean. This number is perfectly clear, insofar as it means some numbers were added together and then divided by how many numbers were summed. Yet this tells us nothing about how many numbers there were, what their values were, how wide their range was, or what the possible values were; there are infinitely many variations behind just the mean. It is also true that going from no number at all to a mean screens out infinitely many possibilities, and I expect that infinity is substantially larger than the number of possibilities behind any given average. I feel like the crux of my disagreement with the idea of emphasizing numbers is that people who endorse them strongly look at the number of possibilities eliminated in the step of going from nothing to an average and think "Look at how much clarity we have gained!", whereas I look at the number of possibilities remaining and think "This is not clear enough to be useful."

The problem gets worse when numbers are used to communicate. Suppose two people meet at a Bay Area house party and tell each other their averages. If they both say "seven," they'll probably assume they agree, even though it is perfectly possible for the numbers they averaged to have literally zero overlap. This is the point at which numbers turn actively misleading, in the literal sense that before they exchanged averages they at least knew they knew nothing, and after exchanging averages they wrongly conclude they agree.
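
As a toy sketch of that failure mode (the names and specific values here are invented purely for illustration): two sets of numbers with nothing in common can still both report an average of seven.

```python
import statistics

# Two completely different collections of underlying numbers...
alice = [7, 7, 7]
bob = [1, 1, 19]

# ...that summarize to the same "seven".
print(statistics.mean(alice))  # 7
print(statistics.mean(bob))    # 7

# Yet the sets share no values at all.
print(set(alice) & set(bob))   # set()
```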

Contrast this with a more practical and realistic case where we might get two different answers on something like probabilities from a data science question. Because it's a data science question, we are already primed to ask about the underlying models and the data to see why the numbers are different. We can of course do the same with the example about averages, but in that context even giving the number in the first place is a wasted step, because we gain basically nothing until we have the underlying data (where sum-of-all-n-divided-by-n is the model). By contrast, in the data science question we can reasonably infer that the models will be broadly similar, and that if they aren't, that fact by itself likely points to the cruxes between them. As a consequence, getting the direct numbers is still useful; if two data science sources give very similar answers, they likely do agree very closely.

In sum, we collectively have gigantic uncertainty about the qualitative questions of models and data for whether AI can or will cause human extinction. I claim the true value of quantifying our beliefs, the put-a-number-on-it mental maneuver, is clarifying those qualitative questions, and that is also what we really want to be talking about with other people. The trouble is that the number we have put on all of this internally is what we communicate, but it does not carry the process that generated it; the conversation then invariably becomes about the numbers, which in my experience actively obscures the key information we want to exchange.

DeepSeek v3.1 Is Not Having a Moment
ryan_b · 25d · 30

I suspect DeepSeek is unusually vulnerable to the problem of switching hardware, because my expectation is that their cost advantage fundamentally boils down to having invested a lot of effort in low-level performance optimization to reduce training and inference costs.

Switching the underlying hardware breaks all of this work. Further, I don't expect the Huawei chips to be as easy to optimize as the Nvidia H-series, because the H-series is built mostly the same way Nvidia has always built its chips (CUDA), while Huawei's Ascend is supposed to be an entirely new architecture. Lots of people know CUDA; only Huawei's people know how the memory subsystem for Ascend works.

If I am right, it looks like they got hurt by bad timing this round the same way they benefited from good timing last round.

Elizabeth's Shortform
ryan_b · 1mo · 40

From that experience, what do you think of the learning value of being in a job you are not qualified for? More specifically, do you think you learned more from the job you weren't qualified for than you did in other jobs that matched your level better?

My Interview With Cade Metz on His Reporting About Lighthaven
ryan_b · 1mo · 4423

My days of not taking that person seriously sure are coming to a middle.

"Buckle up bucko, and get ready for multiple hard cognitive steps."
ryan_b · 1mo · 20

I commonly make a similar transition that I describe as task orientation versus time orientation.


The transition happens when there is some project where there seem to be some number of tasks to do and I expect to get them done (or get that step done, or whatever). This expectation then turns out to be wrong, usually because the steps fail directly or because I didn't have enough information about what needed to be done. Then I will explicitly switch to time orientation, which really just means that I will focus on making whatever progress is possible within the time window, or until the thing is complete.


One difference is that my experience isn’t sorted by problem difficulty per se. I mean it correlates with problem difficulty, but the real dividing line is how much attention I expected it to require versus how much it wound up requiring. Therefore it is gated by my behavior beforehand rather than by being a Hard Problem.


Counterintuitively, I find time orientation to be a very effective method of getting out of analysis paralysis. This seems like a difference to me because the "time to do some rationality" trigger associates heavily (in my mind) with the deconfusion class of analytical strategies. I suspect the underlying mechanism is that analysis in the context of normal problems is mainly about efficiency, while the more basic loop of action-update-action-update is consistently more effective when information is lacking.


I think there's something to time orientation having a built-in notion of bash-my-face-against-the-problem-to-gather-information-about-it, which makes it more effective than analysis for me a lot of the time because the information-gathering step is explicit. My concepts of analysis, by contrast, are still mostly received from the subjects where I picked them up, leaving them in separate buckets. This makes me vulnerable to using the wrong analytical methods, because almost all presentations of analysis assume the information to be analyzed is already in hand, and as a result I have a limited sense of the proverbial type signature.

I am worried about near-term non-LLM AI developments
ryan_b · 1mo · 20

Ah, but is it a point-in-time sidegrade with a faster capability curve in the future? At the scale we are working at now, even a marginal efficiency improvement threatens to considerably accelerate at least the conventional concerns (power concentration, job loss, etc.).

My Empathy Is Rarely Kind
ryan_b · 2mo · 50

So what happens when you move towards empathy with people you are more aligned with in the first place? Around here, for example?

13 · Near term discussions need something smaller and more concrete than AGI · 8mo · 0
25 · SB 1047 gets vetoed · 1y · 1
7 · If I ask an LLM to think step by step, how big are the steps? · Q · 1y · 1
9 · Do you have a satisfactory workflow for learning about a line of research using GPT4, Claude, etc? · Q · 2y · 3
7 · My simple model for Alignment vs Capability · 2y · 0
7 · Assuming LK99 or similar: how to accelerate commercialization? · Q · 2y · 5
50 · They gave LLMs access to physics simulators · 3y · 18
25 · What to do when starting a business in an imminent-AGI world? · Q · 3y · 7
58 · Common Knowledge is a Circle Game for Toddlers · 3y · 1
37 · Wargaming AGI Development · 4y · 10