Maxime Riché
Comments

anaguma's Shortform
Maxime Riché · 3h

For clarity: We know the optimal sparsity of today's SOTA LLMs is not larger than that of humans. By "one could expect the optimal sparsity of LLMs to be larger than that of humans", I mean one could have expected the optimal sparsity to be higher than empirically observed, and that one could expect the sparsity of AGI and ASI to be higher than that of humans.

anaguma's Shortform
Maxime Riché · 1d

Given that one SOTA LLM knows much more than one human and is able to simulate many humans, while performing a single task only requires a limited amount of information and a limited number of simulated humans, one could expect the optimal sparsity of LLMs to be larger than that of humans. I.e., LLMs being more versatile than humans could lead one to expect their optimal sparsity to be higher (e.g., <0.5% of activated parameters).
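For a concrete sense of scale, here is a minimal sketch expressing this sparsity as the fraction of parameters activated per token. The DeepSeek-V3 figures (~37B active out of ~671B total) are the publicly reported ones; the "hypothetical very sparse model" entry is invented purely to illustrate the <0.5% regime mentioned above.

```python
# Activated-parameter sparsity = active parameters / total parameters per token.
# DeepSeek-V3 numbers are public; the second entry is a made-up illustration
# of the <0.5% regime discussed above.

def activated_fraction(active_params: float, total_params: float) -> float:
    """Share of parameters activated per forward pass / token."""
    return active_params / total_params

models = {
    "DeepSeek-V3 (public figures)": (37e9, 671e9),    # ~5.5% activated
    "Hypothetical very sparse model": (5e9, 2e12),     # ~0.25% activated
}

for name, (active, total) in models.items():
    print(f"{name}: {activated_fraction(active, total):.2%} of parameters activated per token")
```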

Why you should eat meat - even if you hate factory farming
Maxime Riché · 1mo

Do you think cow milk and cheese should be included in a low-suffering healthy diet (e.g., added to the recommendations at the start of your post)?

Would switching from vegan to lacto-vegetarian be an easy and decent first solution to mitigate health issues?

Why did everything take so long?
Maxime Riché · 2mo

Another reason that I have not seen in the post or the comments is that there are intense selection pressures against doing things differently from the successful people of previous generations.

Most prehistoric cultural and technological accumulation seems to have happened by "natural selection of ideas and tool-making", not by directed innovation. 

See https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/ 

Daniel Kokotajlo's Shortform
Maxime Riché · 3mo

Would sending or transferring the ownership of the GPUs to an AI safety organization instead of destroying them be a significantly better option?

PRO:
- The AI safety organizations would have much more computing power

CON:
- The GPUs would still be there and at risk of being acquired by rogue AIs or human organizations
- The delay in moving the GPUs may make them arrive too late to be of use
- Transferring the ownership has the problem that the ownership can easily be transferred back (nationalization, forced transfer, or sold back)
- This solution requires verifying that the AI safety organizations are not advancing capabilities (intentionally or not)

Longtermist Implications of the Existence Neutrality Hypothesis
Maxime Riché · 7mo

The implications are stronger in that case, right.

The post is about implications for impartial longtermists. Under moral realism, that means something like finding the best values to pursue. Under moral anti-realism, it means that an impartial utility function is roughly symmetrical with respect to aliens: for example, if you value something only because humans value it, then an impartial version would also value things that aliens value only because their species values them.

 

Though, for reasons introduced in The Convergent Path to the Stars, I think these implications are also relevant for non-impartial longtermists.

Maxime Riché's Shortform
Maxime Riché · 8mo

Truth-seeking AIs by default? One hope for alignment by default is that AI developers may have to train their models to be truth-seeking in order for them to contribute to scientific and technological progress, including RSI. Truth-seeking about the world model may generalize to truth-seeking about moral values, as observed in humans, and that is an important meta-value guiding moral values towards alignment.

In humans, truth-seeking is maybe pushed back from being a revealed preference at work to being a stated preference outside of work, because of status competition and fights over resources. Early artificial researchers may not face the same selection pressures. Their moral values may focus on the work itself (the truth-seeking trend), not on replicating by competing for resources. Artificial researchers won't be selected for their ability to acquire resources; they will be selected by AI developers for being the best at achieving technical progress, which includes being truth-seeking.

Maxime Riché's Shortform
Maxime Riché · 9mo

Thanks for your corrections, they're welcome.
 

> 32B active parameters instead of likely ~220B for GPT4 => 6.8x lower training ... cost

> Doesn't follow, training cost scales with the number of training tokens. In this case DeepSeek-V3 uses maybe 1.5x-2x more tokens than original GPT-4.

Each of the points above is a relative comparison with more or less everything else kept constant. In this bullet point, by "training cost", I mostly had in mind "training cost per token":

  • 32B active parameters instead of likely ~280B (corrected from ~220B) for GPT4 => 8.7x (corrected from 6.8x) lower training cost per token.
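A back-of-envelope check of that per-token ratio, as a minimal sketch using the common approximation that training takes roughly 6 · N_active FLOPs per token (the GPT-4 active-parameter count is a rumored figure, and 32B is the number used in the bullet):

```python
# Back-of-envelope check of the "training cost per token" ratio, using the
# common ~6 * N_active FLOPs-per-token approximation. The GPT-4 figure is a
# rumored estimate, not an official number.

def train_flops_per_token(active_params: float) -> float:
    return 6 * active_params

gpt4_active = 280e9        # rumored ~280B active parameters
deepseek_v3_active = 32e9  # ~32B active parameters (figure used in the bullet)

ratio = train_flops_per_token(gpt4_active) / train_flops_per_token(deepseek_v3_active)
print(f"~{ratio:.2f}x lower training cost per token")  # ~8.75x, i.e. the ~8.7x above
```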

     

> If this wasn't an issue, why not 8B active parameters, or 1M active parameters?

From what I remember, the training-compute-optimal number of experts was around 64, given implementations from a few years ago (I don't remember how many were activated at a time in that old paper). Given newer implementations and aiming for inference-compute optimality, it seems plausible that more than 64 experts could be better.

 

> You still train on every token.

Right, that's why I wrote: "possibly 4x fewer training steps for the same number of tokens if predicting tokens only once" (assuming 4 tokens are predicted at a time), but that's neither demonstrated nor published, to my limited knowledge.

Maxime Riché's Shortform
Maxime Riché · 9mo (edited)

Simple reasons for DeepSeek V3 and R1 efficiencies:

  • 32B active parameters instead of likely ~220B for GPT4 => 6.8x lower training and inference cost
  • 8bits training instead of 16bits => 4x lower training cost
  • No margin on commercial inference => ?x maybe 3x
  • Multi-token training => ~2x training efficiency, ~3x inference efficiency, and lower inference latency by baking in "predictive decoding", possibly 4x fewer training steps for the same number of tokens if predicting tokens only once
  • And additional cost savings from memory optimization, especially for long contexts (Multi-Head Latent Attention) => ?x

 

Nothing here is very surprising (except maybe the last bullet point for me, because I know less about it).

The surprising part is why big AI labs were not pursuing these obvious strategies.

Int8 was obvious, multi-token prediction was obvious, and more, smaller experts in MoE were obvious. All three have already been demonstrated and published in the literature. They may be bottlenecked by communication, GPU utilization, and memory for the largest models.
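As a minimal sketch, multiplying the comment's own estimates gives a rough upper bound on the combined effect; the factors are not truly independent (and some apply only to training or only to inference), so this is illustrative rather than a prediction:

```python
# Rough combined effect of the efficiency multipliers listed above.
# Values are the comment's own estimates; treating them as independent and
# multiplicative gives an illustrative upper bound only.

training_factors = {
    "fewer active parameters (32B vs ~220B)": 6.8,
    "8-bit instead of 16-bit training": 4.0,
    "multi-token training": 2.0,
}

inference_factors = {
    "fewer active parameters (32B vs ~220B)": 6.8,
    "no margin on commercial inference": 3.0,
    "multi-token prediction / predictive decoding": 3.0,
}

def combined(factors: dict) -> float:
    product = 1.0
    for multiplier in factors.values():
        product *= multiplier
    return product

print(f"Training: up to ~{combined(training_factors):.0f}x cheaper")    # ~54x
print(f"Inference: up to ~{combined(inference_factors):.0f}x cheaper")  # ~61x
```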

leogao's Shortform
Maxime Riché · 10mo

It seems that your point applies significantly more to "zero-sum markets". So it may be worth noting that it may not apply to altruistic people who work on AI safety non-instrumentally.

Wikitag contributions: Sycophancy (2 years ago)
Posts
- Longtermist Implications of the Existence Neutrality Hypothesis (3 points, 7mo, 2 comments)
- The Convergent Path to the Stars (6 points, 7mo, 0 comments)
- Other Civilizations Would Recover 84+% of Our Cosmic Resources - A Challenge to Extinction Risk Prioritization (5 points, 7mo, 0 comments)
- Formalizing Space-Faring Civilizations Saturation concepts and metrics (4 points, 8mo, 0 comments)
- Decision-Relevance of worlds and ADT implementations (9 points, 8mo, 0 comments)
- Space-Faring Civilization density estimates and models - Review (20 points, 8mo, 0 comments)
- Longtermist implications of aliens Space-Faring Civilizations - Introduction (21 points, 8mo, 0 comments)
- Thinking About Propensity Evaluations (10 points, 1y, 0 comments)
- A Taxonomy Of AI System Evaluations (13 points, 1y, 0 comments)
- What are the strategic implications if aliens and Earth civilizations produce similar utilities? [Question] (4 points, 1y, 1 comment)