
Hruss

Wikitag Contributions

No wikitag contributions to display.

Comments

Public anti-AI sentiment can be useful: three mechanisms
Hruss · 2mo · 40

This seems like a bad idea. As observed on Reddit, most members of r/accelerate, the main accelerationist subreddit, joined out of annoyance at extremely uninformed anti-AI (mostly anti-AI-art) sentiment online. Although anti-AI sentiment could provide a mild benefit to AI safety, the risk of converting people to accelerationism is much worse. In addition, the commonly accepted anti-AI view of ASI/AGI is that it is made up, a way for current AI companies to make the public believe their products are better than they actually are, which is obviously unhelpful to serious AI safety.

If you want to be vegan but you worry about health effects of no meat, consider being vegan except for mussels/oysters
Hruss · 2mo · 10

True, but I would also expect nutritional differences in the other parts of the body, since having a brain significantly changes how an organism functions in its eating behavior and energy consumption.

If you want to be vegan but you worry about health effects of no meat, consider being vegan except for mussels/oysters
Hruss · 2mo · 10

Is it possible that there are benefits from eating neurons? It seems plausible that organisms with brains, when eaten, provide better nutrition for one’s own brain.

Welcome to Cambridge/Boston Less Wrong
Hruss · 2mo · 10

Is there a meetup happening next Sunday? I will be nearby and I am interested in attending.

Open Thread - Summer 2025
Hruss · 2mo · 30

Good to see you, Daniel!

It's time for a self-reproducing machine
Hruss · 2mo · 30

This might be interesting

Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems)
Hruss · 3mo · Ω153

I find that studies criticizing current models are often cited long after the issue has been fixed, or without regard to what they actually showed. I wish technology reporting were more careful, as much of this misunderstanding seems to come from journalistic sources. Examples:

Hands in diffusion models

Text in diffusion models

Water usage

Model collapse - not an issue for actual commercial AI models; the original study was about synthetic data production, recursively feeding model outputs back as the exclusive training data (see the toy sketch at the end of this comment)

LLMs = autocorrect - chat models have RLHF post-training

Nightshade/Glaze - useless against modern training methods

AI understanding - yes, the weights are not understood, but the overall architecture is

 

It is surprising how often I hear these repeated with false context.
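
To make the model collapse point concrete, here is a minimal toy sketch (mine, not code or results from the original study) where "training" just means fitting a Gaussian; the only difference between the two setups is whether each generation sees only the previous model's samples or a mix that still includes real data.

```python
# Toy sketch of the model collapse setup (hypothetical illustration):
# "training" here is just estimating a Gaussian's mean and std.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=2_000)  # stands in for human data

def train(data):
    # "Train a model" = estimate the mean and std of the training data.
    return float(np.mean(data)), float(np.std(data))

# Setup A: each generation trains only on samples from the previous
# generation's model, so estimation error is never corrected against
# real data and can compound across generations.
mu_a, sigma_a = train(real_data)
for _ in range(50):
    synthetic_only = rng.normal(mu_a, sigma_a, size=2_000)
    mu_a, sigma_a = train(synthetic_only)

# Setup B: synthetic data is mixed with real data each generation,
# which re-anchors the estimate to the real distribution.
mu_b, sigma_b = train(real_data)
for _ in range(50):
    synthetic = rng.normal(mu_b, sigma_b, size=1_000)
    mu_b, sigma_b = train(np.concatenate([synthetic, real_data[:1_000]]))

print(f"exclusively synthetic, 50 generations: mean={mu_a:+.3f}, std={sigma_a:.3f}")
print(f"mixed with real data, 50 generations:  mean={mu_b:+.3f}, std={sigma_b:.3f}")
```

The second setup keeps being pulled back toward the real distribution, which is closer to how commercial pipelines that keep mixing in human data actually train.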

Lighthaven Sequences Reading Group #37 (Tuesday 6/10)
Hruss · 3mo · 30

Wow, that was great! Next time I am nearby I will come again.

Consider buying voting shares
Hruss · 3mo · 10

Yes

Security Mindset: Hacking Pinball High Scores
Hruss · 3mo · 30

You could also access the machine controls to change sensor sensitivity, number of balls, points per game, etc., depending on the machine, and then change the settings back afterwards.

Posts

-3 · ‘Just Tax Land’ - what’s the point? · 17d · 1
2 · Consider buying voting shares · 3mo · 3
15 · An open job application to AI labs · 3mo · 0
1 · Are standardized tests effective? · 4mo · 1
1 · Roads are at maximum efficiency always · 4mo · 3
5 · I doubt model collapse will happen · 4mo · 0
9 · Would it be effective to learn a language to improve cognition? [Question] · 5mo · 7
1 · Is CCP authoritarianism good for building safe AI? · 5mo · 0
1 · Hruss's Shortform · 6mo · 2