True, but I would also think there are nutritional differences in the other parts of the body, since having a brain significantly changes how an organism functions, including its eating behavior and energy consumption.
Is it possible that there are benefits from eating neurons? It seems likely that consuming organisms with brains provides better nutrition for one's own brain.
Is there a meetup happening next Sunday? I will be nearby and I am interested in attending.
Good to see you, Daniel!
I find that studies criticizing current models are often cited long after the issue has been fixed, or without regard for what they actually showed. I wish technology reporting were more careful, as much of this misunderstanding seems to come from journalistic sources. Examples:
Hands in diffusion models
Text in diffusion models
Water usage
Model collapse - not an issue for actual commercial AI models; the original study was about synthetic data production, directly feeding model output back in as the exclusive training data
LLMs = autocorrect - chat models have RLHF post-training
Nightshade/Glaze - useless against modern training methods
AI understanding - yes, the individual weights are not understood, but the overall architecture is
It is surprising how often I hear these repeated with false context.
Wow, that was great! Next time I am nearby I will definitely go again.
You could also access the machine controls to change sensor sensitivity, ball count, points per game, etc. (depending on the machine), and change them back afterwards.
This seems like a bad idea. From what I have observed on Reddit, most members of r/accelerate, the main accelerationist sub, joined out of annoyance at extremely uninformed anti-AI (mostly art-related) sentiment online. Although anti-AI sentiment could mildly benefit AI safety, the risk of converting people to accelerationism is much worse. In addition, the commonly accepted anti-AI view of ASI/AGI is that it is made up, a way for current AI companies to make the public believe their products are better than they actually are, which would obviously be unhelpful to serious AI safety work.