Shmi

Comments (sorted by newest)
Banning Said Achmiz (and broader thoughts on moderation)
Shmi · 8d · 157

From my very much outside view, extending the rate limiting to 3 comments a week indefinitely would have solved most of the stated issues.

shminux's Shortform
Shmi · 1mo · 20

Clunkergrind beats Bespoke Handcrafting https://www.oneusefulthing.org/p/the-bitter-lesson-versus-the-garbage 

Expectation = intention = setpoint
Shmi · 3mo · 40

Sorry for the delayed reply... I don't get notifications of replies, and the LW RSS has been broken for me for years now, so I only poke my head here occasionally.

> Well that sounds... scary, at best. I hope you've come out of it okay.

50/100. But that rather exciting story is best not told in a public forum.

> Though these distinctions are kinda confusing for me.

Well, lack of appearance of something otherwise expected would be negative, and appearance of something otherwise unexpected would be positive?

For example, a false pregnancy is a "positive somatization". Or stigmata. I'm having trouble coming up with intentionally "good" examples, other than visualizations helping you shoot a hoop better or something. I'm not sure the new-agey "think yourself better" is actually a thing, hence my question. "Send more blood to your hands" seems like a good example, actually: not something one would normally think possible except through physical labor.

Expectation = intention = setpoint
Shmi · 3mo · 40

I really like this post! (I have liked most of your posts over the last decade and a bit. They also inspired me to learn hypnosis, which led to rather cataclysmic changes in my life.) I think therapists call this "somatization", which can be both positive and negative, in the same sense that hypnotic (or psychotic) illusions are. You seem to focus mainly on negative somatization (no swelling) and a bit on positive ones, though I suspect that positive somatization (both beneficial and detrimental) is just as controllable with the intent/expectation fusion. Maybe visualizing making the hoop really helps to steady your hand.

shminux's Shortform
Shmi · 6mo · 60

I once wrote a post claiming that human learning is not computationally efficient: https://www.lesswrong.com/posts/kcKZoSvyK5tks8nxA/learning-is-asymptotically-computationally-inefficient

It looks like the last three years of AI progress suggest that learning is sub-linear in resource use, though probably not logarithmic, as I claimed for humans. The scaling benchmarks seem to show something like capability increasing as the 4th root of model size. https://epoch.ai/data/ai-benchmarking-dashboard
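
A quick numeric sketch of what that kind of relation would imply for resource costs. This is my own toy illustration with an assumed exponent of 1/4 and arbitrary units, not a fit to the dashboard data:

```python
# Toy illustration only: assumes capability ~ model_size ** 0.25, with both
# quantities in arbitrary units; the exponent is a rough reading of the
# benchmark trend, not a fitted value.

def capability(model_size: float, exponent: float = 0.25) -> float:
    """Toy scaling law: capability grows as model_size ** exponent."""
    return model_size ** exponent

# Under a 1/4-power law, each doubling of capability requires 2**4 = 16x more model.
for doublings in (1, 2, 3):
    required_growth = 2 ** (4 * doublings)
    print(f"{2 ** doublings}x capability needs ~{required_growth}x the model size")
```

Under that assumption, a 16x larger model only buys a 2x capability gain, which is one way to read "sub-linear but not logarithmic".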

So You Want To Make Marginal Progress...
Shmi · 6mo · 50

Looks like the hardest part in this model is how to "choose robustly generalizable subproblems and find robustly generalizable solutions to them", right?

How does one do that in any systematic way? What are the examples from your own research experience where this worked well, or at all?

Compute and size limits on AI are the actual danger
Shmi · 9mo · 31

Right, eventually it will. But abstraction building is very hard! If you have any other option, like growing in size, I would expect it to be taken first.

I guess I should be a bit more precise. Abstraction building at the same level as before is probably not very hard, but going up a level is basically equivalent to inventing a new way of compressing knowledge, which is a qualitative leap.

Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong
Shmi · 10mo · 20

The argument goes through on the probabilities of each possible world; the limit toward perfection is not singular. Given the 1000:1 reward ratio, for any predictor who is substantially better than chance, one ought to one-box to maximize EV. Anyway, this is an old argument where people rarely manage to convince the other side.
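
A rough EV calculation to make that concrete. This is my own sketch with assumed payoffs (1000 units in the opaque box, 1 unit in the transparent one), not anything from the original discussion:

```python
# Toy illustration only: assumes the standard Newcomb setup with a 1000:1 ratio,
# i.e. the opaque box holds 1000 units iff the predictor expects one-boxing,
# and the transparent box always holds 1 unit; p is the predictor's accuracy.

def ev_one_box(p: float) -> float:
    """EV of taking only the opaque box: paid 1000 iff the prediction was right."""
    return p * 1000

def ev_two_box(p: float) -> float:
    """EV of taking both boxes: 1000 only if the predictor was wrong, plus the sure 1."""
    return (1 - p) * 1000 + 1

# The crossover is at p = 1001/2000 = 0.5005, so any predictor substantially
# better than chance already favors one-boxing; no perfect-predictor limit needed.
for p in (0.5, 0.501, 0.6, 0.9, 0.999):
    choice = "one-box" if ev_one_box(p) > ev_two_box(p) else "two-box"
    print(f"p={p}: one-box EV={ev_one_box(p):.1f}, two-box EV={ev_two_box(p):.1f} -> {choice}")
```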

shminux's Shortform
Shmi · 11mo · 0 · -3

It is clear by now that one of the best uses of LLMs is to learn more about what makes us human by comparing how humans think with how AIs do. LLMs are getting closer to virtual p-zombies, for example, forcing us to revisit that philosophical question. Same with creativity: LLMs mimic creativity in some domains, exposing the differences between "true creativity" and "interpolation". You can probably come up with a bunch of other insights about humans that were not possible before LLMs.

My question is: can we use LLMs to model, and thus study, unhealthy human behaviors such as, say, addiction? Can we get an AI addicted to something and see whether it starts craving it, asking the user for it, or maybe trying to manipulate the user to get it?

Daniel Kokotajlo's Shortform
Shmi · 1y · 20

That is definitely my observation as well: "general world understanding but not agency", and yes, limited usefulness, but also... much more useful than gwern or Eliezer expected, no? I could not find a link.

I guess whether it counts as AGI depends on what one means by "general intelligence". To me it was having a fairly general world model and being able to reason about it. What is your definition? Does "general world understanding" count? Or do you include the agency part in the definition of AGI? Or maybe something else?

Hmm, maybe this is a General Tool, as opposed to a General Intelligence?

Posts

12 · Janet must die · 6mo · 3
32 · Compute and size limits on AI are the actual danger · 9mo · 5
10 · shminux's Shortform · 1y · 67
15 · On Privilege · 1y · 10
5 · Why Q*, if real, might be a game changer · 2y · 6
22 · Why I am not an AI extinction cautionista · 2y · 40
18 · Upcoming AI regulations are likely to make for an unsafer world · 2y · 14
42 · How can one rationally have very high or very low probabilities of extinction in a pre-paradigmatic field? · 2y · 15
16 · Do LLMs dream of emergent sheep? · 2y · 2
63 · Top lesson from GPT: we will probably destroy humanity "for the lulz" as soon as we are able. · 2y · 28