Yaroslav Granowski's Shortform

by Yaroslav Granowski
9th May 2025

This is a special post for quick takes by Yaroslav Granowski. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Yaroslav Granowski · 5mo · karma 4, agreement 0

Is there anything on LessWrong about human-based superintelligence? I'm a newbie and about to write a lengthy post about it, but the idea seems fairly obvious and has likely been expressed somewhere before.

Viliam · 5mo · karma 4, agreement 0

As far as I remember, the following kinds of human-based superintelligences were discussed here:

  • superbabies -- genetically engineered humans, getting the best traits, including intelligence
  • cyborgs -- humans connected to computers
  • uploads -- humans (or just brains) simulated in a computer, neuron by neuron, only much faster

Relevant tag: intelligence amplification

Is any of this close to your idea?

Yaroslav Granowski · 5mo · karma 1, agreement 0

Thank you, I should have clarified better.

Cyborgs are probably the closest, but without physical implants: only advanced forms of software, like knowledge databases.

Viliam · 5mo · karma 3, agreement 1

In some sense, a man with a pencil and paper is a much better mathematician than a man without them. And we have not yet reached the limits of how helpful such tools could be. For example, there are various forms of note-taking software, and people debate endlessly about their advantages and disadvantages, which suggests that all of them are far from perfect. (And that's still about personal notes. A perfect tool for collaborative note-taking would require even more functions.)

Okay, sounds potentially interesting, go ahead! (I hope you don't mean zettelkasten.)

Yaroslav Granowski · 5mo · karma 1, agreement 0

Done: Become a Superintelligence Yourself

faul_sname · 5mo · karma 3, agreement 0

The "Cyborgism" tag and post are likely relevant.

Yaroslav Granowski · 5mo · karma 3, agreement 7

Upvote/downvote symmetry encourages conformism. Why not analyze, from a rational point of view, what good and what harm particular posts and comments may bring, and adjust the system accordingly?

Good: The material contains some useful information or insight. Users notice that and encourage it by upvoting. This seems fine to me as it is.

Bad: The material wastes readers' time and attention. There may also be objective reasons for removal, like infohazards or rule violations. But if some readers feel offended because the content questions their beliefs, that is not necessarily a valid reason for removal. So I suggest reconsidering the downvoting system.

Regarding the time waste: a post with a properly specific title keeps non-interested readers from looking inside and consumes only a line in the list, while clickbait lures readers in without giving them anything of value. A hard-to-parse but useless text is even more annoying. So perhaps the total time spent by non-upvoting users, multiplied by their vote power, could work as a downvote penalty? (A rough sketch of that calculation follows below.)
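A minimal sketch of how such a penalty might be computed, assuming the site could track per-reader time on a post; all names, the data shape, and the seconds-to-karma exchange rate here are hypothetical, not an actual LessWrong mechanism:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    seconds_spent: float  # time this reader spent on the post
    vote_power: float     # weight of this reader's vote (e.g. karma-derived)
    upvoted: bool         # whether the reader ended up upvoting

def time_waste_penalty(readings: list[Reading]) -> float:
    """Sum of (time spent * vote power) over readers who did not upvote."""
    return sum(r.seconds_spent * r.vote_power
               for r in readings if not r.upvoted)

def net_score(upvote_points: float, readings: list[Reading],
              seconds_per_point: float = 60.0) -> float:
    # seconds_per_point converts wasted reader-time into karma; the rate
    # is a tunable assumption, not something proposed in the comment.
    return upvote_points - time_waste_penalty(readings) / seconds_per_point
```

One consequence of this design is that skimmers who bounce off a clearly titled post contribute almost no penalty, while readers lured deep into a clickbait post contribute a large one.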

the gears to ascension · 5mo · karma 4, agreement 1

agreed. I've seen contributors who I think would have pushed the field forward get run off the site, before they learned the norms, by fast, high-magnitude feedback. the unfortunate thing is that in the low-dimensional representation karma presents right now, any move appears to make things worse. I think making downvotes and agree-votes targeted like reacts might be one option to consider. another would be a warning when downvoting past thresholds, to remind users to consider whether they want to take certain actions and to introduce a trivial inconvenience; e.g., some hypothetical warnings (which would need shortening to be usable in a UI):

  • "you're about to downvote this person past visibility. please take 10 seconds to decide if you endorse people in your position making a downvote of this sort of post."
  • "you might be about to downvote this person enough to activate rate limiting on their posts. If you value their posting frequency, please upvote something else recent from them or reduce this downvote. Please take 30 seconds to decide if you intend to do this." possibly the downvote-warnings should have a random offset of up to 3 karma or something, so that the person who pushes them over the edge only has some probability of being the one who gets the feedback, rather than the only one - effectively a form of dropout in the feedback routing.

also, what if you could only strong agree or strong karma vote?

eigenkarma would be a good idea if <mumble mumble> - I prototyped a version of it and might still be interested in doing more, but I suspect most of the difficulty of doing something like this well is in designing the linkup between human prompts and incentives: you need to prompt users about what sort of incentives they want to produce for others (out of the ones a system makes available to transmit), while at the same time designing a numeric system that makes effective incentive-producing actions available. (a sketch of the basic eigenkarma idea follows below.)
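for readers unfamiliar with the term: eigenkarma is usually described as a PageRank-style scheme in which a vote counts in proportion to the karma of the voter casting it, solved as a fixed point. a minimal sketch under that assumption (my illustration, not the commenter's prototype):

```python
import numpy as np

def eigenkarma(votes: np.ndarray, damping: float = 0.85,
               iters: int = 100) -> np.ndarray:
    """votes[i, j]: net up/down votes user i has cast on user j's content.
    Returns one score per user, where each vote is weighted by the
    (recursively defined) score of the voter who cast it."""
    v = votes.astype(float)
    totals = np.abs(v).sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0       # voters who cast no votes contribute nothing
    norm = v / totals               # cap each voter's total influence
    n = v.shape[0]
    score = np.ones(n) / n
    for _ in range(iters):
        score = damping * (norm.T @ score) + (1 - damping) / n
        score = np.maximum(score, 0.0)  # clip negative mass from downvotes
        score /= score.sum()            # renormalize to a distribution
    return score
```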

the LW team seems awfully hesitant to mess with it, and I think they're accepting a rather huge loss for the world by doing that, but I guess they've got other large losses to think about, and it's hard to evaluate (even for me; I'm not saying they're wrong) whether this is actually the highest-priority problem.

Dagon · 5mo · karma 2, agreement 0

The system is never going to be all that great: casting a vote is really lightweight, low-information, and low-commitment. That's a big weakness, and also a requirement for getting any input at all from many readers.

It roughly maps to "want to see more of" and "want to see less of" on LessWrong, but it's noisy enough that it shouldn't be taken too literally.

Yaroslav Granowski · 5mo · karma 2, agreement 0

While the alignment community is frantically trying to convince itself of the possibility of benevolent artificial superintelligence, human cognition research remains undeservedly neglected.

Modern AI models are predominantly based on neural networks, the so-called connectionist approach in cognitive architecture studies. But in the beginning, the symbolic approach was more popular because of its lesser computational demands. Logic programming was the means of imbuing a system with the programmer's intelligence.

Although symbolist AI researchers studied the workings of the human brain, their research was driven by attempts to reproduce the brain, to create an artificial personality, rather than to help programmers express their thoughts. The user's ergonomics were largely ignored. Logic programming languages aimed to be the closest representation of the programmer's thoughts, but they failed at being practically convenient. As a result, nobody uses vanilla logic programming for practical purposes.

In contrast, my research is driven by ergonomics and attempts to synchronize with the user's thinking. For example, while proving a theorem (creating an algorithm), instead of manually composing plain text in a sophisticated language, the user sees the current context and chooses the next step from the available options. (A small illustration of that interaction style follows below.)
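That interaction style resembles tactic-based proof assistants, where the prover displays the current goal and the user picks the next applicable step; a tiny illustration in Lean 4 (my example, not the author's system):

```lean
-- after each tactic, the prover shows the remaining goal ("the current
-- context"), and the user chooses the next step from the tactics that apply
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
  intro h            -- context gains h : p ∧ q; the goal becomes q ∧ p
  exact ⟨h.2, h.1⟩   -- chosen step: swap the two components
```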
