Дмитрий Зеленский
Comments
Failures in Kindness
Дмитрий Зеленский · 1y · 2 · 1

The post raises an important problem. Though I have to admit my gut reaction is "neurotypicals are being weird again" :)

Reply
D&D.Sci Evaluation and Ruleset
Дмитрий Зеленский · 1y · 3 · 0

Damn. And I tried the strategy "what if I try to predict it from the text alone, without looking at the csv" :D

Reply
D&D.Sci Evaluation and Ruleset
Дмитрий Зеленский · 1y · 3 · 0

Why DEX, though? Conceptually it's absolutely unpredictable - DEX is one of the most useful scores in most TTRPGs.

Reply
Using axis lines for good or evil
Дмитрий Зеленский · 2y · 1 · 0

Yeah, there seems to be a lot of personal preference involved. Removing cell borders is obnoxious and inconvenient; the table below hurts to look at. The table above has borders a tad too thick, but removing them is, personally, a cure worse than the disease.

Reply
Biology-Inspired AGI Timelines: The Trick That Never Works
Дмитрий Зеленский · 2y · 0 · 0

In real life, Reality goes off and does something else instead, and the Future does not look in that much detail like the futurists predicted

Half-joking - unless the futurist in question is H. G. Wells. I think there was a quote showing that he effectively predicted the pixelation of early digital images, along with many similar small-scale details of the early 21st century (although, of course, survivorship bias in which details get retold probably influences my memory and the retelling I rely on).

Reply
Don't leave your fingerprints on the future
Дмитрий Зеленский · 2y · 1 · 0

Independently,

(in principle it could be figured out by human neuroscientists working without AI, but it's a bit late for that now)

What? Why? There is no AI as of now; LLMs definitely do not count. I think it is still quite possible that neuroscience will make its breakthrough on its own, without help from any non-human mind (again, dressing up the final article doesn't count - we're talking about the general insights and analysis here).

Reply
Don't leave your fingerprints on the future
Дмитрий Зеленский · 2y · 1 · 0

To begin with, there is a level of abstraction at which the minds of all four of you are the same, yet different from various nonhuman minds.

I am actually not even sure about that. Your "identify the standard cognitive architecture of this entity's species" presupposes that such an architecture exists - specified well enough both to build its utopia and to derive that identification correctly in all four cases.

But, more importantly, I would say that this algorithm does not derive my CEV in any useful sense.

Reply
Superintelligent AI is necessary for an amazing future, but far from sufficient
Дмитрий Зеленский · 2y · 4 · 0

I like this text, but I find your take on the Fermi paradox wholly unrealistic.

Let's even assume, for the sake of argument, that both P(life) and P(sapience|life) are bigger than 1/googol (though why?), so your hunch about how many planets originally evolve sapient aliens is broadly correct.

A very substantial share of the alternative histories of the last century (I wanted to say "most", but most, of course, differ only in uninteresting ways, such as whether a random human puts the right shoe or the left shoe on first) end with humanity dead or thrown into possibly-irrecoverable barbarism. The default outcome for aliens that have evolved is to fail their version of the Berlin crisis, or the Cuban Missile Crisis, or whatever other near-total-destruction situation we've had even without AI - and not necessarily involving nuclear weapons, mind you. Say, what if instead of the pretty-harmless-in-comparison COVID we got a sterilizing virus on the loose that attacks the reproductive organs instead of the olfactory nerves? Since its method of proliferation does not depend on the host's ability to procreate, you could imagine it sterilizing the population of a planet.

And then you tack on the fact that you also predict a very high chance of AGI ruin; so most of the hypothetical aliens that survived the kind of hurdles humanity somehow survived (again, with possibly totally different specifics) have been replaced by misaligned AGI, which throws a huge hurdle into the cosmopolitan result you predict - meeting a paperclip-maximiser built by ant-people is more likely than meeting the ant-people themselves, given your background beliefs.

Reply
Warning Shots Probably Wouldn't Change The Picture Much
Дмитрий Зеленский · 2y · 1 · -5

Banning gain-of-function research would be a mistake. What would be recklessly foolish is incentivising governments to decide which avenues of research are recklessly foolish. The fact that governments haven't prohibited it in a bout of panic (not even China, which otherwise did a lot of panicky things) is a testament to their abilities, not to an inability to react to warning shots.

Reply
Warning Shots Probably Wouldn't Change The Picture Much
Дмитрий Зеленский · 2y · 7 · 0

The expected value of that is infinitesimal, both in general and for x-risk reduction in particular. People who prefer political reasoning (so, the supermajority) will not trust it; people who don't think COVID was an important thing except in how people reacted to it (like me) won't care; and most people who both find COVID important (or a sign of anything important) and actually prefer logical reasoning have already given it a lot of thought and found that the bottleneck is data that China will not release any time soon.

Reply