Mike S (StuffnStuff)

Comments
Exploring entropy gradient propulsion via the Casimir Effect
Mike S (StuffnStuff) · 1mo · 6 karma · 1 agreement

From the linked publication: "A localized suppression of vacuum entropy near the hull creates dS/dx > 0 outward".

I don't understand this part. My naive view is that a localized suppression means S < S0 (the surrounding entropy), i.e. some local minimum, with S0 being the same on both sides far away from this minimum. The gradient dS/dx is then positive on one side of the suppressed region and negative on the other, possibly asymmetrically. But once you integrate dS/dx over x to calculate the total force acting on the region, the result is exactly S(+∞) − S(−∞) = 0. The same reasoning applies to the other side of the craft, where you have a local peak of vacuum entropy. The total "push" will be 0 + 0.
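A minimal numerical sketch of this point (my own illustration, not from the publication), assuming a Gaussian dip below a constant background S0 and, purely for the sake of the argument, a force density proportional to dS/dx:

```python
import numpy as np

# Illustrative model only: a localized suppression of "vacuum entropy",
# represented as a Gaussian dip below a constant background S0.
x = np.linspace(-10.0, 10.0, 10_001)
S0 = 1.0                      # background entropy far from the hull
S = S0 - 0.3 * np.exp(-x**2)  # local minimum at x = 0, same S0 on both sides

dS_dx = np.gradient(S, x)     # positive on one flank of the dip, negative on the other

# The integral of dS/dx over the whole region is just S(+inf) - S(-inf),
# which vanishes because the background is identical on both sides.
dx = x[1] - x[0]
net = np.sum(dS_dx) * dx      # Riemann-sum approximation of the integral
print(net, S[-1] - S[0])      # both ~0 up to numerical error
```

Any asymmetry in the dip only changes where the gradient is positive or negative, not the fact that the two contributions cancel once the profile returns to the same baseline on both sides.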

A brief theory of why we think things are good or bad
Mike S (StuffnStuff) · 11mo · 1 karma · 0 agreement

A lot of it comes down to timescales and sequences of events: long-term vs. short-term.

"I will incur a little suffering today, but my well-being will be much better tomorrow".

"We need to get rid of the <undesired national or social group> at cost of suffering, but our nation will have bright future as a result".

People can be tricked into doing unspeakable evil to themselves and others if they have incorrect predictions of future well-being.

Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
Mike S (StuffnStuff) · 2y · 1 karma · 0 agreement

I see the biggest problem not on the technical side of things, but on the social side. The existing power balance within the population, and the fact that it discourages cooperation, is in my opinion a much bigger obstacle to alignment. Heck, it prevents alignment between human groups, let alone between humans and a future AGI. I don't see how the increased intelligence of a small, select group of humans could solve this problem. Well, maybe I am just not smart enough.

Cosmopolitan values don't come free
Mike S (StuffnStuff) · 2y · 0 karma · -8 agreement

I propose the goal of perpetuating interesting information, rather than goals of maximizing "fun" or "complexity". In my opinion, such a goal solves both problems: the complex but bleak and desolate future, and the fun-maximizing drug haze or Matrix future. Of course, a rigorous technical definition of "interesting" still has to be developed. At minimum, "interesting" assumes there is an appreciating agent and continuous development.

How could you possibly choose what an AI wants?
Mike S (StuffnStuff) · 2y · 0 karma · 0 agreement

I think we should start by asking what is meant by "flourishing civilizations". In the AI's view, a "flourishing civilization" may not necessarily mean a human civilization.

Nobody’s on the ball on AGI alignment
Mike S (StuffnStuff) · 2y · 4 karma · -2 agreement

I generally agree with Stephen Fowler, specifically that "there is no evidence that alignment is a solvable problem."

But even if a solution can be found that provably works for AGI up to level N, what about level N+1? Sustainable alignment is just not possible. Our only hope is that there may be some limit on N, for example that N = 10 requires more resources than the Universe can provide. But it is likely that our ability to prove alignment will stop well before any such limit.
