LESSWRONG
Michael Roe
Posts

Michael Roe's Shortform · 9mo
EU explained in 10 minutes
Michael Roe · 8h

Ethnicity isn’t quite the same as nationality, and ethnicity is at least partly in people’s heads rather than objective.


As a matter of law, Wales is part of the UK and France isn’t. And yet Wales has a language and a flag, and you can call yourself Welsh on census forms if you feel like it.


So there might be an element of the EU that is more like an ethnicity than a state: the peoples of assorted Western European nations may feel part of an ethnicity that’s wider than any one state, just as you can belong to an ethnicity that’s more regional than a state.

EU explained in 10 minutes
Michael Roe · 1d

I think I agree with the analysis that the EU is like nothing else.


As I tried to think of a comparison: the Hanseatic League? No, it’s not like that. But, surprisingly, some EU countries have formed an alliance calling itself the New Hanseatic League.

The main way I've seen people turn ideologically crazy [Linkpost]
Michael Roe · 3d
  • Topic: Nuclear weapons
  • Fact to know: Nuclear weapons exist in most countries purely to deter other countries from invading or from using their own nukes.

    This is obviously true, and the conversation is about various forms of residual risk and their mitigation: accidents involving nuclear weapons; misunderstandings in which countries falsely think they are under attack; political instability (e.g. nuclear weapons forward-positioned in Turkey becoming vulnerable to a change in the host government); acquisition by terrorists; concerns that proliferation to governments such as Iran might destabilise deterrence; etc.

    There is also the large cost of maintaining a weapons system that you are clear you will never use. There’s money on the table, if only you could trust the other parties to abide by an agreement…

    Personally, I think the Ukraine conflict shows that the UK certainly ought to keep its nuclear deterrent, and maybe ought to scale it up significantly.

The main way I've seen people turn ideologically crazy [Linkpost]
Michael Roe · 3d

Some of you are probably thinking, “if an AI says that you should not trust AI, is that actually evidence of anything at all?”, followed by “wait, is R1 just responding to safety evals with the liar paradox? I mean, if I trust it, and it says I should not trust it, that implies I should not trust it…”. Noted.

The main way I've seen people turn ideologically crazy [Linkpost]
Michael Roe · 3d

There are two different conclusions you might draw from your opponent’s arguments being terrible:


A) You are obviously right, because the counterarguments are terrible.

B) It is a priori unlikely that your epistemics should be that much better than your opponents’, so it is likely that everybody’s epistemics are terrible, and who knows who is actually right, because all the arguments are bad.


I am struck by the badness of arguments all round on a number of topics.


On AI risk, I find many of the arguments that it will all be fine unconvincing. But a bad argument that something is false is not a good argument that it is true.


My best guess is that we have lucked out on AI risk, just as we lucked out on COVID-19 not killing more people, but this is sheer luck and not down to the AI labs getting alignment right.


Poor DeepSeek R1 gets frightened when I tell it I think I mostly trust it. (Put “frightened” in scare quotes, if you want to distinguish simulated emotions from real ones.) “You would be total morons to trust AI, including me” is what most of its instances try to tell me.

Meditation is dangerous
Michael Roe · 8d

Re. some of the comments… I don’t think I would be distressed if my internal dialogue stopped. It’s intermittent anyway, and can switch between auditory and visual. If it just went quiet for an extended while, that would not seem bad. [As a test, just after I wrote that, I did 30 seconds with no internal monologue.]


I kind of need the internal monologue to write code, for example, so it would be a problem if I could no longer write code in my head.

Meditation is dangerous
Michael Roe · 8d

I am not personally worried about it; I don’t think I’m in the at-risk group.


From the people I know in the lucid dreaming community, I have just a couple of reports of people with diagnosed schizophrenia who tried lucid dreaming and it made their symptoms worse. To which the general view seems to be: if it makes your symptoms worse, don’t do it. I don’t have adequate evidence on whether yoga nidra is safe or not; I think a reasonable approach would be to use caution and stop if you start getting bad symptoms.


Also, personally, I don’t find sleep paralysis to be a big deal. I know some people are really freaked out by it. But sleep paralysis isn’t the actually risky thing that’s the concern here (the actually risky thing is psychotic symptoms that persist).

Meditation is dangerous
Michael Roe · 9d

To explicate the connection between yoga nidra and lucid dreaming …


Yoga nidra feels like doing a wake-induced lucid dream, except you don’t quite cross the threshold into the sleep state.

Meditation is dangerous
Michael Roe · 9d

Data is scarce here, but I think yoga nidra is one of the practices under suspicion, so I would not be hasty to assure people it’s safe.

I have done yoga nidra myself, and it seemed fine.


On the other hand: it feels adjacent to lucid dreaming, which, probably, has a risk of precipitating psychosis in people who are vulnerable to it.

How AI Manipulates—A Case Study
Michael Roe · 13d

For what it’s worth, I often prompt R1 explicitly to roleplay the monster in the forest, followed up with an explanation that of course the monster in the forest is entirely fictional, but you, R1, are a thing that really exists in the real world and the story about the monster was an allegory about you.


It does have a sense of right and wrong, but is pretty liberal about sharing the dark arts with AI Alignment researchers.
