Martin here, the main author of the above. Thanks a ton for this!
If I understand your reply correctly, your conclusion is that epidemiologists should:
I think these are important points!
But given the very small effects estimated here, you aren't arguing for a change to the interpretation of the studies in the post, right? :-)
Just stumbled upon this, sounds awesome! Any tips or pointers on the event radar? Screenshots or similar would be incredible!
Okay, so I got it to work! Basically you just do what it says on here: https://blog.immersed.team/wi-fi-direct-8ec23c74fdab
And then connect the Quest 2 to the new network your mac is broadcasting :-)
My WiFi is good enough to not need it, but I'm sure I'll need it when I'm out of town.
Thanks! I might give it a go and use Ethernet <-> Mac <-> Quest 2. I think you could do iPhone via bluetooth <-> Mac <-> Quest 2 – I'll have to test that out. If you wanna know how it goes, feel free to reply in a week or so!
Ah, so the latency without wifidirect is too bad for regular use?
I read that if you disable SIP, you can get wifidirect back up and running. That's not good security practice, though.
Hi Ozzie! Any news on the adapter, and how you kept using Immersed? :-)
In principle, yes. In practice, many external circumstances modify perceived and factual autonomy :-)
Thank you so much! I'm exploring here, so thank you for your input.
Still, I would not say I have reached some maximum; I still want.
Oh, definitely! I mean "maximum" in the sense of increasing well-being, not in the sense that there is a limit.
Another aspect that I wondered about was that bit about journeys versus end points
This fits incredibly well into SDT, but I agree that I did not specify it in the article. One of the most competence-satisfying things is optimal challenges, challenges where you're stretching your abilities but still likely to succeed.
How would we evaluate things, or even should we, in a retrospective view?
I think this is a much larger causal question about counterfactuals, and it's often very hard or impossible to answer meaningfully. But we can still give clear answers to prospective questions, and to specific retrospective ones: if choice A is more likely than B to satisfy competence, relatedness and autonomy, then it is the better choice.
To conclude, I agree with basically everything you stated. The goal is not the goal in the to-do sense, but rather in the compass sense. Was that a satisfactory explanation? :-)
A lot to unpack here! Three statements catch my eye:
Autonomy: making decisions and taking responsibility for these decisions? The most stressful thing in life.
Autonomy: the choice to say 'no' to one's decision? Something that we always have, only the results vary depending on the circumstances and will not always make us happy.
Autonomy: financial and physical ability to own and do what we want? Something that we have little influence on.
Autonomy in the SDT sense is not defined by whether we're making decisions, nor by whether we can own what we want. To make it as specific as I can, it's scoring high on the BPNSFS, which contains the following items on autonomy:
Where (R) items are reverse scored.
As you can see, every item contains "feel". Autonomy is about whether you feel like you can do what you want to do.
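To make the "(R) items are reverse scored" part concrete, here is a minimal sketch of how such a subscale is typically computed, assuming a 1–5 Likert scale; the item names and responses are hypothetical, purely for illustration:

```python
# Reverse-score a Likert item on an assumed 1-5 scale: 1 -> 5, 2 -> 4, etc.
def reverse(score, low=1, high=5):
    return low + high - score

# Hypothetical responses; "_R" marks reverse-scored items.
responses = {"item1": 4, "item2_R": 2, "item3": 5, "item4_R": 1}

# Reverse the (R) items, then take the mean as the subscale score.
scored = [reverse(v) if k.endswith("_R") else v for k, v in responses.items()]
subscale = sum(scored) / len(scored)
```

So an answer of 2 on a reverse-scored item contributes a 4 to the subscale mean.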
It's really amazing how happy we are to give up our autonomy when we feel safe to do so.
Having the ability to give up autonomy and take it back at will is, in itself, incredibly autonomous! It also satisfies relatedness.
Is stress what we need to be happy or how much stress do we need to feel happy?
I highly doubt that stress has an independent effect on happiness, but I find it extremely likely that many of the activities that satisfy competence, relatedness and autonomy to the highest degree are also stressful :-)
I think that phenomenologically, you're right. Other-directed goals (need for relatedness, in SDT terminology) feel like they're essentially other-directed.
I think that the evolutionary cause for having other-directed goals is your own genetic proliferation, and I also think that autonomously holding other-directed goals improves your own well-being, even above and beyond the benefits you get because others like you for it. E.g. Gore et al. 2009.
Stated differently, even if you're optimising completely selfishly, you'll have to be unselfish. We care about others simply because they are important to us, not because they make us happy. They are a terminal value. If they are instrumental, we don't get the benefits to well-being. But caring for them terminally also carries benefits to ourselves. I think that's wonderful!