On mobile I can edit the display name field under "edit account", but I've never changed my username, so I don't know whether it becomes immutable after being changed once.
In my country, once a child enters public school they no longer have the option of switching to homeschooling. So I've often thought that once I have kids, I should definitely start with homeschooling to preserve the option, and if we/the kid(s) find out it doesn't work, they can always switch to public schooling. With your experience being that second path, how do you feel about this chain of reasoning?
See Gwern for an interesting read on deanonymization techniques:
https://www.gwern.net/Death-Note-Anonymity
If we could identify people who are likely to suffer high amounts of suffering, then should we put in sentience throttling so that they don't feel it?
For a long time I thought that even mild painkillers like paracetamol made me feel dull-witted and slow, which was my reason for not using them. When I had an accident and was prescribed ibuprofen for the pain, I refused to take it on the same grounds. I've changed my mind on this now, though, and I feel that I caused needless suffering to myself in the past. It's fine to take painkillers and maybe feel a bit slower if they have such an effect on my subjective experience of pain.
The same reasoning could be applied here. I think force-feeding people ibuprofen is immoral and wrong, but giving them the choice of taking it when they're in pain is good. So too for sentience throttling. If I could choose to take a sentience-throttling pill during tedious, monotonous work, I might be up for that, and I would probably want that option to be available to others. (There is also a parallel to sleeping pills; maybe sleeping pills actually are, literally, sentience-throttling pills.)
A counterpoint would be that diminishing human sentience, even just temporarily, is immoral in and of itself. I don't have an answer to that charge, other than that I feel we already do this, to the extent that painkillers and sleeping pills can already be called sentience throttlers, and I at least no longer think those are problematic.
I think Judea Pearl would answer that the do() operator is the most reductionist explanation possible. The point of the do-calculus is precisely that it can't be found in the data (the difference between do(x) and "see(x)") and requires causal assumptions. Without a causal model, there is no do operator. And conversely, one cannot create a causal model from pure data alone. The do operator sits on a higher rung of "the ladder of causation" than bare probabilities.
I feel there's a partial answer to your last question: the do-calculus is to causal reasoning what Bayes' rule is to probability. The do-calculus can be derived from the rules of probability plus the introduction of the do() operator, but the do() operator itself is something that cannot be explained in non-causal terms. Pearl believes we inherently use some version of the do-calculus when we think about causality.
These ideas are all in Pearl's "The Book of Why".
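The see/do gap is easy to demonstrate in a toy simulation (my own invented numbers, not an example from Pearl): a confounder Z drives both X and Y, while X has no effect on Y at all. Observing X=1 still makes Y=1 more likely, but intervening on X does nothing:

```python
import random

random.seed(0)

def sample(intervene_x=None):
    # Toy model: confounder Z influences both X and Y; X does NOT cause Y.
    z = random.random() < 0.5
    if intervene_x is None:
        x = random.random() < (0.9 if z else 0.1)  # "seeing": X depends on Z
    else:
        x = intervene_x                             # "doing": X is set from outside
    y = random.random() < (0.8 if z else 0.2)       # Y depends only on Z
    return x, y

n = 100_000
obs = [sample() for _ in range(n)]
p_see = sum(y for x, y in obs if x) / sum(x for x, _ in obs)    # P(Y=1 | X=1)
p_do = sum(y for _, y in (sample(True) for _ in range(n))) / n  # P(Y=1 | do(X=1))

# Analytically: P(Y=1|X=1) = 0.74 (confounding), P(Y=1|do(X=1)) = 0.50
print(round(p_see, 2), round(p_do, 2))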
But now I think your question is: where do the models come from? For researchers, the causal models they create come from the background information they have about the problem they're working on: a confounder is possible between these parameters but not those, because of randomization, and so on.
But how does a newly born child or a blank AI system acquire causal models? If that is explained, then we have answered your question. I don't have a good answer.
I myself think (though I haven't given it enough thought) that there might be a bridge from data to causal models through falsification. Take a list of possible causal models for a given problem and search through your data. You might not be able to prove your assumptions, but you might be able to rule causal models out, if they suppose a causal relation between two variables that show no correlation at all.
The trouble is, you don't know whether you can truly rule out the causal link, or whether there is a correlation that doesn't show up in the data because of a confounder. It seems plausible to me that children just assume the link can be ruled out, adopt one of the remaining causal models until new evidence proves them wrong again, and so enter an iterative process eventually leading to a causal model. But again, this idea isn't well developed.
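A minimal sketch of that falsification loop, with everything invented for illustration (the candidate models, the data-generating process, and the correlation threshold): keep only those candidate models whose every assumed causal link is backed by at least some correlation in the data.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

# Toy data: A causes B; C is independent noise.
n = 5_000
a = [random.gauss(0, 1) for _ in range(n)]
b = [ai + random.gauss(0, 1) for ai in a]
c = [random.gauss(0, 1) for _ in range(n)]
data = {"A": a, "B": b, "C": c}

def correlated(x, y, threshold=0.1):
    # Crude check: is there any correlation at all between x and y?
    return abs(statistics.correlation(x, y)) > threshold

# Candidate causal models, listed as the direct links each one assumes.
candidates = {
    "A->B": [("A", "B")],
    "C->B": [("C", "B")],
    "A->B and C->B": [("A", "B"), ("C", "B")],
}

# Falsification: drop any model that posits a causal link between
# two variables showing no correlation in the data.
surviving = [
    name for name, links in candidates.items()
    if all(correlated(data[u], data[v]) for u, v in links)
]
print(surviving)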
The site seems less focused on providing information that is as accurate as possible and more focused on shining a particular light on its topics. This can also be seen in the writing style: it's more casual and pointed than, e.g., Wikipedia's.
I personally think that if you take the above into consideration, RationalWiki can be a good way to get some pointers on how a topic is perceived from a certain point of view, but you have to accept that you'll only get one perspective.
I like this idea, but I have a couple of thoughts.
Anything that approaches anonymity always seems to attract people who would otherwise be shunned, because they feel comfortable spewing nonsense behind a veil of anonymity. How would this be avoided?
And do we have an example of any network like this working? It would be nice to glean lessons from previous attempts.
I like the idea of unlimited posting but limited viewing; it's unusual. But it wouldn't be very user-friendly. Scaling with karma does provide a good incentive; it would also mean people are strongly incentivized to hold on to their accounts.
One alternative would be to actually let go of anonymity towards the network organizers. The posts could be anonymous, but one can only access the network if one has a personal connection to someone willing to vouch for them, and both their real names are coupled to their accounts.
This way spam and outside mob access are completely prevented. But the question is: are people comfortable posting on an account that is coupled to their real name, even if that real name is only accessible to the network organizers?
I know I usually want to keep all my internet accounts as uncoupled as possible. I'll delete an account and start over with a fresh slate, so my past comments are not linked to my current accounts. I keep leaking personal information with each interaction (writing style, timezone, etc.), and at a certain point I feel I've exposed too much, and exposing more with the same online persona would lead to too high a chance of getting deanonymized. So I start over.
This isn't a very rigorous system: it would be better to base this on a calculation of leaked information instead of just a feeling. But it's something.
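A back-of-the-envelope version of that calculation, in the spirit of the Gwern piece linked above. All the traits and their population fractions below are made up for illustration; the idea is just that each leaked trait removes bits from your anonymity, and ~33 bits suffice to single out one person on Earth:

```python
import math

# Hypothetical traits leaked through comments, with invented estimates
# of the fraction of the world population sharing each trait.
population = 8_000_000_000
traits = {
    "writes fluent English": 1 / 5,
    "active in UTC+1 timezone": 1 / 20,
    "familiar with Pearl's do-calculus": 1 / 10_000,
}

bits_to_identify = math.log2(population)  # ~33 bits pins down one person
bits_leaked = sum(math.log2(1 / p) for p in traits.values())

anonymity_set = population
for p in traits.values():
    anonymity_set *= p  # assumes traits are independent, which they aren't really

print(f"{bits_leaked:.1f} of ~{bits_to_identify:.1f} bits leaked; "
      f"anonymity set ~ {anonymity_set:,.0f} people")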
I know I'm a bit late, but that sounds like something that could (up to a point) be reasonably automated.
Apart from flutrackers.com, are there other sites in a similar vein that should be bell-checked? Has there been a previous effort to create a general alarm bell along the lines of these ideas?
As time went on, however, mail clients adapted. They learned (...)
I've heard the exact opposite! There was a Hacker News post about this not too long ago: https://news.ycombinator.com/item?id=22801233#22810132
The prevailing attitude there was that top posting won out precisely because modern mail clients did not have good threading features, and as such new users never got used to anything other than top posting.
We have mostly standardized on a new system, where mail clients include the full earlier message but hide this quoted section by default.
I think top posting in mail has simply won out by the sheer pressure of new users. But that doesn't necessarily mean it's the default everywhere. Notice I've (purposefully) used bottom posting, or inline posting, in this comment, and it seems natural. The context of messaging matters a lot.
I'm pretty new, but I thought LW was basically established with that focus in place, wasn't it?