Nobody special, nor any desire to be. Just sharing my ideas when I appear to know better than the person I'm responding to, or when I believe I have something interesting to share/add. I'm neither a serious nor a formal person, and if you're more knowledgeable than intelligent, you probably won't like me, as I lack academic rigor.
Feel free to correct me when I make mistakes. I'm too certain of myself, as my ideas are rarely challenged. Crocker's rules are fine! When playing the intellectual (as I do on here), I find that social things only get in the way, and when I socialize, I find that intellectual things get in the way, so I separate the two.
Finally, beliefs don't seem to be a measure of knowledge and intelligence alone, but a result of experiences and personality. Those who have had similar experiences and thoughts already will recognize what I say, and those who don't will mostly perceive noise.
Predict and control... I'm not sure about that, actually. The world seems to be a complex system, which means that naive attempts at manipulating it often fail. I don't think we're using technology to control others in the sense that we can choose their actions for them, but we are decreasing the diversity of actions that one can take (for instance, anything which can be misunderstood seems to be a no-go now, as strangers will jump in to make sure that nothing bad is going on, as if it were their business to get involved in other people's affairs). So our range of motion is reduced, but it's not locked to some specific direction which would result in virtue or anything like that.
I don't think that the world can be controlled, but I also think that attempts at controlling it by force are mistaken, as there are upstream factors which influence most of society. For instance, if your population is Buddhist, they will believe that treating others well is the best thing to do, which I think is a superior solution to placing CCTVs everywhere. The best solutions don't need force, and the ones which use force never seem optimal (consider the war on drugs, the taboo on sexuality, attempts at stopping piracy, etc.). I think the correct set of values is enough (but again, the receiver needs to agree voluntarily that they're correct). If everyone can agree on what's good, they will do what's good, even if you don't pressure them into doing so.
I'm also keeping extinction events in mind and trying to combat them; I just do so from a value perspective instead. I'm opposed to creating AGIs, and we wouldn't have them if everyone else were opposed as well. Some people naively believe that AGIs will solve all their problems, and many don't place any special value on humanity (meaning that they don't resist being replaced by robots). But there are also many people like me who enjoy humanity itself, even in its imperfection.
I mean you as the owner of your machine can audit what packets are entering or exiting it
This is likely possible, yeah. But you can design things in such a way that they're simply secure, because it's impossible for them not to be. How do you prevent a lock from being hacked? You keep it mechanical rather than digital. I don't trust websites which promise to keep my password safe, but I trust websites which don't store my password in the first place (they could run it through a one-way hash instead). Great design makes failure impossible (e.g. atomic operations in banking transfers).
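A minimal sketch of that "never store the password" idea, using only Python's standard library (the function names and iteration count are illustrative assumptions, not any particular site's implementation):

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Derive a one-way hash so the plaintext never needs to be stored."""
    salt = os.urandom(16)  # a random salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # persist only these two values, never the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

A site built this way can still verify logins, but a database leak exposes only salts and digests; the failure mode "attacker steals everyone's passwords" is designed out rather than guarded against.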
I’m curious about your thoughts on that.
This would likely result in security, but it comes at a huge cost as well. I feel like there are better solutions, and not just for a specific organization, but for everyone. You could speak freely on the internet just 20 years ago (freely enough that you could tell the nuclear launch codes to strangers if you wanted to), so such a state is still near in a sense. Not only was it harder to spy on people back then, fewer people even wanted to do such a thing, and this change in mentality is important as well. I'm not trying to solve the problem in our current environment; I want to manipulate our environment into one in which the problem doesn't exist in the first place. We just have to resist the urge to collect and record everything (this collection is mainly done by malicious actors anyway, mostly because they want to advertise to you so that you buy their products).

You could go on vacation in a country which considers it bad taste to pry into others' affairs and be more or less immune thanks to that alone, so you don't even need to learn opsec, you just need to be around people who don't know what that word means. You could also use VPNs which keep no logs (if they're not lying, of course), as nothing can be leaked if nothing is recorded. Sadly, the same forces which destroyed privacy are trying to destroy these methods. It's the common belief that we need to be safe, and that in order to be safe we need certainty and control. I don't even think this is purely ideology; I think it's a psychological consequence of anxiety (consider 'control freaks' in relationships as well).

Society is dealing with a lot of problems right now which didn't exist in the past, not because they didn't happen, but because they weren't considered problems. And if we don't consider things to be problems, then we don't suffer from them, so the people who are responsible for creating the most suffering in life are those who point at imperfections (like discrimination and strict beauty standards) and convince everyone that life is not worth living until they're fixed.
Finally, people can leak information, but human memory is not perfect, and people tend to paraphrase each other, so "he said, she said" situations are inherently difficult to judge. You have plausible deniability, since nobody can prove what was actually said. I think all ambiguity translates into deniability, which is also why you can sometimes get away with threatening people: "It would be terrible if something bad happened to your family" is a threat, but you haven't actually shown any intent to break the law. Ambiguity is actually what makes flirting fun (and perhaps even possible), but systematizers and people in the autism cluster tend to dislike ambiguity; it never occurs to them that both ambiguity and certainty have pros and cons.
I mean politically
Politics is a terrible game. If possible, I'd like to return society to the state it had before everyone cared too much about political issues. Since this is not an area where reasonable ideas work, I suggest just telling people that dictators love surveillance (depending on the ideology of the person you're talking to, make up an argument for how surveillance is harmful). The consensus on things like censorship and surveillance seems to depend on the ideology one perceives it to support. Some people will say "We need to get rid of anonymity so that we can shame all these nazis!", but that same sort of person was strongly against censorship 13 years ago, because back then censorship was thought to be what the evil elite used to oppress the common man. So the desire to protect the weak resulted in both "censorship is bad" and "censorship is good" being common beliefs, and it's quite easy for the media to force a new interpretation since people are easily manipulated.
By the way, I think "culture war" topics are against the rules, so I can only talk about them in a superficial and detached manner. Vigilantes in the UK are destroying cameras meant to automate the fining of people, and as long as mentalities/attitudes like this dominate (rather than the belief that total surveillance somehow benefits us and makes us safe), I think we'll be alright. But thanks to technological development, I expect us to lose our privacy in the long run, for the simple reason that people will beg the government to take away their rights.
Sorry in advance for the wordy reply.
Can you identify the specific arguments from ISAIF that you find persuasive
Here's my version (which might be the same; I take responsibility for any errors, but no credit for any overlap with Ted's argument):
1: New technologies seem good at first/on the surface.
2: Now that something good is available, you need to adopt it (or else you're putting yourself or others at a disadvantage, which social forces will punish you for).
3: Now that the new technology is starting to be common, people find a way to exploit/abuse it. This is because technology is neutral: it can always be used for both good and bad things; you cannot separate the two.
4: In order to stop abuse of said technology, you need to monitor its use, restrict access with proof of identity, regulate it, or create new and even stronger technology.
5: Now that you're able to regulate the new technology, you must do so. If you can read people's private emails and you choose not to, you will be accused of aiding pedophiles and terrorists (since you could arguably have caught them if you did not respect their privacy).
This dynamic has a lot of really bad consequences, which Ted also writes about. For instance, once gene editing is possible, why would we not remove genes which result in "bad traits"? If you do not take actions which make society safer, you will be accused of making society worse. So we might be forced to sanitize even human nature, making everyone into inoffensive and lukewarm drones (as the traits which can result in great people and terrible people are the same, the good and the bad cannot be separated. This is why new games and movies are barely making any money, and it's why Reddit is dying: they removed the good together with the bad).
I’m curious how long you think you will be able to slow it down and what your ideas for doing so are
I can slow it down for myself by not engaging with these new technologies (IoT, subscription-based technology, modern social media, etc.), by using fringe privacy-based technologies, or simply by not making noise. If nothing you say escapes the environment in which you said it, you're likely safe. If what you said is not stored for longer periods of time, you're likely safe. If the environment you're in is sufficiently illegible, information is lost and you cannot be held accountable.
I'm also doing what I can to teach people that:
1: Good and Bad cannot be separated. You can only have both of them or none of them. I think this is axiomatically true, which suggests that the Waluigi Effect occurs naturally (just like intrusive thoughts).
2: You cannot have your cake and eat it too. You can have privacy OR safety, you cannot have both. You cannot have a backdoor that only "the good guys" can access. You cannot have a space where vulnerable groups can speak out, without also having a space where terrorists can discuss their plans. You cannot have freedom of speech and an environment in which nothing offensive is said.
Most people in the web3 space are not taking internet anonymity as seriously as it needs to be
This is possibly true, but the very design of web3 (decentralization, encryption) makes it so that privacy is possible. If your design makes it so that large corporations cannot control your community, it also makes it so that the government is powerless against it, as these are equal on a higher level of abstraction.
That can audit every single internet packet entering and exiting a machine
This sounds like more surveillance rather than less. I don't think this is an optimal solution. We need to create something in which no person is really in charge, if we want actual privacy. The result will look like the Tor network, and it will have the same consequences (like illegal drug trade). If a platform is not a safe place to sell drugs, it's also not a safe platform to speak out against totalitarianism or corruption, and it's also not a safe place to be a minority, and it's also not a safe place for a criminal. I think these are equivalent; you cannot separate good and bad.
I like talking in real life, as no records are kept. What did I say, what did I do? Nobody knows, and nobody will ever know. I don't have to rely on trust or probability here. Like with encryption, I have mathematical certainty, and that's the only kind of certainty which means anything in my eyes.
Self-destructing messages are safe as well, as is talking on online forums which will cease to exist in the future, taking all the information with them (what did I say in the voice chat of Counter-Strike 1.4? Nobody knows).
Communities like LW have cognitive preferences for legibility, explicitness, and systematizing, but I think the reason why Moloch did not bother humanity before the 1800s is that it couldn't exist. It seems like game-theoretic problems are less likely to occur when players don't have access to enough information to be able to optimize. This all suggests one thing: that information (and openness of information) is not purely good. It's sometimes a destructive force. The solution is simple to me: minimize the storage and distribution of information.
edit: Fixed a few typos
For about 20 years now, I have considered automated mass-surveillance likely to occur, and I have tried to prevent it. It bothers me that so many people don't have enough self-respect to feel insulted by the infringement of their privacy, and that many people are so naive that they think surveillance is for the sake of their safety.
Privacy has already been harmed greatly, and surveillance is already excessive. And let me remind you that the safety we were promised in return didn't arrive.
The last good argument against mass-surveillance was "They cannot keep an eye on all of us" but I think modern automation and data processing has defeated that argument (people have just forgotten to update their cached stance on the issue).
Enough ranting. The Unabomber argued for why increases in technology would necessarily lead to reduced freedom, and I think his argument is sound from a game-theory perspective. Looking at the world, it's also trivial to observe this effect, while it's difficult to find instances in which the number of laws has decreased, or in which privacy has been won back (the same applies to regulations and taxes; many things have a worrying one-way tendency). The end-game can be predicted with simple extrapolation, but if you need an argument, it's that technology is a power-modifier, and that there's an asymmetry between attack and defense (the ability to attack grows faster, which I believe caused the MAD stalemate).
I don't think it's difficult to make a case for "1", but I personally wouldn't bother much with "2" - I don't want to prepare myself for something when I can help slow it down. Hopefully web 3.0 will make smaller communities possible, resisting the pathological urge to connect absolutely everything together. By then, we could get separation back, so that I can spend my time around like-minded people rather than being moderated to the extent that no group in existence is unhappy with my behaviour. This would work out well unless encryption gets banned.
The maximization of functions leads to the death of humanity (literally or figuratively), but so does minimization (I'm arguing that pro-surveillance arguments are moral in origin and that they make a virtue out of death).
I have reasons for believing that Mensa members are below 130, but also reasons for believing that they're above it.
Below: Most online IQ tests are similar enough to the Mensa IQ test that the practice effect applies. And most people who obsess over their IQ scores probably take a lot of online IQ tests, memorizing most patterns (there's a limit to the practice effect, but it can still give you at least 10 points).
Above: Mensa tests for pattern-recognition ability, which in my experience correlates worse with academic performance than verbal ability does. Pattern-recognition tests also select for people with autism (they tend to score about 20 points higher on RPM-like matrix tests than on other subtests). These people will be smarter than they sound, because their low verbal abilities make them appear stupid, even though their pattern recognition might be 2 standard deviations higher. So you get intelligent people with poor social skills, who sound much dumber than they are, and who tend to have more diagnoses than just autism. It's no wonder that these people go to forums like Mensa's, or that they're less successful in life than their IQ would suggest. They're also incredibly easy targets for the kind of people who go to r/iamverysmart, so it's easy to build a public consensus that they're actually stupid, even when it isn't true.
However, in order for high intelligence to shine (and produce worthy insights) even without formal education, IQs above 150 are likely needed. For in order to generate your own ideas and still be able to compete with the consensus (which is largely based on the theories of geniuses like Tesla, Einstein, Neumann, Turing, Pavlov, etc.), you need to discover similar things independently.
I think many rationalists are above 130. I don't like rationalist mentalities very much, though. They seem to think that everything needs to have a source or a proof (a projected lack of confidence in their own discernment). They also tend to overestimate the value of knowledge (sometimes even using it as a synonym for intelligence). If somebody's IQ is, say, 110, I don't think they will ever have any great takes (even with years of study) which a 140-IQ person couldn't run circles around given a week or two of thought. Ever seen somebody invest their whole life into something that you could dismantle or do better in 5 minutes? You could look at this and go "Rapid feedback is better because you approximate reality and update your beliefs faster, makes sense, but why overcompl- right, it's to make mone- to legitimize the only position in which they are thought to have value - because agile coaches are selling ideas/theory and rely on the illusion of substance, of course".
People tend to get suspicious if you claim IQs above 125, and start analyzing data and looking for reasons to believe that the actual numbers are lower. But I feel like such people really overestimate what an IQ in the 120s or 130s looks like. If you go on the Mensa forums, you will likely find that most of the comments seem rather dumb, and that the community generally appears dumber than LW.
A large number of people who report scoring in the 130s on IQ tests are not lying. If the number seems off but isn't, then what needs updating is the impression of what an IQ in the 130s looks like.
I suppose that people dislike that some high-IQ people just aren't doing very well in life, and prefer to think that they're lying about their scores.
That's basically the exact same idea I came up with!
Your link says popularity ≈ beauty + substance; that's no different from my example of "success of a species = quality of offspring + quantity of offspring". I just generalized to a higher number of dimensions, such that for a space of N dimensions, the success of a person is the area spanned. So it's like those stat circles, but n-dimensional, where n is the number of traits of the person in question. I don't know if traits are best judged when multiplied or added together, but one could play around with either idea.
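To make the "multiplied or added" question concrete, here's a toy sketch; the trait names and scores are made up for illustration, not taken from the linked post:

```python
# Two hypothetical people with traits scored on a rough 0-10 scale.
specialist = {"beauty": 9, "substance": 1}
generalist = {"beauty": 5, "substance": 5}

def additive(traits: dict) -> int:
    """Popularity ≈ beauty + substance: traits simply sum."""
    return sum(traits.values())

def multiplicative(traits: dict) -> int:
    """The n-dimensional 'area spanned': one near-zero trait collapses it."""
    product = 1
    for score in traits.values():
        product *= score
    return product

for name, person in [("specialist", specialist), ("generalist", generalist)]:
    print(name, additive(person), multiplicative(person))
# The additive rule rates them equally (10 vs 10), while the multiplicative
# rule strongly favors the balanced profile (25 vs 9).
```

The two rules disagree exactly where it matters: addition rewards maxing out one dimension, while the area/volume reading punishes any dimension near zero, which is one way to play around with either idea.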
I'm not sure my insights say anything that you haven't already said, but what I wanted to share is that you might be able to improve yourself by observing unsuccessful people and copying their traits in the dimension where you're lacking (this was voted 'wrong' above, but I'm not sure why). And that if you want success, mimicking the actions of somebody who is ugly should be more effective, which is rather unintuitive and amusing.
I also think it would be an advantage for an attractive person to experience what it's like not to be attractive for a while, get used to this, and then become attractive again. He would have to make up for a deficit (he's forced to improve himself), and then when the advantage comes back, he'd be further along than if he had never had the period of being unattractive. And as is often the case with intelligent people, I never really had to study in school, but this left me unable to develop proper study habits. If I learned how below-average people made it through university, that would likely help me more than observing even the best-performing student in my class.
A related insight is that if you want a good solution, look to the people who have to solve a worse version of the problem. Want a good jacket for cold weather? Find out what brands they use in Greenland; those are good, they have to be. Want to get rid of a headache? Don't Google "headache relief"; instead, find out what people with migraines and cluster headaches do, for they're highly motivated to find good solutions.
Anyway, I swear I came up with these ideas before you wrote your post; the similarity is a coincidence, though it looks like I just wrote a worse version of your post. I was partly inspired by I Ching hexagram 42, which says something like "When the superior man perceives good, he imitates it; when he perceives faults, he eliminates them in himself".
I've noticed that your essay doesn't differentiate between beliefs (about truth) and values (subjective preferences). This implies that ideologies approximate truth, or that truth makes people change their minds, or that you can calculate which preferences are best, and I think all of these assumptions are wrong, as value judgements and pure knowledge are entirely separate.
You could argue that holding a belief X results in behaviour Y which leads to better well-being for a group of people, but this is unrelated to the truth value of the belief (if it were not so, we'd be able to prove the existence of god by measuring the outcomes of people who believed in him vs. those who didn't), though this might depend on the type of belief.
I think the ideas are independently useful, but to get the best out of both, I'd probably have to submit a big post (rather than these shortform comments) and write some more related insights (I only shared this one because I thought it might be useful to you). Actually, I know that I'm likely too lazy and unconscientious to ever make such a post, and I invite people to plagiarize, refine and formalize my ideas. I've probably had a thousand insights like this, and after writing them out, they stop being interesting to me, and I go on thinking about the next thing.
I hope my comment was useful to you, though! You can start applying the concept to areas outside of morality, or feel how positive experiences have the same effect (I have made many good memories on sunny days, so everything connected to brightness and summer is perceived more positively by me). There's no need to "fix" good associations blending together; I personally don't, but I also don't identify as a rationalist. I'm more of a meta-gamer/power-gamer, like a videogame speedrunner looking for new glitches to exploit (because it's fun, not because I'm ambitious).
Sometimes I spend a few hours talking with myself, finding out what I really believe, what I really value, and what I'm for and against. The effect is clarity of mind and a greater trust in myself. A lot of good and bad things have a low distance to each other, for instance "arrogance" and "confidence", so without the granularity to differentiate subtle differences, you put yourself at a disadvantage, suspecting even good things.
I suppose another reason that I recommend trusting yourself is that some people, afraid of being misunderstood and judged by others, stay away from anything which can be misunderstood as evil, so they distance themselves from any red flags with a distance of, say, 3 degrees of association.
Having one's associations corrupted because something negative poisons everything within 3 degrees/links of distance has really screwed me over, so I kind of want you to hear me out on this:
I might go to the supermarket, and buy a milkshake, but hate the experience because I know the milkshake has a lot of chemicals in it, because I hate the company which makes them, because I hate the advertisement, because I know the text on the bottle is misleading... But wait a minute, the milkshake tastes good, I like it, the hatred is a few associations away. What I did was sabotage my own experience of enjoying the milkshake, because if I didn't, it would feel like I was supporting something which I hated, merely because something like that existed 2-3 links away in concept space.
I can't enjoy my bed because I think about dust mites, I can't enjoy video games because I think about exploitative Skinner boxes, I can't enjoy pop music because, even though I like the melody, I know that the singer is somewhat talentless and that somebody else wrote the lyrics for them. But I have some young friends (early 20s) who simply enjoy what they enjoy and hate what they hate, and they do not mix the two. They drink a milkshake and it's tasty, they listen to the music and it feels good, and they lie down in their bed and it's soft and cozy. Aren't they living in reality and enjoying the moment, while I'm telling my body that my environment is hostile, which probably makes it waste a lot of energy making sure that I don't enjoy anything (as that would be supporting evil) or let my guard down?
I noticed myself doing this, and stopped this unnecessary spread of negative associations, or at least reduced the distance. Politics seems to be "the mind killer" exactly because people form associations/shortcuts like "sharing -> communism -> mass starvation -> death", putting them off even good things which "get too close" to bad things. And these poisoned clusters can get really big, causing heuristics such as "rock music = the devil". Sorry about the lengthy message, by the way.
I think this effect already happened, just not because of AI.
Nietzsche already warned against the possible future of us turning into "the last man", and the meme "Good times create weak men" is already a common criticism/explanation of newer cultures. There are also memes going around calling people "soy", and increases in cuckolding and other traits which seem to indicate falling testosterone levels (this is not the only cause, but I find it hard to put a name on the other causes, as they're more abstract).
We're being domesticated by society/"the system". We've built a world where cunning is rewarded over physical aggression, in which standing out in any way is associated with danger, and in which we praise the suppression of human nature, calling it "virtue". Even LW is quite harsh on natural biases.
It's a common saying that modern society and human nature are a poor fit, and that this leads to various psychological problems. But the average man has nowhere to aim his frustrations, and he has no way to fight back. The enemy of the average person is not anything concrete; they're being harassed by things which are downstream consequences of decisions made far away from them, by people who will never hear what their victims think about their ideas. I think this leads to a generation of "broken men". This is unlikely to change the genetics of society, though, unless the most wolf-like of us fight back and get punished for it, or unless those who suffer the least from these changes are the least wolf-like (which I think may be the case).
Dogs survive much better than wolves in our current society, and I think it's fair to say that social and timid people survive better than aggressive people who stand up to that which offends them, and more so now than in the past (one can still direct their aggression at the correct targets, but this requires a lot more intelligence than aggressive people tend to have)
I think this is likely to continue, though, by which I mean to say that you don't seem incorrect. Did you use AI to write this article? If so, that would explain the downvotes you got. And a personal nitpick with the "Would this even be Bad?" section: "mood stabilizing" is a misleading term; it actually means mood-reducing. Our "medical solutions" for people suffering in society are basically minor lobotomies. By making people less human, they become a better fit for our inhuman system. If you enjoy the thought of being domesticated, you're probably low on testosterone, or otherwise a piece of evidence that human beings have already been strongly weakened.