I direct skepticism at boosters who support timelines fast enough to reach AGI in the near future; that sounds like a doomer-only position.

In the end, children are still humans.

Half of childhood is a social construct. (In particular, most of the parts pertaining to the teenage years.)

Half of the remainder won't apply to a given particular child. Humans are different.

A lot of that social construct was created as part of a jobs program. You shouldn't expect it to be sanely optimized towards excuses made up fifty years after the fact.

Childhood has very little impact on future career/social status/college results. They've done all sorts of studies, and various nations have more or less education, and the only things I've seen that produce more impact than a couple of IQ points are, like, not feeding your children. Given access to resources, after the very early years, children are basically capable of raising themselves.

In summary, it's best not to concern yourself with social rituals more than necessary and just learn who the actual person in front of you is, and what they need.

I note that one of my problems with "trust the experts"-style thinking is a guessing-the-teacher's-password problem.

If the arguments for flat earth and round earth sound equally intuitive and persuasive to you, you probably don't actually understand either theory. Sure, you can say "round earth correct", and you can get social approval for saying correct beliefs, but you're not actually believing anything more correct than "this group I like approves of these words."

My experience is that rationalists are hard-headed and immune to evidence?

More specifically, I find that the median takeaway from rationalism is that thinking is hard, and you should leave it up to paid professionals to do that for you. If you are a paid professional, you should stick to your lane and never bother thinking about anything you're not being paid to think about.

It's a serious problem with rationalism that half of the teachings are about how being rational is hard, doesn't work, and takes lots of effort. It sure sounds nice to be a black belt truth master who kicks and punches through fiction and superstition, but just like in a real dojo, the vast majority, upon seeing a real black belt, realize they'll never stand a chance in a fight against him, and give up.

More broadly, I see a cooperate/defect dilemma: everybody is better off in a society of independent thinkers, where everybody else is more wrong, but wrong in diverse ways that don't correlate, such that truth is the only thing that does correlate. However, the individual is better off being less wrong by aping wholesale whatever everybody else is doing.

In summary, the pursuit of being as unwrong as possible is a ridiculous goodharting of rationality and doesn't work at scale. To destroy that which the truth may destroy, one must take up his sword and fight, and that occasionally, or rather, quite frequently, involves being struck back, because lies are not weak and passive entities that merely wait for the truth to come slay them.

This is sort of restating the same argument in a different way, but:

it is not in the interests of humans to be Asmodeus's slaves.

From there I would ask: does assigning the value [True] to [Asmodeus] via [Objective Logic] prove that humans should serve Asmodeus, or does it prove that humans should ignore objective logic? And if we had just proven that humans should ignore objective logic, were we ever really following objective logic to begin with? Isn't it more likely that this thing we called [Objective Logic] was, in fact, not objective logic to begin with, that the entire structure should be thrown out, and that the name [Objective Logic] should instead go to something else, something that doesn't appear to say humans should serve Asmodeus?

So, a number of issues stand out to me, some have been noted by others already, but:

My impression is that there are also less endorsable or less altruistic or more silly motives floating around for this attention allocation.

A lot of this list looks to me like the sort of heuristics where societies that don't follow them inevitably crash, burn, and become awful. A list of famous questions where the obvious answer is horribly wrong, where there's a long list of groups who came to the obvious conclusion and became awful, and where it's become accepted wisdom not to do that, except among the perpetually stubborn "it'll be different this time" crowd, and doomers who insist "well, we just have to make it work this time, there's no alternative".

if anyone chooses to build, everything is destroyed

The problem with our current prisoner's dilemma is that China has already openly declared its intentions. You're playing against a defect-bot. Also, your arguments are totally ineffective against them, because you're not writing in Chinese. And the opposition is openly malicious: if alignment turns out to be easy, this ends with hell on earth, which is much worse than the false worst case of universal annihilation.
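The defect-bot point is mechanical game theory: against a player known to defect unconditionally, cooperation is strictly dominated. A minimal sketch, using textbook prisoner's-dilemma payoffs (the specific numbers are my own illustrative choice, not from the comment):

```python
# Toy one-shot prisoner's dilemma. payoffs[(my_move, their_move)] = my payoff.
# Numbers follow the standard T > R > P > S ordering (5 > 3 > 1 > 0);
# they are illustrative assumptions, not values from the original text.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff given the opponent's known move."""
    return max(["cooperate", "defect"], key=lambda m: payoffs[(m, opponent_move)])

# Against an unconditional defector, defecting is the unique best response:
print(best_response("defect"))  # -> defect
```

With these payoffs defection is in fact the best response to either opponent move; the point of naming the opponent a defect-bot is that the hope of mutual cooperation, which motivates cooperating in repeated play, is off the table entirely.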

On the inevitability of AI: I find current attempts at AI alignment to be spaceships-with-slide-rules silliness, not serious work. Longer AI timelines are only useful if you can do something with the extra time. You're missing necessary preconditions to both AI and alignment, and as long as those aren't met, neither field is going to make any progress at all.

On qualia: I expect intelligence to be more interesting in general than the opposition expects. There are many ways to maximize paperclips, and even if, technically, one path is actually correct, it's almost impossible to produce sufficient pressure to aim a utility function directly at it. I expect an alien superintelligence that's a 99.9999% perfect paperclip optimizer, and plays fun games on the side, to play more than 99% of the quantity of games that a dedicated fun-game optimizer would. I accuse the opposition of bigotry towards aliens, and assert that the range of utility functions that produce positive outcomes is much larger than the opposition believes. Also, excluding all AI that would eliminate humanity excludes lots of likable AI that would live good lives but reach the obviously correct conclusion that humans are worse than them and need to go, while failing to exclude any malicious AI that values human suffering.

On anthropics: We don't actually experience the worlds that we fail to make interesting, so there's no point worrying about them anyway. The only thing that actually matters is the utility ratio. It is granted that, if this worldline looked particularly heaven-oriented, and not hellish, it would be reasonable to maximize the amount of qualia attention by being protective of local reality, but just looking around me, that seems obviously not true.

On Existential Risk: I hold that the opposition massively underestimates current existential risks excluding AI, most of which AI is the solution to. The current environment is already fragile. Any stable evil government anywhere means that anything that sets back civilization threatens stagnation or worse; that is, every serious threat, even those that don't immediately wipe out all life, most notably nuclear weapons, constitutes an existential risk. Propaganda and related techniques can easily drive society into an irrecoverable position as they exist today. Genetics can easily wipe us out, and worse, in either direction. Become too fit, and we're the ones maximizing paperclips. Alternatively, there's the grow-giant-antlers-and-die problem, where species trap themselves in a dysgenic spiral. Evolution does not have to be slow, and especially if social factors accelerate the divide between losers and winners, we could easily breed ourselves to oblivion in a few generations. Almost any technology could get us all killed. Super pathogens with a spread phase and a kill phase. Space technology that slightly adjusts the pathing of large objects. Very big explosions. Cheap stealth, guns that fire accurately across massive distances, fast transportation, easy ways to produce various poison gases. There seems to be this idea that just because it isn't exotic, it won't kill you.

In sum: I fully expect that this plan reduces the chances of long term survival of life, while also massively increasing the probability of artificial hell.

Something I would really, really like anti-AI communities to consider is that regulations/activism/etc. aimed at harming AI development and slowing AI timelines do not have equal effects on all parties. Specifically, I argue that the time until the CCP develops CCP-aligned AI is almost invariant, whilst the time until Blender reaches sentience potentially varies greatly.

I have much, much more hope for likeable AI via open-source software rooted in a desire to help people and make their lives better than via (worst case scenario) malicious government actors, or (second worst) corporate advertisers.

I want to minimize first the risk of building Zon-Kuthon, then Asmodeus. Once you're certain you've solved those two, you can worry about not building Rovagug. I am extremely perturbed by the AI alignment community whenever I see talk of preventing the world's destruction that moves any significant probability mass from Rovagug to Asmodeus. A sensible AI alignment community would not bother discussing Rovagug yet, and would especially not imply that the end of the world is the worst-case scenario.

However, these hypotheses are directly contradicted by the results of the "win-win" condition, where participants were given the ability to either give to their own side or remove money from the opposition.

I would argue this is a simple "stealing is bad" heuristic. I would also generally expect subtraction to anger the enemy and cause them to stab more kittens.

Republicans are the party of the rich, and they get so much money that an extra $1,000,000 won’t help them.

Isn't this a factual error?
