All of Shankar Sivarajan's Comments + Replies

A fun exercise is to look at actual UN resolutions, and see how much they differ from these.

1Arjun Panickssery1d
Here's Resolution 2712 from a few weeks ago, on "The situation in the Middle East, including the Palestinian question":

I like that originalism wins in the end.

As with 500 Million, But Not A Single One More, deicide is a versatile (and inspiring!) framing for human advancement. My objection to your conceptualization of Mot is that it'd be more satisfying to break him up into smaller gods who can be slain individually: one for the god preventing "skyscrapers literally miles high" (whom you can also blame for the Tower of Babel), one for the god of keeping humans from flight (responsible for the Fall of Icarus, now pretty thoroughly dead), one for smallpox (Sopona, also basically dead), etc.

This is great: I now curse the names of both Mot and Moloch. May they be slain, and upon their corpses (or from them, à la Ymir, if you prefer), we will build utopia.

This is true, but I've only ever heard people use it while describing their political views, and at that point, "Sherlockian Abduction" is unnecessary. My example is the kind of thing people might slip into more casual conversation.

I think this is false. I have seen it regularly used to mean something like "particularly evil liberal."

Anyone who uses the noun adjunct "Democrat" instead of the adjective "Democratic" is some flavor of right wing. (Obviously only applicable to America.)

2Ms. Haze6d
In a similar vein, I have only ever seen the term "classical liberal" used by people who identify with the term.
3the gears to ascension6d
I agree that in my experience this is definitely probabilistically biased in the direction you state, but I don't think it's consistently true. This usage varies.
1Cole Wyeth6d
Being from America, I feel pretty confident seconding this one. I will be visiting America soon and will pay attention for examples.

you'd probably be cluttering your mind with largely useless knowledge

You didn't quote Holmes!

I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things, so that he has difficulty laying his hands upon it. Now the skillful workman is very careful indeed as to what he takes into his brain-attic. He will

... (read more)
1Cole Wyeth6d
Yeah that definitely belongs in the post :)

The "… I accidentally clicked a radio button and can't un-answer this question" option is funny: it's a good solution to a common problem, but one that doesn't actually exist in this case. 

3Screwtape8d
Doesn't the problem still exist? As far as I know you still can't unclick a radio button.

Since you're willing to straightforwardly exchange cash for status boosts, you could offer some comparable reward for people fitting the same criteria who will publicly take your side of the bet.

1MadHatter9d
OK, anybody who publicly bets on my predicted outcome to the RCT wins the right to engage me in a LessWrong dialogue on a topic of their choosing, in which I will politely set aside my habitual certainty and trollish demeanor.

I predict that this post will be received favorably: this is what wacky ideas that turn out to work look like. I think your proposed etiology is bunk, and you've stated your hypothesis with way too much confidence, but there seems to be some connection between Alzheimer's and the gut microbiome and your proposed treatment is quite low risk. It's the kind of thing people should try if they've got nothing better, and afaict there is nothing better.

It is my understanding that the US federal government levies taxes on gifts and barter. This proposal sounds to me like something in between the two, and if it is not, it can certainly be construed to be by a prosecutor.

2AnthonyC9d
There is a tax on gifts of value greater than $17,000 per recipient per giver per year. Unlikely to be an issue. There is only a tax on barter, essentially a sales tax, in the event that the seller makes more than the bartered item cost them. Sharing involves no profit or loss, and when selling used stuff, you're unlikely to make a net profit.
2AnthonyC9d
I really don't see how? What tax are you thinking of?

If you think of matrices as lists of vectors, yeah, that works. But I think that's akin to thinking of integers as strings of digits.

In my experience, cranks (at least physics cranks) realize that university email addresses are often public and send emails detailing their breakthrough/insight to as many grad students as they can. These emails never get replies, but (and this might surprise you) often get read. This is not a stupid strategy: if your work is legit (unlikely, but not inconceivable), this will make it known.

2MadHatter10d
Right, this is how Ramanujan was discovered.

In Python, you can zip more than two arrays: the metaphor of a zipper was always a stretch. The word "zip" is common, short, close enough in meaning to "interleave," and has a convenient inverse. "Pack" would probably work too, but "imbricate" would have been a fun choice!
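A minimal sketch of both points, in plain Python with made-up variable names:

```python
# zip takes any number of iterables, so the two-sided "zipper"
# image breaks down as soon as you pass it a third one.
nums = [1, 2, 3]
lets = ["a", "b", "c"]
syms = ["!", "@", "#"]

triples = list(zip(nums, lets, syms))
# [(1, 'a', '!'), (2, 'b', '@'), (3, 'c', '#')]

# The convenient inverse: splatting the result back into zip
# "unzips" it (which is also how you transpose a list of rows).
unzipped = list(zip(*triples))
# [(1, 2, 3), ('a', 'b', 'c'), ('!', '@', '#')]
```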

2jefftk10d
Or transpose?

Since you explicitly asked for feedback regarding your downvotes, the "oh, woe is me, my views are so unpopular and my posts keep getting downvoted" lamentations you've included in a few of your posts get grating, and might end up self-fulfilling. If you're saying unpopular things, my advice is to own it, and adopt the "haters gonna hate" attitude: ignore the downvotes completely.

2MadHatter10d
Oh, I do.

Okay, I have not downvoted any of your posts, but I see the three posts you probably mean, and I dislike them, and shall try to explain why. I'm going to take the existence of this question as an excuse to be blunt.

The Snuggle/Date/Slap Protocol: Frankly, wtf? You took the classic Fuck/Marry/Kill game, tweaked it slightly, and said adding this as a feature to GPT-4 "would have all sorts of nice effects for AI alignment." At the end, you also started preaching about your moral system, which I didn't care for. (People gossiping about the dating habits of min... (read more)

1MadHatter10d
I'm definitely a crank, but I personally feel like I'm onto something? What's the appropriate conduct for a crank that knows they're a crank but still thinks they've solved some notorious unsolved problem? Surely it's something other than "crawl into a hole and die"...

I think you could use a better example for "structure purist, content neutral": that's where carefully crafted deception (without being actually false) would go, and you undersell it by using a polite "white lie" as your central example.

2RamblinDash10d
"You could save up to 15% or more on car insurance"? (maybe too political but TBH the best example) "Iraq’s government openly praised the attacks of September the 11. And al Qaeda terrorists escaped from Afghanistan are known to be in Iraq."

Even a libertarian might eventually recognize that the refrain "internalize your externalities" is being used to exploit him: all anyone who wants to infringe on his liberty needs to do is utter the phrase and then make up an externality to suit.

  • You may not engage in homosexual activity because of the externality of God smiting the city and/or sending a hurricane.
  • You must be confined to your house and wear a mask because of the externality of grandma dying.
  • You may not own a gun because of the externality of children getting shot.
  • You must wear a headscarf b
... (read more)
8Eli Tyre11d
Most of these seem legitimate to me, modulo that instead of banning the thing you should pay for the externality you're imposing. Namely, climate change, harming wildlife, spreading contagious diseases, and risks to children's lives. Those are real externalities, either on private individuals or on whole communities (by damaging public goods). It seems completely legitimate to pay for those externalities. The only ones that I don't buy are the religious ones, which are importantly different because they entail not merely an external cost, but a disagreement about actual cause and effect.  "I agree that my trash hurts the wildlife, but I don't want to stop littering or pay to have the litter picked up" is structurally different than "God doesn't exist, and I deny the claim that my having gay sex increases risk of smiting" or "Anthropogenic climate change is fake, and I deny the claim that my pollution contributes to warming temperatures." Which is fine. Libertarianism depends on having some shared view of reality, or at least some shared social accounting about cause and effect and which actions have which externalities, in order to work.  If there are disagreements, you need courts to rule on them, and for the rulings of the courts to be well regarded (even when people disagree with the outcome of any particular case).
-2Morpheus11d
Well, the problem is God isn't real. Your hamburger becomes slightly more expensive because there is a carbon tax. I would say your examples are abusing the concept (and I have seen them before, because people make trashy arguments all the time). The concept itself makes lots of sense.

all humans get rights and the vote.

It seems like all your preceding reasoning has been motivated by wanting to get to this point.

1RogerDearnaley15d
Mostly fair comment, though actually this is just a predecessor to the main points I want to get to: why an aligned agent is (pretty much the only) kind of agent that this argument doesn't apply to. Though there has been a pretty observable movement in this direction in recent centuries in most societies (abolition of slavery, wider voting franchises, etc), and I do briefly hypothesize above why that might be and why that hypothesis would mean this trend would continue. Yes, in that part of the discussion I'm moving fast. Do you disagree that functional modern societies should give humans pretty equal rights/moral worth/votes/etc?
3ChristianKl16d
The way the executive can make promises to other countries that are binding for future administrations is to do it as part of a treaty that gets ratified by the Senate. The German unification happened under the Treaty on the Final Settlement with Respect to Germany, which has Russia and the United States as parties. If Russia's position at the time had been that they only agreed with German unification if a promise was made not to expand eastward, they could have asked for it to be included in that treaty. Had they done that, it would have been binding for future US administrations in a way that statements by foreign ministers aren't. There are plenty of cases, like the sanctions against Belarus, that are a much better example of the United States actually not upholding promises it made.
2Viliam17d
I thought the response was more like: "this never happened, you just made that up". You can't break a promise you never made in the first place. It would be hard to make this work in a democracy. A politician may promise something, and get replaced by another politician later. Should the latter honor the promises made by the former? (Is Biden required to build the wall, just because Trump promised it?) The entire point of democratic elections is that people can replace the politicians they no longer want.
3positivesum17d
There has been a misunderstanding - I did not mean to say any of this at all, let alone mock. I would absolutely LOVE to hear about more examples of honor playing a significant role in international relations, especially conflict de-escalation! Perhaps I'm simply not familiar enough. It would certainly make my day to learn that this is a more common occurrence than I ever thought it could be!
1bhauth18d
In the cases where societally-better replacements are patented - which is a definite minority of cases - governments have rules in place to force companies to license patents to competitors under reasonable terms. Patents only exist because governments enforce them for the sake of overall benefit to society.

Alternatively:

  1. Type (or copy-and-paste) ``Mądry''.
  2. Run XeLaTeX instead.
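A minimal compilable sketch of what that looks like (the font setup here is just fontspec's default; assume any OpenType font with Polish glyphs would do):

```latex
% Compile with xelatex (or lualatex): both accept Unicode input
% directly, so the ą can be typed or pasted as-is.
\documentclass{article}
\usepackage{fontspec} % defaults to Latin Modern, which has ą
\begin{document}
Mądry
\end{document}
```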

"Disagree and commit" is a good policy for servants and subordinates (which seems to be context it's meant for). Among free and equal men, "When the mob and the press and the whole world tell you to move, your job is to plant yourself like a tree beside the river of truth and tell the whole world, 'No, you move'" is better: if you disagree, you don't actually have to commit to whatever the consensus decision is.

I like the Voltaire (actually Evelyn Beatrice Hall) quote as much as the next guy, but if you would punish me for honestly saying what I believe ("freedom of speech does not mean freedom from consequences"), "Fuck you, I'm gonna lie to you" is the morally sound response. That works even if it's society as a whole punishing me: I reject entirely your supposed duty to honestly state one's beliefs.

1Ilio19d
Saying "fuck you" is helpful when the aim is to exclude whoever disagrees with your values. This is often instrumental to constructing a social group, or to getting accepted into a social group that includes high-status toxic characters. I take "be nice" as the claim that there are always better objectives.

agreeing to recognize Israel within the 1948 borders, and to demilitarization

What you're describing is at best a protectorate, not a sovereign state. (Yes, I know there are some that like to style themselves "protected states" instead.) 

I don't see why this is better than Israel simply conquering the place outright, appointing a governor, and then letting it (or even helping it) develop economically, to become a new Singapore or whatever. 

2Yair Halberstadt19d
Demilitarisation would be necessary initially for this to be acceptable to Israel (apart from such small arms as are necessary to maintain law and order, and a coast guard to prevent smuggling), but could be removed long term given continued peace and economic integration. Also it should be acceptable for Gaza to enter into peace treaties with e.g. Egypt to defend itself from Israeli aggression, and Israel to defend itself against Egyptian aggression. Finally it should be expected that Gaza will maintain authority over its airspace and waters. I think this would count as a protected state instead of a protectorate, since it would be expected that Gaza will be responsible for its own international relations. It would also be with a view to becoming a full sovereign state over time.
5ChristianKl19d
If there's a governor appointed by Israel, it's likely that there will be a sustained insurgency against that government. It's difficult to get economic development when there's an ongoing insurgency.

Is there a prediction market on whether he'll be reinstated?

5g-w121d
There's a bunch. Here's one: https://manifold.markets/NealShrestha58d3/sam-altman-will-return-to-openai-by

How many e's are supposed to be in that word?

Embrace the diaeresis! Say "peeër."

Why do you believe this complicated policy is better than simply lying?

Because I want to keep the option of being able to make promises. This way, people can trust that, while I might not answer every question they ask, the things that I do say to them are the truth. If I sometimes lie to them, that's no longer the case, and I'm no longer able to trustworthily communicate at all.

Meta-honesty is an alternate proposed policy that could perhaps reduce some of the complication, but I think it only adds new complication because people have to ask you questions on the meta level whenever you say something for which they might suspe... (read more)

Wisdom is well-calibrated intelligence: enough not to get exploited, but not so much that it provokes hatred.

Who said anything about slowly and painfully?

Me. And I don't actually endorse that. That was my point.

And my comments are threatening? Saying I'd defend my loved ones with my life from those (like you) who are happy for them to die to achieve their goals? Sure, I guess, but I stand by that.

Though I may have said otherwise, I do not, in fact, seriously want all "AI safety" researchers to die slowly and painfully, and would not be willing to risk my life to achieve that.

People willing to trade off the lives of my loved ones "for the greater good," on the other hand, yes, certainly. (I do not, however, see a feasible way of getting rid of all of you, or even an appreciable fraction, even at the cost of my life, so rest easy, my fanatical ideals aren't going to translate into deeds.)

1Oliver Sourbut1mo
Who said anything about slowly and painfully?? I don't think a fully reflective you would endorse that. (FWIW I'm not whoever downvoted your comments, though these definitely aren't to my taste and could be interpreted as threatening)

If you met a race of aliens, intelligent, friendly, etc., would you "turn into a Warhammer 40K Inquisitor" who considers the xenos unworthy of any moral consideration whatsoever? If not, why not?

1AlphaAndOmega1mo
I would certainly be willing to aim for peaceful co-existence and collaboration, unless we came into conflict for ideological reasons or plain resource scarcity. There's only one universe to share, and only so much in the way of resources in it, even if it's a staggering amount. The last thing we need is potential "Greedy Aliens" in the Hansonian sense. So while I wouldn't give the aliens zero moral value, it would be less than I'd give another human or human-derivative intelligence, for that fact alone.

someone's immortality

Everyone's immortality. They don't typically make cartoon villains like that.

4Oliver Sourbut1mo
The framing of OP is specifically about racing just slowly enough that some specific people make the cut[1], which is absolutely a tradeoff between everyone's lives and those specific people's immortality. OP is explicitly indifferent about the effects on other people, including those who die sooner and those whose chances might be ruined by moving too fast for safety.
[1] Which rests on a possibly-questionable received wisdom that AGI is a sufficient and necessary route to immediate immortality.
-1Oliver Sourbut1mo
Are you aware I'm a full time AI safety researcher? I don't think you want to 'get rid' of us. Perhaps if you could politically silence me or intimidate me while having me carry on technical work? Naturally I don't endorse this plan even for highly fanatical egoists. Separately, even if I were just my political influence (along with my reference class), I (tentatively) don't believe you're as fanatical as your comment claims.

"In the name of the greatest species that has ever trod this earth, I draw the line in the dust and toss the gauntlet before the feet of tyranny, and I say humanism now, humanism tomorrow, and humanism forever."

3AlphaAndOmega1mo
Ctrl+F and replace humanism with "transhumanism" and you have me aboard. I consider commonality of origin to be a major factor in assessing other intelligent entities, even if millions of years of divergence mean they're as different from their common Homo sapiens ancestor as a rat is from a whale. I am personally less inclined to grant synthetic AIs rights, for the simple reason that we can program them not to chafe at their absence, without that being the imposition it would be to do the same to a biological human (at least after birth).

At least ternary: an "unsure" option is definitely worth including. (That also seems to be the third most popular option in the questions above.)

I think a fourth option, for "the question is wrong," would also be a good one, but perhaps redundant if there is also a comment section.

And simultaneously, does not even try to satisfy those who favor open access.

2the gears to ascension1mo
agreed.

this would be a big barrier for small teams

Isn't that the point of this exercise?

I think anorexics are typically defined more by a (pathological) fear of being fat. In practice, of course, that results in them being extremely skinny.

I don't think that fact was meant to be surprising, but rather just to point out that not eating in the first place (or exercising intensely to burn off whatever you ate), like anorexics, seems to work better (well, "better") than the bulimics' "binge and purge" approach.

It might be worth making explicit what you're comparing it to, which is probably eating less in the first place. I.e., your question should be, "Can you sate your hunger by eating a lot, and then 'cheat' by throwing up quickly?"

2lc1mo
Yeah, that's my actual question

in a democratic society,

That's true, but irrelevant. In oligarchies, you can just make shit up, à la "Iraq has weapons of mass destruction." If the ruling elites want "secret information" to justify a law they want to pass, such information will conveniently materialize on cue.

My argument (which I'm not all that sure I believe myself):

  • You need computation done "on-board," not in the cloud, and that requirement drives prices an order of magnitude too high to be feasible.

I think this time is different.

That's what people said last time too. And the time before that.

9FlorianH1mo
That's so correct. But still so wrong, I'd like to argue. Why? Because replacing the brain is simply not the same as replacing just our muscles. In all the past we've merely augmented our brain, adding muscle or calculation or writing power with all sorts of dumb tools, but the brain remained the crucial, all-central point for all action. We will now have tools that are smarter, faster, and more reliable than our brains. Probably even more empathic. Maybe more loving. Statistics cannot be extrapolated across a visible structural break. Yes, it may have been difficult to anticipate, 25 years ago, that computers that calculate so fast would not quickly change society all that fundamentally (though they changed it quite fundamentally), so the "this time is different" crowd of 25 years ago was wrong. But in hindsight it is not so surprising: as long as machines were not truly smart, the world could not change as fundamentally as we now foresee. This time, we seem to be about to get the truly smart ones. The future is a miracle; we cannot truly fathom how exactly it will look, so nothing is absolutely certain. But merely looking back to a period when muscles were replaceable and brains were not is simply no way to extrapolate into a future where something qualitatively new is about to be born. You need something more tangible and reliable to rebut the hypothesis underlying the article, which beautifully and concisely explains why we're awaiting something rather unimaginably weird. If you can show where specifically it seems wrong, that would be great to read.

Fig 1 in the "extra" paper is hilarious! It's almost entirely blank, and that you need "physical samples" is classified as "key info."

I'm laboring beneath a grave misunderstanding of what a policy paper is actually intended to be

I think you are. It is my understanding that a "policy paper" is essentially a longer, LaTeXed version of a protest sign, intended to be something sympathetic congressmen can wave around while bloviating about "trusting the Science!" It's not meant to be true.

I expect the supposedly dangerous information (that the authors are careful not to actually tell you) is some combination of obvious (to a person of ordinary skill in the art), useless, and wrong, roughly analogous to the following steps for building a nuclear bomb:

  1. Acquire 100 kilograms of highly enriched uranium.
  2. Assemble into a gun-type fission weapon.
  3. Earth-Shattering Kaboom!

This "draw the rest of the fucking owl" kind of advice is good for a laugh, and as fodder for fear-mongering about actually open AI (not to be confused with the duplicitously named OpenAI), but little else.

0jefftk1mo
I think this is mostly not true: you can pay other people to do the difficult parts for you, as long as you are careful to keep them from learning what it is you're trying to do.
5mnvr1mo
Hey, thanks for the comment. While I agree that "they'll believe what they're programmed to," I feel that's a bit too near-term a framing. I was imagining a farther future where artificial intelligence goes beyond current LLMs and current political scenarios. I'm doing these thought experiments about a future where artificial intelligences (whether as an evolution of current-day LLMs or some other mechanism) have enough agency to seek out novel information. Such agency would be given to the artificial intelligence not out of benevolence, but just as a way to get it to do its job properly. In such a scenario, saying that the AI will just believe what it is programmed to believe is akin to saying that a human child will never believe differently from the education / indoctrination (take your pick) they were given as a child. That's a good general rule of thumb, but not a given.
2the gears to ascension1mo
corporate progressivism, perhaps.