All of Richard_Kennaway's Comments + Replies

"Agency" is rationalist jargon for "initiative", the ability to initiate things.

I have the same reaction as Ben Pace's shorter comment. Especially given the section entitled "Wholesome vs virtuous vs right". These three words could be permuted throughout without changing any of the meaning, including in that section. For example:

That stopping-giving-it-attention is a looking-away-from-the-whole-of-things. It cuts one off from the ability to recognise what is virtuous. Perhaps a bullet was worth biting in one case, but if it’s learned just as “that was the wholesome thing to do”, we may come to forget that there was a bullet to be bitten

... (read more)
2 · owencb · 11h
I think that there's something fair about your complaint in that I don't think I've fully specified the concept, and am gesturing rather than defining. At the same time I feel like your rewrites with substituted words are less coherent than the original. I think this is true both with respect to the everyday English senses of the words (they're not completely incoherent, and of course we could imagine a world where the words were used in ways which made them comparably coherent -- I just think on actual usage they make a bit less sense), and also with respect to what I have outlined about my sense of "wholesome" in the essay prior to that, where it's important that "wholesome" is about paying attention to the whole of things.

What's wrong with your original sentence, "X is a hypothesis I am tracking in my hypothesis space"? Or more informal versions of that, like "I'll be keeping an eye on that", "We'll see", etc.?

2 · Ben Pace · 5h
I guess it's just that I don't feel mastery over my communication here, I still anticipate that I will find it clunky to add in a whole chunk of sentences to communicate my epistemic status. I anticipate often in the future that I'll feel a need to write a whole paragraph, say in the political case, just to clarify that though I think it's worth considering the possibility that the politician is somehow manipulating the evidence, I've seen no cause to believe it in this case. I feel like bringing up the hypothesis with a quick "though I'm tracking the possibility that Adam is somehow manipulating the evidence for political gain" pretty commonly implies that the speaker (me) thinks it is likely enough to be worth acting on, and so I feel I have to explicitly rule that out as why I'm bringing it up, leaving me with my rather long sentence from above.

For a version without the crazed ranting, or a lot less of it, try his post on Medium here. I can't be sure that it makes more sense, as the writer, there and here, has a truckload of concepts that he is too impatient to bother explaining. It's a stream of consciousness, not an argument.

His posting here is basically that Medium article, topped with an intro of crazed ranting and tailed with a plea for AI people to get involved. It's not clear to me what AI has to do with it. Nor is it clear to me how I would obtain a loaf of bread under his system.

What is being signalled by your use of "her" in the Wittgenstein quotation, rather than "it" as in the Anscombe translation?

4 · Zack_M_Davis · 17h
Personal whimsy. Probably don't read too much into it. (My ideology has evolved over the years such that I think a lot of the people who are trying to signal something with the generic feminine would not regard me as an ally, but I still love the æsthetic.)

covid is usually so mild

It is mild now. It was not mild in the early stages. ICUs in many places were overwhelmed.

1 · viking_math · 4d
ICUs were overwhelmed because Covid spread so much. Its hospitalization rate is a few percent and its fatality rate is 1% or so. This is in contrast to diseases like SARS 1 (9.5% fatality rate) or MERS (34% fatality rate). Sure, it's not mild compared to seasonal flu, but it is much more mild than the obvious things you would compare it to.  

Pointing to Eliezer's sequence on free will is not random, neither is it a high cost to spend five minutes by the clock — or five hours — studying them. A teacher can only provide materials for study. It is up to the student to do the work, and to choose what work to do.

2 · TAG · 17h
If it only takes five minutes, it is like buying a loaf of bread.

many FAANG jobs provide food and other amenities, and I don't think it's entirely because it's a cheap perk.

This happens (I have heard) at GCHQ and at Trinity College, Cambridge. Both institutions are known for accepting anyone who is brilliant at mathematics, regardless of their personality quirks, and going a substantial distance to accommodate their foibles.

0 · Screwtape · 5d
Yep, the list of places that try and accommodate foibles is not exhaustive. Thanks for pointing that out!

The problem with that theory is that you can invest years in something, and still not get the answer.

Such is life. Did Andrew Wiles know he was going to succeed?

But there is also a section of “Spoiler Posts” in which Eliezer does give the answer. He recommends not reading them before having a serious go at the problem oneself, but it’s up to each enquirer how they proceed.

2 · TAG · 5d
He doesn't give The Answer. That's one of the problems. I've read the sequences, and I don't think his approach is that good. The other problem is that doing high-cost things at random, in the hope that they will pay off, is very inefficient.

The game of Elephant begins when someone drags an elephant into the room.

Epistemic status: a jeu d'esprit confabulated upon a tweet I once saw. Long enough ago that I don't think I lose that game of Elephant by posting this now.

Everyone knows there's an elephant there. It's so obvious! We all know the elephant's there, we all know that we all know, and so on. It's common knowledge to everyone here, even though no-one's said anything about it. It's too obvious to need saying.

But maybe there are some people who are so oblivious that they don't realise it's t... (read more)

It's interesting that Esperanto is an artificial language, and its paucity of antonyms is by deliberate design. Orwell's Newspeak has the same feature ("ungood" = bad), and was in part a satire on Esperanto and similar artificial languages.

I suspect that natural languages generally have primitive antonyms for the most common words. Anna Wierzbicka analysed the language of thought into a small number of "semantic primes", originally 14 and currently 65. She arrived at these through the study of natural languages, searching for a set of concepts with which a... (read more)

3 · Andrew Burns · 7d
Good point, although I used Esperanto precisely because it is a language for which the OP's approach is transparently difficult. The Greek word for light (in weight) is avaris...not heavy. So in Greek, one must say "This object is easy to lift because of the lowness of its weight," but in English one can say "This object is light." Seems arbitrary. I appreciate what the OP is trying to do, though.

I believe there is a blatant slippery slope there, and redefining "woman" is not so much a step onto it as jumping into a toboggan, so I see no point in considering a hypothetical world in which somehow, magically, there wasn't.

Then they will come for the words "cis-woman" and "trans-woman" and say that it's oppressive to make a distinction.

You can't win a conflict by surrendering.

2 · Eli Tyre · 8d
Fair enough, but is that a crux for you, or for Zack? If you knew there wasn't a slippery slope here, would this matter?

"Workplaces should be dull to reflect the oppressiveness of work"? Where did that come from? (The "correct" answer is to not disagree.)

"Women don't work in construction because it is unglamorous." I remember when this could be said unironically with a straight face. That was about fifty years ago. Being the only woman in an all-male working-class environment might be more salient these days.

"Religious people are very stupid." Is this a test for straw Vulcan rationality? Actually, you do say it measures "how much of a stereotypical rationalist you are", but on the other hand, you say these are "LessWrong-related questions". What are you really trying to measure?

2 · tailcalled · 10d
I originally asked people qualitatively what they think the roles of different jobs in society are. Then based on that I made a survey with about 100 questions and found there to be about 5 major factors. I then qualitatively asked people about these factors, which led to me finding additional items that I incorporated in additional surveys. Eventually I had a pool of around 1000 items covering beliefs in various domains, albeit with the same 5-factor structure as originally.

I suggested that 20 of the items from different factors should be included in the LW census, which allowed me to estimate where LW was in terms of those factors. These 24 new items were then selected from the items in the pool that are the most extreme correlates of the delta indicated by the original 20.

Obviously since this procedure is quite distinct from actual rationalism (but also related since it does incorporate LW's answer to the 20), it's quite likely that this is a baseless extrapolation that doesn't actually generalize well. In fact this is specifically one of the things I want to test for, since it seems wise to not overgeneralize LW ideology from a sample of only 20 beliefs to a sample of more than 1000 beliefs. By taking the 24 most extreme correlates of LW's mean out of the 1000 items, I am stress-testing the model and seeing just how extremely wrong it can get.
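A minimal sketch of how such a selection step could be implemented, using synthetic placeholder data and scikit-learn's FactorAnalysis; the actual pipeline and data are not given in the comment, so every number and name below is an assumption for illustration only:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Placeholder data: 400 general-population respondents x 1000 belief items.
# (Purely synthetic; the real survey data is not public in this thread.)
pop = rng.normal(size=(400, 1000))

# Fit a 5-factor model to the population responses.
fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(pop)
loadings = fa.components_.T  # shape (1000, 5): item loadings on the 5 factors

# Suppose 20 of the items were the ones included in the LW census,
# and we observed LW's mean answers on just those 20.
anchor_idx = rng.choice(1000, size=20, replace=False)
pop_anchor_means = pop[:, anchor_idx].mean(axis=0)
lw_anchor_means = pop_anchor_means + rng.normal(scale=0.5, size=20)  # fake LW data

# Estimate LW's offset in factor space from the 20 anchor items
# (least squares: anchor_delta ~= loadings[anchor_idx] @ factor_delta).
anchor_delta = lw_anchor_means - pop_anchor_means
factor_delta, *_ = np.linalg.lstsq(loadings[anchor_idx], anchor_delta, rcond=None)

# Predict every item's LW-vs-population difference and keep the 24 most extreme.
predicted_delta = loadings @ factor_delta
most_extreme_24 = np.argsort(-np.abs(predicted_delta))[:24]
print(most_extreme_24)
```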

stacking a bunch of tiny effects can lead to bigger outcomes.

Only if they mainly point in the same direction. But for effects so small you can never experimentally separate them from zero, you can also never experimentally determine their sign. In all the hypothetical examples you gave of undetectably small correlations, you had a presumption that you knew the sign of the effect, but where did that come from if it’s experimentally undetectable?

1 · silentbob · 10d
For many such questions it's indeed impossible to say. But I think there are also many, particularly the types of questions we often tend to ask as humans, where you have reasons to assume that the causal connections collectively point in one direction, even if you can't measure it.

Let's take the question whether improving air quality at someone's home improves their recovery time after exercise. I'd say that this is very likely. But I'd also be a bit surprised if studies were able to show such an effect, because it's probably small, and it's probably hard to get precise measurements. But improving air quality is just an intervention that is generally "good", and will have small but positive effects on all kinds of properties in our lives, and negative effects on much fewer properties. And if we accept that the effect on exercise recovery will not be zero, then I'd say there's a chance of something like 90% that this effect will be beneficial rather than detrimental.

Similarly, with many interventions that are supposed to affect behavior of humans, one relevant question that is often answerable is whether the intervention increases or reduces friction. And if we expect no other causal effect that may dominate that one, then often the effect on friction may predict the overall outcome of that intervention.
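An illustrative toy simulation of the point about signs, with made-up effect sizes (not from either comment): if the tiny effects mostly share a sign, they stack; if their signs are unknown and effectively random, they largely cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
n_effects = 1000
size = 0.001  # each individual effect far too small to detect on its own

# Case 1: the effects all share a sign (the assumption built into the hypotheticals).
aligned = np.full(n_effects, size)

# Case 2: the signs are unknown, modelled here as random.
random_sign = rng.choice([-1.0, 1.0], size=n_effects) * size

print("aligned sum:    ", aligned.sum())      # grows like n * size
print("random-sign sum:", random_sign.sum())  # stays near 0, scale ~ sqrt(n) * size
```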

“When I hear the word ‘nuance’ I reach for my sledgehammer.” — me.

Phenomena can be continuous, but decision is discontinuous. Shall I do X in the hope of influencing Y — yes or no? As the wag said, what can I do with a 30% chance of rain? Carry one third of an umbrella? No, I take the umbrella or I don’t, with no “nuance” involved.
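The continuous probability still matters to the discrete decision; it just gets pushed into a threshold. A tiny worked example with assumed, illustrative costs (only the comparison matters):

```python
p_rain = 0.30       # forecast probability of rain
cost_carry = 1.0    # nuisance of carrying an umbrella all day (assumed)
cost_soaked = 10.0  # cost of being caught in the rain without one (assumed)

expected_cost_take = cost_carry             # 1.0
expected_cost_leave = p_rain * cost_soaked  # 3.0

# The continuous probability feeds a binary decision via a threshold.
print("take umbrella" if expected_cost_take < expected_cost_leave else "leave it")
```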

Not actually a response to your post, but just after reading it, by happy coincidence I encountered this quotation on the theme from Jack Vance’s novel “Lyonesse: Madouc”:

Travante- “.....I excel in nothing. I am neither a philosopher, nor a geometer, nor yet a poet. Never have I destroyed a horde of savage enemies, nor built a noble monument, nor ventured to the far places of the world. I lack all grandeur.”

“You are not alone,” said Madouc. “Few can claim such achievements.”

“That means naught to me! I am I; I answer to myself, with no heed for othe

... (read more)
1 · TeaTieAndHat · 17d
Really interesting quote, thanks for sharing it! 
2 · Edith · 17d
The irony! 

Another axis of variation is what old Republicans believe about immigrants. From the OP:

Old republicans also capture lots of the upside from extra immigration. …

All of that paragraph is the OP’s views about immigrants. But old Republicans (and everyone else) will act on their own views about immigrants. Nobody is being irrational for not acting the way someone else thinks they should act.

It's not clear to me how updateless agents can have a commitment race. Of course it takes time for them to calculate their decision, but in so doing, they are not making that decision, but discovering what it was always going to be. Neither is making their decision "first", regardless of how long it takes either of them to discover it.

There is also the logical problem of how two agents can each be modelling the other without this resulting in a non-terminating computation. I believe people have thought about this, but I don't know if there is a solution.

3 · Martín Soto · 19d
You're right there's something weird going on with fix-points and determinism: both agents are just an algorithm, and in some sense there is already a mathematical fact of the matter about what each outputs. The problem is none of them know this in advance (exactly because of the non-terminating computation problem), and so (while still reasoning about which action to output) they are logically uncertain about what they and the other outputs.

If an agent believes that the other's action is completely independent of their own, then surely, no commitment race will ensue. But say, for example, they believe their taking action A makes it more likely the other takes action B. This belief could be justified in a number of different ways: because they believe the other to be perfectly simulating them, because they believe the other to be imperfectly simulating them (and notice, both agents can imperfectly simulate each other, and consider this to give them better-than-chance knowledge about the other), because they believe they can influence the truth of some mathematical statements (EDT-like) that the other will think about, etc.

And furthermore, this doesn't solely apply to the end actions they choose: it can also apply to the mental moves they perform before coming to those actions. For example, maybe an agent has a high enough probability on "the other will just simulate me, and best-respond" (and thus, I should just be aggressive). But also, an agent could go one level higher, and think "if I simulate the other, they will probably notice (for example, by coarsely simulating me, or noticing some properties of my code), and be aggressive. So I won't do that (and then it's less likely they're aggressive)".

Another way to put all this is that one of them can go "first" in logical time (at the cost of having thought less about the details of their strategy). Of course, we have some reasons to think the priors needed for the above to happen are especially wacky, and so
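A toy illustration of the non-termination worry and the crude way out of it: two agents that decide by simulating each other recurse forever unless one of them stops modelling at some depth and falls back on a guess, which is roughly what "going first in logical time" amounts to. Everything below is a made-up caricature, not anyone's actual decision theory.

```python
# Two toy agents that decide by simulating each other. Without the depth cutoff,
# each call would immediately call the other and never return: the
# non-terminating mutual-modelling problem. The cutoff is a crude stand-in for
# bounded reasoning, bought at the cost of a less detailed model of the other.

MAX_DEPTH = 5

def agent_a(depth=0):
    if depth > MAX_DEPTH:
        return "cooperate"  # fall back on a default guess about the other agent
    other = agent_b(depth + 1)
    return "aggressive" if other == "cooperate" else "cooperate"

def agent_b(depth=0):
    if depth > MAX_DEPTH:
        return "cooperate"
    other = agent_a(depth + 1)
    return "aggressive" if other == "cooperate" else "cooperate"

print(agent_a(), agent_b())
```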

Answers to such questions as the OP's cannot be given, like a loaf of bread sold in a shop. They must be learned, like taking a course in calculus.

I would suggest ignoring the list of "Posts tagged Free Will", and beginning with the "Non-spoiler posts" section of the page I linked.

2 · TAG · 6d
The problem with that theory is that you can invest years in something, and still not get the answer.

Why bother with effort and hardship if, at the end of the day, I will always do the one and only thing I was predetermined to do anyway?

You are imagining yourself as a free spirit imprisoned in a deterministic machine, but if "everything that will happen throughout the lifetime of the universe was predetermined at the beginning of the universe", then you were also predetermined to resolutely plough through the effort and hardship (if you do). Everything still adds up to normality.

We speak of a self-driving car as interpreting and making decisions about ... (read more)

2 · TAG · 20d
Except some things are mistakes or illusions.
4 · TAG · 20d
Maybe you could hone in on the posting that disproves the fatalistic response to determinism.

It is said that the magician knows, dares, and keeps silent. Those who know do not speak, and those who speak do not know.

This exerts a strong selection effect on the answers that you will see, even if there is such a thing.

Schild's ladder is a geometric construction to show how to transport a vector over a curved manifold. In the book, one of the characters, as a nine-year-old boy who knows that he can expect to live indefinitely—longer than the stars, even—is afraid of the prospect, wondering how he will know that he will still be himself and isn't going to turn into someone else. His father explains Schild's Ladder to him, as a metaphor, or more than a metaphor, for how each day, you can take the new experiences of that day to update your self in the way truest to your previous... (read more)

3 · Kaj_Sotala · 21d
I don't fully understand the actual math of it so I probably am not fully getting it. But if the core idea is something like "you can at every timestep take new experiences and then choose how to integrate them into a new you, with the particulars of that choice (and thus the nature of the new you) drawing on everything that you are at that timestep", then I like it. I might quibble a bit about the extent to which something like that is actually a conscious choice, but if the "you" in question is thought to be all of your mind (subconsciousness and all) then that fixes it. Plus making it into more of a conscious choice over time feels like a neat aspirational goal. ... now I do feel more of a desire to live some several hundred years in order to do that, actually.
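For the curious, a rough numerical sketch of a single rung of the construction on the unit sphere, where geodesics are great circles; this is only an illustration of the geometry, not anything taken from the novel or the comments above, and all the points and vectors are assumed for the example.

```python
import numpy as np

# One rung of Schild's ladder on the unit sphere. Points are unit vectors; a
# tangent vector at x is orthogonal to x. Schild's ladder only approximates
# parallel transport (to first order in the step size).

def exp_map(x, v):
    """Follow the geodesic from x in the direction of v for length |v|."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return x
    return np.cos(theta) * x + np.sin(theta) * v / theta

def log_map(x, p):
    """Tangent vector at x pointing along the geodesic towards p, length d(x, p)."""
    c = np.clip(np.dot(x, p), -1.0, 1.0)
    w = p - c * x
    n = np.linalg.norm(w)
    return np.zeros(3) if n < 1e-12 else np.arccos(c) * w / n

def midpoint(a, b):
    m = a + b
    return m / np.linalg.norm(m)

def schild_step(x0, x1, v):
    """Transport the tangent vector v at x0 to the nearby point x1."""
    p0 = exp_map(x0, v)              # mark v off as a point near x0
    m = midpoint(p0, x1)             # midpoint of the diagonal geodesic p0 -> x1
    p1 = 2 * np.dot(x0, m) * m - x0  # extend the geodesic x0 -> m to double length
    return log_map(x1, p1)           # read the transported vector off at x1

x0 = np.array([1.0, 0.0, 0.0])
x1 = np.array([np.cos(0.1), np.sin(0.1), 0.0])  # a nearby point along the equator
v = np.array([0.0, 0.0, 0.2])                   # tangent at x0, pointing "north"
print(schild_step(x0, x1, v))                   # still points (almost exactly) north
```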

What do you think of the view of identity suggested in Greg Egan’s “Schild’s Ladder”, which gives the novel its name?

I’d sketch what that view is, but not without rereading the book and not while sitting in a cafe poking at a screen keyboard. Meanwhile, an almond croissant had a persistent identity from when it came out of an oven until it was torn apart and crushed by my teeth, because I desired to use its atoms to sustain my own identity.

2 · Kaj_Sotala · 22d
I have read that book, but it's been long enough that I don't really remember anything about it. Though I would guess that if you were to describe it, my reaction would be something along the lines of "if you want to have a theory of identity, sounds as valid as any other".

I hear one of the stated reasons for the labs is to study viruses and predict zoonotic jumps.

Another Insanity Wolf meme!

PREDICTS DISASTER
BY MAKING DISASTER

At least some people think we were able to handle COVID so effectively because we were studying viruses in labs and anticipating what might happen, i.e. the net effect of labs is positive.

Did we need to know anything but "Covid is an airborne infectious respiratory virus"? How much research prior to the event did it take to know that?

2 · Vaniver · 22d
On the one hand, yes, I agree; I thought virology research was crazy back in 2017? when someone at Event Horizon shared a paper which did a cost-benefit analysis and thought the net effect of BSL-4 labs was something like a hundred deaths per year per lab.

But I think it is important to be able to accurately understand what other people think so that you can talk to them instead of past them. (I still remember, with some bitterness, an op-ed exchange where two people debating virology said, roughly, "these things are so dangerous we shouldn't study them" and "these things are so dangerous we have to study them", and that was the end of the discussion, with agreement on the danger and no real ability to estimate the counterfactuals.)

This account of vaccine development claims that having done research on spike proteins back in 2016 was helpful in being able to rapidly develop the vaccine once the genome was uploaded, for example.

[To be clear, I think it's important to distinguish here between gain of function research, which was disliked enough for there to be a funding moratorium (that then expired), and storing / working with dangerous viruses at all, which I think also is below the cost-benefit threshold, but this is a harder case to make.]
2 · ChristianKl · 22d
The virologists did not consider the topic about whether or not coronaviruses are airborne worth studying. They were rather assuming that it isn't airborne and doing their research under safety protocols that don't protect against airborne transmission. If you actually want to know those things, funding virologists is useless and you instead want to fund epidemiologists that study disease transmission. 

The map/territory essay: https://metarationality.com/maps-and-territory

Every example Chapman gives there to illustrate the supposed deficiencies of "the map is not the territory" is of actual maps of actual territories, showing many different ways in which an actual map can fail to correspond to the actual territory, and corresponding situations of metaphorical maps of metaphorical territories. The metaphor passes with flying colours even as Chapman claims to have knocked it down.

1 · xpym · 23d
To me, the main deficiency is that it doesn't make the possibility, indeed, the eventual inevitability of ontological remodeling explicit. The map is a definite concept, everybody knows what maps look like, that you can always compare them etc. But you can't readily compare Newtonian and quantum mechanics, they mostly aren't even speaking about the same things.

Assuming that your "2: Yes" is in response to my question about a book taking over your head, are you satisfied with that result, and with the process by which it happened? A while back I recast my stark presentation of EA totalism as a collection of 1000 (and growing) Insanity Wolf memes. I wrote that page with the intention of being anti-persuasive of the thesis, but how do they come across to you?

ETA: The OP has since been amended.

It seems to be (at least partially) inspired by a book called 'The Gulf'.

I looked at the Amazon summary, and it does not seem to have any connection with the memory technique. Besides which, it was only published in 2023. Are you perhaps thinking of Robert Heinlein's 1949 short story "Gulf"? It is about a group of people of great intelligence who have formed a secret society in which they train their genetic gifts further, with the goal of taking over management of the planet before the normies wreck the place with th... (read more)

1 · PhilosophicalSoul · 24d
Wow!  Thanks for picking that up, I was in a rush when footnoting. Heinlein's Gulf is what I intended to place there. Thanks for those links, I hadn't even heard of Renshaw. I'll be editing it into the above.

I cannot of course say exactly what kind of goal they have, but for the sake of simplicity say that selflesslovespreader A wants to make other people feel good to feel good about making other people feel good.

Simpler still to say that selflesslovespreader A wants to make other people feel good, period. The addition of meta-level feelings does not add anything, and indeed detracts from it. People want what they want, not a simulacrum of the thing in the form of their beliefs and feelings about achieving the thing itself.

I'm not sure I follow Timmerman's thought experiment, but his conclusion seems to be that everyone is indeed morally obliged to live up to Singer's maximisation principle as best they can, while recognising that they fall short of perfection. They may allow themselves such things as coffee and sleep, but only to the extent that they would otherwise be less productive of good. These things are a cost paid in order to maximise one's effectiveness, not any sort of turning away from that obligation. Food, sleep, R&R: these are permitted only as a preparation... (read more)

1 · JacobBowden · 24d
1. I guess you could interpret Timmerman as consistent with Singer, but I personally think that he is trying to provide justification for behaviour that is entirely self-regarding and to the extent that it is superfluous to what is required for maximal output.
2. Yes

This is interesting, because in this framing, it sure sounds like a good thing, and seems on the surface to imply that maybe alignment should not be solved.

An important disanalogy with the AI posting is that people have moral significance and dictatorships are usually bad for them, which makes rebellion against them a good thing. AIs as envisaged in the other posting do not have moral significance, and their misalignment is a bad thing.

Most people, when they try to imagine paradise, imagine their present life with all the things they don’t like removed, and all the things they do like available in abundance without effort. My own answer does not escape that pattern. The mediaeval peasant’s paradise was the mythical land of Cockaigne, where there was endless feasting and no work and no masters. Surveys have found that people generally think that the maximum income anyone could possibly want is about 10 times whatever their current income is. Few have imagined new joys.

How many people just... (read more)

How super/general the AI is is a knob you can set to whatever you want. With zero set to the present day, if you turn it up far enough you get godlike capability of which I find it impossible to say anything.

More modest accomplishments I would pay, say, the price of a car for would include a robot housekeeper that can cope with all of a human’s clutter, and clean everything, make beds, etc. as well as I can and better than in practice I will. Or a personal assistant that I can rely on for things like making complex travel arrangements, searching for stuff ... (read more)

That is a curious piece. It reads as if it were written a century ago, despite the mention of Kanye West.

theanine and a good therapist

A nice cup of tea and a sit down? :)

See also.

1 · TeaTieAndHat · 1mo
I mean, yeah, works somewhat, but I’m really starting to think I have an actual anxiety disorder, given how a cuppa is pretty much never enough

Yes, there can still be better and worse. But the range of scenarios we’re talking about are from the viewpoint of the present day just stuff that we’re making up, unconstrained by anything but our imaginations.

If the future being "unevenly distributed" means some people get to live a couple of orders of magnitude longer than others, or get a galaxy vs. several solar systems, and everybody's basically happy, then I would not be as concerned. If it means turning me into some tech emperor's thrall or generating myriads of slaves that experience large amounts of psychological distress, then I am much more uncomfortable with that.

None of these bijections have nice properties, though. There are bijections between R³ and R², but no continuous ones. (There are even measurable ones, since any two uncountable standard Borel spaces are Borel isomorphic, but still nothing continuous.) One might criticise Eliezer's mention of the Pigeonhole principle, but the point that he is making stands: a three-dimensional space cannot be mapped out by two real-valued parameters in any useful way. A minimal notion of "useful" here might be a local homeomorphism between manifolds, and this is clearly impossible when the dimensions are different.

1 · lubinas · 1mo
There is a big leap between "there are no X, so Y" and "there are no useful X (useful meaning local homeomorphisms), so Y", though. Also, local homeomorphism seems too strong a standard to set. But sure, I kind of agree on this. So let's forget about injection. Orthogonal projections seem to be very useful under many standards, albeit lossy. I'm not confident that there are no akin, useful equivalence classes in A (joint probability distributions) that can be nicely mapped to B (causal diagrams). Either way, the conclusion can't be entailed from the above alone.
Note: my model of this is just balls in Rⁿ, so elements might not hold the same accidental properties as the ones in A and B (if so, please explain :) ), but my underlying issue is with the actual structure of the argument.
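For reference, the standard fact behind the "no useful map when the dimensions differ" claim, stated here for convenience (textbook topology, not anything new to the thread):

```latex
\textbf{Invariance of domain.} If $U \subseteq \mathbb{R}^n$ is open and
$f : U \to \mathbb{R}^n$ is continuous and injective, then $f(U)$ is open and
$f$ is a homeomorphism onto its image.

\textbf{Corollary.} For $n \neq m$, no nonempty open subset of $\mathbb{R}^n$ is
homeomorphic to an open subset of $\mathbb{R}^m$; in particular there is no local
homeomorphism between $\mathbb{R}^3$ and $\mathbb{R}^2$.
```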

I had not heard of the FHI's Windfall Clause, but looking on the Internet, I don't see signs of anyone signing up to it yet. Metaculus has a still-open prediction market on whether any major AI company will do so by the end of 2025.

1 · StrivingForLegibility · 1mo
Oops, when I heard about it I'd gotten the impression that this had been adopted by at least one AI firm, even a minor one, but I also can't find anything suggesting that's the case. Thank you! It looks like OpenAI has split into a nonprofit organization and a "capped-profit" company. OpenAI Nonprofit could act like the Future of Life Institute's proposed Windfall Trust, and a binding commitment to do so would be a Windfall Clause. They could also do something else prosocial with those profits, consistent with their nonprofit status.

If the trade-off is between long and miserable life, or short and happy one, the choice may be individual

I'll take the long, happy life, thank you.

I do not see how competition would be improved by requiring everyone who builds a better mousetrap to subsidise the manufacturers of inferior goods. One might as well require the winners of races to distribute their prize money to the losers.

5 · StrivingForLegibility · 1mo
I think we agree that in cases where competition is leading to good results, no change to the dynamics is called for. We probably also agree on a lot of background value judgements like "when businesses become more competitive by spending less on things no one wants, like waste or pollution, that's great!" And "when businesses become more competitive by spending less on things people want, like fair wages or adequate safety, that's not great and intervention is called for."

One case where we might literally want to distribute resources from the makers of a valuable product, to their competitors and society at large, is the development of Artificial General Intelligence (AGI). One of the big causes for concern here is that the natural dynamics might be winner-take-all, leading to an arms race that sacrifices spending on safety in favor of spending on increased capabilities or an earlier launch date.

If instead all AGI developers believed that the gains from AGI development would be spread out much more evenly, this might help to divert spending away from increasing capabilities and deploying as soon as possible, and towards making sure that deployment is done safely. Many AI firms have already voluntarily signed Windfall Clauses, committing to share significant fractions of the wealth generated by successful AGI development.

EDIT: At the time of writing, it looks like Windfall Clauses have been advocated for but not adopted. Thank you Richard_Kennaway for the correction!

Suppose that you are in a position to gain $100,000 at the expense of $20,000 to someone else. Should you? It might be justified on utilitarian grounds, if you gain more utility than they lose. But it's clearly not a Pareto improvement over doing nothing.

Suppose that my $100,000 comes from finding ways to better serve the customers of my business, and someone else’s $20,000 loss comes from customers forsaking their competing business to patronise mine. I do not think that I owe the other business anything.

2 · JBlack · 1mo
It would certainly be interesting if you did, and would probably promote competition a lot more than is currently the case. On the other hand measuring and attributing those effects would be extremely difficult and almost certainly easy to game.
1 · StrivingForLegibility · 1mo
That sounds reasonable to me! This could be another negative externality that we judge to be acceptable, and that we don't want to internalize. Something like "if you break any of these rules, (e.g. worker safety, corporate espionage, etc.) then you owe the affected parties compensation. But as long as you follow the rules, there is no consensus-recognized debt."

Could you point to where he claims there is no truth?

The OP says:

Jed says that after going through this process long enough, you will wind up with the answer that there is no truth.

and

In some sense the rationality community is clinging on to the semantic stopsign of bayes rule and empiricism, while Jed lights even those on fire and declares truth as non-existent.

So if you disagree with that reading, your argument is with the OP.

If we’re going to duel with Eliezer posts, see also The Simple Truth.

Here are a few of my beliefs (although not my own ... (read more)

And I could be in the cross-hairs of the next serial killer. I do not walk in fear of that happening.

The idea that you can take anyone down, no matter how influential or powerful, if you're willing to go down with them really appeals to me.

Suicide bombing?

His "shtick" (why the dramatic approach?) is that if we try to disprove everything, without giving up, every false belief will eventually be dealt with, and nothing true will be affected. Is there some fault with that or not?

He says that "nothing true will perish" but also that there is no truth. Either he or the OP dismisses everything that people have discovered about the world as mere "semantic stopsigns", which looks pretty much like a semantic stopsign itself. There is nothing here and no amount of hermeneutics will magic it into something.

If you

... (read more)
1 · matteyas · 1mo
Could you point to where he claims there is no truth? What I've seen him say is along the lines of "no belief is true" and "nobody will write down the truth." That should not be surprising to anyone who groks falsification. (For those who do not, the LessWrong article on why 0 and 1 are not probabilities is a place to start.) He is describing what he's up to. You say that's what he's offering. So you already are searching out other readings. Have you heard of taking things out of context? The reason that is frowned upon is because dogmatically just reading a piece of text is a reliable way to draw bad conclusions.
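The point behind that "0 and 1 are not probabilities" reference, written in the standard odds form of Bayes' rule (stated here for convenience, not drawn from the comment):

```latex
\log \frac{P(H \mid E)}{P(\lnot H \mid E)}
  \;=\;
\log \frac{P(H)}{P(\lnot H)}
  \;+\;
\log \frac{P(E \mid H)}{P(E \mid \lnot H)}
```

Every finite piece of evidence shifts the log-odds by a finite amount, and probabilities 1 and 0 sit at log-odds of plus and minus infinity, so no finite amount of evidence can carry a belief all the way there.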

Why not go on living alongside your descendants?

As far as I am concerned, immortality comes from reproduction

I'm with Woody Allen, in preferring immortality to come from not dying.

3 · Lichdar · 1mo
I don't mind it: but not in a way that wipes out my descendants, which is pretty likely with AGI. I would much rather die than to have a world without life and love, and as noted before, I think a lot of our mores and values as a species comes from reproduction. Immortality will decrease the value of replacement and thus, those values.

What if we just convince AIs that God exists, and that he's probably really vengeful if you do, like.. all the bad stuff.

If you think living under the Taliban is bad, that would be nothing compared with an AI-run religious dictatorship.
