All of PrometheanFaun's Comments + Replies

Jokes Thread

That's contrary to my experience of epistemology. It's just a word, define it however you want, but in both epistemic logic and pragmatics-stripped conventional usage, possibility is nothing more than a lack of disproof.

Dark Arts of Rationality

Have you seen this explored in mathematical language? Cause it's all so weird that there's no way I can agree with Hofstadter to that extent. As yet, I don't really know what "smart" means.

5ygert8yYeah, I agree, it is weird. And I think that Hofstadter is wrong: With such a vague definition of being "smart", his conjecture fails to hold. (This is what you were saying: It's rather vague and undefined.) That said, TDT is an attempt to put a similar idea on firmer ground. In that sense, the TDT paper is the exploration in mathematical language of this idea that you are asking for. It isn't Hofstadterian superrationality, but it is inspired by it, and TDT puts these amorphous concepts that Hofstadter never bothered solidifying into a concrete form.
Dark Arts of Rationality

I've never recognised a more effective psychonaut than you. You've probably seen further than I, so I'd appreciate your opinion on a hypo I've been nursing.

You see the way pain reacts to your thoughts. If you respect its qualia, find a way to embrace them, that big semi-cognisant iceberg of You, the Subconscious, will take notice, and it will get out of your way, afford you a little more self control, a little less carrot and stick, a little less confusion, a little closer to some rarely attained level of adulthood.

I suspect that every part of the subc... (read more)

Dark Arts of Rationality

As I understand it, Hofstadter's advocacy of cooperation was limited to games with some sense of source-code sharing. Basically, both agents were able to assume their co-players had an identical method of deciding on the optimal move, and that that method was optimal. That assumption allows a rather bizarre little proof that cooperation is the result said method arrives at.

And think about it, how could a mathematician actually advocate cooperation in pure, zero knowledge vanilla PD? That just doesn't make any sense as a model of an intelligent human being's opinions.

-1satt8yWhat ygert said. So-called superrationality has a grain of truth but there are obvious holes in it (at least as originally described by Hofstadter). Sadly, even intelligent human beings have been known to believe incorrect things for bad reasons. More to the point, I'm not accusing Hofstadter of advocating cooperation in a zero knowledge PD. I'm accusing him of advocating cooperation in a one-shot PD where both players are known to be rational. In this scenario, too, both players defect. Hofstadter can deny this only by playing games(!) with the word "rational". He first defines it to mean that a rational player gets the same answer as another rational player, so he can eliminate (C, D) & (D, C), and then and only then does he decide that it also means players don't choose a dominated strategy, which eliminates (D, D). But this is silly; the avoids-dominated-strategies definition renders the gets-the-same-answer-as-another-rational-player definition superfluous (in this specific case). Suppose it had never occurred to us to use the former definition of "rational", and we simply applied the latter definition. We'd immediately notice that neither player cooperates, because cooperation is strictly dominated according to the true PD payoff matrix, and we'd immediately eliminate all outcomes but (D, D). Hofstadter dodges this conclusion by using a gimmick to avoid consistently applying the requirement that rational players don't leave free utility on the table.
3ygert8yAgreed. But here is what I think Hofstadter was saying: The assumption that is used can be weaker than the assumption that the two players have an identical method. Rather, it just needs to be that they are both "smart". And this is almost as strong a result as the true zero knowledge scenario, because most agents will do their best to be smart. Why is he saying that "smart" agents will cooperate? Because they know that the other agent is the same as them in that respect. (In being smart, and also in knowing what being smart means.) Now, there are some obvious holes in this, but it does hold a certain grain of truth, and is a fairly powerful result in any case. (TDT is, in a sense, a generalization of exactly this idea.)
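The dominance argument in satt's reply can be checked mechanically. Here is a minimal sketch, using the textbook illustrative PD payoff values (3 for mutual cooperation, 1 for mutual defection, 5/0 off-diagonal); these specific numbers are an assumption for concreteness, not something from the thread:

```python
# One-shot Prisoner's Dilemma payoffs for the row player.
# The values 3/0/5/1 are the standard textbook illustration, assumed here.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def strictly_dominates(s1, s2):
    """s1 strictly dominates s2 if it pays more against every opposing move."""
    return all(PAYOFF[(s1, opp)] > PAYOFF[(s2, opp)] for opp in ("C", "D"))

print(strictly_dominates("D", "C"))  # True: cooperation is strictly dominated
```

Applying only the avoids-dominated-strategies requirement eliminates every outcome but (D, D), exactly as the reply argues, with no need for the gets-the-same-answer clause.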
The Value (and Danger) of Ritual

Sometimes I will stand and look at the church and wonder if today is the day I get desperate enough to go full sociopath, pretend to join the flock, and use the network to start a deviant Christianity offshoot.

LessWrong gaming community

I don't know Civ, but for practising the kind of strategizing you're describing I'd recommend Neptune's Pride.

and I've known people for whom the opposite was tragically true.

Heh. I'm one of those people. I practically fell in love with my first ally. I'm lucky they were really nice when they broke my lines, essentially throwing me a sword and telling me to defend myself before starting the invasion. I'd have been heartbroken otherwise. I guess to an extent I thought they were damning us both to death by zombie bot rush by breaking our alliance, but the... (read more)

LessWrong gaming community

Also, is there some place Lesswrongians go for real-time chat?

IRC channel, #lesswrong on irc.freenode.net

Arguments Against Speciesism

But now I've just discovered that argumentum ad governess is invalid

Where was the argument for that? Non-humans attaining rights by a different path does not erase all other paths.

"If the inequitable society has greater total utility, it must be at least as good as the equitable one" would still hold though, no?

Well... yeah, technically. But consider, for example, the model (worlds = {A, B}, f(W) = sum(log(felicity(e)) for e in population(W))), with world A = (2,2,2,2) and world B = (1,1,1,9). Then f(A) ≥ f(B), i.e. ¬(f(A) < f(B)), so ¬(A < B); i.e., the equitable society is also at least as good as the inequitable, higher-sum-utility one. So if you want to support all embeddings via summation of an increasing function of ... (read more)
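The arithmetic in that counterexample is quick to verify. A minimal sketch (natural log is assumed as the increasing function; any base gives the same ordering):

```python
import math

# Worlds as tuples of individual felicity values, from the example above.
A = (2, 2, 2, 2)   # equitable society
B = (1, 1, 1, 9)   # inequitable society with the greater total utility

def f(world):
    # Order embedding: sum of an increasing (here concave) function of felicity.
    return sum(math.log(x) for x in world)

print(sum(A), sum(B))  # 8 vs 12: B wins on total utility
print(f(A), f(B))      # ~2.773 vs ~2.197: A wins under f
```

So B has the greater sum while f(A) > f(B), which is all the counterexample needs.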

Who Wants To Start An Important Startup?

I propose a new term for what we're trying to do here, not for-profit, nor not-for-profit, but for-results.

[This comment is no longer endorsed by its author]Reply
Who Wants To Start An Important Startup?

The Carcinogen is already doing all it can to demolish any grand central church of atheism that might or might not exist. For example, this kind of antimeme spreads like wildfire. There is no need for us to do anything to encourage dispersal and mutation; it is already underway. And, I'm not sure about this, but doesn't humanity already have swarm intelligence setups for generating new concepts, new categories for people? I wouldn't expect we'd need a machine to do that for us.

Second, there is absolutely no reason for us to settle for an idea that is not profitable.

Would Xodarap agree that the premises are (assuming we have operator overloads for multisets rather than sets)

  • the better set is a superset (A ⊂ B) ⇒ (A < B)

  • or everything in the better set that's not in the worse set is better than everything that's in the worse set that's not in the better set, (∀a∈(A\B), b∈(B\A) value(a) < value(b)) ⇒ (A < B)
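Those two premises can be made concrete with Python `Counter`s standing in for multisets; the `value` function and the example worlds below are hypothetical stand-ins, not anything from the thread:

```python
from collections import Counter

def forced_worse(A, B, value=lambda x: x):
    """True if the two premises above force A < B (B strictly better)."""
    A, B = Counter(A), Counter(B)
    # Premise 1: A is a proper sub-multiset of B.
    if A != B and not (A - B):
        return True
    # Premise 2: everything only in A is worth less than everything only in B.
    only_A = list((A - B).elements())
    only_B = list((B - A).elements())
    if only_A and only_B and max(map(value, only_A)) < min(map(value, only_B)):
        return True
    return False

print(forced_worse([1, 2], [1, 2, 3]))  # True, by the superset premise
print(forced_worse([1, 2], [3, 4]))     # True, by the pairwise-better premise
print(forced_worse([1, 9], [2, 3]))     # False: neither premise applies
```

Note that `False` here only means the premises are silent about the pair, not that ¬(A < B).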

If the inequitable society has greater total utility, it must be at least as good as the equitable one.

No, the premises don't necessitate that. "A is at least as good as B", in our language, is ¬(A < B). But you've stated that the lack of an edge from A to B says nothing about whether A < B, now you're talking like if the premises don't conclude that A < B they must conclude ¬(A < B), which is kinda affirming the consequent.

It might have been a slip of the tongue, or it might be an indication that you're overestimating the signific... (read more)

1Xodarap8yThis is a good point, what I was trying to say is slightly different. Basically, we know that (A < B) ==> (f(A) < f(B)), where f is our order embedding. So it is indeed true that f(A) > f(B) ==> ¬(A < B), by modus tollens. Yeah, that's a pretty clever way to get around the constraint. I think my claim "If the inequitable society has greater total utility, it must be at least as good as the equitable one" would still hold though, no?
Better Rationality Through Lucid Dreaming

Great answer, I know this is something I need to do more in life anyway. So I did a little bit of it just now. Sudden increase in levels of curiosity [so virtuous. Wow.]. I'm so curious I even want to know crap like why my housemate sometimes leaves a spoon stuck in the coffee grounds of the compost container. Obviously they used the spoon to move the grounds in there, but why did they leave it stuck there rather than moving it to the cutlery dip in the wash basin? Now that is an extraordinarily minor detail; take that as an indication of just how motivating it is to suspect that you don't look closely enough at the details of your life to know whether you're in a shoddy simulation.

Better Rationality Through Lucid Dreaming

That doesn't answer the question? I'm pretty sure a honed attentiveness to the consistency of text wouldn't raise my overall sanity waterline.

0TheOtherDave8yThat's an excellent point. I must admit, the whole premise that noticing reality-check-violations in my dream-scenarios has some relation to "my overall sanity waterline" (whatever that is) completely fails to resonate with me, so in retrospect it seems I just collapsed the criterion to noticing reality-check-violations in dream scenarios more generally... thereby, as you observe, failing to answer the question. Oops! Thanks for pointing that out.
The Strangest Thing An AI Could Tell You

I tell everyone this all the time. Thank you, AGI, maybe now they'll believe me.

The Strangest Thing An AI Could Tell You

Lesswrong's threads have defeated Death.

How to Become a 1000 Year Old Vampire

Howdy FourFire. At some point after conceiving of a particularly lofty, particularly involving plot [details available on request for LWers], I stopped trying to befriend people who wouldn't feature anywhere in it. Whoever I'm with, there's always an objective, though I'll often have to pretend there isn't and come at it sideways, which only makes it more fun.

For me there are two kinds of people, people I can do something with, and people I've got nothing to do with.

The curse of identity

OK, that's got to be a bug...

Rationality Quotes October 2013

I've heard German is bad too. Probably in the very same philosophy of logic class where I heard the name Wittgenstein and was told about his work, all of which I have completely failed to retain any memory of.

The curse of identity

Dangit, I wish I knew who this was. I hope their dissociation isn't a sign of evaporative cooling in action.

2satt8yFortunately the title of the page [http://lesswrong.com/lw/8gv/the_curse_of_identity/5abb] gives it away: it's srdiamond, who I believe still posts occasionally as common_law [http://lesswrong.com/user/common_law/overview/].
How to Become a 1000 Year Old Vampire

Is Noticing Boredom a recognized mental skill? Because it should be.

Very much agreed. When I started taking online courses I was surprised at how speeding up the video helped my learning. What was happening before, and what still happens when I'm watching slow, informationally dilute speeches, is that my mind can't sync up with the presentation and wanders off on its own way so frequently that I simply can't stop it from happening. I also never used to realize how hanging around with crowds who weren't curious and weren't agenty in the same way I was sucked the life out of me. I thought I was just an inattentive, generally disengaged person. I was dead wrong.

1FourFire8yI feel like I am an inattentive, disengaged person, and nonagenty people do suck the life out of me. What changed in your case which made you see things differently?
The best 15 words

That's not helpful. Say I've got an audience who wouldn't like me if they knew me as my inner circle does, who definitely wouldn't be convinced if I wrote as though I were writing for my own. What would Zinsser do? Give up? Write something else? I know that communicating effectively when you don't personally feel what you're saying tends to fail; well yes, it's hard, but that's precisely what I've got to do!

Rationality Quotes October 2013

Since English isn't Sound and like 90% of English words simply don't have real definitions, I'm not sure I want to tangle with this guy's work. It's either going to be tenuous logic with an exploration in equivocation, or a baffling/impressive display of linguistics. Which was it?

5mwengler8yWell he did write it in German.
4somervta8yPhilosophical Investigations is closer to the latter. (There's a big difference between Late and Early Wittgenstein - basically two completely different authors)
You're Calling *Who* A Cult Leader?

if you're trying to not look like a cult, then you're doing it wrong

I disagree. I think it's so easy for a community with widespread, genuine conviction as to their shared radical beliefs to look like a cult, that, well, anyone willing to go through the rather extreme rigors of preventing anyone from seeing them as cult-like... methinks they protest too much. I say we are, though far from being a cult, cultlike. We are weird, and passionate, and that's all it takes.

Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98

Assuming that Harry's Dark Side is integral to a significant proportion of plays (assuming rather than noting because my memory is patchy and I don't remember if it was like this or if the dark side was more a background character than an oft-employed tool), perhaps we could infer from this that EY considers it to be a natural state of mind that also happens to flourish rarely enough that no character Harry will ever meet is likely to be able to correct his misperception of it. I'd then assume EY must have visited it himself to write it.

0alex_zag_al8yI wouldn't be surprised if it was a magical dark side. But I'd be shocked if it was a magical dark side that could think better than a well trained adult. Now that I think about it, he definitely has a source of insight that doesn't obey natural ten-year-old cognitive constraints.
Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98

I was referring more to that shadowy part of his mind that knows just what to look for. A source of insight that doesn't obey natural human cognitive constraints.

1alex_zag_al8yThis would be explicitly against Yudkowsky's stated goals for the story: all he has that we don't is more facts. (Which is often a hindrance; it was easier for us to figure out Lucius's blood debt, because we had less "memory" to search through.) If he could also exceed natural human cognitive constraints, this wouldn't be rationalist fiction. (source: http://hpmor.com/info/ [http://hpmor.com/info/])
I attempted the AI Box Experiment again! (And won - Twice!)

I sincerely hope that happens. I don't care whether I'm involved, but there must be a group of apt judges who're able to look over the entirety of these results, discuss them, and speak for them.

I attempted the AI Box Experiment again! (And won - Twice!)

Who is going to read it? Hopefully Eliezer, at least?

I will let Eliezer see my log if he lets me read his!

Yet more "stupid" questions

A gaydar doesn't have to depend on how gay a person looks superficially. There are plenty of other cues.

0polymathwannabe8yTrue, I should have used more general wording than "looks gay;" it would only be one component of the gaydar criteria. The problem is finding how to state it in not-loaded language. It would be impractical to use "matches stereotypically effeminate behavior."
Yet more "stupid" questions

I'll agree with that from a different angle. Due to the map≠territory lemma, we never have to accept absolute inability to meet our goals. When faced with seemingly inescapable all-dimensional doom, there is no value at all in resigning oneself to it; the only value left in the universe is in that little vanishingly not-going-to-happen-unlikely possible world where, for example, the heat death can be prevented or escaped. Sure, what we know of thermodynamics tells us it can't, well, I'm going to assume that there's a loophole in our thermodynamic laws that... (read more)

Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98

I think he's never going to do that here. He did that in TWC because if we were able to come up with the winning strategy when pressed, that would indicate that one of the crew members in the story definitely would have, too, proving it would have been unreasonable to write an ending where they did not. In this case our ability to solve the puzzle doesn't really say anything about the plausibility of the work's characters' solving it. Our success would not necessitate theirs, as we're more populous, experienced, and have access to a huge written record. Nor would our failure necessitate theirs, as Harry has magical insights. The groups' capacities say little about each other.

0alex_zag_al8yre: magical insights, yeah - we could have theorized about how potions worked, but we could not test those theories the way Harry did. Since Harry has experiments we don't have access to, he has magical insights we don't have access to
Yet more "stupid" questions

I'm sorry if my kind ever confused you by saying things like "It is important that I make an impressive display in the lek", what I actually mean is "It is likely my intrinsic goals would be well met if I made an impressive display in the lek". There is an omitted variable in the original phrasing. Its importance isn't just a function of our situation, it's a function of the situation and of me, and of my value system.

So I think the real difference between nihilists and non-nihilists as we may call them, is that non-nihilists [think th... (read more)

Yet more "stupid" questions

So his opinions kind of did change over that time period, but only from "I reject these words" to "alright, if you insist, I'll try to salvage these words". I'm not sure which policy's best. The second risks arguments with people who don't know your definitions. They will pass through two phases, the first is where the two of you legitimately think you're talking about the same thing but the other is a total idiot who doesn't know what it's like. The second phase is perhaps justifiable umbrage on their discovering that you are using a d... (read more)

Yet more "stupid" questions

I find the strangely indefinite way humans name things interesting, but I try to have a safe amount of disinterest in the actual denotations of the names themselves, especially the ones which seem to throw off paradoxes in every direction when you put your weight on them. Whatever they are, they weren't built to be thought about in any depth.

Thoughts on status signals

Could you expound the evidence exposed by the donning of a suit? I'm having trouble fitting myself into these systems. It'd mean a lot to me to get an explanation from someone who knows what a valid argument looks like.

Lesswrong Philosophy and Personal Identity

My reaction to that is we shouldn't be asking "is it me", but "how much of me does it replicate?" Cause, if we make identity a similarity relation, it will have to bridge enough small differentiations that eventually it will connect us to entities which barely resemble us at all.

However, could you expound how this definition of identity behaves under transitivity and symmetry for us? I'm not sure I've got a good handle on what those constraints would permit.

Lesswrong Philosophy and Personal Identity

I think an important part of the rationalist's plight is attempting to understand the design intents behind these built-in unapologetic old mechanisms for recognizing ourselves in the world, which any self-preservation machine capable of rationality must surely have. But I don't know if we can ever really understand them; they weren't designed to be understood, in fact they seem to be designed to permit being misunderstood to a disturbing degree. I find that often when I think "I" have won, finally achieved some sense of self-comprehension suffi... (read more)

Polyhacking

I thought for a while, and I really can't imagine any cases of works which would be unsuitable for all LWers that aren't worth hanging around and arguing about. I agree. We should be calling these people ignorant and criticising their work, not assigning them a permanent class division, shaking our heads, and going back to our camp.

Humans are utility monsters

I meant the former case; what use are people whose wants don't perfectly align with their utility function? xJ I guess whenever the latter case occurs in my life, that's not really what's happening. The dog thinks it's driving away a threat I don't recognise, when really it's driving away an opportunity it's incapable of recognising. Sometimes it might even be the right thing for them to do, even by my standards, given a lack of information. I still have to manage them like a burdensome dog.

On Juvenile Fiction

Metaphors are [...] incredibly valuable.

Prove it. I really doubt that. I think they're a highly ineffective teaching device relative to clean demonstrative thought-experiment parables. Analogies might be useful as scaffolding or a spec for learners to build to, but metaphors take it to a level of obfuscation that makes successful integration of the underlying principles of any given metaphorical package unlikely to ever occur.

Humans are utility monsters

In more personal terms, if you fit your utility function to your friends and decide what is best for them based on that, rather than leaving them to their own alien utility functions and helping them to get what they really want rather than what you think they should want, you are not a good friend. I say this because if the function you're pushing prohibits me from fulfilling my goals, I will avoid the fuck out of you. I will lie about my intentions. I will not trust you. It doesn't matter if your heart's in the right place.

1metastable8yThe definition of want here is ambiguous, and that makes this a little hard to parse. How are you defining "want" with respect to "utility function"? Do you mean to make them equivalent? If by "want" you mean desire in accord with their appropriately calibrated utility functions, then, well, sure. A friend is selfish by any common understanding if he doesn't care about his buddies' needs. But it seems like you might be saying that he's a bad friend for not helping his friends get what they want regardless of what he thinks they need. While this is one view of friendship, it is not nearly as common, and I can make a strong case against it. Such a view would require that you help addicts continue to use, that you help self-destructive people harm themselves, that you never argue with a friend over a toxic relationship you can see, and that you never really try to convince a friend to try anything he or she doesn't think he or she will like. Sadly, this happens. If you're saying you think it should happen more, okay. But I would consider a friend pretty poor if he or she weren't willing to risk a little alienation because of genuine concern.
The Strangest Thing An AI Could Tell You

It is possible to perceive this, but most people who do just end up labeled as nuts

ONE - DOES NOT EXIST, EXCEPT IN DEATH STATE. ONE IS A DEMONIC RELIGIOUS LIE.

Only your comprehending the Divinity of Cubic Creation will your soul be saved from your created hell on Earth - induced by your ignoring the existing 4 corner harmonic simultaneous 4 Days rotating in a single cycle of the Earth sphere.

T I M E C U B E

Bayesian Judo

He probably didn't see it as an argument proper, but a long misunderstanding. Most people aren't mentally equipped to make high fidelity translations between qualia and words in either direction [superficially, they are Not Articulate. More key, they might be Not Articulable]; when you dismantle their words, it doesn't mean much to them, because you haven't touched their true thoughts or anything that represents them.

Open thread, July 29-August 4, 2013

Oh, hey. Is this the lecture hall for Utopic Fascism Deprogramming 101? Cool, d'you mind if I sit next to you? I'm really excited about this class. We might have to drop it though, I hear that the lecturer might not even be planning on showing up.

"Ray Kurzweil and Uploading: Just Say No!", Nick Agar

Well, OK, What if we change our pitch from "approximate mind simulation" to "approximate identity-focal body simulation"?

-4Laoch8yA simulation of X is not X.
Open thread, July 29-August 4, 2013

but for some reason explicit discussion and debate is discouraged

The reason is an assumption that if we discuss those topics, rationality will leave the building. Since rationality is what we're here for, we must not discuss those topics. Maybe one day we'll be ready to discuss those topics, but I don't think we are at this point.

-2David_Gerard8yThis doesn't make the approval by silence a good thing.