All of loqi's Comments + Replies

I'd be interested in reading more about your top ten cool possibilities. They sound cool.

My takeaway: Sometimes people don't behave in aggregate the way we think they should. By replacing their money with money×k and convincing them it's still just money, we can manipulate their behavior by jiggling k.

And it apparently goes without saying that the coupon-issuer has a good way to distinguish "legitimate" reasons to cut back on going out. E.g., flu outbreak, new compelling indoor family activity, all the other stuff no one's even thought of yet, etc.

The Keynesian "key to enlightenment" is that we can cram a knob onto the economy and jack with it?

...as long as you don't mind listening to Sagan drone out "millions, and billions, and millions" for millions, and billions, and millions... basically number-novocaine delivered verbally.

And surely aliens are everywhere, we just haven't noticed them yet.

I tried watching Cosmos about a year ago, and quickly stopped. Is there a case to be made that it's worth soldiering through the awfulness?

Please quote me where I accused you of having faith that you're more reliable than those people.

Right here:

Thanks!

I also won't engage with people who refuse to answer reasonable questions to let me understand their position.

Thanks!

-5 brazil84 11y

Please quote me where I accused you of having faith that you're more reliable than those people.

0 brazil84 11y
Right here: http://lesswrong.com/lw/84j/amanda_knox_post_mortem/52b8

By the way, I have my own rules of debate. One rule is that I will not engage with people who "strawman" me, i.e. misrepresent my position. I also won't engage with people who refuse to answer reasonable questions to let me understand their position. So I will try one last time:

Please quote me where I made that assertion.

Sure, getting this kind of feedback is a good way to improve one's judgment. Do you seriously disagree?

Your choice.

Do you agree that the tone of your post is a bit nasty?

Yes. It's a combination of having little respect for the feelings of typically-wrong pseudonymous internet posters as well as faith in my own ability to look at incomplete justifications for sloppy reasoning and draw snarky conclusions.

1 brazil84 11y
Ok, and again my questions:

Please quote me where I made that assertion.

Sure, getting this kind of feedback is a good way to improve one's judgment. Do you seriously disagree?

So, to summarize why you didn't update:

  • You didn't know the names of the people commenting.
  • You have faith that you're more reliable than those people.
  • You would lose your job if you weren't so great at seeing through bullshit.
  • You have often failed to see through bullshit.

Boy was Upton Sinclair ever right.

2 brazil84 11y
I'm not sure that's the way to put it, but let me ask you this: How much stock do you put in the unsupported assertion of an anonymous person on the internet?

Please quote me where I made that assertion.

Well I need to be decent at a minimum. But basically yeah. I assess cases day in and day out. That's a huge advantage. I know that I'm much better than I was 15 years ago, even though I was just as smart then as I am now.

Sure, getting this kind of feedback is a good way to improve one's judgment. Do you seriously disagree?

:shrug: I agree, but employment is sadly not the only motivator for self-deception. Let me ask you this: Do you agree that the tone of your post is a bit nasty?

My inner Hanson asks me

So you've got a case of the Inner Hanson, eh? My estimation of your psychological fortitude is hereby incremented.

0 zslastman 9y
You have one as well? Thank god I'm not alone. Still waiting for a matching winged figure to pop up on my other shoulder. Grimly suspicious that Inner Hanson may have killed him.

And the best part is, my signalling there is cheap yet credible - the most delicious kind.

Good point - there is some ordering information leaked. This is consistent with identical likelihoods for both setups: learning which permutation of arguments we're feeding into a commutative operator (multiplication of likelihood ratios) doesn't tell us anything about its result.
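The commutativity point is easy to check numerically. A minimal Python sketch (the likelihood ratios below are hypothetical numbers, not from the original discussion): every permutation of the evidence yields the same posterior odds.

```python
from itertools import permutations

# Hypothetical likelihood ratios contributed by three pieces of evidence.
ratios = [2.0, 0.5, 8.0]
prior_odds = 1.0  # even prior odds

posteriors = set()
for perm in permutations(ratios):
    odds = prior_odds
    for r in perm:
        odds *= r  # odds-form Bayesian update: multiply by the likelihood ratio
    posteriors.add(odds)

# Multiplication is commutative, so argument order leaks nothing about the result.
print(posteriors)  # → {8.0}
```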

If you don't mind sharing, how do you plan to do this? Is it as simple as "this controlled substance makes my life better, will you prescribe it for me?" Or are you "fortunate" enough to have a condition that warrants its prescription?

I ask because I've had similar experiences with Modafinil (my nickname for it is "executive lubricant"), and it is terribly frustrating to be stuck without a banned goods store.

Thanks for following up on Almond. Your statements align well with my intuition, but I admit heavy confusion on the topic.

Thanks, that's a concise and satisfying reply. I look forward to seeing where you take this.

And what, if I may ask, are your plans for your grandmother?

3 [anonymous] 11y
It's gonna be Lazarus Long all over again -_-;

All I see here is Tegmark re-hashed and some assertions concerning the proper definitions of words like "real" and "existence". Taboo those, are you still saying anything?

Have you read any of Paul Almond's thoughts on the subject? Your position might be more understandable if contrasted with his.

1 ec429 11y
To Minds, Substrate, Measure and Value Part 2: Extra Information About Substrate Dependence [http://www.paul-almond.com/Substrate2.htm]

I make his Objection 9 and am not satisfied with his answer to it. I believe there is a directed graph (possibly cyclic) of mathematical structures containing simulations of other mathematical structures (where the causal relation proceeds from the simulated to the simulator), and I suspect that if we treat this graph as a Markov chain and find its invariant distribution, that this might then give us a statistical measure of the probability of being in each structure, without having to have a concept of a physical substrate which all other substrates eventually reduce to.

However, I'm not sure that any of this is essential to my OP claims; the measure I assign to structures for purposes of forecasting the future is a property of my map, not of the territory, and there needn't be a territorial measure of 'realness' attached to each structure, any more than there need be a boolean property of 'realness' attached to each structure.

I note, though, that, being unable to explain why I find myself in an Everett branch in which experiments have confirmed the Born rule (even though in many worlds (without mangling) there should be a 'me' in a branch in which experiments have consistently confirmed the Equal Probabilities rule), I clearly do not have an intuitive grasp of probabilities in a possible-worlds or modal-realistic universe, so I may well be barking up the wrong giraffe.

EDIT: In part 3, Almond characterises the Strong AI Hypothesis thus: I characterise my own position on minds thus: This is because the idea of a 'physical system' is an attachment to physical realism which I reject in the OP.
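The invariant-distribution idea ec429 floats can be sketched concretely. Assuming a small toy graph with made-up transition weights (nothing here is from Almond's papers), power iteration recovers the stationary distribution of the chain:

```python
# Toy 3-structure "simulation graph": P[i][j] is a hypothetical probability of
# stepping from structure i to structure j. Each row sums to 1, so P is a
# Markov chain transition matrix.
P = [
    [0.0, 0.5, 0.5],
    [0.3, 0.0, 0.7],
    [0.5, 0.5, 0.0],
]

def stationary(P, iters=1000):
    """Approximate the invariant distribution by repeatedly applying P."""
    n = len(P)
    dist = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

pi = stationary(P)
# pi now satisfies pi ≈ pi·P: a candidate statistical "measure" over structures.
print([round(x, 3) for x in pi])
```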
2 ec429 11y
I'm saying that our intuitive concepts of "real" and "existence" have no referents, that Tegmark's restriction to computable structures is unnecessary, that nesting (ie. simulation) of worlds is an explicit causal dependence, and that Platonism needn't be as silly and naïve as it sounds. Also to the extent that I am rehashing Tegmark, I'm doing so in order to combine it with Syntacticism and several other prerequisites in order to build a framework that lets me talk about "the existence of infinite sets", because I think Eliezer's 'infinite set atheism' is a confusion. I'll read "Minds, Substrate, Measure and Value" (which seems relevant) and then get back to you on that one, ok?

Intuition is extremely powerful when correctly trained. Just because you want to have powerful intuitions about something doesn't mean it's possible to correctly train them.

If you can't think intuitively, you may be able to verify specific factual claims, but you certainly can't think about history.

Well, maybe we can't think about history. Intuition is unreliable. Just because you want to think intelligently about something doesn't mean it's possible to do so.

Jewish Atheist, in reply to Mencius Moldbug

6 CuSithBell 12y
I would think this an irrationality quote? "Fuzzy" thinking skills are ridiculously important. "Intuition" may be somewhat unreliable, but in certain domains and under certain conditions, it can be - verifiably - a very powerful method.

Ceteris paribus, I would prefer not to be sad when my friends are sad. But this is incompatible with empathy - I use my sadness to model theirs. I can't imagine "loving" someone while trying not to understand them.

The assumption that we can better determine toxicity with our current understanding of human biology than thousands of years of natural selection seems questionable, but peanuts are certainly a good lower bound on selection's ability.

I also don't have much confidence that the parties responsible for safety testing are particularly reliable, but that's a loose belief.

5 DanielLC 12y
Natural selection wasn't attempting to make it harmless to humans. Especially in plants that didn't evolve near humans.

That's technically true, but in practice the results of selective breeding have undergone "staged deployment" - populations/farmers with harmful variants would have been selected against. Modern GMO can reach a global population much more quickly, so harmful variants have the potential to cause more widespread harm.

Less selected for human non-toxicity?

0 DanielLC 12y
More, actually. I'm not sure what they go through before selling GMO food for human consumption, but I'm pretty certain peanuts wouldn't have passed the test.

Vitamin D is really important. There is an established causal link between vitamin D and immune function. It doesn't just enhance your immune response - it's a prerequisite for an immune response.

Anecdote: Prior to vitamin D supplementation, I caught something like 4 colds per year on average. I'm pretty sure I never did better than 2. I started taking daily D supplements about a year and a half ago, and caught my first cold a few days ago. It's worth taking purely as a preventative cold medicine.

"I" is how feeling stuff from the inside feels from the inside.

Agreed. I don't see significant fungibility here.

Enjoying life and securing the future are not mutually exclusive.

2 Document 12y
Optimizing enjoyment of life or security of the future superficially is, if resources are finite and fungible between the two goals.
0 [anonymous] 12y
Downvoted for being simple disagreement.

I hereby extend my praise for:

  • Your right action.
  • Its contextual awesomeness.
  • Setting up a utility gradient that basically forces me to reply to your comment, itself a novel experience.

I really like this breakdown. I do think the first item can be generalized:

usually automatically activated bias has a feeling attached to it

since positive-affect feelings like righteousness are also useful hooks.

2 MrMind 12y
You're right, they don't even need to be strong emotions: like in the case of positive-affect induced biases building incrementally over time, as in the affective death spirals.

Googling schizophrenia+creativity leads me to suspect that it's more than a cultural expectation. Though I should disclaim the likely bias induced by my personal experience with several creative schizophrenics.

0 Kevin 12y
It's also something like people with recessive genes for mental illness get some of the benefits (increased creativity) without the debilitation. I have a family history of mental illness but am not mentally ill, and I definitely recognize benefits from whatever it is about me that isn't neurotypical.
1 Desrtopa 12y
Well, given that schizophrenics suffer from hallucinations and delusions, they're probably going to appear compulsively creative simply as a consequence of sharing their reality with other people. That doesn't necessarily mean that their creative works are going to be any good. Witness the website of my schizophrenic former lab partner [http://www.theuniversealive.org/].
0 CuSithBell 12y
This connection is one I find much more compelling.

I'd actually be a bit surprised if this were true. My guess is that intelligent madmen are more interesting, so we just pay more attention to them. Now I'm tempted to go looking for statistics.

Not doubting the correlation between madness and mathematics, though.

2 CuSithBell 12y
To expand, it may be that intelligent madmen are the ones who accomplish enough to get famous. Well, also artistic madmen - and we also have a cultural expectation that artists are crazy! I'd definitely be interested in more information here.

What constitutes a "choice" in this context is pretty subjective. It may be less confusing to tell someone they could have a choice instead of asserting that they do have a choice. The latter connotes a conscious decision gone awry, and in doing so contradicts the subject's experience that no decision-making was involved.

2 wilkox 12y
Good point. Reading my comment again, it seems obvious that I committed the typical mind fallacy [http://wiki.lesswrong.com/wiki/Typical_mind_fallacy] in assuming that it really is a choice for most people.

Downvoted for spending more words explaining your non-response than it would have taken to just give Nesov the benefit of the doubt and be explicit.

Everyone is capable of misunderstanding trivial things, so the notion "should not need to explain" looks suspicious to me (specifically, it looks like posturing rather than honest communication). Can you explain it, or does it self-apply?

-3 wedrifid 12y
More to the point - far more words than saying absolutely nothing. Almost always the best way to keep free of other people's games.

This is a distinction without a difference. If H bombs D, H has lost

This assumption determines (or at least greatly alters) the debate, and you need to make a better case for it. If H really "loses" by bombing D (meaning H considers this outcome less preferable than proliferation), then H's threat is not credible, and the strategy breaks down, no exotic decision theory necessary. Looks like a crucial difference to me.

That depends on who precommits "first". [...]

This entire paragraph depends on the above assumption. If I grant you…

But then, Wei Dai's posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens.

Using the word "intemperate" in this way is a remarkable dodge. Wei Dai's comment was entirely within the scope of the (admittedly extreme) hypothetical under discussion. Your comment contained a paragraph composed solely of vile personal insult and slanted misrepresentation of Wei Dai's statements. The tone of my response was deliberate and quite restrained relative to how I felt.

This may be partly th…
3 Richard_Kennaway 12y
It is a gesture concluding a constructive point.

This is a distinction without a difference. If H bombs D, H has lost (and D has lost more).

That depends on who precommits "first".

That's a problematic concept for rational actors who have plenty of time to model each others' possible strategies in advance of taking action. If H, without even being informed of it by D, considers this possible precommitment strategy of D, is it still rational for H to persist and threaten D anyway? Or perhaps H can precommit to ignoring such a precommitment by D? Or should D already have anticipated H's original threat and backed down in advance of the threat ever having been made? I am reminded of the Forbidden Topic. Counterfactual blackmail isn't just for superintelligences. As I asked before, does the decision theory exist yet to handle self-modifying agents modelling themselves and others, demonstrating how real actions can arise from this seething mass of virtual possibilities?

Then also, in what you dismiss as "messy real-world noise", there may be a lot of other things D might do, such as fomenting insurrection in H, or sharing their research with every other country besides H (and blaming foreign spies), or assassinating H's leader, or doing any and all of these while overtly appearing to back down. The moment H makes that threat, the whole world is H's enemy. H has declared a war that it hopes to win by the mere possession of overwhelming force.

I look around at the world since WWII and fail to see this horror. I look at Wei Dai's strategy and see the horror. loqi [http://lesswrong.com/lw/14a/thomas_c_schellings_strategy_of_conflict/40rg] remarked about Everett branches, but imagining the measure of the wave function where the Cold War ended with nuclear conflagration fails to convince me of anything.

Ack, you're entirely right. "Mark" is somewhat ambiguous to me without context, I think I had imbued it with some measure of goalness from the GP's use.

I have a bad habit of uncritically imitating peoples' word choices within the scope of a conversation. In this case, it bit me by echoing the GP's is-ought confusion... yikes!

Cat overpopulation is an actual problem, gobs of cats are put down by the Humane Society every day. I don't know what they do with their dead cats, but I find wasting perfectly usable meat and tissue more offensive than the proposed barbecue.

FWIW, I am both a cat owner and a vegetarian.

0 Desrtopa 12y
I was not under the impression that cats tasted good.
8 TobyBartels 12y
That's a good point. However, the danger with a cat BBQ is that people develop a taste for them and, rather than eating the leftovers from the Humane Society, breed their own for good flavour. In fact, I pretty much guarantee that, should Barbecue-a-Cat-Day ever catch on (and be celebrated in earnest), then this will indeed happen.

I wonder if more or fewer people would adopt cats if the cats would otherwise be barbecued.

0 Pavitra 12y
That is a ridiculously sensible proposal, and I feel silly for not having thought of it myself.

I was commenting on what he said, not guessing at his beliefs.

I don't think you've made a good case (any case) for your assertion concerning who is and is not to be included in our race. And it's not at all obvious to me that Wei Dai is wrong. I do hope that my lack of conviction on this point doesn't render me unfit for existence.

Anyone willing to deploy a nuclear weapon has a "bland willingness to slaughter". Anyone employing MAD has a "bland willingness to destroy the entire human race".

I suspect that you have no compelling proof tha…

2 shokwave 12y
What I was saying was that "horrendous act" is not the same as "comment advising horrendous act in hypothetical situation". You conflated the two in paraphrasing RichardKennaway's comment as "comment advising horrendous act in hypothetical situation unfits you for inclusion in the human race" when what he was saying was "horrendous act unfits you for inclusion in the human race".

So says the man from his comfy perch in an Everett branch that survived the cold war.

What I'm really getting at here is that [a comment you made on LW] unfits you for inclusion in the human race.

Downvoted for being one of the most awful statements I have ever seen on this site, far and away the most awful to receive so many upvotes. What the fuck, people.

-5 Richard_Kennaway 12y
9 shokwave 12y
I doubt RichardKennaway believes Wei_Dai is unfit for inclusion in the human race. What he was saying, and what he received upvotes for, is that anyone who's blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race - and he's right, that sort of person should not be fit for inclusion in the human race. A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.

When you say that pain is "fundamentally different" than discomfort, do you mean to imply that it's a strictly more important consideration? If so, your theory is similar to Asimov's One Law of Robotics, and you should stop wasting your time thinking about "discomfort", since it's infinitely less important than pain.

Stratified utility functions don't work.

Isn't the true mark of rationality the ability to reach a correct conclusion even if you don't like the answer?

Winning is a truer mark of rationality.

6 katydee 12y
I think it's more apt to characterize winning as a goal of rationality, not as its mark. In Bayesian terms, while those applying the methods of rationality should win more than the general population on average-- p(winning|rationalist) > p(winning|non-rationalist)-- the number of rationalists in the population is low enough at present that p(non-rationalist|winning) almost certainly > p(rationalist|winning), so observing whether or not someone is winning is not very good evidence as to their rationality.
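katydee's base-rate point can be made concrete with Bayes' theorem. All the numbers below are hypothetical, chosen only to show how a small prior swamps a sizeable likelihood ratio:

```python
# Hypothetical numbers: rationalists are rare but win twice as often.
p_rationalist = 0.001
p_win_given_r = 0.6
p_win_given_not_r = 0.3

# Total probability of observing a win, then Bayes' theorem.
p_win = p_win_given_r * p_rationalist + p_win_given_not_r * (1 - p_rationalist)
p_r_given_win = p_win_given_r * p_rationalist / p_win

# Seeing someone win shifts the odds, but the posterior stays tiny.
print(round(p_r_given_win, 3))  # → 0.002
```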
5 NancyLebovitz 12y
I wonder about the time scale for winning. After all, a poker player using an optimal strategy can still expect extended periods of losing, and poker is better defined than a lot of life situations.

Your point seems to be roughly that "highly conjunctive arguments are disproportionately convincing". I hate to pick on what may just be a minor language issue, but I really grind to a halt trying to unify this with the phrase "convincing arguments aren't necessarily correct". I don't see much difference between it and "beliefs aren't necessarily correct". The latter is true, but I'm still going to act as if my beliefs are correct. The former is true, but I'm still going to be convinced by the arguments I find most convincing…

Indeed. For me, cryptographic hashing is the most salient example of this. Software like git builds entire castles on the probabilistic certainty that SHA-1 hash collisions never happen.
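For the curious, git's blob naming really is just a SHA-1 over a short header plus the file content; everything built on top assumes that two different blobs never share a name. A minimal sketch of that naming scheme:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # Git hashes "blob <size>\0" followed by the raw content.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `git hash-object` on the same input.
print(git_blob_hash(b"hello world\n"))  # → 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```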

Hume's (and others') point is that we cannot be wrong about things like, "I am seeing blue right now." If you doubt things like that, you must apply at least that same level of doubt to everything else, such as whether you are really reading a LessWrong comment instead of being chased by hungry sharks right now.

Utterly ridiculous comparison. Ever looked at the stars?

Roughly speaking you are often best off choosing what the rational course of action is and then picking the opposite.

I consider this a symptom of poor scenario design - the availability of unpredictably optimal actions is the key technical difference (there are of course social differences) between open-ended and computer-mediated games. If the setting is incompatible with the characters' motivations, it's impossible to maintain the fiction that they're even really trying, and either the setting's incentives or the characters' motivations (or both in ta…

Agreed, thanks for bringing this up - I threw away what I had on the subject because I was having trouble expressing it clearly. Strangely, Egan occasionally depicts civilizations rendered inaccessible by sheer difference of computing speed, so he's clearly aware of how much room is available at the bottom.

Speaking as someone whose introduction to transhumanist ideas was the mind-altering idea shotgun titled Permutation City, I've been pretty disappointed with his take on AI and the existential risks crowd.

A recurring theme in Egan's fiction is that "all minds face the same fundamental computing bottlenecks", serving to establish the non-existence of large-scale intrinsic cognitive disparities. I always figured this was the sort of assumption that was introduced for the sake of telling a certain class of story - the kind that need only be plausib…

A recurring theme in Egan's fiction is that "all minds face the same fundamental computing bottlenecks", serving to establish the non-existence of large-scale intrinsic cognitive disparities.

This still allows for AIs to be millions of times faster than humans, undergo rapid population explosion and reduced training/experimentation times through digital copying, be superhumanly coordinated, bring up the average ability in each field to peak levels (as seen in any existing animal or machine, with obvious flaws repaired), etc. We know that huma…

It's funny you say that, I once figured out a problem for someone by diagnosing an error message with C++ templates. Wizardry! However, the "base" of the error message looked roughly like

error: unknown type "boost::python::specify_a_return_value_policy_to_wrap_functions_returning<Foo>"

Cryptic, right? It turns out he needed to specify a return value policy in order to wrap a function returning Foo. All I did for him was scan past junk visually looking for anything readable or the word "error".

1 TeMPOraL 10y
That's the general algorithm of reading STL error messages. I still can't get why people look at you as if you were a wizard, if all that you need to do is to quickly filter out irrelevant 90% of the message. Simple pattern matching exercise.
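The "filter out the irrelevant 90%" step TeMPOraL describes can literally be a one-liner. A rough Python sketch over a made-up compiler dump (the surrounding lines are invented; only the error line echoes the one quoted above):

```python
# Hypothetical compiler output; real template errors run to hundreds of lines.
raw = """\
In file included from /usr/include/boost/python.hpp:11:
/usr/include/boost/python/def.hpp:91:5: note: candidate template ignored
widget.cpp:42:10: error: unknown type "boost::python::specify_a_return_value_policy_to_wrap_functions_returning<Foo>"
/usr/include/boost/python/detail/caller.hpp:55:9: note: in instantiation
"""

# Keep only the lines a human would scan for anyway.
errors = [line for line in raw.splitlines() if "error" in line]
for line in errors:
    print(line)
```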

My intuition is mostly the opposite, specifically that "bad with computers" people often treat applications like some gigantic, arbitrary natural system with lots of rules to memorize, instead of artifacts created by people who are often trying to communicate function and purpose through every orifice in the interface.

It only makes sense to ask what the words in the menus actually mean if you assume they are the product of some person who is using them as a communication channel.

0 mstevens 12y
It's perhaps more like maths. There's an element of human communication, and an element of underlying truths.

I think there's a problem in education. I've learnt computers through Computer Science based education, so I don't have personal experience of this, but I'm told that computing education for non-specialists is very much focused on learning by rote, "these are the exact steps to do X", with no attempt to understand the system in general. Thus, when people have any problem outside the very specific examples they've learnt, they can't cope.

The next question is, obviously, why is computing education structured like this? My theories:

  • A lot of education works like this. We generally believe far too much in rote learning. Rote learning is probably more suited to situations that don't change too much, but is deployed in computing where the details you might rote learn are likely to change drastically in a relatively small number of years.
  • People don't like thinking about computing. They want to do the minimum necessary to accomplish their non-computing task. However they make a falsely small estimate of the amount of computing knowledge required for this, and actually end up making their task more difficult.

I suspect that when examined closely enough, your motivations are also likely to be hard to understand from your point of view.

In fact, non-lucid dreams feel extremely real, because I try to change what's happening the way I would in a lucid dream, and nothing changes - convincing me that it's real.

This has been my experience. And on several occasions I've become highly suspicious that I was dreaming, but unable to wake myself. The pinch is a lie, but it still hurts in the dream.
