All of Davorak's Comments + Replies

Donated.

I would recommend making the donate link larger; currently it is one of the smaller links on the page and is harder to notice. "Donate" or "Donate here" in the link text would also make it more noticeable.* Putting a donate link at the top of the fundraising page, http://lesswrong.com/lw/lfg/cfar_in_2014_continuing_to_climb_out_of_the/ would also make it more noticeable and more likely to capture visitors and therefore donations.

  • These things are so common I look for them by default. Some might argue that putting the link at the top or ma
... (read more)

If I remember correctly, the second quote was edited to be something along the lines of "will_newsome is awesome."

0adamisom10y
That is cute... no? More childish than evil. He should just be warned that it's trolling. There really should be a comment edit history feature. Maybe it only activates once a comment reaches +10 karma.
0[anonymous]10y
It was edited to add something like "Will Newsome is such a badass" -- Socrates

Interesting. I will be more likely to reply to messages that I feel end the conversation, like your last one on this post:

It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don't know exactly how much.

maybe 12-24 hours later, just in case the likelihood of updating has been reduced by one or both parties having had a late-night conversation or other mind-altering effects.

0Armok_GoB11y
Good idea, please do that.

Even speculating that your evidence is a written work that has driven multiple people to suicide, and further that the written work was targeted at an individual and happened to kill other susceptible people who happened to read it, I would still rate 2% as overconfident.

Specifically, the claim of universality, that "any person" can be killed by reading a short email, is overconfident. Two of your claims seem to contradict each other: the claim of "any one" and the claim of "with a few clicks"; this suggests that special or in-depth knowledge of the... (read more)

1Armok_GoB11y
I don't remember this post. Weird. I've updated on it though; my evidence is indeed even weaker than that, and you are absolutely correct on every point. I've updated to the point where my own estimate and my estimation of the community's estimate are indistinguishable.
1Armok_GoB11y
It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don't know exactly how much.

From my layman's perspective it looks professional and very clean; great job.

I do not know if Omega can say that truthfully, because I do not know whether the self-referential equation representing the problem has a solution.

The problem set out by the OP assumes there is a solution and a particular answer, but without writing out the equation and plugging in his solution to show that the solution actually works.

0cousin_it11y
There is a solution because Omega can get an answer by simulating TDT, or am I missing something?

Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. "Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT.

There seems to be a contradiction here. If Omega said this to me, I would either have to believe Omega had just presented evidence of being untruthful some of the time.

If Omega simulated the problem at hand, then in said simulation Omega must have said: "Before you entered the room, I ran a simulation of this problem as presented to ... (read more)

5drnickbone11y
Your difficulty seems to be with the parenthesis "(who experience has shown is always truthful)". The relevant experience here is going to be derived from real-world subjects who have been in Omega problems, exactly as is assumed for the standard Newcomb problem. It's not obvious that Omega always tells the truth to its simulations; no-one in the outside world has experience of that. However you can construe the problem so that Omega doesn't have to lie, even to sims. Omega could always prefix its description of the problem with a little disclaimer "You may be one of my simulations. But if not, then...". Or Omega could simulate a TDT agent making decisions as if it had just been given the problem description verbally by Omega, without Omega actually doing so. (Whether that's possible or not depends a bit on the simulation).
1cousin_it11y
Omega could truthfully say "the contents of the boxes are exactly as if I'd presented this problem to an agent running TDT".

I tried entering "Check weather tomorrow" into Toodledoo and it did not automatically set a due date of tomorrow.

I spent ~2 minutes and found out how to turn on keyboard shortcuts but did not find the page explaining them; it took under a minute for both in RTM. Many of RTM's keyboard shortcuts overlapped with Gmail and/or Unix environments, which made them easy to pick up.

I am sure you can find more complete comparisons elsewhere, and I was not aware of Toodledo until your post, so it is probably not an evenhanded review on my part.

I graduated ~5 years ago with an engineering degree from a first-tier university, and I would have considered those starting salaries to be low to decent, not high. This is especially true in places with a high cost of living like the Bay Area.

Having a good internship during college often meant starting out at 60k/yr if not higher.

If this is significantly different for engineers exiting first-tier universities now, it would be interesting to know.

It is a bad thing if it discourages people you want posting from posting, which could happen if Luke came off as dominant and territorial. I do not think Luke appears dominant and territorial, so this has not registered as a problem to me.

What about:

digital intelligence has certain advantages (e.g. copyability)

No degradation with iterative copying is an advantage digital media is often thought to have over analog media. What I think they are trying to convey is that perfect reproduction is possible and is a large advantage.

edit:spelling

1lukeprog11y
Right. And there is an entire section later on 'advantages from mere digitality'.

Thanks for the overview of a current analytical model of how neurons learn timing, and for answering our random neuroscience questions.

You gave yourself a powerful mind-altering chemical that most people's bodies/minds have grown up with and have built up mental models, skills, and techniques to handle. Your mind, however, did not have half a lifetime to learn how to handle it. That is why:

it probably isn’t very helpful in a technological civilization which requires people to sit at computers all day manipulating symbols. My guess is that women are going to rule in such a world, as high testosterone men become increasingly useless and tend to wind up in prison. It may get to the point wh

... (read more)

I consider it a low probability that I have enough experience/knowledge to generalize my understanding/perceptions to a wide audience with fidelity. If you want to talk about it over the phone or on Skype sometime, I would be happy to oblige. Quick, iterative discussion can do much to shorten inferential distance, and if a common understanding is found easily, it might be worth writing up and posting.

Do you just want to learn to control your sneezes? Or are you interested in the photosensitive effect directly? If the former, I would encourage you to learn a more direct control mechanism rather than using an external trigger like light.

edit: spelling

2spqr0a111y
Primarily I was looking for an exercise in conditioning; any practical benefits are ancillary. If progress continues, I will not sneeze unless a specific trigger is present (staring at a very bright light); so it should be a passive benefit with no long-term upkeep. If you have better ways of controlling sneezing, I am interested in knowing them.

Deception of children for the purpose of challenging them to spot the inconsistency is common practice in my experience. In this case, though, the inferential distance seems like it would be way too large to overcome without additional evidence. The additional evidence is often the parent taking on a different tone of voice and method of reasoning while presenting faked evidence, which makes it hard to tell if the parent is going too far in this example.

If the purpose of this system is what it does, POSIWID, then this tradition of deceiving often trains children ... (read more)

Better memory and processing power would mean that, probabilistically, more businessmen would realize there are good business opportunities where they saw none before, creating more jobs and a more efficient economy, not the same economy more quickly.

ER doctors can now spend more processing power on each patient that comes in. Out of their existing repertoire they would choose better treatments for the problem at hand than they would have otherwise. A better memory means that they would be more likely to remember every step on their checklist when prepping f... (read more)

0loup-vaillant11y
The potential problem with your speculation is that the relative reduction of the mandatory-work / cognitive-power ratio may be a strong incentive to increase individual workload (and maybe massive lay-offs). If we're reasonable, and use our cognitive power wisely, then you're right. But if we go the Hansonian Global Competition route, the Uber Doctor won't spend more time on each patient, but just as much time on more patients. There will be too many doctors, and the worst third will do something else.

It is currently unknown how to apply special relativity (SR) and general relativity (GR) to quantum systems, and it appears likely that they break down at this level. Thus applying SR or GR to black holes or the very beginning of the universe is unlikely to result in a perfectly accurate description of how the universe works.

But I've heard people talk about such situations as if Schroedinger's belief that the cat was alive or dead was important. Especially in connection with the idea that a waveform only truly collapses when an observation is made by a conscious agent.

No. Strong evidence for consciousness being a fundamental part of reality would be a huge deal.

The whole business seems murky and mysterious to me, and I hope for some enlightenment. And if it is not enlightening, it can at least be entertaining.

It is often not so entertaining for the person trying to expla... (read more)

Definitely when:

  • You are only going in circles.
      • You need more data; to do so, you should perform an experiment.
  • You can no longer remember/track your best created strategies.
  • You cannot judge the value difference between new strategies and existing strategies.
  • You spend x percent of your time tracking/remembering your created strategies, where x is significant.
  • There are better questions to consider.
  • The value of answering the question will diminish greatly if you spend more time trying to optimize it.
      • "It is great you finished the test and got al
... (read more)

There seem to be several problems with the reasoning displayed in your post.

Could you communicate what you want people to take away from this, so I can put the post in a proper context and decide how to communicate the problems I see?

0PhilGoetz11y
My guess is that the Copenhagen interpretation isn't supposed to talk about what your beliefs are; it's just supposed to talk about entanglement of waveforms. So Schroedinger's beliefs about whether the cat is alive or dead don't matter. But I've heard people talk about such situations as if Schroedinger's belief that the cat was alive or dead was important. Especially in connection with the idea that a waveform only truly collapses when an observation is made by a conscious agent. If you don't say that only conscious agents can collapse waveforms, then you have to agree that something in the box collapses the waveform as seen from inside the box, while it's still uncollapsed to Schroedinger. And Schroedinger's opening the box collapses that waveform for him; but it is still uncollapsed for someone outside the room. But if you do say that only conscious agents can collapse waveforms, then it's something about their mental processes that does the collapsing. This could mean their beliefs matter. And then, the cat is always dead. The whole business seems murky and mysterious to me, and I hope for some enlightenment. And if it is not enlightening, it can at least be entertaining.

Another graduate student here: I have in general heard similar opinions from many professors through undergrad and grad school. Never disdain for Bayes, but often something along the lines of "I am not so sure about that" or "I never really grasped the concept/need for Bayes." The statistics books that were required for classes used, in my opinion at the time, a slightly negative tone while discussing Bayes and 'subjective probability.'

It does charge a 5% fee, which is not small.

How about college newspapers, forums, meetups, talks, casual lunches, and whatever else works? Colleges often act as small, semi-closed social ecosystems, so it is easier to reach the critical number needed for a self-sustaining community, or the critical number of people to take an idea from odd to normal.

Can you think of other online communities that suffer or at least go through great and unpredictable change due to a high influx of new people?

2khafra11y
The only one I can think of that's stayed relatively high-quality for a long time is Hacker News, and they actively discourage large influxes--for example, by flooding the front page with posts on Erlang internals when mentioned in the mass media. Paul Graham also does very active experimentation with HN's reputation system, which I like: There are karma thresholds for voting down comments, higher ones for voting down posts; you cannot vote down a direct reply to your own comment; you cannot vote down a comment more than a few days old (this one wouldn't work as well here). The most radical change he's made is that only you can see the exact karma for your own comments (although comments below zero are progressively lighter shades of grey).
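As a side note, the rules listed above amount to a small permission check. A minimal sketch of that rule set, assuming hypothetical thresholds and an age cutoff (none of the values, names, or structures below come from HN's actual implementation):

```python
# Purely illustrative model of the described downvote rules.
# Thresholds, the age cutoff, and all names are assumptions for the sketch.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

COMMENT_DOWNVOTE_KARMA = 500          # assumed karma threshold for downvoting comments
POST_DOWNVOTE_KARMA = 1000            # assumed higher threshold for downvoting posts
DOWNVOTE_WINDOW = timedelta(days=3)   # assumed cutoff for "more than a few days old"

@dataclass
class User:
    name: str
    karma: int

@dataclass
class Item:
    author: User
    created: datetime
    is_post: bool = False
    parent_author: Optional[User] = None  # author of the comment this item replies to

def can_downvote(voter: User, item: Item, now: datetime) -> bool:
    """Return True if `voter` may downvote `item` under the described rules."""
    threshold = POST_DOWNVOTE_KARMA if item.is_post else COMMENT_DOWNVOTE_KARMA
    if voter.karma < threshold:
        return False                   # below the karma threshold
    if item.parent_author is not None and item.parent_author.name == voter.name:
        return False                   # cannot downvote a direct reply to your own comment
    if now - item.created > DOWNVOTE_WINDOW:
        return False                   # cannot downvote items more than a few days old
    return True
```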

I have heard people talk of punishing abortion on par with other kinds of murder. This viewpoint has the real potential to alienate people. It makes sense that people who hold that viewpoint and realize this are not shouting it to the world or filing court cases. Instead they judge that small changes are the best way to get what they want in the long term, and fight those intermediate battles instead of taking it straight on.

For the people who would downvote this: would it be better if she did not respond to lukeprog's post at all? Acknowledging someone when they attempt to communicate with you is considered polite. It often serves the purpose of communicating a lack of spite and/or hard feelings even as you insist on ending the current conversation.

We could have a Google+ account open and offer to hang out with interested parties nearby or far. I got the idea from: http://lesswrong.com/r/discussion/lw/731/meetup_proposal_google/

I think the point that others have been trying to make is that gaining the evidence isn't merely of lower importance to the agent than some other pursuits, it's that gaining the evidence appears to be actually harmful to what the agent wants.

Yes. I proposed the alternative situation, where the evidence is simply considered to be of lower value, as an alternative that produces the same result.

I don't see how the situation is meaningfully different from no cost. "I couldn't be bothered to get it done" is hardly an acceptable excuse on the face of it

... (read more)

I disagree. In the least convenient world where the STD test imposes no costs on Alex, he would still be instrumentally rational to not take it. This is because Alex knows the plausibility of his claims that he does not have an STD will be sabotaged if the test comes out positive, because he is not a perfect liar.

In a world where STD tests cost absolutely nothing, including time, effort, and thought, there would be no excuse not to have taken a test, and I do not see a method for generating plausible deniability by not knowing.

Some situation at a college wh

... (read more)
3Desrtopa11y
That's not really the least convenient possible world though, is it? The least convenient possible world is one where STD tests impose no additional cost on him, but other people don't know this, so he still has plausible deniability. Let's say that he's taking a sexuality course where the students are assigned to take STD tests, or if they have some objection, are forced to do a make up assignment which imposes equivalent inconvenience. Nobody he wants to have sex with is aware that he's in this class or that it imposes this assignment.
1MarkusRamikin11y
I don't see how the situation is meaningfully different from no cost. "I couldn't be bothered to get it done" is hardly an acceptable excuse on the face of it, but despite that people will judge you more harshly when you harm knowingly rather than when you harm through avoidable ignorance, even though that ignorance is your own fault. I don't think they do so because they perceive a justifying cost. I think the point that others have been trying to make is that gaining the evidence isn't merely of lower importance to the agent than some other pursuits, it's that gaining the evidence appears to be actually harmful to what the agent wants.

Is the standard then that it's instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one's decisions?

It weighs into the decision, but it seems insufficient by itself. An experiment can change my decision radically but be on an unimportant topic, one that does not affect goal-achieving ability. It is possible to imagine spending one's time on experiments that change one's decisions and never get close to achieving any goals. The vague answer seems to be to prioritize by how much the experiments are likely to help achieve one's goals.

An additional necessary assumption seems to be that Alex cares about "Whatever can be destroyed by the truth, should be." He is selfish but does his best to act rationally.

Let's call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn't care about their partners' sexual health (or the knock-on effects of people in general not caring about their partners' sexual health) at all, then this is the right choice instrumentally.

Therefore Alex does not value knowing whether or not he has an s... (read more)

5rocurley11y
I disagree. In the least convenient world where the STD test imposes no costs on Alex, he would still be instrumentally rational to not take it. This is because Alex knows the plausibility of his claims that he does not have an STD will be sabotaged if the test comes out positive, because he is not a perfect liar. (I don't think this situation is even particularly implausible. Some situation at a college where they'll give you a cookie if you take an STD test seems quite likely, along the same lines as free condoms.)
2DSimon11y
This is a very good point. We cannot gather all possible evidence all the time, and trying to do so would certainly be instrumentally irrational. Is the standard then that it's instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one's decisions?

In the few specific situations that I drilled down on, I found that "deliberately doing a crappy job of (a)" never came up. Sometimes, however, the choice was between doing (a)+(b) with topic (d) or doing (a)+(b) with topic (e), where it is unproductive to know (d). The choice is clearly to do (a)+(b) with (e), because it is more productive.

Then there is no conflict with "Whatever can be destroyed by the truth, should be," because what needs to be destroyed is prioritized.

Can you provide a specific example where conflict with "Whatever can be destroyed by the truth, should be." is ensured?

3DSimon11y
Okay, I think this example from the OP works: Let's call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn't care about their partners' sexual health (or the knock-on effects of people in general not caring about their partners' sexual health) at all, then this is the right choice instrumentally. However, by acting this way Alex is deliberately protecting an invalid belief from being destroyed by the truth. Alex currently believes or should believe that they have a low probability (at the demographic average) of carrying a sexual disease. If Alex got tested, then this belief would be destroyed one way or the other; if the test was positive, then the posterior probability goes way upwards, and if the test is negative, then it goes downwards a smaller but still non-trivial amount. Instead of doing this, Alex simply acts as though they already knew the results of the test to be negative in advance, and even goes on to spread the truth-destroyable-belief by encouraging others to take it on as well. By avoiding evidence, particularly useful evidence (where by useful I mean easy to gather and having a reasonably high magnitude of impact on your priors if gathered), Alex is being epistemically irrational (even though they might well be instrumentally rational). This is what I meant by "deliberately doing a crappy job of [finding out which ideas are false]". This does bring up an interesting idea, though, which is that it might not be (instrumentally) rational to be maximally (epistemically) rational.

I do not see an obvious and direct conflict, can you provide an example?

0DSimon11y
The conflict seems to be that, according to the advice, a rationalist ought to (a) try to find out which of their ideas are false, and (b) evict those ideas. A policy of strategic ignorance avoids having to do (b) by deliberately doing a crappy job of (a).

Some sense that there's something distinct about her which would mean that lukeprog

Regarding this something distinct: would a more detailed set of specs qualify? In your mind, is it that lukeprog seems to have few and shallow specs that bothers you? Or is your "distinct" distinct from specs entirely?

Do you see a difference between that and stating an intention to leave the relationship if the other person has sex with someone else? Luckily, I currently live in a time and place where these two scenarios are often functionally similar.

5jsteinhardt12y
Yes, as long as it is not intended as a threat. If not meant as a threat, then it is just a statement of your own preferences (e.g. "I strongly prefer not to be in a relationship that is non-monogamous"). I find such preferences highly suboptimal, but I'm willing to accept that some people are unable to alter them.

Can you give examples of beliefs and actions of people who believe they "own other people's sexualities"?

4jsteinhardt12y
If I am dating you, and I (explicitly or implicitly) forbid you to have sex with anyone else, then I am assuming ownership of your sexuality, by telling you what you can and cannot do with it. (I don't want to speak for eridu and jason, but this is how I interpreted the phrase in the OP.)

I think I understand where you are coming from approximately, but for clarity what specifically would liking her entail above and beyond a set of specs?

5NancyLebovitz12y
Some sense that there's something distinct about her which would mean that lukeprog would care if she were replaced by a different woman who was as good-looking and as interested in sex with him.

Why wish for:

I wish I wasn't as intelligent as I am, wish I was more normal mentally

and had less innate ability for math?

Why not just wish for being better at socializing/communicating?

By:

our cultural sentiments surrounding meat consumption

Do you mean the rationalist community or the human community at large?

0Cosmos12y
I meant humanity at large, and I expect the rationalist community to follow suit.

Also what about the children who learn baby sign language before speaking?

Tying objects on top of or in cars for transport is a pretty practical skill.

I would not want it at all in the comments. It might be acceptable to have them on main posts.

1Michelle_Z12y
If they're going to be added, they should definitely not be added to the comments. That would clutter it up way too much. If they are added to posts, a line break between the post itself and the signature would help, along with making the font a lighter shade of grey. Not so light that you cannot read it, but something like the date and time next to the comment's name (about 50% grey? Can't honestly tell. I want to say 40% but that might be too light). It might also be aesthetically pleasing to place it in italics. That way those people who don't really want to see it automatically see the slanted, light font and skip over it, but it's available to someone who does want to read it.

I am no psychologist. I thought one of the benefits of gradual habituation was that it takes place in a controlled setting that the subject can end at any time with essentially no consequences. This contrasts with "sometimes forced into situations"; I also have the impression that in these forced situations there is no sequential ordering of events from the least discomfort to the most, in other words no gradualness. (Also, perhaps these events start at too high a stimulus level.)

Finding someone capable of setting up a gradual habituation regimen, and having the time to follow through with it, are the biggest obstacles to experimenting with habituation regimens in my experience.

1Alicorn12y
I did not submit "help me figure out how to deal with sweat" as a True Rejection Challenge, so this line of advice is neither on-topic nor welcome.

Effort is hard enough to judge in person and pretty much impossible over the internet. I have observed more than once in my life people judged as lazy, or assigned many other negative traits, only to have the person years later discover a previously unknown medical condition causing the underlying problems. Once it is diagnosed as organ failure, or a growth putting pressure in an odd place, society stops judging them as lazy or ascribing any number of other negative traits.

The initial label of laziness (or other negative trait) was a logical misstep: coming to a conclusion without sufficient evidence.

I understood/understand that was/is your point. I was referring to "select people", meaning people who are more sensitive to reduced food intake or photosensitive; people not near the mean of the bell curve.

realize that irrational psychological flaws are things that should and in many cases can be overcome (I know, I've done it), not taken as unshakeable premises.

I know; I have done it too. However, I cannot put "psychological flaws" in the right context to understand exactly what you mean by it, since it is not always possible to ... (read more)

My reply to the edited post:

The world is not obligated to be convenient for you.

I assume you state this because you are under the impression that Alicorn believes/acted like/implied the world is obligated to be convenient for Alicorn.

That is not the impression I have obtained by reading the posts in this discussion. What specifically gave you that impression?

-5ShardPhoenix12y

edit: The whole post I responded to was:

  1. and 3. there are essentially true.

The negative consequences of following through with 1 or 3 can be so high for select people that they are not worth doing.

Following through with 1 may cause weight loss, but may also cause diminished intelligence, diminished energy, or malnutrition, again for select people.

Following through on 3 may cause cancer or increase the risk of cancer to high levels, again for select people.

Also, it's a good idea to get over harmful and unnecessary aversions regardless.

This statement i... (read more)

-4ShardPhoenix12y
I didn't suggest a starvation diet, and sunscreen exists. Besides, my general point is that sometimes you need to try harder [http://lesswrong.com/lw/ui/use_the_try_harder_luke/] instead of giving up due to things that aren't even harmful, and also realize that irrational psychological flaws are things that should and in many cases can be overcome (I know, I've done it), not taken as unshakeable premises.

I think jmed's link has the right idea.

The key to desensitization, in my experience at least, is to be able to force calmness during exposure to whatever is causing the nervousness/stress/fear. Start off with the smallest stimulus that invokes fear and do your best to be calm and relaxed. Deep breathing at first, and later activities that require some attention, like reading, cooking, stretching, etc. Once the current level of stimulus does not interrupt these activities, increase the level of stimulus and repeat.

If no progress is made in a mont... (read more)

A few interesting things about LOS were brought up that covered CH1. CH2 is planned for next week.

I enjoyed the paranoid debating more than expected. Three additional people joined the paranoid debating after walking through the room on other business, so it ended up with a total of seven people. Cog has a record of responses and is going to tell us our score later. It should be fun to track scores over time and see how people adjust after having played the game for a while. Also it will be fun to see if people will learn how to hide their tells of weathe... (read more)

0JenniferRM12y
Cool! If you see any way to improve the scoring process for paranoid debating, it would be neat to fold the improvements into code [https://github.com/JenniferRM/Paradebate] so other people have an easier time getting started. At a meetup in southern California we speculated that the game might be useful as a test bed for trying out discussion strategies and directly measuring what worked best, but I expect that it will take some work to figure out a process for playing and scoring in a way that can detect significant differences in playing strategies.
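One purely hypothetical sketch of what such a scoring-and-comparison harness might look like (this is not taken from the Paradebate repository; the scoring rule, function names, and example numbers are all assumptions): score each round by the absolute log-ratio error of the group's estimate, then use a permutation test to check whether two discussion strategies differ by more than chance.

```python
# Illustrative only: assumed scoring rule and strategy comparison for paranoid debating.
import math
import random
from typing import Sequence

def round_score(estimate: float, true_value: float) -> float:
    """Assumed scoring rule: absolute log-ratio error (lower is better).

    Assumes both quantities are positive, as in typical "estimate this number" rounds.
    """
    return abs(math.log(estimate / true_value))

def mean(xs: Sequence[float]) -> float:
    return sum(xs) / len(xs)

def strategy_difference(scores_a: Sequence[float],
                        scores_b: Sequence[float],
                        trials: int = 10_000,
                        seed: int = 0) -> float:
    """Permutation test: p-value for 'strategies A and B score the same on average'."""
    rng = random.Random(seed)
    observed = abs(mean(scores_a) - mean(scores_b))
    pooled = list(scores_a) + list(scores_b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        a, b = pooled[:len(scores_a)], pooled[len(scores_a):]
        if abs(mean(a) - mean(b)) >= observed:
            hits += 1
    return hits / trials

# Example with made-up (estimate, true value) pairs for two discussion strategies.
scores_free_discussion = [round_score(e, t) for e, t in [(120, 100), (3e6, 1e6), (40, 55)]]
scores_structured = [round_score(e, t) for e, t in [(90, 100), (1.4e6, 1e6), (60, 55)]]
print(strategy_difference(scores_free_discussion, scores_structured))
```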