All of Viktor Riabtsev's Comments + Replies

I found the character sheet system to be very helpful. In short, it's just a ranked list of "features"/goals you're working towards, with a comment slot (it's just a Google Sheet).

I could list personal improvements I was able to gain from regular use of this tool, like weight loss/exercise habits etc., but that feels too much like bragging. Also, I can't prove causation rather than mere correlation.

The cohort system provides a cool social way to keep yourself accountable to yourself.

Dead link for "Why Most Published Research Findings Are False". Googling just the URL parameters yields this.

Did anyone else get so profoundly confused that they googled "Artificial Addition"? Only when I was halfway through the bullet-point list did it click that the whole post is a metaphor for common beliefs about AI. And that was on the second read; the first time, I gave up before that point.

I shall not make the mistake again!

You probably will. I don't think biases disappear even when you're aware of them; they're a generic human feature. I think self-critical awareness will always slip at the crucial moment; it's important to remember this and acknowledge it. Big things vs. small things, as it were.

On my more pessimistic days I wonder if the camel has two humps.

Link is dead. Is this the new link?

It seems less and less like a Prisoner's Dilemma the more I think about it. Chances are this is an "oops" and I messed up.

I still feel like the thing with famous names like Sam Harris is that there's a "drag" force on his penetration into the culture nowadays, because there's a bunch of history that has been (incorrectly) publicized. His name is associated with controversy, despite his best efforts to avoid it.

I feel like you need to overcome a "barrier to entry" when listening to him. Unlike Eliezer, whose public image (in my limited opinion) is actually friendly to newcomers.

Some

...

I could be off base here. But a lot of classic cooperate-vs-defect stories involve two parties who hate each other's ideologies.

Could you then not say: "They have to first agree and/or fight a Prisoner's Dilemma on an ideological field"?

Sniffnoy (4 points, 4y):
I think you're going to need to be more explicit. My best understanding of what you're saying is this: each participant has two options -- to attempt to actually understand the other, or to attempt to vilify them for disagreeing -- and we can lay these out in a payoff matrix and turn this into a game. I don't see offhand why this would be a Prisoner's Dilemma, though I guess that seems plausible if you actually do this. It certainly doesn't seem like a Stag Hunt or Chicken, which I guess are the other classic cooperate-or-don't games.

My biggest problem here is the question of how you're constructing the payoff matrices. The reward for defecting is greater ingroup acceptance, at the cost of understanding; the reward for both cooperating is increased understanding, but likely at the cost of ingroup acceptance. And the penalty for cooperating and being defected on seems to be in the form of decreased outgroup acceptance. I'm not sure how you make all these commensurable to come up with a single payoff matrix. I guess you have to somehow, but that the result would be a Prisoner's Dilemma isn't obvious.

Indeed, it's actually not obvious to me here that cooperating and being defected on is worse than what you get if both players defect, depending on one's priorities, which would definitely not make it a Prisoner's Dilemma. I think that part of what's going on here is that different people's weighting of these things may substantially affect the resulting game.
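A toy way to make the payoff-ordering point concrete (hypothetical numbers only, not anything either commenter proposed): a symmetric 2x2 game is a Prisoner's Dilemma only if Temptation > Reward > Punishment > Sucker's payoff, and small changes in how you weight acceptance vs. understanding can break that ordering.

```python
# Hypothetical payoffs only -- a sketch of checking whether a 2x2
# "understand vs. vilify" game is actually a Prisoner's Dilemma.
def is_prisoners_dilemma(R, S, T, P):
    """True iff the classic PD ordering holds: T > R > P > S."""
    return T > R > P > S

# One plausible collapse of ingroup acceptance + understanding into one number:
print(is_prisoners_dilemma(R=3, S=0, T=5, P=1))  # True -> a PD under this weighting

# A different (also plausible) weighting, where being "suckered" still beats
# mutual vilification, is no longer a Prisoner's Dilemma at all:
print(is_prisoners_dilemma(R=3, S=2, T=5, P=1))  # False
```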

So ... a prisoner's dilemma but on a meta level? Which then results in primary consensus.

Sniffnoy (5 points, 4y):
What does this have to do with the Prisoners' Dilemma?

Yep. Just have to get into the habit of it.

Less Wrong consists of three areas: The main community blog, the Less Wrong wiki and the Less Wrong discussion area.

Maybe redirect the lesswrong.com/r/discussion/ link & description to the "Ask a Question" beta?

TheWakalix (5 points, 4y):
This is the old version, kept for the sake of not deleting old things. It is not meant to be an accurate description of modern LW.

That was a great read.

figure out what was going on rather than desperately trying to multiply and divide all the numbers in the problem by one another.

That one hits home. I've been doing a bit of math lately, nothing too hard, just some derivatives/limits, and I've found myself spending inordinate amounts of time taking derivatives and doing random algebra. Just generally flailing around hoping to hit the right strategy, instead of pausing first to think "How does this imply that?" or "What does this suggest?" before doing rote algebra.

UV meters! Thank you! Seems such an obvious idea in hindsight.

Why wonder blindly when you can quantify it? I'll look into getting one.

Douglas_Knight (2 points, 4y):
Or you could just look at the weather report, now that you know what to look for.

Dead link: the "scientists shouldn't even try to take ethical responsibility for their work" link is now here.

Raemon (2 points, 4y):
fixed

I did that a couple of minutes ago. Then I tried to fix the formatting, and I think I subsequently undid your formatting fixes.

Ben Pace (5 points, 4y):
ahaha Added: I fixed it again.

Related:

“Sometimes a hypocrite is nothing more than a man in the process of changing.” ― Brandon Sanderson, Oathbringer (By Dalinar Kholin)

Umm, it's a real thing: ECC memory (https://en.m.wikipedia.org/wiki/ECC_memory). I'm sure it isn't 100% foolproof (coincidentally, the point of this article), but I imagine it reduces the error probability by orders of magnitude.
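A back-of-the-envelope sketch of the "orders of magnitude" claim (my own illustrative numbers, not from the article): with single-error correction over a 72-bit ECC word (64 data + 8 check bits), an uncorrectable error needs at least two bit flips, so its probability scales with p² instead of p.

```python
# Rough illustration with made-up numbers: how much single-error correction
# (as in SECDED ECC memory) shrinks the chance of an uncorrected error.
from math import comb

p = 1e-12   # assumed per-bit flip probability over some interval (illustrative only)

# Without ECC: any single flip in a 64-bit word corrupts it (leading-order term).
p_no_ecc = 64 * p

# With ECC: single flips are corrected; corruption needs >= 2 flips among 72 bits
# (64 data + 8 check bits). The two-flip term dominates for tiny p.
p_ecc = comb(72, 2) * p ** 2

print(f"no ECC : ~{p_no_ecc:.1e}")
print(f"ECC    : ~{p_ecc:.1e}")
print(f"reduction factor: ~{p_no_ecc / p_ecc:.1e}")
```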

I'd say there are mental patterns/heuristics that can be learned from video games that are in fact useful.

Persistence, optimization, patience.

I won't argue there aren't all sorts of exciting pitfalls and negatives that could also be experienced; I would just point at something like Dark Souls and claim: "yeah, that one does it well enough on the positives".

That's one large part of the traditional approach to Santa-ism, yeah. But it doesn't have to be, as Eliezer describes in the top comment.

it is still relatively unlikely that a person disagree for an opportunity to refine their model of the universe.

It still does happen though. I've only gotten this far in the Recommended Sequences, but I've been reading the comments whenever I finish a sub-sequence; and they (a) definitely add to the understanding, and (b) expose occasional comment threads where two people arrive at mutual understanding (clear up lexical miscommunication, etc.). "Oops" moments are rare, but the whole karma system seems great for occasional productive di...

35 - 8 = 20 + (15 - 8)

Wow. I've never even conceived of this (on its own, or as a simplification).

My entire life has been the latter simplification method.
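Spelled out step by step (my own expansion, not from the post), the trick is to peel off a round number and let the smaller piece absorb the subtraction:

35 - 8 = (20 + 15) - 8 = 20 + (15 - 8) = 20 + 7 = 27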

My favorite thing to do in physics/math classes, all the way up to 2nd year in university (I went into engineering), was to ask others how they fared on tests, in order to then figure out why my answers were wrong.

I found genuine pleasure in understanding where I went wrong. Yet this seemed taboo in high school, and (slightly less so) frowned upon in university.

I feel like rewarding the student who messed up, however much or little, with some fraction of the total test score, like 10%, would be a great idea. It gives you an incentive to figure out what you missed, even if you care little about it. That's better than nothing.

Reading these comment chains somehow strongly reminds me of listening to Louis CK.

I found a reference on Wikipedia to a very nice overview of the mathematical motivations for Occam's Razor.

It's Chapter 28, "Model Comparison and Occam's Razor" (page 355), of Information Theory, Inference, and Learning Algorithms (a legally free-to-read PDF) by David J. C. MacKay.

The Solomonoff Induction stuff went over my head, but this overview's discussion of the trade-off between communicating more model parameters vs. having to communicate smaller residuals (i.e. offsets from the real data) was very informative.
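A rough sketch of that trade-off in code (my own toy example, with BIC-style scoring rather than MacKay's full analysis): each extra parameter costs bits, and only pays for itself if it shrinks the residuals enough.

```python
# Toy illustration of the parameters-vs-residuals trade-off (not MacKay's method):
# score polynomial fits by a crude two-part description length in bits.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=n)   # data really generated by a line

for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1                                   # number of model parameters
    # BIC-style code length in bits: parameter cost + residual cost.
    # Only relative values matter; lower means a better complexity/fit trade-off.
    bits = 0.5 * k * np.log2(n) + 0.5 * n * np.log2(rss / n)
    print(f"degree {degree}: {k} params, RSS={rss:.3f}, score={bits:.1f} bits")

# The straight line (degree 1) typically wins: higher degrees barely reduce RSS
# but keep paying for extra parameters -- Occam's Razor as a description-length budget.
```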

then your model says that your beliefs are not themselves evidence, meaning they

I think this should be more like "then your model offers weak evidence that your beliefs are not themselves evidence".

If you're Galileo and find yourself incapable of convincing the church about heliocentrism, this doesn't mean you're wrong.

Edit: g addresses this nicely.
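To put a toy number on "weak evidence" (hypothetical probabilities, not from the post): if the church was likely to stay unconvinced whether or not heliocentrism is true, the likelihood ratio is near 1 and the update is tiny.

```python
# Toy Bayes update with made-up numbers: failing to convince an audience is only
# strong evidence against your belief if success was much likelier given truth.
prior_odds = 4.0            # say you start at 4:1 in favour of your belief
p_fail_given_true = 0.80    # audience stays unconvinced even though you're right
p_fail_given_false = 0.95   # audience stays unconvinced because you're wrong

likelihood_ratio = p_fail_given_true / p_fail_given_false   # ~0.84
posterior_odds = prior_odds * likelihood_ratio
print(f"posterior odds: {posterior_odds:.2f} : 1")          # ~3.4 : 1, barely moved
```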

Upvoted for the "oops" moment.

Thank you. I tried using http://archive.fo/, but no luck.

I'll add https://web.archive.org/ to bookmarks too.

Said Achmiz (3 points, 4y):
Archived version. [https://web.archive.org/web/20110616221521/http://www.avdf.com/feb96/humour_liar.html]

Yeah, you never know if someone in the process of reading the Sequences won't periodically go back and try to read all the discussions. Like, I am not going to read the twenty posts with 0 karma and 0 replies; but ones with comments? Opposing ideas and discussions spark invigorating thought. Though it does get a bit tedious on the more popularized articles, like this one.

I am going to try and sidetrack this a little bit.

Motivational speeches, pre-game speeches: these are real activities that serve to "get the blood flowing" as it were. Pumping up enthusiasm, confidence, courage and determination. These speeches are full of cheering lines, applause lights etc., but this doesn't detract from their efficacy or utility. Bad morale is extremely detrimental to success.

I think that "Joe has utility-pumping beliefs" in that he actually believes the false fact "he is smart and beautiful"; is the w... (read more)

Show him how to send messages using flashing mirrors.

Oh god. That is actually just humongous in its possible effect on warfare.

I mean, add simple ciphers to it and you literally add another whole dimension to warfare.

Communication lines set up this way are almost like adding radio. Impractical in some situations, but used in regional warfare with multiple engagements? This is empire-forming stuff: reflective stone plus semi-trivial education equals dominance.
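A minimal sketch of what "simple ciphers on top of mirror signals" could look like (entirely my own toy example, nothing historical): shift the letters, then send each letter as a count of flashes.

```python
# Toy example only: a Caesar shift layered on a crude flash-count signalling scheme.
def caesar(text, shift):
    """Shift letters by `shift`; drop anything that isn't a letter."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

def to_flash_counts(text):
    """Encode each letter as a number of flashes (A=1 ... Z=26)."""
    return [ord(ch) - ord("A") + 1 for ch in text]

message = "ATTACK AT DAWN"
enciphered = caesar(message, 3)
print(enciphered)                    # DWWDFNDWGDZQ
print(to_flash_counts(enciphered))   # what an observer without the shift key sees
```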

LessWrong FAQ

Hmm, I couldn't find a link directly on this site. Figured someone else might want it too (although a Google search did kind of solve it instantly).

I suggest the definition that biases are whatever cause people to adopt invalid arguments.

False or incomplete/insufficient data can cause the adoption of invalid arguments.

Contrast this with:

The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concl
...

drag in Bayes's Theorem: the link was moved to http://yudkowsky.net/rational/bayes/, but Eliezer seems to suggest https://arbital.com/p/bayes_rule/?l=1zq over it (and it's really, really good).

Thanks. I bookmarked http://archive.fo/ for these kinds of things.

The Simple Truth link should be http://yudkowsky.net/rational/the-simple-truth/

habryka (2 points, 4y):
Thanks, fixed!

I am guessing that the "what truth is." link is meant to be http://yudkowsky.net/rational/the-simple-truth

habryka (3 points, 4y):
Thanks, fixed as well!

The "something terrible happens" link is broken. It was moved to http://yudkowsky.net/other/yehuda/

habryka (2 points, 4y):
Also fixed!
Vladimir_Nesov (3 points, 4y):
There's an archived copy here [http://archive.today/2007.09.27-215558/http://www.singinst.org/blog/2007/06/16/transhumanism-as-simplified-humanism/].

Excellent write-up.

"place far more trust in the human+AI system to be metaphilosophically competent enough to safely recursively self-improve " : I think that's a Problem enough People need to solve (to possible partial maximum) in their own minds, and only they should be "Programming" a real AI.

Sadly this won't be the case =/.