All of papetoast's Comments + Replies

You should link this post on the top of your old one

2romeostevensit6d
Can't, don't have access to my pre LW 2.0 account due to an email collision

I feel like changing it to proper footnotes would be better

I find this explanation to be much easier to understand than SimonM/gjm's

I don't think "we need to weigh the cost of everything" is an important thing to mention in this post. Weighing the cost of everything is very important, but it is another topic on its own; it is a whole different skill to hone (I think Duncan actually wrote a post about this in the CFAR handbook).

(nitpick: 6400 words + the diagrams is closer to 20-25 pages at font size 12 in Word)

I cannot access www.lesswrong.com/rejectedcontent (404 error). I suspect you guys forgot to give access to non-moderators, or you meant www.lesswrong.com/moderation (but there are no rejected posts there, only comments)

2Raemon2mo
We didn't build that yet but plan to soon. (I think what happened was Ruby wrote this up in a private google doc, I encouraged him to post it as a comment so I could link to it, and both of us forgot it included that explicit link. Sorry about that, I'll edit it to clarify)

I think I'm going to unite all my online identities. I'm starting to get tired of all my wasted efforts that only a single person or two will ever see.

2Dagon2mo
Do you think a united/reused identifier will change who sees which efforts?  Or do you mean "I'm going to focus attention where I'm more widely read, and stop posting where I'm not known"?

osu! should be written in lowercase. a tweet from osu!

And a good teacher will try to do as much it as possible

Missing word: And a good teacher will try to do as much of it as possible

I am mainly just curious how other people live their lives. It is interesting to know how diverse humans really are. Also, I may just be stuck in a local optimum; then it would at least be nice to know there are better local optima, even if it would take me too much effort to change my way of life.

I am probably among the people who need emotional connection the least, and I can go on with my life just the same without talking to my close friends for weeks. So while I think this is a good way of making close friends (I will definitely try to apply it to a small number of people at least), I'm not too convinced about making a lot of close friends, because I think I already have enough close friends (4 or so; close friends is a fuzzy term), and more close friends feels like it would dilute my time too much. What do you guys think?

2Neel Nanda2mo
If you feel like you have enough close friends to satisfy you, then more power to you! It's not my job to tell you how to live your life if you're happy with it

You probably gave me too much credit for how deep I have thought about morality. Still, I appreciate your effort in leading me to a higher resolution model. (Long reply will come when I have thought more about it)

So you can't adequately explain "should" using only a descriptive account.

I don't think I am ready to argue about "should"/descriptive/normative, so this is my view stated without intent to justify it super rigorously. I already think there is no objective morality, no "should" (in its common usage), and both are in reality socially constructed things that will shift over time (relativism, I think? not sure). Any sentence like "You should do X" really just has the consequentialist meaning of "If you have terminal value V (which the speaker assumed yo... (read more)

1TAG3mo
So which do you believe in? If morality is socially constructed, then what you should do is determined by society, not by your own terminal values. But according to subjectivism you "should" do whatever your terminal values say, which could easily be something anti-social. The two are both not-realism, but that does not make them the same.

You have hinted at an objection to universal morality: but that isn't the same thing as realism or objectivism. Minimally, an objective truth is not a subjective truth, that is to say, it is not mind-dependent. Lack of mind-dependence does not imply that objective truth needs to be the same everywhere, which is to say it does not imply universalism. Truths that are objective but not universal would be truths that vary with objective circumstances: that does not entail subjectivity, because subjectivity is mind-dependence.

I like to use the analogy of big G and little g in physics. Big G is a universal constant; little g is the local acceleration due to gravity, and will vary from planet to planet (and, in a fine-grained way, at different points on the earth's surface). But little g is perfectly objective, for all its lack of universality.

To give some examples that are actually about morality and how it is contextual:

  • A food-scarce society will develop rules about who can eat how much of which kind of food.
  • A society without birth control and close to Malthusian limits will develop restrictions on sexual behaviour, in order to prevent people being born who are doomed to starve, whereas a society with birth control can afford to be more liberal.

Using this three-level framework, universal versus objective-but-local versus subjective, lack of universality does not imply subjectivity.

Anything could be justified that way, if anything can. So how sure can you be that the framework (presumably meaning von Neumann rationality) is correct and relevant? Remember, vN didn't say vNR could solve ethical issues. Peo

That's descriptive, not normative.

What's the issue with a descriptive statement here? It doesn't feel wrong to me, so it would be nice if you could elaborate slightly.

Also, I never found objective morality to be a reasonable possibility (<1%), are you suggesting that it is quite possible (>5%) that objective morality exists, or just playing devil's advocate here?

1TAG3mo
Any definition of what you should do has to be normative, because of the meaning of "should". So you can't adequately explain "should" using only a descriptive account. In particular, accounts in terms of personal utility functions aren't adequate to solve the traditional problems of ethics, because personal UFs are subjective, arbitrary and so on -- objective morality can't even be described within that framework. What kind of reasoning are you using? If your reasoning is broken, the results you are getting are pretty meaningless. I can say that your reasons for rejecting X are flawed without a belief in X. Isn't the point of rationality to improve reasoning?

In that case, tabooing the word is probably better than bringing out the dictionary to show that the other person's use of words is against common sense (assuming you want to actually reach a consensus; if you're more about winning the argument, then bringing the dictionary is probably better?)

Documenting a specific need of mine: LaTeX OCR software

tl;dr: use Mathpix if you use it <10 times per month or you are willing to pay $4.99 per month. Otherwise use SimpleTex

So I have been using Obsidian for note-taking for a while now, and I eventually decided to stop using screenshots and instead learn some LaTeX so the formulas look better. At first I was relying on websites to show the original LaTeX commands, but some websites (wiki :/) don't do that, and also I started reading math textbooks as PDFs. Thus started my adventure to find a good and... (read more)

(I am currently on the path of learning how values actually work and figuring out what I should really do.)

It has been a few days since I read this post so I may be misrepresenting you, but I think this post committed a similar mistake to people who think that arguing with another person to change their mind is meaningless given that we don't have free will, because given a deterministic future, that person will just automatically change their mind. But it doesn't work like that, because the act of arguing is part of the deterministic process that eventual... (read more)

2Gordon Seidoh Worley4mo
I eventually got less confused about values. I still think there's something unnecessary in worry about value drift, and I could probably make a better argument for that now but I've got other projects on my plate. Anyway, since you're thinking about how values actually work, this post [https://www.lesswrong.com/posts/k8F8TBzuZtLheJt47/deconfusing-human-values-research-agenda-v1] captures a bunch of what I figured out and links to other things, but it's also now a couple years old and I'd probably say things differently than I did at the time.

Cooperation may incur different costs on different participants.

Related: The Schelling Choice is "Rabbit", not "Stag"

It does not apply to this game, where punishing cooperators makes everyone worse off, but it does talk about how for poor people the best choice may be the low-risk, low-reward action.

It is really annoying that if you use footnotes from the LW Docs Editor and then switch to the Markdown editor, the footnotes get irreversibly messed up like this[[1]](#fnhqkg4lye79s)

  1. **[^](#fnrefhqkg4lye79s)**

    this is an example footnote

^the above is a reply to a slightly previous version

Agree with everything here, and the points in the first paragraph are ones I had not thought about. I'm curious whether you have a higher-resolution model of the different dimensions of learning though; it feels like I could improve my post with a clearer picture.

Btw, your whole reply seems to be a great example of what you mean by "it's probably best to acknowledge it and give the details that go into your beliefs, rather than the posterior belief itself."

A real conversation gives me 1 datapoint that people use the word "wisdom" for the concept "intelligence"

I think people (myself included) really underestimate this rather trivial statement: people don't really learn about something when they don't spend time doing it or thinking about it. People even measure mastery by hours practiced, not years practiced, but I still couldn't engrave this idea deep enough into my mind.

I currently don't have much evidence I can write down for why I think people underestimate this fact, but I think it is true. Below are some things that I have changed my mind about/realised after noticing this fact.

  • cached thoughts, on yourself
... (read more)
3Dagon5mo
There are at least a few different dimensions to "learning", and this idea applies more to some than to others.  Sometimes a brief summary is enough to change some weights of your beliefs, and that will impact future thinking to a surprising degree.  There's also a lot of non-legible thinking going on when just daydreaming or reading fiction. I fully agree that this isn't enough, and both directed study and intentional reflection is also necessary to have clear models.  But I wouldn't discount "lightweight thinking" entirely.

[Draft] It is really hard to communicate the level/strength of basically anything on a sliding scale, but especially things that could not make any intuitive sense even if you stated a percentage. One recent example I encountered is expressing what is in my mind the optimal tradeoff between reading quickly and thinking deeply to achieve the best learning efficiency.

Not sure what is the best way to deal with the above example, and other situations where percentage doesn't make sense.

But where percentage makes sense, there are still two annoying problems. 1.... (read more)

2Dagon5mo
For most topics, it's probably not worth going very deep in the rabbit hole of "what does a probability mean in this context".  Yes, there are multiple kinds of uncertainty, and multiple kinds of ratio that can be expressed by a percentage.  Yes, almost everything is a distribution, most not normal, and even when normal it's not generally specified what the stddev is.  Yes, probability is causally recursive (the probability that your model is appropriate causes uncertainty in the ground-level probability you hold).  None of that matters, for most communication.  When it does, then it's probably best to acknowledge it and give the details that go into your beliefs, rather than the posterior belief itself. For your example, the tradeoff between fast and careful, I doubt it can be formalized that way, even if you give yourself 10 dimensions of tradeoff based on context.  "Slow is smooth, smooth is fast" is the classic physical training adage, and I can't think of a numeric representation that helps.

Thanks for the datapoint. Also, links serving as an indicator of effort rather than actually expanding the information in the passage is a good point. If links are mainly an indicator of effort, I think this implies that people should not try as hard to ensure the links are relevant.

FWIW: My click through rate is probably <5%.

How likely is it that people actually click through links to related material in a post? It seems unlikely to me, actually unlikely to the point that I am wondering whether the links are actually useful.

3Dagon5mo
Depends on the post and the links.  I click through about 15% of Zvi's links, for instance, but I appreciate the others as further information and willingness to cite, even if I don't personally use them.  Other posts, I skim rather than really examining, and links still add value by indicating that the author has actually done a bit of research into the topic.

My comment at the point of time of his reply:

Many people are too unselective about what they read, causing them to spend a lot of time reading worthless material. (This applies to this shortform.)

3Dagon5mo
I don't necessarily disagree generally, but I do somewhat disagree for myself.  Since I don't have visibility into other people's reading habits or selectivity, I'm unsure if I'm an outlier or if I actually do disagree.  What does "many people" mean, and more importantly how can an individual (specifically: me) tell if they are too unselective, on what dimensions?  

Clarifications: 

  • What I had in mind when I say "people" is myself, and the average non-LW friends around me.
  • Worthless is a bad word choice, I just mean that there are better things to read.

Additionally:

I also think I have the tendency to try to read everything in a textbook, even if it is quite low in information density, with many filler stories or sentences serving as connective tissue. I probably should be trying to skip sentences, paragraphs and sections where I have sufficient confidence that either 1. I have already learned it and don't need a refresh... (read more)

3Dagon5mo
Thanks,  "don't read everything in a textbook" is good practical advice.  Learn to skim, and to stop reading any given segment when you cross the time/value threshold.  Importantly, learn to NOTICE what value you expect from the next increment of time spent.  Getting that meta-skill honed and habitual pays dividends in many many areas.

For others who also haven't heard of Cade Metz: he seems to be a news reporter (for lack of a better word) writing mostly about AI. See https://www.nytimes.com/by/cade-metz.

[Draft] Are memorisation techniques still useful in this age where you can offload your memory to digital storage?

I am thinking about using Anki for spaced repetition, and the memory palace thing also seems (on a surface level) interesting, but I am not sure whether the investment will be worth it. (2023/02/21: Trying out Anki)

I am increasingly finding it useful to remember my previous work so that I don't need to repeat the effort. Remembering workflow is important. (This means remembering things somewhere is very important, but I'm still not ... (read more)

3Dagon5mo
Some certainly are.  For many facts, memorized data is orders of magnitude faster than digitally-stored knowledge.  This is enough of a difference to be qualitative - it's not worth looking up, but if you know it, you'll make use of it. There's the additional level of internalizing some knowledge or techniques, where you don't even need to consciously expend effort to make use of it.  For some things, that's worth a whole lot. If you're a computer nerd, think of it as tiered storage.  On-core registers are way faster than L1 cache, which is faster than L2/3 cache, which is again faster than RAM, which is faster than local SSD storage which is faster than remote network storage.  It's not a perfect analogy, because the limits of each tier aren't as clearly defined, and it's highly variable how easy it is to move/copy knowledge across tiers. Indexing and familiarity matters a lot too.  Searching for something where you think it's partway through some video you saw 2 years ago is NOT the same as looking up a reminder in your personal notes a week ago.
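The tiered-storage analogy above can be sketched as a two-tier cache: a fast "memorized" tier in front of a slow lookup. This is a hypothetical sketch, not from the original comments; the function names and the simulated slow tier are invented purely for illustration.

```python
# A toy model of the "tiered storage" analogy: memorized facts are a fast
# cache in front of a slow lookup (notes, search, a half-remembered video).
slow_lookups = 0  # counts how often we pay the slow-tier cost

def slow_lookup(key):
    """Simulated slow tier; stands in for searching external storage."""
    global slow_lookups
    slow_lookups += 1
    return f"fact about {key}"

memorized = {}  # the fast tier: facts you simply know

def recall(key):
    # On a miss, pay the slow cost once and promote the fact to memory.
    if key not in memorized:
        memorized[key] = slow_lookup(key)
    return memorized[key]

recall("pi")  # first access hits the slow tier
recall("pi")  # second access is served from the fast tier
```

The point of the analogy survives the toy model: once a fact is in the fast tier, repeated use is effectively free, which is why "it's not worth looking up, but if you know it, you'll make use of it."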

[Draft]

Filter Information Harder (TODO: think of a good concept-handle)

Note: a long post usually means the post is very well thought out and serious, but this comment is not quite there yet.

Many people are too unselective about what they read, causing them to waste time reading low-value material[1].

2 Personal Examples: 1. I am reading through Rationality: A-Z and there are way too many comments that are just bad, and even the average quality of the top comments may not be worth reading, because I can probably spend the time better reading more EY p... (read more)

3Dagon5mo
If you could turn this into advice or guidance, it'd be really helpful.  Even sharing a metric so we could say "you should be more selective if X, less selective if Y" would be better than a direction with no anchor ("too unselective", no matter what).  I don't know if I'm in your target audience, but I'm at least somewhat selective in what I read, and I'm quite willing to stop partway through a {book, article, post, thread} when I find it low-value for me.

Thanks for your clarifications! They cleared up all of my written confusions. Though I have one major confusion that I am only able to pinpoint after your reply: from the wiki, I understand syllogisms as the 24 out of 256 2-premise deductions that are always true, but you seem to be saying that a syllogism is not what I think it is. You said "... a fundamental misunderstanding of the fact that syllogisms work by being generally true across specific categories of arguments", so syllogisms do not work universally with any words substituted into them, and only work when a specific category of words is used? If so, can you provide an example of a syllogism generating a false proposition when the wrong category of words is used?

1papetoast4mo
This seems related: words aren't type safe [https://www.lesswrong.com/posts/YvWfbLunzhiFm77GG/words-aren-t-type-safe]
2Benjamin Spiegel5mo
Glad I could clear some things up! Your follow-up suspicions are correct: syllogisms do not work universally with any words substituted into them, because syllogisms operate over concepts and not syntax categories. There is often a rough correspondence between concepts and syntax categories, but only in one direction. For example, the collection of concepts that refer to humans taking actions can often be described/captured in verb phrases; however, not all verb phrases represent humans taking actions. In general, for every syntax category (except for closed-class items like "and") there are many concepts and concept groupings that can be expressed as that syntax category.

Going back to the Wiki page, the error I was trying to explain in my original comment happens when choosing the subject, middle, and predicate (SMP) for a given syllogism (let's say, one of the 24[1]). The first example I can think of concerns the use of the modifier "fake," but let's start with another syllogism first:

All cars have value. Green cars are cars. Green cars have value.

This is a true syllogism, there's nothing wrong with it. What we've done is found a subset of cars, green cars, and inserted them as the subject into the minor premise. However, a naive person might think that the actual trick was that we found a syntactic modifier of cars, green, and inserted the modified phrase "green cars" into the minor premise. They might then make the same mistake with the modifier "fake," which does not (most of the time[2]) select out a subset of the set it takes as an argument. For example:

All money has value. Fake money is money. Fake money has value.

Obviously the problem occurs in the minor premise, "Fake money is money." The counterfeit money that exists in the real world is in fact not money. But the linguistic construction "fake money" bears some kind of relationship to "money" such that a naive person might agree to this minor premise while thinking, "well, fake money is mo
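The subset distinction in the reply above can be made concrete with sets: "green" selects a genuine subset of cars, so the substituted syllogism stays true, while "fake" does not select a subset of money. This is a toy illustration; the specific set contents are invented, not from the comment.

```python
# "Green" selects a genuine subset of cars, so substituting "green cars"
# into the syllogism preserves truth; "fake" is a syntactic modifier that
# does NOT select a subset of money. All set contents are toy stand-ins.
things_with_value = {"red car", "green car", "dollar bill"}
cars = {"red car", "green car"}
green_cars = {x for x in cars if x.startswith("green")}

# All cars have value; green cars are cars; therefore green cars have value.
assert cars <= things_with_value        # major premise
assert green_cars <= cars               # minor premise: a real subset
assert green_cars <= things_with_value  # conclusion follows

# All money has value; "fake money is money" is where the syllogism breaks.
money = {"dollar bill"}
fake_money = {"counterfeit bill"}       # syntactically "X money", but...
assert not (fake_money <= money)        # ...not actually a subset of money
```

The `<=` subset checks mirror the validity condition: the syllogism's conclusion only follows when the minor premise really is a subset relation, not merely a noun phrase that looks like one.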

I also found it a good practice to generate your own answers to how you would escape the happy death spiral, before reading the next article.

My answer:

Remember that powerful theories are the ones that eliminate many options, not ones that explain everything.

I think it is a reasonably good answer as it somewhat contains 3/5 of the points

  • Thinking about the specifics of the causal chain instead of the good or bad feelings;
  • Not rehearsing evidence; and
  • Not adding happiness from claims that “you can’t prove are wrong”;

What is the mathematical basis for people doing stuff at their own "free will"? I would appreciate some keywords or links.

2tlhonmey2mo
I'm afraid I haven't collected a definite list.  I just notice when it pops up in the wide variety of materials I tend to read.  For example, traffic studies showing better flow rates and safety when drivers are allowed more individual discretion.  You'll probably also find some stuff in Austrian economics with regard to how more freedom of choice allows for better optimization by making fuller use of the processing capability of each individual.  And there have been a few references to it in business management studies about why micromanaging your employees almost invariably leads to worse productivity. "Network Effects" is probably a good keyword if you want to go looking for such examples specifically.  It seems to be a common phrase.

Yes, but some estimates are clearly false, while your examples are estimates that may be true or may be false.

I am extremely confused by your comment, probably due to my own lack of linguistic knowledge.
(This whole reply should be seen as a call for help)

What I got is that fabricated options came from people "playing with word salad to form propositions" without fully understanding the implication of the words involved.

(I tried to generate an example of "propositions derived using syllogisms over syntactic or semantic categories", but I am way too confused to write anything that makes sense)

Here are 2 questions: how does your model differ from/relate to johnswentw... (read more)

2Benjamin Spiegel5mo
Sorry about that, let me explain. "Playing with word salad to form propositions" is a pretty good summary, though my comment sought to explain the specific kind of word-salad-play that leads to Fabricated Options, that being the misapplication of syllogisms. Specifically, the misapplication occurs because of a fundamental misunderstanding of the fact that syllogisms work by being generally true across specific categories of arguments[1] (the arguments being X, Y above). If you know the categories of the arguments that a syllogism takes, I would call that a grounded understanding (as opposed to symbolic), since you can't merely deal with the symbolic surface form of the syllogism to determine which categories it applies to. You actually need to deeply and thoughtfully consider which categories it applies to, as opposed to chucking in any member of the expected syntax category, e.g. any random Noun Phrase. When you feed an instance of the wrong category (or kind of category) as an argument to a syllogism, the syllogism may fail and you can end up with a false proposition/impossible concept/Fabricated Option. My model is an example of johnswentworth's relaxation-based search algorithm, where the constraints being violated are the syllogism argument properties (the properties of X and Y above) that are necessary for the syllogism to function properly, i.e. yield a true proposition/realizable concept. 1. ^ I suggested above that these categories could be syntactic, semantic, or some mental category. In the case that they are syntactic, a "grounded" understanding of the syllogism is not necessary, though there probably aren't many useful syllogisms that operate only over syntactic categories.

What possible advantages do you have in mind? I think it is just a bad, irrational thing to automatically assume attractive people to be smart or honest.

We aren't individually sentient, not really.

We do less thinking than we imagine, but we still think. However, I still agree (to a lesser extent) that (sub)cultures fixed many thoughts of many people.

The sad and funny thing is, we don't even try to understand the cognition of our subcultures, when we research cognition.

I find 2 possible meanings of "we" here, but the sentence is false in both senses:

  1. "We" = all of humanity: The "cognition of subcultures" sounds like half Anthropology and half Psychology, and I imagine it has been researched. 
  2. "We" = indiv
... (read more)

I agree that finding the truth and winning arguments are not disjoint by definition, but debate and finding the truth are mostly disjoint (I would not expect the optimal way to debate and the optimal way to seek truth to align much).

Also, I did not think you would mean "debate" as in "an activity where 2+ people trying to find the truth together by honestly sharing all the information"; what I think "debate" means is "an activity where 2+ people form opposing teams with preassigned side and try to use all means to win the argument". In a debate, I expect t... (read more)

2TAG6mo
How well debate works in practice depends on the audience. If the audience have good epistemology, why would they be fooled by cheap tricks? Debate is part of our best epistemological practices. Science is based on empiricism and a bunch of other things, including debate. If someone publishes a paper, and someone else responds with a critical paper arguing the opposite view, that's a debate. And one that's judged by a sophisticated audience. You have an objection to the rule that debaters should only argue one side. One-sidedness is a bad thing for individual rationality, but debate isn't individual rationality... it involves at least two, and often an audience. Each of two debaters will hear the other side's view, and the audience will hear both. Representatives in a trial are required to argue from one side only. This is not considered inimical to truth seeking, because it is the court that is seeking the truth, as a whole. If you create a "shoulder" prosecutor and defender to argue each side of a question, is that not rationality?

I would make the assumption that we are talking about communication situations where all parties want to find out the truth, not to 'win' an argument. Rambling that makes 0 points is worse than making 1 point, but making 2+ "two-sided" points that accurately communicates your uncertainty on the topic is better than selectively giving out only one-sided points from all the points that you have.

1TAG7mo
Finding the truth, and winning arguments, are not disjoint.

Just read Free Will, really disappointed.

  • not many interesting insights.
    • a couple posts on determinism, ok but I already believed it
    • some unrelated stuff: causality, thinking without notion of time... these are actually interesting but not needed
    • moral consequence of 'no free will': I disregard the notion of moral responsibility
      • EY having really strong sense of morality makes everything worse
  • low quality discussions: people keep attacking strawmen

That quote doesn't come from the passage and it is not obvious to me how it relates to the passage. What are you trying to talk about?

I would not say that maximizing happiness is a higher goal than perceiving reality correctly.

I think maximizing happiness is a goal related to instrumental rationality, while perceiving reality correctly IS epistemic rationality. And epistemic rationality is a fundamental requirement for any instrumental goals.

But it doesn't mean perceiving reality correctly is a lower goal than other instrumental goals, right? How do you even rank goals in the first place?

I don't disagree that humans can do actions that only benefit others, and that altruism exists. I think there is a better theory than both pleasure-maximizing and "humans are intrinsically nice to others", and that is Evolution. Also, Evolution can be understood as "gene-spread chance maximizing", so I think humans are still better modelled as internal-counter maximizers.

Donating to charity can be explained by Signaling, it lets others know that you have an excess of money. Pure altruism alone cannot explain donations because we donate more when we’re being watched. (More detailed explanation of charity can be found in The Elephant in the Brain Chapter 12: Charity.)

No for either of my interpretations of your question.
If you mean "does a test for randomness exist", I believe there isn't one, but there are statistical tests that can catch non-random sequences.
If you mean "can a rational agent 100% believe that someone is random", then no, because 100% certainty is impossible for anything.
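One such statistical test for catching non-random sequences is the Wald-Wolfowitz runs test. This is a minimal stdlib-only sketch under stated assumptions: the function name and the use of the normal approximation are my own illustration, and a sequence that passes this one test can still be non-random in ways the test cannot detect.

```python
import math

def runs_test_p_value(bits):
    """Wald-Wolfowitz runs test: two-sided p-value for the hypothesis
    that a binary sequence is random (normal approximation)."""
    n1 = sum(bits)           # count of 1s
    n2 = len(bits) - n1      # count of 0s
    if n1 == 0 or n2 == 0:
        return 0.0           # a constant sequence is maximally non-random
    # observed number of runs (maximal blocks of equal symbols)
    runs = 1 + sum(a != b for a, b in zip(bits, bits[1:]))
    # expected runs and variance under the null hypothesis of randomness
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (mean - 1) * (mean - 2) / (n1 + n2 - 1)
    z = (runs - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# A strictly alternating sequence has far too many runs to be random.
p = runs_test_p_value([0, 1] * 50)
```

This illustrates the asymmetry in the comment: a tiny p-value lets you reject randomness for a specific sequence, but a large p-value never proves randomness, matching the point that no test can certify it with 100% certainty.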

The subjects who were told that "the goose hangs high" means the future looks gloomy believe the standard interpretation is that the future looks gloomy. So no, it is not evidence that most subjects were being rational. In fact, it shows that most people are susceptible to this bias.

If we were given more information, though, such as: 80% of 'looks good' subjects think the standard interpretation is 'looks good', while only 60% of 'looks gloomy' subjects think the standard interpretation is 'looks gloomy', then it is evidence that SOME subjects are rational.

I don't have a clear answer either, but it seems like the nodes in model 1 have a shorter causal link to reality.

For whatever reason, it is apparent that the conscious part of our brain is not fully aware of everything that our brain does.

I believe the conscious-unconscious separation has an advantage in human-human interaction (in the sense of game theory). It is easier for the conscious you to lie when you know less.

Only responding to this part.

Also, for more complicated problems such as following a distribution around in dynamic system: You also have to have a model of what the system is doing - that is also an assumption, not a certainty!

I'm sure you have multiple possible models of the system. If you have accounted for the possibility that your model is incorrect, then it will not be an assumption; it will be something that can be approximated into a distribution of confidence.
