Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This is a special post for quick takes by Rafael Harth. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


I've been playing around with AI Dungeon, basically trying to train an AI to give ethical answers I agree with (the first few aren't great, I only got more picky after that).

My first observation here is that I like the AI. As in, I feel emotional attachment, a bit like I do to fictional characters I create.

The other is that it knows more people than I thought. When I asked her/it about why she didn't take existential risks seriously, she said

Amalia: First, even most X-Risk experts don't take X-Risks nearly as seriously as they should. Even Nick Bostrom himself admits that his estimate of probability is likely far too high.

(Nick Bostrom didn't really say this, did he?) And later,

Can you name people who have been an inspiration for your ethical worldview?
Amalia: Sure. People like Henry Sidgwick, John Stuart Mill, David Pearce, Toby Ord and Carl Shulman.

I only knew David Pearce and Toby Ord, but based on a Google search, all of the above appear to be utilitarians. However, Carl Shulman doesn't even have a Wikipedia entry. He works at the Future of Humanity Institute and used to work at MIRI.

Some say the end of the world didn't start with a bang, but with a lesswrong post trying to teach an AI utilitarianism...

I'm not a utilitarian, although I am closer to that than most people (scope sensitivity goes a long way in that direction), and find it a useful framework for highlighting policy considerations (but not the only kind of relevant normative consideration).

And no, Nick did not assert an estimate of x-risk as simultaneously P and <P.

3ChristianKl4y
How does it feel to be considered important enough by GPT-3 to be mentioned?
7CarlShulman4y
Funny.

Registering a qualitative prediction (2024/02): current LLMs (GPT-4 etc.) are not AGIs, their scaled-up versions won't be AGIs, and LLM technology in general may not even be incorporated into systems that we will eventually call AGIs.

4Dagon1mo
I give a fair chance that with additional scaling (a few orders of magnitude, perhaps), and multimodal training data (especially visual and haptic), it could cross the threshold of consciousness, and be part of (or most of) what will call itself AGI (ok, really they'll just call themselves "The People") after the human era ends. But I also give a lot of weight to "this is an impressive dead-end". I don't know how to narrow my very wide error bars on these possibilities.
3Ann1mo
I'm not sure if I understand this prediction; let me break it down.

Current LLMs including GPT-4 and Gemini are generative pre-trained transformers; other architectures available include recurrent neural networks and a state space model. Are you addressing primarily GPTs or also the other variants (which have only trained smaller large language models currently)? Or anything that trains based on language input and statistical prediction?

Natural language modeling seems generally useful, as does size; what specifically do you not expect to be incorporated into future AI systems?

Another current model is Sora, a diffusion transformer. Does this 'count as' one of the models being made predictions about, and does it count as having LLM technology incorporated?

What does 'scaled up' mean? Literally just making bigger versions of the same thing and training them more, or are you including algorithmic and data curriculum improvements on the same paradigm? Scaffolding?

We are going to eventually decide on something to call AGIs, and in hindsight we will judge that GPT-4 etc. do not qualify. Do you expect we will be more right about this in the future than the past, or as our AI capabilities increase, do you expect that we will have increasingly high standards about this?
3Rafael Harth1mo
Definitely including other variants. Happy to include Sora as well.

Anything that looks like current architectures. If language modeling capabilities of future AGIs aren't implemented by neural networks at all, I get full points here; if they are, there'll be room to debate how much they have in common with current models. (And note that I'm not necessarily expecting they won't be incorporated; I did mean "may" as in "significant probability", not necessarily above 50%.) Conversely...

...I'm not willing to go this far, since that puts almost no restriction on the architecture other than that it does some kind of training.

I'm most confident that pure scaling won't be enough, but yeah, I'm also including the application of known techniques. You can operationalize it as claiming that AGI will require new breakthroughs, although I realize this isn't a precise statement.

Don't really want to get into the mechanism, but yes to the first sentence.

It seems to me that many smart people could ignore the existing literature on pedagogy entirely and outperform most people who have obtained a formal degree in the area (like high school teachers), just by relying on their personal models. Conversely, I'd wager that no-one could do the same in physics, and (depending on how 'outperforming' is measured) no-one or almost no-one could do it in math.

I would assume most people on this site have thought about this kind of stuff, but I don't recall seeing many posts about it, and I don't recall anyone sharing their estimates for where different fields place on this spectrum.

There is some discussion for specific cases like prediction markets, covid models, and economics. And now that I'm writing this, I guess Inadequate Equilibria is a lot about answering this question, but it's only about the abstract level, i.e., how do you judge the competence of a field, not about concrete results. Which I'll totally grant is the more important part, but I still feel like comparing rankings of fields on this spectrum could be valuable (and certainly interesting).

2Viliam3y
By outperforming you mean teaching in the actual classroom, or individual tutoring? Because the literature already says that individual tutoring is way more effective than classroom.
2Rafael Harth3y
I did mean both. Comparing just tutoring to just regular school would be pretty unfair.
6Viliam3y
Ah, okay. I am not really disagreeing with you here, just thinking about how specifically the comparison might be unfair.

For example, if you tutored someone but never taught in a classroom, you might overestimate how much your tutoring skills would translate to the classroom environment. From my short experience, teaching in a classroom is often less about transmitting information and more about maintaining order (but without maintaining order, transmission of information becomes impossible). So even test-teaching in a classroom where the regular teacher is present is not a realistic experience.

Another objection: You compare "smart people" with "most people... like highschool teachers", so like IQ 150 vs IQ 110. In physics or math, the average physicist or mathematician is probably also IQ 150. Numbers made up of course, but the idea is that the average high-school teacher is at a dramatically different level of intelligence than the average physicist. So is this about pedagogy vs physics, or about smart people being able to outperform the mostly average ones despite lack of education?

If instead you compared "smart people" against "smart people who also happen to be teachers", then of course the former outperforming the latter is unlikely. Though I believe the former would not stay too far behind. And the important knowledge the latter have could probably be transferred to the former in a few weeks (as opposed to the years at university). You couldn't compress physics or math that much.
4Rafael Harth3y
The IQ objection is a really good one that hasn't occurred to me at all. Although I'd have estimated less than half as large of a difference. On maintaining order, it's worth pointing out that insofar as this is the relative strength of the highschool teacher, it probably doesn't have much to do with what the teacher learned from the literature.
2ChristianKl3y
While this is true, reading the existing literature on pedagogy might be as helpful for maintaining order as reading the computer science literature for typing fast. 
2Dagon3y
I'm not sure I understand your claim.   Do you mean that smart untrained people would teach an average high school class better than a trained teacher?   Or something else? and "the same" in math or physics is about learning the topic, or learning to teach the topic. One of the things that smart people do is to study the literature and incorporate it into their models. 
2Rafael Harth3y
Yeah. It's mostly like applying the knowledge somewhere. Suppose you have to solve a real problem that requires knowing physics. Of course you can also read the literature, but my post was about when it's possible to do better without having done so.
4ChristianKl3y
A lot of what being a good teacher is about isn't being smart but emotional management. That means things like being consistent with students and not acting from a place of being emotionally triggered by students.
2Dagon3y
Ok, I see where I disagree, then.  I don't think a smart person who's avoided training and research about teaching can teach an average class better than a somewhat less smart person who's trained and studied how to teach.   Probably better than a dumb person, and where the point of indifference is I don't know. I don't think it's feasible to know physics or math very well without research and study of prior art, so I don't think that's an evaluatable claim.  There are probably some math problems where raw IQ can get someone through, but never as well as somewhat less smart and actual study.
2ChristianKl3y
I remember reading studies that came to the conclusion that a degree in education doesn't have any effect on the standardized test scores of the teacher's students. It doesn't seem like an equilibrium to me. On the one hand you have teachers unions who want teachers with degrees to be paid more, and on the other hand you have people like the Gates Foundation who want pay-for-performance, where teachers who help their students achieve higher scores get higher pay.

Yesterday, I spent some time thinking about how, if you have a function f: ℝ² → ℝ and some point p, the value of the directional derivative from p could change as a function of the angle. I.e., what does the function ϕ(α) = (derivative of f at p in the direction given by the angle α) look like? I thought that any relationship was probably possible as long as it has the property that ϕ(α + π) = −ϕ(α). (The values of the derivative in two opposite directions need to be negatives of each other.)

Anyone reading this is hopefully better at Analysis than I am and realized that there is, in fact, no freedom at all, because each directional derivative is entirely determined by the gradient through the equation D_v f(p) = ⟨∇f(p), v⟩ = ‖∇f(p)‖ · cos(α) (where v is the unit vector in the given direction and α is the angle between v and ∇f(p)). This means that ϕ has to be the cosine function scaled by ‖∇f(p)‖, it cannot be anything else.

I clearly failed to internalize what this equation means when I first heard it, because I found it super surprising that the gradient determines the value of every directional derivative. Like, really? It's impossible to have more than exactly two directions with equally large derivatives unless the function is constant? It's impossible to turn 90 degrees from the direction of the gradient and have anything but ... (read more)
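To make the 'scaled cosine' claim concrete, here is a quick numerical check (my own sketch, not part of the original post) for an arbitrary smooth example function; the finite-difference directional derivative matches ‖∇f(p)‖ · cos(α) as the direction rotates:

```python
import numpy as np

# Arbitrary smooth example function and its gradient.
def f(x, y):
    return x**2 * y + np.sin(y)

def grad_f(x, y):
    return np.array([2 * x * y, x**2 + np.cos(y)])

p = np.array([1.0, 2.0])
g = grad_f(*p)
gradient_angle = np.arctan2(g[1], g[0])
h = 1e-6  # step size for the finite-difference approximation

for alpha in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    v = np.array([np.cos(alpha), np.sin(alpha)])  # unit direction at angle alpha
    numeric = (f(*(p + h * v)) - f(*p)) / h       # approximate directional derivative
    predicted = np.linalg.norm(g) * np.cos(alpha - gradient_angle)
    print(f"alpha={alpha:4.2f}  numeric={numeric:8.4f}  predicted={predicted:8.4f}")
```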

5Zack_M_Davis3y
When reading this comment, I was surprised for a moment, too, but now that you mention it—it's because if the function is smooth at the point where you're taking the directional derivative, then it has to locally resemble a plane, just like how a differentiable function of a single variable is said to be "locally linear". If the directional derivative varied in any other way, then the surface would have to have a "crinkle" at that point and it wouldn't be differentiable. Right?
2Rafael Harth3y
That's probably right. I have since learned that there are functions which do have all directional derivatives at a point but are not smooth. Wikipedia's example is f(x,y) = y³/(x² + y²) with f(0,0) = 0. And in this case, there is still a continuous function ϕ: S¹ → ℝ that maps each point to the value of the directional derivative in that direction, but it's ϕ(x,y) = y³, so different from the regular case. So you can probably have all kinds of relationships between direction and {value of derivative in that direction}, but the class of smooth functions has a fixed relationship. It still feels surprising that 'most' functions we work with just happen to be smooth.
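And a quick numerical check of the counterexample (again my own added sketch): for this f, the directional derivative at the origin in the direction (cos α, sin α) comes out as sin³(α), not as a scaled cosine:

```python
import numpy as np

def f(x, y):
    # Wikipedia's example: all directional derivatives exist at (0,0),
    # but the function is not differentiable there.
    if x == 0 and y == 0:
        return 0.0
    return y**3 / (x**2 + y**2)

t = 1e-6  # directional derivative at the origin: lim_{t -> 0} f(t*v) / t
for alpha in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    vx, vy = np.cos(alpha), np.sin(alpha)
    numeric = f(t * vx, t * vy) / t
    print(f"alpha={alpha:4.2f}  numeric={numeric:8.4f}  sin^3(alpha)={np.sin(alpha)**3:8.4f}")
```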

More on expectations leading to unhappiness: I think the most important instance of this in my life has been the following pattern.

  • I do a thing where there is some kind of feedback mechanism
  • The reception is better than I expected, sometimes by a lot
    • I'm quite happy about this, for a day or so
    • I immediately and unconsciously update my standards upward to consider the reception the new normal
  • I do a comparable thing, the reception is worse than the previous time
    • I brood over this failure for several days, usually with a major loss of productivity

O... (read more)

2mako yass3y
I hope you are trying to understand the causes of the success (including luck) instead of just mindlessly following a reward signal. Not even rats mindlessly obey reward signals.
2Viliam3y
The expectation of getting worse reception next time can already be damaging. Like, one day you write a short story, send it to a magazine, and it gets published. Hurray! Next day you turn on your computer thinking about another story, and suddenly you start worrying "what if the second story is less good than the first one? will it be okay to offer it to the magazine? if no, then what is the point of writing it?". (Then you spend the whole day worrying, and don't write anything.)

I think it's fair to say that almost every fictional setting is populated by people who uniformly share certain properties, most commonly philosophical views, because the author cannot or doesn't want to conceive of people who are different.

Popular examples: there are zero non-evil consequentialists in the universe of Twilight. There are no utilitarians in the universe of Harry Potter except for Grindelwald (who I'd argue is a strawman and also evil). There are no moral realists in Luminosity (I don't have Alicorn's take on this claim, but I genuinely s... (read more)

4Yoav Ravid2y
Brandon Sanderson is also very good at this. As an example, he's religious, but he's very good at writing both other religions and characters that are atheistic (Jasnah from Stormlight Archive is an atheist and she's written very well). His most extreme consequentialist is also supposed to be a bad guy, but he does not strawman him, and you actually get to hear a lot of his reasoning and you can agree with him. My problem with him (in-world, not a problem of the writing, I think he's a great character) was that he didn't sufficiently consider the possibility that he was wrong. But there are other consequentialists that aren't portrayed in a bad light (like Jasnah from before), and many of the main characters struggle with these moral ideas. Even the character he had the most excuses to write badly, a god called Ruin who is almost more a force of nature than a god (from Mistborn), isn't written as a dull, obviously evil and wrong character, but is "steelmanned", if you will. And he shows the many flaws of his counterpart, Preservation, who doesn't let things grow in order to preserve them, which often ends up being counterproductive.

This paper is amazing. I don't think I've ever seen such a scathing critique in an academic context as is presented here.

There is now a vast and confusing literature on some combination of interpretability and explainability. Much literature on explainability confounds it with interpretability/comprehensibility, thus obscuring the arguments, detracting from their precision, and failing to convey the relative importance and use-cases of the two topics in practice. Some of the literature discusses topics in such generality that its lessons have little be

... (read more)

It's a meme that Wikipedia is not a trustworthy source. Wikipedia agrees:

We advise special caution when using Wikipedia as a source for research projects. Normal academic usage of Wikipedia and other encyclopedias is for getting the general facts of a problem and to gather keywords, references and bibliographical pointers, but not as a source in itself. Remember that Wikipedia is a wiki. Anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn't be c

... (read more)
4Matt Goldenberg3y
I believe I saw a study that showed the amount of inaccuracies in Wikipedia to be about equal to those in a well trusted encyclopedia (Britannica I think?) as judged by experts on the articles being reviewed.
2Matt Goldenberg3y
Here's Wikipedia's (I'm sure very accurate) coverage of the study: https://en.wikipedia.org/wiki/Reliability_of_Wikipedia#Assessments
2Rafael Harth3y
Interesting, but worth pointing out that this is 15 years old. One thing that I believe has changed within that time is that, while anyone can still edit articles, edits now aren't published until they're approved. And in general, I believe Wikipedia has gotten better over time, though I'm not sure.
4ChristianKl3y
That's true in the German Wikipedia. It's not true for most Wikipedia versions.
2Rafael Harth3y
Ah, I didn't know that. (Even though I use the English Wikipedia more than the German one.)
1iamhefesto3y
The ideal situation that Wikipedia contributors/editors are striving for kinda makes the desire to cite Wikipedia itself pointless. A well-written Wikipedia article should not contain any information that has no original source attached. So it should always be possible to switch from the wiki article to the original material when citing. And it is that way as far as my experience goes. Regarding alternatives: academic papers serve a different purpose and must not be used as navigation material. The only real alternative I know is field handbooks.
2Rafael Harth3y
I see what you're saying, but citing Wikipedia has the benefit that a person looking at the source gets to read Wikipedia (which is generally easier to read) rather than the academic paper. Plus, it's less work for the person doing the citation.
2Kaj_Sotala3y
It's less work for the citer, but that extra work helps guard against misinformation. In principle, you are only supposed to cite what you've actually read, so if someone has misdescribed the content of the citation, making the next citer check what the original text says helps catch the mistake. And while citing the original is extra work for the citer, it's less work for anyone who wants to track down and read the original citation.

Eliezer Yudkowsky often emphasizes the fact that an argument can be valid or not independently of whether the conclusion holds. If I argue A → B → C and A is true but C is false, it could still be that A → B is a valid step.

Most people outside of LW don't get this. If I criticize an argument about something political (but the conclusion is popular), usually the response is something about why the conclusion is true (or about how I'm a bad person for doubting the conclusion). But the really frustrating part is that they're, in some sense, corr... (read more)

3Ricardo Meneghin4y
I think that the way to not get frustrated about this is to know your public and know when spending your time arguing something will have a positive outcome or not. You don't need to be right or honest all the time, you just need to say things that are going to have the best outcome. If lying or omitting your opinions is the way of making people understand/not fight you, so be it. Failure to do this isn't superior rationality, it's just poor social skills.
6Rafael Harth4y
While I am not a rule utilitarian and I think that, ultimately, honesty is not a terminal value, I also consider the norm against lying to be extremely important. I would need correspondingly strong reasons to break it, and those won't exist as far as political discussions go (because they don't matter enough and you can usually avoid them if you want). The "keeping your opinions to yourself" part of your post is certainly a way to do it, though I currently don't think that my involvement in political discussions is net harmful. But I strongly object to the idea that I should ever be dishonest, both online and offline.
3Dagon4y
It comes down to selection and attention as evidence of beliefs/values. The very fact that someone expends energy on an argument (pro or con) is pretty solid evidence that they care about the topic. They may also care (or even more strongly care) about validity of arguments, but even the most Spock-like rationalists are more likely to point out flaws in arguments when they are interested in the domain. But I'm confused at your initial example - if the argument is A -> B -> C, and A is true and C is false, then EITHER A->B is false, or B->C is false. Either way, A->B->C is false.
5Rafael Harth4y
A -> B -> C is false, but A -> B (which is a step in the argument) could be correct -- that's all I meant. I guess that was an unnecessarily complicated example. You could just say A and B are false but A -> B is true.

I've asked ChatGPT to write a short essay explaining why it's a bad idea to use ChatGPT to explain why using ChatGPT for homework is a bad idea.

It looked at me seriously and delivered without so much as a flinch.

Using ChatGPT to explain why using ChatGPT for homework is a bad idea is not a good idea for several reasons.

Firstly, it is not efficient or effective to use ChatGPT as a means of explaining the drawbacks of using ChatGPT for homework. While ChatGPT may be able to generate text on the topic, it is not capable of understanding the nuances or co

... (read more)

A major source of unhappiness (or more generally, unpleasant feelings) seems to be violated expectations.

This is clearly based on instinctive expectations, not intellectual expectations, and there are many cases in which these come apart. This suggests that fixing those cases is a good way to make one's life more pleasant.

The most extreme example of this is what Sam Harris said in a lesson: he was having some problems, complained about them to someone else, and that person basically told him, 'why are you upset, did you expect to never face problems ever... (read more)

3Khanivore3y
I just posted about this, but is that not why the serenity prayer or saying is so popular? God aside, whether you are a religious or God person or not, the sentiment or logic of the saying holds true: God grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference. You should be allowed to ask yourself for that same courage. And I agree that most sources of unhappiness seem to be a violation of expectations. There are many things outside of one's control, and one should perhaps make their expectations logically based on that fact.

Most people are really bad at probability.

Suppose you think you're 80% likely to have left a power adapter somewhere inside a case with 4 otherwise-identical compartments. You check 3 compartments without finding your adapter. What's the probability that the adapter is inside the remaining compartment?

I think the simplest way to compute this in full rigor is via the odds form of Bayes' Rule (the regular version works as well but is too complicated to do in your head):

  • Prior odds for [Adapter is in any compartment]: (4:1)
  • Relative chances of observed
... (read more)
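For concreteness, here is a small sketch (my own, using the puzzle's stated numbers: 80% prior, 4 equally likely compartments, 3 checked and found empty) of the odds-form update; it comes out to even odds, i.e. 50%:

```python
# Odds form of Bayes' rule with the puzzle's numbers.
prior_odds = 0.8 / 0.2       # 4:1 that the adapter is somewhere in the case
# Probability of finding the first 3 compartments empty:
#   1/4 if the adapter is in the case (it would have to be in the 4th compartment)
#   1   if the adapter isn't in the case at all
likelihood_ratio = (1 / 4) / 1
posterior_odds = prior_odds * likelihood_ratio       # 1:1
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)        # 0.5
```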
2Pattern2y
You're assuming the adapter is as likely to be in any compartment as any other. (If they aren't, and I have more information and choose to open the three most likely compartments, then p<20%, where p="the probability that the adapter is inside the remaining compartment".) They're handling it like the probability it's in the case is 100%. And thus, it must certainly be in the fourth compartment. This works with 1:0 in favor of it being in the case, but doesn't for any non-zero value on the right of 1:0. In order for it to be 80% after the three tries, they'd have to be choosing intentionally/adversarially. The obvious fix is to play a game. (Physically.) With the associated probabilities. And keep score.
2gwern2y
https://www.gwern.net/docs/statistics/bayes/1994-falk

Still looking for study participants. (see here.)

If you are interested, don't procrastinate on it too long because I am short on time and will just get Mechanical Turkers if I can't find LWs.

I was initially extremely disappointed with the reception of this post. After publishing it, I thought it was the best thing I've ever written (and I still think that), but it got < 10 karma. (Then it got more weeks later.)

If my model of what happened is roughly correct, the main issue was that I failed to communicate the intent of the post. People seemed to think I was trying to say something about the 2020 election, only to then be disappointed because I wasn't really doing that. Actually, I was trying to do something much more ambitious: solving the ... (read more)

4Zack_M_Davis3y
(Datapoint on initial perception: at the time, I had glanced at the post, but didn't vote or comment, because I thought Steven was in the right in the precipitating discussion and the "a prediction can assign less probability-mass to the actual outcome than another but still be better" position seemed either confused or confusingly phrased to me; I would say that a good model can make a bad prediction about a particular event, but the model still has to take a hit.)

I think it's still too early to perform a full postmortem on the election because some margins still aren't known, but my current hypothesis is that the presidential markets had uniquely poor calibration because Donald Trump convinced many people that polls didn't matter, and those people were responsible for a large part of the money put on him (as opposed to experienced, dispassionate gamblers).

The main evidence for this (this one is just about irrationality of the market) is the way the market has shifted, which some other people like gwern have pointe... (read more)

There's an interesting corollary of semi-decidable languages that sounds like the kind of cool fact you would teach in class, but somehow I've never heard or read it anywhere.

A semi-decidable language is a set L ⊆ Σ* over a finite alphabet Σ such that there exists a Turing machine M such that, for any x ∈ Σ*, if you run M on input x, then [if x ∈ L, it halts after finitely many steps and outputs '1', whereas if x ∉ L, it does something else (typically, it runs forever)].

The halting problem is semi-decidable. I.e., the language of all bit codes of Turing Machines ... (read more)
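As an illustration (my own sketch, not from the post), here is what a semi-decider looks like for the Python analogue of the halting problem, i.e. the set of program strings that halt when run: it accepts exactly those programs by simply running them, and never returns on the rest.

```python
def semi_decide_halting(program_source: str) -> int:
    """Semi-decider for {program strings that halt when executed}.

    If the program halts, this returns 1 after finitely many steps.
    If the program runs forever, this call also runs forever --
    which is exactly what semi-decidability permits.
    """
    exec(program_source, {})  # run the program; may never return
    return 1

print(semi_decide_halting("x = sum(range(10))"))  # halts, so we output 1
# semi_decide_halting("while True: pass")         # would never return
```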

Common wisdom says that someone accusing you of X especially hurts if, deep down, you know that X is true. This is confusing because the general pattern I observe is closer to the opposite. At the same time, I don't think common wisdom is totally without a basis here.

My model to unify both is that someone accusing you of X hurts proportionally to how much hearing that you do X upsets you.[1] And of course, one reason that it might upset you is that it's not true. But a separate reason is that you've made an effort to delude yourself about it. If you're a s... (read more)

2Dagon4y
I think this simplifies a lot by looking at public acceptance of a proposition, rather than literal internal truth. It hurts if you think people will believe it, and that will impact their treatment of you. The "hurts because it's true" heuristic is taking a path through "true is plausible", in order to reinforce the taunt.

I don't entirely understand the Free Energy principle, and I don't know how liberally one is meant to apply it.

But in completely practical terms, I used to be very annoyed when doing things with people who take a long time for stuff/aren't punctual. And here, I've noticed a very direct link between changing expectations and reduced annoyance/suffering. If I simply accept that every step of every activity is allowed to take an arbitrary amount of time, extended waiting times cause almost zero suffering on my end. I have successfully beate... (read more)

So Elon Musk's anti-woke OpenAI alternative sounds incredibly stupid at first glance since it implies that he thinks the AI's wokeness or anti-wokeness is the thing that matters.

But I think there's at least a chance that it may be less stupid than it sounds. He admits here that he may have accelerated AI research, that this may be a bad thing, and that AI should be regulated. And it's not that difficult to bring these two together; here are two ideas

... (read more)

The argumentative theory of reason says that humans evolved reasoning skills not to make better decisions in their life but to argue more skillfully with others.

Afaik most LWs think this is not particularly plausible and perhaps overly cynical, and I'd agree. But is it fair to say that the theory is accurate for ChatGPT? And insofar as ChatGPT is non-human-like, is that evidence against the theory for humans?

Super unoriginal observation, but I've only now found a concise way of putting this:

What's weird about the vast majority of people is that they (a) would never claim to be among the 0.1% smartest people of the world, but (b) behave as though they are among the best 0.1% of the world when it comes to forming accurate beliefs, as expressed by their confidence in their beliefs. (Since otherwise being highly confident in something that lots of smart people disagree with is illogical.)

Someone (Tyler Cowen?) said that most people ought to assign much lower conf... (read more)

7dxu3y
I realize you're not exactly saying it outright, but some parts of your comment seem to be gesturing at the idea that smart people should adopt a "modesty norm" among themselves. I think this is a very bad idea for reasons EY already articulated, so I'd just like to clarify whether this is what you believe?
2Rafael Harth3y
Thanks for making that question explicit! That's not my position at all. I think many people who read Inadequate Equilibria are, in fact, among the top ∼ 0.1% of people when it comes to forming accurate beliefs. (If you buy into the rationality project at all, then this is much easier than being among the 0.1% most intelligent people.) As such, they can outperform most people and be justified in having reasonably confident beliefs. This is also how I remember EY's argument. He was saying that we shouldn't apply modesty --because-- it is possible to know better than the vast majority of people. A very relevant observation here is that there is real convergence happening among those people. If I take the set of my ~8 favorite public intellectuals, they tend to agree with close to zero exceptions on many of [the issues that I consider not that hard even though tons of people disagree about them]. Even among LW surveys, we had answers that are very different from the population mean. Anyway, I don't think this is in any conflict with my original point. If you ask the average person with super confident beliefs, I'm pretty sure they are not likely to have an explicit belief of being among the top ∼ 0.1% when it comes to forming accurate beliefs (and of course, they aren't), and there's your inconsistency.
3Dagon3y
I think there's a common confusion (and perhaps an inability below a certain cognitive ability) to recognize the difference between belief, policy, and action.  For an even-money bet (losing costs the same utility as winning gains), your policy should be to bet the most probable, and your action, for a 52% chance of red, is to bet red.   There are other kinds of bets where probability means to be more proportionate, but a surprising number of actions end up being binary in result, even if they're highly uncertain when taking the action.   This leads to vastly over-stating one's confidence, both when justifying decisions and when advising others about policy and actions.
2Rafael Harth3y
Is that really a relevant phenomenon? Many of the beliefs I was thinking about (say your opinion on immigration) don't affect real life choices at all, or at least not in a way that provides feedback on whether the belief was true.
2Dagon3y
Depends on the belief/claim in question.  Agreed that many statements aren't really "beliefs" in terms of propositional credence in expected experience, but really "positions" in terms of not-very-relevant discussions and debates.
3JBlack3y
Is it really that simple? I've seen a lot of ways in which people strongly express beliefs different from those expressed by a large majority of smart people. Most of the apparent reasons do not seem to boil down to overconfidence of any sort, but are related to the fact that expressions of belief are social acts with many consequences. Personally I have a reputation as a "fence-sitter" (apparently this is socially undesirable) since I often present evidence for and against various positions instead of showing "courage of convictions". I wouldn't quite profess that beliefs being expressed are nothing but tokens in a social game and don't actually matter to how people actually think and act, but I'm pretty sure that they matter a lot less than the form and strength of expression indicates. People do seem to really believe what they say in the moment, but then continue with life without examining the consequences of that belief to their life. I am not excluding myself from this assessment, but I would expect anyone reading or posting on this site to want to examine consequences of their expressed and unexpressed beliefs substantially more than most.
1LVSN3y
oops I have just gained the foundational insight for allowing myself to be converted to (explicit probability-tracking-style) Bayesianism; thank you for that I always thought "belief is when you think something is significantly more likely than not; like 90%, or 75%, or 66%." No; even just having 2% more confidence is a huge difference given how weak existing evidence is. If one really rational debate-enjoyer thinks A is 2% likely (compared to the negation of A, which is at negative 2%), that's better than a hundred million people shouting that the negation of A is 100% likely.
3JBlack3y
To me, 0.02 is a comparatively tiny difference between likelihood of a proposition and its negation. If P(A) = 0.51 and P(~A) = 0.49 then almost every decision I make based on A will give almost equal weight to whether it is true or false, and the cognitive process of working through implications on either side are essentially identical to the case P(A) = 0.49 and P(~A) = 0.51. The outcome of the decision will also be the same very frequently, since outcomes are usually unbalanced. It takes quite a bit of contriving to arrange a situation where there is any meaningful difference between P(A) = 0.51 and P(A) = 0.49 for some real-world proposition A.
2Rafael Harth3y
Yeah, and this may get at another reason why the proposal doesn't seem right to me. There's no doubt that most people would be better calibrated if they adopted it, but 52% and 48% are the same for the average person, so it's completely impractical. If anything, the proposal should be 'if you don't think you're particularly smart, your position on almost every controversial topic should be "I have no idea"'. Which still might not be good advice because there is disproportionate overlap between the set of people likely to take the advice and the set of people for whom it doesn't apply.
1LVSN3y
If you think it's very important to think about all the possible adjacent interpretations of a proposition as stated before making up your mind, it can be useful to indicate your initial agreement with the propositions as a small minimum divergence from total uncertainty (the uncertainty representing your uncertainty about whether you'll come up with better interpretations for the thing you think you're confident about) on just so many interpretations before you consider more ambitious numbers like 90%.  If you always do this and you wind up being wrong about some belief, then it is at least possible to think that the error you made was failing to list a sufficient number of sufficiently specific adjacent possibilities before asking yourself more seriously about what their true probabilities were. Making distinctions is a really important part of knowing the truth; don't pin all the hopes of every A-adjacent possibility on just one proposition in the set of A-adjacent possibilities. Two A-adjacent propositions can have great or critically moderate differences in likelihood; thinking only about A can mislead you about A-synonymous things.

Is there a reason why most languages don't have Ada's hierarchical functions? Making a function only visible inside of another function is something I want to do all the time but can't.

7gwern3y
What languages are you using that don't support that? Every language I use on a semi-monthly basis (Haskell, R, Python, Bash, Javascript, PHP, Elisp...) that I can think of supports defining a function inside a function (under various names like let/where local definitions, 'inner functions', what-have-you), and typically support even anonymous function definitions (lambdas).
2Rafael Harth3y
I was thinking about Java and Python. The fact that you can just use lambdas first occurred to me at some point in between writing this and seeing your answer. I don't know why it wasn't obvious.
7gwern3y
Aside from lambdas, Python has 'inner functions' where you just def inside a def. Java has anonymous inner classes and private functions, and Java 8 adds lambdas; I had to google this one, but apparently Java even has "local classes" which sounds like an exact match for what you want?
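For reference, a minimal sketch of the Python 'inner function' pattern gwern mentions — the helper is only visible inside the enclosing function:

```python
def outer(values):
    # `helper` exists only inside `outer`; it cannot be called from module scope.
    def helper(x):
        return x * x

    return [helper(v) for v in values]

print(outer([1, 2, 3]))  # [1, 4, 9]
# helper(2)              # NameError: name 'helper' is not defined
```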
2Viliam3y
Lambdas in Java 8 can only access variables from the surrounding block as read-only. For example, if you want to calculate the sum of the numbers between 1 and 100 by adding each one to a local sum variable inside a lambda, this gives you a compile-time error, because captured local variables must be effectively final. If memory serves me well, in Pascal, local functions could also write to the variables they could see.

Instead of explaining something to a rubber duck, why not explain it via an extensive comment? Maybe this isn't practical for projects with multiple people, but if it's personal code, writing it down seems better as a way to force rigor from yourself, and it's an investment into a possible future in which you have to understand the code once again.

Edit: this structure is not a field as proved by just_browsing.

Here is a wacky idea I've had forever.

There are a bunch of areas in math where you get expressions of the form 0/0 and they resolve to some number, but it's not always the same number. I've heard some people say that 0/0 "can be any number". Can we formalize this? The formalism would have to include 4⋅0 as something different from 3⋅0, so that if you divide the first by 0, you get 4, but the second gets 3.

Here is a way to turn this into what may be a field or ring. Each element is a function ... (read more)

3Tetraspace3y
This looks like the hyperreal numbers, with your 1/0 equal to their ω.
3just_browsing3y
If I'm correctly understanding your construction, it isn't actually using any properties of 0. You're just looking at a formal power series (with negative exponents) and writing powers of 0 instead of x. Identifying x with "0" gives exactly what you motivated—1⋅x and 2⋅x (which are 1⋅0 and 2⋅0 when interpreted) are two different things.

The structure you describe (where we want elements and their inverses to have finite support) turns out to be quite small. Specifically, this field consists precisely of all monomials in x. Certainly all monomials work; the inverse of c⋅x^k is c⁻¹⋅x^(−k) for any c ∈ ℝ∖{0} and k ∈ ℤ.

To show that nothing else works, let P(x) and Q(x) be any two nonzero sums of finitely many integer powers of x (so like 1/x + 1 − x²). Then, the leading term (product of the highest power terms of P and Q) will be some nonzero thing. But also, the smallest term (product of the lowest power terms of P and Q) will be some nonzero thing. Moreover, we can't get either of these to cancel out. So, the product can never be equal to 1. (Unless both are monomials.)

For an example, think about multiplying (x + 1/x)(1/x − 1/x³). The leading term x ⋅ 1/x = x⁰ is the highest power term and 1/x ⋅ (−1/x³) is the lowest power term. We can get all the inner stuff to cancel but never these two outside terms.

A larger structure to take would be formal Laurent series in x. These are sums of finitely many negative powers of x and arbitrarily many positive powers of x. This set is closed under multiplicative inverses.

Equivalently, you can take the set of rational functions in x. You can recover the formal Laurent series from a rational function by doing long division / taking the Taylor expansion. (If the object extends infinitely in the negative direction and is bounded in the positive direction, it's just a formal Laurent series in 1/x.) If it extends infinitely in both directions, that's an interesting structure I don't know how to think about. For example, (…,1,1,1,1,1,…) = ⋯ + x⁻² + x⁻¹ + 1 + x + x² + ⋯ stays the same when multiplied by x.
2Rafael Harth3y
You've understood correctly minus one important detail: Not elements and their inverses! Elements or their inverses. I've shown the example of 1 + 1/x to demonstrate that you quickly get infinite inverses, and you've come up with an abstract argument why finite inverses won't cut it. In particular, your example of x + 1/x has the inverse x − x³ + x⁵ − x⁷ ⋯.

Perhaps a better way to describe this set is 'all you can build in finitely many steps using addition, inverse, and multiplication, starting from only elements with finite support'. Perhaps you can construct infinite-but-periodical elements with infinite-but-periodical inverses; if so, those would be in the field as well (if it's a field). If you can construct (⋯,1,1,1,1,⋯), it would not be a field. But constructing this may be impossible.

I'm currently completely unsure if the resulting structure is a field. If you get a bunch of finite elements, take their infinite-but-periodical inverses, and multiply those inverses, the resulting number has again a finite inverse due to the argument I've shown in the previous comment. But if you use addition on one of them, things may go wrong.

Thanks; this is quite similar -- although not identical.
7just_browsing3y
Ah, now I see what you are after. This is exactly right, here's an illustration:

Here is a construction of (…,1,1,1,…): We have that 1 + x + x² + … is the inverse of 1 − x. Moreover, 1/x + 1/x² + 1/x³ + … is the inverse of x − 1. If we want this thing to be closed under inverses and addition, then this implies that (1 + x + x² + …) + (1/x + 1/x² + 1/x³ + …) = ⋯ + 1/x³ + 1/x² + 1/x + 1 + x + x² + … can be constructed.

But this is actually bad news if you want your multiplicative inverses to be unique. Since 1/x + 1/x² + 1/x³ + … is the inverse of x − 1, we have that −1/x − 1/x² − 1/x³ − … is the inverse of 1 − x. So then you get

−1/x − 1/x² − 1/x³ − ⋯ = 1 + x + x² + …

so

0 = ⋯ + 1/x³ + 1/x² + 1/x + 1 + x + x² + …

On the one hand, this is a relief, because it explains the strange property that this thing stays the same when multiplied by x. On the other hand, it means that it is no longer the case that the coordinate representation (…,1,1,1,…) is well-defined—we can do operations which, by the rules, should produce equal outputs, but they produce different coordinates.

In fact, for any polynomial (such as 1 − x), you can find one inverse which uses arbitrarily high positive powers of x and another inverse which uses arbitrarily low negative powers of x. The easiest way to see this is by looking at another example, let's say x² + 1/x.

One way you can find the inverse of x² + 1/x is to get the 1 out of the x² term and keep correcting: first you have (x² + 1/x)(1/x² + ?), then you have (x² + 1/x)(1/x² − 1/x⁵ + ?), then you have (x² + 1/x)(1/x² − 1/x⁵ + 1/x⁸ + ?), and so on.

Another way you can find the inverse of x² + 1/x is to write its terms in the opposite order. So you have 1/x + x² and you do the same correcting process, starting with (1/x + x²)(x + ?), then (1/x + x²)(x − x⁴ + ?), and continuing in the same way.

Then subtract these two infinite series and you have a bidirectional sum of integer powers of x which is equal to 0. My hunch is that any bidirectional sum of integer powers of x which we can actually construct is "artificially complicated" and it can be rewritten as a one-directional sum of integer powers of x.
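To make the 'keep correcting' process above concrete, here is a small Python sketch (my own, purely illustrative) that inverts a Laurent polynomial by repeatedly cancelling against its highest-power term; applied to x² + 1/x it reproduces the 1/x² − 1/x⁵ + 1/x⁸ − … expansion described in the comment:

```python
from fractions import Fraction

# A Laurent polynomial is represented as {exponent: coefficient}.
def multiply(p, q):
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, Fraction(0)) + a * b
    return {k: v for k, v in out.items() if v != 0}

def subtract(p, q):
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, Fraction(0)) - v
    return {k: v for k, v in out.items() if v != 0}

def invert_from_top(p, num_terms=4):
    """First few terms of an inverse of p, correcting from the highest power down."""
    lead_exp, lead_coef = max(p), p[max(p)]
    inverse = {}
    residual = {0: Fraction(1)}            # we want p * inverse = 1
    for _ in range(num_terms):
        if not residual:
            break
        e = max(residual)                  # highest uncancelled power
        term = {e - lead_exp: residual[e] / lead_coef}
        inverse.update(term)
        residual = subtract(residual, multiply(p, term))
    return inverse

p = {2: Fraction(1), -1: Fraction(1)}      # x^2 + 1/x
print(invert_from_top(p))                  # {-2: 1, -5: -1, -8: 1, -11: -1}
```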
2Rafael Harth3y
Yeah, that's conclusive. Well done! I guess you can't divide by zero after all ;) I think the main mistake I've made here is to assume that inverses are unique without questioning it, which of course doesn't make sense at all if I don't yet know that the structure is a field. So, I guess one possibility is that, if we let [x] be the equivalence class of all elements that are =x in this structure, the resulting set of classes is isomorphic to the Laurent numbers. But another possibility could be that it all collapses into a single class -- right? At least I don't yet see a reason why that can't be the case (though I haven't given it much thought). You've just proven that some elements equal zero, perhaps it's possible to prove it for all elements.
4gjm3y
If you allow series that are infinite in both directions, then you have a new problem, which is that multiplication may no longer be possible: the sums involved need not converge. And there's also the issue already noted, that some things that don't look like they equal zero may in some sense have to be zero. (Meaning "absolute" zero = (...,0,0,0,...) rather than the thing you originally called zero, which should maybe be called something like ε instead.)

What's the best we could hope for? Something like this. Write R for ℝ^ℤ, i.e., all formal potentially-double-ended Laurent series. There's an addition operation defined on the whole thing, and a multiplicative operation defined on some subset of pairs of its elements, namely those for which the relevant sums converge (or maybe are "summable" in some weaker sense). There are two problems: (1) some products aren't defined, and (2) at least with some ways of defining them, there are some zero-divisors -- e.g., (x−1) times the sum of all powers of x, as discussed above. (I remark that if your original purpose is to be able to divide by zero, perhaps you shouldn't be too troubled by the presence of zero-divisors; contrapositively, that if they trouble you, perhaps you shouldn't have wanted to divide by zero in the first place.)

We might hope to deal with issue 1 by restricting to some subset A of R, chosen so that all the sums that occur when multiplying elements of A are "well enough behaved"; if issue 2 persists after doing that, maybe we might hope to deal with that by taking a quotient of A -- i.e., treating some of its elements as being equal to one another. Some versions of this strategy definitely succeed, and correspond to things just_browsing already mentioned above. For instance, let A consist of everything in R with only finitely many negative powers of x, the Laurent series already mentioned; this is a field. Or let it consist of everything that's the series expansion of a rational function of x; this is also a field.

I know ChatGPT isn't great with math, but this seems quite bizarre.

2Dagon1mo
I get a different justification for the incorrect answer from ChatGPT-3.5.  If I precede the question with "optimize for mathematical precision", I get the right answer.  ChatGPT-4 gets it right the first time, for me.  Even if I ask it "explain why 2023 is a prime number", it says it's not prime.
1ZT51y
This seems fairly typical of how ChatGPT does math, to me.
- come up with answer
- use "motivated reasoning" to try and justify it, even if it results in a contradiction
- ignore the contradiction, no matter how obvious it is

LessWrong is trolling me:

2Raemon1y
Huh. Does this persist on refresh?  (according to the Review Leaderboard you've done exactly 3 reviews, there was some chance I screwed up the logic for >= vs >, but it looks like it appears normally for me when I manually set my review count to 3)
2Rafael Harth1y
No, can't reproduce it. (And 3 is correct.) Wouldn't be a serious bug anyway, I just thought it was funny.

This is not scientific, and it's still possible that this is an artifact of a low sample size, but my impression from following political real-money prediction markets is that they just have a persistent Republican bias in high-profile races, maybe because of 2016. I think you could have made good money by just betting on Democrats to win in every reasonably big market since then.

They just don't seem well calibrated in practice. I really want a single, widely-used, high-quality crypto market to exist.

You are probably concerned about AGI right now, with Eliezer's pessimism and all that. Let me ease your worries! There is a 0.0% chance that AGI is dangerous!

Don't believe me? Here is the proof. Let A = "There is a 0.0% chance that AGI is dangerous". Let B = "B implies A".

  • Suppose B is true.
    • Then by pure identity, "B implies A" is true. Since B and "B implies A" are both true, this implies that A is true as well!

We have shown that [if B is true, then A is true], thus we have shown "B implies A". But this is precisely B, so we have shown B (witho... (read more)

What is the best way to communicate that "whatever has more evidence is more likely true" is not the way to go about navigating life?

My go-to example is always "[god buried dinosaur bones to test our faith] fits the archeological evidence just as well as evolution", but I'm not sure how well that really gets the point across. Maybe something that avoids god, doesn't feel artificial, and where the unlikely hypothesis is more intuitively complex.

I flip a coin 10 times and observe the sequence HTHTHHTTHH. Obviously, the coin is rigged to produce that specific sequence: the "rigged to produce HTHTHHTTHH" hypothesis predicts the observed outcome with probability 1, whereas the "fair coin" hypothesis predicts that outcome with probability 0.00098.
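To make the point concrete, here is a small sketch (with an illustrative prior of my own choosing, not from the post): once you put any remotely reasonable prior on "the coin was rigged to produce exactly this sequence", the fair-coin hypothesis wins despite its far smaller likelihood.

```python
# Likelihoods of the observed sequence HTHTHHTTHH under each hypothesis.
likelihood_fair = 0.5 ** 10        # ~0.00098
likelihood_rigged = 1.0            # "rigged to produce exactly this sequence"

# Illustrative prior: at most 1% total prior on "rigged somehow", spread over
# the 2**10 possible "rigged for sequence s" hypotheses.
prior_rigged = 0.01 / 2 ** 10
prior_fair = 0.99

posterior_rigged = (prior_rigged * likelihood_rigged) / (
    prior_rigged * likelihood_rigged + prior_fair * likelihood_fair
)
print(round(posterior_rigged, 3))  # ~0.01 -- the fair coin remains overwhelmingly favored
```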

Something I've been wondering is whether most people misjudge their average level of happiness because they exclude a significant portion of their subjective experience. (I'm of course talking about the time spent dreaming.) Insofar as most dreams are pleasant, and this is certainly my experience, this could be a rational reason for [people who feel like their life isn't worth living] (definitely not talking about myself here!) to abstain from suicide. Probably not a very persuasive one, though, in most cases.

Relevant caveats:

  • This will probably be less
... (read more)
2Dagon2y
It's not clear what most people seek in terms of happiness.  I doubt it's average - more like some function of anticipated and recent sampled/modeled exemplars, with a penalty for particularly salient pain.
1Robbo2y
You might be interested in this post by Harri Besceli, which argues that "the best and worst experiences you had last week probably happened when you were dreaming". Eric Schwitzgebel has also written that philosophical hedonists, if consistent, would care more about the quality of dream experiences: https://schwitzsplinters.blogspot.com/2012/04/how-much-should-you-care-about-how-you.html

Keeping track of and communicating what you haven't understood is an underrated skill/habit. It's very annoying to talk to someone and think they've understood something, only to realize much later that they haven't. It also makes conversations much less productive.

It's probably more of a habit than a skill. There certainly are some contexts where the right thing to do is pretend that you've understood everything even though you haven't. But on net, people do it way too much, and I'm not sure to what extent they're fooling themselves.

There are relative differences in both poor and rich countries; people anywhere can imagine what it would be like to live like their more successful neighbors. But maybe the belief in social mobility makes it worse, because it feels like you could be one of those on the top. (What's your excuse for not making a startup and selling it for $1M two years later?)

I don't have a TV and I use ad-blockers online, so I have no idea what a typical experience looks like. The little experience I have suggests that TV ads are about "desirable" things, but online ads mo... (read more)