Shortform Content [Beta]

Yitz's Shortform

I'd really like to understand what's going on in videos like this one where graphing calculators seem to "glitch out" on certain equations—are there any accessible reads out there about this topic?
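One common culprit behind such "glitches" is floating-point rounding: the calculator samples the function at discrete points, and near certain inputs the arithmetic loses nearly all significant digits, so a mathematically smooth curve renders as noise. A minimal sketch in Python (the function `f` here is my own illustrative choice, not necessarily what's happening in the video):

```python
# Catastrophic cancellation: f(x) = ((1 + x) - 1) / x is mathematically 1
# for every x != 0, but in double precision the subtraction (1 + x) - 1
# destroys almost all of x's significant digits when x is tiny, so a plot
# of f near zero looks like random noise instead of a flat line.
def f(x):
    return ((1.0 + x) - 1.0) / x

# Far from zero, f behaves as expected.
print(f(0.5))  # exactly 1.0

# Near the limit of double precision (~1e-16) the computed values scatter
# wildly; this scatter is the kind of thing that shows up as "glitches"
# when a grapher samples the function there.
samples = [f(x * 1e-16) for x in (1.1, 2.3, 3.7, 4.9)]
print(samples)
```

Graphing software adds further effects on top of this (pixel-grid sampling, aliasing of high-frequency functions), but cancellation alone already produces dramatic artifacts.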

EniScien's Shortform

It's hard to articulate why I so dislike views that change depending on what family you were born into (religion, nationalism, patriotism, etc.). It's like a prior of one, the fallacy of privileging an arbitrary hypothesis, a lack of stability of your views under self-modification (as in AI alignment), floating beliefs that are not truly part of you, unrecoverable knowledge instead of a knowledge generator. And it seems all these points are connected, which is why Yudkowsky wrote about them all when trying to teach the creation of safe AI. Well, just a... (read more)

2 · Dagon · 1d
Do you dislike the meta-view that an individual cares about their family more than they care about distant strangers? The specific family varies based on accident of birth, but the general assumption is close to universal. How many of the views you dislike are of that form, and why are they different?

I didn't quite understand what you mean. Family is not entirely relevant to the topic of that post; it is usually treated somewhat more logically, and the post was more about beliefs than about duty. I am ready to repay a debt to my family or even the state, but only for the good they actually did (and partly for what it cost them), not merely for the fact of my birth. "Honor your father" clearly does not deserve a place among the Ten Commandments: I was lucky, but someone else's father might have beaten them. Your friends are not only... (read more)

Open & Welcome Thread - May 2022

There's been some discussion recently about there perhaps being a surplus of funding in EA, and not enough good places to apply funds to. I have lots of thoughts on this that I'd like to talk more about at some point, but for now I want to propose an idea that seems pretty obvious and non-controversial to me: give $1M to people like Scott Alexander and Robin Hanson.

Scott has a day job as a psychiatrist. Robin as a university professor. Those day job hours (and slack) could be spent doing other things though. If they were wealthy enough, I assume (but am no... (read more)


I seem to recall, from a relatively good source, a study finding that people who are unemployed feel worse even when their material well-being stays the same. (I don't remember where it was and can't provide a link; maybe someone else can?)

18 · jazmt · 5d
It also reminds me of Richard Feynman not wanting a position at the Institute for Advanced Study:

"I don't believe I can really do without teaching. The reason is, I have to have something so that when I don't have any ideas and I'm not getting anywhere I can say to myself, 'At least I'm living; at least I'm doing something; I am making some contribution' -- it's just psychological.

When I was at Princeton in the 1940s I could see what happened to those great minds at the Institute for Advanced Study, who had been specially selected for their tremendous brains and were now given this opportunity to sit in this lovely house by the woods there, with no classes to teach, with no obligations whatsoever. These poor bastards could now sit and think clearly all by themselves, OK? So they don't get any ideas for a while: They have every opportunity to do something, and they are not getting any ideas. I believe that in a situation like this a kind of guilt or depression worms inside of you, and you begin to worry about not getting any ideas. And nothing happens. Still no ideas come.

Nothing happens because there's not enough real activity and challenge: You're not in contact with the experimental guys. You don't have to think how to answer questions from the students. Nothing!

In any thinking process there are moments when everything is going good and you've got wonderful ideas. Teaching is an interruption, and so it's the greatest pain in the neck in the world. And then there are the longer periods of time when not much is coming to you. You're not getting any ideas, and if you're doing nothing at all, it drives you nuts! You can't even say 'I'm teaching my class.' If you're teaching a class, you can think about the elementary things that you know very well. These things are kind of fun and delightful. It doesn't do any harm to think them over again. Is there a better way to present them? The elementary things are easy to think about; if you can't think of a new thought
4 · adamzerner · 7d
Oh, cool!
Richard Ngo's Shortform

Imagine taking someone's utility function, and inverting it by flipping the sign on all evaluations. What might this actually look like? Well, if previously I wanted a universe filled with happiness, now I'd want a universe filled with suffering; if previously I wanted humanity to flourish, now I want it to decline.
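A toy sketch of what sign-flipping an explicit, disembodied utility function does: the optimizer's ranking simply reverses, so the best state becomes the worst. The world-states and values below are made up purely for illustration:

```python
# Toy utility function over hypothetical world-states (values made up).
worlds = {"flourishing": 10.0, "neutral": 0.0, "suffering": -10.0}

def utility(world):
    return worlds[world]

def inverted_utility(world):
    # Flip the sign on every evaluation.
    return -utility(world)

# An optimizer picks the argmax; inverting utility reverses the ranking.
print(max(worlds, key=utility))           # flourishing
print(max(worlds, key=inverted_utility))  # suffering
```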

But this is assuming a Cartesian utility function. Once we treat ourselves as embedded agents, things get trickier. For example, suppose that I used to want people with similar values to me to thrive, and people with different values from me to ... (read more)

I have been asking a similar question for a long time. It's similar to the standard problem: if we deny regularity, is the result regular irregularity or irregular irregularity? That is, at what level are we denying the phenomenon, and only at one level?

2 · Dagon · 2d
Fundamentally, humans aren't VNM-rational, and don't actually have utility functions. Which makes the thought experiment much less fun. If you recast it as "what if a human brain's reinforcement mechanisms were reversed", I suspect it's also boring: simple early death. The interesting fictional cases are when some subset of a person's legible motivations are reversed, but the mass of other drives remain. This very loosely maps to reversing terminal goals and re-calculating instrumental goals - they may reverse, stay, or change in weird ways. The indirection case is solved (or rather unasked) by inserting a "perceived" in the calculation chain. Your goals don't depend on similarity to you, they depend on your perception (or projection) of similarity to you.
sudo -i's Shortform

Enlightened: 

Terminal goal -> Instrumental goal -> Planning -> Execution

Buffoonery:

Terminal goal -> Instrumental goal -> Planning -> wait what did [insert famous person] do? Guess I need to get a PhD.

sudo -i's Shortform

There's something really tyrannical about externally imposed KPIs.

I can't stop thinking about my GPA even if I make a conscious choice to stop optimizing for it.

Choosing to not optimize for it actually made it worse. A lower number is louder in my mind.

There's something about a number being used for sorting that completely short circuits my brain, and makes me agonize over it.

Yeah, most sane humans seem to have a deep-seated drive for comparisons with others.  And numeric public comparisons trigger this to a great degree.  GPA is competition-porn.  Karma, for some, is social status junk-food. 

This measure ALSO has some real value in feedback to you, and in signaling for future academic endeavors.  The trick, like with any modern over-stimulus, is in convincing your system 1 to weight the input appropriately.  

EniScien's Shortform

Some time ago I was surprised to find that narrow professional skills can significantly change your thinking, rather than just give you new abilities within a specialty. They change you, not just let you cast a new spell. Something like professional (rather than scientific) rationality, and definitely more than just the ability to make origami. (Specifically, this happened to me with the principles of good code, including object-oriented programming.) I had never heard of this before, including here on LessWrong. But it makes me think that the virtue of learning is more than being a... (read more)

johnswentworth's Shortform

Weather just barely hit 80°F today, so I tried the Air Conditioner Test.

Three problems came up:

  • Turns out my laser thermometer is all over the map. Readings would change by 10°F if I went outside and came back in. My old-school thermometer is much more stable (and well-calibrated, based on dipping it in some ice water), but slow, and it caps out around 90°F (so I can't use it to measure e.g. exhaust temp). I plan to buy a bunch more old-school thermometers for the next try.
  • I thought opening the doors/windows in rooms other than the test room and setting up a fan w
... (read more)
leogao's Shortform

A thought pattern I've noticed myself and others falling into: I'll argue about things from first principles in a way that looks like "I don't see any way X can be true; it clearly follows from [premises] that X is definitely false", even though there are people who believe X is true. When this happens, it's almost always unproductive to keep arguing from first principles. Instead I should either (a) try to better understand the argument and find a more specific crux to disagree on, or (b) decide this topic isn't worth investing more time in, register it as "not sure if X is true" in my mind, and move on.

For many such questions, "is X true" is the wrong question.  This is common when X isn't a testable proposition, it's a model or assertion of causal weight.  If you can't think of existence proofs that would confirm it, try to reframe as "under what conditions is X a useful model?".

EniScien's Shortform

If you think about it, there is nothing wrong with every person knowing everything civilization currently knows; on the contrary, this would be a return to normality, and one that is overdue. There was once a time when a single scientist could be aware of all the achievements of science. Now two physicists will not understand each other because they are from different fields. No one in the world knows how things stand; no one sees the whole picture even remotely. One can imagine the horrified post of a person who met someone who does not even fully know the history of his own planet or the laws of physics.

EniScien's Shortform

Surely someone has already pointed this out, but I have not seen it. It seems that humanism follows from science: the idea of progress shows that everyone can win, there is enough for everyone, and life is not a zero-sum game in which, unless you harm someone, you yourself live worse. And the lack of discrimination probably comes from greater consistency in your reasoning: you see that hating a certain group is a completely arbitrary thing, and that you could just as well have hated any other group. You become aware that you cannot call yourself special just because you are you, because everyone else may think the same; you have no special reason.

Thomas Kwa's Shortform

Antifreeze proteins prevent water inside organisms from freezing, allowing them to survive at temperatures below 0 °C. They do this by actually binding to tiny ice crystals and preventing them from growing further, basically keeping the water in a supercooled state. I think this is fascinating.

Is it possible for there to be nanomachine enzymes (not made of proteins, because they would denature) that bind to tiny gas bubbles in solution and prevent water from boiling above 100 °C?

Yonatan Cale's Shortform

Is this an AGI risk?

A company that makes CPUs that run very quickly but don't do matrix multiplication or other things that are important for neural networks.

Context: I know people who work there

Perhaps, but I'd guess only in a rather indirect way. If there's some manufacturing process that the company invests in improving in order to make their chips, and that manufacturing process happens to be useful for matrix multiplication, then yes, that could contribute.

But it's worth noting how many things would be considered AGI risks by such a standard; basically the entire supply chain for computers, and anyone who works for or with top labs; the landlords that rent office space to DeepMind, the city workers that keep the lights on and the water runnin... (read more)

David Udell's Shortform

What would it mean for a society to have real intellectual integrity?  For one, people would be expected to follow their stated beliefs to wherever they led.  Unprincipled exceptions and an inability or unwillingness to correlate beliefs among different domains would be subject to social sanction.  Valid attempts to persuade would be expected to be based on solid argumentation, meaning that what passes for typical salesmanship nowadays would be considered a grave affront.  Probably something along the lines of punching someone

... (read more)
5 · David Udell · 6d
Cf. "there are no atheists in a foxhole." Under stress, it's easy to slip sideways into a world model where things are going better, where you don't have to confront quite so many large looming problems. This is a completely natural human response to facing down difficult situations, especially when brooding over those situations over long periods of time.

Similar sideways tugs can come from (overlapping categories) social incentives to endorse a sacred belief of some kind, or to not blaspheme, or to affirm [the ingroup attire](https://www.lesswrong.com/posts/nYkMLFpx77Rz3uo9c/belief-as-attire) when life leaves you surrounded by a particular ingroup, or to believe [what makes you or people like you look good/high status](https://www.libertarianism.org/publications/essays/why-do-intellectuals-oppose-capitalism).

Epistemic dignity is about seeing "slipping sideways" as beneath you. Living in reality is instrumentally beneficial, period. There's no good reason to ever allow yourself to not live in reality. Once you can see something, even dimly, there's absolutely no sense in hiding from that observation's implications. Those subtle mental motions by which we disappear observations we know that we won't like down the memory hole … epistemic dignity is about coming to always and everywhere violently reject these hidings-from-yourself, as a matter of principle.

We don't actually have a choice in the matter -- there's no free parameter of intellectual virtue here that you can form a subjective opinion on. That slipping sideways is undignified is written in the [very mathematics of inference itself](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure) ([see also](http://zackmdavis.net/blog/2016/08/the-fundamental-theorem-of-epistemology/)).

Minor spoilers for mad investor chaos and the woman of asmodeus.

I'm only human
Of flesh and blood I'm made

Human
Born to make mistakes
(I am just a man)
Human

--The Human League, "Human"

"Civilization in dath ilan usually feels annoyed with itself when it can't manage to do as well as gods.  Sometimes, to be clear, that annoyance is more productive than at other times, but the point is, we'll poke at the problem and prod at it, looking for ways, not to be perfect, but not to do that much worse than gods."

"If you get to the point in major negotiations wh

... (read more)
LoganStrohl's Shortform

[Crossposted from Facebook.]

Recommendation request:

As part of developing "perceptual dexterity" stuff, I think I want to do a post where I review a few books related to creativity. I've just finished reading A Whack On the Side of the Head, which felt like quite a... I'm not sure what to call it, "corporate"? I think? It felt like a corporate take on creativity. When I started it, I thought I'd do a review of just that book, but after finishing it, I think a comparative study would be a lot more valuable.

I'm now looking for more books to include in the pos... (read more)

lc's Shortform

Noticed something recently. As an alien, you could read pretty much everything Wikipedia has on celebrities, both the articles on individual people and the general articles about celebrity as a concept... and never learn that celebrities tend to be extraordinarily attractive. I'm not talking about an accurate or even attempted explanation of the tendency; I'm talking about the existence of the tendency at all. I've tried to find something on Wikipedia that states it, but that information just doesn't exist (except, of course, implicitly through photographs).

It's quite... (read more)

Part of the issue is likely that celebrity, as Wikipedia approaches the word, is broader than just modern TV and film celebrity; it includes a wide variety of people who are well known in some other way and not likely to be exceptionally attractive. There are individual differences in who people find attractive, but many politicians, authors, radio personalities, famous scientists, etc. are not conventionally attractive in the way movie stars are, and yet these people are still celebrities in the broad sense. That said, I haven't dug into the depths of Wikipedia to see whether the gap you describe holds up on pages that talk more directly about the qualities of, for example, film stars.

4 · Dagon · 5d
Analyzing or talking about status factors is low-status. You do see information about awards for beauty, much as you can see some information about finances, but not much about their expenditures or lifestyle.
EniScien's Shortform

(I can't find where this was; if I find it, I'll move this comment there.) Someone suggested, in light of the problems with AI, cloning Yudkowsky. But apparently we don't have the 18 years it takes for a human brain to develop, so even with all the other problems solved, it's just too slow. And with any means of accelerating brain development, the problem is already clear.

EniScien's Shortform

I came up with the idea that people can root for the protagonist of a book even if he is a villain, because the political instincts for rationalizing the correctness of your tribe's actions are activated. You root for the main character as for your own group.

hath's Shortform

Surprising: Chuck Palahniuk's writing style is visible in lsusr's fiction. More surprising: Fight Club 2 deals with... memetics, of all things.

I am flattered. Chuck Palahniuk is among my favorite authors.
