moridinamael

Comments

moridinamael's Shortform

I'm writing an effortpost on this general topic but wanted to gauge reactions to the following thoughts, so I can tweak my approach.

I was first introduced to rationality about ten years ago and have been a reasonably dedicated practitioner of this discipline that whole time. The first few years saw me making a lot of bad choices. I was in the Valley of Bad Rationality; I didn't have experience with these powerful tools, and I made a number of mistakes.

My own mistakes had a lot to do with overconfidence in my ability to model and navigate complex situations. My ability to model and understand myself was particularly lacking.

In the more recent part of this ten-year period -- say, the last five years -- I've actually gotten a lot better. And I got better, in my opinion, because I kept on thinking about the world in a fundamentally rationalist way. I kept making predictions, trying to understand what happened when my predictions went wrong, and updating both my world-model and my meta-model of how I should be thinking about predictions and models.

Centrally, I acquired an intuitive, gut-level sense of how to think about situations where I could only see a certain angle, where I was either definitely or probably missing information, or which involved human psychology. Another major improvement came from "actually multiplying probabilities semi-explicitly instead of handwaving": e.g., it's pretty unlikely that two things with independent 30% odds of being true are both true. You could say that, through trial and error, I came to understand why no wise person attempts a plan where more than one thing has to happen "as planned".
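
A minimal sketch of the arithmetic I mean (the 30% figure is from the example above; the five-step plan and its 70%-per-step odds are made-up numbers for illustration):

```python
# Independent probabilities multiply, and the product shrinks fast.
p = 0.30
print(p * p)       # two independent 30%-likely things both true: 0.09, about 9%

# A made-up five-step plan where each step independently goes "as planned"
# 70% of the time succeeds only about 17% of the time.
print(0.70 ** 5)   # ~0.168
```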

I think if you had asked me at the five-year mark whether this rationality thing was all it was cracked up to be, I very well might have said that it had led me to make a lot of bad decisions and execute bad plans. But after ten years, and especially the last year or three, it has started working for me in a way that it didn't before.

Building brain-inspired AGI is infinitely easier than understanding the brain

FWIW I have come to similar conclusions along similar lines. I've said that I think human intelligence minus rat intelligence is probably easier to understand and implement than rat intelligence alone. Rat intelligence requires a long list of neural structures fine-tuned by natural selection, over tens of millions of years, to enable the rat to perform very specific survival behaviors right out of the womb. How many individual fine-tuned behaviors? Hundreds? Thousands? Hard to say. Human intelligence, by contrast, cannot possibly be this fine-tuned, because the same machinery lets us learn and predict almost arbitrarily different* domains.

I also think that recent results in machine learning have essentially proven the conjecture that moar compute regularly and reliably leads to moar performance, all things being equal. The human neocortical algorithm probably wouldn't work very well if it were applied in a brain 100x smaller because, by its very nature, it requires massive amounts of parallel compute to work. In other words, the neocortex needs trillions of synapses to do what it does for much the same reason that GPT-3 can do things that GPT-2 can't. Size matters, at least for this particular class of architectures.
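
A toy sketch of the "size matters" claim, assuming the kind of power-law scaling curve reported for language models; the exponent and constant below are invented for illustration, not measured values:

```python
# Toy power-law scaling: loss falls smoothly as a power of model size.
# alpha and scale are illustrative assumptions, not published numbers.
def toy_loss(n_params, alpha=0.08, scale=1e13):
    return (scale / n_params) ** alpha

for n in (1.5e9, 175e9):  # roughly GPT-2-sized vs. GPT-3-sized parameter counts
    print(f"{n:.1e} params -> toy loss {toy_loss(n):.3f}")
```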

*I think this is actually wrong - I don't think we can learn arbitrary domains, not even close. Humans are not general. Yann LeCun has repeatedly said this and I'm inclined to trust him. But I think that the human intelligence architecture might be general. It's just that natural selection stopped seeing a net fitness advantage at the current brain size.

What are your greatest one-shot life improvements?

I grew up in warm climates and tend to suffer a lot in cold weather. I moved to a colder climate a few years ago and discovered scarves. Wearing scarves eliminated 90% of this suffering. Scarves are not exactly a bold and novel invention, but people from warm climates may underestimate their power.

LessWrong Coronavirus Agenda

Scaling up testing seems to be critical. With easy, fast, and ubiquitous testing, huge numbers of individuals could be tested as a matter of routine, and infected people could begin self-isolating before showing symptoms. With truly adequate testing policies, "containment" could potentially be achieved without resorting to complete economic lockdown, which has its own devastating long-term consequences.

Cheap, fast, free testing, possibly with an incentive to get tested regularly even if you don't feel sick, could move us beyond flattening the curve and into actual containment.

Even a test with relatively poor accuracy helps, in terms of flattening the curve, provided it is widely distributed.
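
A back-of-the-envelope sketch of why an imperfect but widely deployed test still bends the curve; every number below is an assumption for illustration, not an estimate:

```python
# If some fraction of infectious people get caught by testing and isolate
# early, the effective reproduction number drops roughly in proportion.
R0 = 2.5                # assumed baseline reproduction number
coverage = 0.7          # assumed fraction of infectious people tested in time
sensitivity = 0.7       # deliberately mediocre test: catches 70% of true infections
isolation_effect = 0.8  # assumed fraction of onward transmission prevented by isolating

caught = coverage * sensitivity
R_eff = R0 * (1 - caught * isolation_effect)
print(round(R_eff, 2))  # ~1.52: a large dent from a mediocre test, if it's everywhere
```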

So I might phrase this as a set of questions:

  • Should I get tested, if testing is available?
  • How do we best institute wide-scale testing?
  • How do we most quickly enact wide-scale testing?

How Do You Convince Your Parents To Prep? To Quarantine?

As my brother pointed out to me, arguments are not won in real time. Give them information in packets and calmly deal with objections as they come up, then disengage and let them process.

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

Perhaps there's some obvious way in which I'm misunderstanding, but if 10% of people contract the virus over a shortish time frame, then won't essentially everyone contract it eventually? Why would it reach 10% penetration and then stop? Isn't this like asking what happens if 10% of people contract influenza? Maybe in a given year your odds of getting the flu are X%, but your odds of getting it at least once in 10 years are roughly 10*X%. Am I missing something that implies the virus will be corralled and gotten under control at a certain point?
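
For what it's worth, the "roughly 10*X%" figure is the small-X approximation of the exact cumulative probability; a quick check with illustrative yearly probabilities:

```python
# Chance of catching it at least once in 10 years, versus the linear rule of thumb.
for x in (0.02, 0.10):                 # illustrative yearly infection probabilities
    exact = 1 - (1 - x) ** 10          # exact cumulative probability
    print(f"X = {x:.0%}: exact {exact:.0%}, 10*X approximation {10 * x:.0%}")
# X = 2%: exact 18%, 10*X approximation 20%
# X = 10%: exact 65%, 10*X approximation 100%
```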

The two-layer model of human values, and problems with synthesizing preferences

Fantastic post; I'm still processing.

One bite-sized thought that occurs to me is that maybe this coupling of the Player and the Character is one of the many things accomplished by dreaming. The mind-system confabulates bizarre and complex scenarios, drawn in some sense from the distribution of possible but not highly probable sensory experiences. The Player provides an emotional reaction to these scenarios - you're naked in school, you feel horrifying levels of embarrassment in the dream, and the Character learns to avoid situations like this one without ever having to directly experience it.

I think that dreaming does this sort of thing in a general way, by simulating scenarios and using those simulations to propagate learning through the hierarchy, but in particular it would seem that viewing the mind in terms of Player/Character gives you a unique closed-loop situation that really bootstraps the ability of the Character to intuitively understand the Player's wishes.

How Doomed are Large Organizations?

I would love to see an answer to or discussion of this question. The premise of the OP -- that large companies would be better off split into much, much smaller companies -- is a shocking and bold claim. If conglomeration and growth of large firms were a purely Molochian and net-negative proposition, then the world would look different than it does.

Reality-Revealing and Reality-Masking Puzzles

I'm reminded of the post Purchase Fuzzies and Utilons Separately.

The actual human motivation and decision system operates by something like "expected valence" where "valence" is determined by some complex and largely unconscious calculation. When you start asking questions about "meaning" it's very easy to decouple your felt motivations (actually experienced and internally meaningful System-1-valid expected valence) from what you think your motivations ought to be (something like "utility maximization", where "utility" is an abstracted, logical, System-2-valid rationalization). This is almost guaranteed to make you miserable, unless you're lucky enough that your System-1 valence calculation happens to match your System-2 logical deduction of the correct utilitarian course.

Possible courses of action include:

1. Brute forcing it, just doing what System-2 calculates is correct. This will involve a lot of suffering, since your System-1 will be screaming bloody murder the whole time, and I think most people will simply fail to achieve this. They will break.

2. Retraining your System-1 to find different things intrinsically meaningful. This can also be painful because System-1 generally doesn't enjoy being trained. Doing it slowly, and leveraging your social sphere to help warp reality for you, can help.

3. Giving up, basically. Determining that you'd rather just do things that don't make you miserable, even if you're being a bad utilitarian. This will cause ongoing low-level dissonance as you're aware that System-2 has evaluated your actions as being suboptimal or even evil, but at least you can get out of bed in the morning and hold down a job.

There are probably other options. I think I basically tried option 1, collapsed into option 3, and then eventually found my people and stabilized into the slow glide of option 2.

The fact that utilitarianism is not only impossible for humans to execute, but can actually be a cause of great internal suffering even to know about, is probably not talked about enough.

ialdabaoth is banned

For the record, I view the fact that I commented in the first place, and that I now feel compelled to defend my comment, as Exhibit A of the thing that I'm whining about. We chimps feel compelled to get in on the action when the fabric of the tribe is threatened. Making the banning of a badguy the subject of a discussion, rather than an act of unremarked moderator fiat, basically sucks everybody nearby into a vortex of social wagon-circling, signaling, and reading a bunch of links to figure out which chimps are on the good guy team and which chimps are on the bad guy team. It's a significant cognitive burden to impose on people, a bit like an @everyone in a Discord channel, in that it draws attention and energy vastly out of proportion to the value it provides.

If we were talking about something socio-emotionally neutral like changing the color scheme or something, cool, great, ask the community. I have no opinion on the color scheme, and I'm allowed to have no opinion on the color scheme. But if you ask me what my opinion is on Prominent Community Abuser, I can't beg off. That's not an allowed social move. Better not to ask, or if you're going to ask, be aware of what you're asking.

Sure, you can pull the "but we're supposed to be Rationalists(tm)" card, as you do in your last paragraph, but the Rationalist community has pretty consistently failed to show any evidence of actually being superior, or even very good, at negotiating social blow-ups.
