If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


I've become a bit discouraged by the lack of positive reception for my forecasting newsletter on LessWrong, where I've been publishing it since April 2020. For example, I thought that Forecasting Newsletter: Looking back at 2021 was excellent. It was very favorably reviewed by Scott Alexander here. I poured a bunch of myself into that newsletter. It got 18 karma.

I haven't bothered crossposting it to LW this month, but it continues on Substack and on the EA Forum.

Alas, that also makes me sad. I wonder whether this means something is going wrong in the basic attention-allocation system on the site. I've enjoyed every newsletter that I've read, but I only noticed like 2-3 (and upvoted each of them accordingly).

Introspecting on my experience, I actually think the biggest difference for me would have been if you had given any of them a more evocative title that had captured the most important thing in that month's newsletter. I definitely feel a strong sense of boredom if I imagine clicking on "Forecasting Newsletter March 2021" instead of "Nuclear war forecasts & new forecasting platforms (Forecasting Newsletter Mar '21)".

That's sad. 

Looks like you're getting decent engagement on Substack. Curious if you know where those people come from? I'm not sure to what extent there's a "forecasting scene", or who is part of it.

Speaking as a non-forecast-specialized person: I have a belief that it's good to have a forecasting scene that is developing tech/skills/infrastructure. But so far, whether rightly or wrongly, I've mostly thought of that as something that's good/virtuous for other people to do. A newsletter feels like something that makes sense to read if I want to keep up with the broad strokes of a field.

It does seem to me that forecasting/prediction should be more integrated into rationalist culture, but I'm not sure how to go about it. 

Curious if you know where those people come from?

Sure, see here: https://imgur.com/a/pMR7Qw4

I'm not sure to what extent there's a "forecasting scene", or who is part of it. 

There is a forecasting scene, made up of hobbyist forecasters, more hardcore prediction market players, and a bunch of researchers. The best prediction market people tend to have fairly sharp models of the world, particularly around elections. They also have a pretty high willingness to bet.

I've been thinking for a while that maybe forecasting should have its own LessWrong instance, as a place to discuss and post essays (the way EA Forum and AI Alignment have their own instances); curious to get your thoughts on whether this would improve the forecasting scene by having a shared place to meet, or detract by making it harder for newcomers to hear about forecasting?

I really, really wish crossposting and crosslinking were easier between different ForumMagnum instances...

This isn't a particularly informed or confident take, but forecasting strikes me as, I'm not sure what the right words are. Important? Useful? Cool? Impressive? But it doesn't seem to get nearly as much attention as it should. And so I too am sad to learn of the lack of engagement and positive reception.

I just subscribed to the Substack because it's something I'd like to keep my eye on.

Well, I liked the looking-back post, though I have only just now noticed they are in a running sequence. Query: would you prefer to have engagement here, or at Substack?

Also, once again, a note to myself to give what feels from the inside like gushingly, disgustingly effusive praise but is in fact just giving any positive feedback at all.

I guess there's not a lot of clickthrough? Wait, the link is to the EA Forum. Okay, still, that's weird.

Huh. I found your forecasting newsletter via LessWrong, and then subscribed to the Substack's RSS feed? Which probably made me less likely to open it/see it on LessWrong? Dunno. Maybe your LessWrong traffic moved to Substack? (sample size = 1)

I visited New York City for the first time in my life last week. It's odd coming to the city after a lifetime of consuming media that references various locations within it. I almost feel like I know it even though I've never been. This is the place where it all happens, where everyone important lives. It's THE reference point for everything. The heights of tall objects are compared to the Statue of Liberty. The blast radii of nuclear bombs are compared to the size of Manhattan. Local news is reported as if it is in the national interest for people around the country to know.

The people were different from the ones I'm accustomed to. The drivers honk more and drive more aggressively. The subway passengers wear thousand-dollar Balenciaga sneakers. They are taller, better looking, and better dressed than the people I'm used to.

And everywhere there is self-reference. In the cities I frequent, paraphernalia bearing the name of the city is confined to a handful of tourist shops in the downtown area (if it exists at all). In New York City, it is absolutely everywhere. Everywhere the implicit experience for sale is the same: I was there. I was part of it. I matter.

I felt this emotion everywhere I went. Manhattan truly feels like the center of the country. I found myself looking at the cost of renting an apartment in Chinatown or in Brooklyn, wondering if I could afford it, wondering who I might become friends with if I moved there, and what experiences I might have that I would otherwise miss.

I also felt periodic disgust with the excess, the self-importance, and the highly visible obsession with status that so many people seem to exhibit. I looked up at the empty $200 million apartments on Billionaires' Row and thought about how badly large cities need a land value tax. I looked around at all the tourists in Times Square, smiling for the camera in front of large billboards, then frowning as they examined the photo to see whether it was good enough to post on Instagram. I wondered how many children we could cure of malaria if these people shifted 10% of their spending towards helping others.

This is the place where rich people go to compete in zero-sum status games. It breeds arrogant, out-of-touch elites. This is the place where talented young people go to pay half their income in rent and raise a small furry child simulator in place of the one they have forgotten to want. This is, as Isegoria so aptly put it, an IQ grinder that disproportionately attracts smart and well-educated people who reproduce at below the replacement rate.

The huge disparities between rich and poor are omnipresent. I watched several dozen people (myself included) walk past a homeless diabetic with legs that were literally rotting away. I briefly wondered what was wrong with society that we allowed this to happen before walking away to board a bus out of the city.

I'm sure all these things have been said about New York City before, and I'm sure they will be said again. I'll probably return for a longer visit sometime in the future.

There's been some discussion recently about there perhaps being a surplus of funding in EA, and not enough good places to apply funds to. I have lots of thoughts on this that I'd like to talk more about at some point, but for now I want to propose an idea that seems pretty obvious and non-controversial to me: give $1M to people like Scott Alexander and Robin Hanson.

Scott has a day job as a psychiatrist. Robin as a university professor. Those day job hours (and slack) could be spent doing other things though. If they were wealthy enough, I assume (but am not sure) they would quit their jobs and have more hours to spend doing cool things. And they both have incredible track records of doing cool things.

Scott and Robin are just the two people that come to my mind first and that I see as the most non-controversial. But I think there are many more examples. Zvi and Kaj Sotala also come to mind. Iirc they both have day jobs.

A related idea: even for people who are currently being paid to work on, e.g., AI safety, I assume there is still room to spend money to improve their productivity. E.g., hiring a maid might free up X hours a week, and having the extra hours + slack would improve their productivity by enough to be worth it.

Scott has been offered money to quit his job. I don't know the full reason why he didn't take it. I think his observation was that his productivity on his blog doesn't go up at all if he doesn't have a job, I think he really values independence from funders, and his job provides him with grounding that feels important for staying sane.

I see, thanks for clarifying.

I think his observation was that his productivity on his blog doesn't go up at all if he doesn't have a job

(I'm interpreting what you're saying as "doesn't go up moderately" not "doesn't go up at all".)

That sounds implausible to me. Not having a job would mean more hours are available. Would all of those hours be spent on leisure? Is his "blogging bucket" already filled by the amount of blogging he is currently doing? What about his "doing other productive things" bucket? What about the benefits of having more slack?

As a related point, even if Scott's productivity wouldn't benefit from extra hours, I expect that most other people's productivity would benefit, and ultimately I intend for my point to extend past Scott and Robin and into lots of other cool people (including yourself, actually!).

I think he really values independence from funders

What I am proposing is just "here's a briefcase of cash, go do what you want". I.e. no earmarks. So it should provide that independence. This of course requires a lot of trust in the recipient, but I think that for Scott, as well as many other people, such trust would be justifiable.

and his job provides him with important grounding that feels important for him to stay sane.

That sounds very reasonable to me.

It also reminds me of Richard Feynman not wanting a position at the Institute for Advanced Study.

"I don't believe I can really do without teaching. The reason is, I have to have something so that when I don't have any ideas and I'm not getting anywhere I can say to myself, "At least I'm living; at least I'm doing something; I am making some contribution" -- it's just psychological.

When I was at Princeton in the 1940s I could see what happened to those great minds at the Institute for Advanced Study, who had been specially selected for their tremendous brains and were now given this opportunity to sit in this lovely house by the woods there, with no classes to teach, with no obligations whatsoever. These poor bastards could now sit and think clearly all by themselves, OK? So they don't get any ideas for a while: They have every opportunity to do something, and they are not getting any ideas. I believe that in a situation like this a kind of guilt or depression worms inside of you, and you begin to worry about not getting any ideas. And nothing happens. Still no ideas come.

Nothing happens because there's not enough real activity and challenge: You're not in contact with the experimental guys. You don't have to think how to answer questions from the students. Nothing!

In any thinking process there are moments when everything is going good and you've got wonderful ideas. Teaching is an interruption, and so it's the greatest pain in the neck in the world. And then there are the longer periods of time when not much is coming to you. You're not getting any ideas, and if you're doing nothing at all, it drives you nuts! You can't even say "I'm teaching my class."

If you're teaching a class, you can think about the elementary things that you know very well. These things are kind of fun and delightful. It doesn't do any harm to think them over again. Is there a better way to present them? The elementary things are easy to think about; if you can't think of a new thought, no harm done; what you thought about it before is good enough for the class. If you do think of something new, you're rather pleased that you have a new way of looking at it.

The questions of the students are often the source of new research. They often ask profound questions that I've thought about at times and then given up on, so to speak, for a while. It wouldn't do me any harm to think about them again and see if I can go any further now. The students may not be able to see the thing I want to answer, or the subtleties I want to think about, but they remind me of a problem by asking questions in the neighborhood of that problem. It's not so easy to remind yourself of these things.

So I find that teaching and the students keep life going, and I would never accept any position in which somebody has invented a happy situation for me where I don't have to teach. Never."

— Richard Feynman, Surely You're Joking, Mr. Feynman!

I suspect (and this is my interpretation of what he's said) that Alexander's productivity would actually go down if he quit his day job. A lot of his blogging is inspired by his psychiatric work, so he would lose that source of inspiration. Also, a lot of his best works (e.g. Meditations on Moloch) were written while he was a medical school resident, working 60 hours a week outside of blogging, so it's not clear to me that the hours of working are really taking away from his best writing. They are certainly taking away from posting as frequently - he's been posting much more frequently now on Substack - but pressure to write daily posts might take away from work on longer high quality posts.

A lot of his blogging is inspired by his psychiatric work, so he would lose that source of inspiration.

I don't get the impression that too much is inspired by his psychiatric work. This is partly based on my being a reader of his posts on and off over the years, and also on a brief skim of recent posts (biographies of presidents, AI safety, pregnancy interventions). But even if that source of inspiration was lost, it'd presumably be replaced by other sources of inspiration, and his writing is broad enough that at best that'd be a large net gain and at worst a small net loss.

Also, a lot of his best works (e.g. Meditations on Moloch) were written while he was a medical school resident, working 60 hours a week outside of blogging, so it's not clear to me that the hours of working are really taking away from his best writing.

That's a really interesting point. Maybe I'm wrong then. Maybe I don't understand the subtleties of what makes for good writing. But even so, writing is only one thing. I expect that with more time people like Scott would come up with other cool things to pursue in addition to writing.

Zvi and Kaj Sotala also come to mind. Iirc they both have day jobs.

Appreciate the thought!

I used to have funding from EA sources to work on my own projects for a number of years. I basically gave it up because working on those projects didn't feel motivating enough and it seemed to me like I'd probably be happier doing something else and keeping any EA stuff as a hobby on the side. (This feels like it's been the right choice.)

I see. Thanks for the response. I'm starting to suspect that this is a common sentiment, wanting some sort of normalcy and doing other stuff on the side.

I'm curious, was that funding you received no strings attached? If not, I wonder if moving to no strings attached would change how you feel.

I'm curious, was that funding you received no strings attached?

Pretty much, yes.

Though it's worth noting that this didn't entirely eliminate a feeling of needing to do something useful with my time. Even when I had guaranteed funding to do basically whatever I wanted for a while (say a year), there was still the question of whether the same source would be willing to fund me for another year if I didn't do enough useful things during that time. And if they decided that they wouldn't, and I then needed to find another funder or a real job, what would that source think about me having spent a year without accomplishing anything concrete that I could point at?

So in practice even no-strings-attached funding still doesn't let you completely stop worrying about getting results, unless the source credibly commits to providing that funding for a significant fraction of your remaining lifetime. I find that one of the advantages of having a more "normal" day job rather than weird EA funding is that it guarantees that I'm spending at least part of my time on something that helps ensure I can also find another "normal" job later, if need be. Rather than needing to stress out that if I don't get anything useful done today, then there's nothing really forcing me to do anything useful tomorrow either, nor anything forcing me to do anything useful the day after that, and I really hope that a whole year won't pass with me doing nothing useful until finally the EAs get tired of funding me and I'll have burned whatever employability I had in the "normal" job market too.

Gotcha. That was a really helpful response, and it makes a lot of sense.

unless the source credibly commits to providing that funding for a significant fraction of your remaining lifetime

What if this happened for you? Suppose you received the funding in a lump sum with no strings attached. Would you prefer that over having the day job? How do you expect it would affect the impact you would have on the world?

Glad it was helpful :)

What if this happened for you? Suppose you received the funding in a lump sum with no strings attached.

Hmm. Certainly it'd make me feel a bit safer, but I'm not sure if it would change what I actually did, on a short-term basis at least. My EA productivity is limited more by motivational and emotional issues than time, and if I did manage to debug those issues enough that time became the limiting factor, then I might feel fine asking for short-term funding anyway, since I would no longer feel doubtful about my productivity.

I could definitely imagine it being helpful anyway, though I'm sufficiently uncertain about this that I think I'd feel bad about accepting any such offer. :) 

I see. Thanks again for the explanation!

Hearing this re-opens a line of thought that's been swimming in the back of my mind for quite some time: that helping EA people with mental health is a pretty high-yield pursuit. Lots of people (including myself) deal with stuff, I presume. And if you can help such people, you can improve productivity by something like, I don't know, 10-200%?

But how do you help them? I don't think I have any great ideas here.

  • I assume most people have access to a therapist if they wanted one.
  • Maybe motivation to see a therapist is the problem, not access. But there are plenty of people talking about and normalizing therapy nowadays, and I'm not sure how fruitful it'd be to continue that process.
  • Maybe difficulty finding the right therapist is the crux? Especially for rationalist-types who have "weird" issues. Maybe. Maybe expanding and/or branching off of something like the Secular Therapy Project would be worthwhile. Or the SlateStarCodex Psychiat-list.
  • Maybe we just need better models of how the mind works and how to repair psychiatric pain. But the world of clinical psychology research already has this covered. Right? Maybe, maybe not. It does seem difficult to break into and have a real impact. However, you Kaj seem to me like one of the few people who might have a comparative advantage in pursuing something like that. I'm thinking of your Multiagent Models of Mind sequence. I was really impressed by it. I'm not sure how much of it was actually novel — maybe parts were, maybe not really, I don't really know — but along the lines of Non-Expert Explanation, I think there's a good amount of value in framing things differently. And in popularizing worthwhile things! That sequence helped me arrive at a pretty good understanding of my own psychological issues, I think, whereas before that I was pretty lost. The understanding hasn't translated to actually feeling any better, but that's n=1 and beside the point. Speaking of which, what is my point? I think it's just to consider all of this food for thought. I can't say I'm confident in the broader points I'm making.

FWIW, my other day job (I have two part-time ones) is related.

Oh, cool!

>access if they wanted one

Based on a nearly universal sense (reading/personal experience/conversations with doctors) that it's hard to find an in-network psychiatrist/psychologist who's willing to see a new patient, my prior on this is very low.

>motivation

That would help too. But there's probably a large set of people who could be helped if the "access" barrier was reduced/removed on its own.

>difficulty?

Agreed for both rationalists and non-rationalists.

>knowing how to fix brains?

Which condition(s) specifically come to mind when you think of something that "clinical psychology research already has...covered"? I was unable to think of one off the top of my head.

Based on a nearly universal sense (reading/personal experience/conversations with doctors) that it's hard to find an in-network psychiatrist/psychologist who's willing to see a new patient, my prior on this is very low.

Huh, that is very much not the experience I've had personally, or that people I've spoken with have had. I've found that it can be difficult to find a therapist you like, but as for finding a therapist who just accepts your insurance and is willing to see you/new patients, I've never spoken with someone who has had trouble with that. Context: I'm in America.

Which condition(s) specifically come to mind when you think of something that "clinical psychology research already has...covered"? I was unable to think of one off the top of my head.

Well, I'm not sure how well they really do have it covered (hence my saying "Maybe, maybe not."). I was just thinking about the fact that there is in fact a large group of people working on it, and they seem to have made at least some non-trivial amount of progress. E.g. cognitive behavioral therapy.

and I'll have burned whatever employability I had in the "normal" job market too.

This is probably moot, but I'd like to argue against this sentiment and share part of my own story.

I myself am a programmer and have a lot of anxiety about getting fired and being unable to find another job. And so I've spent a good amount of time trying to debug this. Part of that debugging is asking True Self what he actually thinks. And this is his ~answer.

It is totally implausible that my fears end up actually being realized. Think of it like this:

  • Plan A is to keep my current job. I worry about getting fired, but it is pretty unlikely to actually happen. Look at the base rate. It's low. And I have control over my performance. I can scale it up if I start to worry that I'm getting into risky territory.
  • Plan B is, if I get fired, to apply to, let's call them "reach jobs" (like a reach school when you apply to colleges) and get one of them. Seems somewhat plausible.
  • Plan C is to mass apply to normal jobs that are in my ballpark. It might take a few months, but it seems highly likely I'd eventually get one of them.
  • Plan D1 is to ask friends and family for referrals.
  • Plan D2 is to lower my standards and apply to jobs that I'm overqualified for (and perhaps adjust the resume I use to apply, to mitigate the failure mode of "he would never actually accept this position").
  • Plan D3 is to push even further into my network, asking former coworkers, former classmates, and friends of friends for referrals.
  • Plan D4 is to just have my girlfriend support me.
  • Plan E is to do something adjacent, like work as a coding bootcamp instructor or maybe even in QA.
  • Plan F is to do something like work at a library or a coffee shop. I worked at a library (actually two) in college and it was great. It was low stress and there was plenty of time to screw around on my laptop doing my own thing.

Even if I get "knocked off track" and end up at D2 or whatever, I can always work my way back up. It'd be a setback, but probably nothing too crazy.

And that's actually something I ended up going through. After doing a coding bootcamp and working as a programmer for about a year and a half, I took a year off to self-study computer science, and then spent about three more years working on a failed startup. It was a little tough finding a job after that, but I managed. From there I worked my way up. Today I actually just accepted an offer at one of those "reach jobs".

Anyway, what I'm trying to say is that taking time off to do EA stuff might be a setback in terms of your ability to get back into the "normal" job market, but I expect that it'd only knock you down a rung or so. I don't think it'd completely knock you off the ladder. Maybe your ladder doesn't look exactly like mine with A through F — I'm pretty fortunate to have the life circumstances I have — but I expect that it's a lot longer than it feels. And even if you do get knocked down a rung, I expect that for you too it'd just be a temporary setback, nothing that'd knock you off course too significantly.

That's not where I expected this was going to go. (Wasn't there some sort of microgrants project somewhere a while back? I don't know if that was EA, but...)

It doesn't look to me like it would go to people like Scott or Robin either. I am arguing that it should because they are productive people and it would enable them to spend more time being productive via removing the need for a day job, especially if there is a surplus of money available.

I seem to have heard from a relatively good source about a study finding that people who are unemployed feel worse even though they maintain the same level of material well-being. (I don't remember where it was and I can't provide a link; maybe someone else can?)

In the spirit of doing low status things with high potential, I am working on a site to allow commissioning of fringe erotica and am looking to hire a second web developer.

The idea is to build a place where people with niche interests can post bounties for specific stories. In my time moonlighting as an erotic author, I've noticed a lack of good sites to do freelance erotic writing work. I think the reason for this is that most people think porn is icky, so despite there being a huge market for extremely niche content, the platforms currently available are pretty abysmal. This is our opportunity.

We're currently in beta and can pay a junior-level wage, with senior-level equity. If you're a web developer who wants to join a fully remote startup, please reach out. 

As with my other startups, I began this project with the goal of generating wealth to put towards alignment research.


I notice that I am confused by not seeing discourse about using AI alignment solutions for human alignment. It seems like the world as we know it is badly threatened by humans behaving in ways I'd describe as poorly aligned, for an understanding of "alignment" formed mostly from context in AI discussions in this community.

I get that AI is different from people -- we assume it's much "smarter", for one thing. Yet every "AI" we've built so far has amplified traits of humanity that we consider flaws, as well as those we consider virtues. Do we expect that this would magically stop being the case if it passed a certain threshold?

And doesn't alignment, in the most general terms, get harder when it's applied to "smarter" entities? If that's the case, then it seems like the "less smart" entities of human leaders would be a perfect place to test strategies we think will generalize to "smarter" entities. Conversely, if we can't apply alignment findings to humans because alignment gets "easier" / more tractable when applied to "smarter" entities, doesn't that suggest a degenerate case of minimum alignment difficulty for a maximally "smart" AI?

The AI problem is easier in some ways (and significantly harder in others) because we're not taking an existing system and trying to align it. We want to design the system (and/or systems that produce that system, aka optimization) to be aligned in the first place. This can be done through formal work to provide guarantees, lots of code, and lots of testing.

However, doing that for some arbitrary agent or even just a human isn't really a focus of most alignment research. A human has the issue that they're already misaligned (in a sense), and there are various technological/ethical/social issues with either retraining them or performing the modifications to get them aligned. If the ideas that people had for alignment were about 'converting' a misaligned intelligence to an aligned one, then humans could maybe be a test case, but that isn't really the focus. We also are only 'slowly' advancing our ability to understand the body and how the brain works. While we have some of the same issues with neural networks, it is a lot cheaper, less unethical, we can rerun it (for non-dangerous networks), etc.

Though, there has been talk of things like incentives, moral mazes, inadequate equilibria and more which are somewhat related to the alignment/misalignment of humans and where they can do better.


Thank you for clarifying! This highlights an assumption about AI so fundamental that I wasn't previously fully aware that I had it. As you say, there's a big difference between what to do if we discover AI, vs if we create it. While I think that we as a species are likely to create something that meets our definition of strong AI sooner or later, I consider it vanishingly unlikely that any specific individual or group who goes out trying to create it will actually succeed. So for most of us, especially myself, I figure that on an individual level it'll be much more like discovering an AI that somebody else created (possibly by accident) than actually creating the thing.

It's intuitively obvious why alignment work on creating AI doesn't apply to extant systems. But if the best that the people who care most about it can do is work on created AI without yet applying any breakthroughs to the prospect of a discovered AI (where we can't count on knowing how it works, ethically create and then destroy a bunch of instances of it, etc)... I think I am beginning to see where we get the meme of how one begins to think hard about these topics and shortly afterward spends a while being extremely frightened.

Yet every "AI" we've built so far has amplified traits of humanity that we consider flaws, as well as those we consider virtues. Do we expect that this would magically stop being the case if it passed a certain threshold?

Ah, what? (I'm reacting to the "every" qualifier here.)

 

I'd say it comes down to founder effects.

I wouldn't necessarily call it 'using AI alignment solutions for human alignment' though. 

Perhaps a better starting point would be: how to discern alignment. And, are there predictable betrayals? Can that situation be improved?

human leaders

That wasn't the first place I thought of. 

How do you tell if a source is trustworthy? (Of information, or a physical good.)

How do you tell if it's a good idea for someone to join your team?

Overall, human alignment sounds broad, and interesting.

 

There's also some stuff about open source, questions that seem relevant. Less specifically, I read on Twitter that:

  • Elon Musk wants to release the Twitter algorithm, and for it to develop encrypted chat or something. (I read the tweet.)
  • I think the person in charge of Mastodon (which is already open source) said something about working on encrypted chat as well. (I read the blog post.)

Somehow I feel it's more likely that Mastodon will end up achieving both conditions than Twitter will.

Two announcements, but one doesn't inspire much confidence. (How often does a project that isn't open source go open source? Not partially, but fully. I see this as a somewhat general issue (the probability of open-sourcing), not just one of the specific context, nor just 'the laws of probability say p(A and B) is at most p(A) and at most p(B) (if A and B are different), and here p(A) and p(A') are reasonably similar'.)
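To make the probability aside explicit (this is just the standard conjunction bound, nothing specific to the Twitter/Mastodon example), a minimal statement in LaTeX:

```latex
% For any two events A and B, the joint event is never more probable than either part.
\[
  P(A \land B) \;\le\; \min\bigl(P(A),\, P(B)\bigr)
\]
```

So requiring Twitter to both go fully open source and ship encrypted chat can only lower the probability relative to either condition alone; the further claim above is that the base rate of open-sourcing is itself low.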

A cancelled connecting flight has suddenly left me in San Francisco for the next 24 hours (i.e. until late on Sunday 15th). I could just stay in my hotel room and prepare for the next stage of my journey, but I hear there are people interested in AI safety somewhere nearby. If anyone has suggestions or even wants to meet up, message me... 

What information is required to determine a system's consciousness (or lack thereof)? Consider voting in this poll with what you think.

(You can view the results without voting.)

There's something I never understood in HPMoR.

I'm not sure how to articulate it well... but why was Voldemort... evil? He seems like he enjoys intellectual stimulation and puzzles. Why go around killing people and causing chaos? Why not just play chess? Or find the few people who are reasonably smart and interact with them? Or use his powers to make (certain) people smarter so that he has smart people to interact with? (He kinda did this with Harry, but a) he planned on killing Harry and b) why not take it further?)

I suspect that the answer is "shut up and suspend your disbelief". If so, that is fair. I'm just not sure if it actually is the answer or not.

Voldemort does mention in the later chapters that he tried to play the role of the hero before and it didn't do anything for him. He didn't get the warm fuzzies from it. That's fine. It explains why he didn't want to pursue altruism, but it doesn't explain why he chooses chaos instead of chess.

He also mentions the following at the end of chapter 108:

"Because I still would've been surrounded by idiots, and I wouldn't have been able to kill them," Professor Quirrell said mildly. "Killing idiots is my great joy in life, and I'll thank you not to speak ill of it until you've tried it for yourself."

And the context of this statement is Harry questioning him on whether being evil is really what'll make him the most happy. I just don't really buy that killing idiots is really his true answer. The impression I get is that intellectual stimulation is more his thing.

I think Dumbledore is right and the answer to this question is "Why not?". Dealing with chaos and complexity is intellectually stimulating as well, so there's no particular reason for Voldemort to not be evil if that's more fun than the alternative.

It's not that Voldemort is optimizing for evilness, it's just that he doesn't see a point to restricting his evil impulses as he pursues whatever goals he has, e.g. taking power to defend the magical world against the muggles.

It's not that Voldemort is optimizing for evilness, it's just that he doesn't see a point to restricting his evil impulses as he pursues whatever goals he has, e.g. taking power to defend the magical world against the muggles.

That would make sense if it were true, but I feel like he is going out of his way to be evil, even if it doesn't involve intellectual stimulation or fun. You don't get that impression?

I'd also expect him to at least try spending some time exploring intellectually stimulating things like chess to see if that is something he wants to spend more time pursuing, but I don't recall him doing that.


Tag suggestion: "Air Quality". There's a cluster of related things here; you could imagine one or more tags: carbon dioxide, air particulate pollution, and aerosolized respiratory pathogens. The last one may seem a bit of an odd duck, but the techniques for dealing with it are often the same as the others.

It probably took me less time to create the tag than it took you to write that comment ;-)


It's much harder to change a tag ontology once created.

True. I just think there's so little activity here on the tags portal that marginally less caution is better than marginally more caution.

Also strong-upvoted your tags contribution :-)

Idea: Various people on LW say in their user profile something like "I enjoy talking to random people. Book a time on my calendar!" It'd be cool to compile a list of such opportunities. This can also be done for other online communities like Hacker News.

I'll note that a lot of people either don't say "random" or don't mean it. I really like talking to people who are interested enough in topics I write about to click through, like the post enough to hunt around my website to find the contact info, and reach out from that. I think my success rate for arbitrary people, or even arbitrary people who use LW, would be much lower, and I wouldn't want to participate in that.

Oh, that's a really good point. Unintended consequences. I suspect a good number of people feel similarly. Still, I'm moderately confident that enough people would be interested for this to still be a good idea.

I certainly don't object to anyone making an opt-in directory, only to including people in a directory when they haven't okayed it (even if the information is public elsewhere).

That makes sense. Opt-in does seem like the way to go.

Has humanity ever been proactive instead of reactive about global risks?

At the risk of being unhelpful: it's being proactive about global risks now.

There was also H. G. Wells, who wrote about GCRs from nuclear weapons. See more in this comment.

I just had a go at using GPT-J to solve the alignment problem. This was its output: 

https://pastebin.com/iTHAZsdV 

Not actually a solution, but interesting to see its attempts. 

Should I write a post about this? Have there been any more successful attempts to use language models to solve the problem? 
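For anyone curious what such an attempt can look like mechanically, here is a minimal sketch of prompting GPT-J through the Hugging Face transformers library. This is only one possible setup, not necessarily how the experiment above was run; the prompt, model identifier, and sampling parameters are illustrative assumptions:

```python
# Minimal sketch (illustrative, not the exact setup used above): prompt GPT-J-6B
# locally with Hugging Face transformers and print a sampled continuation.
# Assumes `transformers` and `torch` are installed, plus enough memory to load
# the ~6B-parameter checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # public GPT-J checkpoint on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Q: How could we make sure that a highly capable AI system "
    "pursues goals its designers actually intend?\nA:"
)
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; temperature/top_p values are arbitrary choices.
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Whether the output is any more useful than the pastebin above is an empirical question; the point is only that the experiment is easy to reproduce.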

It's now January 2023, seven months later. Here is what one can now do in one minute with ChatGPT: 

https://pastebin.com/0EUBbvGd 

Obviously one could e.g. fill a small wiki with ChatGPT's thoughts on the alignment problem, by continuing to ask for questions, and then answers to the questions... 

It's now February 2023. I discussed some aspects of a specific alignment proposal with ChatGPT: 

https://pastebin.com/B1ntAxm7 

The main benefit of the discussion was that in order to correct ChatGPT's vague (but characteristically confident) descriptions of the proposal, I had to refine my own understanding of it!

February 2023 is ending. I just got access to Bing AI (described as an "answer engine" rather than a search engine).

I missed out on the "Sydney" era, but I found that, once I was past the part where it's just summarizing material from the web, it can still be a stimulating and challenging conversation partner, even when discussing specific ideas for AI alignment: 

https://pastebin.com/DPMenYHU 

How is ""Depression is just contentment with a bad attitude" false exactly?

I'm not trying to claim it's true, or to sport-defend it flat-earth style. I truly believe it's different.

But back in Covid, and even the early aftermath, I remember so often thinking "There's no reason to go out because we're all so happy at home that going out likely won't be any better", which I eventually noticed is awfully similar to "There's no reason to go out because I'm so unhappy out that going out likely won't be any better." Seemed like a possible window into others' lived experience.

Not really a rationality question but this is the highest concentration I know of people who have known people with depression, and also people who can answer potentially emotionally charged questions rationally.

The initial statement seemed wrong.

awfully similar

I've seen stuff about this, but I don't remember where. I remember stuff like (summarizing the idea):

How awfully convenient it seems to be, for the optimists and the pessimists.

The optimists say, the world is alright, or even, wonderful, awesome, and amazing! (We don't have to do anything.)

The pessimists say, the world is awful, terrible, unspeakably bad - but we can't do anything about it.

 

Either the work is done, or it can never begin.

I have more thoughts on depression.

I look forward to hearing them

Are you saying depression is like pessimism, in the above quote? If so, are you saying they are not in fact different, or are you making a claim I'm missing about the difference?

Also, sorry for the premature send, if that is externally perceptible.

There are other differences between the two, but I would say that depression is stronger than pessimism.

(Content warning: depressed/depressing sentiments.)

'Everything seems to go wrong'

'Why do anything?'

'Nothing is worth doing.'

'Life isn't worth living.'

Only the first of these sounds like pessimism.

 

I don't have a lot to say about the difference. There was a time when I thought things could be better if they were given a critical look. The flip side of that is that things can be better if improved from an 'optimistic perspective'.

If that benefit is actually realized, maybe the pessimist (often) avoids food poisoning by not eating at fast food restaurants (often). The optimist may gain from realizing/seizing opportunity, or trying things. 

('Maybe squaring the circle is impossible. But I want to know why.'

'Then just read _'s proof that it's impossible.'

'I don't see any reason it can't be done, it seems like I just have to find a way. So I'm going to give it a go.'

(According to some proofs, squaring the circle is impossible 'using only a [particular set of tools]'.))

There's also something else there: 'What's the point in doing that? I want to.' I think some stuff like doing less has an association with depression.

 

How is ""Depression is just contentment with a bad attitude" false exactly?

I don't know where this is from. (It sounds like it's responding to something.)