If it’s worth saying, but not worth its own post, you can put it here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.

The Open Thread sequence is here.

I’m not very new, but I’ve been mostly lurking, so I think I’ll introduce myself. (Note: 1k words.)

Basic information: high school student, fairly socially clueless. Probably similar to how most people here were as teenagers (if not now) - smart, nerdy, a bit of a loner, etc. I’m saying this because I think it’s relevant enough in limited ways. (My age is relevant to the life plans I describe. The social cluelessness tells you that you should ignore any odd signals I send between the lines, because I didn’t intend to send them. Is there a conversational code, similar in type to Crocker’s rules, that says “I will send as much important information explicitly as possible, please err on the side of ignoring implicit signals”?)

My first introduction to the rationalist community was through Scott Alexander. A few years ago, I was in an online discussion about gender, it turned to tolerance, and somebody linked I Can Tolerate Anything Except The Outgroup. I got hooked on SSC’s clarity and novel ideas, and eventually that led me to LW. I’ve read HPMOR, the Sequences, and so on. It’s taken a while for it to sink in, but it’s had a large effect on my views. Prior to this, I was a fairly typical young atheist nerd, so the changes aren’t very drastic, but I often find myself using the ideas I’ve gotten through the Sequences and the mindset of analytic truthseeking. The object-level belief changes I’ve had are the obvious ones: many worlds, cryonics, intelligence explosion, effective altruism, etc. I’m a humanist transhumanist reductionist materialist atheist, like almost everyone else here. That’s a cluster-membership description, not tribe-membership. Language doesn’t make it easy to Keep Your Identity Small.

I’ve always wanted to go into a career in STEM. I’ve loved mathematics from a young age, and I’ve done pretty well at it too. I started taking university calculus in middle school, and upper-division university mathematics in high school. (I’m not saying that to brag - that wouldn’t even be effective here anyway - but to show that I’m not just a one-in-ten “good at math” person who thinks very highly of themselves. I think I’m at the one-in-ten-thousand level, but I don’t have high confidence in that estimate.) A few months ago, I decided that the best way to achieve my goals was to work in the field of AI risk research. I think I can make progress in that field, and AI risk is probably the most important field in history, so it’s the best choice. (Humanity needs to solve AI risk soon. My 50% estimate for the Singularity is 2040-2060, and the default is we all die. But you’ve heard this before.) I aim to work at MIRI, or a similar organization if that doesn’t work out. It’s a rather high goal I’ve set for myself, but if I can’t have immodest ambitions here, where can I have them? I’ve been accepted to a (roughly) top 10 university for Math/CS, and I read somewhere (80k Hours?) that’s the rough talent level necessary to do good AI risk research, so I don’t think it’s an impossible goal.

My biggest failure point is my inability to carry out goals. That’s what my inner Murphy says would cause my failure to get into MIRI and do good work. That’s probably the most important thing I’m currently trying to get out of LW - the Hammertime sequence looks promising. If anyone has any good recommendations for people who can’t remember to focus, I’d love to hear them.

In fact, any recommendations would be greatly appreciated. Such as: what would you say to someone who’s going into college? What would you say to someone who wants to work in AI risk?

I’m currently working through the MIRI research guide, starting with Halmos’s Naive Set Theory. If anyone else is doing this and would like a study partner, we should study together.

I’ve read some things about AI risk, both through the popularizations available on LW/SSC and through a couple of papers. I’ve had a few ideas already. My Outside View is sane, and I know there’s a very low chance that I’ve seen something that everyone else missed. Should I write a post on LW about it anyway?

To give an example, here’s the idea on my mind right now: it’s probably not possible to encode all our values explicitly into an AI. The obvious solution is to build into it the ability to learn values. This means it’ll start in a state of “moral ignorance”, and learn what it “should want to do” by looking at people. I’m not saying it’ll copy people’s actions; I’m saying its actions will have to be somehow entangled with what humans are and do. Information theory and so on. The crucial point: before it “opens its eyes”, this AI is not a classical consequentialist, right? Classical consequentialists have a ranking over worlds that doesn’t vary from world to world. This AI’s terminal goals change depending on which possible world it’s in! I want to explore the implications - are there pitfalls? Does this help us solve problems? What is the best way to build this kind of agent? Should we even build it this way at all? I also want to formalize this kind of agent. It seems very similar to UDT in a sense, so perhaps it’s a simple extension of UDT. But there are probably complications, and it’s worth turning the fuzzy ideas into math.
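To make the distinction concrete, here’s a minimal toy sketch in Python. Everything in it - the candidate utility functions, the numbers, the update rule - is invented for illustration, not a real proposal; the point is just that the value-learning agent’s effective ranking over worlds shifts with its observations, while the classical agent’s does not.

```python
# Toy contrast (all names, numbers, and candidate utilities are invented
# for illustration, not a real alignment proposal).

# A "classical" consequentialist ranks worlds with one fixed utility function.
CANDIDATE_UTILITIES = {
    "paperclips": lambda world: world["paperclips"],
    "human_welfare": lambda world: world["happy_humans"],
}

def classical_score(world, utility=CANDIDATE_UTILITIES["paperclips"]):
    """The ranking over worlds is fixed before the agent sees anything."""
    return utility(world)

class ValueLearningAgent:
    """Keeps a posterior over candidate utility functions and updates it
    from observations of humans, so its effective 'terminal goal' depends
    on which possible world it finds itself in."""

    def __init__(self):
        n = len(CANDIDATE_UTILITIES)
        self.posterior = {name: 1.0 / n for name in CANDIDATE_UTILITIES}

    def observe(self, likelihoods):
        # likelihoods: P(observation | utility hypothesis), one per hypothesis
        for name, lik in likelihoods.items():
            self.posterior[name] *= lik
        total = sum(self.posterior.values())
        self.posterior = {k: v / total for k, v in self.posterior.items()}

    def score(self, world):
        # Expected utility under the current posterior over value hypotheses
        return sum(p * CANDIDATE_UTILITIES[name](world)
                   for name, p in self.posterior.items())

world = {"paperclips": 10, "happy_humans": 3}
agent = ValueLearningAgent()
print(agent.score(world))   # before "opening its eyes": 6.5
agent.observe({"paperclips": 0.1, "human_welfare": 0.9})
print(agent.score(world))   # after observing humans, the goal has shifted: 3.7
```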

Some questions I have: does this seem like a possibly fruitful direction to look? Has someone already done something like this? What advice do you have for someone trying to do what I’m doing? Is there a really good AI risk paper that I could look at and try to mimic in terms of “this is how you formalize things, these are the sorts of questions you need to answer, etc.”? Is there anyone who’d be interested in mentoring a young person who’s interested in the field? (Connotation clarification: communicating, giving advice, kind of a back-and-forth maybe? I don’t know what’s okay to ask for and what’s not, because I’m a young clueless person. But I’m really interested and motivated, and Asking For Help is important, so I’ll put this out there and hope people interpret it charitably.)

Hey, welcome! Glad you made this post.

My biggest failure point is my inability to carry out goals. That’s what my inner Murphy says would cause my failure to get into MIRI and do good work. That’s probably the most important thing I’m currently trying to get out of LW - the Hammertime sequence looks promising. If anyone has any good recommendations for people who can’t remember to focus, I’d love to hear them.

If you elaborate on your productivity issues, maybe we can offer specific recommendations. What's the nature of your difficulty focusing?

My Outside View is sane, and I know there’s a very low chance that I’ve seen something that everyone else missed.

I was a computer science undergraduate at a top university. The outside view is that for computer science students taking upper division classes, assisting professors with research is nothing remarkable. Pure math is different, because there is so much already and you need to climb to the top before contributing. But AI safety is a very young field.

The thing you're describing sounds similar to other proposals I've seen. But I'd suggest developing it independently for a while. A common piece of research advice: If you read what others write, you think the same thoughts they're thinking, which decreases your odds of making an original contribution. (Once you run out of steam, you can survey the literature, figure out how your idea is different, and publish the delta.)

I would suggest playing with ideas without worrying a lot about whether they're original. Independently re-inventing something can still be a rewarding experience. See also. You miss all the shots you don't take.

Final note: When I was your age, I suffered from the halo effect when thinking about MIRI. It took me years to realize MIRI has blind spots just like everyone else. I wish I had realized this sooner. They say science advances one funeral at a time. If AI safety is to progress faster than that, we'll need willingness to disregard the opinions of senior people while they're still alive. A healthy disrespect for authority is a good thing to have.

You lucked out in terms of your journey; the shortcut through SSC may have saved you a number of years ;)

My only advice on careers would be to strongly consider doing some ML capabilities work rather than pure AI risk. This will make it much easier to get enough qualifications and experience to get to work on risk later on. The risk field is so much smaller (and is less well received in academia) that setting yourself that goal may be too much of a stretch. You can always try to pick thesis topics which are as close to the intersection of risk and capabilities as possible.

How much actual time do you spend when you read a paper deeply?

I am currently reading the Minimum Entropy Production Principle paper, dedicating some time to it every day. I am about 3 hours deep now, and only half finished. I am not going as far as to follow all the derivations in detail, only taking the time to put the arguments in the context of what I already know. I expect it will take more than one reading to internalize.

Sometimes I worry this is so slow I am spending my time poorly, but I haven't any idea how it goes for more serious people, so I thought I would ask.

It varies a lot between papers (in my experience) and between fields (I imagine), but several hours for a deep reading doesn't seem out of line. To take an anecdote, I was recently re-reading a paper (of comparable length, though in economics) that I'm planning to present as a guest of a mathematics reading group, and I probably spent 4 hours on the re-read, before starting on my slides and presumably re-re-reading a bunch more.

Grazing over several days (and/or multiple separate readings) is also my usual practice for a close read, fwiw.

Definitely depends on the field. For experimental papers in the field I'm already in, it only takes like half an hour, and then following up on the references for things I need to know the context for takes an additional 0.5-2 hours. For theory papers 1-4 hours is more typical.

For me, 'deeply' involves coding. I must have read dozens of papers carefully and then failed to implement the technique the first time, due either to my misunderstanding something or to something in the paper being unclear, wrong, or omitted! Or you realise that the technique works well but would scale horribly, can't be extended, etc. That could take several days - so that treatment is reserved for special selections.

It was suggested that an alien AI, if it wants to be visible, could create an artificial quasar, and a group of such quasars could be visible as a clearly artificial object at distances of at least several billion light years. In this comment I will look at how such a quasar could be created with surprisingly small effort, and estimate the time needed to do it.

Let's assume that the aliens don't have any magical technology to move stars or to convert energy into matter.

In that case, they could create a quasar by directing many normal stars toward the center of the galaxy: the infalling stars will increase the accretion rate onto the central black hole and thus its luminosity, and the aliens could regulate the rate and types of infalling stars to modulate the quasar's luminosity.

But how to move stars? One idea is that if the aliens could change a star's trajectory slightly, it would eventually pass near another star, perform a "gravitational manoeuvre", and fall toward the center of the galaxy. Falling to the center of the galaxy would probably take tens of millions of years (based on the Sun's orbital period of about 250 million years).

But how to make small changes in the trajectory of a star? One idea is to hit the star with large comets. This is not difficult: remote Oort cloud objects (or small wandering planets, since they are not part of the star's already established orbital motion) need only small perturbations to start falling toward the central star, which could be done via nuclear explosions or impacts by smaller asteroids.

Comet impacts will have only a very small effect on the star's trajectory. For example, Pluto's mass is about 100 million times smaller than the Sun's, and an impact with a Pluto-sized object would probably change the Sun's velocity by only about 1 mm/s, which turns into roughly a billion kilometers of displacement over 20 million years. Close flybys between stars are very rare, so it may take tens of millions of years of very complex space billiards to arrange the needed flyby.
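A rough back-of-the-envelope check of these figures (the impact speed of ~600 km/s, roughly solar escape velocity for an object falling from far away, is my own assumption, so treat the outputs as order-of-magnitude only). It lands within a factor of a few of the 1 mm/s and 1 billion km quoted above.

```python
# Back-of-envelope check of the figures above. The assumed impact speed
# (~600 km/s, roughly solar escape velocity) is not from the comment,
# so the outputs are order-of-magnitude only.

M_SUN = 2.0e30         # kg
M_PLUTO = 1.3e22       # kg, roughly 10^8 times lighter than the Sun
IMPACT_SPEED = 6.0e5   # m/s, assumed infall speed at impact
SECONDS_IN_20_MYR = 20e6 * 3.15e7

delta_v = (M_PLUTO / M_SUN) * IMPACT_SPEED   # momentum transferred per impact
drift = delta_v * SECONDS_IN_20_MYR          # displacement accumulated in 20 Myr

print(f"delta-v ~ {delta_v * 1000:.1f} mm/s")        # ~3.9 mm/s
print(f"drift   ~ {drift / 1e12:.1f} billion km")    # ~2.5 billion km
```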

All this suggests that creating an artificial quasar is possible, but it may take up to 100 million years in a typical galaxy; changing a galaxy's luminosity by tiling it with Dyson spheres could probably be done much more quickly, in less than 1 million years. Thus, creating artificial quasars as beacons makes sense only if a difference of 100 million years is not substantial.

But how to make small changes in the trajectory of a star? One idea is to hit the star with large comets. This is not difficult: remote Oort cloud objects (or small wandering planets, since they are not part of the star's already established orbital motion) need only small perturbations to start falling toward the central star, which could be done via nuclear explosions or impacts by smaller asteroids.

I don't think this works; conservation of momentum means that the impact is almost fully counteracted by the gravitational pull that accelerated the comet to such speed (so that, in the end, the delta-v imparted to the star is precisely what you imparted with your nuclear explosions or smaller asteroids).
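A toy numerical sketch of that point, with made-up masses and nudge size: however fast the comet is moving when it hits, the star-plus-comet system only ends up with the momentum the external nudge supplied, because gravity is internal to the system.

```python
# Toy illustration of the momentum-conservation point; the comet mass and
# the nudge are made-up numbers. Gravity is internal to the star+comet
# system, so it cannot change the system's total momentum.

M_STAR = 2.0e30    # kg
M_COMET = 1.0e14   # kg, a largish comet (assumed)
NUDGE = 10.0       # m/s imparted to the comet by explosions (assumed)

# Total momentum before the fall = momentum supplied by the nudge.
p_total = M_COMET * NUDGE

# After impact and merger, that momentum is shared by the whole system,
# no matter how fast the comet was moving when it hit.
v_star_final = p_total / (M_STAR + M_COMET)
print(f"net delta-v of the star: {v_star_final:.1e} m/s")   # ~5e-16 m/s
```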

Maybe a Shkadov thruster is what you want? (It's slow going, though; this article suggests 60ly/200My, accounting for acceleration.)

To solve the momentum problem, free wandering comets or planets could be used for impacts with the star. There are probably many of them, and they have significant initial velocities relative to the star that were not acquired from the star's own gravity.

I've been writing a simulism essay that strives to resolve a paradox of subjectivity-measure concentration by rolling over a few inconvenient priors about physics towards a halfway plausible conception of naturally occuring gods. I think it's kind of good, but I've been planning on posting it on April 1st because of the very obvious bias that has been leading my hand towards humanity's favourite deus ex machina ("The reason the universe is weird is that a very great big person did it" (to which I answer, "But a great big person, once such beings exist, totally would do it!"))

It will only be funny if it's posted in a context where people might take it halfway seriously, but I'm not sure it's appropriate to post it to lesswrong. If people upvote it, it will still be here on April 2nd, and that might be kind of embarrassing. I'm not sure where to put it.

Summary: It's weird that anthropic measure seems to be concentrated in humans and absent from rock or water or hydrogen (We each have only one data point in favour of that seeming, though). It's plausible that a treaty-agency between mutually alien species would optimise the abundance of life. If universes turn out to be permeable under superintelligence (very conceivable IMO), and if untapped energy turns out to be more common than pre-existing entropy then the treaty-agency could spread through the universe and make more of it alive than not, and if this has occurred, it explains our measure concentration weirdness, and possibly the doomsday weirdness ("if the future will contain more people than the past, it's weird that we're in the past") as well.

Its many predictions also include: Either entropy has no subjectivity (I'd have no explanation for this, although it seems slightly intuitive), or perpetual computers (life that produces no heat) within a universe that already contains some seeds of entropy are somehow realisable under superintelligence (o_o;;;;,, Would bet we can refute that already. It might be fun to see if we can figure out a method by which a superintelligent set of cells in a Conway's Game of Life universe could contain a section of randomly initialised cells whose state it does not know. My current guess is we'd be able to prove that there is no method that works in 90% of possible cases)

I don't think there's anything wrong with posting such a thing. As long as you are clear up front about your state of confidence and that you are exploring an argument instead of trying to persuade, I expect few people would object. There are also many who enjoy unconventional arguments or counter-intuitive conclusions on their own merits.

Worst case scenario, it remains a personal blog post. I say post it.

Yep, seems true to me. I am all in favor of weird exploratory stuff on LW.

perpetual computers (life that produces no heat)

Relevant thermodynamical point: only reversible computations can add nothing to entropy, even in theory. So these computers couldn't do input-output. (This interacts with one of my weird rough-belief-systems. If a process interacts with its surroundings, you must include these interactions in your description of it, so it stops being simple. I think that the simplicity of a world does... something... although I can't figure out what.)

Linkpost.

Apropos of the negative utilitarianism question posted recently, has anyone read any pessimism? I picked up The Conspiracy Against the Human Race: A Contrivance of Horror relatively recently. It is a survey, written by Thomas Ligotti, who is a horror and weird fiction writer.

It is gloriously grim. I recommend against it if you are in a sensitive place, however.

I'm not in a sensitive place but I'm not sure whether I want to read it or not. Can you give a rough sense of what you got out of it?

All I have gotten out of it so far is a morbid entertainment value. It does look like it will spend more time talking about subjects adjacent to the Repugnant Conclusion and the voluntary extinction of Absolute Negative Utilitarianism, but it isn't rigorous (so far) in the sense that we usually prefer here.

The author is a good writer, so it does a pretty good job of holding interest despite the subject matter. I would say it is unproductive aside from the entertainment, and if you find it persuasive even more so.

(I'm beginning to think that if "natural sciences" tell us _how_ things happen but not _why_ they do... then in order to know how I should ask why.)