GeneSmith

Comments

New US Senate Bill on Catastrophic Risk Mitigation [Linkpost]

How well have these types of inter-agency committees tended to work in the past? Is this a good way to actually get things done or does it just add more bureaucracy?

Toni Kurz and the Insanity of Climbing Mountains

Ahh, I should have guessed. I thought perhaps there might be some way to tag it.

Toni Kurz and the Insanity of Climbing Mountains

Sources: "The Beckoning Silence" (documentary), "1936 Eiger climbing disaster", and "The White Spider" (book)

I may have gotten some of the details slightly wrong here, as some of the sources are slightly inconsistent, and the main original source for everyone's accounts ("The White Spider") was written several years after the events of the 1936 attempt.

If you're interested in more stories like these, I strongly recommend "14 Peaks". It's a documentary about perhaps the greatest climber of our generation, a Nepalese man named Nims Purja, and his team of climbers, who tried to climb all 14 peaks above 8,000 meters in seven months. Before this attempt, the record for climbing all 14 peaks was seven years.

The best 'free solo' (rock climbing) video

Just finished watching that video this morning. My hands were sweating through most of it haha.

But I can't even imagine a future me that seriously entertains even an 'easy' free solo like in this video! (I would be – at least – as scared, and thus, hopefully, as focused, as Magnus clearly appears to be in this video.)

Magnus did say in the video that he would not have been able to do the route without Alex there guiding and encouraging him. I think that says something about just how psychologically challenging free soloing is.

Yonatan Cale's Shortform

Metaculus has a question about whether the first AGI will be based on deep learning. The crowd estimate right now is at 85%.

I interpret that to mean that improvements to neural networks (particularly on the hardware side) are most likely to drive progress towards AGI.

Yonatan Cale's Shortform

Perhaps, but I'd guess only in a rather indirect way. If there's some manufacturing process that the company invests in improving in order to make their chips, and that manufacturing process happens to be useful for matrix multiplication, then yes, that could contribute.

But it's worth noting how many things would be considered AGI risks by such a standard: basically the entire supply chain for computers, plus anyone who works for or with top labs — the landlords that rent office space to DeepMind, the city workers that keep the lights on and the water running for such orgs (and their suppliers), and so on.

I wouldn't encourage your friends to worry too much about it unless they are contributing very directly to something with a clear path to improving AI.

How are people here dealing with AI doomerism? Thoughts about the future of AI and specifically the date of creation of the first recursively self-improving AGI have invaded almost every part of my life. Should I stay in my current career if it is unlikely to have an impact on AGI? Should I donate all of my money to AI-safety-related research efforts? Should I take up a career trying to convince top scientists at DeepMind to stop publishing their research? Should I have kids if that would mean a major distraction from work on such problems?

More than anything though, I've found the news of progress in the AI field to be a major source of stress. The recent drops in Metaculus estimates of how far we are from AGI have been particularly concerning. And very few people outside of this tiny almost cult-like community of AI safety people even seem to understand the unbelievable level of danger we are in right now. It often feels like there are no adults anywhere; there is only this tiny little island of sanity amidst a sea of insanity.

I understand how people working on AI safety deal with the problem; they at least can actively work on the problem. But how about the rest of you? If you don't work directly on AI, how are you dealing with these shrinking timelines and feelings of existential pointlessness about everything you're doing? How are you dealing with any anger you may feel towards people at large AI orgs who are probably well-intentioned but nonetheless seem to be actively working to increase the probability of the world being destroyed? How are you dealing with thoughts that there may be less than a decade left until the world ends?

Open & Welcome Thread - May 2022

I visited New York City for the first time in my life last week. It's odd coming to the city after a lifetime of consuming media that references various locations within it. I almost feel like I know it even though I've never been. This is the place where it all happens, where everyone important lives. It's THE reference point for everything. The heights of tall objects are compared to the Statue of Liberty. The blast radii of nuclear bombs are compared to the size of Manhattan. Local news is reported as if it is in the national interest for people around the country to know.

The people were different from the ones I'm accustomed to. The drivers honk more and drive aggressively. The subway passengers wear thousand-dollar Balenciaga sneakers. They are taller, better looking, and better dressed than the people I'm used to.

And everywhere there is self-reference. In the cities I frequent, paraphernalia bearing the name of the city is confined to a handful of tourist shops in the downtown area (if it exists at all). In New York City, it is absolutely everywhere. Everywhere the implicit experience for sale is the same: I was there. I was part of it. I matter.

I felt this emotion everywhere I went. Manhattan truly feels like the center of the country. I found myself looking at the cost of renting an apartment in Chinatown or in Brooklyn, wondering if I could afford it, wondering who I might become friends with if I moved there, and what experiences I might have that I would otherwise miss.

I also felt periodic disgust with the excess, the self-importance, and the highly visible obsession with status that so many people seem to exhibit. I looked up at the empty $200 million apartments on Billionaires' Row and thought about how badly large cities need a land value tax. I looked around at all the tourists in Times Square, smiling for the camera in front of large billboards, then frowning as they examined the photo to see whether it was good enough to post on Instagram. I wondered how many children we could cure of malaria if these people shifted 10% of their spending towards helping others.

This is the place where rich people go to compete in zero-sum status games. It breeds arrogant, out-of-touch elites. This is the place where talented young people go to pay half their income in rent and raise a small furry child simulator in place of the one they have forgotten to want. This is, as Isegoria so aptly put it, an IQ grinder that disproportionately attracts smart, well-educated people who reproduce at below replacement rate.

The huge disparities between rich and poor are omnipresent. I watched several dozen people (myself included) walk past a homeless diabetic with legs that were literally rotting away. I briefly wondered what was wrong with society that we allowed this to happen before walking away to board a bus out of the city.

I'm sure all these things have been said about New York City before, and I'm sure they will be said again. I'll probably return for a longer visit at some point.

April 2022 Welcome & Open Thread

Yes, this idea pops up every now and then. Does anyone (perhaps a UCLA alum) know of a non-off-putting way to get in touch with him? I think this community has some good communicators who could explain some of the more mathematically interesting parts of the alignment problem to him.
