Comments

Will working here advance AGI? Help us not destroy the world!

It wasn't my suggestion; it was Logan Zoellner's post.

Will working here advance AGI? Help us not destroy the world!

Can someone who downvoted the agreement karma please enlighten me as to why they disagree? This really seems like the only way forward. (I'm trying to make my career choice right now, as I'm beginning my master's research this year.)

AGI Ruin: A List of Lethalities

This kind of post scares away the person who will be the key person in the AI safety field, if we define "key person" as the genius main driver behind solving it, not the loudest person. Which is rather unfortunate, because that person is likely to read this post at some point.

I don't believe this post has any "dignity", whatever weird obscure definition dignity has been given now. It reads less like a solemn battle stance against an oncoming impossible enemy and more like flailing around in death throes while pointing fingers and lauding yourself.

For context, I'm not some Eliezer hater. I'm a young person doing an ML master's who just got into this space, and within the past week I have become a huge fan of Eliezer Yudkowsky's earlier work while simultaneously being very disappointed in the recent, fruitless output.

We will be around in 30 years

You don't have to say what the scenario was, but was it removed because someone might execute it if they saw it?

Experience LessWrong without the Time-Wasting RabbitHole Effect

I very much enjoy that style of pop-up. It's quick and feels more like adding an optional paragraph than opening a never-ending portal every 5 seconds. Your link CSS is also less jarring against regular black text than a bright color on a white background would be.

Experience LessWrong without the Time-Wasting RabbitHole Effect

So this was a phase? You went through it, wrote the article, and then came out the other side?


From where I am right now it feels never-ending, but I suppose if I can read so much that it stops being so provably distracting, that's a nice upside.

Reflections on a year of college

"I realize that the tone of this post may come off as very cynical. "

I actually thought the tone of your post was very optimistic and exciting, that is, until you ended it ironically by saying "since this is too cynical, let me lighten it up a bit by reminding everyone here that life is hilariously short and you'll all die, so you'd better enjoy it!"

I do not agree that talking about the greatest sadness in life is somehow a positive, happy thing, despite people's attempts to twist it in that direction. ("You can only appreciate things that are limited!")

I believe aging is an illness and should be fought against, not just accepted as an encouragement to "have fun" and experiment with "innocent-looking gummies".

But to end on a /not mean/ note, this was a really great post (up until the "Enjoy Yourself" subheading). I very much liked the amalgamation of links you found, and I separately went to read a lot of those articles. I was very close to saving this in my favorite-articles collection because of how well curated the quotes and articles in the first part were.

MIRI announces new "Death With Dignity" strategy

Yeah I saw this post:

https://www.lesswrong.com/posts/MR6cJKy2LE6kF24B7/why-hasn-t-deep-learning-generated-significant-economic
So I'm somewhat confused about how profitable AI is, but maybe I can just start asking random experts and researching AI startups.

MIRI announces new "Death With Dignity" strategy

Apologies for the long wall of text. At the bottom I dive into your aside more, as it's highly relevant to deciding the course of my next 10 years, and I would appreciate your weighing in.

Pre-LessWrong, and for my entire life, I've been really interested in longevity, and I would do anything to help people have more time with their loved ones (as a child I thought solving this was the only worthy kind of fame I'd ever want).
I didn't know how to get there, but it was probably somewhere in math and science, so I decided I had to do anything to get into MIT.
My hobbies ended up being CS-y instead of biology-y, and I realized that not only was CS profitable for earning to give, but it also might be the best shot at longevity, since AI was just infinitely better at problem solving.
So that's where my AI interest comes from: not fear of it, but using it to solve mortal problems. The AI safety thing is something I of course just hear smart people like Eliezer mention, and then I think to myself, "hmm, well, they know more about AI than me, and I can't use it to cure aging without the AI also maybe destroying us, so I should look into that."

Your crypto comment is surprising though, and I'd like to go further on that. I should be clearer: I'm pretty interested in cryptocurrency, not just cryptography, and so far trading it has been really profitable. This summer I'm essentially trying to decide whether to stop my schooling to do a crypto startup or do my master's in AI (or potentially also a crypto thing).

Startups seem like the best thing to do for profit, and people are falling over themselves to fund them nowadays, so I assumed, given how many people have offered me funding, that a crypto startup would be far easier to profit from than an ML startup (with ML maybe overtaking it in 7 years or so).

If this isn't the case, or we're a year away from the flip to ML being the easier startup, I'd love to know, because I'm right on the precipice between pursuing as much ML knowledge as I can and trying to get a PhD (probably eventually doing an ML spin-off), versus doing a crypto startup to earn to give à la FTX Sam.
