Comments
Manax

> We will hopefully be fine either way, but I think I would like the AI before some radical biotech revolution. If you think about it, if you first get some sort of super-advanced synthetic biology, that might kill us. But if we're lucky, we survive it. Then, maybe you invent some super-advanced molecular nanotechnology, that might kill us, but if we're lucky we survive that. And then you do the AI. Then, maybe that will kill us, or if we're lucky we survive that and then we get to utopia.
>
> Well, then you have to get through sort of three separate existential risks--first the biotech risks, plus the nanotech risks, plus the AI risks, whereas if we get AI first, maybe that will kill us, but if not, we get through that, then I think that will handle the biotech and nanotech risks, and so the total amount of existential risk on that second trajectory would sort of be less than on the former.

I see the optimal trajectory as us going through pretty much ANY other "radical" revolution before AI, with the possible exception of uploading or radical human enhancement. None of the other 'radical revolutions' I can imagine involve the kind of phase shift AGI would be; they seem more akin to "amped up" versions of revolutions we've already gone through, and so in some sense more "similar" and "safer" than what AGI would do. Thus I think they are better practice for us as a society...

On a different note, the choice between being overcautious and undercautious is super easy. We REALLY want to overshoot rather than undershoot. If we overshoot, we have a thousand years to correct that. If we undershoot and fail at alignment, we all die and there's no correcting that... We have seen so many social shifts over the last 100 years that there's little reason to believe we'd be 'stuck' without AGI forever. The chance of that isn't zero, but it certainly seems way lower than the chance of AGI being unaligned.

Manax

You wrote "causing Y in order to achieve X", but I believe you meant "causing Y to prevent X".

Manax

I've often seen this with hooking up computers, TVs, and/or audio equipment. Many people seem to treat it as incomprehensible, even though with computers (particularly) it's just matching each cable to its connector, with no real thinking needed. For A/V equipment it's just that the signal "flows" from outputs to inputs.

Specialization is fantastic, but there is real value to cross-training in other disciplines. It's hard to predict what insights from other fields might assist with your primary one. Also, even if you use a specialist, it's impossible to evaluate them if you draw a complete blank in their area. Auto mechanics, as mentioned in the article, are a good example. If a mechanic tells you he "needed to replace the flooge inhibitor" because it was causing the car to "super-slafire", how do you evaluate whether he's being honest without spending a lot of money and time doing experiments?

Manax

The original reason for the 15-minute sampling was how we do billing, but I've never tried to "game" it, and if I'm distracted enough to be anticipating the next ping, there is something seriously wrong with me, since I'm clearly not focused at all. :) If I work on two projects during an interval and am not sure (roughly) how I split my time, I'll split it evenly. It's worked out pretty well.

I'll take a look at tagtime at some point next week. I'd guess there's a way to tune lambda based on the minimum feature size you're trying to capture, right? It's been a while since I've dealt much with Poisson distributions, and I've never had to generate them.
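Here's a rough sketch of what I have in mind (my own guess, not tagtime's actual code): in a Poisson process the gaps between pings are exponentially distributed with mean 1/lambda, and that mean gap needs to be comfortably shorter than the smallest activity you want to catch.

```perl
use strict;
use warnings;

# Rough sketch (assumed, not tagtime's actual code): Poisson-process pings
# have exponentially distributed gaps with mean 1/lambda.  The chance that
# at least one ping lands inside an activity of length d is
# 1 - exp(-d / $mean_gap), so pick $mean_gap well below the smallest
# activity you want to catch reliably.
my $mean_gap = 10 * 60;    # mean gap in seconds (assumed value)
my $t        = time();

for (1 .. 10) {
    # 1 - rand() lies in (0, 1], so log() is always defined.
    my $gap = -log(1 - rand()) * $mean_gap;
    $t += $gap;
    printf "ping at %s  (gap %4.1f min)\n", scalar localtime($t), $gap / 60;
}
```

The appeal of exponential gaps is that they're memoryless, so unlike my fixed 15-minute schedule there's no way to anticipate the next ping.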

Manax

Interestingly, I wrote something very similar to tagtime a number of years ago, and am still using it. I don't do random sampling (didn't think of it at the time); I sample at fixed 15-minute intervals instead. I've got shortcuts and defaults to remember the last thing I was working on, and automatic (and manual) time division when I've worked on multiple projects in the interval. Over the last year, I've gotten it to the point where it automatically fills in timesheets for me. Mine is in Perl too.

Of course, this sort of thing only works as long as you're honest about what you're working on. Sometimes I'm very good about being honest when I've gone off-task, sometimes less so. But it's easy to go back through my logs and find out how much time I've wasted when I intended to be working.
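For what it's worth, totalling the off-task time from the logs only takes a few lines. A hypothetical sketch (this log format is made up for illustration, not my actual one): each line is an epoch timestamp, the minutes credited, and a tag.

```perl
use strict;
use warnings;

# Hypothetical log format (made up for illustration):
#   <epoch timestamp>  <minutes credited>  <tag>
# Total the minutes per tag; anything tagged 'slack' is time I wasted.
my %minutes_by_tag;
while (my $line = <DATA>) {
    chomp $line;
    next unless $line =~ /\S/;
    my ($epoch, $mins, $tag) = split ' ', $line;
    $minutes_by_tag{$tag} += $mins;
}
printf "%-10s %6.1f min\n", $_, $minutes_by_tag{$_} for sort keys %minutes_by_tag;

__DATA__
1273500000 15   projectA
1273500900 15   slack
1273501800 7.5  projectA
1273501800 7.5  projectB
```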

Manax

I spent some time learning about this when I was dabbling with the Netflix contest. There was a fair bit of discussion on their forum regarding SVD and related algorithms. The winner used a "blended" approach, which weighted the results from numerous different algorithms. Success was judged by the RMSE of the predictions against a held-out test set of data.
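To make the evaluation criterion concrete, here's an illustrative sketch with made-up numbers (not Netflix data or the winning code): RMSE against a held-out test set, plus the simplest possible "blend", a fixed weighted average of two predictors.

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Illustrative only: made-up ratings and predictions, not Netflix data.
my @actual  = (4, 3, 5, 2, 4);                 # held-out test ratings
my @model_a = (3.8, 3.2, 4.6, 2.5, 3.9);       # predictions from model A
my @model_b = (4.3, 2.7, 4.9, 1.8, 4.4);       # predictions from model B

# Simplest possible blend: a fixed weighted average of the two models.
my $w = 0.6;                                   # weight on model A (assumed)
my @blend = map { $w * $model_a[$_] + (1 - $w) * $model_b[$_] } 0 .. $#actual;

# Root mean squared error of predictions against the truth.
sub rmse {
    my ($pred, $truth) = @_;
    my $mse = sum(map { ($pred->[$_] - $truth->[$_]) ** 2 } 0 .. $#{$truth})
              / scalar @$truth;
    return sqrt($mse);
}

printf "model A RMSE: %.4f\n", rmse(\@model_a, \@actual);
printf "model B RMSE: %.4f\n", rmse(\@model_b, \@actual);
printf "blend   RMSE: %.4f\n", rmse(\@blend,   \@actual);
```

In practice, as I recall, the blending weights were themselves fit to data rather than fixed by hand.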