
Comments

joraine · 6mo · -10

Why get the pod cover? Just get a decent air conditioning system; it's far better and isn't $2500. Make the entire room cold when you sleep.

I like this post! Saved the comment about the "a day is all you need" induction to my quote bank.


I was guessing this was going in a slightly different direction, namely tracking progress (I use a spreadsheet) so that you can actually see that you're still making progress (this is why video games with clear leveling indicators are so addictive!) and don't mistakenly believe you're stalling and get demotivated.

I like the new-skill idea too, though. I am already prone to starting over in new arenas a bit too much, but having a set time for picking up a new skill is a good idea.

I suppose modeling a superintelligent agent as a utility maximizer feels a bit weird, though not the weirdest thing, and I'm not sure I can mount a good defense of the claim that a superintelligent agent definitely wouldn't be aptly modeled that way.
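For reference, the formalism being pointed at here is just textbook expected-utility maximization (nothing specific to the paper): an agent with utility function $U$ over outcomes $o$ picks

$$a^{*} = \underset{a \in \mathcal{A}}{\arg\max}\;\mathbb{E}\!\left[\,U(o) \mid a\,\right],$$

and the modeling question is whether a superintelligence's behavior is usefully compressed into a single fixed $U$.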

More importantly, the 3-step toy model with  felt like a strange and unrelated leap.

I don't know if it's the not-having-an-answer part; that is probably biasing me. But to take the cryptography example: if someone defined what security would mean, say indistinguishability under chosen-plaintext attack, and then proceeded to say "I have no idea how to achieve that, or whether it's even possible," I would still consider that real even though they didn't give us an answer.

Looking at the paper makes me feel like the authors were just having some fun discussing philosophy, not thinking "ah yes, this will be important for the fight later." But it is hard for me to pin down why I feel that way.

I am somewhat satisfied by the cryptography comparison for now, but it is definitely hard to see how valuable this is compared to general interpretability research.

I do like the comparison to cryptography, as that is a field I "take seriously" which also has the problem that terms are very difficult to define "fairly".

Indistinguishability under chosen-plaintext attack being the canonical definition of "secure" seems a lot more defensible than "properly modeling this random weird utility game maybe means something for AGI??", but I get why it's a similar sort of issue.
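For concreteness, here is a minimal Python sketch of the standard textbook IND-CPA game I keep referencing (the `xor_encrypt` scheme and the adversary below are hypothetical toy illustrations, not from any of the papers discussed):

```python
import os
import secrets

# Sketch of the IND-CPA game: the adversary gets free access to an
# encryption oracle, submits two equal-length messages, receives the
# encryption of one of them chosen at random, and must guess which.
# A scheme is IND-CPA secure if no efficient adversary wins with
# probability meaningfully above 1/2.

def ind_cpa_game(encrypt, adversary, key_len=16):
    key = os.urandom(key_len)
    oracle = lambda m: encrypt(key, m)              # chosen-plaintext access
    m0, m1 = adversary.choose_messages(oracle)
    assert len(m0) == len(m1)
    b = secrets.randbits(1)                         # challenger's secret bit
    challenge = encrypt(key, m1 if b else m0)
    return adversary.guess(oracle, challenge) == b  # True = adversary wins


class DistinguishDeterministic:
    """Wins whenever encryption is deterministic: re-query the oracle
    on m0 and compare against the challenge ciphertext."""

    def choose_messages(self, oracle):
        self.m0 = b"attack at dawn!!"
        return self.m0, b"retreat at dusk!"

    def guess(self, oracle, challenge):
        return 0 if oracle(self.m0) == challenge else 1


def xor_encrypt(key, m):
    # Deterministic XOR "encryption": a hypothetical strawman scheme,
    # trivially broken under this definition.
    stream = (key * (len(m) // len(key) + 1))[: len(m)]
    return bytes(a ^ b for a, b in zip(m, stream))


wins = sum(ind_cpa_game(xor_encrypt, DistinguishDeterministic()) for _ in range(100))
print(f"adversary won {wins}/100 games")  # prints 100/100
```

The point of the definition is that it turns "secure" into a concrete, falsifiable game: any deterministic scheme, like the toy one above, loses it every single time.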

How are we defining "tasty foods"? I'm sure if the entire world voted, chocolate would land more clearly in the "tasty food" category than rice cakes, but perhaps you really like how rice cakes taste?

It wasn't my suggestion; it was Logan Zoellner's post.

Can someone who downvoted the agreement karma please enlighten me as to why they disagree? This really seems like the only way forward. (I'm trying to make my career choice right now, as I am beginning my master's research this year.)

joraine · 2y · 264

This kind of post scares away the person who will be the key person in the AI safety field, if we define "key person" as the genius main driver behind solving it rather than the loudest person. That is rather unfortunate, because that person is likely to read this post at some point.

I don't believe this post has any "dignity", whatever weird, obscure definition dignity has been given now. It reads more like flailing around in death throes while pointing fingers and lauding yourself than like a solemn battle stance against an oncoming impossible enemy.

For context, I'm not some Eliezer hater. I'm a young person currently doing an ML master's who just got into this space, and within the past week I have become a huge fan of Eliezer Yudkowsky's earlier work while simultaneously being very disappointed in the recent, fruitless output.

You don't have to spell out the scenario, but was it removed because someone might execute it if they saw it?
