Scaremongering about an asteroid

Minor typo: I think you accidentally pasted this comment by LeCun twice.

All, you'll find me at Harper and Rye from 7:30 PM onward. Call or text me at 213 214 9462.

I am looking for papers that support or attack the argument that sufficiently intelligent AIs will be easier to make safe, because their world models will include the understanding that we don't want our instructions interpreted with ruthless technical precision, nor received in bad faith.

The argument I want either supported or disproven is that such an AI would know we don't want an outcome that merely looks good, but one that is good by our own mental definitions. It would be able to look at human decisions, historical and present, to understand this fuzziness and moderation.

Okay everyone, we have the options of:

I've listed them in order of my preference, so please let me know if you've been lifetime-banned from any of them.

Regarding eliminating filler words: my friend and I have a very effective strategy that we employ about once a year. In fact, we're just about to start another round. We call it "No-'um'-November".

The rules are simple:

  • Make a shortlist of words you want to eliminate. For me, they are typically "um", "like", "you know", and "kind of". Don't pick too many.
  • Every time you say one of them, hit yourself hard on one cheek. A firm, hopefully painful smack. It doesn't matter where you are or who you're talking to. No exceptions. Strike.

It is more enjoyable, and less odd, when you take this challenge with one or more friends or colleagues who you see on a daily basis.

The first time I tried this, I had cut out nearly all filler words within the first four days, and I spent the rest of the month simply staying on high alert.

Fortunately for us, we both have cheerful colleagues who are forgiving of "unusual" behaviour in the office.