Posts

Sorted by New

Wiki Contributions

Comments

For this Petrov day, I'm also interested in how many people will have access to the button as a function of time. How many users have 1000+ Karma?

Is there anywhere to see the history of lesswrong Petrov day? I'd be interested in whether we've ever succeeded before. 

Also, I think most people know that the real cost of 1500 people not being able to check lesswrong for 12 hours is essentially 0. It may even be net positive to have a forced hiatus. Perhaps that's just a failure to multiply on my part. Anyway, I view this exercise as purely symbolic.

Interestingly, Jane will probably end up doing the exact same thing as Susan, only on the timescale of years instead of days. She kept those years in prison. If, in one iteration, the years immediately following prison were of some profound importance, she would probably keep those too. In the absence of a solar flair, she would find herself a 70 year old woman whose memories consisted of only the most important selected years from the 10s of thousands that make up her full history.

Thank you for the story.

Thank you for the response. I do think there should be at least some emphasis on boxing. I mean, hell. If we just give AIs unrestricted access to the web, they don't even need to be general to wreak havoc. That's how you end up with a smart virus, if not worse.

Then another basic question? Why have we given up? I know that an ASI will almost definitely be uncontainable. But that does not mean that it can't be hindered significantly given an asymmetric enough playing field.

Stockfish would beat me 100 times in a row, even playing without a queen. But take away its rooks as well, and I can usually beat it. Easy avenues to escaping the box might be the difference between having a fire alarm and not having one.

So in your opinion, is an AI with access to GET requests essentially already out of the box?

I just thought of a question. If there is a boxed AI that has access to the internet, but only through Get requests, it might still communicate with the outside world through network traffic patterns. I'm reading a book right now where the AI overloads pages on dictionary websites to recruit programmers under the guise of it being a technical recruiting challenge.

My question: should we raise awareness of this escape avenue so that if, in the year 2030, a mid level web dev gets a mysterious message through web traffic, they know enough to be suspicious?

You captured this in your post, but for me it really comes down to people dismissing existential fears as scifi. It's not more complicated than "Oh you've watched one too many Terminator movies". What we need is for several well-respected smart figureheads to say "Hey, this sounds crazy, but it really is the biggest threat of our time. Bigger than climate change, bigger than biodiversity loss. We really might all die if we get this wrong. And it really might happen in our lifetimes."

If I could appeal to authority when explaining this to friends, it would go over much better. 

I am pretty concerned about alignment. Not SO concerned as to switch careers and dive into it entirely, but concerned enough to talk to friends and make occasional donations. With Eliezer's pessimistic attitude, is MIRI still the best organization to funnel resources towards, if for instance, I was to make a monthly donation?

Not that I don't think pessimism is necessarily bad; I just want to maximize the effectiveness of my altruism.

This could be the case. However, my instinct is that human intelligence is only incrementally higher than other animals. Sure, we crossed a threshold that allowed us to accomplish great things (language, culture, specialization), but I would honestly be shocked if you told me that evolution was incapable of producing another similarly intelligent species if it started from the baseline intelligence of, say wolves, or crows. If there is a "1-in-a-quadrillion chance" somewhere in our history, I expect that filter to be much further back than the recent evolution of hominids. 

I don't have research to back this up. Just explaining why I personally wouldn't push timelines back significantly based on the anthropic principle. 

Load More