Glad to hear you are doing better!

Ok, that is an interesting route to go. Let "us" know how it goes, if you feel like sharing your journey.

Hey Sable, I am sorry about your situation. Perhaps I am pointing out the obvious, but you just achieved something. You wrote a post and people are reading it. Keep 'em coming!

Good that you mention it and did NOT get downvoted. Yet. I have noticed that we are in the midst of an "AI-washing" attack, which is going on here on LessWrong too. But it's like asking a star NFL quarterback whether football should be banned because of the risk of serious brain injuries: of course he will answer no. The big tech companies pour trillions of dollars into AI, so of course they make sure that everyone is "aligned" to their vision, and they will try to remove any and all obstacles when it comes to public opinion. Repeat after me:

"AI will not make humans redundant."

"AI is not an existential risk."

...

I am not so sure that Xi would like to get to AGI any time soon. At least not something that could be used outside of a top-secret military research facility. Sudden disruptions in the labor market in China could quickly spell the end of his rule. Xi's rule is based on the promise of stability and increased prosperity, so I think the export ban on advanced GPUs is a boon to him for the time being.

Imagine having a context window that fits something like PubMed, or even The Pile (but that's a bit further into the future...). What would you be able to find in there that no one could see using traditional literature review methods? I guess that today a company like Google could scale up this tech and build a special-purpose supercomputer that handles a 100-1,000 million token context window if they wanted to, or perhaps they already have one for internal research? It's "just" 10x+ of what they said they have experimented with, and that was with no mention of any special-purpose hardware.
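As a rough sanity check on those sizes, here is a back-of-envelope sketch. The abstract count, words per abstract, tokens-per-word ratio, and the ~10M-token reference point are all my own ballpark assumptions, not figures from the comment or from Google:

```python
# Back-of-envelope: how many tokens would a PubMed-scale context need?
# Every figure below is a rough assumption, for illustration only.

TOKENS_PER_WORD = 1.3           # typical English tokenizer ratio (assumption)
PUBMED_ABSTRACTS = 36_000_000   # order-of-magnitude abstract count (assumption)
WORDS_PER_ABSTRACT = 250        # assumption

pubmed_tokens = PUBMED_ABSTRACTS * WORDS_PER_ABSTRACT * TOKENS_PER_WORD
reported_window = 10_000_000    # ~10M tokens, the experiment referenced above (assumption)

print(f"PubMed abstracts alone: ~{pubmed_tokens / 1e9:.0f}B tokens")
print(f"That is ~{pubmed_tokens / reported_window:,.0f}x a 10M-token window")
```

Under these assumptions, even a 1,000-million-token window would still fall an order of magnitude short of the abstracts alone, never mind full texts or The Pile.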

Dagon, thank you for following up on my comment,

yes, they are in some ways apples and oranges, but both of them limit your ability to create things. One can argue that intellectual property rights have been beneficial for humanity as a whole, but at the same time they criminalize one of our most natural instincts, which is to mimic and copy what other humans do to increase our chance of survival. Which leads to the next question: would people stop innovating and creating if they could not protect their work?

Dagon, yes that seems like a reasonable setup. It's pretty amazing that world- and life-altering inventions get protection for a maximum of 20 years from the filing date, whereas someone who doodles something on paper gets protection that lasts the life of the author plus 70 years. But... maybe the culture war is more important to win than the technology war?

Anyway, with the content explosion on the internet, I would assume that pretty much every permutation of everything you can think of is now effectively copyrighted well into the foreseeable future. Will that minefield prove to be the reason to reform copyright law so that it fits a digital mass-creation age?

Thank you, Gerald Monroe, for explaining your thoughts further,

And this is what bothers me: the willingness of apparently intelligent people to risk everything. I am fine with people risking their own life and health for whatever reason they see fit, but to relentlessly pursue AGI without anyone really knowing how to control it is NOT ok. People can't dabble with anthrax or Ebola at home, for the obvious reason that they can't control it! But with AI anything goes, and it is, if anything, encouraged by governments, universities, VCs, etc.

Logan Zoellner, thank you for your question,

In my view we need more research, not people who draw inferences on extremely complex matters from what random people without that knowledge bet on a given day. It may be fun entertainment, but it does not say anything about anything.

I do not assign any probabilities. To me it is just silly, that whole probability-assigning game surrounding x-risk and AI safety in general. How can anyone say, for instance, that there is a 10% risk of human extinction? What does that mean? Is that a 1-in-10 chance at a given moment, during a 23.7678-year period, forever, or what? And most importantly, how do you come up with the figure of 10%? Based on what, exactly?
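For what it's worth, the timeframe ambiguity can be made concrete with a little hazard-rate arithmetic; a minimal sketch, where the 10% figure and the time horizons are purely illustrative, not anyone's actual estimate:

```python
# The same headline probability means very different things depending on
# the timeframe it is spread over. Figures here are illustrative only.

def annual_rate_from_cumulative(p_total: float, years: float) -> float:
    """Constant annual risk r such that 1 - (1 - r) ** years == p_total."""
    return 1 - (1 - p_total) ** (1 / years)

for horizon in (10, 100, 1000):
    r = annual_rate_from_cumulative(0.10, horizon)
    print(f"10% total risk over {horizon:>4} years = {r:.4%} per year")
```

Without the timeframe attached, "10%" alone is underspecified, which is exactly the complaint above.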
