kuira

email: kuiranya (at) proton.me

feel free to ask for my discord

Comments

here's a small improvement for me. i open a lot of tabs every day, sometimes to read them later, etc. it would get really disorganized until i enabled a setting that makes new tabs open to the right of the current tab, rather than to the right of all of them. it still gets disorganized, but not as much. also, i no longer need to scroll all the way to the right on my tab list to get to one i just opened; i can just ctrl+click -> ctrl+tab.

(there may be a better solution for this, like a tab manager addon, though)
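(in case anyone wants the setting: if your browser is Firefox — i'm assuming it is, other browsers may need an extension for this — it's a preference you can flip in about:config:)

```
// Firefox about:config preference — make new tabs open
// immediately to the right of the current tab
browser.tabs.insertAfterCurrent = true
```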

From the linked twitter thread:

[...] Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences.

[...] Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background.

This is astonishing to me. I wonder if this will be one of its main uses.

I don't think I have enough of a post history to participate. If I did, I'd factor into my bet that there may be less impact to be had in a world with advanced aliens, at least if those aliens could subdue an earth-originated ASI. Therefore, money might be less instrumentally valuable in that world.

Thanks for the offer!

I'm trying to read through a lot of LW and astral codex posts right now. Here are some samples:
https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/
https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators
https://astralcodexten.substack.com/p/janus-simulators 
https://www.lesswrong.com/posts/uyBeAN5jPEATMqKkX/lies-told-to-children-1
https://carado.moe/values-complex-not-objective.html

(if you meant audio as well, then for example, the sequences, LW curated podcast, and astral codex ten podcast all have lots of audio of associated text)

I think I'd be able to ignore things like static. I've listened to some decades-old recordings before with no problem.

If you think you'll forget to check this site, we could continue on a platform you use more often. My email is kuiranya (at) proton.me; I could give you my discord (for example) from there.

I'm looking into https://play.ht/ as well :)

Thanks for the reply. I did use "plus." I also tried the "commercial" preview, and it's a bit better; I may end up compromising on it if I can't find a better solution.

this question is confusing to me because it's about 'GPT-5.' OpenAI isn't currently training a 'GPT-5', so the referent is sort of undefined. an AI that OpenAI calls 'GPT-5' might be a lot more powerful if trained 5 years from now than 1 year from now, for example.

one interpretation could be that it's asking about both 'when will OpenAI develop GPT-5' and 'when will AIs be capable enough to create more capable AIs', but i think this probably isn't your intent.

thanks for the reply btw, i'd upvote you but the site won't let me yet :p 
 

eta: now i can :3

it's interesting that an intelligence in the 'original'/'top-level' universe might also [if the simulation argument is valid] have evidence suggesting it's close-to-certainly simulated

maybe it would do acausal trade and precommit to not shutting down simulated intelligences

(status: i'm newer here, this is a random thought i had, could be obvious to others, might also help when talking to outsiders about ai risk)

humans seem like a good example of an intelligence takeoff. for most of prehistory, species were repetitively following the same basic patterns (eating each other, trying to survive, etc.)

then at some arbitrary point, one species either passed some threshold in intelligence, or gained a pivotal intelligence-unrelated ability (such as opposable thumbs), or just found itself in the right situation (e.g. the agricultural revolution is commonly explained by humans ending up in an environment better suited for plant growth).

and then it spiraled out of control to where we are now.

and in the future, this species is gonna create an even more powerful intelligence. this mirrors our own worries about AI creating a more powerful AI. 

sometimes people say that there's no evidence for AI doom because it hasn't been tested. the human example, framed this way, might be moving evidence to such people.

this might also have implications for how AI takeoff might go. it might be that there won't be some surprising increase in intelligence compared to earlier AIs - it could be more like the biological-intelligence takeoff, where it happens after some arbitrary-seeming conditions are met.
