here's a small improvement for me. i open a lot of tabs every day, sometimes to read them later, etc. things would get really disorganized, till i enabled a setting that makes new tabs open to the right of the current tab, rather than to the right of all of them. it still gets disorganized, but not as much. also, i no longer need to scroll all the way to the right in my tab list to reach one i just opened, and can just ctrl + click -> ctrl + tab.
(there may be a better solution for this, like a tab manager addon, though)
From the linked twitter thread:
...[...] Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences.
[...] Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing.
I don't think I have enough of a post history to participate. If I did, I'd factor into my bet that there may be less impact to be had in a world with advanced aliens, at least if those aliens could subdue an earth-originated ASI. Therefore, money might be less instrumentally valuable in that world.
Thanks for the offer!
I'm trying to read through a lot of LW and astral codex posts right now. Here are some samples:
https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/
https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators
https://astralcodexten.substack.com/p/janus-simulators
https://www.lesswrong.com/posts/uyBeAN5jPEATMqKkX/lies-told-to-children-1
https://carado.moe/values-complex-not-objective.html
(if you meant audio as well, then for example, the sequences, LW curated podcast, and astral codex ten podcast all have lots of audio of ass...
Thanks for the reply. I did use "plus." I also tried the "commercial" preview, and it's a bit better, I may end up compromising with it if I can't find a better solution.
this question is confusing to me due to being about 'GPT-5.' openAI isn't currently training a 'GPT-5', so the referent is sort of undefined. an AI trained by openAI that they call 'GPT-5' might be a lot more powerful if trained 5 years from now, than 1 year from now, for example.
one interpretation could be that it's asking about both, 'when will openAI develop GPT-5', and also 'when will AIs be capable enough to create more capable AIs', but i think this probably isn't your intent.
it's interesting that an intelligence in the 'original'/'top-level' universe also might [if simulation theory is valid] have evidence to assume it's close-to-certainly simulated
maybe it would do acausal trade and precommit to not shutting down simulated intelligences
(status: im newer here, this is a random thought i had, could be obvious to others, might also help when talking to outsiders about ai risk)
humans seem like a good example of an intelligence takeoff. for most of prehistory, species were following the same basic patterns repetitively (eating each other, trying to survive, etc.)
then at some arbitrary point, one species either passed some threshold in intelligence, or maybe it just gained a pivotal intelligence-unrelated ability (such as opposable thumbs), or maybe it just found itself in the right situ...
On the subject of losing control of the discourse, this tumblr post on the development of traditional social movements seems to have some relevant insights. (this is not to say it's 1:1 applicable) https://balioc.tumblr.com/post/187004568356/your-ideology-if-it-gets-off-the-ground-at-all
(Disclaimer: I'm newer to the alignment problem than most here, I'm not an experienced researcher, just sharing this in case it helps)
...Your ideology – if it gets off the ground at all – will start off with a core base of natural true believers. These are the people for
The individual calculations of the egg question seem, to me, within the realm of what most people can do. It's possible others might have trouble keeping multiple numbers in mind at once, though.
- multiply 30% with 40% (which can be simplified in a number of ways)
- multiply 10% with 60%
- add 6% + 12%
- divide 12 by 18
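For what it's worth, the steps above can be written out directly. This is just a sketch of the arithmetic as listed (it reads like a Bayes-style posterior calculation, but the egg question itself isn't restated here, and the variable names are my own):

```python
# The two branches from the list above, as probabilities.
branch_a = 0.30 * 0.40       # = 0.12 (12%)
branch_b = 0.10 * 0.60       # = 0.06 (6%)
total = branch_a + branch_b  # = 0.18 (18%)
answer = branch_a / total    # = 12 / 18, about 0.667
```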
It's probably easier to acquire skills (like mental maths) with high intelligence, but no matter your intelligence, you still need to learn them.
Also, I'm not sure this is fully true. Personally, learning the math was enough for me to automatically start doing it mentally, and I never needed to specifically practice that.
Sometimes I have an internal desire to do something different from what I think should be done (for example, I might desire to play a game while also thinking the better choice is to read). I've been experimenting with using randomness to mediate this. I keep a D20 with me, give each side of the dispute some odds proportional to the strength of its resolve, and then roll the die.
In theory, this means neither side will overpower the other, and even a small resolve still has a chance. I'm not sure how useful this is, but it's fun, and can sort of g... (read more)
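A minimal sketch of that D20 procedure, under my own assumptions (the function name and the 14/6 face split are purely illustrative, not from the original comment):

```python
import random

def roll_between(options):
    """options: dict mapping a choice name to how many of the 20 faces
    it gets (proportional to the strength of its resolve; must sum to 20)."""
    assert sum(options.values()) == 20
    roll = random.randint(1, 20)  # the D20 roll
    cumulative = 0
    for name, faces in options.items():
        cumulative += faces
        if roll <= cumulative:
            return name

# e.g. "read" currently has the stronger resolve, but "play game"
# still keeps a real chance:
choice = roll_between({"read": 14, "play game": 6})
```

Each side's share of faces is its probability of winning the roll, so even a small resolve (1 face) wins 5% of the time.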