3 years later, 2025 has come and gone, and the predicted date for weak AGI is still 2 years away (12 Feb 2028 as I'm writing this). The predicted date for strong AGI is now a bit over 7 years in the future (Aug 2033).
What do we make of that?
Slightly derailing the conversation from the OP: I came across this variant on German Amazon: https://www.amazon.de/Anyone-Builds-Everyone-Dies-Superintelligent/dp/1847928935/
It notably has a different number of pages (32 more) and a different publisher. Is this just a different (earlier?) version of the book, or is this a scam?
Thank you!
Do you have recommendations/pointers for getting started with Yoga Nidra?
Always try to have 3 hypotheses
This one is important enough to be its own post.
Duncan Sabien's Split and Commit would seem to cover (large?) parts of that.
Link to the video: https://www.youtube.com/watch?v=qf7ws2DF-zk
Link to the article: https://en.wikipedia.org/wiki/Arrow's_impossibility_theorem
What I needed to learn was that it is also perfectly ok to set a boundary and decline such a request with a smile and in a non-disruptive way: "Thanks, but I'll pass. Over to <name of the person sitting beside you>, next."
Yep, I'm currently finding the balance between adding enough examples to posts and being sufficiently un-perfectionistic that I post at all.
I think it was definitely good that you posted this in its current form, rather than not posting for want of perfection!
As an example that works with integers too: the Decide 10 Rating System. This gives me a sense of the space covered by that scale, and it somehow works better for my brain.
Weighted factor modelling sounds interesting and maybe useful; I'll look into that too. Thanks!
Thank you, upvoted! (with what little Karma that conveys from me)
It will certainly live as an open tab in my browser, but it doesn't feel directly usable for me.
What is especially challenging for me is assigning these "is" and "want" numbers with a consistent meaning. My gut feeling doesn't reliably map to a bare integer. What would help me is an example (or many examples) of what people mean when a connection to another human feels like a "3" to them, or when they want a "5" connection, and so on.
"ought"
Should this be "want" to match the actual column name, both in the template and in the screenshot?
What is "physical fiction"?
Afaik, the practicality of pumped storage is extremely location dependent. Building it on flat land would require moving enormous amounts of soil to create an artificial mountain for it. Also, there is the issue of evaporation.
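To make that concrete, here's a minimal back-of-envelope sketch in Python. The 100 m head and the round numbers are my illustrative assumptions, not data from any real site:

```python
# Back-of-envelope: energy stored by pumped hydro is E = m * g * h.
# All numbers below are illustrative assumptions, not site data.
g = 9.81           # gravitational acceleration, m/s^2
head_m = 100       # assumed height difference between reservoirs, m
water_kg = 1_000   # one tonne of water

energy_j = water_kg * g * head_m      # ~0.98 MJ per tonne
energy_kwh = energy_j / 3.6e6         # ~0.27 kWh per tonne

tonnes_per_mwh = 1_000 / energy_kwh   # ~3,700 tonnes of water per MWh
print(f"{energy_kwh:.2f} kWh per tonne; {tonnes_per_mwh:,.0f} t of water per MWh")
```

At a 100 m head you need thousands of tonnes of water per MWh, which is why the geography has to provide the height difference for free.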
Another storage method to consider for your scenario would be molten salt: heat up salt with excess energy, and use the hot salt to power a steam turbine when you need the energy back. https://en.wikipedia.org/wiki/Thermal_energy_storage
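For comparison, a similarly rough sketch for molten salt. The specific heat, temperature swing, and turbine efficiency below are generic "solar salt" ballpark figures I picked for illustration, not a design:

```python
# Back-of-envelope: sensible heat in molten "solar salt" is Q = m * c * dT.
# Rough ballpark figures, chosen for illustration only.
c_salt = 1.5e3     # specific heat, J/(kg*K)
delta_t = 275      # assumed swing between ~290 and ~565 degC, in K
salt_kg = 1_000    # one tonne of salt
eta = 0.40         # assumed steam-turbine conversion efficiency

q_thermal_j = salt_kg * c_salt * delta_t     # ~410 MJ thermal per tonne
electric_kwh = q_thermal_j * eta / 3.6e6     # ~46 kWh electric per tonne
print(f"{electric_kwh:.0f} kWh_e per tonne of salt")
```

Under these assumptions a tonne of salt delivers roughly two orders of magnitude more electricity than a tonne of water at a 100 m head, which is one reason thermal storage can be attractive where the geography doesn't cooperate.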
This would seem to be related to "Knowing when to lose" from HPMOR.
Is there a dedicated wiki (or "subject-encyclopedia") for Project Lawful? I feel like collecting dath ilan concepts (like multi-agent-optimal boundary) might be valuable. It could include both an in-universe summary and context for each concept, and an out-of-universe explanation with references to introductory texts or research papers where needed.
One pivotal act, maybe slightly weaker than "develop nanotech and burn all GPUs on the planet", could be: "develop neuralink+ and hook smart AI alignment researchers up to enough compute that they become smart enough to actually solve all these issues and develop truly safely aligned powerful AGI"?
While developing neuralink+ would still be very powerful, maybe it could sidestep a few of the problems by virtue of being physically local, instead of having to act on the entire planet? Of course, this comes with its own set of issues, because we would now have superhumanly powerful entities that may still have human (dark) impulses.
Not sure if that would be better than our reference scenario of doom or not.
On second thought: don't we already have orgs that work on AI governance/policy? I would expect them to be more likely to have the skills/expertise to pull this off, right?
🤔
Not sure if I'm the right person, but it seems worth thinking about how one might approach this if one were to do it.
So the idea is to have an AI alignment PR/social media org/group/NGO/think tank/company whose goal is to contribute to a world with a more diverse set of high-quality ideas about how to safely align powerful AI. The only other organization roughly in this space that I can think of is 80,000 Hours, which is somewhat more general in its goals and more conservative in its strategies.
I'm not a sales/marketing person, but a...
I wonder if we could be much more effective in outreach to these groups?
Like making sure that Robert Miles is funded well enough to have a professional team, plus 20% (if that is not already the case). Maybe reaching out to Sabine Hossenfelder and sponsoring a video, or collaborating with her on a video about this. Though I guess, given her attitude towards the physics community, working with her might be a gamble and a double-edged sword. Can we get market research on which influencers have a high number of followers among ML researchers/physicists/mathematicians ...
Strong Agreement.
The overwhelmingly likely outcome is that Nectome fails, if only on priors, because ~90% of startups fail. In that case one ends up just as dead as everyone else, but with less money spent on happiness for oneself or others.
So the remaining question is on the margins: conditional on one actually, eventually, being resurrected/uploaded, what are the probabilities of this going well versus resulting in lots of suffering for oneself? Looking at the trajectory of where the world is currently headed, I find it hard to see why a positive outcome shou...