Thank you, upvoted! (with what little Karma that conveys from me)
It will certainly live as an open tab in my browser, but it doesn't feel directly usable for me.
What is especially challenging for me is assigning these "is" and "want" numbers with a consistent meaning. My gut feeling doesn't reliably map to a bare integer. What would help me is an example (or many examples) of what people mean when a connection to another human feels like a "3" to them, or when they want a "5" connection, and so on.
ought
Should this be "want" to match the actual column name, both in the template and in the screenshot?
Afaik, the practicality of pumped storage is extremely location dependent. Building it on flat land would require moving enormous amounts of soil to create an artificial mountain for it. There is also the issue of evaporation.
An alternative storage method to consider for your scenario would be molten salt storage: heat up salt with excess energy, and use the hot salt to power a steam turbine when you need the energy back. https://en.wikipedia.org/wiki/Thermal_energy_storage
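To get a feel for the scale, here's a rough back-of-envelope sketch of how much electricity a tonne of molten salt could give back. The specific heat, temperature swing, and turbine efficiency are assumed ballpark values on my part, not figures from the thread:

```python
# Back-of-envelope estimate of electricity recoverable from molten salt storage.
# All numbers below are assumed ballpark values, not figures from the thread.

salt_mass_kg = 1_000             # one tonne of salt
specific_heat_kj_per_kg_k = 1.5  # rough value for "solar salt" (NaNO3/KNO3 mix)
delta_t_k = 275                  # e.g. cycling between roughly 290 °C and 565 °C
turbine_efficiency = 0.40        # typical-ish steam cycle efficiency

stored_heat_kj = salt_mass_kg * specific_heat_kj_per_kg_k * delta_t_k
electricity_kwh = stored_heat_kj * turbine_efficiency / 3600  # 1 kWh = 3600 kJ

print(f"Stored heat:  {stored_heat_kj / 3600:.0f} kWh thermal")
print(f"Recoverable:  {electricity_kwh:.0f} kWh electric per tonne of salt")
```

On these assumptions you'd need on the order of ~20 tonnes of salt per MWh of electricity, which is part of why such plants end up being large.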
Is there a dedicated Wiki (or "subject-encyclopedia") for Project Lawful? I feel like collecting dath ilan concepts (like multi-agent-optimal boundary) might be valuable. This could include both an in-universe summary and context for each concept, and an out-of-universe explanation with references to introductory texts or research papers where needed.
One pivotal act, maybe slightly weaker than "develop nanotech and burn all GPUs on the planet", could be "develop neuralink+ and hook smart AI-Alignment researchers up to enough compute that they get smart enough to actually solve all these issues and develop truly safely aligned powerful AGI"?
While developing neuralink+ would still be very powerful, maybe it could sidestep a few of the problems by being physically local instead of having to act on the entire planet? Of course, this comes with its own set of issues, because we would now have superhumanly powerful entities that may still have human (dark) impulses.
Not sure if that would be better than our reference scenario of doom or not.
On second thought: Don't we have orgs that work on AI governance/policy? I would expect them to be more likely to have the skills/expertise to pull this off, right?
So, here's a thing that I don't think exists yet (or, at least, it doesn't exist enough that I know about it to link it to you): who's out there, what 'areas of responsibility' do they think they have, what 'areas of responsibility' do they not want to have, and what are the holes in the overall space? It probably is the case that there are lots of orgs that work on AI governance/policy, and each of them is probably trying to consider a narrow corner of the space, instead of trying to hold 'all of it'.
So if someone says "I have an idea how we should regulate medic...
🤔
Not sure if I'm the right person, but it seems worth thinking about how one might approach this if one were to do it.
So the idea is to have an AI-Alignment PR/social media org/group/NGO/think tank/company whose goal is to contribute to a world with a more diverse set of high-quality ideas about how to safely align powerful AI. The only other organization roughly in this space that I can think of would be 80,000 Hours, which is somewhat more general in its goals and more conservative in its strategies.
I'm not a sales/marketing person, but a...
I wonder if we could be much more effective in outreach to these groups?
Like making sure that Robert Miles is sufficiently funded to have a professional team +20% (if that is not already the case). Maybe reaching out to Sabine Hossenfelder and sponsoring a video, or collaborating with her on a video about this. Though I guess, given her attitude towards the physics community, working with her might be a gamble and a double-edged sword. Can we get market research on which influencers have a high number of followers among ML researchers/physicists/mathematicians ...
Not saying that this should be MIRI's job, rather stating that I'm confused because I feel like we as a community are not taking an action that would seem obvious to me.
I wrote about this a bit before, but in the current world my impression is that actually we're pretty capacity-limited, and so the threshold is not "would be good to do" but "is better than my current top undone item". If you see something that seems good to do that doesn't have much in the way of unilateralist risk, you doing it is probably the right call. [How else is the field going to get more capacity?]
The link to your framework for onboarding habits / SEEP is broken. Here is an archived version of that article: https://web.archive.org/web/20211125065547/http://www.katwoods.co/home/june-14th-2019
Thank you for providing these updates. Since I'm not well versed in reading prediction markets and drawing conclusions from them, I appreciate your perspective and your sharing of the thinking behind it.
I'm seeing quite a few reports that the US is supplying loitering munitions, specifically Switchblade drones, to Ukraine. Would that fall under your definition of "small drones with AI", or are you thinking of something else?
No, neither of them was right or wrong. That's just not how probabilities work and simplifying in that way confuses what's going on.
By "wrong" here I mean "incorrectly predicted the future". If there is a binary event, and I predicted the outcome A, but the reality delivered the outcome B, then I incorrectly predicted the future.
Maybe an intuition pump for what I think Christian is pointing at:
Was your prediction wrong?
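One way to make this less binary is to grade the forecast rather than calling it right or wrong, e.g. with a proper scoring rule like the Brier score. A minimal sketch (my own illustration, not the example from the discussion):

```python
# Brier score: penalize a probabilistic forecast by the squared distance
# between the stated probability and what actually happened (1 or 0).
# Lower is better; a forecast is graded, not simply "right" or "wrong".

def brier_score(prob_of_event: float, event_happened: bool) -> float:
    outcome = 1.0 if event_happened else 0.0
    return (prob_of_event - outcome) ** 2

# Two people forecast the same binary event; it does NOT happen.
print(round(brier_score(0.90, False), 2))  # 0.81 -> the 90% forecast is penalized heavily
print(round(brier_score(0.30, False), 2))  # 0.09 -> the 30% forecast did much better
```

Both forecasters get some penalty when the event doesn't happen, but the more confident one gets penalized much more; neither is simply "right" or "wrong".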
Thanks!
Regarding the likelihood of a substantial ceasefire soon and Putin's continued presidency: recent news makes it seem to me like Putin's administration could be starting to lay the rhetorical groundwork for an exit. Particularly these bits:
1.: Russia announced that it will reduce its operations around Kyiv. I think I read somewhere that they claimed something like "The attack on Kyiv was only made in order to bind Ukrainian troops there.", but I can't find the source now.
2.: Focussing on the Donetsk region. Actually getting control there seems realist...
This encounter with the guards at the border sounds scary. I'm glad you got through safely.
I hope your new location can provide some respite to you and your family 🌸
I think it was definitely good that you posted this in its current form, rather than not posting it out of perfectionism!
As an example that works with integers too: the Decide 10 Rating System. It gives me a sense of the space covered by that scale, and it somehow works better for my brain.
Weighted factor modelling sounds interesting and maybe useful, will look into that too. Thanks!