All of Celenduin's Comments + Replies

Yep, I'm currently finding the balance between adding enough examples to posts and being sufficiently un-perfectionistic that I post at all.

I think it was definitely good that you posted this in its current form, rather than not posting at all for want of perfection!

As an example that works with integers too: the Decide 10 Rating System. This gives me a sense of the space that is covered by that scale, and it somehow works better for my brain.

Weighted factor modelling sounds interesting and maybe useful, will look into that too. Thanks!

Thank you, upvoted! (with what little Karma that conveys from me)

It will certainly live as an open tab in my browser, but it doesn't feel directly usable for me.

What is especially challenging for me is to assign these "is" and "want" numbers with a consistent meaning. My gut feeling doesn't reliably map to a bare integer. What would help me would be an example (or many examples) of what people mean when a connection to another human feels like a "3" to them, or they want to have a "5" connection, and so on.

2Severin T. Seehrich14d
Yep, I'm currently finding the balance between adding enough examples to posts and being sufficiently un-perfectionistic that I post at all. My current main criterion is something like "Do these people make me feel good, empowered, and give me a sense of community?" I expect that to change over time. If a simple integer doesn't work for you, maybe split the two columns into several different categories? If you want to go fancy, weighted factor modelling might be a good tool for that.
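At its simplest, weighted factor modelling is just a weighted sum of per-factor ratings. Here is a minimal sketch in Python; the factor names, weights, and ratings are made up for illustration and are not from the post:

```python
# Hypothetical weighted factor model for rating a connection.
# Factors and weights are illustrative, not from the original post.
factors = {
    "feel_good": 0.4,   # "Do these people make me feel good?"
    "empowered": 0.3,   # "...empowered?"
    "community": 0.3,   # "...give me a sense of community?"
}

def weighted_score(ratings):
    """Combine per-factor ratings (0-10) into one 0-10 score."""
    return sum(factors[name] * ratings[name] for name in factors)

alice = {"feel_good": 8, "empowered": 6, "community": 9}
print(weighted_score(alice))  # 0.4*8 + 0.3*6 + 0.3*9 = 7.7
```

Splitting a single gut-feeling integer into explicit factors like this can make the "is" and "want" numbers easier to assign consistently.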

ought


Should this be "want" to match the actual column name, both in the template and in the screenshot?

1Severin T. Seehrich14d
Feel free to adapt it however it makes sense for you. :)
1RomanHauksson3mo
They meant a physical book (as opposed to an e-book) that is fiction.

AFAIK, the practicality of pumped storage is extremely location-dependent. Building it on flat land would require moving enormous amounts of soil to create the artificial mountain for it. There is also the issue of evaporation.

Another alternative storage method for your scenario to consider would be molten salt storage. Heat up salt with excess energy, and use the hot salt to power a steam turbine when you need the energy back. https://en.wikipedia.org/wiki/Thermal_energy_storage
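To get a feel for why pumped storage needs so much height and mass, here is a back-of-the-envelope calculation (my numbers, not the commenter's) using the potential-energy formula E = m·g·h, ignoring pump/turbine losses and evaporation:

```python
# Back-of-the-envelope: water mass needed for pumped-hydro storage.
# E = m * g * h, ignoring pump/turbine losses and evaporation.
g = 9.81          # gravitational acceleration, m/s^2
h = 100.0         # metres of head; flat terrain offers far less
energy_mwh = 1.0  # target storage capacity

energy_j = energy_mwh * 3.6e9     # 1 MWh = 3.6e9 J
mass_kg = energy_j / (g * h)
print(f"{mass_kg / 1000:.0f} tonnes of water")  # ~3670 tonnes per MWh
```

Roughly 3,700 tonnes of water lifted 100 m per MWh stored, which illustrates why building the height artificially on plain land is so costly.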

3blackstampede1y
Unless I'm misunderstanding, it seems like pumping the water up from an aquifer to the surface would provide enough height to act as a battery: you wouldn't drain it to ground level, you would drain it back down into the aquifer.

This would seem to be related to "Knowing when to lose" from HPMOR.

Is there a dedicated wiki (or "subject encyclopedia") for Project Lawful? I feel like collecting dath ilan concepts (like multi-agent-optimal boundary) might be valuable. It could include both an in-universe summary and context for them, and an out-of-universe explanation with references to introductory texts or research papers if needed.

One pivotal act, maybe slightly weaker than "develop nanotech and burn all GPUs on the planet", could be "develop neuralink+ and hook up smart AI-Alignment researchers to enough compute that they get smart enough to actually solve all these issues and develop truly safely aligned powerful AGI"?

While developing neuralink+ would still be very powerful, maybe it could sidestep a few of the problems on the merit of being physically local instead of having to act on the entire planet? Of course, this comes with its own set of issues, because we would then have superhumanly powerful entities that might still have human (dark) impulses.

Not sure if that would be better than our reference scenario of doom or not.

4Nathan Helm-Burger1y
I agree, but I personally suspect that neuralink+ is way more research hours & dollars away than unaligned dangerously powerful AGI. Not sure how to switch society over to the safer path.

On second thought: Don't we have orgs that work on AI governance/policy? I would expect them to have more likely the skills/expertise to pull this off, right?

So, here's a thing that I don't think exists yet (or, at least, it doesn't exist enough that I know about it to link it to you). Who's out there, what 'areas of responsibility' do they think they have, what 'areas of responsibility' do they not want to have, what are the holes in the overall space? It probably is the case that there are lots of orgs that work on AI governance/policy, and each of them probably is trying to consider a narrow corner of space, instead of trying to hold 'all of it'.

So if someone says "I have an idea how we should regulate medic... (read more)

🤔

Not sure if I'm the right person, but it seems worth thinking about how one would maybe approach this if one were to do it.

So the idea is to have an AI-Alignment PR/Social Media org/group/NGO/think tank/company that has the goal to contribute to a world with a more diverse set of high-quality ideas about how to safely align powerful AI. The only other organization roughly in this space that I can think of would be 80,000 hours, which is also somewhat more general in its goals and more conservative in its strategies.

I'm not a sales/marketing person, but a... (read more)

3Vaniver1y
...yet!
6Celenduin1y
On second thought: Don't we have orgs that work on AI governance/policy? I would expect them to have more likely the skills/expertise to pull this off, right?

I wonder if we could be much more effective in outreach to these groups?

Like making sure that Robert Miles is sufficiently funded to have a professional team +20% (if that is not already the case). Maybe reaching out to Sabine Hossenfelder and sponsoring a video, or maybe collaborating with her on a video about this. Though I guess given her attitude towards the physics community, working with her might be a gamble and a double-edged sword. Can we get market research on what influencers have a high number of followers among ML researchers/physicists/mathematicians ... (read more)

Vaniver1yΩ91813

Not saying that this should be MIRI's job, rather stating that I'm confused because I feel like we as a community are not taking an action that would seem obvious to me. 

I wrote about this a bit before, but in the current world my impression is that actually we're pretty capacity-limited, and so the threshold is not "would be good to do" but "is better than my current top undone item". If you see something that seems good to do that doesn't have much in the way of unilateralist risk, you doing it is probably the right call. [How else is the field going to get more capacity?]

1KatWoods1y
Good catch! Yeah, I'm switching to .org instead of .co and the re-direct link is currently not working for some obscure reason I'm working on. In the meantime, I've updated the link and this is the new one here http://www.katwoods.org/home/june-14th-2019

Thank you for providing these updates. Not being well versed myself in reading prediction markets and drawing conclusions from them, I appreciate your perspective on it and your sharing the thoughts behind that perspective.

I'm seeing quite a few reports that the US is supplying loitering munitions, specifically Switchblade drones, to Ukraine. Would that fall under your definition of "small drones with AI", or are you thinking of something else?

1zby1y
I must admit I am not an expert in this, but I would assume that the low-hanging fruit is patrolling bots with AI for spotting the enemy. The advantage of using AI is twofold: one is relieving people from paying attention to the video feeds; the other is that it would compress the communication needs, since the bot would only need to communicate after it spots something interesting and would do everything else autonomously. I don't know if the loitering munitions have such capabilities; Wikipedia only says "Switchblade has sensors to help spot enemy fighters", and it might be classified.

No, neither of them was right or wrong. That's just not how probabilities work and simplifying in that way confuses what's going on.

By "wrong" here I mean "incorrectly predicted the future". If there is a binary event, and I predicted the outcome A, but the reality delivered the outcome B, then I incorrectly predicted the future.

Maybe an intuition pump for what I think Christian is pointing at:

  1. Assuming you have a six-faced die, you predict that the probability that your next roll is a 6 (and not one of the other faces) is about 16.67%.
  2. Then you roll the die, and the face with the 6 comes up on top.

Was your prediction wrong?

3RomanS1y
Thanks! I think I now see the root of the confusion. These are two closely related but different tasks:
* predicting the outcome of an event
* estimating the probability of the outcome

In your example, the tasks could be completed as follows:
* "the next roll will be a 6" (i.e. I know it because the die is unfair)
* "the probability of a 6 is about 16.67%" (i.e. I can correctly calculate it because the die is fair)

If one is trying to predict the future, one could fail either (or both) of the tasks. In the situation where people were trying to predict whether Russia would invade Ukraine, some of them got the probability right but failed to predict the actual outcome. And the aforementioned pundits failed both tasks (in my opinion), because for a well-informed person it was already clear that Russia would invade with a probability much higher than 40%.
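One standard way to make this distinction concrete is a proper scoring rule such as the Brier score, which rewards calibrated probabilities rather than just picking the outcome that happened. A sketch (my example, not from the thread):

```python
# Brier score: squared error between forecast probability and outcome.
# Lower is better; it rewards calibration, not just guessing the
# outcome that happened to occur.
def brier(forecast, outcome):
    return (forecast - outcome) ** 2

# Fair-die example: forecasting P(roll a 6) = 1/6, then a 6 comes up.
print(brier(1/6, 1))  # ~0.694 on this single roll...

# ...but over many rolls the honest 1/6 forecast beats always saying 0.9:
rolls = [1, 0, 0, 0, 0, 0] * 100  # a 6 exactly 1/6 of the time
honest = sum(brier(1/6, r) for r in rolls) / len(rolls)
overconfident = sum(brier(0.9, r) for r in rolls) / len(rolls)
print(honest, overconfident)  # honest ~0.139 < overconfident ~0.677
```

So a 16.67% forecast that "loses" on one roll can still be the best possible forecast over repeated rolls, which is the sense in which getting the probability right differs from predicting the outcome.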

Thanks!

Regarding the likelihood of a substantial ceasefire soon and Putin's continued presidency: recent news makes it seem to me like Putin's administration could be starting to lay the rhetorical groundwork for an exit. Particularly these bits:
1.: Russia announced that it will reduce its operations around Kyiv. I think I read somewhere that they claimed something like "The attack on Kyiv was only made in order to bind Ukrainian troops there", but I can't find the source now.
2.: Focussing on the Donetsk region. Actually getting control there seems realist... (read more)

1Primer1y
A different perspective: Putin might cease to be the Russian president due to a bunch of reasons (health, assassination, coup, ...). One of those reasons is "overthrown due to military defeat of the Russian army in Kyiv". Now the defeat kind of happened, but Putin is still president. How should we update here? One might well argue: there are worlds in which failure to take Kyiv led to Putin being overthrown quickly. We're not in one of those worlds, so his chances of staying in power go up.

This encounter with the guards at the border sounds scary. I'm glad you got through safely.
I hope your new location can provide some respite to you and your family 🌸