As far as I know, the practicality of pumped storage is extremely location-dependent. Building it on flat land would require moving enormous amounts of soil to create the artificial mountain for it. There is also the issue of evaporation.
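To see why flat terrain is a problem, a quick back-of-the-envelope calculation helps: the recoverable energy is just gravitational potential energy, E = m·g·h, so with a small head (height difference) the energy density of the water is tiny. A minimal sketch, using an illustrative 100 m head chosen by me (not from the comment above):

```python
def pumped_hydro_kwh_per_m3(head_m: float, efficiency: float = 1.0) -> float:
    """Gravitational energy (kWh) recoverable per cubic metre of water.

    Assumptions (illustrative, not from the original comment):
    water density 1000 kg/m^3, g = 9.81 m/s^2, and an optional
    round-trip efficiency factor (default 1.0 = lossless).
    """
    water_kg = 1000.0              # mass of 1 m^3 of water
    g = 9.81                       # gravitational acceleration, m/s^2
    joules = water_kg * g * head_m * efficiency
    return joules / 3.6e6          # convert J -> kWh

# Even with a 100 m head, one cubic metre of water stores only
# about 0.27 kWh - hence the need for large reservoirs and real hills.
print(round(pumped_hydro_kwh_per_m3(100.0), 4))
```

With no natural elevation, you would have to build that 100 m of head yourself, which is where the "enormous amounts of soil" come in.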

Another storage method to consider for your scenario would be molten-salt thermal storage: heat salt with excess energy, then use the hot salt to drive a steam turbine when you need the energy back. https://en.wikipedia.org/wiki/Thermal_energy_storage
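For a rough sense of scale, the stored heat is Q = m·c·ΔT. A minimal sketch, using figures I am assuming for "solar salt" (roughly 60% NaNO3 / 40% KNO3: specific heat about 1.5 kJ/(kg·K), an operating range of roughly 290–565 °C, and about 40% steam-cycle efficiency) — none of these numbers are from the comment above:

```python
def salt_storage_kwh_per_tonne(delta_t_k: float = 275.0,
                               specific_heat_kj: float = 1.5,
                               turbine_eff: float = 0.40):
    """Return (thermal_kwh, electric_kwh) stored per tonne of molten salt.

    Default parameters are assumed ballpark figures for "solar salt",
    not values taken from the original comment.
    """
    mass_kg = 1000.0                                       # one tonne
    thermal_kj = mass_kg * specific_heat_kj * delta_t_k    # Q = m * c * dT
    thermal_kwh = thermal_kj / 3600.0                      # kJ -> kWh
    electric_kwh = thermal_kwh * turbine_eff               # steam-cycle losses
    return thermal_kwh, electric_kwh

thermal, electric = salt_storage_kwh_per_tonne()
# Roughly 115 kWh of heat, ~46 kWh of electricity, per tonne of salt.
```

So recovering utility-scale amounts of electricity takes thousands of tonnes of salt, which is consistent with the large tank farms at existing concentrated-solar plants.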

This would seem to be related to "Knowing when to lose" from HPMOR.

Is there a dedicated wiki (or "subject encyclopedia") for Project Lawful? I feel like collecting dath ilan concepts (like multi-agent-optimal boundary) might be valuable. This could include both an in-universe summary and context for each concept, and an out-of-universe explanation with references to introductory texts or research papers as needed.

One pivotal act, maybe slightly weaker than "develop nanotech and burn all GPUs on the planet", could be: "develop neuralink+ and hook smart AI-alignment researchers up to enough compute that they become smart enough to actually solve all these issues and develop truly safely aligned powerful AGI".

While developing neuralink+ would still be very powerful, maybe it could sidestep a few of the problems by virtue of being physically local rather than having to act on the entire planet? Of course, this comes with its own set of issues, because we would now have superhumanly powerful entities that may still have human (dark) impulses.

Not sure if that would be better than our reference scenario of doom or not.

On second thought: don't we already have orgs that work on AI governance/policy? I would expect them to be more likely to have the skills and expertise to pull this off, right?


Not sure if I'm the right person, but it seems worth thinking about how one would maybe approach this if one were to do it.

So the idea is to have an AI-Alignment PR/Social Media org/group/NGO/think tank/company that has the goal to contribute to a world with a more diverse set of high-quality ideas about how to safely align powerful AI. The only other organization roughly in this space that I can think of would be 80,000 hours, which is also somewhat more general in its goals and more conservative in its strategies.

I'm not a sales/marketing person, but as I understand it, the usual metaphor to use here is a funnel?

  • Starting with maybe ads / sponsoring trying to reach the right people[0] (e.g. I saw Jane Street sponsor Matt Parker)
  • then narrowing down step by step, first introducing people to why this is an issue (orthogonality, instrumental convergence)
  • hopefully having them realize for themselves, guided by arguments, that this is an issue that genuinely needs solving and maybe their skills would be useful
  • increasing the math as needed
  • finally, somehow selecting for self-reliance and providing a path for how to get started with thinking about this problem by themselves / model building / independent research
    • or otherwise improving the overall situation (convince your congress member of something? run for congress? ...)

That would probably include copywriting (or hiring or contracting copywriters) to go over a number of our documents and make them more digestible and actionable.

So, I'm probably not the right person to get this off the ground, because I don't have a clue about any of this (not even entrepreneurship in general), but it does seem like a thing worth doing, and maybe like an initiative that would get funding from whoever funds such things these days?

[0] Though maybe we should also work toward a better understanding of who "the right people" are? Given that our current crowd of ML researchers/physicists/mathematicians has not been able to solve it, maybe it is time to consider broadening our net in a responsible way.

I wonder if we could be much more effective in outreach to these groups?

Like making sure that Robert Miles is sufficiently funded to have a professional team +20% (if that is not already the case). Maybe reaching out to Sabine Hossenfelder and sponsoring a video, or collaborating with her on a video about this. Though, given her attitude towards the physics community, working with her might be a gamble and a double-edged sword. Can we get market research on which influencers have a high number of followers among ML researchers/physicists/mathematicians, and then work with or sponsor them?

Or maybe micro-target this demographic with facebook/google/github/stackexchange ads and point them to something?

I don't know, I'm not a marketing person, but I feel like I would have seen much more of these things if we were doing enough of them.

Not saying that this should be MIRI's job; rather, I'm confused because we as a community are not taking an action that seems obvious to me, especially given how recent advances in published AI capabilities make the problem even more legible. Is the reason for not doing it really just that we're all a bunch of nerds who are bad at this kind of thing, or is there more to it that I'm missing?

While I see that such outreach carries a real risk of increasing the amount of noise, I wonder if that tradeoff shifts as timelines get shorter, given that we don't seem to have a better plan than "have a diverse set of smart people come up with novel ideas of their own and hope that one of them works out". So taking steps to draw a somewhat more diverse group of people into the conversation might be worth it?

Thank you for providing these updates. Not being well versed myself in reading prediction markets and drawing conclusions from them, I appreciate your perspective and your sharing the thinking behind it.

I'm seeing quite a few reports that the US is supplying loitering munitions, specifically Switchblade drones, to Ukraine. Would those fall under your definition of "small drones with AI", or are you thinking of something else?
