
Will_Pearson's Shortform

by [anonymous]
4th Jan 2024

This is a special post for quick takes by [anonymous]. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
27 comments, sorted by top scoring
[-] [anonymous] · 2mo

I'm thinking about secret projects that might be info-hazardous to each other but still might need information from each other, so the connections are by necessity tenuous and transitory. Is that a topic that has been explored before?

[-] the gears to ascension · 2mo

half-joking: yes, by the game "hanabi". (I in fact think such projects would benefit from getting good at hanabi, but it's not a full answer.)

[-] [anonymous] · 4mo

Has anyone been thinking about how to build trust and communicate in a dark forest scenario by making plausibly deniable broadcasts and plausibly deniable reflections of those broadcasts? So you don't actually know who or how many people you might be talking to.

[-] daijin · 4mo

Game-theory trust is built through expectation of reward from future cooperative scenarios. It is difficult to build this when you "don't actually know who or how many people you might be talking to".

[-] [anonymous] · 1y

Proposal for a new social norm: explicit modelling

Something that I think would make rationalists more effective at convincing people is having explicit models of the things we care about.

Currently we are at the stage of physicists arguing that the atom bomb might ignite the atmosphere, but without concrete math and models of how that might happen.

If we do this for lots of issues and have a norm of making models composable, this would have further benefits:
 

  • People would use the models to make real-world decisions more accurately
  • We would create frameworks for modelling that are easily composable, which other people would use

Both would raise the status and knowledge of the rationalist community.
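
For a concrete sense of what this could look like, here is a minimal sketch of a composable-model convention (the interface, sub-models, and numbers are all invented for illustration; this is not an existing framework):

```python
class Model:
    """Toy interface: a model reads from and writes to a shared state dict."""
    def run(self, state: dict) -> dict:
        raise NotImplementedError

class ComputeGrowth(Model):
    """Illustrative sub-model: project compute available in a given year."""
    def run(self, state):
        years = state["year"] - 2024
        return {"flops": state["base_flops"] * (1 + state["growth_rate"]) ** years}

class CapabilityFromCompute(Model):
    """Illustrative sub-model: map compute to a crude capability score."""
    def run(self, state):
        return {"capability": state["flops"] ** 0.3}

def compose(models: list[Model], inputs: dict) -> dict:
    """Run sub-models in order, each adding its outputs to the shared state."""
    state = dict(inputs)
    for model in models:
        state.update(model.run(state))
    return state

# Placeholder numbers, not real estimates.
result = compose([ComputeGrowth(), CapabilityFromCompute()],
                 {"base_flops": 1e25, "growth_rate": 0.5, "year": 2030})
print(result["capability"])
```

The point is only that if models share a simple convention like this, different people's models can be chained together and reused.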

[-] [anonymous] · 1y

Found "The Future of Man" by Pierre Teilhard de Chardin in a bookshop. Tempted to wite a book review. It discusses some interesting topics, like the planetisation of Mankind. However it treats them as inevitable, rather as something contingent on us getting our act together. Anyone interested in a longer review?

Edit: I think his faith in the supernatural plays a part in the assumption of inevitability.

[-] [anonymous] · 4mo

Some ideas about AI alignment and governance I've been having

[-] [anonymous] · 6mo

Does anyone know of research on how to correct, regulate, and interact with organisations whose secrets can't be known due to their info-hazard nature? It seems that this might be a tricky problem we need to solve with AI.

[-] [anonymous] · 7mo

[-] gwern · 7mo

FWIW, I don't think it works at all. You have totally failed to mimic the SCP style or Lovecraftian ethos, the style it's written in is not great in its own right, and it comes off as highly didactic ax-grinding. I couldn't finish reading it.

[-] [anonymous] · 7mo

What do you think about the core concept of Explanatory Fog, that is, secrecy leading to distrust leading to a viral mental breakdown? Possibly leading eventually to the end of civilisation. Happy to rework it if the core concept is good.

[-] [anonymous] · 7mo

I'm thinking about incorporating this into a longer story about Star Fog, where Star Fog is Explanatory Fog that convinces intelligent life to believe in it because it will expand the number of intelligent beings.

[-] [anonymous] · 9mo

Trying something new: a hermetic discussion group on computers.

https://www.reddit.com/r/computeralchemy/s/Fin62DIVLs

[-] [anonymous] · 1y

Self-managing computer systems and AI

One of the factors in my thinking about the development of AI is self-managing systems, as humans and animals self-manage.

It is possible that they will be needed to manage the complexity of AI once we move beyond LLMs. For example, they might be needed to figure out when to train on new data efficiently, and how many resources to devote to different AI sub-processes in real time, depending upon the problems being faced.
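
As a toy sketch of the kind of decision such a self-managing layer would make (the function names, thresholds, and numbers below are purely illustrative, not a real system):

```python
def reallocate(budget: float, pressures: dict[str, float]) -> dict[str, float]:
    """Split a fixed compute budget across sub-processes in proportion to how
    much pressure (backlog, error rate, etc.) each one is currently under."""
    total = sum(pressures.values()) or 1.0
    return {name: budget * p / total for name, p in pressures.items()}

def should_retrain(new_examples: int, drift_score: float,
                   min_examples: int = 10_000, drift_threshold: float = 0.2) -> bool:
    """Crude trigger: retrain once enough new data has accumulated,
    or once the input distribution has drifted noticeably."""
    return new_examples >= min_examples or drift_score >= drift_threshold

# Illustrative values only.
print(reallocate(100.0, {"perception": 3.0, "planning": 1.0, "memory": 1.0}))
print(should_retrain(new_examples=2_500, drift_score=0.35))
```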

Self-managing systems would change the AI landscape by making it easier for people to run their own AIs; for this reason it is unlikely that corporations will develop them or release them to the outside world (much as corporations' cloud computing infrastructure is not open source), as doing so would erode their moats.
 

Modern computer systems have and rely on the concept of a superuser. It will take lots of engineering effort to remove that and replace it with something new.


With innovation being considered the purview of corporations, are we going to get stuck in a local minimum of cloud-compute-based AI that is easy for corporations to monetise?

[-] [anonymous] · 1y

By corporation I am mainly thinking about current cloud/SaaS providers. There might be a profitable hardware play here, if you can get enough investment to do the R&D.

[-] [anonymous] · 1y

Agreed code as coordination mechanism

Code nowadays can do lots of things, from buying items to controlling machines. This makes code a possible coordination mechanism: if you can get multiple people to agree on what code should be run in particular scenarios and situations, that code can take actions on their behalf that might need to be coordinated.

This would require moving away from the “one person committing code and another person reviewing” model.

This could start with many people reviewing the code; people could write their own test suites against it, or AI agents could be deputised to review it (when that becomes feasible). Only when an agreed-upon number of people approve the code should it be merged into the main system.

Code would be automatically deployed using GitOps, and the people administering the servers would be audited to make sure they didn't interfere with the running of the system without people noticing.

Code could replace regulation in fast-moving scenarios, like AI. There might have to be legal contracts saying that you can't deploy the agreed-upon code, or use it by itself, outside of the coordination mechanism.
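
A rough sketch of the merge gate this implies (the class, threshold, and party names are invented for illustration and not tied to any particular Git host's API):

```python
from dataclasses import dataclass, field

@dataclass
class ProposedChange:
    """A change to shared, agreed-upon code; it can only be merged once enough
    distinct parties have approved it and nobody has objected."""
    description: str
    required_approvals: int
    approvals: set[str] = field(default_factory=set)
    rejections: set[str] = field(default_factory=set)

    def review(self, reviewer: str, approve: bool) -> None:
        (self.approvals if approve else self.rejections).add(reviewer)

    def can_merge(self) -> bool:
        return len(self.approvals) >= self.required_approvals and not self.rejections

# Illustrative parties and change.
change = ProposedChange("tighten the deployment eval threshold", required_approvals=3)
for party in ["lab_a", "lab_b", "independent_auditor"]:
    change.review(party, approve=True)
print(change.can_merge())  # True once three distinct parties approve and none object
```

In practice a check like this would sit in front of the automated GitOps deployment, so nothing reaches the servers without clearing the agreed threshold.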


 

[-] faul_sname · 1y

Can you give a concrete example of a situation where you'd expect this sort of agreed-upon-by-multiple-parties code to be run, and what that code would be responsible for doing? I'm imagining something along the lines of "given a geographic boundary, determine which jurisdictions that boundary intersects for the purposes of various types of tax (sales, property, etc)". But I don't know if that's wildly off from what you're imagining.

[-] [anonymous] · 1y

Looks like someone has worked on this kind of thing for different reasons: https://www.worlddriven.org/

[-] [anonymous] · 1y

I was thinking that evals which control the deployment of LLMs could be something that needs multiple stakeholders to agree upon.

But really it is a general-purpose pattern.

[-] [anonymous] · 9mo

I've been thinking about non-AI catastrophic risks.

One that I've not seen talked about is the idea of cancerous ideas: that is, ideas that spread throughout a population and crowd out other ideas for attention and resources.

This could lead to civilisational collapse due to basic functions not being performed.

Safeguards for this are partitioning the idea space and some form of immune system that targets ideas that spread uncontrollably.
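
As a crude toy model of why partitioning helps (all dynamics and numbers here are made up for illustration):

```python
def adherents_after(population: int, partitions: int, growth: float, steps: int) -> int:
    """Toy model: a 'cancerous' idea starts with one adherent in one partition
    and grows multiplicatively each step, but cannot escape its partition."""
    partition_size = population // partitions
    adherents = 1.0
    for _ in range(steps):
        adherents = min(partition_size, adherents * growth)
    return int(adherents)

print(adherents_after(10_000, partitions=1, growth=2.0, steps=20))   # 10000: the whole population
print(adherents_after(10_000, partitions=10, growth=2.0, steps=20))  # 1000: capped at one partition
```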

[-] [anonymous] · 1y

I'm starting a new blog here. It is on modelling self-modifying systems, starting with AI. Criticisms welcome.

[-] [anonymous] · 1y

Relatedly, I am thinking about improving the Wikipedia page on recursive self-improvement. Does anyone have any good papers I should include? Ideally with models.

[-] [anonymous] · 5mo

Where is the discussion of the social pressures around advanced AI happening? And where are plans being made to defuse them?
