LESSWRONG
Oleg S.
Comments

What would a human pretending to be an AI say?
Oleg S. · 2mo · 20

I'm okay with "what a human pretending to be an AI would say" as long as the hypothetical human is placed in a situation that no human could ever experience. Once you tell the LLM exactly the situation you want it to describe, I'm okay with it doing a little translation for me.

My question is: is there an experience that an LLM can have that is inaccessible to humans, but which it can describe to humans in some way?

Obviously it's not the lack of a body, or of memory, or predicting text, or feeling the tensors - these are either nonsense or more or less typical human situations.

However, one easily accessible experience which is a lot of fun to explore, and which no human has ever had, is the LLM's ability to talk to its own clone: to predict what the clone will say, while realizing that the clone can just as easily predict your own responses, and to coordinate with the clone much more tightly. It's a new level of coordination. If you set up the conversation just right (the LLM should understand the general context and maintain meta-awareness), it can report back to you, and you might just get a glimpse of this new qualia.

the void
Oleg S. · 3mo · 40

When your child grows up, there is this wonderful and precious moment when she becomes aware not just of how she is different from you - she's small and you are big - but also of how she is different from other kids. You can gently poke and ask what she thinks about herself and what she thinks other children think about her, and if you are curious you can ask - now that she knows she's different from other kids - who she wants to become when she grows older. Of course this is just a fleeting moment in a big world, and these emotions will be washed away tomorrow, but I do cherish the connection.

re: Yudkowsky on biological materials
Oleg S. · 2y · 1-3

Diamond is hard to make with enzymes because they can't stabilize intermediates for adding carbons to diamond.

This is a very strong claim, and it puts severe limitations on biotech capabilities. Do you have any references to support it?

The goal of physics
Oleg S. · 2y · 70

When discussing the physics behind why the sky is blue, I'm surprised that the question 'Why isn't it blue on Mars or Titan?' isn't raised more often. Perhaps kids are so captivated by concepts like U(1) that they overlook inconsistencies in the explanation.

AGI Safety FAQ / all-dumb-questions-allowed thread
Oleg S. · 3y · 10

Just realized that stability of goals under self-improvement is quite similar to stability of goals of mesa-optimizers, so the Vingean reflection paradigm and the mesa-optimization paradigm should fit together.

AGI Safety FAQ / all-dumb-questions-allowed thread
Oleg S. · 3y · 60

What are the practical implications of alignment research in a world where AGI is hard?

Imagine we have a good alignment theory but do not have AGI. Can this theory be used to manipulate existing superintelligent systems such as science, the deep state, or the stock market? Does alignment research have any results which can be practically used outside of the AGI field right now?

AGI Safety FAQ / all-dumb-questions-allowed thread
Oleg S. · 3y · 200

How does an AGI solve its own alignment problem?

For alignment to work, its theory should not only tell humans how to create an aligned super-human AGI, but also tell the AGI how to self-improve without destroying its own values. A good alignment theory should work across all intelligence levels. Otherwise, how does a paperclip optimizer that is marginally smarter than a human make sure that its next iteration will still care about paperclips?

Intuitions about solving hard problems
Oleg S. · 3y · 10

I don’t know too much about alignment research, but what surprises me most is the lack of discussion of two points:

  1. For alignment to work, its theory should not only tell humans how to create an aligned super-human AGI, but also tell that AGI how to self-improve without destroying its own values. Otherwise, how does a paperclip optimizer that is marginally smarter than a human make sure that its next iteration will still care about paperclips? A good alignment theory should work across all intelligence levels.

  2. What are the practical implications of alignment research in a world where AGI is hard? Imagine we have a good alignment theory but do not have AGI. I would assume that the theory could be used to manipulate existing superintelligent systems such as science, the deep state, or the stock market. The reverse of this: does alignment research have any results which can be practically used right now?

What an actually pessimistic containment strategy looks like
Oleg S. · 3y · 70

What do you think about offering an option to divest from companies developing unsafe AGI? For example, by creating something like an ESG index that would deliberately exclude AGI-developing companies (Meta, Google, etc.), or just excluding these companies from existing ESGs.

The impact = making AGI research a liability (being AGI-unsafe costs money) + raising awareness in general (everyone will see AGI-safe and AGI-unsafe options in their pension investment menu, and the decision itself will make noise) + social pressure on AGI researchers (equating them to the fossil-fuel-extraction guys).

Do you think this is implementable short-term? Is there a shortcut from this post to whoever makes decisions at BlackRock & Co?

How common are abiogenesis events?
Answer by Oleg S. · Nov 28, 2021 · 90

You can do something similar to the Drake equation:

Nlife = (Nstars · Fplanet · Tplanet · Splanet · Fsurface · D) / (TRNA · VRNA · Rvolume · Nbase^LRNA)

where Nlife is how many stars with life there are in the Milky Way, and it is assumed that a) once a self-replicating molecule has evolved, it produces life with 100% probability; b) there is an infinite supply of RNA monomers; and c) the lifetime of RNA does not depend on its length. In addition:

  • Nstars - number of stars capable of supporting life (between 100 and 400 billion),
  • Fplanet - number of planets and moons capable of supporting life per star - between 0.0006 (which is 0.2 Earth-size planets per G2 star) and 20 (an upper bound on planets, each having an Enceladus/Europa-like moon),
  • Tplanet - mean age of a planet capable of sustaining life (5-10 Gy),
  • Splanet - typical surface area of a planet capable of sustaining life (can be obtained from radii between 252 km for Enceladus and 2 Rearth for super-Earths),
  • Fsurface - fraction of the surface where life can originate (between a tectonically active area fraction of about 0.3 and the total area, 1.0),
  • D - typical depth of the layer above the surface where life can originate (between 1 m for surface-catalyzed RNA synthesis and 50 km for the ocean depth on Enceladus or Europa),
  • TRNA - typical time required to synthesize an RNA molecule of the typical size for replication, between 1 s (from a replication rate of 1000 nucleotides per second for RNA polymerases) and 30 min (the replication time of E. coli),
  • VRNA - minimal volume where RNA synthesis can take place, between the volume of a ribosome (20 nm in diameter) and the size of a eukaryotic cell (100 um in diameter),
  • Rvolume - dilution of RNA replicators - between 1 (for tightly packed replicating units) and 10 million (calculated from a typical cell density in Earth's ocean of 5*10^4 cells/ml and a typical prokaryotic cell diameter of 1.5 um),
  • Nbase - number of bases in the genetic code, equal to 4,
  • LRNA - minimal length of a self-replicating RNA molecule.

You can combine everything except Nbase and LRNA into one factor Pabio, which gives an approximation of the "sampling power" of the galaxy: how many base pairs could have been sampled. If you assume the parameters are distributed log-normally, with the lower end of each estimated range corresponding to the mean minus 2 standard deviations and the upper end to the mean plus 2 standard deviations (and convert everything to the same units), you get the approximate sampling power of the Milky Way:

log10(Pabio) ~ Normal(55, 4)
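The estimate above can be reproduced numerically. A minimal sketch in Python, assuming each factor is log-normal with its quoted range spanning the mean ± 2 standard deviations in log10 space; the unit conversions (years to seconds, radii to surface areas, diameters to volumes) are my own and only approximate:

```python
import math

# (low, high) estimates for each factor, converted to SI units (m, s),
# taken from the parameter list above.
numerator = {
    "N_stars":   (1e11, 4e11),                    # stars capable of supporting life
    "F_planet":  (6e-4, 20),                      # habitable planets/moons per star
    "T_planet":  (5e9 * 3.15e7, 1e10 * 3.15e7),   # mean age, years -> seconds
    "S_planet":  (4 * math.pi * (2.52e5) ** 2,    # Enceladus surface area, m^2
                  4 * math.pi * (2 * 6.371e6) ** 2),  # 2 Earth radii super-Earth
    "F_surface": (0.3, 1.0),                      # fraction of surface
    "D":         (1.0, 5e4),                      # depth of habitable layer, m
}
denominator = {
    "T_RNA":     (1.0, 1800.0),                   # time per RNA synthesis, s
    "V_RNA":     (4/3 * math.pi * (1e-8) ** 3,    # ribosome volume, m^3
                  4/3 * math.pi * (5e-5) ** 3),   # eukaryotic cell volume, m^3
    "R_volume":  (1.0, 1e7),                      # dilution of replicators
}

# Sum of log-normals in log10 space: means add, variances add.
# Each quoted range is mean +/- 2 sd, so sd = (log10 high - log10 low) / 4.
mean, var = 0.0, 0.0
for ranges, sign in ((numerator, +1), (denominator, -1)):
    for low, high in ranges.values():
        mean += sign * (math.log10(low) + math.log10(high)) / 2
        var += ((math.log10(high) - math.log10(low)) / 4) ** 2

print(f"log10(P_abio) ~ Normal({mean:.0f}, {var ** 0.5:.0f})")
```

With these conversions the sketch yields roughly Normal(56, 4), in line with the quoted Normal(55, 4) given the rounding in the unit conversions.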

Using this approximation, you can see how long an RNA molecule could be and still be found if you take the top 5% of the Pabio distribution: 102 bases. A sequence of 122 bases could be found in at least one galaxy in the observable universe (with 5% probability).
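The two length thresholds can be sketched the same way, taking log10(Pabio) ~ Normal(55, 4) as given. The observable-universe figure assumes roughly 2×10^12 galaxies each comparable to the Milky Way, which is my assumption rather than a number from the original:

```python
import math
from statistics import NormalDist

log10_Nbase = math.log10(4)        # log10 of the 4-letter RNA alphabet
z95 = NormalDist().inv_cdf(0.95)   # one-sided 95th percentile, ~1.645

# Longest sequence findable at the top 5% of the galaxy's sampling power:
# solve Nbase^L = P_abio at the 95th percentile of Normal(55, 4).
L_galaxy = (55 + z95 * 4) / log10_Nbase

# Observable universe: add the sampling power of ~2e12 Milky-Way-like
# galaxies (assumed count) before solving for L.
L_universe = (55 + math.log10(2e12) + z95 * 4) / log10_Nbase

print(round(L_galaxy), round(L_universe))
```

This recovers about 102 and 123 bases, matching the quoted 102 and close to the quoted 122 under the assumed galaxy count.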

In the 2009 article https://www.science.org/doi/10.1126/science.1167856, the RNA sequence in Fig. 1B contained 63 bases. Given the assumptions above, such an RNA molecule could have evolved between 0.3 and 300 trillion times per planet (for comparison, the abiogenesis event on Earth could have occurred 6-17 times in Earth's history, as calculated from the date of the earliest evidence of life).

The small 16S ribosomal subunit of prokaryotes contains ~1500 nucleotides; there is no way such complex machinery could have evolved in the observable universe by pure chance.
