Comments

TheFigMaster · 2mo · Ω0 · 10

The thing we care about is how long it takes to get to agents. If we put lots of effort into making powerful Oracle systems or other non-agentic systems, we must assume that agentic systems will follow shortly. Someone will build them, even if you do not.

TheFigMaster · 2mo · Ω0 · 10

> if you don't do RL or other training schemes that seem designed to induce agentyness and you don't do tasks that use an agentic supervision signal, then you probably don't get agents for a long time

Is this really the case? If you imagine a perfect Oracle AI, which is certainly not agenty, it seems to me that with some simple scaffolding one could construct a highly agentic system. It would go something along the lines of the following (a rough sketch in code follows the list):

  1. Set up API access to 'things' which can interact with the real world.
  2. Ask the oracle 'What would be the optimal action if you want to do <insert-goal> via <insert-api-functions>?'
  3. Execute the actions it outputs.
  4. Add some kind of looping mechanism to feed observations from the world back into the oracle and account for them.
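To make this concrete, here is a minimal sketch of such a scaffold. Everything here is hypothetical: `query_oracle` is a stand-in for whatever oracle interface one has, and the `ACTIONS` registry is a toy placeholder for real-world API hooks, not any actual library.

```python
from typing import Callable, Dict

def query_oracle(prompt: str) -> str:
    """Stand-in for a purely question-answering oracle (e.g. a language model).

    Hypothetical: replace with a real oracle call.
    """
    raise NotImplementedError

# Step 1: API access to 'things' that can interact with the real world.
# Toy placeholders, not real integrations.
ACTIONS: Dict[str, Callable[[str], str]] = {
    "search_web": lambda query: f"<search results for {query!r}>",
    "send_email": lambda body: f"<email sent: {body!r}>",
}

def agent_loop(goal: str, max_steps: int = 10) -> str:
    observation = "none yet"
    for _ in range(max_steps):
        # Step 2: ask the oracle for the optimal next action toward the goal.
        answer = query_oracle(
            f"Goal: {goal}\n"
            f"Available actions: {sorted(ACTIONS)}\n"
            f"Last observation: {observation}\n"
            "Reply with '<action>: <argument>', or 'DONE' if finished."
        )
        if answer.strip() == "DONE":
            break
        name, _, argument = answer.partition(":")
        # Step 3: execute the action the oracle chose.
        observation = ACTIONS[name.strip()](argument.strip())
        # Step 4: loop, feeding the result back as the next observation.
    return observation
```

Note that the oracle itself never 'wants' anything; all the agency lives in this thin loop, which is why a powerful oracle plus trivial glue code already behaves like an agent.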

This is my line of reasoning for why AIS matters for language models in general.

> Correlation or causation?

Could you cite the studies that this section was based on? I would be interested in reading further, as this seems to be the sticking point for most people when it comes to the topic of GM for embryos.