Comments

I always thought that in naive MWI what matters is not whether something happens in an absolute sense, but how much Born measure is concentrated on branches that contain good things rather than bad things.

Timothy Lee struggles to ground out everything in the real world.

Timothy Lee: The last year has been a lot of cognitive dissonance for me. Inside the AI world, there’s non-stop talk about the unprecedented pace of AI improvement. But when I look at the broader economy, I struggle to find examples of transformative change I can write about.

 

Electricity wasn't in wide industrial use until the 1910s, despite the technology being very promising from the start. The reason was the different infrastructure required by steam-powered versus electric factories.

I think the same holds for LLMs: you need specific wrappers and/or experience to make them productive, these wrappers are hard to scale, so most of the surplus is going to dissipate into consumer surplus plus rising incomes for productive workers.

The simplest (in the conceptual sense) way to integrate AI into the economy is to make it self-integrating: instead of having humans decide which inputs the AI should get and where its outputs should be directed, you have an AI agent that decides for itself.

I mean, the problem is that if it works, we won't hear about such people - they just live happily ever after and don't talk about that uncomfortable period of their lives.

Another constraint comes from computational complexity: should we treat things that are not polynomial-time computable as basically unknowable? Humans certainly can't solve NP-complete problems efficiently.

Generalized chess is EXPTIME-complete, and while an "exact solution" to chess may be unavailable, we are pretty good at constructing chess engines.

When I read the word "bargaining" I assume we are talking about entities that have preferences and an action set, hold beliefs about the relation between actions and preferences, and exchange information (modulo acausal interaction) with other entities of the same composition. For example, Kelly betting is good because it is equivalent to Nash bargaining between versions of yourself inside different outcomes, and this is good because we assume that you-in-different-outcomes are actually agents with all the attributes of an agentic system. Saying "systems consist of parts, these parts interact, and sometimes the result is a horrific incoherent mess" is true, but doesn't convey much useful information.
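The Kelly/Nash connection above can be illustrated numerically. This is a minimal sketch with assumed toy numbers (win probability p = 0.6 and even net odds b = 1, neither from the comment): maximizing expected log wealth, which Kelly prescribes, is the same as maximizing the probability-weighted product of outcome wealths, i.e. a Nash-bargaining-style product between your two outcome-selves.

```python
# Toy illustration (assumed numbers): a binary bet with win probability p
# and net odds b. Kelly maximizes E[log wealth]; exponentiating, that is
# the same as maximizing the weighted product of outcome wealths --
# the Nash-bargaining-style objective between outcome-selves.
p, b = 0.6, 1.0

def outcome_product(f):
    # (1 + f*b)^p * (1 - f)^(1-p): weighted product of your wealth
    # in the win branch and in the loss branch, betting fraction f
    return (1 + f * b) ** p * (1 - f) ** (1 - p)

# Brute-force the maximizer over a fine grid of bet fractions
grid = [i / 10000 for i in range(10000)]
f_best = max(grid, key=outcome_product)

# Closed-form Kelly fraction for comparison: f* = p - (1-p)/b
f_kelly = p - (1 - p) / b
```

On these numbers both routes land on f ≈ 0.2, and the agreement becomes exact as the grid gets finer.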

I feel like the whole "subagent" framework suffers from the homunculus problem: we fail to explain behavior using the abstraction of a coherent agent, so we move to the abstraction of multiple coherent agents, and while that can be useful, I don't think it reflects the actual mechanistic truth about minds.

When I plan something and then fail to execute the plan, it's mostly not a "failure to bargain". It's just that when I plan, I usually have the good consequences of the plan in my imagination, and these consequences make me excited; then I start executing the plan and get hit by multiple unpleasant details of reality. Coherent structure emerges from multiple not-really-agentic pieces.

It doesn't matter? Like, if your locations are identical (say, simulations of the entire observable universe, and you never find any difference no matter "where" you are), your weight is exactly the weight of the program. If you expect differences, you can select some kind of simplicity prior to weight these differences, because that is basically no different from "list all programs for this UTM and run them in parallel".
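The equivalence gestured at here is the standard one for a prefix-free universal machine: giving each program of length L prior weight 2^-L is the same as feeding uniformly random bits to the UTM and running everything in parallel. A minimal sketch (the four-program code below is a hypothetical example, not any particular UTM):

```python
# Length-based simplicity prior: a program encoded in L bits gets weight
# 2**-L, i.e. the probability that L uniformly random bits spell it out.
def simplicity_weight(program_bits: str) -> float:
    return 2.0 ** -len(program_bits)

# For any prefix-free set of programs these weights sum to at most 1
# (Kraft inequality); this hypothetical complete code sums to exactly 1.
programs = ["0", "10", "110", "111"]
total = sum(simplicity_weight(prog) for prog in programs)
```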

Okay, we have wildly different models of the tech tree. In my understanding, to make mind uploads you need Awesome Nanotech, and if you have misaligned AIs and even not-so-awesome nanotech, that's sufficient to kill all humans and start disassembling Earth. The only coherent scenario I can imagine in which misaligned AIs actually participate in the human economy in meaningful amounts is one where you can't design nanotech without continent-sized supercomputers.

But it still feels like the lesson could be summarized as: "talk like everyone outside the rationalist community does all the time".

If non-rationalist people knew it all along, there would be no need to write such books.

On the other hand, I think that if an average rationalist tries to give a speech from pure inspiration, the result is going to be weird. Take, for example, HJPEV's speech before the first battle. HJPEV got away with it because he had the reputation of the Boy Who Lived and had already pulled off some awesome shenanigans, so his weird speech earned him weirdness points instead of losing them; but it's not a trick the average rationalist should try on their first attempt at an inspiring speech.

It's kind of an ill-formed question, because you can get the same performance by computing moves for longer at lower power. I guess you are searching for something like "energy per move".
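The "energy per move" framing is just power times thinking time, so two engine settings can match on energy while differing in wall-clock speed. A trivial sketch with made-up numbers:

```python
# Made-up numbers: a "fast" engine at high power and a "slow" engine at
# low power can spend the same energy per move (energy = power * time).
power_watts = {"fast": 100.0, "slow": 25.0}
seconds_per_move = {"fast": 1.0, "slow": 4.0}
energy_joules = {k: power_watts[k] * seconds_per_move[k] for k in power_watts}
```

Both settings come out to 100 J per move, which is why comparing configurations by raw power alone is ill-formed.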
