LESSWRONG

jchan · 147 karma

Sequences: Bah-Humbug Sequence

Comments (sorted by newest)

Sublinear Utility in Population and other Uncommon Utilitarianism
jchan · 8d

so your qualia weren't that valuable

The OP isn't making any claim like this. The question isn't whether any particular experience has value in and of itself; the claim is only about the correct way to evaluate the total utility of a world containing multiple experiences.

By analogy, consider special relativity: If a train is moving at 0.75c relative to the ground, and a passenger on the train throws a ball forward at 0.5c, then the ball is moving at 0.91c relative to the ground - that is, it adds only 0.16c on top of the train's speed. But there is no reference frame in which the ball is "really" moving at 0.16c.
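(Spelling out the arithmetic behind those numbers, using the standard velocity-addition formula; values rounded:)

$$w \;=\; \frac{u+v}{1+uv/c^2} \;=\; \frac{0.75c+0.5c}{1+(0.75)(0.5)} \;\approx\; 0.91c, \qquad 0.91c - 0.75c \;\approx\; 0.16c.$$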

Or, more pertinently, suppose we have two identical simulations playing out at the same time. Each one contributes zero marginal utility to a world in which the other one exists (and might be told this by Omega), but that doesn't mean that the two of them together have zero utility.

Sublinear Utility in Population and other Uncommon Utilitarianism
jchan · 8d

I've had exactly the same thought before but never got around to writing it up. Thanks for doing it so I don't have to :-)

There are only so many possible human shaped computations that are valuable to me

I would surmise that value-space is not so much "finite in size" as that it fades off into the distance in such a way that it has a finite sum over the infinite space. This is because other minds are valuable to me insofar as they can do superrationality/FDT/etc. with me. In fact, this is the same fading-out function as in the "perturb the simulation" scenario; i.e.:

  • $V_A(AB)$ := the value that A places on a world where A and B both exist
  • $V_A(A\bar{B})$ := the value that A places on a world where A exists but B doesn't
  • $V_A(\bar{A}B)$ := the value that A places on a world where A doesn't exist but B does
  • Claim: $V_A(AB) - V_A(A\bar{B}) = V_A(A\bar{B}) - V_A(\bar{A}B)$
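(As a toy illustration of the "finite sum over an infinite space" idea above - the geometric decay here is an assumption for illustration, not a claim about the actual shape of the fading-out function - if the $n$-th additional mind contributes value proportional to $r^n$ for some $0 < r < 1$, the total is finite even though the space of minds is infinite:

$$\sum_{n=0}^{\infty} r^n \;=\; \frac{1}{1-r} \;<\; \infty.)$$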

However, the main problem with this perspective is what to do with quantum many-worlds. Does this imply that "quantum suicide" is rational, e.g. that you should buy a lottery ticket and set up a machine that kills you if you don't win? This is a bullet I don't want to bite (so to speak...)

How does the current AI paradigm give rise to the "superagency" that IABIED is concerned with?
jchan · 22d

Re 1a: Intuitively, what I mean by "lots of data" is something comparable in size to what ChatGPT was trained on (e.g. the Common Crawl, in the roughly 1 petabyte range); or rather, not just comparable in disk-space usage, but in the number of distinct events to which the learning process is applied. So when ChatGPT is being trained, each token (of which there are a ~quadrillion) is a chance to test the model's predictions and adjust the model accordingly. (Incidentally, the fact that humans are able to learn language with far less data input than this suggests that there's something fundamentally different in the way LLMs and humans work.)
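(A rough back-of-envelope sketch of where a figure like that comes from; the ~4 bytes-per-token average is my own assumption, and actual training runs use filtered subsets of the raw crawl:)

```python
# Back-of-envelope: how many tokens fit in a petabyte-scale text corpus?
corpus_bytes = 1e15      # ~1 PB, the Common Crawl-ish scale mentioned above
bytes_per_token = 4      # rough average for English text (assumption)
tokens = corpus_bytes / bytes_per_token
print(f"{tokens:.1e} tokens")  # ~2.5e14, i.e. on the order of 10^14 to 10^15
```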

Therefore, for a similarly-architected AI that generates action plans (rather than text tokens), we'd expect it to require a training set with a ~quadrillion different historical cases. Now I'm pretty sure this already exceeds the amount of "stuff happening" that has ever been documented in all of history.

I would change my opinion on this if it turns out that AI advancement is making it possible to achieve the same predictive accuracy / generative quality with ever less training data, in a way that doesn't seem to be levelling off soon. (Has work been done on this?)

Re 2a: Accordingly, the reference class for the "experiments" that need to be performed here is not like "growing cells in a petri dish overnight", but rather more like "run a company according to this business plan for 3 months and see how much money you make." And at the end you'll get one data point - just 999,999,999,999,999 to go...
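(A toy calculation of what that data requirement implies; the parallelism figure is an arbitrary assumption of mine, just to show that parallelism alone doesn't rescue the situation:)

```python
# How long would ~1e15 three-month "experiments" take, even run massively in parallel?
experiments_needed = 1e15     # the ~quadrillion training cases discussed above
months_per_experiment = 3
parallel_trials = 1e6         # assume a million experiments running simultaneously
total_years = experiments_needed * months_per_experiment / parallel_trials / 12
print(f"{total_years:.1e} years")  # ~2.5e8 years
```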

Re 2b:

What do you see as the upper bound for what can, in principle, be done with a plan that an army of IQ-180 humans (aka no better qualitative thinking than what the smartest humans can do, so that this is a strict lower bound on ASI capabilities) came up with over subjective millennia with access to all recorded information that currently exists in the world?

I'll grant that such an army could do some pretty impressive stuff. But this is already presupposing the existence of the superintelligence whose feasibility we are trying to explain.

Re 3c/d:

I haven't looked into this or tried it myself. (I mostly use LLMs for informational purposes, not for planning actions.) Do you have any examples handy of AI being successful at real-world goals?

(I may add some thoughts on your other points later, but I didn't want to hold up my reply on that account.)


Stepping back, I should reiterate that I'm talking about "the current AI paradigm", i.e. "deep neural networks + big data + gradient descent", and not the capabilities of any hypothetical superintelligent AI that may exist in the future. Maybe this is conceding too much, inasmuch as addressing just one specific kind of architecture doesn't do much to alleviate fear of doom by other means. But IABIED leans heavily on this paradigm in making its case for concern:

  • the claim that AIs are "grown, not crafted"
  • the claim that AIs will develop desires (or desire-like behavior) via gradient descent
  • the claim that the risk is imminent because superintelligence is the inevitable endpoint of the work that AI researchers are currently doing, and because no further fundamental breakthroughs are needed to reach that outcome.
What are non-obvious class markers?
jchan · 3mo

skiing

It seems like skiing is a "hereditary" class marker because it's hard to learn how to do it as an adult, and you're probably not going to take your kids skiing unless you yourself were taught as a kid, etc.

The Boat Theft Theory of Consciousness
jchan · 4mo

trying to imagine being something with half as much consciousness

Isn't this what we experience every day when we go to sleep or wake up? We know it must be a gradual transition, not a sudden on/off switch, because sleep is not experienced as a mere time-skip - when you wake up, you are aware that you were recently asleep, not confused about how it's suddenly the next day. (Or at least, I don't get the time-skip experience unless I'm very tired.)

(When I had my wisdom teeth extracted under laughing gas, it really did feel like all-or-nothing: when I came to, I asked if they were going to get started with the surgery soon, and had to be told "Actually, it's finished already." This is not how I normally experience waking up every morning.)

sunwillrise's Shortform
jchan · 4mo
  • I think this is mostly just the macro-trend of the internet shifting away from open forums and blogs and towards the "cozy web" of private groupchats etc., not anything specific about LessWrong. If anything, LessWrong seems to be bucking the trend here, since it remains much more active than most other sites that had their heyday in the late 00s.
  • I don't have any dog in the Achmiz/Worley debate, but I'm having trouble getting in the headspace of someone who is driven away from posting here because of one specific commenter.
    • First of all, I don't think anyone is ever under any obligation to reply to commenters at all - simply dropping out of a conversation thread doesn't feel rude/confrontational in the way it would be to say IRL "I'm done talking to you now."
    • Second, I would find it far more demotivating to just get zero engagement on my posts - if I didn't think anybody was reading, it would be hard to justify the time and effort of posting. But otherwise, even if some commenters disagree with me, my post is still part of the discourse, which makes it worthwhile.
Religion for Rationalists
jchan · 4mo

I think this approach wouldn't work for rationalists, for two reasons:

  • The rationality community is based around disputation, not canonicalization, of texts. That is, the litmus test for being a rationalist is not "Do you agree with this list of propositions?" (I have tried many times to draw up such a list, but this always just leads to even more debate), but rather "Are you familiar with this body of literature and do you know how to respond to it?" The kind of person who goes to LW meetups isn't going to enjoy simply being "talked at" and told what to believe - they want to be down in the arena, getting their hands dirty.
  • Your "recommended template" is essentially individualistic - participants come with their hopes and desires already in-hand, and the only question is "How can I use this community to help me achieve my goals?" Just as a gut feeling I don't think this is going to work well in building a community or meaningful relationships (seeing others not merely as means, but as ends in themselves - or something like that). Instead, there needs to be some shared purpose for which involvement in the community is essential and not just an afterthought. Now, this isn't easy. "Solving AI alignment" might be a tall order. But I think the rationality community is doing a passable job at one thing at least - creating a culture of high epistemic standards that will be essential (for both ourselves and the wider world) in navigating the unprecedented challenges our civilization faces.
How the veil of ignorance grounds sentientism
jchan · 5mo

Can't speak for Said Achmiz, but I guess for me the main stumbling block is the unreality of the hypothetical, which you acknowledge in the section "This is not a literal description of reality" but don't go into further. How is it possible for me to imagine what "I" would want in a world where by construction "I" don't exist? Created Already in Motion and No Universally Compelling Arguments are gesturing at a similar problem, that there is no "ideal mind of perfect emptiness" whose reasoning can be separated from its contingent properties. Now, I don't go that far - I'll grant at least that logic and mathematics are universally true even if some particular person doesn't accept them. But the veil-of-ignorance scenario is specifically inquiring into subjectivity (preferences and values), and so it doesn't seem coherent to do so while at the same time imagining a world without the contingent properties that constitute that subjectivity.

What should I read to understand ancestral human society?
jchan · 5mo

I think ancient DNA analysis is the space to watch here. We've all heard about Neanderthal intermixing by now, but it's only recently become possible to determine e.g. that two skeletons found in the same grave were 2nd cousins on their father's side, or whatever. It seems like this can tell us a lot about social behavior that would otherwise be obscure.

johnswentworth's Shortform
jchan · 5mo

It took me years of going to bars and clubs and thinking the same thoughts:

  • Wow this music is loud
  • I can barely hear myself talk, let alone anyone else
  • We should all learn sign language so we don't have to shout at the top of our lungs all the time

before I finally realized - the whole draw of places like this is specifically that you don't talk.

Posts

  • How does the current AI paradigm give rise to the "superagency" that IABIED is concerned with? [Question] (3 points · 22d · 4 comments)
  • Anthropic reasoning intro (notes on Bostrom) (7 points · 3mo · 0 comments)
  • What we can learn from afterlife myths (5 points · 5mo · 0 comments)
  • Thoughts on "Antiqua et nova" (Catholic Church's AI statement) (38 points · 5mo · 9 comments)
  • Ten Modes of Culture War Discourse (55 points · 2y · 15 comments)
  • On the proper date for solstice celebrations (16 points · 2y · 0 comments)
  • Proof of posteriority: a defense against AI-generated misinformation (33 points · 2y · 3 comments)
  • What is some unnecessarily obscure jargon that people here tend to use? [Question] (17 points · 2y · 5 comments)
  • Through a panel, darkly: a case study in internet BS detection (22 points · 2y · 7 comments)
  • Solstice song: Here Lies the Dragon (8 points · 3y · 1 comment)