You can talk to EA Funds before applying

I scheduled a conversation with Evan based on this post and it was very helpful. If you're on the fence, do it! For me, it was helpful as a general career / EA strategy discussion, in addition to being useful for thinking specifically about Long-Term Future Fund concerns.

And I can corroborate that Evan is indeed not that intimidating.

A review of Steven Pinker's new book on rationality

"I'm tempted to recommend this book to people who might otherwise be turned away by Rationality: From A to Z."

Within the category of "recent accessible introduction to rationality", would you recommend this Pinker book, or Julia Galef's "Scout Mindset"? Any thoughts on the pros and cons of each, or on who would benefit more from each?

Brain-Computer Interfaces and AI Alignment

Thanks for collecting these things! I have been looking into these arguments recently myself, and here are some more relevant things:

  1. EA forum post "A New X-Risk Factor: Brain-Computer Interfaces" (August 2020) argues for BCI as a risk factor for totalitarian lock-in.
  2. In a comment on that post, Kaj Sotala excerpts a section of Sotala and Yampolskiy (2015), "Responses to catastrophic AGI risk: a survey". This excerpt contains links to many other relevant discussions:
    1. "De Garis [82] argues that a computer could have far more processing power than a human brain, making it pointless to merge computers and humans. The biological component of the resulting hybrid would be insignificant compared to the electronic component, creating a mind that was negligibly different from a 'pure' AGI. Kurzweil [168] makes the same argument, saying that although he supports intelligence enhancement by directly connecting brains and computers, this would only keep pace with AGIs for a couple of additional decades.
    2. "The truth of this claim seems to depend on exactly how human brains are augmented. In principle, it seems possible to create a prosthetic extension of a human brain that uses the same basic architecture as the original brain and gradually integrates with it [254]. A human extending their intelligence using such a method might remain roughly human-like and maintain their original values. However, it could also be possible to connect brains with computer programs that are very unlike human brains and which would substantially change the way the original brain worked. Even smaller differences could conceivably lead to the adoption of 'cyborg values' distinct from ordinary human values [290].
    3. "Bostrom [49] speculates that humans might outsource many of their skills to non-conscious external modules and would cease to experience anything as a result. The value-altering modules would provide substantial advantages to their users, to the point that they could outcompete uploaded minds who did not adopt the modules. [...]
    4. "Moravec [194] notes that the human mind has evolved to function in an environment which is drastically different from a purely digital environment and that the only way to remain competitive with AGIs would be to transform into something that was very different from a human."
  3. The sources in question from the above are:
    1. de Garis, H. (2005). The Artilect War: Cosmists vs. Terrans. Palm Springs, CA: ETC Publications.
    2. Kurzweil, R. (2001). Response to Stephen Hawking. Kurzweil Accelerating Intelligence, September 5.
    3. Sotala, K., & Valpola, H. (2012). Coalescing minds. International Journal of Machine Consciousness, 4, 293–312.
    4. Warwick, K. (2003). Cyborg morals, cyborg values, cyborg ethics. Ethics and Information Technology, 5, 131–137.
    5. Bostrom, N. (2004). The future of human evolution. In C. Tandy (Ed.), Death and Anti-Death, vol. 2: Two Hundred Years After Kant, Fifty Years After Turing (pp. 339–371).
    6. Moravec, H. P. (1992). Pigs in cyberspace. www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1992/CyberPigs.html
  4. Here's a relevant comment on that post from Carl Shulman, who notes that FHI has periodically looked into BCI in unpublished work: "I agree the idea of creating aligned AGI through BCI is quite dubious (it basically requires having aligned AGI to link with, and so is superfluous; and could in any case be provided by the aligned AGI if desired long term)"

Pleasure and Pain are Long-Tailed

Thank you for writing about this. It's a tremendously interesting issue. 

"I feel qualitatively more conscious, which I mean in the 'hard problem of consciousness' sense of the word." "Usually people say that high-dose psychedelic states are indescribably more real and vivid than normal everyday life." Zen practitioners are often uninterested in LSD because it's possible to reach states that are indescribably more real and vivid than (regular) real life without ever leaving real life. (Zen is based around being totally present for real life. A Zen master meditates with eyes open.) It is not unusual for proficient meditators to describe mystical experiences as at least 100× more conscious than regular everyday experience.

I'm very curious about the issue of what it means to say that one creature is "more conscious" than another--or, that one person is more conscious while meditating than while surfing Reddit. Especially if this is meant in the sense of "more phenomenally conscious". (I take it that you do mean "more phenomenally conscious", and that's what you are saying by invoking the hard problem. But let me know if that's not right). Can you say more about what you mean? Some background:

Pautz (2019) has been influential on my thinking about this kind of talk of being 'more conscious', or having a 'level' or 'degree' of consciousness. Pautz distinguishes between many consciousness-related things that certainly do come in degrees.

On the one hand, we have certain features of the particular character of phenomenally conscious experiences:

  • Intensity level (193)
    • A whisper is less intense than a heavy metal concert; faint pink is less intense than bright red. And of course, certain pleasures and pains are more intense than others
  • Complexity level
    • The whiff of mint is a 'simpler' experience than the visual experience of a bustling London street
  • Determinacy level
    • A tomato in the center of vision is represented more determinately than a tomato in the periphery
  • Access level
    • If you think phenomenally conscious experiences can be more or less 'accessed', then some experiences might not be accessed at all, versus those that are fully accessed--e.g. something right in front of you that you are paying full attention to.

And then there is a 'global' feature of a creature's phenomenal consciousness:

  • Richness of experiential repertoire: the ‘number’ of distinct experiences (types and tokens) the creature has the capacity to have (194). Adult humans probably have a greater richness of experiential repertoire than a worm (if indeed worms are phenomenally conscious).

In light of this, my questions for you:

  1. Along which of these dimensions are you 'more' conscious when meditating? Would love to hear more. (I'm guessing: intensity, complexity, and access?)
  2. Do you think there is some further way in which you are 'more conscious', that is not cashed out in these terms? (Pautz does not, and he uses this to criticize Integrated Information Theory)

Finally: this post has inspired me to be more ambitious about exploring the broader regions of consciousness space for myself. ("Our normal waking consciousness, rational consciousness as we call it, is but one special type of consciousness, whilst all about it, parted from it by the filmiest of screens, there lie potential forms of consciousness entirely different." -William James). And for that, I am grateful.

My Productivity Tips and Systems

Tons of handy stuff here, thanks!

I love the sound of Cold Turkey. I use Freedom for my computer, and I use it less than I otherwise would because of this anxious feeling, almost certainly exaggerated but still with a basis in reality, that whenever I start a full block it is a Really Big Deal and I might accidentally screw myself over - for example, if I suddenly remember I have to do something else. (Say, I'm looking for houses and it turns out I actually need to go look something up.) But with Cold Turkey, I'd just block stuff a lot more freely, without the anxiety - I'd know that if I really need something, I can unlock it. All while having the calm that comes from Twitter not being immediately accessible.

I also find the Freedom interface really terrible and that trivial inconvenience can keep me from starting blocks.

How often would you say you spend time-you-don't-endorse after unlocking something with the N random characters? Is it pretty effective at keeping you in line?

Willa's Shortform

I enjoyed reading this and skimming through your other shortforms. I’m intrigued by this idea of using the shortform as something like a journal (albeit a somewhat public-facing one).

Any tips, if I might want to start doing this? How helpful have you found it? Any failure modes?

Who has argued in detail that a current AI system is phenomenally conscious?
Answer by Robbo, May 15, 2021

Jonathan Simon is working on such a project: "What is it like to be AlphaGo?"

TurnTrout's shortform feed

[disclaimer: not an expert, possibly still confused about the Baldwin effect]

A bit of feedback on this explanation: as written, it didn’t make clear to me what makes the Baldwin effect special. “Evolution selects for genome-level hardcoding of extremely important learned lessons.” As a reader I was like, what makes this a special case? If it’s a useful lesson, then of course evolution would tend to select for knowing it innately - that does seem handy for an organism.

As I understand it, what is interesting about the Baldwin effect is that such hardcoding is selected for more among creatures that can learn, and indeed because of learning. The learnability of the solution makes it even more important to be endowed with the solution. So individual learning, in this way, drives selection pressures. Dennett’s explanation emphasizes this - I'm curious what you make of his account:

https://ase.tufts.edu/cogstud/dennett/papers/baldwincranefin.htm

Open and Welcome Thread - May 2021

I'm very intrigued by "prosthetic human voice meant for animal use"! Not knowing much about animal communication or speech in general, I don't even know what this means. Could you say a bit more about what that would be?

Open and Welcome Thread - May 2021

Welcome, David! What sort of math are you looking to level up on? And do you know what AI safety/related topics you might explore? 
