rhollerith_dot_com

Richard Hollerith, 15 miles north of San Francisco. hruvulum@gmail.com

Comments

One reason I'm skeptical: people whose mood is neutral / normal are already too optimistic, which significantly handicaps their rationality IMO (especially when thinking about the risks of continuing research on AI), and hypomania would make that handicap worse.

I would be more interested in making people more perfectionistic because a large fraction of our scientific inheritance was created by people high in that trait, Newton and Darwin included.

Want to bet on your prediction? I'll give you $100 right now if you'll commit to sending me $200 if the OP does in fact end up sending LW participants at least $200 as his side of this bet.

(The OP is a complete stranger to me.)

It's not random internet strangers: elsewhere he writes, "I was only ever going to engage with people with established reputations because obviously. I reserved the right to choose who to bet with."

I am not willing to bet about the object-level proposition, but I am willing to bet that he gets paid at least .4 of his winnings. In other words, if it turns out that he won the bet, then I would be willing to give you $1000 in exchange for $2500 ($1000 divided by .4) times whatever fraction of his winnings he ends up collecting (over the ensuing 5 years, say).
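To make the break-even point of that offer explicit, here is a minimal sketch (the dollar figures are the ones from the comment above; the function name is just illustrative):

```python
# Side-bet payoff from the commenter's perspective, assuming he pays
# $1000 up front and receives $2500 times the fraction eventually collected.
STAKE = 1000        # paid by the commenter if the OP wins the object-level bet
MULTIPLIER = 2500   # $1000 / 0.4, so the bet breaks even at a 40% collection rate

def net_to_commenter(fraction_collected: float) -> float:
    """Net gain to the commenter if the OP wins and later collects
    `fraction_collected` of his winnings."""
    return MULTIPLIER * fraction_collected - STAKE

print(net_to_commenter(0.4))  # 0.0: break-even, matching the ".4 of his winnings" threshold
print(net_to_commenter(1.0))  # 1500.0: OP collects everything
print(net_to_commenter(0.0))  # -1000.0: OP collects nothing
```

So the side bet pays off exactly when the OP collects more than 40% of his winnings, which is the proposition being bet on.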

You've quadrupled my P(aliens or demons or such have been flying around Earth's atmosphere). Thanks for this post (and this comment in particular).

Thinking about the qualitative mechanism is causal reasoning, which humans prefer to statistical reasoning.

Causal knowledge is obtained by making statistical observations (and not necessarily big datasets of statistical observations).

Eliezer warns against trying to use that analogy to reason about misaligned AI (for example in his appearance on the Bankless podcast).

Sunlight scattered by the atmosphere on cloudless mornings during the hour before sunrise inspires a subtle feeling ("this is cool, maybe even exciting") that I never noticed till I started intentionally exposing myself to it for health reasons (specifically, making it easier to fall asleep 18 hours later).

More precisely, I might or might not have noticed the feeling, but if I did notice it, I quickly forgot about it because I had no idea how to reproduce it.

I have to get away from artificial light (streetlamps) and from direct (yellow) sunlight for the indirect (blue) sunlight to have this effect. Also, it is no good looking at a small patch of sky, e.g., through a window in a building: most or all of the upper half of my field of vision must be receiving this indirect sunlight. (The intrinsically photosensitive retinal ganglion cells are all over the bottom half of the retina but absent from the top half.)

Is it OK for me to question a premise of your question? (Some people find that impolite, so I'm asking before I do it.)

Do AIs currently display goal-directed behavior?

Of course: AlphaGo for example has the goal of winning the game. AlphaGo probably doesn't know about anything except for the game of go. Maybe what you want to know is whether AIs like GPT-4 that might plausibly have a general knowledge or model of reality display goal-directed behavior.

If this can sit on my head and allow me to type or do calculations while I’m working in the lab, that would be very convenient. Currently, I have to put gloves on and off to use my phone, and office space with my laptop is a 6-minute round trip from the lab.

I can see an application that combines voice-to-text and AI in a way that makes it feel like you always have a ChatGPT virtual assistant sitting on your shoulder as you do everyday tasks.

Sure, but an audio-only interface can be done with an iPhone and some AirPods; no need for a new interface.