RHollerith

Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com

My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI has enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)

Currently I am willing to meet with almost anyone on the subject of AI extinction risk.

Last updated 26 Sep 2023.

Comments (sorted by newest)

MAGA speakers at NatCon were mostly against AI
RHollerith · 5d · 30

Can you explain "defensive technologies"?

Do any of these defensive technologies allow people to survive an unaligned AI that they wouldn't have survived without the defensive technology?

Masking on the Subway
RHollerith · 15d · 20

Regardless of the reason why it is beneficial, I notice that almost all industrial respirators have valves, and the two respirators I know of that do not were designed by amateurs during the COVID pandemic.

That said, you've already purchased a respirator without a valve, so I would keep using it, but if you lose it or need to buy a new one for some reason, I'd go with one with a valve.

Masking on the Subway
RHollerith · 2mo · 20

Almost all reusable respirators have a valve.

Masking on the Subway
RHollerith · 2mo · 20

Sure, but without a valve, you are breathing back in exhaled CO2.

Even some of the single-use masks have exhale valves.

Diabetes is Caused by Oxidative Stress
RHollerith · 3mo · 21

Grass is mostly (water and) carbs, just not carbs a person can digest and burn with any efficiency.

rhollerith_dot_com's Shortform
RHollerith · 4mo · 71

Good point. Change my final sentence to, "A warning shot is made by the entity capable of imposing damaging consequences on you -- to alert you and to give you a way to avoid the most damaging of the consequences at its disposal."

rhollerith_dot_com's Shortform
RHollerith · 4mo · 297

Many believe that one hope for our future is that the AI labs will make some mistake that kills many people, but not all of us, resulting in the survivors finally realizing how dangerous AI is. I wish people would refer to that as a "near miss", not a "warning shot". A warning shot is when the danger (originally a warship) actually cares about you but cares about its mission more, with the result that it complicates its plans and policies to try to keep you alive.

LLM AGI will have memory, and memory changes alignment
RHollerith · 4mo* · 20

I am surprised by that because I had been avoiding learning about LLMs (including making any use of them) until about a month ago, so it didn't occur to me that implementing this might have been as easy as adding instructions to the system prompt about what kinds of information to put in the contextual memory file.
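To make that mechanism concrete, here is a minimal sketch of the kind of setup I have in mind, under my own assumptions (not the implementation discussed in the post): call_model is a hypothetical stand-in for whatever LLM API is actually used, memory.md is an invented file name, and the "MEMORY:" prefix is just one illustrative convention for the model to flag what should be appended to the contextual memory file.

```python
# Minimal sketch, assuming a hypothetical call_model() wrapper and a
# plain-text contextual memory file named memory.md (both invented here).
from pathlib import Path

MEMORY_FILE = Path("memory.md")  # hypothetical contextual memory file

SYSTEM_PROMPT_TEMPLATE = """\
You are an assistant with a persistent memory file.
Current memory file contents:
{memory}

After answering, list any new facts worth remembering
(user preferences, long-running tasks, stable background facts)
on lines starting with 'MEMORY: '.
"""


def call_model(system_prompt: str, user_message: str) -> str:
    """Hypothetical stand-in for a real LLM API call; replace with the real thing."""
    # For illustration only: echo a canned reply containing one memory line.
    return "Noted.\nMEMORY: user asked about: " + user_message


def chat_with_memory(user_message: str) -> str:
    # Feed the current memory file back in via the system prompt.
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    reply = call_model(SYSTEM_PROMPT_TEMPLATE.format(memory=memory), user_message)
    # Append whatever the model flagged as worth remembering.
    new_entries = [line[len("MEMORY: "):] for line in reply.splitlines()
                   if line.startswith("MEMORY: ")]
    if new_entries:
        with MEMORY_FILE.open("a") as f:
            f.write("\n".join(new_entries) + "\n")
    return reply
```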

LLM AGI will have memory, and memory changes alignment
RHollerith · 4mo · 20

This contextual memory file is edited by the user, never the AI?

Questions for old LW members: how have discussions about AI changed compared to 10+ years ago?
RHollerith · 5mo · 82

In 2015, I didn't write much about AI on Hacker News because even just explaining why it is dangerous will tend to spark enthusiasm for it in some people (people attracted to power, who notice that since it is dangerous, it must be powerful). These days, I don't let that consideration stop me from writing about AI.

Posts

6 · rhollerith_dot_com's Shortform · 4y · 33
15 · One Medical? Expansion of MIRI? · 12y · 8
8 · Computer-mediated communication and the sense of social connectedness · 15y · 12
-7 · LW was started to help altruists · 15y · 36