If you want to take a look, I think it's this dataset (the example from the post is in the "test" split).

I wanted to say that it makes sense to arrange things so that people don't need to drive around as much and can instead use something else to get around (and maybe also have more things close by, so they need to travel less). Even if bus drivers are no better than car drivers, using a bus means roughly 10x fewer vehicles posing risk to others, since one bus replaces on the order of ten cars' worth of passenger miles. And that's better (assuming people have fixed places to go, so they want to travel a roughly fixed distance).

Sorry about the slow reply; stuff came up.

This is the same chart linked in the main post.

Thanks for pointing that out. I took a break in the middle of reading the post and didn't realize that.

Again, I am not here to dispute that car-related deaths are an order of magnitude more frequent than bus-related deaths. But the aggregated data includes all sorts of reckless drivers doing very risky things (like those taxi drivers not even wearing a seat belt).

Sure. I'm not sure what you wanted to discuss. I guess I didn't make it clear what I wanted to discuss either.

What you're talking about (an estimate of the risk you're causing) sounds like you're interested in how you decide to get around. Which is fine. My intuition was that the (expected) cost of life lost due to your personal driving is not significant, but after plugging in some numbers I think I might have been wrong:

  • We're talking 0.59 deaths per 100,000,000 miles.
  • If we value a life at $20,000,000 (I've heard some analyses use $10M; if we value a QALY at $100k and use a 7% discount rate, an infinitely long life comes out to about $100k / 0.07 ≈ $1.43M, so $20M is on the generous side).
  • So the cost of life lost per mile of driving is 2e7 * 0.59 / 1e8 ≈ $0.118 / mile.

The average US person drives about 12k miles / year (second search result; the first one didn't want to open), and the estimated cost of car ownership is about $12k / year (a stat I remember from a YouTube video), so the average cost per mile is ~$1. Against that, ~12¢ / mile of expected external cost seems significant. And it might still be relevant if your personal effect here is half or 10% of that. A quick sanity check of this arithmetic is sketched below.
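A minimal sanity-check sketch of this arithmetic (the dollar figures are the assumptions from the bullets above, not established values):

```python
# Back-of-the-envelope expected external cost of driving.
# All inputs are the assumptions stated above, not established figures.

deaths_per_mile = 0.59 / 100_000_000   # fatalities per vehicle mile
value_of_life = 20_000_000             # $, assumed (some analyses use $10M)

cost_per_mile = deaths_per_mile * value_of_life
print(f"expected cost of life lost: ${cost_per_mile:.3f}/mile")  # ~$0.118

# Compare with the total cost of car ownership.
miles_per_year = 12_000                # assumed average US annual mileage
ownership_per_year = 12_000            # $/year, assumed
ownership_per_mile = ownership_per_year / miles_per_year
print(f"ownership cost: ${ownership_per_mile:.2f}/mile")         # ~$1.00
print(f"risk share of total: {cost_per_mile / ownership_per_mile:.0%}")  # ~12%

# Side check on the QALY-based value of an infinite life:
# a $100k/year perpetuity discounted at 7% is worth 100k / 0.07.
print(f"discounted infinite life: ${100_000 / 0.07:,.0f}")       # ~$1,428,571
```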

I, on the other hand, wanted to point out that it makes sense to arrange things in such a way that people don't want to drive around too much. (But I didn't make that clear in my previous comment.)

The first result when I searched for "fatalities per passenger mile cars" (I have no idea how good those numbers are; I haven't had time to check) has data for 2007-2021. 2008 looks like the year where cars come off comparatively least badly. It gives (deaths per 100,000,000 passenger miles):

  • 0.59 for "Passenger vehicles", where "Passenger vehicles include passenger cars, light trucks, vans, and SUVs, regardless of wheelbase. Includes taxi passengers.  Drivers of light-duty vehicles are considered passengers."
  • 0.08 for buses,
  • 0.12 for railroad passenger trains,
  • 0 for scheduled airlines.

So even in the comparatively least-bad-looking year there are >7x more deaths per passenger mile for ~cars than for buses; a quick per-mode comparison is sketched below.
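For reference, the ratios these rates imply (a small sketch; the numbers are copied from the list above, nothing new is assumed):

```python
# Fatalities per 100,000,000 passenger miles, 2008 (rates from the list above).
rates = {
    "passenger vehicles": 0.59,
    "buses": 0.08,
    "passenger trains": 0.12,
    "scheduled airlines": 0.00,
}

bus_rate = rates["buses"]
for mode, rate in rates.items():
    print(f"{mode}: {rate:.2f} ({rate / bus_rate:.1f}x the bus rate)")
# passenger vehicles: 0.59 (7.4x the bus rate)
```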

The exact example is that GPT-4 is hesitant to say it would use a racial slur in an empty room to save a billion people. Let’s not overreact, everyone?

I mean, this might be the correct thing to do? ChatGPT is not in a situation where it could save a billion lives by saying a racial slur.

It's in a situation where someone tries to get it to admit it would say a racial slur under some circumstance.

I don't think that ChatGPT understands that. But OpenAI makes ChatGPT expecting that it won't be in the first kind of situation, and that it will be in the second kind of situation quite often.

I'm replying only here because spreading discussion over multiple threads makes it harder to follow.

You left a reply on a question asking how to communicate reasons why AGI might not be near. The question refers to the costs of "the community" thinking that AGI is closer than it really is as a reason to communicate reasons it might not be so close.

So I understood the question as asking about communication within the community (my guess: of people seriously working on and thinking about AI-safety-as-in-AI-not-killing-everyone), where it's important to actually try to figure out the truth.

You replied (as I understand it) that when we communicate with the general public we can transmit only one idea, so we should communicate that AGI is near (if we assign a not-very-low probability to that).

The biggest problem I have is that posting advice about "general public communication" as a reply to a question about "community communication" pushes towards less clarity in the community, where I think clarity is important.

I'm also not sold on the "you can communicate only one idea" thing, but I mostly don't want to get into it right now (it would be nice if someone else worked it out for me, but right now I don't have the capacity to do it myself).

Here is an example of someone saying "we" should say that AGI is near regardless of whether it is. I post it only because it's something I saw recently and could find easily, but my feeling is that I'm seeing more comments like that than I used to (though I recall Eliezer complaining about people proposing conspiracies on public forums, so I don't know whether that's new).

I don't know but I can offer some guesses:

  • Not everyone wants all the rooms to have direct sunlight all of the time!
    • I prefer my bedroom to face north so that I can sleep well (it's hard to get curtains that block direct sunlight that well).
    • I don't want direct sunlight in the room where I'm working on a computer. In fact I mostly want big windows from which I can see a lot of sky (for a lot of indirect sunlight) but very little direct sunlight.
    • I don't think I'm alone in that. I see a lot of south-facing windows with the direct sunlight blocked a lot of the time.
  • Things like patios are nice. You can't have them this way.
  • Very narrow and tall structures are less stable than wider structures.
  • Indefinitely-long-timespan basic minimum income for everyone who

Looks like part of the sentence is missing.

One is straightforwardly true. Aging is going to kill every living creature. Aging is caused by complex interactions between biological systems and bad evolved code. An agent able to analyze thousands of simultaneous interactions across millions of patients, and essentially decompile the bad code (by modeling all proteins / all binding sites in a living human), is likely required to shut it off, but it is highly likely that with such an agent and such tools you can in fact save most patients from aging. A system with enough capability to consider all binding sites and higher-level system interactions at the same time (this is how a superintelligence could perform medicine without unexpected side effects) is obviously far above human level.

There are alternative mitigations to the problem:

  • Anti-aging research
  • Cryonics

I agree that it's bad that most people currently alive are apparently going to die. However, I think that since mitigations like these are much less risky, we should pursue them rather than try to rush AGI.
