I'll try.
TL;DR: I expect the AI not to buy the message (unless it also thinks it's the one in the simulation; then it likely follows the instruction, because duh).
The glaring issue (with actually using the method), to me, is that I don't see a way to deliver the message such that the AI both receives it and takes it seriously:
If "god tells" the AI the message then there is a god in their universe. Maybe AI will decide to do what it's told. But I don't think we can have Hermes deliver the message to any AIs which consider killing us.
If the AI reads the message in its training set, or gets it in some similarly mundane way, I expect it will mostly ignore it; there is a lot of nonsense out there.
I can imagine that, for the thought experiment, you could send the message from a place from which light barely manages to reach the AI but a slower-than-light expansion wouldn't (so the message can be trusted, but the AI mostly doesn't have to worry about the sender directly interfering with its affairs).
I guess the AI wouldn't trust the message. It might be possible to convince it that there is a powerful entity (simulating it, or half a universe away) sending the message. But then I think it would conclude it's far more likely to be in a simulation (I mean, the distance is an awful coincidence, and the sender would be spending a lot more than 10 planets' worth of resources to send a message over that distance...).
> This is pretty much the same thing, except breaking out the “economic engine” into two elements of “world needs it” and “you can get paid for it.”
There are economic engines behind things the world doesn't quite need (getting people addicted, rent-seeking, threats of violence).
One more obvious problem: the people actually in control of the company might not want to split it, and so they wouldn't grow the company even if shareholders/customers/... would benefit.
> but much higher average wealth, about 5x the US median.
Wouldn't it make more sense to compare average to average? (The earlier part of the sentence compares median to median.)
I wanted to say that it makes sense to arrange things so that people don't need to drive around too much and can instead use something else to get around (and also maybe have more things close by, so they need to travel less). Even if bus drivers aren't any better than car drivers, using a bus means you have ~10x fewer vehicles posing risk to others. And that's better (assuming people have fixed places to go, so they want to travel a ~fixed distance).
Sorry about the slow reply, stuff came up.
> This is the same chart linked in the main post.
Thanks for pointing that out. I took a break in the middle of reading the post and didn't realize that.
> Again, I am not here to dispute that car-related deaths are an order of magnitude more frequent than bus-related deaths. But the aggregated data includes every sort of dumb drivers doing very risky things (like those taxi drivers not even wearing a seat belt).
Sure. I'm not sure what you wanted to discuss, and I guess I didn't make it clear what I wanted to discuss either.
What you're talking about (an estimate of the risk you're causing) sounds like you're interested in how you decide to get around. Which is fine. My intuition was that the (expected) cost of life lost from your personal driving is not significant, but after plugging in some numbers I think I might have been wrong.
The average US person drives about 12k miles/year (second search result; the first one didn't want to open), and the estimated cost of car ownership is ~$12k/year (a link from a YouTube video I remember mentioned this stat), so the average cost of ownership is ~$1/mile. Against that, 70¢/mile seems significant, and it might still be relevant even if your personal effect here is half or 10% of that.
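To make the arithmetic explicit, here's a minimal sketch; both inputs are just the rough figures above, not authoritative data:

```python
# Back-of-the-envelope check, using the rough figures from this comment
# (not authoritative data).
annual_miles = 12_000      # ~miles driven per year by an average US person
annual_cost_usd = 12_000   # rough estimated yearly cost of car ownership

ownership_cost_per_mile = annual_cost_usd / annual_miles  # ~$1.00/mile
risk_cost_per_mile = 0.70  # the 70 cents/mile figure under discussion

print(f"ownership cost: ${ownership_cost_per_mile:.2f}/mile")
print(f"risk cost vs. ownership cost: "
      f"{risk_cost_per_mile / ownership_cost_per_mile:.0%}")  # ~70%
```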
I, on the other hand, wanted to point out that it makes sense to arrange things in such a way that people don't want to drive around too much. (But I didn't make that clear in my previous comment.)
The first result when I searched for "fatalities per passenger mile cars" (I have no idea how good those numbers are; I don't have time to check) has data for 2007–2021, given as deaths per 100,000,000 passenger miles. 2008 looks like the year where cars come off comparatively least bad.
So even in the comparatively best-looking year there are >7x more deaths per passenger mile for ~cars than for buses.
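For clarity, the comparison is just a ratio check; the function below is a hypothetical sketch, and the actual 2008 rates should come from the linked table:

```python
def deaths_ratio(car_rate: float, bus_rate: float) -> float:
    """Ratio of car to bus fatality rates; both given in
    deaths per 100,000,000 passenger miles."""
    return car_rate / bus_rate

# The claim above: even for 2008, cars' comparatively best year,
# deaths_ratio(car_rate_2008, bus_rate_2008) > 7.
```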
> The exact example is that GPT-4 is hesitant to say it would use a racial slur in an empty room to save a billion people. Let’s not overreact, everyone?
I mean, this might be the correct thing to do? ChatGPT is not in a situation where it could save 1B lives by saying a racial slur.
It's in a situation where someone tries to get it to admit it would say a racial slur under some circumstances.
I don't think that ChatGPT understands that. But OpenAI builds ChatGPT expecting that it won't be in the first kind of situation, and that it will be in the second kind of situation quite often.
Well, if they consistently make recommendations that in retrospect end up looking good, then maybe you're bad at understanding, or maybe they're bad at explaining. But trusting them when you don't understand their recommendations is exploitable, so maybe they're running a strategy where they deliberately make good recommendations with poor explanations, so that once you start trusting them they can start mixing in exploitative recommendations (which you can't tell apart, because all the recommendations have poor explanations).
So I'd really rather not do that in a community context. There are ways to work with this. E.g., a boss can skip some details of an employee's recommendations and, if the results are bad enough, fire the employee. On the other hand, I think it's pretty common for an employee to act in their own interest. But yeah, we're talking about a principal-agent problem at that point, and tradeoffs over what's more efficient...