Anon User

Worst Commonsense Concepts?

Right, something like "Some objective truths are outside of science's purview" might have been a slightly better phrasing, but since the goal is to stay at the commonsense level, trying to parse this more precisely is probably out of scope anyway, so we may as well stay concise...

Worst Commonsense Concepts?

"Some truths are outside of science's purview" (as exemplified by e.g. Hollywood shows where a scientist is faced with very compelling evidence of supernatural, but claims it would be "unscientific" to take that evidence seriously).

My favorite way to illustrate this: around the end of the 19th century / beginning of the 20th century [time period is from memory, might be a bit off], belief in ghosts was commonplace, with a lot of interest in spiritual séances, etc., while rare stories of hot rocks falling from the sky were mostly dismissed as tall tales. Then scientists followed the evidence, and now most everybody knows that meteorites are real and "scientific", while ghosts are not, and are "unscientific".

Goodhart's Imperius

For a while, I tended to run late in certain situations. I would glance at my watch, notice I was late, and think "oh, f***!" One day I caught myself: being in a similar situation, I glanced at my watch and immediately thought "oh, f***!" - then realized I had not actually done the step where I figure out what time my watch displayed, and whether I was running behind. In fact, that particular time I was still OK on time...

What could small scale disasters from AI look like?

To clarify - I do not think MCAS specifically is an AI-based system; I was just thinking of a hypothetical future system that does include a weak AI component, but where, similarly to MCAS, the issue is not so much a flaw in the AI itself as how it is used in a larger system.

In other words, I think your test needs to distinguish between a situation where one needed a trustworthy AI and the actual AI was unintentionally/unexpectedly untrustworthy, vs. a situation where the AI perhaps performed reasonably well, but the way it was used was problematic, causing a disaster anyway.
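To make that distinction concrete, here is a minimal sketch (all names hypothetical; this is not a model of the real MCAS): the "AI" component behaves exactly as specified, yet the overall system fails because of how the component is wired in.

```python
def ai_pitch_estimate(sensor_reading: float) -> float:
    """Stand-in for a weak AI component: assume it performs well
    whenever its input is accurate."""
    return sensor_reading  # trusts its single input completely


def flight_control_step(sensor_a: float, sensor_b: float) -> float:
    # Flawed integration: only one of the two available sensors is
    # consulted, and the output is applied with full authority
    # (no cross-check, no pilot veto).
    return ai_pitch_estimate(sensor_a)  # sensor_b is ignored


# With sensor_a faulty (reads 40.0) and sensor_b healthy (reads 2.0),
# the system commands a large correction, even though a simple
# disagreement check between the two sensors would have flagged the fault.
print(flight_control_step(sensor_a=40.0, sensor_b=2.0))  # -> 40.0
```

On this sketch, the component was as trustworthy as advertised; the disaster comes from the integration, which is exactly the case the test should handle separately.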

HPMOR illustrated how the Universe's best move is to intervene before you make a precommitment like that, to prevent you from making it. The redundancy argument does not work - they ought to have some common ancestor earlier in time. So here is me telling you on behalf of the Universe: DO NOT MESS WITH TIME.

What could small scale disasters from AI look like?

Boeing MCAS (https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Augmentation_System) is blamed for more than 100 deaths. How much "AI" would a similar system need to include for a similar tragedy to count as "an event precipitated by AI"?

Realism about rationality

Actually, there is a logical error in your mathematicians joke - at least compared to how this joke normally goes. When it is their turn, the 3rd mathematician knows that the first two wanted a beer (otherwise they would have said "no"), and so can give a definite yes or no. https://www.beingamathematician.org/Jokes/445-three-logicians-walk-into-a-bar.png
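A minimal sketch, assuming the standard phrasing of the joke (the bartender asks "Does everyone want a beer?"), that enumerates who can answer what:

```python
from itertools import product


def answers(wants):
    """Each logician in turn is asked "Does everyone want a beer?".
    A logician says "no" if they personally do not want one (that alone
    settles the question), "yes" once they can deduce that everyone
    does, and "I don't know" otherwise."""
    out = []
    for i, wants_beer in enumerate(wants):
        if not wants_beer:
            out.append("no")  # one "no" settles the question
            break
        if i == len(wants) - 1:
            out.append("yes")  # every earlier "I don't know" implied "I want one"
        else:
            out.append("I don't know")  # wants one, unsure about the rest
    return out


for wants in product([True, False], repeat=3):
    print(wants, answers(wants))
# Only (True, True, True) yields "I don't know", "I don't know", "yes":
# the third logician knows the first two want a beer, because otherwise
# they would already have answered "no".
```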

A Small Vacation

Why do you think that refugees will be capable of creating better institutions than those that failed them in their country of origin? Could it be that a small (relatively speaking) number of refugees can benefit from the better institutions of their new country, without diluting the locals so much that the implicit institutional knowledge is lost, while a larger influx of immigrants would just import their "bad" institutions with them?

Research productivity tip: "Solve The Whole Problem Day"

I use an alternative technique that works well for me - making sure to walk up the stack on every significant new development at lower levels.

E.g., if on level 5 I am trying to solve X with technique Y, and I realize that it does not quite work, but I could probably achieve X', which is just as good, with Y', then before jumping into Y' I take time to consider: X' is as good as X for level 4, but does it perhaps mutate level 4 away from the higher-level goals? Maybe the fact that Y does not actually work for X indicates that the approach at one of the higher levels is off?

And it's actually similar when Y does succeed for X - once it does, I have learned something new and need to check my stack again. Or maybe I realize that Y is taking me much longer than expected - again, I need to walk the stack and figure out whether X and Y are even worth it. This way, when I am in the zone on Y, there is no distraction, but I also do not leave the stack ignored for too long, since being in the zone on Y for too long is an indication that something went wrong and the plan needs to be reexamined.

Having hard deadlines, even artificially imposed, helps. Having explicit goals for each of the higher levels (explicitly written down, so that I can remind myself how I ended up in the rabbit hole I am in) helps.
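In pseudocode terms, the habit amounts to something like the following minimal sketch (all names hypothetical; the "is this still justified?" check is of course a human judgment, not a stored boolean):

```python
from dataclasses import dataclass


@dataclass
class Goal:
    description: str
    still_justified: bool = True  # re-evaluated by a human on each walk


# Hypothetical goal stack, highest level first:
goal_stack = [
    Goal("level 1: overall research goal"),
    Goal("level 2: get experiment E working"),
    Goal("level 3: solve X (or an equally good X')"),
    Goal("level 4: apply technique Y (or Y')"),
]


def walk_the_stack(stack, development):
    """On any significant development at the current level, re-check every
    higher level before continuing; stop and replan if any level no longer
    justifies the work below it."""
    print(f"development: {development}")
    for goal in stack:
        print(f"  re-checking {goal.description!r}")
        if not goal.still_justified:
            print("  -> invalidated here; replan before going deeper")
            return False
    return True


# E.g., Y failed for X and I am tempted to jump straight to X' with Y':
walk_the_stack(goal_stack, "Y does not work for X; considering X' with Y'")
```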

YMMV, of course.
