Joachim Bartosik


The Meta-Puzzle

"V jbefuvc fngna naq V'z zneevrq." ?

Which booster shot to get and when?

One more thing you might want to consider is vaccine certificates.

Where I live, certificates are valid for a year and a booster shot renews the certificate. Also, where I live, one becomes eligible for a booster shot 6 months after the final vaccine dose. So if one gets the booster shot ASAP, one gets 18 months of valid certificate; if one delays the booster shot until the last moment, one gets 24 months of valid certificate.
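To make the arithmetic explicit, here is a minimal sketch (the 12-month validity and 6-month eligibility windows are the local rules described above; the constant and function names are just for illustration):

```python
# Local rules described above: a certificate is valid for 12 months from a dose,
# and booster eligibility starts 6 months after the final dose.
CERT_VALIDITY_MONTHS = 12
EARLIEST_BOOSTER_MONTH = 6

def months_of_valid_certificate(booster_month: int) -> int:
    """Total months of continuous certificate validity, counted from the
    final dose, if the booster is taken at `booster_month`."""
    # Taking the booster before the old certificate expires keeps coverage
    # continuous; the renewed certificate runs 12 months from the booster.
    assert EARLIEST_BOOSTER_MONTH <= booster_month <= CERT_VALIDITY_MONTHS
    return booster_month + CERT_VALIDITY_MONTHS

print(months_of_valid_certificate(6))   # booster ASAP            -> 18
print(months_of_valid_certificate(12))  # booster at the deadline -> 24
```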

And a valid certificate is very useful over here, so there is a real trade-off between making oneself safer against infection and keeping more actions available in the future.

I think it kind of sucks that this is a trade-off one has to consider.

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

I have only a very vague idea of what the different ways of reasoning are (vaguely related to "fast and effortless" vs. "slow and effortful" in humans? I don't know how that translates into what's actually going on, rather than how it feels to me).

Thank you for pointing me to a thing I’d like to understand better.

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

I was thinking that current methods could produce AGI (because they're Turing-complete), and they're apparently good at producing some algorithms, so they might be reasonably good at producing AGI.

The 2nd part of that wasn't explicit for me before your answer, so thank you :)

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

> Which is basically this: I notice my inside view, while not confident in this, continues to not expect current methods to be sufficient for AGI, and expects the final form to be more different than I understand Eliezer/MIRI to think it is going to be, and that the AGI problem (not counting alignment, where I think we largely agree on difficulty) is 'harder' than Eliezer/MIRI think it is.

Could you share why you think that current methods are not sufficient to produce AGI?

 

Some context:

After reading Discussion with Eliezer Yudkowsky on AGI interventions, I thought for a while about the question "Are current methods sufficient to produce AGI?". I thought I'd check whether neural nets are Turing-complete, and a quick search says they are. To me this looks like a strong clue that we should be able to produce AGI with current methods.

But I remembered that some people who generally seem better informed than me had expressed doubts.

I'd like to understand what those doubts are (and why there is apparent disagreement on the subject).

No, really, can "dead" time be salvaged?

First, I'll echo what many others have said: you need to rest, so be careful not to make things worse (by not resting properly and, as a result, performing worse at work / school / whatever you do in your "productive time").

That said, if you feel like you're wasting time, it's OK to try to improve that. Some time ago I felt like I was wasting a big chunk of my time; what worked for me was trying out a bunch of things.

Doing chores: cooking, cleaning my apartment, replacing my clothes with new ones, maintaining my car. Also learning how to get better at chores, in a low-effort way: I watched a bunch of YouTube videos about how to clean better, how to do laundry better, and a ton of recipes, and tried some of them (maybe the 1% that looked like the least effort / most fun). I like having comfortable clothes, a clean apartment, and a working car, and I like some food I can cook better than anything I can buy. So sometimes when I feel tired I enjoy doing chores: since I'm doing them for myself, nobody is forcing me to do them, I can stop whenever I feel like it, and they are slightly pleasant (very different from when I was doing them on somebody else's schedule).

Reading blogs and watching educational videos. I count things like videos about game exploits [1], cooking videos [2], and urban-planning videos [3] as educational videos. I count blog posts about history, or posts analysing the logistics of The Lord of the Rings, as good things to read [4].

Light exercise. I like walking, and going on walks helps me a lot with staying healthy.

The things you'll enjoy while resting are probably different from the ones I enjoy, so I'd just try a bunch of things that sound like you might like them and see what sticks.

 

[1] They're fun examples of things working as implemented and very much not working as intended.

[2] I never cook most of them but they're often fun to watch and sometimes I find something I want to try.

[3] Also fun to watch, and I think they help me understand better why I like some places, and make it easier to pick a nice place to live.

[4] Because they take ideas seriously, and it helps me learn to notice when things don't make sense.

No, really, can "dead" time be salvaged?

> Actually, this is heavily criticized by almost anyone sensible in the field: see for example this post by Nate Soares, director of MIRI.

 

The link is broken. Did you mean to link to this post? 

What should one's policy regarding dental xrays be?

I too want to say that my dentist has never even suggested getting an x-ray during a routine check-up.

I've had a dental x-ray once, but that was when looking into a specific problem.

I haven't had any cavities in years. Back when I did have cavities, the dentist found them by looking at my teeth; no x-ray needed.

The description doesn't fully specify what's happening.

  1. Yovanni is answering questions of the form "Did X win the lottery?" and gives the correct answer 99% of the time. In that case you shouldn't believe that Xavier won the lottery: if you asked the question for every participant, you'd end up with a list of (in expectation) about 10,000 people for whom Yovanni claimed they won the lottery (see the sketch after this list).
  2. Yovanni is making a single claim about who won the lottery, and for claims like that Yovanni is correct 99% of the time. In that case Phil got it right, and the probability that Xavier won is 99%.
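To make interpretation 1 concrete, here is a minimal Bayes-update sketch in Python. The 1,000,000-participant count and single winner are assumptions I've chosen to match the ~10,000 figure above; only the 99% accuracy comes from the problem as stated:

```python
# Interpretation 1: Yovanni answers "Did X win the lottery?" for each X,
# and each answer is correct with probability 0.99.
N = 1_000_000                 # assumed number of participants (one winner)
prior = 1 / N                 # P(Xavier won) before hearing Yovanni
p_yes_given_win = 0.99        # Yovanni correctly says "yes" to the winner
p_yes_given_lose = 0.01       # Yovanni wrongly says "yes" to a non-winner

# Bayes: P(won | "yes") = P("yes" | won) * P(won) / P("yes")
p_yes = p_yes_given_win * prior + p_yes_given_lose * (1 - prior)
posterior = p_yes_given_win * prior / p_yes

print(f"P(Xavier won | Yovanni says yes) = {posterior:.6f}")  # ~0.000099
# Expected number of non-winners Yovanni would also call winners:
print(f"Expected false 'yes' answers: {p_yes_given_lose * (N - 1):,.0f}")  # ~10,000
```

So under interpretation 1, a "yes" answer only moves the odds from about 1 in a million to roughly 1 in 10,000, nowhere near 99%.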

 

Also, I think it's better to avoid using humans in examples like that, and to use something else / non-agenty instead, because humans can strategically lie. For example, somebody can reach very high accuracy in the statements they make by talking a lot about simple arithmetic operations. If they later say that you should give them money and will receive 10x as much in return, you shouldn't conclude that there is a 99+% chance this will work out and hand them a lot of money.

A way to beat superrational/EDT agents?

You're ignoring that with probability 1/4 the agent ends up in room B. In that case you don't get to decide, but you do get to collect a reward: 3 if the other agent guesses T, or 0 if the other agent guesses H.

So basically, guessing H increases your own expected reward at the expense of the other agent's expected reward (before you actually went to a room, you didn't know whether you'd be an agent who gets to decide, so your expected reward also included part of the expected reward of the agent who doesn't get an opportunity to make a guess).
