frontier64

Will this be an accepted-on-payment kind of deal? I probably need another few days to mull it over. I've never committed to a bet where I could potentially have to pay out $50,000 in the future, and I would feel really dumb if I jumped into it.

Clarification: if we agree that the likelihood of non-prosaic UFOs is >50% four years into the future, but then at the time horizon the likelihood is back down to well under 50%, do I pay or not? This is really unlikely, but it's the first thing that came to mind. Also, if I do have to pay in that scenario, how immediate do you want the payment to be?

I would give 200:1 odds for up to $50,000 of my own money.

My likelihood for one of the weird hypotheses you listed being true is higher than 0.5%. However, my odds are much lower that within the next 5 years we get any significant evidence that one of those hypotheses is true and that it's what's causing UFOs and UAPs.

I think the issue is going to be disagreement about what the >50% likelihood means. A lot of people are saying that the current round of military and federal officials coming forward with stories about the government keeping alien craft in secret facilities is significant evidence in favor of aliens. I would like a resolution criterion that is either public polling (>50% of people polled say that hypothesis X is true) or perhaps a particular public figure taking a serious stance (e.g. Scott Alexander seriously claims that UFOs are shadow US government 4D vehicles extending into our visible space).

The reason is to prevent the voluntary participant from later telling the IRB that their participation was involuntary.

'Well if your participation was involuntary, why did you sign this document?'

It kind of limits the arguments someone could make attacking the ethics of the study. The attacker would have to allege coercion on the order of people being forced to lie on forms under threat.

I don't see this as being distinct from, or better than, any typical left-wing "Republicans are a threat to democracy" article. If there's anything that isn't fit for LW, it's this post.

Does this do the thing where a bunch of related events are treated as independent events and their probabilities are multiplied together to achieve a low number?

edit: I see you say that each event is conditioned on the previous events being true. It doesn't seem like you took that into account when you formulated your own probabilities.

According to your probabilities, in the world where we invent algorithms for transformative AGI, we invent a way for AGIs to learn faster than humans, AGI inference costs drop below $25/hr (per human equivalent), and we invent and scale cheap, quality robots... there's only a 46% chance that we massively scale production of chips and power. That seems totally unreasonable on its face. How could cheap, quality robots at scale not invariably lead to a massive scaling of chip and power production?

I think splitting the odds up this way just isn't helpful. Your brain is not very good at pondering "What is the probability of X happening given that A, B, C, ..., W all happen?" Too much of "What is the probability of X independent of everything else?" will creep in, unless you have a mechanistic process for determining the probability of X.
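To make the failure mode concrete, here is a minimal sketch with made-up numbers (not the post's actual figures): multiplying unconditional probabilities badly understates the joint probability when the events are strongly correlated, compared with the correct chain-rule product of conditional probabilities.

```python
# Toy illustration with hypothetical numbers, not the post's actual figures.
# Four strongly correlated events A, B, C, D.

# Naive approach: treat each event as independent and multiply the
# unconditional probabilities.
p_unconditional = [0.6, 0.6, 0.6, 0.6]
naive_joint = 1.0
for p in p_unconditional:
    naive_joint *= p  # 0.6^4 ≈ 0.13

# Chain rule: P(A) * P(B|A) * P(C|A,B) * P(D|A,B,C).
# If each event makes the next one much more likely, the conditional
# terms are close to 1 and the joint probability stays high.
p_conditional = [0.6, 0.9, 0.95, 0.97]
chained_joint = 1.0
for p in p_conditional:
    chained_joint *= p  # ≈ 0.50

print(f"independent: {naive_joint:.3f}, chained: {chained_joint:.3f}")
```

If you estimate each factor as though it were independent while nominally conditioning on the others, you end up much closer to the first number than the second.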

I think the problem is that you're clinging to being 100% truthful and precise, which is making you think you need to instantiate a meta-conversation when you really don't. In scenario 5, you can just start talking to the person next to you about a different topic: "Oh hey, did you see X? What's up with that?" If they start chatting with you, then you can pretty clearly tell they don't want to listen to the chatty girl either. The fact that you two are having a conversation also signals to the chatty girl and other people around that you both are less interested in what she has to say. You can then keep talking near them, move away, etc. Other friends may follow you or join your conversation, and it doesn't matter if that girl was the chattiest of chatty people. She'll be left talking to her boyfriend + 1, and if the +1 is not interested then he'll come up with some excuse to go to the bathroom or something.

This happens subconsciously most of the time, but sometimes you have to take the initiative if you want to resolve the situation. This is way better than coming out and giving a meta explanation to everybody about how the conversation is boring or how you're bored or something. Firstly, you're not risking that you may have misread the room. Maybe your friends actually thought the girlfriend was interesting and if you speak up and say you're bored or that she's talking too much you'll get shot down and probably annoy them. Secondly, this is a lot cleaner for the girl as well. She doesn't suffer any overt embarrassment that she would otherwise experience had someone told her straight to her face in front of multiple people that she talks too much.

Fully putting your feelings into accurate, precise words and expressing them to a group is not always the best solution to uncomfortable social situations. I bet it's not the best solution the majority of the time. But that doesn't mean you have to suppress your emotions; you can express them in subtler ways that maybe aren't 100% explicitly honest.

Likely referring to the "Racist e-mail controversy" section on Bostrom and the pervasive FTX and Bankman-Fried references throughout the EA article.

Has Eliezer written more extensively on why AI-boxing won't work than what he wrote a decade ago? Old posts suggest that the best argument against boxing is Eliezer doing the AI box experiments with some people and winning most of them. The idea being: if a brain as dumb as Eliezer can get out of the box, then so can an AI.

Do we have any better evidence than that?

I don't really see the "why" behind the assertions in your post. For example:

It's not nice to be in a community that constantly hints that you might just not be good enough and that you can't get good enough.

Ok, it's not nice. It's understandable that many people don't want to think they're not good enough. But if they truly are not good enough, then the effort they spend toward solving alignment, in a way they can't actually contribute to, doesn't help solve alignment. The niceness of the situation has little bearing on how effective the protocols are.

If we want to actually accomplish anything, we need to encourage people to make bigger bets, and to stop stacking up credentials so that fellow EAs think they have a chance

Ok, this is a notion; maybe it's right, maybe it's not. I'm just not getting much from this post telling me why it's right.
