ofer

Send me anonymous feedback: https://docs.google.com/forms/d/e/1FAIpQLScLKiFJbQiuRYBhrBbVYUo_c6Xf0f8DN_blbfpJ-2Ml39g1zA/viewform

Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.


Some quick info about me:

I have a background in computer science (BSc+MSc; my MSc thesis was in NLP and ML, though not in deep learning).

You can also find me on the EA Forum.

Feel free to reach out by sending me a PM here or on my website.

Comments

Working in Virtual Reality: A Review

That's very interesting.

I'd be concerned about the potential impact on eye health of using a VR headset for many hours per day. (Of course, I'm not at all an expert in this area.)

I made an N95-level mask at home, and you can too

I followed your tip to just google it, but every result in the first 2 pages for me was either out of stock or outdated.

As I said in that thread, I was not recommending that google search as a way to buy respirators, and one's best options (which may include buying from a well-known retailer and having a way to substantially lower the risk from counterfeit respirators) may depend on where one lives.

Some AI research areas and their relevance to existential safety

Great post!

I suppose you'd be more optimistic about Single/Single areas if you updated towards fast/discontinuous takeoff?

I made an N95-level mask at home, and you can too

Disclaimer: I'm not an expert.

It turns out that surgical masks are made of the exact same material as N95s! They both filter 95% of 0.1μm particles.

I very much doubt this claim, and the link you provide in support of it points to a website that you later suggest is run by people who seem to you "a bit sketchy". I also doubt that your proposed way of checking the "electrostatic effect" (on large pieces of paper?) can provide strong evidence that the mask's material offers filtering protection similar to that of an N95 respirator.

[EDIT: sorry, you later cite the Rengasamy et al. paper that seems to support that claim to some extent; I'm not sure how much to update on it.]

As a civilian you can’t purchase an N95 anywhere at any price.

This claim is false (see this thread).

BTW: since surgical masks are presumably not intended to be used this way, I would also worry about potential risks of breathing too little oxygen or too much carbon dioxide.

BTW2: Maybe it's worth looking into using your approach for "upgrading" cheap KN95 respirators rather than surgical masks (I suspect that cheap KN95 respirators tend not to seal well, due to the lack of a nose clip and bands that go around the ears rather than around the head). Though the above concern regarding oxygen/carbon dioxide might still apply.

[EDIT: BTW3: for a comparison between cloth masks, surgical masks and N95 respirators see this page on examine.com.]

What considerations influence whether I have more influence over short or long timelines?

(They may spend more on inference compute if doing so would sufficiently increase their revenue. They may also train a more expensive model just to try it out for a short while, to see whether they're better off using it.)

What considerations influence whether I have more influence over short or long timelines?

I didn't follow this. FB doesn't need to run a model inference for each possible post that it considers showing (just as OpenAI doesn't need to run a GPT-3 inference for each possible next token).
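
To illustrate: here's a toy sketch (a made-up dot-product ranking setup with arbitrary numbers, not FB's actual architecture) of how a single batched computation can score a large pool of candidate posts at once, the same way a single GPT-3 forward pass yields a probability for every token in the vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)

n_candidates = 10_000  # candidate posts from cheaper upstream retrieval stages
d = 128                # embedding dimension (arbitrary for this sketch)

user_embedding = rng.normal(size=d)                   # computed once per request
post_embeddings = rng.normal(size=(n_candidates, d))  # precomputed offline

# One batched matrix-vector product scores every candidate; the cost doesn't
# grow like "one full model inference per post considered".
scores = post_embeddings @ user_embedding
best = int(np.argmax(scores))
print(f"top candidate: {best} (score {scores[best]:.2f})")
```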

(BTW, I think the phrase "context window" would correspond to the model's input.)

FB's revenue from advertising in 2019 was $69.7 billion, or about $191 million per day. So yeah, it seems possible that in 2019 they used a model with an inference cost similar to GPT-3's, though not one 10x more expensive [EDIT: under this analysis's assumptions]; so I was overconfident in my previous comment.
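
As a sanity check, here's the back-of-envelope arithmetic. The daily inference count below is a made-up placeholder (FB doesn't publish it), and the per-inference cost is the GPT-3 per-token estimate discussed in my other comment:

```python
annual_ad_revenue = 69.7e9                 # FB's 2019 ad revenue, USD
revenue_per_day = annual_ad_revenue / 365  # ~$191M per day

cost_per_inference = 0.00006  # GPT-3-like per-token cost estimate, USD
inferences_per_day = 1e12     # HYPOTHETICAL: feed items scored per day

cost_per_day = inferences_per_day * cost_per_inference
print(f"revenue per day:        ${revenue_per_day / 1e6:.0f}M")
print(f"cost at GPT-3-like:     ${cost_per_day / 1e6:.0f}M per day")
print(f"cost at 10x that level: ${10 * cost_per_day / 1e6:.0f}M per day")
```

Under that made-up inference count, a GPT-3-like per-inference cost would be affordable relative to daily revenue, while 10x that cost would exceed it.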

What considerations influence whether I have more influence over short or long timelines?

That said, I'd be surprised if the feed-creation algorithm had as many parameters as GPT-3, considering how often it has to be run per day...

The relevant quantities here are the compute cost of each model usage (inference)—e.g. the cost of compute for choosing the next post to place on a feed—and the impact of such a potential usage on FB's revenue.

This post by Gwern suggests that OpenAI was able to run a single GPT-3 inference (i.e. generate a single token) at a cost of $0.00006 (6 cents per 1,000 tokens) or less. I'm sure it's worth much more than $0.00006 to FB to choose well the next post that a random user sees.
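
For reference, the per-token figure follows directly from the quoted pricing:

```python
cost_per_1k_tokens = 0.06                   # $0.06 per 1,000 tokens
cost_per_token = cost_per_1k_tokens / 1000  # = $0.00006 per token
print(f"${cost_per_token:.5f} per token")
```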

What considerations influence whether I have more influence over short or long timelines?

The frontrunners right now are OpenAI and DeepMind.

I'm not sure about this. Note that not all companies are equally incentivized to publish their ML research (some companies may be incentivized to be secretive about their ML work and capabilities due to competition/regulation dynamics). I don't see how we can know whether GPT-3 is further along on the route to AGI than FB's feed-creation algorithm, or the most impressive algo-trading system, etc.

The other places have the money, but less talent

I don't know where the "less talent" estimate is coming from. I wouldn't be surprised if there are AI teams with a much larger salary budget than any team at OpenAI/DeepMind, and I expect the "amount of talent" to correlate with salary budget (among prestigious AI labs).

and more importantly don't seem to be acting as if they think short timelines are possible.

I'm not sure how well we can estimate the beliefs and motivations of all well-resourced AI teams in the world. Also, a team need not be trying to create AGI (or believe they can) in order to create AGI. It's sufficient that they are incentivized to create systems that model the world as well as possible, which is the case for many teams, including ones working on feed creation in social media services and on algo-trading systems. (The ability to plan and find solutions to arbitrary problems in the real world naturally arises from the ability to model it, in the limit.)

What considerations influence whether I have more influence over short or long timelines?

This consideration favors short timelines, because (1) We have a good idea which AI projects will make TAI conditional on short timelines, and (2) Some of us already work there, they seem already at least somewhat concerned about safety, etc.

I don't see how we can have a good idea whether a certain small set of projects will make TAI first conditional on short timelines (or whether the first project will be one in which people are "already at least somewhat concerned about safety"). Like, why not some arbitrary team at Facebook/Alphabet/Amazon or any other well-resourced company? There are probably many well-resourced companies (including algo-trading companies) that are incentivized to throw a lot of money at novel, large-scale ML research.

"Inner Alignment Failures" Which Are Actually Outer Alignment Failures

you should never get deception in the limit of infinite data (since a deceptive model has to defect on some data point).

I think a model can be deceptively aligned even if formally it maps every possible input to the correct (safe) output. For example, suppose that on input X the inference execution hacks the computer on which the inference is being executed, in order to do arbitrary consequentialist stuff (while the inference logic, as a mathematical object, formally yields the correct output for X).
