1 min read · 24th Apr 2021 · 16 comments
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This is a special post for quick takes by gwern. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
[-]gwern · 1mo

Warning for anyone who has ever interacted with "robosucka" or been solicited for a new podcast series in the past few years: https://www.tumblr.com/rationalists-out-of-context/744970106867744768/heads-up-to-anyone-whos-spoken-to-this-person-i

"Who in the community do you think is easily flatterable enough to get to say yes, and also stupid enough to not realize I'm making fun of them."

I think anyone who says anything like this should stop and consider whether it is more likely to come out of the mouth of the hero or the villain of a story.

[-]Viliam · 1mo

I think the people who say such things don't really care, and would probably include your advice in the list of quotes they consider funny. (In other words, this is not a "mistake theory" situation.)

EDIT:

The response is too harsh, I think. There are situations where this is useful advice. For example, if someone is acting under peer pressure, then telling them this may provide a useful outside view. As Asch's conformity experiment teaches us, the first dissenting voice can be extremely valuable. It just seems unlikely that this is the robosucka's case.

You're correct that this isn't something that can be told to someone who is already in the middle of doing the thing. They mostly have to figure it out for themselves.

I think anyone who says anything like this should stop and consider whether it is more likely to come out of the mouth of the hero or the villain of a story.

 

->

anyone who is trying to [do terrible thing] should stop and consider whether that might make them [a person who has done terrible thing]

can you imagine how this isn't a terribly useful thing to say.

Advice of this specific form has been helpful for me in the past. Sometimes I don't notice immediately when the actions I'm taking are not ones I would endorse after a bit of thinking (particularly when they're fun and good for me in the short term but bad for others, or for me in the longer term). This is also why having rules to follow for myself is helpful (e.g. never lying or breaking promises).

hmm, fair. I guess it does help if the person is doing something bad by accident, rather than because they intend to. just, don't underestimate how often the latter happens either, or something. or overestimate it, would be your point in reply, I suppose!

[-]gwern · 10mo

I have some long comments I can't refind now (weirdly) about the difficulty of investing based on AI beliefs (or forecasting in general): similar to catching falling knives, timing is all-important and yet usually impossible to nail down accurately; specific investments are usually impossible if you aren't literally founding the company, and indexing 'the entire sector' definitely impossible. Even if you had an absurd amount of money, you could try to index and just plain fail - there is no index which covers, say, OpenAI.

Apropos, Matt Levine comments on one attempt to do just that:

Today the Wall Street Journal has a funny and rather cruel story about how SoftBank Group went all-in on artificial intelligence in 2018, invested $140 billion in the theme, and somehow … missed it … entirely?

The AI wave that has jolted up numerous tech stocks has also had little effect on SoftBank’s portfolio of publicly traded tech stocks it backed as startups—36 companies including DoorDash and South Korean e-commerce company Coupang.

This is especially funny because it also illustrates timing problems:

SoftBank missed out on huge gains at AI-focused chip maker Nvidia: The Tokyo-based investor put around $4 billion into the company in 2017, only to sell its shares in 2019. Nvidia stock is up about 10 times since.

Oops. EDIT: this is especially hilarious to read in March 2024, given the gains Nvidia has made since July 2023!

Part of the problem was timing: For most of the six years since Son raised the first $100 billion Vision Fund, pickings were slim for generative AI companies, which tended to be smaller or earlier in development than the type of startup SoftBank typically backs. In early 2022, SoftBank nearly completely halted investing in startups when the tech sector was in the midst of a chill and SoftBank was hit with record losses. It was then that a set of buzzy generative AI companies raised funds and the sector began to gain steam among investors. Later in the year, OpenAI released ChatGPT, causing the simmering interest in the area to boil over. SoftBank’s competitors have spent recent months showering AI startups with funding, leading to a wide surge in valuations to the point where many venture investors warn of a growing bubble for anyone entering the space.

Oops.

Also, people are quick to tell you how it's easy to make money, just follow $PROVERB, after all, markets aren't efficient, amirite? So, in the AI bubble, surely the right thing is to ignore the AI companies who 'have no moat' and focus on the downstream & incumbent users and invest in companies like Nvidia ('sell pickaxes in a gold rush, it's guaranteed!'):

During the years that SoftBank was investing, it generally avoided companies focused specifically on developing AI technology. Instead, it poured money into companies that Son said were leveraging AI and would benefit from its growth. For example, it put billions of dollars into numerous self-driving car tech companies, which tend to use AI to help learn how humans drive and react to objects on the road. Son told investors that AI would power huge expansions at numerous companies where, years later, the benefits are unclear or nonexistent. In 2018, he highlighted AI at real-estate agency Compass, now-bankrupt construction company Katerra, and office-rental company WeWork, which he said would use AI to analyze how people communicate and then sell them products.

Oops.

tldr: Investing is hard; in the future, even more so.

[-]lc · 7mo

Sure, investing pre-slow-takeoff is a challenge. But if your model says something crazy like 100% YoY GDP growth by 2030, then NASDAQ futures (which do include OpenAI exposure, by virtue of Microsoft's 50% stake) seem like a pretty obvious choice.

Humanities' satirical traditions: I always enjoy the CS/ML/math/statistics satire in the annual SIGBOVIK and the Ig Nobels; physics has arXiv April Fools papers (like "On the Impossibility of Supersized Machines") & journals like Special Topics; and medicine has the BMJ Christmas issue, of course.

What are the equivalents in the humanities, like sociology or literature? (I asked a month ago on Twitter and got zero suggestions...) EDIT: as of March 2024, no equivalents have been found.

Normalization-free Bayes: I was musing on Twitter about what the simplest possible still-correct computable demonstration of Bayesian inference is, that even a middle-schooler could implement & understand. My best candidate so far is ABC Bayesian inference*: simulation + rejection, along with the 'possible worlds' interpretation.
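A minimal sketch of the ABC idea in Python (the 7-heads-out-of-10 coin example and the uniform prior are my own illustration): sample a possible world from the prior, simulate data inside it, and discard the world unless the simulation reproduces the observation; the surviving worlds just *are* the posterior.

```python
import random

random.seed(0)

# Observed data: 7 heads out of 10 coin flips.
observed_heads, n_flips = 7, 10

def coinflip(p):
    return random.random() < p

accepted = []
for _ in range(100_000):
    # 1. Sample a hypothesis (a possible world) from the prior:
    p = random.random()  # uniform prior on the coin's bias
    # 2. Simulate data under that hypothesis:
    heads = sum(coinflip(p) for _ in range(n_flips))
    # 3. Reject worlds that fail to reproduce the observation:
    if heads == observed_heads:
        accepted.append(p)

# The surviving samples ARE the posterior -- no normalization formula needed.
posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))  # ~0.67; exact posterior mean is (7+1)/(10+2)
```

The obvious cost is that exact-match rejection gets hopelessly slow as the data grows, which is why practical ABC uses tolerances and summary statistics; but for a middle-schooler-sized demonstration, the naive version works fine.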

Someone noted that rejection sampling is simple but needs normalization steps, which adds complexity back. I recalled that somewhere on LW many years ago someone had a comment about a Bayesian interpretation where you don't need to renormalize after every likelihood computation, and every hypothesis just decreases at different rates; as strange as it sounds, it's apparently formally equivalent. I thought it was by Wei Dai, but I can't seem to refind it because queries like 'Wei Dai Bayesian decrease' obviously pull up way too many hits, it's probably buried in an Open Thread somewhere, my Twitter didn't help, and Wei Dai didn't recall it at all when I asked him. Does anyone remember this?
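For what it's worth, here is a toy illustration of that no-renormalization view (my own reconstruction, not the lost comment): each hypothesis's weight simply shrinks by its likelihood on every observation, different hypotheses shrink at different rates, and since only the ratios carry information you never have to renormalize mid-stream.

```python
# Two hypotheses about a coin: fair (p=0.5) vs. biased-to-heads (p=0.8).
hypotheses = {"fair": 0.5, "biased": 0.8}
weights = {"fair": 1.0, "biased": 1.0}  # uniform prior, unnormalized

data = [1, 1, 0, 1, 1]  # 1 = heads, 0 = tails

for obs in data:
    for name, p in hypotheses.items():
        likelihood = p if obs == 1 else 1 - p
        weights[name] *= likelihood  # every weight shrinks; no renormalizing

# Ratios are preserved throughout; normalize once at the end, if ever:
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}
print(posterior)  # biased ≈ 0.72, fair ≈ 0.28
```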

* I've made a point of using ABC in some analyses simply because it amuses me that something so simple still works, even when I'm sure I could've found a much faster MCMC or VI solution with some more work.


Incidentally, I'm wondering if the ABC simplification can be taken further to cover subjective Bayesian decision theory as well: if you have sets of possible worlds/hypotheses, let's say discrete for convenience, and you do only penalty updates as rejection sampling of worlds that don't match the current observation (like AIXI), can you then implement decision theory normally by defining a loss function and maximizing over it? In which case you can get Bayesian decision theory without probabilities, calculus, MCMC, VI, etc., or anything more complicated than a list of numbers and a few computational primitives like coinflip().
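A sketch of that, under toy assumptions of my own choosing (three discrete coin-bias worlds, a bet-or-pass decision): reject the worlds that fail to reproduce the observation, then pick the action with the lowest average loss over the surviving worlds. No probabilities appear anywhere, just lists of worlds and a flip primitive.

```python
import random

random.seed(1)

def flip(bias):
    # the only "probability primitive" needed
    return random.random() < bias

# Discrete possible worlds: the unknown bias of a coin we can bet on,
# equally weighted a priori.
worlds = [random.choice([0.25, 0.5, 0.75]) for _ in range(100_000)]

# Observation: the coin came up heads twice in a row.
# The "penalty update" is pure rejection: keep worlds reproducing it.
surviving = [b for b in worlds if flip(b) and flip(b)]

# Decision theory without explicit probabilities: average each action's
# loss over the surviving worlds and pick the minimizer.
def loss(action, bias):
    if action == "pass":
        return 0.0
    # "bet_heads" pays +1 on heads, -1 on tails
    return -(bias - (1 - bias))

actions = ["bet_heads", "pass"]
best = min(actions, key=lambda a: sum(loss(a, b) for b in surviving) / len(surviving))
print(best)  # high-bias worlds survive rejection more often, so betting wins
```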

Doing another search, it seems I made at least one comment that is somewhat relevant, although it might not be what you're thinking of: https://www.greaterwrong.com/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a-cdt-sia-policy/comment/kuY5LagQKgnuPTPYZ

Funny that you have your great LessWrong white whale as I do, and that you also recall it may be from Wei Dai (while he doesn't recall it):

 https://www.lesswrong.com/posts/X4nYiTLGxAkR2KLAP/?commentId=nS9vvTiDLZYow2KSK

Danbooru2021 is out. We've gone from n=3m to n=5m (w/162m tags) since Danbooru2017. Seems like all the anime you could possibly need to do cool multimodal text/image DL stuff, hint hint.

[-]gwern · 3y

2-of-2 escrow: what is the exploding Nash equilibrium? Did it really originate with NashX? I've been looking for the history & real name of this concept for years now and have failed to refind it. Anyone?

Gwern, I wonder what you think about this question I asked a while ago on causality, in relation to the article you posted on Reddit. Do we need more general causal agents for addressing issues in RL environments?

Apologies for posting here; I didn't know how to mention/tag someone on a post in LW.

https://www.lesswrong.com/posts/BDf7zjeqr5cjeu5qi/what-are-the-causality-effects-of-an-agents-presence-in-a?commentId=xfMj3iFHmcxjnBuqY

[+][comment deleted] · 2y