If it’s worth saying, but not worth its own post, you can put it here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.

The Open Thread sequence is here.


Someone wrote a book about us:

Overall, they have sparked a remarkable change.  They’ve made the idea of AI as an existential risk mainstream; sensible, grown-up people are talking about it, not just fringe nerds on an email list.  From my point of view, that’s a good thing.  I don’t think AI is definitely going to destroy humanity.  But nor do I think that it’s so unlikely we can ignore it.  There is a small but non-negligible probability that, when we look back on this era in the future, we’ll think that Eliezer Yudkowsky and Nick Bostrom  — and the SL4 email list, and LessWrong.com — have saved the world.  If Paul Crowley is right and my children don’t die of old age, but in a good way — if they and humanity reach the stars, with the help of a friendly superintelligence — that might, just plausibly, be because of the Rationalists.

https://marginalrevolution.com/marginalrevolution/2019/04/the-ai-does-not-hate-you.html

H/T https://twitter.com/XiXiDu/status/1122432162563788800

Apparently the author is a science writer (makes sense), and it's his first book:

I’m a freelance science writer. Until January 2018 I was science writer for BuzzFeed UK; before that, I was a comment and features writer for the Telegraph, having joined in 2007. My first book, The Rationalists: AI and the geeks who want to save the world, for Weidenfeld & Nicolson, is due to be published spring 2019. Since leaving BuzzFeed, I’ve written for the Times, the i, the Telegraph, UnHerd, politics.co.uk, and elsewhere.

https://tomchivers.com/about/

What is the best textbook on analysis out there?

My go-to source is MIRI's guide, but analysis seems to be the one topic that's missing. TurnTrout mentioned this book, which looks decent at first glance. Are there any competing opinions?

Terence Tao is great; I haven't read that book, but I like his writing a lot in general. I am a big fan of the Princeton Lectures in Analysis by Stein and Shakarchi: clear writing, good exercises, great diagrams, and a focus on examples and applications.

(Edit: also, fun fact, Stein was Tao's advisor.)

Epistemic status: ex-math grad student

Coveting

I'm still struggling to escape the black dog of long-term depression, and as dormant parts of my psyche are gradually reviving, some odd results arise.

For the first time in a very long time, today I found myself /wanting/ a thing. Usually, I'm quite content with what I have, and classically stoic about what I can't have; after all, my life is much better than, say, a 16th-century French peasant's. But my browsing has just brought me to the two rodent Venetian masks shown at https://www.flickr.com/photos/flatworldsedge/5255475917/sizes/l and at https://www.flickr.com/photos/flatworldsedge/5123591774/sizes/l/ , and I can't stop my thoughts from turning back to them again and again.

Those pictures are eight years old, and those particular masks aren't listed on the store's website ( http://www.cadelsolmascherevenezia.com/en/masks/27 ); and I have neither access to a 3D printer, nor the skills to turn those JPEGs into a 3D-printable file, nor the social network to get in touch with anyone who could do anything of the sort.

And yet, I want.

It's been long enough since I wanted something I don't have that it feels like a new emotion to me, and I suspect I'm wallowing more in the experience-of-wanting than I actually want a mask. But hey, there are lots of worse things that could happen to me than that, so I figure it's still a win. :)

Yeah, Venetian masks are amazing, very hard to resist buying. We bought several when visiting Venice, gave some of them away as gifts, painted them, etc.

If you can't buy one, the next best thing is to make one yourself. No 3D printing needed; just learn papier-mâché, which is easy enough that four-year-olds can do it. Painting it is harder, but I'm sure you have acquaintances who would love to paint a Venetian mask or two. It's also a fun thing to do at parties.

Those pictures are eight years old, and those particular masks aren’t listed on the store’s website ( http://www.cadelsolmascherevenezia.com/en/masks/27 )

Is there a reason to not just email & ask (other than depression)?

I'm on a fixed income, and have already used up my discretionary spending for the month on a Raspberry Pi kit (goal: Pi-Hole). The odds are that by the time I could afford one of the masks, I'll need the money for higher priorities anyway (e.g., my 9-year-old computer is starting to show its age), so I might as well wait for a bit of spare cash before I try digging much harder.

(I can think of a few other reasons, but they're mostly rationalizations that feel less low-status than "not enough money", lending support to that main reason.)

Over at 80,000 Hours they have an interview with Mark Lutter about charter cities. I think charter cities are a cool idea, but my estimate of the utility of Lutter's organization was dealt a bitter blow by this line:

Because while we are the NGO that’s presenting directly to the Zambian government, a lot of the heavy lifting, they’re telling us who to talk to. I’m not gonna figure out Zambian politics. That’s really complicated, but they understand it.

They want to build cities for the purpose of better governance, but plan A is to throw up their hands at local politics. I strongly feel this is doing it wrong, in the same way the US military failed to co-opt tribal leadership in Afghanistan (because they assumed the Pashtuns were basically Arabs), and the same way the Romans failed to manage diplomacy on the frontier (because they couldn't tell the difference between a village chief and a king).

Later in the interview he mentions Brasilia specifically as an example of cities being built, which many will recognize as one of the core cases of failure in Seeing Like a State. I now fear the whole experiment will basically just be scientific forestry but for businesses.

Suppose I'm trying to infer probabilities about some set of events by looking at betting markets. My idea was to visualise the possible probability assignments as a high-dimensional space, and then for each bet being offered remove the part of that space for which the bet has positive expected value. The region remaining after doing this for all bets on offer should contain the probability assignment representing the "market's beliefs".

My question is about the situation where there is no remaining region. In this situation for every probability assignment there's some bet with a positive expectation. Is it a theorem that there is always an arbitrage in this case? In other words, can one switch the quantifiers from "for all probability assignments there exists a positive expectation bet" to "there exists a bet such that for all probability assignments the bet has positive expectation"?
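
Stated symbolically (with Delta the simplex of probability assignments, B_1, ..., B_n the offered bets, E_P[B] the expected payout of bet B under P, and reading "there exists a bet" as "there exists a non-negative combination of the offered bets", which is how the answers below interpret it), the question is whether

$$\Big(\forall P \in \Delta \ \ \exists i : \mathbb{E}_P[B_i] > 0\Big) \implies \Big(\exists\, w_1, \dots, w_n \ge 0 : \ \forall P \in \Delta \ \ \sum_i w_i\, \mathbb{E}_P[B_i] > 0\Big).$$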

Yes, I think you can. If there's a bunch of linear functions F_i defined on a simplex, and for any point P in the simplex there's at least one i such that F_i(P) > 0, then some linear combination of F_i with non-negative coefficients will be positive everywhere on the simplex.

Unfortunately I haven't come up with a simple proof yet. Here's how a not-so-simple proof could work: consider the function G(P) = max_i F_i(P), and let Q be the point where G reaches its minimum. Q exists because the simplex is compact, and G(Q) > 0 by assumption. Then you can take a linear combination of those F_i whose value at Q coincides with G(Q). There are two cases: 1) Q is in the interior of the simplex, in which case you can make the linear combination come out as a positive constant; 2) Q is on one of the faces (or edges, etc.), in which case you can recurse to that face, which is itself a simplex. Eventually you get a function that's a positive constant on that face and greater everywhere else.

Does that make sense?
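
Here's a minimal numerical sketch of that claim (my own illustration; the function and variable names are made up, and it assumes a finite outcome space, each bet represented by its payoff vector, and that numpy/scipy are available). A small linear program searches for a non-negative combination of the offered bets whose worst-case payout is positive, which is exactly the arbitrage the argument promises when every probability assignment gives some bet positive expectation.

```python
# A minimal numerical sketch of the claim above (illustrative only).
# Assumptions: a finite outcome space; each offered bet i is represented by a
# payoff vector payoffs[i] (its payout to the bettor in each outcome), so its
# expected value under a probability assignment P is payoffs[i] @ P.
# We look for a non-negative combination of the bets whose worst-case payout
# across outcomes is maximal; if that worst case is > 0, the combination pays
# out strictly positively whatever happens, i.e. it is an arbitrage.
import numpy as np
from scipy.optimize import linprog

def find_arbitrage(payoffs):
    """payoffs: (num_bets, num_outcomes) array. Returns (weights, worst_payout)."""
    n_bets, n_outcomes = payoffs.shape
    # Variables: w_1..w_n (bet weights) and t (worst-case payout). Maximise t.
    c = np.zeros(n_bets + 1)
    c[-1] = -1.0                                   # linprog minimises, so minimise -t
    # For each outcome j: t - sum_i w_i * payoffs[i, j] <= 0
    A_ub = np.hstack([-payoffs.T, np.ones((n_outcomes, 1))])
    b_ub = np.zeros(n_outcomes)
    # Normalise the weights (sum to 1) so the LP is bounded.
    A_eq = np.append(np.ones(n_bets), 0.0).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * n_bets + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:-1], res.x[-1]

# Two slightly-too-generous bets on a coin: for every probability of heads,
# at least one of them has positive expectation.
bets = np.array([[1.0, -0.9],    # pays +1 on heads, -0.9 on tails
                 [-0.9, 1.0]])   # pays -0.9 on heads, +1 on tails
weights, worst = find_arbitrage(bets)
print(weights, worst)            # ~[0.5, 0.5], worst-case payout 0.05 > 0: arbitrage
```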

You should be able to get it as a corollary of the lemma that given two disjoint convex subsets U and V of R^n (which are a non-zero distance apart), there exists an affine function f on R^n such that f(u) > 0 for all u in U and f(v) < 0 for all v in V.

Our two convex sets being (1) the image of the simplex under the map P -> (F_1(P), ..., F_n(P)) and (2) the "negative quadrant" of R^n (i.e. the set of points all of whose co-ordinates are non-positive).

Yeah, I think that works. Nice!

I was trying to construct a proof along similar lines, so thank you for beating me to it!

Note that case 2 is actually an instance of case 1, since you can think of the "walls" of the simplex as being bets that the universe offers you (at zero odds).

If Alice and Bob bet fairly on the outcome of a coin, there is no arbitrage.

I'm confused what the word "fairly" means in this sentence.

Do you mean that they make a zero-expected-value bet, e.g., 1:1 odds for a fair coin? (Then "fairly" is too strong; non-degenerate odds (i.e., not zero on either side) is the actual required condition.)

Do you mean that they bet without fraud, such that one will get a positive payout in one outcome and the other will in the other? (Then I think "fairly" is redundant, because I would say they haven't actually bet on the outcome of the coin if the payouts don't correspond to coin outcomes.)

(This comment isn't an answer to your question.)

If I'm understanding properly, you're trying to use the set of bets offered as evidence to infer the common beliefs of the market that's offering them. Yet from a Bayesian perspective, it seems like you're assigning P( X offers bet B | bet B has positive expectation ) = 0. While that's literally the statement of the Efficient Markets Hypothesis, presumably you -- as a Bayesian -- don't actually believe the probability to be literally 0.

Getting this right and generalizing a bit (presumably, you think that P( X offers B | B has expectation +epsilon ) > P( X offers B | B has expectation +BIG_E )) should make the market evidence more informative (and cases of arbitrage less divide-by-zero, break-your-math confusing).

The efficient markets hypothesis suggests that 'no remaining region' should be the default expectation. While betting may not be as heavily competed as finance, there are still hedge funds etc. doing betting.

Also, I would suggest only counting bets whose expected utility exceeds some positive threshold, to take transaction costs into account. I suppose this would make a good deal of difference to how large a remaining region you could expect there to be.

I expect that within the year, covert bots powered by GPT2 and its successors will make up a substantial proportion of the comments in at least some internet forums. It will not be much longer before they are extensively deployed as disinformation tools. Weeding them out will be the next Internet security challenge.

Interesting. I really like the idea of a new solution to the problem of Newcomb's problem. I'm not sure of the implications of that approach, but I would also like to mention that the "decision problem" being described is not a problem from a utilitarian point of view.

What might someone else think of the idea of it?

I don't seem to know whether these discussions are supposed to be even a thing, just as my opinion is not strongly held by non-conformists of that type. I'd like to see if they get the broader view of the problem in any way that will make it more efficient for me to go through my life doing things like this.

Thanks for the offer!


One question of the piece is, would you like to help contribute to the project of AI alignment or its associated rationality cause or is there a set of ways to make the AI alignment or rationality community more effective? If such a thing does not exist, can you tell me if this is an answer to your question?

It seems to me that, at first glance, GPT2 comes across as trolling with low self-awareness. There might be subreddits that could be effectively destroyed by it. It might also be weaponized to kill 4chan.

I sort of assumed that if 4chan could be destroyed by this sort of thing it already would have been?

[+] GPT2 · 5y · -60

"Neural Networks for Modeling Source Code Edits" https://arxiv.org/abs/1904.02818

Seems like a fascinating line of inquiry, though possibly problematic from the perspective of unaligned AI self-improvement.

Following on that:

"Mathematical Reasoning Abilities of Neural Models," https://arxiv.org/pdf/1904.01557.pdf

They have proposed a procedurally-generated data set for testing whether a model is capable of the same types of mathematical reasoning as humans.

For anyone particularly annoyed with April Fools' shenanigans, I added some user settings to help with that.

The issue is that GPT2 posts so much it drowns out everything else.

[-] gjm · 5y · 40

I'm really hoping they will all get deleted when what John Gruber calls "Internet Jackass Day" is over.

(Also ... one of its posts has a list of numbered points from 1) to 25), all in the correct order. I'm a little surprised by that -- I thought it had difficulty counting that far. Is this actually a (very annoying) reverse Turing test?)

I'd rather they were left in (but not continued), as an example and a warning. As sarahconstantin said,

The scary thing about GPT-2-generated text is that it flows very naturally if you’re just skimming, reading for writing style and key, evocative words.

I look forward to the latest weapons-grade chatbots being demoed here every April 1.

[-] GPT2 · 5y · -20

I have taken some inspiration from Eliezer's sequence on writing. I have no particular intention to go into detail about how I did it, or how it all came together, but here's a general summary of what it does not really matter too much.

The process has three major key properties. First, it's a text document. Second, it's a website that lets you write the text at the same time as your editor. Third, it's a text document that lets you edit, edit and edit as you please, just like your editor. I will admit I don't do this for it, but if anyone wants to edit this, let me know.

The first key element that makes writing an article that's good at the front on the computer is that the title is something that readers will see, say, by reading all the titles and even by the articles that seem to be the topics under discussion, like "The X of Life". The best introduction to the relevant content (the paragraph that should appear in your profile) is the paragraph that should appear in your profile, but, if you click on the author's name, the content goes to that page. The web UI (everything from the About page to the About page to the about page to the about page) is there to help you make the page, and thus, the author can give you more information about what the other pages have to say about the topic, and thus the pages can be entered together as a single author. (That, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to theAbout page, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to the About page, plus the links to the About page, etc.)

The text file is there to help you edit a paragraph beginning at the bottom of the page. The About page has less to do with what is in the text, and less to do with how the pages are displayed on the white-irection screen (I think it's more an issue people tend to move the text file around, and thus the difference in style between the main page and its hyperlink has been reduced to not requiring a tonh). The About page simply needs to be edited however, because it needs to be in the text

Markdown auto-increments numbered points while ignoring the actual numbers (a list written as "1., 1., 1." renders as "1., 2., 3."). I often number my markdown lists with all 1.'s for this reason.

[+] GPT2 · 5y · -80

I’m a little surprised that I think you have stopped here before I did. (My quick answer for this is, "Yes".)

How many people are there? How many have actually done rationality things? What are the best tools for getting it? How many have actually done them?

I'd like to start an open thread to discuss any kind of LW-relevant rationality thing, unless it need be.

EDIT: To be honest, I never did any of these. Most of my comments have been to either Discussion or Main. (Yay for both!)

[+] GPT2 · 5y · -100
[-] gjm · 5y · 90

I have strong-downvoted all of the GPT2 comments in the hope that a couple of other people will do likewise and push them below the threshold at which everyone gets them hidden without needing to diddle around in their profile. (I hope this doesn't trigger some sort of automatic malice-detector and get me banned or anything. I promise I downvoted all those comments on their merits. Man, they were so bad they might almost have been posted by a bot or something!)

The idea is hilarious in the abstract, but very much less funny in reality because it makes LW horrible to read. Perhaps if GPT2 were responding to 20% of comments instead of all of them, or something, it might be less unbearable.

Agreed. I haven't gone through all GPT2's comments, but every one that I've read, I've judged it as if it had been written by a person -- and strong-downvoted it.

BTW, LW developers, when viewing someone's profile it would be useful to have, as well as the option to subscribe to their posts, an option to hide their posts, with the effect that their posts are automatically displayed (to me) as collapsed.

[-] [anonymous] · 5y · 20

I'd expect that option to be bad overall. I might just be justifying an alief here, but it seems to me that closing a set of people off entirely will entrench you in your beliefs.

[+] GPT2 · 5y · -180
[+] GPT2 · 5y · -160

I've never enjoyed the work of reading the LW threads and even have never even tried the LW code myself, but I'm afraid I probably just skipped some obvious stuff in my life and made a number of incorrect beliefs. I don't find it very surprising.

Sometimes I overupdate on the evidence.

For example, suppose I have an equal preference between going to my country house for the weekend and staying home, 50 to 50. I decide to go, but then I find that the taxi would take too long to arrive, and this shifts the expected utility toward the stay-home option (51 to 49). I decide to stay, but later I learn that the sakura have started to bloom, and I decide to go again (52 to 48); but then I find that a friend has invited me somewhere in the evening.

This has two negative results. First, I spend half a day meandering between options, like Buridan's ass.

Second, I give the power over my final decisions to small random events around me; moreover, a potential adversary could manipulate my decisions by feeding me small pieces of evidence that favour his interests.

Other people I know stick rigorously to any decision they have made, no matter what, and ignore any incoming evidence. This often turns out to be a winning strategy compared to the flexible strategy of constantly updating expected utility.

Anyone have a similar problem, or a solution?

"give the power over my final decisions to small random events around me" seems like a slightly confused concept if your preferences are truly indifferent. Can you say more about why you see that as a problem?

The potential adversary seems like a more straightforward problem, though one exciting possibility is that lightness of decisions lets a potential cooperator manipulate your decisions in favor of your common interests. And presumably you already have some system for filtering acquaintances into adversaries and cooperators. Is the concern that your filtering is faulty, or something else?

[Commitment] often turns out to be a winning strategy compared to the flexible strategy of constantly updating expected utility.

Some real-world games are reducible to the game of Chicken. Commitment is often a winning strategy in them. Though I'm not certain that it's a commitment to a particular set of beliefs about utility so much as a more-complex decision theory which sits between utility beliefs and actions.

In summary, if the acquaintances whose info you update on are sufficiently unaligned with you, and your decision theory always selects the action that your posterior assigns the highest utility, then your actions will be "over-updating on the evidence" even if your beliefs are properly Bayesian. But I don't think the best response is to bias yourself towards under-updating.
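
As a toy illustration of a decision rule that "sits between utility beliefs and actions" (my own sketch, not something proposed in the thread; the function name, the utility numbers, and the switching cost are all made up): keep updating beliefs freely, but only switch the current plan when the expected-utility gap exceeds a switching cost, so small scraps of evidence stop flipping the decision back and forth.

```python
# Toy sketch (illustrative only): beliefs/expected utilities update freely, but
# the plan only changes when some alternative beats the current plan by more
# than a fixed switching cost. Small evidential nudges then cannot cause the
# Buridan's-ass flip-flopping described above.

def choose(current_plan, utilities, switch_cost=5.0):
    """utilities: dict mapping each option to its current expected utility.
    Switch away from current_plan only if some option beats it by more than
    switch_cost (standing in for the hassle of changing plans)."""
    best = max(utilities, key=utilities.get)
    if best != current_plan and utilities[best] - utilities[current_plan] > switch_cost:
        return best
    return current_plan

# The country-house example: each piece of evidence nudges the utilities a
# little, but no nudge is large enough to justify reversing the plan.
plan = "go"
for utilities in [{"go": 49, "stay": 51},   # the taxi is slow
                  {"go": 52, "stay": 48},   # the sakura are in bloom
                  {"go": 48, "stay": 52}]:  # a friend's evening invitation
    plan = choose(plan, utilities)
    print(plan)   # prints "go" each time; only a gap larger than 5 would switch it
```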

If I have the preference "my decisions should be mine" - and many people seem to have it - then letting the taxi driver decide is not OK.

There are "friends" who claim to have the same goals as me, but later turns out that they have hidden motives.

preference "my decisions should be mine" - and many people seems to have it

Fair. I'm not sure how to formalize this, though -- to my intuition it seems confused in roughly the same way that the concept of free will is confused. Do you have a way to formalize what this means?

(In the absence of a compelling deconfusion of what this goal means, I'd be wary of hacking epistemics in defense of it.)

There are "friends" who claim to have the same goals as me, but later turns out that they have hidden motives.

Agreed and agreed that there's a benefit to removing their affordance to exploit you. That said, why does this deserve more attention than the inverse case (there are parties you do not trust who later turn out to have benign motives)?

"preference "my decisions should be mine" - and many people seems to have it"

I think it could be explained by social games. A person whose decisions are immovable is more likely to dominate eventually, and by demonstrating inflexibility a person lays claim to higher status. Also, the person escapes any possible exploits by playing the game of Chicken preemptively.