In this post, I endorse forum participation (aka commenting) as a productive research strategy that I've managed to stumble upon, and recommend that others at least try it. Note that this is different from saying that forum/blog posts are a good way for a research community to communicate. It's about individually doing better as researchers.
I will try to keep this short; I just want to use some simple problems to point out what I think is a commonly overlooked point in anthropic discussions.
You are among 100 people waiting in a hallway. The hallway leads to a hundred rooms numbered from 1 to 100. All of you are knocked out by a sleeping gas and each put into a random/unknown room. After waking up, what is the probability that you are in room No. 1?
This is just an ordinary probability question. All room numbers are symmetric, so the answer is simply 1%. It is also easy to imagine taking part in similar room-assigning experiments a great number of times and checking the relative fraction of times you wake up in room No. 1, or...
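The frequentist reading is easy to check directly. Below is a minimal Monte Carlo sketch (my own illustration, not from the original post; the trial count is arbitrary) in which the fraction of runs where you wake up in room No. 1 converges to 1%:

```python
import random

# A minimal sketch of the hallway experiment: 100 people are shuffled into
# rooms 1..100, and we track how often "you" (person 0) land in room No. 1.
NUM_ROOMS = 100
NUM_TRIALS = 100_000  # arbitrary; larger counts tighten the estimate

times_in_room_one = 0
for _ in range(NUM_TRIALS):
    rooms = list(range(1, NUM_ROOMS + 1))
    random.shuffle(rooms)          # random, unknown room assignment
    if rooms[0] == 1:              # did "you" end up in room No. 1?
        times_in_room_one += 1

print(f"Fraction of trials waking up in room No. 1: "
      f"{times_in_room_one / NUM_TRIALS:.4f}")  # approaches 0.01
```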
To quickly recap my main intellectual journey so far (omitting a lengthy side trip into cryptography and Cypherpunk land), with the approximate age at which I became interested in each topic in parentheses:
ensuring AI philosophical competence won't be very hard. They have a specific (unpublished) idea that they are pretty sure will work.
Cool, can you please ask them if they can send me the idea, even if it's just a one-paragraph summary or a pile of crappy notes-to-self?
EA Funds aims to empower thoughtful individuals and small groups to carry out altruistically impactful projects - in particular, enabling and accelerating small/medium-sized projects (with grants <$300K). We are looking to increase our level of independence from other actors within the EA and longtermist funding landscape and are seeking to raise ~$2.7M for the Long-Term Future Fund and ~$1.7M for the EA Infrastructure Fund (~$4.4M total) over the next six months.
Why donate to EA Funds? EA Funds is the largest funder of small projects in the longtermist and EA infrastructure spaces, and has had a solid operational track record of giving out hundreds of high-quality grants a year to individuals and small projects. We believe that we’re well-placed to fill the role of a significant independent grantmaker, because...
By all reports, and as one would expect, Google’s Gemini looks to be substantially superior to GPT-4. We now have more details on that, and also word that Google plans to deploy it in December; Manifold gives it an 82% chance of happening this year, and a similar probability of being superior to GPT-4 on release.
I indeed expect this to happen on both counts. This is not too long from now, but also this is AI #27 and Bard still sucks; Google has been taking its sweet time getting its act together. So now we have both the UK Summit and Gemini coming up within a few months, as well as a major acceleration of chip shipments. If you are preparing to try to impact how things go, now might be...
You store everything on a cloud instance, where you don’t get to see the model weights and they don’t get to see your data either, and checks are made only to ensure you are within terms of service or any legal restrictions.
Is it actually possible to build a fine-tuning-and-model-hosting product such that
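As a purely illustrative sketch of the separation described above (all names and structure here are hypothetical, not an existing product or API), the interface of such a service might look something like this:

```python
from dataclasses import dataclass


@dataclass
class FineTuneJob:
    """A customer's request; the dataset reference is opaque to the host."""
    encrypted_dataset_ref: str   # customer data the host never inspects
    base_model_id: str           # the corresponding weights never leave the host


class SealedHostingService:
    """Hypothetical service where neither party can inspect the other's secrets."""

    def submit(self, job: FineTuneJob) -> str:
        """Fine-tune inside an environment closed to both parties and
        return an opaque handle to the resulting model."""
        raise NotImplementedError

    def policy_check(self, model_handle: str) -> bool:
        """The only shared signal: does the job stay within the terms of
        service and any legal restrictions?"""
        raise NotImplementedError

    def query(self, model_handle: str, prompt: str) -> str:
        """The customer queries the fine-tuned model without ever seeing
        the weights."""
        raise NotImplementedError
```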
I am experimenting with pulling more social media content directly into these digests, in part to rely less on social media sites long-term (since content might be deleted, blocked, paywalled, etc.). That makes these digests longer, but it means there is less need to click on links.
I will still link back to original social media posts in order to give credit and make sharing easier. As always, let me know your feedback.
Patrick Collison has a fantastic list of examples of people quickly accomplishing ambitious things together since the 19th century. It does make you yearn for a time that feels... different, when the lethargic behemoths of government departments could move at the speed of a racing startup:
...[...] last century, [the Department of Defense] innovated at a speed that puts modern Silicon Valley startups to shame: the Pentagon was built in only 16 months (1941–1943), the Manhattan Project ran for just over 3 years (1942–1946), and the Apollo Program put a man on the moon in under a decade (1961–1969). In the 1950s alone, the United States built five generations of fighter jets, three generations of manned bombers, two classes of aircraft carriers, submarine-launched ballistic missiles, and nuclear-powered
I think probably not. If a dog were asked whether a human is "conscious", it might mention things like:
In the same way, many (perhaps most) AI experts might never agree that LLMs or AGI systems have achieved "consciousness", since "consciousness" is just...
I'd enjoy seeing a post or two about your setup and initial experiences, and after some time, about your discoveries and remaining uncertainties. I'm excited about the upcoming tech for this, but I'm not convinced it's quite good enough for me yet - having two large screens and a good keyboard and mouse is pretty good for my workstyle.
This is a linkpost to a recent blogpost from Michael Nielsen, who has previously written on EA among many other topics. This blogpost is adapted from a talk Nielsen gave to an audience working on AI before a screening of Oppenheimer. I think the full post is worth a read, but I've pulled out some quotes I find especially interesting (bolding my own).
...I was at a party recently, and happened to meet a senior person at a well-known AI startup in the Bay Area. They volunteered that they thought "humanity had about a 50% chance of extinction" caused by artificial intelligence. I asked why they were working at an AI startup if they believed that to be true. They told me that while they thought it was
To paraphrase Von Neumann, sometimes we confess to a selfish motive that we may not be suspected of an unselfish one, or to one sin to avoid being accused of another.
[Of] the splendid technical work of the [atomic] bomb there can be no question. I can see no evidence of a similar high quality of work in policy-making which...accompanied this...Behind all this I sensed the desires of the gadgeteer to see the wheels go round.
Doesn't example 3 show that examples 1 and 2 are actually the same? What difference does it make whether you start inside or outside the room?