cata

Programmer, rationalist, chess player, father, altruist.

Comments

cata · 70

Why is it cheaper for individuals to install some amount of cheap solar power for themselves than for the grid to install it and deliver it to them, given the grid's economies of scale in construction and maintenance? Is it transmission cost?

cata · 158

Since LessWrong has a smart community that attracts people with high standards and integrity, if you (a median LW commenter) write your considered opinion about something, I by default take it very seriously and assume it's much, much more likely to be useful than an LLM's opinion.

So if you post a comment that looks like an LLM wrote it, and you don't explain which parts are the LLM's opinion and which are yours, that makes it difficult for me to use. And if there's a norm of posting comments that are partly unmarked LLM opinion, then I have to take on the very large burden of evaluating every comment to figure out whether an LLM wrote it, just to decide whether to take it seriously.

cata · 86

I have been to lots of conferences at lots of kinds of conference centers, and Lighthaven seems very unusual:

  • The space has been extensively and thoughtfully designed to be comfortable and well suited to the activities it hosts.
  • The food/drink/snack situation is dramatically superior.
  • The on-site accommodations are extremely convenient.

I think it's great that rationalist conferences have this extremely attractive space to use, one that actively makes people want to come, rather than being in, like, a random hotel or office campus.

As for LW, I would say something sort of similar:

  • The website and feature set are now dramatically superior to e.g. Discourse or phpBB.
  • It's operated by people who spend lots of time trying to figure out new adjustments that make it better, including ones that nobody else is doing, like splitting out karma and agree voting, and cultivating the best old posts.
  • Partially as a result, the quality of the discussion is basically off the charts for a free general-interest public forum.

In both cases, I don't see other groups trying to max out the quality level in these ways. My best guess for why is that no other group is equally capable, has a similarly strong vision of what would be a good thing to create, and wants to spend the effort to do it.

cata · 106

I would approach this by estimating something like the Shapley value of the involved parties, answering the questions "for a given amount of funding, how many people would have been willing to provide it if necessary?" and "given that funding, how many people would have been willing and able to do the work of the Lightcone crew and produce similar output?"

I don't know much about how Lightcone operates, but my instinct is that the people are difficult to replace, because I don't see many other projects very similar to Lighthaven and LW, while the funding seems somewhat replaceable (for example, I would have been willing to donate much more than I actually did if I had thought less other money would be available). So probably the employees should be getting the majority of the credit.
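For concreteness, here's a toy sketch of that calculation in Python. The characteristic function and its numbers are invented for illustration: it assumes the project is worth 1.0 with both funding and labor, and that labor without this particular funding still captures 0.8, modeling a guess that the funding is ~80% replaceable by other donors.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over every ordering of the players."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)
            coalition = with_p
    n_orders = factorial(len(players))
    return {p: t / n_orders for p, t in totals.items()}

def v(coalition):
    # Toy characteristic function; the numbers are made up.
    if "funding" in coalition and "labor" in coalition:
        return 1.0
    if "labor" in coalition:
        return 0.8  # assumed replaceability of this particular funding
    return 0.0

print(shapley_values(["funding", "labor"], v))
# -> {'funding': 0.1, 'labor': 0.9}
```

Under those made-up assumptions, the hard-to-replace labor ends up with 90% of the credit, which is the shape of the conclusion above.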

cata · 278

I was going to email, but I assume others will want to know too, so I'll just ask here. What is the best way to donate an amount big enough that it's stupid to pay a Stripe fee, e.g. $10k? Do you accept donations of appreciated assets like stock or cryptocurrency?

cata · 20

"But as a secondary point, I think today's models can already use bash tools reasonably well."

Perhaps that's true; I haven't seen many examples of them trying. I did see Buck's anecdote, which was a good illustration of doing a simple task competently (finding the IP address of an unknown machine on the local network).
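I don't know exactly what commands that involved, but for concreteness, here's a rough sketch of that sort of task, in Python rather than bash, assuming a Unix-like machine with the arp tool available:

```python
import re
import subprocess

def local_ips() -> list[str]:
    """List IP addresses currently in this machine's ARP cache,
    a crude way to enumerate nearby machines on the local network."""
    out = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    return re.findall(r"\((\d{1,3}(?:\.\d{1,3}){3})\)", out)

print(local_ips())
```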

I don't work in AI, so maybe I don't know which parts of R&D are most difficult for current SOTA models. But based on the fact that large-scale LLMs are sort of a new field that hasn't had that much labor applied to it yet, I would have guessed that a model that could basically just do mundane stuff and read research papers could spend a shitload of money and FLOPS running a lot of obviously informative experiments that nobody else has properly run, and polishing a bunch of things that nobody else has properly polished.

cata · 50

I'm not confident, but I am avoiding working on these tools because I think the "scaffolding overhang" in this field may well account for most of the gap to superintelligent autonomous agents.

If you imagine an o1-level entity with "perfect scaffolding" (it can get any information on a computer into its context whenever it wants, it can invoke any computer functionality a human could invoke, it can store and retrieve knowledge for itself at will, and its training includes the use of those capabilities), it's not completely clear to me that it couldn't already manage a slow self-improvement takeoff by itself, although the cost might currently be practically prohibitive.
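To make "perfect scaffolding" concrete, here's a minimal sketch of what such an interface might look like; every name here is invented for illustration, and the model-side propose() call is an assumption:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Step:
    kind: str    # "read" | "invoke" | "remember" | "recall" | "done"
    arg: str = ""
    value: str = ""

class Scaffold(ABC):
    """Hypothetical interface for the "perfect scaffolding" described above."""

    @abstractmethod
    def read(self, query: str) -> str:
        """Get any information on the computer into the model's context."""

    @abstractmethod
    def invoke(self, command: str) -> str:
        """Invoke any functionality a human at the computer could invoke."""

    @abstractmethod
    def remember(self, key: str, value: str) -> None:
        """Store knowledge for the model's later use."""

    @abstractmethod
    def recall(self, key: str) -> str:
        """Retrieve previously stored knowledge."""

def run(model, scaffold: Scaffold, goal: str) -> None:
    """Minimal control loop: the model proposes a step, the scaffold
    executes it, and the result is appended to the working context."""
    context = goal
    while True:
        step: Step = model.propose(context)  # assumed model-side API
        if step.kind == "done":
            break
        if step.kind == "remember":
            scaffold.remember(step.arg, step.value)
        else:
            handler = {"read": scaffold.read,
                       "invoke": scaffold.invoke,
                       "recall": scaffold.recall}[step.kind]
            context += "\n" + handler(step.arg)
```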

I don't think building that scaffolding is a trivial task at all, though.

cata · 40

I don't have a bunch of citations, but I spend time in multiple rationalist social spaces, and it seems to me that I would in fact be excluded from many of them if I stuck to sex-based pronouns, because, as stated above, there are many trans people in the community, many of whom hold to the consensus progressive norms on this. The EA Forum policy is not unrepresentative of the typical sentiment.

So I don't agree that the statements are misleading.

(I note that my usual habit is to use singular "they" for visibly NB/trans people, and I am not excluded for that, so it's not precisely a kind of compelled speech.)

cata · 40

I was playing this bot myself lately, and one thing it made me wonder is: how much better would it be at beating me if it were trained against a model of me in particular, rather than how it was actually trained? I feel like I have no idea.
