LESSWRONG

martinkunev

Comments
When is a mind me?
martinkunev · 9d · 10

My fear with the teleporter has always been about the engineering details: can it get a consistent snapshot of me? What about the last moments after the snapshot and before the old copy is destroyed? Can it reliably reconstruct me? What happens in case of a failure?

Assuming many worlds quantum mechanics, we should have similar anticipations for forking into two and for tossing a quantum coin.

We are likely in an AI overhang, and this is bad.
martinkunev · 16d · 10

There is no rollback when open-weight models are almost SOTA. Could we convince people like Zuck that open weights are too risky? I seriously doubt it.

Female sexual attractiveness seems more egalitarian than people acknowledge
martinkunev · 1mo · 10

There are some differences between what women consider attractive female traits and what men actually find attractive. The example I like to give is how some women enlarge their lips even though men (usually) don't find that attractive.

I think these are the relevant points:

Maybe women's sense of each other's beauty is more discriminating than men's.

Some women probably don't distinguish a sex symbol's actual physical attractiveness from the other characteristics that would make her rarely and especially appealing to women (like fame, wealth, etc.), and they're neglecting to account for these factors being less salient to men.

Corrigibility, Much more detail than anyone wants to Read
martinkunev · 1mo · 10

This:

Might agent b rewrite agent a's brain to make agent a better satisfy agent b's utility function? Most forms of wire-heading inherently limit the ability of agents to affect the future.

and this:

We have not proved that agent b does not try to affect agent a's utility function (in fact, I expect in many cases agent b does try to influence agent a's utility function).

appear to be in conflict. Are you trying to say that, depending on the circumstances, b may try to influence a's utility function or avoid doing so?

Do you even have a system prompt? (PSA / repo)
martinkunev · 2mo · 20

How important is it to keep the system prompt short? I guess this would depend on the model, but does anybody have useful tips on that?

Open weights != Open source
martinkunev · 3mo · -10

well-established

The usage I'm objecting to started, as far as I can tell, about 2 years ago with Llama 2. The term "open weights", which is often used interchangeably, is a much better fit.

Open weights != Open source
martinkunev · 3mo · 20

At some point the open/closed distinction becomes insufficient as a description. You could very well have an open-source wrapper (or fine-tuning) of something which is closed-source. Just try not to mislead people about what you're offering.

Open weights != Open source
martinkunev · 3mo · 10

If I vibe-coded an app prompting, say, Claude, and released it along with the generated code, would you have the same objections to me calling it "open source,"

No, because I don't think this misleads people. Granted, the term "open source" is fuzzy at the boundaries. Should we use the term? I don't know, but if we do, it only makes sense if it means something different from "closed source".

wrong in suggesting they prefer to work with the model by editing the training data and "recompiling" instead of starting with the weights

One doesn't exclude the other. If you're creating v2 of your model, you'd likely: take the training code and data for v1; make some changes / add new things; run the new training code on the new data. For minor changes you may prefer to do fine-tuning on the weights.
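The contrast between the two routes can be sketched with a toy model. This is a minimal illustration under simplifying assumptions, not a real training pipeline; the function names and the mean-as-model choice are hypothetical, chosen only to make the distinction concrete:

```python
# Toy contrast: "training from source" (code + data) vs "fine-tuning weights".
# Not a real LLM pipeline; the "model" here is just a single number (the mean).

def train_from_scratch(data):
    """Build the model from the training data: here, the mean of the data."""
    return sum(data) / len(data)

def fine_tune(weights, new_data, lr=0.1):
    """Nudge existing weights toward new data, without full retraining."""
    for x in new_data:
        weights += lr * (x - weights)
    return weights

v1_data = [1.0, 2.0, 3.0]
w_v1 = train_from_scratch(v1_data)  # requires the "source" (code + data)

# Route A (needs open source): change the data, rerun training from scratch.
v2_data = v1_data + [10.0]
w_v2_retrained = train_from_scratch(v2_data)

# Route B (needs only open weights): start from w_v1 and adapt it directly.
w_v2_finetuned = fine_tune(w_v1, [10.0])
```

Both routes produce a "v2", but only route A is available when the training code and data are released; with weights alone, route B is all you have.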

Open weights != Open source
martinkunev · 3mo* · 14

wildly more expensive 

Suppose I write a program and let people download the binary. Can I say "I spent 100k on AWS to compile it, therefore the binary is open source"?

not even modification

Would you say compiling source code from scratch (e.g. for a different platform) is not a modification?

Even if you're not intending to retrain the model from scratch, simply knowing what the training data is is valuable. Maybe you don't care about the training data, but somebody else does. I don't think "I could never possibly make use of the source code / training data" is an argument that a binary or a set of weights is actually open source.

How does open source differ from closed source for you in the case of generative models? If they are the same, why use the term at all?

Pronouns are Annoying
martinkunev · 4mo · 30

There is the possibility of misgendering somebody and them taking it seriously. Sometimes it feels like you're walking through a minefield. It's not conducive to good social interaction.

too few pronouns, and communication becomes vague and cumbersome

I'm wondering why languages like Finnish can do just fine with "hän" while English needs he/she.

Posts (karma · title · age · comments):

0 · Open weights != Open source · 3mo · 8
5 · Subjective experience is most likely physical · 4mo · 3
6 · Understanding Agent Preferences · 8mo · 2
11 · What is Randomness? · 1y · 2
10 · Is CDT with precommitment enough? [Question] · 1y · 17
4 · What is Ontology? · 2y · 0
4 · Choosing a book on causality [Question] · 2y · 3
24 · Would you have a baby in 2024? [Question] · 2y · 76
11 · How useful is Corrigibility? · 2y · 4
3 · Disincentivizing deception in mesa optimizers with Model Tampering · 2y · 0