LESSWRONG

Viliam

Comments

Sorted by Newest
How worker co-ops can help restore social trust
Viliam · 5h

What I see as a possible problem is that a large amount of trust is already required to start a co-op.

arisAlexis's Shortform
Viliam · 6h

We do NOT have evidence that a smarter agent/being was ever controlled by a less intelligent agent/being.

Some people say that we are controlled by our gut flora, not sure if that counts. Also, toxoplasmosis, cordyceps...

Hunch: minimalism is correct
Viliam · 14h

There's this technique that people in minimalism circles talk about where you pack up all your stuff as if you were moving.

Yes, that seems like a reasonable way to approach this. Pack up your stuff, even write the date on the box.

It is possible to err in both directions. It is probably more natural to collect more things than you need (because once you have those extra things, getting rid of them requires a conscious decision). But I have also seen people underestimate the fact that they will need more of something (e.g. plates, forks) if someone visits them. Even if you know you will never have visitors, it is good to have an extra plate or two, because sometimes they break.

Slicing the (Kosher) Hate Salami
Viliam · 16h

I agree, but... what would be the proper way, for an average American, to protest against the actions of Israel?

Attacking random people is obviously stupid and immoral, even the people at the embassy are innocent, boycotting is illegal, elections are rare... what would be the proper way to redirect the energy these people obviously have?

Use AI to Dimensionalize
Viliam · 18h

I wrote my piece on Dimensionalization in part to help AIs do it better.

I don't use AI frequently, and I have no idea how useful this is in practice, but I find this approach fascinating: writing one article for humans on how to use the AI, and another article for the AI on how to serve the humans.

Are LLMs being trained using LessWrong text?
Viliam · 2d

Potentially good news is that we might contribute to raising the LLM sanity waterline?

Makes me wonder, when LLMs are trained on texts not just from LW but also from Reddit, is the karma information included? That is, is upvoted content somehow considered more important than downvoted, or is it treated all the same way?

If it is all the same, maybe the datasets could be improved by removing negative-karma content?
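The karma-filtering idea above can be sketched in a few lines. This is a hypothetical illustration only: the record format and the `karma` field are assumptions, not the actual structure of any real training dataset.

```python
# Hypothetical sketch: dropping negative-karma content from a scraped
# comment dataset before using it as LLM training data.

def filter_by_karma(records, min_karma=0):
    """Keep only records whose karma meets the threshold."""
    return [r for r in records if r.get("karma", 0) >= min_karma]

comments = [
    {"text": "helpful explanation", "karma": 12},
    {"text": "spam", "karma": -5},
    {"text": "neutral remark", "karma": 0},
]

kept = filter_by_karma(comments)
print([c["text"] for c in kept])  # → ['helpful explanation', 'neutral remark']
```

A weighted variant (e.g. sampling upvoted content more often) would address the other question, of whether karma is used as an importance signal rather than a hard filter.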

Relative Utilitarianism: A Moral Framework Rooted in Relative Joy
Viliam · 2d

The problem with making someone work against their will is that it often results in sloppy work, and I suspect that it is usually unprofitable (if you include the guards' salaries in the total cost). Just consider that some people outside the jail have trouble finding a job, and they have more options and motivation.

This topic is not very important to me; I am just trying to point out what I see as obvious problems. I haven't studied it deeply.

sam's Shortform
Viliam · 2d

I don't know a standard name. I call it "fallacy of the revealed preferences", because these situations have in common "you do X, someone concludes that X is what you actually wanted because that's what you did, duh".

More precisely, the entire concept of "revealed preferences" is prone to the motte-and-bailey game, where the correct conclusion is "given the options and constraints that you had at the moment, you chose X", but it gets interpreted as "X is what you would freely choose even if you had no constraints". (People usually don't state it explicitly like this, they just... don't mention the constraints, or even the possibility of having constraints.)

Relative Utilitarianism: A Moral Framework Rooted in Relative Joy
Viliam · 3d

I don't think the fake punishments would work. As you said, if they are credible, then the people who protest against real punishments would protest against (credible) fake ones, too. And sooner or later the secret would leak, especially in a democratic society with free speech, and checks and balances.

I'm a firm believer that anyone can be helped

Okay, this is a point where we disagree. Some people are psychopaths, so you can't make them feel more empathy towards the victims. Some people are too stupid or impulsive, so you can't effectively reason with them about consequences; either they won't get it, or they will agree but then do it anyway. Some people are insane; they will commit crimes because a "voice of God" told them so, or because their paranoia made them think the other person wanted to hurt them, so they acted in supposed self-defense.

Sometimes the only solution is to lock the person up and throw away the key (or the cheaper version: kill them).

the ideal is to intervene before people consider crime an option. To get them mentally healthy and to find them a productive place in society

I agree. Of course, actually doing this is complicated for various reasons. One of the reasons is that some people profit from achieving the opposite, e.g. drug dealers make money by ruining other people's health, or entrepreneurs save money when more people are unemployed. So you would meet all kinds of opposition.

When Machines Do Our Jobs, Will We Remember How to Live?
Viliam · 3d

I wonder whether there is some non-obvious signal involved in claiming that your work is the meaning of your life. A possible hypothesis: "the rich do what they want, the poor do what they must". The more money you have, the less you are constrained when choosing your job -- you can afford to choose based on what you like, as opposed to having to take whatever allows you to survive. The extreme case would be a trust fund kid, who can have the job tailored to his or her hobbies. (Maybe not literally "Playing Minecraft LLC", but at least something related; for example producing or distributing computer games.)

Posts

- Viliam's Shortform · 8 karma · 5y · 207 comments
- Learned helplessness about "teaching to the test" · 29 karma · 20d · 15 comments
- [Book Translation] Three Days in Dwarfland · 27 karma · 2mo · 6 comments
- The first AI war will be in your computer · 43 karma · 3mo · 10 comments
- Two hemispheres - I do not think it means what you think it means · 109 karma · 5mo · 21 comments
- Trying to be rational for the wrong reasons · 26 karma · 10mo · 9 comments
- How unusual is the fact that there is no AI monopoly? [Question] · 32 karma · 11mo · 15 comments
- An anti-inductive sequence · 37 karma · 11mo · 10 comments
- Some comments on intelligence · 30 karma · 1y · 5 comments
- Evaporation of improvements · 29 karma · 1y · 27 comments
- How to find translations of a book? [Question] · 9 karma · 1y · 8 comments