deepthoughtlife

Jan Czechowski's Shortform

Playing in the morning before work runs the risk of making you late to work a bit more often than otherwise, especially since you tend to play longer than planned. Five more minutes, a couple of times, might make you ten minutes late to work, so you didn't really have a good reason to stop that first time... and by the time you really needed to stop, it was habit.

What does knowing the heritability of a trait tell me in practice?

The commenter you are replying to is right. Heritability shows how much of the current variation is genetic in origin, which tells you whether the variability of nurture matters within that particular culture. If you want to intervene successfully, it needs to be either on something that is not very heritable, or with an intervention that is not already a significant part of the current landscape of the society. (The fact that out-of-context interventions may exist means heritability isn't a measure of how genetic a trait is, but that wasn't Sleeps Furiously's point.)

Edit: Note, Sleeps Furiously had not replied to spkoc when I saw this, and I wrote my response without refreshing.

When Arguing Definitions is Arguing Decisions

I am not imputing that you believe these things personally, only that this argument implies them. People often make arguments that aren't a particularly good match for what they believe, even when completely sincere and careful. (If I thought you really paid no attention to words, and thought they were useless, I would have expected you not to have written it at all.) As I stated in my post, I believe that most people are already on the side of being unwilling to discuss definitions much, and that this argument is a philosophical underpinning for trying to reject such discussion utterly. They dismiss it as 'semantics', as if the meaning of things is unimportant. If you don't like an implication of what you are writing, I do believe you need to discuss that implication directly where possible, to avoid endorsing a position you do not hold. (I hold this position much more strongly for written-out essays.)

The most important part of my post was a single sentence where I stated that advice needs to be tailored to where people currently are (which, in this case, is mostly already rejecting defining things, even when that is the useful bit). The label (I prefer to say definition) is not a decision rule. Decision rules were carefully affixed to labels for usefulness, but if you don't like the rule, argue against the rule, not that the definition is somehow bad to discuss.

Any argument you can make about an object (including concepts) requires a shared way to reference it, which is always a definition. Leaving definitions unexamined means a lack of communication, even when you think you are having a conversation. Discussing which cluster you are referring to in shorthand is necessary to get anywhere.

Since your advice goes too far, I have to point out that it is indeed too far (supposing I reply at all, of course). It should be noted that I find the 'Doomsday Button' argument completely unconvincing, and think it is purely imagery rather than careful argumentation.

I have read the sequence you link in a following comment, but it was probably well over a decade ago, and I don't remember it well. He had a lot of good points, but a lot of duds too. I looked again at one of the articles in it, "Arguing by Definition", and found it one of his less inspired works. The reason someone adds 'by definition' to a statement is usually to point out that their opponent is refusing to pay attention to an important facet of the thing, the exact opposite of what Eliezer says it is doing. People often claim that you can't argue by definition when they are trying to sneak by without admitting your argument applies to them (though also sometimes because they are just sick and tired of having to talk about definitions, which can be exhausting).

[AN #157]: Measuring misalignment in the technology underlying Copilot

Perhaps I was unclear. I object to getting attached to any ideas now, not to thinking about them. People being people, they are much more prone to getting attached to their ideas than is wise. Understand before risking attachment.

The problem with AI governance is that AI is a mix of completely novel abilities and things humans have been doing as long as there have been humans. The latter don't need special 'AI governance', and the former are not understood.

(It should be noted that I am absolutely certain that AI will not take off quickly, if it ever does take off beyond human limits.)

The Asilomar conference isn't something I'm particularly familiar with, but it sounds like people actually had significant hands-on experience with the technology and understood it already. They stopped the experiments because they needed the clarity, not because someone else had made rules earlier. There are no details as to whether they did a good job, and the recommendations seem very generic. Of course, it is Wikipedia. We are not at this point with nontrivial AI. Needless to say, I don't think this is against my point.

A Contamination Theory of the Obesity Epidemic

I disagree only in that I don't think the amount of fiber is sufficient to make up for it.

A Contamination Theory of the Obesity Epidemic

I was not stating that I believe a whole foods diet won't be helpful for many people, just pointing out that not all whole foods are good if you need to lose weight. Most diets work a little, and whole foods is one people find easy to understand (and, I suspect, to live with). It isn't just better than nothing; it could genuinely be useful.

I am implying that adding fruit to a diet is not helpful whatsoever for weight (unless you want to gain weight and just need more calories). Fruit makes many people much hungrier due to very high sugar and general carb counts, and causes both physical and psychological cravings, while not providing the fats and proteins people need to stop craving food. I do not know of anyone who has tried a fruit-only diet (which would be very stupid), so I can't say I have evidence that they would be fat if eating only fruit.

I do agree with you that the minimal extra effort to prepare fruit for eating does often help reduce the amount eaten, but I would say this works much better for people who don't have significant physiological cravings to eat. If you are normal weight and healthy, it isn't that bad to eat fruit once in a while, just like a cookie or two won't hurt you. For people who actually have trouble due to overeating, fruit is still very binge-able. (Fruit cravings are definitely something I've seen a lot of in the obese people I know.)

Minimally processed meats and most vegetables are not prone to fattening people, while I believe certain nuts (like cashews) are. Cashews are not particularly satiating (notably, the body only finds saturated fats satiating, not unsaturated), and they do not fill the stomach either. For the same (high) number of calories, it would be vastly harder to eat that much in meat than in cashews, even if you like meat more. I have nothing against fat being part of the diet, but cashews just don't work that well.

edit: moved a paragraph, changed the spelling of a word

A Contamination Theory of the Obesity Epidemic

People get fat eating fruits, which are obviously 'natural', and are basically candy. A lot of natural foods fail on the actual criteria (cashews also fail, for instance). Does it fill you up? Does it make you not want to keep eating (physical signals)? Are you satisfied enough (psychologically) to be done? Did it give you the nutrients you need (both for your health and to prevent cravings)? Is all of this accomplished before you have eaten too many calories?

Can I teach myself scientific creativity?

My guess is that creativity isn't really something you need to focus on to get good results. You have your own personal point of view. Follow it to its natural conclusion within the area you care about. You are probably in a place no one else has quite gone (though someone has likely come close). Don't check if it is original, just if it is a good idea. If it is a good idea, do it (as well as possible). What did you learn? Follow that to the new natural conclusion. Repeat. Pretty soon, you'll be in some far-off corner of possibility space. Only then, check how your work fits with others'. It would be truly bizarre if you ended up where everyone else is. If you did, show that. People will be intrigued about why it is such a natural place to end up [and your journey there was likely novel, or at least close enough].

I like your analogies to songwriting, and I think it is completely obvious that tying the learning explicitly to the creating will get subpar results. Intuition works best with a lot of relevant (understood) information, but not so well with explicit rules.

When Arguing Definitions is Arguing Decisions

I have a serious problem with this sort of advice. Advice should be tailored to where people already are, and I think most people already reject trying to define words much too readily. Thus this advice helps very few people, except those looking to sit back and not listen. Labels are actually important, and we need them to think any kind of complex thought. The correct labels make it easier. The wrong labels make it much harder. Rejecting the entire process because it can go wrong is much less useful than pointing out when it does and doesn't help to do it.

When you read what someone writes on a forum, you always have to consider whether it is worth your time. Is there a good point? Good is a label. Are they a troll? Troll is a label. You should be able to inform your friends that someone is good, or a troll, or even a good troll. Also, sometimes a person is just a liar, and informing people about that is useful, even if the person being informed is just you.

Trying to figure out exactly which parts are the core of the disagreement is useful, but discarding the entire concept just because subsets of the concept are also useful is not a good idea. (Also, it is obvious in the dialogue who is right in the dispute. It is useful to everyone to know 'Blair' is likely to lie about how you did on the project, and is thus useless to ask. Convincing Blair not to lie could be useful, but is likely not worth the effort required, and simply noting his untrustworthiness using 'liar' is much faster. Just because the argument itself won't be useful doesn't mean the idea of labeling things isn't.)

I find it disappointing that you start out showing that labels are useful, and then you seem to promptly forget it just because they can be disputed, which is exactly why arguing definitions is useful (occasionally, with people doing it in good faith). Is every 'gru' a 'leck'? Suppose everyone knows facts A, C, and Z about grus, while lecks are definitely A, B, and Z. Every gru I've seen is also B. Is B true of all grus? If grus are also lecks, then definitely. Should I not then prove that grus are lecks (supposing I can)? Suppose someone can then point out a mistake in my definition. Should they not do that? Some of this might even just be deciding which examples of things similar to a gru should or should not fall under that label. All of this is arguing definitions, and obviously useful (assuming anyone cares about B in the context of grus).

[AN #157]: Measuring misalignment in the technology underlying Copilot

I am unlikely to post on the EA forum. (I only recently started posting much here, and I find most of EA rather unconvincing, aside from the one-sentence summary, which is obviously a good thing.) Considering my negativity toward long-termism, I'm glad you decided more on the productive side for your response. My response is a bit long; I didn't manage to get what I was trying to say down when it was shorter. Feel free to ignore it.

I will state that all of that is AI safety. Even the safety of the AI is determined by the overarching world in which it is acting. A perfectly well controlled AI is unsafe if the regulations followed by defense-bot-3000 state that all rebels must be ended, and everyone matches the definition of a rebel. The people who built defense-bot-3000 probably didn't intend to end humanity because a human law said to. Likewise, they probably didn't mean for defense-bot-4000 to stand by and let it happen because the 4000 version requires a human in the loop, and defense-bot-3000 made sure to kill those in charge of defense-bot-4000 at the start for its instrumental value.

Should a police bot let criminals it can prove are guilty run free, because their actions are justified in this instance? Should a facial recognition system point out that it has determined that the new intern matches a spy for the government of that country? Should people be informed that a certain robot is malfunctioning, and likely to destroy an important machine in a hospital [when that means the people will simply destroy the sapient robot, but if the machine is destroyed people might die]? These are moral and legal governance questions that are also clearly AI safety questions.

I'd like to compare it to computer science, where we know seemingly very disparate things are theoretically identical, such as iteration versus recursion, or hardware versus software. Regulation internal to the AI is the narrow construal of AI safety, while regulation external to it is governance. (Whether this regulation is on people or on the AI directly can be an important distinction, but either way it is still governance.)
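To make the iteration/recursion part of the analogy concrete, here is a minimal sketch (a factorial example of my own choosing, not something from the original comment): two Python functions that look structurally different but compute exactly the same function, which is the sense of "theoretically identical" used above.

```python
# Two implementations of factorial: the same mathematical function,
# expressed in two seemingly disparate but equivalent ways.

def factorial_iterative(n: int) -> int:
    """Compute n! with a loop (iteration)."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n: int) -> int:
    """Compute n! by calling itself (recursion)."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Both yield identical results for every input checked here.
assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(10))
```

The parallel being drawn is that a constraint built into the AI and the same constraint imposed from its environment are, like these two functions, different framings of the same underlying thing.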

Governance is thus actually a subset of AI safety broadly construed. And it should be broadly construed, since there is no difference between an inbuilt part of the AI and a part of the environment it is being used in if they lead to the same actions.

That wasn't actually my point, though. Whether or not you call it AI safety isn't important. You want to make it safe to have AI in use in society through regulation and cultural action. If you don't understand AI, your regulation and cultural bits will be wrong. You do not currently understand AI, especially what effects it will actually have in dealing with people [since sufficient AIs don't exist to get data, and current approaches are not well understood in terms of why they do what they do].

Human culture has been massively changed by computers, the internet, cellphones, and so on. If I were older, I'd have a much longer list. If [and this is a big if] AI turns out to be that big of a thing, you can't know what it will look like at this stage. That's why you have to wait to find out [while trying to figure out what it will actually do]. If AI turns out to mostly be good at tutoring people, you need completely different regulation than if it turns out to only be good at war, and both are very different than if it is good at a wide variety of things.

Questions of human society rest on two things. First, what are people truly like on the inside? We aren't good at figuring that out, but we have documented several thousand years of trying, and we're starting to learn. Second, what is the particular culture like? Actual human-level AI would massively change all of our cultures, to fit or banish the contours of the actual and perceived effects of the devices. (Also, what are the AIs like on the inside? What are their natures? What cultures would spring up amongst different AIs?)
