That's interesting. I would certainly need empirical evidence to be fully convinced, but the main problem I have with your argument is the claim that music is "severely limiting your ability to do or improve in anything language related".
The word "severely" implies a great difference between people who listen to music and those who don't. Such a difference should be apparent, and my (admittedly anecdotal) personal experience doesn't reflect it.
Musicians would have to be nearly incoherent to talk to, considering how much music they listen to.
Can you clarify what you mean by "value"?
I can't wrap my head around your argument. For example:
Imagine a shop cashier who comes home after a whole day of dealing with angry customers, unreasonable management, and poor working conditions, and who is simply miserable with life. So he decides to smoke some weed to relax and make life seem a little less miserable.
Is it worse than other sources of pleasure? Sure. But does it have no value? Without it, I think he would be much more miserable, and might do something drastic. Does preventing that have no value?
I would love to see your source for this statement. I'm not aware of any major side effects and would like to educate myself.
Well, comparisons with meth, cocaine, or weed seem a little extreme. After all, music doesn't cause any physical addiction; I'm not actually aware of music causing any addiction at all.
Is there no value in pleasure? That seems counterintuitive. People value pleasure a lot, and you need some pleasure to enjoy life.
Do you also avoid sweets, since they are unhealthy and thus have negative value? Do you not take leisurely walks in nature, since they waste time? Do you not watch movies or shows? Read fiction? Have pets?
Music is a pretty "clean" source of pleasure compared to what a lot of people choose. Other than demanding a little of your focus, it doesn't really have any "side effects".
Creating a realistic simulation of reality that includes an ASI is at least as difficult as creating an ASI.
Well, creating an ASI might not be that hard. My whole point was that creating an aligned ASI might be hard, and that a big simulation on par with our reality might be the way to test whether it's actually aligned.
I mean, other than actually testing it, is there any way to confidently predict the behavior of something an order of magnitude smarter than you?
I don't think it's likely that we can keep all potentially dangerous AIs sandboxed long enough to be able to run simulations of their interactions with the world.
We won't be able to do it, …
While listening to Eliezer Yudkowsky's interview here, I heard him say regarding alignment, "If we just got unlimited retries, we could solve it." That got me thinking: could we run a realistic enough simulation to perfect ASI alignment before unleashing it? That's a tall task, and humanity won't be ready for a long while. But what if it has already been done, and we are the simulation?
If we assume that the alignment problem can't be reliably solved on the first try, and that a cautious advanced civilization would rather avoid taking that risk, then running simulations to allow for as many attempts as necessary might be one of their options.
I'm out of the loop; can you point me to an example, please?
Yes, I do think that. They don't actively diminish thought; after all, it's a tool you decide to use. But when you use it to handle a problem, you lose the thoughts and the growth you could have had solving it yourself. It could be argued, however, that if you are experienced enough in solving such problems, there isn't much to lose, and you gain time to pursue other issues.
As to why I think this way: people already don't learn skills because ChatGPT can do them instead. As lesswronguser123 said, "A lot of my friends will most likely never learn coding properly and rely solely on ChatGPT", and not just his…
I'd imagine everyone would prefer to build such an AI. The problem is that we don't know how to do it, because we have only a basic understanding of how even current non-AGI models (LLMs) are able to do what they do.
An AI that does what we want it to do is called an aligned AI. In your case it would be an aligned AI that reasons from first principles.
The use case behind such a proposal is this: while we don't know how to make an aligned AI, suppose we can build a sufficiently advanced AI that can actually do alignment research (or something else productive) better than a human. Because we haven't solved the alignment problem yet, we are unsure whether we can trust it. This is how we could establish a basis of trust. (I don't think it's a good idea until the questions in footnote 2 are answered, but it's worth thinking about further.)
That is the problem I have with your argument: not listening to music only gives an "edge", and this edge, for now, lies in the realm of anecdotal evidence. You can say "I have seen a person who didn't listen to music and was good at verbal reasoning", and I can say "I have seen a person who did listen to music and was good at verbal reasoning", and this won't really prove anything.
To make such claims, you need empirical evidence to base your assumptions on. For example, this study reached the opposite conclusion: music actually helps with speech-related problems, and learning music can help train your brain.