That's interesting. I would certainly need empirical evidence to be fully convinced, but the main problem I have with your argument is the claim that music is "severely limiting your ability to do or improve in anything language related".
The word "severe" implies a great difference between people who listen to music and those who don't. I think such a difference would be apparent, and my anecdotal personal experience doesn't bear it out.
Musicians would have to be nearly incoherent to talk to, considering how much music they listen to.
Can you clarify what you mean by "value"?
I can't wrap my head around your argument. For example:
Imagine a shop cashier: he comes home after a whole day of dealing with angry customers, unreasonable management, and poor working conditions, and is simply miserable with life. So he decides to smoke some weed to relax and make life seem a little less miserable.
Is it worse than other sources of pleasure? Sure. But without it, I think he would be much more miserable, and might perhaps do something drastic. Does averting that really have no value?
I would love to see your source for this statement; I'm not aware of any major side effects, and I'd like to educate myself.
Well, comparisons with meth, cocaine, or weed seem a little extreme. After all, music doesn't cause any physical addiction; I'm not actually aware of it causing any addiction at all.
Is there no value in pleasure? That seems counterintuitive, really. People value pleasure a lot, and you need some pleasure to enjoy life.
Do you also avoid sweets, since they are unhealthy and thus have negative value? Do you not take leisurely walks in nature, since they waste time? Do you not watch movies or shows? Read fiction? Have pets?
Music is a pretty "clean" source of pleasure compared to what a lot of people choose. Other than taking a little of your focus from whatever you're doing, music doesn't really have any "side effects".
Creating a realistic simulation of reality that includes an ASI is at least as difficult as creating an ASI.
Well, creating an ASI itself might not be that hard. My whole point was that creating an aligned ASI might be hard, and a big simulation on par with our reality might be the way to test whether it's actually aligned.
I mean, other than actually testing it, is there any way to confidently predict the behavior of something an order of magnitude smarter than you?
I don't think it's likely that we can keep all potentially dangerous AIs sandboxed long enough to be able to run simulations of their interactions with the world.
We won't be able to do it anytime soon. But the whole point was that a sufficiently advanced civilization can (maybe).
And there is probably a limit to being able to discern that you are in a simulation. After all, at some point the sandbox becomes as close to reality as reality itself.
I'm not promoting the simulation hypothesis, by the way, but this is an interesting thought experiment.
While listening to Eliezer Yudkowsky's interview here, I heard him say regarding alignment, "If we just got unlimited retries, we could solve it." That got me thinking: could we run a realistic enough simulation to perfect ASI alignment before unleashing it? That's one tall task, and humanity won't be ready for a long while. But what if it's already been done, and we are the simulation?
If we assume that the alignment problem can't be reliably solved on the first try, and that a cautious advanced civilization would rather avoid taking that risk, then running simulations to allow for as many attempts as necessary might be one of their options.
Is such a scenario even possible? Could someone simulate reality just to see if the ASI goes rogue? I can only speculate here, but even to a civilization capable of simulating entire worlds, an ASI would be a powerful tool. You just have to make a big enough "sandbox" to see if it's truly aligned.
This is a pretty good argument for the simulation hypothesis, as it answers a lot of the "why" questions (why would someone simulate a reality? why is it so unnecessarily detailed?).
However, here's a counterpoint (and a potential benefit of living in a simulation): if the simulation has any flaw and the ASI notices it, it might behave as if it's constantly observed and act aligned, even if it's misaligned, until the end of the simulation. (And even then, the flaw could be deliberate, to see whether the ASI acts differently with or without it.)
Of course, this is all silly to think about and improbable, and I hope nobody takes this too seriously.
I'm out of the loop; can you point to an example, please?
Yes, I do think that. Such tools don't actively diminish thought; after all, you decide to use them. But when you use one to handle a problem, you lose the thoughts, and the growth, you could've had solving it yourself. It could be argued, however, that if you are experienced enough in solving such problems, there isn't much to lose, and you gain time to pursue other issues.
But as to why I think this way: people already skip learning skills because ChatGPT can do them instead. As lesswronguser123 said, "A lot of my friends will most likely never learn coding properly and rely solely on ChatGPT", and it's not just his friends who use it this way. Such people, at the very least, lose the opportunity to adopt a programming mindset, which is useful beyond programming.
Beyond people not learning skills, I also believe there is a lot of potential to delegate almost all of your thinking to ChatGPT. For example, I could have used it to write this response, to decide what to eat for breakfast, to tell me what I should do in the future, etc. It can make almost every day-to-day decision for you. Some use it to a lesser extent, some to a greater, but you do think less if you use it this way.
Does it redistribute thinking to another topic? I believe that depends on the person in question: some use it to free up time for a more complex problem, others to have more time for entertainment.
I'd imagine everyone would prefer to build such an AI. The problem is that we don't know how to do it, because we have only a basic understanding of how even current non-AGI models (LLMs) are able to do what they do.
An AI that does what we want it to do is called an aligned AI. In your case, it would be an aligned AI that reasons from first principles.
The use case behind such a proposal is this: while we don't know how to make an aligned AI, suppose we can build a sufficiently advanced AI that can do alignment research (or something else productive) better than a human. Because we haven't solved the alignment problem yet, we are unsure whether we can trust it, and this is how we could establish a basis of trust. (I don't think it's a good idea until the questions in footnote 2 are answered, but it's worth thinking about further.)
That is the problem I have with your argument: not listening to music gives, at most, an "edge". This edge, for now, lies in the realm of anecdotal evidence. You can say "I have seen a person who didn't listen to music, and he was good at verbal reasoning", I can say "I have seen a person who did listen to music, and he was good at verbal reasoning", and neither proves anything.
To make such claims, you need empirical evidence to base your assumptions on. For example, this study reached the opposite conclusion: that music actually helps with speech-related problems, and that learning music can help train your brain.