MSRayne

I'm a very confused person trying to become less confused. My history as a New Age mystic still colors everything I think even though I'm striving for rationality nowadays. Here's my backstory if you're interested.

Wiki Contributions

Comments

Ooh! I don't know much about the theory of reinforcement learning, could you explain that more / point me to references? (Also, this feels like it relates to the real reason for the time-value of money: money you supposedly will get in the future always has a less than 100% chance of actually reaching you, and is thus less valuable than money you have now.)
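The parent comment's point can be sketched numerically: a per-step chance that a future payment never arrives behaves exactly like a discount factor (the γ of reinforcement learning). A toy illustration, with a made-up 2% per-step failure rate, not a claim about any particular formalism:

```python
def present_value(amount, steps, survival_per_step=0.98):
    """Expected value today of a payment promised `steps` steps from now,
    when each step carries an independent chance the payment is lost.
    survival_per_step plays the role of an RL discount factor gamma."""
    return amount * survival_per_step ** steps

print(present_value(100, 0))   # 100.0 -- money in hand is worth its face value
print(present_value(100, 10))  # ~81.7 -- ten risky steps away
print(present_value(100, 50))  # ~36.4 -- far-future money is worth much less
```

The further away the payment, the more chances it has to fail to reach you, so its expected value decays geometrically, which is exactly the shape of both RL discounting and financial present-value formulas.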

It seems to me that the optimal schedule by which to use up your slack / resources is based on risk. When planning for the future, there's always the possibility that some unknown unknown interferes. When maximizing the total Intrinsically Good Stuff you get to do, you have to take into account timelines where all the ants' planning is for nought and the grasshopper actually has the right idea. It doesn't seem right to ever have zero credence in this (that would mean being totally certain that the project of saving up resources for the cosmic winter will go perfectly smoothly, and we can't be certain about something that will literally take trillions of years). So it's actually optimal to always put some of your resources into living for right now, in proportion to that uncertainty about the project's success.
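A toy model of that trade-off (my framing, under assumed log utility and all-or-nothing project success, so the numbers are illustrative only): if the long-term project succeeds with probability p, and you enjoy a fraction f of your resources now, expected log utility is ln(f) + p·ln(1−f), which is maximized at f = 1/(1+p).

```python
import math

def best_split(p_success, grid=10_000):
    """Fraction of resources to enjoy now, maximizing expected log utility
    U(f) = ln(f) + p_success * ln(1 - f), found by brute-force grid search.
    (Toy assumptions: log utility, all-or-nothing project success.)"""
    best_f, best_u = None, -math.inf
    for i in range(1, grid):
        f = i / grid
        u = math.log(f) + p_success * math.log(1 - f)
        if u > best_u:
            best_f, best_u = f, u
    return best_f

# The "spend now" share rises as confidence in the project falls,
# and never drops to zero even at near-certainty of success.
for p in (0.99, 0.9, 0.5, 0.1):
    print(p, best_split(p))
```

The closed-form optimum f = 1/(1+p) matches the grasshopper intuition: even at 99% confidence in the long-term project, this toy model still allocates roughly half to the present, and the present's share grows as p shrinks.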

computers have no consciousness

Um... citation please?

1. Who are the customers actually buying all these products so that the auto-corporations can profit? They cannot keep their soulless economy going without someone to sell to, and if it's other AIs, why are those AIs buying when they can't actually use the products themselves?

2. What happened to the largest industry in developed countries, the service industry, which fundamentally relies on having an actual sophont customer to serve? (And again, if it's AIs, who the hell created AIs that exist solely to receive services they cannot actually enjoy, and how did they make money by doing that?)

3. Why didn't shareholders divest from auto-corporations upon realizing that they were likely to lead to ruin? (Don't say "they didn't realize it till too late", because you, right now, know it's a bad idea, and you don't even have money on the line.)

I ask these because, to be honest, I think this scenario is extremely far-fetched and unlikely. In my mental model, the worst thing that would happen if auto-corporations became a thing is that existing economic inequalities would be permanently exacerbated by the insane wealth accrued by their shareholders - because the only currently likely route to AGI, large language models, already understand what we actually mean when we say to maximize shareholder value. They won't paperclip-maximize, because they're not stupid, and they use language the same way humans do.

I've never had a job in my life - yes really, I've had a rather strange life so far, it's complicated - but for years I've been reading and thinking about topics which I now know are related to operations, trying to design (in my head...) a system for distributing the work of managing a complex organization across a totally decentralized group, so that no one is in charge, with the aid of AI and a social-media-esque interface. (I've never actually built the thing, because I keep finding new things I need to know, and I'm not a software engineer, just a designer.)

So, I think I have some parts of the requisite skillset here, and a ton of intuition about how to run systems efficiently built up from all the independent studying I've done - but absolutely no prior experience with basically anything in reality, except happening to (I believe) have the right personality for operations work. Should I bother applying?

I don't know what to think about all that. I don't know how to determine what the line is between having qualia and not. I just feel certain that any organism with a brain sufficiently similar to those of humans - certainly all mammals, birds, reptiles, fish, cephalopods, and arthropods - has some sort of internal experience. I'm less sure about things like jellyfish and the like. I suppose the intuition probably comes from the fact that the entities I mentioned seem to actively orient themselves in the world, but it's hard to say.

I don't feel comfortable speculating which AIs have qualia, or if any do at all - I am not convinced of functionalism and suspect that consciousness has something to do with the physical substrate, primarily because I can't imagine how consciousness can be subjectively continuous (one of its most fundamental traits in my experience!) in the absence of a continuously inhabited brain (rather than being a program that can be loaded in and out of anything, and copied endlessly many times, with no fixed temporal relation between subjective moments.)

I don't know anything about Colab, other than that the Colab notebooks I've found online take a ridiculously long time to load, often have mysterious errors, and annoy the hell out of me. I don't know enough AI-related coding to use it on my own. I just want something plug-and-play, which is why I mainly rely on KoboldAI, Open Assistant, etc.

We're not talking about sapience though, we're talking about sentience. Why does the ability to think have any moral relevance? Only possessing qualia, being able to suffer or have joy, is relevant, and most animals likely possess that. I don't understand the distinctions you're making in your other comment. There is one, binary distinction that matters: is there something it is like to be this thing, or is there not? If yes, its life is sacred, if no, it is an inanimate object. The line seems absolutely clear to me. Eating fish or shrimp is bad for the same reasons that eating cows or humans is. They are all on the exact same moral level to me. The only meaningful dimension of variation is how complex their qualia are - I'd rather eat entities with less complex qualia over those with more, if I have to choose. But I don't think the differences are that strong.

Just to be That Guy, I'd like to remind everyone that animal sentience means that vegetarianism at the very least (and, because of the intertwined nature of the dairy, egg, and meat industries, most likely veganism) is a moral imperative, to the extent that your ethical values incorporate sentience at all. I'd go further and say that uplifting to sophonce those animals that we can, once we're able to at some future time, is also a moral imperative - but that relies on reasoning and values I hold that may not be self-evident to others, such as that increasing the agency of an entity that isn't drastically misaligned with other entities is fundamentally good.

Most Wikipedia readers spend less than a minute on a page?? I always read pages all the way through... even if they're about something that doesn't interest me much...
