AnthonyC

Edit to add: Just thinking about the converse, you could also make it sound more ridiculous by rewriting it with more obscure parts of the legendarium, too.

Conquer Morgoth with Ungoliant. Turn Maiar into balrogs. Glamdring among the morgul-blades.

I would assume that his children in particular would be quite familiar with their usage, though, and that seems to be who a lot of the legendarium-heavy letters are written to.

I also think that it sounds at least slightly less ridiculous to rewrite that passage in the language of Star Wars rather than Starcraft. Conquer the Emperor with the Dark Side. Turn Jedi into Sith. An X-Wing among the TIE fighters. Probably because it's more culturally established, with a more deeply developed mythos.

How does this interact with MA's salary transparency laws? If you are in a role where no one else shares your title, then no problem. Otherwise, this could enable an employer to pressure others to take pay cuts or smaller raises, or it could force them to quote prospective new employees a much lower bottom of the salary range for the role they're applying to.

To the first objection: To the extent that AGI participates in status games with humans, they will win. They'll be better at it than we are. You could technically have a gang of toddlers play basketball with an NBA all star team, but I don't think you can really say they're competing with each other, or that they're both playing the same game in the sense the people talking about status games mean it.

To the second objection: It is not at all clear to me whether any biological intelligence augmentation path puts humans on a level playing field with AI systems, except insofar as we're talking about a limited subset of AI systems. At which point, how similar do they need to be before we deem the AIs to count as essentially human for this purpose, or the upgraded humans to not count as human? I don't find the semantic questions all that interesting in themselves, but I also don't think it would be interesting or enjoyable to play games with a fully digital version of a person with 1000x more compute than me. 

If we want future humans to face games and challenges they find meaningful, that puts extra constraints on what kinds of entities we can have them competing against.

FWIW I am mostly uninterested in human status games now, and don't anticipate that changing much in the future. I really don't like this vision of the future of humanity. It's not the worst, but I think we can do much better. We just have to be more creative in our understanding of what makes something meaningful.

FWIW I think it probably would be, between those two. Land and houses are different, even if we usually buy them together. When I bought my first house, the appraisal included separate line items for the house vs the land it was on, and the land was a majority of the price I was paying. I don't know what the OP actually meant, but to my own thinking, owning land (in the limit of advanced technology making everything buildable and extractable) means owning some fixed share of Earth's total supply of energy, water, air, and minerals. Building a house, given the space and materials, might become arbitrarily cheap in the future through automation, but competition for space and materials might become arbitrarily intense as the number of competing uses for them increases. Depends a lot on what order things happen in, and whether property rights remain a meaningful concept at all.

Right now, for a few hundred thousand dollars, you can build a home and (if you have enough land) buy enough equipment to make it self-sustaining in food, water, and energy (until things break). In the limit of high technology you can extend that to almost all needs, make the overall system indefinitely self-repairing, and increase carrying capacity of any given plot of land. You can also earn money by selling a flow of excess energy, which may be valuable even to an AI provided the AI respects your rights to same in the first place.

One pet peeve of mine is that actual weather forecasts for the public don't disambiguate interpretations of rain chance. Is it the chance of any rain at some point in that day or hour? Is it the expected proportion of that day or hour during which it will be raining?
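To make the ambiguity concrete, here is a minimal simulation sketch of how differently those two readings can play out; the 10-minute shower length and the per-minute rain model are purely my own illustrative assumptions, not how any forecast is actually produced:

```python
import random

# Toy simulation contrasting two readings of "40% chance of rain this hour".

P = 0.4            # the published "chance of rain"
MINUTES = 60
TRIALS = 100_000

# Reading 1: P is the probability of *any* rain this hour.
# Assume (my own assumption) that when it does rain, it's a brief 10-minute shower.
avg_wet_1 = sum(10 if random.random() < P else 0 for _ in range(TRIALS)) / TRIALS

# Reading 2: P is the expected *fraction* of the hour that is rainy
# (modeled here as each minute independently having a 40% chance of rain).
avg_wet_2 = sum(
    sum(random.random() < P for _ in range(MINUTES)) for _ in range(TRIALS)
) / TRIALS

print(f"Reading 1: ~{avg_wet_1:.1f} wet minutes per hour on average")   # ~4
print(f"Reading 2: ~{avg_wet_2:.1f} wet minutes per hour on average")   # ~24
```

Same headline number, very different hours: under the first reading most hours are dry end to end, while under the second you should expect to spend almost half the hour in the rain.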

I sympathize with this viewpoint, and it's hardly the worst outcome we could end up with. But, while both authors would seem to agree with a prohibition on calling up gods in a grab for power, they do so with opposite opinions about the ultimate impact of doing so. Neither offers a long-term possibility of humans retaining life, control, and freedom.

For Tolkien, I would point out first that the Elves successfully made rings free of Sauron's influence. And second, that Eru Iluvatar's existence guarantees that Sauron and Morgoth can never truly win, and at or after the Last Battle men will participate in the Second Music that perfects Arda in ways that could not have happened without Isildur's failure. You might destroy yourself and your contemporaries in Middle-earth, but not the light cone. Failures have bounded impact.

For Lovecraft, yes, calling up outer gods is always a terrible idea, but even if you don't and you stop anyone else who wants to, they'll eventually destroy everything you care about anyway just because they can exist at all. The mythos doesn't provide a pathway to truly avoiding the danger. Successes have bounded impact.

I suppose, but 1) there has been no build-up/tolerance, the effects from a given dose have been stable, 2) there are no cravings for it or anything like that, 3) I've never had anything like withdrawal symptoms when I've missed a dose, other than a reversion to how I was for the years before I started taking it at all. What would a chemical dependency actually mean in this context?

My depression symptoms centered on dulled emotions and senses, and slowed thinking. This came on gradually over about 10 years, followed by about 2 years of therapy with little to no improvement before starting meds.

When I said that for me the effects kicked in sharply, I meant that on day three after starting the drug, all of a sudden while I was in the shower my vision got sharper, colors got brighter, I could feel water and heat on my skin more intensely, and I regained my sense of smell after having been nearly anosmic for years. I immediately tested that by smelling a jar of peanut butter and started to cry, after not crying over anything for close to 10 years. Food tasted better, and my family immediately noticed I was cooking better because I judged seasonings more accurately. I started unconsciously humming and singing to myself. My gait got bouncier like it had been once upon a time before my depression all started. There was about a week of random euphoria after which things stayed stable.

Over the first few months, if I missed my dose by even a few hours, or if I was otherwise physically or emotionally drained, I would suddenly become like a zombie again. My face went slack, my eyes glazed over, my voice lost any kind of affect, my reactions slowed down dramatically. By suddenly, I mean it would happen mid-conversation, between sentences. These events decreased to 1-2x/month on an increased dose, and went away entirely a few years later upon increasing my dose again. I have also, thankfully, had no noticeable side effects. Obviously a lot of other things have happened in 6 years, many quite relevant, that I don't feel like getting into here, but those are mostly related to me regaining the ability to build capacity to actually live my life.

Yes, it is theoretically possible a placebo could have done that. I don't think it is plausible, though, or that any study could convince me otherwise (maybe I should say, any study that did not include me; even then I'm not sure what a study on me-now could entail that would be convincing).

I do realize my experiences on these meds are atypical, my depression presented somewhat unusually, and SNRIs are not SSRIs. I got extremely lucky. But that was kind of my point in my original comment.

I'm sure they would. And some of the ways an ASI could help would include becoming immortal, replacing relationships with other humans, things like that. But compared to an ASI, it is easier for a human to die, to have their mind changed by outside influences, and to place intrinsic value on the whole range of things humans care about, including other people.

I don't have nearly enough information to have an opinion on that.
