All of Alex Beyman's Comments + Replies

Ah yes, the age-old struggle: "Don't listen to them, listen to me!" In Deuteronomy 4:2 Moses declares, “You shall not add to the word which I am commanding you, nor take away from it, that you may keep the commands of the Lord your God which I command you.” And yet, we still saw Christianity, Islam and Mormonism follow it.

A conspiracy theory about Jeffrey Epstein has 264 votes currently:

How commonly are arguments on LessWrong aimed at specific users? Sometimes, certainly. But it seems the rule, rather than the exception, that articles here dissect commonly encountered lines of thought, absent any attribution. Are they targeting "someone not in the room"? Do we need to put a face to every position?

By the by, "They're making cognitive errors" is an insultingly reductive way to characterize, for instance, the examination of value hierarchies and how awareness of them, versus unawareness, influences both our reasoning and our appraisal of our fellow man's morals.

The majority of such complaints that do well on LW are in reference to users or discussions on LW or related groups. Not always a specific individual, but often a specific set of posts or comment patterns. There are exceptions, where someone complains about some ideas in an op-ed or on Twitter, but those tend to get downvoted unless they're truly pervasive. And even so, if they're not steelmanning the opposition, or pointing out some interesting pattern or reasoning for the disagreement, they tend to do poorly here.

When I tried, it didn't work. I don't know why. I agree with the premise of your article, having noticed that phenomenon in journalism myself before. I suppose when I say truth, I don't mean the same thing you do, because it's selective and with dishonest intent. 

"Saying you put the value of truth above your value of morality on your list of values is analogous to saying you put your moral of truth above your moral of values; it's like saying bananas are more fruity to you than fruits."

I'm not sure if I understand your meaning here. Do you mean that truth and morality are one and the same, or that one is a subset of the other?

"Where does non-misleadingness fall on your list of supposedly amoral values such as truth and morality? Is non-misleadingness higher than truth or lower?"

Surely to be truthful is to be non-misleading...?

You can quote text using a greater-than sign (>) and a space. Read the linked post; this is not so. You can mislead with the truth: you can speak a wholly true collection of facts that nonetheless misleads people. And if someone misleads using a fully true collection of facts, saying they spoke untruthfully is confusing. Truth does not always lead to good inferences; truth does not have to be convenient, as you say in the OP. Truth can make you infer falsehoods.

>"Perhaps AIs would treat humans like humans currently treat wildlife and insects, and we will live mostly separate lives, with the AI polluting our habitat and occasionally demolishing a city to make room for its infrastructure, etc."

Planetary surfaces are actually not a great habitat for AI. Earth in particular has a lot of moisture, weather, ice, mud, etc. that poses challenges for mechanical self replication. The asteroid belt is much more ideal. I hope this will mean AI and human habitats won't overlap, and that AI would not want the Earth's minerals simply because the same minerals are available without the difficulty of entering/exiting powerful gravity wells.

I suppose I was assuming non-wrapper AI, and should have specified that. The premise is that we've created an authentically conscious AI.

Humans are not wrapper-minds.

Aren't we? In fact, doesn't evolution consistently produce minds which optimize for survival and reproduction? Sure, we're able to overcome mortal anxiety long enough to commit suicide. But survival and reproduction is a strongly enough ingrained instinctual goal that we're still here to talk about it, 3 billion years on.

Bad according to whose priorities, though? Ours, or the AI's? That was more the point of this article, whether our interests or the AI's ought to take precedence, and whether we're being objective in deciding that. 

Note that most AIs would also be bad according to most other AIs' priorities. The paperclip maximizer would not look kindly at the stamp maximizer. Given the choice between the future governed by human values, and the future governed by a stamp maximizer, a paperclip maximizer would choose humanity, because that future at least contains some paperclips.

Rarely do I get such insightful feedback but I appreciate when I do. It's so hard to step outside of myself, I really value the opportunity to see my thoughts reflected back at me through other lenses than the one I see the world through. I suppose I imagined the obsolete tech would leave little doubt that the Sidekicks aren't sentient, but the story also sort of makes the opposite case throughout when it talks about how personality is built up by external influences. I want the reader to be undecided by the end and it seems I can't have that cake and eat it too (have the protag be the good guy). Thanks again and Merry Christmas

Because the purpose of horror fiction is to entertain. And it is more entertaining to be wrong in an interesting way than it is to be right. 

>"I'm going to do high-concept SCP SF worldbuilding literally set in a high-tech underground planet of vaults"

I do not consider this story scifi, nor PriceCo to be particularly high tech.

>"and focus on the details extensively all the way to the end - well, except when I get lazy and don't want to fix any details even when pointed out with easy fixes by a reader"

All fiction breaks down eventually, if you dig deep enough. The fixes were not easy in my estimation. I am thinking now, however, that this story was a poor fit for this platform.

And it is more entertaining if the reader is sold on the worldbuilding instead of thinking to themselves 'decades? has this guy ever done his own grocery shopping? does he realize how little food a standard PriceCo warehouse contains?', and buys into the high-concept SF before the twist reveal ending that it was horror all along and the world-building and Robinson Crusoe stuff was a trap that the reader fell into just like the characters in every loop do.

Arguably true of Mouse Utopia (I'm not saying it's not interesting or entertaining, I'm saying that it is false and I think you are doing a minorly bad thing by endorsing it), but not the others, which are neither interesting nor right.

You don't consider a story of mad science societal engineering across millennia of underground generation-ship-style arcologies, with implied brainwashing or cloning tech of some sort, and literal invisibility cloaks, to be 'high tech'?

That is one way you could react to criticism, sure: not make a single fix and leave. If you only want adulatory feedback, then yes, I do not think the LW platform is for you.

You may also enjoy these companion pieces:

I purposefully left it indeterminate so readers could fill in the blanks with their own theories. But broadly it represents a full, immediate and uncontrolled comprehension of recursive, fractal infinity. The pattern of relationships between all things at every scale, microcosm and macrocosm. 

More specifically to the story I like to think they were never human, but always those creatures dreaming they were humans, shutting out the awful truth using the dome which represents brainwashing / compartmentalization. Although I am not dead-set on this interp... (read more)

Fair point. But then, our most distant ancestor was a mindless maximizer of sorts with the only value function of making copies of itself. It did indeed saturate the oceans with those copies. But the story didn't end there, or there would be nobody to write this. 

Good catch, indeed you're right that it isn't standard evolution and that an AI studies how the robots perish and improves upon them. This is detailed in my novel Little Robot, which follows employees of Evolutionary Robotics who work on that project in a subterranean facility attached to the cave network:

This is a prologue of sorts. It takes place in the same world as The Shape of Things to Come, The Three Cardinal Sins, and Perfect Enemy (Recently uploaded at the time of writing) with The Answer serving as the epilogue. 

Ah! I shall read those next, then. Thank you.

I appreciate your insightful post. We seem similar in our thinking up to a point. Where we diverge is that I am not prejudicial about what form intelligence takes. I care that it is conscious, insofar as we can test for such a thing. I care that it lacks none of our capacities, so that what we offer the universe does not perish along with us. But I do not care that it be humans, specifically, and feel there are carriers of intelligence far more suited to the vacuum of space than we are, or even cyborgs. Does the notion of being superseded disturb you?

Yes, the notion of being superseded does disturb me. Not in principle, but pragmatically. I read your point, broadly, to be that there are a lot of interesting potential non-depressing outcomes to AI, up to advocating for a level of comfort with the idea of getting replaced by something "better" and bigger than ourselves. I generally agree with this! However, I'm less sanguine than you that AI will "replicate" to evolve consciousness that leads to one of these non-depressing outcomes. There's no guarantee we get to be subsumed, cyborged, or even superseded. The default outcome is that we get erased by an unconscious machine that tiles the universe with smiley faces and keeps that as its value function until heat death. Or it's at least a very plausible outcome we need to react to. So caring about the points you noted you care about, in my view, translates to caring about alignment and control.

Well put! While you're of course right in your implication that conventional "AI as we know it" would not necessarily "desire" anything, an evolved machine species would. Evolution would select for a survival instinct in them as it did in us. All of the activities of ours you observe fall along those same lines, driven by instincts programmed into us by evolution, which we should expect to be common to all products of evolution. I speculate a strong AI trained on human connectomes would also have this quality, for the same reasons.

Conservatism, just not absolute. 

This feels like an issue of framing. It is not contentious on this site to propose that AI which exceeds human intelligence will be able to produce technologies beyond our understanding and ability to develop on our own, even though it's expressing the same meaning.

Then why limit things to light cones?

>"A lot of the steps in your chain are tenuous. For example, if I were making replicators, I'd ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3."

This assumes three things: First, the continued use of deterministic computing into the indefinite future. Quantum computing, though effectively deterministic, would also increase the opportunity for copying errors because of the added difficulty in extracting the result. Second, you assume that the mechanism which ensures faithful ... (read more)

Is it reasonable to expect that every future technology be comprehensible to the minds of human beings alive today, otherwise it's impossible? I realize this sounds awfully convenient/magic-like, but is there not a long track record in technological development of feats which were believed impossible, becoming possible as our understanding improves? A famous example being the advent of spectrometry making possible the determination of the composition of stars, and the atmospheres of distant planets:

"In his 1842 book The Positive Philosophy, the French phil... (read more)

No, but it's also not reasonable to privilege a hypothesis.

Nooo >:0 The ending has to be bleak, what have you done

How about this instead? //start quote The statue made a rising whine as the lights began to pulse rhythmically. The legs stretched out, probing a bit in random directions for an instant before one found the surface of the floor and the rest immediately followed, each with its own sharp little click. When the machine appeared sure of its footing, it began to slowly push itself up while the weapon on its back glowed a dull red and swiveled around sharply. It was so beautiful! And a bit terrifying. I took a step back, and the statue seemed to notice! I can't say how I knew, but I was sure it looked right at me. //end quote There was a thudding sound. I turned around. I was alone now. The priest was being consumed by the statue. Seconds later,  a tremendous crashing noise was heard as an appendage burst out into the open air. And, the voice of John Henry boomed in my ears as he assured me that I had done the right thing. I had a bright future ahead as the first of a new group of clear-headed priests. We were going to do so much together!

>"and that the few who do are now even more implausibly superhuman at chipping tunnels hundreds of miles long out of solid rock."

No, there have just been a lot of them over a very long period of time. Each made a little progress on the tunnel before dying out. 

>"Look at Biosphere 2 or efforts at engineering stable closed ecosystems: it is not easy!"

This is not a true closed system.

>"and in the long run, protein deficiency as they use up stores, lose a bunch of crops to various errors (possibly contaminating everything), and the soil beco... (read more)

Which is impossible: if each one made a little progress, how did the later ones make a little progress as well when the task is so much harder due to the extraordinary distance? Did they learn how to teleport? Build a little high-speed levitating railroad to get to the end of the tunnel for the day's supplies? Ask the Russians how you supply a front line which is ever further away...

Indeed, but as written, the resets take place after everyone has died, which requires it to be stable for decades or centuries, which is wildly unlikely when Biosphere 2 couldn't keep it stable for like a year.

This is bait-and-switch: "I'm going to do high-concept SCP SF worldbuilding literally set in a high-tech underground planet of vaults and focus on the details extensively all the way to the end - well, except when I get lazy and don't want to fix any details even when pointed out with easy fixes by a reader, and then it's all 'oh it's only horror fiction, it was never meant to be in a spirit of hard sci-fi, I'm not you, and I wrote it like I wanted to, there's no arguing taste'". Yeah, that's great, but I read it in the spirit conveyed.

(And even with all that, that doesn't excuse endorsing scientific myths like Mouse Utopia: yes, Mouse Utopia makes a great story - that's why you know it in the first place! because it's bullshit but it's a great story! Now the question is, why do you not care and feel no shame at making the reader more wrong, rather than less wrong?)

It does not say anywhere that every group finishes the tunnel, nor that the tunnel is filled in between cycles. But it does hint that there have been many many groups before who lived and died without leaving the starting PriceCo. This solves the problem of tunnel length vs digging time.

Food supply duration is solved by farming, as explained in the story. There is an unlimited supply of energy and water, after all.

The other issues remain, but then, it's fiction meant to entertain and is tagged as such.

That makes the problem worse, not better, because now you are postulating that groups somehow fail to do the very obvious and empirically easy thing to try to escape, and that the few who do are now even more implausibly superhuman at chipping tunnels hundreds of miles long out of solid rock.

Bracketing the problem of seeds (seeds in regular foods would be unfertilized, AFAIK, but we'll postulate a gardening department with some usable vegetables like tomatoes), farming is a lot harder than just stirring in some energy and water. You are losing irreplaceable minerals every cycle due to extensive inefficiencies everywhere: every visit to the bathroom, organic material is lost permanently. Look at Biosphere 2 or efforts at engineering stable closed ecosystems: it is not easy! That's why farmers rely so heavily on fertilizer, crop rotation, and other things. As written, the most basic outcome is that they start farming with the bags of soil in a gardening department, are weakened badly by low calorie harvests (no wheat or other cereal seeds in most gardening departments, and if you're lucky and there's some for a purpose like 'catgrass' it'll be a bunch of growing seasons before you have enough to spare to eat rather than replant...) and in the long run, protein deficiency as they use up stores, lose a bunch of crops to various errors (possibly contaminating everything), and the soil becomes exhausted.

It's fiction, yes - high-concept world-building fiction which lives or dies on the plausibility of the world-building which it goes into extensively. (Certainly it's not a character-driven story, as there are no characters in it, only cardboard cutouts and talking stereotypes.)

My point is that most of your errors are easily fixed if you think about it a little more. You don't need Mouse Utopia at all - it's a bad title because there's a thousand things named that already, and there's plenty of ways such a prison-society is doomed (eg shadow-people-worshipping cults) wit

Not to worry, no offense was taken. Indeed though, I have heard it said our ancestors were already cyborgs the minute one of them first sharpened a rock. 

The ending is because I normally write horror and take perverse delight in making small additions to wholesome things which totally subvert and ruin them. It's a compulsion at this point, lol.

The black goo is called Vitriol. A sort of a symbolic constant across many of my stories, present in different forms for various purposes. Typically it represents the corrosive hatred we indulge in, a poisoned well we cannot help but keep returning to even as we feel it killing us. 

I'm thankful for your readership and will endeavor not to disappoint you. Tomorrow's will be a neat one. 

I have noticed this tendency. Sometimes it's really well done though; I loved the ending for ||the enhancer story||. (spoilers for another one of Alex's stories)

"I'm not exactly sure what the point is though"

Not to fear transhumanism, not to regard ourselves as finished products, but also not to assume that more intelligent/powerful = more moral

"an earth-swallowing sea of maximizer AI nano"

That's not what the black sea is, but that angle makes sense in retrospect

Ah, I see. By the way, I didn't intend to sound hostile or condescending but I may have done so, in which case I'm sorry. I think it's evidence you have potential as a writer, and it's probably longer than anything I've ever written lol (I'm terrible at finishing things) - I just would have preferred you go through a revision phase or two before publishing it, and considered how to make the characters a bit more realistic. (For instance, it seems like he adjusts psychologically to the transformation much faster than a normal person would, and suddenly becomes a hero when previously he was just a kid.) Another possible moral, btw, is that the freedom to develop oneself and experiment with alternative modes of being and organization is part of what makes us human - not our mere body plan. There is a case to be made that the people of Cloud Nine are more human than the people of the Founder's perfect city.

I appreciate your readership and insights. Some of these challenges have answers, some were just oversights on my part.

1. The central theme was about having the courage to reject an all powerful authority on moral grounds even if it means eternal torment, rather than endlessly rationalize and defend its wrongdoing out of fear. "Are you a totalitarian follower who receives morality from authority figures or are you able to determine right and wrong on your own despite pressure to conform" is the real moral test of the Bible, in this story, rather than being... (read more)

Trevor Hill-Hand (1y):
This was the most interesting part of the whole story to me, and it's an angle I haven't quite seen in this type of story before. However, I think it was in competition with the personalities of Elohim and Shaitan. They felt too petty and talking-past-each-other to make sense as people from an enlightened race. Maybe if their "conflict" was also a pre-planned part of their strategy, instead of a squabble?

The cultural and literary references didn't bother me, but they did mean that by the end of the first few paragraphs I was like, "Oh okay, we're doing an Erich von Daniken/Assassin's Creed/Prometheus," and then everything played out about how I expected. I wanted a few more surprises, I think. At first it felt like maybe the main characters were far-future humans, and maybe it would have been fun to let that possibility linger for longer. Or just focus in more on the central theme and how it could subvert and/or support the Ancient Aliens narrative. But I did enjoy reading it! Got me visualizing some neat things.

If the distance is not more than he can manage before sleeping. The story isn't really about overcoming physical barriers, but mental ones. Thank you for your feedback

Also to set up the visitors at the end, who I still feel arrived too abruptly

It is abrupt, but to me that wasn't a bad thing. If their arriving had taken an extra x paragraphs but the story worked out exactly the same way, I think that, for me, would be worse, not better. If there were some clever way of hinting at them earlier, maybe. The solar flare works quite well for that.

It was originally two novellas. I combined them, not seeing a point to publishing them separately. Should I separate them?

Possibly. I had great fun with the first half of this story, and then I saw where the scroll thingy on my screen was and so stopped with the intention of coming back for the second half. (Which I just did. Second half also great fun!) I don't think it's a flaw in the story; it might just be that its length makes it a bit of an outlier in terms of webpage reading. Solar flares are a great plot device for "I need some machine to malfunction".
I think you should not act on my advice alone. I might be an outlier. Furthermore, even if I correctly detected what makes the story worse (for a group of people larger than myself), it does not automatically follow that acting on my advice would improve it. People are better at detecting what they don't like than at improving things. (For example, I could say which meals taste better and which worse, but I couldn't cook the meals I like most.) The last objection is that, in the long term, rewriting this story is irrelevant. If my complaint makes sense (which still remains to be verified), the best reaction is to keep writing other things, and not make the same mistake again. So... I guess just leave it as it is. If you later decide to publish it elsewhere, two parts will probably be better. When writing new stories, consider not changing the style in the middle (or if you do, keep the parts separate).

I would need an agent for that. I am in the process of sending query letters to agents specializing in the genres I write.

By styling I mean aesthetic flourish, which is largely irrelevant to aerodynamics. The point I'm making is that aesthetic styling isn't predictable because it isn't governed by the physics of rocketry, where the features necessary to its function are predictable.

The Wernher von Braun rocket has a very pointy head, which is bad for aerodynamics, and which was not obvious at the time. Even Starship is more pointy than would be optimal, for silly meme reasons. All the images of the car models also have bad aerodynamics. The styling of the solar car prototypes we have is very much driven by aerodynamics, and the styling differences of Tesla's battery-driven trucks compared to previous trucks are largely driven by aerodynamics. In the time before wind tunnels and computer simulations, aerodynamic considerations were not obvious, and thus the predictions didn't include them. Our cars currently having silly side-mirrors also isn't aesthetics but just bad lawmaking. Our cars have seats that allow the driver to rest their head against the seat; that's not just aesthetics but reduces whiplash. I'm not even a car nerd. Someone who knows a lot about cars can likely tell you a lot of additional elements that are present in today's cars but don't exist in those images, for functional reasons.

Thank you. Can you devise an organic way to work this information into the article while keeping it approachable to an audience of mostly laypersons, who will understand what particles are but not the importance of fields? 

I don’t know what purpose it serves in the post. There are more significant reasons why copies of deceased persons would never be exact anyway, without needing to go into anything beyond classical physics.

Not to worry, I'm secure in my talents, as a tradpubbed author of ten years. If by this time I could not write well, I would choose a different pursuit. I appreciate your good intentions but my ego is uninjured and not in need of coddling. It is a hardened mass of scar tissue as a consequence of growing up autistic in a less sensitive time.

This article in fact was originally posted on a monetized platform, which is why it's in that style you dislike. You certainly have a nose for it. I didn't know to tailor it to this community's preferences as I have only... (read more)

>"the world is made of fields, not particles"

Is this the mainstream view? It's the first time I'm hearing this. Thank you for the insights btw

It’s the mainstream view, but not the only one and not necessarily quite correct. The Standard Model is a quantum field theory incorporating special relativity and the particles are thought of as being quanta of fields. Regardless of whether the particles are entirely reducible to fields, fields are clearly more important overall than particles.

Horror movies are quite a popular genre, despite depicting awful, bleak scenarios. Imagine if the only genre of film was romcom. Imagine if no sour or bitter foods existed and every restaurant sold only desserts. I am of the mind that there is much to appreciate about life as a human, even as there is also much to hate. I am not here only to be happy, as such a life would be banal and an incomplete representation of the human experience. Rollercoasters are more enjoyable than funiculars because they have both ups and downs. 

>"It is. An argument is only as strong as its weakest link."

If the conclusion hinges upon that link, sure.

>"Reversing entropy and simulation absolutely are."

You do not need to reverse entropy to remake a person. Otherwise we are reversing entropy every time we manufacture copies of something which has broken. Even the "whole universe scan" method does not actually wind back the clock, except in sim. 

>"Well you suggest in the article that our simulators would resurrect us, am I missing something?"

Yes. If every intelligent species takes the att... (read more)

>"You might begin by arguing that the US military is generally trustworthy, wouldn't ever release doctored footage to spread misinformation"

When the government denied UAPs, the response was "it's not officially real, the authorities have not verified it". Now the government says it is real, and the response has shifted to "you trust the authorities??"

>"Would you think a good title for that article would be "The US military is generally trustworthy"? I think that would be a bad title"

See above. It's always lose/lose with goalpost movers. This doe... (read more)

You seem to have completely misunderstood the point of my UAP example, which was to point out something about titles, and not any of the other things you seem to have taken it to be. In particular:

* I was not at all trying to argue for or against any particular view of what UAPs have what sort of explanation.
* I was not at all making any claim about what an article about UAPs would contain, beyond (1) "I can imagine an article that covers roughly these points" and (2) "if so, I think X would be a poor title".
* I was not at all making any claim about what one should and shouldn't trust any given bit of the government about.
* I was not at all making any comment on the relative merits of trying to decide how to explain any particular UAP versus letting AOIMSG do it, though I'm a little puzzled by your comment since so far as I know the number of UAP-analyses AOIMSG has released so far is zero.

Yes, different places with different people and different incentives have different cultures. Maybe clickbait and misdirection are necessary when using Medium as a tool for extracting money from advertisers or readers. They will not go down well here. Another thing that may not go down well here is if your goal in argument is recreation rather than truth-seeking. You've said several times that you came here looking for disagreement, but I don't see any sign that any of that disagreement has caused you to reconsider anything even slightly.

Obviously my opinions on vaccines have precisely nothing to do with your article about resurrecting the dead, which is what this discussion was about before you 100% misunderstood an analogy I made. But, since you ask: I think vaccination is one of humanity's greatest and most important inventions; I think the vast majority of concern about serious vaccine side-effects is grossly misplaced, and in many cases deliberately and dishonestly fostered by people who are happy to cause harm for financial or political gain; I think it's likel

>"Well, as you yourself outline in the article people have basically just accepted death. How much funding is currently going into curing aging? (Which seems to be a much lower hanging fruit currently than any kind of resurrection.) Much less than should be IMO."

A good point. I'm not sure how or if this would change. My suspicion is that as the technology necessary to remake people gets closer to readiness, developed for other reasons, the public's defeatism will diminish. They dare not hope for a second life unless it's either incontrovertibly possible... (read more)

It is. An argument is only as strong as its weakest link.

Sorry if it came across that way; I did not stop at the first possible objection, I am specifically questioning the parts that seem the weakest to me. (For the argument regarding bringing back past people, indeed Starship and self-driving are not too relevant. Reversing entropy and simulation absolutely are.)

I don't have any issues with the idea of resurrecting people based on a sufficiently detailed scan. (You write that "There’s a lot of people today who speculate that some kind of weirdness happens in the brain that can never be reduced to physics.", but I don't think anyone (on LW at least) would seriously argue that human brains can't be simulated for some weird reason.) The idea that we could recover past states of the universe in sufficient detail is by far the most suspicious claim, and it is central to the idea of bringing back past people; that's why I was addressing that specifically.

Well, you suggest in the article that our simulators would resurrect us; am I missing something?

Enjoyable, digestible writing style and thought provoking. Aligns pretty closely with some of my own ideas concerning technological resurrection. 

Point taken re: formatting. But what you consider meandering, to me, is laying contextual groundwork to build the conclusion on. I cannot control for impatience. 

The former is necessary to establish the credibility of the latter, IMO.

Possibly necessary, but not sufficient.

In any case, suppose you wanted to expand on your remarks about UAPs. You might begin by arguing that the US military is generally trustworthy, wouldn't ever release doctored footage to spread misinformation, is full of people capable of finding good "normal" explanations for things when they exist, etc.; then you might review some examples of UAP reports, possible explanations for them, and why you find some more credible than others; and finally you might put together the foregoing analysis to reach the conclusion: "We are being pranked by aliens". (Note: my guess is that that is not in fact your position.)

Would you think a good title for that article would be "The US military is generally trustworthy"? I think that would be a bad title. If I read that article with that title I would think something like "This person chose a deliberately misleading title, probably because he knew that a title stating the actual thesis of the article would put people off. In future, if I read things he writes, I should expect rhetorical tricks and manipulation rather than straightforwardness."

Maybe that's unfair? I don't think I really endorse the principle that the only honest way to title an article that argues for a particular thesis is for the title to be a brief statement of that thesis. But I do think that that's the default thing to do with the title, that if you do something else there should probably be a specific good reason, and that if the only reason is "I think people won't take me seriously if they know my actual opinion going in" then that's a bad reason.

(Also, for what it's worth, I don't think the proposition that it may one day be possible to something-like-resurrect at least some of the dead is in fact one that would get you regarded as a crackpot around here, even though I am not at all convinced that you have made a good case for the particular version of that proposition your article argues for.)
Sorry for being a bit harsh and overly blunt in my first comment. You are obviously a good writer. This is clearly not a case of someone who cannot figure out how to structure a post: it was a post optimised for grabbing and holding attention, which is what (too) many sites optimise for, sometimes to optimise ad revenue, but sometimes (like this case) for no good reason at all. In particular this sentence made it clear to me that you knew what you were doing, and I felt almost mocked for still paying attention to you:

As I said, you are a good writer and you have something to say. If you structure your posts in a way that optimises for giving value to the reader instead of for grabbing attention, I'm looking forward to seeing more posts from you.

These are good points. Can we agree a more accurate title would be "Futurists with STEM knowledge have a much better prediction track record than is generally attributed to futurists on the whole"? Though considerably longer and less eye-catching.

UAPs seem to perform something superficially indistinguishable from antigravity btw, whatever they are. Depending of course on whether the US government's increasingly official, open affirmation of this phenomenon persuades you of its authenticity. If there exists an alternate means to do the same kinds of things we wanted antigravity for in the first place, the impossibility of antigravity specifically seems like a moot point. 

It would be a more accurate title, but it would then have even less to do with the bulk of the actual article, which is not about futurists' track records but about the prospects for resurrecting the dead.

Because it ties in to the earlier point you mentioned about demand driving technological development. What is there more demand for than the return of departed loved ones? Simulationism was one of two means of retrieving the necessary information to reconstitute a person btw, though I have added a third, much more limited method elsewhere in these comments (mapping the atomic configuration of the still living).

>"You are talking about them in past tense as if they have already achieved their claimed capabilities. I have no doubt that practical mars vehic... (read more)

Well, as you yourself outline in the article, people have basically just accepted death. How much funding is currently going into curing aging? (Which seems to be a much lower-hanging fruit currently than any kind of resurrection.) Much less than should be, IMO.

Sorry, but this just seems like a generic counterargument. The key word here is "if".

1. Reversing entropy is a very shaky idea, as other comments already outlined in more detail.
2. The simulation hypothesis seems like a hotly debated topic, but there does not seem to be an accepted way to even put probabilities on it; depending on your priors you can get answers anywhere between 0 and 1. Also, taking this to its logical conclusion just seems nonsensical. If we will be resurrected later anyway, why care about anything at all right now? I think much of EY's writing on many-worlds can be applied here. (The idea of being resurrected in many different possible worlds seems quite similar.)

No, it isn't unnecessary, as multiple potential methods of retrieving the necessary information exist, and I wanted to cover them when I felt it was appropriate. Are you behaving reasonably? Is it my responsibility to anticipate what you're likely to assume about the contents of an article before you read it? Or could you have simply finished reading before responding? I intend no hostility, though I confess I do feel frustrated.

I read it and didn't know what to make of it, since you sketch out some of the reasons why we obviously don't live in a simulation. One man's modus ponens is another man's modus tollens.

Creating a universe like our own would be a crime unprecedented in history. If I thought you could do it, I'm not saying I'd do whatever it took to prevent you - but if someone else killed you for it, and if I were inexplicably placed on the jury, I'd prevent a conviction. Hopefully enough other beings think the same way - and again, you present an argument that they would - to rule out the possibility of such an abomination.
I was merely explaining why I missed the part you quoted, to give you some feedback as an author. I also largely agree with the top comment. Due to very limited time I tend to skim or skip meandering/bloviating text. I think this article would benefit from sections, TLDR, and summary.

This exact argument is already presented late in the article: "In the event that we’re in a simulation already, many of the barriers facing the scanning of an entire universe (or at least a solar system, accounting for minuscule external gravitational influences) are solved. That information already exists in the simulation back end."

You might've finished reading before you replied.

[This comment is no longer endorsed by its author]