All of Darklight's Comments + Replies

I still remember when I was a master's student presenting a paper at the Canadian Conference on AI 2014 in Montreal. Bengio was also at the conference presenting a tutorial, and during the Q&A afterwards, I asked him a question about AI existential risk. I think I worded it back then as a concern about the possibility of Unfriendly AI or a dangerous optimization algorithm or something like that, as it was after I'd read the sequences but before "existential risk" was popularized as a term. Anyway, he responded by asking jokingly if I was a journalist... (read more)

The average human lifespan is about 70 years, or approximately 2.2 billion seconds. The average human brain contains about 86 billion neurons and roughly 100 trillion synaptic connections. In comparison, something like GPT-3 has 175 billion parameters and 500 billion tokens of data. Assuming, very crudely, synapse-to-weight and second-of-experience-to-token equivalence, we can see that the human model's ratio of parameters to data is much greater than GPT-3's, to the point that humans have significantly more parameters than timesteps (100 trillion to 2.2 billion), whi... (read more)
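Spelling out the arithmetic as a rough sketch, using the same figures and crude equivalences as above:

```python
# Back-of-the-envelope comparison of parameters-to-data ratios,
# using the figures quoted above; the equivalences are very crude.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

human_params = 100e12                  # ~100 trillion synapses treated as "parameters"
human_data = 70 * SECONDS_PER_YEAR     # ~70 years of experience, one "token" per second (~2.2e9)

gpt3_params = 175e9                    # 175 billion parameters
gpt3_data = 500e9                      # ~500 billion training tokens

print(f"Human params per data point: {human_params / human_data:,.0f}")  # ~45,000
print(f"GPT-3 params per data point: {gpt3_params / gpt3_data:.2f}")     # ~0.35
```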

I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:

http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/

I should point out that the logic of the degrowth movement follows from a relatively straightforward analysis of available resources vs. first world consumption levels.  Our world can only sustain 7 billion human beings because the vast majority of them live not at first world levels of consumption, but at third world levels, which many would argue to be unfair and an unsustainable pyramid scheme.  If you work out the numbers, assuming everyone had the quality of life of a typical American citizen, taking into account things like meat consumption to arabl... (read more)

I'm using the number calculated by Ray Kurzweil for his 1999 book, The Age of Spiritual Machines.  To get that figure, you assume 100 billion neurons firing every 5 ms, i.e. at 200 Hz.  That is based on the maximum firing rate given refractory periods.  In actuality, average firing rates are usually lower than that, so in all likelihood the difference isn't actually six orders of magnitude.  In particular, I should point out that six orders of magnitude refers to the difference between this hypothetical maximum firing brain and the ... (read more)
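For concreteness, here is that style of estimate worked through; the roughly 1,000 connections per neuron and the 1 Hz average firing rate are my own illustrative fill-ins rather than figures from the comment above:

```python
# Kurzweil-style brain compute estimate, worked through with illustrative numbers.
# The connections-per-neuron and average firing rate are assumptions for illustration only.

neurons = 100e9        # ~100 billion neurons
connections = 1_000    # assumed synaptic connections per neuron
max_rate = 200         # Hz: one spike per 5 ms refractory period (the ceiling used above)
avg_rate = 1           # Hz: illustrative average firing rate, well below the ceiling

ceiling_ops = neurons * connections * max_rate   # ~2e16 "ops" per second at maximum firing
average_ops = neurons * connections * avg_rate   # ~1e14 at the illustrative average rate

print(f"Ceiling estimate: {ceiling_ops:.1e} ops/s")
print(f"Average-rate estimate: {average_ops:.1e} ops/s ({ceiling_ops / average_ops:.0f}x lower)")
```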

Okay, so I contacted 80,000 hours, as well as some EA friends for advice.  Still waiting for their replies.

I did hear from an EA who suggested that if I don't work on it, someone else who is less EA-aligned will take the position instead, so in fact, it's slightly net positive for me to be in the industry, although I'm uncertain whether AI capability research is actually funding-constrained rather than personnel-constrained.

Also, would it be possible to mitigate the net negative by deliberately avoiding capability research and instead taking an ML engineering job at a lower-tier company that is unlikely to develop AGI before others, just applying existing ML tech to practical problems?

I previously worked as a machine learning scientist but left the industry a couple of years ago to explore other career opportunities.  I'm wondering at this point whether or not to consider switching back into the field.  In particular, in case I cannot find work related to AI safety, would working on something related to AI capability be a net positive or net negative impact overall?

1 · Yonatan Cale · 1y
Working on AI Capabilities: I think this is net negative, and I'm commenting here so people can [V] if they agree or [X] if they disagree.

Seems like habryka [https://www.lesswrong.com/users/habryka4] agrees [https://www.lesswrong.com/posts/vnoi5umkiS7bqdWBe/will-working-here-advance-agi-help-us-not-destroy-the-world?commentId=D6SX7YGqZv7NDzZms]? Seems like Kaj [https://www.lesswrong.com/users/kaj_sotala] disagrees [https://www.lesswrong.com/posts/EzAt4SbtQcXtDNhHK/confused-why-a-capabilities-research-is-good-for-alignment]?

I think it wouldn't be controversial to advise you to at least talk to 80,000 hours about this before you do it, as some safety net so you don't do something you don't mean to by mistake. Assuming you trust them. Or perhaps ask someone you trust. Or make your own gears-level model.

Anyway, it seems like an important decision to me.

Even further research shows the most recent Nvidia RTX 3090 is actually slightly more efficient than the 1660 Ti, at 36 TeraFlops, 350 watts, and 2.2 kg, which works out to 0.0001 PetaFlops/Watt and 0.016 PetaFlops/kg.  Once again, they're within an order of magnitude of the supercomputers.

So, I did some more research, and the general view is that GPUs are more power efficient in terms of Flops/watt than CPUs, and the most power efficient of those right now is the Nvidia 1660 Ti, which comes to 11 TeraFlops at 120 watts, so 0.000092 PetaFlops/Watt, which is about 6x more efficient than Fugaku.  It also weighs about 0.87 kg, which works out to 0.0126 PetaFlops/kg, which is about 7x more efficient than Fugaku.  These numbers are still within an order of magnitude, and also don't take into account the overhead costs of things like coo... (read more)
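For reference, here is the arithmetic behind both sets of GPU figures, using the specs as quoted; real deployments would also pay overhead for cooling, host systems, and interconnect:

```python
# Check the PFLOPS/W and PFLOPS/kg figures quoted for the two GPUs above.

def efficiency(name, tflops, watts, kg):
    pflops = tflops / 1000.0
    print(f"{name}: {pflops / watts:.6f} PFLOPS/W, {pflops / kg:.4f} PFLOPS/kg")

efficiency("GTX 1660 Ti", tflops=11, watts=120, kg=0.87)  # ~0.000092 PFLOPS/W, ~0.0126 PFLOPS/kg
efficiency("RTX 3090", tflops=36, watts=350, kg=2.2)      # ~0.000103 PFLOPS/W, ~0.0164 PFLOPS/kg
```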

Another thought is that maybe Less Wrong itself, if it were to expand in size and become large enough to roughly represent humanity, could be used as such a dataset.

So, I had a thought.  The glory system idea that I posted about earlier, if it leads to a successful, vibrant democratic community forum, could actually serve as a kind of dataset for value learning.  If each post has a number attached to it that indicates the aggregated approval of human beings, this can serve as a rough proxy for a kind of utility or Coherent Aggregated Volition.

Individual examples will probably be quite noisy, but averaged across a large number of posts, it could function as a real-world dataset, with the post conte... (read more)
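To make that concrete, here is a minimal sketch of how vote-scored posts could be framed as a supervised dataset; the field names, score normalization, and modelling step are hypothetical illustrations rather than part of the proposal:

```python
# Minimal sketch: posts with aggregated approval scores as a value-learning dataset.
# Field names, score normalization, and the modelling step are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ScoredPost:
    text: str
    approval: float   # aggregated vote score attached to the post

def to_training_pairs(posts, max_abs_score=100.0):
    """Turn scored posts into (text, target) pairs with targets squashed to [-1, 1],
    so that noisy individual votes matter less than the aggregate signal."""
    return [(p.text, max(-1.0, min(1.0, p.approval / max_abs_score))) for p in posts]

# A regression model (e.g. a transformer with a scalar head) could then be trained on
# these pairs as a rough learned proxy for aggregated human approval.
corpus = [ScoredPost("Be kind to newcomers.", 42.0), ScoredPost("Buy my crypto course!!!", -30.0)]
print(to_training_pairs(corpus))
```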

A further thought is that those with more glory can be seen almost as elected experts.  Their glory is assigned to them by votes after all.  This is an important distinction from an oligarchy.  I would actually be inclined to see the glory system as located on a continuum between direct democracy and representative democracy.

So, keep in mind that having the first vote free and worth double the paid votes does tilt things more towards democracy.  That being said, I am inclined to see glory as a kind of proxy for past agreement and merit, and a rough way to approximate liquid democracy, where you can proxy your vote to others or vote yourself.

In this alternative "market of ideas" the ideas win out because people who others trust to have good opinions are able to leverage that trust.  Decisions over the merit of the given arguments are aggregated by vote.  As lon... (read more)

Perhaps a nitpick detail, but having someone rob them would not be equivalent, because the cost of the action is offset by the ill-gotten gains.  The proposed currency is more directly equivalent to paying someone to break into the target's bank account and destroying their assets by a proportional amount so that no one can use them anymore.

As for the more general concerns:

Standardized laws and rules tend in practice to disproportionately benefit those with the resources to bend and manipulate those rules with lawyers.  Furthermore, this proposal... (read more)

As for the cheaply punishing prolific posters problem, I don't know a good solution that doesn't lead to other problems, as forcing all downvotes to cost glory makes it much harder to deal with spammers who somehow get through the application process filter.  I had considered an alternative system in which all votes cost glory, but then there's no way to generate glory except perhaps by having admins and mods gift them, which could work, but runs counter to the direct democracy ideal that I was sorta going for.

What I meant was you could farm upvotes on your posts.  Sorry.  I'll edit it for clarity.

And further to clarify, you'd both be able to gift glory and also spend glory to destroy other people's glory, at the mentioned exchange rate.

The way glory is introduced into the system is that any given post allows everyone one free vote on them that costs no glory.

So, I guess I should clarify: the idea is that you can gift glory, which is how you gain the ability to post, and you also gain or lose glory based on people's upvotes and downvotes on your posts.
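Putting the pieces from these comments together, here is a rough sketch of the mechanics as I understand them; the exchange rate, weights, and affordability checks below are placeholder choices for illustration, not the proposal's actual numbers:

```python
# Rough sketch of the glory mechanics described in these comments.
# DESTROY_RATE, the vote weights, and the affordability rules are placeholders.

from collections import defaultdict

FREE_VOTE_WEIGHT = 2   # the one free vote per post counts double a paid vote
PAID_VOTE_WEIGHT = 1
DESTROY_RATE = 2       # placeholder: glory spent per point of glory destroyed

glory = defaultdict(float)   # user -> glory balance
free_vote_used = set()       # (voter, post_id) pairs that have used their free vote

def vote(voter, author, post_id, up=True):
    """Everyone gets one free, double-weight vote per post; further votes cost glory."""
    if (voter, post_id) not in free_vote_used:
        free_vote_used.add((voter, post_id))
        weight = FREE_VOTE_WEIGHT
    else:
        if glory[voter] < PAID_VOTE_WEIGHT:
            return   # cannot afford a paid vote
        glory[voter] -= PAID_VOTE_WEIGHT
        weight = PAID_VOTE_WEIGHT
    glory[author] += weight if up else -weight   # the author gains or loses glory

def gift(donor, recipient, amount):
    """Gift glory to another user (how new users gain the ability to post)."""
    amount = min(amount, glory[donor])
    glory[donor] -= amount
    glory[recipient] += amount

def destroy(attacker, target, amount):
    """Spend glory to destroy someone else's glory at the exchange rate."""
    cost = amount * DESTROY_RATE
    if glory[attacker] >= cost:
        glory[attacker] -= cost
        glory[target] -= amount

# Example: a new user receives gifted glory, then votes.
glory["alice"] = 10.0
gift("alice", "bob", 3.0)
vote("bob", "alice", post_id=1, up=True)
```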

I have been able to land interviews at a rate of about 8/65 or 12% of the positions I apply to.  My main assumption is that the timing of COVID-19 is bad, and I'm also only looking at positions in my geographical area of Toronto.  It's also possible that I was overconfident early on and didn't prep enough for the interviews I got, which often involved general coding challenges that depended on data structures and algorithms that I hadn't studied since undergrad, as well as ML fundamentals for things like PCA that I hadn't touched in a long time a... (read more)

Actually, apparently I forgot about the proper term: Utilitronium

0 · MrMind · 6y
Well, it depends. Utilitronium is matter optimized for utility. Friendtronium is (let's posit) matter optimized to run a FAI. The two are not necessarily the same thing.

> I would urge you to go learn about QM more. I'm not going to assume what you do/don't know, but from what I've learned about QM there is no argument for or against any god.

Strictly speaking it's not something that is explicitly stated, but I like to think that the implication flows from a logical consideration of what MWI actually entails. Obviously MWI is just one of many possible alternatives in QM as well, and the Copenhagen Interpretation obviously doesn't suggest anything.

> This also has to do with the distance between the moon and the earth and

... (read more)
2 · Viliam · 6y
Perhaps in some other universe the local people are happy that the majority of their universe does not consist of dark matter and dark energy, and that their two moons have allowed them to find out some laws of physics more easily.

> Interesting, what is that?

The idea of theistic evolution is simply that evolution is the method by which God created life. It basically says, yes, the scientific evidence for natural selection and genetic mutation is there and overwhelming, and accepts these as valid, while at the same time positing that God can still exist as the cause that set the universe and evolution in motion through putting in place the Laws of Nature. It requires not taking the six days thing in the Bible literally, but rather metaphorically as being six eons of time, or some ... (read more)

0 · MrMind · 6y
I was asking because positronium is an already established name for an exotic atom, made of an electron and a positron. I suggest you change your positronium into something like friendtronium, to avoid confusion.

I might be able to collaborate. I have a master's in computer science and did a thesis on neural networks and object recognition, before spending some time at a startup as a data scientist doing mostly natural language related machine learning stuff, and then getting a job as a research scientist at a larger company to do similar applied research work.

I also have two published conference papers under my belt, though they were in pretty obscure conferences admittedly.

As a plus, I've also read most of the sequences and am familiar with the Less Wrong culture... (read more)

0 · Stuart_Armstrong · 6y
Interesting. Can we exchange email addresses?

Well, as far as I can tell, the latest progress in the field has come mostly through throwing deep learning techniques like bidirectional LSTMs at the problem and letting the algorithms figure everything out. This obviously is not particularly conducive to advancing the theory of NLP much.

I consider myself both a Christian and a rationalist, and I have read much of the sequences and mostly agree with them, although I somewhat disagree with the metaethics sequence and have been working on a lengthy rebuttal to it for some time. I never got around to completing it though, as I felt I needed to be especially rigorous and simply did not have the time and energy to make it sufficiently so, but the gist is that Eliezer's notion of fairness is actually much closer to what real morality is, which is a form of normative truth. In terms of moral phil... (read more)

3 · denimalpaca · 6y
I would urge you to go learn about QM more. I'm not going to assume what you do/don't know, but from what I've learned about QM there is no argument for or against any god.

This also has to do with the distance between the moon and the earth and the earth and the sun. Either or both could be different sizes, and you'd still get a full eclipse if they were at different distances.

Although the first test of general relativity was done in 1919, it was found later that the test done was bad, and later results from better replications actually provided good enough evidence. This is discussed in Stephen Hawking's A Brief History of Time.

There are far more stars than habitable worlds. If you're going to be consistent with assigning probabilities, then by looking at the probability of a habitable planet orbiting a star, you should conclude that it is unlikely a creator set up the universe to make it easy or even possible to hop planets.

Right, the sizes of the moon and sun are arbitrary. We could easily live on a planet with no moon, and have found other ways to test General Relativity. No appeal to any form of the Anthropic Principle is needed. And again with the assertion about habitable planets: the anthropic principle (weak) would only imply that to see other inhabitable planets, there must be an inhabitable planet from which someone is observing.

So you didn't provide any evidence for any god; you just committed a logical fallacy of the argument from ignorance.

The way I view the universe, everything you state is still valid. I see the universe as a period of asymmetry, where complexity is allowed to clump together, but it clumps in regular ways defined by rules we can discover and interpret.
0 · MrMind · 6y
Interesting, what is that? Are you familiar with the writings of Frank J. Tipler? That would be computronium-based I suppose.

I don't really know enough about business and charity structures and organizations to answer that quite yet. I'm also not really sure where else would be a productive place to discuss these ideas. And I doubt I or anyone else reading this has the real resources to attempt to build a safe AI research lab from scratch that could actually compete with the major organizations like Google, Facebook, or OpenAI, which all have millions to billions of dollars at their disposal, so this is kind of an idle discussion. I'm actually working for a larger tech company now than the startup from before, so for the time being I'll be kinda busy with that.

That is a hard question to answer, because I'm not a foreign policy expert. I'm a bit biased towards Canada because I live there and we already have a strong A.I. research community in Montreal and around Toronto, but I'll admit Canada as a middle power in North America is fairly beholden to American interests as well. Alternatively, some reasonably peaceful, stable, and prosperous democratic country like say, Sweden, Japan, or Australia might make a lot of sense.

It may even make some sense to have the headquarters be more a figurehead, and have the comp... (read more)

0 · jyan · 6y
Figurehead and branches is an interesting idea. If data, code, and workers are located all over the world, the organization can probably survive even if one or a few branches are taken. Where should the head office be located, and in what form (e.g. holding company, charity)? These types of questions deserve a post; do you happen to know of any place to discuss building a safe AI research lab from scratch?

I've had arguments before with negative-leaning Utilitarians and the best argument I've come up with goes like this...

Proper Utility Maximization needs to take into account not only the immediate, currently existing happiness and suffering of the present slice of time, but also the net utility of all sentient beings throughout all of spacetime. Assuming that the Eternal Block Universe Theory of Physics is true, then past and future sentient beings do in fact exist, and therefore matter equally.

Now the important thing to stress here is then that what matte... (read more)

Well, if we're implying that time travellers could go back and invisibly copy you at any point in time and then upload you to whatever simulation they feel inclined towards... I don't see how blendering yourself now will prevent them from just going to the moment before that and copying that version of you.

So, reality is that blendering yourself achieves only one thing, which is to prevent the future possible yous from existing. Personally I think that does a disservice to future you. That can similarly be expanded to others. We cannot conceivably prev... (read more)

0 · RedMan · 6y
Time traveling super-jerks are not in my threat model. They would sure be terrible, but as you point out, there is no obvious solution, though fortunately time travel does not look to be nearly as close technologically as uploading does.

The definition of temporal I am using is as follows: "relating to worldly as opposed to spiritual affairs; secular." I believe the word is appropriate in context, as traditionally, eternity is a spiritual matter and does not require actual concrete planning.

I assert that if uploading becomes available within a generation, the odds of some human or organization doing something utterly terrible to the uploaded are high, not low. There are plenty of recent examples of bad behavior by institutions that are around today and likely to persist.

I recently made an attempt to restart my Music-RNN project:

https://www.youtube.com/playlist?list=PL-Ewp2FNJeNJp1K1PF_7NCjt2ZdmsoOiB

Basically went and made the dataset five times bigger and got... a mediocre improvement.

The next step is to figure out Connectionist Temporal Classification and attempt to implement Text-To-Speech with it. And somehow incorporate pitch recognition as well so I can create the next Vocaloid. :V

Also, because why not brag while I'm here, I have an attempt at an Earthquake Predictor in the works... right now it only predicts the hi... (read more)

This actually reminds me of an argument I had with some Negative-Leaning Utilitarians on the old Felicifia forums. Basically, a common concern for them was how r-selected species tend to appear to suffer way more than be happy, generally speaking, and that this can imply that we should try to reduce the suffering by eliminating those species or at least avoiding the expansion of life generally to other planets.

I likened this line of reasoning to the idea that we should Nuke The Rainforest.

Personally I think a similar counterargument to that argument appl... (read more)

0 · RedMan · 6y
Thank you for the thoughtful response! I'm not convinced that your assertion successfully breaks the link between effective altruism and the blender.

Is your argument consistent with making the following statement when discussing the impending age of em? 'If your mind is uploaded, a future version of you will likely subjectively experience hell. Some other version of you may also subjectively experience heaven. Many people, copies of you split off at various points, will carry all the memories of your human life.'

If you feel like your brain is in a blender trying to conceive of this, you may want to put it into an actual blender before someone with temporal power and an uploading machine decides to define your eternity for you.

I may be an outlier, but I've worked at a startup company that did machine learning R&D, and which was recently acquired by a big tech company, and we did consider the issue seriously. The general feeling of the people at the startup was that, yes, somewhere down the line the superintelligence problem would eventually be a serious thing to worry about, but like, our models right now are nowhere near becoming able to recursively self-improve themselves independently of our direct supervision. Actual ML models basically need a ton of fine-tuning and en... (read more)

0 · DustinWehr · 6y
I've kept fairly up to date on progress in neural nets, less so in reinforcement learning, and I certainly agree about how limited things are now.

What if protecting against the threat of ASI requires huge worldwide political/social progress? That could take generations. Not an example of that (which I haven't tried to think of), but the scenario that concerns me the most, so far, is not that some researchers will inadvertently unleash a dangerous ASI while racing to be the first, but rather that a dangerous ASI will be unleashed during an arms race between (a) states or criminal organizations intentionally developing a dangerous ASI, and (b) researchers working on ASI-powered defences to protect us against (a).

I think the basic argument for OpenAI is that it is more dangerous for any one organization or world power to have an exclusive monopoly on A.I. technology, and so OpenAI is an attempt to safeguard against this possibility. Basically, it reduces the probability that someone like Alphabet/Google/Deepmind will establish an unstoppable first mover advantage and use it to dominate everyone else.

OpenAI is not really meant to solve the Friendly/Unfriendly AI problem. Rather it is meant to mitigate the dangers posed by for-profit corporations or nationalistic g... (read more)

0 · jyan · 6y
If a new non-profit AI research company were to be built from scratch, which regions or countries would be best for the safety of humanity?

Well, that's... unfortunate. I apparently don't hang around in the same circles, because I have not seen this kind of behaviour among the Effective Altruists I know.

I think you're misunderstanding the notion of responsibility that consequentialist reasoning theories such as Utilitarianism argue for. The nuance here is that responsibility does not entail that you must control everything. That is fundamentally unrealistic and goes against the practical nature of consequentialism. Rather, the notion of responsibility would be better expressed as:

  • An agent is personally responsible for everything that is reasonably within their power to control.

This coincides with the notion of there being a locus of control, which ... (read more)

6 · PhilGoetz · 6y
Benquo isn't saying that these attitudes necessarily follow, but that in practice he's seen it happen. There is a lot of unspoken LessWrong / SIAI history here.

Eliezer Yudkowsky and many others "at the top" of SIAI felt personally responsible for the fate of the human race. EY believed he needed to develop an AI to save humanity, but for many years he would only discuss his thoughts on AI with one other person, not trusting even the other people in SIAI, and requiring them to leave the area when the two of them talked about AI. (For all I know, he still does that.) And his plans basically involve creating an AI to become world dictator and stop anybody else from making an AI. All of that is reducing the agency of others "for their own good."

This secrecy was endemic at SIAI; when I've walked around NYC with their senior members, sometimes 2 or 3 people would gather together and whisper, and would ask anyone who got too close to please walk further away, because the ideas they were discussing were "too dangerous" to share with the rest of the group.

Interesting. I should look into more of Bostrom's work then.

Depending on whether or not you accept the possibility of time travel, I am inclined to suggest that Alpha could very well be dominant already, and that the melioristic progress of human civilization should be taken as a kind of temporal derivative or gradient suggesting the direction of Alpha's values. Assuming that such an entity is indifferent to us I think is too quick a judgment on the apparent degree of suffering in the universe. It may well be that this current set of circumstances is a necessary evil and is already optimized in ways we cannot at ... (read more)

I suppose I'm more optimistic about the net happiness to suffering ratio in the universe, and assume that all other things being equal, the universe should exist because it is a net positive. While it is true that humans suffer, I disagree with the assumption that all or most humans are miserable, given facts like the hedonic treadmill and the low suicide rate, and the steady increase of other indicators of well being, such as life expectancy. There is of course, the psychological negativity bias, but I see this as being offset by the bias of intelligent... (read more)

> That percentage changes rather drastically through human history, and gods are supposed to be, if not eternal, then at least a bit longer-lasting than religious fads.

Those numbers are an approximation to what I would consider the proper prior, which would be the percentages of people throughout all of spacetime's eternal block universe who have ever held those beliefs. Those percentages are fixed and arguably eternal, but alas, difficult to ascertain at this moment in time. We cannot know what people will believe in the future, but I would actually c... (read more)

As I previously pointed out:

Pascal's Fallacy assumes a uniform distribution on a large set of possible religions and beliefs. However, a uniform distribution only makes sense when we have no information about these probabilities. We do, in fact, have information in the form of the distribution of intelligent human agents who believe in these ideas. Thus, our prior for each belief system could easily be proportional to the percentage of people who believe in a given faith.

Given the prior distribution, it should be obvious that I am a Christian who wo... (read more)

4 · Lumifer · 6y
That percentage changes rather drastically through human history, and gods are supposed to be, if not eternal, then at least a bit longer-lasting than religious fads.

So... if -- how did you put it? -- "a benevolent superintelligence already exists and dominates the universe" then you have nothing to worry about with respect to rogue AIs doing unfortunate things with paperclips, right?

Okay, so the responses so far seem less than impressed with these ideas, and it has been suggested that maybe this shouldn't be so public in the first place.

Do people think I should take down this post?

4 · Lumifer · 7y
Nah. It's not like we're making vewwy sekrit plans over here.

It's not for underhanded secret deals. It's to allow you to know who you can trust with information such as "I am an effective altruist and may be a useful ally who you can talk to about stuff".

Ideally one might want to overtly talk about effective altruism, but what if circumstances prohibit it? Imagine Obama or Elon Musk one day gives this gesture while talking about, say, foreign aid to Africa. Then you know that he's with us, or at least knows about Effective Altruism. There could be a myriad of reasons why he doesn't want to talk about i... (read more)

4 · ChristianKl · 7y
Obama using a secret code to signal that he's an EA would be ammunition for Fox News. Him speaking positively about the AMF or even speaking positively about GiveWell wouldn't give Fox News good ammunition.
6 · Lumifer · 7y
/rolls eyes

This is middle-school Secret Club for Spies and Benefactors of Humanity crap. By the way, what you describe is known in politics as a "dog whistle [https://en.wikipedia.org/wiki/Dog-whistle_politics]" :-P

Another "passive" sign that might work could be the humble white chess knight piece. In this case, it symbolizes the concept of a white knight coming to help and save others, but also because it is chess, it implies a depth of strategic, rational thinking. So for instance, an Effective Altruist might leave a white chess knight piece on their desk, and anyone familiar with what it represents could strike up a conversation about it.

1 · polymathwannabe · 7y
Illuminati conspiranoids are so going to freak over this.

The in-group, out-group thing is a hazard I admit. Again, I'm not demanding this be accepted, but merely offering out the idea for feedback, and I appreciate the criticism.

I haven't had a chance to properly learn sign-language, so I don't know if there are appropriate representations, but I can look into this.

It's doubtful that if this were to gain that much traction (which it honestly doesn't look like it will) that the secret could be kept for particularly long anyway.

I'm not really sure what would make a good passive sign to indicate Effective Altruism. One assumes that things like the way we talk and show cooperative rational attitudes might be a reasonable giveaway for the more observant.

We could borrow the idea of colours, and wear something that is conspicuously, say, silver, because silver is representative of knights in shining armour or something like that, but I don't know if this wouldn't turn into a fad or trend rather than a serious signal.

Well, there's obviously lots of possible uses for gestures like these. I'm only choosing to emphasize one that I think is reasonable to consider.

Mmm... I admit this is a possible way to interpret it... I'm not sure how to make it more obviously pro-cooperation than to maybe tilt the hand downward as well?

7 · Lumifer · 7y
Interesting -- I don't know of any non-verbal more-or-less-culturally-neutral signal that says "I want to cooperate". There's "I mean no harm" (showing empty hands), there's "I trust you" or "I submit" (bowing, kneeling, in general making yourself vulnerable), but nothing comes to mind with respect to equal cooperation. I wonder what this says about humanity.

Well, I was hoping that people could be creative in coming up with uses, but I suppose I can offer a few more ideas.

For instance, maybe in the business world, you might not want to be so overt about being an Effective Altruist because you fear your generosity being taken advantage of, so you might use a subtle variant of these gestures to signal to other Effective Altruists who you are, without giving it away to more egoistic types.

Alternatively, it could be used to display your affiliation in such a way that signals to people in, say, an audience during a... (read more)

6 · gjm · 7y
Sorry, still not seeing it.

Why are you trying to give these cryptic signals to other EAs at work? Is the idea that EAs will start cutting one another specially favourable deals and giving preference to one another for good jobs? That seems likely to be hugely counterproductive, because as soon as such things get suspected it's going to be bad for the whole EA movement.

If you're overtly signalling affiliation while making a political speech or something, why not just do it by talking about effective altruism? If it's a speech in which it doesn't make any sense to do so, then what the hell are you doing signalling affiliation in the first place? Again, this is the kind of thing that gives a movement a bad name. (Ditto, even more so, if it's covert.)

OK, so I am, let's say, an investor considering putting some money into a hedge fund. I go to visit their offices. The fund manager or one of his colleagues greets me by putting his hand behind his back and giving a thumbs-up gesture. Are you suggesting this is a subtle gesture that won't make anyone suspicious?

Again, maybe I'm just missing something. But every time I actually try to imagine a concrete situation in which this sort of gesture might be useful, I can't do it.

I guess I don't understand then? Care to explain what your "subjective self" actually is?

I think what you're doing is something that in psychology is called "Catastrophizing". In essence you're taking a mere unproven conjecture or possibility, exaggerating the negative severity of the implications, and then reacting emotionally as if this worst case scenario were true or significantly more likely than it actually is.

The proper protocol then is to re-familiarize yourself with Bayes Theorem (especially the concepts of evidence and priors), compartmentalize things according to their uncertainty, and try to step back and look at your ac... (read more)

-2 · Fivehundred · 8y
No, I rated the death outcome as having a 10% chance of being true. But now I rate it much lower. This: Basically, the fact that we do it only a little bit accounts for our observations in ways that other cosmological theories can't. Er, you don't understand the problem. I was worried about my subjective self dying.