The average human lifespan is about 70 years, or approximately 2.2 billion seconds. The average human brain contains about 86 billion neurons and roughly 100 trillion synaptic connections. In comparison, something like GPT-3 has 175 billion parameters and was trained on roughly 500 billion tokens of data. Assuming, very crudely, weight/synapse and token/second-of-experience equivalence, we can see that the human model's ratio of parameters to data is much greater than GPT-3's, to the point that humans have significantly more parameters than timesteps (100 trillion to 2.2 billion), whi...
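To make the arithmetic explicit, here is a quick back-of-the-envelope comparison using the figures above (the synapse-as-parameter and second-as-token equivalences are, again, very crude):

```python
# Figures as quoted above; the equivalences are crude assumptions.
human_params = 100e12   # ~100 trillion synapses
human_data = 2.2e9      # ~2.2 billion seconds in ~70 years
gpt3_params = 175e9     # 175 billion parameters
gpt3_data = 500e9       # ~500 billion training tokens

print(human_params / human_data)  # ~45,000 "parameters" per second of experience
print(gpt3_params / gpt3_data)    # ~0.35 parameters per token
```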
I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:
http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/
I should point out that the logic of the degrowth movement follows from a relatively straightforward analysis of available resources vs. first-world consumption levels. Our world can only sustain 7 billion human beings because the vast majority of them live not at first-world levels of consumption, but third-world levels, which many would argue to be unfair and an unsustainable pyramid scheme. If you work out the numbers, assuming everyone had the quality of life of a typical American citizen, taking into account things like meat consumption to arabl...
I'm using the number calculated by Ray Kurzweil for his 1999 book, The Age of Spiritual Machines. To get that figure, you need 100 billion neurons firing every 5 ms, or at 200 Hz. That is based on the maximum firing rate given refractory periods. In actuality, average firing rates are usually lower than that, so in all likelihood the difference isn't actually six orders of magnitude. In particular, I should point out that the six orders of magnitude refers to the difference between this hypothetical maximum-firing brain and the ...
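For concreteness, here is a rough reconstruction of that kind of estimate; the ~1,000 connections per neuron is my assumption (roughly consistent with ~100 trillion synapses over ~100 billion neurons), not a figure quoted above:

```python
# Rough reconstruction of the maximum-firing estimate.
neurons = 100e9        # 100 billion neurons
max_rate_hz = 200      # firing every 5 ms at the refractory-period limit
connections = 1000     # assumed synapses per neuron (not a quoted figure)

spikes_per_sec = neurons * max_rate_hz         # 2e13 spikes/s
calcs_per_sec = spikes_per_sec * connections   # 2e16 "calculations"/s
print(spikes_per_sec, calcs_per_sec)
```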
Okay, so I contacted 80,000 Hours, as well as some EA friends, for advice. Still waiting for their replies.
I did hear from an EA who suggested that if I don't work on it, someone else who is less EA-aligned will take the position instead, so in fact it's slightly net positive for me to be in the industry, although I'm uncertain whether AI capability research is actually funding-constrained rather than personnel-constrained.
Also, would it be possible to mitigate the net negative by deliberately avoiding capability research and instead taking an ML engineering job at a lower-tier company that is unlikely to develop AGI before others, working only on applying existing ML tech to practical problems?
I previously worked as a machine learning scientist but left the industry a couple of years ago to explore other career opportunities. I'm wondering at this point whether or not to switch back into the field. In particular, in case I cannot find work related to AI safety, would working on something related to AI capability have a net positive or net negative impact overall?
Even further research shows the most recent Nvidia RTX 3090 is actually slightly more efficient than the 1660 Ti, at 36 TeraFlops, 350 watts, and 2.2 kg, which works out to 0.0001 PetaFlops/Watt and 0.016 PetaFlops/kg. Once again, they're within an order of magnitude of the supercomputers.
So, I did some more research, and the general view is that GPUs are more power-efficient in terms of Flops/watt than CPUs. The most power-efficient of those right now is the Nvidia GTX 1660 Ti, which comes to 11 TeraFlops at 120 watts, or 0.000092 PetaFlops/Watt, about 6x more efficient than Fugaku. It also weighs about 0.87 kg, which works out to 0.0126 PetaFlops/kg, about 7x more efficient than Fugaku. These numbers are still within an order of magnitude, and also don't take into account the overhead costs of things like coo...
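A quick way to recompute those efficiency figures from the quoted specs (Fugaku's own specs aren't restated here, only the GPU numbers given above):

```python
# Recomputing the quoted GPU efficiency figures; specs are as stated above.
def efficiency(tflops, watts, kg):
    pflops = tflops / 1000.0
    return pflops / watts, pflops / kg  # (PetaFlops/Watt, PetaFlops/kg)

print(efficiency(11, 120, 0.87))  # GTX 1660 Ti -> (~0.000092, ~0.0126)
print(efficiency(36, 350, 2.2))   # RTX 3090    -> (~0.0001, ~0.016)
```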
Another thought is that maybe Less Wrong itself, if it were to expand in size and become large enough to roughly represent humanity, could be used as such a dataset.
So, I had a thought. The glory system idea that I posted about earlier, if it leads to a successful, vibrant democratic community forum, could actually serve as a kind of dataset for value learning. If each post has a number attached to it that indicates the aggregated approval of human beings, this can serve as a rough proxy for a kind of utility or Coherent Aggregated Volition.
Individual examples will probably be quite noisy, but averaged across a large number of posts, it could function as a real-world dataset, with the post conte...
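As a minimal sketch of what I mean, assuming something like post text paired with its aggregated score (the field names and normalization here are purely illustrative, not part of the proposal):

```python
# Illustrative only: turning scored posts into (text, utility-proxy) pairs.
posts = [
    {"content": "Argument for policy A ...", "score": 37},
    {"content": "Personal attack on another user ...", "score": -12},
    {"content": "Summary of the evidence on B ...", "score": 8},
]

max_abs = max(abs(p["score"]) for p in posts)
dataset = [(p["content"], p["score"] / max_abs) for p in posts]
# Each example is (post text, rough utility proxy in [-1, 1]); individually
# noisy, but potentially informative when averaged over many posts.
```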
A further thought is that those with more glory can be seen almost as elected experts. Their glory is assigned to them by votes, after all. This is an important distinction from an oligarchy. I would actually be inclined to see the glory system as located on a continuum between direct democracy and representative democracy.
So, keep in mind that having the first vote be free and worth double the paid votes does tilt things more towards democracy. That being said, I am inclined to see glory as a kind of proxy for past agreement and merit, and a rough way to approximate liquid democracy, where you can proxy your vote to others or vote yourself.
In this alternative "market of ideas", ideas win out because people whom others trust to have good opinions are able to leverage that trust. Decisions over the merit of the given arguments are aggregated by vote. As lon...
Perhaps a nitpicky detail, but having someone rob them would not be equivalent, because the cost of the action is offset by the ill-gotten gains. The proposed currency is more directly equivalent to paying someone to break into the target's bank account and destroy their assets by a proportional amount so that no one can use them anymore.
As for the more general concerns:
Standardized laws and rules tend in practice to disproportionately benefit those with the resources to bend and manipulate those rules with lawyers. Furthermore, this proposal...
As for the problem of cheaply punishing prolific posters, I don't know a good solution that doesn't lead to other problems: forcing all downvotes to cost glory makes it much harder to deal with spammers who somehow get through the application process filter. I had considered an alternative system in which all votes cost glory, but then there's no way to generate glory except perhaps by having admins and mods gift it, which could work, but runs counter to the direct-democracy ideal I was sort of going for.
And to clarify further, you'd be able both to gift glory and to spend glory to destroy other people's glory, at the mentioned exchange rate.
The way glory is introduced into the system is that any given post allows everyone one free vote on it that costs no glory.
So, I guess I should clarify: the idea is that you can both gift glory, which is how you gain the ability to post, and also gain or lose glory based on people's upvotes and downvotes on your posts.
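To pull the mechanics together, here's a minimal sketch of how I imagine the bookkeeping working (the paid-vote cost and the destroy exchange rate are placeholders, not settled numbers):

```python
# A minimal sketch of the glory mechanics described above, assuming:
# - every member gets one free vote per post, worth double a paid vote;
# - extra votes are paid for with the voter's own glory, one point per vote;
# - a post's score feeds back into its author's glory;
# - glory can be gifted, or spent to destroy someone else's glory at an
#   exchange rate (the exact rate is left as a parameter here).

class GlorySystem:
    def __init__(self, destroy_exchange_rate=2):
        self.glory = {}                 # member -> glory balance
        self.free_vote_used = set()     # (voter, post_id) pairs
        self.post_author = {}           # post_id -> author
        self.post_score = {}            # post_id -> aggregated score
        self.destroy_exchange_rate = destroy_exchange_rate

    def add_member(self, name, starting_glory=0):
        self.glory[name] = starting_glory

    def add_post(self, post_id, author):
        self.post_author[post_id] = author
        self.post_score[post_id] = 0

    def vote(self, voter, post_id, up=True):
        direction = 1 if up else -1
        if (voter, post_id) not in self.free_vote_used:
            self.free_vote_used.add((voter, post_id))
            weight = 2                  # the free vote counts double
        else:
            if self.glory[voter] < 1:
                raise ValueError("not enough glory for a paid vote")
            self.glory[voter] -= 1      # paid votes cost the voter glory
            weight = 1
        self.post_score[post_id] += direction * weight
        self.glory[self.post_author[post_id]] += direction * weight

    def gift(self, donor, recipient, amount):
        if self.glory[donor] < amount:
            raise ValueError("not enough glory to gift")
        self.glory[donor] -= amount
        self.glory[recipient] += amount

    def destroy(self, attacker, target, amount_destroyed):
        cost = amount_destroyed * self.destroy_exchange_rate
        if self.glory[attacker] < cost:
            raise ValueError("not enough glory to destroy that much")
        self.glory[attacker] -= cost
        self.glory[target] -= amount_destroyed
```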
I have been able to land interviews at a rate of about 8/65 or 12% of the positions I apply to. My main assumption is that the timing of COVID-19 is bad, and I'm also only looking at positions in my geographical area of Toronto. It's also possible that I was overconfident early on and didn't prep enough for the interviews I got, which often involved general coding challenges that depended on data structures and algorithms that I hadn't studied since undergrad, as well as ML fundamentals for things like PCA that I hadn't touched in a long time a...
I would urge you to go learn more about QM. I'm not going to assume what you do/don't know, but from what I've learned about QM, there is no argument for or against any god.
Strictly speaking it's not something that is explicitly stated, but I like to think that the implication flows from a logical consideration of what MWI actually entails. MWI is, of course, just one of many possible alternatives in QM as well, and the Copenhagen Interpretation doesn't suggest anything.
...This also has to do with the distance between the moon and the earth and
Interesting, what is that?
The idea of theistic evolution is simply that evolution is the method by which God created life. It basically says, yes, the scientific evidence for natural selection and genetic mutation is there and overwhelming, and accepts these as valid, while at the same time positing that God can still exist as the cause that set the universe and evolution in motion through putting in place the Laws of Nature. It requires not taking the six days thing in the Bible literally, but rather metaphorically as being six eons of time, or some ...
I might be able to collaborate. I have a masters in computer science and did a thesis on neural networks and object recognition, before spending some time at a startup as a data scientist doing mostly natural language related machine learning stuff, and then getting a job as a research scientist at a larger company to do similar applied research work.
I also have two published conference papers under my belt, though they were in pretty obscure conferences admittedly.
As a plus, I've also read most of the sequences and am familiar with the Less Wrong culture...
Well, as far as I can tell, the latest progress in the field has come mostly through throwing deep learning techniques like bidirectional LSTMs at the problem and letting the algorithms figure everything out. This obviously is not particularly conducive to advancing the theory of NLP much.
I consider myself both a Christian and a rationalist, and I have read much of the sequences and mostly agree with them, although I somewhat disagree with the metaethics sequence and have been working on a lengthy rebuttal to it for some time. I never got around to completing it though, as I felt I needed to be especially rigorous and simply did not have the time and energy to make it sufficiently so, but the gist is that Eliezer's notion of fairness is actually much closer to what real morality is, which is a form of normative truth. In terms of moral phil...
I don't really know enough about business and charity structures and organizations to answer that quite yet. I'm also not really sure where else would be a productive place to discuss these ideas. And I doubt I or anyone else reading this has the real resources to attempt to build a safe AI research lab from scratch that could actually compete with the major organizations like Google, Facebook, or OpenAI, which all have millions to billions of dollars at their disposal, so this is kind of an idle discussion. I'm actually working for a larger tech company now than the startup from before, so for the time being I'll be kinda busy with that.
That is a hard question to answer, because I'm not a foreign policy expert. I'm a bit biased towards Canada because I live there and we already have a strong A.I. research community in Montreal and around Toronto, but I'll admit Canada as a middle power in North America is fairly beholden to American interests as well. Alternatively, some reasonably peaceful, stable, and prosperous democratic country like say, Sweden, Japan, or Australia might make a lot of sense.
It may even make some sense to have the headquarters be more a figurehead, and have the comp...
I've had arguments before with negative-leaning Utilitarians and the best argument I've come up with goes like this...
Proper Utility Maximization needs to take into account not only the immediate, currently existing happiness and suffering of the present slice of time, but also the net utility of all sentient beings throughout all of spacetime. Assuming that the Eternal Block Universe Theory of Physics is true, then past and future sentient beings do in fact exist, and therefore matter equally.
Now the important thing to stress here is then that what matte...
Well, if we're implying that time travellers could go back and invisibly copy you at any point in time and then upload you to whatever simulation they feel inclined towards... I don't see how blendering yourself now will prevent them from just going to the moment before that and copying that version of you.
So, the reality is that blendering yourself achieves only one thing, which is to prevent the possible future yous from existing. Personally, I think that does a disservice to future you. That can similarly be expanded to others. We cannot conceivably prev...
I recently made an attempt to restart my Music-RNN project:
https://www.youtube.com/playlist?list=PL-Ewp2FNJeNJp1K1PF_7NCjt2ZdmsoOiB
Basically, I went and made the dataset five times bigger and got... a mediocre improvement.
The next step is to figure out Connectionist Temporal Classification and attempt to implement Text-To-Speech with it. And somehow incorporate pitch recognition as well so I can create the next Vocaloid. :V
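For reference, a minimal sketch of what training against a CTC loss looks like in PyTorch (CTC is usually used for aligning unsegmented sequences, e.g. speech-to-text; the shapes and tensors here are placeholders, not my actual project code):

```python
import torch
import torch.nn as nn

# Placeholder dimensions: time steps, batch size, classes (class 0 is the blank).
T, N, C = 50, 4, 20
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)  # stand-in for network outputs
targets = torch.randint(1, C, (N, 10), dtype=torch.long)             # label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients flow back to the (placeholder) network outputs
```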
Also, because why not brag while I'm here, I have an attempt at an Earthquake Predictor in the works... right now it only predicts the hi...
This actually reminds me of an argument I had with some Negative-Leaning Utilitarians on the old Felicifia forums. Basically, a common concern for them was how r-selected species tend to appear to suffer way more than be happy, generally speaking, and that this can imply that we should try to reduce the suffering by eliminating those species, or at least avoiding the expansion of life generally to other planets.
I likened this line of reasoning to the idea that we should Nuke The Rainforest.
Personally I think a similar counterargument to that argument appl...
I may be an outlier, but I've worked at a startup company that did machine learning R&D, and which was recently acquired by a big tech company, and we did consider the issue seriously. The general feeling of the people at the startup was that, yes, somewhere down the line the superintelligence problem would eventually be a serious thing to worry about, but, like, our models right now are nowhere near being able to recursively self-improve independently of our direct supervision. Actual ML models basically need a ton of fine-tuning and en...
I think the basic argument for OpenAI is that it is more dangerous for any one organization or world power to have an exclusive monopoly on A.I. technology, and so OpenAI is an attempt to safeguard against this possibility. Basically, it reduces the probability that someone like Alphabet/Google/Deepmind will establish an unstoppable first mover advantage and use it to dominate everyone else.
OpenAI is not really meant to solve the Friendly/Unfriendly AI problem. Rather it is meant to mitigate the dangers posed by for-profit corporations or nationalistic g...
Well, that's... unfortunate. I apparently don't hang around in the same circles, because I have not seen this kind of behaviour among the Effective Altruists I know.
I think you're misunderstanding the notion of responsibility that consequentialist reasoning theories such as Utilitarianism argue for. The nuance here is that responsibility does not entail that you must control everything. That is fundamentally unrealistic and goes against the practical nature of consequentialism. Rather, the notion of responsibility would be better expressed as:
This coincides with the notion of there being a locus of control, which ...
Depending on whether or not you accept the possibility of time travel, I am inclined to suggest that Alpha could very well be dominant already, and that the melioristic progress of human civilization should be taken as a kind of temporal derivative or gradient suggesting the direction of Alpha's values. Assuming that such an entity is indifferent to us I think is too quick a judgment on the apparent degree of suffering in the universe. It may well be that this current set of circumstances is a necessary evil and is already optimized in ways we cannot at ...
I suppose I'm more optimistic about the net happiness to suffering ratio in the universe, and assume that all other things being equal, the universe should exist because it is a net positive. While it is true that humans suffer, I disagree with the assumption that all or most humans are miserable, given facts like the hedonic treadmill and the low suicide rate, and the steady increase of other indicators of well being, such as life expectancy. There is of course, the psychological negativity bias, but I see this as being offset by the bias of intelligent...
That percentage changes rather drastically through human history, and gods are supposed to be, if not eternal, then at least a bit longer-lasting than religious fads.
Those numbers are an approximation to what I would consider the proper prior, which would be the percentages of people throughout all of spacetime's eternal block universe who have ever held those beliefs. Those percentages are fixed and arguably eternal, but alas, difficult to ascertain at this moment in time. We cannot know what people will believe in the future, but I would actually c...
As I previously pointed out:
Pascal’s Fallacy assumes a uniform distribution over a large set of possible religions and beliefs. However, a uniform distribution only makes sense when we have no information about these probabilities. We do, in fact, have information, in the form of the distribution of intelligent human agents who believe in these ideas. Thus, our prior for each belief system could easily be proportional to the percentage of people who believe in a given faith.
Given the prior distribution, it should be obvious that I am a Christian who wo...
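As a toy illustration of the kind of prior I have in mind (the names and percentages below are made up purely for the example):

```python
# Made-up believer shares, normalized into a prior over belief systems.
believers = {"religion_A": 0.31, "religion_B": 0.24, "religion_C": 0.15,
             "nonreligious": 0.16, "other": 0.14}
total = sum(believers.values())
priors = {belief: share / total for belief, share in believers.items()}
print(priors)  # each prior is just that belief's share of believers
```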
Okay, so the responses so far seem less than impressed with these ideas, and it has been suggested that maybe this shouldn't be so public in the first place.
Do people think I should take down this post?
It's not for underhanded secret deals. It's to allow you to know who you can trust with information such as "I am an effective altruist and may be a useful ally who you can talk to about stuff".
Ideally one might want to talk about effective altruism overtly, but what if circumstances prohibit it? Imagine Obama or Elon Musk one day gives this gesture while talking about, say, foreign aid to Africa. Then you know that he's with us, or at least knows about Effective Altruism. There could be a myriad of reasons why he doesn't want to talk about i...
Another "passive" sign that might work could be the humble white chess knight piece. In this case, it symbolizes the concept of a white knight coming to help and save others, but also because it is chess, it implies a depth of strategic, rational thinking. So for instance, an Effective Altruist might leave a white chess knight piece on their desk, and anyone familiar with what it represents could strike up a conversation about it.
The in-group, out-group thing is a hazard I admit. Again, I'm not demanding this be accepted, but merely offering out the idea for feedback, and I appreciate the criticism.
I haven't had a chance to properly learn sign language, so I don't know if there are appropriate representations, but I can look into this.
It's doubtful that if this were to gain that much traction (which it honestly doesn't look like it will) that the secret could be kept for particularly long anyway.
I'm not really sure what would make a good passive sign to indicate Effective Altruism. One assumes that things like the way we talk and show cooperative rational attitudes might be a reasonable giveaway for the more observant.
We could borrow the idea of colours, and wear something that is conspicuously, say, silver, because silver is representative of knights in shining armour or something like that, but I don't know if this wouldn't turn into a fad or trend rather than a serious signal.
Well, there's obviously lots of possible uses for gestures like these. I'm only choosing to emphasize one that I think is reasonable to consider.
Mmm... I admit this is a possible way to interpret it... I'm not sure how to make it more obviously pro-cooperation than to maybe tilt the hand downward as well?
Well, I was hoping that people could be creative in coming up with uses, but I suppose I can offer a few more ideas.
For instance, maybe in the business world, you might not want to be so overt about being an Effective Altruist because you fear your generosity being taken advantage of, so you might use a subtle variant of these gestures to signal to other Effective Altruists who you are, without giving it away to more egoistic types.
Alternatively, it could be used to display your affiliation in such a way that signals to people in, say, an audience during a...
I think what you're doing is something that in psychology is called "Catastrophizing". In essence you're taking a mere unproven conjecture or possibility, exaggerating the negative severity of the implications, and then reacting emotionally as if this worst case scenario were true or significantly more likely than it actually is.
The proper protocol then is to re-familiarize yourself with Bayes Theorem (especially the concepts of evidence and priors), compartmentalize things according to their uncertainty, and try to step back and look at your ac...
I still remember when I was a masters student presenting a paper at the Canadian Conference on AI 2014 in Montreal and Bengio was also at the conference presenting a tutorial, and during the Q&A afterwards, I asked him a question about AI existential risk. I think I worded it back then as concerned about the possibility of Unfriendly AI or a dangerous optimization algorithm or something like that, as it was after I'd read the sequences but before "existential risk" was popularized as a term. Anyway, he responded by asking jokingly if I was a journalist...