Apropos the Wikipedia article, in what way is grey goo a "transhumanist theory"?

Grey goo scenarios are relatively straightforward extrapolations of mundane technological progress and complex system dynamics with analogues in real biological systems. Subscribing to transhumanism is not a prerequisite to thinking that grey goo is a plausible region of the technological development phase space.

Your mention of Zipcar in the context of Netflix is an astute point. Zipcar has a very nice and well-developed infrastructure that would be nearly ideal for the transition. The question is whether or not Zipcar is thinking that far ahead, and I do not know the answer.

Many people do not know that even though Netflix has only been streaming video for a few years, they were very actively building their business around that transition over a decade ago, pretty much from their inception. They built out all of the elements required to take advantage of that transition long before it was technologically viable. Even though their DVD-by-mail business was highly successful, they treated it in many ways as merely a strategic waypoint. I think Zipcar might be well-advised to take a similar view of their business model, being prepared to strategically cannibalize it when the market is ready for driverless cars.

Everyone is over-thinking this. I used to live in Nevada, and the state's political process is driven by its unusual history and heuristics.

The politicians do not care about technology, safety, or even being first per se. Nevada has very successfully built a political economy based on doing legislative and regulatory arbitrage against neighboring states, particularly California. If they think there is a plausible way to drive revenue by allowing things that other states do not allow, it is a surprisingly easy sell. The famous liberalism of the state, where an atypically broad range of activities is legal and/or unregulated for a US state, is really just a consequence of this heuristic applied over time. If California disallows something that can generate revenue for Nevada, even if just for tourism, Nevada's instinct is to allow it in response.

It is cheap for them; passing legislation to allow people to do something is almost free. As history shows, the state is pretty comfortable being the first to do a lot of things, and it is not as prone to precautionary "what ifs" when there is an argument that the basic risks are manageable. It has worked out well for Nevada.

There are many, many examples of this. Everyone is familiar with "instant" weddings and divorces, which used to be much more difficult to do in most states, as well as gambling, prostitution, and other vices that were outlawed across the border. Nevada's economy is, in large part, based on making things legal and inexpensive.

There are also numerous boring examples, such as approving the construction of power plants along the California border when California had power shortages but refused to approve power plants in its own state; making Nevada a tax-free and highly effective place to run Internet fulfillment centers (e.g. Amazon, B&N, etc. are all there); designating areas of their cities as international ports to bypass California; allowing Californians to do their DMV paperwork in Nevada for the registration fees (I had a Nevada driver's license with a California address for years); and approving almost any spectacle with minimal hassle, no matter how bizarre, if it brings in tourists from out of state.

All Google had to do was convince the politicians that they could bring money into Nevada that would otherwise end up in California. It is a calculated risk, but Nevada politics has always been very comfortable doing things that are politically too risky in other states. Google probably made an argument from both jobs (the development Google does needs to take place somewhere) and tourism potential. Las Vegas is very fond of people movers that make it easier to fully exploit the city (it has a privately funded monorail system, after all), so it could also be sold on that basis.

In short, the only foresight or rationality at work here is driving revenue by legalizing something that other states are unlikely to allow. This is an old modus operandi for Nevada legislative activity and someone at Google probably knew this.

I have typically sought advice (and occasionally received unsolicited advice) from fashion-aware women, most of whom are happy to demonstrate their domain expertise. This has proven to be an efficient strategy that produces good results for relatively low cost. Most of the men I know that dress well rely on a similar strategy; the dearth of men who are savvy at this suggests a somewhat complex signaling game at work.

Take advantage of specialization. It is no different from when individuals solicit advice from me on a matter about which I am perceived as knowledgeable. People enjoy demonstrating their expertise.

There is no reason we cannot massively parallelize algorithms on silicon; it just requires more advanced computer science than most people use. Brains have a direct-connect topology; silicon uses a switch fabric topology. An algorithm that parallelizes on the former may look nothing like the one that parallelizes on the latter. Most computer science people never learn how to do parallelism on a switch fabric, and it is rarely taught.

Tangentially, this is why whole brain emulation on silicon is a poor way of doing things. While you can map the wetware, the algorithm implemented in the wetware probably won't parallelize on silicon due to the fundamental topological differences.

While computer science has focused almost solely on algorithms that require a directly connected network topology to scale, there are a few organizations that know how to generally implement parallelism on switch fabrics. Most people conflate their ignorance with there being some fundamental limitation; it requires a computational model that takes the topology into account.
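To make the topology point concrete, here is a minimal sketch (the example and names are my own, not a reference to any particular system): the same reduction expressed two ways, a neighbor-chain version that maps naturally onto a direct-connect topology, and a recursive-doubling version that maps naturally onto a switch fabric, where any node can reach any other but the cost is counted in rounds of pairwise exchanges.

```python
# Toy illustration only: simulating two communication patterns in plain Python.

def chain_reduce(values):
    """Pass a partial sum hop-by-hop down a chain of physical neighbors.
    Latency grows linearly: N-1 sequential hops."""
    total, hops = values[0], 0
    for v in values[1:]:
        total += v
        hops += 1
    return total, hops

def fabric_allreduce(values):
    """Recursive doubling over a switch fabric: each round, node i exchanges
    with node i XOR distance, so latency grows as log2(N) rounds."""
    vals = list(values)
    n = len(vals)                      # assumes n is a power of two
    dist, rounds = 1, 0
    while dist < n:
        vals = [vals[i] + vals[i ^ dist] for i in range(n)]
        dist, rounds = dist * 2, rounds + 1
    return vals[0], rounds

data = list(range(16))
print(chain_reduce(data))      # (120, 15): steps scale with node count
print(fabric_allreduce(data))  # (120, 4):  rounds scale with log of node count
```

The answer is the same either way; what changes is which costs dominate, which is exactly why an algorithm tuned for one topology can be useless on the other.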

However, that does not address the issue of "foom". There are other, topology-invariant reasons to believe it is not realistic on any kind of conventional computing substrate even if everyone were using massively parallel switch-fabric algorithms.

A question that needs to be asked is: where are you willing to go to find a job? San Jose? The best choices are somewhat context dependent.

Seaside's economy is based on a military post and agriculture, neither of which are conducive to an intellectually interesting job scene. There is a shortage of good computer people an hour north, so if you are looking up there and having trouble then there is probably a presentation gap. At the same time, I would not be surprised at all if you found the options in your area to be unsatisfactory.

The ASVAB is not an exemplar of careful correctness, and it is not targeted at people for whom that would be beneficial. When I took it many years ago there were a few questions with glaring ambiguities and questionable assumptions; I simply picked the answer that I thought they would want me to pick if I were ignorant of the subject matter.

I maxed the test.

The test is not aimed at intelligent, educated people. It is designed to filter out people of low intelligence. I've met many people who struggled to achieve 50%, something I used to find shocking. If there are a few technical ambiguities, that is of little consequence for its intended purpose. While there are some basic occupational recommendations based on the ASVAB, it is not designed to identify the significantly above average -- quite the opposite.

Define "top 1%". Many programmers may be "top 1%" at some programming domain in some sense but they will not be "top 1%" for every programming domain. It is conceivable that there are enough specializations in software such that half of all programmers are "top 1%" at something, even if that something is neither very interesting nor very important in any kind of grand sense. It is not just by domain either, many employers value a particular characteristic within that niche e.g. speed versus thoroughness versus optimization. Most employers are filling a small niche.

The rare kind of programmer is one who is top 1% across a broad swath of domains. These programmers are rare, highly valued, and very difficult to find; for these it is probably more like 0.1% and they are more likely to select you than you them. The closer you get to a truly general "top 1%" the rarer the specimens become.

So the question becomes, are employers hiring the top 1% of programmers as an average of their skill and performance across hundreds of metrics or are they hiring the top 1% for the narrow set of skills and characteristics they value? In my experience, it is usually the latter.

Anecdotally, I hire on a slightly different criterion than either of the above. I hire people who can very quickly become top 1% in whatever particular domain is required; I've met candidates with little domain expertise and an extraordinary aptitude for acquiring it. My reasoning is simple: given enough time and exposure, they will become that rare generalist top 1%.

What would a survey of a cross-section of "computer experts" in 1990 predicting the Internet of 2005 have looked like? The level of awareness required to make that prediction accurately is not generally found; the people who did understand it well enough to make an educated guess would be modeled as outliers. The above survey is asking people to make a similar type of prediction.

An important aspect of AI predictions like the above is that they ask people who do not understand how AI works. The respondents are certainly experts on the history of past attempts, but that does not imply the domain knowledge required to predict human-level AI. It is a bit like asking the Montgolfier brothers to predict when man would land on the moon -- experts on what has been done, but not on what is required.

There are many reasoned extrapolations of technology arrival dates based on discernible trends -- think Moore's Law -- but nothing comparable exists in AI. The vast majority of AI people have no basis on which to assert that the problem, which they generally cannot even define, will be solved next week or next century. The few who might know something will be buried in the noise floor. Consequently, I do not find much value in these group predictions.

Zeitgeist is not predictive except perhaps in a meta way.

A problem is that karma attempts to capture orthogonal values in a single number. Even though you can reduce those values to a single number, they still need to be captured as separate values first (the Slashdot karma system is a half-assed example).

Karma seems to roughly fall into one of three buckets. The first is entertainment value e.g. a particularly witty comment that nonetheless does not add material value to the discussion. The second is informational value e.g. posting a link to particularly relevant literature of which many people are unaware. The third is argumentative value e.g. a well-reasoned and interesting perspective. All of these are captured as "karma" to some extent or another.

One objection is that this makes it difficult to filter content based on karma, which raises questions about its value. If, for example, I am primarily interested in reading hilarious witticisms and interesting layman opinions, there is no way to filter out comments that contain dry references to academic literature. Alternatively, if I lack an appropriate sense of humor, I might find the karma attributed to immaterial witticisms inexplicable.
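A rough sketch of what "captured as separate values, reduced only at read time" might look like; the three axes and the weights are just my hypothetical example, not a proposal for this site:

```python
from dataclasses import dataclass

@dataclass
class Karma:
    entertainment: int = 0   # witty but immaterial
    informational: int = 0   # relevant links and literature
    argumentative: int = 0   # well-reasoned perspective

    def score(self, weights=(1.0, 1.0, 1.0)):
        """Collapse to a single number only when sorting or displaying."""
        we, wi, wa = weights
        return (we * self.entertainment
                + wi * self.informational
                + wa * self.argumentative)

comments = [
    ("dry literature reference", Karma(informational=12)),
    ("hilarious one-liner",      Karma(entertainment=20)),
    ("careful counter-argument", Karma(argumentative=9, informational=3)),
]

# A reader who only wants wit and opinion zeroes out the informational axis:
wit_and_opinion = (1.0, 0.0, 1.0)
for text, k in sorted(comments, key=lambda c: -c[1].score(wit_and_opinion)):
    print(text, k.score(wit_and_opinion))
```

The reduction to one number still happens, but each reader chooses the reduction; of course this only pushes the gaming problem into the weights.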

Even if a clever system were devised and ease of use were ignored, there are still issues of gaming and perverse incentives (e.g. the Gibbard-Satterthwaite theorem et al.). To misappropriate an old saying, "karma is a bitch".
