don't forget the political environment:
- locally, there's a meaningful "break up big tech" current which could make it politically difficult to simultaneously sell AI as a paradigm shift and monopolize it for yourself via the legal apparatus. cynically, firms might view regulation as a less blatant path to similar ends, with fewer political repercussions than leveraging patents.
- globally, the country which presently enjoys the lead in AI sees itself in an economic battle against a competitor unlikely to respect its intellectual property...
it’s odd to leap to things like housing markets and consumer debt without considering the demographics of startup employees. i believe your graphs show national averages, so: are these employees expected to hold more or less debt than average? more or less likely to be homeowners vs. renters? more or less likely to live in specific regions of the country?
the initial shock of covid 3.5 years ago was just massive. i get that it was in many ways transformative and not strictly destructive, but still, hypotheticals like “a hundred billion decrease in
> It seems mostly correct to accept the new calculations in the Improved COTI, which represent a -25% adjustment, and then include the 13% adjustment for taxes, resulting in about a -13% adjustment. This still represents an increase in the cost of thriving.
is COTI actually an inverted measure of the literal “cost of thriving”? i.e. the index goes up when the cost goes down? otherwise, this apparent inverted sign (a -13% change in COTI representing an “increase in the cost of thriving”) is throwing me for a loop.
> ...In broad terms, families with children hav
> To learn gravity, you need additional evidence or context; to learn that the world is 3D, you need to see movement. To understand that movement, you have to understand how light moves, etc. etc.
for the 3d part: either the object of observation needs to move, or the observer needs to move: these are equivalent statements due to symmetry. consider two 2D images taken simultaneously from different points of observation: this provides the same information relevant here as were there to be but 2 images of a moving object from a stationary observer at slightl...
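a minimal sketch of that geometry, assuming the standard pinhole stereo model (the function name and all numbers are mine, purely illustrative):

```python
# sketch of the equivalence claimed above: two simultaneous views from
# different points give the same depth information as two views of a
# moving object from one point. standard pinhole stereo relation;
# the numbers here are illustrative, not from the post.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d: depth from the pixel shift between two views."""
    return focal_px * baseline_m / disparity_px

# two viewpoints 0.1 m apart, 800 px focal length, 20 px of disparity:
print(depth_from_disparity(800, 0.1, 20))  # 4.0 m
```

whether the baseline comes from two cameras or from one camera (or the object) moving between frames doesn’t enter the relation at all, which is the symmetry being claimed.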
rationally, automating more tasks in my life should make for an easier life that’s subject to fewer demands. rationally, when this isn’t the case, i.e. when individuals each working to automate more things are instead subjected to more demands (learn new skills, or end up on the street), you shouldn’t expect doubling down on this strategy to be viable long-term.
rationally, if you’re predicting the proportion of people able to stay afloat to be always decreasing up to the singularity — a point at which labor becomes valueless — you shouldn’t exp...
> The tradeoff for connecting with similar people is not connecting with people different from us.
disagree. as you say, micro-communities are aligned very narrowly. which means that if you pair any two random individuals from the same micro-community, they'll be extremely similar along only one particular metric, but randomly different across every metric not relevant to that community. the easiest example of this is nationality: to the degree LW is a micro-community, it connects people of many different nationalities. perhaps the disappointment is that...
i’d love for anyone to present the argument against this. eq says it’s things like karaoke which make friendships great. the friends i know who are eager to do karaoke are the same ones who will start wild, speculative conversation when we’re idly sitting in the living room together. they’re the interesting people.
the people in my life who, come the first lull in smalltalk after dinner, get uncomfortable and declare “great meal, time to go” instead of opening themselves up for those late-night intimate conversations, are the same people who would turn down...
> OpenAI estimated that the energy consumption for training GPT-3 was about 3.14 x 10^17 Joules.
sanity checking this figure: 1 kWh is 1000 x 60 x 60 = 3.6 MJ. then GPT-3 consumed 8.7 x 10^10 kWh. at a very conservative $0.04/kWh, that’s $3.5B just in the power bill — disregarding all the non-power costs (i.e. the overheads of operating a datacenter).
i could believe this number’s within 3 orders of magnitude of truth, which is probably good enough for the point of this article, but i am a little surprised if you just took it 100% at face value.
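the back-of-envelope above, as a runnable check (the figures are the ones quoted, not independently verified):

```python
# redoing the sanity check from the comment above in code; figures are
# the quoted ones, not independently verified.
joules = 3.14e17              # OpenAI's quoted training-energy figure
kwh = joules / 3.6e6          # 1 kWh = 1000 W * 3600 s = 3.6 MJ
cost_usd = kwh * 0.04         # very conservative $0.04/kWh
print(f"{kwh:.2e} kWh, ~${cost_usd / 1e9:.1f}B power bill")
```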
> Should there be an opt-out from A.I. systems? Which ones? When is an opt-out clause a genuine choice, and at what point does it become merely an invitation to recede from society altogether, like saying you can choose not to use the internet or vehicular transport or banking services if you so choose.
the examples given are all networks, with many of the nodes human. if “receding from society” means being less connected with the other humans, then there’s no debate: to opt out of these networks is necessarily to “recede from society”.
but LLMs don’t have ...
> Instincts to punish people are how actual humans precommit.
i think you could equally frame this as “people precommit due to an expectation of reciprocity”. like, it’s not that i follow through on plans with friends because i fear punishment for breaking them; it’s more that i expect whatever i invest into the friendship will be reciprocated (approximately).
you could frame the fallout of a commitment failure as “punishment”, but if the risk of punishment exceeded the benefit of cooperation that would discourage me from pre-comm...
no love for it from me either, i’m sorry to say. the “society only exists when we overcome our base sexual desires” meme is tired. my university days were simultaneously my most promiscuous and my most productive (subjectively, measured by my extra-curricular contributions to technology). that’s a sample size of 1 (or dozens? depends how you measure it), but Huxley doesn’t even claim a single sample for the opposing view — much less an experiment, despite claiming this foundational assumption as “scientific”.
are complex systems like societies path-dependen...
UBI will always have some power imbalance. if not due to how that income is provided, then by how that income is exchanged for the basic goods. if we want to universally provide for the basic needs, while avoiding that kind of power imbalance, it seems sensible to focus exactly on that: automate more and more of the housing/food production chain, and distribute the tools for that to decrease the power of whichever hierarchies might otherwise bar access to them.
so Universal Basic Income is the practical implementation for providing basic needs for as long a...
consider a few scenarios around these two characters, a possibly-depressed Pierre and a probably-sociopathic Eliza:
it’s scenario 1 which is horrific. in scenario 2, a Pierre-like viewer is far less likely to end his life after leaving the theater, ditto with scenario 3.
i think some of us already think of these chatbots as “acting out a role” — t...
the second-order effects of turning off the WiFi surely comprise both positive and negative effects, and i have no idea which valence it nets out to.
these days homes contain devices whose interconnectedness is used to regulate power use more efficiently. for example, the classic utility-controlled water heater, which reduces power draw when electricity is more expensive for the utility company (i.e. when peakers would need to come online). water heaters mostly don’t use WiFi, but thermostats like Nest, programmable light bulbs, etc. do: when you disr...
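a minimal sketch of the load-shedding behavior described above (the threshold, price feed, and function name are hypothetical, not any real utility's protocol):

```python
# minimal sketch of utility-controlled load shedding: defer a flexible
# load while electricity is expensive. the threshold and interface are
# hypothetical, not any real utility's protocol.
PEAK_PRICE_USD_PER_KWH = 0.30  # assumed shed threshold

def water_heater_enabled(current_price: float) -> bool:
    """Run the heating element only while electricity is cheap."""
    return current_price < PEAK_PRICE_USD_PER_KWH

print(water_heater_enabled(0.12))  # off-peak: True
print(water_heater_enabled(0.45))  # peak: False
```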
interoperability. we take it for granted everywhere else in life: when you have to replace a fridge it’s easy because they all have the same electrical/water hookups. replace a door, same thing: standardized size, hinges, and knobs. going further, i’ve been upgrading the cabinets/drawers in my kitchen: they’re standard size so i can buy 3rd-party silverware inserts, or even inserts made specifically to organize anything that’s k-cup shaped. i replaced the casters on my office chair with oversized carpet-friendly wheels: standardized attachments. so many thi...
further down on that page:
> We are also now offering dedicated instances for users who want deeper control over the specific model version and system performance. By default, requests are run on compute infrastructure shared with other users, who pay per request. Our API runs on Azure, and with dedicated instances, developers will pay by time period for an allocation of compute infrastructure that’s reserved for serving their requests.
> ...Developers get full control over the instance’s load (higher load improves throughput but makes each request slower), th
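a hypothetical break-even sketch for the quoted pricing model (both rates are invented for illustration; the quote gives no actual prices):

```python
# hypothetical break-even sketch for the quoted pay-per-request vs.
# reserved-instance pricing; both rates are invented for illustration,
# not OpenAI's actual prices.
per_request_usd = 0.002        # assumed shared-infra price per request
dedicated_usd_per_hour = 40.0  # assumed reserved-instance hourly rate

# a dedicated instance pays for itself once hourly volume exceeds:
break_even = dedicated_usd_per_hour / per_request_usd
print(break_even)  # 20000.0 requests/hour
```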
commenting on the body, separate from the incident that prompted this. when i was in school:
no mention of relationships yet. but all these activities are exactly those avenues by which people learn about each other and by which they form bonds. the professors i bonded with were exactly those professors whose office hours i attended most. and vice versa for the student...
> More importantly, if we have some one value, that values are to be valued, so much as to enact for, not only to want them - then we have a value which has no opposite in utilitarianism.
sounds a little like Preference Utilitarianism.
> this observation means, if we align to mere values of humanity: AI can simply modify the humans, so to alter their values and call it a win; AI aligns you to AI. In general, for fulfillment of any human value, to make the human value it, seems absolutely the easiest, for any case.
here “autonomy”, “responsibility”, “self-d...
i’m naive to the details of GPT specifically, but it’s easy to accidentally make any reduction non-deterministic when working with floating point numbers — even before hardware variations.
for example, you want to compute the sum over a 1-billion-entry vector where each entry is the number 1. in 32-bit IEEE-754, you should get different results accumulating linearly (1+(1+(1+…))) vs tree-wise (…((1+1) + (1+1))…).
in practice most implementations do some combination of these. i’ve seen someone do this by batching groups of 100,000 numbers to sum linearly...
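a runnable demo of that divergence, shrunk to a small block so it finishes quickly (the 2**24 absorption threshold is the real float32 mechanism; the block size is arbitrary):

```python
import numpy as np

# demo of the order-dependence above, shrunk from 10^9 elements to a
# small block: once a float32 running total reaches 2**24, adding 1.0
# is absorbed entirely, so linear and tree-wise orderings diverge.
ones = np.ones(1000, dtype=np.float32)

acc = np.float32(2**24)   # a linear accumulator that has reached 2**24
for x in ones:
    acc += x              # no-op each time: 16777216.0 + 1.0 == 16777216.0

tree = np.float32(2**24) + ones.sum()  # sub-sum the block first (tree-wise)

print(acc, tree)  # 16777216.0 16778216.0
```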
I'm relatively OOTL on AI since GPT-3. My friend is terrified and insists we need to halt it urgently; I couldn't understand his point of view, and he pointed me to this book. I see a number of pre-readers saying the version they read is well-suited exactly for convincing people like me. Which raises the question: if you believe the threat is imminent, why delay the book four months? I'll read a digital copy today if you point me to it.