LESSWRONG

FlorianH

Comments (sorted by newest)
The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything
FlorianH · 2d

Somewhat related: the topic reminds me of a study I once read about in which Buddhist monks, somewhat surprisingly, reportedly had a high fear of death (I didn't dig deeper, but the study pops up immediately when googling).

The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything
FlorianH · 2d

In some sense, this should be surprising: Surely people have always wanted to avoid dying? But it turns out the evidence that this preference has increased over time is quite robust.

Maybe for almost everything there's "some" sense in which it should be surprising. But an increase in 'not wanting to die', and in particular in the willingness to pay to avoid dying in a modern society, should, I think, rather be the baseline expectation. If anything, its absence would require explanation: (i) basic needs are met, so let's spend the rest on reducing the risk of dying; (ii) life has gotten comfy, so let's remain alive. These two factors, which you also mention in the link, seem like pretty natural explanations/expectations, and I could easily imagine a large quantitative effect. Recently the internet may also be contributing: now that LW or the like is the coolest thing to spend my time on, and it's free, why trade off my life for expensive holidays or such? Maybe TV already had a similar effect generations ago, though I personally cannot as readily empathize with that one.

(Fwiw, this is the same reason why I think we're wrong to complain that an increasing percentage of GDP is being spent on (old-age) health care; I don't know whether that complaint is prevalent in other countries, but in mine it is a constant topic. Keeping our body alive is, unfortunately, the one thing in the universe we don't quite master yet, so until we do, spending more and more on it is just a really salient proof that we've gotten truly richer, at which point this spending starts to make sense. Of course nothing in this says we're doing it right or have the right balance in all this.)

Applying right-wing frames to AGI (geo)politics
FlorianH · 4d

All of these ideas are very high-level, but they give an outline of why I think right-wing politics is best-equipped to deal with the rise of AI.

While I think you raise some interesting points, I don't buy the final conclusion at all; or else your "right-wing politics" is a more theoretical or niche politics compared to the right-wing politics I'm used to seeing pretty much globally:

To cope with advanced AI we need powerful redistribution, within and across nations (and traditional protectionism doesn't seem to help you here). But right-wing politics in practice does exactly the contrary, as far as I can see, a bit everywhere: populist right-wing movements (who, admittedly, are not simply stupid per se; they rightly call out a left that has become narrowly extreme in its policies, alienating the more middle-ground people with wokeism and the like) focus heavily on dismantling social cohesion and the welfare state, are fully in bed with big business and help it profit at the expense of the common man domestically and abroad, lower taxes, and so on. All in all, the very contrary of building a robust society able to cope intelligently with AI.

Applying right-wing frames to AGI (geo)politics
FlorianH · 5d

I empathize a lot, in particular, with exactly the downside you point out from free trade. But, on

But we’re heading towards a future in which a) most people become far wealthier in absolute terms due to AI-driven innovation, and b) AIs will end up wielding a lot of power in not-very-benevolent ways

It seems, imho, absolutely uncertain that AI makes "most people far wealthier". To the very contrary, I fear (and more so given recent years' political developments) that AI may make some very rich but a majority poor: we'll be too dumb (or, as elites, too egoistic and/or self-servingly biased), and unable or unwilling to coordinate internationally, to arrange reasonable redistribution to those who lose their jobs to automation.

Intelligence Futures
FlorianH · 7d

My gut feeling: compute might not be the first thing for which futures markets naturally work—even if the theoretical case is compelling.

Two reasons come to mind, though I know I'm not strictly proving anything, and I wonder whether I might be simplifying too much:

A) Large share of demanders don't know their demand beforehand

Futures markets make the most sense when both sides have clearly foreseeable needs:

  • Supply = OpenAI et al.: they can plan and invest now—makes sense.
  • Demand = much fuzzier. Individual firms don't know their quantity demanded in advance; their demand curves, as (quantity, price) relations, can fluctuate widely as a function of future developments they don't control: big compute demand tends to rise and fall with the whims of the broader data economy (and with data-tech advancements specifically, even for a given final demand for goods and services).

By contrast, think of the electricity market:
Power plants know their output years in advance, and industrial buyers and data centers often know large parts of their demand 5–10+ years out. ==> We have deep futures markets.
Where demand isn't clearly known, buyers rationally, if complacently, just act as price takers.

B) Buyers face no strong local constraints or monopolies

Firms (or individuals) may typically seek long-term contracts when they're locked into a particular supplier—e.g., food or housing markets with low local competition. But for compute, there's global supply and lots of competition. If prices rise in one spot, you can often source capacity elsewhere. That reduces the pressure to hedge and makes price-taking a pretty reasonable default.

 

==> Might compute simply remain exactly not a natural candidate for futures markets? I don't think 'my' demand for compute (as an individual or as a firm) will become really predictable long-term. Maybe that's because, if I'm big in the compute business, I mainly do things that, especially these days, get rewritten every few months or years in terms of how they're done computationally.

Maybe there's also something of 'I don't really know my compute demand before I have computed my answer'; at least in my computational modelling jobs that's really quite true: my eventual compute demand can vary x-fold (x easily 10+) compared to my original predictions, even for an already precisely defined project only a few weeks or months ahead. (That said, I recently bought some more compute hardware myself because cloud felt so expensive, which might speak against my own argument.)
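
To make A) a bit more concrete, here is a minimal hedging sketch in Python. Everything in it is a made-up assumption for illustration (the futures discount, the lognormal demand distribution, the no-resale rule), not a calibrated model; it merely illustrates that the more unpredictable a buyer's demand, the smaller the share of expected demand worth locking in via futures, even when futures trade at a discount.

```python
import numpy as np

# Toy hedging model (all numbers made up). A firm covers a random compute
# demand either via futures bought in advance at price F (assumed to trade
# at a discount to the expected spot price), or on the spot market at a
# volatile price. Unused futures capacity is assumed worthless (no resale).
rng = np.random.default_rng(0)
n = 200_000
F = 0.8                                    # assumed futures price per unit
spot = rng.normal(1.0, 0.3, n).clip(0.1)   # spot price, mean 1.0

def expected_cost(hedge, sigma):
    """Mean cost of pre-buying `hedge` units when demand is lognormal(0, sigma)."""
    demand = rng.lognormal(0.0, sigma, n)
    shortfall = np.clip(demand - hedge, 0.0, None)  # covered on the spot market
    return (hedge * F + shortfall * spot).mean()

for sigma, label in [(0.1, "stable demand"), (1.2, "demand swinging ~10x")]:
    grid = np.linspace(0.0, 5.0, 101)
    costs = [expected_cost(h, sigma) for h in grid]
    best = grid[int(np.argmin(costs))]
    mean_demand = rng.lognormal(0.0, sigma, n).mean()
    print(f"{label}: optimal hedge = {best:.2f} units, "
          f"i.e. {best / mean_demand:.0%} of expected demand")
```

Under these toy numbers, the stable-demand buyer locks in roughly 90% of its expected demand, while the buyer whose demand swings ~10x locks in well under a quarter and covers the rest on the spot market, i.e. it largely remains a price taker, as argued above.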

If you want to be vegan but you worry about health effects of no meat, consider being vegan except for mussels/oysters
FlorianH · 13d

Thanks. This might just be the nudge to finally try what I've long thought about without ever putting it into practice. Quick questions:

  1. Do we know whether any safe upper-dose threshold exists (e.g., from excessive heavy metals accumulated from polluted seas, or any other imbalances)?
  2. Do we know whether that simple organism is comparably powerful at providing the micronutrients we think we might lack in a vegan diet?
    1. To be clear what I mean: it is trivial to get nearly arbitrary "grams of protein" from vegan sources (and I guess most minerals and such too), but in the end even that doesn't seem at all equivalent to eating animal proteins. So: is it rather obvious that mussels cover just perfectly what we need, or actually not so clear?
The Boat Theft Theory of Consciousness
FlorianH · 1mo

I think that's very silly and obviously consciousness helps us steal boats.

Assuming you mean: the evolved strategy is to separate out a limited amount of information into the conscious space, having that part control what we communicate externally, so that our dirty secrets are more safely hidden away within our original, unconscious space.

Essentially: let the outward appearance be nicely encapsulated away, with a hefty dose of self-serving bias and whatnot, and give it only what we deem useful for it to know.

Intriguing!! It feels like a good fit with us feeling and appearing so supposedly good, always and everywhere, while in reality we humans are rather deeply nasty in so many ways.

  1. On the one hand: I find it intuitively able to help explain a split into two layers, conscious and subconscious processes, indeed.
  2. On the other hand: if the aim is to explain 'consciousness as phenomenal consciousness' (say, if we're not 100% illusionists), I don't see how the separation into two layers would necessarily create phenomenology, as opposed to merely more 'basic' information-processing layers.
Against asking if AIs are conscious
FlorianH · 1mo

Your point (incl. your answer to silentbob) seems to rest on rather fundamental principles; implicitly you seem to suggest the following (I dare interpret a bit freely and wonder what you'd say):

If you upped your skills, so that the AI you build becomes... essentially a hamun, defined as basically similar to a human but artificially built by you as biological AI instead of via the usual procreation, one could end up tempted to say the same thing: actually, asking about their phenomenal consciousness is the wrong thing.

Taking your "Human moral judgement seem easily explained as an evolutionary adaptation for cooperation and conflict resolution, and very poorly explained by perception of objective facts." from your answer to silentbob, I have the impression you'd have to say: yep, no particular certainty about having to take hamuns as moral patients.

Does this boil down to some strong sort of illusionism? Do you have, according to your premises, a way to 'save' our conviction of the moral value of humans? Or would you actually even try to?

Maybe I'm over-interpreting all this, but I would be keen to see how you see it.

Alexander Gietelink Oldenziel's Shortform
FlorianH · 3mo

Judging merely from the abstract, the study seems a little bit of a red herring to me:
1. Barely anyone talks about an "imminent labor market transformation"; instead we say it may soon turn things upside down. And the study can only show past changes.

2. That "imminent" vs. "soon" may feel like nitpicking, but it's crucial: current tools, used the way they currently are, do not yet replace that many workers 1:1; but if you look at the innovative developments overall, the immense human-labor-replacing capacity seems rather obvious.

Consider as an example a hypothetical 'usual' programmer at a 'usual' company. Would you strongly expect her salary to have changed much just because, in the past 1-2 years, she has become faster at coding? Not necessarily. In fact, as we cannot do the coding fully without her yet, the value of her marginal product of labor might for now even be a bit greater; or maybe a bit lower, but the AI boom means a near-term explosion in IT demand anyway, so seeing little net effect is no particular surprise, for now.

Or the study writer: the language improves, maybe some of the reasoning in the studies slightly, but the habits of how we commission and organize studies haven't changed at all yet; she, too, has kept her job so far.

Or teaching: I'm still teaching just as much as I did two years ago, of course. The students are still in the same program they started two years ago. 80% of incoming students are somewhat ignorant of, and 20% somewhat concerned about, what AI will mean for their studies, but there is no known alternative for them yet other than following the usual path. We're now starting to reduce contact time at my university, not least due to digital tech, so this may change soon. But until yesterday: more or less the same old, seemingly; no major change on that front either, when one just looks at aggregate macroeconomic data.

Not least, this reflects that it has been only about two years since the large LLMs broke through, and about one year since people widely started to really use them; that's a short time, so we see nothing much in most domains quite yet.

Look at micro-level details and I'm sure you'll already find quite a few hints of what might be coming, though really, expect to see much more of it 'soon'-ish than 'right now already'.

Posts

FlorianH's Shortform (4 karma · 5mo · 1 comment)
Essential LLM Assumes We're Conscious—Outside Reasoner AGI Won't (1 karma · 8d · 0 comments)
Alienable (not Inalienable) Right to Buy (9 karma · 6mo · 6 comments)
Relativity Theory for What the Future 'You' Is and Isn't (7 karma · 1y · 49 comments)
How much should e-signatures have to cost a country? [Question] (5 karma · 2y · 5 comments)
"AI Wellbeing" and the Ongoing Debate on Phenomenal Consciousness (10 karma · 2y · 6 comments)
Name of the fallacy of assuming an extreme value (e.g. 0) with the illusion of 'avoiding to have to make an assumption'? [Question] (4 karma · 2y · 1 comment)
SunPJ in Alenia (9 karma · 3y · 19 comments)
Am I anti-social if I get vaccinated now? [Question] (5 karma · 4y · 14 comments)