All of gbear605's Comments + Replies

I haven't read Fossil Future, but it sounds like he's ignoring the option of combining solar and wind with batteries (and other types of electrical storage, like pumped water). The technology is available today and can be more easily deployed than fossil fuels at this point.

If you only have solar + wind + batteries, you have a problem when you have a week of bad weather. Batteries can effectively shift energy produced at noon to the night, but they are not cost-effective for storing summer production to cover bad months in the winter.
While I think Epstein's treatment of solar/wind and batteries is too brief, his main points are:

1. Large portions of the energy we need have nothing to do with the grid. Specifically, transportation (global shipping, flight) and industrial process heat (to make steel, concrete, etc.) comprise a large percentage of our energy needs, and solar/wind are far too inefficient to meet those needs.
2. Replacing current fossil fuels with solar/wind + batteries will require massive amounts of (a) batteries, (b) transmission lines, and (c) solar and wind farms, which the environmental movement seems to oppose locally whenever possible. Just because the technology exists doesn't mean we're capable, as a society, of deploying it at scale.

Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get the donor’s memories and preferences

The citation is to a disreputable journal. Some of their sources might have some basis (though a lot of them also seem disreputable), but I wouldn't take this at face value.

There can also be meaning that the author simply didn't intend. In biblical interpretation, for instance, there have been many different (and conflicting!) interpretations given to texts that were written with a completely different intent. One reader reads the story of Adam and Eve as a text that supports feminism, another reader sees the opposite, and the original writer didn't intend to give either meaning. But both readers still get those meanings from the text.

But that's because the meaning is underdetermined: there is information (explicit meaning) within the text that constrains the space of interpretations, but it still allows for several different ones. How much the text is underdetermined is a function of both the text and the reader: the reader may lack (as I said) cultural or idiosyncratic context or acquaintance with the object of reference, or the text (which is what provides the new information) may be too short to disambiguate.

Interestingly, it apparently used to be Zebra, but is now Zulu. I'm not sure why they switched over, but it seems to be the predominant choice since the early 1950s. 

somewhat far-fetched guess: internet -> everybody does astrology now -> zebra gets confused with Libra -> replacement with Zulu

I understand that definition, which is why I’m confused about why you brought up the behavior of bacteria as evidence that bacteria have experience. I don’t think any non-animals have experience, and I think many animals (like sponges) also don’t. As I see it, bacteria are more akin to natural chemical reactions than they are to humans.

I brought up the simulation of a bacterium because an atom-for-atom simulation of a bacterium is completely identical to the bacterium - the thing that has experience is represented in the atoms of the bacterium, so a perfect simulation of a bacterium must also internally experience things.

If bacteria have experience, then I see no reason to say that a computer program doesn’t have experience. If you want to say that a bacterium has experience based on guesses from its actions, then why not say that a computer program has experience based on its words?

From a different angle, suppose that we have a computer program that can perfectly simulate a bacterium. Does that simulated bacterium have experience? I don’t see any reason why not, since it will demonstrate all the same ability to act on intention. And if so, then why couldn’t a different computer progra... (read more)

If you look far enough back in time, humans are descended from animals akin to sponges that seem to me like they couldn’t possibly have experience. They don’t even have neurons. If you go back even further, we’re the descendants of single-celled organisms that absolutely don’t have experience. But at some point along the line, animals developed the ability to have experience. If you believe in a higher being, then maybe it introduced it, or maybe some other metaphysical cause, but otherwise it seems like qualia has to arise spontaneously from the evolut... (read more)

My disagreement is here. Anyone with a microscope can still look at them today. The ones that can move clearly demonstrate acting on intention in a recognizable way. They have survival instincts just like an insect or a mouse or a bird. It'd be completely illogical not to generalize downward that the ones that don't move also exercise intention in other ways to survive. I see zero reason to dispute the assumption that experience co-originated with biology. I find the notion of "half consciousness" irredeemably incoherent. Different levels of capacity, of course, but experience itself is a binary bit that has to either be 1 or 0.

Nit: "if he does that then Caplan won't get paid back, even if Caplin wins the bet" misspells "Caplan" in the second instance.

Fixed. Thank you.

Cable companies are forcing you to pay for channels you don’t want. Cable companies are using unbundling to mislead customers and charge extra for basic channels everyone should have.

I think this would be more acceptable if either everything was bundled or nothing was. But generally speaking companies bundle channels that few people want, to give the appearance of a really good deal, and unbundle the really popular channels (like sports channels) to profit. So you sign up for a TV package that has "hundreds of channels", but you get lots of channels that you don't care about and none of the channels you really want. You're screwed both ways.

I think you're totally spot on about ChatGPT and near term LLMs. The technology is still super far away from anything that could actually replace a programmer because of all of the complexities involved.

Where I think you go wrong is looking at the long term future AIs. As a black box, at work I take in instructions on Slack (text), look at the existing code and documentation (text), and produce merge requests, documentation, and requests for more detailed requirements (text). Nothing there requires some essentially human element - the AI just needs to be g... (read more)

Prediction market on whether the lawsuit will succeed:

I’m not a legal expert, but I expect that this sort of lawsuit, involving coordination between multiple states’ attorneys general and the department of justice, would take months of planning and would have to have started before public-facing products like ChatGPT were even released.

That actually goes a long way towards answering the question. This means that in order for it to be connected, the lawsuit would have been on the backburner, and the OpenAI-MSFT partnership somehow was either the straw that broke the camel's back, or it mostly-by-itself triggered a lawsuit that was held in reserve against Google. Highly relevant info either way, thank you.

The feared outcome looks something like this:

  • A paperclip manufacturing company puts an AI in charge of optimizing its paperclip production.
  • The AI optimizes the factory and then realizes that it could make more paperclips by turning more factories into paperclips. To do that, it has to be in charge of those factories, and humans won’t let it do that. So it needs to take control of those factories by force, without humans being able to stop it.
  • The AI develops a super virus that can spread as a pandemic and wipe out humanity.
  • The AI contacts a genetics lab and
... (read more)
Program Den · 5mo

I get the premise, and it's a fun one to think about, but what springs to mind is:

Phase 1: collect underpants
Phase 2: ???
Phase 3: kill all humans

As you note, we don't have nukes connected to the internet. But we do use systems to determine when to launch nukes, and our senses/sensors are fallible, etc., which we've (barely— almost suspiciously "barely", if you catch my drift[1]) managed to not interpret in a manner that caused us to change the season to "winter: nuclear style".

Really I'm doing the same thing as the alignment debate is on about, but about the alignment debate itself. Like, right now, it's not too dangerous, because the voices calling for draconian solutions to the problem are not very loud. But this could change. And kind of is, at least in that they are getting louder. Or that you have artists wanting to harden IP law in a way that historically has only hurt artists (as opposed to corporations or Big Art, if you will) gaining a bit of steam.

These worrying signs seem to me to be more concrete than the similar, but not as old, nor as concrete, worrisome signs of computer programs getting too much power and running amok[2].

[1] we are living in a simulation with some interesting rules we are designed not to notice
[2] If only because it hasn't happened yet— no mentats or cylons or borg history — tho also arguably we don't know if it's possible… whereas authoritarian regimes certainly are possible and seem to be popular as of late[3].
[3] hoping this observation is just confirmation bias and not a "real" trend. #fingerscrossed

We're worried about AI getting too powerful, but logically that means humans are getting too powerful, right?

One of the big fears with AI alignment is that the latter doesn't logically follow from the former. If you're trying to create an AI that makes paperclips and then it kills all humans because it wasn't aligned (with any human's actual goals), it was powerful in a way that no human was. You do definitely need to worry about what goal the AI is aligned with, but even more important than that is ensuring that you can align an AI to any human's preferences at all, or else worrying about which goal is pointless.

Program Den · 5mo

I think the human has to have the power first, logically, for the AI to have the power. Like, if we put a computer model in charge of our nuclear arsenal, I could see the potential for Bad Stuff. Beyond all the movies we have of just humans being in charge of it (and the documented near-catastrophic failures of said systems— which could have potentially made the Earth a Rough Place for Life for a while), I just don't see us putting anything besides a human's finger on the button, as it were.

By definition, if the model kills everyone instead of making paperclips, it's a bad one, and why on Earth would we put a bad model in charge of something that can kill everyone? Because really, it was smart — not just smart, but sentient! — and it lied to us, so we thought it was good, and gave it more and more responsibilities until it showed its true colors and…

It seems as if the easy solution is: don't put the paperclip-making model in charge of a system that can wipe out humanity (again, the closest I can think of is nukes, tho biological warfare is probably a more salient example/worry of late). But like, it wouldn't be the "AI" unleashing a super-bio-weapon, right? It would be the human who thought the model they used to generate the germ had correctly generated the cure to the common cold, or whatever. Skipping straight to human trials because it made mice look and act a decade younger or whatnot.

I agree we need to be careful with our tech, and really I worry about how we do that— evil AI tho? not so much so

The Flynn effect isn't really meaningful outside of IQ tests. Most medieval and early modern peasants were uneducated and didn't know much about the world far from their home, but they definitely weren't dumb. If you look at the actual techniques they used to run their farms, they're very impressive and require a good deal of knowledge and fairly abstract thinking to do optimally, which they often did. 

Also, many of the weaving patterns that they've been doing for thousands of years are very complex, much more complex than a basic knitting stitch.

Angela Pretorius · 5mo
  • At least 90% of internet users could solve it within one minute.

While I understand the reasoning behind this bar, having a bar greater than something like 99.99% of internet users is strongly discriminatory and regressive. Captchas are used to gate parts of the internet that are required for daily life. For instance, almost all free email services require filling out captchas, and many government agencies now require you to have an email address to interact with them. A bar that cuts out a meaningful number of humans means that those humans become unable t... (read more)

Bruce G · 5mo
If only 90% can solve the captcha within one minute, it does not follow that the other 10% are completely unable to solve it and faced with "yet another barrier to living in our modern society". It could be that the other 10% just need a longer time period to solve it (which might still be relatively trivial, like needing 2 or 3 minutes) or they may need multiple tries. If we are talking about someone at the extreme low end of the captcha proficiency distribution, such that the person can not even solve in a half hour something that 90% of the population can answer in 60 seconds, then I would expect that person to already need assistance with setting up an email account/completing government forms online/etc, so whoever is helping them with that would also help with the captcha. (I am also assuming that this post is only for vision-based captchas, and blind people would still take a hearing-based alternative.)

Workers at a business are generally more aligned with each other than they are with the shareholders of the business. For example, if the company is debating a policy that has a 51% chance of doubling profit and a 49% chance of bankrupting the company, I would expect most shareholders to be in favor (since it's positive EV for them). But for worker-owners, that's a 49% chance of losing their job and a 51% chance of increasing salary but not doubling (since it's profit that is doubling, not revenue, and their salaries are part of the expenses), so I would e... (read more)
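The expected-value claim above can be sketched numerically. This is a minimal illustration: the 51%/49% split is the hypothetical from the comment, the 20% raise for worker-owners is an assumed figure chosen only to show a raise well short of doubling, and equity/income are normalized to 1.0.

```python
# Hypothetical policy from the comment: 51% chance profit doubles,
# 49% chance the company goes bankrupt.
p_win, p_lose = 0.51, 0.49

# Diversified shareholder: equity either doubles or goes to zero.
shareholder_ev = p_win * 2.0 + p_lose * 0.0   # 1.02 > 1.0, positive EV

# Worker-owner: upside is a raise well short of doubling (profit doubles,
# not revenue, and salaries are part of expenses); downside is losing the job.
# Using an illustrative 20% raise and valuing job loss at 0:
worker_ev = p_win * 1.2 + p_lose * 0.0        # 0.612 < 1.0, negative EV

print(shareholder_ev, worker_ev)
```

So the same coin flip is positive-EV for the shareholder and negative-EV for the worker-owner, which is the asymmetry the comment points at.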

I always like it when I can upvote and disagree :) I think you have to be in VERY far mode, and still squint a bit, to think of that as "alignment" to a degree that distinguishes socialist from conventional organizations. Sure, employees as a group will prefer higher median wages over more profits (though maybe not if they're actual owners to a great degree), but I have yet to see a large organization where workers care all that much about other workers (distant ones, with different roles, who compete for prestige and compensation even while cooperating on delivery).

Conventional org owners/leaders care a lot about worker retention and productivity, which is often summarized as "satisfaction". In my <mumble> years at companies big and small, including both tech and non-tech workers, I have seen no evidence that office workers care more about warehouse workers than senior management does. There is probably slightly more warehouse workers caring about workers in other warehouses, but even then, there's cut-throat hatred for closing "my" warehouse rather than someone else's.

I think the biggest issue in software development is the winner-takes-all position with many internet businesses. For the business to survive, you have to take the whole market, which means you need to have lots of capital to expand quickly, which means you need venture capital. It's the same problem that self-funded startups have. People generally agree that self-funded startups are better to work at, but they can't grow quite as fast as VC-funded startups and lose the race. But that doesn't apply outside of the software sphere (which is why VC primarily ... (read more)

So Diplomacy is not a computationally complex game; it's a game about out-strategizing your opponents, where roughly all of the strategy is convincing your opponents to work with you. There are no new tactics to invent, and an AI can't really see deeper into the game than other players; it just has to be more persuasive and make the right decisions about the right people at the right time. You often have to plan your actions ahead so that in a future turn someone else will choose to ally with you. The AI didn't do any specific psycholog... (read more)

 What does this picture [pale blue dot] make you think about?

This one in particular seems unhelpful, since the picture is only meaningful if the viewer knows what it's a photo of. Sagan's description does a lot to imbue it with emotion.

Thank you for your input on this. The idea is to show people something like the following image [see below] and give a few words of background on it before asking for their thoughts. I agree that this part wouldn't be too helpful for getting people's takes on the future, but my thinking is that it might be nice to see some people's reactions to such an image. Any more thoughts on the entire action sequence?

That seems like a really limiting definition of intelligence. Stephen Hawking, even when he was very disabled, was certainly intelligent. However, his ability to be agentic was only possible thanks to the technology he relied on (his wheelchair and his speaking device). If that had been taken away from him, he would no longer have had any ability to alter the future, but he would certainly still have been just as intelligent. 

It's just the difference between potential and actualized.

I don’t have any experience with data centers or with deploying machine learning at scale. However, I would expect that for performance reasons it is much more efficient to have a local cache of the current data and then either have a manual redeploy at a fixed schedule or have the software refresh the cache automatically after some amount of time.

I would also imagine that reacting immediately could result in feedback loops where the AI overreacts to recent actions.
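The cache-and-refresh pattern described above can be sketched as follows. This is a hedged sketch, not any particular deployment system's API: `TTLCache` and `loader` are hypothetical names, and a real system would also need thread safety and error handling.

```python
import time

class TTLCache:
    """Serve a locally cached value; only call the loader again
    once the cached copy is older than the time-to-live."""

    def __init__(self, loader, ttl_seconds):
        self.loader = loader              # function that fetches fresh data
        self.ttl = ttl_seconds
        self._value = None
        self._loaded_at = float("-inf")   # force a load on first access

    def get(self):
        now = time.monotonic()
        if now - self._loaded_at > self.ttl:
            self._value = self.loader()   # refresh the stale cache
            self._loaded_at = now
        return self._value
```

Usage would look like `cache = TTLCache(fetch_current_model, ttl_seconds=3600)`: every `cache.get()` within the hour returns the same cached object, which also naturally dampens the immediate-reaction feedback loops mentioned above.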

A mitigating factor for the criminality is that smarter people are usually less in need of committing crimes. Society values conventional intelligence and usually will pay well for it, so someone who is smarter will tend to get better jobs and make more money, so they won't need to resort to crime (especially petty crime).

It could also be that smarter people get caught less often, for any given level of criminality.
Additionally, if you have a problem which can be solved by either (a) crime or (b) doing something complicated to fix it, your ability to do (b) is higher the smarter you are.

My understanding of Spanish (also not a Spanish speaker) is that it's a palatal nasal /ɲ/, not a palatalized alveolar nasal /nʲ/. With a palatal nasal, you're making the sound with the body of your tongue against the hard palate (the part at the top of your mouth, behind the alveolar ridge). With a palatalized nasal, the tongue tip stays at the alveolar ridge and the palatal contact is a "secondary" articulation, with the body of your tongue raised toward the hard palate.

That said, the Spanish ñ is a good example of a palatal or palatalized sound for an English speaker.

And Irish (Gaelic) has both! (/ɲ/ is slender ng, /nʲ/ is slender n)

Yeah, that's absolutely more correct, but it is at least a little helpful for a monolingual English speaker to understand what palatalization is.

Perhaps many Americans know at least some basics of Spanish? I think the Spanish ñ letter, as in "el niño", is proper palatalization. (But I do not speak Spanish.)

Not sure I can explain in text to a native English speaker what palatalization is; you would need to hear actual examples.


There are some examples in English. It's not quite the same as how Slavic languages work*, but it's close enough to get the idea: If you compare "cute" and "coot", the "k" sound in "cute" is palatalized while the "k" sound in "coot" is not. Another example would be "feud" and "food".

British vs American English differ sometimes in palatalization. For instance, in British English (RP), "tube" is pronounced with a palatalized "t" ... (read more)

I would just call this an extra 'y' sound before the vowel. ([ˈkjuːt] vs. [ˈkuːt])

This explains something I was confused about, thank you.

The risk is a good point given some of the uncertainties we’re dealing with right now. I’d estimate maybe a 1% risk of those per year (more weighted towards the latter half of the time frame, but I’ll assume it’s constant), so with a discounting rate like that it would perhaps need to be more like $1400. That’s still much less than the assumption.

Looking at my consumption right now, I objectively would not spend the $1000 on something that lasts for more than 30 years, so I believe that shouldn’t be relevant. To make this more direct, we could phrase it as something like “a $1000 vacation now or a $1400 vacation in 30 years”, though that ignores consumption offsetting.
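The 1%-per-year discounting arithmetic above works out roughly as follows (a sketch of the comment's own numbers, assuming the risk is flat across all 30 years):

```python
# 1% per year chance the future payout never materializes, over 30 years.
risk_per_year = 0.01
years = 30

survival = (1 - risk_per_year) ** years   # chance it still pays off at all
breakeven = 1000 / survival               # payout needed to match $1000 now

print(round(survival, 2), round(breakeven))  # 0.74 1352
```

So the risk alone pushes the break-even payout to roughly $1350, in the ballpark of the $1400 figure quoted.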

For the point about smoothing consumption, does that actually work given that retirement savings are usually invested and are expected to give returns higher than inflation? For instance, my current savings plan means that although my income is going to go up, and my amount saved will go up proportionally, the majority of my money when I retire will be from early in my career. 

For a more specific example, consider two situations where I'm working until I'm 65 and have returns of 6% per annum (and taking all dollar amounts to be inflation adjusted):

  • I s
... (read more)
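The compounding point in the comment above can be sketched with a simplified model. This is a hedged illustration, not the commenter's actual numbers: it assumes equal real contributions each year (rather than the rising-income profile described) and the stated 6% real return.

```python
# 6% real return; one unit saved at the start of each of 40 working years.
r, years = 0.06, 40

# Value at retirement of each year's deposit:
fv = [(1 + r) ** (years - 1 - t) for t in range(years)]
total = sum(fv)
first_half_share = sum(fv[:years // 2]) / total

print(round(first_half_share, 2))  # first 20 years supply ~76% of the balance
```

Even with flat contributions, the first half of the career supplies about three quarters of the final balance, which is the "majority of my money will be from early in my career" effect.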
Andrew Currall · 8mo

This sounds nuts to me. Firstly, what about risk? You might be dead in 30 years. We might have moved to a different economy where money is worthless. You might personally not value money (or not value the kind of things you can get with money) as much. Admittedly there's also some upside risk, but it's clearly lower than the downside.

We're ignoring investment possibilities, of course. But even then, in any case, if you have £1000 now, you can use it to buy something that would last more than 30 years and benefit you over that time.

if I put things in my cart, don't check out, and come back the next day, I'm going to be frustrated if the site has forgotten my selections!

Ironically, I get frustrated by the inverse of this. If I put something in my shopping cart, I definitely don’t still want it tomorrow. I keep on accidentally having items that I don’t want in my order when I check out, and then I have to go back through all the order steps to remove the item (since there’s hardly ever a removal button from the credit card screen). It’s so frustrating! I don’t want you to remember things about me from previous days, just get rid of it all.

I see that for longer periods, but even overnight?

A single human is always going to have a risk of a sudden mental break, or perhaps simply not having been trustworthy in the first place. So it seems to me like a system where the most knowledgeable person has the single decision is always going to be somewhat more risky than a situation where that most knowledgeable person also has to double check with literally anyone else. If you make sure that the two people are always together, it doesn’t hurt anything (other than the salary for that person, I suppose, but that’s negligible).

For political reasons, we ... (read more)

M. Y. Zuo · 8mo

There is the problem of the less knowledgeable being deceived by a false alarm or ignoring a genuine alarm. Since the consequences are so enormous for either case, due to competitive dynamics between multiple countries, it still doesn't seem desirable, or even credible, to entrust this to anything larger than a small group at best.

In the case of extreme time pressure, such as the hypothetical 5-minute warning, trying to coordinate between a small group of hastily assembled non-experts, under the most extreme duress imaginable, will likely increase the probability of both immensely undesirable scenarios. (Assuming they can even be assembled and communicate quickly enough.) On the other hand, this removes the single point of failure, and leaving it to a single individual does have the other downsides you mentioned. So there may not be a clear answer, if we assume communication speeds are sufficient, leaving it to a political choice.

Perhaps this might have been feasible before the invention of the internet. Nowadays, this seems practically impossible, as anyone competent enough to understand building half a weapon will be very likely capable of extrapolating to the full weapon in short order. Also, more than likely capable of bypassing any blocks society may establish to prevent communication between those with complementary knowledge. Even if it was split 10 ways, the delay may only be a few years to decades until the knowledge is reassembled.

The policy could just be “at least one person has to agree with the President to launch the nuclear arsenal”. It probably doesn’t change the game that much, but it at least gets rid of the possible risk that the President has a sudden mental break and decides to launch missiles for no reason. Notably it doesn’t hurt the ability to respond to an attack, since in that situation there would undoubtedly be at least one aide willing to agree, presumably almost all of them.

Actually consulting with the aide isn’t necessary, just an extra button press to ensure that something completely crazy doesn’t happen.

M. Y. Zuo · 8mo
But the probability of a false alarm can never be reduced to zero.  In this case wouldn't it be most desirable to have the most knowledgeable person, with the best internal estimate of the probability of a false alarm, to make the final decision? Leaving it to anyone other than the person with the best estimate seems to be intentionally tolerating a higher than minimal possibility of senseless catastrophe.

What I’m referring to is the two-man rule:

US military policy requires that for a nuclear weapon to actually be launched, two people at the silo or on the submarine have to coordinate to launch the missile. The decision still comes from a single person (the President), but the people who carry out the order have to be double-checked, so that a single crazy serviceman doesn’t launch a missile.

It wouldn’t be crazy for the President to require a second person to help make the decision, since the President is going ... (read more)

M. Y. Zuo · 8mo

'Consulting' with any random aide that happens to be the nearest on duty seems even less desirable than making the decision alone. If you mean a rotating staff of knowledgeable military attaches or similar, maybe, if they literally stay nearby 24/7. But then wouldn't it be the military attache making the final decision, since they will always have the more up-to-date knowledge that cannot be fully elaborated in a few minutes?

In the case of nuclear weapons, they infamously have been made to require two individuals to both press the button (or turn the key) to launch the missile. Even if some situations aren’t currently setup like that, they certainly all could be made to require at least two people.

M. Y. Zuo · 8mo

Submarine-launched supersonic missiles already reduce the decision time to just a few minutes for the major capitals. As they're all within a few hundred km of international waters, it's very unlikely that even a second person could be consulted in that timeframe. The only exception would be if they were coincidentally in the same room. This is regardless of what laws or systems or organizational structures happen to exist or be developed in the future, assuming the capital stays put, since the physical distance to the capital cannot be feasibly changed by any human action.

(There is a possibility for an even more extreme scenario to arise, if fractional orbital bombardment is ever implemented, which would reduce the decision time to under a minute. In such a case it would not be physically credible for any human to have any authority at all in deciding on a counterattack.)

For the "Will Russia use chemical or biological weapons in 2022?" question, the creator provided information about an ambiguous outcome, though it seems very subjective:

If, when the question closes, there is widespread reporting that Russia did the attacks and there is not much reported doubt, then I will resolve YES. If it seems ambiguous I will either resolve as N/A or at a percentage that I think is reasonable. (Eg. resolve at 33% if I think there’s a 2/3 chance that the attacks were false flag attacks.) This of course would also go the other way if there are supposed Ukrainian attacks that are widely believed to be caused by Russia.

I think there are several similar such markets - the one I was looking at was at [] and lacks such a comment.   EDITED: Ah, you are correct and I am wrong, the text you posted is present, it's just down in the comments section rather than under the question itself.  That does make this question less bad, though it's still a bit weird that the question had to wait for someone to ask the creator that (and, again, the ambiguity remains). I'll update the doc with links to reduce confusion - did not do that originally out of a mix of not wanting to point too aggressively at people who wrote those questions and feeling lazy.

In my experience, paying for the extra seat room often gives you a seat that doesn’t actually have more legroom, or actually has less (!!) legroom. When the payment is so disconnected from the actual experience, it becomes useless as a signal.

Exactly. The worst transatlantic flight I ever had was one where I paid for "extra legroom". Turns out it was a seat without a seat in front, i.e., the hallway got broader there. However, other passengers and even the flight attendants certainly didn't act like this extra legroom belonged to me. Someone even stepped on my foot! On top of that, I had to use an extremely flimsy table that folded out of the armrest. Since most of us aren't weekly business flyers, this is a far cry from a free market.

I suspect that many researchers consider it probably hopeless, but still worth working on given their estimates of how possible it is, how likely unaligned AI is, and how bad/good unaligned/aligned AI would be. A 90% chance of being hopeless is worth it for a 10% chance of probably saving the world. (Note that this is not Pascal’s Wager, since the probability is not infinitesimal.)

This comment got me to change the wording of the question slightly. “so many” was changed to “most”. You answered the question in good faith, which I’m thankful for, but I don’t feel your answer engaged with the content of the post satisfactorily. I was asking about the set of researchers who think alignment, at least in principle, is probably not hopeless, who I suspect to be the majority. If I failed to communicate that, I’d definitely appreciate if you could give me advice on how to make my question more clear. Nevertheless I do agree with everything you’re saying, though we may be thinking of different things here when we use the word “many”.

Doing some research, it sounds like impostor syndrome is definitely present among mountain climbers. Unless you’ve conquered Everest, there’s always some taller or more dangerous mountain that someone else has climbed.

See, for instance, this article about a climber feeling impostor syndrome after climbing a difficult cliff, because “I felt like it must not be as hard as people said it was because I was able to do it.” It also quotes a psychologist who works with athletes as saying “Imposter syndrome is very common, very pervasive, ... It’s most common among hi... (read more)

I've always wondered if part of the reason impostor syndrome is so common among high achievers might be that impostor syndrome helps people become high achievers. If you never think you're good enough, you will never be satisfied and will always keep striving to do better. And that's what it really takes to be the best.

I’m more familiar with DALL-E 2 than with Midjourney, so I’ll assume they share the same shortcomings. If not, feel free to ignore this. There are still some crucial details causing problems with AI art that prevent it from being used for many types of art, though they will probably be fixed soon, which is why I would say “on the cusp” rather than “it’s already here”. I think the biggest issue for your example with Magic cards is that a certain level of art-style consistency between the cards in a set is necessary. From my experie...

At least for actual Magic cards, it's not just a matter of consistency in some abstract sense. Cards from the same set need to relate to each other in very precise ways and the related constraints are much more subtle than "please keep the same style".

Here you can find some examples of real art descriptions that got used for real cards (just google " art descriptions" for more examples). I could describe further constraints that are implicit in those already long descriptions. For example, consider the Cult Guildmage in the fourth ima...

Hmm, I haven't had much trouble getting DALL-E to output consistent styles. I think there's some upfront cost in figuring out how to get the style I want, but then it tends to work pretty reliably, or at least I develop a sense of how to tweak it to maintain the style. (Albeit this does take extra time, and is part of why, in my other comment, I note that I think Davis is undercounting the cost of AI art.)

I think the biggest issue for your example with Magic cards is that a certain level of art-style consistency between the cards in a set is necessary. From my experience with DALL-E, that consistency isn’t possible yet. You’ll create one art piece with a prompt, but then edit the prompt slightly and it will have a rather different style.

As I keep emphasizing, DALL-E makes deliberate tradeoffs and is deliberately inaccessible, deliberately barring basic capabilities it ought to have like letting you use GLIDE directly, and so is a loose lower bound ...

Midjourney seems to be better at stylistic consistency; e.g., the images on the post are pretty stylistically consistent.

Just a note on 1: the LessWrong upvote system allows strong upvoting, and upvotes from users with more karma move the total more. Seeing eight karma on a post doesn’t mean too much, since it could be from just one or two people.

A lot of city dwellers then were doing manual labor (factory lines, construction), but I’m really not sure about the office workers of that era. It’s a good question!

Hm, this was mostly anecdotal from speaking to German friends (including people in Munich!), so I guess I was speaking too generally. Certainly more people drink bottled water in Germany at restaurants compared to many other countries, but I see that I was overstating the case for at home.

Restaurants in Germany don't tend to offer free tap water, so you need to buy bottled water. I think Germans just like the taste of sparkling mineral water, which is why they drink so much of it.

Per Jeffrey L. Singman, Daily Life in Medieval Europe (Westport, Connecticut: Greenwood Press, 1999), pp. 54-55: “A prosperous English peasant in the 14th century would probably consume 2 - 3 pounds of [rye, oats, or barley] bread, 8 ounces of meat or fish or other protein and 2 - 3 pints of ale per day”, which works out to about 3500 to 5000 calories per day.

That same page lists various farm chores as burning 1500-7500 calories over an 8 hour perio...
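As a rough sanity check, that diet does land in the 3500-5000 kcal range if you plug in plausible calorie densities. The per-item figures below are my own assumptions (dense whole-grain bread, a fixed protein portion, unfiltered ale), not numbers from Singman:

```python
# Sanity-check the ~3500-5000 kcal/day estimate for the medieval peasant diet.
# These calorie densities are assumptions, not taken from the source:
BREAD_KCAL_PER_LB = 1250   # assumed: dense whole-grain rye/oat/barley bread
PROTEIN_KCAL = 500         # assumed: 8 oz of meat/fish, taken as a flat amount
ALE_KCAL_PER_PINT = 250    # assumed: unfiltered medieval-style ale

def daily_kcal(bread_lb: float, ale_pints: float) -> float:
    """Total daily calories for the quoted diet."""
    return bread_lb * BREAD_KCAL_PER_LB + PROTEIN_KCAL + ale_pints * ALE_KCAL_PER_PINT

print(daily_kcal(2, 2))  # low end: 2 lb bread, 2 pints ale -> 3500
print(daily_kcal(3, 3))  # high end: 3 lb bread, 3 pints ale -> 5000
```

Under these assumed densities, the quoted ranges reproduce the 3500-5000 figure almost exactly, so the estimate is at least internally consistent.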

Ege Erdil:
Thanks, that's interesting. Intuitively I would not have expected them to be burning so many calories.
Lone Pine:
How much did city dwellers in the early 20th century eat? There must have been a period when people were not doing so much manual labor but before the obesity epidemic.

Farm laborers historically ate a lot of calories just to be able to get through their days. Their calories weren’t very appetizing, but they had to eat a lot because they burned a lot.

Ege Erdil:
That makes sense to me but I'd really need to see the data on how many calories they ate in 1700. I notice I would still be surprised if an average farm laborer in 1700 in prime age ate more than 2400 kcal/day. It's not really relevant to my main point but if you have some data proving that I would be interested to see it.

In Germany, the tap water is known to be very hard, so essentially no one drinks tap water. Instead, it's common to drink alcoholic beverages (brewed, so they don't have the same mineral levels) or bottled mineral water. Mineral water does contain some lithium (and some mineral waters contain very high levels), but most bottled water in Germany does not have very high levels. In this study, the "medium" sample was 171 µg/L while the "high lithium" sample was 1724 µg/L. So people who generally drink the high-lithium bottled water would be at the lower end of SMTM's guesses, and everyone else would be safely outside of it.

Our local tap water (in a town close to Munich) is roughly as soft as tap water can be, and I drink nothing else. But if you've found statistics on how countries differ in how much tap water their citizens drink, I'd be interested to see them. Unfortunately, searching for "tap water consumption" includes all the other uses like showering etc.

Unfortunately, that question was set up poorly so that it is impossible to guess lower than a median of $5010. Of course, that's because the actual price of ETH was around $4800 back in November 2021, so the predictors are basically saying that it won't recover to that price by the end of 2022.

I'm not super familiar, but I just read the one page summary of the report. One of the supposed catalysts for $150k was EIP1559, which went into effect last August and didn't seem to affect the price. The other catalyst was supposed to be PoS coming shortly after, which has been continually delayed (though probably coming soon) and will have had a much longer gap after EIP1559. ETH HODLing seems to not be significant as expected, so that's another driver that has failed. The narrative also isn't there, and the recent crypto crashes are working against it. ...

Looks like Squish himself now considers his thesis falsified.

I suspect most people who would say they wouldn't kill Grandma would also say the same about a situation where they could kill someone else's grandma to give the money to their own family. Actually, in the hypothetical, you're not one of Grandma's heirs, so I interpreted it as if you're some random person who happens to be around Grandma, not one of her actual grandchildren.

So really, I think that it is either something like "the moral weight of the person next to me versus distant strangers" or "choosing to kill someone is fundamentally different than choosing to save someone's life and you can't add them up". 

California introduced the CCPA following the GDPR; it covers much of the same ground, although it is generally less strict.

It seems to me that the only thing that seems workable is to treat it like a human that took inspiration from many sources. In the vast majority of cases, the sources of the artwork are not obvious to any viewer (and the algorithm cannot tell you one). Moreover, any given created piece is really a combination of the millions of pieces of art that the AI has seen, just as a human takes inspiration from all of the pieces they have seen. So it seems most similar to the human category, not to simple manipulations (because it isn't a simple manip...

I agree that this is a plausible outcome, but I don't think society should treat it as a settled question right now. It seems to me like the sort of technology question which a society should sit down and think about.

It is most similar to the human category, yes absolutely, but it enables different things than the human category. The consequences are dramatically different. So it's not obvious a priori that it should be treated legally the same.

You argue against a complete ban by pointing out that not all relevant governments would cooperate. I don't think all governments have to come to the same decision here. Copyright enforcement is already not equal across countries. I'm not saying I think there should be a complete ban, but again, I don't think it's totally obvious either, and I think artists calling for a ban should have a voice in the decision process.

But I also don't agree with your argument that the only two options are a complete ban or treating it exactly like human-generated art. I don't agree with your argument that a requirement to display the closest images from the training data would be useless. I agree that it is easily circumvented, but it does make it much easier to avoid accidental infringement by putting in prompts which happen to be good at pulling out exact duplicates of some datapoint, unbeknownst to you.

I also think it would be within the realm of reasonable possibility to simply apply different legal standards for infringement in the two cases. Perhaps it's fine for human artists to copy a style, but because it's so easy to do it with an AI, it is considered a form of infringement to copy a style that way. IDK, possibly that is a terrible idea, but my point is that it's not clear to me that there are no options at all.