Jay

Strongly upvoted.  A few comments:

I think of a human being as a process, rather than a stable entity.  We begin as embryos, grow up, get old, and die.  Each step of the process follows inevitably from the steps before.  The way I see it, there's no way an unchanging upload could possibly be human, and an upload that evolves would be even less so, given the environment it's evolving in.

On a more practical level, the question of whether a software entity is identical to a person depends on your relationship to that person.  Let's take Eliezer Yudkowsky for example:

  • I personally have never met the guy but have read some of the stuff he wrote.  If you told me that he'd been replaced with an LLM six months ago, I wouldn't be able to prove you wrong or have much reason to care.
  • His friends and family would feel very differently, because they have deeper relationships with him and many of the things they need from him cannot be delivered by an LLM.
  • To Eliezer himself, the chatbot would obviously not be him.  Eliezer is himself; the chatbot is something else.  Uniquely, Eliezer doesn't have a demand for Eliezer's services; he has a supply of those services that he attempts to find demand for (with considerable success so far).  He might consider the chatbot a useful tool or an unbeatable competitor, but he definitely wouldn't consider it himself.
  • To Eliezer's bank it's a legal question.  When the chatbot orders a new server, does Eliezer have to pay the bill?  If it signs a contract, is Eliezer bound?
    • Does the answer change if there's evidence that it was hacked?  What sorts of evidence would be sufficient?
  • If asked, AI-liezer would claim to perceive itself as Eliezer.  Whether it actually has qualia, and what those qualia are like, we will not know.

A lot of the nonprofit boards that I've seen use a "consent agenda" to manage the meeting.  The way it works is:

  • The staff create the consent agenda and provide it to the board members perhaps a week in advance.
  • Any single board member can take any item off the consent agenda and onto the regular agenda.
  • The consent agenda is passed in a single motion.  It always passes unanimously, because anything that any member thinks merits attention has been moved onto the regular agenda (where it is separately discussed and voted on).

It doesn't do much for governance directly, but fewer time-wasting consent votes can make room for more discussion of issues that matter.
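The procedure above can be sketched in a few lines of code.  This is a toy model only; the agenda items and the pull-and-pass logic are my own invented illustration, not part of any board's actual rules.

```python
# Toy model of a consent agenda.  Item names are hypothetical.

consent = ["approve minutes", "accept treasurer's report", "renew insurance"]
regular = ["annual budget"]

def pull_item(item, consent, regular):
    """Any single board member may move an item from the consent
    agenda onto the regular agenda for separate discussion and vote."""
    consent.remove(item)
    regular.append(item)

# One member flags the insurance renewal for discussion.
pull_item("renew insurance", consent, regular)

# Everything still on the consent agenda passes in a single motion --
# unanimously, since anyone who objected would have pulled the item.
passed_unanimously = list(consent)
```

The key design point is that dissent is expressed by moving items, not by voting against the block, which is why the single motion always passes.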

In the US, parties still aren't recognized by the Constitution.  Every election is a choice among all of the people who qualify for the ballot for each office.  Several groups of like-minded politicians emerged quickly after the founding, and over time these became our major parties.

It's not uncommon for an American candidate to run as an independent (i.e. not affiliated with a party), although they hardly ever win. 

To the extent that I understand what you're saying, you seem to be arguing for curiosity as a means of developing a detailed, mechanistic ("gears-level", in your terms) model of reality.  I totally support this, especially for the smart kids.  I'm just trying to balance it out with some realism and humility.  I've known too many people who know that their own area of expertise is incredibly complicated but assume that everything they don't understand is much simpler.  In my experience, a lot of projects fail because a problem that was assumed to be simple turned out not to be.

I get your point, and I totally agree that answering a child's questions can help the kid connect the dots while maintaining the kid's curiosity.  As a pedagogical tool, questions are great.  

Having said that, most people's knowledge of almost everything outside their specialties is shallow and brittle.  The plastic in my toothbrush is probably the subject of more than 10 Ph.D. dissertations, and the forming processes of another 20.  This computer I'm typing on is probably north of 10,000.  I personally know a fair amount about how the silicon crystals are grown and refined, have a basic understanding of how the chips are fabricated (I've done some fabrication myself), know very little about the packaging, assembly, or software, and know how to use the end product at a decent level.  I suspect that worldwide my overall knowledge of computers might be in the top 1% (by some hypothetical reasonable measure).  I know very little about medicine, agriculture, nuclear physics, meteorology, or any of a thousand other fields.

Realistically, a very smart* person can learn anything but not everything (or even 1% of everything).  They can learn anything given enough time, but literally nobody is given enough time.  In practice, we have to take a lot of things on faith, and any reasonable education system will have to work within this limit.  Ideally, it would also teach kids that experts in other fields are often right even when it would take them several years to learn why.

*There are also average people who can learn anything that isn't too complicated and below-average people who can't learn all that much.  Don't blame me; I didn't do it.

Being honest, for nearly all people nearly all of the time, questioning firmly established ideas is a waste of time at best.  If you show a child, say, the periodic table (common versions of which have hundreds of facts), the probability that the child's questioning will lead to a significant new discovery is less than 1 in a billion* and the probability that it will lead to a useless distraction approaches 100%.  There are large bodies of highly reliable knowledge in the world, and it takes intelligent people many years to understand them well enough to ask the questions that might actually drive progress.  And when people who are less intelligent, less knowledgeable, and/or more prone to motivated reasoning are asking the questions, you can get flat earthers, QAnon, etc.

*Based on the guess that we've taught the periodic table to at least a billion kids and it's never happened yet.

I think a better way to look at it is that frequentist reasoning is appropriate in certain situations and Bayesian reasoning is appropriate in other situations.  Very roughly, frequentist reasoning works well for descriptive statistics and Bayesian reasoning works well for inferential statistics.  I believe that Bayesian reasoning is appropriate in certain kinds of cases with probability (1 − delta), where 1 represents the probability of something that has been rationally proven to my satisfaction and delta represents the (hopefully small) probability that I am deluded.
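The descriptive-versus-inferential split can be made concrete with a toy example.  This is a minimal sketch using a coin-flip dataset I invented for illustration; the Beta-Binomial model and the uniform prior are likewise my own arbitrary choices, not anything from the comment above.

```python
# Hypothetical data: 7 heads in 10 flips of a coin with unknown bias p.
heads, flips = 7, 10
tails = flips - heads

# Frequentist (descriptive): summarize the observed data with a point
# estimate -- the maximum-likelihood estimate of p.
mle = heads / flips  # 0.7

# Bayesian (inferential): start from a prior belief about p and update
# it on the data.  A uniform Beta(1, 1) prior is conjugate to the
# binomial likelihood, so the posterior is Beta(1 + heads, 1 + tails).
alpha, beta = 1 + heads, 1 + tails
posterior_mean = alpha / (alpha + beta)  # 8/12, pulled toward 0.5 by the prior
```

The frequentist number describes only the sample in hand; the Bayesian posterior blends the sample with prior belief, which is why it sits slightly closer to 0.5 than the raw frequency.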

Wars are an especially nasty type of crisis because there's an enemy.  That enemy will probably attempt to use your software for its own ends.  In the case of your refugee heatmap idea, given that the Russians are already massacring civilians, that might look like a Russian artillery commander using it to deliberately target refugees.  Alternately, they might target incoming buses to prevent the refugees from getting out of the Ukrainian military's way and make the Ukrainians spend essential resources on feeding and protecting them.  

Does the Russian military even have the tech dependencies that would make them vulnerable to cyber attacks?  I think they're pretty analog.

I spent about 20 years in academic and industrial research, and my firm belief is that almost nobody spends nearly enough time in the library.  There have been hundreds of thousands of scientists before you; it is overwhelmingly likely that your hot new idea has been tried before.  The hard part is finding it; science is made up of thousands of tiny communities that rarely talk to each other and use divergent terminology.  But if you do the digging, you may find a paper from Egypt in 1983 that describes exactly why your project isn't working (real example).  Finding that paper two weeks into the project is much better than finding it five years later.
