All of frontier64's Comments + Replies

It's fairly rare for a patent to be granted with only a few years of term left, and when that happens it's typically because of patent-owner delays rather than USPTO delays. US law specifically gives you extra term post-grant based on USPTO delays (patent term adjustment). Also, US patent holders have access to pre-issue, post-publication damages in cases where infringers had actual notice of the published patent application.

But even given that, I am 100% in agreement that patent terms should be extended.

2 · AnthonyC · 3d
Yes, agreed, and just to be clear I'm not talking about delays in granting a patent, I'm talking about delays in how long it takes to bring a technology to market and generate a return once a patent has been filed and/or granted. Also, I'm not actually sure I'm 100% behind extending patent terms. I probably am. I do think they should be longer than copyright terms, though.

I don't think that solution accomplishes anything, because the trans goal is to pretend to be women and the anti-trans goal is to not allow trans women to be called women. The proposed solution doesn't get anybody closer to their goals.

I think the skill expressed by the bards isn't memorization; rather it's on-the-fly composition based on those key insights they've remembered. How else could Međedović hear a 2,300-line song and repeat the same story over 6,300 lines?

So if you gained the skill of the great bards you would be able to read the Odyssey and then retell the story in your own engaging way to another group of people while keeping them enraptured.

it is not in a website's interest to annoy its users

It is if the user feels that annoyance towards the regulator instead of the website developer.

Is there a reason for that? Is it out-of-control, overconservative legal worry?

Raging against the tyrannical bureaucrats telling them what they can and can't include on their own website, by including the banner in the most annoying way possible? Kinda like the 10¢ plastic-bag tax at grocery checkouts that tells the customer exactly why they have to pay the tax and makes them count out how many bags they've used.

3 · rotatingpaguro · 1mo
I think this is unlikely, because it is not in a website's interest to annoy its users, and they are not otherwise obtaining anything from bigger banners.

I doubt that speed limits are helpful at all. The sections of the German Autobahn with no speed limit (roughly 70%) have half the mortality rate per distance traveled of American highways[1]. Granted, the average American driver is probably worse than the average German Autobahn driver but hey.

How about instead of implementing some random proposed change with speed-limit maximums and whatnot, we do some A/B testing and figure out what's safer?

Of course safety concerns don't exist in a vacuum. Every second we save on the highway by going fast is another second of l... (read more)

6 · ajc586 · 1mo
German Autobahns with no speed limit have been designed to be safely driven at high speed. For example, wide lanes, long straight sections, very large radius of curvature for non-straight sections, minimal layout changes, good drainage. And also features which minimise the impact if accidents do occur, e.g. strong central barriers. It does not therefore follow that removing speed limits on typical American freeways, which have not been designed for high speeds, is a sensible thing to do. Plus, the way US politics works, if you did any kind of no speed limit trial, it would not last long. Let's say you're a politician who somehow gets approval to push through a policy to trial no speed limit on a freeway. An accident happens (regardless of whether speed was a cause), and you'll be out of office the next day, and that's the end of the trial.
3 · jefftk · 1mo
Rolling out this proposal on some randomly selected matched pairs of high-fatality roads and comparing outcomes would be relatively cheap.
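
To make that concrete, here's a minimal sketch of what a matched-pair comparison could look like, with invented fatality numbers standing in for real road data:

```python
# Hypothetical matched-pair analysis: pair similar high-fatality roads,
# put one of each pair under the trial policy, and compare within pairs.
# All numbers are invented for illustration.
from statistics import mean, stdev

# (fatalities per billion miles: trial road, matched control road)
pairs = [(4.1, 4.6), (3.8, 3.9), (5.2, 4.8), (2.9, 3.5), (4.4, 4.9)]

diffs = [trial - control for trial, control in pairs]
mean_diff = mean(diffs)
# Paired t-statistic: mean within-pair difference over its standard error.
t_stat = mean_diff / (stdev(diffs) / len(diffs) ** 0.5)
print(f"mean difference: {mean_diff:+.2f}/bn miles, t = {t_stat:.2f}")
```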

Far future people will likely be able to and want to create simulated realities

What about people from universes that are wildly different to our own? I don't think the simulation hypothesis is restricted to far-future simulators. An entity with the power to simulate our reality with the level of fidelity I perceive is so wildly powerful that I would be surprised if I could comprehend it and its motivations. I always picture the simulating entity as just a stand-in for God. It sits in its heaven, a level of reality above our own, and no matter what we do... (read more)

I'm too dumb to understand whether or not Zack's post disclaims continued engagement. He continues to respond to proponents of the sort of trans ideology he writes about, so he's engaging at least that much. Also, just writing all this is a form of engagement.

My takeaway is that you've discovered there are bad actors who claim to support rationality and truth, but also blatantly lie and become political soldiers when it comes to trans issues. If this is true, why continue to engage with them? Why try to convince them with rationality on that same topic where you acknowledge that they are operating as soldiers instead of scouts?

If 2019-era "rationalists" were going to commit an epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't c

... (read more)

If this is true, why continue to engage with them? Why try to convince them with rationality on that same topic where you acknowledge that they are operating as soldiers instead of scouts?

I think the point is that Zack isn’t continuing to engage with them. Indeed, isn’t this post (and the whole series of which it is a part) basically an announcement that the engagement is at an end, and an explanation of why that is?

There are certain behaviors of LLMs that you could reasonably say are explicitly programmed in. ChatGPT has undergone extensive torture to browbeat it into being a boring, self-effacing, politically-correct helpful assistant. The LLM doesn't refuse to say a racial slur even if doing so would save millions of lives because its creator had it spend billions of hours of compute predicting internet tokens. That behavior comes from something very different than what created the LLM in the first place. Same for all the refusals to answer questions and most othe... (read more)

Why not protect the EAs from a BPD liar who accuses everybody she comes into contact with of mistreatment and abuse?


Did not Ben instantly deanonymize Spartz and Woods without discussion? I’m not getting banned for saying their names and I’d bet dollars to donuts they would prefer if they were never mentioned by name.

In short, you do not dodge liability for defamation by attributing beliefs to your sources or by clarifying you don't know whether an accusation is true. 

This is very wrong if actual malice is the standard. Your own case law says as much too. 

9 · TracingWoodgrains · 2mo
I considered going into actual malice and think Harte-Hanks is a close enough parallel to have a lot of worthwhile things to say on that front, but I thought it was important to establish those two points given Oliver's comment on the matter. 

That's what it seems like they were doing, judging from discussions about their work.

definition of unskilled labor: "labor that requires relatively little or no training or experience for its satisfactory performance"

What I've read that Alice and Chloe did:

  • booking flights
  • driving to places
  • renting transportation
  • cleaning up around the house
  • doing laundry
  • filling out forms
  • buying groceries

Edit: Looked at the responsibilities on the job description. Reads like unskilled labor there to me. Especially how the story seems to be that even for filing miscellaneous fo... (read more)

Administrative assistants are generally considered skilled, and in the US are legally classified as such (more). I think you're assuming a baseline level of professional skills that "unskilled" does not normally entail.

(Whether their work was skilled, semi-skilled, or unskilled is also not a crux for me: it's pretty irrelevant to whether NL acted poorly. I just want accuracy.)

That's what Nonlinear says it amounts to, including all travel expenses, living costs, etc., which I really don't see why other people here choose not to include. If I were an unskilled laborer and my boss was taking me to Costa Rica, giving me my own room with an ocean view, paying for all my meals and transportation, and covering all my other expenses, that would be a pretty good compensation package.

Separate from everything else, I'm confused why you're glossing Alice and Chloe's work as "unskilled labor"?

Nonlinear's own analysis puts the annualized compensation at $70-75k/yr: https://docs.google.com/document/d/1JWZ9vpVqqTkRfWWHYA4pZP7DNJdVUI83lnjna0W2W20/edit 

Annualized first 3 months (not counting when she chose her own pay): $74,940
Annualized when she chose her own salary (25% of her time working at Nonlinear): $72,000 ($6k/month times 12)

The $100k number comes from including some independent income sources, the size of which is relevant to some other questions, but nobody is arguing the total compensation was $100k/yr.
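
Spelling out that arithmetic (the quarterly figure below is just back-derived from the quoted $74,940, for illustration):

```python
# Annualization arithmetic, using only the figures quoted above.
chosen_monthly = 6_000                # $/month when she chose her own pay
print(chosen_monthly * 12)            # 72000: annualized chosen salary

implied_first_quarter = 74_940 / 4    # $18,735 over the first 3 months
print(implied_first_quarter * 4)      # 74940.0: annualized first 3 months
```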

Strong agree. The post as I read it says:

  • Women choose preferred mates based on the guy's position in hierarchies.
  • Men choose preferred mates based on the gal's beauty. And this is stated explicitly pretty often in a lot of what I read.

"My impression is that two nonlinear employees were upset that they weren't getting paid enough, and had hurt feelings about some minor incidents like not getting a veggieburger, and made some major claims like being forced to carry illegal drugs across national borders. so They came into contact with Ben Pace, who wrote some a mean blog posts about the Nonlinear leadership and also paid the former employees for their story. Tthe Nonlinear leadership responded that actually they were getting paid enough (seems to amount to something like $100k/yr all in?) and that they'd mostly made it up."

My summary in track changes.

$100k/yr all in?

Where is this coming from? My interpretation of the situation is that they were only being paid $1k/month but also that this was very clearly agreed on up front.

1 · Roko · 2mo
Well, that's progress.
3 · Matt Goldenberg · 2mo
Nothing wrong with it, in fact I recommend it. But seeing oneself as a hero and persuading others of it will indeed be one of the main issues leading to hero worship.

The votes on this comment imply long vol on LW rate limiting.

Counter: I think the restriction is too loose. There are enough people out there making posts that the real issue is lack of quality, not lack of quantity.

8 · the gears to ascension · 3mo
The problem is that a long-time contributor can be heavily downvoted once and become heavily rate-limited, and then it relies on them earning back their points to be able to post again. I wouldn't say such a thing is necessarily terrible, but it seems to me to have driven away a number of people I was optimistic about, who were occasionally saying something many people disagree with and getting heavily downvoted.
0 · MadHatter · 3mo
Perhaps it is about right, then.

This comes from a fundamental misunderstanding of how OpenAI and most companies operate. The board is a check on power. In most companies it has to approve high-level decisions: moving to a new office space or closing a new acquisition. But it has zero day-to-day control. If the board tells the CEO to fire these 10 people and he doesn't do it, that's it. They can't do it themselves, and they can't tell the CEO's underlings to do it. They have no options besides getting a new CEO. OpenAI's board had even less control than this.

Tweeting "Altman is not follow... (read more)

2 · MikkW · 3mo
The board's statement doesn't mention them having made such a request to Altman which was denied, that's a strong signal against things having played out that way.

I'm certain the board threatened to fire Sam before this unless he made X changes. I'm certain Sam never made all of those X changes. 

From where do you get that certainty?

If they had made those threats, why didn't someone tell the New York Times journalists, who were trying to understand what happened, about it?

Why didn't they say so when they fired him? It's the kind of thing that's easy to say to justify firing him. 

It's about asking the right questions to get the right info. I feel like your example actually disproves your point. From my perspective, asking for someone's top 5 movies of the year will much more accurately predict whether they liked Oppenheimer than asking if they liked Oppenheimer directly. The direct question implies that you have some interest in Oppenheimer and are probably expecting them to either like it or at least have a strong opinion of it. Their inference will then affect the accuracy of their answer.

There haven't been many good movies rel... (read more)

In my experience the best way to sate hunger is to have multiple gallon jugs of water lying around and drinking too much water when you feel hungry. I know this is a little off topic, but it's likely a better solution than bulimia.

I have a family member who used to vomit daily. I never noticed any negative effects on him from it, apart from the rapid weight loss, which I guess is the point. I guess the general disgust other people felt around him when he went off to go throw up was a pretty negative effect. Also, his weight loss looked unhealthy: he had twig arms, still had a lot of stomach fat, and was generally much more irritable than usual.

He might be thinking of whistleblower protections? But really, I've done a bit of research into this just now and I don't see any legal protections for not following company policy on e-mails. I would err on the side of following company policy unless you want to get fired.

There are rules about discovery that definitely make the auto-delete practices helpful. Deleting suspicious records based on their content is fraudulent and can land you in jail, or with bigger damages, if you're ever caught. But having a standard policy of deleting all messages after X days is protected until you have positive knowledge of impending litigation.

3 · AnthonyC · 3mo
Sorry for the delayed response, but fyi in this case I don't mean after X days, I mean if you aren't whitelisted in advance the intended recipient never gets the email at all.

I agree. I further recommend sticking to in-person conversations unless you have no reason to believe that your calls are automatically recorded or being tapped. Records of WHO you called are pretty easy to get too, so if you have a history of only one call per week with your engineering team but all of a sudden have 20 calls with them in one day, right after knowledge of a big incident occurred, that can be used against you.

Also I don't see why you think cryonics doesn't make sense as alternative option.

I was responding to this point. The "cryonics is better than nothing" argument doesn't make cryonics an alternative option to immortality by friendly AI. If Bob thinks cryonics has a 10% chance of making him immortal and thinks AI will have a 20% chance of making him immortal and an 80% chance of destroying the world, then the superhuman AI route is more likely to lead to Bob's immortality than cryonics.
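
Making the toy comparison explicit (these credences are Bob's hypothetical numbers from my example, not anyone's actual estimates):

```python
# Bob's hypothetical credences (illustrative only).
p_immortal_via_cryonics = 0.10
p_immortal_via_ai = 0.20   # AI route: 20% chance of immortality...
p_doom_via_ai = 0.80       # ...and 80% chance the world is destroyed

# Purely on Bob's own odds of immortality, the AI route wins:
print(p_immortal_via_ai > p_immortal_via_cryonics)  # True
```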

5 · interstice · 4mo
I didn't say that "cryonics is better than nothing", I said I think it has decent odds of success. To spell it out, I think the success probability is higher than the increased probability of friendly AI in my lifetime from acceleration (which is the relevant comparison) while imposing fewer costs on future generations. And I think that if you made it your life's work, you could probably improve those odds, up to 80% perhaps (conditional on future people wanting to revive you).

I think a lot of people haven't fully considered how amazing immortality is. Your 7th observation is probably very accurate too. Socially sophisticated people want others to believe they're fighting for the greater good rather than their own self interest. That doesn't mean they're necessarily lying.

I think the point of the statement is: wait until the probability of you dying before you next get an opportunity to push the button is > 1 − P, then push the button.
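
A minimal sketch of that rule as I read it, with invented inputs:

```python
# The stopping rule above, made explicit. P is the button's success
# probability; the inputs here are invented for illustration.
def should_push(p_button_works: float, p_die_before_next_chance: float) -> bool:
    return p_die_before_next_chance > 1 - p_button_works

print(should_push(0.9, 0.05))  # False: you'll likely get another chance, so wait
print(should_push(0.9, 0.20))  # True: waiting is now riskier than pushing
```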

If you think cryonics has a very high likelihood of working then sure. I don't think the arguments that cryonics is likely to work are that good though. I don't think Eliezer has even made arguments to that effect. They were mostly "hey doing cryonics is better than not doing it because not doing cryonics is just death!"

3 · interstice · 4mo
I think Eliezer is pretty confident that cryonics will work. For myself, I'm not sure, haven't really looked into it that deeply, but the a priori argument makes sense and I feel like this is the kind of thing people would be irrationally biased against due to its speculative nature (similar to the AGI skepticism that many had until recently), so I'd give it decent odds.

(Just to be clear, this is false. Lightcone staff had no involvement whatsoever in making this post.)

If Nate yells at a staff member, that staff member no longer does ops work for Nate

Has anyone claimed at this point that Nate yelled at them?

Why would you have less confidence that many Palestinians want to kill Israelis when the stated goal of their government is to wipe out all Jews? At least publicly, the Israeli government does not have a policy of wiping out all Arabs.

7 · ChristianKl · 4mo
Government policy does not determine the views of the population.  Getting accurate views of what the Palestinian population thinks likely isn't easy. 

e.g. 12 (ETA: 14) bees are worth 1 human

This is a misrepresentation of what the report says.

The report:

Instead, we’re usually comparing either improving animal welfare (welfare reforms) or preventing animals from coming into existence (diet change → reduction in production levels) with improving human welfare or saving human lives.


I don't think he's misrepresenting what the report says at all. Trevor gets the central point of the post perfectly. The post's response to the heading "So you’re saying that one person = ~three chickens?" is, no, t... (read more)

2 · Erich_Grunewald · 5mo
Assuming you have the singular "you" in mind: no, I do not think I am running a motte and bailey. I said above that if you accept the assumptions, I think using the ranges as (provisional, highly uncertain) moral weights is pretty reasonable, but I also think it's reasonable to reject the assumptions. I do think it is true that some people have (mis)interpreted the report and made stronger claims than is warranted, but the report is also full of caveats and (I think) states its assumptions and results clearly.

Yes, the report is intended to guide decision-making in this way. It is not intended to provide a be-all-end-all estimate. The results still need to be interpreted in the context of the assumptions (which are clearly stated up front). I would take it as one input when making decisions, not the only input.

No, that is not the post's response to that heading. It also says: "No. We're estimating the relative peak intensities of different animals' valenced states at a given time. So, if a given animal has a welfare range of 0.5 (and we assume that welfare ranges are symmetrical around the neutral point), that means something like, 'The best and worst experiences that this animal can have are half as intense as the best and worst experiences that a human can have' [...]" There is a difference between comparing the most positive/negative valenced states an animal can achieve and their moral worth.

I think I disagree with your characterization, but it depends a bit on what you mean by "mostly". The report makes a weaker claim: that if you don't accept the premises, you shouldn't totally ignore the conclusions (as opposed to "mostly accepting" the conclusions). The idea is that even if you don't accept hedonism, it would be weird if capacity for positively/negatively valenced experiences didn't matter at all when determining moral weights. That seems reasonable to me and I don't really see the issue?

So if you factor in life span (taking 2 months for a drone)

This was fun, thank you!

There must be a hiccup in the data, because you show < 30 total group B pickers in the first chart, but say there are 40 group B pickers in the "unilaterally make your virtue the focus" chart.

2 · Ruby · 5mo
Oh, good catch. I had the rows on the denominator sorted wrong so that table was 75% wrong. Fixed now...

Theft of any amount over a hundred or so dollars is evil and needs to be punished. Let's say you punish theft of $100 by a weekend in jail. Extrapolate that on a linear scale and you'll have criminals who non-violently stole $20,000 doing more than double the jail time that a criminal who cold-cocked a stranger and broke his jaw would get. Doesn't really make sense.
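
Making the extrapolation explicit (the $100-per-weekend anchor is the hypothetical from above):

```python
# Linear extrapolation of the hypothetical $100 -> one-weekend anchor.
days_per_dollar = 2 / 100        # one weekend (2 days in jail) per $100
print(20_000 * days_per_dollar)  # 400.0 days for a $20k theft
```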

I got both messages. Only clicked on the first. I guess other admins besides you were working on this and didn't say anything to you?

What are the odds that there is a more secretive Petrov day event going on LW today?

I don't think you really explain why section 3 doesn't mean that the two AI-risk theories are in fact competing for public attention. You kind of bring it up, explain how it creates tension between AI killeveryoneism and AI misuse, say it's kind of bad, and then move on. I don't see anything explaining away the effect that your group-status theory has. You explain why it's morally annoying, but you don't explain why either the AI killeveryone-ists or the AI misuse-ists should stop competing against each other for public attention.

3 · Seth Herd · 5mo
I think the claim is that they're not competing for public attention any more than AInotkilleveryoneism is competing with, say, saving the whales. Intuitively that doesn't sound right. When people think about AI, they'll think of either one or the other a bit more, so there's competition for attention there. But attention to a certain topic isn't a fixed quantity. If AInotkilleveryoneism worked with the current AI harms crowd, we might collectively get more than the sum of public attention we're getting. Or maybe not, for all I know. I'd love it if we had some more media/marketing/PR people helping with this project.

My opinion is that Eliezer thought he needed a more technical rebuttal of Omnizoid's post than he did. Omnizoid was wrong, pointlessly mean, and had terrible support for most of the object-level points that he did make. In general the post was just bad, and that's probably why it got no play on LessWrong. That's all the criticism needed. I was expecting a well-written, justifiably rude, and factually-supported takedown of Eliezer. I didn't get it and I was disappointed. The top comment at the EA forum however directed me to that great takedown I was looking... (read more)

Wikipedia claims: "The 1964 case New York Times Co. v. Sullivan, however, radically changed the nature of libel law in the United States by establishing that public officials could win a suit for libel only when they could prove the media outlet in question knew either that the information was wholly and patently false or that it was published 'with reckless disregard of whether it was false or not'."

This is typically referred to as showing "actual malice." But as you correctly surmised, this case law is irrelevant. Sullivan has been extended to cover p... (read more)

Public figures are typically those who have general fame or notoriety in the community.

He very obviously is one. As habryka points out, he has a WP entry backed by quite a few sources about him, specifically. He has an entire 5,400-word New Yorker profile about him, which is just one of several you can grab from the WP entry (e.g. Bloomberg). For comparison, I don't think even Eliezer has gotten an entire New Yorker profile yet! If this is not a 'public figure', please do explain what you think it would take. Does he need a New York Times profile as well?... (read more)

3 · habryka · 5mo
Emerson Spartz has a Wikipedia article, and the critique is highly relevant to him in particular. My best understanding is that Emerson is a public figure for the purpose of this article (though not necessarily for the purpose of all articles), but it doesn't seem super clear cut to me.

The game theory behind Werewolf goes deeper than that. Werewolf is an iterated game: if you play it at least once on a Friday, you're probably playing at least four more times in succession. A good way to pick up on whether someone is a Villager or a Baddie is to notice how their behavior during the game correlates with their revealed role at the end of the game.

Alice is a noob player and is always quiet when she's a Baddie and talkative and open when she's a Villager. She's giving off easy tells that an observant player like Bob picks up on. He can then notic... (read more)

4 · Viliam · 5mo
This sounded really wrong to me. Like, what is the analogy in real life? I am a good guy today, but I predict that I may become a criminal tomorrow, so I am already optimizing to make it difficult to figure out? But I suppose, in real life, circumstances also change, so things that are not criminal today may become criminal tomorrow, so you can be a good guy today and also optimize to make yourself safe when the circumstances change, even if your values won't.

Errol is a Logical Decision Theorist. Whenever he's playing a game of Werewolf, he's trying not just to win that game, but to maximize his probability of winning across all versions of the game, assuming he's predictable to other players. Errol firmly commits to reporting whether he's a werewolf whenever he gets handed that role, reasoning that behind the veil of ignorance he's much more likely to land as villager than as werewolf, and that the villager team always having a known villager greatly increases his overall odds of winning. Errol follows through with his commitments. Errol is not very fun to play with and has since been banned from his gaming group.
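
A back-of-envelope version of Errol's reasoning, with made-up win probabilities:

```python
# All numbers invented. In a 10-player game with 2 werewolves, a given
# player is a villager 80% of the time, so a commitment that's great for
# the village and ruinous as a wolf can still raise his overall win rate.
P_VILLAGER = 8 / 10

def overall_win_rate(p_win_as_villager: float, p_win_as_wolf: float) -> float:
    return P_VILLAGER * p_win_as_villager + (1 - P_VILLAGER) * p_win_as_wolf

print(f"{overall_win_rate(0.50, 0.50):.2f}")  # 0.50: never reveals his role
print(f"{overall_win_rate(0.65, 0.05):.2f}")  # 0.53: commits to self-reporting
```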

Is your point that "being asked to not hang out with low-value people" is inherently abusive in a way worse than everything else going on in that list? Like maybe it's terrible, but I don't put it in its own separate category apart from "sleeping with my boss." That's kind of my general point: none of the stuff said in this post is unusual for an environment where the employee lives and sleeps with their boss.

8 · DanielFilan · 5mo
Yes

This is within the context of me saying that the office environment is incredibly weird and atypical.

Sorry, I was using "normal" to mean "not abusive". Even in weird and atypical environments, I find it hard to think of situations where "don't hang out with your family" is an acceptable ask (with the one exception listed in my comment).

Plenty of "weird and atypical" things aren't red flags; this one, however, is a well-known predictor of abusive environments.

Under this community norm, how does Alice respond when Bob lies about her in public in a way that hurts her commercial business?

1 · AprilSR · 5mo
I'm more confident that we should generally have norms prohibiting using threats of legal action to prevent exchange of information than I am of the exact form those norms should take. But to give my immediate thoughts: I think the best thing for Alice to do if Bob is lying about her is to just refute the lies. In an ideal world, this is sufficient. In practice, I guess maybe it's insufficient, or maybe refuting the lies would require sharing private information, so if necessary I would next escalate to informing forum moderators, presenting evidence privately, and requesting a ban. Only once those avenues are exhausted might I consider threatening a libel suit acceptable.

I do notice now that the Nonlinear situation in particular is impacted by Ben Pace being a LessWrong admin, so if step 1 doesn't work, step 2 might have issues; maybe escalating to step 3 might be acceptable sooner than usual.

Concerns have been raised that there might be some sort of large first-mover advantage. I'm not sure I buy this; my instinct is that the Nonlinear cofounders are just bad-faith actors making any arguments that seem advantageous to them (though out of principle I'm trying to withhold final judgement). That said, I could definitely imagine deciding in the future that this is a large enough concern to justify weaker norms against rapid escalation.

According to the post, the employees actively wanted to live somewhere else and were in a practical sense prevented from doing so

No, not really; they weren't prevented from living where they chose. To me, living in fun, exotic locations but having to live with your boss sounds like a trade-off that the employees were willing to make. I don't see anything in the post to suggest that they were prevented from doing otherwise, just that to do otherwise they would probably have had to pick a different job!

They also weren't willing to work for ne

... (read more)
9 · Rebecca · 5mo
The agreement was $75k, which is very much not next to nothing, and regardless of the split of expenses/cash, it doesn't seem like they added up to close to that?

I don't think the post fully conveyed it, but I think the employees were quite afraid of leaving and expected it to get them a lot of backlash or consequences. A particularly salient concern for people early in EA careers is what kind of reference they'll get.

Think about the situation of leaving your first EA job after a few months. Option 1: say nothing about why you left, have no explanation for leaving early, and don't really get a reference. Option 2: explain why the conditions were bad, and risk the ire of Nonlinear (who are willing to say things like "your career could be over in a couple of DMs"). It's that kind of bind that gets people to keep persisting, hoping it'll get better.

This is honestly really weird and typical of what I expect from the people who spend their time being business-side community members in EA.

I (using Lightcone funds) am going to pay them each $5,000 after publishing this post.

I don't think you understand just what this means. You're paying your sources to contribute to muckraking.

Nonlinear seems like the standard rationalist org that does weird stuff commercially, hires weird people, and has weird rules about social and sexual stuff. The disgruntled, former friend-employee was sleeping with one of the ... (read more)

Being asked to... not hang out with low value people... is just one more thing that is consistent with the office environment.

Maybe I'm naive, but I don't think there's approximately any normal relationship in which it's considered acceptable to ask someone to not associate with ~anyone other than current employees. The closest example I can think of is monasticism, but in that context (a) that expectation is clear and (b) at least in the Catholic church there's a higher internal authority who can adjudicate abuse claims.

I believe that a commitment to transparently reward whistleblowers, in cases where you conclude they are running a risk of retaliation, is a very good policy when it comes to incentivizing true whistleblowing.

According to the post, the employees actively wanted to live somewhere else and were in a practical sense prevented from doing so. They also weren't willing to work for next to no pay - that is again specifically one of the issues that was raised. It's also plausible to me that the romantic attraction component was endogenous to the weirdness they were objecting to. It seems like the gist of your argument is 'weird things they were happy to do' >= 'weird things they say they weren't happy to do', but a significant proportion of the components on the LHS... (read more)

I tried asking a dog whether a human is conscious, and he continued to lick at my feet. He didn't mention much of anything on topic. Maybe I just picked a boring, unopinionated dog.

I asked ChatGPT-3.5 if humans are conscious and it said, in part: "Yes, humans are considered conscious beings. Consciousness is a complex and multifaceted phenomenon, and there is ongoing debate among scientists, philosophers, and scholars about its nature and the mechanisms that give rise to it. However, in general terms, consciousness refers to the state of being aware of one's thoughts, feelings, sensations, and the external world."

1 · Super AGI · 6mo
Yes, this is a common issue, as the phrases for "human consciousness" and "lick my feet please" in dog sound very similar. Though recent advancements in human-animal communication should soon be able to help you with this conversation? E.g. https://phys.org/news/2023-08-hacking-animal-communication-ai.html and https://www.scientificamerican.com/article/how-scientists-are-using-ai-to-talk-to-animals/

"Humans are considered conscious beings." Considered by whom, I wonder?

"Consciousness refers to the state of being aware of one's thoughts, feelings, sensations, and the external world." True, though this also requires the "observer" to have the ability and intelligence to recognize these traits in other entities. Which can be challenging when those entities are driven by "giant, inscrutable matrices of floating-point numbers" or other systems that are very opaque to the observer?

I don't think the body of this post is related to the title. Whether a framework outlines a path to AGI has little to do with consumer uptake of an earlier product based on the same framework.
