Purplehermann


At what IQ do you think humans are able to "move up to higher levels of abstraction"? 

(Of course this assumes AIs don't get the capability to do this themselves)

Re robotics advancing while AI intelligence stalls: advances in robotics alone should be enough to replace anyone who can't take advantage of the automation of their current job.

 

I don't think you're correct in general, but it seems that automation will clear out at least the less skilled jobs in short order (decades at most).

I very much hope the computers brought in were vetted and kept airgapped.

You keep systems separate, yes. 

For some reason I assumed that write permissions were restricted per user in the actual system/secure network, and that any data exporting would be into secured systems. If they created a massive security leak for other nations to exploit, that's a crux for me on whether this was reckless.

 

Added: what kind of idiot purposely puts data in the wrong system? The DOGE guys doing this could somehow make sense, but government workers??

I know people who have gotten access to similarly important governmental systems at younger ages. 

Don't worry about it too much. 

 

If they abuse it, it'll cost their group lots of political goodwill. (A recursive remove, for example.)

Musk at least is looking to upgrade humans with Neuralink.

If he can add working memory, it could be a multiplier for human capabilities, one likely to scale with increased IQ.

 

Any reason the $4M isn't getting funded?

Any good, fairly up-to-date lists of the relevant papers to read to catch up with AI research (as far as a crash course will take a newcomer)?

 

Preferably one that will be updated

Reading novels featuring ancient, powerful beings is probably your best bet for imagining how status games look among creatures that are only loosely human.

 

Since resources are bounded, there will tend to be larger numbers of smaller objects (given that those objects are stable).

There will be tiers of creatures. (In a society where this is all relevant)

While a romantic relationship skipping multiple tiers wouldn't make sense, skipping a single tier might.

 

The rest of this is my imagination :)

Base humans will be F tier, the lowest category while being fully sentient. (I suppose dolphins and similar would get a special G tier).

Basic AGIs (capable of everything a standard human is, plus all the spiky capabilities) and enhanced humans are E tier.

Most creatures will be here.

D tier:

Basic ASIs and super-enhanced humans (gene modding for 180+ IQ plus SOTA cyborg implants) will be the next tier. There will be a fair number of these in absolute terms, but they will be rarer relative to the earlier tier.

C tier:

Then come alien intelligences: massive compute resources supporting ASIs trained on immense amounts of ground-reality data, and biological creatures that have been fundamentally redesigned to function at higher levels and to synergize optimally with neural connections (whether to other carbon-based or silicon-based lifeforms).

B tier:

Planet sized clusters running ASIs will be a higher tier.

A, S tiers:

Then you might get entire stars, then galaxies.

There will be far fewer at each successive level.

 

Most tiers will have a -, neutral or +.

-: a prototype, first or early version. Qualitatively smarter than the tier below, but with non-optimized use of resources; often not a large gap from the + of the tier below.

Neutral: most low-hanging optimizations and improvements at this tier are implemented, along with some harder ones.

+: highly optimized by iteratively improved intelligences or groups of intelligences at this level, perhaps even by a tier above. 

Writing tests, QA, and observability are probably going to stick around for a while and work hand in hand with AI programming, as other forms of programming start to disappear. At least until AI programming becomes very reliable.

This should allow working code to be produced far faster, likely yielding more high-quality 'synthetic' data, but more importantly massively changing the economics of knowledge work.
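A minimal sketch of the workflow this implies: humans keep writing the tests, which act as the acceptance gate for AI-written implementations. The function name and body here are hypothetical stand-ins, not from any real system.

```python
# Sketch, assuming a workflow where a human writes tests first and an AI
# coding assistant supplies the implementation that must pass them.

def ai_generated_slugify(title: str) -> str:
    # Stand-in for a body produced by an AI assistant (hypothetical).
    return "-".join(title.lower().split())

# The human-authored tests are the acceptance gate: the AI's code only
# ships if these assertions pass.
def test_slugify():
    assert ai_generated_slugify("Hello World") == "hello-world"
    assert ai_generated_slugify("  spaced   out  ") == "spaced-out"

test_slugify()
print("all tests passed")
```

If the model's output fails the gate, it is regenerated rather than hand-debugged; the human effort concentrates in the test suite and in observability, matching the division of labor described above.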

Is there a reason that random synthetic cells will not be mirror cells?
