HiroSakuraba

Posts


Wikitag Contributions

Comments

My AGI timeline updates from GPT-5 (and 2025 so far)
HiroSakuraba25d8-2

OpenAI delivering an iterative update rather than a revolutionary one has lengthened many people's timelines. My take is that this incentivizes many more players to compete for the frontier. xAI's Grok has gone from non-existent in 2022 to a leading model, and improvements to its newest version roll out far more frequently than at other leading companies. Nvidia has also recently begun releasing larger open-source models along with the accompanying datasets. Meta is another player that is now all-in. The failure of Llama and the moderate updates from OpenAI likely pushed Zuckerberg into realizing that his favored approach of relentless A/B testing at scale could work. Twenty-nine billion dollars for new datacenters and huge payouts for top minds are a beacon for sovereign wealth and hedge funds to notice that the science-fiction reality is now here. When the prize is up for grabs, much more capital will be thrown into the arena than if the winner were a foregone conclusion.

So, my timelines have shortened due to market sentiment and dawning realizations rather than improving benchmarks. While tech stocks may fall, bubbles may burst, and benchmarks could stagnate, I still believe the very idea of taking the lead in AGI trumps all.

Dear Paperclip Maximizer, Please Don’t Turn Off the Simulation
HiroSakuraba2mo30

Thank you for writing this up. I agree with just about everything said.

The Office of Science and Technology Policy put out a request for information on A.I.
HiroSakuraba2y10

The majority of my response focuses on reducing our systems' exposure to vulnerabilities. As a believer in the power of strong cryptography no matter the intelligence involved, I am going to explain the value of removing or spinning down the NSA/CIA programs of back doors, zero-day exploits, and intentional cryptographic weaknesses that have been introduced into our hardware and software infrastructure.

The inordinately slow spread of good AGI conversations in ML
HiroSakuraba2y20

Do you still hold this belief?

The inordinately slow spread of good AGI conversations in ML
HiroSakuraba3y10

Language is one defining aspect of intelligence in general and human intelligence in particular. That an AGI wouldn't utilize the capabilities of LLMs doesn't seem credible. The cross-modal use cases for visual-perception improvements (self-supervised labeling, pixel-level segmentation, scene interpretation, causal inference) can be seen in recent ICLR/CVPR papers. The creation of github.com/google/BIG-bench should lend some credence to the idea that many leading institutions see a path forward with LLMs.

The inordinately slow spread of good AGI conversations in ML
HiroSakuraba3y1-9

If current trends hold, then large language models will be a pillar in creating AGI, which seems uncontroversial. I think one can then argue that, after ignoring the circus around sentience, we should focus on the new current capabilities: reasoning, and intricate knowledge of how humans think and communicate. From that perspective there is now a much stronger argument for manipulation. Outwitting humans prior to having human-level intelligence is a major coup. PaLM's ability to explain a joke and reason its way to understanding the collective human psyche is as impressive to me as AlphaFold.

The goalposts have been moved on the Turing test, and it's important to point that out in any debate. Those in the liberal A.I.-bias community have to at least acknowledge the particular danger of LLMs that can reason their way through positions. Naysayers need to have it thrown in their faces that these lesser, non-sentient neural networks trained on a collection of words will eventually out-debate them.

Contra Hofstadter on GPT-3 Nonsense
HiroSakuraba3y210

It is bizarre watching people like him and Gary Marcus embarrass themselves. It's like watching Boomers saying they'll never see another shooter as good as Larry Bird, all while Steph Curry is blasting through every NBA record of note.

AI Could Defeat All Of Us Combined
HiroSakuraba3y32

As autonomous agents become more widespread, the risk becomes more obvious. Dyson and Tesla will be pumping out androids in 8 years. Starlink will provide the bandwidth to control any internet-connected IoT device. Zero-day security back doors are discovered every month, and that doesn't even include the ones intentionally added by nation states. I personally thought the wake-up call would happen with the deployment of Ghost drones (not fully autonomous, but they easily could be). Max Tegmark's #banslaughterbots campaign is going nowhere as drones continue to show combat effectiveness. At this point, the proverbial water is going to be near boiling and many people will still say, "The sauna doesn't have enough jets."

AGI Safety FAQ / all-dumb-questions-allowed thread
HiroSakuraba3y70

Is anyone at MIRI or Anthropic creating diagnostic tools for monitoring neural networks? Something that could distinguish bit-flip errors from errors of logic, and eventually detect evidence of deception.
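The bit-flip half of this question is at least mechanically tractable. A minimal sketch (hypothetical, not from any actual MIRI or Anthropic tool) of fingerprinting a model's weights, so that silent hardware corruption can be told apart from errors of logic, which leave the stored bytes intact:

```python
import hashlib
import struct

def weight_checksum(weights):
    """SHA-256 fingerprint of a flat sequence of float32 weights.

    Any bit-flip in the stored weights changes the digest, so comparing
    against a baseline taken at load time flags hardware corruption.
    Logic errors don't alter the bytes, so the digest stays the same.
    """
    h = hashlib.sha256()
    for w in weights:
        h.update(struct.pack("<f", w))
    return h.hexdigest()

# Fingerprint the weights once at load time...
baseline = [0.5, -1.25, 3.0]
fingerprint = weight_checksum(baseline)

# ...then periodically re-hash; a mismatch means corrupted memory.
corrupted = [0.5, -1.25, 3.0001]  # simulating a flipped bit in one weight
assert weight_checksum(corrupted) != fingerprint
assert weight_checksum(baseline) == fingerprint
```

In practice one would hash per-layer chunks rather than the whole model, so a mismatch also localizes the corruption; detecting deception is, of course, an entirely harder problem than either.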

AGI Ruin: A List of Lethalities
HiroSakuraba3y2-1

It's unfortunate we couldn't have a Sword of Damocles dead man's switch in case of AGI-led demise: a world-ending asteroid positioned to strike in the event of "all humans falling over dead at the same time." At least that would spare the Milky Way and Andromeda's possible future civilizations. A radio beacon warning against building intelligent systems would be beneficial as well. "Don't be this stupid," written in the glowing embers of our solar system.

No wikitag contributions to display.
60 · The Office of Science and Technology Policy put out a request for information on A.I. · 2y · 4
7 · A discussion of the paper, "Large Language Models are Zero-Shot Reasoners" · 3y · 0