On a functional level it's the fastest and most convenient input method when you're not in front of a proper keyboard - which, even for me, is a big chunk of my life 🙃.
Taking a voice note allows me to quickly close the open loop of a stray thought or a new idea and go on with my life - now mentally unburdened.
Another important aspect is that taking voice notes evokes a certain experience of "fluency" for me.
When was the last time you (intentionally) used your caps lock key?
No, seriously.
Here is a typical US-layout qwerty (mac) keyboard. Notice:
Remap your caps lock key.
I have mine mapped to escape.
Modifier keys such as control or command are also good options (you could then map control/command to escape).
How do I do this you ask?
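If you're on a Mac, the Modifier Keys pane in the keyboard settings can do this directly. For a scriptable route, here is a minimal sketch (assuming macOS and its built-in hidutil tool; 0x700000039 and 0x700000029 are the Caps Lock and Escape usage IDs from the USB HID usage tables):

```python
# Minimal sketch: remap Caps Lock -> Escape on macOS via the built-in hidutil.
# The mapping only lasts until reboot, so rerun it at login (e.g. from a
# launch agent) if you want it to stick.
import subprocess

MAPPING = (
    '{"UserKeyMapping":[{"HIDKeyboardModifierMappingSrc":0x700000039,'
    '"HIDKeyboardModifierMappingDst":0x700000029}]}'
)

subprocess.run(["hidutil", "property", "--set", MAPPING], check=True)
```

On other platforms the equivalent is typically a one-line config (e.g. setxkbmap's caps:escape option on Linux).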
Thanks to Rudolf for introducing me to this idea.
I've had caps lock remapped to escape for a few years now, and I also remapped a bunch of symbol keys like parentheses to be easier to type when coding. On other people's computers it is slower for me to type text with symbols or to use vim, but I don't mind, since all of my deeply focused work (when the mini-distraction of reaching for a difficult key is most costly) happens on my own computers.
I have a lot of ideas about AGI/ASI safety. I've written them down in a paper and I'm sharing the paper here, hoping it can be helpful.
Title: A Comprehensive Solution for the Safety and Controllability of Artificial Superintelligence
Abstract:
As artificial intelligence technology rapidly advances, Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) are likely to be realized in the future. Highly intelligent ASI systems could be manipulated by malicious humans or independently evolve goals misaligned with human interests, potentially leading to severe harm or even human extinction. To mitigate the risks posed by ASI, it is imperative that we implement measures to ensure its safety and controllability. This paper analyzes the intellectual characteristics of ASI and three conditions for ASI to cause catastrophes (harmful goals, concealed intentions,...
I agree, it takes extra effort to make the AI behave like a team of experts.
Thank you :)
Good luck sharing your ideas. If things aren't working out, try changing strategies. Maybe instead of giving people a 100-page paper, tell them the one idea you think is "the best," and focus on that. Add a little note at the end: "By the way, if you want to see many other ideas from me, I have a 100-page paper here."
Maybe even think of different ideas.
I cannot tell you which way is better; just keep trying different things. I don't know what is right, because I'm also having trouble sharing my ideas.
See livestream, site, OpenAI thread, Nat McAleese thread.
OpenAI announced (but isn't yet releasing) o3 and o3-mini (skipping o2 because of telecom company O2's trademark). "We plan to deploy these models early next year." "o3 is powered by further scaling up RL beyond o1"; I don't know whether it's a new base model.
o3 gets 25% on FrontierMath, smashing the previous SoTA. (These are really hard math problems.[1]) Wow. (The dark blue bar, about 7%, is presumably one-attempt and most comparable to the old SoTA; unfortunately OpenAI didn't say what the light blue bar is, but I think it doesn't really matter and the 25% is for real.[2])
o3 also is easily SoTA on SWE-bench Verified and Codeforces.
It's also easily SoTA on ARC-AGI, after doing RL on the public ARC-AGI...
I would say that, barring strong evidence to the contrary, this should be assumed to be memorization.
I think that's useful! LLMs obviously encode a ton of useful algorithms and can chain them together reasonably well.
But I've tried to get those bastards to do something slightly weird and they just totally self-destruct.
But let's just drill down to demonstrable reality: if past SWE benchmarks were correct, these things should be able to do incredible amounts of work more or less autonomously, and yet all the LLM SWE replacements we've seen have stuck to high...
Terrence Deacon's The Symbolic Species is the best book I've ever read on the evolution of intelligence. Deacon somewhat overreaches when he tries to theorize about what our X-factor is; but his exposition of its evolution is first-class.
Deacon makes an excellent case—he has quite persuaded me—that the increased relative size of our frontal cortex, compared to other hominids, is of overwhelming importance in understanding the evolutionary development of humanity. It's not just a question of increased computing capacity, like adding extra processors onto a cluster; it's a question of what kind of signals dominate, in the brain.
People with Williams Syndrome (caused by deletion of a certain region on chromosome 7) are hypersocial, ultra-gregarious; as children they fail to show a normal fear of adult strangers. WSers are...
9 years since the last comment - I'm interested in how this argument interacts with GPT-4 class LLMs, and "scale is all you need".
Sure, LLMs are not evolved in the same way as biological systems, so the path towards smarter LLMs isn't fragile in the way brains are described in this article, where maybe the first augmentation works but the second leads to psychosis.
But LLMs are trained on writing done by biological systems with intelligence that was evolved with constraints.
So what does this say about the ability to scale up training on this human data in an attempt to reach superhuman intelligence?
In this post, I propose an idea that could improve whistleblowing efficiency, thus hopefully improving AI safety by helping unsafe practices get discovered marginally faster.
I'm looking for feedback, ideas for improvement, and people interested in making it happen.
It has been proposed before that it's beneficial to have an efficient and trustworthy whistleblowing mechanism. The technology that makes this possible has become easy and convenient. For example, here is Proof of Organization, built on top of ZK Email: a message board that allows people who own an email address at their company's domain to post without revealing their identity. And here is an application for ring signatures using GitHub SSH keys, which lets you create a signature proving that you own one of the keys in any subgroup you define...
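To make the ring-signature property concrete, here is a toy sketch of the underlying idea (an AOS-style ring signature over integers mod a prime). It is not the ZK Email or GitHub-SSH-key construction linked above; the group parameters, function names, and sizes are illustrative assumptions only, not vetted for real-world use:

```python
# Toy sketch of a ring signature (AOS-style): the signature verifies against a
# *set* of public keys without revealing which member signed. Illustrative
# parameters only; not the actual ZK Email / GitHub SSH construction.
import hashlib
import secrets

P = 2**255 - 19   # prime modulus (toy choice)
Q = (P - 1) // 2  # exponents are reduced mod Q; since G is a square, G^Q = 1 mod P
G = 4             # generator: a quadratic residue mod P

def h(*parts) -> int:
    """Hash arbitrary parts to a challenge in [0, Q)."""
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q)
    return x, pow(G, x, P)  # (secret key, public key)

def ring_sign(msg, pubkeys, signer_idx, signer_secret):
    n = len(pubkeys)
    s, c = [0] * n, [0] * n
    u = secrets.randbelow(Q)
    c[(signer_idx + 1) % n] = h(msg, pow(G, u, P))
    # Simulate every ring member except the real signer.
    i = (signer_idx + 1) % n
    while i != signer_idx:
        s[i] = secrets.randbelow(Q)
        commit = pow(G, s[i], P) * pow(pubkeys[i], c[i], P) % P
        c[(i + 1) % n] = h(msg, commit)
        i = (i + 1) % n
    # Close the ring using the real secret key.
    s[signer_idx] = (u - c[signer_idx] * signer_secret) % Q
    return c[0], s

def ring_verify(msg, pubkeys, sig):
    c0, s = sig
    c = c0
    for i, y in enumerate(pubkeys):
        c = h(msg, pow(G, s[i], P) * pow(y, c, P) % P)
    return c == c0

# Any one of three "employees" can post a message signed by the whole ring.
keys = [keygen() for _ in range(3)]
pubs = [pk for _, pk in keys]
sig = ring_sign("unsafe practice observed", pubs, 1, keys[1][0])
assert ring_verify("unsafe practice observed", pubs, sig)
```

The point is the verification loop: it only checks that the hash chain closes over the whole set of public keys, so a verifier learns that some member of the ring signed, but not which one.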
Once I talked to a person who said they were asexual. They were also heavily depressed and thought about committing suicide. I repeatedly told them to eat some meat, as they were vegan for many years. I myself had experienced veganism-induced depression. Finally, after many weeks they ate some chicken, and the next time we spoke, they said that they were no longer asexual (they never were), nor depressed.
I was vegan or vegetarian for many consecutive years. Vegetarianism was manageable, perhaps because of cheese; I never hit the extreme low points that I did with veganism. I remember that once, after not eating meat for a long time, there was a period of maybe a week where I got extremely fatigued. I took 200mg of modafinil[1] without having built up any tolerance. Usually, this would give me a lot...
After a very, very cursory Google search I wasn't able to find any (except in some places in Singapore). I'd be interested to know whether this is available at all in the US.
Six months ago, I was a high school English teacher.
I wasn’t looking to change careers, even after nineteen sometimes-difficult years. I was good at it. I enjoyed it. After long experimentation, I had found ways to cut through the nonsense and provide real value to my students. Daily, I met my nemesis, Apathy, in glorious battle, and bested her with growing frequency. I had found my voice.
At MIRI, I’m still struggling to find my voice, for reasons my colleagues have invited me to share later in this post. But my nemesis is the same.
Apathy will be the death of us. Indifference about whether this whole AI thing goes well or ends in disaster. Come-what-may acceptance of whatever awaits us at the other end of the glittering path. Telling ourselves...
I wonder how you react to naysayers who say things like:
How about if you solve a ban on gain-of-function research first, and then move on to much harder problems like AGI? A victory on this relatively easy case would result in a lot of valuable gained experience, or, alternatively, allow foolish optimists to have their dangerous optimism broken over shorter time horizons.
There are two things to keep in mind:
1. It's only now that LLMs are reasonably competent at at least some hard problems, and in any case I expect RL to basically solve the domain, because of its verifiability properties combined with quite a bit of training data.
2. We should wait a few years: another scale-up is coming, and it will probably be quite a jump from current AI due to more compute:
https://www.lesswrong.com/posts/NXTkEiaLA4JdS5vSZ/?commentId=7KSdmzK3hgcxkzmPX