Kabir Kumar

Rather than this, there should just be a better karma system, imo.
One way to improve it: put the voting buttons for comments on the opposite side from the username.

This is very useful, thank you.
Something that might be interesting to add at the end of surveys such as these:
"How much has this survey changed your mind on things?" Sometimes just being asked a question about something can change your mind on it; it would be interesting to see whether that happens here, and by how much.

Clickbait still works here, just with a different language. 

Cons: Humans are opaque. Even from our inside view, it is very difficult to understand how they work, and very hard to modify. They are also the most difficult to talk about rigorously. There is also the failure mode of anthropomorphizing badly and attributing arbitrary properties of humans (and especially human goals) to AGI.

I don't think it's really correct to say that humans are opaque from an inside view, especially for people with high empathy. People who understand themselves well and have high empathy can predict and understand others very consistently.

Pretty much all of those reasons. What it's missing is that nicotine itself may also be a carcinogen, or at least has the potential to be one: https://link.springer.com/article/10.1007/s10311-023-01668-1
There aren't enough long-term studies of nicotine in isolation to be conclusive: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5020336/
Some reviews disagree: https://pubmed.ncbi.nlm.nih.gov/26380225/
 

I strongly advise against taking nicotine.

Eliezer is extremely skilled at capturing attention. One of the best I've seen, outside of presidents and some VCs.
However, as far as I've seen, he's terrible at getting people to do what he wants.
Which means that he has a tendency to attract people to a topic he thinks is important, but they never do what he thinks should be done, which seems to lead to a feeling of despondency.
This is where he really differs from those VCs and presidents: they're usually far more balanced.

For an example of an absolute genius in getting people to do what he wants, see Sam Altman.

Unless I'm missing something, this seems to disregard the possibility of deception. Or it handwaves deception away in a line or two.

The type of person who ends up as the CEO of a leading AI company is likely (imo) someone very experienced in deception and manipulation, at the very least through having others try it on them, even if by some ridiculously unlikely chance they haven't used deception to gain power themselves.

A clever, seemingly logically sound argument for them to slow down and trust that their competitor will also slow down because of the argument will ring all kinds of bells.

I think whistleblower protections, licenses, enforceable charters, mandatory third-party safety evals, etc. have a much higher chance of working.

Yes, we host a bi-monthly Critique-a-Thon; the next one runs from December 16th to 18th!

Judges include:
- Nate Soares, President of MIRI
- Ramana Kumar, researcher at DeepMind
- Dr Peter S. Park, MIT postdoc at the Tegmark lab
- Charbel-Raphael Segerie, head of the AI unit at EffiSciences

What about regulations against implementations of known faulty architectures?
