wassname

As long as people realise they are betting on more than just a direction:

  • the underlying going up
  • volatility going up
  • it all happening within the time frame

Timing is particularly hard, and many great thinkers have been wrong on timing. You might also make the most rational bet, only for the market to take another year to become rational.
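
A minimal sketch of how those three interact, using the standard Black-Scholes formula (the spot, strike, vol, and rate numbers are made-up illustrations, not a claim about any real trade): you can be right on direction and still lose to volatility and time.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Buy a 1-year call: spot 100, strike 110, 40% implied vol, 2% rate.
paid = bs_call(100, 110, 1.0, 0.02, 0.40)

# Six months on, the underlying is up 5%, but implied vol has halved.
now = bs_call(105, 110, 0.5, 0.02, 0.20)

print(f"paid {paid:.2f}, now worth {now:.2f}")
# paid ~12.8, now worth ~4.3: right on direction, still down about two thirds.
```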

Worth looking at the top ten holdings of these, to make sure you know what you are buying and that the allocations are sensible (a rough overlap check is sketched after the list):

  • SMH - VanEck Semiconductor ETF
    • 22% Nvidia
    • 13% Taiwan Semiconductor Manufacturing
    • 8% Broadcom
    • 5% AMD
  • QQQ - Invesco QQQ Trust (tracks the Nasdaq-100)
    • 9% AAPL
    • 8% NVDA
    • 8% MSFT
    • 5% Broadcom
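
Because the weights overlap (NVDA and Broadcom appear in both), holding both funds quietly doubles up on the same names. Here is a rough sketch of a concentration and overlap check, using the approximate weights quoted above (real weights drift over time, so treat the numbers as illustrative):

```python
# Approximate top-holding weights from the lists above.
smh = {"NVDA": 0.22, "TSM": 0.13, "AVGO": 0.08, "AMD": 0.05}
qqq = {"AAPL": 0.09, "NVDA": 0.08, "MSFT": 0.08, "AVGO": 0.05}

def top_weight(holdings):
    """Fraction of the fund sitting in these top holdings alone."""
    return sum(holdings.values())

def overlap(a, b):
    """Shared weight on common tickers: how much of one fund you
    effectively re-buy by also holding the other."""
    return sum(min(a[t], b[t]) for t in a.keys() & b.keys())

print(f"SMH top-4 weight: {top_weight(smh):.0%}")        # 48%
print(f"QQQ top-4 weight: {top_weight(qqq):.0%}")        # 30%
print(f"overlap (NVDA + AVGO): {overlap(smh, qqq):.0%}") # 13%
```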

It might be worth noting that it can be good to prefer voting shares, held directly. For example, GOOG shares carry no voting rights in Alphabet, while GOOGL shares do. There are some scenarios where having control, rather than just ownership of the profits, could be important.

NVDA's value is primarily in its architectural IP and the CUDA ecosystem. In an AGI scenario, both could be worked around or become obsolete.

This idea was mentioned by Paul Christiano in one of his podcast appearances, iirc.

Interesting. It would be much more inspectable, controllable, and modular, which would be good for alignment.

You've got some good ideas in here; have you ever brainstormed any alignment ideas?

By "sensible", I don't mean to indicate disagreement, but rather a way of interpreting the question.

Do you have any idea at all? If you don't, what is the point of 'winning the race'?

Maybe they have some idea but don't want to say it. In recently disclosed internal OpenAI emails, Greg Brockman and Ilya Sutskever said to Elon Musk:

"You are concerned that Demis [Hassabi of DeepMind] could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to"

Perhaps this email, originally private, was saying the quiet part; now that it has been released, the quiet part is out loud. To use terms from the turn-based game Civilization: perhaps they would use AI to pursue cultural, espionage, technological, influence, diplomatic, and military victories simultaneously. But why would they declare that beforehand? Declaring it would only invite opposition and competition.

At the very least, you can hack and spy and sabotage other AGI attempts.

To be specific, there are a few areas where, it seems to me, increased intelligence could lead to quick and leveraged benefits: hacking, espionage, negotiation, finance, and marketing/propaganda. For example, what if you could capture a significant fraction of the world's trading income, attract a large portion of China's talent to turn coat and move to your country, and hack into a large part of an opponent's infrastructure?

If one or more of these tactics works at scale, you buy time for the other tactics to progress.

A sensible version of the question would weight ancestors by the proportion of genes shared.
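
A minimal sketch of that weighting, assuming the expected share of your genome from an ancestor n generations back is (1/2)^n (this ignores pedigree collapse and the randomness of recombination):

```python
# Each generation back doubles the number of ancestor slots but halves
# the expected genome share per ancestor, so each generation's total
# weight stays at 1.0 while individual ancestors fade geometrically.
for n in range(1, 6):
    ancestors = 2 ** n   # ancestor slots at generation n
    share = 0.5 ** n     # expected genome fraction per ancestor
    print(f"gen {n}: {ancestors:2d} x {share:.4f} = {ancestors * share:.1f}")
```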

To the people disagreeing: what part do you disagree with? My main point, or my example? Or something else?

I think this is especially important for me/us to remember. On this site we often have a complex way of thinking and a high computational budget (because we like exercising our brains to failure), and if we speak freely to the average person, they may be annoyed at how hard it is to parse what we are saying.

We've all probably had this experience when genuinely trying to understand someone from a very different background. Perhaps they are trying to describe their inner experience while meditating, or Japanese poetry, or they are simply from a different discipline. Or perhaps we were just very tired that day, meaning we had a low computational budget.

On the other hand, we are often a "tell" culture, which has a lower computational load than ask or guess culture. As long as we don't tell too much.
