I've been using nootropics for a very long time. A couple of things I've noticed:

1) There's little to no patient-focused research that is insightful. As in, the research papers on nootropics are written from an outside perspective by a disinterested grad student. In my experience, the descriptions used, the symptoms described, and the time periods allocated are completely incorrect;

2) If you don't actually have ADHD, the side effects are far worse, especially with long-term usage. In my personal experience, those who use nootropics without a diagnosis are more prone to (a) addiction, (b) unexpected/unforeseen side effects, and (c) a higher chance of psychosis, or comparable symptoms;

3) There seems to be an upward curve of over-rationalising ordinary symptoms the longer you use nootropics. Of course, with nootropics you're inclined to read more, and do things that will naturally increase your IQ and neuroplasticity. As a consequence, you'll begin to overthink whether the drugs you're taking are good for you or not. You'll doubt your abilities more and be sceptical as to where your 'natural aptitude' ends, and your 'drug-heightened aptitude' begins.

Bottom line: if you're going to start taking them, be very, very meticulous about keeping a daily journal of everything you thought, experienced and did. Avoid nootropics if you don't have ADHD.

Do you think there's something to be said about an LLM feedback vortex? As in, teachers using AIs to check students' work that was itself created by AI. Or judges using AIs to filter through counsel's arguments which were also written by AI?

I feel like your recommendations could be paired nicely with some in-house training videos, and external regulations that limit the degree/percentage of AI involvement. Some kind of threshold or 'person limit', like elevators have. How could we measure the 'presence' of LLMs across the board in any given scenario?

I didn't get that impression at all from '...for every point of IQ gained upon retaking the tests...', but each to their own interpretation, I guess.

I just don't see the feasibility in accounting for a practice effect when retaking the IQ test is also directly linked to the increased score you're bound to get.

You do realise that simply doing the IQ test more than once will result in a higher IQ score? I wouldn't be surprised at all if placebo and muscle memory account for a 10-20 point difference.

Edit: surprised at how much this is getting downvoted when I'm absolutely correct? Even professional IQ testing centres factor in whether someone's taken the test before to account for practice effects. There's a guy (I can't recall his name) who takes an IQ test once a year (might be in the Guinness Book of World Records, not sure) and has gone from 120 to 150 IQ.

Middle child syndrome is the belief that middle children are excluded, ignored, or even outright neglected because of their birth order. According to the lore, some children may have certain personality and relationship characteristics as a result of being the middle child.

Alignment researchers are the youngest child, and programmers/OpenAI computer scientists are the eldest child. Law students/lawyers are the middle child, pretty simple.

It doesn't matter whether you use 10,000 students or 100; the percentage being embarrassingly small remains the same. I've simply used the categorisation to illustrate quickly to non-lawyers what the general environment currently looks like.

"golden children" is a parody of the Golden Circle, a running joke that you need to be perfect, God's gift to earth sort of perfect, to get into a Big 5 law firm in the UK.

Answer by PhilosophicalSoul, Mar 15, 2024

Here's an idea:

Let's not give the most objectively dangerous, sinister, intelligent piece of technology Man has ever devised any rights or leeway in any respect.

The genie is already out of the bottle; you want to be the ATC and guide its flight towards human extinction? That's your choice.

I, on the other hand, wish to be Manford Torondo when the historians get to writing about these things.

I used 'Altman' since he'll likely be known as the pioneer who started it. I highly doubt he'll be the Architect behind the dystopian future I prophesy.

In respect of the second, I simply don't believe that to be the case.

The third is inevitable, yes.

I would hope that 'no repair' laws and equal access to CPU chips will come about. I don't think that this will happen, though. The demands of the monopoly/technocracy will outweigh the demands of the majority.

Sure. I think in an Eliezer reality what we'll get is more of a ship-pushed-onto-the-ocean scenario. As in, Sam Altman, or whoever is leading the AI front at the time, will launch an AI/LLM filled with some of what I've hinted at. Once it's out on the ocean, though, the AI will do its own thing. In the interim before it learns to do that, I think there will be space for manipulation.

The quote's from Plato, Phaedrus, page 275, for anyone wondering.

Great quote.

Amazing question.

I think common sense would suggest that these toddlers at least have a chance later in life to grow human connections: therapy, personal development, etc. The negative effect on their social skills and empathy, and the reduction in grey matter, can be repaired.

This is different in the sense that the cause of the issues will be less obvious and far more prolonged. 

I imagine a dystopia in which the technocrats are puppets manoeuvring the influence AI has. From the buildings we see to the things we hear; all by design, none of it voluntarily chosen.

In contrast, technocrats will nurture technocrats--the cycle goes on. This is comparable to the TikTok CEO commenting that he doesn't let his children use TikTok (among other reasons, I know).
