PhilosophicalSoul

Don't use this site. People here will punish you for asking questions.

Comments

I'm so happy you made this post. 

I only have two (2) gripes. I say this as someone who 1) practices/believes in determinism, and 2) has interacted with journalists on numerous occasions with a pretty strict policy on honesty.

1. "Deep honesty is not a property of a person that you need to adopt wholesale. It’s something you can do more or less of, at different times, in different domains."

I would disagree. In my view, 'deep honesty' excludes dishonesty by omission. You're either truthful all of the time or you're manipulative some of the time; you can't be both.

2. "Fortunately, although deep honesty has been described here as some kind of intuitive act of faith, it is still just an action you can take with consequences you can observe.

Not always. If everyone else around you takes the mountain-of-deceit approach, your options are limited. The 'rewards' available for omissions are far smaller, and if you want a reasonably productive work environment, at least someone has to tell the truth unequivocally. Further, the 'consequences' are not always immediately observable when you're dealing with practised liars. The consequences can come in the form of revenge months, or even years, later.

I am a lawyer. 

I think one key point is missing here: regardless of whether the NDA and the subsequent gag order are legitimate, William would still have to spend thousands of dollars on a court case to vindicate his rights. This sort of strong-arm litigation has become very common in the modern era. It's also just... very stressful. If you've just resigned from a company you probably used to love, you likely don't want to drag all of your old friends, bosses and colleagues into a court case.

Edit: also, if William left for reasons involving AGI safety, maybe entering into (what would likely be a very public) court case would be counterproductive to his reason for leaving? You probably don't want to alarm the public by couching existential threats in legal jargon. American judges have the annoying tendency to valorise themselves as celebrities when confronting AI (see Musk v. OpenAI).

I've been using nootropics for a very long time. A couple of things I've noticed:

1) There's little to no insightful, patient-focused research. The research papers written on nootropics tend to come from an outside perspective, usually a grad student with no personal stake. In my experience, the descriptions used, the symptoms described, and the time periods allocated are completely incorrect;

2) If you don't actually have ADHD, the side-effects are far worse, especially with long-term usage. In my personal experience, those who use them without a diagnosis are more prone to (a) addiction, (b) unexpected/unforeseen side-effects, and (c) a higher chance of psychosis or comparable symptoms;

3) There seems to be an upward curve of over-rationalising ordinary symptoms the longer you use nootropics. Of course, with nootropics you're inclined to read more and do things that will naturally increase your IQ and neuroplasticity. As a consequence, you'll begin to overthink whether the drugs you're taking are good for you or not. You'll doubt your abilities more and be sceptical as to where your 'natural aptitude' ends and your 'drug-heightened aptitude' begins.

Bottom line: if you're going to start taking them, be very, very meticulous about keeping a daily journal of everything you thought, experienced, and did, as in the sketch below. Avoid nootropics if you don't have ADHD.
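
A minimal sketch of what such a daily log could look like, if you prefer to keep it structured. The class and field names here (DailyLogEntry, dose_mg, and so on) are my own invention, not any clinical standard:

```python
# A minimal daily-log template for nootropic journaling. All names here
# are illustrative suggestions, not a clinical or validated format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DailyLogEntry:
    day: date
    substance: str
    dose_mg: float
    thoughts: str = ""       # everything you thought
    experiences: str = ""    # everything you experienced
    activities: str = ""     # everything you did
    side_effects: list[str] = field(default_factory=list)

# Example entry; the substance and dose are placeholders.
entry = DailyLogEntry(
    day=date.today(),
    substance="placeholder",
    dose_mg=0.0,
    thoughts="felt sharper in the morning",
    side_effects=["none noted"],
)
print(entry)
```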

Do you think there's something to be said about an LLM feedback vortex? As in, teachers using AIs to check students' work that was itself created by AI, or judges using AIs to filter through counsel's arguments which were also written by AI?

I feel like your recommendations could be paired nicely with some in-house training videos, and external regulations that limit the degree or percentage of AI involvement. Some kind of threshold or 'person limit', like elevators have. How could we measure the 'presence' of LLMs across the board in any given scenario?
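
For what it's worth, here's a rough sketch of what such an elevator-style cap could look like. The stage names, the per-stage estimates, and the 0.5 threshold are all hypothetical; reliably estimating each stage's AI share is itself the open question:

```python
# Hypothetical "person limit" for AI involvement in a feedback pipeline.
# Every name and number below is invented for illustration; detecting the
# actual AI share of a document is an unsolved problem in its own right.

def ai_involvement(stages: dict[str, float]) -> float:
    """Average the estimated fraction of AI-generated content per stage."""
    return sum(stages.values()) / len(stages)

MAX_AI_SHARE = 0.5  # arbitrary cap, like an elevator's person limit

pipeline = {
    "student_submission": 0.8,  # estimated fraction written by an LLM
    "teacher_feedback": 0.6,    # estimated fraction produced by an LLM
}

if ai_involvement(pipeline) > MAX_AI_SHARE:
    print("Feedback vortex risk: aggregate AI share exceeds the cap.")
```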

I didn't get that impression at all from '...for every point of IQ gained upon retaking the tests...' but each to their own interpretation, I guess. 

I just don't see how you can feasibly account for a practice effect when retaking the IQ test is itself directly linked to the increased score you're bound to get.

You do realise that simply taking the IQ test more than once will result in a higher score? I wouldn't be surprised at all if placebo and muscle memory account for a 10-20 point difference.

Edit: surprised at how much this is getting downvoted when I'm absolutely correct? Even professional IQ-testing centres factor in whether someone's taken the test before to account for practice effects. There's a guy (I can't recall his name) who takes an IQ test once a year (he might be in the Guinness Book of World Records, not sure) and has gone from 120 to 150 IQ.
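
To illustrate the kind of correction I mean, here's a crude sketch of discounting retest scores for practice effects. The 5-points-per-retake figure and the 20-point cap are numbers I've made up, loosely matching the 10-20 point retest gains mentioned above:

```python
# Crude practice-effect discount. gain_per_retake and cap are invented
# illustrative numbers, not psychometric constants.

def adjusted_iq(observed: int, prior_attempts: int,
                gain_per_retake: int = 5, cap: int = 20) -> int:
    """Subtract an assumed practice-effect gain, capped at `cap` points."""
    practice_gain = min(prior_attempts * gain_per_retake, cap)
    return observed - practice_gain

# Someone retesting yearly who climbs from 120 to 150 would, under these
# assumptions, be credited with roughly 130 on the later attempts.
print(adjusted_iq(150, prior_attempts=10))  # -> 130
```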

Middle child syndrome is the belief that middle children are excluded, ignored, or even outright neglected because of their birth order. According to the lore, some children may have certain personality and relationship characteristics as a result of being the middle child.

Alignment researchers are the youngest child, and programmers/OpenAI computer scientists are the eldest child. Law students/lawyers are the middle child; pretty simple.

It doesn't matter whether you use 10,000 students or 100; the embarrassingly small percentage remains the same. I've simply used the categorisation to illustrate quickly to non-lawyers what the general environment currently looks like.

"golden children" is a parody of the Golden Circle, a running joke that you need to be perfect, God's gift to earth sort of perfect, to get into a Big 5 law firm in the UK.

Answer by PhilosophicalSoul, Mar 15, 2024

Here's an idea:

Let's not give the most objectively dangerous, sinister, intelligent piece of technology Man has ever devised any rights or leeway in any respect.

The genie is already out of the bottle; you want to be the ATC and guide its flight towards human extinction? That's your choice.

I, on the other hand, wish to be Manford Torondo when the historians get to writing about these things.

I used 'Altman' since he'll likely be known as the pioneer who started it. I highly doubt he'll be the Architect behind the dystopian future I prophesy.

In respect of the second, I simply don't believe that to be the case.

The third is inevitable, yes.

I would hope that right-to-repair laws and equal access to CPU chips will come about. I don't think this will happen, though. The demands of the monopoly/technocracy will outweigh the demands of the majority.

Sure. I think in an Eliezer reality what we'll get is more of a ship-pushed-onto-the-ocean scenario. As in, Sam Altman, or whoever is leading the AI front at the time, will launch an AI/LLM filled with some of what I've hinted at. Once it's out on the ocean, though, the AI will do its own thing. In the interim before it learns to do that, I think there will be space for manipulation.
