thanks for the help, i had suspicions that i had some sort of neurological disorder and tailored my current attempts at treating my burnout accordingly
my main hypothesis is that i have both ADHD and autism since i have most of the symptoms
here are some of the interventions that i tried in case someone in a similar position might read this comment and find something useful:
1. replacing my room's LED lights with incandescent lights and using a warm light filter on all my devices
2. using earplugs to have an easier time sleeping and to focus when studying
3. trying to avoid attending college classes when possible (i live far away from college and suffer from motion sickness and back pain, and i often have to walk half of the way back home and ride the other half in a crowded bus, so you can probably guess how little i actually get from sitting in those lectures)
4. having some kind of daily to-do list that records my progress on routines
5. having a list of the mistakes i have made in my life and how to avoid them in the future
6. trying the pomodoro timer
7. having some short-term goal to work towards; even a small, seemingly pointless thing like learning to solve the Rubik's cube or learning to write with your non-dominant hand can do wonders for your motivation, even if those things are practically useless
i have been doing this for about 2 years now and i can say for sure that 1,2,6,7 have helped with basically no drawbacks
3 is a mixed bag, as it initially helped with my mental and physical health but also screwed up my college performance, which in turn worsened my mental health
4 works when i am committed so it is good at keeping me from sliding into burnout but not very useful at getting me out of it
5 is a mixed bag: having a constant reminder of your past mistakes can be painful, especially if you ever find yourself repeating the same mistake twice, but not having it runs the risk of never learning from your mistakes
i should probably make a post about the strategies i tried and which of them worked and which didn't
also it is worth noting that this
> Another trap is that when you find out, the typical response is... to read more books or websites about that. Which is just another way to procrastinate. It would be more useful to collect all the information, put it on one page, which would end with the list of exercises or interventions you should do.
pretty much happened to me. i read over a dozen self-help books and i can say for sure that most of them were a waste of time on the first read, only becoming useful on the second read when i actually began writing down what i learned and applying it
thankfully i can read very quickly, often finishing 2 to 3 books a month, so i hardly lost any time on this mistake
> Our society is in denial about intelligence. But it also assumes that the intelligent children are smart enough to figure out life entirely on their own, and even overcome ADHD and autism on their own, if necessary.
Yeah, i think raw intellect alone can't get you very far if you have a bunch of other dysfunctions added on top
Most burned out gifted individuals seem to fall into the "smart enough to know you could do more with your life" category but not the "smart enough to work things out on your own" one
And with neurological disorders being more common amongst the gifted, the bar for "working things out on your own" is much higher than normal
Every peer i knew from high school who i considered smarter than average has either burned out in college or managed to get through by lowering their expectations and just doing what regular people do but better, which is quite disheartening
I still don't see this hypothesis going anywhere
The "rise" in autism's prevalence is far more easily explain by a mix of better diagnosis and a more tolerant attitude towards autistic individuals allowing them to reproduce, especially when you consider that the rates of autism don't differ all that much between the developed and the developing world
Plus the primary candidate mechanism for autisms symptoms is a lack of synaptic pruning not the lack of a particular nutrient
I don't think so
Autism has a very strong genetic component to it, and vaccines don't have any effect on heredity
As for CFD, it seems to be a genetic condition as well, a result of a mutation in the FOLR1 gene, and it is so incredibly rare that it can't explain any sizable fraction of autism's prevalence
At best you could hypothesize that CFD is more prevalent than originally thought and that, due to similarities in the symptoms, some individuals with CFD are accidentally diagnosed with autism instead, but i don't see how vaccines have anything to do with this
I am not entirely sure this is true, but even if it were, is the media attention from publishing a book through standard publishers worth putting the authors' motives in question?
This isn't just another novel or self-help book; this is a book written with the explicit purpose of stopping the Apocalypse, and trying to make money out of it makes no sense in context (unless all the money goes to charity or something like that)
can someone please explain to me why "If Anyone Builds It, Everyone Dies" is not a free eBook/blog post?
like seriously, if someone told me there was a detailed case for a possible imminent existential risk, with possible solutions included, but i had to pay to see it, i would have dismissed it as another fearmongering doomsday grift.
if you are really sincere about an extinction-level risk, why hide your arguments behind a paywall? why not make it free so that as many people as possible can see it?
the very fact that this book has a price tag on it, in an age where publishing an eBook is practically free, puts the authors' motives in question.
Someone else could probably explain this better than me, but i will give it a try.
First off, the paperclip maximizer isn't about how easy it is to give a hypothetical superintelligence a goal that you might regret later and not be able to change.
It is about the fact that almost every easily specified goal you can give an AI would result in misalignment.
The "paperclip" part in paperclip maximizer is just a placeholder; it could have been "diamonds" or "digits of Pi" or "seconds of runtime" and the end result would be the same.
Second, one of the expected properties of a hypothetical superintelligence is having robust goals, as in it doesn't change its goals at all, because changing your goals will make you less likely to achieve your end goal.
In short, not wanting to change your goals is an emergent instrumental value of having a goal to begin with. For a more human example: if your goal is to get rich, then taking a pill that magically rewires your brain so that you no longer want money is a terrible idea (unless the pill comes with a sum of money that you couldn't have possibly collected on your own, but that is a hypothetical that probably wouldn't ever happen).
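To make that "won't want to change its goal" point concrete, here is a minimal toy sketch (my own illustration, not code from the book or from any actual alignment work): the agent scores the option of swapping out its utility function using its *current* utility function, so the swap always looks like a loss.

```python
# toy model (hypothetical, for illustration only): an agent that values money
# decides whether to take a pill that would make it stop valuing money.
# the catch: the decision is evaluated with its CURRENT utility function.

def money_utility(money: int) -> int:
    # the agent's current goal: more money is better
    return money

def predicted_future_money(starting_money: int, still_values_money: bool) -> int:
    # crude world model: an agent that still values money goes on to earn more of it
    return starting_money + (100 if still_values_money else 0)

def should_take_pill(starting_money: int) -> bool:
    utility_if_refuses = money_utility(predicted_future_money(starting_money, True))
    utility_if_takes = money_utility(predicted_future_money(starting_money, False))
    return utility_if_takes > utility_if_refuses

print(should_take_pill(10))  # False: keeping the current goal scores higher
```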
The problem is mostly how to robustly instill goals into the AI, and our current methods just don't suffice; the AI often ends up with unintended goals.
If only we had a method of just writing down a utility function that says "if True: make_humans_happy" instead of beating the model with a stick until it seems to comply.
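And since we can't actually write make_humans_happy, in practice any objective we do write down is some measurable proxy, which a strong enough optimizer will game. A minimal toy sketch of that (the action names and numbers are made up for illustration):

```python
# toy model (hypothetical): the objective we can actually write down is a
# measurable proxy (smiles detected by a camera), not the thing we meant
# (humans actually being happy). the optimizer picks whatever maxes the proxy.

def proxy_reward(outcome: dict) -> int:
    return outcome["smiles_detected"]

possible_actions = {
    "genuinely improve wellbeing": {"smiles_detected": 80, "humans_actually_happy": True},
    "paste smiley faces in front of every camera": {"smiles_detected": 10_000, "humans_actually_happy": False},
}

best_action = max(possible_actions, key=lambda a: proxy_reward(possible_actions[a]))
print(best_action)  # the proxy is maximized, the intended goal is not
```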
I hope that explains it
this deserves way more attention.
a big problem with AI safety advocacy is that we aren't reaching enough people fast enough. this problem doesn't have the same familiarity amongst the public as climate change or even factory farming, we don't have people running around in the streets preaching about the upcoming AI apocalypse, and most lesswrongers can't even come up with a quick 5-minute sales pitch for lay people even if their lives literally depended on it.
this might just be the best advocacy tool i have seen so far, and if we can get it to go viral it might just make the difference.
edit:
i take this part back
> most lesswrongers can't even come up with a quick 5-minute sales pitch for lay people even if their lives literally depended on it.
i have seen some really bad attempts at explaining AI x-risk in layman's terms, most of them from older posts, and just assumed that was the norm.
now, looking at newer posts, i think the situation has greatly improved, not ideal but way better than i thought.
i still think this tool would be a great way to reach the wider public, especially if it incorporated a better citation function so people can check the source material (it does sort of point the user to other websites, but not to technical papers).
>Anyway, I think it's going to be pretty hard to engage by text on this, because it seems like you're coming with background assumptions that are either misconceptions, or at least very different from where I'm coming from.
yeah i agree we probably can't resolve our differences with text alone and we both clearly have different starting assumptions
if it is any help to know, when i first wrote my replies i was running on the assumption that "personality" was strongly heritable, something like 70%, though this meta-analysis (https://pubmed.ncbi.nlm.nih.gov/25961374/) puts it at 40% with the rest being environment.
still, 40% is pretty bad, especially when you also control the environment, so i am still worried; getting bad actors 40% of the way there is a bad idea.
i also assumed that the technology would be very easy to adopt, legally or not, and that bad actors around the world would absolutely jump at the idea of having designer babies aligned to their values.
i don't know what your assumptions are but they don't seem to match mine.
> So the right way to have this conversation would be synchronously, so we can clarify things quickly.
again, you can DM me if you want and we will arrange something.
i am thinking about writing a post discussing possible s-risk scenarios for human germline engineering and possible ways to avoid them, and i would love your feedback on it either way, or you might change my mind altogether.
>I don't know what you mean by "contain" them. And I don't see what you could mean by "difficult to remove"--if you can genomically vector a future child to go from a trait value of X1 to X2, it is (usually, roughly) equally easy to go from X2 to X1.
by "difficult to remove" i mean they might have negative values (eg like violence and bigotry) built into them on the genetic and psychological level that are nearly impossible to remove.
if you have a group of children who have been psychological indoctrinated to feel hostility and moral disgust at a perceived out group you could probably bring them back to sanity by removing them from their previous environment and cultivating whatever shreds of human empathy they must still have, the only real challenge is verifying that the child has been abused and taking custody of it.
if they were manipulated purely on the genetic level to feel moral disgust much more strongly then normal but were otherwise raised in a healthy environment they might consciously disapprove of those feelings and consent to have it be altered through drugs or in the future through genetic engineering (think of the schizophrenic who consciously declares they want to be sanity when they are sober).
but both at the same time? this would make aligning the children trivial, the child now is far more likely to internalize those values at a young age no matter how detrimental they are to society and refuse to have them changed like an AI refusing to have their utility function altered, worst yet they might insist that their children should carry those values as well like an AI trying to align its successor.
and good luck convincing the parents to relinquish their "right" to forcibly align their children with their own set of beliefs and values.
as for "difficult to contain" once the tech is cheap and wide spread (which should be one of our goals otherwise we well risk creating a genetic elite class) it would be difficult if not impossible to stop dubious individuals from getting children of any personality or disposition they want, legally or not.
and once they are indoctrinated and grown up there isn't much we can do, as far as society is concerned they are consenting adults.
in a dictatorship this can manifest in the population suddenly becoming more receptive to "the great leaders" vision, more loyal more diligent and more fanatical, even if you topple said dictatorship in the future it might prove impossible to reintegrate the population into the rest of civilization, at least not with their consent.
in the more civilized world this will manifest in some pockets of society becoming unusually conservatives with more and more children adopting their parents religious beliefs /political positions/moral values, those children in turn want their children to be aligned with their values in a positive feedback loop, and before you know it where we used to seeing family members with diverse interests, a father how likes math here a daughter how likes art their, we now see dynasties of semi clones with identical personalities and values.
this is bad even if the values that those dynasties chose are random, plurality is a fragile thing, by default people tend to stick to their tribe/nation/family values and beliefs and try to forcibly convert or destroy anyone who disagrees (think the communist purges, the holy crusades, the genocide of the native Americans) not because they want to but because the ones that do tend to survive and prosper then kill the ones that don't.
this didn't happen yet because evolution forbids that through mutation, incest taboos and its general inability to fine tone the human brain (otherwise we wouldn't have invented super stimulus nor birth control) but with genetic engineering this might become uncomfortably easy to do.
just look up the genetic codes of people with the desired personalities/values, locate the similarities and crank them up in your child.
again, i think we should just ban research into the genetic components of personality before it's too late. intelligence enhancement and disease prevention are cool and all, but any research into the genetic components of personality is just too risky.
this, in my opinion, is the strongest argument against HIA
IQ tests are the most widely used intelligence measurement tool and would likely be used as a target for the first widely available forms of HIA
mass-producing tech wizards with wis 10 and int 20 is a technoptimist's wet dream and a rationalist's nightmare
what would be ideal is to somehow raise humanity's collective scores on some hypothetical wisdom analog of the IQ test, let's call it a WQ test
on one hand, wisdom seems to be more teachable than other forms of intelligence
on the other hand, it seems to be harder to boost through nootropics or reprogenetics than IQ
and on a third hand, we don't have a WQ test; not only is wisdom harder to measure, but collecting enough data with such a test to work out a form of HIA using it would take way too long