mtaran


Comments

Consume fiction wisely

How is it that authors get reclassified as "harmful, as happened to Wright and Stross"? Do you mean that their later works become less helpful? How would their earlier works go bad?

Has anyone had weird experiences with Alcor?
Answer by mtaran, Jan 11, 2022

Given that you didn't actually paste in the criteria emailed to Alcor, it's hard to tell how much of a departure the revision you pasted is from them. Maybe add the original criteria for clarity?

My impression of Alcor (and CI, who I used to be signed up with before) is that they're a very scrappy/resource-limited organization, and thus that they have to stringently prioritize where to expend time and effort. I wish it weren't so, but that seems to be how it is. In addition, they have a lot of unfortunate first-hand experience with legal issues arising during cryopreservation due to family intervention, which I suspect is influencing their proposed wording.

I would urge you to not ascribe to malice or incompetence what can be explained by time limitations and different priors. My suspicion is that if you explain where you're coming from and why you don't like their proposed wording (and maybe ask why they wanted to change some of the specific things you were suggesting) then they would be able to give you a more helpful response.

"Given other sketchy things I've read about them (there is plenty of debate on this site and elsewhere calling them out for bad behavior)"

I don't follow things too closely but would be interested in what you're referring to, if you could provide any links.

Downvoted for lack of standard punctuation, capitalization, etc., which makes the post unnecessarily hard to read.

New Year's Prediction Thread (2022)

Do you mean these to apply at the level of the federal government? At the level of that + a majority of states? Majority of states weighted by population? All states?

More accurate models can be worse

Downvoted for burying the lede. I assumed from the buildup that this was about something else, e.g. how a model that contains more useful information can still be bad if you run out of resources for efficiently interacting with it. But I had to read to the end of the second section to find out I was wrong.

A good rational fiction about IT-inspired magic system?

Came here to suggest exactly this, based on just the title of the question. https://qntm.org/structure has some similar themes as well.

The Natural Abstraction Hypothesis: Implications and Evidence

Re: looking at the relationship between neuroscience and AI: lots of researchers have found that modern deep neural networks actually do quite a good job of predicting brain activation data (e.g. fMRI), suggesting that they are finding some similar abstractions.

Examples:
https://www.science.org/doi/10.1126/sciadv.abe7547
https://www.nature.com/articles/s42003-019-0438-y
https://cbmm.mit.edu/publications/task-optimized-neural-network-replicates-human-auditory-behavior-predicts-brain
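To make the flavor of these analyses concrete, here is a minimal sketch of a linear encoding model: regress voxel responses on activations from a pretrained vision network and score held-out prediction accuracy. The network, layer, ridge regularization, and especially the synthetic "fMRI" data are illustrative assumptions, not the actual pipelines of the linked papers.

```python
# Sketch of an encoding-model analysis: predict (synthetic, placeholder) fMRI voxel
# responses from activations of a pretrained vision network. Real studies would use
# measured brain responses to the same stimuli.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretrained network whose internal representation is the candidate abstraction.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.eval()

# Stand-in stimuli: random images instead of a natural-image dataset.
images = torch.rand(200, 3, 224, 224)

# Features from an intermediate stage (everything up to the final pooling layer).
feature_extractor = torch.nn.Sequential(*list(net.children())[:-1])
with torch.no_grad():
    features = feature_extractor(images).flatten(1).numpy()  # (n_stimuli, n_features)

# Synthetic "voxel" responses: a random linear readout of the features plus noise.
n_voxels = 50
true_weights = rng.normal(size=(features.shape[1], n_voxels))
voxels = features @ true_weights + rng.normal(scale=10.0, size=(len(images), n_voxels))

X_train, X_test, y_train, y_test = train_test_split(features, voxels, random_state=0)

# Ridge regression per voxel, with cross-validated regularization strength.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)
pred = encoder.predict(X_test)

# Prediction accuracy = correlation between predicted and held-out responses, per voxel.
corr = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out voxel correlation: {np.median(corr):.2f}")
```

With real data, the per-voxel correlations (or their noise-corrected versions) are what get compared across network layers or architectures.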

Teaser: Hard-coding Transformer Models

I'll make sure to run it when I get to a laptop. But if you ever get a chance to set up the distill.pub article to run on Heroku or something, that would make this an order of magnitude more accessible.

Teaser: Hard-coding Transformer Models

Sounds intriguing! You have a GitHub link? :)
