All of Donald Hobson's Comments + Replies

Brain Efficiency: Much More than You Wanted to Know

Almost a tautology = carries very little useful information.

In this case most of the information is carried by the definition of "Neuromorphic". A researcher proposes a new learning algorithm. You claim that if it's not neuromorphic then it can't be efficient. How do you tell if the algorithm is neuromorphic?

Brain Efficiency: Much More than You Wanted to Know

If, hypothetically, that were true, it would be a specific fact not established by anything shown here.

If you are specific in what you mean by "brainlike", it would be quite a surprising fact. It would imply that the human brain is a unique pinnacle of what is possible to achieve. The human brain is shaped in a way that is focused on things related to ancestral humans surviving in the savannah. It would be an enormous coincidence if the abstract space of computation and the nature of fundamental physical law meant that the most efficient possible mind... (read more)

2jacob_cannell1dBrain-like != human brain. By brain-like I mostly just meant neuromorphic, so the statement is almost a tautology. DL models are already naturally somewhat 'brain-like', in the space of all ML models, as DL is a form of vague brain reverse engineering. But most of the remaining key differences ultimately stem from the low level circuit differences between von Neumann and neuromorphic architectures. As just one example - DL currently uses large-batch GD style training because that is what is actually efficient on VN architecture, but will necessarily shift to brain-style small batch techniques on neuromorphic/PIM architecture as that is what efficiency dictates.
Brain Efficiency: Much More than You Wanted to Know

I was using that as a hypothetical example to show that your definitions were bad. (In particular, the attempt to define arithmetic as not AI because computers were so much better at it.) 

I also don't think that you have significant evidence that we don't live in this world, beyond the observation that if such an algorithm exists, it is sufficiently non-obvious that neither evolution nor humans have found it so far.

A lot of the article is claiming the brain is thermodynamically efficient at turning energy into compute. The rest is comparing the brain to existing deep learning techniques. 

I admit that I have little evidence that such an algorithm does exist, so it's largely down to priors.

Question Gravity

Other effects I was considering:

Is the bottle rotationally symmetric? Is there, say, a weight of congealed shampoo in it?

If there was a slight tilt in this whole setup, the bottle could be marginally off vertical. Empty, the centre of gravity is quite high above the bar, and the slight tilt puts it slightly inward. With some water in, the centre of gravity is lower. Full to the brim, the centre of gravity isn't much lower.
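
A toy calculation of that centre-of-gravity effect, modelling the bottle as a light uniform cylinder plus a water column (all masses and dimensions made up):

```python
# Toy model: a bottle as a thin uniform cylinder (50 g shell, 20 cm tall)
# partially filled with water (10 g per cm of fill for a ~10 cm^2 bore).
# All numbers are invented for illustration.

def centre_of_mass_height(fill_cm, bottle_mass_g=50.0, bottle_height_cm=20.0,
                          water_g_per_cm=10.0):
    """Height of the combined centre of mass above the bottle's base."""
    water_mass = water_g_per_cm * fill_cm
    bottle_moment = bottle_mass_g * bottle_height_cm / 2   # CoM of empty shell
    water_moment = water_mass * fill_cm / 2                # CoM of water column
    return (bottle_moment + water_moment) / (bottle_mass_g + water_mass)

for fill in [0, 2, 5, 10, 15, 20]:
    print(f"fill = {fill:2d} cm -> CoM at {centre_of_mass_height(fill):.1f} cm")
# Empty: CoM at 10 cm.  Partially full: CoM dips well below that.
# Full to the brim: CoM climbs back up towards the middle of the bottle.
```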

Animal welfare EA and personal dietary options

There is also the slightly odd perspective that starts by saying 2 computers running the same computation only morally count once, and then goes on to claim that 2 battery hens are so mentally similar as to count as the same mind morally.

Calibration proverbs

Some of these rhymes are just hard to decipher and would be better in clear English than bad poetry. I understand the desire to make a pithy saying, but it really isn't clear what you meant with some of these.

Brain Efficiency: Much More than You Wanted to Know

Don't dismiss these tasks just by saying they aren't part of AGI by definition. 

The human brain is reasonably good at some tasks and utterly hopeless at others. The tasks early crude computers got turned to were mostly the places where the early crude computers could compete with brains, ie the tasks brains were hopeless at. So the first computers did arithmetic because brains are really really bad at arithmetic so even vacuum tubes were an improvement. 

The modern field of AI is what is left when all the tasks that it is easy to do perfectly are ... (read more)

2jacob_cannell5dBased on the evidence at hand (as summarized in this article) - we probably don't live in that world. The burden of proof is on you to show otherwise. But in those hypothetical worlds, AGI would come earlier, probably well before the end phase of Moore's Law.
Brain Efficiency: Much More than You Wanted to Know

I think your thermodynamics is dubious. Firstly, it is thermodynamically possible to run error free computations very close to the thermodynamic limits. This just requires the energy used to represent a bit to be significantly larger than the energy dissipated as waste heat when a bit is deleted. 

 

Considering a cooling fluid of water flowing at 100m/s through fractally structured pipes of cross section 0.01m^2 and being heated from 0C to 100C, the cooling power is 400 megawatts. 
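
For concreteness, a quick sketch of the arithmetic behind that 400 megawatt figure, together with the Landauer bound it would be paying for (room temperature assumed):

```python
import math

# Landauer limit: minimum heat dissipated per bit erased at temperature T.
k_B = 1.380649e-23          # J/K
T = 300.0                   # K, roughly room temperature
landauer_joules_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J

# Cooling power of the water-pipe example above:
density = 1000.0            # kg/m^3
velocity = 100.0            # m/s
area = 0.01                 # m^2 total pipe cross-section
specific_heat = 4184.0      # J/(kg*K)
delta_T = 100.0             # heated from 0 C to 100 C

mass_flow = density * velocity * area                  # 1000 kg/s
cooling_power = mass_flow * specific_heat * delta_T    # ~4.2e8 W, i.e. ~400 MW

print(f"Landauer limit: {landauer_joules_per_bit:.2e} J per bit erased")
print(f"Cooling power:  {cooling_power:.2e} W")
print(f"Bit erasures that cooling could carry away per second "
      f"at the Landauer limit: {cooling_power / landauer_joules_per_bit:.1e}")
```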

 

I think that superconducting chips are in labs today. The... (read more)

2jacob_cannell6dI'm reasonably well read on reversible computing. It's dramatically less area efficient, and requires a new radically restricted programming model - much more restrictive than the serial to parallel programming transition. I will take and win any bets on reversible computing becoming a multi-billion dollar practical industry before AGI.
6jacob_cannell7dIn theory it's possible to perform computation without erasing bits - ie reversible computation, as mentioned in the article. And sure you can use more than necessary to represent a bit, but not much point in that, when you could instead use the minimum Landauer bound amount.
Brain Efficiency: Much More than You Wanted to Know

Suppose someone in 1900 looked at balloons and birds and decided future flying machines would have wings. They called such winged machines "birdomorphic", and say future flying machines will be more like birds.

I feel you are using "neuromorphic" the same way. Suppose it is true that future computers will be of a Processor In Memory design. Thinking of them as "like a brain" is like thinking a fighter jet is like a sparrow because they both have wings. 

Suppose a new processor architecture is developed, its basically PIM. Tensorflow runs on it. The AI software people barely notice the change.

6jacob_cannell1dThe set of AGI models you could run efficiently on a largescale pure PIM processor is basically just the set of brain-like models.
Brain Efficiency: Much More than You Wanted to Know

Just pointing out that humans doing arithmetic and GPT3 doing arithmetic are both awful in efficiency compared to raw processor instructions. I think what FeepingCreature is considering is how many other tasks are like that? 

5jacob_cannell7dThe set of tasks like that is simply traditional computer science. AGI is defined as doing what the brain does very efficiently, not doing what computers are already good at.
Brain Efficiency: Much More than You Wanted to Know

in which case fabricating new better chips seems unlikely to contribute.

Fabricating new better chips will be part of a Foom once the AI has nanotech. This might be because humans had already made nanotech by this point, or it might involve using a DNA printer to make nanotech in a day. (The latter requires a substantial amount of intelligence already, so this is a process that probably won't kick in the moment the AI gets to about human level. )

Brain Efficiency: Much More than You Wanted to Know

In worlds where brains are ultra-efficient, AGI necessarily becomes neuromorphic or brain-like, as brains are then simply what economically efficient intelligence looks like in practice, as constrained by physics.

 

I totally disagree. Firstly it may be that the brain is 99.9% efficient, and some totally different design is also 99.9% efficient. There can be several very different efficient ways to do things. 

Secondly AGI can be less efficient and still FOOMy if it has enough energy and mass. As it is usually easier to do something at all than to d... (read more)

Why maximize human life?

I think there is a confusion about what is meant by utilitarianism and utility. 

Consider the moral principle that moral value is local to the individual, in the sense that there is some function F: Individual minds -> Real numbers such that the total utility of the universe is the sum of F over the minds in it. Alice having an enjoyable life is good, and the amount of goodness doesn't depend on Bob. This is a real restriction on the space of utility functions. It says that you should be indifferent between (A coin toss between both Alice and Bob existing and neither Alice nor Bob exist... (read more)
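
A minimal sketch of that additivity restriction (the particular lotteries compared here are illustrative stand-ins, not necessarily the ones in the truncated text):

```python
# Moral value is local to each individual: F maps minds to real numbers and
# the total utility of a world is the sum of F over the minds in it.
# The numbers below are made up.

F = {"Alice": 3.0, "Bob": 5.0}

def total_utility(world):
    """world = the set of minds that exist."""
    return sum(F[mind] for mind in world)

# 50/50 coin toss between "both Alice and Bob exist" and "neither exists":
lottery_a = 0.5 * total_utility({"Alice", "Bob"}) + 0.5 * total_utility(set())

# Another lottery with the same per-person expected contribution,
# e.g. a 50/50 toss between "only Alice exists" and "only Bob exists":
lottery_b = 0.5 * total_utility({"Alice"}) + 0.5 * total_utility({"Bob"})

print(lottery_a, lottery_b)   # both 4.0 - an additive utilitarian is indifferent
```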

A good rational fiction about IT-inspired magic system?

I don't think this is an actual stable world. It is too easy to destroy everything with one typo. Also, there are lots of things that are in practice world-disrupting that are easy to describe. Duplication of arbitrary matter (including humans). Conjuring of energy. Making an object infinitely strong. Controlling the flow of time. Portals.

2JBlack21dI suspect that in such universes that are not destroyed very quickly, an early user creates fail-safe spell constructs that limit such destruction by future users (including themselves under most conditions). This does leave open the possibility that some primordial magic user with root access still exists somewhere, and is very careful to use such power only when absolutely necessary, and only with the minimum weakening of ordinary constraints.
Six Specializations Makes You World-Class

I think there are 2 mistakes here. First, you are missing an n!. There are only ~10,000 skill pairs if (physics, writing) is different from (writing, physics). There are ~5,000 unordered pairs.

Secondly, there is the assumption that no one else can have more skills than you. If everyone picks 5 skills to get good at, you can be the only person in the world with that combo of skills. But if you get to pick 5 skills and some other people are good at all 100, you don't get a unique combo. If 32 people each have 50 skills, then you probably don't have a unique combo.
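
The combinatorics, as a quick check (treating the 32 generalists' skill sets as uniformly random, which is a simplification):

```python
from math import comb, perm

n = 100

print(perm(n, 2))    # 9900  -> ~10,000 ordered pairs, (physics, writing) != (writing, physics)
print(comb(n, 2))    # 4950  -> ~5,000 unordered pairs

# Your 5-skill combo versus 32 generalists who each know 50 of the 100 skills:
p_one_generalist_covers_you = comb(95, 45) / comb(100, 50)      # ~0.028
p_nobody_covers_you = (1 - p_one_generalist_covers_you) ** 32   # ~0.40
print(p_one_generalist_covers_you, p_nobody_covers_you)
# So there is roughly a 60% chance at least one of them already has your combo.
```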

2lsusr1moThank you for the correction. Adding the missing factorial increases the answer from five to six. You don't need a unique combination. You just need combination nobody else is competing at. If someone has more skills than you then there is a combinatorial explosion in the possibility space available to zem. The odds zir work collides with yours goes down fast as the number of skills your superior has goes up. Ze likely have better things to do than compete with you.
[Linkpost] Chinese government's guidelines on AI

If even the comatose behemoth of the gov has noticed the risk, then AGI is indeed much closer than most people think.

Reasoning doesn't work like that. The information flow is almost entirely from the subtle hints in reality, to people like MIRI, and then to the government. Maybe update on gov's being slightly less comatose, or MIRI having a really good PR team.

Once we make the assumption that governments are less on the ball than MIRI, and see what MIRI says, the government's actions tell us almost nothing about AI.

Second-order selection against the immortal

Your assumption that the immortals all choose not to reproduce is unrealistic (for an evolutionary equilibrium). Either 1) Nothing can kill them, nothing can stop them reproducing. Exponentially expanding ball of flesh, stopped from collapsing into a black hole by sheer immortality.

2) Absolutely unkillable, need some resource to reproduce. That resource is the bottleneck.

3) Can die of something sometimes. Malthusian equilibrium. The more stuff they can't die of, the better they do compared to the "mortals".

4lsusr1moThe winning strategy is to take the immortality pill and reproduce. Voluntarily stopping having children to prevent over-crowding only works if everybody does it.
Two Stupid AI Alignment Ideas

A couple more problems with extreme discounting. 

In contexts where the AI is doing AI coding, it is only weakly conserved. I.e. the original AI doesn't care if it makes an AI that doesn't have super high discount rates, so long as that AI does the right things in the first 5 minutes of being switched on.

The theoretical possibility of time travel.

 

Also, the strong incentive to pay in Parfit's hitchhiker only exists if Parfit can reliably predict you. If humans have the ability to look at any AI code, and reliably predict what it will do, then alignme... (read more)

A positive case for how we might succeed at prosaic AI alignment

I think you might be able to design advanced nanosystems without AI doing long term real world optimization. 

Well a sufficiently large team of smart humans could probably design nanotech. The question is how much an AI could help.

Suppose unlimited compute. You program a simulation of quantum field theory. Add a GUI to see visualizations and move atoms around. Designing nanosystems is already quite a bit easier.

Now suppose you brute force search over all arrangements of 100 atoms within a 1nm box, searching for the configuration that most efficiently t... (read more)
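
A toy sketch of that brute-force idea, with 3 atoms on a coarse grid and a made-up pairwise potential standing in for the real quantum-mechanical energy calculation:

```python
from itertools import combinations, product

# Place 3 "atoms" on a coarse 5x5x5 grid inside a 1 nm box and exhaustively
# score every arrangement.  The scoring function is a crude made-up pair
# potential, standing in for a real quantum (or force-field) energy.

GRID = [i * 0.25 for i in range(5)]            # grid points 0 .. 1 nm
SITES = list(product(GRID, repeat=3))           # 125 candidate positions

def toy_energy(atoms):
    """Lower is better.  Crude Lennard-Jones-like pair term, r in nm."""
    e = 0.0
    for (x1, y1, z1), (x2, y2, z2) in combinations(atoms, 2):
        r = ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5
        r = max(r, 1e-6)
        e += (0.3 / r) ** 12 - (0.3 / r) ** 6
    return e

best = min(combinations(SITES, 3), key=toy_energy)
print(best, toy_energy(best))

# C(125, 3) ~ 3.2e5 arrangements is trivial; 100 atoms on a fine grid is
# astronomically larger, which is why "unlimited compute" is doing the work here.
```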

Quantilizer ≡ Optimizer with a Bounded Amount of Output

You can quantilize any distribution. For the random distribution, you need to use 0.000... 01% quantilization to get anything useful at all. (In non-trivial environments, the vast majority of random actions are useless junk. If you have a humanoid coffee-making robot, almost all random motor inputs will result in twitching on the floor.)

However, you can also quantilize over a model trained to imitate humans. Suppose you give some people a joystick and ask them to remote control the robot to make coffee. They manage this about half the time. You train the rob... (read more)
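
A minimal sketch of the quantilization procedure itself; the base sampler and utility function here are stand-ins, not a real robot controller:

```python
import random

def quantilize(base_sampler, utility, q=0.01, n_samples=10_000):
    """Sample actions from a base distribution, keep the top q fraction by
    estimated utility, and return one of those top actions at random."""
    samples = [base_sampler() for _ in range(n_samples)]
    samples.sort(key=utility, reverse=True)
    top = samples[:max(1, int(q * n_samples))]
    return random.choice(top)

# With a uniform-random base distribution you need a tiny q to get anything
# useful; with a base distribution imitating humans who succeed ~half the time,
# a much milder q already gives mostly-successful behaviour.
action = quantilize(base_sampler=lambda: random.random(),  # stand-in action space
                    utility=lambda a: -(a - 0.7) ** 2,     # stand-in utility
                    q=0.01)
print(action)
```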

4Charlie Steiner2moYeah, I think this is the key point. Quantilizers are only safe or smart relative to some base distribution.
What would we do if alignment were futile?

There may be a nanotech critical point. Getting to full advanced nanotech probably involves many stages of bootstrapping. If lots of nanobots have been designed on a computer, then an early stage of the bootstrapping process might be the last to be designed. (Building a great nanobot with a mediocre nanobot might be easier than building the mediocre nanobot from something even worse.) This would mean a sudden transition where one group potentially suddenly had usable nanotech.

So, can a team of 100 very smart humans, working together, with hand coded nano... (read more)

2maximkazhenkov2moIf destroying GPUs is the goal, there seem to be a lot simpler, less speculative ways than nanomachines. The semiconductor industry is among the most vulnerable, as the pandemic has shown, with an incredibly long supply chain that mostly consists of a single or a handful of suppliers, defended against sabotage largely by "no one would actually do such a thing". Of course that is assuming we don't have a huge hardware overhang in which case current stockpiles might already be sufficient for doom, or that ASI will be based heavily on GPU computing at all.
Discussion with Eliezer Yudkowsky on AGI interventions

Under the Eliezerian view (the pessimistic view that is producing <10% chances of success), these approaches are basically doomed. (See logistic success curve.)

Now I can't give overwhelming evidence for this position. Wisps of evidence maybe, but not an overwhelming mountain of it.

Under these sorts of assumptions, building a container for an arbitrary superintelligence such that it has only an 80% chance of being immediately lethal, and a 5% chance of being marginally useful, is an achievement.

(and all possible steelmannings, that's a huge space)

Discussion with Eliezer Yudkowsky on AGI interventions

Let's say you use all these filtering tricks. I have no strong intuitions about whether these are actually sufficient to stop those kinds of human manipulation attacks. (Of course, if your computer security isn't flawless, it can hack whatever computer system it's on and bypass all these filters to show the humans arbitrary images and probably access the internet.)

But maybe you can, at quite significant expense, make a Faraday cage sandbox, and then use these tricks. This is beyond what most companies will do in the name of safety. But MIRI or whoever cou... (read more)

2localdeity2moWell, if you restrict yourself to accepting the safe, testable advice, that may still be enough to put you enough years ahead of your competition to develop FAI before they develop AI. My meta-point: These methods may not be foolproof, but if currently it looks like no method is foolproof—if, indeed, you currently expect a <10% chance of success (again, a number I made up from the pessimistic impression I got)—then methods with a 90% chance, a 50% chance, etc. are worthwhile, and furthermore it becomes worth doing the work to refine these methods and estimate their success chances and rank them. Dismissing them all as imperfect is only worthwhile when you think perfection is achievable. (If you have a strong argument that method M and any steelmanning of it has a <1% chance of success, then that's good cause for dismissing it.)

I feel you are taking some concepts that you think aren't very well defined, and throwing them away, replacing them with nothing. 

I admit that the intuitive notions of "morality" are not fully rigorous, but they are still far from total gibberish. Some smart philosopher may come along and find a good formal definition.

"Survival" is the closest we have to an objective moral or rational determinant. 

Whether or not a human survives is an objective question. The amount of hair they have is similarly objective. So is the amount of laughing they have d... (read more)

1Samuel Shadrach3moI mean yes, nothing is ever set in stone, we might come face-to-face with god tomorrow and that'll change everything we currently believe. But with available information I still think it's reasonable to say god as we nebulously define it does not exist with very high probability. Same goes for objective morality. I personally don't think formalisation is that important when it comes to knowing the facts here. What would convince me to change my mind would be empirical evidence of moral drivers outside of the brains of individual humans. Or perhaps some violation of physical laws - which could be evidence that a singular god is driving the world. But I certainly will not stop anyone from trying to formalise, or use that to increase / descrease their conviction. I'd be very interested in examples of this. Cooperative behaviour is favoured mammals onwards for sure. Even bacteria choose to transmit and share information via plasmids, and they don't have reputations or long memories. So I'd be a little surprised if this is common.
Intelligence or Evolution?

Firstly, this would be AIs looking at their own version of the AI alignment problem. This is not random mutation or anything like it. Secondly, I would expect there to be only a few rounds maximum of self modification that put goals at risk. (Likely 0 rounds.) Firstly, damaging goals loses a lot of utility. You would only do it if it's a small change in goals for a big increase in intelligence, and if you really need to be smarter and can't make yourself smarter while preserving your goals.

You don't have millions of AIs all with goals different from each other. The self upgrading step happens once before the AI starts to spread across star systems.

Why is multi worlds not a good explanation for abiogenesis

I'm not sure what you mean by objectivity or why superposed states don't have it.

If a pair of superposed states differ by a couple of atom positions, they can interact and merge. If they differ by "cat alive" to "cat dead" then there is practically no interaction. The world stays superposed forever. So that gives causal isolation and permanence.

I mean maybe some of the connotations of "worlds" aren't quite accurate. It's a reasonably good word to use.

1TAG3moNo objectivity means that superpositions can be made to disappear by a suitable choice of basis. If there is no interaction, they are decoherent states. If you could show that decoherence smoothly follows from coherent superposition, without any additional postulates, you would be on to something.
Intelligence or Evolution?

Error correction codes exist. They are low cost in terms of memory etc. Having a significant portion of your descendent mutate and do something you don't want is really bad.

If error correcting to the point where there is not a single mutation in the future only costs you 0.001% of resources in extra hard drive space, then <0.001% of resources will be wasted due to mutations.
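
A rough sketch of why the overhead can be so small, using the crudest possible scheme (an n-fold repetition code with majority vote; real codes do far better per unit overhead, and the raw error rate below is made up):

```python
from math import comb

def majority_vote_error(p, n):
    """P(a majority of n copies are flipped), for odd n and per-copy flip rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

p = 1e-6                      # raw per-bit error rate per copy (invented)
for n in [1, 3, 5, 7]:
    print(f"{n} copies ({n}x storage): residual error ~ {majority_vote_error(p, n):.1e}")
# Even this crude scheme drives the residual error down like p^((n+1)/2),
# so modest redundancy already makes surviving "mutations" vanishingly rare.
```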

Evolution is kind of stupid compared to superintelligences. Mutations are not going to be finding improvements, because the superintelligence will be designing its own hardware and the hardwa... (read more)

2ChristianKl3moError correction codes help a superintelligence to avoid self-modifying but they don't allow goals necessarily to be stable with changing reasoning abilities.
[Prediction] We are in an Algorithmic Overhang, Part 2

Firstly we already have humans working together.

Secondly, do BCIs mean brainwashing for the good of the company? I think most people wouldn't want to work for such a company. I mean, companies probably could substantially increase productivity with psychoactive substances. But that's illegal and a good way to lose all your employees.

Also, something Moloch-like has a tendency to pop up in a lot of unexpected ways. I wouldn't be surprised if you get direct brain-to-brain politicking.

Also this is less relevant for AI safety research, where there is already little empire building because most of the people working on it already really value success. 

1Quintin Pope3mo“… do BCI's mean brainwashing for the good of the company? I think most people wouldn't want to work for such a company.” I think this is a mistake lots of people make when considering potentially dystopian technology: that dangerous developments can only happen if they’re imposed on people by some outside force. Most people in the US carry tracking devices with them wherever they go, not because of government mandate, but simply because phones are very useful. Adderall use is very common in tech companies, esports gaming, and other highly competitive environments. Directly manipulating reward/motivation circuits is almost certainly far more effective than Adderall. I expect the potential employees of the sort of company I discussed would already be using BCIs to enhance their own productivities, and it’s a relatively small step to enhancing collaborative efficiency with BCIs. The subjective experience for workers using such BCIs is probably positive. Many of the straightforward ways to increase workers’ productivity seem fairly desirable. They’d be part of an organisation they completely trust and that completely trusts them. They’d find their work incredibly fulfilling and motivating. They’d have a great relationship with their co-workers, etc. Brain to brain politicking is of course possible, depending on the implementation. The difference is that there’s an RL model directly influencing the prevalence of such behaviour. I expect most unproductive forms of politicking to be removed eventually. Finally, such concerns are very relevant to AI safety. A group of humans coordinated via BCI with unaligned AI is not much more aligned than the standard paper-clipper AI. If such systems arise before superhuman pure AI, then I expect them to represent a large part of AI risk. I’m working on a draft timeline where this is the case.
[Prediction] We are in an Algorithmic Overhang, Part 2

It's easier to couple a cart to a horse than to build an internal combustion engine.

It's easier to build a modern car than to cybernetically enhance a horse to be that fast and strong.

Humans plus BCI are not too hard. If keyboards count as crude BCI, it's easy. Making something substantially superhuman? That's harder than building an ASI from scratch.

2Quintin Pope3moYou can easily combine multiple horses into a “super-equine” transport system by arranging for fresh horses to be available periodically across the journey and pushing each horse to unsustainable speeds. Also, I don’t think it’s very hard to reach somewhat superhuman performance with BCIs. The difference between keyboards and the BCIs I’m thinking of is that my BCIs can directly modify neurology to increase performance. E.g., modifying motivation/reward to make the brains really value learning about/accomplishing assigned tasks. Consider a company where every employee/manager is completely devoted to company success, fully trust each other and have very little internal politicking/empire building. Even without anything like brain-level, BCI enabled parallel problem solving or direct intelligence augmentation, I’m pretty sure such a company would perform far better than any pure human company of comparable size and resources.
[Prediction] We are in an Algorithmic Overhang, Part 2

Possibly GPT3 x 100. Or RL of similar scale.

Very likely Evolution (with enough compute, though you might need a lot of it).

AIXI. You will need a lot of compute. 

I was kind of referring to the disjunction.

[Prediction] We are in an Algorithmic Overhang, Part 2

The set of designs that look like "Human brains + BCI + Reinforcement learning" is large. There is almost certainly something superintelligent in that design space, and a lot of things that aren't. Finding a superintelligence in this design space is not obviously much easier than finding a superintelligence in the space of all computer programs.

I am unsure how this bypasses algorithmic complexity and hardware issues. I would not expect human brains to be totally plug-and-play compatible. It may be that the results of wiring 100 human brains together (with little external compute) are no better than the same 100 people just talking. It may be you need difficult algorithms and/or lots of hardware as well as BCIs.

1Quintin Pope3moI think using AI + BCI + human brains will be easier than straight AI for the same reason that it’s easier to finetune pretrained models for a specific task than it is to create a pretrained model. The brain must have pretty general information processing structure, and I expect it’s easier to learn the interface / input encoding for such structures than it is to build human level AI. Part of that intuition comes from how adaptable the brain is to injury, new sensory modalities, controlling robotic limbs, etc. Another part of the intuition comes from how much success we’ve seen even with relatively unsophisticated efforts to manipulate brains, such as curing depression [https://www.nature.com/articles/s41591-021-01480-w].
[Prediction] We are in an Algorithmic Overhang, Part 2

If I'm wrong and the core learning algorithm(s) of human beings is too complicated to write in a handful of scientific papers then superintelligence will not be built by 2121.

 

It is possible for evolution to have stumbled upon a really complicated algorithm for humans. Deep learning is fairly simple. AIXI is simple. Evolution is simple. If the human brain is incredibly complicated, even in its core learning algorithm, we could make something else. (Or possibly copy lots of data with little understanding)

2Quintin Pope3moYou could also likely build superintelligence by wiring up human brains with brain computer interfaces, then using reinforcement learning to generate some pattern of synchronized activations and brain-to-brain communication that prompts to brains collectively solve problems more effectively than a single brain is able to - a sort of AI guided super-collaboration. That would bypass both the algorithmic complexity and the hardware issues. The main constraints here are the bandwidths of brain computer interfaces (I saw a publication that derived a Moore’s law-like trend for this, but now can’t find it. If anyone knows where to find such a result, please let me know.) and the difficulty of human experiments.
6Tofly3moThe brain may also be excessively complicated to defend against parasites [https://slatestarcodex.com/2019/08/19/maybe-your-zoloft-stopped-working-because-a-liver-fluke-tried-to-turn-your-nth-great-grandmother-into-a-zombie/] .
[Prediction] We are in an Algorithmic Overhang, Part 2

We're not (yet) limited by hardware

There are 2 questions here, the intelligence of existing algorithms with new hardware, and the intelligence of new algorithms with existing hardware.

We could be (and probably are) in a world where existing algorithms + more hardware and existing hardware + better algorithms can both lead to superintelligence. In which case the question is how much progress is needed, and how long it will take. 

4lsusr3moWhen you say "[w]e could be (and probably are) in a world where existing algorithms + more hardware…can…lead to superintelligence" are you referring to popular algorithms like GPT or obscure algorithms buried in a research paper somewhere?
What Do GDP Growth Curves Really Mean?

If you take this definition literally, then if scientists find an extremely expensive way to make lab grown dodo meat, suddenly the GDP for when dodos existed jumps up.

Measure by old prices, and one thing we can make cheaply now that we basically couldn't make before sends the numbers soaring. Measure by the old prices, and one thing we could make but no longer can sends the numbers plummeting.
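
A toy three-good economy showing both effects; every quantity and price below is invented for illustration:

```python
# "Real GDP" comparisons depend heavily on which year's prices you use.

old = {"bread": {"qty": 100, "price": 1.0},
       "dodo meat": {"qty": 10, "price": 5.0},        # plentiful back then
       "antibiotics": {"qty": 0, "price": 10_000.0}}  # effectively unobtainable

new = {"bread": {"qty": 120, "price": 2.0},
       "dodo meat": {"qty": 0, "price": 50_000.0},    # only as exotic lab-grown meat
       "antibiotics": {"qty": 100, "price": 1.0}}

def gdp(quantities, prices):
    return sum(quantities[g]["qty"] * prices[g]["price"] for g in quantities)

# Value both years at OLD prices: the new antibiotics, priced at their old
# near-unobtainable price, blow the modern number up.
print(gdp(old, old), gdp(new, old))      # 150  vs  1,000,120

# Value both years at NEW prices: the vanished dodos, priced at today's exotic
# lab-grown price, make the old economy look huge instead.
print(gdp(old, new), gdp(new, new))      # 500,200  vs  340
```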

There are also questions of how much you want to call two items similar. When we count the number of spoons in ancient Rome, do we compare them to modern plastic... (read more)

Intelligence or Evolution?

Darwinian evolution as such isn't a thing amongst superintelligences. They can and will preserve terminal goals. This means the number of superintelligences running around is bounded by the number humans produce before the point where the first ASI gets powerful enough to stop any new rivals being created. Each AI will want to wipe out its rivals if it can (unless they are managing to cooperate somewhat). I don't think superintelligences would have humans' kind of partial cooperation. Either near perfect cooperation, or near total competition. So this is a scenario where a smallish number of ASIs that have all foomed in parallel expand as a squabbling mess.

1Ramana Kumar3moDo you know of any formal or empirical arguments/evidence for the claim that evolution stops being relevant when there exist sufficiently intelligent entities (my possibly incorrect paraphrase of "Darwinian evolution as such isn't a thing amongst superintelligences")?
Where's my magic sword?

When performing first aid, you must never leave a patient until you have passed them off to someone more qualified than yourself.

I think this is bad advice. Sometimes the patient has a small cut. A dab of antiseptic and a plaster and they're treated. In a triage situation you might be rushing back and forth between several patients, trying to stem the bleeding until the ambulances arrive.

If you and a friend were out walking, and your friend broke their leg, you might want to attach some sort of crude splint, then leave your friend there as you walk for help.... (read more)

5gwillen4moSee my other comment for my more detailed thoughts, but note that wilderness situations are a specific exception to a lot of usual rules about emergency care. (E.g. "Wilderness First Responder" is a specific type of class/certification different from normal first aid, because it deals with unusual situations where rescue is not immediately available.)
What is the evidence on the Church-Turing Thesis?

Turing machines are kind of the limit of finite state machines. There are particular Turing machines that can simulate all possible finite state machines. In general, an arbitrary sequence of finite state machines can be uncomputable. But there are particular primitive recursive functions that simulate arbitrary finite state machines (given the machine's description as an input). You don't need the full strength of Turing completeness to do that. So I would say, kind of no. There are systems strictly stronger than all finite automata, yet not Turing complete.

Really, the notion of a limit isn't rigorously defined here, so it's hard to say.
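
A minimal sketch of the "one simple bounded loop simulates any finite state machine" point; the example machine is made up:

```python
# A finite state machine is just a transition table; simulating one is a single
# bounded pass over the input, which needs nothing like full Turing power.

def run_fsm(transitions, start, accepting, tape):
    """Simulate an arbitrary FSM given its description."""
    state = start
    for symbol in tape:
        state = transitions[(state, symbol)]
    return state in accepting

# Example machine: accepts binary strings with an even number of 1s.
even_ones = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}

print(run_fsm(even_ones, "even", {"even"}, "1101"))   # False: three 1s
print(run_fsm(even_ones, "even", {"even"}, "1001"))   # True: two 1s
```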

What is the evidence on the Church-Turing Thesis?

Sometimes in mathematics, you can write 20 slightly different definitions and find you have defined 20 slightly different things. Other times you can write many different formalizations and find they are all equivalent. Turing completeness is the latter. It turns up in Turing machines, register machines, tiling the plane, Conway's Game of Life and many other places. There are weaker and stronger possibilities, like finite state machines, stack machines and oracle machines. (I.e. a Turing machine with a magic black box that solves the halting problem is strong... (read more)

2Morpheus4moThanks for the answer! Oops! You're right, and it's something that I used to know. So IIRC as long as your tape (and your time) is not infinite you still have a finite state machine, so Turing machines are kind of finite state machines taken to the limit for (n→∞), is that right?
Can you control the past?

perfect deterministic software twins, exposed to the exact same inputs. This example that shows, I think, that you can write on whiteboards light-years away, with no delays; you can move the arm of another person, in another room, just by moving your own.

 

In this situation, you can  draw a diagram of the whole thing, including all identical copies, on the whiteboard. However you can't point out which copy is you. 

In this scenario, I don't think you can say that you are one copy or the other. You are both copies.

1MichaelStJules5moThere could be external information you and your copy are not aware of that would distinguish you two, e.g. how far different stars appear, time since the big bang. And we can still talk about things outside Hubble volumes. These are mostly relational properties that can be used to tell spacetime locations apart.
So You Want to Colonize The Universe

You can send probes programmed to just grab resources, build radio receivers and wait.

2avturchin5moBut even grabbing resources may damage alien life or do other things which turn out to be bad.
Yet More Modal Combat

Not yet. I'll let you know if I make a follow-up post with this. Thanks for a potential research direction.

A gentle apocalypse

This seems to be a bizarre mangling of several different scenarios. 

Yet in most of the world, humans will probably no longer be useful to anything or anyone – even to each other – and will peacefully and happily die off. 

Many humans will want to avoid death as long as they can, and to have children. Most humans will not think "robots do all that boring factory work, therefore I'm useless therefore kill myself now". If the robots also do nappy changing and similar, it might encourage more people to be parents. And there are some humans that want h... (read more)

5pchvykov5moyeah, I can try to clarify some of my assumptions, which probably won't be fully satisfactory to you, but a bit: * I'm trying to envision here a best-possible scenario with AI, where we really get everything right in the AI design and application (so yes, utopian) * I'm assuming that the question "is AI conscious?" to be fundamentally ill-posed as we don't have a good definition for consciousness - hence I'm imagining AI as merely correlation-seeking statistical models. With this, we also remove any notion of AI having "interests at heart" or doing anything "deliberately" * and so yes, I'm suggesting that humans may be having too much fun to reproduce with other humans, nor will feel much need to. It's more a matter of a certain carelessness, than deliberate suicide.
D&D.Sci August 2021: The Oracle and the Monk

I would use

Solar and Lunar Mana, which I think has about a 66% chance of working. The only mana type that can be predicted 10 days in advance (beyond the prediction of a random sample from the previous data) is Doom. And that still doesn't work out with as high probability. (Doom mana will be on a high at the time, but using it with solar gives a 69% chance of reaching 70 and an 11% chance of demons. So a 58% chance of doing good.) If the utility is 0 for any amount of mana <70, and the amount of mana if it's >=70, then solar + earth does slightly

... (read more)
Donald Hobson's Shortform

The Bible is written by many authors, and contains fictional and fictionalized characters. It's a collection of several-thousand-year-old fanfiction. Like modern fanfiction, people tried telling variations on the same story or the same characters (2 entirely different Genesis stories). Hence there not even being a pretence at consistency. This explains why the main character is so often portrayed as a Mary Sue. And why there are many different books, each written in a different style. And the prevalence of weird sex scenes.

Against "blankfaces"

Another reason someone might stick to the rules is if they think the rules carry more wisdom than their own judgement. Suppose you knew you weren't great at verbal discussions, and could be persuaded into a lot of different positions by a smart fast-talker, if you engaged with the arguments at all. You also trust that the rules were written by smart wise experienced people. Your best strategy is to stick to the rules and ignore their arguments.

Someone comes along with a phone that's almost out of battery and a sob story about how they need it to be charged... (read more)

1Mo Nastri5moThis strategy reminds me of epistemic learned helplessness [https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/].
The biological intelligence explosion

Genetic modification takes time. If you are genetically modifying embryos, that's ~20 years before they are usefully contributing to your attempt to make better embryos.

Maybe you can be faster when enhancing already grown brains. Maybe not. Either way, enhancing already grown brains introduces even more complications.

At some point in this process, a human with at most moderate intelligence enhancement decides it would be easier to make an AI from scratch than to push biology any further. And then the AI can improve itself at computer speeds. 

In short, I don't expect the biological part of the process to be that explosive. It might be enough to trigger an AI intelligence explosion.

Slack Has Positive Externalities For Groups

If you have 100 hours, and 100 commitments, each of which takes an hour, that is clearly a case of low time slack.

If you have 100 hours, and 80 commitments each of which takes either 0 or 2 hours (with equal chance) that is the high slack you seem to be talking about. Note that  units of free time are available. This person is still pretty busy. 

If you have 100 hours, and 1 hour of commitment, and most of the rest of the time will be spent lying in bed doing nothing or timewasting, this person has the most slack.
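
A quick simulation of the three scenarios above (10,000 sampled weeks; the seed is arbitrary):

```python
import random

random.seed(0)
HOURS = 100

def free_time_scenario_2():
    """80 commitments, each taking 0 or 2 hours with equal chance."""
    committed = sum(random.choice([0, 2]) for _ in range(80))
    return HOURS - committed   # negative means over-committed

samples = [free_time_scenario_2() for _ in range(10_000)]
print("scenario 1 (100 x 1h commitments): free time =", HOURS - 100)
print("scenario 2: average free time =", sum(samples) / len(samples))
print("scenario 2: fraction of weeks over-committed =",
      sum(s < 0 for s in samples) / len(samples))
print("scenario 3 (1h of commitments):   free time =", HOURS - 1)
```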

A way reality mi... (read more)

and most of the rest of the time will be spent lying in bed doing nothing or timewasting, this person has the most slack.

It depends on how much optionality the person has around changing this behavior. Often busy people have the most slack because they have the most agency to change their behavior.

Is the argument that AI is an xrisk valid?

Imagine a device that looks like a calculator. When you type 2+2, you get 7. You could conclude it's a broken calculator, or that arithmetic is subjective, or that this calculator is not doing addition at all. It's doing some other calculation.

Imagine a robot doing something immoral. You could conclude that it's broken, or that morality is subjective, or that the robot isn't thinking about morality at all.

These are just different ways to describe the same thing. 

Addition has general rules. Like a+b=b+a. This makes it possible to reason about. Whatever the other calculator computes may follow this rule, or different rules, or no simple rules at all. 
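
One way to cash that out: property-check the mystery device against addition's general rules on random inputs. `mystery_op` here is a hypothetical stand-in, tuned to reproduce the 2+2=7 behaviour:

```python
import random

def mystery_op(a, b):
    # Stand-in for whatever the device actually computes; gives 2+2 = 7.
    return a + b + 3

def obeys_addition_laws(op, trials=1000):
    """Check commutativity, associativity and the additive identity on random inputs."""
    for _ in range(trials):
        a, b, c = (random.randint(-100, 100) for _ in range(3))
        if op(a, b) != op(b, a):                 # commutativity
            return False
        if op(op(a, b), c) != op(a, op(b, c)):   # associativity
            return False
        if op(a, 0) != a:                        # identity element
            return False
    return True

print(obeys_addition_laws(mystery_op))   # False: it fails op(a, 0) == a
```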

1TAG6moNot to the extent that there's no difference at all...you can exclude some of them on further investigation.
Is the argument that AI is an xrisk valid?

I think the assumption is that human-like morality isn't universally privileged.

Human morality has been shaped by evolution in the ancestral environment. Evolution in a different environment would create a mind with different structures and behaviours.

In other words, a full specification of human morality is sufficiently complex that it is unlikely to be spontaneously generated.

In other words, there is no compact specification of an AI that would do what humans want, even when on an alien world with no data about humanity. An AI could have a pointer at human morality with instructions to copy it. There are plenty of other parts of the universe it could be pointed to, so this is far from a default.  

1VCM6moBut reasoning about morality? Is that a space with logic or with anything goes?