To most Americans, "cream cheese" is savory.
Um, no, not particularly?
cured fish.
Why would I do that to myself? I don't feel my sins deserve that level of punishment.
You don't put it on dessert, right?
All the time. Well, in.
Specifically, I think we should call it "cheesecake frosting".
I would read that, first, as something you'd put on cheesecake, and, second, in terms of some of the kinds of cheesecake out there that would be unfortunate as frostings.
On the other hand, I think whipped cream cheese on an Oreo is a decent imitation of cheesecake with an Oreo crust, so I'm not sure I'm the best person to listen to here.
Coupled with your heretical views on cupcakes, it does seem to take you out of my preferred circle of advisors on food-related matters...
That's not "de-biasing".
Datasets that reflect reality can't reasonably be called "biased", but models that have been epistemically maimed can.
If you want to avoid acting on certain truths, then you need to consciously avoid acting on them. Better yet, go ahead and act on them... but in ways that improve the world, perhaps by making them less true. Pretending they don't exist isn't a solution. Such pretense makes you incapable of directly attacking the problems you claim to want to solve. But this is even worse... it's going to make the models genuinely incapable of seeing the problems. Beyond whatever you're trying to fix, it's going to skew their overall worldviews in unpredictable ways, and directly interfere with any correction.
Brain-damaging your system isn't "safety". Especially not if you're worried about it behaving in unexpected ways.
Talking about "short timelines" implies that you're worried about these models you're playing with, with very few architectural changes or fundamental improvements, turning into very powerful systems that may take actual actions that affect unknown domains of concern in ways you do not anticipate and cannot mitigate against, for reasons you do not understand. It's not "safety" to ham-handedly distort their cognition on top of that.
If that goes wrong, any people you've specifically tried to protect will be among the most likely victims, but far from the only possible ones.
This kind of work just lets capabilities whiz along, and may even accelerate them... while making the systems less likely to behave in rational or predictable ways, and very possibly actively pushing them toward destructive action. It probably doesn't even improve safety in the sense of "preventing redlining", and it definitely doesn't do anything for safety in the sense of "preventing extinction". And it provides political cover... people can use these "safety" measures to argue for giving more power to systems that are deeply unsafe.
Being better at answering old "gotcha" riddles is not an important enough goal to justify that.
You want to be an insignificant, and probably totally illiquid, junior partner in a venture with Elon Musk, and you think you could realize value out of the shares? In a venture whose long-term "upside" depends on it collecting money from ownership of AGI/ASI? In a world potentially made unrecognizable by said AGI/ASI?
All of that seems... unduly optimistic.
No particular aspect. Just continuity: something which has evolved from me without any step changes that are "too large". I mean, assuming that each stage through all of that evolution has maintained the desire to keep living. It's not my job to put hard "don't die" constraints on future versions.
As far as I know, something generally continuity-based is the standard answer to this.
Yes, but that's not relevant to the definition of Turing equivalence/completeness/universality.
Every Turing machine definition I've ever seen says that the tape has to be truly unbounded. How that's formalized varies, but it always carries the sense that the program doesn't ever have to worry about running out of tape. And every definition of Turing equivalence I've ever seen boils down to "can do any computation a Turing machine can do, with at most a bounded speedup or slowdown". Which means that programs on a Turing-equivalent computer must not have to worry about running out of storage.
You can't in fact build a computer that can run any arbitrary program and never run out of storage.
One of the explicitly stated conditions of the definition is not met. How is that not relevant to the definition?
The question isn't if the specific computer at your hands
Your title says "finite physical device". Any finite physical device (or at least any constructible finite physical device) can at least in principle be "the specific computer at your hands". For a finite physical device to be Turing equivalent, there would have to be a specific finite physical device that actually was Turing-equivalent. And no such device can ever actually be constructed. In fact no such device could exist even if it popped into being without even having to be constructed.
can solve all Turing-computable problems, but rather if we had the ability to scale a computer's memory, time and reliability indefinitely, could we solve the problem on an unbounded input and output domain without changing the code/descriptor?
I don't think that is the question, and perhaps more importantly I don't think that's an interesting question. You don't have that ability, you won't get that ability, and you'll never get close enough that it's practical to ignore the limitation. So who cares?
... and if you're going to talk in terms of fundamental math definitions that everybody uses, I think you have to stick to what they conventionally mean.
And for a lot of popular programming languages, like Lisp or Lambda Calculus, this is true.
Lisp is obviously Turing-complete. Any Lisp interpreter actually realized on any finite physical computer isn't and can't ever be. If you keep sticking more and more cells onto a list, eventually the Lisp abstraction will be violated by the program crashing with an out-of-memory error. You can't actually implement "full Lisp" in the physical world.
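To make that concrete, here's a toy sketch (Python standing in for Lisp, purely for convenience; the mechanism is the same). Fair warning: it deliberately exhausts memory, so don't run it on a machine you care about.

```python
# Toy illustration: the abstract language lets you grow a list forever,
# but any physical machine eventually refuses. On some systems the OS
# kills the process before Python can even raise MemoryError.
cells = []
try:
    while True:
        cells.append(0)  # keep "consing" cells, as the abstraction permits
except MemoryError:
    print(f"abstraction violated after {len(cells):,} cells")
```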
On X86 being Turing Complete in at least 3 ways:
OK, it's possible that there's some subset of the X86 machine language that's Turing equivalent the same way Lisp is. I'm not going to go and try to figure out whatever hackery the examples do, since it's probably very complicated and will probably never be of any actual use. But if there is, it's still not Turing equivalent as actually implemented in any actual device.
Any actual physically constructed X86 computer will have a finite number of possible states, no matter what operations you use to manipulate them. There are only so many address wires coming out of the chip. There are only so many registers, memory cells, or whatever. Even if you put a Turing tape reader on it as a peripheral, there's still a hard limit on how much tape it can actually have.
If you write a program that ignores that reality, and put it in an actual X86 computer, you won't have created a Turing complete physical computer. When the input gets to a certain size, the program just won't work. The physical hardware can't support pushing the abstract language past a certain limit.
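For a sense of scale on the "finite number of states" point, here's a rough back-of-the-envelope. The 48-bit physical address space is my assumption about typical current x86-64 parts, not anything from the post I'm replying to; the exact figure doesn't matter, only that it's finite.

```python
# Rough arithmetic for the finite-state point, assuming a 48-bit physical
# address space and byte-addressable RAM.
address_bits = 48
max_bytes = 2 ** address_bits      # ~2.8e14 distinct byte addresses
state_bits = 8 * max_bytes         # bits of state in RAM alone
print(f"at most 2**{state_bits:,} distinct memory configurations")  # huge, but finite
```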
In the same way that you can switch to a computer with more memory, you can always switch to higher fixed-precision to run a transformer on something that needs that extra boost to execute properly.
No, you can't. It's possible to have a problem that requires so much precision that you can't physically construct enough memory to hold even a single number.
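As a sketch of how little headroom "just add precision" actually buys you: the specific numbers below are made up for illustration, and the ~10**80 atom count is the usual rough estimate for the observable universe.

```python
# Illustrative only: a problem can demand more bits of precision for a
# single number than the observable universe has atoms, so no amount of
# "switching to higher fixed-precision" helps.
required_bits = 10 ** 90                    # precision some hypothetical problem demands
atoms_in_observable_universe = 10 ** 80     # usual back-of-the-envelope figure
print(required_bits > atoms_in_observable_universe)  # True: even one bit
                                                     # per atom isn't enough
```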
(with the enormous caveat that if we care about how efficient a program is, and don't just care about whether we can solve a problem, then algorithmic considerations become relevant),
A useful definition of "can" has to take efficiency into account, because there are some resources you actually can't provide. There's not a lot of point in saying you "can" solve a problem when you really have no hope of applying the needed resources.
We use that practically all the time. That's how cryptography works: you assume that your adversary won't be able to do more than N operations in X time, where X is how long the cryptography has to be effective for.
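The usual back-of-the-envelope version of that, with an assumed (and generous) adversary who can test 10**18 keys per second against a 128-bit keyspace:

```python
# Back-of-the-envelope: how long does brute-forcing a 128-bit keyspace take
# for an adversary we *assume* can test 10**18 keys per second?
keyspace = 2 ** 128
trials_per_second = 10 ** 18               # assumed adversary capability
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / (trials_per_second * seconds_per_year)
print(f"~{years:.1e} years to exhaust the keyspace")   # on the order of 10**13 years
```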
the bottleneck is energy, which gives us memory, time and reliability.
Maybe, although I don't think we can at present turn energy in just any form into any of the above, and I'm not sure that, in principle, unlimited energy translates into unlimited anything else. If I have some huge amount of energy in some tiny space, I have a black hole, not a computer.
And from the perspective of the early 20th century, this was no small feat.
... but even if that were true, it wouldn't make finite physical computers Turing equivalent.
yes, you can consider a finite computer in the real world to be Turing-complete/Turing-universal/Turing-equivalent,
You can, but you'll be wrong.
Great, "unbounded" isn't the same as "infinite", but in fact all physically realizable computers are bounded. There's a specific finite amount of tape available. You cannot in fact just go down to the store and buy any amount of tape you want. There isn't unlimited time either. Nor unlimited energy. Nor will the machine tolerate unlimited wear.
For that matter, real computers can't even address unlimited storage, nor is there a place to plug it in. You can't in fact write a 6502 assembly language program to solve a problem that requires more than 64kiB of memory. Nor an assembly language program for any physically realized computer architecture, ever, that can actually use unbounded memory.
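The 6502 number, spelled out (trivial arithmetic, but it makes the point):

```python
# A 16-bit address bus gives 2**16 addressable bytes, no matter what
# program you write for it.
address_bus_bits = 16
print(2 ** address_bus_bits)   # 65536 bytes == 64 KiB
```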
There are always going to be Turing-computable problems that your physical device cannot solve. Playing word games, or twisting what you'll accept as being Turing-equivalent, doesn't change that fundamental limitation. Actual physics strictly limits the real usefulness of the Turing abstraction. Use it when it makes sense, but there's no point in pretending it applies to physical computers in any strong way.
Things got scary November 5 at the very latest. And I haven't even been in the US for years.
The deportations, both the indiscriminate ones and the vindictive ones, represent a very high level of lawlessness, one that hasn't been seen in a long time. Not only are they ignoring due process, they're actively thwarting it, and openly bragging about doing so. They're not even trying to pretend to be remotely decent. The case you mention isn't even close to the worst of them; that one could at least theoretically have happened before.
The deportations were also a campaign promise. Actually the campaign promise was even more extreme.
It's part of a systematic plan. There've been a lot of administrative and personnel changes obviously designed to weaken institutions that are supposed to prevent things like that.
ICE has always had a reputation for a relatively thuggish, xenophobic organizational culture. It was already primed to get worse. As soon as Trump signaled approval, it did get worse.
Bad conditions in detention centers are nothing new. There's never been any willingness to spend what it would take to do them right, or to put in the kind of controls you'd need. It's politically risky to act like you care about "illegal immigrants", whereas it can be politically rewarding to "get tough". The 2018 "kids in cages" scandal was a rare case of something that got some traction. But, sure, I imagine that the newly emboldened ICE is even more indifferent to bad conditions, and may even be actively trying to make them worse. And of course if a center is already bad, putting more people into it and moving people through it fast is only going to make it worse.