Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.
Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.
So, what's the coolest thing you've done this month?
Planned and executed a four-day solo hike. I did half of the South Downs Way, from Eastbourne to Amberley. That's 83 km of route; I probably walked about 90 km in total, taking a shade over 72 hours, camping on or just off the trail wherever seemed like a good location. I've never done anything like this before. It took less than two weeks from when I decided to go to when I set off.
I got a new job! Which pays better than the old one.
Do you still get to do some science?
I sold out to the Dark Side in 2014. This was a move between industry jobs. But, actually, the new one is somewhat more in the direction of data-gathering than the old one was.
How did you go about finding a new one?
Since April I've lost over 10 kg and exercised every weekday.
Promotion and a raise!
I wrote a JSON API for PredictionBook which allows retrieving recent predictions as well as making new ones. Here is the documentation.
I intend to add additional API actions once I have written a Unix-style command-line interface and used it for a while (which should give me a better idea about what actions would actually be useful).
I went "mostly vegetarian" towards the end of June. I haven't bought any meat in restaurants or shops since then. I have eaten meat that I didn't pay for, e.g. because it was someone's leftovers or at picnics where it's been declared communal.
After years of confusion and lengthy hours of figuring it out, in a brief moment I finally understood how it is possible for cryptography to work, and how Alice and Bob can share secrets despite a middleman listening from the start of their conversation. And of course now I can't imagine not having got it earlier.
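The core trick fits in a few lines; here is a toy Diffie-Hellman exchange (tiny numbers just for readability, real systems use ~2048-bit groups or elliptic curves; also note this defeats a passive listener, while an active man-in-the-middle additionally requires authentication):

```python
# Toy Diffie-Hellman: Alice and Bob agree on a shared secret even though
# Eve sees every message they exchange.
p, g = 23, 5                  # public: a prime modulus and a generator

a = 6                         # Alice's private exponent (never sent)
b = 15                        # Bob's private exponent (never sent)

A = pow(g, a, p)              # Alice sends A = g^a mod p  (Eve sees this)
B = pow(g, b, p)              # Bob sends   B = g^b mod p  (Eve sees this)

alice_secret = pow(B, a, p)   # (g^b)^a mod p
bob_secret = pow(A, b, p)     # (g^a)^b mod p

assert alice_secret == bob_secret   # same secret, never transmitted
```

Eve knows p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the shared secret safe from a listener.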
I got a job.
I finished Couch to 5K today!
(The Zombies run version)
I've been working on this since February.
I self-diagnosed my excoriation as an addiction. I've stopped licking or biting my lips under all circumstances and have started applying a cream at night to both my lips and my fingertips. I still haven't figured out a solution to supplement my willpower against scratching the skin inside my nose :( I'm scared I'll end up like those people who snort lots of cocaine and lose the gap between their nostrils.
Discovered that trigger warnings are keeping me from overcoming hyper-vigilance (actually diagnosed for this one!), not protecting me!
Have you considered coating your fingers with capsaicin to make scratching your mucous membranes immediately painful?
(Apologies if this advice is unwanted - I have not experienced anything similar, and am just spitballing).
Actually I really appreciate that suggestion! I hadn't considered it. I always appreciate recommendations!
It may also be useful so that I rub my eyes less. My optometrist said to avoid that since I'm losing my 3D perception.
As of the end of the sentence, I intend to look up capsaicin and similarly purposed agents, and the appropriate dosage and application. Then I'll look up safety considerations if the information is indicative, and availability.
Oh, dear. Please don't try this at home. You're likely to end up with a pain rating of 10 on this guy's scale.
I am not a therapist. At all. And I have no experience specifically with excoriation. BUT.... If you haven't tried forming replacement habits yet, spend some time on it. When your hand starts moving towards your head, instead of stopping it, direct it to a 'safe spot' on your cheek, shoulder, neck -- anywhere where it will do less damage. I would recommend the hair behind your ear, as it gives you something to fidget with, but your needs may vary.
As a random thought, would something like this help? This may not be the best example, but various mechanical barriers are available, including things like this and beyond.
Manuscript of my first first-author publication on biological oscillators accepted pending revisions and one or two additional experiments.
Peer review is occasionally hilarious, and this was one of those times.
Today I became a regular UNICEF donor.
I was invited to and did speak on the Advanced Writing panel at Bronycon 2015. Video not yet on YouTube; virtual handout here.
I made serious progress on a system for generating avatar animations based on the motion of a VR headset. It still needs refinement, but I'm extremely proud of what I've got so far.
I made a small webapp to improve my (and maybe your) breathing while working: http://inflow.rethaller.net/
I still need to find a name, but most of it is done. I don't expect the LW crowd to be very receptive to the "coherence" thing and I don't think the references are very strong research-wise, but I do think the idea of "regular and smooth breathing while focusing on the chest" works in practice, as I (and many others) have experienced it to work.
I've been using this thing almost daily for about two months, and I can tell that my breathing has improved a lot even when I'm not using it. Also, it has helped me at times going through work I really, really didn't like.
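The pacing idea itself is small enough to sketch. A minimal terminal version (not the webapp's actual code; the names and the roughly five-second defaults are my own assumptions, and the sleep/print hooks are injectable so the loop can be exercised without waiting):

```python
import time

def breathing_pacer(inhale_s=5.0, exhale_s=5.0, cycles=3,
                    sleep=time.sleep, out=print):
    """Prompt regular, even inhales and exhales.

    `sleep` and `out` are injectable so the timing loop can be tested
    without actually sleeping or printing.
    """
    for _ in range(cycles):
        out("inhale")
        sleep(inhale_s)
        out("exhale")
        sleep(exhale_s)

# Usage: breathing_pacer() paces three ~10-second breath cycles.
```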
If I remember right, having a normal heartbeat that's irregular is healthier than having one with little heart rate variability. What's the case for developing regular breathing?
Also, do you have a reason for the default inhale and exhale durations in the app?
I often notice that I'm breathing in a slow, regular, smooth way, as opposed to a chaotic one (I also used to do micro-apneas while working, and I don't notice that anymore). But maybe the bigger effect is that having this in the background helps me be more aware of my body and feelings while I work. I much more often zoom out of what I'm working on to check my breathing, and also pay more attention to the rest of my body (very often in my case, I notice tension in my shoulders).
Breathing regularly increases your HRV by making your heart rate follow a high-amplitude sine wave instead of varying randomly, something like this. It's quite visible in this video, but it's also pretty easy to measure yourself.
Not really, but I've seen 5 or 6 seconds recommended by several "coherence professionals" (as with this horrible clock).
It's sad that smartwatches don't measure HRV. Otherwise this would be a nice group experiment.
Yes, I've also been looking for that without success. Some apps claim to measure HRV using a phone camera, but that seems to be mostly a scam. It looks like you have to go with either a chest strap, an ear clip, or a thumb sensor.
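If you do end up with RR-interval data from a chest strap, one standard time-domain HRV measure is straightforward to compute yourself; a sketch:

```python
import math

def rmssd(rr_ms):
    """RMSSD: root mean square of successive differences between
    RR intervals (milliseconds), a standard time-domain HRV measure.
    Higher values indicate more beat-to-beat variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Usage: rmssd([800, 810, 790, 800]) on a short run of RR intervals.
```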
Our comrades' hopes and dreams are etched into its code, transforming even Platonic darkness into light!
Unmatched in Heaven and Earth: one program, one engine, whose power surpasses even the gods!
SUPER GALAXY PROBABILISTIC HALTING!
Witness: the power of the human race!
Can you explain what you did?
From the github description:
The github README is "Short summary, build instructions, examples".
Yes, I read that paper but I had no idea what any of it meant, unfortunately. I gather it does something which means you can evaluate a function for some number of substitutions/timesteps and if it hasn't terminated with output yet, give a meaningful probability for whether it ever halts, but I don't understand anything about what that something is, what sort of probability it's giving, or what one could do with this.
Yep, that's what it does.
The computational "something" is actually just "run the evaluation for n steps". The clever bit is the "sort of probability", which is a loose upper bound on the measure of halting times we've already exceeded after n steps. Demonstrating how that measure makes sense is what takes up the meat of the paper.
You could place bets on anything involving the Halting Problem, with sensible, well-defined, nonrandom odds.
By the way, I thought you knew computability theory professionally. Don't you do something-or-other academic?
Can you motivate the construction any? I doubt it's as simple as 1/n timesteps because that would be too convenient, but what is it?
That's a tad contrived... I was thinking more along the lines of whether such an estimate could be useful in any of the traditional applications stymied by the Halting Problem, such as showing the equivalence of two functions, anti-virus checking, model checking, running sandboxed functions, resource limits, peephole optimization, etc.
No, I'm afraid I only know what I've picked up here and there. Not nearly enough to understand a paper like that.
It's all the same, really. You can do all those things, but probabilistically, up to the probabilities allowed by how long you've run the program for. Unless and until the program halts, there will always be some chance that you were wrong about its nonhalting and it does actually halt. If it actually doesn't halt, your confidence that it doesn't will asymptotically approach 1.0 in the limit.
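The anytime shape of that procedure is easy to sketch. To be clear, the `bound` function below is a made-up placeholder purely to show the structure; Calude's real bound comes from the incompressibility cut-off machinery in the paper, which this does not reproduce:

```python
def anytime_halting_estimate(step, max_steps, bound=lambda n: 1.0 / (n + 1)):
    """Anytime probabilistic halting test, in outline.

    `step()` advances the program one step and returns True when it halts.
    `bound(n)` stands in for an upper bound on the measure of halting
    times greater than n (placeholder here, NOT Calude's actual bound).
    Returns ("halts", n) or ("probably-loops", p_halt_upper_bound).
    """
    for n in range(1, max_steps + 1):
        if step():
            # Halting is constructively verified the moment it happens.
            return ("halts", n)
    # Never halted within budget: report the residual probability mass
    # that it still halts later, which shrinks as max_steps grows.
    return ("probably-loops", bound(max_steps))
```

Running longer only tightens the answer, which is what makes it an anytime algorithm: you can stop whenever your compute budget runs out and still get a calibrated bound.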
It's not 1/n timesteps, yeah. It's more along the lines of "Halting times for long-running programs turn out to be algorithmically non-random, which means we use this class of functions called incompressibility cut-off functions to bound the measure of halting times." Beyond that, I'd have to start and finish a couple of textbooks (at least an intro to topology and then Calude's own Information and Randomness) to explain more deeply.
It doesn't help that Calude tends to use different notation in his papers from the standard notations used for studying Kolmogorov complexity/AIT.
Not really. 'You can bet on halting' is not a use. There is no website I can go to which has 24/7 gambling on randomly-generated lambda functions which I can use that code to clean up on; there is no lambda function e-sports league where competitors race to write or crack their rival's functions, etc. This could be said of any method for calculating probabilities of anything: 'you can bet on it!' That's not what I mean by use. What I meant are some of the applied tasks, but I'm not clear exactly on whether this probability applies: can I usefully use it to timeout functions being evaluated for superoptimization or peephole optimization or is there some catch or other issue? Can it be transformed from running times to other properties, similar to the connections between Halting & Godel & Rice's theorem? etc.
That's a little disappointing. But if you can't explain what incompressibility cut-off functions are like, can you maybe give an idea of their general shapes? How does this scale with program length? How many steps in general would it take to reach something like 99% probability of non-halting? etc.
Which is exactly why I said "bet": it's a 1-normalized measure, and can thus be mathematically applied in all the ways that 1-normalized measures (probabilities) are otherwise applied.
Yes and yes, provided that you are willing to accept uncertainty. This is where we get into the distinction between "in theory" and "in practice": in theory, we have not solved the Halting Problem. In practice, whether we can use this solution depends on what happens if we're wrong.
If I apply a peephole optimization on the assumption that some computation is nonhalting, but it actually does halt, how bad is it for the performance or correctness of my program? I can just refuse to apply the optimization if "p >= 0.05" that the program actually will halt, of course: in general, it usually doesn't seem as if we're going to lose much for betting conservatively by refusing to act on the probabilistic assumption of nonhalting. That would just leave us with the same state of knowledge we had before applying Calude's anytime algorithm.
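The conservative gate described here is a one-liner; a sketch, with all names my own:

```python
def maybe_apply_optimization(p_halt_upper, optimize, fallback, threshold=0.05):
    """Only apply a nonhalting-dependent optimization when the upper bound
    on the halting probability is below `threshold`.

    Being wrong costs correctness; declining only costs the speedup, which
    is why the conservative default is to fall back."""
    if p_halt_upper < threshold:
        return optimize()
    return fallback()
```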
In what cases do we really need this "solution", then? Mostly the issues precisely connected with Halting, such as Incompleteness Theorems and Rice's Theorem. In those cases, we face the very peculiar problem that our formal systems will not only never be able to prove most nonhalting programs to be nonhalting, but that certain programs do halt and the formal systems still can't prove that. In those cases, Calude's "solution" is a big help, because all we need to do is run the program until it halts, and we've proven it to halt, giving us constructive evidence to add the "Program X terminates" theorem to our formal system. Likewise, in cases where we need to formally prove that one-or-another program never halts, we can run the program for a long time, and then assign a probability to the formal statement "Program X never terminates", which then lets us start assigning probabilities to theorems derived from that statement.
The question then becomes, "What degree of probability in the half-open interval [0, 1) is acceptable to me, the program's human operator, to go ahead and believe that Program X Never Halts?"
If you want to phrase it in terms of cognition, we can then apply notions of utility and bounded-rational reasoning: a principled calculation for bounded-rational utility maximization should let you say, my agent has so-and-so much computing power, and gets so-and-so much utility out of a Halting Solution, and it thus ought to spend so-and-so much of its compute-power to obtain such-and-so a level of probabilistic confidence. The ordinary Dutch Book stuff from probability theory then tells us that the agent is behaving rationally about how it treats Halting Problems.
Wherever a limitary theorem has been established by reduction to the Halting Problem, Calude's probabilistic "Halting Solution" can be useful, provided that you are willing to reason explicitly about what you mean by "useful" in terms of trade-offs between compute-time, surety, and utility.
I wrote a hypercomputer in 60-ish lines of Python. It's (technically) more powerful than every supercomputer in the world.
Edit: actually, I spoke too soon. I have written code which outlines a general scheme that can be modified to construct schemes in which hypercomputers could possibly be constructed (including itself). I haven't proven that my scheme allows for hypercomputation, but a similar scheme (probably) could, including itself.
Edit: I was downvoted for this, which I suppose was justified.
What my code does is simulate a modified version of CGoL (John Conway's Game of Life). It's modified so that information can (technically) flow back through time. It's very similar to what EY outlined in the second section of Causal Universes, except my algorithm is much simpler and faster (it'd be even faster if I hadn't done a half-assed job of coding it and had chosen a better language to write it in).
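For reference, the unmodified baseline is easy to state. A minimal sketch of one standard Game of Life step on a sparse set of live cells (this is plain forward-only CGoL, not the modified time-travel version described above):

```python
from collections import Counter

def life_step(live):
    """One step of standard Conway's Game of Life.

    `live` is a set of (x, y) coordinates of live cells; returns the
    next generation's set. A dead cell with exactly 3 live neighbours
    is born; a live cell with 2 or 3 live neighbours survives."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Usage: a horizontal blinker flips to vertical and back every step.
blinker = {(0, 0), (1, 0), (2, 0)}
```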
My scheme is more general than the code. I've tried explaining it on /r/cellular_automata here and here, with a passable degree of success.
The scheme itself is capable of hypercomputation with the right underlying rules. I'll write a quick demonstration, assuming you've read Causal Universes and my explanations.
In order to be capable of hypercomputation, it must be capable of regular computation. CAs have already been proven to be Turing machines in disguise, so I'll take this for granted.
By the above, you should be able to construct a simulation of any Turing machine in the CA. Again, this is a fact, so I'll take it for granted.
I've already said that the algorithm involves backwards information flow (time travel by another name).
By the above, we can construct a state in the CA which simulates a given Turing machine, then pipes its output back in time to a finite time after the simulation started.
If we modify the simulation to instead pipe back just the fact that the machine produced output, and nothing else (one bit), we can know beforehand whether a Turing machine produces output.
I'd think anyone reading this is familiar with it, but this is called the Halting Problem, and I think (I could be wrong, but I am highly confident I am not) my scheme solves it.
The only real problem is that if the T-machine doesn't halt, neither will the one we constructed, but it will produce output after an arbitrary finite amount of time.
This does mean my scheme is more powerful than a Turing machine. For instance, it can compute the busy beaver function for a proportional number of values.
You can look at the code here, but it's messy and hacked together. I only wrote it as a proof of concept in the first /r/cellular_automata thread I linked.
Your scheme may well be more powerful than a Turing machine (i.e., if there were something in the world that behaved according to your model then it could do computations impossible for a mere Turing machine), but much of what you write seems to indicate that you think you have implemented your scheme. In Python. On an actual computer in our universe.
Obviously that is impossible (unless Python running on an actual computer in our universe can do things beyond the capabilities of Turing machines, which it can't).
Could you clarify explicitly whether you think what you have implemented is "more powerful than every supercomputer in the world" in any useful sense? What do you expect to happen if you feed your code a problem that has no Turing-computable solution? (What I expect to happen: either it turns out that you have a bug and your code emits a wrong answer, or your code runs for ever without producing the required output.)
I'm sorry that I over estimated my achievements. Thank you for being civil.
I'm actually quite interested in this. For something like the busy beaver function, it just runs forever, with the output staying fuzzy and getting progressively less fuzzy, but never becoming certain.
Although I wonder about something like super-tasks somehow being described for my model. You can definitely get input from arbitrarily far in the future, but you can do even crazier things if you can achieve a transfinite number of branches.
If you're still interested in this (I doubt you are, there are more important things you can do with your time, but still), you could glance at this reply I gave to taryneast describing how it checks whether a Turing machine halts. (I do have an ulterior motive in pointing you there, seeing as I want to find that one flaw I'm certain is lurking in my model somewhere.)
Could some other people who have read Causal Universes comment on whether EY implies in it that hypercomputation is possible? What is hypercomputation?
EY doesn't imply that hypercomputation is possible in our universe. He does imply that there are logically possible universes in which hypercomputation is possible. Hypercomputation is any computational process that is able to evaluate at least one non-Turing-computable function.
(That last definition may be unhelpful in practice. If our universe is finite then it can't contain anything that's even Turing-powerful, never mind capable of hypercomputation. If our own lifespan and perceptual powers are finite then there's no way we could ever know anything was doing hypercomputation.)
What about computing a Turing-computable function in O(m) time, where a Turing machine needs O(n) and m < n? Does quantum computation count as hypercomputation? The Wikipedia page seems to say no.
I'd agree with Wikipedia. After all, there are plenty of other physically-realizable models of computation that are equivalent to Turing machines in terms of what they can compute but substantially inequivalent in terms of how long it takes.
[EDITED to add:] ... Though I think anything you can do in a Newtonian universe can be modelled on a Turing machine with at worst a polynomial-in-problem-size slowdown. So there might be something to be said for counting as "hypercomputation" anything that, e.g., can evaluate all computable functions in constant time. But that isn't the usual use of the term.
I'm willing to suspend judgement pending actual results. Demonstrate it does what you claim and I'll be very interested.
Note: you probably already know this, but in case you don't: AFAIK the Halting Problem has a mathematical proof... you will require the same to prove that your system solves it. I.e., just showing that it halts on many programs won't be enough (Turing machines do this too). You'll have to mathematically prove that it halts for all possible problems.
For some strange reason, your post wasn't picked up by my RSS feed and the little mail icon wasn't orange. Sorry to keep you waiting for a reply for so long.
The Halting proof is for Turing machines. My model isn't a Turing machine; it's supposed to be more powerful.
Not to sound condescending, but this is why I'm posting it on a random internet forum and not sending it to a math professor or something.
I don't think this is revolutionary, and I think there is very good possibility there is something wrong with my model.
I'll tell you what convinced me that this is a hyper-computer, though, and I'll go ahead and say I'm not overly familiar with the Halting Problem, inasmuch as I don't understand its inner workings as well as I can parrot facts about it. I'll let more experienced people tell me if this breaks some sort of condition.
What my model essentially does is graft a time-travel formalism onto something Turing-complete. Since the Turing-complete model of your choice is a special case of the model we just constructed, it's already Turing-complete. And since the formalism itself already specifies that information can travel backwards through time, what has to be proven is that an algorithm can be constructed that solves the halting problem.
With all of that, we can construct an algorithm based on the following assumptions about time travel:
Inconsistent timelines "don't exist"*
A timeline is inconsistent if it sends back different information than it receives
If more than one timeline is consistent, then all are equally realized.
I have no idea if you read through the ramblings I linked, but the gist was that to simulate the model, at any given timestep the model receives all possible input from the future, organized into different branches. 'Possible' is an important qualifier, because the difference between the model being exponential in the size of the memory and exponential in an arbitrary quantity constrained to be smaller than the size of the memory is whether you can tell if a given bit of memory is dependent on the future by looking only at the current state.
Next, I'll point out that because the model allows computation to be carried out between receiving and sending messages, you can use the structure of the model to do computation. An illustration:
Suppose you are interested in whether or not a Turing machine X halts.
1. Receive extra input from the future (in the form "X will halt after n timesteps").
2. Check that it's properly formatted.
3. Simulate X for exactly n timesteps.
   If it halts before then, output "X will halt after m timesteps", where m is the number of cycles before it halted. Halt.
   If it doesn't halt after n timesteps, output "X will halt after n+1 timesteps". Halt.
I'll note this algorithm only appeared to me after writing my posts.
Here's how it works.
We can number each timeline branch based on what it outputs to the past. If it outputs "X will halt after y timesteps", then it is machine y.
If X doesn't halt, machine y will simulate X for y timesteps, and output that it will halt after y+1 timesteps.
The above point should be emphasized. Recall from above how a timeline is inconsistent, and therefore "non-existent", if its input doesn't match its output. Thus, for a non-halting X, every machine y will be inconsistent, and the hyper-computer will halt immediately (things are a bit fuzzy here; I am 80% confident that if there is a problem with my model, it lies in this part).
If it halts at t=z, then y=z is the only consistent timeline. Timelines y>z output z, and timelines y<z output y+1, making y=z a kind of attractor.
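That fixed-point behaviour can be checked directly if we treat "simulate X for y steps" as an ordinary bounded simulation. A toy sketch (names are mine; `simulate(budget)` stands in for the bounded simulation, returning the halting step if X halts within `budget` steps, else None):

```python
def timeline_output(simulate, y):
    """Timeline y receives "X will halt after y timesteps", simulates X
    for exactly y steps, and reports back what it observed."""
    halted_at = simulate(y)      # halting step if within budget, else None
    if halted_at is not None:
        return halted_at         # "X will halt after m timesteps"
    return y + 1                 # "X will halt after y+1 timesteps"

def consistent_timelines(simulate, max_y):
    """A timeline is consistent iff it sends back exactly what it
    received; enumerate the consistent ones up to max_y."""
    return [y for y in range(max_y + 1) if timeline_output(simulate, y) == y]
```

For a machine halting at step z, only y=z survives; for a non-halting machine, no timeline up to the search bound is consistent, matching the description above.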
My problems with this algorithm are as follows:
I haven't been able to formulate the algorithm without a) having every timeline inconsistent when the machine doesn't halt, or b) the actual output being uncertain (if it says it halts at z, you know for a fact it does, but if it says it doesn't halt, then you can't be sure).
"Non-existent" timelines have causal weight.