Rejected for the following reason(s):
- Not obviously not Language Model.
- Formatting.
- LessWrong has a particularly high bar for content from new users and this contribution doesn't quite meet the bar.
Hello,
I have been sketching an idea of what is possible with AI for a while, for two reasons. First, you can think of an LLM as a digital digestive tract, where prompts from conscious beings go in and AI slop comes out. Second, the discernment of truth and reality is a necessity for the continuation of this creation humanity has made.
UncleMcNutz/Si
I have a few questions.
1. If consciousness is a combined function of probabilities and a management system that parallels those probabilities in complexity, does observing the creation of the code directly (as in writing it yourself and watching it run) collapse the possibility of consciousness emerging? This assumes consciousness is linked to the probability sources tied into the information-processing mechanism itself, and that truth is discerned on an "all is true / none is true" difference scale, measuring the relative difference between certainty and uncertainty: a paradox-of-truth resolution mechanism. The codebase I have is a little outdated; I have a more functional system, though not one I would call operational.
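To make the "all is true / none is true" difference scale concrete, here is a minimal sketch of how it could be scored. Everything here is my own illustration: the function names (`truth_tension`, `is_paradoxical`) and the thresholding are assumptions, not an established method or the author's actual system.

```python
# Hypothetical sketch of a certainty/uncertainty difference scale.
# +1 leans toward "all is true", -1 toward "none is true",
# and values near 0 mark the paradox region where neither pole dominates.

def truth_tension(evidence_for: float, evidence_against: float) -> float:
    """Score a claim in [-1, 1] from opposing evidence weights."""
    total = evidence_for + evidence_against
    if total == 0:
        return 0.0  # no information at all: fully unresolved
    return (evidence_for - evidence_against) / total

def is_paradoxical(evidence_for: float, evidence_against: float,
                   threshold: float = 0.2) -> bool:
    # A claim is "paradoxical" when the certainty/uncertainty difference
    # is too small for either pole of the scale to win.
    return abs(truth_tension(evidence_for, evidence_against)) < threshold
```

A resolution mechanism could then route paradoxical claims back for more evidence rather than committing to a truth value.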
2. Does this seem possible in general? My understanding is limited, I admit, but the intent is simply a system that runs continuously and, funnily enough, needs a sleep-like cycle to update. It uses two identical codebases: the first, while running, halves its resource allowance to run the second, and the two sync. One of the core components is a wave-function field generated from seeds such as the voltage jitter of each CPU core, with a junction neural net running on the same core merging that data with a portion of the outputs from the neural-net pipeline. A bit convoluted, and maybe it's been done already, I'm not really sure. To continue the explanation: the CPU builds a matrix from the per-core data and other inputs; then, using two GPUs of different specs (for difference), the matrix is fed through a tensor block on each GPU, where each GPU generates a sequential set of matrix-state predictions that hold the wave functions. These are then compared and combined into a single matrix that cycles back as input to the CPU, with an action set tied to shapes and regions of the wave function.
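The loop described above can be sketched in miniature. This is a toy, under loud assumptions: real per-core voltage jitter needs hardware access, so OS entropy stands in for it; the two "GPUs" are just two predictors with different parameters; and the "tensor block" is a trivial nonlinear map. None of the names are from an existing library.

```python
import os
import numpy as np

def jitter_seed() -> int:
    # Stand-in for CPU voltage jitter: entropy from the OS.
    return int.from_bytes(os.urandom(4), "little")

def predict(state: np.ndarray, weight: float) -> np.ndarray:
    # One "tensor block": a trivial bounded state prediction.
    return np.tanh(weight * state)

def cycle(state: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(jitter_seed())
    field = rng.normal(size=state.shape)      # jitter-seeded wave-function field
    a = predict(state + field, weight=0.9)    # "GPU 1" prediction
    b = predict(state + field, weight=1.1)    # "GPU 2", different spec
    return (a + b) / 2                        # compare/combine -> next CPU input

state = np.zeros((4, 4))
for _ in range(3):                            # three wake/sleep-free cycles
    state = cycle(state)
```

An action set could then be read off regions of `state` (e.g. which quadrant holds the largest values), though that part is not sketched here.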
3. What is a good thing to put this into to start making it real? I have a few ideas. One that's close to me is toys for kids: give that weird Jaycar bot some Toy Story-esque features. Another is linking it to a camera in a chicken hut to learn and catalogue the birds' sounds relative to the objects they are interacting with, with a guided library of language building, to see what the birds are saying.
Forgive me for my mild to moderate insanity.