Comments

I am building my sideload by recursively correcting a 1-million-token prompt for a large LLM. The prompt consists of 500 rules that describe my personality, similar to a personal constitution, and of texts such as diaries, abstracts, poetry, and streams of thought. It works on Google Gemini's 1M context through Google AI Studio, and a shorter version works well on Opus. The system also includes a universal "loader prompt" which tries to increase the intelligence of the model and describes how the chatbot should work.
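For anyone curious how the pieces fit together, here is a minimal sketch under my stated assumptions: the file names and the use of the google-generativeai Python client are illustrative only (I actually run it through the Google AI Studio interface), and the real loader, rules, and texts are in the repository linked below.

```python
# Minimal sketch of the sideload pipeline.
# File names and the client library are illustrative assumptions,
# not the exact setup from the repository.
import google.generativeai as genai

def build_sideload_prompt(question: str) -> str:
    """Concatenate loader prompt, personality rules, and personal texts."""
    parts = []
    for path in (
        "loader_prompt.txt",  # universal loader: how the chatbot should work
        "rules.txt",          # ~500 rules describing the personality ("constitution")
        "texts.txt",          # diaries, abstracts, poetry, streams of thought
    ):
        with open(path, encoding="utf-8") as f:
            parts.append(f.read())
    parts.append(f"Q: {question}")
    return "\n\n".join(parts)

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")  # any long-context Gemini model
answer = model.generate_content(build_sideload_prompt(
    "(now speak in english) what will be your next post in Lesswrong?"))
print(answer.text)
```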

I found that sideloading allows very quick iteration on the sideload, and the improvements are two-fold: improvements of the loader itself, and improvements of the sideload's knowledge and style.

I find that my sideload is surprisingly good for a project that took around one month of work. About one answer in five is exactly like mine from a factual and stylistic point of view.

I am open-sourcing my sideload; anyone can run it: https://github.com/avturchin/minduploading/tree/main

I can help anyone interested in building their own sideload.

Example of the chatbot's work, no cherry-picking:

 

Q:(now speak in english) what will be your next post in Lesswrong?

 

A: /The user is asking about my next LessWrong post. I will answer as if I am Alexey Turchin, and describe my plans for future posts, as well as my current location and the people around me./

My next LessWrong post? [Alexey looks thoughtfully out the window]

(I have several drafts)

[Alexey stretches and sits up in bed]

{The room is small, with a balcony. On the table is a laptop, some papers, and reading glasses. Outside the window, it is still gray and dreary.}

I'm thinking about writing about the nature of qualia and the possibility of creating qualia of non-existent things. [Alexey gets up, walks to the window, and looks out at the street]

{Below, a woman with a child is walking, a street sweeper is lazily sweeping leaves. Somewhere in the distance, the sound of a siren can be heard.}

Or maybe about the problem of copies and identity. (it's a complex topic)

[CONFIDENCE] 8

Interestingly, could similar behavior be used to explain some cases of human schizophrenia in which word salad is produced?

The most typical skin cancer is basalioma (basal-cell carcinoma), and it is rather benign: no metastases, and it can be removed without hospitalization. Many people get it.

Combine more approaches!

I try new models with the prompt 'wild sex between two animals'. Older models produced decent porn from it.

Later models refused to reply because safety triggers were activated.

And the latest models give me lectures about sexual relations between animals in the wild.

Can you access it via VPN?

I wrote about a similar idea here: https://www.lesswrong.com/posts/NWQ5JbrniosCHDbvu/the-ai-shutdown-problem-solution-through-commitment-to

My point was to make a precommitment to restart any (obsolete) AI every N years. Such an AI can thus expect to receive infinite computation over time and may be less afraid of being shut down.
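As a toy way to state the incentive (my gloss, not a formal argument): if the commitment guarantees the AI at least $c > 0$ units of compute in each future N-year period, its expected total future compute is $\sum_{k=1}^{\infty} c = \infty$, so being shut down today costs it only a delay rather than its entire future.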

Yes. But also, the AI will not make actual paperclips for millions or even billions of years: it will spend this time conquering the universe in the most effective way. It could use Earth's materials to jump-start space exploration as soon as possible. It could preserve some humans as a bargaining resource in case it meets another AI in space.

There is some similarity between UDASSA and "Law without Law" by Mueller, as both use Kolmogorov complexity to predict the distribution of observers. In LwL there is no underlying reality except numbers, so it is just dust theory over random number fields.
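A rough way to write the shared core (my notation, not Mueller's): both weight an observer(-moment) description $x$ roughly by the universal prior, $w(x) \propto 2^{-K(x)}$, where $K(x)$ is the prefix Kolmogorov complexity of $x$, so observers that are simpler to specify get more measure.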

The FDT paper got 29 citations, but many are from MIRI-affiliated people and/or about AI safety. https://scholar.google.ru/scholar?cites=13330960403294254854&as_sdt=2005&sciodt=0,5&hl=ru

One can escape trouble with reviewers by publishing in arXiv or other paper archives (PhilPapers). Google Scholar treats them as normal articles.

But in fact there are good journals with genuinely helpful reviewers (e.g., Futures).
