bvbvbvbvbvbvbvbvbvbvbv

Comments, sorted by newest
Auto-GPT: Open-sourced disaster?
bvbvbvbvbvbvbvbvbvbvbv · 2y

Pinging @stevenbyrnes: do you agree that instead of mapping those proto-AGIs to a queue of instructions, it would be better to build the AGI from a set of brain structures, each with a corresponding prompt? For example, an "amygdala" would be in charge of returning an int between 0 and 100 indicating fear level, a "hippocampus" would be in charge of storing and retrieving memories, etc. I guess the thalamus would be consciousness, and the cortex would process more abstract queries.

We could also use active inference and Bayesian updating to model current theories of consciousness, and even model schizophrenia by changing the number of past messages some structures can access (i.e. modeling long-range connection issues), etc.

To me that sounds far easier to inspect and align than pure black boxes, as you can throttle the speed and manually change values, e.g. to make sure the AGI never feels threatened.
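A minimal sketch of the kind of thing I mean, assuming only a generic `llm(prompt) -> str` completion callable (every name here is hypothetical, not from an existing project):

```python
from typing import Callable

LLM = Callable[[str], str]  # any text-completion backend works here

def amygdala(llm: LLM, observation: str) -> int:
    """Return a fear level in [0, 100] for the current observation."""
    raw = llm(
        "You are the amygdala of an agent. Rate the threat in the following "
        "situation from 0 (safe) to 100 (mortal danger). "
        "Answer with a single integer.\n\n" + observation
    )
    return max(0, min(100, int(raw.strip())))

def step(llm: LLM, observation: str, memories: list[str]) -> str:
    """One tick of the agent; every intermediate value is inspectable."""
    fear = amygdala(llm, observation)
    fear = min(fear, 30)  # alignment knob: cap how threatened the agent can feel
    # "Hippocampus": only the last few memories are visible; shrinking this
    # window is the long-range-connection knob mentioned above.
    context = "\n".join(memories[-5:])
    action = llm(
        f"Fear level: {fear}/100.\nRecent memories:\n{context}\n"
        f"Situation: {observation}\nDecide the next action."
    )
    memories.append(observation)
    return action
```

The point being that `fear` is an ordinary integer you can log, cap, or overwrite, rather than an opaque activation inside a network.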

Is anyone aware of similar work? I created a diagram of the brain structures and their roles in a few minutes with ChatGPT, and it seems super easy.

[This comment is no longer endorsed by its author]
Hutter-Prize for Prompts
bvbvbvbvbvbvbvbvbvbvbv · 2y

This reminds me of an idea: I think it would be great to hold a bi-monthly competition where people try to do something as impressive as possible in just 30 minutes using LLMs or other AIs, with the winner decided by a small panel of judges.

How much I'm paying for AI productivity software (and the future of AI use)
bvbvbvbvbvbvbvbvbvbvbv · 9mo

Sharing my setup too:

Personally, I'm just self-hosting a bunch of stuff:

  • litellm proxy, to connect to any LLM provider (see the sketch after this list)
  • langfuse for observability
  • faster-whisper server: the v3 turbo CTranslate2 version takes about 900 MB of VRAM and transcribes roughly 10 times faster than I speak
  • open-webui: since it's connected to litellm and ollama, I avoid provider lock-in and keep all my messages on my own backend instead of having some at OpenAI, some at Anthropic, etc. It also supports artifacts and a bunch of other nice features, lets me craft my perfect prompts, and lets me jailbreak when needed.
  • piper for TTS for now, though I plan on switching to a self-hosted fish audio
  • for extra privacy, a bunch of ollama models too. Mistral Nemo seems quite capable; otherwise a few llama3, qwen2, etc.
  • for embeddings, either bge-m3 or some self-hosted jina.ai models
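For concreteness, a minimal sketch of how the pieces talk to each other: the litellm proxy exposes an OpenAI-compatible API, so any OpenAI client pointed at it works. The URL, key, and model name below are placeholders for my local setup, not something to copy verbatim:

```python
# Sketch: talk to a self-hosted litellm proxy through the standard OpenAI client.
# base_url, api_key, and model are placeholders; adjust to your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # litellm proxy's OpenAI-compatible endpoint
    api_key="sk-anything",                # whatever key your proxy is configured to accept
)

resp = client.chat.completions.create(
    model="claude-3-5-sonnet",  # the proxy routes this name to the configured provider
    messages=[{"role": "user", "content": "Hello from my self-hosted stack."}],
)
print(resp.choices[0].message.content)
```

Since open-webui and my scripts all go through the same proxy, swapping a provider is a one-line change in the proxy config rather than an edit in every client.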

I made a bunch of scripts to pipe my microphone / speaker / clipboard / LLMs together for productivity. For example, I press shift four times, speak, then press shift again, and voilà: what I said is turned into an Anki flashcard.
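Stripped of the hotkey and microphone plumbing, the flashcard pipe is roughly this; it assumes a faster-whisper server exposing the OpenAI-compatible transcription route and AnkiConnect on its default port, and the URLs, deck, and note type are placeholders from my setup:

```python
# Sketch: dictation -> local whisper server -> Anki flashcard via AnkiConnect.
# URLs, deck name, and note model are placeholders; hotkey/mic capture omitted.
import requests

WHISPER_URL = "http://localhost:8000/v1/audio/transcriptions"  # faster-whisper server
ANKI_URL = "http://localhost:8765"                             # AnkiConnect default port

def transcribe(wav_path: str) -> str:
    """Send a recorded clip to the local whisper server and return the text."""
    with open(wav_path, "rb") as f:
        r = requests.post(WHISPER_URL, files={"file": f},
                          data={"model": "large-v3-turbo"})  # model id as configured server-side
    r.raise_for_status()
    return r.json()["text"]

def add_flashcard(front: str, back: str = "") -> None:
    """Create a basic Anki note through AnkiConnect's addNote action."""
    note = {
        "deckName": "Default",
        "modelName": "Basic",
        "fields": {"Front": front, "Back": back},
        "tags": ["dictated"],
    }
    r = requests.post(ANKI_URL, json={"action": "addNote", "version": 6,
                                      "params": {"note": note}})
    r.raise_for_status()

add_flashcard(transcribe("recording.wav"))
```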

As for providers, I mostly rely on openrouter.ai, which lets me swap between providers without issue. These last few months I've been using Sonnet 3.5, but I change as soon as there's a new frontier model.

For interacting with codebases I use aider.

So in the end, all my cost comes from API calls and none from subscriptions.

What Depression Is Like
bvbvbvbvbvbvbvbvbvbvbv · 10mo

Very interesting that you think of it that way. It turns out to be very much in line with recent results from computational psychiatry. Basically, in depression we can study and distinguish how much of the lack of activity is due to "lack of resources to act" vs "increased cost of action". Both look clinically about the same, but the underlying biochemical pathways differ, so it's (IMHO) a promising approach to shortening the time it takes for a doctor to find the appropriate treatment for a given patient.

If that's something you already know, I'm sorry; I'm short on time and wanted to get this out :)

Why you should be using a retinoid
bvbvbvbvbvbvbvbvbvbvbv · 11mo

Just a detail: weren't retinoids discovered while looking for cancer treatments? I thought that was the origin story behind isotretinoin.

Notice when you stop reading right before you understand
bvbvbvbvbvbvbvbvbvbvbv · 1y

My personal solution to this is to use Anki for pretty much everything and anything.

  1. It helps me not lose momentum: if I see my cards about the beginning of an article on transformers, it increases my chance of finishing the article.
  2. It guarantees that I never get the dreaded "my knowledge is limited to a couple of related keywords ("self-attention", "encoder"/"decoder") with no real gears-level understanding of anything" feeling, making it all the easier to get back to reading.

In fact, I hate feeling number 2 so much that it was a huge motivation to really master Anki (a 1300-day streak or so, with no sign of regret whatsoever).

Clip keys together with tiny carabiners
bvbvbvbvbvbvbvbvbvbvbv · 1y

I think very cheap carabiners are extremely fragile, especially under repeated use. I've seen a failure mode where the movable gate just opens the wrong way by going around the fixed side. Keep that in mind when choosing which carabiner to use.

Might it be better to keep using ring keyholders but have one decently strong carabiner holding the rings together, instead of what you did (tiny carabiners that each hold onto a ring)?

Terminology: <something>-ware for ML?
Answer by bvbvbvbvbvbvbvbvbvbvbv · Jan 10, 2024

I don't really like any of those ideas, though I find it really interesting that "aware" is so closely related. I think the best bet would be something based on "software", so something like deepsoftware, nextsoftware, nextgenerationsoftware, enhancedsoftware, etc.

The Sequences on YouTube
bvbvbvbvbvbvbvbvbvbvbv · 2y

For anyone trying to keep up with AI for film making, I recommend the YouTube channel Curious Refuge: https://www.youtube.com/channel/UClnFtyUEaxQOCd1s5NKYGFA

Posts

  • [Paper] Trajectories through semantic spaces in schizophrenia and the relationship to ripple bursts (2y)
  • The Peril of the Great Leaks (written with ChatGPT) (2y)
  • Correcting a misconception: consciousness does not need 90 billion neurons, at all (2y)
  • bvbvbvbvbvbvbvbvbvbvbv's Shortform (4y)