LESSWRONG

wunan

Comments
Buck's Shortform
wunan · 24d

Here is Eliezer's post on this topic from 17 years ago for anyone interested: https://www.lesswrong.com/posts/3Jpchgy53D2gB5qdk/my-childhood-role-model

Anna Salamon's comment and Eliezer's reply to it are particularly relevant.

Loki zen's Shortform
wunan · 2mo

Searching for the keyword "prompt engineering" (both here and on Google) may lead you to some helpful resources. Sorry I don't have anything specific to link you to.

Habryka's Shortform Feed
wunan · 3mo

No massive advance (no GPT-5, or disappointing GPT-5)

Inversion: There was a substantial advance in frontier model AI in 2024.

Shouldn't the inversion simply be "There was a massive advance"?

wunan's Shortform
wunan · 4mo

If you have Long COVID or ME/CFS, or want to learn more about them, I highly recommend https://s4me.info. The signal-to-noise ratio is much better than on any other forum I've found for those topics, and the community is good at recognizing and critiquing low- versus high-quality studies.

As an example of the quality, this factsheet created by the community is quite good: https://s4me.info/docs/WhatIsMECFS-S4ME-Factsheet.pdf

A chess game against GPT-4
wunan · 3y

Did you and GPT-4 only output the moves, or did you also output the board state after each turn?

Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky
wunan · 3y

Unfortunately, without speaker labels, the YouTube transcript is less useful unless you're listening while reading.

Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky
wunan · 3y

Is there a transcript anywhere?

Paper: Large Language Models Can Self-improve [Linkpost]
wunan · 3y

Another similar result was that AlphaFold was trained on its own high-confidence predictions for protein sequences with unknown structures:

The AlphaFold architecture is able to train to high accuracy using only supervised learning on PDB data, but we are able to enhance accuracy (Fig. 4a) using an approach similar to noisy student self-distillation [35]. In this procedure, we use a trained network to predict the structure of around 350,000 diverse sequences from Uniclust30 [36] and make a new dataset of predicted structures filtered to a high-confidence subset. We then train the same architecture again from scratch using a mixture of PDB data and this new dataset of predicted structures as the training data, in which the various training data augmentations such as cropping and MSA subsampling make it challenging for the network to recapitulate the previously predicted structures. This self-distillation procedure makes effective use of the unlabelled sequence data and considerably improves the accuracy of the resulting network.
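The quoted procedure can be sketched in miniature. This is a hypothetical toy sketch, not AlphaFold's actual code: a nearest-centroid classifier stands in for the network, and a distance margin stands in for the model's confidence score; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y):
    """Toy 'supervised training': fit one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(model, X):
    """Predict labels; confidence = margin between two nearest centroids."""
    classes = list(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    order = np.argsort(d, axis=0)
    labels = np.array([classes[i] for i in order[0]])
    idx = np.arange(X.shape[0])
    conf = d[order[1], idx] - d[order[0], idx]   # always >= 0
    return labels, conf

# Labeled "PDB-like" data: two Gaussian clusters.
X_lab = np.concatenate([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y_lab = np.array([0] * 20 + [1] * 20)

# Unlabeled "Uniclust-like" pool from the same distribution.
X_unl = np.concatenate([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

# Step 1: train a teacher on the labeled data only.
teacher = train(X_lab, y_lab)

# Step 2: pseudo-label the unlabeled pool, keep only high-confidence predictions.
pseudo_y, conf = predict_with_confidence(teacher, X_unl)
keep = conf > np.median(conf)

# Step 3: retrain the same architecture from scratch on the mixture
# (AlphaFold also applies augmentations such as cropping / MSA subsampling here).
X_mix = np.concatenate([X_lab, X_unl[keep]])
y_mix = np.concatenate([y_lab, pseudo_y[keep]])
student = train(X_mix, y_mix)

print(len(X_lab), len(X_mix))  # mixed set is larger than the labeled set
```

The key design choice, as in the paper, is that the student is retrained from scratch on labeled data plus filtered pseudo-labels rather than fine-tuned, so the augmentations make it hard to simply memorize the teacher's outputs.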

riceissa's Shortform
wunan · 3y

I'm also dealing with chronic illness and can relate to everything you listed. I've been thinking that a Discord server specifically for people with chronic illness in the rationality community might make it easier for us to share notes and help each other. There are Discord servers for various conditions unaffiliated with the rationality community, but they tend not to have great epistemic standards and generally take a different approach than what I'm looking for. Would you have any interest in such a server?

Posts

What is the research speed multiplier of the most advanced current LLMs? [Question] (2y)
Avoiding "enlightenment" experiences while meditating for anxiety? [Question] (2y)
COVID contagiousness after negative tests? [Question] (3y)
What AI newsletters or substacks about AI do you recommend? [Question] (3y)
wunan's Shortform (4y)
OpenAI charter (7y)