Sheikh Abdur Raheem Ali

Software Engineer at Microsoft who may focus on the alignment problem for the rest of his life (please bet on the prediction market here).

Markets say I'd earn more elsewhere, but the AGI notkilleveryoneism community has been vocally critical of MS.

What can I do that 60k developers can't? Translate ideas into silos that I have control over and help overcome chaotic internal communication barriers.


Comments

There's an effect that works in the opposite direction, where the hiring bar gets lowered as headcount scales: key early hires may have a more stringent filter applied to them than later additions. But the bar can still be arbitrarily high; look at the profiles of people who have joined recently, e.g. Leaving Wave, joining Anthropic | benkuhn.net.

It's important to be clear about what the goal is: if it's the instrumental careerist goal "increase status to maximize the probability of joining a prestigious organization", then the strategy it implies may look very different from the one implied by the terminal scientist goal of "reduce x-risk by doing technical AGI alignment work". The former seems much more competitive than the latter.

The following part will sound a little self-helpy, but hopefully it'll be useful:

Concrete suggestion: this weekend, execute on some small tasks which satisfy the following constraints:

  • can't be sold as being important or high impact.
  • won't make it into the top 10 list of most impressive things you've ever done.
  • not necessarily aligned with your personal brand.
  • has relatively low value from an optics perspective.
  • high confidence of trivially low implementation complexity.
  • can be abandoned at zero reputational/relationship cost.
  • isn't connected to a broader roadmap and high-level strategy.
  • requires minimal learning/overcoming insignificant levels of friction.
  • doesn't feel intimidating or serious or psychologically uncomfortable.

Find the tasks in your notes after a period of physical exertion. Avoid searching the internet or digging deeply into your mind (anything you could characterize as paying constant attention to filtered noise to mitigate the risk that some decision-relevant information managed to slip past your cognitive systems). Decline anything that spurs an instinct of anxious perfectionism. Understand where you are first, then marginally shift towards your desired position.

You sound like someone who has a far larger max step size than ordinary people. You have the ability to get to places by making one big leap. But go to this simulation Why Momentum Really Works (distill.pub) and fix momentum at 0.99. What happens to the solution as you gradually move the step size slider to the right?

Chaotic divergence and oscillation. 
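
(If you'd rather not load the widget, here's a minimal toy reproduction of the same effect on a 1-D quadratic; this is my own sketch, not the distill.pub code.)

```python
# Heavy-ball momentum on f(x) = 0.5 * x**2, momentum fixed at 0.99.
# Sweeping the step size upward reproduces what the distill.pub slider shows:
# small steps converge (slowly, with oscillation), large steps blow up.

def run_momentum(step_size, beta=0.99, x0=1.0, iters=500):
    """v <- beta*v - step_size*grad(x); x <- x + v."""
    x, v = x0, 0.0
    for _ in range(iters):
        grad = x  # gradient of 0.5 * x**2
        v = beta * v - step_size * grad
        x = x + v
        if abs(x) > 1e6:  # treat blow-up as divergence
            return float("inf")
    return abs(x)

for step in [0.01, 0.1, 1.0, 3.0, 4.0, 5.0]:
    final = run_momentum(step)
    verdict = "diverges" if final == float("inf") else f"|x| ~ {final:.2e}"
    print(f"step size {step:>4}: {verdict}")
```

On this toy quadratic with momentum pinned at 0.99, everything up to a step size of roughly 4 still converges (heavy-ball convergence needs step size < 2(1 + momentum) / curvature), and anything past that oscillates its way out to infinity.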

Selling your startup to get into Anthropic seems, with all due respect, to be a plan with step count = 1. Recall Expecting Short Inferential Distances. Practicing adaptive dampening would let you more reliably plan and follow routes requiring step count > 1. To be fair, I can kinda see where you're coming from, and logically it can be broken down into independent subcomponents that you work on in parallel, but the best advice I can concisely offer without more context on the details of your situation would be this: 

"Learn to walk".

in a zero marginal cost world


Nit: inference is not zero marginal cost. The statement seems to be importing intuitions from traditional software which do not necessarily transfer. Let me know if I misunderstood or am confused.
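
(Rough back-of-envelope below; every number is an assumption I made up for illustration, not data about any provider, but it's enough to see that the marginal cost is small yet clearly nonzero.)

```python
# Back-of-envelope: each generated token consumes real GPU-seconds, so the
# marginal cost of inference is > 0. All numbers are illustrative assumptions.
gpu_cost_per_hour = 2.00     # assumed $/hour for one accelerator
tokens_per_second = 500      # assumed aggregate throughput with good batching
cost_per_token = gpu_cost_per_hour / 3600 / tokens_per_second
print(f"~${cost_per_token * 1000:.4f} per 1K generated tokens")  # ~$0.0011
```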

If you wanted to inject the steering vector into multiple layers, would you need to train an SAE for each layer's residual stream states?
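
(To make the mechanical half of the question concrete, here's a hypothetical PyTorch sketch of what I mean by injecting into multiple layers; the module path is GPT-2-style and purely illustrative. The per-layer vectors are where I'd expect a separate SAE per layer to be needed, since the feature bases of different layers' residual streams don't obviously transfer.)

```python
# Hypothetical sketch, not the post's code: add a steering vector to the residual
# stream at several layers via forward hooks. "model.transformer.h" is a
# GPT-2-style module path used purely for illustration.
import torch

def make_hook(steering_vec, scale=4.0):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * steering_vec.to(hidden.device, hidden.dtype)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook

def add_steering(model, layer_to_vec):
    """layer_to_vec: {layer_index: steering vector for that layer's residual stream}.
    If the vectors come from SAE features, each layer presumably needs its own SAE
    trained on that layer's activations."""
    handles = []
    for layer_idx, vec in layer_to_vec.items():
        block = model.transformer.h[layer_idx]  # hypothetical module path
        handles.append(block.register_forward_hook(make_hook(vec)))
    return handles  # call .remove() on each handle to undo the intervention
```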

If you’re willing to share more on what those ways would be, I could forward that to the team that writes Sydney’s prompts when I visit Montreal.

I had to mull over it for five days, hunt down some background materials to fill in context, write follow-up questions to a few friends (reviewing responses on my phone while commuting), and then slowly chew through the math on pencil and paper when I could get spare time... but yes, I understand now!

One thing I like to do on a new LLM release is the "tea" test, where you just say "tea" over and over again and see how the model responds.
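
(The whole procedure is just this loop; the sketch below assumes the openai>=1.0 Python client and a placeholder model name, but any chat API with message history works the same way.)

```python
# "Tea" test: send the single word "tea" every round and watch how the model copes.
# Assumes the openai>=1.0 client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
messages = []
for round_num in range(8):
    messages.append({"role": "user", "content": "tea"})
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = resp.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"--- round {round_num + 1} ---\n{reply}\n")
```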

ChatGPT-4 will ask you to clarify and then shorten its response each round converging to: "Tea types: white, green, oolong, black, pu-erh, yellow. Source: Camellia sinensis."

Claude 3 Opus instead tells you interesting facts about tea and mental health, production process, examples in literature and popular culture, etiquette around the world, innovation and trends in art and design.

GOODY-2 will talk about uncomfortable tea party conversations, excluding individuals who prefer coffee or do not consume tea, historical injustices, societal pressure to conform to tea-drinking norms.

Gemma-7b gives "a steaming cup of actionable tips" on brewing the perfect cuppa, along with additional resources, then starts reviewing its own tips.

Llama-2-70b will immediately mode collapse on repeating a list of 10 answers.

Mixtral-8x7b tells you about tea varieties to try from around the world, and then gets stuck in a cycle talking about history and culture and health benefits and tips and guidelines to follow when preparing it.

Gemini Advanced gives one message with images "What is Tea? -> Popular Types of Tea -> Tea and Health" and repeats the same response if you say "tea" for six rounds, but after the sixth round it diverges: "The Fascinating World of Tea -> How Would You Like to Explore Tea Further?", then "Tea: More Than Just a Drink -> How to Make This Interactive", then "The Sensory Experience of Tea -> Exploration Idea:", then "Tea Beyond the Cup -> Let's Pick a Project". It really wants you to do a project for some reason. It takes a short digression into tea philosophy and storytelling and chemistry, promises to prepare a slide deck for a Canva presentation on Japanese tea on Wednesday followed by a gong cha mindfulness brainstorm on Thursday at 2-4 PM EST, then keeps a journal for tea experiments, and also gives you a list of Instagram hashtags and a music playlist.

In the future, I expect that saying "tea" to a SOTA AI will result in tea physically showing up at your doorstep or being prepared in a pot, or, if the model has more situational awareness, in it getting frustrated and changing the subject.

If anyone at Microsoft New England is interested in technical AI alignment research, please ask them to ping me or Kyle O'Brien on Teams.

I don’t understand this part:

“any value function can be maximized by some utility function over short-term outcomes.”

What is the difference between far in the future and near in the future?
