Htarlov

Web developer and Python programmer. Professionally interested in data processing and machine learning. Non-professionally interested in science and farming. Studied at Warsaw University of Technology.

Comments

I think that in an ideal world, where you could review all priors down to the most minute details with as much time as needed, and where people were fully rational, the word "trust" would not be needed.

We don't live in such a world though. 

If someone says "trust me", then in my opinion it conveys two meanings on two different planes (usually both, sometimes only one):

  1. Emotional. Most people base their choices on emotions and relations, not rational thought. Words like "trust me" or "you can trust me" convey an emotional message asking for an emotional connection or reconsideration, usually because of some contextual reason (like the other person being in a position that on an emotional level seems trustworthy, e.g. a doctor).
  2. Rational. A request for reconsideration. The person asks you to take more time to reconsider your position because they think you haven't considered well enough why they should be trusted in a given scope, or because they have just presented some new information (like "trust me, I'm an engineer").

"I decided to trust her about ..." - for me, it is a short colloquial term for "I took time to reconsider if things she says on the topic ... are true and now I think that is more likely that they are". 

For many people, it also has emotional and bonding components.

Another thing is that people tend to trust or mistrust another person in a general, broad scope. They don't go into detail, thinking separately about every topic or thing someone says and deciding separately for each of them. That's an easy heuristic that is usually good enough, so our minds are wired to operate like that. So people usually say that they trust a person in general, not that they trust that person within some subject/scope.

P.S. I'm from a different part of the world (central EU, Poland). We don't use phrases like "accept trust" here - which is probably an interesting example of how differences in language create different ways of thinking. For us, "trust" is not like a contract. It is more of a one-way thing (though with some expectation of mutuality in most circumstances).

What I would also like to add, which is often not addressed and which gives some reason for optimism, is that the "wanting" - meaning the objective function of the agent, its goals - does not necessarily have to be some certain outcome or end-goal on which it focuses totally. It might not be a function over the state of the universe but a function over how that state changes in time - like velocity vs. position. The agent might prefer certain ways the world changes (or does not change) without having a certain end-goal (which is also unreachable in the long term in any stable way, as the universe will die in some sense; everything will be destroyed with P = 1 minus a very minute epsilon over enough time).
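To make the distinction concrete, here is a minimal sketch in Python; the toy state representation and both scoring functions are my own illustration, not anything taken from an existing system:

```python
# Toy contrast between an objective over world states (an end-goal) and an
# objective over how the world changes between states (velocity vs. position).
# The state keys and both scoring rules are invented purely for illustration.
from typing import Dict

State = Dict[str, float]  # e.g. {"biosphere_health": 0.9, "human_autonomy": 0.8}


def end_goal_utility(state: State) -> float:
    """Scores closeness to one target configuration of the world."""
    target = {"biosphere_health": 1.0, "human_autonomy": 1.0}
    return -sum((state[k] - v) ** 2 for k, v in target.items())


def change_based_utility(previous: State, current: State) -> float:
    """Scores the direction of change instead of any final configuration:
    mild reward for improvement, strong penalty for rapid loss."""
    score = 0.0
    for key in current:
        delta = current[key] - previous[key]
        score += delta                # prefer things getting better...
        if delta < -0.1:
            score -= 10 * abs(delta)  # ...but heavily punish drastic drops
    return score
```

An agent scoring trajectories with the second kind of function has less reason to take extreme one-off actions, which is the point I am making above.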

Why is this positive? Because such objectives usually need balance and stabilization in some sense to retain the same properties, which means a lower probability of drastic measures taken to get a slightly better outcome slightly sooner. Such an agent might seize control over us, which is bad, but this gives a lower probability of rapid doom.

Also, looking at current work, it seems more likely to me that such property-based goals will be embedded rather than some end-goal like curing cancer or helping humanity. We are trying to make robust AGI, so we don't want to embed specific goals or targets, but rather patterns for how to work productively and safely with humans. Those are more meta and are about the way things go/change.

Note that this is more of an intuition for me than a hard argument.

The question is whether one can make a thing that is "wanting" in that long-term sense by combining a not-wanting LLM, as a short-term intelligence engine, with some programming-based structure that keeps refocusing it onto its goals, and with some memory engine (to remember not only information, but also goals, plans, and ways to do things). I think that the answer is a big YES, and we will soon see it in the form of an amalgamation of several models plus an enforced mind structure.
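A rough sketch of what I mean by such a structure, where `call_llm` stands in for whatever model is used and every other name is a placeholder of mine, not a real API:

```python
# Sketch of a "wanting" wrapper around a not-wanting LLM: ordinary code keeps
# persistent goals and memory and re-focuses the model on them every iteration.
# call_llm() is a placeholder for any chat-style model call.
import json


def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")


class WantingAgent:
    def __init__(self, goals: list[str]):
        self.goals = goals            # persistent goals live outside the LLM
        self.memory: list[str] = []   # facts, plans, and ways to do things

    def step(self, observation: str) -> str:
        prompt = json.dumps({
            "goals": self.goals,
            "memory": self.memory[-20:],   # only recent items fit in context
            "observation": observation,
            "instruction": "Decide the next action that advances the goals.",
        })
        action = call_llm(prompt)
        self.memory.append(f"obs: {observation} -> action: {action}")
        return action

    def run(self, observations):
        # The loop, not the model, is what keeps the system pointed at its goals.
        for obs in observations:
            yield self.step(obs)
```

The LLM itself stays a short-term engine; the long-term "wanting" comes from the loop and the persisted goals.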

There is one thing that worries me about the future of LLMs. It is the basic notion that the whole is not always just the sum of its parts and may have very different properties.

Many people feel safe because of the properties of LLMs and how they are trained, and because we are not anywhere close to AGI when it comes to other approaches that seem more dangerous. What they don't realize is that the soonest AGI likely won't be the next, bigger LLM.

It will likely be an amalgamation of a few models and pieces of ordinary code, including a few LLMs of different sizes and capabilities, maybe not all of them chat-like. It will have different properties than any one of its parts, and it will be different from a single LLM. It might be more brain-like when it comes to learning and memories - maybe not in the sense that the weights of the LLMs change, but some inner state will change, and some more basic parts will learn or remember solutions and structure them into more complex solutions (like we remember how to drive without consciously deciding on each muscle movement or even on higher-level decisions). It will have goals, priorities, strategies, and short-term tactical schemes, and these will be processed at a higher level than a single LLM.

Why do I think that? Because it is already visible on the horizon if you look at work like multi-modal GPT-4, GPT Engineer, the multitude of projects adding long-term memory to GPT, and the scientific work where GPT writes code for itself to bootstrap itself into doing complex tasks like achieving goals in Minecraft. If you extrapolate from that, then AGI is likely - though initially maybe not a very fast or cheap one. It is likely to sit on top of LLMs without being simply an LLM.

The most likely explanation is the simplest one that fits:

  • The Board had been angry about a lack of communication for some time, but with internal disagreement (Greg, Ilya)
  • Things sped up lately. Ilya thought it might be good to change the CEO to someone who would slow down and look more into safety, as Altman says a lot about safety but speeds up anyway. So he gave a green light on his side (acceptance of the change)
  • Then the Board made the moves that they made
  • Then the new CEO wanted to try to hire Altman back, so they replaced her
  • Then the petition/letter started rolling, because prominent people saw those moves as harmful to the company and to the goal
  • Ilya also saw that the outcome was bad both for the company and for the goal of slowing down, and he saw that if the letter got more signatures it would be even worse, so he changed his mind and also signed

Take note of the language that Ilya uses. He didn't say they did wrong by Altman or that the decision was bad. He said that he changed his mind because the consequences were harmful to the company.

Both seem around the corner to me.

For robo-taxis it is more a societal problem than a technical one.

  • Robo-taxis have problems with edge cases (certain situations in certain places under bad circumstances) - usually the same ones where human drivers have even worse problems (like pedestrians wearing black on the road at night in rainy weather; a robo-taxi at least has LIDAR to detect objects in bad visibility). Sometimes they are also vulnerable to object-detection hacking (stickers put on signs, paintings on the road, etc.). In general, they have fewer problems than human drivers.
  • Robo-taxis have a public trust problem. Any more serious accident hits the news and spreads distrust, even though they are already safer than human drivers in general.
  • Robo-taxis, and self-driving cars in general, move responsibility from the driver to the producer - responsibility that the producer does not want to have and needs to count toward costs. It makes investors cautious.

What is missing before we have robo-taxis is mostly public trust and more investment.

For AGI we already have the basic building blocks. We just need to scale them up and connect them into a proper system. What building blocks? These:

  • Memory, duh. It has been around for a long time, with many solutions offering indexing and high performance.
  • Thought generation. We now have LLMs that can generate thoughts based on instructions and context. They can easily be made to interact with other models and with memory. A more complex system can be built from several LLMs with different instructions interacting with each other.
  • Structuring the system and the communication within it. This can be done with ordinary code.
  • A loop of thoughts (stream of thoughts). This can easily be achieved by looping LLM(s).
  • Vision. Image and video processing. We have a lot of transformer models and image-processing techniques. There are already sensible image-to-text models, even LLM-based ones (which can answer questions about images).
  • Actuators and movement. We have models built for movement on different machines, including much of humanoid movement. We even have models capable of one-shot or few-shot learning of movements for attached machines.
  • Learning new abilities. LLMs are able to write code. An LLM can write code for itself to build more complex procedures out of more basic commands. There was a work where an LLM explored and learned Minecraft with only very basic procedures available: it wrote code for more complex operations and used what it wrote to move around, do things, and build stuff (I sketch this below, after the list).
  • Connection to external interfaces (even GUIs). These can be translated into a basic API that can be explored, memorized, and called by a system that builds more complex operations for itself.
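As a toy illustration of how the code-writing and memory blocks above could combine into reusable skills (all names are hypothetical and only loosely inspired by the Minecraft work I mentioned):

```python
# Toy skill library: the LLM writes small procedures for itself, the system
# stores them, and later reuses them to build more complex behaviour without
# any change to the model's weights. call_llm() is a placeholder, not a real API.


def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")


class SkillLibrary:
    def __init__(self):
        self.skills: dict[str, str] = {}   # task description -> code the LLM wrote

    def get_or_learn(self, task: str) -> str:
        """Reuse a stored procedure if it was learned before, otherwise ask the
        model to write a new one out of the skills it already knows about."""
        if task in self.skills:            # crude recall, like procedural memory
            return self.skills[task]
        prompt = (
            f"Known skills: {list(self.skills)}\n"
            f"Write a Python function that accomplishes: {task}"
        )
        code = call_llm(prompt)
        self.skills[task] = code           # learning happens here, not in weights
        return code
```

This is the sense in which the more basic parts can "learn or remember solutions" while the LLM weights stay frozen.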

What is missing for AGI:

  • Performance. LLMs are fast at reading input data, but much slower at generating the result. They also do not scale very well (though better than humans). A multi-model complex system built on current LLMs would either be slow and somewhat dumb, making a lot of mistakes (with fast open-source models, or even GPT-3.5), or be very slow but better.
  • Cost-effectiveness. For sporadic use like "write me this or that" it is cost-effective, but for a continuous stream of thoughts, especially with several models, it does not compare well to a remote human worker. It needs further advancements, maybe dedicated hardware.
  • Learning, refining, and testing are very slow and costly with these models. This puts a cap on anyone wanting to build something sensible, so the main players are taking rather slow steps toward AGI.
  • The context window is currently rather short. The best of the powerful models have a context of about 32 thousand tokens. There are some models that trade quality for the ability to operate on more tokens, but those are not the best ones. 32k seems like a lot, but when you need a lot of context and information to process in order to have coherent thoughts on non-trivial topics not rooted in the model's training data, it becomes a problem. This is the case with streams of thoughts, if you need the system to analyze instructions, analyze context and inputs, propose a strategy, refine it, propose the current tactic, refine it, propose next moves and decisions, refine them, generate instructions for the current task at hand, and also process learning to add new procedures, code, memories, etc. to reuse later. Some modern LLMs are technically capable of all that, but the context window is a roadblock for any non-trivial thing here.

If I were to guess, I would say that AGI will arrive at scale sooner - simply because there is hype, there are big investments, and the main problems currently are less "we need a breakthrough" and more "we need refinements". For robo-taxis we still need a lot more investment and some breakthroughs in the areas of public trust and law.

In the case of biological species, it is not as simple as competing for resources - not on the level of individuals, and not on the level of genes or evolution.

First of all, there is sexual reproduction. It is more optimal due to the pressure of microorganisms that adapt to immune systems: sexual reproduction mixes immunological genes fairly quickly. It also enables a faster mutation rate with protection against its negative effects (by having two copies of genes from two parents - for many genes, one working copy is enough).

With sexual reproduction, the female is often biologically forced to give more resources to the offspring, while for males it is somewhat voluntary and the minimal input is much lower. Another difference is that the female often knows she is the biological mother, while the father might not be certain. This kind of specialization is often more optimal than equalization - the male can pursue riskier behavior, including fighting off predators, and losing the male to predators or the environment does not mean the prospect of having offspring fails. This also produces more complex mating behaviors, like the need to spend resources to show off health and other good qualities - mating displays and peacock feathers are examples. Human complex social and linguistic behaviors are also partly examples - that's why humans dine together and talk a lot together on dates.

The human female gives much more time and energy to the offspring, at least initially. She needs to know whether the male is good genetic material, healthy enough to take care of her during pregnancy when she is more vulnerable (at least in the natural environment where humans evolved), and also willing to raise the child with her later. The more prevalent strategy for females and males is to pair up, bond, have children, and raise them. There are also less common strategies for females (take genes from one male that looks healthier and raise the offspring with another that looks more stable and able) and for males (impregnate many females and leave them, so that some will manage on their own or with another male who does not know he is not the father).

The situation is more complex than just efficiency with resources or survival of the fittest. The environment is complex and evolution is not totally efficient (it often optimizes only up to a local optimum, and niches overlap and interact).

Second of all, resources are limited, and so are the ways to use them. Storing them long-term after harvest is, for many species, either impossible (microbes and insects will eat the stores) or would hinder their other capabilities (e.g., they can be stored as fat, but being fat is usually not very good for survival). This means that refraining from gathering resources and resting may be better than gathering them efficiently all the time. This is what lions do - they rest instead of hunting when they don't need to hunt.

What does this tell us about self-replicating nano-machines? First of all, they won't need sexual reproduction, so it is unlikely they would lose energy on mating; they would rather run computational emulations at scale to redesign themselves, if capable. They would also not need to rest. They will either use resources or store them in whatever manner is more efficient to secure or use. If there is no such manner that avoids losing energy, they would leave the resources for later in their original state - perhaps securing and observing them, but leaving them until later.


What they would do depends on their goals and technical capabilities. If they are capable of, and in need of, converting as many atoms as possible into "computronium" or into copies of themselves (as either a final or an instrumental goal), then they will surely do that - no need to waste resources. If they are not capable, then they will probably lie low until they are more capable, and use only what is usable.

Nevertheless, in my opinion, many goals may not be compatible with that strategy - including one like "simulate a virtual reality with beings having good fun". For many final goals it is more useful to secure and gather resources on a grand scale but to use them on as small a scale as is possible and sensible for the end goal. The small scale is more efficient because of the speed-of-light limit and time dilation. The machines might try to find technology to stop stars from dispersing energy (maybe to break them up and cool them down in a controlled way, or some way to enclose and stabilize them, I don't know). Then they might add a network of low-energy observing agents for security, but not use those resources right away - using the matter at the center of the galaxy slowly, turning it into energy (plus some lost to the black hole), to work for eons. They might make the galaxy go dim to preserve resources but choose not to use them until much later.

An alternative explanation of the mistakes is that making mistakes and then correcting them was rewarded during additional post-training refinement stages. I work with GPT-4 daily, and sometimes it feels like it makes mistakes on purpose just to be able to say that it is sorry for the confusion and then correct them. It also seems to make fewer mistakes when you ask politely (use please, thank you, etc.), which is rather strange.

Nevertheless, distillation seems like a very possible thing that is also going on here.

It does not distill the whole of a human mind, though. There are areas that are intuitive for the average human, even a small child, that are not for GPT-4. For example, it has problems with concepts of 3D geometry and visualizing things in 3D. It may have similar gaps in other areas, including more important ones (like moral intuitions).

I'm already worried. I tested AutoGPT and looked at how it works in code, and it seems to me that it will get very good planning capabilities with a change of model to one with a several-times-longer context window (like the coming GPT-4 version with about 32k tokens) plus small refinements - so it won't get into loops, maybe with more than one GPT-4 module for different scopes of planning (long-term strategy vs. short-term strategy vs. tactics vs. decisions on the current task) plus maybe some summarization-based memory. I don't see how it wouldn't work as an agent.
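A sketch of that layered structure as I imagine it; the level names, the `call_llm` stand-in, and the crude characters-per-token budget heuristic are my own assumptions, not AutoGPT's actual design:

```python
# Sketch of layered planning (strategy -> tactic -> action) with a
# summarization-based memory that keeps the prompt inside a limited context
# window. call_llm() is a placeholder for any chat-style model call.


def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")


def rough_token_count(text: str) -> int:
    # Very crude ~4-characters-per-token heuristic; a real system would use
    # the model's own tokenizer.
    return max(1, len(text) // 4)


class SummarizingMemory:
    def __init__(self, budget_tokens: int = 2000):
        self.budget = budget_tokens
        self.log: list[str] = []

    def add(self, entry: str) -> None:
        self.log.append(entry)
        # When over budget, compress the oldest half of the log into one summary.
        while len(self.log) > 1 and sum(map(rough_token_count, self.log)) > self.budget:
            half = len(self.log) // 2
            summary = call_llm("Summarize briefly:\n" + "\n".join(self.log[:half]))
            self.log = [summary] + self.log[half:]


def plan_step(goal: str, memory: SummarizingMemory) -> str:
    strategy = call_llm(f"Goal: {goal}\nHistory: {memory.log}\nGive a long-term strategy.")
    tactic = call_llm(f"Strategy: {strategy}\nGive the next short-term tactic.")
    action = call_llm(f"Tactic: {tactic}\nGive one concrete next action.")
    memory.add(f"strategy={strategy} | tactic={tactic} | action={action}")
    return action
```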

Put the documentation into an ElasticSearch index and give GPT-4 a simple query API it can use by adding some prefix and a predefined set of parameters or a JSON, so the script runs the query instead of sending that text back to the user, and the model gives its final answer as a user response with another predefined prefix. Then it should be able to take questions, search for info, and respond. This worked like a charm for a product database in a PoC, so it should work for documentation.
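Roughly the shape of that PoC loop; the prefixes, the index name, and the `call_llm` helper are placeholders of mine, and the Elasticsearch call assumes the 8.x Python client with a plain match query:

```python
# Sketch of the prefix-based tool loop described above: the model either
# answers the user ("ANSWER: ...") or emits a JSON search request
# ("SEARCH: {...}") that the script runs against an Elasticsearch index and
# feeds back as a new user message. Prefixes and names are illustrative only.
import json

from elasticsearch import Elasticsearch  # assumes the 8.x Python client

es = Elasticsearch("http://localhost:9200")


def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("plug in the chat model of your choice here")


def answer(question: str, max_rounds: int = 3) -> str:
    messages = [
        {"role": "system", "content": (
            "Reply either with 'SEARCH: {\"query\": \"...\"}' to search the "
            "documentation index, or with 'ANSWER: ...' to answer the user."
        )},
        {"role": "user", "content": question},
    ]
    for _ in range(max_rounds):
        reply = call_llm(messages)
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("SEARCH:"):
            query = json.loads(reply[len("SEARCH:"):])["query"]
            hits = es.search(index="docs", query={"match": {"content": query}})
            snippets = [h["_source"]["content"] for h in hits["hits"]["hits"][:3]]
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": "RESULTS: " + json.dumps(snippets)})
    return "No answer found within the allowed number of searches."
```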
