Sunflower
03/20/2024 (Wed) 23:15
Id: 66d629
No.6775
del
>>6774
That's because in both cases it's almost always close to one model. Most AI pictures you see are either Midjourney or DALL-E, and DALL-E especially has that AI effect. Midjourney is partly trained by having users give positive feedback to aesthetic pictures, which gives it that slightly over-aesthetic feel your brain can sense. One good thing about Stable Diffusion is the spread of many different sub-models, which can offer more variety than the single-model approach.
Chatbots from sites usually either use one model or a few you can pick from. Even with local models, most are derived from only a few "base" models; a lot of them are based on Llama 2. They are also often trained on AI output themselves. That's why, when GPT-3 and then GPT-4 started getting released, a lot of models had this "GPT-like" quality even as they improved, for example hammering on about morality. The newest big model, Claude 3, runs counter to this: it's not based on the same training data and has its own style, apparently more natural. That may mean local models will improve as well, since cheap models use GPT-4 output to train themselves, and since Claude 3 is somewhat less AI-like it will be harder to recognize these newer models as "Claude-like", though still possible.