Wilshire@lemmy.world to Technology@lemmy.world · English · 3 months ago
The first GPT-4-class AI model anyone can download has arrived: Llama 405B (arstechnica.com)
cross-posted to: tech@programming.dev
Blaster M@lemmy.world · English · 3 months ago
As a general rule of thumb, you need about 1 GB per 1B parameters, so you’re looking at about 405 GB for the full size of the model.
Quantization can compress it down to 1/2 or 1/4 that, but “makes it stupider” as a result.
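For anyone who wants to plug in their own numbers, here is a minimal sketch of that rule of thumb in Python. The function name and the 1/2 and 1/4 compression fractions mirror the comment above; they are illustrative assumptions, not figures from any particular quantization toolkit.

```python
# Back-of-the-envelope memory estimate using the "~1 GB per 1B parameters"
# rule of thumb from the comment above. Purely illustrative.

def estimate_model_size_gb(params_billions: float, quant_fraction: float = 1.0) -> float:
    """Estimate model memory footprint in GB.

    params_billions: parameter count in billions (e.g. 405 for Llama 405B).
    quant_fraction:  1.0 for the unquantized baseline; 0.5 or 0.25 for the
                     "1/2 or 1/4" compression mentioned in the comment.
    """
    return params_billions * 1.0 * quant_fraction  # ~1 GB per 1B params


if __name__ == "__main__":
    for label, frac in [("full size", 1.0), ("1/2 (quantized)", 0.5), ("1/4 (quantized)", 0.25)]:
        print(f"Llama 405B at {label}: ~{estimate_model_size_gb(405, frac):.0f} GB")
```

Running it prints roughly 405 GB, 202 GB, and 101 GB, matching the estimates in the comment.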