
REFLECTION Llama3.1 70b Tested on Ollama Home Ai Server – Best Ai LLM?


“Digital Spaceport”

The buzz about the new Reflection Llama 3.1 fine-tune being the world's best LLM is all over the place, and today I am testing it out with you on the Home AI Server that we put together recently. It is running in Docker inside an LXC on Proxmox, on my Epyc quad-3090 GPU AI rig. Ollama got out support for…
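For readers who want to reproduce a setup like this, the general pattern is to run the official Ollama container with GPU passthrough and then pull the model inside it. This is a minimal sketch under stated assumptions: the container name, volume name, and the `reflection:70b` model tag are illustrative, so check the Ollama model library for the exact tag before running.

```shell
# Start the Ollama server container with access to all NVIDIA GPUs.
# Persists models in a named volume and exposes the default API port.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# Pull and chat with the Reflection fine-tune (tag is an assumption;
# Ollama serves a quantized build by default).
docker exec -it ollama ollama run reflection:70b
```

Inside a Proxmox LXC, the container additionally needs the host's NVIDIA devices and driver libraries passed through to the guest (e.g. via the NVIDIA Container Toolkit) before `--gpus=all` will work.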

source




12 Comments

  1. Ask it this:
    A fisher and his two domesticated wolves want to cross a river in a large boat to get to their herd of goats. How many times do they need to cross the river to get to the goats without any animals getting eaten? Think carefully.

  2. Reflection sucks, actually. But the big idiots at FAANG and other startups better funded than a one-man army now at least understand that CoT and adversarial CoT within a MoE (reading the future off my mysterious crystal ball with that last statement) will give them a revolutionary product. Sad thing: pioneers just die off in silence unless they are caught by sharks. Hope Otherside (Matt's org on HF) does well.

  3. Thanks for the detailed video so quickly after this release! FYI, I saw an interview with the creators of Reflection Llama on the Matthew Berman YouTube channel and they mentioned that it doesn’t always use reflection… only with questions that are hard enough to warrant it. Also, they mentioned that they hadn’t tested how well the quantized versions would perform with <think> and <reflect>. They did zero testing with the quantized version, so you are exploring uncharted territory 👍

  4. Wow! The quality of your videos, the visual effects, and the presentation are incredibly beautiful and very interesting. Thank you, and have a good weekend. Greetings from Brazil, BR.
