Which model is best for running large language models (LLMs) locally — ideally with 96+ GB of unified RAM, and stackable for maximum speed (say, using four of them)?
I saw Alex Ziskind put 96 GB of RAM in a tiny mini PC and run the Llama 70B LLM on it. Is there a better model to use today?
Thanks in advance.
Hi,
Given how AMD CPUs have lately outperformed comparable Intel parts, and that the more RAM you can install in a mini PC, the larger the LLM you can run, I would suggest the K11 model, which was confirmed to support up to 128 GB of RAM in this test installing VMware ESXi:
https://williamlam.com/2025/03/esxi-on-gmktec-nucbox-k11.html
You can buy it here:
https://www.gmktec.com/products/amd-ryzen%E2%84%A2-9-8945hs-nucbox-k11?cfb=c01da7f0-5637-460d-88b7-77019466dfc1
Note that this unit is not UMA: the graphics chip does not share system RAM dynamically, but has a maximum of 16 GB allocated as VRAM, so the full 128 GB cannot be used as VRAM.
I would argue that the best unit for running LLMs would have UMA — memory shared up to the full amount, 128 GB in this case — plus a very fast interconnect such as a Thunderbolt port.
What do you have to say about this situation?