Heimao didn't take the opportunity to measure the 16-inch Mac's wall power draw. I saw someone on reddit measure this M4 Max 16-inch version hitting 163W, and as high as 190W... but not sustained. The original post was running a large model in LM Studio; I'll copy the original reddit exchange here: "Kinda expected more, but in a laptop that's still quite impressive. Does that say 163 watts though..? Am I reading it wrong?" "no, you’re reading it correctly, that’s system total power, highest I saw as 190W 😬, while powermetrics report GPU at 70W, very dodgy apple. I hope they don’t make another i9 situation in the next few years. 🤞"
@Wayne-sn6qy 3 days ago
And the reddit poster's description of the temperatures: “During inference, GPU temp stays around 110C, then throttles to keep at 110C, and then fan will start to get loud and it just use whatever GPU frequency that can maintain 110C. I guess high power mode is setting a more aggressive fan curve. After inference, usually before I can finish reading and send prompt again (1-3min), the fan will just drop to min speed. I'm testing Qwen coder autocomplete right now, and with 3B model, generated code basically appear in less than a second, then I have to pause and read what it generated, so I guess not much sustained load, and fan is at min speed still... quite impressive.”