3990X Titan RTX 400k-class lab office machine - 3C
By Poppy
at 2020-05-08T19:00
A reminder before the main text:
for public-sector researchers about to procure machines,
if your lab has a server-room rack that can hold rack-mount systems,
you should first consider this year's government e-procurement inter-entity supply contract
LP5-108036, open for ordering until 2021/04/30.
Not only does it offer Epyc:
1U single-socket Epyc 16-core/32GB 134,185
1U single-socket Epyc 32-core/64GB 187,433
1U single-socket Epyc 64-core/128GB 297,125
2U single-socket Epyc 16-core/32GB 140,575
2U single-socket Epyc 32-core/64GB 209,798
2U single-socket Epyc 64-core/128GB 312,034
(but no dual-socket Epyc; no idea why)
There are also high-density 2U 4-node systems,
each node dual Xeon Silver/8c/3.2GHz single-core/2.5GHz all-core/32GB
403,621
and, for the first time in many years, quad-socket machines:
2U quad Xeon Gold/ 8c/3.9GHz single-core/2.8GHz all-core/128GB 376,556
3U quad Xeon Gold/20c/3.9GHz single-core/2.8GHz all-core/128GB 475,930
This year the contract even includes GPU servers,
so there is finally no need to run a tender and convene an evaluation committee:
2x RTX6000 24GB VRAM/256GB/dual-socket 16C 585,729
4x RTX8000 48GB VRAM/512GB/dual-socket high-frequency 8C or 20C 1,331,203
4x V100 32GB VRAM/512GB/dual-socket high-frequency 8C or 20C 1,735,889
===
This post shows roughly where the limit of a self-built machine lies.
If you are not familiar with building PCs,
please really go through a vendor,
or buy the supply-contract laptops/desktops/tower workstations/rack servers;
the procurement process is also much simpler.
(You still need to check whether the contract items fit your needs, though.
Laptops are fine, since SSDs have become standard across the board in recent years,
but pitifully few desktop line items come with an SSD.
An i7 paired with a mechanical drive for office work
means the browser and Office stutter until you can hardly breathe,
and if you have to retrofit parts you lose the convenience of that procurement route.
For tower workstations and rack servers, if the spec falls short you can add factory options,
and at least they are brand-name machines: vendor-validated, well warranted, and someone is accountable when things go wrong.)
Especially for workstations or servers that must have ECC/RAID, stay up without crashing, and keep data correct:
do not build your own.
This post only compares performance; there is no ECC, RAID, IPMI remote management, or 10/25/40/100Gbps NIC.
Memory is also only 256GB, which for 64 cores works out to just 4GB per core,
and quad-channel memory is a bottleneck for some workloads.
(If you see other reviews where the 3900X ties the 3950X, or the 3970X ties the 3990X,
on memory-bandwidth-bound items, this is why; a rough peak-bandwidth figure follows.)
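For scale, a back-of-envelope peak number, assuming quad-channel DDR4-3200 (a calculation, not a measurement):

# Rough theoretical peak memory bandwidth for quad-channel DDR4-3200
channels, transfers_per_s, bytes_per_transfer = 4, 3200e6, 8
print(channels * transfers_per_s * bytes_per_transfer / 1e9, "GB/s")  # ~102.4 GB/s, shared by all 64 cores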
Basically this is a machine for one person's parallel computing; it cannot take piles of jobs from many users.
If you don't need single-core performance, a white-box single-socket Epyc build costs about the same.
So take these test results with a grain of salt.
Results from some older servers on hand are also included, for anyone planning an upgrade.
For test-software details see #1UjJiMol (PC_Shopping).
===
Test hardware
AMD Ryzen Threadripper 3990X
ENERMAX LIQTECH TR4 II 360 (top-mounted radiator, fans exhausting out of the case)
ASUS PRIME TRX40-PRO
8x Kingston KVR32N22D8/32
2x NVIDIA TITAN RTX
TITAN RTX NVLINK BRIDGE
Intel Optane 900P 480GB
FSP CANNON 2000W
Apexgaming Hermes C2
2x Thermalright TY-143 SQ (front intake)
(A few caveats about this combination:
1. The screws bundled with the VRM fan bracket only fit slim fans; they are too short for a 20mm-thick fan (the manual doesn't say this).
2. In this case the top-mounted radiator fans collide with the VRM heatsink; skewed slightly they can just barely be screwed down.
3. Because the VRM heatsink collides with the radiator, the VRM fan bracket cannot actually be mounted.
4. With the 3-slot bridge, only one chipset-attached PCIe x4 slot remains, and it hurts GPU cooling.
5. With the 4-slot bridge, one low-profile CPU-attached PCIe x16 remains,
but the TITAN RTX's fans block all the front-audio, USB, fan, and front-panel headers,
and in this case they also block the power-cable cutout.
In a 7-slot case (like this one) the lower card gets no air;
the dual-fan RTX reference cooler really needs one free slot.
Even mounted the way the photo in the #1UcfVWN9 comments shows,
under heavy load it simply overheats and throttles until barely functional (around ?00MHz left).
This is also why there is no 2-slot GeForce NVLink bridge.)
BIOS version and settings
ASUS PRIME TRX40-PRO 0902
PBO manual
PPT 1000W
TDC 1000A
EDC 1000A
CPU radiator fans: temperature source CPU
Pump: full speed
Front top fan: temperature source VRM
Front bottom fan: temperature source PCH
Rear fan: temperature source PCH
Fan curve: 20°C 20%, 65°C 70%, 70°C 100%
Everything else at defaults
DDR4-3200 (22-22-22) 1.2V
In addition,
nvidia-smi -pm 1
nvidia-smi -pl 320
were used to raise the TITAN RTX power limit to 320W.
OS
Ubuntu Server 20.04 LTS kernel 5.4.0-26
CUDA driver 440.64
Frequency, temperature, and power draw
3990x
temperature read with sensors
frequency and wattage read with turbostat
TITAN RTX
temperature, frequency, and wattage read with nvidia-smi
Idle
3990x+TITAN RTX
CPU 2200MHz 35°C 36W
GPU 300MHz 33°C 14W
Power strip 111W
Prime95 Version 29.8 build 6
Small FFTs(L1/L2/L3)
3990x sse2
1 second
CPU 3896MHz 75.4°C 657W
Power strip 1027W
1 minute
CPU 3503MHz 86.0°C 486W
Power strip 748W
https://youtu.be/u3f6RF38rnM
3990x fma3
1 second
CPU 3538MHz 80.8°C 675W
Power strip 987W
1 minute
CPU 3337MHz 93.8°C 522W
Power strip 848W
https://youtu.be/TDqVbTaJ_jI
1xGPU tensorflow resnet50 training fp16 batch128
1xTITAN RTX
1 second
GPU 1905MHz 47°C 299W
Power strip 557W
1 minute
GPU 1860MHz 70°C 280W
Power strip 494W
https://youtu.be/yfBuosZqKDw
p95+tensorflow
3990x fma3+2xTITAN RTX
Power strip 1494~1287W
https://youtu.be/fKHs8-pbdbM
IO test
| 3990x+900P CPU| 3990x+900P PCH|3990x+sx8200pro cpu
1MSeqQ8T1r|2441MB/s |2433MB/s |2782MB/s
1MSeqQ8T1w|2236MB/s |2231MB/s |2835MB/s
1MSeqQ1T1r|2449MB/s |2435MB/s |2764MB/s
1MSeqQ1T1w|2218MB/s |2220MB/s |2817MB/s
4kQ32T16r |2386MB/s(583k) |2387MB/s(583k) | 696MB/s(170k)
4kQ32T16w |2439MB/s(595k) |2407MB/s(588k) |1469MB/s(359k)
4kQ1T1r | 291MB/s(71.1k)| 268MB/s(65.3k)|79.1MB/s(19.3k)
4kQ1T1w | 217MB/s(52.9k)| 204MB/s(49.9k)| 209MB/s(50.9k)
Comparison server specs
===
Nehalem
4x Intel Xeon X7550
8C16T/2.4GHz single-core/2.13GHz all-core
p95sse2 2.066GHz
64x 16GB DDR3-1066 4R ECC RDIMM
Ubuntu Server 16.04.6 LTS kernel 4.4.0-177
===
SandyBridge
2x Intel Xeon E5-2690
8C16T/3.8GHz single-core/3.3GHz all-core
p95avx 3.2GHz
24x 16GB DDR3-1066 2R ECC RDIMM
Ubuntu Server 16.04.6 LTS kernel 4.4.0-177
===
DGX Station
1x Intel Xeon E5-2698v4
20C40T/2.7GHz single-core/2.7GHz all-core
p95avx2 2.6GHz
8x 32GB DDR4-2400 2R ECC RDIMM
4x V100 32GB 300W
DGX OS Desktop 4.0.7 kernel 4.15.0-96
CUDA driver 410.129
===
Skylake
2x Intel Xeon Gold 6148
20C40T/3.7GHz single-core/3.1GHz all-core
p95avx512 1.9GHz
24x 16GB DDR4-2666 1R ECC RDIMM
1x V100 32GB 250W
Ubuntu Server 18.04.4 LTS kernel 4.15.0-96
CUDA driver 440.64
===
CascadeLake
2x Intel Xeon Gold 6248
20C40T/3.9GHz single-core/3.2GHz all-core
p95avx512 2.1GHz
24x 32GB DDR4-2933 2R ECC RDIMM
Ubuntu Server 18.04.4 LTS kernel 4.15.0-96
===
National Center for High-performance Computing
Taiwania 2
TWCC
2x Xeon Gold 6154
18C18T
(guess: locked at 3.0GHz, no Turbo, no idle downclocking, no AVX throttling?)
24x 32GB DDR4-2666 2R ECC UDIMM
8x V100 32GB
Red Hat Enterprise Linux 7.5.1804 kernel 3.10.0
CUDA driver 418.87
In actual use everything runs inside a container,
with resources limited by container type:
GPU count:            1     2     4     8
CPU usage limit (%):  400%  800%  1600% 3200%
RAM limit (GB):       90GB  180GB 360GB 720GB
This test used the 8-GPU type.
===
CPU theoretical performance test
| 128-bit SSE2 | 256-bit AVX | 256-bit FMA3
| Multiply + Add | Multiply + Add | Fused Multiply Add
| 1T | nT | 1T | nT | 1T | nT
3990x| 42.816 | 4009.97 | 84.672 | 7203.17 | 138.816 | 8012.35
Nehalem| 15.936 | 325.584|
SandyBridge| 28.416 | 419.376| 49.824 | 813.696|
DGX Station| 21.552 | 432 | 41.28 | 832.416| 82.56 | 1664.83
Skylake| 22.704 | 991.44 | 44.832 | 1665.89 | 89.664 | 3323.9
CascadeLake| 30.096 | 1023.89 | 59.52 | 1789.54 | 119.232 | 3579.26
TWCC | 28.8 | 919.632| 55.2 | 1669.34 | 108.288 | 3343.3
| 512-bit AVX512
| Fused Multiply Add
| 1T | nT
Skylake| 192 | 5641.73
CascadeLake| 238.08 | 6396.67
TWCC | 209.664| 5481.98
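A quick sanity check on what these numbers likely are (my assumption: single-precision GFLOPS). Zen 2 has two 256-bit FMA pipes per core, i.e. 32 FP32 FLOPs per cycle, which at the 3990X's roughly 4.3GHz single-core boost lands right on the 1T FMA3 figure above:

# Hedged back-of-envelope check, assuming the table reports FP32 GFLOPS
fp32_flops_per_cycle = 2 * 8 * 2          # 2 FMA pipes x 8 FP32 lanes x (multiply + add)
boost_ghz = 4.3                           # assumed single-core boost clock
print(fp32_flops_per_cycle * boost_ghz)   # ~137.6, close to the measured 138.816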
(On why the Gold 6148 box's single-thread scores are low:
from observation, Turbo seems to respond slowly,
so the program finishes before the clock has ramped up,
and the clock climbs gradually instead of jumping straight to the top like the other machines.
Does anyone know what determines how fast Turbo Boost reacts?
I'm not sure whether it's a white-box server motherboard issue or a UEFI/OS setting problem.)
CPU compute performance test
All the Intel machines use the MKL build
|Cholesky|Det | Dot |Fft |Inv |Lu |Qr |Svd
3990x pip | 606.62| 350.05| 748.52|4.92|285.76|479.42|124.15|11.14
3990x mkl | 1119.49|1074.78| 971.88|5.03|214.65|888.52|440.20|34.16
debug mkl | 1268.56|1023.48|1205.16|5.05|712.24|799.41|475.49|43.18
Nehalem | 178.54| 199.17| 105.35|1.20|125.58|161.51| 81.88| 5.12
SandyBridge | 282.10| 318.30| 286.69|3.65|272.92|260.07|151.88| 7.19
DGX Station | 563.56| 705.35| 689.77|3.20|538.82|518.52|239.39|13.82
Skylake | 725.24|1054.83|1245.51|3.38|755.73|721.35|297.93|18.36
CascadeLake | 1139.19|1582.38|1369.20|3.58|878.06|789.13|335.06|19.10
TWCC | 1101.03|1446.08|1133.23|3.97|812.55|711.94|287.07|14.01
Since these results are rather dismal and don't show 64 cores sweeping the field,
waving them off with one line about quad-channel congestion would be too lazy.
So, continuing the tradition that AMD needs the 'debug' treatment (presumably the MKL_DEBUG_CPU_TYPE=5 workaround behind the 'debug mkl' rows),
extra runs with cores disabled were added,
using MKL_NUM_THREADS to set the core count (a sketch of the setup follows the table).
|Cholesky|Det | Dot |Fft |Inv |Lu |Qr |Svd
48c+debug | 1406.96|1081.58|1274.44|4.99|738.84|821.23|492.49|45.97
32c+debug | 1399.11| 981.86|1208.92|6.04|760.53|769.36|502.57|48.60
24c+debug | 1142.76|1023.80|1182.79|6.09|809.32|791.94|483.72|45.80
16c+debug | 823.99| 880.87| 872.70|6.10|658.55|709.48|411.23|43.35
8c+debug | 452.84| 445.21| 451.89|5.96|372.54|400.83|268.13|22.70
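For reference, a minimal sketch of how such a limited-thread run can be set up; the actual benchmark script is in #1UjJiMol, and the matrix size and the MKL_DEBUG_CPU_TYPE line here are my assumptions:

import os
os.environ["MKL_NUM_THREADS"] = "32"      # cap MKL's thread count, as in the 32c row
os.environ["MKL_DEBUG_CPU_TYPE"] = "5"    # the 'debug' workaround on AMD (assumption)

import time
import numpy as np                        # must be an MKL-backed numpy build

a = np.random.rand(4096, 4096)
spd = a @ a.T + 4096 * np.eye(4096)       # symmetric positive definite test matrix
t0 = time.time()
np.linalg.cholesky(spd)                   # one of the benchmarked operations (Cholesky)
print("Cholesky:", time.time() - t0, "s")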
From these results, if you run multi-core math workloads, just buy the 3960X;
above that, buy Epyc, and remember to populate all eight channels.
Only look at the 3970X/3990X if you mostly run lots of VMs/multitasking that doesn't need memory bandwidth but does need single-core performance.
(MSRP USD: 3960x $1399, 3970x $1999, 3990x $3990.
There are plenty of Epycs priced in between, so you don't necessarily have to buy the 3970X (32C/4.5~3.7GHz/128MB/280W):
$4025 7552 48C/3.3~2.2GHz/192MB/200W
$3400 7542 32C/3.4~2.9GHz/128MB/225W
$3100 7F52 16C/3.9~3.5GHz/256MB/240W
$2450 7F72 24C/3.7~3.2GHz/192MB/240W
$2300 7502P 32C/3.35~2.5GHz/128MB/180W
$2100 7F32 8C/3.9~3.7GHz/128MB/180W)
As for the sky-high SVD numbers, it's probably the absurd 256MB of combined L3;
if you squeeze 8 threads onto a single CCD, the speed is about the same as a 3700X.
MKL defaults to granularity=core.
On Linux 5.4 the OS automatically spreads threads across different CCXs.
MKL can only see
that one core has two SMT threads;
it cannot tell
that one CCX is four cores sharing an L3,
or that one CCD is two CCXs with a single IFOP link to the IO hub.
If you want to set granularity manually, or you are on Windows,
it's best to prepare your own cpuinfo.txt with node_n id information and feed it to MKL:
https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-thread-affinity-interface-linux-and-windows
Also, on TR the NUMA Nodes Per Socket option has no effect,
so there is no way to use only the nearest IMC to cut latency.
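A minimal sketch of the affinity knobs involved, set before numpy/MKL loads. KMP_AFFINITY is the Intel OpenMP affinity interface described at the link above; the KMP_CPUINFO_FILE variable and the path are assumptions based on those docs:

import os

# Affinity hints for MKL's OpenMP runtime; must be set before importing numpy
os.environ["KMP_AFFINITY"] = "granularity=core,compact,verbose"
# Point the runtime at a hand-written cpuinfo-style file carrying node_n ids
# (assumption: variable name per the linked Intel docs; the path is hypothetical)
os.environ["KMP_CPUINFO_FILE"] = "/path/to/cpuinfo.txt"

import numpy as np
np.show_config()   # confirm this numpy build actually links MKL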
This is a forum post, not a paper, so
how to weigh
SMT for hiding memory latency and raising execution-unit utilization,
cache coherence communication overhead,
cache line invalidation overhead,
page thrashing,
how the CCX L3 gets shared and CCD bandwidth,
and the NUMA Nodes Per Socket option on Epyc
when allocating and pinning threads
so that the 3990X runs a given program fastest
is left to the gurus
who have truly internalized algorithms, computer organization and architecture, parallel computing, and distributed systems
to conclude.
nvidia-smi topo -m
3990x
GPU0 GPU1 CPU Affinity
GPU0 X NV2 0-127
GPU1 NV2 X 0-127
DGX Station
GPU0 GPU1 GPU2 GPU3 CPU Affinity
GPU0 X NV1 NV1 NV2 0-39
GPU1 NV1 X NV2 NV1 0-39
GPU2 NV1 NV2 X NV1 0-39
GPU3 NV2 NV1 NV1 X 0-39
TWCC
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  CPU Affinity
GPU0   X    NV1   NV1   NV2   SYS   SYS   NV2   SYS   0-17
GPU1  NV1    X    NV2   NV1   SYS   SYS   SYS   NV2   0-17
GPU2  NV1   NV2    X    NV2   NV1   SYS   SYS   SYS   0-17
GPU3  NV2   NV1   NV2    X    SYS   NV1   SYS   SYS   0-17
GPU4  SYS   SYS   NV1   SYS    X    NV2   NV1   NV2   18-35
GPU5  SYS   SYS   SYS   NV1   NV2    X    NV2   NV1   18-35
GPU6  NV2   SYS   SYS   SYS   NV1   NV2    X    NV1   18-35
GPU7  SYS   NV2   SYS   SYS   NV2   NV1   NV1    X    18-35
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between
NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe
Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically
the CPU)
PXB = Connection traversing multiple PCIe switches (without traversing the
PCIe Host Bridge)
PIX = Connection traversing a single PCIe switch
NV# = Connection traversing a bonded set of # NVLinks
nvidia-smi topo -mp
3990x
GPU0 GPU1 CPU Affinity
GPU0 X SYS 0-127
GPU1 SYS X 0-127
DGX Station
GPU0 GPU1 GPU2 GPU3 CPU Affinity
GPU0 X PIX PHB PHB 0-39
GPU1 PIX X PHB PHB 0-39
GPU2 PHB PHB X PIX 0-39
GPU3 PHB PHB PIX X 0-39
TWCC
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  CPU Affinity
GPU0   X    PIX   NODE  NODE  SYS   SYS   SYS   SYS   0-17
GPU1  PIX    X    NODE  NODE  SYS   SYS   SYS   SYS   0-17
GPU2  NODE  NODE   X    PIX   SYS   SYS   SYS   SYS   0-17
GPU3  NODE  NODE  PIX    X    SYS   SYS   SYS   SYS   0-17
GPU4  SYS   SYS   SYS   SYS    X    PIX   NODE  NODE  18-35
GPU5  SYS   SYS   SYS   SYS   PIX    X    NODE  NODE  18-35
GPU6  SYS   SYS   SYS   SYS   NODE  NODE   X    PIX   18-35
GPU7  SYS   SYS   SYS   SYS   NODE  NODE  PIX    X    18-35
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between
NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe
Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically
the CPU)
PXB = Connection traversing multiple PCIe switches (without traversing the
PCIe Host Bridge)
PIX = Connection traversing a single PCIe switch
p2pBandwidthLatencyTest
3990x
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 550.18 11.80
1 11.76 553.24
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1
0 552.10 46.94
1 46.93 552.71
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 556.35 20.84
1 21.06 556.59
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1
0 557.18 93.51
1 93.49 554.59
P2P=Disabled Latency Matrix (us)
GPU 0 1
0 1.94 12.44
1 13.86 1.93
CPU 0 1
0 3.24 8.52
1 9.51 3.44
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1
0 1.94 2.15
1 2.09 1.93
CPU 0 1
0 3.54 2.86
1 2.83 3.45
DGX Station
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3
0 735.64 10.05 11.10 11.05
1 10.04 739.82 11.12 11.06
2 11.09 11.13 739.82 9.99
3 11.09 11.15 10.05 741.22
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1 2 3
0 727.42 24.21 24.22 48.33
1 24.21 742.63 48.33 24.21
2 24.20 48.32 742.63 24.20
3 48.34 24.22 24.22 742.63
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3
0 746.18 10.45 19.07 18.90
1 10.45 752.65 19.27 19.11
2 19.08 19.11 749.04 10.52
3 19.03 18.99 10.42 753.38
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3
0 746.89 48.37 48.31 96.47
1 48.37 750.48 96.42 48.38
2 48.36 96.25 750.48 48.36
3 96.28 48.38 48.33 753.38
P2P=Disabled Latency Matrix (us)
GPU 0 1 2 3
0 1.89 16.56 16.44 16.42
1 16.43 1.76 16.19 16.42
2 15.81 16.43 1.87 16.43
3 16.43 16.41 15.81 1.83
CPU 0 1 2 3
0 3.84 9.41 9.21 9.46
1 9.33 3.93 9.68 9.45
2 9.41 9.25 3.78 9.46
3 9.49 9.39 9.35 3.77
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1 2 3
0 1.89 1.91 1.90 1.91
1 1.85 1.76 1.85 1.85
2 1.85 1.87 1.87 1.86
3 1.87 1.85 1.85 1.82
CPU 0 1 2 3
0 3.82 2.90 2.88 2.85
1 2.86 3.91 2.82 2.86
2 2.86 2.86 3.91 2.84
3 2.86 2.89 2.86 3.84
TWCC
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 731.51 9.26 10.41 10.39 10.37 10.36 10.38 10.36
1 9.30 739.82 10.41 10.41 10.37 10.37 10.38 10.38
2 10.43 10.41 739.82 9.24 10.37 10.37 10.38 10.38
3 10.44 10.40 9.28 739.82 10.37 10.37 10.37 10.38
4 10.42 10.39 10.42 10.41 738.42 9.26 10.38 10.39
5 10.42 10.38 10.42 10.41 9.26 742.63 10.32 10.37
6 10.42 10.39 10.42 10.41 10.40 10.42 739.82 9.26
7 10.42 10.39 10.42 10.42 10.40 10.42 9.26 739.82
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 731.51 24.22 24.22 48.36 9.03 9.35 48.33 8.93
1 24.22 741.22 48.35 24.22 9.36 9.19 8.96 48.35
2 24.22 48.35 742.63 48.34 24.22 8.90 9.00 8.83
3 48.34 24.22 48.34 742.63 8.88 24.23 8.83 8.83
4 9.01 8.86 24.22 9.07 742.63 48.35 24.22 48.34
5 8.86 8.97 9.05 24.22 48.32 741.22 48.35 24.23
6 48.34 9.08 9.34 9.17 24.23 48.35 744.05 24.22
7 9.13 48.34 9.01 9.34 48.34 24.22 24.22 742.63
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 746.18 9.38 17.86 17.92 17.79 17.81 17.15 16.99
1 9.27 746.89 17.14 17.06 17.30 17.07 17.13 16.82
2 17.82 17.05 749.76 9.66 17.66 17.74 17.73 17.17
3 17.78 17.08 9.39 747.61 17.96 17.75 17.59 17.26
4 18.03 17.10 17.69 17.72 749.04 9.40 17.58 17.05
5 17.67 17.44 17.80 17.77 9.39 748.32 17.73 17.11
6 17.83 17.02 17.77 17.65 17.43 17.23 749.76 9.38
7 17.27 16.81 17.00 17.28 17.03 17.04 9.44 749.76
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6 7
0 747.61 48.40 48.40 96.52 17.24 17.25 96.54 17.25
1 48.34 750.48 96.50 48.39 17.30 17.24 17.27 96.52
2 48.40 96.28 747.61 96.55 48.40 17.25 17.24 17.25
3 96.28 48.39 96.50 747.61 17.29 48.40 17.25 17.24
4 17.25 17.31 48.34 17.28 754.83 96.52 48.41 96.50
5 17.24 17.24 17.25 48.40 96.31 751.92 96.28 48.40
6 96.51 17.27 17.26 17.25 48.34 96.31 746.18 48.40
7 17.24 96.31 17.25 17.25 96.26 48.40 48.39 746.89
P2P=Disabled Latency Matrix (us)
GPU 0 1 2 3 4 5 6 7
0 1.68 16.39 16.39 16.38 16.38 16.41 16.40 16.40
1 16.43 1.65 16.51 16.83 16.45 16.46 16.49 16.44
2 16.47 16.46 1.71 16.46 17.44 17.45 17.44 17.44
3 16.50 16.44 16.44 1.64 17.45 17.46 17.44 17.44
4 16.43 16.44 16.47 16.44 1.65 16.44 16.81 16.41
5 17.40 17.20 17.32 17.32 15.81 1.63 16.20 16.06
6 16.67 16.56 16.49 16.59 16.48 16.42 1.59 16.43
7 15.41 15.40 15.47 15.40 15.51 15.37 15.50 1.59
CPU 0 1 2 3 4 5 6 7
0 3.93 9.98 10.54 10.37 9.93 8.83 10.02 10.37
1 9.89 3.64 10.43 10.40 9.93 8.74 9.96 10.08
2 10.26 10.31 4.18 10.94 10.48 9.39 10.66 10.80
3 10.24 10.18 11.02 4.00 10.48 9.43 10.72 10.52
4 9.66 9.63 10.65 10.51 4.07 9.07 10.12 10.14
5 8.92 8.83 9.87 9.79 9.36 3.39 9.67 9.45
6 9.76 9.63 10.71 10.61 10.15 9.20 3.94 10.23
7 10.04 9.83 11.20 10.67 10.22 9.28 10.24 4.28
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1 2 3 4 5 6 7
0 1.67 1.48 1.48 1.90 2.12 2.11 1.92 2.13
1 1.52 1.65 1.99 1.53 2.13 2.12 2.12 1.98
2 1.47 1.89 1.74 1.88 1.46 2.12 2.12 2.12
3 1.85 1.48 1.89 1.63 2.11 1.48 2.12 2.13
4 2.10 2.10 1.53 2.10 1.66 1.99 1.52 1.97
5 2.09 2.09 2.10 1.52 1.98 1.62 1.98 1.52
6 1.89 2.12 2.10 2.11 1.47 1.88 1.59 1.48
7 2.12 1.90 2.11 2.12 1.90 1.48 1.48 1.59
CPU 0 1 2 3 4 5 6 7
0 3.65 2.76 2.65 2.67 2.65 2.71 2.58 2.61
1 2.64 3.62 2.60 2.58 2.71 2.67 2.63 2.55
2 2.96 2.86 4.12 2.79 2.87 2.96 2.89 2.96
3 2.82 2.83 2.78 4.01 2.86 2.89 2.86 2.90
4 2.63 2.70 2.66 2.76 3.94 2.65 2.69 2.73
5 2.35 2.38 2.33 2.42 2.36 3.30 2.32 2.36
6 2.66 2.83 2.80 2.76 2.69 2.73 4.02 2.71
7 2.68 2.70 2.90 2.78 2.71 2.78 2.67 4.11
TensorFlow test: resnet50
1xTITAN RTX fp32
| batch64 | batch128
3990x | 298.97 | 310.80
1xTITAN RTX fp16
| batch64 | batch128 | batch256
3990x | 844.00 | 877.49 | 877.88
2xTITAN RTX fp32
| batch32 | batch64 | batch128
| global64 | global128 | global256
3990x | 601.78 | 654.36 | 674.78
2xTITAN RTX fp16
| batch32 | batch64 | batch128 | batch256
| global64 | global128 | global256 | global512
3990x | 1353.65 | 1635.21 | 1813.69 | 1896.68
1xV100 fp32
| batch64 | batch128 | batch256
6148 | 351.17 | 378.99 | 392.35
1xV100 fp16
| batch64 | batch128 | batch256
6148 | 850.51 | 1019.35 | 1145.15
4xV100 fp32
| batch16 | batch32 | batch64
| global64 | global128 | global256
DGX Station | 1037.34 | 1248.10 | 1430.58
4xV100 fp16
| batch16 | batch32 | batch64 | batch128
| global64 | global128 | global256 | global512
DGX Station | 1223.04 | 2382.59 | 3032.58 | 3739.49
8xV100 fp32
| batch8 | batch16 | batch32
| global64 | global128 | global256
TWCC | 479.91 | 773.50 | 1281.98
8xV100 fp16
| batch8 | batch16 | batch32 | batch64
| global64 | global128 | global256 | global512
TWCC | 654.66 | 1210.17 | 2272.34 | 3708.51
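The figures above are presumably training throughput in images/sec. Not the exact tool used (see #1UjJiMol for that), but a minimal Keras sketch of the same kind of measurement, using a recent TF 2.x API; the synthetic data, model setup, and step count are my assumptions:

import time
import numpy as np
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")   # fp16 run; drop this line for fp32

strategy = tf.distribute.MirroredStrategy()                    # data-parallel over all visible GPUs
with strategy.scope():
    model = tf.keras.applications.ResNet50(weights=None, classes=1000)
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

per_gpu_batch = 128
global_batch = per_gpu_batch * strategy.num_replicas_in_sync
steps = 8
x = np.random.rand(global_batch * steps, 224, 224, 3).astype("float32")
y = np.random.randint(0, 1000, size=global_batch * steps)

model.fit(x, y, batch_size=global_batch, epochs=1)             # warm-up pass
t0 = time.time()
model.fit(x, y, batch_size=global_batch, epochs=1)
print("images/sec:", global_batch * steps / (time.time() - t0))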
PyTorch and AMP (Apex) test
bert | fp32| fp16
3990x 2xTitan RTX |00:25.39|00:28.93
6148 1xV100 |00:57.33|01:25.29
DGX Station 4xV100 |00:27.66|00:37.42
TWCC 8xV100 |00:12.54|00:20.86
(The 6148 box may have a single-core or environment problem; treat it as reference only.)
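For anyone unfamiliar with the Apex AMP pattern used here, a minimal sketch; the tiny stand-in model, opt level, and sizes are my assumptions, while the real test runs BERT:

import torch
from apex import amp

model = torch.nn.Linear(1024, 1024).cuda()        # stand-in for the BERT model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# amp.initialize patches the model/optimizer for mixed precision (O1 = conservative mixed precision)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

x = torch.randn(32, 1024, device="cuda")
loss = model(x).pow(2).mean()
with amp.scale_loss(loss, optimizer) as scaled_loss:  # loss scaling keeps fp16 gradients from underflowing
    scaled_loss.backward()
optimizer.step()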
===
There is no Server_Shopping board to post this on,
and these test items don't match the MIS board's needs either (no long-duration heavy-load disk/network IO tests).
Maybe it belongs on the DataScience board?
Self-building is only slightly cheaper than the 585,729 supply-contract machine,
with twice the CPU cores.
What you give up is whole-system stability:
no brand-name system-vendor validation, so cooling and power delivery cannot be guaranteed,
and no big SI handling warranty and troubleshooting.
I don't recommend saving this money.
In actual use I have seen momentary readings of 1600W on the power strip,
nearly hitting the 1650W ceiling of a 110V circuit,
and the CPU alone has hit 700~800W.
That brings to mind the Tatung rice cooker: the 6-cup model draws 600W, the 10-cup 700W.
Asking one small AIO to cool a rice cooker just feels unreasonable.
Run computations for months on end and who knows how long the tubing will last;
the inside of the pump block must feel about like a water heater.
AIOs really ought to report coolant temperature to the motherboard.
This 360 AIO claims 500W+ of heat dissipation,
and with PBO it is indeed stable at roughly 500W.
To run PBO Auto with all 64 cores at 4.1GHz and 800W sustained, you would probably need an open loop plus a chiller,
but in a small lab, if a self-built machine's water cooling ever leaks, it will certainly be spectacular.
This generation's reference TITAN switching from a blower to dual fans looks entirely intended to stop people from buying gaming cards for compute.
600W of heat from two TITAN RTXs pooling there is extremely hard to deal with;
with the whole machine running flat out, the radiator exhaust is about 87% identical to a winter ceramic space heater.
One TY-143 at full speed blows on the GPUs and the other on the VRM;
it doesn't feel quite sufficient, but there is no way around it, and it is already about as loud as a desk fan on high.
Unless this machine will sit somewhere unoccupied,
or in a machine room with hot/cold aisle separation, laid flat on a shelf or in a 4U chassis
with the three front fans swapped for 10,000 RPM 12cm units...
but if you go that far, why not just buy a properly designed tower/rack system with Quadro/Tesla?
--
Tags:
3C