anandtech Zen PPT analysis article (Chinese translation) - 3C

By Audriana
at 2016-08-22T13:06
Saw that someone on the other side of the strait had translated this, so I'm
reposting it here.
Some terms were changed, e.g. 緩存 => 快取 (cache), 多線程 => 多執行緒
(multi-threading).
--
AMD Zen Microarchitecture: Dual Schedulers, Micro-Op Cache and Memory
Hierarchy Revealed
Original article: http://www.anandtech.com/show/10578/amd-zen-microarchitecture-dual-schedulers-micro-op-cache-memory-hierarchy-revealed
Short URL: http://goo.gl/BMfq8u
Translation source: http://www.mykancolle.com/?post=385
In their own side event this week, AMD invited select members of the press
and analysts to come and discuss the next layer of Zen details. In this
piece, we’re discussing the microarchitecture announcements that were made,
as well as a look to see how this compares to previous generations of AMD
core designs.
--
Prediction, Decode, Queues and Execution
First up, let’s dive right into the block diagram as shown:
http://images.anandtech.com/doci/10578/s1%20Perf.png
If we focus purely on the left to start, we can see most of the high-level
microarchitecture details including basic caches, the new inclusion of an
op-cache, some details about decoders and dispatch, scheduler arrangements,
execution ports and load/store arrangements. A number of slides later in the
presentation talk about cache bandwidth.
Firstly, one of the bigger deviations from previous AMD microarchitecture
designs is the presence of a micro-op cache (it might be worth noting that
these slides sometimes say op when it means micro-op, creating a little
confusion). AMD’s Bulldozer design did not have an operation cache,
requiring it to fetch details from other caches to implement frequently used
micro-ops. Intel has been implementing a similar arrangement for several
generations to great effect (some put it as a major stepping stone for
Conroe), so to see one here is quite promising for AMD. We weren’t told the
scale or extent of this buffer, and AMD will perhaps give that information in
due course.
Aside from the as-expected ‘branch predictor enhancements’, which are as
vague as they sound, AMD has not disclosed the decoder arrangements in Zen
at this time, but has listed that the decoders can supply four instructions
per cycle to feed the operations queue. This queue, with the help of the
op-cache, can deliver 6 ops/cycle to the schedulers. The reason the queue
can dispatch more per cycle than the decoders supply is that a single
decoded instruction may crack into two micro-ops (which makes the
instruction vs micro-op definitions even muddier). Nevertheless, this
micro-op queue helps feed the separate integer and floating point segments
of the CPU. Unlike Intel, which uses a combined scheduler for INT/FP, AMD's
diagram suggests that the two will remain separate, each with its own
scheduler, at this time.
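As a rough worked example of that arithmetic (the crack counts below are
hypothetical, not AMD-published figures), this small C program shows why a
6-wide dispatch out of the micro-op queue is not wasted behind a 4-wide
decoder: once some instructions crack into two micro-ops, more than four
micro-ops per cycle can be waiting.

#include <stdio.h>

/* Hypothetical illustration: the decoders emit 4 x86 instructions per cycle
 * (per AMD's slide) and some of them crack into 2 micro-ops, so the micro-op
 * queue can hand the schedulers more micro-ops per cycle than instructions
 * were decoded. Dispatch is capped at the 6/cycle quoted on the slide. */
int main(void)
{
    const int decoded_per_cycle = 4;   /* from the slide */
    const int dispatch_width    = 6;   /* from the slide */

    for (int cracked = 0; cracked <= decoded_per_cycle; cracked++) {
        int uops = decoded_per_cycle + cracked; /* each cracked inst adds one extra uop */
        int sent = uops < dispatch_width ? uops : dispatch_width;
        printf("%d of 4 instructions crack -> %d uops produced, %d dispatched\n",
               cracked, uops, sent);
    }
    return 0;
}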
The INT side of the core will funnel the ALU operations as well as the
AGU/load and store ops. The load/store units can perform two 16-byte loads
and one 16-byte store per cycle, making use of the 32 KB 8-way set
associative write-back L1 data cache. AMD has explicitly made this a
write-back cache rather than the write-through cache we saw in Bulldozer,
which was a source of a lot of idle time in particular code paths. AMD is
also stating that loads/stores will have lower latency within the caches,
but has not explained to what extent they have improved.
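To make that load/store mix concrete, here is a minimal C sketch using
standard SSE2 intrinsics: each iteration performs two 16-byte loads and one
16-byte store, the per-cycle pattern quoted for Zen's load/store unit.
Whether the hardware actually sustains one iteration per cycle depends on
compiler scheduling and cache behaviour, so treat it as an illustration
rather than a benchmark.

#include <emmintrin.h>   /* SSE2: 128-bit (16-byte) loads and stores */
#include <stddef.h>
#include <stdint.h>

/* Each iteration: two 16-byte loads and one 16-byte store. */
void add_16byte_chunks(const uint8_t *a, const uint8_t *b, uint8_t *out,
                       size_t n)
{
    for (size_t i = 0; i + 16 <= n; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));  /* load #1 */
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));  /* load #2 */
        _mm_storeu_si128((__m128i *)(out + i),
                         _mm_add_epi8(va, vb));                  /* store   */
    }
}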
The FP side of the core will afford two multiply ports and two add ports,
which should allow for two joined FMAC operations or one 256-bit AVX
operation per cycle. The combination of the INT and FP segments means that
AMD is going for a wide core and looking to exploit a significant amount of
instruction-level parallelism. How much it will be able to extract depends
on the caches and the reorder buffers; no real data on the buffers has been
given at this time, except that the cores will have a +75% bigger
instruction scheduler window for ordering operations and a +50% wider issue
width for potential throughput. The wider core, all other things being
sufficient, should also allow AMD's implementation of simultaneous
multithreading to take advantage of multiple threads, particularly where
per-thread IPC is naturally low.
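As a sketch of the kind of work those FP ports are built for, here is a
fused multiply-add loop over 256-bit vectors using the standard x86 FMA
intrinsics (compile with something like -mavx2 -mfma). How a 256-bit
operation is split across Zen's FP units internally is not something the
slides specify.

#include <immintrin.h>   /* AVX2 + FMA intrinsics */

/* acc[i] = a[i] * b[i] + acc[i], eight floats at a time. */
void fma_arrays(const float *a, const float *b, float *acc, int n)
{
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_loadu_ps(acc + i);
        _mm256_storeu_ps(acc + i, _mm256_fmadd_ps(va, vb, vc));
    }
}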
Deciphering the New Cache Hierarchy
http://images.anandtech.com/doci/10578/s3%20Cache.png
The cache hierarchy is a significant deviation from recent AMD designs, and
most likely to its advantage. The L1 data cache is both double the size and
higher in associativity compared to Bulldozer, as well as being write-back
rather than write-through. It also uses an asymmetric load/store
implementation, recognising that loads happen more often than stores in the
critical paths of most workflows. The instruction cache is no longer shared
between two cores, and its associativity has also doubled, which should
decrease the proportion of cache misses. AMD states that both the L1-D and
L1-I are low latency, with details to come.
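To put the L1-D figures in perspective: 32 KB at 8-way associativity works
out to 64 sets if one assumes the usual 64-byte cache line (the line size is
not stated on the slide). A quick back-of-the-envelope check:

#include <stdio.h>

int main(void)
{
    const unsigned size_bytes = 32 * 1024;   /* from the slide */
    const unsigned ways       = 8;           /* from the slide */
    const unsigned line_bytes = 64;          /* assumption, not on the slide */

    unsigned sets = size_bytes / (ways * line_bytes);
    printf("L1-D: %u sets x %u ways x %u-byte lines = %u KB\n",
           sets, ways, line_bytes, size_bytes / 1024);
    return 0;
}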
The L2 cache sits at half a megabyte per core with 8-way associativity,
double the 256 KB/core, 4-way arrangement in Intel's Skylake. On the other
hand, Intel's L3/LLC on its high-end Skylake SKUs is 2 MB/core (8 MB per
CPU), whereas Zen will feature 1 MB/core; both are 16-way associative.
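Once hardware is in hand, these figures can be checked directly on Linux:
glibc's sysconf() exposes the cache geometry through the _SC_LEVEL*_CACHE_*
names (a glibc extension; it may return 0 or -1 where the information is
unavailable). A minimal sketch:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("L1-D: %ld bytes, %ld-way\n",
           sysconf(_SC_LEVEL1_DCACHE_SIZE), sysconf(_SC_LEVEL1_DCACHE_ASSOC));
    printf("L2:   %ld bytes, %ld-way\n",
           sysconf(_SC_LEVEL2_CACHE_SIZE),  sysconf(_SC_LEVEL2_CACHE_ASSOC));
    printf("L3:   %ld bytes, %ld-way\n",
           sysconf(_SC_LEVEL3_CACHE_SIZE),  sysconf(_SC_LEVEL3_CACHE_ASSOC));
    return 0;
}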
Edit 7:18am: Actually, the slide above is being slightly evasive in its
description. It doesn't say how many cores the L3 cache is stretched over,
or if there is a common LLC between all cores in the chip. However, we have
received information from a source (which can't be confirmed via public AMD
documents) stating that Zen will feature two sets of 8 MB L3 cache, one for
each group of four cores, giving 16 MB of L3 in total. This would mean 2
MB/core, but it also implies that there is no unified last-level cache in
silicon across all cores, which Intel has. The reasoning behind something
like this is typically to do with modularity, and being able to scale a core
design from low core counts to high core counts. But it would still leave a
Zen core with the same L3 cache per core as Intel.
http://i.imgur.com/JSBgrgJ.jpg
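If the 'two 8 MB L3 slices, one per group of four cores' description is
accurate, the split should eventually be visible from software. On Linux the
standard sysfs cache topology reports which logical CPUs share each cache;
the sketch below walks the first 16 logical CPUs (an assumption matching an
8C/16T part) and prints who shares index3, which is normally the L3:

#include <stdio.h>

int main(void)
{
    char path[128], line[256];
    for (int cpu = 0; cpu < 16; cpu++) {   /* assumes up to 16 logical CPUs */
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                      /* CPU absent or no L3 reported */
        if (fgets(line, sizeof line, f))
            printf("cpu%-2d shares its L3 with CPUs %s", cpu, line);
        fclose(f);
    }
    return 0;
}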
What this means, between the L2 and the L3, is that AMD is putting more
lower-level cache nearer the core than Intel, and as it is low level it is
private to each core, which can potentially improve single-thread
performance. The downside of bigger, lower (but separate) caches is that
each of the cores will have to snoop into the others' large caches to ensure
clean data is being passed around and that data held in the L3 is not out of
date. AMD's big headline number overall is that Zen will offer up to 5x the
cache bandwidth to a core over previous designs.
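As a reminder of what that snooping has to accomplish, here is a textbook
MESI-style sketch of a remote core responding to a read snoop: a Modified
copy has to be forwarded (and the holder downgraded) before the requester
can use the line. This is a generic illustration only, not Zen's actual
protocol; AMD has historically used MOESI-family protocols and disclosed no
coherence details here.

#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } line_state;

/* Generic MESI-style response of a remote core to a read snoop. */
static line_state snoop_read(line_state remote, int *forwards_data)
{
    switch (remote) {
    case MODIFIED:  *forwards_data = 1; return SHARED;  /* supply dirty data  */
    case EXCLUSIVE: *forwards_data = 0; return SHARED;  /* silently downgrade */
    case SHARED:    *forwards_data = 0; return SHARED;
    default:        *forwards_data = 0; return INVALID; /* L3/memory supplies */
    }
}

int main(void)
{
    const char *names[] = { "INVALID", "SHARED", "EXCLUSIVE", "MODIFIED" };
    for (int s = INVALID; s <= MODIFIED; s++) {
        int fwd;
        line_state after = snoop_read((line_state)s, &fwd);
        printf("remote %-9s -> remote becomes %-7s, forwards data: %s\n",
               names[s], names[after], fwd ? "yes" : "no");
    }
    return 0;
}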
Low Power, FinFET and Clock Gating
When AMD launched Carrizo and Bristol Ridge for notebooks, one of the big
stories was how AMD had implemented a number of techniques to improve power
consumption and subsequently increase efficiency. A number of those lessons
have come through with Zen, as well as a few new aspects in play due to the
lithography.
http://images.anandtech.com/doci/10578/s5%20FinFET.png
First up is the FinFET effect. Regular readers of AnandTech and those that
follow the industry will already be bored to death with FinFET, but the
design allows for a lower power version of a transistor at a given frequency.
Now of course everyone using FinFET can have a different implementation which
gives specific power/performance characteristics, but Zen on the 14nm FinFET
process at Global Foundries is already a known quantity with AMD’s Polaris
GPUs which are built similarly. The combination of FinFET with the fact that
AMD confirmed that they will be using the density-optimised version of 14nm
FinFET (which will allow for smaller die sizes and more reasonable efficiency
points) also contributes to a shift of either higher performance at the same
power or the same performance at lower power.
http://images.anandtech.com/doci/10578/s6%20Efficiency.png
AMD stated in the brief that power consumption and efficiency were
constantly drilled into the engineers, and as explained in previous
briefings, there ends up being a tradeoff between performance and efficiency
for a number of elements of the core (e.g. 1% performance might cost 2%
efficiency). For Zen, the micro-op cache will save power by not having to go
further out to get instruction data; improved prefetch and a couple of other
features, such as move elimination, will also reduce the work. But AMD also
states that the cores will be aggressively clock gated to improve
efficiency.
We saw with AMD’s 7th Gen APUs that power gating was also a target with that
design, especially when remaining at the best efficiency point (given
specific performance) is usually the best policy. The way the diagram above
is laid out would seem to suggest that different parts of the core could
independently be clock gated depending on use (e.g. decode vs FP ports),
although we were not able to confirm if this is the case. It also relies on
having very quick (1-2 cycle) clock gating implementations, and note that
clock gating is different to power-gating, which is harder to implement.
Simultaneous Multi-Threading
On Zen, each core will be able to support two threads in what is called
‘simultaneous multi-threading’ (SMT). Intel has supported its version of SMT
for a number of years, and other CPU manufacturers like IBM support up to
eight threads per core on their POWER8 platform designs. Building a core
that can use multiple threads can be tough, as it requires a lot of
resources to make sure that the threads do not block each other by consuming
all the cache and buffers in play. But AMD will equip Zen with SMT, which
means we will see 8C/16T parts hitting the market.
http://images.anandtech.com/doci/10578/s4%20SMT.png
Unlike Bulldozer, where having a shared FP unit between two threads was an
issue for floating point performance, Zen's design is more akin to Intel's
in that each thread will appear as an independent logical core, and there is
not the resource limitation that Bulldozer had. With sufficient resources,
SMT will allow the core's instructions per clock to improve; however, it
will be interesting to see which workloads will benefit and which ones will
not.
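On a shipping 8C/16T part the pairing of the two hardware threads on each
physical core will be visible through the standard Linux topology files
(nothing Zen-specific about the path); a short sketch, again assuming up to
16 logical CPUs:

#include <stdio.h>

int main(void)
{
    char path[128], line[64];
    for (int cpu = 0; cpu < 16; cpu++) {   /* assumes up to 16 logical CPUs */
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(line, sizeof line, f))
            printf("cpu%-2d SMT siblings: %s", cpu, line);
        fclose(f);
    }
    return 0;
}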
Timeframe and Availability
At the presentation, it was stated that Zen will be available in volume in
2017. As the AM4 platform will share a socket with Bristol Ridge, users are
likely to see Bristol Ridge systems from AMD's main OEM partners, like Dell
and others, enter the market before separate Zen CPUs hit the market for DIY
builders. It's a matter of principle that almost no consumer-focused
semiconductor company releases a product for the sales season, and Q1
features such events as CES, which gives a pretty clear indication of when
we can expect to get our hands on one.
It's worth noting that AMD said that as we get closer to launch, further
details will come, along with deeper information about the design. It was
also mentioned that the marketing strategy is currently being determined,
such that Zen may not actually be the retail product name for the line of
processors (we already have Summit Ridge as the platform codename, but that
could change for retail as well).
Wrap Up
AMD has gone much further into their core design than I expected this week.
When we were told we had a briefing, and there were 200-odd press and
analysts in the room, I was expecting to hear some high level puff about the
brand and a reiteration of their commitment to the high end. To actually get
some slides detailing parts of the microarchitecture, even at a basic cache
level, was quite surprising and it somewhat means that AMD might have stolen
the show with the news this week.
--
Tags:
3C