2 points
They are definitely holding back. Now that they can match Intel and have them by the short and curlies, they can just refine and release. AMD's product roadmap seems very strong; there's talk of 2.5D for Zen 3, in the form of HBM on the I/O die, and 3D stacking for Zen 4. Intel, however, are stuck within the confines of 14nm on desktop and server for another two and a half years. That means whatever refinements their new cores have, they won't get more logic in there, so IPC gains are going to be super hard to eke out. I just hope AMD don't stuff it up now that they have the upper hand.
2 points
A reply from a user on whirlpool forums I found interesting.

Dasa writes... Jag writes...

They haven't disclosed the cache layout as yet, but I don't think it's a unified L3 cache. It'll probably be 32MB of L3 used by the 6 cores in one chiplet, and another 32MB used by the other 6 cores in the other chiplet. I doubt it'll be any faster than the 32MB in the single-chiplet designs except in limited circumstances (running very different tasks on each chiplet).

There shouldn't be increased latency any more, because there's no longer any local access to memory. The memory controller has moved from the CCX to the I/O controller chip, and each chiplet connects to the I/O controller. This means that all cores have equal latency to memory. The previous design was NUMA, with each CCX having direct access to half the RAM, and accessing the other half via the Infinity Fabric and the other CCX. Now all chiplets access RAM over Infinity Fabric to the I/O controller. i.e. instead of half of memory being fast and half slow, everything now uses the slower path. Reducing the impact of this design choice is the primary reason the L3 cache has doubled.

BoganTimmy writes... Jag writes...

I thought the PCIe 4.0 bandwidth test was presented in such a way as to be easily misleading. If you miss any of the qualifiers, it appears to be showing something that it wasn't. At this point in time there was the comparison in gaming that suggests it's at least on par with Intel. With no testing parameters it's impossible to know at this stage. Perhaps both CPUs were hitting a GPU bottleneck? They were suspiciously close. Independent testing will show this up pretty quickly.

the_other_guy writes... Jag writes...

If you were going to count the L2 and L3 like this, then it'd be 70MB vs 18MB. However, this isn't really an apples-to-apples comparison, for several reasons:
1. As per above, it's likely that the 64MB L3 actually operates as two independent 32MB caches, rather than one combined cache.
2.
Zen's L3 cache operates as a non-inclusive victim cache, not a general-purpose write-back cache like Intel's. That means this cache contains data that was evicted from the L2 cache. When you need to access data in the L3, it must swap data between the L2 and L3, which turns a read operation into a move (L3 <-> L2) followed by a read (from L2). This increases cache latency. Hopefully this is something Zen 2 improves on, but there's no architecture information indicating they've changed this as yet.
3. Due to the above, AMD's L2 acts more like a window into the L3, so adding them together isn't really appropriate. Technically it can hold <L2 size> + <L3 size>, but it behaves more like a cache of <L3 size>.
4. Zen generally has a higher-latency path to memory than Intel does, so cache size is more important.

Due to the above, it'd be closer to say that in operation it's more like 32MB vs 18MB, but even then that's a fairly irrelevant comparison.

sysKin writes... Jag writes...

It would add a fair bit of complexity to the I/O controller. If they weren't victim caches it might be worth always checking the other L3 for the data before checking RAM, but I don't think this will happen for this generation. They haven't released much info about the I/O controller, and it does some unexpectedly strange things like directly supporting an audio codec and USB ports (meaning these don't compete for chipset bandwidth), so there's still some hope yet.

Sov writes... Jag writes...

RAID 1, maybe? RAID 0 seems a bit silly except for fairly useless bragging rights. If you really need RAID 0 NVMe, then you probably want a HEDT system with non-M.2 PCIe SSDs. The way SSDs access their NAND memory is effectively RAID 0 already built into the drive. There may be some edge cases where you can get multiple PCIe 3.0 x4 SSDs for cheaper than the newer PCIe 4.0 SSDs of similar speed, but I imagine that will be a fairly narrow window.

Nukkels writes... Jag writes...
Mature node vs new node. This behaviour is fairly typical; it's just at an unusual extreme because Intel's 14nm process has had extended tuning in comparison to previous nodes. Intel's original node roadmap acknowledged this by planning to run 10nm alongside 14nm+: they'd use the 10nm node for power-efficiency applications and the second-iteration 14nm+ for performance applications.

I think the biggest surprise in the new CPUs is that the 3700X has a lower base clock than the 2700X. It's too early to tell if they've had binning issues, or if they just needed to make a bit more room for the 3800X so that they could up-sell the typical price point.

Note that we're also yet to see how AMD will be calculating TDP for this generation, which is fairly important info. Their previous method used a load temperature that was tied to the GloFo nodes; this may change with the move to TSMC. Boosting may also work differently.
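The L3-as-victim-cache behaviour described in point 2 of the reply above can be sketched as a toy model. This is an editor's illustration only: the class name, the tiny capacities, and the plain-LRU replacement policy are all made up for clarity and are not Zen's actual logic. What it does reproduce from the text is the key behaviour: the L3 is filled only by L2 evictions, and an L3 hit has to swap the line back into the L2 (displacing another line into the L3) before it can be read.

```python
from collections import OrderedDict

class VictimCacheSketch:
    """Toy model of an L2 backed by a non-inclusive victim L3.
    Sizes and LRU policy are illustrative, not Zen's real design."""

    def __init__(self, l2_lines=4, l3_lines=8):
        self.l2_lines = l2_lines
        self.l3_lines = l3_lines
        self.l2 = OrderedDict()  # addr -> data, oldest entry first
        self.l3 = OrderedDict()  # only ever filled by L2 evictions

    def _evict_l2_line(self):
        # A line evicted from L2 becomes the newest L3 (victim) entry.
        addr, data = self.l2.popitem(last=False)
        if len(self.l3) >= self.l3_lines:
            self.l3.popitem(last=False)  # oldest victim leaves entirely
        self.l3[addr] = data

    def read(self, addr):
        if addr in self.l2:
            # Plain L2 hit: no extra traffic.
            self.l2.move_to_end(addr)
            return self.l2[addr], "L2 hit"
        if addr in self.l3:
            # L3 hit: the line must move back into L2 before the read,
            # displacing an L2 line into the L3 -- the swap that adds
            # latency compared with a directly readable L3.
            data = self.l3.pop(addr)
            if len(self.l2) >= self.l2_lines:
                self._evict_l2_line()
            self.l2[addr] = data
            return data, "L3 hit (swap)"
        # Miss in both: fetch from memory straight into L2.
        if len(self.l2) >= self.l2_lines:
            self._evict_l2_line()
        self.l2[addr] = f"mem[{addr}]"
        return self.l2[addr], "memory"
```

Touching five distinct lines with a 4-line L2 pushes the oldest line down into the L3; reading that line again comes back as an L3 hit, which swaps it into the L2 while another L2 line drops into the L3. That swap is the extra latency step the reply is describing.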
1 point
Surely you aren't surprised by that; it's right there in the name: public wifi. You should not have any expectation of privacy in any public space, even virtual ones.