Intel has issued a statement confirming that BIOS patches for the Spectre vulnerability are causing crashes on Broadwell and Haswell systems.
The company wants us to know that it’s sticking to its recent commitment to put security first by confirming that it’s investigating an issue with the CPU microcode updates it issued to its hardware partners. These updates are being distributed to users’ systems as BIOS updates, which are just beginning to roll out.
Intel said that customers have reported “higher system reboots” (crashes) after applying BIOS updates. So far, the issue only affects Broadwell (Core i3/5/7 5000 series for mobile) and Haswell (Core i3/5/7 4000 series for desktop and mobile). Intel didn't specifically say whether Broadwell-E (Core i7 6000 series on desktop) is also affected. The issues have been reported in both data centers and regular user systems.
“We are working quickly with these customers to understand, diagnose and address this reboot issue. If this requires a revised firmware update from Intel, we will distribute that update through the normal channels. We are also working directly with data center customers to discuss the issue.”
Intel isn't recommending that you skip the BIOS updates from your system OEM, but if you're using one of the affected CPUs, you might want to wait for this to unfold. The BIOS fixes are used in conjunction with software fixes to mitigate Spectre Variant 2. If you have auto-updating turned on in your OS, then you have most likely already received the software side of the fix.
I guess it's to keep power consumption per core down.
Yeah... well it has to fit in that 15W envelope. But they can ramp single-core performance right up. Unfortunately these aren't the 12nm refresh, so they won't be able to turbo as high or as competitively.
We noted that the Acer Swift 3 with a Core i5-8250U 8th Gen CPU and GeForce MX150 pulled about 9 Watts at idle and 13 - 16 Watts under the light-duty load of our HD video loop test. The HP Envy x360 15z with Ryzen 5 Mobile pulled the same 9 Watts at idle with similar panel brightness, but under the load of video playback with VLC it pulled 20 Watts, with peaks to 30 Watts in spots. We also quickly checked CPU utilization, whether running VLC or the Windows 10 video player, and saw Ryzen 5 2500U CPU utilization oscillate at a low 4 - 12 percent. So it appears, at least with respect to VLC and video playback, that Ryzen Mobile with Vega 8 graphics is either more power-hungry or needs a bit more driver maturation to be fully optimized. That leaves two variables that could be affecting power draw here, beyond just AMD Ryzen Mobile and its Vega GPU: driver optimization for Ryzen Mobile, or the system's 7200 RPM hard drive.
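Those power draw figures translate directly into runtime estimates. A minimal sketch of the arithmetic, assuming a hypothetical 55 Wh battery (the actual capacity of either machine may differ; check the spec sheet):

```python
def runtime_hours(battery_wh, avg_draw_w):
    # Runtime estimate: energy capacity (watt-hours) divided by average draw (watts)
    return battery_wh / avg_draw_w

BATTERY_WH = 55.0  # assumed capacity, not a measured value

# Midpoint of the 13-16 W range observed on the Acer Swift 3 during video playback
intel_est = runtime_hours(BATTERY_WH, 14.5)
# Sustained ~20 W observed on the Envy x360 15z under VLC playback
ryzen_est = runtime_hours(BATTERY_WH, 20.0)

print(f"Intel estimate: {intel_est:.1f} h, Ryzen estimate: {ryzen_est:.1f} h")
```

At equal battery capacity, the roughly 5.5 W delta under video playback costs the Ryzen system about an hour of runtime, which is why driver optimization matters so much here.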
Update, 11/24/2017 - 9:33AM
HP pushed a Radeon graphics driver update to the Envy x360 15z so we re-ran our HD video rundown test. The machine picked up 19 minutes of up-time as a result, so we've updated the graph above to reflect this time.
HP Envy x360 15z With Ryzen Mobile Performance And Final Thoughts
Fitting for the forthcoming Thanksgiving holiday here in the US, we've got a lot to digest with our first look at the AMD Ryzen Mobile platform. So, let's break down the main course and various side dishes. First, the AMD Ryzen 5 2500U quad-core mobile processor we tested generally offered performance competitive with Intel's latest 8th Gen quad-core Kaby Lake-R offerings in various highly refined and optimized machines like the Dell XPS 13 and the HP Spectre x360. Presumably, a Ryzen 7 2700U would look even better in a similar match-up, with a bit more top-end clock speed.
Looking at Ryzen Mobile's graphics prowess, as we hoped, the platform offers significantly better performance with its Vega 8 IGP than Intel's latest UHD 620 IGP in the 8th Gen Core series line-up. In some tests it offered 60 - 70 percent faster frame rates and was able to make unplayable titles playable at 1080p. Granted, our short testing window was mostly limited to some light-duty, legacy game titles, but as an aside, we also quickly tested current-gen games like Middle Earth: Shadow of War. Here we saw playable frame rates at 1080p with Low to Medium image quality settings.
The early indicators for AMD's Ryzen Mobile platform are strong, both on the CPU and GPU side of the equation. With respect to battery life, however, the picture for us is still pretty murky and we're going to reserve judgement for now. Frankly, we don't feel like the HP machine we picked up at retail is a very compelling solution overall. Though it's priced right at $729, its dim display and pokey hard drive left a lot to be desired and ultimately kept us from getting a clean A/B comparison in certain spots. With Ryzen Mobile in a more premium configuration, with a higher-quality, more power-efficient display and a fast SSD, our view of its performance profile could have been significantly different.
In fact, AMD may be in a peculiar spot with Ryzen Mobile. For some users, the deciding line when stepping up from integrated graphics may be whether a discrete graphics solution, like NVIDIA's GeForce MX150, is available in a given model of machine. As we showed, a GeForce MX150 puts up next-level performance over the Ryzen 5 2500U's Vega 8 IGP at least, though the question still remains how a Ryzen 7 2700U would compare, with two more Radeon CUs and a touch more clock speed at its disposal.
Ultimately, it will come down to what AMD's OEM partners like HP, Lenovo, Acer, and Dell can pull together for laptop designs with Ryzen Mobile. It would seem the product lends itself very well to premium configurations, if battery life can be managed in thin and light designs. Either way you slice it, our early view of Ryzen Mobile is encouraging with some real bright spots, coupled with a bit of uncertainty as well. We'll just have to see what comes to market from the major players in the months ahead. What's very clear, however, is that AMD is back on competitive footing again with Intel in mobile processors as well, with Ryzen and Vega delivering a solid 1-2 punch.
I think that's a pretty good result all told. I look forward to seeing 2700U benches.
According to HPE, the benchmarks were attained using the AMD EPYC model 7601 on ProLiant DL385 Gen10 systems, which are available with up to 64 cores, 4 TB of memory and 128 lanes of PCIe connectivity. For the SPECrate2017_fp_base benchmark the HPE/AMD combo scored 257, and for SPECfp_rate2006 the score was 1980. Both were the highest two-socket system scores ever recorded for their respective benchmarks.
Intel is now claiming it will "introduce" its first 10nm products by the end of 2017, with serious volumes coming in 2018. Credible leaks have revealed that Intel is targeting availability more in the middle of 2018.
A delay in the mass-production start from the end of 2015 to the end of 2017 is a delay of two years, or roughly a full generation, at least back when an Intel generation was defined as roughly two years.
As if it couldn't get any worse, by Intel's own admission, its first- and second-generation 10nm technologies -- 10nm and 10nm+, respectively -- will offer worse performance than its upcoming 14nm++ technology. Intel says its 10nm technology won't open up a clear performance lead over 14nm++ until its third iteration -- known as 10nm++ -- which should go into production sometime in 2020.
Interesting article. I can't seem to get my head around it, as Intel are unstoppable, or at least appear to be. Even so... how the hell does a smaller process not increase performance?
Seems like this might be enough of a struggle for them for AMD to catch up, and we can have a bit more balance in the marketplace.
The writer of the article has shares in Intel, so it's not a bash piece.
The advantage in photo rendering tests is even smaller than in the video ones, and that's likely the workload that'll be done.
Coffee Lake is out of the question. If the 8 core was out.. which is what the Z390 board is going to support, that might be different... but would it still use the ring bus architecture or go to mesh like Skylake X and lose its edge? Regardless it's still not here.
7820 or 1700 would be good buys, and there are arguments for both. But for me, it's the 1700 at half the cost: it's not that far behind in the rendering workloads, and there's an upgrade path for Ryzen+.
But if g_day didn't want to swap the CPU out and sell the old one later... then the 7820 would be the go.
Trouble with that offer of advice is there is very little understanding of the astronomy workflow and which hardware optimises its processing requirements. PixInsight (which I don't use) offers some benchmark capabilities, but I use DeepSkyStacker or CCDStack. The authors of both have said they are multi-core enabled (but with no GPU smarts), and there's very little else to go by.
I have searched the Australian and US astronomy websites of over 30,000 dedicated amateur astronomers and spoken to the makers of many of the astronomy software tools - and there isn't clear consensus! When you say the community can help me - remember clearly - I helped start this community and am one of a small set of its initial go-to folk for all matters technical. But my knowledge doesn't give me insight as to whether, say, a heavily RAIDed PCIe storage device at any cost for scratch files would be better than one or more large RAMDrives. The workflow tools within PixInsight show you actually do better with four decent-sized RAMDrives than with any other configuration - with a lot of debate, even from its authors, on why!
My needs are more closely aligned to a workstation for a sub-class of 3D rendering needs - with the extra constraint that the GPU isn't heavily used yet by the makers of small but very expensive astrophotography programs - the CCDWare family, DSS. Whilst PixInsight has started to use it, Registar and many others haven't yet. All the astronomy data acquisition s/w tools - TheSkyX, PEMPro, the ZWO camera control suites, PHD autoguiding, auto focusing and V-spline, plate solving, dome management etc. - are well set up and working fine. Their challenges aren't data processing, merely faultless data acquisition. It's the occasional data-crunching load that is the X-factor for most astronomers.
It's trivially simple to set up a big rig for gaming. Game development has really simple and very well-known hardware needs; the interaction of CPU, GPU and memory latency in DirectX 11 or 12 or OpenGL games is thoroughly understood and catered for. Astronomers' needs are far less well understood.

The arrival of very fast, relatively inexpensive CMOS cameras with shallow light wells (20,000e - 40,000e) but extremely low readout noise is a game changer! It used to be that you needed $4K - $16K CCDs that were very noisy, with a very deep electron well for each pixel - and so had to be very, very deeply cooled to reduce dark current; they had fat pixels with really deep wells to shoot hard targets with a decent signal-to-noise ratio. They had poor frame buffers and slow USB 1.1 or 2.0 readouts, so amp glow was a factor. Nowadays a sub-$2K CMOS camera from, say, ZWO with USB 3 can take shooting light frames of your target from requiring 20 x 30 minute shots down to 300 x 2 minute shots. The signal-to-noise goes through the roof in that situation - thanks to the mathematics of noise sampling.

But the need is to combine that data for each of the Red, Green, Blue, Luminance, Hydrogen Alpha, Oxygen III and Sulphur II filter channels. So you may have 300 x 10 MB RAW images to combine for each of seven colour channels, for each target, for each run duration (30 seconds, 2 minutes, 4 minutes etc.), before you even start the channel combine and deep processing in Photoshop CS or MaximDL or both!
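The "mathematics of noise sampling" point can be made concrete with the standard CCD signal-to-noise equation. A minimal sketch with illustrative (not measured) rates - the key takeaway is that low read noise is what makes hundreds of short subs competitive with a handful of long ones:

```python
import math

def stack_snr(n_subs, sub_exposure_s, signal_rate, sky_rate, read_noise_e):
    """SNR for a stack of n_subs identical sub-exposures (CCD equation).

    signal_rate and sky_rate are electrons/second per pixel; read_noise_e
    is electrons RMS per readout. Dark current is omitted for simplicity.
    """
    signal = n_subs * sub_exposure_s * signal_rate
    noise = math.sqrt(signal
                      + n_subs * sub_exposure_s * sky_rate   # sky shot noise
                      + n_subs * read_noise_e ** 2)          # read noise, once per sub
    return signal / noise

# All three stacks total 10 hours; the rates are made-up but plausible values.
ccd_long   = stack_snr(20, 1800, 0.05, 0.5, 8.0)   # 20 x 30 min, noisy old CCD
ccd_short  = stack_snr(300, 120, 0.05, 0.5, 8.0)   # 300 x 2 min, same CCD
cmos_short = stack_snr(300, 120, 0.05, 0.5, 1.5)   # 300 x 2 min, low-noise CMOS
```

Short subs on the old high-read-noise CCD pay a heavy penalty (300 readouts at 8 e- each), while the low-read-noise CMOS lands essentially the same SNR as the long-exposure stack - plus the practical wins of easier guiding and being able to throw away bad frames.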
So I have opted for an all-round rig that should do everything credibly, but allow for movement once the software or hardware available adapts. Need more CPU? Then I would go to an i9-7940X - a 14-core beast. Would it beat the Threadripper 1950X for my workload? I certainly expect so, given the price differential. Would it beat a dual-CPU 60-core Xeon workstation from 2012? Possibly it would get its bottom paddled!
I may find that no sane amount of compute will bring my needs down to real-time processing of a few minutes per imaging target. Fine - you just batch them all and queue them to run before you go to bed, and look at the results in the morning. If they take 4, 8, 12 or 16 hours to complete - so be it! So far I have only tried runs of about 30 light frames (the actual pictures of your target) plus matching master dark frames, flat frames, BIAS and flat-dark frames. Once the light frames grow into the hundreds I don't know how the various software sets will scale. All this has to be temperature-calibrated too (lights, darks and flats) - but I hope to avoid that by having everything calibrated Summer through Winter at -25 to -30 C.
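The overnight-batch approach can be sketched as a trivial queue runner. The job names and command lines below are placeholders, not real stacker invocations - you'd substitute whatever command-line interface your stacking software exposes:

```python
import subprocess
import sys
import time

def run_batch(jobs, log=print):
    """Run stacking jobs one after another, logging elapsed wall time.

    jobs is a list of (name, argv) pairs; each argv is handed to
    subprocess.run. Returns a dict of job name -> elapsed seconds,
    or None for jobs that failed (the queue keeps going regardless).
    """
    results = {}
    for name, argv in jobs:
        start = time.monotonic()
        try:
            subprocess.run(argv, check=True)
            results[name] = time.monotonic() - start
            log(f"{name}: finished in {results[name]:.1f}s")
        except (subprocess.CalledProcessError, OSError) as exc:
            results[name] = None
            log(f"{name}: FAILED ({exc})")
    return results

# Hypothetical queue: each entry would be a real stacker command line.
jobs = [
    ("M42_Ha_120s", [sys.executable, "-c", "pass"]),  # stand-in command
]
```

Kick it off before bed and read the log over coffee; a failed channel doesn't block the rest of the night's queue.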
So with these deeper insights into my needs, can you really offer far better advice than I've been given over the past six months?
I meant no disrespect mate, I'm looking at it from a bird's-eye kind of perspective. When you said it's "gotta be better than your old Core 2 Quad", I was just trying to get myself up to your speed. Then you went and said a whole bunch of words to put me in my place... and indeed they did.
From my view I just want to make sure that you want to invest so much dosh after getting by on an old beater for so long. But I suppose you've already calculated the cost.
As regards where these products are priced... AMD has come in to seriously undercut Intel. They priced their 16-core at the same cost as Intel's 10-core, and it dominates it in just about every measure; Intel's 10-core even trades blows with AMD's 12-core offering and doesn't come out ahead there either. More than likely AMD will support their socket longer than Intel does, because that's what they usually do. All the tech reviewers I've watched say that Threadripper is the most compelling offering, and a couple of them built Threadripper systems for their own professional work. So with that information, it's really up to you... I feel you'll be happy with either, but one will definitely leave you with more in your pocket.