SceptreCore

AMD conFusion? Forget the hype!


I guess Bulldozer was really a server chip and AMD thought it could make an awesome desktop one, but that didn't pan out.

The majority of AMD's CPUs were server-first, then desktop. The Stars core debuted in Barcelona and Magny-Cours before they moved on to the Bulldozer architecture with Interlagos and Valencia.

 

I miss my old Opteron 165; the motherboard kinda died ... well, two Socket 939 mobos died on me.


2600K @ 5.2GHz versus the FX-8150 @ 4.76GHz: CrossFireX HD 6970 x3 head-to-head

http://www.tweaktown.com/articles/4353/amd...ead/index1.html

Power use is actually fairly close when they're clocked to this level, but performance isn't.

Once again, FPS for both are over 60 in the games tested. It seems not many reviewers like to test games that really drop a CPU below 60 FPS, or if they do, they choose to benchmark a part of the game with low CPU stress.

Edited by Dasa


Hicsy mate, there are something like 10-15 reviews in the last few pages showing that Bulldozer is generally inferior to the 2500K, overclocked or not. Still not a bad effort from AMD by any means, but dear god, I would like to see it outperform their previous generation in single-core applications, particularly when you consider the price. Even the RRP is a touch too much IMO, not to mention the prices these are actually being advertised at.


I guess Bulldozer was really a server chip and AMD thought it could make an awesome desktop one, but that didn't pan out.

The majority of AMD's CPUs were server-first, then desktop. The Stars core debuted in Barcelona and Magny-Cours before they moved on to the Bulldozer architecture with Interlagos and Valencia.

 

I miss my old Opteron 165; the motherboard kinda died ... well, two Socket 939 mobos died on me.

 

Oh yeah, those were awesome; many people loved them!


I doubt they'll move to TSMC for their mainstream CPUs, but for APUs and mobile parts it might be easier.


It would be a massive undertaking for AMD to do this, especially since TSMC does not have a 32nm node, only 28nm and 40nm. 28nm is having issues, as we can see from the absence of new AMD or NVIDIA GPUs on it. A Bulldozer on a 40nm node would quite possibly be the biggest chip ever produced.

 

Also, I hear that CPU designs don't work well on half-node processes. Given that the parent of a 40nm process is the 45nm process, you could say you would really be building it on 45nm. If that's the case, you might as well run it through GloFo's 45nm node. And in that case: Thuban is 904M transistors with a 346mm² die, while Bulldozer is apparently around 2 billion transistors. I really don't want to think about how big that would be!

 

I think this is more about lighting a fire under GloFo's arse than anything else. I could see AMD taking their CPU business to IBM before they would go to TSMC.

 

I know that if I were Rory Read right now, I'd be kicking down GloFo's door and smashing heads against walls.


I guess Bulldozer was really a server chip and AMD thought it could make an awesome desktop one, but that didn't pan out.

The majority of AMD's CPUs were server-first, then desktop. The Stars core debuted in Barcelona and Magny-Cours before they moved on to the Bulldozer architecture with Interlagos and Valencia.

 

I miss my old Opteron 165; the motherboard kinda died ... well, two Socket 939 mobos died on me.

 

Oh yeah, those were awesome; many people loved them!

 

My Opteron 170 is no longer in use, but I still have it in working order, along with 4 sticks of BH-5 that could do ~260MHz at 2-2-2-6.

In its day it was a bit slower at single-threaded tasks than my Pentium M 730 (using a socket adapter in a 478 motherboard), but damn, it was nice having a dual-core chip with two X1900 XTs.


Posted Image

Intel has started bottom left (SMT), AMD has gone for top left (CMP).

Ahhh... Dude, AMD have gone top right. Hence the little info pointing to it, highlighting its benefits.

Edited by SceptreCore


If your brand-new top-of-the-line 8-core CPU can't even match a 4-core Hyper-Threading 2600K, something went wrong, I think.

You have to understand: these aren't eight physical cores in the typical sense. They're about 2/3 the physical size of the previous generation's full cores, with about 80% of the performance. But now there are two per module, taking up 1.5x the space a previous-architecture core would, with the same number of FPUs. They were going for Hyper-Threading-style sharing, but with real physical processing resources behind it. Obviously it didn't work out as hoped, which is probably why it was delayed: they got a big "what the?", because it wasn't what the maths were telling them. Then again, benchmark bar charts look much worse than percentages that wouldn't make anyone care, so maybe AMD did hit their intended performance marks.

 

They improved their IPC to lessen the delta. I always knew AMD was sacrificing single-threaded performance for multi-threaded. Although at one stage they claimed greater single-threaded performance, and in some workloads that's true. There's room to improve IPC further just by reducing those cache latencies.
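The module trade-off described above can be put in rough numbers. This is only a sketch using the post's own figures (2/3 size, 80% performance, 1.5x area), which are forum estimates rather than official AMD data:

```python
# Figures from the post above: each CMT core is ~2/3 the size of a full
# previous-generation core at ~80% of its performance, and a two-core
# module occupies ~1.5x the area of one old core.
per_core_perf = 0.8
cores_per_module = 2
module_area = 1.5

module_throughput = cores_per_module * per_core_perf   # vs. one old core
throughput_per_area = module_throughput / module_area

print(module_throughput, round(throughput_per_area, 3))  # 1.6 1.067
```

So on those numbers a module should deliver ~1.6x the throughput of one old core for 1.5x the area, a modest per-area win that only pays off when the workload actually uses both cores.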


The fact of the matter is, software these days is rarely truly capable of taking advantage of multiple cores. It's bloody hard to write concurrent software, and many developers who try it find themselves in an uphill battle against obscure bugs that are hard to reproduce, let alone track down. So making a processor that suffers in single-threaded performance is a massive mistake for consumer CPUs.


It would be a massive undertaking for AMD to do this, especially since TSMC does not have a 32nm node, only 28nm and 40nm. 28nm is having issues, as we can see from the absence of new AMD or NVIDIA GPUs on it. A Bulldozer on a 40nm node would quite possibly be the biggest chip ever produced.

 

Also, I hear that CPU designs don't work well on half-node processes. Given that the parent of a 40nm process is the 45nm process, you could say you would really be building it on 45nm. If that's the case, you might as well run it through GloFo's 45nm node. And in that case: Thuban is 904M transistors with a 346mm² die, while Bulldozer is apparently around 2 billion transistors. I really don't want to think about how big that would be!

 

I think this is more about lighting a fire under GloFo's arse than anything else. I could see AMD taking their CPU business to IBM before they would go to TSMC.

 

I know that if I were Rory Read right now, I'd be kicking down GloFo's door and smashing heads against walls.

Yeah, I call bollocks too. Although I like your idea about IBM doing the manufacturing; Read was with Lenovo previously.

 

If anything, AMD will be devoting much time to getting the process and architecture gelling.

 

The fact of the matter is, software these days is rarely truly capable of taking advantage of multiple cores. It's bloody hard to write concurrent software, and many developers who try it find themselves in an uphill battle against obscure bugs that are hard to reproduce, let alone track down. So making a processor that suffers in single-threaded performance is a massive mistake for consumer CPUs.

Software devs are just bloody lazy, then. It's their job to keep up with technology.

 

A rare forum member at AMDZone is a dev, and he is already altering his benchmark software to incorporate FMA4 and XOP optimisations.

 

Besides, doesn't the operating system scheduler do most of the job for you? You just have to code for instruction sets?


Software devs are just bloody lazy, then. It's their job to keep up with technology.

 

A rare forum member at AMDZone is a dev, and he is already altering his benchmark software to incorporate FMA4 and XOP optimisations.

 

Besides, doesn't the operating system scheduler do most of the job for you? You just have to code for instruction sets?

Firstly, instruction sets have pretty much nothing to do with concurrent programming. Implementing new instructions is usually handled by the compiler anyway. The OS scheduler handles processes and their associated threads; it doesn't magically make programs concurrent for you.

 

It's easy to call devs "bloody lazy", but writing correct concurrent software is extremely difficult. It's not about being lazy; it's the fact that it's almost impossible to test concurrent software for complete correctness. At any one time there could be millions of possible states - how do you test those? And when a bug occurs, how do you identify which state caused it? It's not as simple as learning new 'instructions'; it's a completely new programming paradigm which is orders of magnitude more difficult to get right. Your software may behave as expected on your dev machine, then completely spaz out on another machine due to the way the interleaving of threads is handled by the scheduler.

 

To combat this, people start spamming their code with synchronized methods, which have large overhead and bring down performance. Or they try to maintain performance and lock everything down, until they screw up and start finding themselves with livelocks, deadlocks, and an assortment of race conditions they failed to recognise. Then there are concurrency models which try to help, such as the Actor model. But even that isn't without its flaws.

 

And that's assuming the algorithms you use can be made to utilise multiple cores in the first place.

 

Anyway, I'm sure there are some computer scientists on here who could explain this better than I can. But the general consensus is to limit concurrency to where it's really required, which means a lot of software is going to remain a single-core affair.

 

In the end, it's easier to make a processor that has multiple cores than it is to write software which can take advantage of them. To then degrade single-core performance is just insanity.
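As a minimal illustration of the hazard described above (a sketch, not something from the thread): `counter += 1` is a read-modify-write, so two threads can interleave between the read and the write and lose updates; the increment has to be guarded by a lock to stay correct.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Increment the shared counter, guarding each read-modify-write."""
    global counter
    for _ in range(iterations):
        with lock:  # remove this and the final count can come up short
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock in place
```

And this is the easy case: the livelocks, deadlocks and race conditions mentioned above only appear once multiple locks and shared structures start interacting.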


I'm with .:Cyb3rGlitch:. on this one; the sacrifice in single-threaded performance was too much compared to the gains of the new architecture, especially at this time, with the way programs are being written. That said, I still believe this architecture has legs and could be the best thing since sliced bread. I'm almost entirely convinced they just pushed an unfinished product out the door after failing to resolve the early "performance issues" with the previous stepping.

 

Maybe, however, this is a long-term strategy: get the multi-threaded CPU out early in order to have the jump on Intel when programs ARE actually threaded for this kind of CPU.

Edited by UberPenguin


All I see you saying there is... it's difficult. I wonder where we'd be if the Wright brothers had said it was too hard.

 

Multi-threading is where it's going. Don't give me that rubbish about how it's all too complex, how do you compensate for the millions of variables, and how you can't trace what goes where half the time. What I'm talking about is CPUID profiles for different architectures. After all, the performance of differing architectures is nothing without optimisations. Devs are meant to be optimising their software for all available target processors; some programming suites even offer processor-optimisation profiles at the outset of a project. These are big-business devs, and they really have no excuse for having no architecture optimisations. We're not talking about some small team in an apartment trying to bring an idea to life on a Core i5 with a RAID array their coding mates all chipped in for. These are industry leaders and partners with industry-level resources: servers that can crunch processor states with varying code parameters, and all that jazz.

 

Microsoft's position is understandable. Their next release is a year away, and they don't want to spend time updating the scheduler, which is a big job. But there is no guarantee that Windows 8 will be adopted quickly, and what happens to businesses that don't want Windows 8, for reasons that are obvious? MS is slightly screwing AMD on this one.

 

Maybe, however, this is a long-term strategy: get the multi-threaded CPU out early in order to have the jump on Intel when programs ARE actually threaded for this kind of CPU.

AMD has to make its product decisions ahead of time. They really can't do two at once... besides Bobcat and Llano.


All I see you saying there is... it's difficult. I wonder where we'd be if the Wright brothers had said it was too hard.

The Wright brothers crashed and failed many times before they made something that worked consistently and was reproducible. Concurrency is still an area of active research, and there's no foolproof way to go about it. It's easy to dismiss the whole situation as programmers who don't give a shit, but that's not the case.

 

Multi-threading is where it's going. Don't give me that rubbish about how it's all too complex, how do you compensate for the millions of variables, and how you can't trace what goes where half the time. What I'm talking about is CPUID profiles for different architectures. After all, the performance of differing architectures is nothing without optimisations. Devs are meant to be optimising their software for all available target processors; some programming suites even offer processor-optimisation profiles at the outset of a project. These are big-business devs, and they really have no excuse for having no architecture optimisations. We're not talking about some small team in an apartment trying to bring an idea to life on a Core i5 with a RAID array their coding mates all chipped in for. These are industry leaders and partners with industry-level resources: servers that can crunch processor states with varying code parameters, and all that jazz.

 

Microsoft's position is understandable. Their next release is a year away, and they don't want to spend time updating the scheduler, which is a big job. But there is no guarantee that Windows 8 will be adopted quickly, and what happens to businesses that don't want Windows 8, for reasons that are obvious? MS is slightly screwing AMD on this one.

IIRC, CPUID just tells the system which extensions are supported. That sounds like something the compiler will deal with, not software developers (unless those devs are in the business of making compilers). If a developer is using an Intel compiler, then sure, AMD will probably suffer as a result; otherwise it's a level playing field. I do agree that the MS scheduler is optimised for Intel processors. This is why I was asking for Linux-based benchmarks, to see if the scheduler really was affecting performance significantly.
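For what it's worth, the feature-detection half of this is the easy part. Here is a hedged sketch (Linux-only, reading `/proc/cpuinfo` rather than issuing CPUID directly) of how a program might check for flags like `fma4` or `xop` before choosing an optimised code path:

```python
import os

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags the Linux kernel reports.

    This mirrors what CPUID-based dispatch does: check for a feature
    (e.g. 'fma4', 'xop', 'sse2') before taking an optimised code path.
    Returns an empty set where /proc/cpuinfo does not exist.
    """
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
use_sse2 = "sse2" in flags  # gate an optimised path on the reported flag
```

The point stands, though: this kind of dispatch selects instructions, it does nothing to make the surrounding code concurrent.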


Multi-threading is where it's going. Don't give me that rubbish about how it's all too complex, how do you compensate for the millions of variables, and how you can't trace what goes where half the time. What I'm talking about is CPUID profiles for different architectures. After all, the performance of differing architectures is nothing without optimisations. Devs are meant to be optimising their software for all available target processors; some programming suites even offer processor-optimisation profiles at the outset of a project. These are big-business devs, and they really have no excuse for having no architecture optimisations. We're not talking about some small team in an apartment trying to bring an idea to life on a Core i5 with a RAID array their coding mates all chipped in for. These are industry leaders and partners with industry-level resources: servers that can crunch processor states with varying code parameters, and all that jazz.

 

Sceptre, I don't think you understand how difficult it is to get a piece of code to run on multiple processors without it producing a variation or a runtime error. I saw the problems of that happening with 100 cores when I did parallel computing back at uni. The problem is that if the underlying code isn't done right, everything falls apart. Unfortunately, most coding practices are still single-instruction, single-execution. So until we get coders who can do single-instruction, MULTIPLE-execution processes, expect single threading to be king.
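The "multiple execution" case is tractable when the work items are fully independent. A sketch of that easy case (illustrative, not from the thread; `count_primes` and the chunk sizes are made up for the example):

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit):
    """Count primes below `limit` -- deliberately naive and CPU-bound."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

# Each chunk is independent, so the work maps cleanly onto workers.
# (For CPU-bound Python you would swap in ProcessPoolExecutor to
# sidestep the GIL; the pattern is identical.)
chunks = [2_000, 2_000, 2_000, 2_000]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(count_primes, chunks))

total = sum(results)
```

The hard problems the posters above describe start exactly where this example stops: when the work items share mutable state or depend on each other's order.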

 

Microsoft's position is understandable. Their next release is a year away, and they don't want to spend time updating the scheduler, which is a big job. But there is no guarantee that Windows 8 will be adopted quickly, and what happens to businesses that don't want Windows 8, for reasons that are obvious? MS is slightly screwing AMD on this one.

This is assuming the scheduler is optimised for any parallel processing at all. In doing so, though, it would benefit Intel as well, so you can't say one is benefiting over the other; both really need it.

 

Maybe, however, this is a long-term strategy: get the multi-threaded CPU out early in order to have the jump on Intel when programs ARE actually threaded for this kind of CPU.

AMD has to make its product decisions ahead of time. They really can't do two at once... besides Bobcat and Llano.

 

True, but the general consensus is that it'll take time before we get programming languages or coding practices that are multi-threaded by default. Until then, it's back to improving what we do now, which is still single threading. AMD can do it, but they need to focus on improving Bulldozer's design (perhaps a few tweaks here and there) and getting it competitive again.


SceptreCore, so AMD shrunk the cores down so they could pack more into the die and reduce power consumption, hoping that the smaller, more compact cores in bigger numbers could match Intel's "bloated" cores in smaller numbers.


This might be of some interest, then.

http://openbenchmarking.org/result/1110131-LI-BULLDOZER29

 

Link was taken from Phoronix.

 

Anyone got an X6 out there that's similar to that system, so we can get a comparison?

I saw that, but they didn't compare it with another CPU. >_

 

http://www.guru3d.com/article/amd-fx-8150-...essor-review/17

 

Cyber, look at the bottom benchmark, and then read the description of the benchmark: optimised for EVERY modern processor from AMD, Intel and VIA.

Yeah, but it's written in assembly. Almost no one does that any more, because it's extremely inefficient for minimal performance gain. It's nice to see AMD do well in this benchmark, but really it means nothing until compilers take advantage of the new instructions. That's assuming, of course, that the other benchmarks suffered because of current compilers.


Okay, these numbers are not going to be totally comparable; almost everything could be different. They come from the Bulldozer result above and Phoronix's Llano A8-3850 review.

 

This is what the test systems looked like:

AMD Fusion A8-3850: Processor: AMD A8-3850 APU with Radeon HD @ 2.90GHz (4 Cores), Motherboard: Gigabyte GA-A75M-UD2H, Chipset: AMD Device 1705, Memory: 4096MB, Disk: 60GB OCZ VERTEX2, Graphics: AMD Radeon HD 6550D 512MB (600/667MHz)

 

AMD FX-8150, ASUS Sabertooth, 4096MB RAM, WD 300GB VelociRaptor, 8800GT.

 

So the Llano-based platform had the advantage of an SSD.

 

 

Benchmark        A8-3850   FX-8150
C-Ray              107.6     51.07   (lower is better)
Smallpt              242       104   (lower is better)
7-Zip               9224     18209   (higher is better)
OpenSSL            52.48     68.68   (higher is better)
JohnTheRipper        842       954   (higher is better)
Edited by Sparky
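The relative speedups implied by those numbers can be computed directly. A quick sketch (for C-Ray and Smallpt lower is better, so the ratio is inverted):

```python
# Numbers copied from the table above. For C-Ray and Smallpt lower is
# better, so the FX-8150's speedup over the A8-3850 is a8/bd; for the
# rest higher is better, so it is bd/a8.
lower_is_better = {"C-Ray": (107.6, 51.07), "Smallpt": (242, 104)}
higher_is_better = {"7-Zip": (9224, 18209), "OpenSSL": (52.48, 68.68),
                    "JohnTheRipper": (842, 954)}

speedup = {}
for name, (a8, bd) in lower_is_better.items():
    speedup[name] = a8 / bd
for name, (a8, bd) in higher_is_better.items():
    speedup[name] = bd / a8

for name, ratio in sorted(speedup.items()):
    print(f"{name}: {ratio:.2f}x")
```

Roughly 2x in the ray tracers and 7-Zip, much less in OpenSSL and John the Ripper, though as noted the platforms differ in more than just the CPU.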


This one is interesting.

Take an 8-core chip and, instead of setting it to 2CU/4C like AMD's 4-core chip, set it to 4CU/4C; suddenly its performance per clock may be better than Phenom II in games.

But it's also less power efficient.

http://www.xtremesystems.org/forums/showth...1-Threaded-Perf.

 

Hopefully we'll see some reviews go into more detail like this.

 

Edit:

http://www.hardware.fr/articles/842-9/efficacite-cmt.html

 

But then again, maybe it's still just not quite enough.

 

Posted Image

Edited by Dasa


Thanks Dasa, that was a really interesting read, and it would seem to suggest that if AMD released a "Bulldozer optimiser" it would improve single-threaded performance considerably. Correct me if I'm wrong, but it seems to have proven that for single-threaded applications, using the CPU as 4 modules/4 cores works far better than using it as 2 modules/4 cores, which I think is pretty intuitive. So in order to remedy this, they really need to better allocate their threads.
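A rough sketch of what "better allocating threads" could mean in practice, assuming the typical Bulldozer enumeration where logical CPUs 2m and 2m+1 share module m (an assumption for illustration, not something stated in the thread):

```python
import os

def one_core_per_module(num_modules=4, cores_per_module=2):
    """Pick the first logical CPU of each module, spreading threads
    across modules (4M/4C) instead of packing them together (2M/4C)."""
    return [m * cores_per_module for m in range(num_modules)]

cpus = one_core_per_module()
print(cpus)  # [0, 2, 4, 6]

# On Linux, a process could pin itself to one core per module like so
# (guarded, since this only makes sense on an 8-logical-CPU machine):
if hasattr(os, "sched_setaffinity") and (os.cpu_count() or 0) >= 8:
    os.sched_setaffinity(0, set(cpus))
```

A scheduler-side fix would do the same thing automatically: prefer an idle module over the second core of a busy one for lightly threaded workloads.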

