AMD FX-8320 vs 6th gen i7: Cores vs speed/power?

It's a little frustrating trying to find specific info here in the forums. I'm reasonably sure this has been broached before, but I can't find anything of use on this topic.

As indicated in the title, I'm looking to build a new computer. I was holding out for the nVidia 1070 and 1080 cards, but, frankly, I've already waited 6 months and the new cards don't even work with Iray. So, enough is enough. I'm going to build a box based on the older, tried-and-true technology. Which, of course, brings up the processor. I had pretty much decided to go with a 6th gen i7, but while searching for pre-built systems, I came across the AMD FX-8320.

This made me pause. The AMD has 8 cores, whereas the Intel has 4 (yeah, I know there's an 8-core i7 out there, but it's out of my price range -- unless someone has a good source for them?). Would extra cores provide a boost for rendering OpenGL with Daz, or with Poser's Comic Book Mode (which is basically a render of the display/draw mode)? Or are my first instincts right to go with the new i7 platform?

 


Comments

  • nicstt Posts: 11,715

    Whilst it is about cores for some tasks, it is also about what those cores are capable of. I would look for comparisons via Google; generally a good i7 with four cores/8 threads will do more than the FX-8320, but I'd try to find info specific to your requirements. You will pay more, though, so it also comes down to what fits your budget.

  • hphoenix Posts: 1,335

    Okay, a quick primer on current CPU architectures.....

     

    The Core i7-6700/6800 CPU has 4 cores, but each core has a dual integer instruction pipeline.  This means each core can effectively execute 2 integer instructions simultaneously, providing a big speedup.  Since 90% of 'normal' computer use uses integer instructions, this makes the i7 a big winner here.  3D work (not the gaming type) uses a lot more Floating-Point (FP) instructions, which cannot benefit from the dual integer pipeline.  Also, since integer operations may have a context switch between ops, sometimes the dual pipeline has to 'unwind' the operation of one of the two pipelines (it does have branch prediction to reduce this, but it still happens), so the benefit isn't a straight 100% boost.  Each core contains its own L2 cache and SIMD vector unit.
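    If you want to see where your own chip stops scaling, a crude but useful test is to time the same FP-heavy job at increasing thread counts.  Here's a minimal C++ sketch (the square-root loop is just a stand-in for FP work, nothing Poser-specific):

    ```cpp
    // Crude scaling test: run the same FP workload with 1, 2, 4, ... threads
    // and watch where the total time stops improving.
    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <thread>
    #include <vector>

    volatile double g_sink;  // keeps the optimizer from deleting the loop

    void fp_work(long iters) {
        double acc = 0.0;
        for (long i = 0; i < iters; ++i)
            acc += std::sqrt(static_cast<double>(i));
        g_sink = acc;
    }

    int main() {
        const long total = 400000000L;  // split across however many threads
        unsigned max_threads = std::thread::hardware_concurrency();
        if (max_threads == 0) max_threads = 8;  // fallback if unknown
        for (unsigned n = 1; n <= max_threads; n *= 2) {
            auto start = std::chrono::steady_clock::now();
            std::vector<std::thread> pool;
            for (unsigned t = 0; t < n; ++t)
                pool.emplace_back(fp_work, total / n);
            for (auto& th : pool) th.join();
            std::chrono::duration<double> dt =
                std::chrono::steady_clock::now() - start;
            std::printf("%2u threads: %.2f s\n", n, dt.count());
        }
    }
    ```

    On a 4-core/8-thread i7 you'd typically see the big gains stop at 4 threads for FP work like this, since the hyperthreads don't add FP hardware; on the FX you'd expect gains out to 8 threads, subject to the shared-FPU caveat below.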

     

    The FX-8320 has 4 blocks, each with two cores, for a total of eight cores.  However, each of those 'blocks' has only one SIMD vector unit and L2 cache.  So while it DOES have more cores than an i7, those cores are limited by sharing the vector unit and cache per block.  This means standard CPU FP will see a big boost compared to an i7, but code optimized to use the SIMD vector instructions will block between adjacent cores on the same block.  Proper multithreading will alleviate this somewhat.  And since the L2 cache is shared per block, memory contention blocking is more frequent.
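    If you do go the FX route, one hypothetical mitigation for the shared-FPU issue is to pin FP-heavy threads to one core per module.  A rough Windows-only sketch, assuming the OS numbers each module's two cores as adjacent logical CPUs (0/1, 2/3, and so on -- worth verifying on your own board):

    ```cpp
    // Sketch: pin one FP-heavy worker to the first core of each FX module
    // (logical CPUs 0, 2, 4, 6) so sibling cores don't contend for the FPU.
    #include <windows.h>
    #include <cmath>
    #include <thread>
    #include <vector>

    volatile double g_sink;  // prevents the loop from being optimized away

    void fp_worker() {
        double acc = 0.0;
        for (long i = 0; i < 100000000L; ++i)
            acc += std::sqrt(static_cast<double>(i));
        g_sink = acc;
    }

    int main() {
        std::vector<std::thread> workers;
        for (int module = 0; module < 4; ++module) {
            workers.emplace_back([module] {
                // One worker per module: the mask selects logical CPU 2*module.
                SetThreadAffinityMask(GetCurrentThread(),
                                      1ULL << (module * 2));
                fp_worker();
            });
        }
        for (auto& t : workers) t.join();
    }
    ```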


    So, in conclusion:  They're about the same.  If you are doing straight CPU-based Floating Point calculations without SIMD instructions, the AMD CPU will have a considerable edge.  If you are doing non-FP (integer) instructions, the i7 will have an edge.  But the degree is going to be slight either way, since NO application/OS is perfectly designed to use only one or the other, and there are always contention issues that create delays of one form or another.  So get what gives you the best price/performance ratio, and provides the features you want.

     

    A small note.....there are only a VERY few AMD FX motherboards that support PCI-E 3.0.  Most only support PCI-E 2.0, so the bandwidth for your GPUs on an FX-compatible motherboard may sway you to go with the Intel solution.  The actual delay currently is pretty small, but as you move up to very high-end GPUs and 4K+ display resolutions, it becomes more of an issue.

     

  • mmitchell_houston Posts: 2,490
    edited June 2016
    nicstt said:

    Whilst it is about cores for some tasks, it is also about what those cores are capable of. I would look for comparisons via Google; generally a good i7 with four cores/8 threads will do more than the FX-8320, but I'd try to find info specific to your requirements. You will pay more, though, so it also comes down to what fits your budget.

    Google didn't really help me because almost all sites focus on gaming and not 3D (specifically Iray). Budget is, of course, an issue, but it's not what's driving this consideration. I'm interested in whether the extra cores would benefit Poser display rendering, or whether I should stick to Intel for general quality and performance.

    Post edited by mmitchell_houston on
  • hphoenix said:

    The Core i7-6700/6800 CPU has 4 cores, but each core has a dual integer instruction pipeline.  This means each core can effectively execute 2 integer instructions simultaneously, providing a big speedup.  Since 90% of 'normal' computer use uses integer instructions, this makes the i7 a big winner here.  3D work (not the gaming type) uses a lot more Floating-Point (FP) instructions, which cannot benefit from the dual integer pipeline...

     

    The FX-8320 has 4 blocks, each with two cores, for a total of eight cores.  However, each of those 'blocks' has only one SIMD vector unit and L2 cache.  So while it DOES have more cores than an i7, those cores are limited by sharing the vector unit and cache per block.  This means standard CPU FP will see a big boost compared to an i7, but code optimized to use the SIMD vector instructions will block between adjacent cores on the same block... So, in conclusion:  They're about the same.  If you are doing straight CPU-based Floating Point calculations without SIMD instructions, the AMD CPU will have a considerable edge.  If you are doing non-FP (integer) instructions, the i7 will have an edge...  A small note.....there are only a VERY few AMD FX motherboards that support PCI-E 3.0.  Most only support PCI-E 2.0, so the bandwidth for your GPUs on an FX-compatible motherboard may sway you to go with the Intel solution...

    Thank you very much for that info. I'm still leaning toward the Intel chip, but I'm going to pose this question over at R'osity to see if anyone knows anything about the specific Poser display performance issue I raised. 

  • Richard Haseltine Posts: 104,182

    There is a huge list of benchmarks, including the AMD, here: https://www.cpubenchmark.net/high_end_cpus.html - find the Intel chip(s) you were thinking of and compare (bearing in mind that this is only one test).

  • Mattymanx Posts: 6,975

    Whichever one you decide on, I recommend the Noctua NH-D15 as the cooler for your new CPU

    http://noctua.at/en/nh-d15

     

     

  • namffuak Posts: 4,266
    Mattymanx said:

    Whichever one you decide on, I recommend the Noctua NH-D15 as the cooler for your new CPU

    http://noctua.at/en/nh-d15

     

     

    Second that. Just about any top-end Noctua; I'm running the NH-U12S with just one fan - my CPU never exceeds 60 C over a 12-hour render on a 3.5 GHz i7 6-core with all 12 threads showing 100% for the entire run (make sure your case has plenty of ventilation!).

  • DustRider Posts: 2,803
    edited June 2016

    A couple of months ago I got a custom AMD FX-8350 in a small case for some serious number crunching I had to do building 3D models from photos. The stock cooler was horrible (loud, with some serious vibration), and I got a Corsair H75 liquid CPU cooler to replace it. I was looking for a nice air cooler that would work with the case and motherboard (microATX motherboard in a small case) and that I could get delivered the next day. The only thing I could find with next-day delivery was the Corsair. I reluctantly ordered it, because I wasn't a big fan of liquid-cooled systems. I'm really glad I did! It dropped the ambient temp in the case a great deal, which was a big plus for the temps on the EVGA GTX 960 SC Ultra Quiet video card. I did have to put a fan in the case to cool the motherboard components that were designed to be cooled by the air coming off the CPU cooler, but it's a pretty quiet fan. Now, under full load for days (CPU maxed, GPU running between 20-100%), the CPU never gets above 56 C (and only gets to 56 because the computer is where the afternoon sun hits it) and the GPU stays around 60-61 C, and it is very quiet under full load. I don't have the CPU overclocked, but it consistently runs at the "turbo" speed of about 4.15 GHz. So depending on case style and GPUs, it may be worthwhile investing in a liquid cooler, because most of the heat generated by the CPU is very efficiently transported outside the case.

    I haven't really had a chance to do anything serious with DS and Iray on the machine yet, but with the few simple tests I've done it has performed quite well. I have used it a lot for rendering with Carrara. I'm quite pleased with the performance, especially with Agisoft PhotoScan (not as fast as my laptop, but much faster than the i7-4790 machines at work). Of course I wasn't expecting a killer top-of-the-line computer, so I'm quite happy with it. I probably would have gotten an i7 machine, but it would have cost me $200 more, and that was outside the budget I had for the project. With the AMD processor it's a good, usable machine (32 GB RAM) that does well for what I needed, and it didn't break the bank.

    Post edited by DustRider on
  • Kendall Sears Posts: 2,995

    I refuse to buy an Intel 'i' series processor.  The ones I've used spend all of their time throttled back to much lower speeds.  If I'm going to buy or build with Intel it will be with XEON only.  After factoring in the performance drops on the 'i' series processors in 'real life' use, the AMDs usually exceed the performance of the Intels.

    Just one more data point.

    Kendall

  • I refuse to buy an Intel 'i' series processor.  The ones I've used spend all of their time throttled back to much lower speeds.  If I'm going to buy or build with Intel it will be with XEON only.  After factoring in the performance drops on the 'i' series processors in 'real life' use, the AMDs usually exceed the performance of the Intels.

    What about unlocked and overclocked Intel processors? And I'm not talking about general usage: I'm talking specifically about using them for 3D with Daz Studio and Poser.

  • AlienRenders Posts: 794
    edited June 2016

    What i7 are you looking at? Without specifics, you can't tell which is better. Generally, Intel has better per core performance. But for value, it's tough to beat AMD.

    https://www.cpubenchmark.net/high_end_cpus.html

     

    Post edited by AlienRenders on
  • mmitchell_houston Posts: 2,490
    edited June 2016

    What i7 are you looking at? Without specifics, you can't tell which is better. Generally, Intel has better per core performance. But for value, it's tough to beat AMD.

    https://www.cpubenchmark.net/high_end_cpus.html

    Something along these lines: Intel Core i7-6700K 4 GHz Quad-Core Processor - 8 MB - LGA1151 Socket

     

    I'm not being rude (at least I'm not trying to be rude), but totally serious: Do benchmarks actually help answer the question I asked? I'm not interested in general speed or which processor is a little bit faster than the other. I'm specifically interested in knowing whether having more cores (and thus threads: the AMD delivers twice as many as Intel) will improve my render times when working with Poser 11's Firefly and Comic Book Preview modes. The gist of what I'm reading elsewhere makes me think this is the case, which is the only reason I'm asking this question. As near as I can tell, the CPUs I am discussing are about the same in general processing power for general rendering speed, especially when working with Iray and sending the scene to be handled by the GPU. I like Intel chips and can afford a 6th generation quad-core i7. But is that the best chip for rendering a Poser display? I don't know.

    Post edited by mmitchell_houston on
  • Kendall Sears Posts: 2,995

    I refuse to buy an Intel 'i' series processor.  The ones I've used spend all of their time throttled back to much lower speeds.  If I'm going to buy or build with Intel it will be with XEON only.  After factoring in the performance drops on the 'i' series processors in 'real life' use, the AMDs usually exceed the performance of the Intels.

    What about unlocked and overclocked Intel processors? And I'm not talking about general usage: I'm talking specifically about using them for 3D with Daz Studio and Poser.

    Whether or not the 'i' series is "unlocked" or "overclocked" the more load you put on them the more they will throttle back.  They will throttle back due to heat, and they will also throttle back the more cores that are used.  The XEON processors almost always run full on.  But they are much more expensive.

    Kendall

  • mtl1 Posts: 1,507
    edited June 2016

    Just go straight for an Intel i7 and don't look back. There's really no competition in this area until AMD Zen comes out next year...

    In general though, don't choose anything with AMD Bulldozer cores if you're looking at rendering or encoding applications. You're basically throwing your money away, as its performance is multiple generations behind Intel.

     

    edit: On price-to-performance, you may be able to score a discount by going with Haswell or Haswell-E processors instead of Skylake.

    Post edited by mtl1 on
  • Jim_1831252 Posts: 728

    @ OP - Check for Cinebench and LuxRender benchmarks. If you are doing a lot of CPU rendering, these benchmarks will show you who's boss. When it comes to rendering performance, current AMD chips are pretty poor competition. In fact, AMD chips will only outperform current Intel chips for very specialised purposes.

     

     

    Whether or not the 'i' series is "unlocked" or "overclocked" the more load you put on them the more they will throttle back.  They will throttle back due to heat, and they will also throttle back the more cores that are used.  The XEON processors almost always run full on.  But they are much more expensive.

    Kendall

    Kendall, if you keep your processor cooled with a sufficient cooler there should be no throttling problems. I use a huge Noctua heat sink with a single fan and it does the job. Are you a heavy overclocker? If I had the cash I would certainly buy one or two higher-end Xeons on a premium workstation motherboard and do all my rendering with Lux CPU. Which Xeons are you currently running?

     

  • AlienRenders Posts: 794

    Even though I own an FX-8350 myself, I'm hard pressed to think of a situation where it's better than the 6700K. More cores doesn't mean a whole lot if the total processing doesn't match up. That's why I asked about benchmarks. DirectX 12 is going to be better for multithreading with games and such. But for rendering where you need to send a lot of data to the video card, I'm thinking you want more processing power per core, not more cores.

  • Even though I own an FX-8350 myself, I'm hard pressed to think of a situation where it's better than the 6700K. More cores doesn't mean a whole lot if the total processing doesn't match up. That's why I asked about benchmarks. DirectX 12 is going to be better for multithreading with games and such. But for rendering where you need to send a lot of data to the video card, I'm thinking you want more processing power per core, not more cores.

    But what kind of rendering are you doing? I'm looking specifically at rendering the Poser 11 Preview mode. I don't use Lux (via Reality), and all my photorealistic rendering is done with Iray. That's why I asked the question about cores as they relate to the Poser preview render, which is entirely CPU-based. And, based on some reading I've done, the number of cores is a big concern.

    And I don't game at all, so that's actually why I'm here instead of the general forums. Almost everything out there is slanted toward gaming performance, not 3D.

  • Mattymanx Posts: 6,975
    namffuak said:
    Mattymanx said:

    Whichever one you decide on, I recommend the Noctua NH-D15 as the cooler for your new CPU

    http://noctua.at/en/nh-d15

     

     

    Second that. Just about any top-end Noctua; I'm running the NH-U12S with just one fan - my CPU never exceeds 60 C over a 12-hour render on a 3.5 GHz i7 6-core with all 12 threads showing 100% for the entire run (make sure your case has plenty of ventilation!).

    Same here, maxes out around 60 C.  And when you're done, the Noctua drops the temperature really fast.  When my new computer was built, the techs used both fans and situated it so they point up, so all the heat goes up through the 200 mm fan above it.

  • DAZ_Spooky Posts: 3,100

    One major issue with the AMD Processors is the Motherboard tech. The AMD Motherboards are, technically, well behind the Intel ones.

  • hphoenix Posts: 1,335

    One major issue with the AMD Processors is the Motherboard tech. The AMD Motherboards are, technically, well behind the Intel ones.

    +1 This.

     

    OP:  The real hard part with 'benchmarks' is that a lot of benchmarks are biased.  Yes, it's true.  Not always intentionally, but as I explained before, the use of SIMD instructions to 'accelerate' certain ops can create a situation where the choice of SIMD instructions, and whether or not to use them (or thread based on core counts), can make a big difference.  Most of the Windows benchmarks are coded in Visual Studio, using Windows optimizations......which include (in VS 2015 and 2013) the automatic use of SIMD intrinsics (a fancy term for a macro that does all the setup and execution on the vector unit) to increase performance.  Which are INTEL-based intrinsics.  This (as I said) isn't usually intentional.....but the bias is there regardless.  Run an AMD-released benchmark and you'll see the Intel CPUs perform slower, of course, and vice-versa.
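    To make 'SIMD intrinsics' concrete, here's a minimal sketch of the kind of thing the compiler generates when it auto-vectorizes -- a scalar loop next to its AVX equivalent (assumes an AVX-capable x86 CPU; build with -mavx on GCC/Clang or /arch:AVX on MSVC):

    ```cpp
    // Scalar vs. AVX: the vector version adds eight floats per instruction
    // on the SIMD unit instead of one at a time.
    #include <immintrin.h>

    void add_scalar(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = a[i] + b[i];  // one float addition per iteration
    }

    void add_avx(const float* a, const float* b, float* out, int n) {
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);   // load 8 floats at once
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; ++i)  // scalar tail for the leftover elements
            out[i] = a[i] + b[i];
    }
    ```

    Whether the compiler emits that vector path at all, and how well it's tuned, is exactly where the Intel-leaning bias creeps into benchmark results.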

    Having actual cores (not 'hyperthreaded' dual integer pipelines), meaning real threads operating independently, gives the AMD a good boost IF the threading is properly written to take advantage of it.  Being able to execute two instructions at once gives the Core i7 a big boost IF the code doesn't do a lot of context switching (which will create stalls and unwinds.)

    The simple fact is that the two processor brands, when you get rid of code biases, are pretty damn close in raw performance (assuming equivalently clocked models.)  But more code is written to be optimized for Intel, even without meaning to.  So they tend to perform better.  AMD needs to get more intrinsic support and such into the major compilers/IDEs so that the bias goes down.

     

  • hphoenix said:

    One major issue with the AMD Processors is the Motherboard tech. The AMD Motherboards are, technically, well behind the Intel ones.

    +1 This.

     

    OP:  The real hard part with 'benchmarks' is that a lot of benchmarks are biased.  Yes, it's true.  Not always intentionally, but as I explained before, the use of SIMD instructions to 'accelerate' certain ops can create a situation where the choice of SIMD instructions, and whether or not to use them (or thread based on core counts), can make a big difference.  Most of the Windows benchmarks are coded in Visual Studio, using Windows optimizations......which include (in VS 2015 and 2013) the automatic use of SIMD intrinsics (a fancy term for a macro that does all the setup and execution on the vector unit) to increase performance.  Which are INTEL-based intrinsics.  This (as I said) isn't usually intentional.....but the bias is there regardless.  Run an AMD-released benchmark and you'll see the Intel CPUs perform slower, of course, and vice-versa.

    Having actual cores (not 'hyperthreaded' dual integer pipelines), meaning real threads operating independently, gives the AMD a good boost IF the threading is properly written to take advantage of it.  Being able to execute two instructions at once gives the Core i7 a big boost IF the code doesn't do a lot of context switching (which will create stalls and unwinds.)

    The simple fact is that the two processor brands, when you get rid of code biases, are pretty damn close in raw performance (assuming equivalently clocked models.)  But more code is written to be optimized for Intel, even without meaning to.  So they tend to perform better.  AMD needs to get more intrinsic support and such into the major compilers/IDEs so that the bias goes down.

     

    Thanks for the insight. I'm still leaning toward the i7 (and for the money, I might go with the 5th gen chip, instead, and toss more into the video cards -- but probably not). But I still feel the need to question the conventional wisdom. After all, I'm looking at some specialized 3D rendering situations, and am not interested in spending a lot of money on a new rig only to find that it lets me down when it comes to Poser display rendering. If that's the case, I might be better off buying a used dual-processor Xeon workstation and pimping it out, instead.

  • StratDragon Posts: 3,253
    mmitchell_houston said:

    Thanks for the insight. I'm still leaning toward the i7 (and for the money, I might go with the 5th gen chip, instead, and toss more into the video cards -- but probably not). But I still feel the need to question the conventional wisdom. After all, I'm looking at some specialized 3D rendering situations, and am not interested in spending a lot of money on a new rig only to find that it lets me down when it comes to Poser display rendering. If that's the case, I might be better off buying a used dual-processor Xeon workstation and pimping it out, instead.

    My i7 920 is going on 7 years old and it still provides impressive power despite its age. I've used about every manufacturer of CPU that has come down the pike and the i7 was the best investment I've ever made. I also have a dual Xeon about the same age and it is not as robust as the i7 at times, except when I render through something like Lux, where the distinction is very noticeable; but again, these are older-gen CPUs and much may have changed. I have a 9600 as well, but it's not in my rendering arsenal.

  • By the way, thanks for all the tips about the importance of cooling. I've added some more to my budget to get a nice cooling system. I'll look at the ones listed here and get something like them (if not them exactly). I'm definitely going to keep my eye on the throttling that can come from too much heat.

  • Silver Dolphin Posts: 1,628

    If you go with Intel I would recommend going with an i7-3930K 6-core with 64 GB of DDR3 RAM, which is cheaper. I have this 6-core Intel system with 64 GB of RAM and Win7 Pro 64-bit and 3x Nvidia GTX 780 6 GB editions (I have a cheap 2 GB Nvidia GT 640 just to run my dual 27" monitors). Intel DDR4 is great and fast, but the returns on render speeds are negligible and it is too expensive. I would spend my extra money on RAM and video cards. Nvidia has a bunch of great new cards for gaming and VR, but I use my system for 3D rendering >> Lightwave, Vue, Photoshop, Daz Studio and Carrara Pro, so this system works great. I would make sure you get a motherboard with 8 RAM slots and as many PCIe slots as possible so you can get all the speed you can from the extra video cards. A word of warning: any more than 2 Nvidia Iray cards is going to need more than 32 GB of RAM for you to see the benefits of adding cards, plus a 1000-watt-plus power supply. The best Nvidia Iray card out there is the Nvidia Titan X 12 GB. The best budget card, if you can get them around $400, is the Nvidia 980 6 GB edition >> I would get the Titan X on eBay from gamers and VR users upgrading to 1080s, for about $600 each, which is a great deal for us Iray users. It just depends on your budget.

  • Knittingmommy Posts: 8,191

    I don't know a whole lot about computers or AMD vs Intel.  However, when I built my computer I did a lot of research and figured out that for 3D graphics they are pretty much neck and neck; there isn't really a whole lot of difference.  My husband has an i7 in his computer and a regular fan setup, which is very loud, and he usually has to replace fans because they keep wearing out.  FYI, he solely uses Blender for modeling and some rendering.  He also didn't notice much of a difference or improvement when he switched from an i5 to an i7.  I went the AMD route with an FX-8350 on a motherboard that is both CrossFire- and SLI-ready, as I didn't know anything about graphics cards at that point; I figured I would eventually have 2 and wanted to be able to use any kind, and those two options covered it.  I also got a Corsair water-cooled system, which I happen to love.  It is extremely quiet and I haven't had to do anything to it, while hubby has had to change fans at least twice since I got mine.  I have an app to keep an eye on temps and it hardly ever goes above 65 C, even after running constantly for upwards of 6 days in a row.

    The only thing I regret is that I built my system before the beta version of DS came out, and I didn't know anything about nVidia or Iray when I was putting my box together, so I got a Radeon R7 260X, which is good; but if I were to get a graphics card today, I would definitely go with nVidia considering how much I love rendering in Iray, even with CPU only.  I'm definitely saving up for an nVidia card to add to my system.  The other thing I'd change is that I'd go with a motherboard capable of going up to 64 GB instead of the 32 GB I now have.

    I don't really understand the debate between Intel enthusiasts and AMD enthusiasts, as they are both good chips; but having used AMD now for a couple of years, I'd definitely go with AMD again considering the price factor, and I really don't see that much of a difference in performance between my computer and my husband's, with the exception that mine is much quieter, is more reliable cooling-wise, and is just as powerful CPU-wise.  So, there is a non-computer-geek's view.  I don't know anything about that particular processor, the 8320, but I'm very happy with my 8350.  Oh, I can also tell you that there are some factory-overclocked AMDs being sold that are guaranteed to run at the overclocked specifications listed by AMD, and the overclocked processor still comes with a warranty.  Something to look into if that's important to you.

  • AlienRenders Posts: 794

    I'm a programmer and I optimize code all the time where I work. I know assembly, DirectX, OpenGL, etc. I still don't understand what you're asking. I've rarely known a situation where more cores is better than the same processing power on a single core. You mention conflicting information and I think perhaps I'm missing something. On the one hand, you mention you're using a CPU-only renderer. Then in another sentence, you're saying you're using OpenGL. Sure, you can use OpenGL with a software reference implementation, but I have no idea why you'd want to do that.

    So if it's rendered on the video card, more cores aren't going to help you, because OpenGL and DirectX have diminishing returns per core when sending data to the video card. In fact, there's a GPU benchmark program I have that gives my machine a low score on one of the tests because the physics is computed on the CPU (the rest is rendered with DirectX on the video card), and the FX-8350 is just too slow per core to keep up with the video card (R9 290) compared to other CPUs that are faster per core (some of which are lower on the CPU charts). DirectX 12 and Vulkan are much, much better in this respect, handling multithreading a lot better.

    If you're doing CPU rendering, get the fastest overall processing power you can with the fewest cores. So if it's between processor A with 8 cores and processor B with 4 cores and they both have the same ranking on the CPU charts, go with processor B. It's a no-brainer. The reason being that you're not losing multicore speed, but when you run things that aren't threaded that well, you'll get a huge boost. IOW, I'm not saying purposely go with fewer cores. I'm saying if you see two or more processors you're interested in with equivalent rankings, you're probably better off going with the one with fewer cores.
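    To put rough numbers on that, here's a back-of-the-envelope Amdahl's-law comparison. The 90% parallel fraction and the 1.5x per-core figure are made-up illustrative values, not measurements:

    ```cpp
    // Amdahl's law: speedup = per_core_speed / (serial + parallel/cores).
    // Compares "more slower cores" vs. "fewer faster cores" for a job
    // that is 90% parallelizable.
    #include <cstdio>

    double speedup(double parallel_fraction, int cores, double per_core) {
        double serial = 1.0 - parallel_fraction;
        return per_core / (serial + parallel_fraction / cores);
    }

    int main() {
        std::printf("8 cores, 1.0x per core: %.2fx\n", speedup(0.9, 8, 1.0));
        std::printf("4 cores, 1.5x per core: %.2fx\n", speedup(0.9, 4, 1.5));
        // Prints ~4.71x vs ~4.62x: nearly a wash, which is why the overall
        // chart ranking matters more than the core count alone.
    }
    ```

    Swap in your own estimates from the CPU charts and the parallel fraction of your renderer, and you can see how quickly the "more cores" advantage evaporates once any serial work is involved.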

    With all this talk, some consideration should also be given to your RAM and disk speed when loading textures and other resources, depending on how your renderer works.

     

  • Kendall Sears Posts: 2,995

    This is what I build for my stuff here.  These are Xeons, E or X series depending on the machine.  They range from 64 GB of RAM up to 256 GB.

    Kendall

    [Attachment: Thor.jpg]
  • Kendall Sears Posts: 2,995

    I'm a programmer and I optimize code all the time where I work. I know assembly, DirectX, OpenGL, etc. I still don't understand what you're asking. I've rarely known a situation where more cores is better than the same processing power on a single core.

    [...snip...]

    With all this talk, some consideration should also be given to your RAM and disk speed when loading textures and other resources, depending on how your renderer works.

     

    I've been doing highly-parallel programming since the mid-1980s and can assure you that there are LOTS of situations where even thousands of "cores" are not enough.

    Kendall

  • hphoenix Posts: 1,335
    edited June 2016

    I'm a programmer and I optimize code all the time where I work. I know assembly, DirectX, OpenGL, etc. I still don't understand what you're asking. I've rarely known a situation where more cores is better than the same processing power on a single core.

    [...snip...]

    With all this talk, some consideration should also be given to your RAM and disk speed when loading textures and other resources, depending on how your renderer works.

     

    I've been doing highly-parallel programming since the mid-1980s and can assure you that there are LOTS of situations where even thousands of "cores" are not enough.

    Kendall

    AH, the good old days.  I wrote parallel code on CM-5, Maspar, and via the (at the time) new-fangled network parallelism stuff (edit:  I just remembered what the library/protocol was called then.....MPI) on a cluster of SparcStations.....and yes, having lots of cores is meaningless unless the code (and the algorithm) truly are designed for and capable of being massively parallelized.  Some stuff is pretty easy (non-recursive ray-tracing, image processing, etc.) but some algorithms just don't parallelize well.....

    And with modern machines (which while multi-core, are not massively parallel designs) the real problems end up being bus and memory contention.  Nothing like having half your cores starved because they're trying to access memory or resources that are tied up by the other half.....
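    Here's that contention in miniature -- 'false sharing', where two threads hammer counters that land on the same cache line.  Illustrative only; the actual gap varies a lot by CPU:

    ```cpp
    // Two threads incrementing counters on a shared cache line ping-pong
    // the line between cores; padding the counters apart avoids it.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    struct Packed {                      // a and b share a cache line
        std::atomic<long> a{0}, b{0};
    };
    struct Padded {                      // a and b on separate 64-byte lines
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };

    template <typename Counters>
    double run(Counters& c) {
        auto start = std::chrono::steady_clock::now();
        std::thread t1([&] { for (long i = 0; i < 50000000L; ++i) c.a++; });
        std::thread t2([&] { for (long i = 0; i < 50000000L; ++i) c.b++; });
        t1.join(); t2.join();
        return std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - start).count();
    }

    int main() {
        Packed p;
        Padded q;
        std::printf("shared cache line: %.2f s\n", run(p));
        std::printf("padded counters  : %.2f s\n", run(q));
    }
    ```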

     

    Post edited by hphoenix on
  • Kendall Sears Posts: 2,995
    edited June 2016
    hphoenix said:

    I'm a programmer and I optimize code all the time where I work. I know assembly, DirectX, OpenGL, etc. I still don't understand what you're asking. I've rarely known a situation where more cores is better than the same processing power on a single core.

    [...snip...]

    With all this talk, some consideration should also be given to your RAM and disk speed when loading textures and other resources, depending on how your renderer works.

     

    I've been doing highly-parallel programming since the mid-1980s and can assure you that there are LOTS of situations where even thousands of "cores" are not enough.

    Kendall

    AH, the good old days.  I wrote parallel code on CM-5, Maspar, and via the (at the time) new-fangled network parallelism stuff (edit:  I just remembered what the library/protocol was called then.....MPI) on a cluster of SparcStations.....and yes, having lots of cores is meaningless unless the code (and the algorithm) truly are designed for and capable of being massively parallelized.  Some stuff is pretty easy (non-recursive ray-tracing, image processing, etc.) but some algorithms just don't parallelize well.....

    And with modern machines (which while multi-core, are not massively parallel designs) the real problems end up being bus and memory contention.  Nothing like having half your cores starved because they're trying to access memory or resources that are tied up by the other half.....

     

    Indeed.  I was one of the lucky folks who got to write code to run on the ETA10 supercomputer's vector processors before the JvNCC's Supercomputer Center lost funding.  The Cray was always so overloaded.  Both of them were expensive as h*ll to run processes on.  I forget now what we paid per CPU-second on those things, but it was crazy (in 1988 dollars), which is why it was IMPERATIVE to make use of the vector processors instead of the CPUs.

    Core starvation:  This is why I own Tesla units, both 1U standalone and internal units.  However, the machines I have with 24, 48, or 64 CPU cores are 4x XEON systems with buses specifically designed to feed the processors.  The biggest of my machines is capable of 2 TB of RAM on board (as if I could ever afford that much ECC RAM).  Obviously, DS is nowhere near optimized to run on these things, but some rendering engines are.

    Memory and bus contention are some of the major reasons that "real world" performance between the Intel 'i' series processors and the AMD processors is nowhere near what it should be "on paper".  Especially when using an OS like Windows, which is about as inefficient as one can get.  The lower-performing CPUs have plenty of time to catch up to the faster ones because of all the poor scheduling algorithms in the OS that keep the processors in a constant state of context switching.  What many don't understand about the benchmarks (even Cinebench) is that they are able to load into cache and then run "unimpeded" by normal OS operations.  This gives an extremely unbalanced view of what the systems are actually capable of.

    There are very good reasons that studios do their production rendering on farms of machines running Linux.

    EDIT:  I still have quite a few Sun SparcStations around here.  Including a Ross HyperStation.  All of my SparcStations were Quad-Processor Super or Hyper Sparc processors.

    EDIT2: Windows Server Standard or Datacenter is required to use all 4 processors.  Linux can use all processors.

    Kendall

    Post edited by Kendall Sears on