I followed a link from a Scan email when they were plugging Titan RTX cards - the top end machine in the linked section, which I think had quad Quadros, was £51,xxx (and 97p). I wishlisted one.
https://www.nvidia.com/en-us/titan/titan-rtx/ looks really impressive, so I called Nvidia, and apparently it is their best card. The very helpful guy had a heavy accent, so I missed 50% of what he said, but the one thing that came through loud and clear was that this card was better suited for servers. He also recommended an NVLink bridge over SLI bridges for 2080 Tis etc. He also implied that dual cards share the load and are not used to full capacity, but thought two RTX 2080 Tis would be better despite sharing loads. (Scratches head.) I am heading over to the Nvidia forum to check out a couple of other things, then am heading to GPU benchmarks.
You were badly misinformed, which is what you should expect from CS anywhere. Titans of any variety are not server cards; they are intended for the consumer market. Quadros and Teslas are meant for the workstation and server market. The Titan RTX has the same GPU as the Quadro RTX 8000 but half the VRAM. I'm pretty sure that makes the Quadro RTX 8000 Nvidia's best graphics card at the moment.
Also you cannot use an SLI bridge on any RTX card. You have to use an NVLink connector.
In Daz, two graphics cards do "share the load" in that, assuming the scene can load onto both, they render the scene together, which decreases render time. Memory pooling may become available in Daz eventually, but no one knows if or when besides the devs, and they aren't saying.
After googling tons, I agree, but the problem is I'm not a tech, and bad accent or not I tend to have to read it twice or watch it three times, then find a few more sites and rinse and repeat. After watching everything, it seems the Titan RTX is the card that stays on top for most benchmarks. I used GPU render benchmarks for Premiere Pro, Photoshop 2019, Blender and games (not a gamer), but the RTX 2080 Ti hugged a close second most times. Sometimes the Quadro RTX 4000 outperformed the 5000 and underperformed the Titan RTX and the RTX 2080 Ti. And I found you were right: NVLink has to be the connector.
Somewhere along the way I stumbled upon the i9-9980XE 18 core / 36 thread CPU and now have a crush on it. I rely on GPU for certain programs and CPU for others. I have decided RTX is the way to go (I was tempted by the Titan V but after some benchmarks decided it would not work for what I need). I watched endless videos over the past two days, found very little on the CPU I like other than that it runs hot in some systems, which won't be an issue with my custom build, and discovered some intriguing info on CPUs. If my tech advises against the i9-9980XE, my second choice is the i9-9900K 8c/16t 5.2GHz.
Why? Because I watched this video and took some screenshots (I have screenshots for the cards from various videos and blogs, which I will post first). Due to the amount of screenshots I took, I may do a second post sharing the CPU info. I am posting these because not all of us have three days to watch videos (not that I did either, but I was on a mission). Hope these can help some of you who have helped me.
Okay, went to upload the pics and can't find them. Obviously I saved them to whatever Dad download folder I last screenshotted to, but these are of the CPU data.
Edit: I called Daz "Dad". I'm surprised I didn't type "GoDazzy", because I'm trying to set up an SSL for my cPanel.
Copied and pasted this email from my tech. This is what the system will be after the changes requested.
Please see the second revision of the estimate below. Also note that under motherboard I put ASUS HIGH END Motherboard. The reason for that is that the CPU is so new that only certain ASUS motherboards are shipping out with the BIOS that supports the CPU. I spent half the day on the phone with ASUS and Newegg. I am going to have to go down to LA, where I know a few suppliers, to physically look at the motherboard and determine which BIOS version it has. The problem with buying a board online is that resellers are not able to guarantee the motherboard will have the BIOS version I need. There are several boards that support this CPU socket, but if one does not have the latest BIOS I will not be able to get the CPU to POST (no video output on the monitor). I really don't want to have to buy an older CPU for this socket, install it so I can do the latest BIOS update, and then return it, just so the BIOS has the update to support the CPU you guys have requested. I still may have to do it that way, but that is a last resort if I am not able to find a board with the latest BIOS. With that being said, I guarantee the board I select will be of the highest quality. ASUS has 3 different top tier models: the ASUS PRIME, ASUS STRIX, and ASUS TUF series. It will definitely be one of their elite lines.
Custom PC for Rendering Setup:
Intel Core i9-9980XE Processor, 18 cores, runs at 3.0GHz base with auto Turbo mode boosting speed to 4.5GHz during heavy workloads for faster performance
ASUS HIGH END Motherboard includes 7/9 USB ports and 1 USB C Port on the back + 2 USB Ports in the front
Full sealed system water cooler, Master Liquid kit for CPU Cooling. Includes Dual 240MM Fans with RGB Lighting.
SupremeFX Premium Audio Chipset (Full support for 7.1 Surround Sound and Digital Optical Output – Capable of connecting to a Home Theater Receiver)
Samsung EVO 2TB SSD for primary OS
And Samsung EVO 4TB SSD for DATA Storage
Graphics Cards for Rendering: Nvidia GeForce RTX 2080 Ti 11GB GDDR6 x 2, with NVLink
G Skill Ripjaws V 128GB DDR4 3200 RAM
Expansion USB 3.1 Card adds an additional 4 USB ports to the rear of the machine
Expansion USB-C Card adds 2 additional USB-C ports, with 1 port being DATA only and the other with video support
The machine will have a total of 13 USB 3.1 Ports and 3 USB C Ports
Full Tower Case with Clear Side Window, will include intake and exhaust cooling system for optimal airflow and heat dissipation
EVGA 1300Watt G2 Gold Certified Power Supply
LG DVD-RW DVD and CD Reader and Writer
Windows 10 Professional 64 Bit
-end of email-
I was amazed at the trouble the CPU is causing re: travel etc., but lucky for us LA is around the corner, and I want a system that works not only with DAZ, but with Blender, Adobe apps, video apps, recording apps, and other 3D software including VR. Excited about ray tracing! I also need a machine that will accommodate our recording studio and our Korg Kronos 88 LE and other instruments.
I followed a link from a Scan email when they were plugging Titan RTX cards - the top end machine in the linked section, which I think had quad Quadros, was £51,xxx (and 97p). I wishlisted one.
Special thanks to you for heading me in the right direction. All of you influenced my decision, and thanks to each and every one of you. I will post the GPU screenshots here when I find them, because they proved the Titan RTX was #1 the entire time for the apps they benchmarked. I hope to God I don't regret my choices.
Edit: The i9-9980XE CPU can overheat. If you get this CPU, a cooling kit is essential.
A little over 2 yrs ago, I started saving up for a new Rig.
Last Christmas, I saved up enough and built this Rig. Then nVidia released the RTX2080Ti
I sold my GTX1080Ti, got the RTX2080Ti and added another RTX2080Ti and the NVLink
of course it's now $2500 over budget.
No regrets.
When you can, just do it. We're not getting any younger.
An Epyc processor, 64-128 gigs of RAM, and 2 RTX Titans... or alternatively, 4 Titan Xps
...however, Epyc means running Linux, which is not compatible with any Daz software (or, for that matter, even many of the expensive pro programmes save for Maya).
(snip)
And yet, somehow, these people managed to bench a 7000 series EPYC running Windows 10:
And here's the Velocity Micro blog talking about their EPYC workstation. Note the Windows task manager screenshot - and according to the Cinebench screenshot it's running Windows 8 Pro no less, not Windows 10 (that's probably a misread by the Cinebench software). Velocity Micro only offers Windows 10 as an option on their website, but it might be interesting to see if they might also install Windows 8, via a personal chat with their support staff or something...
Lots of options out there on the interwebs re: AMD EPYC and Windows 10. And just possibly Windows 8 as well, apparently, if that Cinebench screenshot is to be believed...
...no point in having four RTX Titans, as NVLink only works with two for memory stacking. The only systems that allow memory stacking between more cards are those using NVLink motherboards, which are high-priced server components and then only work with Teslas and IBM POWER9s.
You can, however, have 2 Titans stacked with another pair of Titans. That would provide about 48GB with NVLink, plus the boatload of cores that 4 Titans would offer. That sounds pretty darn sweet.
ArtAngel, the Titan RTX and 2080 Ti are extremely similar chips. The Titan has a few more CUDA cores; the 2080 Ti is clocked higher. The 2080 Ti probably does beat it at many tasks for that reason, but the two cards should be very similar. The main benefit of the Titan is its large VRAM capacity, more than double the 2080 Ti's. While the 2080 Ti can pool VRAM with NVLink, we still don't know for sure if Daz can support this. Also, pooling VRAM is not a full doubling of that VRAM. You won't get 22GB; the reason is that some data must be duplicated for performance. We don't know how much. I'm going to guess you may end up with 20GB of pooled memory.
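Just to put that guess into numbers, here's a tiny back-of-the-envelope sketch (Python; the duplicated-data amount is a made-up assumption, since nobody outside NVIDIA/OTOY has published the real overhead):

```python
# Rough sketch of the pooled-VRAM guess above (all numbers are assumptions,
# not measurements -- the real duplication overhead isn't public).
def pooled_vram_estimate(per_card_gb: float, duplicated_gb: float) -> float:
    """Two NVLinked cards: total VRAM minus whatever has to live on both cards."""
    return 2 * per_card_gb - duplicated_gb

# Two RTX 2080 Tis (11GB each), assuming ~2GB of data ends up duplicated:
print(pooled_vram_estimate(11, 2))   # -> 20.0, roughly the 20GB guess above
# Two Titan RTXs (24GB each) with the same assumed 2GB of duplication:
print(pooled_vram_estimate(24, 2))   # -> 46.0
```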
Titans do have a trick up their shroud: they can use Tesla Compute Cluster, TCC. This turns off the video output (so you need a card for video or an onboard GPU) and turns the card into a pure compute card. This *might* enhance Iray performance; as far as I know, nobody has tested it. I saw somebody post times for Vray with a Titan V, and enabling TCC mode was indeed faster than without, but that is not proof it would be faster for Iray. Plus you would be limited in what you can do in TCC mode, since the Titans would not be usable for video and other apps.
..yeah but first, you'd need an MB that could handle 4 double-width cards. Next, is it really worth the extra 5,000USD just for the added cores? Crikey, for that you could get a single Quadro RTX 8000 or two Quadro RTX 6000s and have the same total VRAM for about the same investment..
Unless you are a serious "armchair scientist", developing your own personal AI or, say, doing climate modelling, TCC is pretty pointless.
I think you've missed the entire point of this thread. It's a dream machine/unlimited budget thread, as indicated by the 'no budget/wish machine' thread title.
There ARE some 8 GPU solutions out there now. It doesn't HAVE to fit in an ATX case.
The system above can accommodate up to four 2000W power supplies, which would mean around 73 amps of circuit(s). I think you could get away with a lot less than that though, as the RTX Titans each have a TDP of 280W. 2400W should be able to accommodate 8 of those, plus throw in another 1000W or so for the dual Xeons... So a 30 amp circuit MIGHT be able to power it, although I'd probably use two 20 amp circuits instead so that there's a bit of overhead.
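If you want to sanity-check that circuit math, here's the rough arithmetic as a quick script (assuming 120V North American circuits and the usual 80%-of-breaker-rating rule of thumb for continuous loads; the 280W and 1000W figures are the same ballpark numbers as above, not measurements):

```python
# Back-of-the-envelope power budget for an 8x RTX Titan box (assumed figures).
VOLTS = 120                      # assumed household circuit voltage
gpu_watts = 8 * 280              # eight RTX Titans at ~280W TDP each
cpu_etc_watts = 1000             # ballpark for dual Xeons, drives, fans
total_watts = gpu_watts + cpu_etc_watts

amps = total_watts / VOLTS
print(f"{total_watts}W -> {amps:.1f}A at {VOLTS}V")               # ~27A
print(f"30A breaker at 80% continuous: {0.8 * 30:.0f}A available")   # 24A -- marginal
print(f"two 20A breakers at 80%: {2 * 0.8 * 20:.0f}A available")     # 32A -- comfortable
# And the 'four 2000W supplies' worst case mentioned above:
print(f"{4 * 2000}W -> {4 * 2000 / 110:.0f}A at 110V")               # ~73A
```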
This dual Xeon Purley (2 x 28 core) config detailed in this article uses an 1100W power supply. The BOXX system I linked mentioned 22-core Xeons:
NVLinking those 8 RTXs into four pairs would provide you with up to 48 GB of VRAM headroom for larger scenes, with the advantage of having 8 GPUs to boost rendering speeds. It'd have to be a VERY detailed and huge scene to exceed that 48GB limit. I suppose if you included ALL of Airport Island in a long shot without hiding anything, you might be able to get close to that.
That or have a couple dozen HD Genesis 8 characters, all with different skin maps, in a scene.
Outrider might be able to suggest a scene that uses, say, more than 40GB of VRAM.
Hollywood CG blockbusters may also be able to exceed that 48GB cap... not sure if the OP is looking to do Hollywood blockbusters with this system. If so, well, NVIDIA does make cards with more than 24 GB of VRAM each.
There may be faster cards than those RTX Titans in Nvidia's library, and of course NVidia could announce new cards at Computex next week...
There's also those NVidia DGX systems, but you may need to repurpose the software for rendering from deep learning:
16 GPUs reside in one of those boxes. They are rather power hungry though...plus they are quite pricey!
Anyways, back on point, that 8 GPU BOXX system has a lot of potential. There are also numerous 4 GPU workstations available these days, but this is a dream machine, not a 'cut corners' machine.
Have you ever browsed the Octane benchmark scores? One awesome thing about OctaneBench is that it uploads scores publicly. If you look at these machines, you will see that there are plenty of crazy machines out there. It turns out some people are living the dream. I'm posting a partial list from that page.
A link to the full page is here; I only skimmed the top portion of it, and there are dozens of ridiculous machines with 4 or more GPUs on this bench. https://render.otoy.com/octanebench/results.php
Look at this: 17 1080 Tis??? 10 Titan RTXs??? A Quadro GV100 paired with 8 Titan Vs and a 2080 Ti??? The GV100 is a $10K card by itself. 8 Tesla V100s, which might actually be the older DGX machine, since that is what it had inside.
If I had no budget, I would get a better laptop than this one.
edit: currently I have no budget for a new computer, so I am stuck with this laptop. The screen resolution is too low and there is no dedicated GPU.
...only the Tesla V100 comes with an NVLink expansion slot interface. RTX Quadros, Titans, and GeForce cards are still all PCIe. You need NVLink expansion slots on the MB to share the VRAM of all eight cards, as the bridge only allows two cards to share resources. The remaining cards would primarily be there for the cores.
True, 96 GB of VRAM (2 x Quadro RTX 8000) is quite a bit of headroom, but 512 GB (the aforementioned NVidia DGX-2H server)? Unless you are running a professional animation/film studio, that is overkill.
Yeah, it's fun to "dream big", but the biggest system is not always the most practical for what we look to do. In many cases it is massive overkill. Again, this is why in my first post on the first page I specified a system that doesn't have "gawdawful" specs, because they really are not needed for what most of us do. It becomes more like buying one of those 200 mph exotic supercars when the speed limit on most city streets is 30 - 35 mph and 50 - 70 mph on the highway (and most streets/roads today are in terrible shape).
Oh, and don't forget the tech curve. That souped-up Tesla-powered NVidia server may seem the hottest thing on the computing road today, but in several years it will likely end up being looked on as a "clunker" when the newest "latest and greatest" tech comes out with 1024 or 2048 GB of VRAM.
I remember people being impressed years ago when I mentioned my then "new" system had a shredding i7 930 and a whopping 12 GB of memory. Now it's sort of like a "DC-3" chugging along in these days of 500 - 600 mph jet travel. Like that old airliner, though, it still does its job and is still dependable (BTW, saw a DC-3 flying overhead yesterday; a 70+ year old plane, doing just fine).
Of course tech is always going to change. That's not a reason to ignore a currently powerful machine. Those who bought a big bad machine before will simply buy a newer, bigger, badder machine in the future. I mean, if you have the budget to buy a pimp machine in the first place, odds are there is disposable income to do it again.
As a gamer, this is nothing new to me. Back in the day, we had our 8 bit NES and were proud! LOL. But then came Sega Genesis, and Super NES. Then came N64, Dreamcast, Playstation 1, 2, 3, 4....I could go on and on. Technology does not last. You either move on with it, or you don't. If I want to play the newest video games, I have to keep up.
That's why I always say 3D modeling is not for the faint of heart. You have technology to keep up with, plus the 3D assets as well. This requires a pretty constant disposable income.
Note that I did NOT say that the RTX Titans were pooling all of their memory into a single pool. What I said was to pair them up. Each card has 24GB of VRAM. x2 = 48. If I had said 192, then that would be a common pool, and note that I never said 192.
You'd essentially NVLink the cards into pairs using the special connector that Nvidia makes for this purpose at the top of the card. See here for benchmarks using the NVLink bridge:
Octane Render is one of the benches used in the above link. Here's the OTOY discussion about what happens when you exceed the memory of a single card using NVLink:
They note that memory performance slows down at that point, due to the additional time required to access the VRAM on the other card.
Now, whether the Daz Studio renderer will ever support NVLink is a completely different discussion. I have no idea if that's one of the things being looked at in the beta development group, or if it's already working in the beta, as I haven't been following the beta development thread:
Even so, the RTX Titan is a fast card, and the 24 GB of VRAM is a nice bump up from other options in this price range. If you only do smaller scenes, though, then the 2080 Ti may be all you need. Myself, I seem to hit the VRAM limit regularly with some of my scenes, even with a 1080 Ti, at which point I do the usual things to squeeze a scene into the RAM or set up multiple passes.
Titan RTXs aren't cheap, but they are cheaper than most of the other pro cards. Sure, you could buy an older card on eBay or something, but then you are rolling the dice and taking your chances, plus those cards are generally slower than the latest cards. For some people, time is money.
But, as I pointed out in my last post, this is a wish list thread, not an affordability thread. Even without NVLink, 8 cards cranking out a render is an awesome thing, as pointed out in Outrider's post a couple of posts above.
I'm also curious about that 17 GPU setup mentioned in Outrider's post. What does that even look like?
If I had no budget my dream machine would be Watson. But not everyone might want or need a Watson. Tom Hanks didn't, in Cast Away. And under similar circumstances I doubt Watson would be of much use. That's when you need a Wilson. Who knows where we could be or what could happen, especially when an F-16 fighter jet flies over your house and crashes into two warehouses, one being Amazon's fulfillment center. Maybe Watson would be better than Wilson in that event.
I think you've missed the entire point of this thread. It's a dream machine/unlimited budget thread, as indicated by the 'no budget/wish machine' thread title.
I'm also curious about that 17 GPU setup mentioned in Outrider's post. What does that even look like?
As am I, LOL! I suspect it could be some sort of mining rack repurposed for rendering. I found this insane pic of 17 AMD 290s used for mining - I counted. What is even happening here? Notice the fan on the right. This can't be safe.
I imagine the rack of 17 1080 Tis is a bit nicer than this, LOL. But you never know!
..the Quadro RTX 8000 (48 GB VRAM) is 5,500USD, which is down from an introductory price tag of nearly 10,000USD. That means for 11,000USD (+ the cost of the NVLink widget) you get 96 GB of VRAM at your disposal. With just two of these you are looking at 9,216 CUDA cores, 1,152 Tensor cores, and 144 RT cores, which is pretty significant already. OK, so I know the thread is about having unlimited resources for a system, but is there that much of an advantage to spending another 11,000USD just for more cores?
If I had no budget my dream machine would be Watson. But not everyone might want or need a Watson. Tom Hanks didn't, in Cast Away. And under similar circumstances I doubt Watson would be of much use. That's when you need a Wilson. Who knows where we could be or what could happen, especially when an F-16 fighter jet flies over your house and crashes into two warehouses, one being Amazon's fulfillment center. Maybe Watson would be better than Wilson in that event.
I agree. Well said.
...and again, I question the practicality of an "Uber Machine" for what we do.
Looking at the ''practicality' is possibly taking away the fun.
Well said - if you can buy a dream computer, go for it.
Keep in mind that there are people that need to generate hundreds of renders a month. These are generally for games or visual novels, and these are the sort of people that can benefit greatly from faster machines.
Also, if you regularly render animations, well, at 24 FPS those frames add up fast. Again, here's where 'uber' machines can really get a workout.
Less time spent waiting on renders = more render output.
And, if you are trying to do a full 3d animated movie, well then you need a LOT of frames rendered! Daz Studio Animate isn't the most user friendly setup for that, but there are people out there that use Daz Studio and Animate for their movies. In this case, having a lot of GPU horsepower could mean getting said movie done in days, weeks or months instead of years.
Everyone's workload is different. If you are generating only a few renders a month, yeah you probably don't need a multi-GPU setup. But if you are generating 100+ still renders + a few minutes of animation each month, well that's where those 4 and 8 GPU setups become very attractive. In the latter case, you may have a 'setup' machine and a second render machine, but said render machine will be the thing with lots of GPUs in it. Gotta keep those patreon peeps happy, ya know!
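To put rough numbers on how fast those frames pile up, here's a quick back-of-the-envelope sketch (Python; the per-frame render time and the assumption that render time scales roughly with GPU count are illustrative placeholders, not benchmarks):

```python
# How animation frame counts turn into render hours (all assumed numbers).
FPS = 24

def render_hours(minutes_of_animation, minutes_per_frame_1gpu, gpus=1):
    frames = FPS * 60 * minutes_of_animation
    # crude assumption: render time divides roughly evenly across GPUs
    return frames * (minutes_per_frame_1gpu / gpus) / 60

# 3 minutes of animation at a hypothetical 10 min/frame on one GPU:
print(render_hours(3, 10, gpus=1))   # 720 hours (~30 days of nonstop rendering)
print(render_hours(3, 10, gpus=8))   # 90 hours on an 8-GPU box
```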
I do wonder, though, how people are generating the scenes they render if they need to render many still images each month (animation is of course another matter). I'd think that actually preparing the scenes would fairly quickly become the bottleneck, once a render was short enough to be a needed screen-break.
For VN style projects and 'comics style' stories with still images, essentially you are often just making minor changes to the poses of characters as they are conversing. That's where those prebuilt partial body poses and expressions become very handy. If you have two instances open, you can alternate between instances, adjusting the pose in a second instance while rendering with the first, or alternating between two different scenes, doing your pose and position adjustments for the next shot while the other instance is baking a render.
Once you fall into the rhythm, you can often make the necessary conversational pose adjustments in just a few minutes. If the scene takes you an hour to render on a single GPU, well, even if you can drop those render times to 15 minutes by adding 3 additional GPUs, that's often more than enough time to set up the next render when you are only making conversational adjustments.
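As a toy illustration of that overlap, here's a small sketch of the two-instance pipeline (all timings below are hypothetical placeholders, not measurements):

```python
# Toy model of alternating two Daz instances: set up the next shot in one
# instance while the other is rendering. Throughput ends up limited by the
# slower of the two stages rather than by their sum.
def shots_per_session(render_min, setup_min, session_hours=8):
    serial = session_hours * 60 / (render_min + setup_min)        # one instance, no overlap
    overlapped = session_hours * 60 / max(render_min, setup_min)  # two instances, pipelined
    return serial, overlapped

# Hypothetical: 15 min renders (4 GPUs) and 10 min of pose/setup work per shot.
print(shots_per_session(15, 10))   # ~19 shots serial vs ~32 shots overlapped
```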
Or you can do completely crazy things like assign 4 GPUs to the first instance and 4 more GPUs to the second instance. I haven't tried that with more than 2 GPUs up to this point, and probably won't be buying more than 4 GPUs in any case, but I may try that with a 2 + 2 split when I finally build a dedicated rendering rig. 7nm Threadripper possibly being delayed until next year is annoying me at the moment...
Of course, when you are building a new scene, those can take much longer to set up, but once you've established the main locations for your story, say the I13 Bowling Alley, Apartment, Library, Jobsite, etc. and have dialed in the lighting, then you simply revisit those scenes with the same characters later in the story. There may be wardrobe changes, but the locations themselves often need only minor changes.
Beginning VN artists generally start out slow, but once their projects begin to attract a fair number of patrons on Patreon, etc., and the revenue stream picks up, that is the point when they begin upgrading their hardware, or hire on additional CG artists to keep up with their workload. At that point, it's kind of a full time job for the most successful authors. Of course, not everyone is successful with their projects, but then that's true in most pursuits.
Some VN authors will also queue up renders to bake while they sleep. This works particularly well for animations. Also, 8 GPUs for animated sequences is the dream!
It is ALWAYS appropriate to push the limits of what we know and what we do.
It is ALWAYS appropriate to push the limits of our technology, even if it's just dreaming.
It is ALWAYS appropriate to push OUR OWN LIMITS, whether they are physical, emotional, or limits in our imagination.
That last one is sometimes the most difficult one. I know it is for me. But it may well be the most important, because without imagination, we would have never gone to the moon. We would have never built anything beyond thatched-roof huts, or driven anything beyond a horse and carriage. We wouldn't be flying anywhere, we'd still be getting sick from spoilt food, we wouldn't have modern medicines, and we would still be making dentures out of ivory or wood.
When it comes to the doing, then yes, we need to come down to reality so as to establish some boundaries of what's actually feasible and possible given physics, funding, or available technology.
The way I see it, there's nothing wrong at all with the idea of an Uber machine. It is actually possible that a discussion like this could result in somebody TRYING something completely ground-breaking and new, thereby forcing a change in the way we make hardware or write our software tools.
Innovation and change don't always have to stand still just because today's hardware works one particular limited way, or because we understand it in one particular limited way. When Apple created the iPhone, they had to write a whole new OS for it. Well, they did. And then humans, being the adaptable types we are, began to write applications to run on that new architecture.
As a result of the last 50+ years of innovation, nearly everything I own has processing capability an order of magnitude greater than the IBM 370/158 mainframe computer I worked on in my first job out of college.
My irrigation system. My garage door opener. My next refrigerator. The temperature probe that lets me see how my steak is doing on the grill outside, while I'm typing a forum post. My watch! Even my watch has more memory and CPU capability than that old IBM. These innovations are all iterative, of course, and nothing happens in a big bang (except for the original Big Bang). All of this amazing stuff only happens when people shoot for the moon, if you get my drift. This is a great time to be alive, and I love the possibilities of the future. My only limit right now is in my own physical and imaginative potentials.
Oh, and somebody needs to build an even more efficient air conditioning unit for that massively-parallel GPU rig in the pic above, or it's gonna get real toasty in the house during big rendering sessions.
I do wonder, though, how people are generating the scenes they render if they need to render many still images each month (animation is of course another matter). I'd think that actually preparing the scenes would fairly quickly become the bottleneck, once a render was short enough to be a needed screen-break.
That's where I'm at. I added a 2070 last month to my 1080 Ti. Now, using the render queue plug-in, I can set up several scenes to render. They run while I'm asleep and at work, but I simply don't have the time to set up enough to keep my rig busy for that full 18 hours.
If I could, I'd be able to produce 2 different VNs simultaneously rather than alternating.
..oh, I have no issue with getting the best rendering system available for one's specific needs.
Granted, I don't work with animation or produce GNs, so my needs are not as extreme. However, having painted in oils and watercolours, I am into creating very detailed scenes in large resolution and high quality format for producing gallery-level art prints of reasonable size on archival stock and possibly even canvas. Even working with 3DL or Carrara, I realise this would still require a fair amount of horsepower (memory/CPU cores in this case). As the price for the RTX 8000 has come down to a little more than what the P6000 cost (originally it was priced around 8,000 - 9,000USD depending on the vendor), that makes serious GPU rendering power more of a reality for smaller studios.
Short of any of us winning a big Lotto or finding out we had a wealthy relative leave his/her fortune to us, most of these dream machines will remain just that. I would rather spend my time working on my art and upgrading what I have for the best performance I can get, and should that "bolt of lightning" strike then start considering something bigger.
Oh BTW I actually do have two networked systems one for scene assembly, modelling, and render testing and another which is now primarily a dedicated GPU render box.
While keeping your machine busy for 18-24 hours a day is a worthy goal, keep in mind that the main goal here is to increase your overall productivity. I'm sure that having both of those cards has increased the responsiveness of your Iray viewport, so you aren't waiting as long for the viewport in Iray mode to resolve to see if your lighting looks good etc before doing a full render. Anything that allows you to make better use of your time spent on the machine definitely helps the production pipeline.
Also, with a faster render pipeline, well sometimes you won't see problems in your scene until your image starts to resolve in a full render, so the faster you can get a render to begin to resolve, the faster you can see the issues that may need correcting, and the faster you can hit that cancel button so that you can fix said issues.
This is where having a decent amount of CPU cores and system memory comes in I think. It takes time for the system to prepare the scene info for the graphics cards. Waiting 3-4 minutes before the render iterations to actually start can be an eternity sometimes. My old system had 8 cores, and my current system has 4 cores, but with a separate graphics card for the viewport. My older system seemed to prep the graphics cards faster, but with the new system the viewport and system in general is much less laggy when a render is baking, as the 1080 Ti is 100% dedicated to rendering, and isn't driving a monitor or anything. Tradeoffs I guess.
My old system had dual 1080s, 64 GB of RAM, and 8 cores. My current system has half the cores and half the RAM, but at least I can use the integrated graphics on the CPU/APU to drive my 4K monitor. My old system being a laptop, well, when it died, I lost the use of those 1080s, which is why I now will NOT recommend spending thousands of dollars on a high end laptop. They are glorious until they die, at which point the graphics chips and such become unusable to you. I did get a fair amount of use out of that system though. BTW, the BIOS bricked that system due to flaws in the BIOS, and I can't even get into the BIOS screen anymore, and it won't boot at all. Fortunately my drive info was still OK, so I transplanted that drive into my newer system.
My current system is a 'bridge' system to hold me over until 7nm Threadripper drops, or maybe something else if 7nm Threadripper doesn't happen. It'll become an HTPC once I build my new system. Since it has a tiny motherboard, well, it only has 1 PCIe slot, but it suits my needs for now..
I miss having dual 1080s (that machine died). The 1080 Ti isn't as fast as a pair of 1080s, but at least it can handle larger scenes, which means I'm not having to render scenes in two passes as much as I needed to before. These days though I'm often just doing individual art pieces, but I also do story based work off and on.
Anyways, I digress. At the moment, my dream system involves a 7nm Threadripper (if those ever come out) and 4 RTX Titans. If I could figure out how to squeeze a cheap AMD card into said new system I'd do that too, with said AMD card driving the 4K monitor, as that works quite well actually on my current system with the Vega integrated graphics. If they ever made a Threadripper chip and board that accommodated integrated graphics, that'd be awesome! The 7nm Ryzen APUs aren't due until next year, but maybe at that point we'll have an 8 core Ryzen APU.
I may end up settling for a 2950X Threadripper though, if new options end up being a long ways off. A 7nm EPYC workstation may eventually be another option, but I suspect that 64 core EPYC won't be cheap. This thread is about unlimited budget, but my own budget has limits of course.
I could go the Intel route, but choose not to. All of those software vulnerabilities are beginning to stack up, with new ones showing up every couple of months. Hence, the performance hit from said vulnerability mitigations is getting larger and larger. Plus, if Intel chips are THAT flawed, and since AMD has comparable chips now, well it's a no brainer for me. When Google and Apple both recommend disabling hyperthreading on Intel based systems, you KNOW there's a problem...
I suppose I could use a PCIe ribbon cable and/or a riser to bifurcate one of the x16 slots to do this, but that'd require a custom case or something. Or maybe an open air system... Not keen on the open air system thing. I have kitties, and kitties like to attack cables...
7nm Ryzen looks interesting with that 'rumored' 16 core chip that's been leaked a few times now, but Ryzen AM4 boards don't have enough PCIe slots or lanes for what I need. That being said, that 16 core chip has a pretty impressive Cinebench R15 score...
AMD has hinted that quad hyperthreading (4 threads per core) is just around the corner with future EPYC CPUs. That'll be 2020/2021 at the earliest, but yeah - 64 cores, 256 threads - I wonder how fast a CPU only render would be with THAT chip...
BTW, The AMD Computex Keynote is on Monday (5/27) at 10 AM Taipei Taiwan time. That'll be Sunday night for you Western Hemisphere types. Might be good popcorn viewing, it'll be livestreamed!
I do wonder, though, how people are generating the scenes they render if they need to render many still images each month (animation is course another matter). I'd think that actually preparing the scenes would fairly quickly become the bottleneck, once a render was short enough to be a needed screen-break.
That's where I'm at. I added a 2070 last month to my 1080ti. Now using the render queue plug in I can set up several scenes to render. They run while I'm asleep and at work but I simply don't have the time to set up enough to keep my rig busy for that full 18 hours.
If I could I'd be able to produce 2 different VN's simultaneously rather than alternating.
While keeping your machine busy for 18-24 hours a day is a worthy goal, keep in mind that the main goal here is to increase your overall productivity. I'm sure that having both of those cards has increased the responsiveness of your Iray viewport, so you aren't waiting as long for the viewport in Iray mode to resolve to see if your lighting looks good etc before doing a full render. Anything that allows you to make better use of your time spent on the machine definitely helps the production pipeline.
Also, with a faster render pipeline, well sometimes you won't see problems in your scene until your image starts to resolve in a full render, so the faster you can get a render to begin to resolve, the faster you can see the issues that may need correcting, and the faster you can hit that cancel button so that you can fix said issues.
This is where having a decent amount of CPU cores and system memory comes in I think. It takes time for the system to prepare the scene info for the graphics cards. Waiting 3-4 minutes before the render iterations to actually start can be an eternity sometimes. My old system had 8 cores, and my current system has 4 cores, but with a separate graphics card for the viewport. My older system seemed to prep the graphics cards faster, but with the new system the viewport and system in general is much less laggy when a render is baking, as the 1080 Ti is 100% dedicated to rendering, and isn't driving a monitor or anything. Tradeoffs I guess.
My old system had dual 1080s, 64 GB of RAM, and 8 cores. My current system has half the cores and half the ram, but at least I can use the integrated graphics on the CPU/APU to drive my 4K monitor. My old system being a laptop, well when it died, I lost the use of those 1080s, which is why I now will NOT recommend spending thousands of dollars on a high end laptop. They are glorious until when they die, at which point the graphicss chips and such become unusable to you. I did get a fair amount of use out of that system though. BTW the BIOS bricked that system due to flaws in the BIOS, and I can't even get into the BIOS screen anymore, and it won't boot at all. Fortunately my drive info was still OK, so I transplanted that drive into my newer system
My current system is a 'bridge' system to hold me over until 7nm Threadripper drops, or maybe something else if 7nm Threadripper doesn't happen. It'll become a HTPC once I build my new system. Since it has a tiny motherboard, well it only has 1 PCIe slot, but it suits my needs for now..
I miss having dual 1080s (that machine died). The 1080 Ti isn't as fast as a pair of 1080s, but at least it can handle larger scenes, which means I'm not having to render scenes in two passes as much as I needed to before. These days though I'm often just doing individual art pieces, but I also do story based work off and on.
Anyways, I digress. At the moment, my dream system involves a 7nm Threadripper (if those ever come out) and 4 RTX Titans. If I could figure out how to squeeze a cheap AMD card into said new system I'd do that too, with said AMD card driving the 4K monitor, as that works quite well actually on my current system with the Vega integrated graphics. If they ever made a Threadripper chip and board and chip that accomodated integrated graphics, that'd be awesome! The 7nm Ryzen APUs aren't due until next year, but maybe at that point we'll have an 8 core Ryzen APU.
I may end up settling for a 2950X Thradripper though, if new options end up being a long ways off. A 7nm EPYC workstation may eventually be another option, but I suspect that 64 core EPYC won't be cheap. This thread is about unlimited budget, but my own budget has limits of course.
I could go the Intel route, but choose not to. All of those software vulnerabilities are beginning to stack up, with new ones showing up every couple of months. Hence, the performance hit from said vulnerability mitiagations is getting larger and larger. Plus, if Intel chips are THAT flawed, and since AMD has comparable chips now, well it's a no brainer for me. When Google and Apple both recommend disabling hyperthreading on Intel based systems, you KNOW there's a problem...
I suppose I could use a PCIe ribbon cable and/or a riser to bifurcate one of the x16 slots to do this, but that'd require a custom case or something. Or maybe an open air system... Not keen on the open air system thing. I have kitties, and kitties like to attack cables...
7nm Ryzen looks interesting with that 'rumored' 16 CPU chip that's been leaked a few times now, but Ryzen AM4 boards don't have enough PCIe slots or lanes for what I need. That being said, that 16 core chip has a pretty impressive Cinebench r15 score...
AMD has hinted that quad hyperthreading (4 threads per core) is just around the corner with future EPYC CPUs. That'll be 2020/2021 at the earliest, but yeah - 64 cores, 256 threads - I wonder how fast a CPU only render would be with THAT chip...
BTW, The AMD Computex Keynote is on Monday (5/27) at 10 AM Taipei Taiwan time. That'll be Sunday night for you Western Hemisphere types. Might be good popcorn viewing, it'll be livestreamed!
Ryzen 3000 CPU's, at least the high end ones running on X570 Mobo's will support PCIE gen 4 so you won't need x16 for GPU's on them.
Furthermore one of the major benefits of running things the way I do, which you don't seem to understand, is I don't need to use Iray viewport. I set up a bunch of scenes each night and render them all and any that don't turn out simply get fixed and rendered the next day.
Custom PC for Rendering Setup:
Intel Core i9-9980XE 18-core processor, runs at 3.0 GHz base with auto Turbo mode boosting speed up to 4.5 GHz during heavy workloads for faster performance
ASUS HIGH END Motherboard includes 7/9 USB ports and 1 USB C Port on the back + 2 USB Ports in the front
Full sealed system water cooler, Master Liquid kit for CPU Cooling. Includes Dual 240MM Fans with RGB Lighting.
SupremeFX Premium Audio Chipset (Full support for 7.1 Surround Sound and Digital Optical Output – Capable of connecting to a Home Theater Receiver)
Samsung EVO 2TB SSD for primary OS
And Samsung EVO 4TB SSD for DATA Storage
Graphics Card for Rendering: Nvidia GeForce RTX 2080 Ti 11GB GDDR6 x 2 with NVLink
G Skill Ripjaws V 128GB DDR4 3200 RAM
Expansion USB 3.1 Card adds an additional 4 USB Ports to the rear of the machine
Expansion USB C Card adds 2 additional USB C Ports with 1 Port being DATA Only and the other with Video Support
The machine will have a total of 13 USB 3.1 Ports and 3 USB C Ports
Full Tower Case with Clear Side Window, will include intake and exhaust cooling system for optimal airflow and heat dissipation
EVGA 1300Watt G2 Gold Certified Power Supply
LG DVD-RW DVD and CD Reader and Writer
Windows 10 Professional 64 Bit
-end of email-
I was amazed at the trouble the CPU is causing re: travel etc., but lucky for us LA is around the corner, and I want a system that works not only with DAZ but with Blender, Adobe apps, video apps, recording apps, and other 3D software including VR. Excited about ray tracing! I also need a machine that will accommodate our recording studio and our Korg Kronos 88 LE and other instruments.
Special thanks to you for heading me in the right direction. All of you influenced my decision, and thanks to each and every one of you. I will post the GPU screenshots here when I find them because they proved the Titan RTX was #1 the entire time for the apps they benchmarked. I hope to God I don't regret my choices.
Edit: The i9-9980XE CPU can overheat. If you get this chip, a good cooling kit is essential.
A little over 2 yrs ago, I started saving up for a new Rig.
Last Christmas, I saved up enough and built this Rig. Then Nvidia released the RTX 2080 Ti.
I sold my GTX 1080 Ti, got the RTX 2080 Ti and added another RTX 2080 Ti and the NVLink.
Of course it's now $2500 over budget.
No regrets.
When you can, just do it. We're not getting any younger.
..yeah, but it wasn't a "given."
...and BTW I still am on W7 Pro.
...no point in having four RTX Titans as NVLink only works with two for memory stacking. The only systems that allow memory stacking between more cards are those using NVLink motherboards, which are high-priced server components and then only work with Teslas and IBM Power 9s.
..yeah but first, you'd need an MB that could handle 4 double width cards. Next, is it really worth the extra 5,000USD just for the added cores? Crikey, for that you could get a single Quadro 8000 or two Quadro 6000s and have the same total VRAM for about the same investment.
Unless you are a serious "armchair scientist," developing your own personal AI or say doing climate modelling, TCC is pretty pointless.
I think you've missed the entire point of this thread. It's a dream machine/unlimited budget thread, as indicated by the 'no budget/wish machine' thread title.
There ARE some 8 GPU solutions out there now. It doesn't HAVE to fit in an ATX case.
https://www.boxx.com/systems/rackmounts/apexx-8r
The system above can accommodate up to four 2000W power supplies, which would mean around 73 amps of circuit(s). I think you could get away with a lot less than that though, as the RTX Titans each have a TDP of 280W. 2400W should be able to accommodate 8 of those, plus throw in another 1000W or so for the dual Xeons... So a 30 amp circuit MIGHT be able to power it, although I'd probably use two 20 amp circuits instead so that there's a bit of overhead.
This dual Xeon Purley (2 x 28 core) config detailed in this article uses an 1100W power supply. The BOXX system I linked mentioned 22 core Xeons:
https://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade/11
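As a rough sanity check on those numbers, here is a back-of-the-envelope power and amperage estimate. The 2000W PSUs, 280W Titan RTX TDP, and ~1000W allowance for the dual Xeons come from the paragraph above; the 110 V mains figure and the 80% continuous-load rule are my own assumptions, so treat this as a sketch rather than electrician's advice.

```python
# Back-of-the-envelope power budget for the 8x Titan RTX BOXX config above.
# Assumption: 110 V household mains (the "~73 A" figure only works out near that voltage).

MAINS_V = 110

# Worst case: four 2000 W power supplies fully loaded.
psu_max_w = 4 * 2000
print(f"4x 2000 W PSUs: {psu_max_w} W -> {psu_max_w / MAINS_V:.0f} A")      # ~73 A

# More realistic: 8 Titan RTX at their 280 W TDP, plus ~1000 W for the dual Xeons,
# motherboard, and drives.
realistic_w = 8 * 280 + 1000
print(f"Realistic draw: {realistic_w} W -> {realistic_w / MAINS_V:.1f} A")  # ~29 A

# A breaker should only carry ~80% of its rating continuously, which is why
# two 20 A circuits are a safer split than one 30 A circuit here.
for breaker_a in (20, 30):
    print(f"{breaker_a} A circuit: ~{breaker_a * 0.8 * MAINS_V:.0f} W continuous capacity")
```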
NVLinking those 8 RTXs into four groups would provide you with up to 48 GB of VRAM headroom for larger scenes, with the advantage of having 8 GPUs to boost rendering speeds. It'd have to be a VERY detailed and huge scene to exceed that 48GB limit. I suppose if you included ALL of Airport island in a long shot without hiding anything, you might be able to get close to that.
That or have a couple dozen HD Genesis 8 characters, all with different skin maps, in a scene.
Outrider might be able to suggest a scene that uses say more than 40GB of VRAM.
Hollywood CG blockbusters may also be able to exceed that 48GB cap... not sure if the OP is looking to do Hollywood blockbusters with this system. If so, well NVIDIA does make cards with more than 24 GB of VRAM each.
There may be faster cards than those RTX Titans in Nvidia's library, and of course NVidia could announce new cards at Computex next week...
There's also those NVidia DGX systems, but you may need to repurpose the software from deep learning to rendering:
https://www.nvidia.com/en-us/data-center/dgx-2/
16 GPUs reside in one of those boxes. They are rather power hungry though...plus they are quite pricey!
Anyways, back on point, that 8 GPU BOXX system has a lot of potential. There are also numerous 4 GPU workstations available these days, but this is a dream machine, not a 'cut corners' machine.
Have you ever browsed the Octane benchmark scores? One awesome thing about OctaneBench is that it publishes uploaded scores publicly. If you look at these machines, you will see that there are plenty of crazy machines out there. It turns out some people are living the dream. I'm posting a partial list from that page.
A link to the full page is here; I only skimmed the top portion of the page, and there are dozens of ridiculous machines with 4 or more GPUs on this bench. https://render.otoy.com/octanebench/results.php
Look at this, 17 1080 Tis??? 10 Titan RTXs??? A Quadro GV100 paired with 8 Titan Vs and a 2080 Ti??? The GV100 is a $10K card by itself. And 8 Tesla V100s, which might actually be the older DGX machine since that is what it had inside.
17x GTX 1080 Ti (1 result): 4,011
7x RTX 2080 Ti + 3x TITAN V (1 result): 3,677
10x TITAN RTX (1 result): 3,534
1x Quadro GV100 + 1x RTX 2080 Ti + 8x TITAN V (2 results): 3,347
11x RTX 2080 Ti (3 results): 3,295
10x RTX 2080 Ti (7 results): 3,099
8x Tesla V100-SXM2-16GB (8 results): 3,012
3x GTX 1080 Ti + 1x Quadro GV100 + 4x RTX 2080 Ti + 2x TITAN V (1 result): 2,852
9x RTX 2080 Ti (1 result): 2,688
8x RTX 2080 Ti (9 results): 2,438
11x GTX 1080 Ti (9 results): 2,429
12x GTX 1080 Ti (1 result): 2,416
4x GTX 1080 Ti + 2x RTX 2080 Ti + 2x TITAN Xp (2 results): 2,050
6x GTX 1080 Ti + 2x RTX 2080 Ti (12 results): 2,024
10x GTX 1080 Ti (10 results): 2,003
8x TITAN Xp COLLECTORS EDITION (1 result): 1,896
9x GTX 1080 Ti (3 results): 1,873
2x Quadro RTX 6000 + 2x RTX 2080 Ti + 2x TITAN RTX (1 result): 1,873
1x Quadro RTX 6000 + 3x RTX 2080 Ti + 2x TITAN RTX (1 result): 1,838
4x RTX 2080 Ti + 2x TITAN RTX (3 results): 1,813
8x GTX 1080 Ti + 1x Quadro P4000 (4 results): 1,783
5x GTX 1070 Ti + 7x GTX 1080 (1 result): 1,771
3x GTX 1070 Ti + 9x GTX 1080 (1 result): 1,746
4x GTX 1070 + 4x RTX 2080 Ti (1 result): 1,742
5x RTX 2080 Ti + 1x TITAN RTX (2 results): 1,710
13x GTX 1070 (1 result): 1,695
8x GTX 1080 Ti (6 results): 1,673
8x RTX 2070 (1 result): 1,623
7x RTX 2080 (1 result): 1,579
5x RTX 2080 Ti (2 results): 1,515
1x Quadro M4000 + 2x TITAN RTX + 4x TITAN X (Pascal) (1 result): 1,515
7x GTX 1080 Ti (9 results): 1,510
4x GTX 980 Ti + 3x RTX 2080 Ti (1 result): 1,452
4x GTX 1080 Ti + 1x RTX 2080 Ti + 1x TITAN RTX (1 result): 1,434
7x RTX 2080 Ti (4 results): 1,416
4x GTX 1080 Ti + 2x RTX 2080 Ti (2 results): 1,405
4x TITAN V (1 result): 1,380
2x GTX 1080 Ti + 1x Quadro M4000 + 4x TITAN X (Pascal) (1 result): 1,309
2x RTX 2080 Ti + 2x TITAN V (2 results): 1,300
1x Quadro P2000 + 4x Tesla V100-PCIE-32GB (1 result): 1,288
5x GTX 1080 Ti + 2x GTX 780 Ti (1 result): 1,281
4x TITAN RTX (3 results): 1,275
4x Quadro RTX 6000 (2 results): 1,270
4x Quadro RTX 8000 (1 result): 1,266
3x GTX 1080 Ti + 2x TITAN RTX (1 result): 1,254
6x GTX 1080 Ti (26 results): 1,247
8x GTX 1070 Ti (1 result): 1,225
8x GTX 1080 (4 results): 1,200
3x GTX 1070 Ti + 4x GTX 1080 Ti (1 result): 1,184
4x RTX 2080 Ti (58 results): 1,184
16x Tesla K80 (1 result) *: 1,175
9x GTX 1070 (1 result): 1,142
4x GTX 1080 Ti + 1x TITAN RTX (1 result): 1,139
2x GTX 1070 + 4x GTX 1080 Ti (1 result): 1,119
3x RTX 2080 Ti + 1x TITAN X (Pascal) (1 result): 1,119
4x Quadro GP100 (2 results): 1,117
7x GTX 1080 (1 result): 1,104
1x GTX 1080 + 3x GTX 1080 Ti + 1x RTX 2080 Ti (1 result): 1,089
1x GTX 1080 Ti + 3x RTX 2080 Ti (7 results): 1,086
7x RTX 2060 (1 result): 1,079
12x GTX 1060 6GB (7 results): 1,076
5x GTX 1080 Ti (12 results): 1,071
2x GTX 1080 + 3x GTX 1080 Ti (3 results): 1,066
1x GTX TITAN X + 3x RTX 2080 Ti (1 result): 1,056
4x TITAN Xp (3 results): 1,054
2x GTX 1080 Ti + 2x TITAN V (1 result): 1,050
1x GTX TITAN X + 1x TITAN V + 2x TITAN X (Pascal) (2 results): 1,035
14x Tesla K80 (2 results) *: 1,033
8x GTX 1070 (4 results): 1,031
7x GTX 980 Ti (1 result): 1,030
4x GTX 1070 Ti + 2x GTX 1080 Ti (1 result): 1,026
1x Quadro P4000 + 3x RTX 2080 Ti (5 results): 1,013
4x Tesla P100-SXM2-16GB (2 results): 1,008
2x GTX 1080 Ti + 2x RTX 2080 Ti (5 results): 1,004
4x GTX 1080 Ti + 1x GTX 980 Ti (1 result): 975
1x GTX 980 Ti + 4x RTX 2070 (7 results): 973
6x GTX 980 Ti (1 result): 959
If I had no budget, I would get a better laptop than this one.
edit: currently I have no budget for a new computer, so I am stuck with this laptop. The screen resolution is too low and there is not a dedicated GPU.
...only the Tesla V100 comes with an NVLink expansion slot interface. RTX Quadros, Titans, and GeForce cards are still all PCIe. You need NVLink expansion slots on the MB to share the VRAM of all eight cards, as the bridge only allows two cards to share resources. The remaining cards would primarily be there for the cores.
True, 96 GB of VRAM (2 x RTX Quadro 8000) is quite a bit of overhead, but 512 GB (the aforementioned NVidia DGX-2H server)? Unless you are running a professional animation/film studio, that is overkill.
Yeah, it's fun to "dream big", but the biggest system is not always the most practical for what we look to do. In many cases it is massive overkill. Again, this is why in my first post on the first page I specified a system that doesn't have "gawdawful" specs, because they really are not needed for what most of us do. It becomes more like buying one of those 200 mph exotic supercars when the speed limit on most city streets is 30 - 35 mph and 50 - 70 mph on the highway (and most streets/roads today are in terrible shape).
Oh, and don't forget the tech curve. That souped up Tesla powered NVidia server may seem the hottest thing on the computing road today, but in several years it will likely end up being looked on as a "clunker" when the newest "latest and greatest" tech comes out with 1024 or 2048 GB of VRAM.
I remember people being impressed years ago when I mentioned my then "new" system had a shredding i7 930 and a whopping 12 GB of memory. Now it's sort of like a "DC-3" chugging along in these days of 500 - 600 mph jet travel. Like that old airliner though, it still does its job and is still dependable (BTW, saw a DC-3 flying overhead yesterday; a 70+ year old plane, doing just fine).
Of course tech is always going to change. That's not a reason to ignore a currently powerful machine. Those who bought a big bad machine before will simply buy a newer, bigger, badder machine in the future. I mean, if you have the budget to buy a pimp machine in the first place, odds are there is disposable income to do it again.
As a gamer, this is nothing new to me. Back in the day, we had our 8 bit NES and were proud! LOL. But then came Sega Genesis, and Super NES. Then came N64, Dreamcast, Playstation 1, 2, 3, 4....I could go on and on. Technology does not last. You either move on with it, or you don't. If I want to play the newest video games, I have to keep up.
That's why I always say 3D modeling is not for the faint of heart. You have technology to keep up with, plus the 3D assets as well. This requires a pretty constant disposable income.
It's not exactly fair, but that's how it is.
Note that I did NOT say that the RTX Titans were pooling all of their memory into a single pool. What I said was to pair them up. Each card has 24GB of VRAM. x2 = 48. If I had said 192, then that would be a common pool, and note that I never said 192.
You'd essentially NVLink the cards into pairs using the special connector that Nvidia makes for this purpose at the top of the card. See here for benchmarks using the NVLink bridge:
https://www.servethehome.com/dual-nvidia-titan-rtx-review-with-nvlink/
Octane Render is one of the benches used in the above link. Here's the OTOY discussion about what happens when you exceed the memory of a single card using NVLink:
https://render.otoy.com/forum/viewtopic.php?f=33&t=69896#p353452
They note that memory performance slows down at that point, due to the additional time required to access the VRAM on the other card.
Now, whether the Daz Studio renderer will ever support NVLink is a completely different discussion. I have no idea if that's one of the things being looked at in the beta development group or not, or if it's already working in the beta, as I haven't been following the beta development thread.
Even so, the RTX Titan is a fast card, and the 24 GB of VRAM is a nice bump up from other options in this price range. If you only do smaller scenes though, then the 2080 Ti may be all you need. Myself, I seem to hit the VRAM limit regularly with some of my scenes, even with a 1080 Ti, at which point I do the usual things to squeeze a scene into the ram or set up multiple passes.
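To give a feel for why scenes blow past a single card's VRAM, here is a rough texture-memory estimate. The map counts and 4K sizes below are illustrative assumptions, not measured Daz figures, and Iray's own texture compression and instancing will change the real numbers, so treat this as an upper-bound sketch.

```python
# Rough, illustrative VRAM estimate for character texture sets.
# Assumes uncompressed RGBA8 textures resident on the GPU; real renderers
# (Iray included) compress and may downscale, so this is an upper bound.

def texture_mib(width, height, bytes_per_pixel=4, mip_overhead=1.33):
    """Approximate GPU memory for one texture map, including mip levels."""
    return width * height * bytes_per_pixel * mip_overhead / 2**20

MAPS_PER_CHARACTER = 20   # diffuse/normal/roughness/etc. across face, torso, arms, legs
per_map_mib = texture_mib(4096, 4096)                 # ~85 MiB per 4K map
per_character_gib = MAPS_PER_CHARACTER * per_map_mib / 1024

for n in (1, 4, 12, 24):
    print(f"{n:2d} character(s): ~{n * per_character_gib:.1f} GiB of textures alone")

# Geometry, HD morphs, and environment textures come on top of this, which is
# how a scene can sail past an 11 GB card and start threatening a 48 GB pair.
```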
Titan RTXs aren't cheap, but they are cheaper than most of the other pro cards. Sure, you could buy an older card on Ebay or something, but then you are rolling the dice and taking your chances, plus those cards are generally slower than the latest cards. For some people, time is money.
But, as I pointed out in my last post, this is a wish list thread, not an affordability thread. Even without NVLink, 8 cards cranking out a render is an awesome thing, as pointed out in Outrider's post a couple of posts above.
I'm also curious about that 17 GPU setup mentioned in Outrider's post. What does that even look like?
If I had no budget my dream machine would be Watson. But not everyone might want or need a Watson. Tom Hanks didn't, in Cast Away. And under similar circumstances I doubt Watson would be of much use. That's when you need a Wilson. Who knows where we could be or what could happen, especially when an F-16 fighter jet flies over your house and crashes into two warehouses, one being Amazon's fulfillment center. Maybe Watson would be better than Wilson in that event.
https://disruptionhub.com/5-amazing-things-ibms-watson-can/
https://www.ibm.com/watson/health/
https://www.desertsun.com/story/news/2019/05/16/march-air-reserve-base-f-16-crashes-moreno-valley/3699895002/
As am I, LOL! I suspect it could be some sort of mining rack repurposed for rendering. I found this insane pic of 17 AMD 290s used for mining (I counted). What is even happening here? Notice the fan on the right. This can't be safe.
I imagine the rack of 17 1080 Tis is a bit nicer than this, LOL. But you never know!
What's not safe? The rack is wood and there are no exposed wires. That big fan should produce lots of airflow.
I've seen several "amateur" mining racks of this kind that were quite stable. They're probably cooler than putting those GPU's in racks.
..the Quadro 8000 (48 GB VRAM) is 5,500USD, which is down from an introductory price tag of nearly 10,000USD. That means for 11,000USD (+ the cost of the NVLink widget) you get 96 GB of VRAM at your disposal. With just two of these you are looking at 9,216 CUDA cores, 1,152 Tensor cores, and 144 RT cores, which is pretty significant already. OK, so I know the thread is about having unlimited resources for a system, but is there that much of an advantage to spending another 11,000USD just for more cores?
...and again, I question the practicality of an "Uber Machine" for what we do.
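For what it's worth, the "more cores" question mostly comes down to how close to linear the renderer scales across GPUs. A tiny sketch, assuming near-linear Iray/Octane scaling (a common approximation for path tracers, but not guaranteed), the 5,500USD street price quoted above, and a made-up 2-hour single-card render:

```python
# Hedged sketch: render time vs. spend for N Quadro RTX 8000s, assuming
# near-linear multi-GPU scaling. The baseline render time is hypothetical.

CARD_PRICE_USD = 5500     # street price quoted in the post above
BASELINE_HOURS = 2.0      # hypothetical single-card render time for a heavy scene

for cards in (1, 2, 4):
    hours = BASELINE_HOURS / cards
    print(f"{cards} card(s): ~{hours:.2f} h per render, "
          f"hardware spend ~${cards * CARD_PRICE_USD:,}")

# Going from 2 to 4 cards costs another ~$11,000 to shave ~30 minutes off a
# 1-hour render; whether that pays off depends entirely on render volume.
```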
Looking at the ''practicality' is possibly taking away the fun.
Well said - if you can buy a dream computer, go for it.
Keep in mind that there are people that need to generate hundreds of renders a month. These are generally for games or visual novels, and these are the sort of people that can benefit greatly from faster machines.
Also, if you regularly render animations, well at 24 FPS those frames add up fast. Again, here's where 'uber' machines can really get a workout.
Less time spent waiting on renders = more render output.
And, if you are trying to do a full 3d animated movie, well then you need a LOT of frames rendered! Daz Studio Animate isn't the most user friendly setup for that, but there are people out there that use Daz Studio and Animate for their movies. In this case, having a lot of GPU horsepower could mean getting said movie done in days, weeks or months instead of years.
Everyone's workload is different. If you are generating only a few renders a month, yeah you probably don't need a multi-GPU setup. But if you are generating 100+ still renders + a few minutes of animation each month, well that's where those 4 and 8 GPU setups become very attractive. In the latter case, you may have a 'setup' machine and a second render machine, but said render machine will be the thing with lots of GPUs in it. Gotta keep those patreon peeps happy, ya know!
I do wonder, though, how people are generating the scenes they render if they need to render many still images each month (animation is course another matter). I'd think that actually preparing the scenes would fairly quickly become the bottleneck, once a render was short enough to be a needed screen-break.
For VN style projects and 'comics style' stories with still images, essentially you are often just making minor changes to the poses of characters as they are conversing. That's where those prebuilt partial body poses and expressions become very handy. If you have two instances open, you can alternate between instances, adjusting the pose in a second instance while rendering with the first, or alternating between two different scenes, doing your pose and position adjustments for the next shot while the other instance is baking a render.
Once you fall into the rhythm, you can often make the necessary conversational pose adjustments in just a few minutes. If the scene takes you an hour to render on a single GPU, well even if you can drop those render times to 15 minutes by adding 3 additional GPUs, that's often more than enough time to set up the next render when you are only making conversational adjustments.
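The arithmetic behind that is easy to sketch. A toy throughput model, where the 5-minute setup time and the render times are just illustrative numbers in the spirit of the paragraph above:

```python
# Toy pipeline model: images produced in a working session when scene setup
# (in a second instance) overlaps with rendering (in the first).

def images_per_session(setup_min, render_min, session_min=8 * 60, overlapped=True):
    if overlapped:
        # With two instances, each image effectively costs whichever step is slower.
        per_image = max(setup_min, render_min)
    else:
        per_image = setup_min + render_min
    return session_min // per_image

for render_min in (60, 15, 5, 2):
    serial = images_per_session(5, render_min, overlapped=False)
    piped = images_per_session(5, render_min)
    print(f"render {render_min:>2} min: serial={serial}, overlapped={piped} images per 8 h")

# Once render time drops below your setup time, extra GPUs stop adding output;
# the person at the keyboard becomes the bottleneck.
```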
Or you can do completely crazy things like assign 4 GPUs to the first instance and 4 more GPUs to the second instance. I haven't tried that with more than 2 GPUs up to this point, and probably won't be buying more than 4 GPUs in any case, but I may try that with a 2 + 2 split when I finally build a dedicated rendering rig. 7nm Threadripper being possibly delayed until next year is annoying me at the moment...
Of course, when you are building a new scene, those can take much longer to set up, but once you've established the main locations for your story, say the I13 Bowling Alley, Apartment, Library, Jobsite, etc. and have dialed in the lighting, then you simply revisit those scenes with the same characters later in the story. There may be wardrobe changes, but the locations themselves often need only minor changes.
Beginning VN artists generally start out slow, but once their projects begin to attract a fair number of patrons on Patreon, etc., and the revenue stream picks up, that is the point when they begin upgrading their hardware, or hire on additional CG artists to keep up with their workload. At that point, it's kind of a full time job for the most successful authors. Of course, not everyone is successful with their projects, but then that's true in most pursuits.
Some VN authors will also queue up renders to bake while they sleep. This works particularly well for animations. Also, 8 GPUs for animated sequences is the dream!
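A minimal sketch of that overnight-queue idea. This is generic Python, not the actual render queue plug-in mentioned later in the thread, and the render command is a placeholder you would swap for whatever headless render entry point your own tooling provides.

```python
# Minimal overnight render queue: walk a folder of scene files and render them
# one after another, logging failures to fix and re-queue the next day.
# NOTE: "my_headless_renderer" is a placeholder, not a real Daz Studio command;
# wire this up to your own render-queue plug-in or command-line renderer.

import subprocess
import time
from pathlib import Path

QUEUE_DIR = Path("render_queue")      # drop .duf scene files here before bed
DONE_DIR = QUEUE_DIR / "done"
FAILED_LOG = QUEUE_DIR / "failed.txt"

def render_scene(scene: Path) -> bool:
    # Placeholder: replace with your renderer's actual invocation.
    result = subprocess.run(["my_headless_renderer", str(scene)])
    return result.returncode == 0

def run_queue() -> None:
    DONE_DIR.mkdir(parents=True, exist_ok=True)
    for scene in sorted(QUEUE_DIR.glob("*.duf")):
        start = time.time()
        if render_scene(scene):
            scene.rename(DONE_DIR / scene.name)
            print(f"done: {scene.name} in {(time.time() - start) / 60:.1f} min")
        else:
            with FAILED_LOG.open("a") as log:
                log.write(f"{scene.name}\n")
            print(f"failed: {scene.name} (fix and re-queue tomorrow)")

if __name__ == "__main__":
    run_queue()
```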
It is ALWAYS appropriate to push the limits of what we know and what we do.
It is ALWAYS appropriate to push the limits of our technology, even if it's just dreaming.
It is ALWAYS appropriate to push OUR OWN LIMITS, whether they are physical, emotional, or limits in our imagination.
That last one is sometimes the most difficult one. I know it is for me. But it may well be the most important, because without imagination, we would have never gone to the moon. We would have never built anything beyond thatched-roof huts, or driving anything beyond a horse and carriage. We wouldn't be flying anywhere, we'd still be getting sick from spoilt food, we wouldn't have modern medicines, and we would still be making dentures out of ivory or wood.
When it comes to the doing, then yes, we need to come down to reality so as to establish some boundaries of what's actually feasible and possible given physics, funding, or available technology.
The way I see it, there's nothing wrong at all with the idea of an Uber machine. It is actually possible that a discussion like this could result in somebody TRYING something completely ground-breaking and new, thereby forcing a change in the way we make hardware or write our software tools.
Innovation and change don't always have to stand still just because today's hardware works one particular limited way, or because we understand it in one particular limited way. When Apple created the iPhone, they had to write a whole new OS for it. Well, they did. And then humans, being the adaptable types we are, began to write applications to run on that new architecture.
As a result of the last 50+ years of innovation, nearly everything I own has processing capability an order of magnitude greater than the IBM 370/158 mainframe computer I worked on in my first job out of college.
My irrigation system. My garage door opener. My next refrigerator. The temperature probe that lets me see how my steak is doing on the grill outside, while I'm typing a forum post. My watch! Even my watch has more memory and cpu capability than that old IBM. These innovations are all iterative, of course, and nothing happens in a big bang (except for the original Big Bang). All of this amazing stuff only happens when people shoot for the moon, if you get my drift. This is a great time to be alive, and I love the possibilities of the future. My only limit right now is in my own physical and imaginative potentials.
Oh, and somebody needs to build an even more efficient air conditioning unit for that massively-parallel GPU rig in the pic above, or it's gonna get real toasty in the house during big rendering sessions.
That's where I'm at. I added a 2070 last month to my 1080ti. Now using the render queue plug in I can set up several scenes to render. They run while I'm asleep and at work but I simply don't have the time to set up enough to keep my rig busy for that full 18 hours.
If I could I'd be able to produce 2 different VN's simultaneously rather than alternating.
..oh, I have no issue with getting the best rendering system available for one's specific needs.
Granted, I don't work with animation or produce GNs, so my needs are not as extreme. However, having painted in oils and watercolours, I am into creating very detailed scenes in large resolution and high quality format for producing gallery level art prints of reasonable size on archival stock and possibly even canvas. Even working with 3DL, or Carrara, I realise this would still require a fair amount of horsepower (memory/CPU cores in this case). As the price for the RTX 8000 has come down to a little more than what the P6000 cost (originally it was priced around 8,000 - 9,000USD depending on the vendor), that makes serious GPU rendering power more of a reality for smaller studios.
Short of any of us winning a big Lotto or finding out we had a wealthy relative leave his/her fortune to us, most of these dream machines will remain just that. I would rather spend my time working on my art and upgrading what I have for the best performance I can get, and should that "bolt of lightning" strike then start considering something bigger.
Oh BTW, I actually do have two networked systems: one for scene assembly, modelling, and render testing, and another which is now primarily a dedicated GPU render box.
While keeping your machine busy for 18-24 hours a day is a worthy goal, keep in mind that the main goal here is to increase your overall productivity. I'm sure that having both of those cards has increased the responsiveness of your Iray viewport, so you aren't waiting as long for the viewport in Iray mode to resolve to see if your lighting looks good etc before doing a full render. Anything that allows you to make better use of your time spent on the machine definitely helps the production pipeline.
Also, with a faster render pipeline, well sometimes you won't see problems in your scene until your image starts to resolve in a full render, so the faster you can get a render to begin to resolve, the faster you can see the issues that may need correcting, and the faster you can hit that cancel button so that you can fix said issues.
This is where having a decent amount of CPU cores and system memory comes in I think. It takes time for the system to prepare the scene info for the graphics cards. Waiting 3-4 minutes before the render iterations to actually start can be an eternity sometimes. My old system had 8 cores, and my current system has 4 cores, but with a separate graphics card for the viewport. My older system seemed to prep the graphics cards faster, but with the new system the viewport and system in general is much less laggy when a render is baking, as the 1080 Ti is 100% dedicated to rendering, and isn't driving a monitor or anything. Tradeoffs I guess.
My old system had dual 1080s, 64 GB of RAM, and 8 cores. My current system has half the cores and half the RAM, but at least I can use the integrated graphics on the CPU/APU to drive my 4K monitor. My old system being a laptop, well when it died, I lost the use of those 1080s, which is why I now will NOT recommend spending thousands of dollars on a high end laptop. They are glorious until they die, at which point the graphics chips and such become unusable to you. I did get a fair amount of use out of that system though. BTW the BIOS bricked that system due to flaws in the BIOS, and I can't even get into the BIOS screen anymore, and it won't boot at all. Fortunately my drive info was still OK, so I transplanted that drive into my newer system.
My current system is a 'bridge' system to hold me over until 7nm Threadripper drops, or maybe something else if 7nm Threadripper doesn't happen. It'll become an HTPC once I build my new system. Since it has a tiny motherboard, well it only has 1 PCIe slot, but it suits my needs for now.
I miss having dual 1080s (that machine died). The 1080 Ti isn't as fast as a pair of 1080s, but at least it can handle larger scenes, which means I'm not having to render scenes in two passes as much as I needed to before. These days though I'm often just doing individual art pieces, but I also do story based work off and on.
Anyways, I digress. At the moment, my dream system involves a 7nm Threadripper (if those ever come out) and 4 RTX Titans. If I could figure out how to squeeze a cheap AMD card into said new system I'd do that too, with said AMD card driving the 4K monitor, as that works quite well actually on my current system with the Vega integrated graphics. If they ever made a Threadripper chip and board that accommodated integrated graphics, that'd be awesome! The 7nm Ryzen APUs aren't due until next year, but maybe at that point we'll have an 8 core Ryzen APU.
I may end up settling for a 2950X Threadripper though, if new options end up being a long ways off. A 7nm EPYC workstation may eventually be another option, but I suspect that 64 core EPYC won't be cheap. This thread is about unlimited budget, but my own budget has limits of course.
I could go the Intel route, but choose not to. All of those security vulnerabilities are beginning to stack up, with new ones showing up every couple of months. Hence, the performance hit from said vulnerability mitigations is getting larger and larger. Plus, if Intel chips are THAT flawed, and since AMD has comparable chips now, well it's a no-brainer for me. When Google and Apple both recommend disabling hyperthreading on Intel based systems, you KNOW there's a problem...
I suppose I could use a PCIe ribbon cable and/or a riser to bifurcate one of the x16 slots to do this, but that'd require a custom case or something. Or maybe an open air system... Not keen on the open air system thing. I have kitties, and kitties like to attack cables...
7nm Ryzen looks interesting with that 'rumored' 16 core chip that's been leaked a few times now, but Ryzen AM4 boards don't have enough PCIe slots or lanes for what I need. That being said, that 16 core chip has a pretty impressive Cinebench R15 score...
AMD has hinted that quad hyperthreading (4 threads per core) is just around the corner with future EPYC CPUs. That'll be 2020/2021 at the earliest, but yeah - 64 cores, 256 threads - I wonder how fast a CPU only render would be with THAT chip...
BTW, The AMD Computex Keynote is on Monday (5/27) at 10 AM Taipei Taiwan time. That'll be Sunday night for you Western Hemisphere types. Might be good popcorn viewing, it'll be livestreamed!
One that tied in directly to my brain and knew what I wanted to do before I did. LOL
;)
Laurie
Ryzen 3000 CPUs, at least the high-end ones running on X570 mobos, will support PCIe Gen 4, so you won't need x16 slots for GPUs on them (rough bandwidth math below).
Furthermore, one of the major benefits of running things the way I do, which you don't seem to understand, is that I don't need to use the Iray viewport. I set up a bunch of scenes each night and render them all, and any that don't turn out simply get fixed and rendered the next day.
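Rough numbers behind that PCIe point. The per-lane rates below are the standard published figures for PCIe 3.0 and 4.0 after encoding overhead; the takeaway is that a Gen 4 x8 link moves roughly as much data as a Gen 3 x16 link, which is what makes splitting lanes across more GPUs viable.

```python
# Approximate usable PCIe bandwidth per direction, after encoding overhead.
PER_LANE_GBPS = {3: 0.985, 4: 1.969}   # GB/s per lane for PCIe 3.0 / 4.0

for gen in (3, 4):
    for lanes in (8, 16):
        print(f"PCIe {gen}.0 x{lanes}: ~{PER_LANE_GBPS[gen] * lanes:.1f} GB/s")

# PCIe 4.0 x8 (~15.8 GB/s) is effectively the same pipe as PCIe 3.0 x16,
# so a Gen 4 board can feed more GPUs with fewer lanes each without starving them.
```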