I saw in the benchmarking thread where you were talking about water cooling your 3970X, but then switched over to air cooling for now. Didn't want to clutter that thread any more than we already have, so I decided to dig this thread back up instead.
This is more of a silly idea than a serious suggestion, but I thought I'd share anyways. I priced some car radiators at my local auto parts store, and they are surprisingly cheap. I saw one newly manufactured radiator for only $85 that looked particularly well suited for a DIY build.
Of course the radiator is still pretty large, about the size of a large ATX case, but I figure you could build a frame to attach the radiator and motherboard to, cannibalize an old PC case or something for the motherboard tray and such, and then find some decent mesh grating to encase the system so that my cats wouldn't crawl in...
It'd definitely have a Mad Max/Road Warrior kinda feel, but that actually sounds interesting to me. All of this 'clean cut RGB future stuff' gets old after a while.
Of course, if someone were to actually do this (and at least one person on Youtube has used a car radiator for an overclock attempt of an FX 8350), you'd need to find the necessary adaptors, and of course watch your materials. Most car radiators are aluminum these days, so you'd need to find aluminum blocks for your CPU and GPUs. I did find a few aluminum blocks, but the majority of water blocks are copper, at least from what I saw in my parts browsing. There are also some 'radiator hose to NPT' adaptors, so I figured that a good way to go would be a pair of 4-6 port NPT manifolds attached to said radiator hose adaptors. Then run the hoses to the pumps from the bottom manifold, returning through the top one.
You could also run an extra loop directly from the bottom manifold to the top, along the side of the radiator, with a clear tube so that you could visually check your water level without having to crack open the radiator cap. The car radiator hoses attach at the top and bottom of the back of the radiator, so a hose that passed directly between these two points, with no pumps attached, should 'equalize' its level with the level inside of the radiator when the system pumps are idle. And of course you'd pump from bottom to top, so that you aren't sucking air if the level starts to drop a bit. The top manifold should be well above the rest of the system, due to the radiator height, so it would require a significant drop in coolant level before the radiator fluid level would drop below the level of the water blocks.
The other reason for using NPT manifolds with multiple ports is to allow multiple circuits for the water cooling loops. This way you could isolate the CPU loop from the GPU loop(s). This would require multiple pumps of course, and you'd only shave 1-2 degrees C off of each coolant loop doing this, but every little bit might help with the heat throttling on the 3970X...
The guy that car radiator'ed his 8350 noted that due to the sheer amount of coolant in the system, thanks to the high capacity of the car radiator, and the large surface area of said radiator, that he didn't even have to bother with a fan on the radiator, as his coolant temps weren't climbing more than a couple of degrees during use. He did ziptie a fan above the CPU water block to cool the VRMs around the CPU though, so that might have helped with the water cooling a tiny bit. If I do decide to go 'ghetto' with my next build, I'll probably still grab a portable house fan and point it at the radiator and the rest of the computer though, for the summer months since I often use a fan anyways in the summer to keep the air moving in my work area.
Just wanted to share my madness here. It sounds silly, until you start to think about the huge amount of surface area on your average car radiator. You'd also have to be mindful of leaks, but most car radiator systems leak very little once the hose clamps are suitably cinched up when you think about it. They tend to lose liquid mainly due to the fact the water is often heated near/above the boiling point in the car engine, and excess overflow from expansion gets dumped into the overflow bottle where a bit of it will evaporate from over time.
In any case, I do hope that you find a better water block and radiator system for your 3970X. Wendell at Level 1 Techs recently showcased a build where he did a bit of watercooling. Here's the link where he talks about Threadripper cooling:
In the meantime, sounds like your air cooler is doing the job sufficiently for now.
A car radiator is very unsuited to use as PC cooling. The fin density is terrible, which means you'd need to push a lot of air across the fins to cool the water enough, and that means loud fans. Also the volume of water is huge, so you'd need a pretty beefy pump; again, that would be loud.
A custom loop for TR and multiple GPUs wouldn't be terribly hard, but as pointed out, installing blocks on GPUs voids the warranty. In a case like the O11 with more than sufficient airflow, I wouldn't bother unless the rig overheats even with good fans and a TRX40-specific air cooler.
@xionis, here's a list of the currently known/speculated on parameters concerning functional memory pooling in Daz Studio/Iray:
GPU Hardware
Two (or more) NVLink-compatible GPUs (any two 2070 SUPER or higher RTX GPUs) required.
One (or more) NVLink bridge connectors physically compatible with those GPUs required.
Nvidia Drivers
Driver operating mode: WDDM/TCC - restricted to WDDM only on GeForce RTX cards as of January 2019 (see these two posts)
Driver SLI configuration: enabled/disabled
Daz Studio/Iray
Daz Studio Version: 4.12.1.55 Beta x64 or higher required
Selected Photoreal Devices: CPU, GPU 1, GPU 2, etc.
NVLink Peer Group Size: 0, 1, 2, etc.
Note: As indicated above, TCC mode is not presently known to be supported (at least officially) on any GeForce cards. This is potentially a problem for getting memory pooling to work with 2070 SUPER, 2080, 2080 SUPER or 2080Ti GPUs since it is supposedly essential for getting GPUDirect (the underlying technology behind memory pooling between Nvidia cards) to work properly in Windows due to a longstanding operating system limitation of some sort. However real-world testing has so far indicated that enabling SLI on these cards functions as a workaround to this limitation.
So what it all really boils down to is whether any combination of SLI on/off, WDDM/TCC mode toggle (if possible - directions for testing here), GPU1/2 selected under Photoreal Devices, NVLink Peer Group size (0-2) result in successful GPU only rendering of an Iray scene in excess of 8GBs.
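Since the combinations above are easy to lose track of while testing, here's a small sketch that just enumerates the full test matrix. The labels are taken from the list above; the idea of treating each combination as one manual render test (of a scene known to exceed 8 GB on a single card) is my framing, not anything Daz documents.

```python
from itertools import product

# Hypothetical test matrix for NVLink memory pooling in Daz Studio/Iray.
# Each combination would need one manual render test of a scene known
# to exceed 8 GB on a single card.
sli_states = ["SLI on", "SLI off"]
driver_modes = ["WDDM", "TCC"]            # TCC may not be switchable on GeForce
peer_group_sizes = [0, 1, 2]
device_selections = ["GPU 1 only", "GPU 2 only", "GPU 1 + GPU 2"]

combos = list(product(sli_states, driver_modes, peer_group_sizes, device_selections))
print(f"{len(combos)} combinations to test")
for sli, mode, peers, devices in combos:
    print(f"{sli:7} | {mode:4} | peer group {peers} | {devices}")
```

In practice most of these can be skipped quickly (e.g. all TCC rows if the mode toggle fails on GeForce), but writing them out keeps the testing honest.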
By the way, you should bug report any outright application crashes you get as a result of playing around with this stuff directly to Daz asap. The fact that you are getting application crashes rather than just no change in observable behavior at all is actually a good sign, since it means that it should work, and that getting it to actually work is just a matter of bug fixing, once those bugs are reported by the handful of users (like you) with hardware capable of testing things out.
This next guy is using a heater core, not a full coolant radiator. The coolant connectors are a more friendly size, and a number of people that do this have gone the heater core route, but a heater core is a bit smaller than a radiator...
He was quite happy with his temps after it was all said and done.
There are numerous other examples, plus Gamers Nexus recently used a fairly large radiator for their 9980XE overclock livestream. It's not quite the same thing, though; they had four 200 mm fans strapped to the front of the rather large radiator they picked. Temps were manageable, but the CPU hit its hard limit, so while Steve did manage a very respectable overclock, there were some niggling issues with the overclock that would take more time to chase down, which would have bored the livestream audience there at the tail end, as is usual when trying to eke out the last few percentage points of a maxed out overclock. Note that I'm not looking to overclock myself, at least not these days, but I do have to give props to the 500+ watts he was pushing through the CPU.
A few people have gone the car radiator route to passively water cool their PCs, no fans needed, but the examples I found weren't really looking to do extreme overclocking or anything like that. One guy even experimented with a geothermal loop, which was rather interesting... Another guy gave an 'after 1 year' update, where he drained his system and it was still quite clean despite mixing copper and aluminum. He used a new radiator though, not something from a junkyard. He did note that over a longer period of time he expected to see a bit of corrosion, but he was planning on more diligent maintenance of the loop in the coming years as the components aged.
Anyways, I already know you are skeptical from previous posts, and I wasn't suggesting the OP go this route, as there's a fair amount of DIY involved. I just thought the OP would find it amusing, and I like to amuse people occasionally. Plus I didn't want to clutter up the benchmark thread, hence why I posted here instead, as most people won't be paying attention to this thread in the first place.
By the time Cyberpunk 2077 releases in September, the next gen Nvidia cards may be out, or at the very least announced. Just food for thought. I plan on probably disappearing around that time as well, LOL.
At any rate, two 2080 Supers will easily best a 2080 Ti in speed. If you ever manage to get SLI working properly, you'll have more VRAM as well. Even if this extra pool of VRAM is just for texture data like Richard said, that is a huge chunk of data that will help free up space.
It has to be possible. The Vray guys got this working a long time ago on the 2080 Ti and 2080 in their experiments. They proved it by rendering a scene that was too large for a single card. They also observed a small performance hit from using SLI, but this hit was VERY small: across 4 different scenes, the differences were only a few percentage points in 3 of them. The 2080 Ti's link is faster, so the 2080 Super will experience a slightly higher hit in SLI. So if you know your scene is going to fit under 8 GB, then you can disable SLI. If your renders are running the same speed with SLI on and off, that would probably indicate something is not right.
Concerning TCC mode, this is what they say in the Vray documentation:
To use NVLINK on supported hardware, NVLINK devices must be set to TCC mode. This is recommended for Pascal, Volta and Turing-based Quadro models. For GeForce RTX cards, a SLI setup is sufficient. Also note that to prevent performance loss, not all data is shared between devices.
So there's that. Of course Vray is not Iray, but it would seem like Iray should be able to match what Vray can do.
C'est la vie!
The guy flat out says the pump dies after an hour. The radiator has no fans, so when the loop saturates the CPU will just overheat. So it won't actually work except as a stunt.
That the guy doesn't understand thermodynamics doesn't mean I don't.
Also, did you notice he never shot the rig while it was on? That's because that pump would have drowned out his audio.
But you should definitely go out and mix metal in a loop. Galvanic corrosion isn't a thing except for you know chemistry and physics.
BTW, that big rad GN used had decent fin density and had 4 fans pushing air through it. And it still overheated. No amount of tweaking voltage by hundredths of a volt was going to change that. When you saturate a loop that fast, it means you're way past the loop's limits.
Except that even without a fan, the radiator is still radiating heat. Cars do this all the time when they are parked, and older cars don't even have electric fans that stay on when the engine is shut off.
It won't radiate heat nearly as fast, but it still radiates heat. And this is how hot water radiators in houses work as well. No fan needed.
Linus even has a video where the promo pic shows a house hot water radiator sitting next to a computer, the implication being that the computer heats the house a bit... I haven't watched the video though, 'cuz I already know about computers heating up rooms a bit. I probably should though, just to see if he actually attached his computer to a house hot water radiator, but that's just being silly...
Adding a fan will help of course, but as I noted, a LOT of people have created passive systems, well, passive in the sense that they have no fans. And as I noted, Linus even built a 9980XE system with a bunch of radiators that was passively cooled.
Here's Steve's recap.
He never said the system overheated. He did note that the temps were running in the 80s though, no doubt due to the 500W he was pushing through the system at some point. He was pretty happy with the cooling solution, although of course liquid nitrogen would probably yield better results. He wanted to see how well he could do on water, which was the whole point of the livestream.
To your point, when pushing the overclocking limits, the CPU just can't shed heat fast enough, due to the aforementioned thermodynamics. It takes time to transfer heat from one medium to another. Normally this isn't a big issue, but when pushing a LOT of power through the system, you will hit a point where the CPU HAS to throttle, or worse just errors out, or, if something goes wrong and you push too much power through it, just fries itself. Kinda like pushing 1000 watts through a laptop CPU: it will die pretty much immediately, because it was never designed for that amount of power, or for the resulting heat that starts melting things...
This is also why liquid nitrogen is used so often in extreme overclocking. The huge temperature differential helps the CPU shed more heat through the exchange process, as the 'heat receptacle', in this case the LN2, is able to pull heat through the CPU plate a bit faster. Plus said LN2, even though it's usually not being dumped on the whole motherboard, drops the ambient temps around the CPU a bit, which helps the VRMs and such.
Again, the key to passively cooled systems is surface area. Sure, fin density is higher on, say, a heater core or your 'typical' AIO cooler, but if we are talking a VERY LARGE radiator (say one that is used to cool a 110+ HP diesel tractor travelling at 2-3 miles per hour, burning 8 or more gallons of fuel an hour while pulling a large multi-bottomed plow, all while keeping coolant temps below 180 degrees F/82 C), there's certainly enough surface area to deal with the heat, even with 'just' a single fan pushing air through it... Also, on the flipside, if the fins are spaced farther apart, this reduces the airflow resistance through the fins, simply due to the lower cross-sectional area of said fins, allowing more air to 'naturally' pass through, or providing less resistance to air being forced through.
Tighter fins are designed for situations where you have fans spinning at a high rate pushing a lot of CFM through the radiator, but if you are looking at quieter solutions, well, it's a tradeoff: larger radiators vs. tighter fins.
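To put some rough numbers on the surface area argument, here's a back-of-envelope sketch using Newton's law of cooling, Q = h * A * dT. The convection coefficients and heat load below are generic ballpark assumptions on my part, not measured figures from any of the builds mentioned:

```python
# Rough sketch (assumed numbers): radiator surface area needed to shed a
# given heat load, passively vs. with forced airflow, from Q = h * A * dT.
def required_area_m2(watts, h_w_per_m2k, delta_t_c):
    """Fin area needed to dissipate `watts` at a coolant-to-air delta of dT."""
    return watts / (h_w_per_m2k * delta_t_c)

heat_load = 400.0   # W, e.g. a hot CPU + GPU loop (assumed)
delta_t = 20.0      # coolant running 20 C above ambient (assumed)

passive = required_area_m2(heat_load, h_w_per_m2k=10.0, delta_t_c=delta_t)  # still air
forced = required_area_m2(heat_load, h_w_per_m2k=60.0, delta_t_c=delta_t)   # fan-assisted

print(f"passive: ~{passive:.1f} m^2 of fin area")
print(f"forced:  ~{forced:.2f} m^2 of fin area")
```

The point of the sketch is just the ratio: a fan is worth several times its area in fins, which is why a huge, loosely finned car radiator can get away with little or no airflow while a dense PC radiator can't.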
Or doing completely different stuff, like pumping your coolant through a geothermal ground loop. Of course, the surface area of said ground loops, when you do the math, is quite significant, not to mention the sheer volume of soil around the loop. Plus the ambient temp of the soil around geothermal loops is often lower than the ambient air in summer months, and warmer in winter months, which is why geothermal loops work so well for home heating.
This is why a lot of people that go the external radiator route report temps in the 50s and 60s during 'regular' use over time, even for heavier workloads, as noted in the second article I linked above. The other key here is that since said radiators are often mounted externally, separate from the case, the ambient airflow situation around them is improved. This should also help the motherboard itself, as there's less hot air to push through the case and fewer items radiating heat inside it (i.e. the radiators and associated hardware), and you won't be pushing VRM heat and such through your already warm radiators, which would lower the temperature differential of the air.
Anyways, yeah I get that you are trying to show off your 'superior intelligence' here, so nothing to see here, move along.
Edit: forgot to add the link to this video. This is the '1 year checkup' where the guy used an aluminum radiator.
He notes that the coolant used wasn't showing metal degradation YET, but that he'd be more diligent about cleaning the cooling loop in the future, as he expected it may take a while for such degradation to show up. His theory being that it was a very slow process, similar to how slowly copper ages as a roofing product. It takes a while to turn green...
Go for it then. Do not listen to people who do this for a living and believe what you want. The guy actually saying he didn't run the loop stable shouldn't matter to you.
The LTT video using a radiator to heat a room was a failure as they didn't actually push any serious heat through it because they knew they couldn't.
Passive systems have to be built to very specific requirements and you cannot do it in something like a TRx40 + 3 GPU rig without having radiator surface area approaching 8' x 10'.
The reason the guy destroying his system with galvanic corrosion didn't see any rust? Large volume of water + small particles. While there was lots of aluminum, there was very little copper, just the cold plates. But the cold plates are getting pitted as they corrode. This will degrade performance and eventually degrade the O-ring seal until it leaks. With that much water you'd likely never see it (only the copper corrodes).
Again please go do it if you don't believe me or physics and chemistry.
My old Zalman external tower fanless rad managed to dissipate 150W of power from the CPUs and GPUs I ran on it. The pump wasn't completely silent, but not bad.
Coolers have gotten much better since then and I no longer feel the need to mess with water cooling.
At that low a draw you can easily set up a system, if you choose the right PSU, to run with the fans off or at very low rpm. Corsair's modular PSUs can all run below 40% load with no fan. It's also easy enough to go into the UEFI and set all the fan curves to 0 rpm if the temps stay low. Just realize that 150W is a modern CPU and about half a modern GPU, so you can't really stress a system and keep the fans off without getting into exotic stuff like phase change cooling.
Except that even without a fan, the radiator is still radiating heat. Cars do this all the time when they are parked, and older cars don't even have electric fans that stay on when the engine is shut off.
The thing is, a parked car is never going to radiate peak heat while parked (unless you gun the engine at idle, in which case it WILL overheat unless the radiator has active cooling; just ask any motorcyclist). And any time it is in motion, the front grill of the car coupled with the fin design of the rad means you'll have airflow through it at an intensity proportional to the speed at which the car is traveling.
It won't radiate heat nearly as fast, but it still radiates heat. And this is how hot water radiators in houses work as well. No fan needed.
Household steam/water radiators are actually a COUNTER example to effective heat dissipation in a computing system. A steam/water system generates a consistent level of heat through INTERMITTENT boiler action which is then radiated NON-EXHAUSTIVELY through its radiators. If you were to apply a constant heat source to such a system, the boiler would eventually explode BECAUSE of the connected radiators not being able to radiate enough heat, by design.
ETA: To be clear, I'm not saying that passive watercooling is impossible/a bad idea (my production pc right now is actually a semi-passively watercooled system. And the performance/noise ratio on it is wonderful.) Just that these happen to be bad examples.
kenshaw: Just realize that 150W is a modern CPU and about half a modern GPU...
150W is an entire 1070. Gaming. Still relevant :p Iray is less power hungry.
The rad couldn't quite manage 200W TDP (total for the entire card) gaming cards during the summer [e.g. stock 4850 + Core2, clocked 5850, 7970 GHz].
The 150W I quoted was the final TDP draw of the Fanless rad that I calculated from wall readings after removing the parts not being cooled (VRMs, memory, not under water).
What I'm saying is that loads of people have run car radiator setups over the years. They work. Just (as you said above) the corrosion maintenance is never really worth it.
1) I was already planning on aluminum water blocks, as I noted in my original post on this subject, for those paying attention. EVGA, etc. do make them; you just have to do some digging. There are also some copper car radiators out there, and then there's the whole question of corrosion inhibitors (i.e. antifreeze, etc.), as cars already mix metals. Thermostats are often copper and are used alongside aluminum radiators, but generally hold up for quite a long time before they eventually fail. Same for metal engine blocks vs aluminum radiators. The question then becomes the expected life expectancy of such usage. When using corrosion inhibitors, you do need to be mindful of the gasket and other materials used, as some inhibitors may react adversely with them. Again, it's about prolonging the life of the coolant system. You can also put a sacrificial anode into such systems, which incidentally is done in most home water heaters btw... Swapping out those anodes after say 7 years can help extend the life of your home water heater.
2) In the first video I linked with the car radiator, it was the water pump that was failing, not the cooling system as a whole. The video author noted that he would probably go with a different water pump in the future. He did note that coolant temps were staying very low pretty much all the time, but of course if the pump kicks out after an hour, that kills the circulation, and the CPU will quickly heat up.
3) Linus judged his 'heat this room' effort with the home hot water radiator a failure because despite flushing the radiator multiple times (just watched that video), there was still gunk in the radiator, which ended up gunking up his GPU. Moral here: used radiators are a bad idea, and even if you buy the chemicals specifically designed to 'un-gunk' radiators, well, that's a hit or miss process. Filtration would have helped in this case; they do make filters for water loops, and I have one of those lying around for some other purpose. He also noted the lagging heat curve, as it took a bit for the large amount of water to heat up. The big takeaway I got from his video was that the PC itself already is a pretty effective radiator/heat source, so you aren't going to gain anything by pumping the same amount of thermal energy into a house radiator. It's still the same number of joules you are dealing with.
4) Here's the link to that passively cooled 18 core 9980XE I've mentioned a few times:
That's using the more 'traditional' PC radiators.
Linus deemed it a big, albeit expensive, success, even with the GPU and CPU both incorporated into the loop. Temps stayed below 66c for the CPU, and 61c for the graphics card. I'd be curious to know if it was a single loop, and if so whether the graphics card was ahead of the CPU in the loop or the other way around, just for academic purposes.
One other note: vehicle radiators generally push a lot more BTU/joules through them, by at least an order of magnitude, than your typical PC can ever generate. That's why I can use a 5500W Honda generator to run a PC with no issues, despite the low displacement of its 'lawnmower motor sized' engine. Most PCs can run just fine on a 15 amp/1800 watt circuit, with room to spare. Car/truck motors produce a LOT more energy, again by at least an order of magnitude, compared to your typical 15 amp circuit.
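The order-of-magnitude claim can be sketched out with rough numbers. Everything below is an assumed, rounded figure (the engine size, efficiency, and PC load are illustrative, not measurements); the common rule of thumb is that roughly a third of a gasoline engine's fuel energy is rejected through the cooling system:

```python
# Back-of-envelope sketch (all numbers assumed/rounded): the heat a car
# radiator is sized to reject vs. the heat a high-end PC produces.
fuel_input_kw = 500.0                # ~150 kW (200 HP) mechanical at ~30% efficiency
coolant_heat_kw = fuel_input_kw / 3  # ~1/3 of fuel energy goes out the radiator
pc_load_kw = 0.8                     # beefy Threadripper + multi-GPU box (assumed)

ratio = coolant_heat_kw / pc_load_kw
print(f"radiator duty ~{coolant_heat_kw:.0f} kW vs PC ~{pc_load_kw} kW "
      f"(~{ratio:.0f}x headroom)")
```

Even if these assumptions are off by a factor of a few, a full-size car radiator is still sized for well over an order of magnitude more heat than any PC will ever feed it.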
There are other considerations of course, but the reason I linked a tractor radiator in my last post is to illustrate how effective such radiators can be even when moving at very low speeds (3 MPH is typical walking speed, btw), so you don't get a lot of air draft. You'll probably get more air movement from your air conditioning and room fan than we are talking about here, and if there's a light wind moving in the same direction as the tractor, that pretty much negates any 'windage' airflow, and you often end up with a large dust cloud moving with you in the process, which isn't fun... The single fan sucking air through the radiator is by far the major source of air movement in this usage case, with 'wind' airflow being negligible.
Note that, in the particular usage case I'm referencing here, the pyrometer, which measures exhaust temperatures, routinely floated between 800-900 degrees Fahrenheit. Yes, I spent a significant portion of my youth operating and performing routine maintenance on one of these bad boys... The pyrometer readings were important, as you didn't want to overheat/cook the turbo, which a previous owner had strongly recommended not doing again, as turbos are expensive to replace. It was pretty easy to keep the pyrometer at or below 900 though, as long as the motor wasn't being 'overworked'. Picking a slower gearing was the solution in this case, still operating at full throttle, and it also was a bit easier on the implements (bearings and such heat up more at higher speeds, which increases the rate of wear)...
Anyways, I'm digressing here. My point STILL being that total surface area is very much a consideration with cooling, and that passive cooling HAS been done. With what I'm considering, there will still be a small handful of fans in play, mainly the chipset fan, the power supply fan, and one of those 3 speed floor fans set to low to keep the noise levels down. I'm after a very quiet albeit still robust build, which is why I'm looking at water cooling. Most people end up thermal throttling because they under-estimated the amount of heat dissipation area they should employ, i.e. the total surface area of the radiators. The solution here is pretty easy: go with a bigger radiator or hook another one up in series. Linus's 9980XE build essentially does exactly that: it has four radiators hooked up in series. The longer the coolant is in contact with the radiators, the more time it has to exchange heat with the vanes. It's all about the surface area...
Water flow rate is also important, but as Linus noted in his 9980XE video, the 'typical' water pump that he picked handled all of those radiators just fine.
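On the flow rate point, there's a simple relation worth sketching: the coolant's temperature rise across the blocks is dT = P / (m_dot * c_p). The pump flow rates and the 500 W load below are my assumptions for illustration, not measurements from Linus's build:

```python
# Sketch of why even a modest pump suffices: coolant temperature rise
# across the blocks is dT = P / (m_dot * c_p). Numbers are assumed.
def coolant_delta_t(power_w, flow_lpm):
    """Temperature rise (C) of water absorbing `power_w` at `flow_lpm` L/min."""
    c_p = 4186.0                  # specific heat of water, J/(kg*K)
    mass_flow = flow_lpm / 60.0   # L/min -> kg/s (1 L of water ~ 1 kg)
    return power_w / (mass_flow * c_p)

for flow in (1.0, 2.0, 4.0):      # plausible range for a typical loop pump (assumed)
    dt = coolant_delta_t(500.0, flow)
    print(f"{flow:.0f} L/min -> coolant rises {dt:.1f} C across a 500 W loop")
```

Even at a modest 1 L/min, the water only rises a handful of degrees per pass, which is why the radiator area, not the pump, tends to be the limiting factor.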
150W is a modern CPU + half a modern GPU. Clearer now?
If someone really thinks they can build a production rig and not just a stunt that is cooled with a car radiator, have at it. It's your money. Just come and say your mea culpas when it blows up in your face.
Comments
I saw in the benchmarking thread where you were talking about water cooling your 3970X, but then switched over to air cooling for now. Didn't want to clutter that thread any more than we already have, so I decided to dig this thread back up instead.
This is more of a silly idea than a serious suggestion, but I thought I'd share anyways. I priced some car radiators at my local auto parts store, and they are surprisingly cheap. I saw one newly manufactured radiator for only $85 that looked particularly well suited for a DIY build.
Of course the radiator is still pretty large, about the size of a large ATX case, but I figure if you built a frame to attach the radiator and motherboard to, and then either canniabalized an old PC case or something for the motherboard tray and such, and then find some decent mesh grating to encase the system so that my cats wouldn't crawl in...
It'd definitely have a Mad Max/Road Warrior kinda feel, but that actually sounds interesting to me. All of this 'clean cut RGB future stuff' gets old after a while.
Of course, if someone were to actually do this (and at least one person on Youtube has used a car radiator for an overclock attempt of a FX 8350), you'd need to find the necessary adaptors, and of course watch your materials. Most car radiators are aluminum these days, so you'd need to find aluminum blocks for your CPU and GPUs. I did find a few aluminum blocks, but the majority of water blocks are copper, at least from what I saw in my parts browsing. There are also some 'radiator hose to NPT' adaptors, so I figured that a good way to go would be a pair of 4-6 port NPT manifolds and attaching them to said radiator hose adaptors. Then run the hoses to the pumps from the bottom manifold, returning through the top one.
You could also run an extra loop directly from the bottom manifold to the top, along the side of the radiator, with a clear tube so that you could visually check your water level without having to crack open the radiator cap. The car radiator hoses attach at the top and bottom of the back of the radiator, so a hose that passed directly between these two points, with no pumps attached, should 'equalize' its level with the level inside of the radiator when the system pumps are idle. And of course you'd pump from bottom to top, so that you aren't sucking air if the level starts to drop a bit. The top manifold should be well above the rest of the system, due to the radiator height, so it would require a significant drop in coolant level before the radiator fluid level would drop below the level of the water blocks.
The other reason for using NPT manifolds with multiple ports is to allow multiple circuits for the water cooling loops. This way you could isolate the CPU loop from the GPU loop(s). This would require multiple pumps of course, and you'd only shave 1-2 degrees C off of each coolant loop doing this, but every little bit might help with the heat throttling on the 3970X...
The guy that car radiator'ed his 8350 noted that, thanks to the high capacity and large surface area of the car radiator, he didn't even have to bother with a fan on it, as his coolant temps weren't climbing more than a couple of degrees during use. He did ziptie a fan above the CPU water block to cool the VRMs around the CPU though, so that might have helped the water cooling a tiny bit. If I do decide to go 'ghetto' with my next build, I'll probably still grab a portable house fan and point it at the radiator and the rest of the computer for the summer months, since I often use a fan anyways in the summer to keep the air moving in my work area.
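For anyone curious why the big coolant volume buys so much slack before a fan even matters, here's a quick back-of-envelope in Python. All of the volumes and the 200 W load are my own rough assumptions, not measurements from his build:

```python
# How slowly does a big coolant volume warm up? Simple energy balance:
# t = m * c * dT / P. The volumes and wattage below are assumptions.

C_WATER = 4186.0  # J/(kg*K); 1 liter of water is ~1 kg

def minutes_to_warm(liters: float, delta_t_c: float, watts: float) -> float:
    """Minutes for `liters` of water to rise `delta_t_c` degrees C while
    absorbing `watts` continuously, ignoring any heat shed to the air."""
    seconds = liters * C_WATER * delta_t_c / watts
    return seconds / 60.0

# ~4 liters in a car radiator loop vs ~0.5 L in a typical AIO, 200 W in:
print(f"Car radiator loop: ~{minutes_to_warm(4.0, 10, 200):.0f} min per 10 C rise")
print(f"Typical AIO:       ~{minutes_to_warm(0.5, 10, 200):.1f} min per 10 C rise")
```

This ignores the heat the radiator is shedding the whole time, so real warm-up is even slower; the point is just that the thermal mass alone gives you on the order of ten times the lag of a small loop.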
Just wanted to share my madness here. It sounds silly, until you start to think about the huge amount of surface area on your average car radiator. You'd also have to be mindful of leaks, but most car radiator systems leak very little once the hose clamps are suitably cinched up. They tend to lose liquid mainly because the water is often heated near/above the boiling point in the car engine, and excess overflow from expansion gets dumped into the overflow bottle, where a bit of it will evaporate over time.
In any case, I do hope that you find a better water block and radiator system for your 3970X. Wendell at Level 1 Techs recently showcased a build where he did a bit of watercooling. Here's the link where he talks about Threadripper cooling:
In the meantime, sounds like your air cooler is doing the job sufficiently for now.
/mad DIY engineer steps off of podium...
A car radiator is very unsuited to use as PC cooling. The fin density is terrible, which means you'd need to push a lot of air across the fins to cool the water enough, and that means loud fans. Also the volume of water is huge, so you'd need a pretty beefy pump; again, that would be loud.
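For a sense of scale on the airflow side, the bulk airflow needed just to carry the heat away follows from Q = m_dot * c_p * dT. The air properties and the 10 C air temperature rise below are assumed values, and this is only the floor; the real fight with dense fins is the pressure drop, which this sketch ignores entirely:

```python
# Minimum airflow to carry heat off a radiator, from Q = m_dot * c_p * dT.
# Air constants and the 10 C air temperature rise are assumed values;
# this is the bulk-flow floor and ignores fin pressure drop entirely.

RHO_AIR = 1.2        # kg/m^3, near room temperature
CP_AIR = 1005.0      # J/(kg*K)
M3S_TO_CFM = 2118.88

def required_cfm(watts: float, air_delta_t_c: float) -> float:
    """CFM of air needed to remove `watts` if the air warms by
    `air_delta_t_c` degrees C on its way through the fins."""
    m3_per_s = watts / (RHO_AIR * CP_AIR * air_delta_t_c)
    return m3_per_s * M3S_TO_CFM

for load in (150, 300, 500):
    print(f"{load:>3} W -> ~{required_cfm(load, 10):.0f} CFM at a 10 C air rise")
```

The raw CFM numbers aren't scary on their own; the loudness argument comes from the static pressure needed to force that air through a tight fin stack.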
A custom loop for TR and multiple GPUs wouldn't be terribly hard, but as pointed out, installing blocks on GPUs voids the warranty. In a case like the O11 with more than sufficient airflow I wouldn't bother unless the rig overheats even with good fans and a TRx40-specific air cooler.
@xionis, here's a list of the currently known/speculated-on parameters concerning functional memory pooling in Daz Studio/Iray:
Note: As indicated above, TCC mode is not presently known to be supported (at least officially) on any GeForce cards. This is potentially a problem for getting memory pooling to work with 2070 SUPER, 2080, 2080 SUPER or 2080Ti GPUs since it is supposedly essential for getting GPUDirect (the underlying technology behind memory pooling between Nvidia cards) to work properly in Windows due to a longstanding operating system limitation of some sort. However real-world testing has so far indicated that enabling SLI on these cards functions as a workaround to this limitation.
So what it all really boils down to is whether any combination of SLI on/off, WDDM/TCC mode toggle (if possible - directions for testing here), GPU1/2 selected under Photoreal Devices, NVLink Peer Group size (0-2) result in successful GPU only rendering of an Iray scene in excess of 8GBs.
By the way, you should bug report any outright application crashes you get as a result of playing around with this stuff directly to Daz asap. The fact that you are getting application crashes, rather than just no change in observable behavior at all, is itself actually a good sign, since it means that it should work. Meaning that getting it to actually work is just a matter of bug fixing, once those bugs are reported by the handful of users (like you) with hardware capable of testing things out.
Testing for TCC driver mode support:
Run "nvidia-smi -i 0 -fdm TCC". This will attempt to switch your primary GPU into TCC driver mode. Run "nvidia-smi -i 0 -fdm WDDM" to set things back to how they first were.
This guy might disagree with you there kenshaw...
This next guy is using a heater core, not a coolant radiator. The coolant connectors are a more friendly size, and a number of people that do this have gone the heater core route, but a heater core is a bit smaller...
https://www.instructables.com/id/Computer-liquid-cooling-with-Car-parts/
He was quite happy with his temps after it was all said and done.
There are numerous other examples, plus Gamers Nexus recently used a fairly large radiator for their 9980XE overclock livestream. It's not quite the same thing, though, since they had four 200mm fans strapped to the front of the rather large radiator they picked. Temps were manageable, but the CPU hit its hard limit, so while Steve did manage a very respectable overclock, there were some niggling issues that would have taken more time to chase down, which would have bored the livestream audience quite a bit there at the tail end, as is usual when trying to eke out the last few percentage points of a maxed-out overclock. Note that I'm not looking to overclock myself, at least not these days, but I do have to give props to the 500+ watts he was pushing through the CPU.
A few people have gone the car radiator route to passively water cool their PCs, no fans needed, but the examples I found weren't really looking to do extreme overclocking or anything like that. One guy even experimented with a geothermal loop, which was rather interesting... Another guy gave an 'after 1 year' update, where he drained his system and it was still quite clean despite mixing copper and aluminum. He used a new radiator though, not something from a junkyard. He did note that over a longer period of time he expected to see a bit of corrosion, but he was planning on more diligent maintenance of the loop in the coming years as the components aged.
Anyways, I already know you are skeptical from previous posts, and I wasn't suggesting the OP go this route, as there's a fair amount of DIY involved. I just thought the OP would find it amusing and I like to amuse people occasionally. Plus I didn't want to clutter up the benchmark thread, hence why I posted here instead, as most people won't be paying attention to this thread in the first place.
C'est la vie!
By the time Cyberpunk 2077 releases in September the next gen Nvidia's may be out, or at the very least announced. Just food for thought. I plan on probably disappearing around that time as well, LOL.
At any rate, two 2080 Supers will easily best a 2080ti in speed. If you ever manage to get SLI working properly, you'll have more VRAM as well. Even if this extra pool of VRAM is just for texture data like Richard said, that is a huge chunk of data that will help free up space.
It has to be possible. The Vray guys got this working a long time ago on the 2080ti and 2080 in their experiments. They proved it by rendering a scene that was too large for a single card. They also observed a small performance hit from using SLI, but this hit was VERY small. Across 4 different scenes, the differences were only a few percentage points in 3 of them. The 2080ti's link is faster, so the 2080 Super will experience a slightly higher hit in SLI. So if you know your scene is going to fit under 8GB, then you can disable SLI. If your renders are running the same speed with SLI on and off, that would probably indicate something is not right.
Concerning TCC mode, this is what they say in the Vray documentation:
To use NVLINK on supported hardware, NVLINK devices must be set to TCC mode. This is recommended for Pascal, Volta and Turing-based Quadro models. For GeForce RTX cards, a SLI setup is sufficient. Also note that to prevent performance loss, not all data is shared between devices.
So there's that. Of course Vray is not Iray, but it would seem like Iray should be able to match what Vray can do.
The guy flat out says the pump dies after an hour. The radiator has no fans, so when the loop saturates the CPU will just overheat. So it won't actually work except as a stunt.
That the guy doesn't understand thermodynamics doesn't mean I don't.
Also, did you notice he never shot the rig when it was on? That's because that pump would have crushed his audio.
But you should definitely go out and mix metal in a loop. Galvanic corrosion isn't a thing except for you know chemistry and physics.
BTW, that big rad GN used had decent fin density and used 4 fans pushing air through it. And it still overheated. No amount of tweaking voltage by hundredths of a volt was going to change that. When you saturate a loop as fast as that, it means you're way past the loop's limits.
Except that even without a fan, the radiator is still radiating heat. Cars do this all the time when they are parked, and older cars don't even have electric fans that stay on when the engine is shut off.
It won't radiate heat nearly as fast, but it still radiates heat. And this is how hot water radiators in houses work as well. No fan needed.
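To put rough numbers on 'still radiates heat': with no fan you're relying on natural convection, where the heat-transfer coefficient is tiny. The ~5 W/m²K figure and the 20 C delta below are my own ballpark assumptions, just to show why the fin area has to be big:

```python
# Passive (no fan) dissipation sketch: Q = h * A * dT with a rough
# natural-convection h. Both h and the temperature delta are assumptions.

H_NATURAL = 5.0  # W/(m^2*K), rough free-convection coefficient

def passive_watts(area_m2: float, delta_t_c: float) -> float:
    """Heat shed passively by `area_m2` of fin area running
    `delta_t_c` degrees C above the room."""
    return H_NATURAL * area_m2 * delta_t_c

# Fin area needed to shed 300 W with coolant 20 C above ambient:
needed = 300 / (H_NATURAL * 20)
print(f"~{needed:.0f} m^2 of fin area for 300 W passive at a 20 C delta")
```

A few square meters of fin area is roughly what a full-size car radiator brings to the table and a typical PC radiator doesn't, which is why the fanless builds mentioned in this thread can get away with it.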
Linus even has a video where the promo pic shows a house hot water radiator sitting next to a computer, the implication being that you'd use the computer to heat the house a bit... I haven't watched the video though, 'cuz I already know about computers heating up rooms a bit. I probably should though, just to see if he actually attached his computer to a house hot water radiator, but that's just being silly...
Adding a fan will help of course, but as I noted, a LOT of people have created passive systems, well, passive in the sense that they have no fans. And as I noted, Linus even built a 9980XE system with a bunch of radiators that was passively cooled.
Here's Steve's recap.
He never said the system overheated; he did note that the temps were running in the 80s though, no doubt due to the 500W he was pushing through the system at some point. He was pretty happy with the cooling solution, although of course liquid nitrogen would probably yield better results. He wanted to see how well he could do on water, which was the whole point of the livestream.
To your point, at some point when pushing the overclocking limits, the CPU just can't shed heat fast enough, due to the aforementioned thermodynamics. It takes time to transfer heat from one medium to another. Normally this isn't a big issue, but when pushing a LOT of power through the system, you will hit a point where the CPU HAS to throttle, or worse yet just errors out, or if something goes wrong and you push too much power through it, it just fries itself. Kinda like pushing 1000 watts through a laptop CPU. It will die pretty much immediately because it was never designed for that amount of power, and to a related extent the resulting heat that starts melting things...
This is also why liquid nitrogen is used so often in extreme overclocking. The huge temperature differential helps the CPU to shed a bit more heat through the exchange process, as the 'heat receptacle', in this case the LN2, is able to suck the heat out a bit faster through the CPU plate. Plus said LN2, even though it's usually not being dumped on the whole motherboard, drops the ambient temps around the CPU a bit, which helps the VRMs and such.
Again, the key to passively cooled systems is surface area. Sure, fin density is higher on, say, a heater core or your 'typical' AIO cooler, but if we are talking a VERY LARGE radiator, say one that is used to cool a 110+ HP diesel tractor travelling at 2-3 miles per hour, burning 8 or more gallons of fuel an hour while pulling a large multi-bottomed plow, while keeping coolant temps below 180 degrees F/82 C, there's certainly enough surface area to deal with the heat, even with 'just' a single fan pushing air through it... Also, on the flipside, if the fins are spaced farther apart, this reduces the airflow resistance through the fins, simply due to the lower surface area cross section of said fins, allowing more air to 'naturally' pass through the fins, or providing less resistance to air being forced through them.
Tighter fins are designed to work in situations where you have fans spinning at a high rate pushing a lot of CFM through the radiators, but if you are looking at quieter solutions, well it's a tradeoff, said tradeoff being larger solutions vs. tighter fins.
Or doing completely different stuff like pumping your coolant through a geothermal ground loop, but of course the surface area of said ground loops, when you do the math, is quite significant, not to mention the sheer volume of soil around the loop; plus the ambient temp of the soil around said geothermal loops is often lower than the ambient air in summer months. And warmer in winter months of course, which is why geothermal loops work so well for home heating.
This is why a lot of people that go the external radiator route are reporting temps in the 50s and 60s during 'regular' use over time, even for heavier workloads, as noted in the second article I linked above. The other key here is that since said radiators are often mounted externally, separate from the case, this improves the ambient airflow situation around said radiators. This should also help with the motherboard itself, as there's less hot air to push through the case, and fewer items radiating heat inside the case (i.e. the radiators and associated hardware), and you won't be pushing VRM heat and such through your already warm radiators, which lowers the temperature differential of said air.
Anyways, yeah I get that you are trying to show off your 'superior intelligence' here, so nothing to see here, move along.
Edit: forgot to add the link to this video. This is the '1 year checkup' where the guy used an aluminum radiator.
He notes that the coolant used wasn't showing metal degradation YET, but that he'd be more diligent about cleaning the cooling loop in the future, as he expected it may take a while for such degradation to show up. His theory being that it was a very slow process, similar to how slowly copper ages as a roofing product. It takes a while to turn green...
Go for it then. Do not listen to people who do this for a living and believe what you want. The guy actually saying he didn't run the loop stable shouldn't matter to you.
The LTT video using a radiator to heat a room was a failure as they didn't actually push any serious heat through it because they knew they couldn't.
Passive systems have to be built to very specific requirements and you cannot do it in something like a TRx40 + 3 GPU rig without having radiator surface area approaching 8' x 10'.
The reason the guy destroying his system with galvanic corrosion didn't see any rust? Large volume of water + small particles. While there was lots of aluminum, there was very little copper, just the cold plates. But the cold plates are getting pitted as they corrode. This will degrade performance and eventually fail at the o-ring seal, where it will leak. With that much water you'd likely never see it (only the copper corrodes).
Again please go do it if you don't believe me or physics and chemistry.
My old Zalman external fanless tower rad managed to dissipate 150W of power from the CPUs and GPUs I ran on it. The pump wasn't completely silent, but not bad.
Coolers have gotten much better since then and I no longer feel the need to mess with water cooling.
At that low a draw you can easily set up a system, if you choose the right PSU, to run with the fans off or at very low rpm. Corsair's modular PSUs can all run at below 40% load with no fan. It's also easy enough to go into the UEFI and set all the fan curves to be 0 rpm if the temps stay low. Just realize that 150W is a modern CPU and about half a modern GPU, so you can't really stress a system and keep the fans off without getting into exotic stuff like phase change cooling.
The thing is, a parked car is never going to radiate peak heat while parked (unless you gun the engine at idle, in which case it WILL overheat unless the radiator has active cooling - just ask any motorcyclist.) And any time it is in motion, the front grill of the car coupled with the fin design of the rad means you'll have airflow through it at an intensity proportional to the speed at which the car is traveling.
Household steam/water radiators are actually a COUNTER example to effective heat dissipation in a computing system. A steam/water system generates a consistent level of heat through INTERMITTENT boiler action which is then radiated NON-EXHAUSTIVELY through its radiators. If you were to apply a constant heat source to such a system, the boiler would eventually explode BECAUSE of the connected radiators not being able to radiate enough heat, by design.
ETA: To be clear, I'm not saying that passive watercooling is impossible/a bad idea (my production pc right now is actually a semi-passively watercooled system. And the performance/noise ratio on it is wonderful.) Just that these happen to be bad examples.
150W is an entire 1070. Gaming. Still relevant :p Iray is less power hungry.
The rad couldn't quite manage 200W TDP (total for entire card) gaming cards during the summer [eg stock 4850 + Core2, clocked 5850, 7970ghz].
The 150W I quoted was the final TDP draw of the Fanless rad that I calculated from wall readings after removing the parts not being cooled (VRMs, memory, not under water).
What I'm saying is loads of people have run car radiator setups over the years. They work. Just (as you said above) the corrosion maintenance is never really worth it.
Couple of things guys.
1) I was already planning on aluminum water blocks, as I noted in my original post on this subject for those paying attention. EVGA, etc. do make them, you just have to do some digging. There are also some copper car radiators out there, and then there's the whole question of corrosion inhibitors (i.e. antifreeze, etc.) as cars already mix metals. Thermostats are often copper, and used alongside radiators, but generally hold up for quite a long time before they eventually fail. Same for metal engine blocks vs aluminum radiators. The question then becomes what is the expected life expectancy of such usage. When using corrosion inhibitors, you do need to be mindful of the gasket and other materials used, as some inhibitors may react adversely with said materials. Again, it's about prolonging the life of the coolant system. You can also put a sacrificial anode into such systems, which incidentally is done in most home water heaters btw... Swapping out those anodes after say 7 years can help extend the life of your home water heater.
2) In the first video I linked with the car radiator, it was the water pump that was failing, not the cooling system as a whole. The video author noted that he would probably go with a different water pump in the future. He did note that coolant temps were staying very low pretty much all the time, but of course if the pump kicks out after an hour that pretty much kills the circulation at that point and the CPU will quickly heat up at that point.
3) Linus judged his 'heat this room' effort with the home hot water radiator a failure because despite flushing the radiator multiple times (just watched that video), there was still gunk in the radiator, which ended up gunking up his GPU. Moral here: used radiators are a bad idea, and even if you buy the chemicals specifically designed to 'un-gunk' radiators, well, that's a hit or miss process. Filtration would have helped in this case; they do make filters for water loops, and I have one of those lying around for some other purpose. He also noted the lagging heat curve, as it took a bit for the large amount of water to heat up. The big takeaway I got from his video was that the PC itself is already a pretty effective radiator/heat source, so you aren't going to gain anything by pumping the same amount of thermal energy into a house radiator. It's still the same number of joules you are dealing with.
4) Here's the link to that passively cooled 18 core 9980XE I've mentioned a few times:
That's using the more 'traditional' PC radiators.
Linus deemed it an 'albeit expensive' big success, even with the GPU and CPU both incorporated into the loop. Temps stayed below 66c for the CPU, and 61c for the graphics card. I'd be curious to know if it was a single loop, and if so whether the graphics card was ahead of the CPU in the loop or the other way around, just for academic purposes.
One other note. Vehicle radiators generally push a lot more BTUs/joules through them, by at least an order of magnitude, than your typical PC can ever generate. That's why I can use a 5500W Honda generator to run a PC with no issues, despite the low displacement of the 'lawnmower motor sized' motor. Most PCs can run just fine on a 15 amp/1800 watt circuit, with room to spare. Car/truck motors produce a LOT more energy, again by at least an order of magnitude, as compared to your typical 15 amp circuit.
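Ballparking that scale claim (every number here is my own rough assumption - engine thermal efficiency and the fraction of fuel energy that goes out through the coolant both vary a lot by engine):

```python
# Scale check: waste heat a car radiator is built for vs. a PC's output.
# Rough assumptions: an engine making 100 kW (~134 hp) at ~30% thermal
# efficiency rejects roughly a third of the fuel energy via the coolant.

def coolant_heat_kw(shaft_kw: float, efficiency: float = 0.30,
                    coolant_fraction: float = 0.33) -> float:
    """Estimate kW of heat the radiator must shed for a given shaft power."""
    fuel_kw = shaft_kw / efficiency
    return fuel_kw * coolant_fraction

engine = coolant_heat_kw(100.0)  # heat into the coolant at full load
pc_kw = 0.5                      # a heavily loaded PC, in kW

print(f"Engine coolant load: ~{engine:.0f} kW")
print(f"PC load:             ~{pc_kw * 1000:.0f} W ({engine / pc_kw:.0f}x smaller)")
```

Under these assumptions the radiator is sized for a couple hundred times the heat a PC makes, so even a fraction of its capacity is plenty.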
There are other considerations of course, but the reason I linked a tractor radiator in my last post is to illustrate how effective such radiators can be, even when moving at very low speeds (3 MPH is typical walking speed btw), so you don't get a lot of air draft from such low speeds. You'll probably get more air movement from your air conditioning and room fan than we are talking here, and if there's a light wind moving in the same direction as the tractor, that pretty much negates any 'windage' airflow, and you often end up with a large dust cloud moving with you in the process, which isn't fun... The single fan sucking the air through the radiator is by far the major source of air movement in this usage case, with 'wind' airflow being negligible.
Note that, in the particular usage case I'm referencing here, the pyrometer, which measures exhaust temperatures, routinely floated between 800-900 degrees Fahrenheit. Yes, I spent a significant portion of my youth operating and performing routine maintenance on one of these bad boys... The pyrometer readings were important, as you didn't want to overheat/cook the turbo, which a previous owner had strongly recommended not doing again, as turbos are expensive to replace. It was pretty easy to keep the pyrometer at or below 900 though, as long as the motor wasn't being 'overworked'. Picking a slower gearing was the solution in this case, still operating at full throttle, and it was also a bit easier on the implements (bearings and such heat up more at higher speeds, which increases the rate of wear)...
Anyways, I'm digressing here, my point STILL being that total surface area is very much a consideration with cooling, and that passive cooling HAS been done. With what I'm considering, there will still be a small handful of fans in play, mainly the chipset fan, power supply fan, and one of those 3-speed floor fans, set to low to keep the noise levels down. I'm after a very quiet albeit still robust build, which is why I'm looking at water cooling. Most people end up thermal throttling because they under-estimated the amount of heat dissipation area they should employ, i.e. the total surface area of the radiators. The solution here is pretty easy: go with a bigger radiator or hook another one up in series. Linus's 9980XE build essentially does exactly that - it has four radiators hooked up in series. The longer the coolant is in contact with the radiators, the more time it has to exchange heat with the vanes. It's all about the surface area...
Water flow rate is also important, but as Linus noted in his 9980XE video, the 'typical' water pump that he picked handled all of those radiators just fine.
150W is a modern CPU + half a modern GPU. Clearer now?
If someone really thinks they can build a production rig and not just a stunt that is cooled with a car radiator, have at it. It's your money. Just come and say your mea culpas when it blows up in your face.