The Titan X (Maxwell), Quadro M6000, and Tesla M40 should be pretty close in terms of performance. I will be testing the M40 vs. the Titan X with each in a PCIe x16 slot. But tests I have seen reported before suggest that the speed of the PCIe connection really only matters when loading/unloading data to and from the graphics card. Must wait for my power adapters to arrive.
...so I wonder if the USB connection may be the chokepoint. 150 iterations per 5 minutes seems awfully slow for that powerful a card. I've had proof/test renders using a Maxwell 750 Ti (with only about 1/5 the cores of the M40) exceed 1,200 iterations in a little over 5 minutes. I just ran a test with that card and ended up with 1,270 iterations in 337 s (5 min 37 s) of actual render time at 1,200 x 900 resolution and render quality 3, and that scene included a fairly high-poly hair prop and texture.
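Just to put a number on the gap, here is the plain arithmetic on the two results quoted above (the 150-iterations-in-5-minutes figure is the external-enclosure result reported earlier):

```python
# Iterations-per-second comparison: the reported external-enclosure result vs. my 750 Ti test.
m40_external = 150 / (5 * 60)   # ~0.5 iterations/s over the USB enclosure
gtx_750_ti = 1270 / 337         # ~3.8 iterations/s in my local test

print(f"M40 (external enclosure): {m40_external:.2f} iterations/s")
print(f"GTX 750 Ti (in the case): {gtx_750_ti:.2f} iterations/s")
print(f"The 750 Ti ran about {gtx_750_ti / m40_external:.0f}x faster despite having ~1/5 the cores")
```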
USB 3 is 5 Gbps, or roughly 500-625 MB/s once you allow for encoding overhead. PCIe gen 3 is 8 GT/s (one transfer is roughly one bit), or about 985 MB/s per lane. So USB 3 is in the ballpark of a single PCIe lane. That should make loading the scene onto the card take longer, maybe a lot longer if you're loading all 24 GB, but it should not substantially affect performance otherwise.
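For what it's worth, a quick back-of-the-envelope check of those numbers (a minimal sketch in Python; it only accounts for line-encoding overhead, not protocol overhead, so real throughput will be somewhat lower):

```python
# Rough effective-bandwidth comparison: USB 3 (Gen 1) vs. PCIe 3.0 lanes.
# USB 3 signals at 5 Gbps with 8b/10b encoding; PCIe 3.0 runs 8 GT/s per lane with 128b/130b.
def effective_mb_per_s(raw_gbps, encoding_efficiency):
    """Usable payload bandwidth in MB/s (1 MB = 1e6 bytes)."""
    return raw_gbps * 1e9 * encoding_efficiency / 8 / 1e6

usb3 = effective_mb_per_s(5.0, 8 / 10)           # ~500 MB/s
pcie3_x1 = effective_mb_per_s(8.0, 128 / 130)    # ~985 MB/s
pcie3_x16 = 16 * pcie3_x1                        # ~15,750 MB/s

for name, bw in [("USB 3", usb3), ("PCIe 3.0 x1", pcie3_x1), ("PCIe 3.0 x16", pcie3_x16)]:
    print(f"{name:12s} ~{bw:8.0f} MB/s -> 24 GB load in ~{24_000 / bw:6.1f} s")
```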
It is entirely possible the M40 is undervolted; we have done that routinely in the past. It is also possible the card is failing in some way; these used cards have seen some pretty serious use.
...thanks for the clarification. Maybe not so good a deal then. I wouldn't know how to adjust the voltage, and I'm really not sure I want to mess around with that anyway. It also makes sense that these cards in servers and deep-learning workstations take something of a beating, as they tend to be driven at peak levels for long periods of time.
Waiting to see what Greymom comes up with.
The M40s came in, and they look like new! The M40 24GB has what looks like a single EPS 12V 8-pin socket (it may be a proprietary variation). The original 12GB M40 took one 8-pin and one 6-pin PCIe plug. Today the special power adapter cables (they let you run the M40 off of two 8-pin PCIe plugs) came in. This is what uezi noted he had used. It will be a few days; I need to adapt my open-frame server board for testing, so that I have plenty of room to work and can fit even an 18" card plus cooler. I am going to use a PCIe 3.0 x16 slot to avoid any bandwidth issues.
I plan to test initially with LuxMark 3.1, which is my standard. It is easy to set up and run (assuming that the Tesla drivers include OpenCL), and there are no issues with new builds, etc. I will start with the cooling adapter that uses 60mm fans. I can put two 40 CFM fans in series. I have a blower, but that is just not going to fit in even the big EEB case.
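In case anyone else wants to confirm that the Tesla driver actually exposes an OpenCL device before bothering with LuxMark, a quick check like this should do it (a minimal sketch assuming Python with the pyopencl package installed; I have not tried it on the M40 yet):

```python
# List every OpenCL platform/device the driver exposes; the Tesla should show up here
# before LuxMark will see it. Requires the pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        mem_gb = device.global_mem_size / 1024**3
        print(f"  Device: {device.name}  compute units: {device.max_compute_units}  "
              f"memory: {mem_gb:.1f} GB")
```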
I can test vs. a Titan X (Maxwell), and maybe a couple of other GPUs. I will be monitoring clock rates, temperature, voltage, and GPU duty cycle. I can run the tests with both the OpenCL and C++ LuxCore engines that are included with LuxMark. LuxMark gives a number of speed ratings, including the overall "LuxMarks", iterations per second, and some others. I always use the most complex test scene it comes with, Sala (lobby).
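For the logging side of that, a small script along these lines records clock, temperature, utilization, and power draw once a second while a benchmark runs (a minimal sketch assuming the nvidia-ml-py / pynvml package; NVML does not report core voltage, so I would still watch GPU-Z for that):

```python
# Log GPU core clock, temperature, utilization, and power draw once per second
# while a benchmark runs. Requires the nvidia-ml-py package (pip install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # index 0 here; pick the M40's index on a multi-GPU box

try:
    while True:
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)    # MHz
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)  # deg C
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu                      # percent
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0                      # milliwatts -> W
        print(f"{clock:4d} MHz  {temp:3d} C  {util:3d} %  {watts:6.1f} W")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```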
Now I see that a couple of vendors are selling brand-new M40 units with a one-year warranty for as little as $400! Some people are still trying to sell new units for $5,000+!
If all of the other tests go well, I will try running with DAZ Studio, and will probably try a couple of mining extenders of different bandwidths.
I will measure actual power consumption as I go.
I'm pretty sure the 24 GB M40 is one of the ones that uses an 8-pin CPU power connector. Those were a PITA, and I'm sorry I'd forgotten all about it.
If your PSU doesn't have a spare CPU plug, you'll need a 2x 8-pin PCIe to single 8-pin CPU cable.
Yep, I ordered some of the adapters. Several vendors have them, some even selling the original NVIDIA part (or so they say). The boards I want to install the Teslas on are dual-CPU, so there are no spare EPS 12V CPU connectors. I seem to have every adapter imaginable except that one; I even have the reverse of the one I need, 8-pin CPU (female) to two 8-pin PCIe (male). Got two of them in today (NVIDIA #030-0571-000), so I am set. At least while I was looking, I ran across the adapters to run Supermicro X8 blade server boards with standard ATX power supplies. I seem to only find stuff when I am looking for something else.
They have non-standard connectors? I'm always dealing with them in racks and just plug in server PSUs, so I've never really paid any attention to how they're wired up.
They may not be non-standard; it was just a thought. I never saw any information in my searches indicating that you could power an M40 (24GB) directly from a standard CPU plug, and the new units for sale come with the PCIe adapter. I am a little paranoid after dealing with proprietary connectors that look standard but aren't, so I will check the wiring against a CPU plug (one of the things on my list). Then I will try powering one directly from an EPS 12V CPU plug on my single-CPU test rig.
The manual for the K80 (dual Kepler GPU) says: "Power Connectors: One 8-pin CPU power connector".
Update: I found an M40 (12GB) tech summary that states: "One CPU 8-pin auxiliary power connector". So, I will give it a try with a bit more confidence.
Hmmmm... the cooling for the M40-2s may be more challenging than I thought.
One thing I noticed is that the heat sink fins are not open like most I have seen; they are actually closed channels. This card is designed for cooling with air blown in from either the front or the back, and nothing else. So, removing the shroud is not really going to help much, and my thought of mounting fans on the side, or just positioning fans to blow onto the heat sink, is not going to work.
Uezi's adapter and blower solution looks to be the easiest to implement, but it will make the unit too long to fit in even my huge Rosewill Vampire server case. Taking out the drive bays is not really an option; I need some of them. I have 13" of space available and the card is 10.5".
The adapter that mounts a blower on the side of the card and has a 180-degree duct to route the air to the card could work for some people, but the adapter has no provision for supporting the blower or the shroud?! No brackets, no screw holes, nothing. The 60mm fan adapter I ordered may work, but it sticks out to the side, and you could not mount two cards. Also, it generates a fair amount of back pressure, so I don't get much air flow with one fan. I'm going to try two 40 CFM fans in series; maybe that will at least let me test the cards in my open-frame test workstation.
There is another cooler that you can order that mounts on the back of the cards, and uses two high-speed server fans. From experience, these are incredibly noisy (they even have a warning in the ad), sort of a cross between a dentist drill and an air raid siren.
Meanwhile, I am designing an air box/plenum to fit over the back of 1 or 2 cards and use a 120mm 130 CFM fan. This will sort of mimic the cooling in the servers that these would normally be mounted in. Dollar Tree has a clear plastic box that I think is just the right size. I might be able to rig this up so it will fit in my case.
My fallback is that I will use the cards in the open frame server. It is intended to be the master node for my small renderfarm anyway, and it will not matter how long the cards are.
A question for Uezi/Peter and interested parties:
Since I am working on a more general installation of the M40, and may or may not get to trying various external installations, should I start a new thread?
...I think this one is good enough to keep going with, as what you are doing does fit the topic.
2 Deltas? That's crazy overkill. Most of the 2U racks in my datacenter just have 4 total; some have 8, but those are the storage servers with massive loads of HDDs, so they have 4 in front and 4 behind.
If you really need a lot of CFM, get a single Delta, but they're loud. But first I'd put it in the case and monitor what it does during a stress test.
The problem is that the design of the adapter includes a narrow air channel and two sharp turns, so there is a lot of back pressure. I am not getting anywhere near the air flow I hoped for, even with two 40 CFM fans in series. I will try it cautiously over the next couple of days while monitoring temperatures.
I have some triple-stacked high-speed fans from 1U servers, but they are just too loud.
Found an issue with my air plenum design: there is not enough clearance between the bottom of this card and the heat sink for the C602 chip on the motherboard without blocking air flow to that heat sink. Needs more thought.
I thought about reproducing Uezi's setup, since it works for him, and would fit on my open frame mount, but the Australian vendor for the blower adapters now warns of increased delays, so it looks like it might take 2 months to get an adapter.
This video has an interesting variation on cooling Teslas with a blower, using a PCI slot case cooler:
Unfortunately, the particular cooler they specify is not available. But, I have a couple of similar ones to try. This would allow, I think, installing two cards on one motherboard, width-wise. The size and configuration of these blowers will make them much easier to adapt than the ones I have now. The lack of restrictions should allow full air flow.
I think it might be time to find someone with a 3D printer and Solidworks. Get some good measurements and design your own fan mount and shroud.
I have been thinking about that. For the price of a few of those fan shrouds (the ones from Australia end up being about $40 each) I could buy a low-end 3D printer. The big public library supposedly put in some 3D printers you can reserve time on. Must check this out.
...nice kitbash solution, but making those modifications looks a bit tough for my stiff, arthritic hands to handle. Also, being a "hamster wheel" type fan, it would make the entire assembly too long for my case. Again, I'm not interested in an external setup because of the older, slower USB inputs I have (2.0) and having to keep the case open for the other connections. A perpendicular fan and adapter shroud would already be a very tight fit.
I will be testing the 60mm fan adapter first. Even with two fans, it is just under 13 inches total length, so a possible fit in several cases. The blower-type units won't fit in any case I have unless I remove the drive cages, which I don't want to do. I can use the open-frame PC, but I would rather not.
My first 3D printer was the low end model from Monoprice.
https://www.monoprice.com/product?p_id=21711
It would be able to do what you need. You'd just need some program to do the design.
That's what I am talking about! Thanks! We will see how things go when I get some time to work on these cards.
Hmmm... with the first M40 installed, the system will not POST on two different motherboards. I tried powering the card both ways: with the PCIe adapter, and (on one board) with the spare CPU plug.
Now I find this useful thread:
https://www.daz3d.com/forums/discussion/310966/anyone-have-experience-with-2-x-tesla-m40-24gb
This suggests that it won't work without 4G Address decoding enabled.
Neither of the motherboards I tried has an option to enable 4G decoding (most of my motherboards are LGA 2011 v1/v2 vintage), although all of them work with my Titan X, which has basically the same GPU. Well, I do have that 4G decoding option on at least one of my newer machines, so I will see if I can fit the card in to test it. Will try the other board too.
As Rosanne Rosannadanna said, "It's always something!"
Note: also called "above 4 GB address decoding"
Another useful link from the older thread:
https://www.migenius.com/products/nvidia-iray/iray-rtx-2019-1-1-benchmarks
Shows direct comparisons of many cards with IRAY
About "above 4GB address decoding" support:
Woke up this morning with the certainty that I had seen that option for 4GB decoding in the ASRock C602 BIOS screens, even though I could not find it in the documentation I have. I finally found it buried in the North Bridge settings.
So, I may have more flexibility than I thought. So far I can't find the same option for the Supermicro C602 board, but I will look more later.
Also found a note on one of the NVIDIA forums confirming that enabling that option is necessary for Tesla cards in general.
So, I will try testing the M40s with the ASRock server board on the open-frame mount.
That isn't set by default? How old are these motherboards?
They are LGA 2011 v1/v2 with the C602 North Bridge. I had only tried the Supermicro C602 version, which would not POST with a Tesla installed; apparently that setting is not available there. After some hunting, I found the option on the ASRock Rack C602, defaulting to "Auto". So, it might have worked if I had tried it, but I went ahead and set it to "Enabled". Will try the Teslas tomorrow.
Of course, the original plan was to mount them in the workstation with the Supermicro board, as they seem to be more reliable, but I can swap things around.
Edit: Ok, never mind. Found the setting for the Supermicro boards now too. Defaults to "disabled".
Enabling the "above 4 GB address decoding" did the trick, and allowed the server to POST and boot with the first M40 installed. Working on drivers.
Progress!
Attached pic of test rig below.
Anyway, I have the first M40-2 installed and running under Windows 7 64-bit Pro. It took a couple of tries to get the OpenCL driver installed OK.
LuxMark 3.1 (OpenCL) Results

Card                      GPU Clock (MHz, from GPU-Z)   Memory Bandwidth (GB/s, spec)   LuxMarks 3.1 (avg. of 3+ tests)
Titan X Hybrid (12 GB)    1241                          337                             4300
Titan X (12 GB)           1075                          337                             3900
Tesla M40-2 (24 GB)       1112                          288                             3500
All are Maxwell GPUs with 3072 CUDA cores, each installed in a PCIe 3.0 x16 slot. The memory clock frequency is lower on the M40, and that may explain the lower performance.
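As a rough sanity check on that explanation, comparing the ratios from the table above (just arithmetic; LuxMark's Sala scene may not be purely bandwidth-bound, so this is only suggestive):

```python
# Compare the M40's LuxMark deficit against the Titan X to its memory-bandwidth deficit.
titan_x = {"clock_mhz": 1075, "mem_gbps": 337, "luxmarks": 3900}
m40     = {"clock_mhz": 1112, "mem_gbps": 288, "luxmarks": 3500}

print(f"LuxMark ratio (M40 / Titan X):          {m40['luxmarks'] / titan_x['luxmarks']:.2f}")   # ~0.90
print(f"Memory bandwidth ratio (M40 / Titan X): {m40['mem_gbps'] / titan_x['mem_gbps']:.2f}")   # ~0.85
print(f"GPU clock ratio (M40 / Titan X):        {m40['clock_mhz'] / titan_x['clock_mhz']:.2f}") # ~1.03
```

The M40 actually ran at a slightly higher core clock in my test, so the roughly 10% score gap lines up better with the roughly 15% memory-bandwidth gap than with the GPU clocks.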
The "space-saving" fan adapter, in spite of the back pressure, was able to keep the GPU temp in the low 60s with 2x 40 CFM 60mm fans. So the GPU stayed at the "Boost Clock" frequency throuhout the test. The card + fans is 13 inches long. Would just barely fit in the big case.
The results above are consistent with averages from the LuxMark 3.1 database at the same GPU clock rates.
So next I will test the other card, then set up testing with DS Iray.
Have a couple of mining links on the way, so I will try those later.
...again, I wonder if I have enough airflow in the case already, with the 200mm intake fan on the side and the 120mm intake fan on the mid-front (both specified as GPU cooling) and twin 140mm exhaust fans on the top to draw heat out.
That is what I hoped at first too, but the heat sinks (contrary to some pictures folks claimed were M40s with the shroud removed) are closed channels, designed to have air forced through them. There might be enough flow if you had a good fan directly behind the card. I would think taking off the shroud would help to some degree just by exposing that one heat sink surface, but it was not immediately obvious to me how to remove the shroud.
Some have reported that you can replace the heat sink with one from another Maxwell-based card, such as a GTX 980, but I did not want to go that far unless I have to.
Another bad point is that if your BIOS does not have the option to enable "above 4GB address decoding", the card won't work. The option may be worded as "above 4G addressing", "above 4G", or "above 4G decoding"; it is not consistent. The option was well hidden on my motherboard, and I could find no mention of it in the documentation. This is the only card I have ever seen with this requirement, but I assume it is there to allow the use of many cards in a given server (some hold 8-16).
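As a side note, once the card is up and running you can check that the big address window actually got mapped by looking at the card's BAR1 aperture (a minimal sketch with the nvidia-ml-py package; my understanding is that these Teslas expose a much larger BAR than consumer cards, which is why the BIOS option matters, but treat that as an assumption):

```python
# Print each GPU's BAR1 aperture size via NVML (pip install nvidia-ml-py).
# A Tesla that needed "above 4G decoding" should report a large BAR1 total here.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        bar1 = pynvml.nvmlDeviceGetBAR1MemoryInfo(handle)
        print(f"GPU {i}: {name}  BAR1 total: {bar1.bar1Total / 1024**2:.0f} MiB")
finally:
    pynvml.nvmlShutdown()
```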
Now I am concerned by what I hear about problems with non-RTX cards and the latest version of DS. I am hoping I am not stuck with high-tech boat anchors or paperweights.
I will continue to tinker. Must see what older versions of DS I have to try.
I will give updates as I learn more.