DLC CPU V2 Module For Intel Processors (LGA 3647/4189)

Introduction

The first and only DLC CPU V2 MODULE for Intel LGA 3647/4189 processors that allows individual users to create their own cooling loop and implement direct liquid cooling in any server model.

Overview

The LGA 3647/4189 DLC CPU V2 MODULE is a universal DLC component that allows datacenter operators to use Direct Liquid Cooling or Direct Chip Cooling technology in any current or future server that utilises the Intel LGA 3647 or 4189 socket. The CPU MODULE has been adapted to work with both positive and negative pressure DLC CDUs and may be used with various coolants – single or dual phase proprietary engineered fluids. The device is designed to match any server / mainboard design. This is possible due to a streamlined in-line connection with 1 inlet and 3 outlets, as well as PTL quick-disconnect ports. These elements allow the customer to configure the DLC LOOP to match the needs of a specific server design. If the need arises, it is possible to reconfigure the connection and reuse the module in a different or future server with the same socket. Our open solution utilises state-of-the-art microelectronics cooling technologies, including a vortex chamber, jet impingement and a split microchannel coldplate, which distribute fluid with minimal pressure drop. Altogether, the module provides the best cooling efficiency even for the most high-performance Intel processors.

In the V2 version of the CPU Module for Intel Processors we added Heat Transfer Components (HTC), which interconnect with the coldplate and efficiently transfer heat from other board components, including VRMs (voltage regulator modules), the mainboard bridge, and networking and RAID cards. This way we can extract more than 80% of the heat from the mainboard, covering all heat-generating modules, so the server can run without fans or with lower fan speed.

Ask for quote
Get the specsheet

Custom Or Large System? Need A Scale-Up Or A Specific Solution? We Will Help At Any Scale.

Contact our staff for custom liquid cooling systems, quotes and services.

Characteristics

Unmatched flexibility and performance for current and future Intel processor platforms.

Safe liquid cooling

A negative-pressure, 100% leak-proof system and engineered fluid guarantee complete safety.

Open, not proprietary

User-installable like any other server component; it may be used in every server model.

Saves 40% of energy.

Eliminates the need for fans and HVAC. Removing 100% of CPU heat = 365 days of free cooling.

Reusable

Use it in current servers, then reuse it in future LGA 3647/4189 socket servers.

Features & Competitive Advantages

Safe and open solution – like nothing else on the market

100% Leak proof technology

We build on negative-pressure CDU systems from Chilldyne, which lets us deliver a 100% leak-proof system and guarantee complete safety. Best of all, any maintenance or change in loop topology may be done online, because there is no way for water to leak out.

100% Safety

The product utilises the most durable components available. The tolerance margins on temperature and pressure exceed one order of magnitude. The negative-pressure CDU option and the dielectric engineered coolant option allow the system to achieve 100% safety against damage.

Extracts >80% of Heat

The only DLC module with optional Heat Transfer Components, which transfer heat from other board components, including VRMs, the mainboard bridge, and networking and RAID cards. This way we can extract more than 80% of the heat from the server.

Compatible with all servers

The first ever open system without painful customisation. You will never have to order a specific loop for a specific server. Installation is no different from any other third-party server component: just install the modules instead of heatsinks, connect the tubing with the quick-disconnect ports, and that's it.

In Line Connection – More Performance

We eliminated 90-degree joints, corrugated tubing and clumsy loops. Instead, one inlet and three outlets with PTL ports make it easy to design your own LOOP with straight, streamlined connections between modules. This means more safety, fewer breakpoints, lower pressure drop and better efficiency.

One investment for years

These DLC modules can be reused in many servers. Thanks to the quick-disconnect ports you can create your own loop, and if you replace a server you just change the tubing length if needed; the modules stay the same. The lifespan of your investment is then three times longer than with competitors' "custom loop for each server model" systems.

632 W

For current and future processors.

50°C / 122°F

Cooling with hot fluid.

10 Years

Best investment possible.

Ask for quote

HEAT TRANSFER COMPONENTS

With the V2 CPU Module for Intel Processors, optional Heat Transfer Components (HTC) may be used. They interconnect with the CPU module base and efficiently transfer heat from other board components, including VRMs (voltage regulator modules), the mainboard bridge, and networking and RAID cards. This way we can extract more than 80% of the heat from the mainboard, covering all heat-generating modules, so the server can run without fans or with lower fan speed.

Model    | Number of Directjet Modules | Tubing diameter    | Minimum bend radius
3647MOD  | 1                           | 6.35 mm / 0.25 in  | 25.4 mm / 1 in
3647MOD  | 2                           | 6.35 mm / 0.25 in  | 25.4 mm / 1 in
3647MOD  | 3                           | 9.53 mm / 0.375 in | 38.1 mm / 1.5 in
3647MOD  | 4                           | 9.53 mm / 0.375 in | 38.1 mm / 1.5 in
Model | Number of Modules | Total Heat Removal (W3) | Total Heat Removal (W4)
V2    | 1                 | 632 W                   | 474 W
V2    | 2                 | 1137 W                  | 853 W
V2    | 3                 | 1611 W                  | 1208 W
V2    | 4                 | 2022 W                  | 1516 W
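To illustrate how the table above can be read, here is a minimal Python sketch (our illustration only, not part of any DCX tooling; the function name is invented) that picks the smallest configuration whose rated W3 heat removal covers a given board heat load:

# Illustrative sketch: pick a module count from the W3 column above.
# The dictionary values are copied from the table; the helper is hypothetical.
HEAT_REMOVAL_W3 = {1: 632, 2: 1137, 3: 1611, 4: 2022}  # modules -> watts

def modules_needed(board_heat_w: float) -> int:
    """Smallest module count whose rated W3 removal covers the load."""
    for count, capacity in sorted(HEAT_REMOVAL_W3.items()):
        if capacity >= board_heat_w:
            return count
    raise ValueError("load exceeds the largest listed configuration")

print(modules_needed(1400))  # -> 3, since 1611 W covers a 1400 W board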

Simple and Effective

COMPATIBILITY

Suitable for all Intel Scalable Platform processors (Platinum, Gold, Silver and Bronze). Supports all current and future server platforms using Intel processors and the LGA 3647/4189 socket: Skylake, Cascade Lake, Cooper Lake, Ice Lake.

PHYSICAL ATTRIBUTES

Materials: Polymer / glass fiber body
Coldplate: Copper custom coldplate with optional Heat Transfer Components
Surface flatness: 0.002 mm/mm
Sealing: EPDM O-ring
Dimensions (H x W x D): 20 x 108 x 78 mm
Weight: 320 g (coldplate: 259 g)
Regulatory certification: 2011/65/EU
Material safety: 2004/108/EC; UL 94 HB

PERFORMANCE

Heat removal capacity: Up to 632 W per module
Thermal conductivity: 0.28 W/m·°C
Effective thermal resistance: 0.03-0.06 K/W
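As a back-of-the-envelope illustration (ours, not from the datasheet), the figures above can be turned into a rough coolant flow and temperature estimate; the fluid properties below are assumed values for a water/glycol mix and should be replaced with the real coolant specification:

# Rough sizing sketch based on the performance figures above (assumed fluid data).
Q_WATTS = 632.0          # heat removal per module (spec above)
CP_J_PER_KG_K = 3600.0   # assumed specific heat of a water/glycol mix
RHO_KG_PER_L = 1.05      # assumed density of the mix
R_TH_K_PER_W = 0.05      # mid-range of the 0.03-0.06 K/W spec

dT = 5.0                                    # chosen coolant temperature rise, K
mass_flow = Q_WATTS / (CP_J_PER_KG_K * dT)  # kg/s
lpm = mass_flow / RHO_KG_PER_L * 60.0       # litres per minute

print(f"~{lpm:.1f} L/min per module for a {dT:.0f} K coolant rise")
print(f"case runs ~{Q_WATTS * R_TH_K_PER_W:.0f} K above the fluid")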

FLUID

Type: Proprietary water / glycol solution with additives
Safety: 0 GWP

Complete liquid cooling portfolio

We know that every cooling system must guarantee reliability and safety for costly computing infrastructure. We saw a large area for improvement in existing solutions and decided to design something better. Our systems are crafted carefully to deliver ultimate density, energy efficiency & sustainability without painful customisation. We offer customers a complete, 100% safe, open, well-thought-out portfolio of direct chip and immersion liquid cooling.

DLC CDU300 – Coolant Distribution Unit – the crucial component of every DLC (Direct Liquid Cooling) system. Cooling capacities range from 20 kW to 305 kW with hot-water cooling. The CDU ensures condensate-free, optimum operation with dew-point temperature control. Redundant pumps & remote control guarantee high availability and uptime.
DLC LDU – Liquid Distribution Unit. The modular LDU distributes fluid to and from the devices. It offers the highest density in the DLC industry, with hexagonally arranged CTC dry-break quick disconnects. A single 42U rack-scale unit can serve from 40 to 128 sockets. The CPC couplings allow easy maintenance, as disconnection and connection can be performed using just one hand. The unit is equipped with VRV (Vent Release Valve) valves, so no complicated maintenance is necessary. Sold with a set of universal brackets which fit any 42U rack.
DLC CPU LGA3647 MODULE – the first and only DLC CPU module for Intel processors with the LGA 3647 socket that allows operators to create their own cooling loop and implement liquid cooling in any server model / mainboard design. This is possible due to a streamlined in-line connection with 1 inlet and 3 outlets, as well as PTL quick-disconnect ports.
ILC UNI ENCLOSURE – a universal and open ILC enclosure for standard servers and crypto-currency mining infrastructure. Each UNI ILC Enclosure accommodates 48 GPUs (4 rigs), 10 standard S9/Z9/A9 ASIC miners or 15 web servers. UNI Enclosures can be stacked safely on a proprietary rack system, providing the best space utilisation. A 20 ft ISO container can hold up to 2,680 GPUs or 560 ASIC miners with a 4-level UNI Enclosure stack.
DLC VCDU 300 – the main DLC (Direct Liquid Cooling) system component. This unit provides ultimate safety and high-availability features, delivering coolant under negative pressure. The VCDU300 has been designed specifically to be fault tolerant and to eliminate virtually any risks associated with liquid cooling. The VCDU system's patented leak-proof design runs coolant under negative pressure on both the supply and return, so coolant cannot leak out – only air can leak into the system – which protects costly electronic equipment.
DLC CDU50 – the 5U Cooling Distribution Unit (CDU) is a compact version of the cooling distribution unit, designed to fit in the rack and provide 50 kW of cooling capacity. It provides a clean, filtered water supply for CPU / GPU modules. Operation is secure and closely controlled above the dew point. The CDU50 is capable of 50 kW cooling capacity with supply water between 15°C (59°F) and 45°C (113°F). On the front face of the 5U unit is a touch-screen operator interface for ease of operation and monitoring.

Advantages

Next Gen Liquid Cooling System. Safe. Open. Flexible. Different than anything else.

We know from experience all the troubles related to positive-pressure water cooling systems. The risk of flooding costly systems is the biggest concern for infrastructure operators. This is why we chose state-of-the-art technologies to provide a completely safe, 100% leak-proof system. We guarantee complete safety using three technologies:

  • The first is the VCDU negative-pressure, vacuum-based system, which guarantees total safety, unlike positive-pressure CDUs. In the unlikely event of integrity loss, Cool-Flo® technology works with negative pressure on both supply and return, so if anything happens, air flows into the system instead of coolant leaking out. The "leak" does not stop operation and electronic components are safe from water damage. The system also automatically evacuates coolant from a server when it is disconnected from the liquid cooling system for maintenance.
  • The second is a proprietary nanoparticle-enhanced engineered fluid. We use two types of dielectric fluids for DLC and ILC cooling. The single- or dual-phase dielectric fluid option provides another layer of protection.
  • The third is an over-engineering policy. All modules, ports, tubing, dry-break quick disconnects and couplings are more robust than necessary. We use high-pressure-rated components in our low-pressure system and choose best-in-class simple solutions – e.g. flat tubing, so the loop is twice as durable as corrugated tubing. Our tube can withstand over 650 PSI, while typical pressure in our system is up to 10 PSI. Because safety & uptime matter most.

Liquid cooling technology provides massive savings for hyperscale, enterprise and SMB installations. Even individual users see a significant cost cut. Savings can be measured in both CAPEX and OPEX, so ROI is immediate. In terms of CAPEX, the three biggest factors are:

  • The cost of liquid cooling systems is usually lower than that of air cooling infrastructure of the same capacity – in most cases 15-20% less.
  • Liquid cooling eliminates the need for costly HVAC infrastructure, which results in fewer pieces of critical equipment in the data hall. Liquid-cooled servers require only 20% of the previous airflow, which allows for free cooling during all seasons. Immersion cooling requires virtually zero airflow, providing a further saving.
  • An increase in rack power density (from 10-20 kW to 100 kW and beyond) allows a smaller data center footprint, removal of the air-exchange plenum and vertical stacking of the data center. Fewer server racks and interconnects mean another reduction in capital cost.

All of these factors result in reduced site & structural construction compared to a traditional build. A simplified electrical and mechanical topology and faster go-to-market give data centers outfitted with liquid cooling a tremendous advantage over air-cooled designs. For new data center projects, the cost savings are even more dramatic, as capital expenditures can be cut nearly in half.

The biggest advantages, however, start with OPEX:

  • For most installations we observe an overall reduction in average data center power consumption of up to 45%. The savings come from a combination of HVAC elimination, reduced infrastructure footprint and reduced fan power consumption, cutting a large share of the operational cost associated with power.
  • The increase in computing power of liquid-cooled processors and GPUs results in 15-20% more performance. For specific applications, with additional fine-tuning and overclocking, it can reach even 30% more compared to air-cooled server rooms. This means you can spend 20% less on computing infrastructure and still get more performance.
  • Increased reliability of equipment through elimination of most of the airflow. Liquid protects IT devices from harsh environments – high temperature, humidity, vibration, dust and air contamination – which extends the MTBF of ICT infrastructure and the lifespan of systems.
  • Getting rid of most of the air-cooling critical infrastructure that must be maintained on a regular basis means less maintenance overhead and reduced maintenance and personnel costs.

A significant reduction in CAPEX and just a fraction of traditional datacenter OPEX decrease the Total Cost of Ownership (TCO) of running ICT infrastructure.

Currently available liquid cooling systems are highly customised, "boutique" solutions. Water cooling coldplates and tubing lengths must be carefully prepared for one and only one specific server model. Even the size of the rack which holds the device must be taken into consideration, as the tubing length between the coldplates and the manifold must be carefully measured. All recognised hardware vendors offer liquid cooling for one single server in their portfolio – in the same way as IBM did for the System/360 mainframe in 1965. In the case of immersion cooling, available solutions support only some types of servers, switches and storage systems because of the dimensions of the immersion baths. Summing up: all solutions we saw and tested were unique and proprietary.

The industry, however, requires open solutions, and typical cloud / colo infrastructure is usually diversified across many server vendors and types. In the case of immersion cooling, our universal enclosures and computing enclosures can accommodate a variety of servers, switches and storage systems from A-brands and OEM vendors. DCX believes in open systems, and our liquid cooling systems support over 95% of available hardware without specific customisation. In the case of direct chip cooling, the DCX patent-pending Next Gen DLC system features Push To Lock quick-disconnect ports at the cooling module / rack level. The customer can purchase Next Gen DLC components like any other third-party product: a memory chip, disk drive or GPU card. Administrators can link DLC modules with our proprietary tubing and create the loop for any server and platform worldwide – without being forced to order a custom solution for each device.

Best of all, using the provided leak-proof tubing & PTL quick disconnects, the customer can reconfigure the LOOP if needed and move to a next-generation server using the same socket, cooling modules and LDUs (liquid distribution units). This extends the lifetime and ROI of Next Gen Liquid Cooling components three times compared to current DLC offerings.

Modern, energy-efficient CPUs, GPUs and memory chips are subject to thermal throttling. Vendors know that overheating can cause errors and accelerate component failure. This is why all existing hardware operates with lower performance than advertised once it reaches a certain thermal point. In the Intel Skylake architecture, AVX-512 and heavy AVX2 instructions throttle the CPU's frequency; this is why Intel divides processor models into "thermally optimised" and non-optimised variants. In the case of memory, command rates are reduced if the system works above certain limits.

One needs to realise that in real-world applications most customers get 20% less performance from their chips: if the CPU temperature rises above 55°C/131°F, the CPU frequency is reduced and the performance hit may be even higher than 20%. For most Nvidia GPUs, the thermal throttle point starts at 50°C/122°F and the clock frequency drops, step by step, to 50% of the base MHz.

To put it simply: you pay 100% of the price for the chip and get 80% of the performance in real-world applications. There are two ways to cope with this issue. One is to use power-hungry chillers to run the systems in cold air. The second is to use direct liquid cooling and extract the heat with warm fluid at the source, keeping the chips at an optimum 50°C/122°F. Liquid cooling also allows, with additional fine-tuning and overclocking, turbo-boosting your chips for an additional 15% to 25% increase in performance – without any compressor-based cooling.

Sharp increases in energy prices have forced many IT professionals to look at how inefficient existing cooling practices are. Traditionally, only about half of the total data center energy is used by the IT equipment, with typically 30-45%, or even over 50%, of the total consumed by the cooling infrastructure. Most of this is consumed by the site chiller plant, used to provide chilled water to the data center, and by computer room air conditioners (CRAC) and air handlers (CRAH), used to cool the computer room. With an average PUE of 1.89, for many datacentres nearly half of the energy consumption and carbon footprint is caused not by computing but by powering the cooling systems needed to keep the processors from overheating.
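The relationship between PUE and the non-IT share of energy is simple arithmetic; the short Python sketch below (illustrative only) shows where the "nearly half" figure for a PUE of 1.89 comes from:

# PUE = total facility energy / IT equipment energy, so the overhead
# share (cooling, power distribution, etc.) is (PUE - 1) / PUE.
def overhead_share(pue: float) -> float:
    return (pue - 1.0) / pue

for pue in (1.89, 1.5, 1.1):
    print(f"PUE {pue:.2f}: {overhead_share(pue):.0%} of energy is overhead")
# PUE 1.89 -> ~47% overhead; driving PUE toward 1.1 cuts it to ~9%.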

Datacenters currently use about 3% of the world's energy (around 420 terawatt-hours), which is around 45% more than the entire United Kingdom's energy consumption, and this consumption is expected to double every four years. Researchers predict that by 2025 data centres could account for the largest share of global electricity consumption, at 33%. This is why the EU Commission issued the EU Code of Conduct on Data Centre Energy Efficiency along with best practice guidelines.

DCX systems save over 50% of a typical data center's energy consumption with hot liquid cooling, and 30% savings can be expected in small-scale consumer systems. For most installations we observe an overall reduction in average data center power consumption of up to 45%, coming from a combination of HVAC elimination, reduced infrastructure footprint and reduced fan power consumption, which cuts much of the operational cost associated with power. The increase in computing power of liquid-cooled processors and GPUs also results in 15-20% more performance; for specific applications, with additional fine-tuning and overclocking, it can reach even 30% more compared to air-cooled server rooms. This means we can do the same work with 15-20% fewer servers.

Moreover, high-grade heat at the output can be used for needs such as heating building spaces. In our liquid cooling installations, with a 50-60°C hot fluid output, we reuse from 64% to over 80% of usually wasted heat. This is why liquid cooling sites demonstrate extreme energy efficiency and can deliver heat to the local community or office space.
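As a simple illustration of what the 64-80% reuse range means in practice (the rack load below is an assumed example figure, not a product specification):

# Illustrative arithmetic: exportable heat for an assumed 100 kW rack.
rack_power_kw = 100.0            # assumed liquid-cooled rack load
for capture in (0.64, 0.80):     # reuse fraction range quoted above
    print(f"{capture:.0%} capture -> {rack_power_kw * capture:.0f} kW "
          f"of 50-60 °C water available for heating")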

Climate change is recognised as one of the key challenges humankind is facing.
The Information and Communication Technology (ICT) sector, including data centres, generates up to 2% of global CO2 emissions, a figure on par with the aviation sector's contribution. Problems with the supply of renewably sourced energy could make data centres one of the biggest polluters in just seven years.

Additionally, data centres are estimated to have the fastest-growing carbon footprint across the whole ICT sector, mainly due to technological advances such as cloud computing and the rapid growth in the use of Internet services. The ICT industry is poised to be responsible for up to 3.5% of global emissions by 2020, with this value potentially escalating to 14% by 2040, according to Climate Change News. Researchers say this will be directly related to the fact that the data centre sector could be using 20% of all available electricity in the world by 2025, on the back of data being created at a faster pace than ever before.

Our solutions eliminate the need for the high-GWP refrigerants associated with chiller plants, cut carbon dioxide emissions by 3 tons per kW of ICT equipment per year and reduce power consumption by 50% on average. Simple as that. And, as noted above, the high-grade 50-60°C heat at the output can be reused for heating buildings, offices or the local community.

We will guide you the whole time.

How to Buy

Product Support

Email Sales

Chat with Sales