Data Center In A Box

From WBITT's Cooker!


A few weeks ago, I was thinking of building an HPCC in a box for myself, based on the idea given by Douglas Eadline on his website [[1]]. Later, while dealing with some problems for one of my clients, I decided to design an inexpensive chassis which could hold inexpensive "servers". My client's setup required multiple Xen/KVM hosts running a number of virtual machines. The chassis is to hold everything, from the cluster motherboards to their power supplies, network switches, power cables, KVM, etc. Currently our servers are rented from various server rental companies, but managing them remotely, and particularly the cost of a SAN slice and the private network requirements, was pushing us over the edge. So I thought, why not design something which I could place anywhere in the world (wherever I am living), and do everything from the comfort of my home (or my own office)?

For most of you, this will definitely sound like re-inventing the wheel. True, it is. However, the wheels I know of (from major players such as Dell, IBM, HP, etc.) are too expensive for a small IT shop such as my client (and even myself!). Thus, this is an attempt to re-invent the wheel, but an inexpensive one; one which everyone can afford. A solution which can use COTS (Common Off The Shelf) components and FOSS (Free/Open Source Software), and yet can deliver the computational power necessary to perform certain tasks, while consuming less electricity and needing less cooling.

Here are the design goals of this project:


Project (Design) Goals

  • Utilize inexpensive and unbranded COTS (Common Off The Shelf) components, e.g. common ATX or uATX motherboards, common Core2Duo or Core2Quad processors, etc.
  • Even Mini-ITX motherboards can be used, because their mounting-hole placement matches that of ATX motherboards. I am still skeptical about which processors they support, how much RAM they support and, most importantly, their availability; ATX and uATX boards are commonly available all over the world.
  • Should not take more than 19" x 19" of space on the desk/floor.
  • Being 19" wide, it should be possible to place it in any server rack. The height is 14", which is 8U of rack space (1U = 1.75").
  • Use low-power CPUs on the motherboards, so less power is needed. (At the moment, I know of 45 Watt and 65 Watt CPUs.) Ideally the whole chassis should consume less than 100 Watts when idle, and not more than 800-1000 Watts when loaded. (These are starting figures and will change based on further calculations/data.)
  • Power supply efficiency should be above 80%, so less heat is generated by the PSU.
  • Being low power also means that less cooling is needed.
  • Try using inexpensive USB (zip/pen/thumb) drives to boot the worker nodes. PXE booting from a central NFS server can also be used (a rough PXE sketch is given after this list).
  • All network switches and KVMs, with their cables, will be placed inside the chassis. This would essentially result in only one main power cable coming out of the chassis, and a standard RJ45 port to connect to the network.
  • Expand as you go: you can start with a couple of blades and increase capacity when needed.
  • The tray which holds the motherboard will support standard ATX, Mini-ATX and Micro-ATX boards without any modification. That means you can mix and match various ATX form factors, depending on your requirements. This also provides options for upgradeability; see the next point.
  • The boards will support both Intel Core2Duo and Core2Quad processors. Similarly, if you are an AMD fan, boards supporting the latest models of AMD's desktop processors would be used. This means that processors can be upgraded when needed; for example, you can start with a Core2Duo and later upgrade to a Core2Quad.
  • Maintenance is designed to be very easy: just pull out the blade, unplug its power and network connections, replace whatever is faulty (or upgrade it), and slide it back in.
  • The "thickness" of a blade is intended to be 1" at maximum, including CPU heat sinks/fans. If this is doable, the chassis density can be increased to 16 blades! If 1" cannot be achieved (mainly because of the size of the CPU heat sink), then the thickness must not exceed 2".
  • Air flow will be provided to the chassis through large 4", low-RPM, brushless fans mounted at the front of the chassis (not shown). A "fan tray" is planned for both the front and the back of the chassis.
  • If fanless, low-rise CPU heat sinks and low-RPM "quiet" fans can be used successfully, the noise signature should be acceptable in a modern office environment (around 45 dB).
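
As a rough illustration of the PXE option mentioned above, a minimal pxelinux configuration for diskless worker blades could look like the sketch below. This is only an assumed example: the server address 10.0.0.1, the export path /export/nfsroot and the file names are placeholders, and the node kernel/initrd must include NFS-root support.

  # /var/lib/tftpboot/pxelinux.cfg/default  (placeholder paths and addresses)
  DEFAULT worker
  LABEL worker
    KERNEL vmlinuz
    # kernel and initrd are fetched over TFTP; the root filesystem is mounted over NFS
    APPEND initrd=initrd.img root=/dev/nfs nfsroot=10.0.0.1:/export/nfsroot ro ip=dhcp

With something like this in place, a blade only needs its BIOS set to boot from the network; the USB-drive option remains available for nodes where PXE is not desired.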

Software components

  • Linux, for both HPCC and virtualization implementations (RHEL, CentOS, Fedora, Scientific Linux, Debian)
    • Apache, MySQL, PostgreSQL, PHP for various web hosting needs
  • Cobbler for node provisioning (a rough provisioning sketch follows this list)
  • PDSH, xCAT
  • Virtualization software: Xen or KVM
  • HPCC software: PVM, MPICH, MPICH2, LAM/MPI, OpenMPI, ATLAS, GotoBLAS, Torque/OpenPBS
  • OpenFiler as the central storage server
  • DRBD, Heartbeat, IPVS, ldirectord, etc., for various high-availability requirements
  • Monitoring tools: MRTG, Ganglia, Nagios, ZenOSS
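
As a rough sketch of how node provisioning with Cobbler might be wired up (the distro name, kickstart path and MAC address below are placeholders, and exact option names vary between Cobbler versions):

  # define a distribution, a profile and one blade, then regenerate the PXE/DHCP data
  cobbler distro add --name=centos-x86_64 --kernel=/srv/boot/vmlinuz --initrd=/srv/boot/initrd.img
  cobbler profile add --name=worker-node --distro=centos-x86_64 --kickstart=/var/lib/cobbler/kickstarts/worker.ks
  cobbler system add --name=blade01 --profile=worker-node --mac=00:11:22:33:44:55
  cobbler sync

After a "cobbler sync", a blade set to PXE boot picks up its assigned profile on the next power-up.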

Project benefits/utilization

  • Ideal for both HPCC and Virtualization setups.
  • Ideal for small IT shops which want to run their own web/db/cache/firewall servers without spending a lot of money, power and space on them.
  • Ideal for training institutes and demonstration units.
  • Low power consumption when idle.
  • Efficient space usage.
  • Replaceable parts, fully serviceable.
  • Fewer cables.
  • Plug and play, i.e. just connect the enclosure to power and network, and that is it.
  • Expandable as your budget/pocket allows (increase the number of nodes).
  • Upgrade only the nodes which need an upgrade (increase RAM from 1 GB to 2 GB, or from 2 GB to 4 GB; upgrade from Core2Duo to Core2Quad, etc.).


The basic designs of the chassis and the blade itself are reproduced below. I drew them by hand, so pardon any ink smears. I hope to have someone redraw them for me in a CAD system.

Diagrams (hand drawn)

file:Datacenter-in-a-box-chassis.png file:Datacenter-in-a-box-blade-components.png

Components

These are the components which I intend to use. The actual components used may vary depending on availability and further study / analysis of the blade design.

Power Supply Units (PSU) for the motherboards

I am interested in the last two in the list below:

File:PW-200-V-200W-12V-DC-DC.jpg File:PW-200-M-200W-12V-DC-DC.jpg File:PW-200-M-200W-12V-DC-DC-P4-connector.jpg

File:Picopsu-160-xt-dc-to-dc.jpg File:PicoPSU-160-XT-connectors.jpg

File:PicoPSU-160-XT-plugged-in.jpg

Processors

Motherboards

Similar projects by others
