
Need advice on a high-end server purchase

  • Last Updated: Mar 3rd, 2018 9:57 am
[OP]
Member
Aug 27, 2006
277 posts
76 upvotes


Dear server experts! I am a researcher looking into buying a high-end server for some memory-intensive simulations (mostly finite-element method, for those who are interested). I know that I'll need a minimum of 1TB of RAM, as many physical cores as I can get (for parallel computing), and an array of 15K rpm hard drives, but I have no idea what else might be needed. I came across the following specs for a *refurbished* server and wanted some advice on anything I might be missing that will be necessary for its smooth operation, or anything I might need to watch out for. I have some comments in-line with the specs and some additional questions at the end of this message, and would really appreciate your advice! Thank you in advance:

  • Dell PowerEdge R920 server 4U rack, supports up to 6TB of memory in 96 slots (I know this is discontinued and replaced with the R940)
  • 4 x Intel Xeon 6-core E7-8893 v2 3.4GHz 37.5MB cache/processor (are there faster/cheaper newer processors, e.g. i5, out there? Do they offer the same amount of cache?)
  • 64 x 16GB registered server RAM (does the brand of RAM really matter, e.g. between Hynix, Samsung, Kingston?)
  • 10 x DELL 600GB 15K rpm 2.5" SAS Hard Drives (is there any need for a RAID configuration if my priority is speed of writing and reading? Should some of the drives, e.g. for the OS, be excluded from the RAID setup? What kind of RAID would you recommend?)
  • 24 x 2.5" SFF SAS/SATA/SSD BAYS
  • PERC H730P 2GB NV cache RAID controller
  • iDRAC7 Express (is there a big difference between iDRAC7 and iDRAC8? Would the latter even be supported on the R920?)
  • No optical drive (is there ever a need for an optical drive?)
  • Redundant 1100W power supply (does "redundant" here mean two supplies with a total power of 1100W, or 1100W each? Is this enough for what I want to put in here?)
  • Broadcom 5720 quad-port 1GbE network daughter card
  • Up to 10 PCI-E slots: 8 x PCI-E 3.0 + 1 RAID slot + 1 NDC slot + 2 optional PCI-E slots (what are these, and what can I do with them?)
  • EVGA NVIDIA GeForce GTX 980 Ti 6GB (will this card even work in this system? Can the power supply handle it? If not, what would be a good supported alternative? I don't need GPUs for computation, but will it help when accessing the server remotely?)
  • Rail Kit
  • No bezel

Some additional questions:

  1. What would be a decent price for the above system (please note, it is refurbished)?
  2. Are there any components that are not included, but which will definitely be needed? If so, what are they?
  3. I heard that Intel will stop supporting/selling these processors after 2019. Is this a big problem?
  4. Any suggestions on a good-quality reliable UPS for this server?

Thank you!

TRB
11 replies
Deal Fanatic
Aug 29, 2001
6361 posts
1568 upvotes
rural ontario
How many IOPS do you need? Enterprise-grade SSDs or FusionIO cards will give you lots more IOPS than spinning disks.
72 69 6c 6c 65 73
Deal Addict
Oct 6, 2015
2463 posts
1400 upvotes
Probably best to hit up a forum like "servethehome" that has people who actually know what that equipment is. What operating system? Do you have a handle on the sort of CPU, RAM, and I/O use of the application?

Do you know anyone at the universities, etc., you could talk to, or perhaps borrow time on their system to get a feel for what you need? Maybe a research 'accelerator' or something?

I'm thinking there's maybe $30-$40k of hardware there.
Deal Addict
Sep 12, 2007
2933 posts
1047 upvotes
rilles wrote: How many IOPS do you need? Enterprise-grade SSDs or FusionIO cards will give you lots more IOPS than spinning disks.
Good question. How much writing will you do? If it's not a lot (it sounds like you will run mostly in memory, like HANA-style workloads), you can go with entry-level/mainstream SSDs. Around six SSDs is plenty; anything more and you will likely saturate your controller anyway. I would not recommend enterprise-class SSDs, as most people will not use them the way they are meant to be used and will pay more for no reason.

Spinning disks don't hold a candle to SSDs when it comes to read/write performance; go RAID 5 with a hot spare, or RAID 6, with the SSDs. Even like this you will have much higher performance than RAID 10 with 15K drives (not to mention the heat and electricity). If you don't need high IOPS and the workload isn't random, SATA drives are fine for sequential writes.

Sorry, can't help on price; all that stuff is now on its way out, so you should be able to get a good deal.

But back to the question for the OP: what IOPS do you need?
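
If it helps to put a number on it: for a simulation that mostly dumps big result/checkpoint files on a schedule, you can ballpark the sustained write IOPS from the checkpoint size, the write interval, and the I/O size. A rough Python sketch (every number in it is a made-up placeholder, not a measurement of your workload):

# Back-of-the-envelope sustained write IOPS for a run that dumps a big
# checkpoint/result file on a schedule. Every number is a placeholder --
# plug in your own checkpoint size, interval, and I/O size.
checkpoint_bytes = 50 * 1024**3   # e.g. a 50 GB checkpoint/results file
interval_s       = 15 * 60        # written every 15 minutes
io_size_bytes    = 1 * 1024**2    # large sequential writes, ~1 MB per I/O

ios_per_checkpoint = checkpoint_bytes / io_size_bytes
sustained_iops     = ios_per_checkpoint / interval_s

print(f"{ios_per_checkpoint:,.0f} I/Os per checkpoint")
print(f"~{sustained_iops:,.0f} sustained write IOPS between checkpoints")

Anything in that ballpark is low even for spinning disks; it's heavy random access that changes the picture and pushes you toward SSDs.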
[OP]
Member
Aug 27, 2006
277 posts
76 upvotes
I really don't know enough to answer this question about IOPS...does the number 100-150 sound right?
Deal Fanatic
Aug 29, 2001
6361 posts
1568 upvotes
rural ontario
For low IOPS, a few consumer-grade SATA SSDs in RAID 1 would use way less power, make way less noise, and be less complex to manage, and you could ditch the RAID controller and use software RAID (1TB SSDs are getting really cheap now). But I guess this depends on the criticality of the service and the personal leanings of the server admin.
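
If you do go the Linux md software-RAID route, keeping an eye on it doesn't need vendor tooling the way a PERC does. A minimal sketch that just reads the kernel's /proc/mdstat status file (this assumes Linux md RAID is actually in use; the line formats in the comments are the usual mdstat layout):

# Minimal Linux software-RAID (md) health check: read /proc/mdstat and flag
# any array whose status field (e.g. "[2/2] [UU]") shows a dropped member.
# Assumes Linux md RAID is in use; nothing here is Dell/PERC-specific.
from pathlib import Path

def mdstat_summary(path="/proc/mdstat"):
    for line in Path(path).read_text().splitlines():
        stripped = line.strip()
        if stripped.startswith("md"):        # e.g. "md0 : active raid1 sdb1[1] sda1[0]"
            print(stripped)
        elif "blocks" in stripped:           # the "[n/m] [UU...]" health line
            status = "  <-- DEGRADED" if "_" in stripped else "  (healthy)"
            print("   ", stripped + status)

if __name__ == "__main__":
    mdstat_summary()

Something like that in a cron job covers the basic "a mirror member dropped out" case.
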
72 69 6c 6c 65 73
Deal Addict
Feb 4, 2018
1009 posts
49 upvotes
1. Apparently that CPU is roughly equivalent to a Core i7-5960X. If several benchmarks on the web are to be believed, it outperforms a Kaby Lake processor easily by 10-15%. So at least we know that no i5 will outmatch it performance-wise.

Power-wise it seems to be similar to Skylake/Kaby Lake.


2. RAM brand does not matter. Most of it is from the same suppliers anyway, even the ECC stuff.

3. 15K SAS drives "should" be decently fast, but they will not beat even a cheap SSD. They outperform SATA spinners by maybe 10-15%; where SAS shines is reliability and low latency thanks to the very fast spin speeds. As for RAID: if all you care about is the quickest access to the data, RAID 0, but it offers ZERO protection for the data on your disks. Set all the disks up as RAID 0 and the server sees one large, really fast logical disk, but one drive dies and you're toast. So: RAID 0 plus religious backups to another array, or tape, anything. RAID 10 is great for a little fault tolerance: you can usually afford to lose one disk and the data is still safe, but usable storage is cut in half (4 disks in RAID 10 give you the performance of 2 disks in RAID 0, with a second RAID 0 mirroring it in parallel). RAID 5 is poor for writes but about equal to RAID 10 for reads, and you need a very good controller for decent performance. You can afford to lose one disk, but then you're running in degraded mode (meaning slow as molasses) until the disk is replaced. The one advantage of RAID 5 is that you only lose one disk in a set to parity: with 6 disks, RAID 5 loses one, while RAID 10 loses three. (See the capacity sketch at the end of this post.)

Setting quality SSDs all up in RAID 0 would yield some seriously fast data access and still be "reasonably" safe from death. You could install the OS on the RAID 0 and the system would not suffer much in performance. Backups to another device are CRITICAL, though.


4. iDRAC: do you need to access the server remotely to monitor stats, or to shut it down/manage it remotely?

5. An optical drive depends on your needs. Some stuff still ships on CD/DVD, but most things you can load from a USB stick now.

6. Redundant PSU usually means each PSU is 1100W, for a total of 2200W, but the system will not draw more than 1100W total, so the two PSUs each carry half the load and either one can run the box on its own if the other fails.

7. Server PCI-E slots are basically the same as desktop PCI-E. The RAID slot is specifically for the RAID card, the NDC slot is for Dell's proprietary network daughter card with its proprietary connector, and you can usually get riser cards that add 2 more PCI-E slots.

Limitations on which cards you can install there usually come down to whether the card is on Dell's Hardware Compatibility List.

In my personal experience, you can usually install eSATA or USB 3.0 cards there, and as long as there's a Windows 7 or Windows 10 driver, it will work in a Windows Server OS (after a bunch of fighting with Windows Server driver signing).

8. You absolutely do NOT need a GPU to access the server remotely. You usually attach a keyboard/mouse (or KVM switch) to the server just to install/configure the server OS (Windows Server, ESXi, some flavour of Linux server). The OS then determines how you access it remotely: Windows can use RDP or VNC, Linux is primarily SSH (or VNC for a GUI), and ESXi uses the vSphere client, etc.

For computation it is possible to install some GPUs, but it's normally not officially supported by Dell or whichever company makes the server. A lot of the time they won't even physically fit, and Windows Server at least will give you a hell of a time loading drivers for them.
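
To make the capacity/fault-tolerance trade-off in point 3 concrete, here's a small sketch of the standard RAID arithmetic, using the 10 x 600GB drives from the listing as the example (the numbers are just the textbook capacity formulas, nothing measured):

# Standard usable-capacity / fault-tolerance arithmetic for the RAID levels
# discussed above, using the listing's 10 x 600 GB 15K SAS drives as input.
def raid_summary(n_disks, disk_gb):
    return {
        # level: (usable GB, disk failures survived in the worst case)
        "RAID 0":  (n_disks * disk_gb,       0),
        "RAID 5":  ((n_disks - 1) * disk_gb, 1),
        "RAID 6":  ((n_disks - 2) * disk_gb, 2),
        # RAID 10: half the capacity; guaranteed to survive 1 failure,
        # more only if the failures land in different mirror pairs.
        "RAID 10": (n_disks // 2 * disk_gb,  1),
    }

for level, (usable_gb, survives) in raid_summary(10, 600).items():
    print(f"{level:8s} ~{usable_gb:>5d} GB usable, survives {survives} failure(s) worst-case")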
Deal Addict
Feb 4, 2018
1009 posts
49 upvotes
rilles wrote: For low IOPS, a few consumer-grade SATA SSDs in RAID 1 would use way less power, make way less noise, and be less complex to manage, and you could ditch the RAID controller and use software RAID (1TB SSDs are getting really cheap now). But I guess this depends on the criticality of the service and the personal leanings of the server admin.
Most servers come with at least a basic hardware RAID controller; Dell uses the PERC series.

"Software" RAID can mean different things.

Example:

A PC might have the Intel RAID config utility in the BIOS, which usually only offers RAID 0 or 1. All that really is is a pass-through so the CPU handles the RAID. RAID 0 and 1 have next to zero overhead for what most home users will do (maybe six HDDs in RAID, tops).

Once you get to the enterprise level, with storage appliances that have 48+ disks, you start needing dedicated controllers.

The other "software" RAID is something like Windows Dynamic Disks: you boot into Windows, bunch whatever disks you have installed together, and format them as one large "logical" disk at whatever RAID level you choose. CPU usage is also negligible for RAID 0 or 1. Note that you cannot install the OS onto a dynamic disk.

So what's the advantage? Far FAR easier to transfer those disks physically to another PC. Any Windows will recognize them as a RAID set.

If you do the "Intel" RAID utility thing, you risk not being able to read those disks on older Intel chipsets, or non-Intel ones.

Dedicated RAID controllers? Usually you need the same (or a compatible) controller to read the array back, or you're (mostly) toast.
Deal Fanatic
Jan 6, 2011
6107 posts
1502 upvotes
GTA
My understanding is that when a task is large enough to migrate to the GPU, the whole thing goes there and the CPU is no longer relevant, i.e. there is no gain in splitting the work between the two.

Then the bottleneck shifts to disk I/O (loading data and writing results back) and the channel between the GPU and system RAM, not the CPU or CPU cache.
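
A rough illustration of why the transfer side tends to dominate (all the figures below are ballpark assumptions for illustration, not benchmarks of any particular card):

# Ballpark: time to move a problem over PCIe vs. time for one pass of
# compute on it. Every figure here is a rough assumption for illustration.
model_bytes    = 8 * 1024**3   # say ~8 GB of matrices/vectors to ship to the GPU
pcie_bytes_s   = 12e9          # ~12 GB/s effective for a PCIe 3.0 x16 link
gpu_flops      = 5e12          # ~5 TFLOP/s sustained (compute-class GPU, optimistic)
flops_per_byte = 2             # low arithmetic intensity, typical of sparse kernels

transfer_s = model_bytes / pcie_bytes_s
compute_s  = model_bytes * flops_per_byte / gpu_flops

print(f"PCIe transfer ~{transfer_s:.2f} s vs. one compute pass ~{compute_s:.3f} s")
# When the transfer (and the disk feeding it) takes far longer than the math,
# the interconnect -- not the CPU or its cache -- is what you're waiting on.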
Member
Dec 7, 2015
479 posts
105 upvotes
Ottawa, ON
Finite element analysis is well suited to highly parallel processing, so you can use lots of GPU cores rather than CPUs, as the previous poster suggested. The biggest supercomputers are GPU-based, and they are suited to big FEA problems.

It will also be processor-bound, so disk access may not be critical. You will need a motherboard that can carry a lot of GPU cards, and you'll be running a Linux cluster distro. Otherwise, several clustered motherboards, each with several GPU cards. Think gaming computers rather than a server.
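
For a feel of what sits at the core of an FEM run, here is a minimal CPU-side sketch: assemble a 1D Poisson-style stiffness matrix and solve K u = f with conjugate gradients using SciPy (the mesh size is arbitrary; GPU libraries such as CuPy expose a very similar sparse interface, which is where the many-GPU-cores argument comes from):

# Minimal sketch of the sparse solve at the heart of an FEM run: assemble a
# 1D Poisson-style stiffness matrix and solve K u = f with conjugate gradients.
# The mesh size is arbitrary; real 3D models run to millions of unknowns.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000                    # number of interior nodes (arbitrary)
h = 1.0 / (n + 1)             # uniform element size

# Tridiagonal stiffness matrix for -u'' = f with linear elements.
main = (2.0 / h) * np.ones(n)
off  = (-1.0 / h) * np.ones(n - 1)
K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

f = h * np.ones(n)            # constant unit load

u, info = spla.cg(K, f)       # info == 0 means the solver converged
print("converged:", info == 0, "| max displacement ~", round(float(u.max()), 4))

Real 3D problems have millions of unknowns and much denser coupling, which is where the OP's 1TB of RAM goes.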

Trivia: Years ago, the McLaren racing team did this for about $50,000 using 8 Playstations (Cell Broadband Engine Architecture) and a workstation as a controller. Sony killed the Linux capability and this was a dead end.
[OP]
Member
Aug 27, 2006
277 posts
76 upvotes
Thanks all for your help! In the end, I did purchase this server with a few modifications -- no HDDs (I bought 3 x 2TB Micron SSDs instead) and no graphics card. I also ensured that the RAM was Samsung and 1600MHz. It cost me CAD $14K.
