
Microsoft’s latest open source servers shown off with Intel, AMD, and even ARM chips

An inevitability is becoming a reality.

Peter Bright - Mar 8, 2017 9:47 pm UTC

Qualcomm Centriq 2400 server for Project Olympus. Credit: Microsoft

At the Open Compute Summit in Santa Clara, California, today, Microsoft showed off the latest iterations of Project Olympus, its open source data center server design. Until now, the servers in Microsoft's data centers have all used Intel x86 processors, but now both of those elements—"Intel" and "x86"—have new competition.

In news that's both surprising and unsurprising, Microsoft demonstrated Windows Server running on ARM processors. Qualcomm and Cavium have both designed motherboards for the Project Olympus form factor that use ARM chips: Qualcomm's Centriq 2400 processor, a 10nm, 48-core part, and Cavium's ThunderX2, an ARMv8-A design with up to 54 cores. In addition to offering lots of cores, both are highly integrated systems-on-chip with PCIe, SATA, and tens of gigabits of Ethernet all integrated.

Microsoft isn't yet letting third parties use these systems. The Windows Server build is an internal build, and Microsoft is using the systems in those applications where it says they make the most sense, with the company listing search and indexing, storage, databases, big data, and machine learning as workloads that benefit from the high throughput the ARM systems offer.

More broadly, the company says that ARM is appealing because it has greater scope for extending the instruction set. The 64-bit ARM instruction set is (relatively) clean and neatly designed, making it easier to integrate new capabilities and extensions. How true this is in practice is less clear, especially as customized, extended ARM designs jeopardize another desirable ARM trait: a large extant software and developer base that is familiar with the platform.

Cavium ThunderX2 server for Project Olympus. Credit: Microsoft

On the one hand, porting Windows Server to ARM is not tremendously surprising; we know that Windows client is coming back to ARM (indeed, although Microsoft stopped selling Windows client systems with ARM processors in 2015, Windows on ARM never actually went away, as Microsoft continued to develop it for use in Internet-of-Things devices). Given that Microsoft has largely consolidated Windows into a single common platform, cranking out a Server build for ARM should have been relatively straightforward.

As ARM has made a push for the server room, industry observers have long anticipated a build of Windows Server for ARM to ensure that Windows does not get left behind in this space. With credible ARM hardware on the cusp of widespread availability, it makes sense for Microsoft to go public.

But on the other hand, Microsoft has never been very successful with Windows built for anything other than x86 processors, so much so that "Wintel" (Windows on Intel) became synonymous with the PC platform. There are reasons to believe that Windows Server on ARM will fare better than Windows on Itanium, PowerPC, Alpha, and MIPS did: unlike those other non-x86 (and, except for Itanium, non-Intel) processor families, ARM has massive industry support and designs that are at least somewhat competitive across a wide range of workloads. But success and widespread adoption are no certainty.

And Microsoft certainly has plenty of other options. ARM systems weren't the only Project Olympus hardware to be shown off. Microsoft has been working with AMD and has designs using the company's forthcoming Naples processor, a 32-core, 64-thread system-on-chip with enormous I/O bandwidth from its 128 PCIe lanes, built around AMD's new Zen core.

And industry stalwart Intel isn't sitting on its hands, either. Project Olympus servers using Skylake chips were also on display. Skylake-EP, the server version of Skylake, isn't available to buy yet, but Google announced in late February that it was already using the chips for its cloud services as part of a partnership with Intel. Skylake-EP is not merely a bigger version of the desktop chips with more cores, more cache, and more processor sockets; it also adds AVX-512, an extension of the existing AVX instruction set that enlarges it to operate on 512-bit data types, up from 256 bits in desktop Skylake. This makes Skylake-EP even stronger at a wide range of number-crunching workloads and highlights one of the ways in which Intel might differentiate its chips from Naples: Skylake-EP with AVX-512 will likely boast four times the floating point performance of Naples at the same clock speed.
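That four-fold figure is straightforward lane arithmetic. The sketch below models Skylake-EP as having two 512-bit FMA units per core and Zen's floating point datapath as two 128-bit FMA units; those unit counts are assumptions for illustration, not disclosed figures from either vendor:

```python
# Peak double-precision FLOPs per cycle, per core, from vector width alone.
def peak_dp_flops_per_cycle(vector_bits, fma_units):
    lanes = vector_bits // 64      # 64-bit doubles per vector register
    return lanes * 2 * fma_units   # a fused multiply-add counts as 2 FLOPs per lane

skylake_ep = peak_dp_flops_per_cycle(512, 2)  # AVX-512, assumed 2 FMA units
naples = peak_dp_flops_per_cycle(128, 2)      # assumed 2x 128-bit FMA units
print(skylake_ep, naples, skylake_ep // naples)  # 32 8 4
```

On those assumptions, the gap at equal clocks comes out to exactly 4x; real workloads will land lower, since few codes are pure FMA.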

Peter Bright Peter is Technology Editor at Ars. He covers Microsoft, programming and software development, Web technology and browsers, and security. He is based in Brooklyn, NY. Email peter.bright@arstechnica.com // Twitter @drpizza

73 Reader Comments

Chronological View | Best Comments
  1. Contingency Ars Centurion The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.
  2. SilverSee Ars Scholae Palatinae Quote:Windows on ARM never actually went away, as Microsoft continued to develop it for use in Internet-of-Things devices).
    And (ahem) Windows Mobile.
  3. Dilbert Ars Legatus Legionis Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.
    Maybe this?

    Image
  4. sep332 Ars Tribunus Militum et Subscriptor Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.
    Not sure about the expansion slot, but it's not uncommon for dense storage arrays to lack hot-swap. Backblaze pods, and Microsoft's Open Cloud Server design for example.
  5. Faanchou Ars Scholae Palatinae Quote:An inevitability is becoming a reality.
    Like this?
  6. tipoo Ars Tribunus Militum It would be *just like* AMD if the moment they fully caught up to Intel even in the edge cases, things had already shifted to ARM :P

    They did have some ARM projects going but shelved them to focus on Ryzen, which was a good move in the short run, but I hope they don't take their eye off that ball in the long run. I think ARM's more restrained front end will actually end up making it easier to scale up, once the market shows demand for large ARM cores.
  7. dvanh Smack-Fu Master, in training Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.

    It doesn't matter for cloud infra, the server is the smallest operational unit. If a drive fails, the whole node is decommissioned.
  8. Burner1515 Wise, Aged Ars Veteran dvanh wrote:Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.

    It doesn't matter for cloud infra, the server is the smallest operational unit. If a drive fails, the whole node is decommissioned.

    Surely you mean it's removed from the array of servers until it is fixed and placed back in right? Decommissioned would mean they literally throw the server out like the above xkcd jokes.
  9. WaveRunner Ars Praefectus Burner1515 wrote:dvanh wrote:Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.

    It doesn't matter for cloud infra, the server is the smallest operational unit. If a drive fails, the whole node is decommissioned.

    Surely you mean it's removed from the array of servers until it is fixed and placed back in right? Decommissioned would mean they literally throw the server out like the above xkcd jokes.

    Who says it's a joke? :) At least the element of truth is Microsoft's current gen datacenters servers are fixed by the container load. Not individually or by the rack.
  10. Contingency Ars Centurion sep332 wrote:Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.
    Not sure about the expansion slot, but it's not uncommon for dense storage arrays to lack hot-swap. Backblaze pods, and Microsoft's Open Cloud Server design for example.

    It's not even a disk array though. If it's a server, put non-OS storage on an array. Handle storage redundancy at the array level or higher. The fewer drives on a server, the lower the likelihood of component failure on the server.

    The design is bizarre--like designers playing "Junkyard Wars" instead of anything belonging in production. The only scenario I can think of is cramming a bunch of spares in the chassis to take over for failed drives as time passes, to keep the server functioning as long as possible. If that's the case though, it'd likely make more sense to go to VMs, and handle failures via a pool of cheaper servers that can be added to as needed.
  11. LeopardSeal Ars Centurion et Subscriptor Quote:Windows on ARM never actually went away, as Microsoft continued to develop it for use in Internet-of-Things devices)

    And Windows Phone.
  12. normally butters Ars Scholae Palatinae I eagerly await the day when my Python or NodeJS applications on AWS Lambda run on either Intel, AMD, or ARM hardware without me knowing or caring. The programming model is already in place to allow the cloud providers to make massive shifts in hardware procurement without affecting customers. That can only be good for competition.
  13. Mistrose Ars Scholae Palatinae LeopardSeal wrote:Quote:Windows on ARM never actually went away, as Microsoft continued to develop it for use in Internet-of-Things devices)

    And Windows Phone.

    Ah, no.

    On edit: Okay, cheap shot. One day they might release a decent phone again, Continuum shows promise.

    Last edited by Mistrose on Wed Mar 08, 2017 6:14 pm

  14. Splynn Ars Centurion Ars has mentioned something important when they have discussed Android. That is the concept of a hardware platform. AMD helped here by putting forward a server platform spec. There were issues like things at the firmware level and I wondered for a long time if it was just AMD and Cavium working on ARM server platforms with everyone else doing their own ARM thing in traditional fashion.
    It's interesting to see this progressing. The many players in the ARM world could inject competition in enterprise like we have not seen in a long time.
  15. Splynn Ars Centurion Contingency wrote:sep332 wrote:Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.
    Not sure about the expansion slot, but it's not uncommon for dense storage arrays to lack hot-swap. Backblaze pods, and Microsoft's Open Cloud Server design for example.

    It's not even a disk array though. If it's a server, put non-OS storage on an array. Handle storage redundancy at the array level or higher. The fewer drives on a server, the lower the likelihood of component failure on the server.

    The design is bizarre--like designers playing "Junkyard Wars" instead of anything belonging in production. The only scenario I can think of is cramming a bunch of spares in the chassis to take over for failed drives as time passes, to keep the server functioning as long as possible. If that's the case though, it'd likely make more sense to go to VMs, and handle failures via a pool of cheaper servers that can be added to as needed.

    Consider something distributed like hyper converged infrastructure.
  16. whenthewallsfell Smack-Fu Master, in training The article doesn't directly address this, but I'm assuming the Windows on ARM servers will utilize the same sort of emulation capability that other forthcoming Windows on ARM machines/devices will. Meaning they'll be able to run a large subset (at least) of existing applications written for Intel/AMD Windows servers. Something that is very, very different from any versions of Windows that Microsoft has released for ARM in the past.

    Last edited by whenthewallsfell on Wed Mar 08, 2017 11:24 pm

  17. SymmetricChaos Wise, Aged Ars Veteran Quote:AVX-512, an extension of the existing AVX instruction set that enlarges it to operate on 512 bit data types, up from 256 in desktop Skylake

    Wait Skylake support 256-bit arithmetic at the instruction level? (or hardware level possibly?)

    I didn't know that.
  18. DrPizza Moderator et Subscriptor SymmetricChaos wrote:Quote:AVX-512, an extension of the existing AVX instruction set that enlarges it to operate on 512 bit data types, up from 256 in desktop Skylake

    Wait Skylake support 256-bit arithmetic at the instruction level? (or hardware level possibly?)

    I didn't know that.
    The data types in question are packed floats and integers. 256 is 4 64-bit double precision floats (or 8 32-bit single precision) or various sized integers. AVX-512 takes that up to 8 doubles/16 singles.
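    To make those widths concrete, here is a byte-counting sketch of the packed layouts (plain Python, no actual SIMD instructions involved):

```python
import struct

# One 256-bit AVX register's worth of packed doubles vs. one 512-bit AVX-512 register's.
avx = struct.pack("4d", 1.0, 2.0, 3.0, 4.0)  # 4 x 64-bit doubles
avx512 = struct.pack("8d", *range(8))        # 8 x 64-bit doubles
assert len(avx) * 8 == 256
assert len(avx512) * 8 == 512

# Single precision doubles the lane count at each width.
assert len(struct.pack("8f", *range(8))) * 8 == 256    # 8 singles fill AVX
assert len(struct.pack("16f", *range(16))) * 8 == 512  # 16 singles fill AVX-512
```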
  19. SmokeTest Ars Praetorian WaveRunner wrote:Burner1515 wrote:dvanh wrote:Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.

    It doesn't matter for cloud infra, the server is the smallest operational unit. If a drive fails, the whole node is decommissioned.

    Surely you mean it's removed from the array of servers until it is fixed and placed back in right? Decommissioned would mean they literally throw the server out like the above xkcd jokes.

    Who says it's a joke? :) At least the element of truth is Microsoft's current gen datacenters servers are fixed by the container load. Not individually or by the rack.
    I'm going to guess they probably send the server back to the manufacturer, who repairs it and sends it back to be returned to service. Meanwhile, MS just spins up a new server and considers the old server to no longer exist.

    Easy and efficient. And probably cheaper than paying somebody $75/hour to do it onsite.
  20. skierpage Wise, Aged Ars Veteran Quote:Microsoft isn't yet letting third parties use these systems. The Windows Server build is an internal build
    So why is this featured at "Open Compute Summit"? Open source software FTW, with or without open hardware. And I don't see this AMD design at https://github.com/opencomputeproject/P ... ster/Specs yet.
  21. Smeghead Ars Praefectus Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.
    Presenting (in no particular order):

    https://www.supermicro.com/products/sys ... -AR12L.cfm
    http://www.raidinc.com/wp-content/uploa ... -Sheet.pdf
    https://www.avantek.co.uk/product/1u-se ... nt-hadoop/
  22. Shmerl Ars Centurion Quote:Microsoft’s latest open source servers shown off
    ...
    Microsoft isn't yet letting third parties use these systems.

    Open not allowed to be used server... Oxymoron?
  23. BloodNinja Ars Centurion Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.
    It's an Apple version. When a drive fails, you toss the whole rack.

    Edit: ninja'd and XKCD'ed multiple times.

    Last edited by BloodNinja on Wed Mar 08, 2017 9:28 pm

  24. fuzzyfuzzyfungus Ars Tribunus Militum I didn't know that Qualcomm made server parts.

    Do they care more about enterprise customers; or are you expected to throw out the server and buy a new one if you want more than 1, maybe 2, OS version updates?
  25. panton41 Ars Tribunus Militum fuzzyfuzzyfungus wrote:I didn't know that Qualcomm made server parts.

    Do they care more about enterprise customers; or are you expected to throw out the server and buy a new one if you want more than 1, maybe 2, OS version updates?

    I've gotten the impression the biggest problem with the longevity of Qualcomm chips on Android is that the stability of driver APIs on Linux is a dumpster fire compared to Windows. A driver written for a specific Windows version might be good for 5-10 years on newer Windows versions because at the driver level nothing really changes. On Linux (and thus Andorid) a simply kernel-level security patch kills driver module compatibility and forces a recompile of the driver module.

    Though I've never understood why Qualcomm couldn't do something like nVidia does (or used to do) and have a script that compiles a kernel module to target the latest version on your system without their explicit blessing. (Or rather, compile the new Android build for that phone model to use it before it's uploaded to the server.)
  26. TheNetAvenger Wise, Aged Ars Veteran Contingency wrote:sep332 wrote:Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.
    Not sure about the expansion slot, but it's not uncommon for dense storage arrays to lack hot-swap. Backblaze pods, and Microsoft's Open Cloud Server design for example.

    It's not even a disk array though. If it's a server, put non-OS storage on an array. Handle storage redundancy at the array level or higher. The fewer drives on a server, the lower the likelihood of component failure on the server.

    The design is bizarre--like designers playing "Junkyard Wars" instead of anything belonging in production. The only scenario I can think of is cramming a bunch of spares in the chassis to take over for failed drives as time passes, to keep the server functioning as long as possible. If that's the case though, it'd likely make more sense to go to VMs, and handle failures via a pool of cheaper servers that can be added to as needed.

    This like many posts seem to be missing the points...

    1) These are designed for Microsoft's own data centers, not joe smoo's rack in the basement of the building.

    2) The way Windows Server runs on 'server hardware' is in itself agnostic to the hardware, meaning that processes and VM can exist across several servers and server hardware at once and move without interruption of service.


    So if a drive or server has issues, the 'running' Windows OS on top of it, simply moves to other hardware until it is fixed again and available in the pool of hardware.

    This is where people seem to get a bit lost with Windows Server and how Azure itself works, as it doesn't just sit on one set of hardware by design. This is also how it scales up and down and can utilize other hardware features as they are available in various pools - thing like GPU access that also can scale across several physical machines.

    (Which is possible due to GPU technologies from the WDDM in Vista, that are still essentially exclusive to Windows.)


    Everyone go back and look at the initial designs of Windows Server and Azure and how it continued to expand in those directions in the last few years.

    Windows NT Server and Azure are rather impressive technologies in how they work on top of a base server core that can disappear without hurting the upper level OS or processes running on it.
  27. TheNetAvenger Wise, Aged Ars Veteran panton41 wrote:fuzzyfuzzyfungus wrote:I didn't know that Qualcomm made server parts.

    Do they care more about enterprise customers; or are you expected to throw out the server and buy a new one if you want more than 1, maybe 2, OS version updates?

    I've gotten the impression the biggest problem with the longevity of Qualcomm chips on Android is that the stability of driver APIs on Linux is a dumpster fire compared to Windows. A driver written for a specific Windows version might be good for 5-10 years on newer Windows versions because at the driver level nothing really changes. On Linux (and thus Andorid) a simply kernel-level security patch kills driver module compatibility and forces a recompile of the driver module.

    Though I've never understood why Qualcomm couldn't do something like nVidia does (or used to do) and have a script that compiles a kernel module to target the latest version on your system without their explicit blessing. (Or rather, compile the new Android build for that phone model to use it before it's uploaded to the server.)

    Due to the Linux model, even with a fairly intelligent script, the dependency levels would make something like this disastrous.

    What NVidia does is rather simplistic to recompiling their drivers for the current kernel, what you are talking about requires recompiling the entire kernel and all dependencies.

    There are real reasons why Microsoft designed NT with an object kernel model that not only avoids these dependencies, but by using objects, even drastic changes can be added without affecting software using older calls or methods.

    A good example would be Vista, as it kept the XPDM while adding on the new WDDM, and calls used features available to them in each stack without incident.

    This is where I normally remind people that Windows NT STILL has some impressive architecture and model concepts that are implemented beautifully.

    Sadly the anti-Windows/MS people in the OSS world ignore the good things in Windows and for 20 years have avoided taking steps to TRULY create an OSS OS that offers some of these technologies.

    Even when the WDDM technologies and pre-emptive/managed/SMP GPU technologies were started to be added to Windows with Vista, the world ran around making fun of Vista, and 10 years later, Windows is the only OS with a kernel level GPU scheduler technology. (That is now even more agnostic and extends to any additional processors beyond GPU and CPU now.)

    The OSS world should have a wonderful alternative by now, instead we have 1980s models and architectures at 'best' with a lot of duct tape and driver wrapping just to make them usable on todays hardware. (sad)
  28. senso Smack-Fu Master, in training TheNetAvenger wrote:panton41 wrote:fuzzyfuzzyfungus wrote:I didn't know that Qualcomm made server parts.

    Do they care more about enterprise customers; or are you expected to throw out the server and buy a new one if you want more than 1, maybe 2, OS version updates?

    I've gotten the impression the biggest problem with the longevity of Qualcomm chips on Android is that the stability of driver APIs on Linux is a dumpster fire compared to Windows. A driver written for a specific Windows version might be good for 5-10 years on newer Windows versions because at the driver level nothing really changes. On Linux (and thus Andorid) a simply kernel-level security patch kills driver module compatibility and forces a recompile of the driver module.

    Though I've never understood why Qualcomm couldn't do something like nVidia does (or used to do) and have a script that compiles a kernel module to target the latest version on your system without their explicit blessing. (Or rather, compile the new Android build for that phone model to use it before it's uploaded to the server.)

    Due to the Linux model, even with a fairly intelligent script, the dependency levels would make something like this disastrous.

    What NVidia does is rather simplistic to recompiling their drivers for the current kernel, what you are talking about requires recompiling the entire kernel and all dependencies.

    There are real reasons why Microsoft designed NT with an object kernel model that not only avoids these dependencies, but by using objects, even drastic changes can be added without affecting software using older calls or methods.

    A good example would be Vista, as it kept the XPDM while adding on the new WDDM, and calls used features available to them in each stack without incident.

    This is where I normally remind people that Windows NT STILL has some impressive architecture and model concepts that are implemented beautifully.

    Sadly the anti-Windows/MS people in the OSS world ignore the good things in Windows and for 20 years have avoided taking steps to TRULY create an OSS OS that offers some of these technologies.

    Even when the WDDM technologies and pre-emptive/managed/SMP GPU technologies were started to be added to Windows with Vista, the world ran around making fun of Vista, and 10 years later, Windows is the only OS with a kernel level GPU scheduler technology. (That is now even more agnostic and extends to any additional processors beyond GPU and CPU now.)

    The OSS world should have a wonderful alternative by now, instead we have 1980s models and architectures at 'best' with a lot of duct tape and driver wrapping just to make them usable on todays hardware. (sad)

    That kinda explains why using the same kernel versions in ubuntu and mint, one leaves me with a working soundcard and the other not?

    In fact, that explains a lot of "strange" things that happen between flavours of linux that are supposed to come from the same base "tree"..
  29. orome Smack-Fu Master, in training panton41 wrote:
    I've gotten the impression the biggest problem with the longevity of Qualcomm chips on Android is that the stability of driver APIs on Linux is a dumpster fire compared to Windows. A driver written for a specific Windows version might be good for 5-10 years on newer Windows versions because at the driver level nothing really changes. On Linux (and thus Andorid) a simply kernel-level security patch kills driver module compatibility and forces a recompile of the driver module.


    This is not true. Enterprise distros (RHEL, SLES, not sure about ubuntu) provide binary compatibility in major version.
    RHEL 6 kernel maintains 2.6.32 compatible module ABI (although feature wise it's much closer latest upstream) and binary modules written for 6.0 run OK in 6.X.
    that said, drivers are provided with kernel updates, and out of tree drivers are usually not supported.
  30. orome Smack-Fu Master, in training Quote:The ARM instruction set is (relatively) clean and neatly designed,


    Which one? armv7,8,8.1 ? AArch32, AArch64? Thumb, Thumb2, Thumb16, ThumbEE? Jazelle ?
    or one of the floating point options; VFPv3 ? v4? v4-D16? NEON?

    arm instruction sets are an unholy mess of evolving execution modes and extensions.
    the advantages over x86 are;
    a) fixed instruction length (easier decode), sw does not care
    b) old instructions are deprecated and removed, instead of microcoded, sw needs to be recompiled, assembler code rewritten
    c) arm instruction don't have implicit parameters, compilers are happier
    d) there are no x86 style "copy N-bytes of data from location A to location B, stop when the copied byte is zero" instructions, again compilers are happier
  31. AM16 Ars Scholae Palatinae et Subscriptor This is great. I'd use ARM for power savings on specific workloads that don't require too much processing, and Intel or AMD depending on how they perform vs. price in the upcoming years.
  32. redleader Ars Legatus Legionis orome wrote:Quote:The ARM instruction set is (relatively) clean and neatly designed,


    Which one? armv7,8,8.1 ? AArch32, AArch64? Thumb, Thumb2, Thumb16, ThumbEE? Jazelle ?
    or one of the floating point options; VFPv3 ? v4? v4-D16? NEON?


    As the article notes, ARMv8A. ARM has a huge number of variants targeting different applications, but individual platforms rarely target more than a few of them, and a server processor will probably only target v8. The fact that ARM extensions are specialized for specific applications but not carried forward into subsequent ISA revisions is precisely why ARM is reasonably clean.
  33. Elrabin Ars Praetorian Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.


    Companies who are working at OCP scale don't care about individual failed nodes.

    Their redundancy is at the application layer.

    AWS for instance goes through and yanks nodes with problems daily and replaces them, they troubleshoot on the back end after it's been deracked

    If it can be fixed, they fix it and rerack it elsewhere

    if it can't, it's junked and they get a credit towards more nodes from the ODM/OEM
  34. Anonymous Freak Ars Scholae Palatinae Quote:The ARM instruction set is (relatively) clean and neatly designed, making it easier to integrate new capabilities and extensions.

    AKA: ARM isn't all munged up with crap, so it's easy for us to mung up with crap!
  35. xWidget Ars Scholae Palatinae Elrabin wrote:Contingency wrote:The lead image must be a prototype--there doesn't appear to be a way to pull a failed drive without deracking the server, and the expansion slot goes nowhere.


    Companies who are working at OCP scale don't care about individual failed nodes.

    Their redundancy is at the application layer.

    AWS for instance goes through and yanks nodes with problems daily and replaces them, they troubleshoot on the back end after it's been deracked

    If it can be fixed, they fix it and rerack it elsewhere

    if it can't, it's junked and they get a credit towards more nodes from the ODM/OEM
    To expand on this a little, a lot of storage systems look like this:

    - Storage server comes online, notices it has 40TB in it, and tells a load balancer server about it.
    - Load balancer says "Alright, that's 2% of the cluster, so I'll start putting 2% of the data on you."
    - Requests come in, data gets saved in triplicate across the cluster.
    - Eventually a disk dies or fails health checks in the storage server. Server tells the load balancer "Hey, I just lost all these files, and btw I only store 32TB now."
    - Proxy server goes and makes sure that the third copy of each file that was lost has a new place to live.
    - Drive gets offlined and eventually replaced by workers.

    Usually you're continuously adding capacity to your cluster with only a certain amount of headroom, since empty disks only cost you money. Whether you put your new disks on a new server or an old one doesn't really matter that much if you have the servers anyway, and the cluster doesn't care where the disks live. Usually they're set up so a machine can be rebooted and the cluster won't freak out and decide the node is dead until after an hour or so anyway, so you can take a machine down for 5 minutes without any issues at all, since the data also lives in other servers.
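    The lifecycle above can be sketched as a toy model; the class name, capacities, and replica count here are illustrative only, not any real system's API:

```python
import random

class ToyCluster:
    """Capacity-weighted placement with triplication, per the steps above."""
    REPLICAS = 3

    def __init__(self):
        self.capacity = {}   # node -> TB it reported when it came online
        self.placement = {}  # blob -> set of nodes holding a copy

    def add_node(self, name, tb):
        # Node comes online and tells the balancer how much it holds.
        self.capacity[name] = tb

    def store(self, blob):
        # Place copies, weighting choices by each node's share of capacity.
        nodes = set()
        while len(nodes) < min(self.REPLICAS, len(self.capacity)):
            self._add_copy(nodes)
        self.placement[blob] = nodes

    def _add_copy(self, nodes):
        candidates = [n for n in self.capacity if n not in nodes]
        weights = [self.capacity[n] for n in candidates]
        nodes.add(random.choices(candidates, weights=weights)[0])

    def fail_node(self, name):
        # Node drops out; re-replicate anything now below the replica count.
        del self.capacity[name]
        for nodes in self.placement.values():
            nodes.discard(name)
            while len(nodes) < min(self.REPLICAS, len(self.capacity)):
                self._add_copy(nodes)
```

    A failure triggers re-replication onto surviving nodes rather than an in-place repair, matching the "decommission the node, let the cluster heal" model discussed earlier in the thread.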
  36. fuzzyfuzzyfungus Ars Tribunus Militum panton41 wrote:fuzzyfuzzyfungus wrote:I didn't know that Qualcomm made server parts.

    Do they care more about enterprise customers; or are you expected to throw out the server and buy a new one if you want more than 1, maybe 2, OS version updates?

    I've gotten the impression the biggest problem with the longevity of Qualcomm chips on Android is that the stability of driver APIs on Linux is a dumpster fire compared to Windows. A driver written for a specific Windows version might be good for 5-10 years on newer Windows versions because at the driver level nothing really changes. On Linux (and thus Andorid) a simply kernel-level security patch kills driver module compatibility and forces a recompile of the driver module.

    Though I've never understood why Qualcomm couldn't do something like nVidia does (or used to do) and have a script that compiles a kernel module to target the latest version on your system without their explicit blessing. (Or rather, compile the new Android build for that phone model to use it before it's uploaded to the server.)

    It's certainly true that Linux's deliberate lack of binary compatibility on the driver side makes it much harder to use a vendor's necrotic and unsupported drivers despite the fact that they've lost interest in them; but that mostly just emphasizes how dramatic the difference is between vendors who actually cooperate with mainline(like Intel for the most part) or care(like Nvidia for the most part) and vendors who phone in a shoddy BSP, maybe two; and then lose interest.

    Qualcomm isn't quite as dreadful as some of the 'GPL compliance with Chinese Characteristics' SoC slingers; but it's not a flattering comparison between them and people you'd normally buy servers from.
  37. mpat Ars Praefectus DrPizza wrote:SymmetricChaos wrote:Quote:AVX-512, an extension of the existing AVX instruction set that enlarges it to operate on 512 bit data types, up from 256 in desktop Skylake

    Wait Skylake support 256-bit arithmetic at the instruction level? (or hardware level possibly?)

    I didn't know that.
    The data types in question are packed floats and integers. 256 is 4 64-bit double precision floats (or 8 32-bit single precision) or various sized integers. AVX-512 takes that up to 8 doubles/16 singles.

    Worth pointing out that some version of AVX has been in place since Sandy Bridge, but Intel has for some reason disabled it in all Pentiums and Celerons. Because of this, AVX is rarely used in consumer software such as games.
  38. Elrabin Ars Praetorian mpat wrote:DrPizza wrote:SymmetricChaos wrote:Quote:AVX-512, an extension of the existing AVX instruction set that enlarges it to operate on 512 bit data types, up from 256 in desktop Skylake

    Wait Skylake support 256-bit arithmetic at the instruction level? (or hardware level possibly?)

    I didn't know that.
    The data types in question are packed floats and integers. 256 is 4 64-bit double precision floats (or 8 32-bit single precision) or various sized integers. AVX-512 takes that up to 8 doubles/16 singles.

    Worth pointing out that some version of AVX has been in place since Sandy Bridge, but Intel has for some reason disabled it in all Pentiums and Celerons. Because of this, AVX is rarely used in consumer software such as games.

    AVX is on a plethora of consumer parts

    https://ark.intel.com/products/88195/In ... o-4_20-GHz

    Here's my i7 6700k which clearly shows AVX/AVX2 instructions

    http://i.imgur.com/TDUjoHv.jpg

    Here's an old i5 3570k with AVX

    http://ark.intel.com/products/65520/Int ... o-3_80-GHz

    AIDA64 also tests instruction sets

    https://www.aida64.com/products/features/benchmarking
  39. Fotan Ars Tribunus Militum SilverSee wrote:Quote:Windows on ARM never actually went away, as Microsoft continued to develop it for use in Internet-of-Things devices).
    And (ahem) Windows Mobile.

    Heh.
    I assumed this was a proofreading balls up and what he meant to say was: "Windows on ARM never actually went anywhere."

    Which certainly seems more correct.

