
Facebook has already established a reputation for its internal engineering projects, whether simply tweaking algorithms on the News Feed or building brand-new hardware for its homegrown data centers scattered around the globe.

The world's largest social network has also been vocal on its contributions to the open source community.

Thus, the latest news out of Menlo Park, Calif., this week taps into all of the above with the announcement that the company plans to open source its artificial intelligence (AI) hardware.

More about Facebook's infrastructure:

  • Facebook details two years of work to turn on default HTTPS
  • Facebook explains how 'TAO' serves social workloads, data requests
  • Facebook devs explain how it maps user connections to other 'entities'
  • Facebook translates natural language interface under Graph Search
  • Facebook engineers reveal how Parse fits into Platform, B2B strategies
  • Facebook releasing new Social Graph database benchmark: LinkBench
  • Facebook reveals the makings behind App Center recommendation engine
  • Understanding Unicorn: A deep dive into Facebook's Graph Search

Facebook engineers Kevin Lee and Serkan Piantino stressed in a blog post on Thursday that the open-sourced AI hardware, built from scratch, is both more efficient and more versatile than off-the-shelf options because the servers can operate within data centers based on Open Compute Project standards.

"While many high-performance computing systems require special cooling and other unique infrastructure to operate, we have optimized these new servers for thermal and power efficiency, allowing us to operate them even in our own free-air cooled, Open Compute standard data centers," Lee and Piantino explained.

Code-named "Big Sur" (not to be confused with Apple's pattern of naming OS releases after California landmarks), the next-generation hardware was designed for training neural networks.

Beyond the AI label, the technology is often referred to as, and closely tied to, machine learning or deep learning.

Chip maker Nvidia has also been pushing its own deep learning portfolio over the last year, and the two companies have teamed up on this project, which already involves a bevy of moving parts.

Facebook is being touted as the first company to adopt Nvidia's Tesla M40 GPU accelerator, which debuted last month. The high-powered M40 GPU, intended for deploying deep neural networks, is being framed as the linchpin powering the Big Sur platform and its Open Rack-compatible hardware.

With the M40 at the heart of the system, Facebook engineers boasted that Big Sur is twice as fast as the company's previous generation of hardware, allowing Facebook to train neural networks twice as fast and to explore networks twice as large.

Nvidia also highlighted that Big Sur will be the first computing system designed for machine learning and AI research to be released as open source, once Facebook submits the design materials to the Open Compute Project.

As Facebook's worldwide membership continues to grow (it already stood at 1.55 billion monthly active users as of September 30), the amount of data, and the insights to be gleaned from it, will swell as well.

The social media giant appears to be taking full advantage of this: Facebook's AI Research team (FAIR) plans to more than triple its investment in GPU hardware in order to extend machine learning techniques to more products across the company.

Image via Facebook
