xFS: Serverless Network File Service

Current distributed file systems are designed around a centralized server model. This model makes it easy to share data across a network of computers, but as more clients are added, the server's CPU quickly becomes a performance bottleneck. In response, faster (but more expensive) servers have been built, and recent file system designs have tried to relieve the server by delegating as much work as possible to the clients. Even in these systems, however, the speed of the server remains the limiting factor in file system scalability.

We are currently designing a serverless file system called xFS, which attempts to provide low-latency, high-bandwidth access to file system data by distributing the functionality of the server among the clients. The typical duties of a server include maintaining cache coherence, locating data, and servicing disk requests; in xFS, each of these is taken over by the clients themselves.
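
As a rough illustration of how one server duty, locating data, can be spread across the clients, here is a minimal sketch in C. The manager_of() function, the modulo assignment of files to clients, and the file indices are all hypothetical stand-ins, not the actual xFS data structures; the point is only that any given lookup is answered by some client rather than by a central server.

    #include <stdio.h>

    #define NUM_CLIENTS 4

    /* Hypothetical manager map: assign each file index to one client. */
    static int manager_of(unsigned int file_index)
    {
        return (int)(file_index % NUM_CLIENTS);
    }

    int main(void)
    {
        unsigned int files[] = { 3u, 10u, 21u, 42u };

        /* Each request is answered by some client, not a central server. */
        for (int i = 0; i < 4; i++)
            printf("file %u is managed by client %d\n",
                   files[i], manager_of(files[i]));
        return 0;
    }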

We are currently developing cache coherence protocols that use the collective memory of the clients as a system-wide cache. By reducing redundant caching among clients and putting the memory of idle machines to use, cooperative caching lowers read latency by reducing the number of requests that must go to disk.
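
The sketch below, in C, shows the read path that cooperative caching suggests: check this client's memory first, then a peer's memory, and only then go to disk. The names and structures (local_cache, remote_cache, read_block) are hypothetical; the real protocol must also keep the cached copies coherent.

    #include <stdio.h>
    #include <string.h>

    #define CACHE_SLOTS 4

    struct block { int id; int valid; char data[64]; };

    static struct block local_cache[CACHE_SLOTS];   /* this client's memory   */
    static struct block remote_cache[CACHE_SLOTS];  /* a peer client's memory */

    /* Read a block: local memory first, then a peer's memory, then disk. */
    static const char *read_block(int id)
    {
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (local_cache[i].valid && local_cache[i].id == id)
                return local_cache[i].data;      /* local hit: fastest        */

        for (int i = 0; i < CACHE_SLOTS; i++)
            if (remote_cache[i].valid && remote_cache[i].id == id)
                return remote_cache[i].data;     /* remote hit: one network
                                                    hop, but no disk access   */

        return "read from disk";                 /* miss everywhere: use disk */
    }

    int main(void)
    {
        remote_cache[0].id = 7;
        remote_cache[0].valid = 1;
        strcpy(remote_cache[0].data, "block 7 served from a peer's memory");

        printf("%s\n", read_block(7));   /* found in another client's cache */
        printf("%s\n", read_block(9));   /* cached nowhere: go to disk      */
        return 0;
    }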

The job of locating data in xFS is distributed by making each client responsible for servicing requests on a subset of the files. File data is striped across multiple clients to provide high bandwidth. The striped data includes parity information that can be used to reconstruct a stripe segment that goes missing because, for example, the machine holding it is down. In this way, no node is a single point of failure.
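
The following sketch illustrates the parity idea under a simple assumption: the parity fragment is the byte-wise XOR of the data fragments in a stripe, as in RAID-5, so any one missing fragment can be rebuilt from the survivors plus the parity. Fragment counts, sizes, and contents here are illustrative, not the actual xFS layout.

    #include <stdio.h>
    #include <string.h>

    #define FRAGS 3          /* data fragments per stripe, one per client */
    #define FRAG_SIZE 8

    /* Parity fragment = byte-wise XOR of all data fragments in the stripe. */
    static void compute_parity(unsigned char frag[FRAGS][FRAG_SIZE],
                               unsigned char parity[FRAG_SIZE])
    {
        memset(parity, 0, FRAG_SIZE);
        for (int f = 0; f < FRAGS; f++)
            for (int i = 0; i < FRAG_SIZE; i++)
                parity[i] ^= frag[f][i];
    }

    /* Rebuild one lost fragment from the surviving fragments plus parity. */
    static void reconstruct(unsigned char frag[FRAGS][FRAG_SIZE],
                            const unsigned char parity[FRAG_SIZE], int lost)
    {
        memcpy(frag[lost], parity, FRAG_SIZE);
        for (int f = 0; f < FRAGS; f++)
            if (f != lost)
                for (int i = 0; i < FRAG_SIZE; i++)
                    frag[lost][i] ^= frag[f][i];
    }

    int main(void)
    {
        unsigned char frag[FRAGS][FRAG_SIZE] = { "clientA", "clientB", "clientC" };
        unsigned char parity[FRAG_SIZE];

        compute_parity(frag, parity);

        memset(frag[1], 0, FRAG_SIZE);          /* the machine holding
                                                   fragment 1 is down   */
        reconstruct(frag, parity, 1);

        printf("recovered fragment: %s\n", (char *)frag[1]);   /* "clientB" */
        return 0;
    }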

Why "xFS" and what's with that logo


More Information About xFS

  • Papers
  • Code
  • Traces
  • Demo


People working on xFS

Faculty:

Students:

  • Mike Dahlin
  • Jeanna Neefe
  • Drew Roselli
  • Randy Wang

xFS student retreat photo

Technical Staff:

