Architecture Design of Global Distributed Storage System for Data Grid
Longbo Ran, Hai Jin, Zhiping Wang, Chen Huang, Yong Chen, and Yongjie Jia
Internet and Cluster Computing Center
Huazhong University of Science and Technology, Wuhan 430074, China
Email: [email protected]
Abstract
Data grids are becoming increasingly important for sharing, archiving, and disseminating large data collections. In this paper we describe the architecture of a global distributed storage system for data grids. We focus on management and on scaling to the maximum number of users and resources on the Internet, as well as on performance and other issues.
Keywords: Data grids, Match tree, Metadata, Name space
1. Introduction
Data-intensive, high-performance computing applications require the efficient management and transfer of terabytes or petabytes of information in wide-area, distributed computing environments [1][28]. Examples of data-intensive applications include experimental analyses and simulations in several scientific disciplines, such as high-energy physics, climate modeling, earthquake engineering, and astronomy [2][3]. These applications share several requirements. Massive data sets must be shared by a large community of hundreds or thousands of users distributed around the world. Data grids are becoming increasingly important for sharing, archiving, and disseminating such large data collections.
Research on massive storage systems has produced significant achievements. There are already a number of storage systems used by the grid community, each designed to satisfy specific needs and requirements for storing, transferring, and accessing large datasets. These include the Distributed Parallel Storage System (DPSS) and the High Performance Storage System (HPSS), which provide high-performance access to data and use parallel data transfer and/or striping across multiple servers to improve performance [4][28]. The Distributed File System (DFS) supports high-volume usage, dataset replication, and local caching. OceanStore is a global persistent data store designed to scale to billions of users; it provides a consistent, highly available, and durable storage utility atop an infrastructure comprised of untrusted servers [5]. GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks [6][8]. The Storage Resource Broker (SRB) connects heterogeneous data collections, provides a uniform client interface to storage repositories, and provides a metadata catalog for describing and locating data within the storage system [4]. Other systems allow clients to access structured data from a variety of underlying storage systems.
In this paper, we present a novel architecture for a global distributed storage system built atop SAN, NAS, or any other storage systems, called the Global Storage Provider (GSP). We provide a data management service in the data grid environment. Our purpose is to construct a distributed storage system with high scalability, security, and efficiency, which offers a high-quality storage service to millions of users over the Internet. The scalability and efficiency of the global name space and the metadata service are discussed in detail, in order to provide easy and efficient access to and sharing of files in the wide-area storage system.
We present a user- and group-based multi-namespace architecture, and develop a new approach to solve the bottleneck problem of the metadata server. A new component, called the Storage Service Provider (SSP), is introduced to supply storage service to users and acts as a user agent to the storage system. Data sharing and access control among different users and groups are achieved by combining user-based and role-based access control methods. To supply different QoS to different users, files can be replicated, clipped, and stored in different storage pools. A prototype has been developed that provides an extended FTP service to end users and some simple file APIs.
The rest of the paper is organized as follows. Section 2 describes the design principles of GSP. Section 3 details the architecture of GSP. Section 4 discusses work closely related to our project. Section 5 concludes with the current state of our project and future work.
2. Design Principles of Global Storage Provider
GSP is middleware that unifies heterogeneous storage resources to provide a huge pool of available storage for an enormous number of users. The system provides high availability, high scalability, and high speed.
2.1 GSP Interfaces
In the global data grid environment, many kinds of storage resources exist on different platforms. A uniform interface must be provided so that users can access different resources transparently.
In order to meet the requirements of different applications, three kinds of interfaces are needed. First, a standard FTP interface is needed, because the most common way to access storage resources over the network is still FTP; this is why GridFTP chose FTP as its base protocol [7]. Second, a file-system-like interface is needed, because in many circumstances a file system interface makes applications easy and convenient to develop. Third, a parallel file interface is also needed, as many high-performance computing applications require one.
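The parallel file interface can be illustrated with a small sketch: a file striped round-robin across several servers is read back by fetching stripes from each server in turn and reassembling them. The stripe size and the in-memory "servers" below are illustrative assumptions, not part of GSP:

```python
STRIPE = 4  # bytes per stripe in this toy example

def stripe_file(data, n_servers):
    """Distribute a byte string round-robin across n_servers."""
    servers = [bytearray() for _ in range(n_servers)]
    for i in range(0, len(data), STRIPE):
        servers[(i // STRIPE) % n_servers] += data[i:i + STRIPE]
    return [bytes(s) for s in servers]

def parallel_read(servers):
    """Reassemble the original file by pulling stripes from each
    server in the same round-robin order they were written."""
    out = bytearray()
    cursors = [0] * len(servers)
    k = 0
    while any(cursors[j] < len(servers[j]) for j in range(len(servers))):
        j = k % len(servers)
        out += servers[j][cursors[j]:cursors[j] + STRIPE]
        cursors[j] += STRIPE
        k += 1
    return bytes(out)

data = b"The quick brown fox jumps over the lazy dog"
assert parallel_read(stripe_file(data, 3)) == data
```

In a real system each server's stripes would be fetched over concurrent network streams; here the reassembly logic is the point.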
2.2 Metadata Server of GSP
For a global storage system, the huge storage resource must be managed effectively [9][13]. In our system, a directory-based metadata server (MS) is adopted to store the metadata. The MS contains important metadata such as the file logical view, data locations, file slices, file copies, and file content abstracts. Its search efficiency affects the efficiency of the whole system, and the MS easily becomes the bottleneck of the whole system with respect to scalability and availability. As resources increase, the information about files and directories becomes enormous, so a good approach to storing and searching the metadata efficiently is needed.
In many systems, such as SRB [7] or GridFTP [6], a hierarchical directory structure is adopted. Generally, when the metadata becomes enormous, the system employs several metadata servers. Such distributed directory servers have several limitations. First, the logical tree must be maintained across the directory servers. Second, the directory servers must cooperate, and results are returned through the root node, which overloads the root server. Third, when the root server is out of service, the whole metadata service is out of service too, so it is difficult to provide high availability. Last, it is difficult to
Figure 1 Example of Metadata Server Logical Structure
In this paper, we propose a structure called the match tree. Figure 1 shows the logical structure of an MS stored on four directory servers, representing the file structure of a user or a group. For example, to access the file \root\soft\sys\net\3com\switch\readme, the lookup would trace from DS1 to DS3, then reach DS4, and finally return the metadata from DS1 to the user.
The match tree is kept in the memory of the scheduler. Figure 2 shows the match tree corresponding to Fig.1. The match tree is a condensed tree indicating which directory server stores each item; with it, the scheduler can find the directory server storing the needed metadata directly. For example, when a user wants to access the file \root\soft\sys\net\3com\switch\readme, the scheduler first looks up the match tree and makes the furthest (longest-prefix) match; it quickly finds that the file is stored on directory server DS4 and sends the request to DS4 directly.
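The furthest-match lookup can be sketched as a longest-prefix search over a trie whose nodes record which directory server owns each subtree. The node layout, server names, and sample tree below are illustrative assumptions based on the Fig.1/Fig.2 example, not the actual GSP implementation:

```python
class MatchTreeNode:
    """One node of the match tree: a path component mapped to the
    directory server responsible for that subtree."""
    def __init__(self, server):
        self.server = server      # directory server holding this subtree
        self.children = {}        # component name -> MatchTreeNode

def build_sample_tree():
    # Mirrors the Fig.1/Fig.2 example: the subtree under
    # \root\soft\sys\net lives on DS4.
    root = MatchTreeNode("DS1")
    soft = root.children.setdefault("soft", MatchTreeNode("DS1"))
    sys_ = soft.children.setdefault("sys", MatchTreeNode("DS3"))
    sys_.children.setdefault("net", MatchTreeNode("DS4"))
    return root

def furthest_match(tree, path):
    """Walk the match tree as far as the path allows and return the
    directory server of the deepest matching prefix."""
    node = tree
    for part in path.strip("\\").split("\\")[1:]:   # skip the leading 'root'
        child = node.children.get(part)
        if child is None:
            break
        node = child
    return node.server

print(furthest_match(build_sample_tree(),
                     r"\root\soft\sys\net\3com\switch\readme"))  # DS4
```

Because the lookup needs only the in-memory trie, the scheduler can route a request to the right directory server without touching any intermediate server.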
Figure 2 Example of Corresponding Match Tree of Fig.1
In order to search efficiently, every directory server must keep the logical structure itself. For example, directory server 4 (DS4) still keeps an item for root even though it has no content. When a search reaches DS4, it can be looked up directly without any change to the request. Such empty items need very little storage space and incur little coherence-maintenance cost.
2.3 Data Transfer and Availability of GSP
There is much research on data transfer, especially bulk data transfer; commonly used techniques include [6][8][13][15]:
● Third-party control of data transfer
● Parallel data transfer or multi-stream transfer
● Striped data transfer
● Partial file transfer
● Reliable data transfer
● Automatic negotiation of TCP buffer/window sizes
● Automatic retry
GridFTP supports almost all the methods listed above, and BBFTP focuses on bulk data transfer [15].
We focus on providing flexible methods to deal with different data sizes. We find that data size directly affects transfer efficiency. Besides, different availability requirements and access frequencies also affect the choice of transfer method. To obtain better transfer speed for different file sizes or different user requirements, different TCP buffer/window sizes and different numbers of streams and stripes are needed.
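The idea of adapting transfer parameters to file size can be sketched as a simple policy function. The thresholds and values below are invented for illustration; in practice they would be tuned from measurements, not hard-coded:

```python
def transfer_params(file_size, high_availability=False):
    """Pick (tcp_buffer_bytes, n_streams) from file size and requirements.

    All thresholds here are illustrative assumptions, not GSP's actual
    tuning policy.
    """
    if file_size < 1 << 20:          # < 1 MB: one stream, small buffer
        buf, streams = 64 * 1024, 1
    elif file_size < 1 << 30:        # < 1 GB: a few parallel streams
        buf, streams = 256 * 1024, 4
    else:                            # bulk data: many streams, big buffers
        buf, streams = 1024 * 1024, 16
    if high_availability:
        streams += 1                 # e.g. one extra stream for retry slack
    return buf, streams

print(transfer_params(10 << 30))     # 10 GiB file -> (1048576, 16)
```

A real implementation would also fold in measured round-trip time and access frequency, as the text notes.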
To achieve high availability, the system must first guarantee the availability of the metadata. In global storage systems, replication is generally used to guarantee the availability of the data. In GSP we focus on how many slices or copies to use and how to store them on different storage devices.
2.4 Multiple Name Space
Traditional file systems such as FAT32 usually present a tree-like global name space to all users, which only suits systems with very few users and resources. Our GSP system may have millions of users and billions of files, so, in order to provide users with both complex data sharing and efficient information navigation, we propose a user- and group-based multiple-name-space model to organize massive information.
Each user registered in our system is provided with an independent name space, which is invisible to other users. At the same time, each group presents a name space used to organize and share data among a specific group of users. A user can apply to join a group and access the data stored in it. All name spaces are described by two kinds of metadata: one is the user's visible name spaces, which contain the user's own name space and all the group name spaces the user has registered with; the other is the user's invisible name spaces, which contain other users' name spaces and the name spaces of all other groups. A group can advertise its information at the CA to specify which users can register with it and which groups can share it.
2.5 Sharing Mechanism
A global storage system needs an efficient sharing mechanism so that millions of users and groups can share resources conveniently. To reduce the metadata sharing unit, the directory is used as the unit of sharing. The system supports share inheritance: for example, if a user A shares a directory with another user B, user B can also share the directory with a user C, and the sharing properties can be inherited or redefined as long as they do not exceed B's authority.
The data sharing in our system is divided into four kinds: user to user, user to group, group to user, and group to group. We use two different mechanisms to meet the need of both efficiency and complex data sharing.
The first mechanism is directory-level access control. Each directory in a group name space has two access control lists (ACLs): a user access control list and a group access control list. Users in a group are divided into several classes, such as administrators, normal users, and limited users. Each class has a basic privilege; furthermore, a privilege can be set for a specific class on a directory and/or for a specific single user.
A group can share its whole name space with a specific user class of other groups at some default privilege level. At the same time, a privilege can be set for a group on each directory, stored in the directory's group access control list. All users belonging to that class can then access the name space of the shared group.
Directory-level access control is only used in the name spaces of groups, where complex data sharing is needed. Data sharing among user name spaces, and from a user name space to a group name space, is simpler; there is no need to bind an access control list to each directory. Each user name space has an in-sharing directory list and an out-sharing directory list. If a user wants to share one of his directories with other users or groups, he first registers the sharing information in his out-sharing directory list, and the directory sharing information is later sent to the in-sharing list of the destination user or group.
The directory sharing information contains the path of the shared directory, the destination user and group list, the corresponding ACL, and so on. By using both directory-level access control and directory sharing, we achieve both complex data sharing and efficiency.
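The directory-level check described above can be sketched as follows: a directory carries a per-user ACL and a per-class ACL, and a specific user entry overrides the class default. The names and privilege labels are invented for illustration, not taken from GSP:

```python
class Directory:
    """A directory in a group name space, carrying the two ACLs
    described in the text."""
    def __init__(self):
        self.user_acl = {}    # user name  -> privilege ("read", "write", ...)
        self.class_acl = {}   # user class -> default privilege for the class

def privilege(directory, user, user_class):
    """Resolve a user's privilege on a directory: a per-user entry
    takes precedence over the class default."""
    if user in directory.user_acl:
        return directory.user_acl[user]
    return directory.class_acl.get(user_class, "none")

d = Directory()
d.class_acl["normal"] = "read"    # all normal users may read
d.user_acl["alice"] = "write"     # but alice specifically may write

print(privilege(d, "alice", "normal"))   # write
print(privilege(d, "bob", "normal"))     # read
print(privilege(d, "eve", "limited"))    # none
```

Group-to-group sharing would add one more lookup of the same shape against the directory's group access control list.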
A user's view of the name space is illustrated as follows:

Home-----------ROOT
    |_______IN_SHARE_ROOT
    |_______OUT_SHARE_ROOT
Group Name-----ROOT
    |_______IN_SHARE_ROOT
    |_______OUT_SHARE_ROOT
    |_______SHARE_GROUP_NAME
    |_______SHARE_GROUP_NAME
    |_______...
Other groups...

2.6 Security of GSP
In GSP, the basic security infrastructure provides:
● Secure communication. Mutual authentication happens before data transfer. During data transfer the data can be encrypted and its integrity can be guaranteed.
● Security across organizational boundaries. There may be many security domains, and all the domains can coordinate to provide a distributed, manageable security system.
● Single sign-on. To support mobile users, the system provides single sign-on, so a user can access the system anywhere via any SSP.
● User-defined security class. A user can define the security class to reduce unnecessary overhead.
3. Global Storage Provider Architecture
3.1 Architecture of GSP
GSP is middleware that unifies heterogeneous storage resources. Through the system, millions of clients can get high-quality storage services. The system is composed of the Certificate Authority (CA), Global Naming Server (GNS), Storage Service Provider (SSP), and Agent, as shown in Figure 3.
Figure 3 Global Storage Provider Architecture
The CA stores user information such as user name, user ID, password, priority, and group. When a user wants to access the system, the CA first verifies the user name and password to establish the user's authority. In the global environment there are many autonomous groups, each with different priorities, and the system provides a method to unify these priorities. The CA also contains the users' match tables, which help the scheduler build up the match tree. As the number of users increases, CAs are also distributed.
SSP is the key component in this architecture, it contains:
● Several popular user interfaces
● Name server for fast locating and load balancer
● Metadata cache, and cooperative cache among all the SSPs
● Search engine
The SSP provides a metadata cache that speeds up metadata search. It also provides a limited cooperative cache among several SSPs; there is a tradeoff between efficiency and consistency-maintenance cost.
Except for the cooperative cache, each SSP is independent, which makes it easy for an SSP to join or leave the system dynamically. It is also easy to add SSPs to meet increasing load.
The GNS is composed of directory servers. For high search speed, the GNS is a set of distributed directory servers; details are described in Section 2.2.
The Agent provides uniform access methods for the SSPs and the users, which makes all kinds of file systems transparent to users. The Agent also provides data transfer control to support transparent, high-speed, and highly available data transfer. More details can be found in Section 2.3.
3.2 Operation of GSP
Figure 4 shows the process of reading a file through the APIs of the system.
1) A client sends a connect command to an SSP with user name, group name, and password;
2) The SSP passes the connect request to a CA, and the CA verifies the information. If the current CA cannot verify this user, it sends the verification request to another CA. For a legal user, the user information table and match table are returned;
3) The user sends the read command with the file names to the SSP. The SSP looks up the match tree and finds the meta-servers keeping the metadata of the files. The SSP sends the file names to the corresponding meta-server and returns the metadata to the client;
4) With the meta-data the client can easily get data from Agents.
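The four steps above can be sketched as plain function calls. Every component here is an in-memory stand-in, and all names (`connect`, `lookup_metadata`, the sample tables) are illustrative assumptions, not the real GSP API:

```python
# In-memory stand-ins for the CA, match table, meta-servers, and Agents.
CA_DB = {("alice", "grp1"): "secret"}                  # (user, group) -> password
MATCH_TABLE = {r"\root\soft": "MS2", r"\root": "MS1"}  # path prefix -> meta-server
META = {"MS2": {r"\root\soft\readme": ["agent7:block0", "agent7:block1"]}}
AGENTS = {"agent7:block0": b"hello ", "agent7:block1": b"world"}

def connect(user, group, password):
    """Steps 1-2: the SSP forwards credentials to the CA; a legal user
    gets back the match table."""
    if CA_DB.get((user, group)) != password:
        raise PermissionError("verification failed")
    return MATCH_TABLE

def lookup_metadata(match_table, path):
    """Step 3: furthest match against the table, then ask that
    meta-server for the file's block locations."""
    prefix = max((p for p in match_table if path.startswith(p)), key=len)
    return META[match_table[prefix]][path]

def read_file(user, group, password, path):
    """Step 4: fetch each block from its Agent and concatenate."""
    table = connect(user, group, password)
    return b"".join(AGENTS[loc] for loc in lookup_metadata(table, path))

print(read_file("alice", "grp1", "secret", r"\root\soft\readme"))  # b'hello world'
```

The point of the flow is that the SSP only handles metadata; the data blocks themselves move directly between Agents and the client.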
In some cases, a user has no special client that can use the API defined by the system. When a user accesses the system through a standard FTP or HTTP client, the operation procedure differs from the above, as shown in Figure 5; the SSP does much of the work for the client. Note that when a file is divided into slices, the slices must be merged by the SSP, so the data transfer goes through the SSP. Otherwise, third-party transfer is adopted, shown as a dotted line in Fig.5.
Figure 4 Read Operation through API of GSP
Figure 5 Read Operation through General Interface of GSP
4. Related Works
There are many efforts on mass storage or global storage [28], such as DPSS, SRB, OceanStore, GridFTP.
Distributed Parallel Storage System (DPSS) is a scalable, high-performance, distributed-parallel data storage system originally developed as part of the DARPA-funded MAGIC Testbed, with additional support from the U.S. Dept. of Energy. DPSS is a data block server, which provides high-performance data handling and architecture for building high-performance storage systems from low-cost commodity hardware components. This technology has been quite successful in providing an economical, high-performance, widely distributed, and highly scalable architecture for caching large amounts of data that can potentially be used by many different users [4].
SDSC Storage Resource Broker (SRB) is a client-server middleware that provides a uniform interface for connecting to heterogeneous data resources over a network and accessing replicated data sets. In conjunction with the Metadata Catalog, SRB provides a way to access data sets and resources based on their attributes rather than their names or physical locations [7].
Projects such as OceanStore use a peer-to-peer infrastructure. OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly available, and durable storage utility atop an infrastructure comprised of untrusted servers. Any computer can join the infrastructure, contributing storage or providing local user access in exchange for economic compensation. Users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. The providers automatically buy and sell capacity and coverage among themselves, transparently to the users. The utility model thus combines the resources from federated systems to provide a quality of service higher than that achievable by any single company. OceanStore employs a Byzantine fault-tolerant commit protocol to provide strong consistency across replicas [24].
GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks. The GridFTP protocol is based on FTP. It provides a universal grid data transfer and access protocol for secure, efficient data movement in grid environments. GridFTP extends the standard FTP protocol, and provides a superset of the features offered by the various grid storage systems currently in use [8][25].
5. Conclusions and Future Work
GSP provides a novel architecture of a global distributed storage system with high scalability, high security and high efficiency.
There are still many interesting issues to be studied further in GSP, such as:
● how to cooperate with the Agent to cache data;
● how to design a highly secure protocol for a global file system;
● how to extend the resources managed from storage to other resources, such as databases.
References
[1] The DataGrid Project: http://www.cern.ch/grid/
[2] The GriPhyN Project, http://www.griphyn.org
[3] HDF, http://hdf.ncsa.uiuc.edu/
[4] “Basics of the High Performance Storage System”, http://www.sdsc.edu/projects/HPSS
[5] OceanStore project, http://oceanstore.cs.berkeley.edu/
[6] W. Allcock, J. Bester, J. Bresnahan, A. Chervenak, L. Liming, S. Meder, and S. Tuecke,
“GridFTP Protocol Specification”, GGF GridFTP Working Group Document, September 2002.
[7] The Storage Resource Broker Project, http://www.npaci.edu/DICE/SRB/
[8] The Globus Project - White Paper, “GridFTP: Universal Data Transfer for the Grid”,
http://www.globus.org/
[9] C. Baru, R. Moore, A. Rajasekar, and M. Wan, “The SDSC Storage Resource Broker”,
Proc. CASCON'98 Conference, Nov.30-Dec.3, 1998, Toronto, Canada.
[10] E. Deelman, K. Blackburn, P. Ehrens, C. Kesselman, S. Koranda, A. Lazzarini, G. Mehta,
L. Meshkat, L. Pearlman, K. Blackburn, and R. Williams, “GriPhyN and LIGO: Building a Virtual Data Grid for Gravitational Wave Scientists”, Proceedings of 11th Intl Symposium on High Performance Distributed Computing, 2002.
[11] W. Hoschek, J. Jaen-Martinez, A. Samar, H. Stockinger, and K. Stockinger, “Data
Management in an International Grid Project”, Proceedings of 2000 International Workshop on Grid Computing (GRID 2000), Bangalore, India, December 2000.
[12] B. Allcock, J. Bester, J. Bresnahan, A. L. Chervenak, I. Foster, C. Kesselman, S. Meder,
V. Nefedova, D. Quesnal, and S. Tuecke. “Data Management and Transfer in High Performance Computational Grid Environments”, Parallel Computing Journal, Vol.28, No.5, May 2002, pp.749-771.
[13] E. Deelman, I. Foster, C. Kesselman, and M. Livny, “Representing Virtual Data: A
Metadata Architecture for Location and Materialization Transparency”, Technical Report GriPhyN-2001-14, 2001.
[14] A. Chervenak, I. Foster, C. Kesselman, C. Salisbury, and S. Tuecke, “The Data Grid:
Towards an Architecture for the Distributed Management and Analysis of Large Scientific Datasets,” Journal of Network and Computer Applications.
[15] BBFTP Project, http://doc.in2p3.fr/bbftp/
[16] O. F. Rana and D. W. Walker, “The Agent Grid: Agent-Based Resource Integration in
PSEs”, Proceedings of 16th IMACS World Congress on Scientific Computation, Applied Mathematics and Simulation, Special session on Problem Solving Environments, Lausanne, Switzerland, August 2000
[17] S. Vazhkudai and J. Schopf, “Using Disk Throughput Data in Predictions of End-to-End
Grid Transfers”, Proceedings of the 3rd International Workshop on Grid Computing
(GRID 2002), Baltimore, MD, November 2002.
[18] K. Ranganathan and I. Foster, “Decoupling Computation and Data Scheduling in
Distributed Data-Intensive Applications”, Proceedings of 11th IEEE International Symposium on High Performance Distributed Computing (HPDC-11), Edinburgh, Scotland, July 2002.
[19] K. Ranganathan, A. Iamnitchi, and I. Foster, “Improving Data Availability through
Dynamic Model-Driven Replication in Large Peer-to-Peer Communities”, Proceedings of Global and Peer-to-Peer Computing on Large Scale Distributed Systems Workshop, Berlin, Germany, May 2002.
[20] I. Foster, J. Voeckler, M. Wilde, and Y. Zhao, “Chimera: A Virtual Data System for
Representing, Querying and Automating Data Derivation”, Proceedings of the 14th Conference on Scientific and Statistical Database Management, Edinburgh, Scotland, July 2002.
[21] J. M. Schopf and S. Vazhkudai, “Predicting Sporadic Grid Data Transfers”, Proceedings
of 11th IEEE International Symposium on High-Performance Distributed Computing (HPDC-11), Edinburgh, Scotland, July 2002.
[22] J. Bester, I. Foster, C. Kesselman, J. Tedesco, and S. Tuecke, “GASS: A Data Movement
and Access Service for Wide Area Computing Systems”, Proceedings of Sixth Workshop on I/O in Parallel and Distributed Systems, May 1999.
[23] C. Baru, “Managing Very Large Scientific Data Collections”, Proceedings of 5th
International Conference on High Performance Computing (HiPC'98), Dec. 1998, Chennai, India.
[24] S. Rhea, P. Eaton, D. Geels, H. Weatherspoon, B. Zhao, and J. Kubiatowicz, “Pond: the
OceanStore Prototype”, Proceedings of the 2nd USENIX Conference on File and Storage Technologies (FAST '03), March 2003
[25] B. Allcock, J. Bester, J. Bresnahan, A. L. Chervenak, I. Foster, C. Kesselman, S. Meder,
V. Nefedova, D. Quesnel, and S. Tuecke, “Secure, Efficient Data Transport and Replica Management for High-Performance Data-Intensive Computing”, Proceedings of IEEE Mass Storage Conference, April 2001.
[26] A. S. Szalay, P. Z. Kunszt, A. Thakar, J. Gray, D. Slutz, and R. J. Brunner, “Designing
and Mining Multi-Terabyte Astronomy Archives: The Sloan Digital Sky Survey”, SIGMOD Record, Vol. 29, pp.451-462, 2000.
[27] B. Tierney, J. Lee, B. Crowley, M. Holding, J. Hylton, and F. Drake, “A Network-Aware
Distributed Storage Cache for Data Intensive Environments”, Proceedings of IEEE High Performance Distributed Computing conference (HPDC-8), August 1999.
[28] H. Jin, T. Cortés, and R. Buyya, High Performance Mass Storage and Parallel I/O, IEEE
Press, John Wiley & Sons, Inc., 2002.
Architecture Design of Global Distributed Storage System for Data Grid
Longbo Ran, Hai Jin, Zhiping Wang, Chen Huang, Yong Chen, and Yongjie Jia
Internet and Cluster Computing Center
Huazhong University of Science and Technology, Wuhan 430074, China
Email: [email protected]
Abstract
Data grids are becoming increasingly important for sharing large data collections, archiving and disseminating. In this paper we describe architecture of global distributed storage system for data grid. We focus on the management and the capability for the maximum users and maximum resources on the Internet, as well as performance and other issues.
Keywords: Data grids, Match tree, Metadata, Name space
1. Introduction
Data-intensive, high-performance computing applications require the efficient management and transfer of terabytes or petabytes of information in wide-area, distributed computing environments [1][28]. Examples of data-intensive applications include experimental analyses and simulations in several scientific disciplines, such as high-energy physics, climate modeling, earthquake engineering and astronomy [2][3]. These applications share several requirements. Massive data sets must be shared by a large community of hundreds or thousands of users distributed around the world. Data grids are becoming increasingly important for sharing large data collections, archiving and disseminating.
Researches on massive storage system have gained significant achievements. There are already a number of storage systems used by the grid community, each of which was designed to satisfy specific needs and requirements for storing, transferring and accessing large datasets. These include Distributed Parallel Storage System (DPSS) and High Performance Storage System (HPSS), which provide high performance access to data and utilize parallel data transfer and/or striping across multiple servers to improve performance
[4][28]. Distributed File System (DFS) supports high-volume usage, dataset replication and local caching. OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, high available, and durable storage utility atop an infrastructure comprised of untrusted servers [5]. GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks [6][8]. Storage Resource Broker (SRB) connects heterogeneous data collections, provides a uniform client interface to
storage repositories, and provides a metadata catalog for describing and locating data within the storage system [4]. Other systems allow clients to access structured data from a variety of underlying storage systems.
In this paper, we present a novel architecture of a global distributed storage system built atop SAN, NAS, or any other storage systems, called Global Storage Provider (GSP). We provide a data management service in the data grid environment. Our purpose is to construct a distribute storage system with high scalability, high security, high efficiency, which offers a high quality storage service to millions of users over Internet. The scalability and efficiency of global name space and the meta-data service is discussed in detail in order to provide easy and efficient access or share of files to the wide area storage system.
We give a user and group-based multi-namespace architecture, and develop a new approach to solve the bottleneck problem of metadata server. A new component, called Storage Service Provider (SSP), is introduced to supply storage service to users, and plays as a user agent to the storage system. The data sharing and access control among different users and groups are completed by the combination of user-based access control methods and role-based access control methods. To supply different QoS to different users, files can be replicated, clipped, and stored in different storage pools. A prototype has been developed to provide extended ftp service to end users and some simple file APIs.
The rest of the paper is organized as follows. Section 2 describes the design principles of GSP. Section 3 details the architecture of GSP. Section 4 discusses the relate works closely to our project. Section 5 ends with the current state of our project and the future work.
2. Design Principles of Global Storage Provider
GSP is a middleware to unify heterogeneous storage resources to provide huge available storage resources for enormous users. The system provides high availability, high expansibility and high speed.
2.1 GSP Interfaces
In the global data grid environment there are many kinds of storage resources existing on different platforms. Uniform interface for the users access different resources transparently must be provided.
In order to meet the requirements of different applications, three kinds of interfaces are needed. First, standard FTP interface is needed, because most general methods for access storage resources through network is still by using FTP. This is why GridFTP chooses FTP protocol as the basic protocol [7]. Second, interface like file system is needed. This is mainly because for many special circumstances, file system interface makes it easy and convenient to develop the applications. Third, parallel file interface is also needed, as many high performance computing applications need parallel file interface.
2.2 Metadata Server of GSP
For a global storage system the huge storage resource must be managed effectively
[9][13]. In our system directory-based metadata server (MS) is adopted to store the metadata. MS contains many important metadata such as file logical view, data location, file slices, file copies, file content abstract. The search efficiency affects the whole efficiency of the system, and MS easily becomes the bottleneck of the whole system on expansibility and availability. With the resources increasing in the system, the information of the files and the directories become enormous, a good approach to store and search the metadata efficiently is needed.
In many systems, such as SRB [7] or GridFTP [6], hierarchical directory structure is adopted. Generally when the metadata becomes enormous, the system employs several metadata servers. There are several limitations for directory servers. One is that it must keep the logic tree among the directory servers; the other is that the directory servers must cooperate and the result will return from the root node that adds the overload to the root server; the third is that when the root server is out of service the whole meta server will out of service too, so it is difficult to provide high availability; and the last is that it is difficult to
Figure 1 Example of Metadata Server Logical Structure
In this paper, we bring out an algorithm called match tree. Figure 1 is a logical structure of a MS stored on 4 directory servers, showing the file structure of a user or a group. For example, if we want to access a file \root\soft\sys\net\3com\switch\readme, the process will trace from DS1 to DS3, and then reach DS4, finally return the metadata from DS1 to user.
The match tree is kept in the memory of the scheduler. Figure 2 shows the match tree corresponding to Fig.1. The match tree is a condensed tree indicating which directory server stores each item. With the match tree the scheduler can find the directory server storing the needed metadata directly. For example, when a user wants to access the file \root\soft\sys\net\3com\switch\readme, the scheduler first looks up the match tree and makes the furthest match; it quickly finds that the file is stored on directory server DS4 and sends the request to DS4 directly.
Figure 2 Example of Corresponding Match Tree of Fig.1
In order to search efficiently, every directory server must keep the logical structure itself. For example, directory server 4 (DS4) still has an item root that has no content. When a search reaches DS4, the lookup can proceed directly without any change to the request. Such an empty item needs only a very little storage space and a small coherence maintenance cost.
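As a minimal sketch of the match tree idea (node layout, API names, and the server assignments below are our assumptions, not the paper's exact structures), the scheduler's lookup is a furthest (longest-prefix) match over path components:

```python
class MatchNode:
    def __init__(self):
        self.server = None    # directory server labelled on this node, if any
        self.children = {}    # path component -> MatchNode

def insert(root, path, server):
    """Record that the subtree rooted at `path` is stored on `server`."""
    node = root
    for part in path.strip("\\").split("\\"):
        node = node.children.setdefault(part, MatchNode())
    node.server = server

def lookup(root, path):
    """Furthest match: return the server of the deepest labelled node."""
    node, best = root, None
    for part in path.strip("\\").split("\\"):
        node = node.children.get(part)
        if node is None:
            break
        if node.server is not None:
            best = node.server
    return best

# Rebuild the example of Fig.1/Fig.2 (server assignments assumed):
tree = MatchNode()
insert(tree, r"\root", "DS1")
insert(tree, r"\root\soft\sys", "DS3")
insert(tree, r"\root\soft\sys\net\3com", "DS4")
lookup(tree, r"\root\soft\sys\net\3com\switch\readme")   # -> "DS4"
```

Because the scheduler resolves the furthest match locally, the request goes straight to DS4 instead of being relayed through DS1 and DS3.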
2.3 Data Transfer and Availability of GSP
There has been much research on data transfer, especially bulk data transfer, such as [6][8][13][15]:
● Third-party control of data transfer
● Parallel data transfer or multi-stream transfer
● Striped data transfer
● Partial file transfer
● Reliable data transfer
● Automatic negotiation of TCP buffer/window sizes
● Automatic retry
GridFTP implements almost all the methods listed above, and BBFTP focuses on bulk data transfer [15].
We focus on providing a flexible method to deal with different data sizes. We find that data size directly affects transfer efficiency. Besides, different availability requirements and access frequencies also affect the choice of transfer method. To achieve better transfer speed for different file sizes or different user requirements, different TCP buffer/window sizes and different numbers of streams and stripes are needed.
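A policy of this kind could be sketched as follows; the thresholds, buffer sizes, and stream counts are illustrative assumptions, not measured GSP parameters:

```python
def transfer_plan(size_bytes, high_availability=False):
    """Pick transfer parameters from file size (all values are assumed)."""
    MB = 1 << 20
    if size_bytes < 1 * MB:          # small file: one stream is cheapest
        plan = {"streams": 1, "tcp_buffer": 64 * 1024, "stripes": 1}
    elif size_bytes < 100 * MB:      # medium file: a few parallel streams
        plan = {"streams": 4, "tcp_buffer": 256 * 1024, "stripes": 2}
    else:                            # bulk data: many streams, striped servers
        plan = {"streams": 8, "tcp_buffer": 1 * MB, "stripes": 4}
    if high_availability:            # critical data: automatic retry as well
        plan["retries"] = 3
    return plan
```

In practice the SSP would refine such a table with the user's stated availability requirement and observed access frequency.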
To achieve high availability, the system must guarantee the availability of the metadata. In a global storage system, replication is generally used to guarantee the availability of the data. In GSP we focus on how many slices or copies to use and how to store them on different storage devices.
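As a back-of-envelope illustration of the copies-versus-availability tradeoff (this formula is our own simplification, not one given by GSP): assuming independent storage devices each available with probability p, k full copies are jointly available with probability 1 - (1 - p)^k, so the smallest sufficient k follows directly:

```python
import math

def copies_needed(p_node, target):
    """Smallest replica count k with 1 - (1 - p_node) ** k >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_node))

copies_needed(0.99, 0.999)   # -> 2
```

Slicing changes the arithmetic (every slice must be reachable), which is one reason the choice between slices and copies matters.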
2.4 Multiple Name Space
Traditional file systems such as FAT32 usually present a tree-like global name space for all users, which only suits systems with very few users and resources. Our GSP system may have millions of users and billions of files; to provide users with both complex data sharing and efficient information navigation, we propose a user-and-group-based multiple name space model to organize massive information.
Each user registered in our system is provided an independent name space, which is invisible to other users. At the same time, a group presents a name space used to organize and share data among a specific group of users. A user can apply to join a group and access the data stored there. All the name spaces are described by two pieces of metadata: one is the user's visible name spaces, which contain the user's own name space and all the group name spaces the user has registered in; the other is the user's invisible name spaces, which contain other users' name spaces and the name spaces of all other groups. A group can advertise its information at the CA to specify which users can register to it and which groups can share it.
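The visible/invisible split can be sketched as below; the class and method names are our own illustration, not the GSP metadata schema:

```python
class Group:
    def __init__(self, name):
        self.name = name
        self.members = set()

class User:
    def __init__(self, name):
        self.name = name
        self.groups = set()                  # groups this user has joined

    def join(self, group):
        group.members.add(self.name)
        self.groups.add(group)

    def visible_name_spaces(self):
        """The user's own name space plus every joined group's name space."""
        return {self.name} | {g.name for g in self.groups}

    def invisible_name_spaces(self, all_user_names, all_groups):
        """Other users' name spaces and the name spaces of unjoined groups."""
        return ({u for u in all_user_names if u != self.name}
                | {g.name for g in all_groups if g not in self.groups})

alice = User("alice")
g1, g2 = Group("g1"), Group("g2")
alice.join(g1)                               # alice registers in g1 only
```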
2.5 Sharing Mechanism
As a global storage system, GSP needs an efficient sharing mechanism so that millions of users or groups can share resources conveniently. To reduce the sharing unit of metadata, the directory is used as the unit in the sharing mechanism. The system supports share inheritance. For example, if a user A shares a directory with another user B, user B can also share the directory with a user C, and the sharing property can be inherited or redefined as long as it does not exceed B's own authority.
Data sharing in our system is divided into four kinds: user to user, user to group, group to user, and group to group. We use two different mechanisms to meet the needs of both efficiency and complex data sharing.
The first mechanism is directory-level access control. Each directory in a group name space has two access control lists (ACLs): a user access control list and a group access control list. Users in a group are divided into several classes, such as administrators, normal users, and limited users. Each class has a basic privilege. Furthermore, a privilege can be set for a specific class on a directory and/or for a specific single user.
A group can share its whole name space with a specific user class of other groups at some default privilege level. At the same time, a privilege can be set for a group on each directory, stored in the directory's group access control list. All users belonging to that class can access the name space of the shared group.
Directory-level access control is used only in the name spaces of groups, where complex data sharing is needed. Data sharing among user name spaces, and from a user name space to a group name space, is simpler, so there is no need to bind an access control list to each directory. Each user name space has an in-sharing directory list and an out-sharing directory list. If a user wants to share one of his directories with other users or groups, he first registers the sharing information in his out-sharing directory list; the directory sharing information is then sent to the in-sharing list of the destination user or group.
The directory sharing information contains the path of the shared directory, the destination user and group list, the corresponding ACL, and so on. By using both directory-level access control and directory sharing, we achieve both complex data sharing and efficiency.
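A hedged sketch of the two pieces just described: per-directory class defaults with per-user overrides, and share inheritance capped by the grantor's own authority (the privilege names and levels are illustrative assumptions):

```python
READ, WRITE, ADMIN = 1, 2, 3                 # assumed ordered privilege levels

class DirectoryACL:
    """Per-directory ACL: class defaults plus per-user overrides."""
    def __init__(self, class_default):
        self.class_default = class_default   # user class -> privilege
        self.user_entries = {}               # user name  -> privilege

    def privilege(self, user, user_class):
        if user in self.user_entries:        # a per-user entry takes precedence
            return self.user_entries[user]
        return self.class_default.get(user_class, 0)

def reshare(grantor_privilege, requested_privilege):
    """Share inheritance: B's re-share to C never exceeds B's own authority."""
    return min(grantor_privilege, requested_privilege)

acl = DirectoryACL({"administrators": ADMIN, "normal": READ})
acl.user_entries["bob"] = WRITE              # explicit per-user grant
```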
A user's view of the name space is illustrated as follows:

Home-----------ROOT
                 |_______IN_SHARE_ROOT
                 |_______OUT_SHARE_ROOT
Group Name-----ROOT
                 |_______IN_SHARE_ROOT
                 |_______OUT_SHARE_ROOT
                 |_______SHARE_GROUP_NAME
                 |_______SHARE_GROUP_NAME
                 |_______...
Other groups...

2.6 Security of GSP
In GSP, the basic security infrastructure provides:
● Secure communication. Mutual authentication happens before data transfer. During data transfer the data can be encrypted and its integrity guaranteed.
● Security across organizational boundaries. There may be many security domains, and all the domains can coordinate to provide a distributed, manageable security system.
● Single sign-on. To support mobile users, the system provides single sign-on so that a user can access the system anywhere via any SSP.
● User-defined security class. The user can define the security class to reduce unnecessary overhead.
3. Global Storage Provider Architecture
3.1 Architecture of GSP
GSP is a middleware layer that unifies heterogeneous storage resources. Through the system, millions of clients can obtain high-quality storage services. The system is composed of the Certificate Authority (CA), Global Naming Server (GNS), Storage Service Provider (SSP), and Agent, as shown in Figure 3.
Figure 3 Global Storage Provider Architecture
The CA stores user information such as user name, user ID, password, priority, and group. When a user wants to access the system, the CA first verifies the user name and password to establish the user's authority. In the global environment there are many autonomous groups, each with different priorities, and the system provides a method to unify these priorities. The CA also contains the match tables of users, which help the scheduler build the match tree. As the number of users increases, CAs are also distributed.
The SSP is the key component in this architecture. It contains:
● Several popular user interfaces
● Name server for fast locating and load balancer
● Meta-data cache and cooperative cache between all the SSP
● Search engine
The SSP provides a metadata cache that speeds up metadata searches. It also provides limited cooperative caching among several SSPs; there is a tradeoff between efficiency and consistency maintenance cost.
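One simple way to realize that tradeoff is a time-to-live cache, where a shorter TTL favours consistency and a longer TTL favours lookup speed; the API and TTL value below are our assumptions, not the SSP's actual cache design:

```python
import time

class MetaCache:
    """TTL-based metadata cache: stale entries force a fresh GNS lookup."""
    def __init__(self, ttl=30.0):
        self.ttl = ttl                       # seconds an entry stays valid
        self.entries = {}                    # path -> (metadata, expiry)

    def put(self, path, metadata):
        self.entries[path] = (metadata, time.monotonic() + self.ttl)

    def get(self, path):
        hit = self.entries.get(path)
        if hit is None:
            return None                      # miss: caller queries the GNS
        metadata, expiry = hit
        if time.monotonic() > expiry:        # stale: evict and report a miss
            del self.entries[path]
            return None
        return metadata

cache = MetaCache(ttl=60.0)
cache.put(r"\root\soft", {"server": "DS3"})
```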
Except for the cooperative cache, each SSP is independent, which makes it easy for an SSP to join or leave the system dynamically. It is also easy to add SSPs to meet increasing load.
The GNS is composed of directory servers. To achieve high search speed, the GNS is a set of distributed directory servers; details are described in Section 2.2.
The Agent provides uniform access methods for the SSPs and the users, making all kinds of file systems transparent to users. The Agent also provides data transfer control to support transparent, high-speed, highly available data transfer. More details can be found in Section 2.3.
3.2 Operation of GSP
Figure 4 shows the read-file process through the APIs of the system.
1) A client sends a connect command to an SSP with a user name, group name, and password;
2) The SSP passes the connect request to a CA, which verifies the information. If the current CA cannot verify this user, it sends the verification request to another CA. For a legal user, the user information table and match table are returned;
3) The user sends a read command with the file names to the SSP. The SSP looks up the match tree and finds the metadata servers keeping the metadata of the files. The SSP sends the file names to the corresponding metadata server and returns the metadata to the client;
4) With the meta-data the client can easily get data from Agents.
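The four steps above can be wired together in toy form; all class names, the callable match table, and the sample paths are our simplifications (the real system uses network protocols and the match tree of Section 2.2):

```python
class CA:
    """Verifies credentials and hands back the user's match table (step 2)."""
    def __init__(self, passwords, match_table):
        self.passwords = passwords           # user -> password
        self.match_table = match_table       # callable: path -> meta-server id

    def verify(self, user, password):
        if self.passwords.get(user) != password:
            raise PermissionError("bad credentials")
        return self.match_table

class MetaServer:
    def __init__(self, metadata):
        self.metadata = metadata             # path -> file metadata

    def lookup(self, path):
        return self.metadata[path]

class SSP:
    def __init__(self, ca, meta_servers):
        self.ca = ca
        self.meta_servers = meta_servers

    def read(self, user, password, path):
        match_table = self.ca.verify(user, password)      # steps 1-2
        server_id = match_table(path)                     # step 3: match lookup
        return self.meta_servers[server_id].lookup(path)  # metadata to client

# Step 4 (fetching the data blocks from Agents) would use this metadata.
ca = CA({"alice": "pw"},
        lambda path: "DS4" if path.startswith("\\root\\soft") else "DS1")
ssp = SSP(ca, {"DS1": MetaServer({}),
               "DS4": MetaServer({r"\root\soft\readme": {"agent": "A1"}})})
```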
In some cases, a user has no special client that can use the API defined by the system. When a user accesses the system through a standard FTP or HTTP client, the operating procedure differs from the above, as shown in Figure 5; the SSP does much of the work for the client. Note that when a file is divided, it must be merged through the SSP, so the data transfer passes through the SSP; otherwise, third-party transfer is adopted, shown as the dotted line in Fig.5.
Figure 4 Read Operation through API of GSP
Figure 5 Read Operation through General Interface of GSP
4. Related Works
There have been many efforts on mass storage and global storage [28], such as DPSS, SRB, OceanStore, and GridFTP.
Distributed Parallel Storage System (DPSS) is a scalable, high-performance, distributed-parallel data storage system originally developed as part of the DARPA-funded MAGIC Testbed, with additional support from the U.S. Dept. of Energy. DPSS is a data block server, which provides high-performance data handling and architecture for building high-performance storage systems from low-cost commodity hardware components. This technology has been quite successful in providing an economical, high-performance, widely distributed, and highly scalable architecture for caching large amounts of data that can potentially be used by many different users [4].
SDSC Storage Resource Broker (SRB) is a client-server middleware that provides a uniform interface for connecting to heterogeneous data resources over a network and accessing replicated data sets. In conjunction with the Metadata Catalog, SRB provides a way to access data sets and resources based on their attributes rather than their names or physical locations [7].
Projects such as OceanStore use a peer-to-peer infrastructure. OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly available, and durable storage utility atop an infrastructure comprised of untrusted servers. Any computer can join the infrastructure, contributing storage or providing local user access in exchange for economic compensation. Users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. The providers automatically buy and sell capacity and coverage among themselves, transparently to the users. The utility model thus combines the resources from federated systems to provide a quality of service higher than that achievable by any single company. OceanStore employs a Byzantine-fault-tolerant commit protocol to provide strong consistency across replicas [24].
GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks. The GridFTP protocol is based on FTP. It provides a universal grid data transfer and access protocol for secure, efficient data movement in grid environments. GridFTP extends the standard FTP protocol, and provides a superset of the features offered by the various grid storage systems currently in use [8][25].
5. Conclusions and Future Work
GSP provides a novel architecture of a global distributed storage system with high scalability, high security and high efficiency.
There are still many interesting issues to be further studied in GSP, such as:
● how to cooperate with the Agent to cache data;
● how to design a highly secure protocol for a global file system;
● how to extend the resources from storage to other resources such as databases.
References
[1] The DataGrid Project: http://www.cern.ch/grid/
[2] The GriPhyN Project, http://www.griphyn.org
[3] HDF, http://hdf.ncsa.uiuc.edu/
[4] “Basics of the High Performance Storage System”, http://www.sdsc.edu/projects/HPSS
[5] OceanStore project, http://oceanstore.cs.berkeley.edu/
[6] W. Allcock, J. Bester, J. Bresnahan, A. Chervenak, L. Liming, S. Meder, and S. Tuecke,
“GridFTP Protocol Specification”, GGF GridFTP Working Group Document, September 2002.
[7] The Storage Resource Broker Project, http://www.npaci.edu/DICE/SRB/
[8] The Globus Project - White Paper, “GridFTP: Universal Data Transfer for the Grid”,
http://www.globus.org/
[9] C. Baru, R. Moore, A. Rajasekar, and M. Wan, “The SDSC Storage Resource Broker”,
Proc. CASCON'98 Conference, Nov.30-Dec.3, 1998, Toronto, Canada.
[10] E. Deelman, K. Blackburn, P. Ehrens, C. Kesselman, S. Koranda, A. Lazzarini, G. Mehta,
L. Meshkat, L. Pearlman, K. Blackburn, and R. Williams, “GriPhyN and LIGO: Building a Virtual Data Grid for Gravitational Wave Scientists”, Proceedings of 11th Intl Symposium on High Performance Distributed Computing, 2002.
[11] W. Hoschek, J. Jaen-Martinez, A. Samar, H. Stockinger, and K. Stockinger, “Data
Management in an International Grid Project”, Proceedings of 2000 International Workshop on Grid Computing (GRID 2000), Bangalore, India, December 2000.
[12] B. Allcock, J. Bester, J. Bresnahan, A. L. Chervenak, I. Foster, C. Kesselman, S. Meder,
V. Nefedova, D. Quesnal, and S. Tuecke. “Data Management and Transfer in High Performance Computational Grid Environments”, Parallel Computing Journal, Vol.28, No.5, May 2002, pp.749-771.
[13] E. Deelman, I. Foster, C. Kesselman, and M. Livny, “Representing Virtual Data: A
Metadata Architecture for Location and Materialization Transparency”, Technical Report GriPhyN-2001-14, 2001.
[14] A. Chervenak, I. Foster, C. Kesselman, C. Salisbury, and S. Tuecke, “The Data Grid:
Towards an Architecture for the Distributed Management and Analysis of Large Scientific Datasets,” Journal of Network and Computer Applications.
[15] BBFTP Project, http://doc.in2p3.fr/bbftp/
[16] O. F. Rana and D. W. Walker, “The Agent Grid: Agent-Based Resource Integration in
PSEs”, Proceedings of 16th IMACS World Congress on Scientific Computation, Applied Mathematics and Simulation, Special session on Problem Solving Environments, Lausanne, Switzerland, August 2000
[17] S. Vazhkudai and J. Schopf, “Using Disk Throughput Data in Predictions of End-to-End
Grid Transfers”, Proceedings of the 3rd International Workshop on Grid Computing
(GRID 2002), Baltimore, MD, November 2002.
[18] K. Ranganathan and I. Foster, “Decoupling Computation and Data Scheduling in
Distributed Data-Intensive Applications”, Proceedings of 11th IEEE International Symposium on High Performance Distributed Computing (HPDC-11), Edinburgh, Scotland, July 2002.
[19] K. Ranganathan, A. Iamnitchi, and I. Foster, “Improving Data Availability through
Dynamic Model-Driven Replication in Large Peer-to-Peer Communities”, Proceedings of Global and Peer-to-Peer Computing on Large Scale Distributed Systems Workshop, Berlin, Germany, May 2002.
[20] I. Foster, J. Voeckler, M. Wilde, and Y. Zhao, “Chimera: A Virtual Data System for
Representing, Querying and Automating Data Derivation”, Proceedings of the 14th Conference on Scientific and Statistical Database Management, Edinburgh, Scotland, July 2002.
[21] J. M. Schopf and S. Vazhkudai, “Predicting Sporadic Grid Data Transfers”, Proceedings
of 11th IEEE International Symposium on High-Performance Distributed Computing (HPDC-11), Edinburgh, Scotland, July 2002.
[22] J. Bester, I. Foster, C. Kesselman, J. Tedesco, and S. Tuecke, “GASS: A Data Movement
and Access Service for Wide Area Computing Systems”, Proceedings of Sixth Workshop on I/O in Parallel and Distributed Systems, May 1999.
[23] C. Baru, “Managing Very Large Scientific Data Collections”, Proceedings of 5th
International Conference on High Performance Computing (HiPC'98), Dec. 1998, Chennai, India.
[24] S. Rhea, P. Eaton, D. Geels, H. Weatherspoon, B. Zhao, and J. Kubiatowicz, “Pond: the
OceanStore Prototype”, Proceedings of the 2nd USENIX Conference on File and Storage Technologies (FAST '03), March 2003
[25] B. Allcock, J. Bester, J. Bresnahan, A. L. Chervenak, I. Foster, C. Kesselman, S. Meder,
V. Nefedova, D. Quesnel, and S. Tuecke, “Secure, Efficient Data Transport and Replica Management for High-Performance Data-Intensive Computing”, Proceedings of IEEE Mass Storage Conference, April 2001.
[26] A. S. Szalay, P. Z. Kunszt, A. Thakar, J. Gray, D. Slutz, and R. J. Brunner, “Designing
and Mining Multi-Terabyte Astronomy Archives: The Sloan Digital Sky Survey”, SIGMOD Record, Vol. 29, pp.451-462, 2000.
[27] B. Tierney, J. Lee, B. Crowley, M. Holding, J. Hylton, and F. Drake, “A Network-Aware
Distributed Storage Cache for Data Intensive Environments”, Proceedings of IEEE High Performance Distributed Computing conference (HPDC-8), August 1999.
[28] H. Jin, T. Cortés, and R. Buyya, High Performance Mass Storage and Parallel I/O, IEEE
Press, John Wiley & Sons, Inc., 2002