4. Get access to the School's high-performance file server
Last revision February 11, 2016
The Stanford University School of Earth, Energy & Environmental Sciences File Server Cluster provides high-capacity, high-performance data storage for individual and research group use by all faculty, students, and staff in the School. Its purpose is to provide a central repository of protected, shareable disk storage that can be accessed from any computer on the network.
The Stanford Earth file server cluster comprises multiple disk storage pools whose file shares are accessed under different server names, including sesfs.stanford.edu, seslabfs.stanford.edu, sacfs.stanford.edu, srbfs.stanford.edu, tecgeofs.stanford.edu, gbfs.stanford.edu, stressfs.stanford.edu, and others that may be created for new large research file shares. For simplicity, this note uses the server name sesfs.stanford.edu, which accesses the largest number of file shares. The information in this note applies in all cases, regardless of the server name used to access the cluster.
Who can use sesfs.stanford.edu?
All regular faculty, staff, and students in any of the departments or programs of the School of Earth, Energy & Environmental Sciences may use this server. Basically, if the Stanford Directory shows that you have an official affiliation with any part of Stanford Earth, you are eligible to be a "standard" user of this server.
Standard users get a private home share of 10 gigabytes to store personal files. They may also access common spaces, such as the ftp share for anonymous ftp service, the scr1 scratch share, and any research, teaching or administrative group file shares to which they have been granted access by the faculty or staff "owner". If a user is maintaining content on a Stanford Earth school, department, or research group web site on pangea, the web managers will give him/her access to the appropriate folder in the WWW file share.
All connections to sesfs.stanford.edu must be made using your SUNet ID name and password. There are no separate accounts for this system.
Although you may be eligible, access is not automatic! The system managers need to add your SUNet ID to the list of authorized users and create your home share. Departments and programs normally supply lists of new graduate students each year who need access, but don't always notify the system managers about new faculty, staff, and post-docs. If you think you should have access, but cannot login, contact the system managers.
Research, teaching, or administrative group share "owners" can request that colleagues or students in units of the university other than Stanford Earth be given access to their group shares or to folders they maintain on the common shares (ftp and WWW). Those individuals will have only the access granted by the share owner, and will not get a home share on sesfs.stanford.edu.
Similarly, colleagues at other institutions can be given guest access to research or administrative group shares on sesfs.stanford.edu. However, the share owner must first sponsor this colleague for a full-service SUNet ID. Due to technical limitations in the way the new server authenticates users, guests will need a "full service" ID that must be charged to a university financial account; the free "base service" is not sufficient. Such guests will have only the access granted by the share owner, and will not get a home share on sesfs.stanford.edu.
Access termination
This file server follows the official policy for termination of SUNet IDs, which gives faculty and students a 120-day grace period after they leave the university before their SUNet ID is de-activated. There is no grace period for staff. Faculty and staff emeriti, however, have permanently active SUNet IDs.
When your SUNet ID is no longer active, you can no longer connect to the sesfs.stanford.edu file server. Your home share will be retained on disk for 30 days after your SUNet ID becomes inactive. It will then be archived to tape and erased from disk, unless your SUNet ID is reactivated in the meantime due to re-affiliation with the university or full-service guest sponsorship.
Disk storage allocation and policies
Sesfs.stanford.edu has a total usable storage capacity of 60 terabytes, which can be incrementally expanded to at least 100 terabytes as funds become available. In addition to free space allocations (see below), groups in the School that need multiple terabytes of storage can "cost-share" in the purchase of additional disk arrays and benefit from data protection features such as hardware redundancy, professional management, and data backup.
Disk space on sesfs.stanford.edu is organized into units called "file shares". This file server is based on Unix but mimics a Windows file server. With a few exceptions, all shares employ the Windows NTFS file permission model, which provides inheritance and fine-grained access control. All access control is by lists of specific SUNet IDs - use of Active Directory groups is not supported yet.
Four types of file shares can be created on sesfs.stanford.edu: home shares; common shares; group shares; and system shares (for server internal use only). Please see the table of share names for complete details on the names, sizes, purposes, and access policies of all non-system shares on this server.
Home shares
Every standard user is given a home share that is configured as a separate file share in order to maintain privacy of user files. No user can access another's home share in any way. For special needs, you can request that the system managers give read or write access to a particular folder in your home share to a specific list of SUNet IDs. But you are encouraged to use group shares and the common /scr1 share for sharing files with others.
Each home share is preconfigured with two special folders that should not be removed by the user.
- WWW - this folder is your personal web site. Any files you place in this folder are accessible on the web using the URL http://earth.stanford.edu/~sunetid/filename, where sunetid is your SUNet ID name and filename is the name or path of the file to be served. The tilde character (~) before your SUNet ID name is required. For example, if your SUNet ID were jdoe, a file named report.pdf placed in the WWW folder would be served at http://earth.stanford.edu/~jdoe/report.pdf.
- GRIDLab - this folder is used to automatically store files you create when working on the computers in the School's G.R.I.D. research lab. Those computers automatically connect to your home share and place the files you save to the Desktop or Documents folders into the GRIDLab folder of your home share. That way, your files are accessible from any computer inside or outside the lab.
Files you store on this server, particularly those served on the web in your home share WWW folder, are subject to the university's Computer and Network Usage Policy. Among other things, illegal copying or sharing of copyrighted materials will not be tolerated.
All users are given an initial 10-gigabyte quota for their home share. Stanford Earth faculty members may request a larger quota for themselves or their students in order to store academically related files, such as thesis project files. Send the request to the system managers. If the same files will be used by more than one person in a research group, it is more appropriate for the faculty member to request a group share (below).
How to apply for research, teaching, and administrative group space
Any faculty member or lab manager in Stanford Earth may request the creation of a file share to be used for storage of common data, results, analyses, etc., by members of his/her research group. These research group shares may be linked to the cluster computers in the School's Center for Computational Earth and Environmental Science.
Stanford Earth faculty may also request creation of a file share to store student work or other materials for a course they are teaching.
Stanford Earth department managers and other senior staff may request file shares for administrative use.
You may request multiple shares for different purposes.
Send your request via email to the system managers. Please provide answers to the questions below.
- Name of the research or administrative group.
- Desired file share name. This name may only contain letters and numerals, with no punctuation or special characters. Examples: sac, ulfem, and gesadmin.
- Some idea of the purposes of the file share, particularly the quantity, size and longevity of the files that will be stored. Will there be many small, short-lived files? Or a growing collection of large data sets that need to be archived? This information will help in planning data protection.
- Total amount of disk space desired during the first year. Also describe likely long-term growth needs. Groups can request up to 200 gigabytes of storage for free. If larger amounts are needed, the group will be asked to contribute a one-time fee of $500 for each terabyte (1,024 gigabytes) needed - for example, a group needing 3 terabytes would contribute $1,500. This fee is used as a cost-sharing contribution to the purchase of additional protected enterprise-class disk arrays. It covers part of the marginal costs of the new disks and multiple years of maintenance.
- Names and SUNet IDs of the people who will be the regular users of the file share. You can include collaborators from other departments not in Stanford Earth, or guests whom you have sponsored for a full-service SUNet ID. We will assume that all the people you list need write access to the entire share, unless you specify restrictions.
- Whether the share will be "open" or "closed". An open share can be accessed in read-only mode by anyone with a valid full-service SUNet ID. This is suitable for a repository of data to be shared to all. A closed share can only be accessed by the persons you list as regular users.
- By default, all group shares are made accessible both via direct connection to sesfs.stanford.edu (or a special group server name for large research group shares), which mounts the share as a network file share, and via sftp connections to sestransfer.stanford.edu for copying files back and forth. Sftp access is often useful for off-campus connections. You may request that either connection method be disabled if there are special security requirements.
- List any special permission settings needed, such as folders that should be accessible only to certain people within your group. You can also specify disk quotas (limits on how much disk space particular people can use). Additional restrictions can be added at any time in the future - just send email.
- Describe any other special needs, such as direct access to files in this share from the pangea web server or the cluster computers in the Center for Computational Earth and Environmental Science.
How to connect to file shares on sesfs.stanford.edu
Sesfs.stanford.edu can be directly accessed using built-in features of Windows and Macintosh PCs. Linux workstations use the Samba software package (possibly an optional installation) to access it.
The basic idea is that you "mount" one or more "network file shares" defined on sesfs.stanford.edu so they appear to be directly connected to your workstation, using the Common Internet File System (CIFS) protocol. Then you can copy files to or from the server or directly open and edit files on the server. There is no need to copy files to the local disk to work with them. This makes it easy to use the same files on multiple computers. The server enforces file locking to make sure simultaneous connections from different computers cannot make conflicting file updates.
On Windows PCs, a mounted file share appears in Windows Explorer as another disk drive letter. On Mac OS X, it shows up on the desktop as a separate disk volume. On Linux, you attach the share to an empty local directory.
You can make the connection to a file share only when needed, or you can make an automatic connection every time you start up or log in to your local system.
To get started, you need the name of the server - generally sesfs.stanford.edu, though it may be a special name for large research group shares - and the name of the file share(s) you want to use from the table of share names. Use your SUNet ID and password to log in.
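As a concrete illustration, here is a minimal sketch of mounting a share from a Linux workstation, assuming the Samba cifs-utils package is installed. The share name myshare and the mount point /mnt/sesfs are hypothetical placeholders - substitute a real share name from the table of share names and your own SUNet ID:

    # Create an empty local directory to use as the mount point
    sudo mkdir -p /mnt/sesfs

    # Mount the share over CIFS; you will be prompted for your SUNet ID password
    # ("myshare" is a placeholder - use a real share name from the table)
    sudo mount -t cifs //sesfs.stanford.edu/myshare /mnt/sesfs -o username=sunetid

    # When finished, unmount the share
    sudo umount /mnt/sesfs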
Follow the detailed directions, with screenshots, for your operating system from the list below. You can request configuration help from our CRC desktop consultants using the HelpSU web form.
Alternate connection methods for off-campus use
Due to campus-wide firewall rules, "mounting" file shares from sesfs.stanford.edu via the CIFS protocol only works for computers connected to the Stanford campus network (including Stanford DSL). If you want to access file shares from off-campus, you can install and use the Stanford VPN client on your computer. That client authenticates you (using your SUNet ID) so your off-campus network connection will be treated by the firewall as if you were on campus.
Alternatively, you can perform file transfers to/from your home share and most other shares from any computer on the Internet using an sftp client program, such as the Stanford site-licensed SecureFX program for Windows PCs, the site-licensed Fetch program for Mac OS X, or the built-in sftp command-line program on Linux and Mac OS X. These programs allow you to copy files back and forth between your computer and a server, but you cannot edit files directly on the server this way.
The sesfs.stanford.edu file server does not directly support sftp connections, so we have set up another server for that purpose, which has private "back-end" connections to the file server. Connect your sftp client program to sestransfer.stanford.edu using your SUNet ID and password.
Once connected, you will be located in your home share. You can navigate to most other shares using their directory locations shown in the last column of the table of share names.
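For example, with the built-in command-line sftp client on Linux or Mac OS X (the file and share names below are hypothetical placeholders):

    # Connect with your SUNet ID; you will be prompted for your password
    sftp sunetid@sestransfer.stanford.edu

    # You start out in your home share
    sftp> put localfile.dat        # upload a file from your computer
    sftp> get results.txt          # download a file to your computer
    sftp> cd /path/to/groupshare   # navigate to another share (see the table of share names)
    sftp> exit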
Server specifications, including data protection features
Sesfs.stanford.edu is a dual-node clustered network file server using the NexentaStor operating system on "commodity" servers and enterprise-class disks.
Each server node has a quad-core Xeon processor, multiple gigabit ethernet interfaces, and 24 gigabytes of error-correcting RAM. Each node has redundant 24 gigabit/second SAS channels to the RAID-6 protected disk arrays. With this level of RAID protection, each set of disks can tolerate the simultaneous failure of two disks without interruption in service or loss of data while the information on the failed disks is automatically rebuilt onto new disks.
Individual disk arrays constitute "volumes" that are subdivided into individual user or group file shares, each with a designated disk size or quota. Each volume has a separate network name (and possibly aliases) and is served by one server node at a time, with automatic failover if the active node dies. Users who were connected to a failing node may need to reconnect after the volume is taken over by the other node.
In addition to the service and data protection offered by the redundant hardware features, the sesfs.stanford.edu cluster has two features designed to maintain copies of all files in case of accidental disk erasure or corruption: "snapshots" and a tape backup system.
A "snapshot" is a read-only image of all files on the share "frozen" at a specific point in time. Any changes to files (including erasure) made after the snapshot is taken do not affect the version stored in the snapshot. Eventually, the snapshot is removed to recover disk space. Prior versions of files can be recovered from the snapshots. Snapshots are made automatically by the system at pre-set intervals. They can be accessed directly by the user to recover a file.
Snapshots are made several times a day for home shares and for small group shares with many files that change frequently. Some snapshots are kept for only a few hours, others for two weeks, and one per week is retained for two months. Research group shares with large files that change infrequently have fewer snapshots, taken at longer intervals. Specific snapshot frequency and retention policies are listed in the table of share names.
Protection against total system failure or other disasters is provided by a separate backup system with automated tape library. Backups are made every night of all files on sesfs.stanford.edu that were created or changed during the previous day. Two copies of each backup are kept - one in the tape library, for easy access, and the other moved daily (Monday to Friday only) to an offsite location, for security.