Configure an NFSv4 Server and Client
Network File System Basics
- Network File System (NFS) is designed for sharing files and directories over a network.
- File systems are exported by an NFS server, and appear and behave on an NFS client as if they were located on the local machine.
- Directories like /home/ and directories with shared information are frequently exported via NFS.
NFS Example
- The following is an example of mounting the /export/home/ directory from the NFS server sun as /home on the machine earth:
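A minimal sketch of that mount, run as root on earth (the mount point /home must already exist; the NFS version is negotiated automatically):

earth:~ # mount -t nfs sun:/export/home /home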
A computer can be both an NFS server and an NFS client. It can supply file systems over the network (export) and mount file systems from other hosts (import). SLES 12 does not support exporting and importing NFS shares on the same server for production systems.
The NFS daemon is part of the kernel and only needs to be configured and then activated. The start script is /etc/init.d/nfsserver but, due to systemd's SysV compatibility, NFS is actually controlled via systemd. The kernel NFS daemon includes file locking, which means that only one user at a time has write access to files.
How NFS Works
- NFS is an RPC (Remote Procedure Call) service
- An essential component for RPC services is rpcbind
- RPC services start up, bind to a port, and communicate that port and the service offered to rpcbind
- RPC programs must be restarted each time you restart rpcbind
NFS is an RPC (Remote Procedure Call) service. An essential component for RPC services is rpcbind, which manages these services and needs to be started first. When an RPC service starts up, it binds to a port in the system, but also communicates this port and the service it offers to rpcbind. Because every RPC program must be registered with rpcbind when it is started, RPC programs must be restarted each time you restart rpcbind.
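To check which RPC services are currently registered with rpcbind, you can query it with rpcinfo; when the NFS server is running, entries such as nfs and mountd appear in the output:

rpcinfo -p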
NFSv4 Improvements
- NFSv4 offers a wide range of improvements in areas such as:
- Uses stateful rather than stateless operations
The Network File System Version 4 (NFSv4) is a distributed file system similar to previous versions of NFS in its straightforward design and its independence of transport protocols and operating systems for file access in a heterogeneous network. Unlike earlier versions of NFS, the new protocol integrates file locking, strong security, compound RPCs (combining related operations), and delegation capabilities to enhance client performance for narrow data-sharing applications on high-bandwidth networks.
Note: NFSv4 ACLs and krb5p (Kerberos Privacy) are currently not supported on SLES 12.
The NFSv4 protocol is not backward compatible with the NFSv2/3 protocols, but the Linux NFS daemon supports the NFS v2/v3/v4 protocol versions.
- Uses TCP for transport by default
- Uses Compound RPC calls
- Single NFS daemon
Using UDP for NFS with high speed links such as Gbit Ethernet can lead to file corruption. The underlying problem is explained in the nfs manual page under “Using NFS over UDP on high-speed links”.
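If you want to make sure an NFSv3 mount uses TCP instead of UDP, the transport can be requested explicitly on the client; server1 and the directories below are only placeholders:

mount -t nfs -o nfsvers=3,proto=tcp server1:/home /mnt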
Security Improvements
- Encryption is part of the specification
- Client-server interactions can now be secured with GSS-API
- UID to user name mapping
Interoperability Improvements
- Exports a single “pseudo file system” rather than multiple file systems
- Mandatory and advisory locking of files is now supported
/etc/exports does not require any special entries to work with NFSv4. SLES 11 releases required 'fsid=0' on precisely one entry, and 'bind=' annotations on others. This is no longer required and should be removed. It is still supported, so there is no immediate need to change /etc/exports when upgrading to SLES 12.
NFS Server Configuration
/etc/exports
- Syntax: /shared/directory host(option1,option2)
- Example: /export/data server1(rw,sync)
- host can be an IP address, network address, host name, domain name with wildcard, or netgroup.
- options define read-only or read-write access, how the root user is treated, and so on.
Possible host entry examples:
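A few illustrative host field formats (the addresses and names below are placeholders; see man exports for the full syntax):

192.168.1.10          # single IP address
192.168.1.0/24        # network address
server2               # host name
*.example.com         # domain name with wildcard
@trusted              # netgroup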
Options include:
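A selection of frequently used options (an illustrative sketch; see man exports for the complete list and exact semantics):

ro                    # export read-only (default)
rw                    # allow read and write requests
sync                  # reply to requests only after changes are committed to stable storage
async                 # reply before changes are committed (faster, but risks data loss)
root_squash           # map requests from UID/GID 0 to the anonymous user (default)
no_root_squash        # do not map root requests
no_subtree_check      # disable subtree checking (default since nfs-utils 1.1.0)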
From release 1.1.0 of nfs-utils onwards, the default will be no_subtree_check as subtree_checking tends to cause more problems than it is worth. If you genuinely require subtree checking, you should explicitly put that option in the exports file.
For a detailed explanation of all options and their meaning, refer to the man page of exports (man exports).
Whenever you want to specify different permissions for a subdirectory (such as /home/geeko/pictures/) from an already exported directory (such as /home/geeko/), the additional directory needs its own separate entry in /etc/exports.
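For example, to export /home/geeko/ read-write but its pictures/ subdirectory read-only, /etc/exports could contain two entries like these (hosts and options are only illustrative):

/home/geeko            *(rw,sync,no_subtree_check)
/home/geeko/pictures   *(ro,sync,no_subtree_check)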
The following is an example of an edited /etc/exports file that works for NFS versions 3 and 4:
#
# /etc/exports
#
/home          *(rw,sync,no_subtree_check)
/export/data   *(ro,sync,no_subtree_check)

On the client you can mount the NFS pseudo-root /:

server2:~ # mount server1:/ /mnt
server2:~ # ls /mnt
export  home
Note that it is not / on the server's file system that is mounted, but a pseudo root that contains only /home/ and /export/, not other directories such as /etc that exist in the server's file system at the same level. If there are other directories in the /export directory on the server, they are not visible on the client; only /export/data appears.
If you want to specifically mount the file system with NFSv3, you have to use the command:
mount -o nfsvers=3 server1:/home /mnt
Using mount -o nfsvers=3 server1:/ /mnt would not work, because there is no NFSv3 export of / in the above /etc/exports file.
/etc/sysconfig/nfs
/etc/idmapd.conf
server1:~ # cat /etc/sysconfig/nfs
## Path:        Network/File systems/NFS server
## Description: number of threads for kernel nfs server
## Type:        integer
## Default:     4
## ServiceRestart: nfsserver
#
# the kernel nfsserver supports multiple server threads
#
USE_KERNEL_NFSD_NUMBER="4"
...
server1:~ # cat /etc/idmapd.conf
[General]
Verbosity = 0
PipefsDirectory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
NobodyUser = nobody
NobodyGroup = nobody
Required Services
Control the NFS Server
- Start scripts in /etc/init.d/ are still in use:
/etc/init.d/nfsserver start
/etc/init.d/nfsserver stop
- Control via systemd works because of systemd's compatibility with SysV init scripts
- After changing /etc/exports or /etc/sysconfig/nfs, start or restart the NFS server service:
systemctl restart nfsserver.service
- After changing /etc/idmapd.conf, reload the configuration file:
killall -HUP rpc.idmapd
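Because the nfsserver service is handled through systemd's SysV compatibility, the usual systemd commands also apply (shown here only as an optional convenience):

systemctl enable nfsserver.service    # start the NFS server automatically at boot
systemctl status nfsserver.service    # check whether the server is running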
You can use the /etc/init.d/nfsserver command to start the NFS server. The nfsserver script passes the list of exported directories to the kernel, and then starts or stops the daemon rpc.mountd and, using rpc.nfsd, the nfsd kernel threads. The mount daemon (/usr/sbin/rpc.mountd) accepts each mount request and compares it with the entries in the configuration file /etc/exports. If access is allowed, the data is delivered to the client.
Because rpc.nfsd can start several kernel threads, the start script interprets the variable USE_KERNEL_NFSD_NUMBER in the file /etc/sysconfig/nfs. This variable determines the number of threads to start. By default, four server threads are started.
NFSv4 support is activated by setting the variable NFS4_SUPPORT to yes in /etc/sysconfig/nfs.
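A minimal sketch of the two variables discussed above as they might appear in /etc/sysconfig/nfs (the thread count of 8 is only an example), followed by the restart that makes them take effect:

USE_KERNEL_NFSD_NUMBER="8"
NFS4_SUPPORT="yes"

systemctl restart nfsserver.service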
NFS Server Commands
- exportfs: exports a directory from the command line without an entry in /etc/exports. For example, to export the /software directory read-only to all hosts in the network 192.168.0.0/24, enter the following:
exportfs -o ro,root_squash,sync 192.168.0.0/24:/software
- exportfs -r: re-exports all directories listed in /etc/exports and updates the /var/lib/nfs/etab file.
- showmount -e hostname: displays the directories exported by the specified host.
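Two further exportfs invocations that are often useful here (a sketch; see man exportfs for details):

exportfs -v                              # list the currently exported directories and their options
exportfs -u 192.168.0.0/24:/software     # withdraw (unexport) the share exported above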
Configure an NFSv4 Server with YaST
- YaST > Network Services > NFS Server
- Start On Boot
- Enable NFSv4
- Enter NFSv4 domain name
- (optional) Enter GSS Security
- > Next
- Click Add Directory > /export > OK
- Add Host > Host Wildcard - * ; Export Options - ro, root_squash, sync, no_subtree_check
Configure an NFS Client
- From the command line
- Using YaST
NFS Client: Command line
- You can import a directory manually from an NFS server by using the mount command.
- The only prerequisite is a running RPC port mapper, which you can start by entering the command rcrpcbind start
- Use the mount option -t to indicate the file system type.
- In the following example, the file system type NFSv4 is specified:
mount -t nfs4 -o options host:/directory /mountpt
- Use the umount command to unmount an imported file system.
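A complete, minimal example; server1 and the directories are placeholders, and the mount point must exist before mounting:

mkdir -p /import/home
mount -t nfs4 server1:/home /import/home
umount /import/home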
Mount NFS Directories Automatically
- To mount directories automatically when booting, you need to make corresponding entries in the /etc/fstab file.
- When the system is booted, the start script /etc/init.d/nfs, using systemd's SysV compatibility, loads the /etc/fstab file.
- The following is an example of an entry for an NFS mount point in the /etc/fstab file:
server1:/data/ISOs /import/ISOs nfs4 defaults 0 0
In the above /etc/fstab entry, the first value indicates the hostname of the NFS server (server1) and the directory it exports (/data/ISOs/).
The second value indicates the mount point, which is the directory in the local file system where the exported directory should be attached (/import/ISOs/).
The third value indicates the file system type (nfs4).
The fourth value, a comma-separated list of options following the file system type, can be used to provide NFS-specific and general mount options. In this case, default values are used.
At the end of the line, there are two numbers (0 0). The first indicates whether to back up the file system with the help of dump (1) or not (0). The second number configures whether the file system check is disabled (0), done on this file system with no parallel checks (1), or parallelized when multiple disks are available on the computer (2). In the example, the system does neither, as both options are set to 0.
- After modifying an entry of a currently mounted file system in the /etc/fstab file, you can have the system read the changes by entering
mount -o remount /mountpoint
- To mount all file systems that are not currently mounted and do not contain the noauto option, enter:
mount -a
Configure an NFS Client with YaST
- YaST > Network Services > NFS Client
- NFS Settings > Enable NFSv4 > Enter NFS Domain Name > OK
- NFS Shares > Add > NFS Server ; Remote Directory ; Mount point ; Options > OK
- Check /etc/fstab
Monitor the NFS System
- Information about rpcbind (portmapper):
rpcinfo -p
The NFS server daemon registers itself with rpcbind (the portmapper) under the name nfs.
The NFS mount daemon uses the name mountd.
- Information on the exported directories of an NFS server: showmount
- To display the exported directories on the machine server1, use the command
showmount -e server1
- The option -a shows which computers have mounted which directories.
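For example, to list the clients and the directories they have mounted from server1:

showmount -a server1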