
Configure an NFSv4 Server and Client

Network File System Basics

  • Network File System (NFS) is designed for sharing files and directories over a network.
  • File systems are exported by an NFS server, and appear and behave on an NFS client as if they were located on the local machine.
  • Directories like /home/ and directories with shared information are frequently exported via NFS.

NFS Example

  • The following is an example of mounting the /export/home/ directory from the NFS server sun as /home on the machine earth:

Attach:nfs1.png

A computer can be both an NFS server and an NFS client. It can supply file systems over the network (export) and mount file systems from other hosts (import). SLES 12 does not support exporting and importing NFS shares on the same server for production systems.

The NFS daemon is part of the kernel and only needs to be configured and then activated. The start script is /etc/init.d/nfsserver but, due to systemd's SystemV compatibility, NFS is actually controlled via systemd. The kernel NFS daemon includes file locking, which means that only one user at a time has write access to files.

How NFS Works

  • NFS is an RPC (Remote Procedure Call) service
  • An essential component for RPC services is rpcbind
  • RPC services start up, bind to a port, and communicate that port and the service offered to rpcbind
  • RPC programs must be restarted each time you restart rpcbind

NFS is an RPC (Remote Procedure Call) service. An essential component for RPC services is rpcbind, which manages these services and needs to be started first. When an RPC service starts up, it binds to a port in the system, but also communicates this port and the service it offers to rpcbind. As every RPC program must be registered by rpcbind when it is started, RPC programs must be restarted each time you restart rpcbind.
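
Because of this dependency, a restart of rpcbind must be followed by a restart of the RPC services that register with it. On SLES 12 with systemd, that could look like this (a sketch, assuming the unit names rpcbind.service and nfsserver.service):

    server1:~ # systemctl restart rpcbind.service
    server1:~ # systemctl restart nfsserver.service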

NFSv4 Improvements

  • NFSv4 offers a wide range of improvements in areas such as:
* Performance
* Security
* Interoperability
  • Uses stateful rather than stateless operations
* client uses states to notify the server of its intentions on a file
* server can return information to clients about other clients' intentions

The Network File System Version 4 (NFSv4) is a distributed file system similar to previous versions of NFS in its straightforward design and its independence of transport protocols and operating systems for file access in a heterogeneous network. Unlike earlier versions of NFS, the new protocol integrates file locking, strong security, Compound RPCs (combining related operations), and delegation capabilities to enhance client performance for narrow data sharing applications on high-bandwidth networks.

Note: NFSv4 ACLs and krb5p (Kerberos Privacy) are currently not supported on SLES 12.

The NFSv4 protocol is not backward compatible with the NFSv2/3 protocols, but the Linux NFS daemon supports the NFS v2/3/4 protocol versions.

  • Uses TCP for transport by default
* Requires only a single well-known port (tcp:2049) for communication, as the mount and lock protocols are now part of NFSv4 itself
  • Uses Compound RPC calls
* Several NFS operations can be included in a single RPC request
  • Single NFS daemon
* nfsd encompasses all features/functionality of v2/3/4 suite of daemons

Using UDP for NFS with high speed links such as Gbit Ethernet can lead to file corruption. The underlying problem is explained in the nfs manual page under “Using NFS over UDP on high-speed links”.
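
NFSv4 always uses TCP; for an NFSv3 mount, TCP can be requested explicitly via the standard proto mount option (documented in nfs(5)), for example:

    mount -o nfsvers=3,proto=tcp server1:/home /mnt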

Security Improvements

  • Encryption is part of the specification
  • Client-server interactions can now be secured with GSS-API
* Level of security can be selected (authentication only, integrity-protected interactions, or full session privacy)
  • UID to user name mapping
* Users are passed as a string (user@domain) rather than a UID

Interoperability Improvements

  • Exports a single “pseudo file system” rather than multiple file systems
  • Mandatory and advisory locking of files is now supported
* Locking is lease-based

/etc/exports does not require any special entries to work with NFSv4. SLES 11 releases required 'fsid=0' on precisely one entry, and 'bind=' annotations on others. This is no longer required and should be removed. It is still supported, so there is no immediate need to change /etc/exports when upgrading to SLES 12.
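
For reference, a legacy SLES 11 style /etc/exports could have looked like the following sketch (hypothetical paths; fsid=0 marks the pseudo-root and bind= maps a directory into it):

    /export       *(fsid=0,ro,sync,no_subtree_check)
    /export/data  *(bind=/data,rw,sync,no_subtree_check)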

NFS Server Configuration

  • /etc/exports
* Contains the list of exported file systems
  • Syntax:
/shared/directory host(option1,option2)
  • Example:
/export/data server1(rw,sync)
  • host can be an IP address, network address, host name, domain name with wildcard, or netgroup
  • options define read only or read-write access, how the root user is treated, etc.

Possible host entry examples:

IP Address: 192.168.1.2
Network address: 192.168.1.0/24
Host name: server1.example.com
Domain name with wildcard: *.example.com
Any host: *
NIS netgroup: @my-hosts
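
Several host(options) pairs, separated by spaces, may appear on a single line. A hypothetical entry combining two host types:

    /export/data 192.168.1.0/24(ro,sync) server1.example.com(rw,sync)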

Options include:

ro: File system is exported with read-only permissions (the underlying file system permissions have to allow read access as well). Set by default.
rw: File system is exported with read-write permissions (underlying file system permissions have to allow read and write access as well).
root_squash: Ensures that the root user of the client machine does not have root permissions on this file system. This is achieved by mapping requests from user ID 0 (root) to user ID 65534, which should be assigned to the user nobody (the default). Set by default.
no_root_squash: Does not map user ID 0 to user ID 65534, keeping the root permissions valid: the client's root user can read and write files as root.
subtree_check: If a subdirectory of a file system is exported, but the whole file system is not, then whenever an NFS request arrives, the server must check not only that the accessed file is in the appropriate file system but also that it is in the exported tree. This check is called subtree check.
no_subtree_check: No subtree check is performed. As a general guide, a home directory file system, which is normally exported at the root and may see lots of file renames, should be exported with subtree checking disabled. A file system which is mostly read-only, and at least doesn't see many file renames (e.g. /usr or /var), and for which subdirectories may be exported, should probably be exported with subtree checks enabled.

From release 1.1.0 of nfs-utils onwards, the default will be no_subtree_check as subtree_checking tends to cause more problems than it is worth. If you genuinely require subtree checking, you should explicitly put that option in the exports file.

sync: Reply to requests only after the changes have been committed to stable storage (this is the default, but if neither sync nor async is specified, a warning appears when starting the NFS server).
crossmnt: This option is similar to nohide but it makes it possible for clients to move from the filesystem marked with crossmnt to exported filesystems mounted on it. Thus when a child filesystem "B" is mounted on a parent "A", setting crossmnt on "A" has the same effect as setting "nohide" on B.

For a detailed explanation of all options and their meaning, refer to the man page of exports (man exports).

Whenever you want to specify different permissions for a subdirectory (such as /home/geeko/pictures/) from an already exported directory (such as /home/geeko/), the additional directory needs its own separate entry in /etc/exports.
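
For example, hypothetical entries granting the subdirectory more restrictive permissions could look like this:

    /home/geeko          *(rw,sync,no_subtree_check)
    /home/geeko/pictures *(ro,sync,no_subtree_check)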

The following is an example of an edited /etc/exports file that works for NFS versions 3 and 4:

    #
    # /etc/exports
    #
    /home *(rw,sync,no_subtree_check)
    /export/data *(ro,sync,no_subtree_check)

On the client you can mount the NFS pseudo-root /:

    server2:~ # mount server1:/ /mnt
    server2:~ # ls /mnt
    export home

Note that it is not / on the server's file system that is mounted, but a pseudo root that contains only /home/ and /export/, not other directories such as /etc that exist in the server's file system at the same level. If there are other directories in the /export directory on the server they are not visible on the client, only /export/data appears.

If you want to specifically mount the file system with NFSv3, you have to use the command:

mount -o nfsvers=3 server1:/home /mnt

Using mount -o nfsvers=3 server1:/ /mnt would not work, because there is no NFSv3 export of / in the above /etc/exports file.

  • /etc/sysconfig/nfs
* Contains variables used by the nfsserver and nfs init scripts that determine the behavior of the daemons
* Set NFS4_SUPPORT="yes" to turn on support of version 4
* Some variables determine which daemons to start
  • /etc/idmapd.conf
* Contains information about the IDmap domain and specific ID mappings such as the nobody user/group (required for NFSv4 only)
* The Domain parameter must be the same on server and client
    server1:~ # cat /etc/sysconfig/nfs
    ## Path: Network/File systems/NFS server
    ## Description: number of threads for kernel nfs server
    ## Type: integer
    ## Default: 4
    ## ServiceRestart: nfsserver
    #
    # the kernel nfsserver supports multiple server threads
    #
    USE_KERNEL_NFSD_NUMBER="4"
    ...

    server1:~ # cat /etc/idmapd.conf
    [General]
    Verbosity = 0
    Pipefs-Directory = /var/lib/nfs/rpc_pipefs
    Domain = localdomain
    [Mapping]
    Nobody-User = nobody
    Nobody-Group = nobody

Required Services

Attach:nfs2.png
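
As a quick check, the state of the services involved can be inspected with systemctl (assuming the SLES 12 unit names rpcbind.service and nfsserver.service):

    server1:~ # systemctl status rpcbind.service
    server1:~ # systemctl status nfsserver.service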

Control the NFS Server

  • Start scripts in /etc/init.d/ are still in use:
/etc/init.d/nfsserver start
/etc/init.d/nfsserver stop
  • Control via systemd works because of systemd's compatibility with SysV init scripts
  • After changing /etc/exports or /etc/sysconfig/nfs, start or restart the NFS server service:
systemctl restart nfsserver.service
  • After changing /etc/idmapd.conf, reload the configuration file:
killall -HUP rpc.idmapd

You can use the /etc/init.d/nfsserver command to start the NFS server. The nfsserver script passes the list of exported directories to the kernel, and then starts or stops the daemon rpc.mountd and, using rpc.nfsd, the nfsd kernel threads. The mount daemon (/usr/sbin/rpc.mountd) accepts each mount request and compares it with the entries in the configuration file /etc/exports. If access is allowed, the data is delivered to the client.

Because rpc.nfsd can start several kernel threads, the start script interprets the variable USE_KERNEL_NFSD_NUMBER in the file /etc/sysconfig/nfs. This variable determines the number of threads to start. By default, four server threads are started.

NFSv4 support is activated by setting the variable NFS4_SUPPORT to yes in /etc/sysconfig/nfs.
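
For example, the relevant line in /etc/sysconfig/nfs:

    NFS4_SUPPORT="yes"

After changing the variable, restart the NFS server with systemctl restart nfsserver.service for the setting to take effect.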

NFS Server Commands

  • exportfs
* Used to apply changes to exported file systems without restarting NFS daemons
* For example, to export the /software directory to all hosts in the network 192.168.0.0/24, enter the following:
exportfs -o ro,root_squash,sync 192.168.0.0/24:/software
* To revert to the /etc/exports configuration, enter the command
exportfs -r
* The directories that are currently exported are listed in the /var/lib/nfs/etab file
  • showmount -e hostname
* displays mount/export information for a remote host
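
To verify which exports are currently active after applying changes, exportfs can also be run without arguments, or with -v for verbose output:

    server1:~ # exportfs -v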

Configure an NFSv4 Server with YaST

  • YaST > Network Services > NFS Server
  • Start On Boot
  • Enable NFSv4
  • Enter NFSv4 domain name
  • (optional) Enter GSS Security
  • > Next
  • Click Add Directory > /export > OK
  • Add Host > Host Wildcard - * ; Export Options - ro, root_squash, sync, no_subtree_check
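
With those settings, YaST writes an entry to /etc/exports along these lines (a sketch, assuming the directory and options chosen above):

    /export *(ro,root_squash,sync,no_subtree_check)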

Configure an NFS Client

  • From the command line
  • Using YaST

NFS Client: Command line

  • You can import a directory manually from an NFS server by using the mount command.
  • The only prerequisite is a running RPC port mapper, which you can start by entering the command
rcrpcbind start
  • Use the mount option -t to indicate the file system type.
  • In the following example, the file system type NFSv4 is specified:
mount -t nfs4 -o options host:/directory /mountpt
  • Use the umount command to unmount an imported file system.
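
Putting the steps together, a manual import and removal might look like this (hypothetical server and export names):

    server2:~ # rcrpcbind start
    server2:~ # mount -t nfs4 server1:/home /mnt
    server2:~ # umount /mnt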

Mount NFS Directories Automatically

  • To mount directories automatically when booting, you need to make corresponding entries in the /etc/fstab file.
  • When the system is booted, the start script /etc/init.d/nfs, using systemd's SysV compatibility, loads the /etc/fstab file.
  • The following is an example of an entry for an NFS mount point in the /etc/fstab file:
server1:/data/ISOs /import/ISOs nfs4 defaults 0 0

In the above /etc/fstab entry, the first value indicates the hostname of the NFS server (server1) and the directory it exports (/data/ISOs/).

The second value indicates the mount point, which is the directory in the local file system where the exported directory should be attached (/import/ISOs/).

The third value indicates the file system type (nfs4).

The fourth value consists of comma-separated values following the file system type and can be used to provide NFS-specific and general mounting options. In this case, the default values are used.

At the end of the line, there are two numbers (0 0). The first indicates whether to back up the file system with the help of dump (1) or not (0). The second number configures whether the file system check is disabled (0), done on this file system with no parallel checks (1), or parallelized when multiple disks are available on the computer (2). In the example, the system does neither, as both options are set to 0.
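
If values other than the defaults are needed, they are listed comma-separated in the fourth field. A hypothetical entry that mounts read-only and uses the standard soft option would look like this:

    server1:/data/ISOs /import/ISOs nfs4 ro,soft 0 0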

  • After modifying an entry of a currently mounted file system in the /etc/fstab file, you can have the system read the changes by entering
mount -o remount /mountpoint
  • To mount all file systems that are not currently mounted and do not contain the noauto option, enter:
mount -a

Configure an NFS Client with YaST

  • YaST > Network Services > NFS Client
  • NFS Settings > Enable NFSv4 > Enter NFS Domain Name > OK
  • NFS Shares > Add > NFS Server ; Remote Directory ; Mount point ; Options > OK
  • Check /etc/fstab

Monitor the NFS System

  • Information about rpcbind (portmapper): rpcinfo -p

Attach:nfs3.png

The NFS server daemon registers itself to the rpcbind (portmapper) with the name nfs.

The NFS mount daemon uses the name mountd.

  • Information on the exported directories of an NFS server:
showmount
  • To display the exported directories on the machine server1, use the command
showmount -e server1
  • The option -a shows which computers have mounted which directories.
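
For example, querying the server used above with both options:

    server2:~ # showmount -e server1
    server2:~ # showmount -a server1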
