Time to Use Direct NFS UIDs

I’ve been trying to run an NFS server for home mounts using the built-in RPC username-syncing functionality. But this system doesn’t work very well in the setup I have: new users are created on the fly, and folder access gets messed up all the damn time.

Basically, the filesystem ownership gets messed up. I have tried searching for all sorts of help, but I can’t find anything about this issue, so I can only assume I’m using NFS in some special way that wasn’t intended.

So, since my server usernames and UIDs match up with the client usernames and UIDs, I should be able to switch off the RPC mapping, use direct UIDs, and get away with it.
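To make the idea concrete, here’s roughly what I mean: export with plain sys auth and mount with NFSv3, so the numeric UIDs pass straight through with no username mapping. The hostname and subnet are made up, and it only works if the UIDs and GIDs genuinely match on both ends.

    # Sketch only; "server.example.lan" and the subnet are invented,
    # and it assumes UIDs/GIDs really do match on server and clients.

    # On the server, a plain AUTH_SYS export in /etc/exports:
    #   /home  192.168.1.0/24(rw,sync,no_subtree_check)
    sudo exportfs -ra

    # On each client, an NFSv3 mount sends numeric UIDs straight through:
    sudo mount -t nfs -o vers=3,rw,hard server.example.lan:/home /home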

Thoughts? Advice?

Client Configuration Experiments

Because I’m not a very good systems administrator but a programmer, when life gives me a problem I see opportunities to create programs. If I were a better sysadmin I would already be familiar with tools that could solve this problem. So I’m giving any competent system administrators prior warning: what you read here might make you faint. I’m very likely reinventing several wheels, and possibly even fire; but it came out so well I wanted to share it with everyone.

The problem at the SETC is how to manage a set of client machines so that their configurations don’t drift apart from each other. The reason they’re likely to drift is that certain students have access to install programs and administer the machines as part of their classes. We’d obviously encourage the use of VMs or some other system, but these machines aren’t powerful enough. For the past few months we’ve been using a system of selected files made available via HTTP, with a sync script that pulls them down off the server and drops them into /etc. It worked quite well for selective configurations and for making sure certain configs were always the same.
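The pull script was nothing fancy; it did something along these lines, though the server name and manifest layout here are purely illustrative and not the real thing:

    #!/bin/bash
    # Illustrative only, not the real sync script. Assumes the server
    # publishes a plain-text manifest listing the config paths to sync.
    SERVER="http://config-server.lan/etc-sync"

    wget -q -O /tmp/etc-manifest.txt "$SERVER/manifest.txt" || exit 1
    while read -r path; do
        # Fetch each published file and drop it into place under /etc.
        mkdir -p "/etc/$(dirname "$path")"
        wget -q -O "/etc/$path" "$SERVER/files/$path"
    done < /tmp/etc-manifest.txt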

This included a configuration file called /etc/apt/package-sync, which is used by another script (update-packages.sh) to make damn well sure the machines all have the same things installed (basically set-selections and lots of force options).
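The gist of update-packages.sh, minus a few of the force options, assuming package-sync is in the usual dpkg selections format:

    #!/bin/bash
    # The gist of update-packages.sh; exact force options differ.
    # Assumes /etc/apt/package-sync holds one "packagename install" line
    # per package, i.e. dpkg selections format.
    apt-get update -q
    dpkg --set-selections < /etc/apt/package-sync
    # dselect-upgrade installs or removes whatever is needed to match the list.
    apt-get -y --force-yes dselect-upgrade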

So this obviously isn’t very secure, or safe (no SSL on the local network), or even workable for crufty configs. To improve the situation, Scott, Tim and I set about replacing HTTP with NFS at our regular Wednesday advocacy event (it could just as easily be SMB, but NFS is easier to set up for experiments). “Global configuration” is a copy of all the config files in /etc which we want to be the same on all machines; “local configuration” is any remaining config files in /etc which are specific to an individual client hostname.

The way this new system works is that it mounts a network file system via automount at /mnt/etc/, which contains a copy of the global configuration plus a local configuration for each machine, each named after the hostname it applies to [1]. Using rsync you then merge the global configuration into the client’s /etc directory, using the local configuration’s contents as an exceptions list; this is so you don’t delete the hostname and various other files during the first syncing step. The second step uses rsync without the delete option to sync any changes in the local configuration (see sync-etc.sh) [2].
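Here’s a sketch of those two steps. It isn’t the real sync-etc.sh, and it assumes the global copy lives in a “global” subdirectory of the mount:

    #!/bin/bash
    # Sketch reconstructed from the description above, not the real sync-etc.sh.
    HOST="$(hostname)"
    GLOBAL="/mnt/etc/global/"
    LOCAL="/mnt/etc/$HOST/"

    # No local config directory for this hostname => machine isn't managed.
    [ -d "$LOCAL" ] || exit 0

    # Build the exceptions list from the local config's contents, so the
    # deleting pass below doesn't wipe hostname, interfaces and friends.
    # The leading "/" anchors each pattern to the root of the transfer.
    EXCLUDES="$(mktemp)"
    ( cd "$LOCAL" && find . -type f | sed 's|^\.||' ) > "$EXCLUDES"

    # Step one: make /etc match the global configuration, excluding local files.
    rsync -a --delete --exclude-from="$EXCLUDES" "$GLOBAL" /etc/

    # Step two: layer this host's local configuration on top, without deleting.
    rsync -a "$LOCAL" /etc/

    rm -f "$EXCLUDES"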

To add a new client machine to this system we simply use a bootstrapping deb package made available in the local apt repository; the kickseed configuration for the install process includes this package. The bootstrapping deb contains not only the syncing scripts but also the NFS automount configuration that points at the right server. When it does its sync, if the client can’t find a local configuration directory matching its hostname, it just assumes it’s not being managed [3].
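The automount part of the bootstrap package amounts to something like this; the server name and export path are invented for the example:

    # Roughly what the bootstrap deb drops in place (invented names).

    # autofs map: mount the server's shared config export at /mnt/etc
    echo 'etc  -fstype=nfs,ro  config-server.lan:/srv/etc-sync' > /etc/auto.etc-sync
    echo '/mnt  /etc/auto.etc-sync  --timeout=60' >> /etc/auto.master
    /etc/init.d/autofs restart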

Now this did present something interesting to me: with some simple scripting and a way to log into a special client machine with write access, I could use a desktop computer to add new things or change settings using any graphical tool, see how they work, and then commit them back to either the global or the local configuration on the server. That would then propagate globals to all other machines syncing from that mount. It’s also possible to make changes specific to one machine, since the differences would all go into the local configuration for that hostname.
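A commit helper for that special client could be as simple as this sketch; the script name and usage are made up, and it assumes that machine’s automount map uses rw rather than ro:

    #!/bin/bash
    # Sketch of a commit helper for the special read-write client.
    # Usage: commit-etc.sh global|local <path under /etc>
    SCOPE="$1"; FILE="$2"
    HOST="$(hostname)"

    case "$SCOPE" in
        global) DEST="/mnt/etc/global/$FILE" ;;
        local)  DEST="/mnt/etc/$HOST/$FILE" ;;
        *) echo "usage: $0 global|local <path under /etc>" >&2; exit 1 ;;
    esac

    mkdir -p "$(dirname "$DEST")"
    # Push the tweaked file back to the server share; other clients pick it
    # up on their next sync run.
    rsync -a "/etc/$FILE" "$DEST"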

I’m trying to keep this post short, so I’ve been brief [4]. If you’re interested in the details, or in telling me where I’m going horribly wrong, please do comment at the bottom there. I love hearing from all the intellects of the community about how they solve these problems.

[1] Note that this means changing the hostname file will redirect the client to use a different set of local configs.
[2] This does mean you can have a host with different configurations, and even have server-side management of them.
[3] Because the system is managing the configs, moving from one server to another is trivial, but this makes the setup rather fragile; moving to Avahi would help make the system more resilient to failure.
[4] If it continues to work well, and barring any logical fallacies pointed out by commenters, I could package this up too.