1. grayli+(OP) 2013-06-26 12:12:04
I've historically been of the opinion that #!/bin/name is preferable. Then I hit a workplace with network mappings and mixed-architecture servers. It makes me appreciate Linux's default mapping, because once you start going off on your own it gets nasty.

Some machines only have Python 2.4 and others 2.7, so /usr/local/python is not a good answer for Python scripts in /project/x/bin (network mapped). Worse, someone put GNU coreutils in /project/x/bin, but built for SPARC, so PATH becomes touchy if you're on an x86 server.

I've resorted to having my .bashrc build my PATH by checking uname for architecture/platform and conditionally adding entries for these network folders.
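
Something like this minimal sketch, where the uname values and network paths are made up for illustration:

    # in ~/.bashrc: pick PATH entries for the network mounts by platform
    case "$(uname -s)-$(uname -m)" in
        Linux-x86_64)
            PATH="/project/x/bin/linux-x86_64:$PATH"
            ;;
        SunOS-sun4*)
            PATH="/project/x/bin/sparc:$PATH"
            ;;
    esac
    export PATH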

All organization scripts on network drives now carry caveats like "This only works on Linux x86 servers" or "This only works on server X". Not because they can't work elsewhere, but because of the PATH and #! management problems when the dependencies are installed in different places on different servers.

Blegh </rant>

replies(1): >>chalst+f8
2. chalst+f8 2013-06-26 13:54:09
>>grayli+(OP)
You could create a script /usr/bin/dispatch that contains

    #!/bin/sh
    # Look up $1 on this machine's preferred PATH, then run it
    # with the remaining arguments.
    file=`env PATH="$THIS_ARCHITECTURES_PREFERRED_PATH" /usr/bin/which "$1"`
    shift; exec "$file" "$@"
for each machine to use in your scripts. Tuning the path becomes a once-per-architecture task rather than a once-per-script one.
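
For example, assuming dispatch is installed at the same place on every machine (the script name and arguments here are made up):

    # run a script under whichever python this machine's PATH prefers
    $ dispatch python /project/x/bin/report.py --verbose

On systems that allow a script as a shebang interpreter, a script could even start with "#!/usr/bin/dispatch python" for the same effect.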