Running Shell Commands
One of the rulebook's main goals is to make it extremely easy to call out to shell commands.
Shell commands may be executed either locally (on the Clarive server) or remotely (i.e. on a remote node). You will need to decide where to execute your commands.
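For example, the same command can target either location. Here is a minimal sketch using options explained later in this document (the `worker:` option and worker registration are covered in the remote section below):

```yaml
do:
  # local: runs in a Docker container on the Clarive server
  - shell: ls -la /clarive

  # remote: runs on a registered Clarive Worker (covered below)
  - shell:
      worker: myworkerid
      cmd: ls -la /tmp/
```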
Running Commands Locally
Local commands run in a shell on the Clarive server, always inside a Docker container.
```yaml
do:
  # This is the long way of executing a shell command:
  - shell:
      cmd: ls
      args:
        - "-l"
        - "-a"
```
Which, luckily, can be shortened to something along these lines:
```yaml
do:
  - shell: ls -la   # oh, this is much better
```
If you'd prefer to separate your args, there's also a concise version:
```yaml
do:
  - shell:
      - apt
      - install
      - build-essential
      - openssl
  # which becomes "apt install build-essential openssl" in the shell
```
Or in concise array notation:
```yaml
do:
  - shell: [ 'apt', 'install', 'build-essential', 'openssl' ]
  # same, it becomes "apt install build-essential openssl" in the shell
```
Or better yet, using the direct notation. Unfortunately the direct notation does not support an array argument (`args:`), but it is definitely the way to go.
```yaml
do:
  - ls -lart     # yes, this is much better
  - "ls -lart"   # ditto
```
If you need to run multiple commands at once, you can separate them with newlines:
```yaml
do:
  - shell:
      cmd: |
        echo hello >> /clarive/myfile
        echo world >> /clarive/myfile
        cat /clarive/myfile

  # or like this:
  - shell: |
      echo hello >> /clarive/myfile
      echo world >> /clarive/myfile
      cat /clarive/myfile

  # or yet:
  - |
    echo hello >> /clarive/myfile
    echo world >> /clarive/myfile
    cat /clarive/myfile
```
Assigning the output to a variable
Now, the best feature of rulebook shell commands is that you can assign command results to variables. Prefix the `shell:` command with the variable you want to use:
```yaml
do:
  - result = shell: ls -lart /clarive
  - echo: "exit code was ${result.rc}, output: ${result.output}"
```
Commands always return a hash object with two subkeys:
```
{
  rc: 0,         # the exit code of the command
  output: '...'  # the captured output, combining stderr and stdout
}
```
You can assign a multiline command sequence just the same, but you have to use `shell:`:
```yaml
do:
  - ret = shell: |
      echo hello >> /clarive/myfile
      echo world >> /clarive/myfile
      cat /clarive/myfile
  - echo: "OUTPUT = {{ ret.output }}"
  - echo: "EXIT CODE OF LAST COMMAND = {{ ret.rc }}"
```
Note

Failed shell commands (non-zero exit code) that are not assigned to variables will throw errors that interrupt your rulebook. Therefore, assigning commands to variables is a good way to trap errors and process them yourself.
Here's an example of a captured variable combined with error control:
```yaml
do:
  # capturing and error control combined
  - ret = shell: |
      ls -lart /clarive/
      ls -lart /clarive/this_folder_not_here
  - echo: "OUTPUT = {{ ret.output }}"
  - if: "{{ ret.rc }}"
    then:
      # use log: for colored output in the job log viewer
      - log:
          msg: "We failed with rc={{ ret.rc }}"
          level: error
    else:
      - echo: "Everything okay"
```
That will produce the following output:
```
ls: cannot access '/clarive/this_folder_not_here': No such file or directory
We failed with rc=2
```
Piping commands
Sometimes you may want to pipe several shell commands together, redirecting output from one command to the next. This is quite trivial, as it's managed by the shell runner (a C `open3()` function):
```yaml
do:
  - cat /etc/hosts | wc -l
```
If you want to pipe the output back into your rulebook, you should set up a stdout or stderr callback. The shell process will invoke the callback with chunks of output.
```yaml
do:
  - shell:
      cmd: find /
      pipe_out:
        - echo: "Got a stdout chunk here: ${chunk}"
```
Or process it with ClaJS directly:
```yaml
do:
  - shell:
      cmd: find /
      pipe_out: |
        print( "Got it here now: " + cla.stash('chunk') );
```
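The examples above hook into stdout. For stderr, the sketch below assumes the callback option mirrors `pipe_out:` under the name `pipe_err:` — that name is an assumption, so check your Clarive version's reference before relying on it:

```yaml
do:
  - shell:
      cmd: find /    # find / typically prints permission errors to stderr
      pipe_err:      # assumed option name, mirroring pipe_out:
        - echo: "Got a stderr chunk here: ${chunk}"
```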
Selecting a Docker Image
Commands that run on the server always run in a Docker container. By default, commands run on the `clarive` default image, but you can easily control which container image is used with the `image:` op.
```yaml
do:
  # download and install a python image from Docker Hub
  # (if you don't already have it installed)
  - image: python
  - python --version
```
You can alternate between images as you go:
```yaml
do:
  # python first
  - image: python
  - python --version

  # then ruby
  - image: ruby
  - ruby --version
  - gem install json
```
Or run them within an `image:` block with `do:`, which groups them nicely:
```yaml
do:
  # watch out for correctly indenting image and do
  - image: ruby
  - do:
      - gem install json
      - gem install mongo
```
Important

Each command that runs in a Docker image will `docker run -it` every time. Use the multi-line commands shown above to run them in the same container run session.
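For example, here is a minimal sketch that combines the `image:` op with a single multi-line `shell:` block, so that related commands share one container run:

```yaml
do:
  - image: ruby
  # one shell: block = one docker run, so installed gems
  # and the working directory persist between these commands
  - shell: |
      gem install json
      ruby -e 'require "json"; puts JSON.generate(ok: true)'
```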
Running commands remotely
You can run shell commands on remote servers by using the `host:` or `worker:` options of the `shell:` op. But first you need to have the remote server connected to the Clarive server by using one of the following methods:
- Clarive Worker - probably the easiest way: just download a worker on the remote machine and register it with your server.
- SSH - the Clarive server needs to have its public key in the destination server's `authorized_keys` for the users you intend to connect with.
- Clarive ClaX Agent - the ClaX agent needs to be running as an InetD process or as an independent agent listening on a given port.
This document covers only the Clarive Worker, as the SSH and ClaX agent methods are better suited for Clarive rules than for rulebooks.
Running a command in the Clarive Worker
Before running any commands, make sure the Clarive Worker is registered and started. Let's assume your worker was registered with the id `myworkerid`:
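If you haven't registered the worker yet, the registration step might look something like this. This is a sketch only: the exact `register` subcommand and flags depend on your cla-worker version, so check the worker documentation for the real command line:

```
cla-worker register --id myworkerid
```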
```
cla-worker run --id myworkerid --token [the token given to you by the register command]
```
To run a shell command with the Clarive Worker:
```yaml
do:
  - ret = shell:
      worker: myworkerid
      cmd: ls -lart /tmp/
  - echo: "OUTPUT = {{ ret.output }}"
  - echo: "EXIT CODE OF LAST COMMAND = {{ ret.rc }}"
```
If you just want to pick any available worker for your command, you can use the asterisk:
```yaml
do:
  - ret = shell:
      worker: '*'   # just the first available worker
      cmd: ls -lart /tmp/
```
If you set tags to identify your worker, use them to find the first available worker that can handle a given tag or set of tags:
```
cla-worker run --id myworkerid --token [...] --tags node,npm
```
Then run the command intended for the tagged workers:
```yaml
do:
  - ret = shell:
      worker: { tags: ['npm','node'] }
      cmd: |
        cd /opt/build/myproject
        npm run tests
```
That's it! Make sure to also read how to send and retrieve files to and from the worker in the worker documentation.