
KubeOps-PIA v0.1.0 #

How to use PIA #

This guide explains how to use PIA with detailed instructions and examples.
If you haven’t installed PIA yet, refer to the Installation Guide for PIA.

Before using PIA, make sure that the following requirements are met:

  • the node where PIA is running can execute any kubectl commands
  • the node where PIA is running has the tar binary installed
  • the default Kubernetes labels kubernetes.io/ are set
  • the label kubernative-net/pia is not used

The wget binary has to be installed on every Kubernetes node.*

# How to install wget on CentOS
yum install -y wget

*wget does not need to be installed if the default image busybox is used. For more information see the chapter “YAML file syntax for PIA”.

How to use PIA? #

In order to use PIA

  1. Create a .yaml file in the standard PIA syntax and provide the necessary information for the task to be executed.
  2. Execute the file with the command pia run, as shown below.
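
For example, assuming your configuration is saved as conf.yaml in the current directory (the file name is just an example):

# 1. write conf.yaml following the chapter "YAML file syntax for PIA"
# 2. execute it
pia run -f conf.yaml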

PIA Commands #

General Commands #

Overview of all PIA commands

pia
  PIA - Plugin-based Infrastructure Administrator

Usage:
  pia [options] [command]

Options:
  --version       Show version information
  -?, -h, --help  Show help and usage information

Commands:
  run <file path>         Run PIA based on properties of a yaml file
  version                 Basic info about PIA
  delete <resource type>  Delete resources in the cluster

Command ‘pia run’ #

The pia run command executes the provided yaml file. You can create this yaml file by following a standard syntax. Refer to the chapter YAML file syntax for PIA for more details.

The flag ‘-f’ is required to execute this command.

pia run -f conf.yaml

Command ‘pia version’ #

The pia version command shows you the current version of PIA.

[root@admin ~]# pia version

Command ‘pia delete’ #

The pia delete command deletes resources from your Kubernetes cluster.
The flag ‘-r’ is required to execute this command.

# delete httpd pod
pia delete -r httpd

This command is currently limited to the resource ‘httpd’ and therefore only accepts it as an argument.

YAML file syntax for PIA #

You need to create a .yaml file as shown in the following example. It contains all the necessary information that enables PIA to execute the tasks successfully.

Below is an example showing the structure of the file ‘conf.yaml’:

apiVersion: kubernative/pia/v1
spec:
  affinity:
    schedule: role/worker, node/master1
    limit: 5
  runtime:
    # image: busybox
    runOnHost: true
    hostNetwork: true
    # hostDirs:
    # - /etc/docker
  run:
    artefacts:
      - src: /etc/docker/daemon.json
        dest: /etc/docker/
    tasks:
      - cmd:
        - "systemctl restart docker"

apiVersion (Mandatory) #

Version string which defines the format of the API Object.
The only currently supported version is:

kubernative/pia/v1

The rest of the syntax is mainly divided into 3 sections:

  • affinity
  • runtime
  • run

affinity area #

affinity:
  schedule: role/worker, node/master1   # PIA deploys the DaemonSet on all worker nodes and on the node "master1"
  limit: 5                              # PIA takes the first 5 nodes and executes the given tasks on them. After that PIA repeats with the next 5 nodes.

schedule (Optional) #

Defines the nodes to be used.
You can select the value based on your requirements.

Value         Description
role/worker   for all worker nodes
role/master   for all master nodes
node/<name>   for a specific node (e.g. node/master1)
label/foo     for all nodes with the label foo
allNodes      for all nodes (default)
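
Selectors can be combined in a comma-separated list. A minimal sketch targeting all worker nodes plus every node carrying the label foo, assuming selector kinds can be mixed freely as the examples in this guide suggest:

affinity:
  schedule: role/worker, label/foo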

limit (Optional) #

Describes how many nodes should be used at once.
This value can be any positive integer. The default value is 2.

For example, if you have 20 nodes in your Kubernetes cluster and set the limit to 3, PIA will take the first 3 nodes, execute the given tasks on them, and then take the next 3, repeating the process until all 20 nodes have been processed.
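
Expressed in the affinity area, that scenario looks like this:

affinity:
  schedule: allNodes
  limit: 3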

runtime area #

runtime:
  # image: myOwnImage   # Default value `busybox:latest` will be used because the key `image` is commented out
  runOnHost: true       # PIA DaemonSet runs all tasks directly on the file system of the underlying node
  hostNetwork: true     # PIA DaemonSet will use the same network namespace of the underlying node
  hostDirs:             # Ignored: 'runOnHost' is 'true', so PIA already has access to the whole file system of the underlying node and 'hostDirs' has no effect. It is the same as commenting it out.
    - /etc/docker

image (Optional) #

Specification of the image used by the DaemonSet.
This value can be any valid Docker image.
Default: busybox:latest

Note: Self-created images must have the wget and tar binaries installed, otherwise errors occur when using artefacts.
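
A minimal sketch of a runtime area with a self-created image (the image name registry.example.com/my-tools:1.0 is a placeholder):

runtime:
  image: registry.example.com/my-tools:1.0   # must contain the wget and tar binaries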

runOnHost (Optional) #

Describes whether PIA should run in the container or directly on the host system.
This value can be set to

  • false : the file system of the container will be used
  • true : the file system of the underlying Kubernetes node will be used

Default: false

runOnHost is equivalent to hostPID in the pod security policies from Kubernetes.
Important: runOnHost: true will grant access to the whole file system of the underlying node.

hostNetwork (Optional) #

Describes whether PIA should use the same network namespace as the underlying host or not.
This value can be set to

  • false : the same network namespace will not be used.
  • true : the same network namespace will be used.

Default: false
Equivalent to hostNetwork in the pod security policies from Kubernetes.

hostDirs (Optional) #

Defines a list of directories to be mounted.
The value should be one or more valid paths to the directories to be mounted.

Important: hostDirs is dependent on runOnHost. If runOnHost is set to

  • false: the listed directories must be mounted so that they are accessible inside the container (see the sketch below).
  • true: there is no need to mount directories, as access to the whole file system of the underlying node is already provided.
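
A minimal sketch for the runOnHost: false case, assuming a mounted directory is exposed inside the container under the same path, as the examples in this guide suggest:

runtime:
  runOnHost: false    # the container file system is used
  hostDirs:
    - /etc/docker     # mounted so that tasks inside the container can access it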

run area #

run:
  artefacts:                          # Copies daemon.json from the admin node and drops the file into /etc/docker on the nodes that are specified in the affinity area
    - src: /etc/docker/daemon.json
      dest: /etc/docker/
  tasks:
    - cmd:                            # Executes the following commands on the nodes that are specified in the affinity area
      - "systemctl restart docker"

artefacts (Optional) #

Defines files which need to be transferred to the other nodes.
This value consists of

  • src : the source path of the file to be transferred.
  • dest: the destination path. It defines the directory path to which the file should be transferred.
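
A sketch of an artefacts list with two files (the second file and its destination directory are hypothetical placeholders):

run:
  artefacts:
    - src: /etc/docker/daemon.json
      dest: /etc/docker/
    - src: /root/myConfig.conf   # hypothetical example file
      dest: /etc/myApp/          # hypothetical destination directory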

tasks (Optional) #

Describes the list of tasks that should be executed on the scheduled nodes.
You can add the commands to be executed on each node. This input can have multiple keywords.
Currently only cmd (command) is supported.
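
A sketch of a tasks list with several commands, assuming cmd accepts multiple entries as its list form suggests (the commands themselves are just examples):

run:
  tasks:
    - cmd:
      - "systemctl daemon-reload"
      - "systemctl restart docker"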

Examples #

Install SINA on all nodes (CentOS) #

In the following example, we are going to see how to use PIA to install sina.rpm (a local file) on all the nodes of your Kubernetes cluster, including the admin node.

Below is the .yaml file created for the task.

apiVersion: kubernative/pia/v1
spec:
  affinity:
    schedule: allNodes
    limit: 5
  runtime:
    runOnHost: true
    hostNetwork: true
  run:
    artefacts:
      - src: /myRpmsFolder/sina-2.3.0_beta6-0.el7.x86_64.rpm
        dest: /root/
    tasks:
      - cmd:
        - "yum install -y /root/sina-2.3.0_beta6-0.el7.x86_64.rpm"

Add a custom config file to selected nodes #

The following is an example YAML that can be used to save a custom config file to one specific node (e.g. master3) and to all nodes with the label ‘foo’.

apiVersion: kubernative/pia/v1
spec:
  affinity:
    schedule: node/master3, label/foo
    limit: 2
  runtime:
    hostDirs:
      - /path/to/my/mounted/folder/        # can also be a parent directory e.g. '/etc' instead of '/etc/crio'
  run:
    artefacts:
      - src: /path/to/myConfig.conf
        dest: /path/to/my/mounted/folder/  # absolute path

Further notes #

  • if you want to mount the entire file system, you have to set runOnHost: true
  • when using artefacts, make sure that sufficient file permissions (chmod 777) are set; see the example below
  • logs are saved in the directory “/tmp/piaData/logs” and are deleted at each restart of PIA
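
For example, to make an artefact world-readable and writable before running PIA (the path is taken from the run area example above):

# on the admin node, before 'pia run'
chmod 777 /etc/docker/daemon.json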

Known Issues #

See the Known Issues page.