Welcome to the online documentation of Rainbow Project and Rainbow Platform!

Introduction to Cloud Computing, Containers, Microservices and DevOps

Cloud Computing has reached virtually all areas of society and its impact on service development, production, provision and consumption is manifold and far-reaching. It lowers innovation barriers and thereby impacts industry, small and large businesses, governments and society, and offers significant benefits for everyone.

According to Gartner Inc., while cloud computing has been an application deployment and infrastructure management paradigm for many years now, the cloud market is still expanding, reaching an impressive $200bn milestone projection for 2016 with an increasing growth rate of 16%. In this digital economy, Small and Medium Enterprises (SMEs) and today’s startups are migrating core services and products of their business to the cloud. Recent studies show that in 2015 more than 37% of SMEs had embraced the cloud to run parts of their business, while projections show that by 2020 this number will grow to 80%. However, properly preparing for tomorrow’s cloud challenges is crucial if one wants to unleash the full potential of the technology.

Below is a set of resources in the form of dissemination and scientific articles, implementation examples, blog entries, videos, tutorials and courses, at different levels of difficulty. The purpose of collecting this information is to give potential users of the Rainbow platform quick access to useful, high-quality resources on related topics, namely: general information on Cloud Computing, the challenges one must face when deciding to adopt this technology and, finally, aspects related to agile software development processes for Cloud Computing.

Moreover, you can read some useful information about analytic services, decision making, auto-scaling and monitoring using the RAINBOW platform.

What is Rainbow?

A framework that allows the design and deployment of secure and elastic by design cloud applications and services.

Rainbow Technology Stack

Developed on top of popular open-source frameworks, including Kubernetes, Docker and CoreOS, to support multi-cloud application runtime management.

Why Rainbow?

  • All Rainbow apps are packaged and run by the Docker Runtime Engine, which creates and isolates the containerized execution environment.
    • Docker Runtime Engine and Docker Compose are sufficient for small deployments, but are limited to a single host.

  • Kubernetes can orchestrate large-scale distributed containerized deployments spanning multiple hosts.
    • However, Kubernetes has limitations with regard to the provisioning and deprovisioning of infrastructure resources and to auto-scaling.

    • Kubernetes also cannot support cross-cloud deployments.

  • The underlying containerized environment is based on CoreOS, which enables fast boot times and a secure out-of-the-box Docker runtime.
    • It is enhanced by a security service that filters network traffic and applies privacy-preserving rules.

  • The Rainbow Smart Orchestrator is suitable for Highly Available (HA) host management.
    • It taps into the auto-scaling offered by cloud providers to estimate and assess application elasticity behavior and scaling effects.

    • Low-cost, self-adaptive monitoring reduces network traffic propagation.

  • The Rainbow Smart Orchestrator enables deployments across multiple cloud sites.
    • A cross-cloud network overlay is provided.

  • Compatibility with Docker Compose is preserved, as an extension of Docker Compose is used to describe, configure and deploy multi-container applications using YAML syntax.
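As a minimal illustration of the Compose-style description mentioned above, a two-service application could be declared as follows. This is a sketch in plain Compose v3 syntax; the image names are placeholders and any RAINBOW-specific extensions are omitted:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine            # front-end container (placeholder image)
    ports:
      - "80:80"                    # exposed interface
    depends_on:
      - api                        # required interface towards the api service
  api:
    image: registry.example.com/api:latest  # placeholder image in a custom registry
```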

Online Documentation Contents

Supported OS

Servers:

We aim to support every distribution, but currently the scripts work only for Debian and Ubuntu. If this is a hard limitation for you, please contact us so that we can extend the scripts to other distributions.

RPI:

We support only 64-bit Ubuntu-based distributions, due to a limitation of k8s working with Calico as the network plugin.

Nvidia jetson/xavier/etc:

Kernel recompilation is essential to enable the required kernel flags.

CJDNS enablement for Xavier:

Instructions can be found under https://gitlab.com/rainbow-project1/rainbow-installation/-/tree/main#cjdns-enablement-for-xavier

Setup

Here you can find instructions for the installation of Rainbow Platform. This page references the repository under https://gitlab.com/rainbow-project1/rainbow-installation

Requirements

For the advanced platform package, which includes everything, the minimum requirements are:

Component           #vCPUs   RAM (GB)
-----------------   ------   --------
k8s master          4        10
k8s worker server   2        8
k8s rpi worker      2        8
k8s jetson worker   2        8
Rainbow Dashboard   2        8

For the core platform package, which does not include complex analytics, the RAM of the workers and the dashboard can be dropped to 4GB:

Component           #vCPUs   RAM (GB)
-----------------   ------   --------
k8s master          4        10
k8s worker server   2        4
k8s rpi worker      2        4
k8s jetson worker   2        4
Rainbow Dashboard   2        4

Master Setup

For the master setup you will need to follow the steps below:

  1. Clone the repository.

  2. Go to *installation-scripts* directory and give execution rights to all scripts.

  3. Update the following lines of the *rainbow-v2-master.sh* script with the appropriate values.

docker_server="<DOCKER_SERVER>"
docker_username="<DOCKER_USERNAME>"
docker_password="<DOCKER_PASSWORD>"
docker_email="<DOCKER_EMAIL>"
  4. Execute the *rainbow-v2-master.sh* script.

  5. After execution, the script prints out the Kubernetes and cjdns credentials. These credentials are necessary for the next steps, the workers setup and the dashboard setup.
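The credential placeholders can also be filled in non-interactively with sed. The snippet below is only a sketch: it creates a stub file containing just the four credential lines (standing in for the real *rainbow-v2-master.sh*) and substitutes example values, which you would replace with your own registry credentials:

```shell
# Create a stub with the four credential lines (stand-in for rainbow-v2-master.sh)
cat > rainbow-v2-master.sh <<'EOF'
docker_server="<DOCKER_SERVER>"
docker_username="<DOCKER_USERNAME>"
docker_password="<DOCKER_PASSWORD>"
docker_email="<DOCKER_EMAIL>"
EOF

# Substitute the placeholders in place (example values, GNU sed -i)
sed -i \
  -e 's|<DOCKER_SERVER>|registry.example.com|' \
  -e 's|<DOCKER_USERNAME>|alice|' \
  -e 's|<DOCKER_PASSWORD>|changeme|' \
  -e 's|<DOCKER_EMAIL>|alice@example.com|' \
  rainbow-v2-master.sh

grep '^docker_server' rainbow-v2-master.sh
```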

Workers Setup

For the workers setup you will need to follow the steps below:

  1. Clone the repository.

  2. Go to *installation-scripts* directory and give execution rights to all scripts.

  3. Update the following lines of the *rainbow-v2-worker.sh* script with the appropriate values. serverIP is the master’s IP; the other values are the cjdns credentials printed during the Master Setup step.

serverIP="<SERVER_IP>"
port="<SERVER_PORT>"
password="<PASSWORD>"
publicKey="<PUBLICKEY>"
serverHostname="<HOSTNAME>"
serverIPv6="<IPv6>"
  4. Execute the *rainbow-v2-worker.sh* script.

Special Case

If there are NVIDIA Xavier devices in the cluster, go to the *xavier-device* directory and follow the instructions there.

Analytic Stack Setup

This page references the repository under https://gitlab.com/rainbow-project1/rainbow-installation

Data Storage and Analytic Services on Master

For the data storage and processing stack, users must initially execute a docker-compose file on the master that includes all necessary services.

Navigate to the repository and go to the *analytic-stack/master* directory where the docker-compose.yaml file is provided.

Next we provide details about the parameters of the docker-compose file. To provide these details, edit the *.env* file inside the directory.

NODE_IPV6="..." # The node's IPv6 address
NODE_IPV4="..." # The node's IPv4 address
PROVIDER_HOSTS="..." # The IPs of the nodes from which the system will retrieve data (all nodes' IPs)
NODE_HOSTNAME="..." # The hostname or IP of the node
STORM_NIMBUS_CONFIG_FILE="..." # The path of the Storm Nimbus configuration file
STORAGE_PLACEMENT="..." # Enables or disables the placement algorithm of the storage. Default is False.
STORAGE_DATA_FOLDER="..." # The folder where the storage data will be kept
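For illustration, a filled-in *.env* for a hypothetical two-node cluster could look like the following. All values are placeholders, and the list separator for PROVIDER_HOSTS as well as the configuration file path are assumptions, not prescribed values:

```
NODE_IPV6="fc00::1"
NODE_IPV4="192.168.1.10"
PROVIDER_HOSTS="192.168.1.10,192.168.1.11"
NODE_HOSTNAME="192.168.1.10"
STORM_NIMBUS_CONFIG_FILE="./storm/storm.yaml"
STORAGE_PLACEMENT="False"
STORAGE_DATA_FOLDER="/opt/rainbow/storage-data"
```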

Storm Nimbus and Storm UI Configurations

Generally, the configuration of Nimbus needs no alteration. However, users can update the following file accordingly, and can also add other configurations of the Storm Framework. Finally, users can introduce other scheduling strategies (including RAINBOW’s strategies) via this configuration file. For instance, if users set storm.scheduler to ResourceAwareScheduler and its strategy to EnergyAwareStrategy, the execution will try to minimize energy consumption.

The configuration file is located in the *analytic-stack/master/storm* directory.

A typical configuration file is the following:

storm.zookeeper.servers:
     - "cluster-head-IP" # update with master's IPv4
nimbus.seeds: [ "cluster-head-IP" ]  # update with master's IPv4
storm.log.dir: "/logs"
storm.local.dir: "/data"
storm.local.hostname: "cluster-head-IP" # update with master's IPv4
supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703
nimbus.thrift.max_buffer_size: 20480000
supervisor.thrift.max_buffer_size: 20480000
worker.childopts: "-Xmx%HEAP-MEM%m -Xloggc:artifacts/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=artifacts/heapdump"
topology.component.cpu.pcore.percent: 1000.0
topology.component.resources.onheap.memory.mb: 512.0
#storm.scheduler: "org.apache.storm.scheduler.resource.ResourceAwareScheduler"
#topology.scheduler.strategy: "eu.rainbowh2020.Schedulers.EnergyAwareStrategy"

Stack Execution

To execute the stack, users only have to run the following command in the *analytic-stack/master* directory:

docker-compose up

and the system will start all services. Note that users have to run the master's services first, and only then the storage and processing services on the rest of the nodes.

Monitoring, Data Storage and Analytic Services on Edge Nodes

For the data storage and processing stack on Edge Nodes, users must execute a docker-compose file on all nodes.

Navigate to the repository and go to the *analytic-stack/nodes* directory where the docker-compose.yaml file is provided.

Next we provide details about the parameters of the docker-compose file.

MONITORING_CONFIGURATION_FILE="..." # The path of the monitoring agent configuration file
STORAGE_RAINBOW_HEAD="..." # The cluster head's IPv6 address
STORAGE_NODE_NAME="..." # The hostname or IP of the node
STORAGE_PLACEMENT="..." # Enables or disables the placement algorithm of the storage. Default is False.
STORAGE_DATA_FOLDER="..." # The folder where the storage data will be kept
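As with the master, a filled-in *.env* for an edge node could look like this (all values are hypothetical placeholders; the head IPv6 would be the cjdns/cluster-head address from the master setup):

```
MONITORING_CONFIGURATION_FILE="./monitoring/agent.yaml"
STORAGE_RAINBOW_HEAD="fc00::1"
STORAGE_NODE_NAME="192.168.1.11"
STORAGE_PLACEMENT="False"
STORAGE_DATA_FOLDER="/opt/rainbow/storage-data"
```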

Monitoring Agent Configurations

node_id: "node_id"  # users need to provide a node id (e.g., the hostname)

sensing-units:
  general-periodicity: 1s # general sensing rate
  DefaultMonitoring: # Node-level metrics are enabled; users can disable them by removing this entry
    periodicity: 1s
#    disabled-groups: # metric groups that the system will not start at all; metric groups include CPU, memory, disk, network
##      - "disk"
#    metric-groups: # override the sensing preferences on specific groups
#      - name: "memory"
#        periodicity: 15s # change a static periodicity
#      - name: "cpu"
  UserDefinedMetrics: # specific implementation of the sensing interface for user-defined metrics
    periodicity: 1s
    sources:
      - "/"
  ContainerMetrics: # Container-level monitoring metrics
    periodicity: 1s

dissemination-units: # users can enable multiple dissemination units; however, in RAINBOW we use Ignite as storage
  IgniteExporter:
    hostname: ignite-server
    port: 50000

#adaptivity:   # optional adaptivity properties
#  sensing:  # adaptivity in sensing units
#    DockerProbe:  # e.g., enable adaptivity for the container metrics
#      target_name: demo_test|cpu_ptc  # and set as target metric the cpu percentage of demo_test container
#      minimum_periodicity: 1
#      maximum_periodicity: 15
#      confidence: 0.95
#  dissemination:  # adaptivity in dissemination
#    all:  # the system sends adaptively all metrics to the storage
#      minimum_periodicity: 1
#      maximum_periodicity: 15
#      confidence: 0.95
#    metric_id:  # or it can send adaptively only specific metrics
#    - minimum_periodicity: 5s
#      maximum_periodicity: 35s
#      confidence: 95
#

For more information about monitoring and its configuration, please check its repository: https://gitlab.com/rainbow-project1/rainbow-monitoring

Storm Worker Configurations

Generally, only the IPs in the Storm Worker configuration need to be updated. However, users can update the following file to alter the processing characteristics of a node. Specifically, users can add other configurations from the Storm Framework (since we utilize Storm as the execution engine).

The configuration file is located in the *analytic-stack/nodes/storm* directory.

A typical configuration file is the following:

nimbus.seeds: ["cluster-head-IP"] # The cluster head IPV4
ui.port: 8080
storm.zookeeper.servers:
   - "cluster-head-IP" # The cluster head IPV4
storm.local.hostname: "node-IP" # As hostname we need to provide the node's IPV4
supervisor.slots.ports:
   - 6700
   - 6701
   - 6702
   - 6703

Stack Execution

To execute the stack, users only have to run the following command:

docker-compose up

For execution on ARM processors, users need to execute specific docker-compose files. For instance, for 32-bit ARM processors, users need to run the following command:

docker-compose -f docker-compose-arm32.yaml up

or, for 64-bit ARM processors, users should execute the following command:

docker-compose -f docker-compose-arm64.yaml up
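Since the right file depends on the node's CPU architecture, a small wrapper can pick it automatically. This is a sketch: the `uname -m` values are the usual Linux ones, and the compose file names are those given above:

```shell
# Map the machine architecture to the matching compose file
case "$(uname -m)" in
  armv6l|armv7l) compose_file="docker-compose-arm32.yaml" ;;
  aarch64|arm64) compose_file="docker-compose-arm64.yaml" ;;
  *)             compose_file="docker-compose.yaml" ;;
esac

echo "Selected: $compose_file"
# docker-compose -f "$compose_file" up   # uncomment to actually start the stack
```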

Ports and Networking

Apache Storm needs the following ports to be open (users can configure other ports in the Storm configuration):

Default Port   Storm Config             Client Hosts/Processes                      Server
------------   ----------------------   -----------------------------------------   ----------------
2181           storm.zookeeper.port     Nimbus, Supervisors, and Worker processes   Zookeeper
6627           nimbus.thrift.port       Storm clients, Supervisors, and UI          Nimbus
6628           supervisor.thrift.port   Nimbus                                      Supervisors
8080           ui.port                  Client Web Browsers                         UI
8000           logviewer.port           Client Web Browsers                         Logviewer
3772           drpc.port                External DRPC Clients                       DRPC
3773           drpc.invocations.port    Worker Processes                            DRPC
3774           drpc.http.port           External HTTP DRPC Clients                  DRPC
670{0,1,2,3}   supervisor.slots.ports   Worker Processes                            Worker Processes

Furthermore, Storage Agents need ports 50000, 47500 and 47100 to be open as well.
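A quick way to verify reachability of these ports from another machine is a bash `/dev/tcp` probe. This is only a sketch; `cluster-head-IP` is a placeholder for the actual host you want to probe:

```shell
host="cluster-head-IP"   # placeholder: set to the node you want to probe
open=0; closed=0
# Storm ports from the table above, plus the Storage Agent ports
for port in 2181 6627 6628 8080 8000 3772 3773 3774 6700 6701 6702 6703 50000 47500 47100; do
  if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "open:   $port"; open=$((open + 1))
  else
    echo "closed: $port"; closed=$((closed + 1))
  fi
done
echo "$open open, $closed closed out of $((open + closed)) ports"
```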

Dashboard Setup

Here you can find instructions for the Dashboard Setup, which is the last step of the whole setup procedure. This page references the repository under https://gitlab.com/rainbow-project1/rainbow-installation.

For the installation a token is required, which can be requested from ktheodosiou@ubitech.eu or avasileiou@ubitech.eu.

  1. Clone the repository.

  2. Go to *dashboard* directory and give execution rights to all scripts.

  3. Update the following lines of the *rainbow-dashboard.sh* script with the appropriate values. serverIP is the master’s IP; the other values are the cjdns credentials from the Rainbow Setup - Master Setup step.

serverIP="<SERVER_IP>"
port="<SERVER_PORT>"
password="<PASSWORD>"
publicKey="<PUBLICKEY>"
serverHostname="<HOSTNAME>"
serverIPv6="<IPv6>"
  4. Execute the *rainbow-dashboard.sh* script.

  5. Update the following lines of the *.env* file with the appropriate values. The server here is the local machine where the setup takes place.

SERVER_IP=<SERVER_IP>
SERVER_BASE_PATH=<VOLUME_DATA_PATH>
  6. Execute the following command:

docker-compose up -d

Getting started with Rainbow Dashboard

Here you can find instructions for the basic functionalities of Rainbow Platform.

Login/Logout

Login

  • Provide your login credentials and click the <SIGN IN> button.

_images/login.png
  • Upon successful authentication the following screen will be presented.

_images/dashboard.png

Logout

  • In order to log out, click the <Log-out> button.

_images/logout.png
  • Upon successful logout the following screen will be presented.

_images/login.png

Dashboard Main View

The main dashboard:

  • Contains useful information on the cloud resources usage (CPU, Memory, Instances)

  • Provides health tracking logs on the running instances

  • Lists elasticity and security policies

  • Presents a total overview of the existing components, applications, vCPUs and RAM available.

_images/dashboard.png

Platform Usage Video

This screencast can also help you get started:

Components

In the Rainbow Platform each service is represented by a Component, which is instantiated as a docker process in a unique Virtual Machine. Applications are then formed by connecting individual components (VMs) together.

View Available Components

By selecting Components from the left sidebar menu, the user is redirected to the list of the available components.

_images/components.png

Create a Component

The user can create a new component by clicking the <Create new> button at the top right corner of the page. A tabbed form then appears, where the configuration of the component takes place.

  • General settings such as the component name, machine architecture, and elasticity controller scale function can be configured.

_images/create_components_1.png
  • The “blueprint” of the component, i.e. the docker image and its registry, is set in the Distribution Parameters tab.
    • Custom Docker registries are supported.

_images/create_components_2.png
  • The communication protocol is set by exposing or requiring interfaces, i.e. the network ports that the Component exposes or requires in order to communicate.
    • A user can select one of the existing interfaces, like a TCP access through port 80, or define a new interface.

    • For the definition of the required interface, an existing exposed interface of another component has to be selected.

_images/create_components_3.png
  • Advanced options can be set, such as running the component image in host mode or privileged mode, adding System Capabilities, and assigning a hostname.

_images/create_components_4.png

The user can also:

  • Specify resource limits for the Component (vCPUs, RAM, Storage, Hypervisor Type)

  • Set Environment Variables

  • Define Health Check Commands and Entrypoint Commands

  • Map Volumes

  • Add Plugins that are available in the Plugins Section

  • Add labels to the component

View/Edit a Component

The user can view and edit the parameters of an existing component by clicking the Edit button at the right corner of the list item.

_images/edit_component.png

Applications

  • After the needed components have been defined, the user can proceed with the definition of the application.

  • The application is created with the help of a visual graph editor and is then available for deployment.

View Available Applications

By selecting Applications from the left sidebar menu, the user is redirected to the list of the available applications.

_images/applications.png

Create An Application

  • To create a new Application, the user has to select <Create new> at the top right corner of the page.

  • The user is redirected to the visual editor, where the application components are presented as the nodes of a graph.

  • The connections between the nodes describe the interfaces between the components (services).

_images/create_app_1.png
  • Through the left side panel, the components can be retrieved and added to the editor.

_images/create_app_2.png
  • By selecting the required interface and dragging it to another node, the interfaces of the components can be connected.

_images/create_app_3.png
  • This procedure is followed until all required interfaces have been connected, in order to save a valid application graph.

_images/create_app_4.png

Application Instances

After an application has been configured, an instance of it can be deployed to the cloud.

Create an Application Instance

  • To create a new Application Instance, the user has to select <Create Instance> at the top right side of the application list item.

_images/create_app_instance_1.png

Prior to the deployment some settings can be configured:

  • General settings concerning the deployment infrastructure.

_images/create_app_instance_3.png
  • Security mechanisms such as the Intrusion Prevention System (IPS), the Security Operations Center (SOC) and the Intrusion Detection System (IDS)

_images/create_app_instance_4.png
  • Interface parameters

_images/create_app_instance_5.png
  • Environment variables of the components.

_images/create_app_instance_6.png
  • Advanced options such as host network mode or privileged mode

_images/create_app_instance_7.png

The user can also:

  • Specify the minimum and maximum number of workers per node that control the scalability profile of the application.

  • Set Health Check Commands and Entrypoint Execution Commands

  • Map Volumes

  • Manage Component Plugins

  • Manage components’ labels

Deploy an Application Instance

  • By pressing “Proceed” the deployment starts.

_images/deploy_app_1.png
  • In the Instances View, the user can see the list of deployed instances, their identifiers and their status.

_images/deploy_app_2.png
  • The deployment procedure needs a few minutes to finish. The user is constantly informed by viewing the logs aggregated from all the nodes of the application.
    • The total deployment time depends on the selected cloud infrastructure, as spawning new VMs might take more time on some IaaS.

    • The total time is also affected by the network delays between the cloud infrastructure and the docker registry that is used to fetch the components’ container images.

_images/deploy_app_3.png
  • When the deployment finishes, all nodes turn green.
    • On the instance list the application is shown as “DEPLOYED”.

Application Monitoring and Scaling

Monitoring Applications instances

  • Monitoring metrics are presented for each of the application nodes.

Application Elasticity

  • From the application instance list, the user must select the “Elasticity Policies” option for the deployed application, in order to configure how the application scales.

  • By selecting the appropriate function, users can aggregate the monitoring results in various ways.

  • For the monitored parameter, the metric and its dimension are selected from the appropriate lists.

  • An operand and the threshold that the policy shall conform to are then added to the policy.

  • The period field sets the size of the time window over which the metric values are collected and aggregated for policy enforcement.

  • In the scaling action, we select the component to scale in or scale out, and the number of workers to scale.
    • After a scaling action is performed, some time is needed for the component workers to be deployed. For this reason, we should ensure that no additional scaling actions fire during this period.

    • This is done through the “Inertia” field, which defines the time in minutes after a scaling action during which no further action is performed.

  • Multiple scaling actions can be added.

  • The policy can be saved and will be enforced on the application within a few seconds.

  • In this example we initially had only one worker of the WordPress component.

  • Due to the scaling rule, an additional node has been created.
    • A load balancer had already been deployed as part of the initial deployment, since we had defined that this component might need multiple workers.

    • The scaling action is visible to the user through the logs and the number of workers shown on the “WordPress” node in the graphs.

_images/slo1.png _images/slo2.png _images/slo3.png _images/slo4.png _images/slo5.png _images/slo6.png

Analytics

Cloud Resources

Manage Cloud Resources

  • Rainbow users can add cloud resources from compatible cloud providers in order to allow the deployment of applications

  • OpenStack, Amazon AWS and Google Cloud are supported

  • Appropriate forms for each of the cloud providers are available:

  • OpenStack

  • Amazon AWS

  • Google Cloud

Manage Your Keys

You can add keys for your account with the appropriate form.