Distributed / Cloud Deployment

This article describes an example distributed Cloud deployment of EaaS.

Emulation Components (EmuComp)

Emulation components (EmuComp) are the workhorses of a distributed EaaS installation: emulators are executed on EmuComp machines. Therefore, EmuComp instances are usually deployed on compute instances, e.g. blades or Cloud instances.

The following sections describe the preparation of such an “image”. Once such a Cloud/blade image is ready, it can be instantiated as a static compute instance or governed by the EaaS allocator.

Build

Build and package only the EmuComp-related modules (run in src/):

mvn clean install -P emucomp -pl ear -am

Make sure to adapt ear/src/main/application/META-INF/jboss-deployment-structure.xml for partial deployment, e.g. for an EmuComp-only deployment:

<jboss-deployment-structure>
  <ear-subdeployments-isolated>false</ear-subdeployments-isolated>
  <deployment>
    <dependencies>
      <module name="deployment.eaas-server.ear.eaas-components-impl-0.0.1-SNAPSHOT.war" services="import"/>
      <module name="deployment.eaas-server.ear.eaas-proxy-impl-0.0.1-SNAPSHOT.war" services="import"/>
    </dependencies>
  </deployment>
</jboss-deployment-structure>

Note

Partial deployment will be automated at some point. Currently not a priority.

After a successful build, the packaged eaas-server.ear will be located in src/ear/target.

Package

Create an emucomp container with the eaas-server deployment (TBD).

Deploy

Prepare the target machine (image). Create a docker-compose.yaml.

Example:

version: '2'
services:
  eaas:
    image: eaas/emucomp
    container_name: eaas
    privileged: true
    ports:
      - 80:80
    volumes:
      - ./log:/eaas/log

Typically, for a pure EmuComp instance no additional configuration is required.
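Before wiring the container into systemd, the setup can be tested once by hand (assuming Docker and docker-compose are installed and the docker-compose.yaml above is in the current directory):

```shell
# Start the EmuComp container in the background
docker-compose up -d
# The container named "eaas" should now be listed as running
docker ps --filter name=eaas
# Stop and remove it again
docker-compose down
```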

Next, create a systemd unit file at /etc/systemd/system/eaas.service. Example:

[Unit]
Description=EmuComp Service
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/home/centos
ExecStartPre=-/usr/local/bin/docker-compose pull
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=default.target

Set the working directory to the folder containing the docker-compose.yaml and the log/ directory. Finally, enable the service so that it is started at machine boot: sudo systemctl enable eaas.

To test the setup, run sudo systemctl start eaas and observe the eaas logfile.
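The relevant commands, combined (the journalctl line assumes the service output goes to the systemd journal; the log/ directory from the compose file is the alternative place to look):

```shell
sudo systemctl enable eaas      # start at machine boot
sudo systemctl start eaas       # start now
sudo journalctl -u eaas -f      # follow the service output
```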

EaaS Gateway / Frontend

For this example we assume that the EaaS gateway machine acts as the REST endpoint and web UI frontend, and also hosts the object and image archives.

Note

To follow this example, determine the public IP address or domain name of the gateway machine. Furthermore, allocate a port, i.e. make sure the desired port (e.g. port 80) is unused and reachable by clients (e.g. not firewalled).
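One quick local check for the port (assuming the iproute2 ss tool is available on the gateway machine):

```shell
# List listening TCP sockets; port 80 should not appear before EaaS is started
ss -tln | grep ':80 ' || echo "port 80 is free"
```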

Preparations

Organize EaaS-related files in an eaas working directory (this only simplifies management and is not a technical requirement, see also the Note below). This folder should contain the following subdirectories:

  • config containing a eaas-config.yaml file (see below)
  • demo-ui containing a configured demo-ui instance (optional, see below)
  • objects containing an object-archive (optional)
  • image-archive containing the image-archive
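The directory layout above can be created in one step (the top-level directory name eaas is just an example):

```shell
# Create the suggested eaas working directory and its subdirectories
mkdir -p eaas/config eaas/demo-ui eaas/objects eaas/image-archive
```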

Note

The objects and image-archive directories usually require a significant amount of storage space and may be mounted from a NAS or other storage devices. In this case, do not use symbolic links: linking to files outside of a shared Docker folder would undermine the container isolation and is therefore not allowed. Instead, adapt the docker-compose.yaml to point to the real paths.
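For instance, if the archives live on a NAS mounted at /mnt/nas (an illustrative path), the volumes section of the docker-compose.yaml could reference the real paths directly; the container-side paths shown here are assumptions and must match the actual container layout:

```yaml
services:
  eaas:
    volumes:
      - ./log:/eaas/log
      - /mnt/nas/objects:/eaas/objects
      - /mnt/nas/image-archive:/eaas/image-archive
```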

The next step is to create an EaaS configuration.

OpenStack Nova

In this example, an OpenStack-based Cloud provider is used: bwCloud. The example can easily be adapted to other OpenStack Cloud instances.

clustermanager:
    name: "default"
    admin_api_access_token: "secret"
    providers:
      - name: bwcloud
        type: jclouds

        labels:
            rank: 1

        node_allocator:
            provider:
                name: openstack-nova
                endpoint: https://bwcloud.ruf.uni-freiburg.de:5000/v2.0/
                identity: eaas_project:userid
                credential: mypassword

            security_group_name: default
            node_group_name: eaas-nodes
            node_name_prefix: eaas-testing-

            vm:
                network_id: an-uuid-use-openstack-ui-to-find-out
                hardware_id: Freiburg/an-uuid-use-openstack-ui-to-find-out  # 8 vcpus / 8 GB
                image_id: Freiburg/an-uuid-use-openstack-ui-to-find-out  # emucomp

        poolscaler:
            min_poolsize: 2
            max_poolsize: 6
            scaledown:
                node_warmup_period: 2 mins
                node_cooldown_period: 1 mins
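The placeholder UUIDs in the vm section can be looked up with the standard OpenStack CLI (assuming python-openstackclient is installed and the project credentials are sourced into the environment):

```shell
openstack network list     # network_id
openstack flavor list      # hardware_id
openstack image list       # image_id
```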

Google Cloud

In this example, the Google Compute Cloud is used.

clustermanager:
    name: "default"
    admin_api_access_token: "secret"
    providers:
      - name: default
        type: gce
        node_allocator:
            project_id: myproject
            zone_name: europe-west1-d
            network_name: my-network
            credentials_file: /home/bwfla/.bwFLA/secret.json
            node_name_prefix: "emucomp-eu-"
            vm:
                # https://cloud.google.com/compute/docs/machine-types
                machine_type: n1-highcpu-8
                persistent_disk:
                    type: pd-standard
                    size: 10  # in GB
                    image_url: projects/myproject/global/images/eaas-emucomp-20170406
        poolscaler:
            # specs must currently be exactly: num_nodes * node_capacity
            min_poolsize:  1
            max_poolsize:  25
            scaledown:
                    node_warmup_period: 10 mins
                    node_cooldown_period: 15 mins
            preallocation:
                min_bound: { cpu: 0, memory: 0 }
                max_bound: { cpu: +inf, memory: +inf }
                request_history_multiplier: 0.5
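Similarly, the machine type and disk image referenced in the config can be inspected with the gcloud CLI (project and zone values are taken from the example above):

```shell
gcloud compute machine-types describe n1-highcpu-8 --zone europe-west1-d
gcloud compute images list --project myproject
```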

Static Compute Cluster

Alternatively, a static compute cluster (e.g. a number of dedicated machines) can be configured to host emulation components.

components.timeout: 50s
clustermanager:
    name: "default"
    providers:
      - name: "default"
        type: "blades"

        node_allocator:
            healthcheck:
                url_template: "http://{{address}}/emucomp/health"
            node_capacity: { cpu: 16, memory: 32GB }
            node_addresses:
                    - "emucomp1.emulation-solutions:8080"
                    - "emucomp2.emulation-solutions:8080"
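The url_template above is expanded once per entry in node_addresses; a node's health endpoint can also be probed manually (the address is taken from the example list):

```shell
# -f makes curl fail on HTTP errors, -sS keeps output quiet but shows errors
curl -fsS http://emucomp1.emulation-solutions:8080/emucomp/health
```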