Presentation Transcript

Slide 1

Deploy on cloud - Where are we?

Qiming Teng
tengqim@cn.ibm.com

Slide 2

Agenda
  - Heat Basics
  - Heat SoftwareConfig
  - Heat BootConfig
  - Heat DockerCompose
  - Heat Kubelet
  - Heat Docker Plugin
  - Heat Ansible
  - Senlin
  - Convergence

Slide 3

From template to stack

    version: xxx
    parameters:
      key: mykey
    resources:
      server: OS::Nova::Server
        key: {get: key}
        image: gold
        flavor: m1.small
        network: {get: network}
        volume: {get: volume}
      network: OS::Neutron::Network
        ...
      volume: OS::Cinder::Volume
        ...

[Diagram: Heat drives Nova (instance), Neutron (network) and Cinder (volume) to realize the stack.]

Orchestrator, not bandmaster.
A deployment tool.
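
The snippet on the slide is schematic; a minimal, valid HOT template along the same lines might look as follows (a sketch only; the image, flavor, key and CIDR values are illustrative assumptions):

    heat_template_version: 2014-10-16

    parameters:
      key_name:
        type: string
        default: mykey                    # illustrative key pair name

    resources:
      network:
        type: OS::Neutron::Net

      subnet:
        type: OS::Neutron::Subnet
        properties:
          network_id: {get_resource: network}
          cidr: 10.0.0.0/24               # illustrative

      volume:
        type: OS::Cinder::Volume
        properties:
          size: 1

      server:
        type: OS::Nova::Server
        properties:
          key_name: {get_param: key_name}
          image: gold                     # illustrative image name
          flavor: m1.small
          networks:
            - network: {get_resource: network}

      attachment:
        type: OS::Cinder::VolumeAttachment
        properties:
          instance_uuid: {get_resource: server}
          volume_id: {get_resource: volume}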

Slide 4

Not just a deployment tool, please!!!

    version: xxx
    parameters:
      key: mykey
    resources:
      server: OS::Heat::ServerGroup
        count: 5
      volume: OS::Cinder::Volume
        ...

[Diagram: Heat manages servers S0..S4. (1) Parallelized operations on the group; (2) listen for the observed states of S0..S4; (3) converge the observed states toward the desired states (here S2 and S3 are the ones being converged).]

Slide 5

software-config / software-deployment

    version: xxx
    parameters:
      key: mykey
    resources:
      config: OS::Heat::SoftwareConfig
        group: script
        config:
          # your script
      server: OS::Nova::Server
        key: {get: key}
        image: gold
        flavor: m1.small
        network: {get: network}
        volume: {get: volume}
        user_data: {get: config}

[Diagram: Heat and Nova bring up the instance and its OS; the software config (applied via tools such as Chef or Puppet) configures the App inside the instance.]

Related resource types:
  - OS::Heat::CloudConfig
  - OS::Heat::SoftwareConfig
  - OS::Heat::StructuredConfig
  - OS::Heat::SoftwareDeployment
  - OS::Heat::SoftwareDeployments
  - OS::Heat::SoftwareComponent
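
The slide passes the config through user_data; with the resource types listed above, the same idea is usually expressed with a SoftwareConfig/SoftwareDeployment pair along these lines (a sketch; the script, image and key names are illustrative):

    resources:
      config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config: |
            #!/bin/sh
            echo "configuring the app..."    # your script goes here

      server:
        type: OS::Nova::Server
        properties:
          key_name: {get_param: key_name}
          image: gold                        # illustrative
          flavor: m1.small
          user_data_format: SOFTWARE_CONFIG  # enables the in-guest agents

      deployment:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: config}
          server: {get_resource: server}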

Slide 6

software-config / software-deployment

    version: xxx
    resources:
      config-1: OS::Heat::SoftwareConfig
      deploy-1:
        config: config-1
        server: server-1
      server-1: OS::Nova::Server
      config-2: OS::Heat::SoftwareConfig
      deploy-2:
        depends_on: deploy-1
        config: config-2
        server: server-2
      server-2: OS::Nova::Server

DECLARATIVE: the template only declares that deploy-2 depends on deploy-1; Heat works out the execution order across servers.

[Diagram: config-1 is applied to server-1 and config-2 to server-2, with the second deployment depending on the first.]
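
In real HOT syntax the ordering is just a depends_on on the second deployment, e.g. (a sketch, names as on the slide):

    deploy-2:
      type: OS::Heat::SoftwareDeployment
      depends_on: deploy-1            # apply config-2 only after config-1 has finished
      properties:
        config: {get_resource: config-2}
        server: {get_resource: server-2}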

Slide 7

software-config / software-deployment

[Diagram: as on slide 5, the agents live inside the instance OS and configure the App, e.g. via Chef or Puppet.]

Q: What is it?

A: A collection of agents, including:
  - os-collect-config [1]
  - os-refresh-config [2]
  - os-apply-config [3]
  - heat-config-script [4]
  - heat-config-puppet [4]
  - heat-config-docker-compose [4]
  - heat-config-kubelet [4]
  - ...

[1] http://git.openstack.org/cgit/openstack/os-collect-config/
[2] http://git.openstack.org/cgit/openstack/os-refresh-config/
[3] http://git.openstack.org/cgit/openstack/os-apply-config/
[4] http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/elements

Slide 8

software-config / software-deployment

[Diagram: as before, the agents inside the instance configure the App via Chef, Puppet, etc.]

Q: How does the agent authenticate?

A: Heat does the secret job in the background:
  - a heat domain, created during setup
  - a stack_domain_project, whose name is derived from the stack id
  - a stack_domain_user, whose name is derived from the resource name
  - the password? uuid.uuid4().hex

NOTE: there is a side path of generating EC2 tokens.

Slide 9

software-config / software-deployment

[Diagram: as before, the agents inside the instance configure the App.]

Q: How are the agents injected/installed?

A: There are two ways:
  - disk-image-builder (dib), a TripleO project [1]: build images with the agents baked in, ready for use
  - Heat boot-config [2]: install the agents on the fly when the VM boots up

[1] http://git.openstack.org/cgit/openstack/diskimage-builder/
[2] http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/boot-config

Slide 10

Heat boot-config [1]

Goal: install the agents required to use certain software deployments in templates.

How it is used:
  - define an env yaml file with a Heat::InstallConfigAgent resource
  - refer to this resource in your server.properties.user_data

[Diagram: the env yaml expands into three config pieces that are merged into a MIME multipart user_data for the server in the template:
  - inst-config:   "#!/bin/sh; yum install .." (install the agents)
  - config-config: "#!/bin/sh; cat << EOF ...; mkdir ..." (write the agent configuration)
  - start-config:  "#!/bin/sh; systemctl enable ...; systemctl start ..." (enable and start the services)]

    heat stack-create -f template -e environment mystack

[1] http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/boot-config/
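
A rough sketch of that wiring (the environment mapping and the config attribute name follow the slide's labels; the file names are illustrative, the actual templates live under the boot-config directory referenced above):

    # environment.yaml (illustrative)
    resource_registry:
      "Heat::InstallConfigAgent": install_config_agent.yaml   # hypothetical local file

    # template.yaml (illustrative)
    resources:
      install_agent:
        type: Heat::InstallConfigAgent

      server:
        type: OS::Nova::Server
        properties:
          image: gold                      # illustrative
          flavor: m1.small
          user_data_format: SOFTWARE_CONFIG
          user_data: {get_attr: [install_agent, config]}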

Slide 11

Heat container agent

Goal: prepare the guest environment for container deployment with docker-compose.

Two pieces are delivered through the server's user_data:
  - write_image_pull_script: a "#cloud-config" write_files section that writes a script which grabs the specified image via (1) curl + docker load, or (2) docker pull
  - install_container_agent: a "#!/bin/sh" script that (1) creates a heat-container-agent service, (2) enables/starts the docker svc, and (3) enables/starts the agent svc

The heat-container-agent service then executes the image pull script to get the <image> and runs:

    docker run --name heat-container-agent ... <image>

Sample image:
http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/heat-container-agent/
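
A minimal sketch of the write_image_pull_script piece as #cloud-config (the path and script body are illustrative placeholders, not the actual heat-templates content; install_container_agent would then enable and start the docker and agent services):

    #cloud-config
    write_files:
      - path: /opt/heat-agent/pull-image.sh     # hypothetical path
        permissions: "0755"
        content: |
          #!/bin/sh
          # grab the specified image via
          # 1. curl + docker load, or
          # 2. docker pull
          docker pull "$1"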

Slide 12

Heat docker-compose [1]

Goal: a 'hook' that uses 'docker-compose' to deploy containers; it is an element that you build into your guest image.

    config:
      group: docker-compose
      inputs: [env_files]
      config:
        db:
          image: redis
        web:
          image: nginx

[Diagram: inside the instance, os-collect-config polls the deployment metadata from Heat; heat-config dispatches it to the heat-docker-compose hook, which writes the env files and the yml file and runs "docker-compose up -d --no-build" to start the containers.]

[1] http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/elements/heat-config-docker-compose
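
Embedded in a template, the hook is driven by a SoftwareConfig/SoftwareDeployment pair along these lines (a sketch; the compose content mirrors the slide, the server resource is assumed to be defined elsewhere):

    resources:
      compose_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: docker-compose
          inputs:
            - name: env_files
          config: |
            db:
              image: redis
            web:
              image: nginx

      compose_deploy:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: compose_config}
          server: {get_resource: server}      # assumes a server resource defined elsewhere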

Slide 13

Heat kubelet [1]

Goal: a 'hook' that uses the 'kubelet' agent from kubernetes to deploy containers; it is an element that you build into your guest image.

    config:
      group: kubelet
      inputs: [env_files]
      config:
        containers:
          - name: doecho
            image: busybox
            command: ...

[Diagram: inside the instance, os-collect-config polls the metadata; heat-config hands the config to the hook-kubelet element, which the preinstalled kubelet.service polls to run the containers; container images are also preinstalled at /opt/heat-docker/images.tar.]

[1] http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/elements/heat-config-docker-compose
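
Wired into a template the same way as the docker-compose hook (a sketch; the pod definition mirrors the slide, and the command, elided there, is left as a placeholder):

    resources:
      kubelet_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: kubelet
          inputs:
            - name: env_files
          config: |
            containers:
              - name: doecho
                image: busybox
                command: ["/bin/echo", "hello"]   # placeholder command

      kubelet_deploy:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: kubelet_config}
          server: {get_resource: server}          # assumes a server resource defined elsewhere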

Slide 14

Heat config ansible [1]

Goal: a 'hook' that uses 'ansible' to configure an instance.

    config:
      group: ansible
      inputs: [...]
      config:
        # your ansible playbook here

[Diagram: inside the instance, os-collect-config polls the metadata; heat-config hands the playbook to hook-ansible.py, which runs "ansible-playbook -i localhost <file>" to configure the application (or container).]

[1] http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/elements/heat-config-ansible
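
As a sketch, with a trivial inline playbook (the play itself is an illustrative placeholder):

    resources:
      ansible_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: ansible
          config: |
            ---
            - hosts: localhost
              tasks:
                - name: write a marker file          # illustrative task
                  copy:
                    dest: /tmp/configured_by_heat
                    content: "done\n"

      ansible_deploy:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: ansible_config}
          server: {get_resource: server}             # assumes a server resource defined elsewhere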

Slide 15

Heat docker plugin

A resource type (contrib [1]) for Heat: DockerInc::Docker::Container, built on docker-py [2].

Properties used at CREATE:
  image: string, command: list, hostname: string, user: string,
  stdin_open: boolean, tty: boolean, mem_limit: integer, ports: list,
  environment: list, dns: list, volumes: map, cpu_shares: integer, cpuset: string

Properties used at START:
  privileged: boolean, binds: map (volumes), volumes_from: list,
  port_bindings: map, links: map, restart_policy: map,
  cap_add: list, cap_drop: list, read_only: boolean, devices: list

Attributes (SHOW):
  info, network_info, network_ip, network_gateway,
  network_tcp_ports, network_udp_ports, logs, logs_head, logs_tail

[1] http://git.openstack.org/cgit/openstack/heat/tree/contrib/heat_docker/
[2] https://github.com/docker/docker-py
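
A sketch of using the contrib resource in a template (assumes the heat_docker plugin is installed; only properties and attributes listed above are used, and the values are illustrative):

    resources:
      web:
        type: DockerInc::Docker::Container
        properties:
          image: nginx            # illustrative image
          ports:
            - 80
          tty: true

    outputs:
      web_ip:
        description: IP address reported for the container
        value: {get_attr: [web, network_ip]}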

Slide 16

Slide 17

Services roadmap on SuperVessel

SuperVessel Cloud Service (Online)
  - VM and container service
  - Storage service
  - Network service
  - Accelerator as a service
  - Image service

SuperVessel Big Data and HPC Service (Online)
  - Big Data: MapReduce (Symphony), SPARK
  - Performance tuning service

Super Class Service (Preparing)
  - On-line video courses
  - Teacher course management
  - User contribution management

OpenPOWER Enablement Service (Online)
  - X-to-P migration: AutoPort tool
  - OpenPOWER new system test service

Super Project Team Service
  - Project management service
  - DevOps automation

SuperVessel Cloud Infrastructure
  - IBM POWER servers
  - OpenPOWER servers
  - FPGA/GPU
  - Docker
  - Storage

Slide 18

SuperVessel

Slide 19

Heat -- it has tried not to be just a deployer.

Support for High Availability:
  - OS::Heat::HARestarter: recreates a resource when a failure is detected

Support for Auto-Scaling:
  - OS::Heat::InstanceGroup
  - OS::Heat::ResourceGroup
  - OS::Heat::AutoScalingGroup
  - OS::Heat::ScalingPolicy
  - AWS::AutoScaling::AutoScalingGroup
  - AWS::AutoScaling::ScalingPolicy
  - AWS::AutoScaling::LaunchConfiguration
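
To make the OS::Heat:: resources above concrete, a sketch of a scaling group plus a scaling policy (sizes, image and flavor are illustrative; an alarm or an operator would hit the policy's webhook URL to trigger scaling):

    resources:
      asg:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 5
          resource:
            type: OS::Nova::Server
            properties:
              image: gold               # illustrative
              flavor: m1.small

      scale_out_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: {get_resource: asg}
          scaling_adjustment: 1
          cooldown: 60

    outputs:
      scale_out_url:
        description: webhook URL that triggers a scale-out
        value: {get_attr: [scale_out_policy, alarm_url]}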

Slide 20

Autoscaling reorg

Slide 21

Blueprints on reworking Heat autoscaling

  BP                         Priority  Description
  autoscaling-api-resources  high      Heat resources invoking the AS APIs
  as-api-group-resource      high      ScalingGroup resource wrapping the AS API's group functionality
  as-api-policy-resource     high      ScalingPolicy resource wrapping the AS API's policy functionality
  as-api-webhook-resource    high      Webhook resource wrapping the AS API's execution of webhooks
  autoscaling-api-client     high      A python client for Heat to interact with the AS API
  autoscaling-api            -         A separate service for the implementation of autoscaling w/ Heat
  as-engine                  -         A separate engine/service for autoscaling, supporting the AS API
  as-engine-db               -         A DB dedicated to autoscaling, using the schema created in as-lib-db
  as-lib                     -         A separate module to be used by the AS service
  as-lib-db                  -         A DB for autoscaling bookkeeping

Slide 22

Dependencies among BPs

[Dependency diagram over the blueprints: autoscaling-api-resources, as-api-group-resources, as-api-policy-resources, as-api-webhook-resources, autoscaling-api-client, autoscaling-api, as-engine, as-engine-db, as-lib, as-lib-db.]

Slide 23

Overview of Autoscaling

Slide 24

A struggle before Senlin starts

Should we do this within Heat or outside Heat?

Within Heat
  - pros: smooth transition; strict reviews -> better quality
  - cons: long (maybe forever) code churn; eventually a dedicated service is needed anyway, hence the pain of switching over

Outside Heat
  - pros: quick development; less code churn in Heat
  - cons: high requirements on skills and cycles; an eventual switch-over anyway, i.e. another animal to feed in the OpenStack zoo

We choose OUTSIDE HEAT:
  - the Heat core team supports this approach
  - we see a lot of potential in a standalone clustering service
  - we don't have to do everything from scratch: we "borrow" and "steal" code whenever the license permits

Slide 25

What do we really need?

  - Scalable
  - Load-Balanced
  - Highly-Available
  - Manageable
  - ...

... of any (OpenStack) objects -- this is what is missing from OpenStack.

Slide 26

Senlin [Chinese Pinyin for "forest"]

Slide 27

Senlin architecture

[Diagram: the Senlin client talks REST to the Senlin API; the API talks RPC to the Senlin engine; the engine persists to the Senlin database and loads profiles and policies.]

Slide 28

ER diagram

  cluster: name, uuid, user, project, parent, profile_id, status
    operations: create(), delete(), update(), add(), remove()
  node: name, uuid, cluster_id, profile_id, index, status, created_time, updated_time
  profile: name, uuid, type, spec
    «profile_type» plugins: os.nova.server, os.cinder.volume, os.keystone.user, os.heat.stack
  policy: name, uuid, type, level, spec
    «policy_type» plugins: placement_policy, update_policy, deletion_policy, scaling_policy, health_policy, lb_policy
  cluster_policy (cluster-policy binding): cluster_id, policy_id, enabled, level, cooldown, priority
  action: context, action, inputs, outputs
  webhook: target, action, user

Profile types and policy types are loaded as plugins; webhooks are triggered through the API.

Slide 29

Senlin operations (actions)

  Cluster: CREATE, DELETE, UPDATE, LIST, SHOW, ADD_NODES, DEL_NODES, SCALE_OUT, SCALE_IN, POLICY_ATTACH, POLICY_DETACH, POLICY_UPDATE
  Node:    CREATE, DELETE, UPDATE, LIST, SHOW, JOIN, LEAVE, MIGRATE
  Policy:  CREATE, UPDATE, DELETE, LIST, SHOW
  Profile: CREATE, UPDATE, DELETE, LIST, SHOW
  Action:  LIST, SHOW
  Event:   LIST, SHOW
  Webhook: CREATE, DELETE, LIST, SHOW

Slide 30

Relation to other projects

Senlin provides the "array" data type for cloud programming.

[Diagram: Senlin, Ceilometer and Heat sit on top of Nova, Cinder, Neutron, Swift, Keystone and Horizon; the core services offer the primitive data types, while Heat and Senlin add the complex data types, as in the analogy below.]

    struct person {
        int age;
        char name[0];
    };
    struct person team[10];

    // a Senlin cluster of Heat stacks
    // a Senlin cluster of Nova servers
    // a Heat stack containing Senlin clusters
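
Continuing the analogy in YAML pseudo-spec form: a profile plays the part of the struct definition and a cluster the part of the array. This is purely illustrative; the key names below are not the actual Senlin spec schema.

    # purely illustrative pseudo-spec, not the real Senlin schema
    profile:                       # roughly: "struct person { ... };"
      name: web_profile
      type: os.nova.server
      spec:
        image: gold                # illustrative
        flavor: m1.small

    cluster:                       # roughly: "struct person team[10];"
      name: web_cluster
      profile: web_profile
      size: 10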

Slide 31

Current status

Code base:
  - http://git.openstack.org/cgit/stackforge/senlin (including the API design under the doc subdirectory)
  - http://git.openstack.org/cgit/stackforge/python-senlinclient

IRC channel: #senlin

  Date        Milestone
  2014-12-10  Initial Git repository inside CRL
  2014-12-25  Migration to github.com
  2015-01-14  Introduction to the IBM Heat community
  2015-01-19  Weekly conference call started
  2015-02-06  Announcement on the IBM openstack-dev mailing list
  2015-02-13  Email to the OpenStack Heat core team
  2015-03-16  Senlin project accepted to OpenStack StackForge
  2015-03-21  Senlin client project accepted to OpenStack StackForge
  2015-03-26  Project announcement in the community (link)

Slide 32

Next steps: complete AutoScaling support; Cross-Region AutoScaling.

Features pipeline (draft); the W value is shown in parentheses after each item:

  High Priority                  Middle Priority                      Low Priority
  Event Listening (**)           Horizon Plug-in (*)                  Metrics Collection (*)
  Scavenger Process (*)          User Defined Actions/Ansible (***)   AWS Compatible API (***)
  Multi-Engine Support (*)       Quota Enforcement (*)                Integration with Mistral (**)
  Test Case Coverage (**)        Event Notification (*)               Cluster suspend/resume (**)
  Barbican Support (*)           Scheduled actions (*)                VPNaaS support (**)
  Interaction with Congress (*)  Nova ServerGroup API (*)             Integration with Tooz (**)

Slide 33

Thank you!