Friday, September 30, 2022

ChatOps: Managing Kubernetes Deployments in Webex


This is the third post in a series about writing ChatOps services on top of the Webex API.  In the first post, we built a Webex Bot that received message events from a group room and printed the event JSON out to the console.  In the second, we added security to that Bot, first adding an encrypted authentication header to Webex events, and subsequently adding a simple list of authorized users to the event handler.  We also added user feedback by posting messages back to the room where the event was raised.

In this post, we'll build on what was done in the first two posts, and start to apply real-world use cases to our Bot.  The goal here will be to manage Deployments in a Kubernetes cluster using commands entered into a Webex room.  Not only is this a fun challenge to solve, but it also provides wider visibility into the goings-on of an ops team, since they can scale a Deployment or push out a new container version in the public view of a Webex room.  You can find the completed code for this post on GitHub.

This post assumes that you've completed the steps listed in the first two blog posts.  You can find the code from the second post here.  Also important: make sure you read the first post to learn how to make your local development environment publicly accessible so that Webex Webhook events can reach your API.  Make sure your tunnel is up and running and Webhook events can flow through to your API successfully before proceeding on to the next section.  In this case, I've set up a new Bot called Kubernetes Deployment Manager, but you can use your existing Bot if you like.  From here on out, this post assumes that you've taken these steps and have a successful end-to-end data flow.

Architecture

Let's take a look at what we're going to build:

Architecture Diagram

Building on top of our existing Bot, we're going to create two new services: MessageIngestion and Kubernetes.  The latter will take a configuration object that gives it access to our Kubernetes cluster, and will be responsible for sending requests to the K8s control plane.  Our Index Router will continue to act as a controller, orchestrating data flows between services.  And our WebexNotification service, which we built in the second post, will continue to be responsible for sending messages back to the user in Webex.

Our Kubernetes Resources

In this section, we'll set up a simple Deployment in Kubernetes, as well as a Service Account that we can leverage to communicate with the Kubernetes API using the NodeJS SDK.  Feel free to skip this part if you already have these resources created.

This section also assumes that you have a Kubernetes cluster up and running, and that both you and your Bot have network access to interact with its API.  There are plenty of resources online for getting a Kubernetes cluster set up and getting kubectl installed, both of which are beyond the scope of this blog post.

Our Test Deployment

To keep things simple, I'm going to use Nginx as my deployment container – an easily-accessible image that doesn't have any dependencies to get up and running.  If you have a Deployment of your own that you'd like to use instead, feel free to replace what I've listed here with that.

# in resources/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
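Assuming you've saved the manifest as resources/nginx-deployment.yaml, you can apply it and confirm that both replicas come up before moving on:

```shell
# apply the test Deployment to the default namespace
kubectl apply -f resources/nginx-deployment.yaml

# confirm the Deployment reports 2/2 ready replicas
kubectl get deployment nginx-deployment
```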

Our Service Account and Role

The next step is to make sure our Bot code has a way of interacting with the Kubernetes API.  We can do that by creating a Service Account (SA) that our Bot will assume as its identity when calling the Kubernetes API, and ensuring it has proper access with a Kubernetes Role.

First, let's set up an SA that can interact with the Kubernetes API:

# in resources/sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: chatops-bot

Now we'll create a Role in our Kubernetes cluster that has access to just about everything in the default Namespace.  In a real-world application, you'll likely want to take a more restrictive approach, providing only the permissions that allow your Bot to do what you intend; but wide-open access will work for a simple demo:

# in resources/role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: chatops-admin
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

Lastly, we'll bind the Role to our SA using a RoleBinding resource:

# in resources/rb.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: chatops-admin-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: chatops-bot
  apiGroup: ""
roleRef:
  kind: Role
  name: chatops-admin
  apiGroup: "rbac.authorization.k8s.io"

Apply these using kubectl:

$ kubectl apply -f resources/sa.yaml
$ kubectl apply -f resources/role.yaml
$ kubectl apply -f resources/rb.yaml

Once your SA is created, fetching its information will show you the name of the Secret in which its Token is stored.

Screenshot of the Service Account's describe output

Fetching information about that Secret will print out the Token string in the console.  Be careful with this Token, as it's your SA's secret, used to access the Kubernetes API!

The secret token value
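If you'd rather work from the command line than screenshots, the same information can be retrieved with kubectl.  The Secret name below is a placeholder – substitute the one shown in your own describe output.  Note that clusters running Kubernetes v1.24 or later no longer auto-create token Secrets for Service Accounts; on those clusters you can request a token directly instead:

```shell
# show the Service Account, including the name of its token Secret (pre-v1.24 clusters)
kubectl describe sa chatops-bot

# decode the token from that Secret; replace <secret-name> with the name from above
kubectl get secret <secret-name> -o jsonpath='{.data.token}' | base64 --decode

# on Kubernetes v1.24+, mint a token for the Service Account directly
kubectl create token chatops-bot
```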

Configuring the Kubernetes SDK

Since we're writing a NodeJS Bot in this blog post, we'll use the JavaScript Kubernetes SDK for calling our Kubernetes API.  You'll notice, if you look at the examples in the Readme, that the SDK expects to be able to pull from a local kubectl configuration file (which, for example, is stored on a Mac at ~/.kube/config).  While that might work for local development, it's not ideal for Twelve-Factor development, where we typically pass in our configurations as environment variables.  To get around this, we can pass in a pair of configuration objects that mimic the contents of our local Kubernetes config file, and use those configuration objects to assume the identity of our newly created Service Account.

Let's add some environment variables to the AppConfig class that we created in the previous post:

// in config/AppConfig.js
// inside the constructor block
// after the previous environment variables

// whatever you'd like to name this cluster; any string will do
this.clusterName = process.env['CLUSTER_NAME'];
// the base URL of your cluster, where the API can be reached
this.clusterUrl = process.env['CLUSTER_URL'];
// the CA cert set up for your cluster, if applicable
this.clusterCert = process.env['CLUSTER_CERT'];
// the SA name from above - chatops-bot
this.kubernetesUserame = process.env['KUBERNETES_USERNAME'];
// the token value referenced in the screenshot above
this.kubernetesToken = process.env['KUBERNETES_TOKEN'];

// the rest of the file is unchanged…
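For local development, these values can be supplied as environment variables before starting the app.  Every value below is a placeholder – use your own cluster's details:

```shell
# sample environment configuration (all values are placeholders)
export CLUSTER_NAME="my-cluster"
export CLUSTER_URL="https://my-cluster.example.com:6443"
export CLUSTER_CERT="<base64-encoded CA bundle, if your cluster uses one>"
export KUBERNETES_USERNAME="chatops-bot"
export KUBERNETES_TOKEN="<token fetched from the Service Account's Secret>"
```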

These five lines will allow us to pass configuration values into our Kubernetes SDK and configure a local client.  To do that, we'll create a new service called KubernetesService, which we'll use to communicate with our K8s cluster:

// in services/Kubernetes.js

import {KubeConfig, AppsV1Api, PatchUtils} from '@kubernetes/client-node';

export class KubernetesService {
    constructor(appConfig) {
        this.appClient = this._initAppClient(appConfig);
        this.requestOptions = { "headers": { "Content-type":
            PatchUtils.PATCH_FORMAT_JSON_PATCH}};
    }

    _initAppClient(appConfig) { /* we'll fill this in soon */ }

    async takeAction(k8sCommand) { /* we'll fill this in later */ }
}

This set of imports at the top gives us the objects and methods that we'll need from the Kubernetes SDK to get up and running.  The requestOptions property set in the constructor will be used when we send updates to the K8s API.

Now, let's populate the contents of the _initAppClient method so that we have an instance of the SDK ready to use in our class:

// inside the KubernetesService class
_initAppClient(appConfig) {
    // building objects from the env vars we pulled in
    const cluster = {
        name: appConfig.clusterName,
        server: appConfig.clusterUrl,
        caData: appConfig.clusterCert
    };
    const user = {
        name: appConfig.kubernetesUserame,
        token: appConfig.kubernetesToken,
    };
    // create a new config factory object
    const kc = new KubeConfig();
    // pass in our cluster and user objects
    kc.loadFromClusterAndUser(cluster, user);
    // return the client created by the factory object
    return kc.makeApiClient(AppsV1Api);
}

Simple enough.  At this point, we have a Kubernetes API client ready to use, stored in a class property so that public methods can leverage it in their internal logic.  Let's move on to wiring this into our route handler.

Message Ingestion and Validation

In a previous post, we took a look at the full payload of JSON that Webex sends to our Bot when a new message event is raised.  It's worth looking at again, since it indicates what we need to do in our next step:

Message event body

If you look through this JSON, you'll notice that nowhere does it list the actual content of the message that was sent; it simply gives event data.  However, we can use the data.id field to call the Webex API and fetch that content, so that we can take action on it.  To do so, we'll create a new service called MessageIngestion, which will be responsible for pulling in messages and validating their content.

Fetching Message Content

We'll start with a very simple constructor that pulls in the AppConfig to build out its properties, and one simple method that calls a couple of stubbed-out private methods:

// in services/MessageIngestion.js
export class MessageIngestion {
    constructor(appConfig) {
        this.botToken = appConfig.botToken;
    }

    async determineCommand(event) {
        const message = await this._fetchMessage(event);
        return this._interpret(message);
    }

    async _fetchMessage(event) { /* we'll fill this in next */ }

    _interpret(rawMessageText) { /* we'll talk about this */ }
}

We've got a good start, so now it's time to write our code for fetching the raw message text.  We'll call the same /messages endpoint that we used to create messages in the previous blog post, but in this case, we'll fetch a specific message by its ID:

// in services/MessageIngestion.js
// inside the MessageIngestion class

// notice we're using fetch, which requires NodeJS 17.5 or higher, and a runtime flag
// see the previous post for more info
async _fetchMessage(event) {
    const res = await fetch("https://webexapis.com/v1/messages/" +
        event.data.id, {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${this.botToken}`
        },
        method: "GET"
    });
    const messageData = await res.json();
    if(!messageData.text) {
        throw new Error("Could not fetch message content.");
    }
    return messageData.text;
}

If you console.log the messageData output from this fetch request, it'll look something like this:

The messageData object

As you can see, the message content takes two forms – first in plain text (marked with a red arrow), and second in an HTML block.  For our purposes, as you can see from the code block above, we'll use the plain-text content that doesn't include any formatting.

Message Analysis and Validation

This is a complex topic to say the least, and its complexities are beyond the scope of this blog post.  There are many ways to analyze the content of a message to determine user intent.  You could explore natural language processing (NLP), for which Cisco offers an open-source Python library called MindMeld.  Or you could leverage off-the-shelf software like Amazon Lex.

In my code, I took the simple approach of static string analysis, with some rigid rules around the expected format of the message, e.g.:

<tagged-bot-name> scale <name-of-deployment> to <number-of-instances>

It's not the most user-friendly approach, but it gets the job done for a blog post.

I have two intents available in my codebase – scaling a Deployment and updating a Deployment with a new image tag.  A switch statement runs analysis on the message text to determine which of the actions is intended, and a default case throws an error that will be handled in the index route handler.  Both have their own validation logic, which adds up to over sixty lines of string manipulation, so I won't list it all here.  If you're interested in reading through or leveraging my string-manipulation code, it can be found on GitHub.
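To give a feel for the approach, here's a minimal sketch of the scale-command path.  The function name, regex, and mention-stripping are illustrative only – the real code on GitHub handles more intents and edge cases:

```javascript
// minimal sketch of static string analysis for a scale command, e.g.:
//   "@KubernetesDeploymentManager scale nginx-deployment to 3"
// (illustrative only; not the exact code from the repo)
function interpretScaleCommand(rawMessageText) {
    // strip a leading bot mention, then match "scale <deployment> to <count>"
    const match = rawMessageText
        .replace(/^@\S+\s+/, "")
        .match(/^scale\s+([a-z0-9-]+)\s+to\s+(\d+)$/i);
    if (!match) {
        throw new Error("Unrecognized command format.");
    }
    return {
        type: "scale",
        deploymentName: match[1],
        scaleTarget: parseInt(match[2], 10)
    };
}
```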

Analysis Output

The happy-path output of the _interpret method is a new data transfer object (DTO) created in a new file:

// in dto/KubernetesCommand.js
export class KubernetesCommand {
    constructor(props = {}) {
        this.type = props.type;
        this.deploymentName = props.deploymentName;
        this.imageTag = props.imageTag;
        this.scaleTarget = props.scaleTarget;
    }
}

This standardizes the expected format of the analysis output, which can be anticipated by the various command handlers that we'll add to our Kubernetes service.
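For example, the scale command shown earlier would produce a DTO like this (a usage sketch; the class is repeated here so the example is self-contained):

```javascript
// the DTO class from dto/KubernetesCommand.js, repeated for a self-contained example
class KubernetesCommand {
    constructor(props = {}) {
        this.type = props.type;
        this.deploymentName = props.deploymentName;
        this.imageTag = props.imageTag;
        this.scaleTarget = props.scaleTarget;
    }
}

// what _interpret would produce for "scale nginx-deployment to 3"
const command = new KubernetesCommand({
    type: "scale",
    deploymentName: "nginx-deployment",
    scaleTarget: 3
});
// command.imageTag stays undefined here; only the update-image intent populates it
```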

Sending Commands to Kubernetes

For simplicity's sake, we'll focus on the scaling workflow rather than both of the intents I've got coded.  Suffice it to say, this barely scratches the surface of what's possible in your Bot's interactions with the Kubernetes API.

Creating a Webex Notification DTO

The first thing we'll do is craft the shared DTO that will contain the output of our Kubernetes command methods.  This will be passed into the WebexNotification service that we built in our last blog post, and will standardize the expected fields for the methods in that service.  It's a very simple class:

// in dto/Notification.js
export class Notification {
    constructor(props = {}) {
        this.success = props.success;
        this.message = props.message;
    }
}

This is the object we'll build when we return the results of our interactions with the Kubernetes SDK.

Handling Commands

Earlier in this post, we stubbed out the public takeAction method in the Kubernetes Service.  This is where we'll determine what action is being requested, and then pass it to internal private methods.  Since we're only looking at the scale workflow in this post, we'll have two paths in this implementation.  The code on GitHub has more.

// in services/Kubernetes.js
// inside the KubernetesService class
async takeAction(k8sCommand) {
    let result;
    switch (k8sCommand.type) {
        case "scale":
            result = await this._updateDeploymentScale(k8sCommand);
            break;
        default:
            throw new Error(`The action type ${k8sCommand.type} that was
                determined by the system is not supported.`);
    }
    return result;
}

Very straightforward – if a recognized command type is identified (in this case, just "scale"), an internal method is called and the results are returned.  If not, an error is thrown.

Implementing our internal _updateDeploymentScale method requires very little code.  However, it leverages the K8s SDK, which, to say the least, isn't very intuitive.  The data payload that we create consists of an operation (op) that we'll perform on a Deployment configuration property (path), with a new value (value).  The SDK's patchNamespacedDeployment method is documented in the Typedocs linked from the SDK repo.  Here's my implementation:

// in services/Kubernetes.js
// inside the KubernetesService class
async _updateDeploymentScale(k8sCommand) {
    // craft a PATCH body with an updated replica count
    const patch = [
        {
            "op": "replace",
            "path": "/spec/replicas",
            "value": k8sCommand.scaleTarget
        }
    ];
    // call the K8s API with a PATCH request
    const res = await
        this.appClient.patchNamespacedDeployment(k8sCommand.deploymentName,
            "default", patch, undefined, undefined, undefined, undefined,
            this.requestOptions);
    // validate the response and return a success object to the caller
    return this._validateScaleResponse(k8sCommand, res.body)
}

The method on the last line of that code block is responsible for crafting our response output.

// in services/Kubernetes.js
// inside the KubernetesService class
_validateScaleResponse(k8sCommand, template) {
    if (template.spec.replicas === k8sCommand.scaleTarget) {
        return new Notification({
            success: true,
            message: `Successfully scaled to ${k8sCommand.scaleTarget}
                instances on the ${k8sCommand.deploymentName} deployment`
        });
    } else {
        return new Notification({
            success: false,
            message: `The Kubernetes API returned a replica count of
                ${template.spec.replicas}, which does not match the desired
                ${k8sCommand.scaleTarget}`
        });
    }
}
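To make the JSON Patch semantics concrete, here's a small plain-JavaScript illustration of what the replace operation in _updateDeploymentScale asks the API server to do.  This is a sketch only – the real work happens server-side in Kubernetes, and this toy function handles just the replace op:

```javascript
// a sketch of JSON Patch "replace" semantics applied to a deployment spec
// (illustrative only; the Kubernetes API server performs this on the real object)
function applyReplacePatch(document, patch) {
    // deep-copy so the original document is left untouched
    const copy = JSON.parse(JSON.stringify(document));
    for (const op of patch) {
        if (op.op !== "replace") throw new Error(`Unsupported op: ${op.op}`);
        // walk the path segments: "/spec/replicas" -> ["spec", "replicas"]
        const segments = op.path.split("/").filter(Boolean);
        let target = copy;
        for (const segment of segments.slice(0, -1)) {
            target = target[segment];
        }
        target[segments[segments.length - 1]] = op.value;
    }
    return copy;
}

const deployment = { spec: { replicas: 2 } };
const patched = applyReplacePatch(deployment, [
    { op: "replace", path: "/spec/replicas", value: 3 }
]);
// patched.spec.replicas is now 3, while the original object still reports 2
```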

Updating the Webex Notification Service

We're almost at the end!  We still have one service that needs to be updated.  In our last blog post, we created a very simple method that sent a message to the Webex room where the Bot was called, based on a simple success-or-failure flag.  Now that we've built a more complex Bot, we need more complex user feedback.

There are only two methods that we need to cover here.  They could easily be compacted into one, but I prefer to keep them separate for granularity.

The public method that our route handler will call is sendNotification, which we'll refactor as follows:

// in services/WebexNotifications.js
// inside the WebexNotifications class
// notice that we're adding the original event
// and the Notification object
async sendNotification(event, notification) {
    let message = `<@personEmail:${event.data.personEmail}>`;
    if (!notification.success) {
        message += ` Oh no! Something went wrong!
            ${notification.message}`;
    } else {
        message += ` Well done! ${notification.message}`;
    }
    const req = this._buildRequest(event, message); // a new private
        // method, defined below
    const res = await fetch(req);
    return res.json();
}
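The string-building above is easier to see in isolation.  Here's a self-contained sketch of the markdown this produces; the email address and notification text are example values, and the function exists only for illustration:

```javascript
// sketch of the markdown mention string that sendNotification builds
// (the email address and notification text are example values)
function formatNotification(personEmail, notification) {
    let message = `<@personEmail:${personEmail}>`;
    message += notification.success
        ? ` Well done! ${notification.message}`
        : ` Oh no! Something went wrong! ${notification.message}`;
    return message;
}

const markdown = formatNotification("ops@example.com", {
    success: true,
    message: "Successfully scaled to 3 instances on the nginx-deployment deployment"
});
// markdown begins with "<@personEmail:ops@example.com> Well done! ..."
```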

Finally, we'll build the private _buildRequest method, which returns a Request object that can be passed to the fetch call in the method above:

// in services/WebexNotifications.js
// inside the WebexNotifications class
_buildRequest(event, message) {
    return new Request("https://webexapis.com/v1/messages/", {
        headers: this._setHeaders(),
        method: "POST",
        body: JSON.stringify({
            roomId: event.data.roomId,
            markdown: message
        })
    })
}

Tying Everything Together in the Route Handler

In previous posts, we used simple route handler logic in routes/index.js that first logged out the event data, and then went on to respond to a Webex user depending on their access.  We'll now take a different approach, which is to wire in our services.  We'll start by pulling in the services we've created so far, keeping in mind that this will all take place after the auth/authz middleware checks are run.  Here is the full code of the refactored route handler, with changes taking place in the import statements, initializations, and handler logic.

// revised routes/index.js
import express from 'express'
import {AppConfig} from '../config/AppConfig.js';
import {WebexNotifications} from '../services/WebexNotifications.js';
// ADD OUR NEW SERVICES AND TYPES
import {MessageIngestion} from "../services/MessageIngestion.js";
import {KubernetesService} from '../services/Kubernetes.js';
import {Notification} from "../dto/Notification.js";

const router = express.Router();
const config = new AppConfig();
const webex = new WebexNotifications(config);
// INSTANTIATE THE NEW SERVICES
const ingestion = new MessageIngestion(config);
const k8s = new KubernetesService(config);

// Our refactored route handler
router.post('/', async function(req, res) {
  const event = req.body;
  try {
    // message ingestion and analysis
    const command = await ingestion.determineCommand(event);
    // take action based on the command
    const notification = await k8s.takeAction(command);
    // respond to the user
    const wbxOutput = await webex.sendNotification(event, notification);
    res.statusCode = 200;
    res.send(wbxOutput);
  } catch (e) {
    // respond to the user
    await webex.sendNotification(event, new Notification({success: false,
        message: e}));
    res.statusCode = 500;
    res.end('Something went terribly wrong!');
  }
});
export default router;

Testing It Out!

If your service is publicly available, or if it's running locally and your tunnel is exposing it to the internet, go ahead and send a message to your Bot to try it out.  Remember that our test Deployment was called nginx-deployment, and we started with two instances.  Let's scale to three:

Successful scale to 3 instances

That takes care of the happy path.  Now let's see what happens if our command fails validation:

Failing validation

Success!  From here, the possibilities are limitless.  Feel free to share all of your experiences leveraging ChatOps for managing your Kubernetes deployments in the comments section below.
