Planning and managing your cloud ecosystem and environments is important for reducing production downtime and maintaining a functioning workload. In the “Managing your cloud ecosystems” blog series, we cover different strategies for ensuring that your setup functions smoothly with minimal downtime.
To start things off, the first topic in this blog series is ensuring workload continuity during worker node upgrades.
What are worker node upgrades?
Worker node upgrades apply important security updates and patches and should be completed regularly. For more information on the types of worker node upgrades, see Updating VPC worker nodes and Updating Classic worker nodes in the IBM Cloud Kubernetes Service documentation.
During an upgrade, some of your worker nodes may become unavailable. It’s important to make sure that your cluster has enough capacity to continue running your workload throughout the upgrade process. Building a pipeline to update your worker nodes without causing application downtime allows you to apply worker node upgrades regularly and with minimal effort.
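For example, one quick way to gauge whether your remaining worker nodes can absorb the workload while others are unavailable is to compare each node’s allocatable resources with what is already requested. The following kubectl commands are a rough sketch of that check and assume you already have kubectl access to the cluster:

```sh
# List the worker nodes and confirm that they are Ready.
kubectl get nodes

# Show each node's allocatable CPU and memory alongside the resources
# currently requested by pods, to estimate the available headroom.
kubectl describe nodes | grep -A 8 "Allocated resources"
```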
For classic worker nodes
Create a Kubernetes ConfigMap that defines the maximum number of worker nodes that can be unavailable at a time, including during an upgrade. The maximum value is specified as a percentage. You can also use labels to apply different rules to different worker nodes. For full instructions, see Updating Classic worker nodes in the CLI with a configmap in the Kubernetes service documentation. If you choose not to create a ConfigMap, the default maximum number of worker nodes that can become unavailable is 20%.
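As a rough sketch of the idea only: the ConfigMap name and data key below are placeholders rather than confirmed names, so check the linked Updating Classic worker nodes in the CLI with a configmap topic for the exact names and supported settings before applying anything:

```sh
# Illustrative sketch only: the ConfigMap name and data key are placeholders.
# Use the exact names from the "Updating Classic worker nodes in the CLI
# with a configmap" documentation.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: ibm-cluster-update-configuration   # placeholder name; verify in the docs
  namespace: kube-system
data:
  # Allow at most 30% of worker nodes to be unavailable during an upgrade.
  max_unavailable_percentage: "30"         # placeholder key; verify in the docs
EOF
```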
If you need your total number of worker nodes to remain up and running, use the ibmcloud ks worker-pool resize command to temporarily add extra worker nodes to your cluster during the upgrade process. When the upgrade is complete, use the same command to remove the additional worker nodes and return your worker pool to its previous size.
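For instance, assuming a cluster named mycluster with a worker pool named default that normally runs three worker nodes per zone, the temporary resize might look like the following sketch (the names and sizes are placeholders for your own values):

```sh
# Temporarily add one extra worker node per zone before the upgrade.
ibmcloud ks worker-pool resize --cluster mycluster --worker-pool default --size-per-zone 4

# After the upgrade is complete, return the pool to its previous size.
ibmcloud ks worker-pool resize --cluster mycluster --worker-pool default --size-per-zone 3
```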
For VPC worker nodes
VPC worker nodes are upgraded by removing the old worker node and provisioning a new worker node that runs at the new version. You can upgrade multiple worker nodes at the same time, but if you upgrade several at once, they become unavailable at the same time. To make sure that you have enough capacity to run your workload during the upgrade, you can either resize your worker pools to temporarily add extra worker nodes (similar to the process described for classic worker nodes) or plan to upgrade your worker nodes one at a time.
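If you go with the one-at-a-time approach, the flow might look something like the following sketch, assuming a cluster named mycluster (the cluster name and worker ID are placeholders):

```sh
# List the worker nodes in the cluster to get their IDs.
ibmcloud ks worker ls --cluster mycluster

# Replace one VPC worker node; the --update option provisions the
# replacement at the newer version. Wait for the new worker node to
# reach the Ready state before moving on to the next one.
ibmcloud ks worker replace --cluster mycluster --worker <worker_id> --update
```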
Wrap up
Whether you choose to implement a ConfigMap, resize your worker pool or upgrade your worker nodes one by one, creating a workload continuity plan before you upgrade your worker nodes helps you build a more streamlined, efficient setup with limited downtime.
Now that you have a plan to prevent disruptions during worker node upgrades, keep an eye out for the next blog in our series, which will discuss how, when and why to apply major, minor or patch upgrades to your clusters and worker nodes.
Learn more about IBM Cloud Kubernetes Service clusters