When you’ve lost a Docker image from your remote repository but it’s still cached on a Kubernetes Node, you can recover it using containerd’s command-line tool, ctr. This tool is typically bundled with containerd installations.
Using ctr
ctr ships with containerd and can be used alongside crictl, a standalone kubernetes-sigs tool for inspecting CRI-compatible runtimes. Here’s how to interact with containerd using ctr:
Set up the environment: To interact with ctr, define this environment variable:
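A minimal sketch of the setup and the recovery commands. `CONTAINERD_NAMESPACE` is the environment variable ctr reads for its `--namespace` flag; the image name below is a placeholder for whichever image you are recovering:

```shell
# Kubernetes-managed images live in containerd's k8s.io namespace,
# not the "default" namespace ctr uses out of the box
export CONTAINERD_NAMESPACE=k8s.io

# List the images cached on this node
ctr images ls

# Export the cached image to a tarball you can re-push elsewhere
# (replace the image reference with the one you lost)
ctr images export recovered-image.tar docker.io/library/myimage:latest
```

From another machine you can then load the tarball with `docker load -i recovered-image.tar` and push it back to your registry.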
There are scenarios where you want to run a DaemonSet but need its pods only on specific nodes, or want to stop the pods for a period of time without deleting the DaemonSet. You can use labels with a nodeSelector to do this.
A DaemonSet is designed to run one pod per node in your Kubernetes cluster: if you have 3 nodes, you will see 3 DaemonSet pods running.
To reduce the number of DaemonSet pods to 0 without deleting the DaemonSet, or to choose which nodes will run the pods, it is recommended to use node selectors. Here are two ways to implement this solution:
Setting Labels to Kubernetes Nodes
Add a label to all existing nodes. In this example I will use “fluent-bit=false” to control how many FluentBit DaemonSet pods run on my nodes. To add the label, use this command:
kubectl get nodes -o name | xargs -I{} kubectl label {} fluent-bit=false --overwrite
Note: You may need to rerun this command if new nodes are added to the cluster.
Verify the label change:
kubectl get nodes --show-labels
Modify your DaemonSet manifest adding a new selector:
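A minimal sketch of what the manifest change could look like, assuming a FluentBit DaemonSet; the metadata names are examples:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      # Pods are scheduled only on nodes labeled fluent-bit=true.
      # Since all nodes currently carry fluent-bit=false, no pods run.
      nodeSelector:
        fluent-bit: "true"
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:latest
```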
To re-enable the DaemonSet pods in the future, update the node labels to the value the nodeSelector expects, e.g. “fluent-bit=true”:
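This mirrors the labeling command from earlier, flipping the value so the nodeSelector matches again:

```shell
# Relabel every node so the DaemonSet's nodeSelector matches
# and the pods are scheduled again
kubectl get nodes -o name | xargs -I{} kubectl label {} fluent-bit=true --overwrite
```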
Patching Node Selectors
Patching on the fly. In this case, the command sets the nodeSelector to a label no node has, “non-existing”: “true”, which means the fluent-bit pods will only be scheduled on nodes carrying that label — effectively none, scaling the DaemonSet to 0 without deleting it.
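A sketch of that patch, assuming the DaemonSet is named fluent-bit in a logging namespace (adjust both to your setup):

```shell
# Merge-patch the pod template's nodeSelector with a label no node has;
# the DaemonSet controller then evicts its pods from every node
kubectl patch daemonset fluent-bit -n logging --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-existing":"true"}}}}}'
```

To restore the pods, patch the nodeSelector back to a label your nodes actually carry.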
Windows applications often use different logging mechanisms than their Linux counterparts. Instead of relying on STDOUT, Windows apps typically utilize Windows-specific logging methods, such as ETW, the Event Log, or custom log files like IIS logs. If you’re running a Windows container in an ECS cluster and want to capture Windows and IIS logs in CloudWatch, you’ll need to implement Log Monitor instrumentation. This setup redirects IIS and system logs to STDOUT, allowing the awslogs driver to automatically capture and forward them to CloudWatch Logs.
Setting Up Log Monitor and CloudWatch for IIS Logs
Follow these steps to configure Log Monitor and send IIS logs to CloudWatch:
Identify the Log Providers.
Determine the providers to include in the configuration file using:
logman query providers | findstr "<GUID or Provider Name>"
For IIS, you can use:
IIS: WWW Server with GUID 3A2A4E84-4C21-4981-AE10-3FDA0D9B0F83
Create the LogMonitorConfig.json file.
This file specifies which logs to capture. Below is an example configuration capturing system logs, application logs, and IIS logs:
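A sketch of such a LogMonitorConfig.json, following the schema of Microsoft’s windows-container-tools Log Monitor (the file directory and log levels are example values):

```json
{
  "LogConfig": {
    "sources": [
      {
        "type": "EventLog",
        "startAtOldestRecord": true,
        "eventFormatMultiLine": false,
        "channels": [
          { "name": "system", "level": "Information" },
          { "name": "application", "level": "Error" }
        ]
      },
      {
        "type": "File",
        "directory": "c:\\inetpub\\logs",
        "filter": "*.log",
        "includeSubdirectories": true
      },
      {
        "type": "ETW",
        "eventFormatMultiLine": false,
        "providers": [
          {
            "providerName": "IIS: WWW Server",
            "providerGuid": "3A2A4E84-4C21-4981-AE10-3FDA0D9B0F83",
            "level": "Information"
          }
        ]
      }
    ]
  }
}
```

The EventLog source covers system and application logs, the File source picks up the IIS text logs under c:\inetpub\logs, and the ETW source subscribes to the IIS: WWW Server provider identified above.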
Whenever you are designing an architecture with microservices, you will face the question of how to implement an API Gateway, since you need a way to communicate with and consume multiple services, generally through APIs. A possible solution is to have a single entry point for all your clients: an API Gateway, which handles all requests and routes them to the appropriate microservices.
There are different ways to implement an API Gateway yourself, or you can pay for a managed service from your cloud provider.
In this post I will pick the easiest way that I found to create one for a microservice architecture using .NET and YARP. Here is a general overview of a microservice architecture.
YARP
YARP (which stands for “Yet Another Reverse Proxy”) is an open-source project that Microsoft built to improve routing to internal services via a built-in reverse proxy server. It has become very popular and is used in several Azure products, such as App Service.
To get started, you need to create an ASP.NET Core empty project. I chose .NET 7.0 for this post.
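A sketch of the project setup from the command line; ApiGateway is just an example project name:

```shell
# Create an empty ASP.NET Core project targeting .NET 7.0
dotnet new web -n ApiGateway -f net7.0
cd ApiGateway

# Add the YARP reverse proxy NuGet package
dotnet add package Yarp.ReverseProxy
```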
To add the YARP configuration you will use the appsettings.json file. YARP uses routes and clusters, typically defined inside a ReverseProxy object in the configuration and loaded in the builder setup. You can find more information about the different configuration settings here.
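Wiring this up in Program.cs could look like the following sketch, using YARP’s standard registration calls and assuming the config lives under a ReverseProxy section:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register YARP and load routes/clusters from the ReverseProxy config section
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();

// Map the reverse proxy as the request pipeline's endpoint
app.MapReverseProxy();

app.Run();
```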
In this example, I am using Products and Employee microservices. So I will have routes like employee-route and product-route and clusters as product-cluster and employee-cluster pointing to destinations. Open your appsettings.json and apply the following configuration.
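A sketch of that configuration, following YARP’s Routes/Clusters schema; the destination addresses and ports are example values for where the two microservices might be listening:

```json
{
  "ReverseProxy": {
    "Routes": {
      "employee-route": {
        "ClusterId": "employee-cluster",
        "Match": { "Path": "/employee/{**catch-all}" }
      },
      "product-route": {
        "ClusterId": "product-cluster",
        "Match": { "Path": "/product/{**catch-all}" }
      }
    },
    "Clusters": {
      "employee-cluster": {
        "Destinations": {
          "destination1": { "Address": "https://localhost:5001/" }
        }
      },
      "product-cluster": {
        "Destinations": {
          "destination1": { "Address": "https://localhost:5002/" }
        }
      }
    }
  }
}
```

Each route matches a path prefix and forwards to its cluster; a cluster can list multiple destinations if a service runs several instances.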
In scenarios where you need to allow CORS for specific origins, you can add a CORS policy as described in this Microsoft doc. Here is a configuration example:
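A sketch of one way to do this: register a named CORS policy in Program.cs and reference it from a route. The policy name and origin below are example values:

```csharp
var builder = WebApplication.CreateBuilder(args);

// "customPolicy" and the origin are example values; replace with your own
builder.Services.AddCors(options =>
{
    options.AddPolicy("customPolicy", policy =>
        policy.WithOrigins("https://example.com")
              .AllowAnyMethod()
              .AllowAnyHeader());
});

builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();

// CORS middleware must run before the proxy handles the request
app.UseCors();
app.MapReverseProxy();

app.Run();
```

To apply the policy per route, add `"CorsPolicy": "customPolicy"` to the route’s entry in appsettings.json.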
Finally, if you want more detail from YARP for debugging or production diagnostics, you can set the YARP log level (Information, Warning, or Error) inside the Logging object, as follows:
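A sketch of that appsettings.json fragment; the "Yarp" category is an assumption based on YARP’s logger namespace prefix:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Yarp": "Information"
    }
  }
}
```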