Assign Memory Resources to Containers and Pods
A Pod (not a group of sea mammals!) is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers (in other words, a gang of dock workers: Docker!). A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In a pre-container world, being executed on the same physical or virtual machine would mean being executed on the same logical host.
If you are running Minikube, run the following command to enable the metrics-server:
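On Minikube the metrics-server is shipped as an addon, so enabling it is one command:

```shell
minikube addons enable metrics-server
```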
To see whether the metrics-server (or another provider of the resource metrics API, metrics.k8s.io) is running, run the following command:
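One way to list the registered API services and check for the metrics API (exact output varies by cluster):

```shell
kubectl get apiservices
```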
If the resource metrics API is available, the output includes a reference to metrics.k8s.io.
Create a namespace so that the resources you create in this exercise are isolated from the rest of your cluster.
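A minimal sketch, using mem-example as the namespace name (any unused name works):

```shell
kubectl create namespace mem-example
```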
To specify a memory request for a Container, include the resources:requests field in the Container's resource manifest. To specify a memory limit, include resources:limits.
In this exercise, you create a Pod that has one Container. The Container has a memory request of 100 MiB and a memory limit of 200 MiB. Here's the configuration file for the Pod (I changed it a bit to make my own version):
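A manifest along these lines; the polinux/stress image and the field layout follow the upstream Kubernetes example, so adjust names to taste:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "100Mi"   # the memory request
      limits:
        memory: "200Mi"   # the memory limit
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
```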
The args section in the configuration file provides arguments for the Container when it starts. The "--vm-bytes", "150M" arguments tell the Container to attempt to allocate 150 MiB of memory.
Create the Pod (I created a directory called 'share' under the root path and put the YAML inside it):
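Assuming the manifest was saved as /share/memory-request-limit.yaml (the path and file name are my own choices):

```shell
kubectl apply -f /share/memory-request-limit.yaml --namespace=mem-example
```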
Verify that the Pod Container is running:
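Assuming the Pod is named memory-demo:

```shell
kubectl get pod memory-demo --namespace=mem-example
```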
View detailed information about the Pod:
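The --output=yaml flag dumps the full Pod object, including the resources section (memory-demo is the assumed Pod name):

```shell
kubectl get pod memory-demo --output=yaml --namespace=mem-example
```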
The output shows that the one Container in the Pod has a memory request of 100 MiB and a memory limit of 200 MiB.
Run kubectl top to fetch the metrics for the Pod:
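For example (this requires the metrics-server from the earlier step to be running):

```shell
kubectl top pod memory-demo --namespace=mem-example
```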
The output shows that the Pod is using about 162,900,000 bytes of memory, which is about 150 MiB. This is greater than the Pod’s 100 MiB request, but within the Pod’s 200 MiB limit.
Delete your Pod:
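Assuming the same Pod name and namespace as before:

```shell
kubectl delete pod memory-demo --namespace=mem-example
```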
A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the Container becomes a candidate for termination. If the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure.
In this exercise, you create a Pod that attempts to allocate more memory than its limit. Here is the configuration file for a Pod that has one Container with a memory request of 50 MiB and a memory limit of 100 MiB:
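A sketch of that manifest, again modeled on the upstream Kubernetes example (names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"    # the memory request
      limits:
        memory: "100Mi"   # the memory limit
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
```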
In the args section of the configuration file, you can see that the Container will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.
Create the Pod:
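Assuming the second manifest was saved as /share/memory-request-limit-2.yaml (a hypothetical path):

```shell
kubectl apply -f /share/memory-request-limit-2.yaml --namespace=mem-example
```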
View detailed information about the Pod:
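Assuming the Pod is named memory-demo-2:

```shell
kubectl get pod memory-demo-2 --namespace=mem-example
```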
At this point, the Container might be running or killed. Repeat the preceding command until the Container is killed:
Get a more detailed view of the Container status:
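Dumping the full object shows the Container's lastState, where the termination reason is recorded:

```shell
kubectl get pod memory-demo-2 --output=yaml --namespace=mem-example
```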
The output shows that the Container was killed because it ran out of memory (OOMKilled):
The Container in this exercise can be restarted, so the kubelet restarts it. Repeat this command several times to see that the Container is repeatedly killed and restarted:
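Running the status command a few times shows the state flip back and forth (memory-demo-2 is the assumed Pod name):

```shell
kubectl get pod memory-demo-2 --namespace=mem-example
```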
The output shows that the Container is killed, restarted, killed again, restarted again, and so on:
View detailed information about the Pod history:
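kubectl describe includes the Pod's event history, which records each restart:

```shell
kubectl describe pod memory-demo-2 --namespace=mem-example
```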
The output shows that the Container starts and fails repeatedly:
View detailed information about your cluster’s Nodes:
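For example:

```shell
kubectl describe nodes
```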
The output includes a record of the Container being killed because of an out-of-memory condition:
Delete your Pod:
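Assuming the same names as before; deleting the namespace is optional cleanup once you are done with the exercise:

```shell
kubectl delete pod memory-demo-2 --namespace=mem-example
# optionally, remove the whole exercise namespace:
kubectl delete namespace mem-example
```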