How to identify orphaned veth interfaces and how to delete them?
When I start any container with docker run, a new veth interface appears. After the container is deleted, the veth interface that was linked to it should be removed as well. However, sometimes this fails (most often when the container started with errors):

root@hostname /home # ifconfig | grep veth | wc -l
53
root@hostname /home # docker run -d -P  axibase/atsd -name axibase-atsd-
28381035d1ae2800dea51474c4dee9525f56c2347b1583f56131d8a23451a84e
Error response from daemon: Cannot start container 28381035d1ae2800dea51474c4dee9525f56c2347b1583f56131d8a23451a84e: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 33359 -j DNAT --to-destination 172.17.2.136:8883 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)
root@hostname /home # ifconfig | grep veth | wc -l
55
root@hostname /home # docker rm -f 2838
2838
root@hostname /home # ifconfig | grep veth | wc -l
55

How can I identify which interfaces are linked to existing containers, and how can I remove the extra interfaces that were linked to removed containers?

This doesn't work (as root):

ifconfig veth55d245e down
brctl delbr veth55d245e
can't delete bridge veth55d245e: Operation not permitted

For now I identify extra interfaces by transmitted traffic (if there is no activity, I assume the interface is orphaned).
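That heuristic can be sketched as follows. It samples each veth's tx_bytes counter from sysfs over a short window; the 2-second window is an arbitrary choice, and the conclusion is only an assumption, since a quiet but live container looks identical to an orphan:

```shell
# Heuristic sketch: a veth with no transmitted traffic over a sampling
# window is *assumed* orphaned. Unreliable: idle containers also match.
tx_bytes() { cat "/sys/class/net/$1/statistics/tx_bytes" 2>/dev/null || echo 0; }

for dev in /sys/class/net/veth*; do
    [ -e "$dev" ] || continue        # no veth interfaces present
    name=$(basename "$dev")
    before=$(tx_bytes "$name")
    sleep 2                          # sampling window (arbitrary)
    after=$(tx_bytes "$name")
    if [ "$after" -eq "$before" ]; then
        echo "no activity: $name"
    fi
done
```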

UPDATE

root@hostname ~ # uname -a
Linux hostname 3.13.0-53-generic #89-Ubuntu SMP Wed May 20 10:34:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

root@hostname ~ # docker info
Containers: 10
Images: 273
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 502
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-53-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 8
Total Memory: 47.16 GiB
Name: hostname
ID: 3SQM:44OG:77HJ:GBAU:2OWZ:C5CN:UWDV:JHRZ:LM7L:FJUN:AGUQ:HFAL
WARNING: No swap limit support

root@hostname ~ # docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64
Fagoting answered 13/8, 2015 at 13:18 Comment(0)

Fixed by upgrading Docker to the latest version. New version:

root@hostname ~ # docker version
Client:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:35:49 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:35:49 UTC 2015
 OS/Arch:      linux/amd64

Now interfaces are removed together with their containers. The old orphaned interfaces were deleted manually with the following command:

# ip link delete <ifname>
Fagoting answered 14/8, 2015 at 10:33 Comment(0)

There are three problems here:

  1. Starting a single container should not increase the count of veth interfaces on your system by 2, because when Docker creates a veth pair, one end of the pair is isolated in the container namespace and is not visible from the host.

  2. It looks like you're not able to start a container:

    Error response from daemon: Cannot start container ...
    
  3. Docker should be cleaning up the veth interfaces automatically.

These facts make me suspect that there is something fundamentally wrong in your environment. Can you update your question with details about what distribution you're using, which kernel version, and which Docker version?

How can I identify which interfaces are linked to existing containers, and how can I remove the extra interfaces that were linked to removed containers?

With respect to manually deleting veth interfaces: A veth interface isn't a bridge, so of course you can't delete one with brctl.

To delete a veth interface:

# ip link delete <ifname>

Detecting "idle" interfaces is a thornier problem, because if you just look at traffic you're liable to accidentally delete something that was still in use but that just wasn't seeing much activity.

I think what you actually want to look for are veth interfaces whose peer is also visible in the global network namespace. You can find the peer of a veth interface (for example, from the `@ifN` suffix in `ip link` output, or from ethtool's `peer_ifindex` statistic), and then it is a simple matter of checking whether that interface is visible, and deleting one or the other (deleting a veth interface also removes its peer).
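A minimal sketch of that check, assuming iproute2: in `ip -o link` output a veth appears as e.g. `veth55d245e@if6`, where `6` is the peer's ifindex, so a veth whose peer index also exists as a host interface has both ends in the global namespace:

```shell
# parse_veth_line: extract the interface name and its peer ifindex from
# one line of `ip -o link show type veth` output, e.g.
#   "7: veth55d245e@if6: <BROADCAST,...> mtu 1500 ..."
parse_veth_line() {
    local line=$1
    local field=${line#*: }      # drop the leading "7: "
    field=${field%%:*}           # keep "veth55d245e@if6"
    printf '%s %s\n' "${field%%@*}" "${field##*@if}"
}

# Flag each host veth whose peer index is also a host interface,
# i.e. both ends of the pair are in the global namespace.
ip -o link show type veth | while IFS= read -r line; do
    set -- $(parse_veth_line "$line")
    name=$1 peer=$2
    if ip -o link show | grep -q "^${peer}: "; then
        echo "candidate orphan: $name (peer ifindex $peer)"
        # ip link delete "$name"   # deleting one end removes both
    fi
done
```

Note that ifindexes are per-namespace, so a container-side peer index can coincide with an unrelated host interface; treat matches as candidates, not certainties.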

Stunk answered 13/8, 2015 at 13:27 Comment(1)
Thanks for the advice. To generate some traffic activity I wrote a simple script which iterates over all containers and execs ping -c 1 google.com. System information has been added to the question. # ip link delete <ifname> works correctly. Now I'm interested in identifying orphaned interfaces and finding the mapping between actual interfaces and running containers.Fagoting
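For the container-to-interface mapping asked about here: inside a container, `/sys/class/net/eth0/iflink` holds the ifindex of the host-side veth peer, which can then be resolved to an interface name on the host. A sketch (the sysfs path parameter is only there so the lookup can be exercised without live containers):

```shell
# Resolve an ifindex to its interface name by scanning sysfs entries.
# The second argument defaults to the real sysfs tree.
ifindex_to_name() {
    idx=$1
    sysfs=${2:-/sys/class/net}
    for d in "$sysfs"/*; do
        [ -f "$d/ifindex" ] || continue
        if [ "$(cat "$d/ifindex")" = "$idx" ]; then
            basename "$d"
            return 0
        fi
    done
    return 1
}

# Usage against running containers (requires Docker):
# for c in $(docker ps -q); do
#     idx=$(docker exec "$c" cat /sys/class/net/eth0/iflink)
#     echo "$c -> $(ifindex_to_name "$idx")"
# done
```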

Here is how you can delete them all at once, matched by name pattern.

for name in $(ifconfig -a | sed 's/[ \t].*//;/^\(lo\|\)$/d' | grep veth)
do
    echo "$name"
    # ip link delete "$name" # uncomment this
done
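A variant of the same loop using iproute2 (`ip -o link`) instead of the deprecated ifconfig; the helper strips the index, flags, and the `@ifN` peer suffix that ip appends to veth names:

```shell
# veth_names: read `ip -o link` output on stdin and print bare veth
# names, one per line.
veth_names() {
    awk -F': ' '{print $2}' | cut -d@ -f1 | grep '^veth'
}

for name in $(ip -o link show | veth_names); do
    echo "$name"
    # ip link delete "$name"   # uncomment this to actually delete
done
```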
Oculo answered 18/3, 2020 at 7:55 Comment(3)
Depending on your OS, try ip a (unix.stackexchange.com/questions/145447/…)Oculo
My proposal is to update the answer to use ip a instead of ifconfig; sorry, I was maybe not clear enough.Globoid
Also the output is different with ip a, so the sed should be adapted.Globoid

In my case, all of the virtual Ethernet interfaces had been created by Docker. To fix this, I first stopped all Docker containers:

docker stop $(docker ps -q)

And then deleted all networks created by Docker:

docker network rm $(docker network ls -q)
Permeance answered 3/1, 2022 at 19:28 Comment(0)
