When to set “Don't fragment” flag in IP header?

There is a “Don't fragment” flag in the IP header. Could applications set this flag? When should this flag be set, and why?

Elishaelision answered 30/8, 2017 at 6:45

If the 'DF' bit is set on a packet, a router that would normally fragment a packet larger than the next link's MTU (and potentially deliver the fragments out of order) will instead drop the packet. The router is expected to send back an ICMP "Fragmentation Needed" message, allowing the sending host to account for the lower MTU on the path to the destination host. The sending side then reduces its estimate of the connection's path MTU (Maximum Transmission Unit) and re-sends in smaller segments. This process is called PMTU-D ("Path MTU Discovery").
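
A minimal sketch of how an application might opt into this: on Linux the per-socket knob is the IP_MTU_DISCOVER option with IP_PMTUDISC_DO, which makes the kernel set DF on everything sent on the socket (the option names and the destination address below are Linux-specific and purely illustrative; BSD systems expose a different option such as IP_DONTFRAG).

```c
/* Sketch: enable Path MTU Discovery (DF set on outgoing packets)
 * on a Linux TCP socket. Error handling is kept short for brevity. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* IP_PMTUDISC_DO: always set DF; the kernel never fragments locally and
     * relies on ICMP "Fragmentation Needed" to shrink its path-MTU estimate. */
    int val = IP_PMTUDISC_DO;
    if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val)) < 0) {
        perror("setsockopt(IP_MTU_DISCOVER)");
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(80);                     /* illustrative destination */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* documentation address */

    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("connect");

    close(fd);
    return 0;
}
```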

Fragmentation causes extra CPU overhead at the receiving end to re-assemble the packets (and to handle missing fragments).

Typically, the 'DF' bit is a configurable parameter of the IP stack. The ping utility, for example, has an option to set DF.

It is often useful to avoid fragmentation, since apart from the CPU cost of fragmenting and re-assembling, it can hurt throughput (losing a single fragment means the whole datagram must be re-transmitted). For this reason it is often desirable to know the largest packet size that fits the path, and Path MTU Discovery is used to find that size, simply by setting the DF bit (say, on a ping) and probing with different sizes.
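
On Linux, the path MTU the kernel has learned for a connection can be read back with the IP_MTU socket option on a connected socket (a Linux-specific detail assumed here, not something stated above); the same kind of probing can also be done by hand with ping's DF option and varying payload sizes.

```c
/* Sketch (Linux-specific): read the kernel's current path-MTU estimate
 * for a connected socket via getsockopt(IP_MTU). */
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Assumes 'fd' is an already-connected socket, e.g. as in the previous sketch. */
int print_path_mtu(int fd)
{
    int mtu = 0;
    socklen_t len = sizeof(mtu);
    if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) < 0) {
        perror("getsockopt(IP_MTU)");
        return -1;
    }
    printf("current path MTU estimate: %d bytes\n", mtu);
    return mtu;
}
```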

Guardsman answered 30/8, 2017 at 10:0

Further down in RFC 791 it says:

If the Don't Fragment flag (DF) bit is set, then internet
fragmentation of this datagram is NOT permitted, although it may be
discarded.  This can be used to prohibit fragmentation in cases
where the receiving host does not have sufficient resources to
reassemble internet fragments.

So it appears what they had in mind originally was small embedded devices with the simplest possible implementation of IP, and little memory. Today, you might think of an IoT device like a smart light bulb or smoke alarm. They might not have the code or memory to reassemble fragments, and so the software communicating with them would set DF.

Haggerty answered 17/3, 2020 at 21:57

The only situations I can think of where you would possibly want to set this flag are:

  1. If you are building something like a client-server application where you don't want the other side having to deal with a fragmented packet and would rather accept a packet loss instead (see the sketch after this list).
  2. Or if you are on a network with a very specific set of restrictions, possibly caused by bandwidth issues or specific firewall behaviour.
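
To make the first point concrete, here is a hedged sketch (again assuming the Linux-specific IP_MTU_DISCOVER / IP_PMTUDISC_DO options and Linux's EMSGSIZE behaviour, none of which are part of the original answer): with DF forced on a UDP socket, a datagram larger than the MTU is rejected at the sender instead of being fragmented, so the application sees an error rather than the peer seeing fragments.

```c
/* Sketch: UDP sender that prefers an error over fragmentation.
 * With IP_PMTUDISC_DO set, an oversized sendto() typically fails
 * with EMSGSIZE rather than producing IP fragments (Linux behaviour). */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int val = IP_PMTUDISC_DO;                 /* set DF, never fragment */
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));

    struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(9) };
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);   /* documentation address */

    char payload[4000] = {0};                 /* larger than a typical 1500-byte MTU */
    if (sendto(fd, payload, sizeof(payload), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0 && errno == EMSGSIZE)
        fprintf(stderr, "datagram exceeds MTU; not fragmented, not sent\n");

    close(fd);
    return 0;
}
```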

Except for such specific circumstances, you would likely never touch it.

From RFC 791:

Fragmentation of an internet datagram is necessary when it originates in a local net that allows a large packet size and must traverse a local net that limits packets to a smaller size to reach its destination.

An internet datagram can be marked "don't fragment." Any internet datagram so marked is not to be internet fragmented under any circumstances. If internet datagram marked don't fragment cannot be delivered to its destination without fragmenting it, it is to be discarded instead.

Could applications set this flag? Yes, if you write code low-level enough that you are dealing with the IP header yourself (or use a socket option that asks the IP stack to set it for you). This part of the question is a bit broad for a more specific answer; you should probably figure out whether you want to set it before worrying about how.
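
If you do end up at that level, the DF bit sits in the flags/fragment-offset field of the IPv4 header. A minimal sketch, assuming a Linux raw socket with IP_HDRINCL and glibc's struct iphdr (my illustration, not part of the original answer):

```c
/* Sketch: with a raw socket and IP_HDRINCL the application builds the IP
 * header itself and sets the DF bit in the flags/fragment-offset field.
 * Requires root/CAP_NET_RAW; header struct and layout are Linux/glibc. */
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/ip.h>

int build_df_header(struct iphdr *ip, unsigned short total_len)
{
    memset(ip, 0, sizeof(*ip));
    ip->version  = 4;
    ip->ihl      = 5;                      /* 20-byte header, no options */
    ip->tot_len  = htons(total_len);
    ip->ttl      = 64;
    ip->protocol = IPPROTO_UDP;
    ip->frag_off = htons(IP_DF);           /* DF flag, 0x4000 in <netinet/ip.h> */
    /* addresses, checksum, payload, and sendto() on a socket created with
     * socket(AF_INET, SOCK_RAW, IPPROTO_RAW) plus IP_HDRINCL are omitted */
    return 0;
}
```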

Illgotten answered 30/8, 2017 at 10:16
