What was the motivation for adding the IPV6_V6ONLY flag?

In IPv6 networking, the IPV6_V6ONLY flag is used to ensure that a socket will only use IPv6, and in particular that IPv4-to-IPv6 mapping won't be used for that socket. On many OSes, IPV6_V6ONLY is not set by default, but on some (e.g. Windows 7) it is set by default.

My question is: What was the motivation for introducing this flag? Is there something about IPv4-to-IPv6 mapping that was causing problems, and thus people needed a way to disable it? It would seem to me that if someone didn't want to use IPv4-to-IPv6 mapping, they could simply not specify an IPv4-mapped IPv6 address. What am I missing here?

Saraann answered 22/4, 2010 at 19:11 Comment(3)
@Eric Eijkelenboom: no, it doesn't. (Wetzel)
Since this is a networking question and it is not programming related, I assumed it did. (Kopp)
These flags are parameters given to the system calls that open a socket, used when programming, not when configuring or maintaining. IOW: it's the developer and not the admin who uses it. (Wetzel)

I don't know why it would be the default; but it's the kind of flag I would always set explicitly, no matter what the default is.

As for why it exists in the first place, I guess it allows you to keep existing IPv4-only servers and run new ones on the same port, but just for IPv6 connections. Or the new server can simply proxy clients to the old one, making IPv6 functionality easy and painless to add to old services.

Wetzel answered 22/4, 2010 at 19:32 Comment(0)

Not all IPv6-capable platforms support dual-stack sockets, so the question becomes: how do applications that need to maximize IPv6 compatibility either know that dual-stack is supported, or bind separately when it's not? The only universal answer is IPV6_V6ONLY.

An application ignoring IPV6_V6ONLY, or written before dual-stack capable IP stacks existed, may find that binding separately to IPv4 fails in a dual-stack environment, because the IPv6 dual-stack socket also binds to IPv4, preventing the IPv4 socket from binding. The application may also not be expecting IPv4 over IPv6, due to protocol- or application-level addressing concerns or IP access controls.

This or similar situations most likely prompted MS et al. to default to 1, even though RFC 3493 declares 0 to be the default. 1 theoretically maximizes backwards compatibility. Notably, Windows XP/2003 does not support dual-stack sockets.

There is also no shortage of applications which, unfortunately, need to pass lower-layer information to operate correctly, so this option can be quite useful for planning an IPv4/IPv6 compatibility strategy that best fits the requirements and existing codebases.
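Since platforms disagree on the default, a portable program can simply ask: create a socket and read the option back. A minimal sketch assuming a POSIX sockets API (the function name v6only_default is invented for illustration, not part of any API):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Return the platform's current IPV6_V6ONLY setting for a new TCP
 * socket: 0 = dual-stack allowed, 1 = IPv6-only, -1 = unavailable. */
int v6only_default(void)
{
    int s = socket(AF_INET6, SOCK_STREAM, 0);
    if (s < 0)
        return -1;                      /* no IPv6 support at all */

    int val = -1;
    socklen_t len = sizeof(val);
    if (getsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &val, &len) < 0)
        val = -1;                       /* option not supported */
    close(s);
    return val;
}
```

In practice most applications skip the query and just set the option to the value they need, which sidesteps the differing defaults entirely.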

Malodorous answered 10/5, 2010 at 16:41 Comment(0)

The reason most often mentioned is for the case where the server has some form of ACL (Access Control List). For instance, imagine a server with rules like:

Allow 192.0.2.4
Deny all

It runs on IPv4. Now, someone runs it on a machine with IPv6 and, depending on some parameters, IPv4 requests are accepted on the IPv6 socket, mapped as ::ffff:192.0.2.4 and then no longer matched by the first ACL rule. Suddenly, access would be denied.

Being explicit in your application (using IPV6_V6ONLY) would solve the problem, whatever default the operating system has.
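If such a server must accept mapped addresses anyway, one hedged approach is to normalize them before the ACL check, using the standard IN6_IS_ADDR_V4MAPPED macro (the helper name acl_key below is invented for illustration):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>

/* Render the address a client connected from in the form an ACL
 * expects: a dotted quad for ::ffff:a.b.c.d, plain IPv6 otherwise. */
static void acl_key(const struct in6_addr *addr, char *buf, size_t len)
{
    if (IN6_IS_ADDR_V4MAPPED(addr)) {
        struct in_addr v4;
        memcpy(&v4, &addr->s6_addr[12], sizeof(v4)); /* low 32 bits */
        inet_ntop(AF_INET, &v4, buf, len);
    } else {
        inet_ntop(AF_INET6, addr, buf, len);
    }
}
```

With this, a connection arriving as ::ffff:192.0.2.4 would be checked against the ACL as 192.0.2.4, so an IPv4 Allow rule still matches.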

Kamin answered 9/5, 2010 at 17:4 Comment(1)
A server simply shouldn't use dual-stack sockets with mapped addresses if it cannot apply IPv4 ACLs to IPv4 addresses in the mapped form. (Zug)

For Linux: when writing a service that listens on both IPv4 and IPv6 sockets on the same service port, e.g. port 2001, you MUST call setsockopt(s, SOL_IPV6, IPV6_V6ONLY, &one, sizeof(one)); on the IPv6 socket (IPPROTO_IPV6 is the portable name for the SOL_IPV6 level). If you do not, the bind() operation for the IPv4 socket fails with "Address already in use".
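A minimal sketch of that pattern, assuming a POSIX sockets API: bind the IPv6 socket with IPV6_V6ONLY set, then bind an IPv4 socket on the same port (the helper name bind_both and the use of port 0 are illustrative choices, not from the answer above):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind an IPv6 listener with IPV6_V6ONLY set, then an IPv4 listener
 * on the same port. Returns 0 if both binds succeed. */
int bind_both(void)
{
    int s6 = socket(AF_INET6, SOCK_STREAM, 0);
    if (s6 < 0)
        return -1;

    int one = 1;  /* without this, the IPv4 bind below fails on Linux */
    setsockopt(s6, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one));

    struct sockaddr_in6 a6;
    memset(&a6, 0, sizeof(a6));
    a6.sin6_family = AF_INET6;
    a6.sin6_addr = in6addr_any;
    a6.sin6_port = 0;                 /* let the kernel pick a free port */
    if (bind(s6, (struct sockaddr *)&a6, sizeof(a6)) < 0) {
        close(s6);
        return -1;
    }
    socklen_t len = sizeof(a6);
    getsockname(s6, (struct sockaddr *)&a6, &len);

    int s4 = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a4;
    memset(&a4, 0, sizeof(a4));
    a4.sin_family = AF_INET;
    a4.sin_addr.s_addr = htonl(INADDR_ANY);
    a4.sin_port = a6.sin6_port;       /* same port as the IPv6 socket */
    int rc = bind(s4, (struct sockaddr *)&a4, sizeof(a4));

    close(s4);
    close(s6);
    return rc;
}
```

If the setsockopt() call is dropped, the second bind() fails with EADDRINUSE on Linux systems where net.ipv6.bindv6only is 0, which is the default.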

Streamy answered 28/3, 2013 at 16:16 Comment(0)

There are plausible ways in which the (poorly named) "IPv4-mapped" addresses can be used to circumvent poorly configured systems or bad stacks; even a well-configured system might require onerous amounts of bug-proofing. A developer might wish to use this flag to make their application more secure by not using this part of the API.

See: http://ipv6samurais.com/ipv6samurais/openbsd-audit/draft-cmetz-v6ops-v4mapped-api-harmful-01.txt

Maryannemarybella answered 25/4, 2012 at 0:50 Comment(0)

Imagine a protocol whose conversation includes a network address, e.g. the data channel for FTP. When using IPv6 you are going to send an IPv6 address; if that address happens to be an IPv4-mapped address, an IPv4-only peer will have no way of connecting to it.

Anderegg answered 2/5, 2010 at 10:18 Comment(0)

There's one very common example where the duality of behavior is a problem. The standard getaddrinfo() call with the AI_PASSIVE flag offers the possibility to pass a nodename parameter and returns a list of addresses to listen on. A special value, NULL, is accepted for nodename and implies listening on the wildcard addresses.

On some systems 0.0.0.0 and :: are returned, in this order. When dual-stack sockets are enabled by default and you don't set IPV6_V6ONLY on the socket, the server binds to 0.0.0.0 and then fails to bind to the dual-stack ::, and therefore (1) only works on IPv4 and (2) reports an error.

I would consider the order wrong, as IPv6 is expected to be preferred. But even when you first attempt the dual-stack :: and then the IPv4-only 0.0.0.0, the server still reports an error for the second call.

I personally consider the whole idea of a dual-stack socket a mistake. In my projects I would rather always explicitly set IPV6_V6ONLY to avoid that. Some people apparently saw it as a good idea, but in that case I would probably explicitly unset IPV6_V6ONLY and translate NULL directly to :: myself, bypassing the getaddrinfo() mechanism.
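The conventional workaround is the getaddrinfo() bind loop that sets IPV6_V6ONLY on every AF_INET6 result, so the two wildcard binds cannot collide. A sketch assuming a POSIX sockets API (the helper name bind_wildcards is invented for illustration):

```c
#include <netdb.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to bind a listening socket on every wildcard address that
 * getaddrinfo() returns for the given port; return how many bound. */
int bind_wildcards(const char *port)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;       /* NULL nodename => wildcard */

    if (getaddrinfo(NULL, port, &hints, &res) != 0)
        return 0;

    int bound = 0;
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        int s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (s < 0)
            continue;
        if (ai->ai_family == AF_INET6) {
            int one = 1;  /* keep :: from also claiming the IPv4 port */
            setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one));
        }
        if (bind(s, ai->ai_addr, ai->ai_addrlen) == 0)
            bound++;
        close(s);     /* a real server would keep it and call listen() */
    }
    freeaddrinfo(res);
    return bound;
}
```

With IPV6_V6ONLY set, both the 0.0.0.0 and :: binds succeed regardless of the order getaddrinfo() returns them in.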

Reeder answered 12/10, 2015 at 10:25 Comment(0)

Platform support

Some platforms do not support dualstack sockets, either due to age or deliberate choice.

For example, OpenBSD does not support IPv4-mapped addresses at all:

The reasoning behind this decision is that this mapping brings some security concerns with it. There are various types of attack surface that it opens up, but it all comes down to the provision of two different ways to reach the same port, each with its own access-control rules.

https://lwn.net/Articles/688462/

Therefore, some programs choose to open two sockets, one for each protocol. However, they need to be assured that the two sockets will not conflict.

Application considerations

The application may transmit IP addresses to or from the client, as FTP does on its data channel. If the address family the peer sees on the wire doesn't match the address carried in the protocol, connections can fail.

Efficacy answered 29/4, 2024 at 15:0 Comment(0)
