AF_PACKET on OSX

On Linux, it's possible to create a socket with AF_PACKET to receive raw data from the socket and do IP filtering in the application. But the man page on OS X doesn't list it:

       PF_LOCAL        Host-internal protocols, formerly called PF_UNIX,
       PF_UNIX         Host-internal protocols, deprecated, use PF_LOCAL,
       PF_INET         Internet version 4 protocols,
       PF_ROUTE        Internal Routing protocol,
       PF_KEY          Internal key-management function,
       PF_INET6        Internet version 6 protocols,
       PF_SYSTEM       System domain,
       PF_NDRV         Raw access to network device

Is this not a POSIX standard interface? How can I achieve the same thing on OS X?

Heptagon answered 18/6, 2013 at 12:48 Comment(2)
possible duplicate of AF_PACKET equivalent under Mac OS X (Darwin) – Lariat
@Macmade: thanks. But which is closer to the POSIX standard? – Heptagon

No protocol whatsoever is part of the POSIX standard. POSIX does not require a system to support any specific network protocol, or indeed any network protocol at all.

AF_PACKET is a pure Linux invention, AFAIK; you won't find it on other systems.

BPF (Berkeley Packet Filter) is also not POSIX; it's a BSD invention that many systems have copied, as it's pretty handy. However, you cannot inject traffic with it, you can only capture incoming and outgoing traffic.

In case anyone cares, here's the latest POSIX Standard:
The Open Group Base Specifications Issue 7, 2018 edition
IEEE Std 1003.1™-2017 (Revision of IEEE Std 1003.1-2008)

If you actually want to send raw IP packets (no matter whether IPv4 or IPv6), using a raw IP socket is the most portable approach:

int soc = socket(PF_INET, SOCK_RAW, IPPROTO_IP);

And then you need to tell the system that you want to provide your own IP header:

int yes = 1;
setsockopt(soc, IPPROTO_IP, IP_HDRINCL, &yes, sizeof(yes));

Now you can write raw IP packets (e.g. IP header + UDP header + payload data) to the socket for sending. Depending on your system, however, the kernel will perform some sanity checks and may override some fields in the header: it may not allow you to create malformed IP packets, or it may prevent you from performing IP address spoofing. Therefore it may, for example, calculate the IPv4 header checksum for you or automatically fill in the correct source address if your IP header uses 0.0.0.0 or :: as the source address. Check the man page for ip(4) or raw(7) on your target system; Apple no longer ships programmer man pages with macOS, but you can find them online.

To quote from that man page:

Unlike previous BSD releases, the program must set all the fields of the IP header, including the following:

 ip->ip_v = IPVERSION;
 ip->ip_hl = hlen >> 2;
 ip->ip_id = 0;  /* 0 means kernel set appropriate value */
 ip->ip_off = offset;
 ip->ip_len = len;

Note that the ip_off and ip_len fields are in host byte order.

If the header source address is set to INADDR_ANY, the kernel will choose an appropriate address.

Note that ip_sum is not mentioned at all, so apparently you don't have to provide it; the system will always calculate it for you.
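
Putting those rules together, a minimal sending sketch for macOS might look like the following. It's only an illustration: the destination 127.0.0.1:9999, the ports, the TTL and the payload are made-up example values, error handling is minimal, and the program needs root:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int soc = socket(PF_INET, SOCK_RAW, IPPROTO_IP);
    if (soc < 0) { perror("socket"); return 1; }

    int yes = 1;
    if (setsockopt(soc, IPPROTO_IP, IP_HDRINCL, &yes, sizeof(yes)) < 0) {
        perror("setsockopt"); return 1;
    }

    const char payload[] = "hello";
    unsigned char pkt[sizeof(struct ip) + sizeof(struct udphdr) + sizeof(payload)];
    memset(pkt, 0, sizeof(pkt));

    struct ip *ip = (struct ip *)pkt;
    struct udphdr *udp = (struct udphdr *)(pkt + sizeof(struct ip));

    ip->ip_v   = IPVERSION;
    ip->ip_hl  = sizeof(struct ip) >> 2;    /* header length in 32-bit words */
    ip->ip_id  = 0;                         /* 0: kernel picks a value */
    ip->ip_off = 0;                         /* host byte order on macOS */
    ip->ip_len = sizeof(pkt);               /* host byte order on macOS */
    ip->ip_ttl = 64;
    ip->ip_p   = IPPROTO_UDP;
    ip->ip_sum = 0;                         /* kernel computes the checksum */
    ip->ip_src.s_addr = htonl(INADDR_ANY);  /* kernel fills in the source */
    ip->ip_dst.s_addr = inet_addr("127.0.0.1");

    udp->uh_sport = htons(12345);
    udp->uh_dport = htons(9999);
    udp->uh_ulen  = htons(sizeof(struct udphdr) + sizeof(payload));
    udp->uh_sum   = 0;                      /* UDP checksum is optional for IPv4 */
    memcpy(pkt + sizeof(struct ip) + sizeof(struct udphdr), payload, sizeof(payload));

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_addr   = ip->ip_dst;

    if (sendto(soc, pkt, sizeof(pkt), 0, (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");
    close(soc);
    return 0;
}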

If you compare that to Linux raw(7):

┌───────────────────────────────────────────────────┐
│IP Header fields modified on sending by IP_HDRINCL │
├──────────────────────┬────────────────────────────┤
│IP Checksum           │ Always filled in           │
├──────────────────────┼────────────────────────────┤
│Source Address        │ Filled in when zero        │
├──────────────────────┼────────────────────────────┤
│Packet ID             │ Filled in when zero        │
├──────────────────────┼────────────────────────────┤
│Total Length          │ Always filled in           │
└──────────────────────┴────────────────────────────┘

When receiving from a raw IP socket, you will either get all incoming IP packets that arrive at the host or just a subset of them (e.g. Windows supports raw sockets but will never let you send or receive TCP packets). You receive the full packet, including all headers, so the first byte of every packet received is the first byte of the IP header.
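
For illustration, a minimal receive loop might look like this sketch. IPPROTO_ICMP is an arbitrary choice here so the same code also stands a chance on Linux (which, as discussed below, does not deliver all IP traffic to an IPPROTO_IP socket); needs root:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <arpa/inet.h>
#include <stdio.h>

int main(void) {
    int soc = socket(PF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (soc < 0) { perror("socket"); return 1; }

    unsigned char buf[65535];
    for (;;) {
        ssize_t n = recv(soc, buf, sizeof(buf), 0);
        if (n < 0) { perror("recv"); return 1; }
        if (n < (ssize_t)sizeof(struct ip)) continue;

        /* The very first byte is already the first byte of the IP header. */
        struct ip *ip = (struct ip *)buf;
        printf("%zd bytes, proto %u, from %s", n, (unsigned)ip->ip_p, inet_ntoa(ip->ip_src));
        printf(" to %s\n", inet_ntoa(ip->ip_dst));   /* inet_ntoa reuses one static buffer */
    }
}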

Some people here will ask why I use IPPROTO_IP and not IPPROTO_RAW. When using IPPROTO_RAW you don't have to set IP_HDRINCL:

A protocol of IPPROTO_RAW implies enabled IP_HDRINCL and is able to send any IP protocol that is specified in the passed header.

But you can only use IPPROTO_RAW for outgoing traffic:

An IPPROTO_RAW socket is send only.

On macOS you can use IPPROTO_IP and you will receive all IP packets, but on Linux this may not work, hence Linux created the separate PF_PACKET socket type. What should work on both systems is specifying a sub-protocol:

int soc = socket(PF_INET, SOCK_RAW, IPPROTO_UDP);

Of course, now you can only send and receive UDP packets over that socket. If you set IP_HDRINCL again, you must provide a full IP header on send and you will receive a full IP header on receive. If you don't set it, you only provide the UDP header on send and the system adds the IP header itself, provided the socket is connected (and optionally bound) so that the system knows which addresses to put into that header. For receiving, the option plays no role: you always get the IP header of every UDP packet that arrives on such a socket.
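
As a sketch of that last variant (sub-protocol IPPROTO_UDP, no IP_HDRINCL): you hand the kernel a UDP header plus payload and it prepends the IP header itself. The destination 127.0.0.1:9999 and the ports below are made-up values; needs root:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/udp.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int soc = socket(PF_INET, SOCK_RAW, IPPROTO_UDP);
    if (soc < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_addr.s_addr = inet_addr("127.0.0.1");
    /* sin_port stays 0; the real ports live in the UDP header we build. */

    if (connect(soc, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        perror("connect"); return 1;
    }

    const char payload[] = "ping";
    unsigned char dgram[sizeof(struct udphdr) + sizeof(payload)];

    struct udphdr udp;
    memset(&udp, 0, sizeof(udp));
    udp.uh_sport = htons(12345);
    udp.uh_dport = htons(9999);
    udp.uh_ulen  = htons(sizeof(dgram));
    udp.uh_sum   = 0;                 /* the UDP checksum is optional for IPv4 */

    memcpy(dgram, &udp, sizeof(udp));
    memcpy(dgram + sizeof(udp), payload, sizeof(payload));

    if (send(soc, dgram, sizeof(dgram), 0) < 0)
        perror("send");
    close(soc);
    return 0;
}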

In case people wonder why I use PF_INET and not AF_INET: PF means Protocol Family and AF means Address Family. Usually these are the same (e.g. AF_INET == PF_INET), so it doesn't matter which one you use, but strictly speaking sockets should be created with PF_ and the family in sockaddr structures should be set with AF_, as one day there might be a protocol that supports two different kinds of addresses; then there would be AF_XXX1 and AF_XXX2, and neither one may be the same as PF_XXX.
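
A small illustration of that convention (the IPPROTO_UDP choice here is arbitrary):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>

int make_raw_udp_socket(void) {
    int soc = socket(PF_INET, SOCK_RAW, IPPROTO_UDP);    /* PF_*: protocol family */
    if (soc < 0) return -1;

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;                            /* AF_*: address family */
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(soc, (struct sockaddr *)&sin, sizeof(sin));
    return soc;
}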

Census answered 21/3, 2018 at 17:45 Comment(1)
"you cannot inject traffic with it [BPF]" — While not all link layer types are supported, you can inject traffic, by writing to the BPF file descriptor.Champerty

AF_PACKET does not exist on OS X, unfortunately. Instead, you should use /dev/bpfX (Berkeley Packet Filter) which will allow you to capture packets. For more, read: https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man4/bpf.4.html
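
A rough sketch of the usual /dev/bpfX dance (the interface name en0 is just an example choice; see the bpf(4) man page linked above for the details; needs root):

#include <sys/types.h>
#include <sys/time.h>
#include <sys/ioctl.h>
#include <net/bpf.h>
#include <net/if.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = -1;
    char dev[16];

    /* Try /dev/bpf0, /dev/bpf1, ... until one is not busy. */
    for (int i = 0; i < 255 && fd < 0; i++) {
        snprintf(dev, sizeof(dev), "/dev/bpf%d", i);
        fd = open(dev, O_RDWR);
    }
    if (fd < 0) { perror("open /dev/bpfX"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strlcpy(ifr.ifr_name, "en0", sizeof(ifr.ifr_name));
    if (ioctl(fd, BIOCSETIF, &ifr) < 0) { perror("BIOCSETIF"); return 1; }

    u_int yes = 1;
    ioctl(fd, BIOCIMMEDIATE, &yes);          /* deliver packets as they arrive */

    u_int buflen = 0;
    if (ioctl(fd, BIOCGBLEN, &buflen) < 0) { perror("BIOCGBLEN"); return 1; }
    char *buf = malloc(buflen);

    for (;;) {
        ssize_t n = read(fd, buf, buflen);   /* one read may return many packets */
        if (n <= 0) break;
        char *p = buf;
        while (p < buf + n) {
            struct bpf_hdr *bh = (struct bpf_hdr *)p;
            printf("captured %u of %u bytes\n",
                   (unsigned)bh->bh_caplen, (unsigned)bh->bh_datalen);
            p += BPF_WORDALIGN(bh->bh_hdrlen + bh->bh_caplen);
        }
    }
    free(buf);
    close(fd);
    return 0;
}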

Cavin answered 30/3, 2016 at 20:31 Comment(0)

If you want to send raw Ethernet frames (e.g. your own link-level protocol, not IP) on Mac OS X, you can use PF_NDRV sockets, which are somewhat similar to PF_RAW:

#include <sys/socket.h>
#include <net/if.h>
#include <net/ndrv.h>
#include <net/ethernet.h>
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

int main (int argc, char **argv) {

   if (argc < 2) { fprintf(stderr,"usage: %s <ethertype>\n", argv[0]); exit(1); }
   if (geteuid()) { fprintf(stderr,"No root, no service\n"); exit(1); }
   int s = socket(PF_NDRV, SOCK_RAW, 0);
   if (s < 0) { perror("socket"); exit(2); }

   uint16_t etherType = htons(atoi(argv[1]));   /* EtherType in network byte order */
   struct sockaddr_ndrv sa_ndrv;

   memset(&sa_ndrv, 0, sizeof(sa_ndrv));        /* don't leave the struct uninitialized */
   strlcpy((char *)sa_ndrv.snd_name, "en0", sizeof(sa_ndrv.snd_name));
   sa_ndrv.snd_family = PF_NDRV;
   sa_ndrv.snd_len = sizeof(sa_ndrv);
   
   int rc = bind(s, (struct sockaddr *) &sa_ndrv, sizeof(sa_ndrv));
   
   if (rc < 0) { perror ("bind"); exit (3);}

   char packetBuffer[2048];

#ifdef LISTENER
   struct ndrv_protocol_desc desc;
   struct ndrv_demux_desc demux_desc[1];
   memset(&desc, '\0', sizeof(desc));
   memset(&demux_desc, '\0', sizeof(demux_desc));

   /* Request kernel for demuxing of one chosen ethertype */
   desc.version = NDRV_PROTOCOL_DESC_VERS;
   desc.protocol_family = atoi(argv[1]);
   desc.demux_count = 1;
   desc.demux_list = (struct ndrv_demux_desc*)&demux_desc;
   demux_desc[0].type = NDRV_DEMUXTYPE_ETHERTYPE;
   demux_desc[0].length = sizeof(unsigned short);
   demux_desc[0].data.ether_type = etherType;   /* network byte order, as on the wire */

   if (setsockopt(s, SOL_NDRVPROTO, NDRV_SETDMXSPEC,
                  (caddr_t)&desc, sizeof(desc))) {
      perror("setsockopt"); exit(4);
   }
   /* Socket will now receive chosen ethertype packets */
   while ((rc = recv(s, packetBuffer, sizeof(packetBuffer), 0)) > 0) {
    printf("Got packet\n"); // remember, this is a PoC..
   }
#else
   /* Broadcast destination + all-ff source MAC, then the EtherType,
      then the payload starting at offset 14 (after the Ethernet header). */
   memset(packetBuffer, '\xff', 12);
   memcpy(packetBuffer + 12, &etherType, 2);
   strcpy(packetBuffer + 14, "NDRV is fun!");
   rc = sendto(s, packetBuffer, 14 + sizeof("NDRV is fun!"), 0,
        (struct sockaddr *)&sa_ndrv, sizeof(sa_ndrv));
   if (rc < 0) { perror("sendto"); }
#endif
} 
Gressorial answered 6/5, 2020 at 11:38 Comment(3)
Minor remark: rc is not declared in this code. You need to add int rc; at the top to make this code work. And it does work :-) Thank you @qris. – Frederiksen
This code is Jonathan Levin's NewOSXBook.com/bonus/vol1ch16.html Listing 16-2: A sample PF_NDRV client/listener program – Hoe
On Mojave, tcpdump saw every packet sent, but the listener got nothing. Tweaked the code a bit and read this article at ZeroTier.com/blog/…
    $ gcc -g ether.c -o ether-out
    $ gcc -g -DLISTENER ether.c -o ether-in
    # ifconfig feth0 create
    # ifconfig feth1 create
    # ifconfig feth0 peer feth1
    # ether-in feth1 2048
    # ether-out feth0 2048
Got packet – Hoe
