This code works fine on Ubuntu 16.04 and prints the correct value (ETHERTYPE_IP) when I send UDP bytes over the loopback interface:
#include <pcap.h>
#include <iostream>
#include <net/ethernet.h>
#include <arpa/inet.h>  // ntohs

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    // "lo0" is the loopback device on OS X; on Ubuntu it is "lo"
    auto pcap = pcap_open_live("lo0", BUFSIZ, 0, 1000, errbuf);
    if (pcap == nullptr) {
        std::cerr << "pcap_open_live: " << errbuf << std::endl;
        return 1;
    }
    pcap_loop(pcap, 0, [](u_char *self, const struct pcap_pkthdr *header,
                          const u_char *packet) {
        auto eth = (struct ether_header *) packet;
        auto eth_type = ntohs(eth->ether_type);
        std::cout << "eth_type: " << std::hex << eth_type << std::endl;
    }, nullptr);
    return 0;
}
netcat:
➜ ~ nc -uv -l 54321
Listening on [0.0.0.0] (family 0, port 54321)
➜ ~ nc -4u localhost 54321
hello
Program output:
➜ ~ sudo ./a.out
eth_type: 800
However, on OS X 10.11.5 it prints eth_type: 4011. Interestingly, it works fine with the en1 adapter.
Why is there such a difference between loopback and non-loopback adapters, and what is the correct way to capture packets on both?
Update: tcpdump also works:
➜ ~ sudo tcpdump -i lo0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo0, link-type NULL (BSD loopback), capture size 262144 bytes
15:09:00.160664 IP localhost.54321 > localhost.63543: UDP, length 4
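Judging by the link-type NULL (BSD loopback) line, lo0 apparently doesn't use Ethernet framing at all, so I suspect the bytes at the start of the packet are something other than an ether_header. Below is a minimal sketch of what I imagine a datalink-aware handler would look like, assuming pcap_datalink() reports DLT_NULL on lo0 and DLT_EN10MB on en1, and that the DLT_NULL header is a 4-byte protocol family value (e.g. AF_INET) in host byte order:

#include <pcap.h>
#include <iostream>
#include <net/ethernet.h>
#include <arpa/inet.h>
#include <cstdint>
#include <cstring>

int main()
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *pcap = pcap_open_live("lo0", BUFSIZ, 0, 1000, errbuf);
    if (pcap == nullptr) {
        std::cerr << "pcap_open_live: " << errbuf << std::endl;
        return 1;
    }
    // Ask libpcap which link-layer header type this capture uses.
    int dlt = pcap_datalink(pcap);
    std::cout << "datalink: " << pcap_datalink_val_to_name(dlt) << std::endl;
    pcap_loop(pcap, 0, [](u_char *self, const struct pcap_pkthdr *header,
                          const u_char *packet) {
        int dlt = *reinterpret_cast<int *>(self);
        if (dlt == DLT_NULL) {
            // BSD loopback: 4-byte protocol family in host byte order
            // (AF_INET == 2 for IPv4); no Ethernet header at all.
            uint32_t family;
            std::memcpy(&family, packet, sizeof family);
            std::cout << "family: " << std::dec << family << std::endl;
        } else if (dlt == DLT_EN10MB) {
            // Plain Ethernet, as on en1.
            auto eth = reinterpret_cast<const struct ether_header *>(packet);
            std::cout << "eth_type: " << std::hex
                      << ntohs(eth->ether_type) << std::endl;
        }
    }, reinterpret_cast<u_char *>(&dlt));
    return 0;
}

Is branching on pcap_datalink() like this the intended approach, or is there a way to get a uniform link-layer header across adapters?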