Realtime receiving of UDP packets with QNX RTOS
I have a source which sends UDP packets at a rate of 819.2 Hz (~1.2ms) to my QNX Neutrino machine. I want to receive and process those messages with as little delay and jitter as possible.

My first code was basically:

SetupUDPSocket(); 
while (true) {
    recv(socket, buffer, BufferSize, MSG_WAITALL); // blocks until whole packet is received
    processPacket(buffer);
}
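
For reference, SetupUDPSocket() is roughly the usual socket()/bind() setup; a minimal sketch (the port is a parameter here, the real values are omitted):

// Minimal sketch of a UDP receive socket setup (port value is a placeholder).
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <stdint.h>

int SetupUDPSocket(uint16_t port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd == -1) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   // listen on all interfaces
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) return -1;
    return fd;
}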

The problem is that recv() only checks at each timer tick of the system whether a new packet is available. The timer tick is usually 1 ms. So if I use this, I get a huge jitter, because I process a packet every 1 ms or every 2 ms. I could reduce the size of the timer tick, but that would affect the whole system (and the timers of other processes, etc.). And I would still have jitter, because I would certainly never match the 819.2 Hz exactly.
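
(For completeness: the tick can be changed with ClockPeriod(). A rough, untested sketch with a made-up 100 µs value is below, but as said, this is system-wide, so it is not really what I want.)

// Rough sketch: lowering the system tick via ClockPeriod() (affects the whole system).
#include <sys/neutrino.h>
#include <time.h>
#include <cerrno>
#include <iostream>

int LowerTick() {
    struct _clockperiod newPeriod = { 100000, 0 };   // request a 100 us tick (hardware permitting)
    struct _clockperiod oldPeriod = { 0, 0 };

    if (ClockPeriod(CLOCK_REALTIME, &newPeriod, &oldPeriod, 0) == -1) {
        std::cerr << "ClockPeriod failed, errno: " << errno << std::endl;
        return -1;
    }
    std::cout << "previous tick: " << oldPeriod.nsec << " ns" << std::endl;
    return 0;
}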

So I tried to use the interrupt line of the network card (IRQ 5). But it seems there are also other things that cause the interrupt to fire. I used the following code:

ThreadCtl(_NTO_TCTL_IO, 0);
SIGEV_INTR_INIT(&event);
iID = InterruptAttachEvent(IRQ5, &event, _NTO_INTR_FLAGS_TRK_MSK);

while(true) {
    if (InterruptWait(0, NULL) == -1) {
        std::cerr << "errno: " << errno << std::endl;
    }

    length = recv(socket, buffer, bufferSize, 0); // non-blocking this time

    LogTimeAndLength(); 

    InterruptUnmask(IRQ5, iID);
} 

This results in a single successful read at the beginning, followed by reads of 0 bytes with no time passing in between. It seems that after the InterruptUnmask(), the InterruptWait() does not wait at all, so there must already be a new interrupt pending (or still the same one?).

Is it possible to do something like that with the interrupt line of the network card? Are there any other possibilities to receive the packets at a rate of 819.2 Hz?

Some information about the network card: 'pci -vvv' outputs:

Class          = Network (Ethernet)
Vendor ID      = 8086h, Intel Corporation 
Device ID      = 107ch,  82541PI Gigabit Ethernet Controller
PCI index      = 0h
Class Codes    = 020000h
Revision ID    = 5h
Bus number     = 4
Device number  = 15
Function num   = 0
Status Reg     = 230h
Command Reg    = 17h
I/O space access enabled
Memory space access enabled
Bus Master enabled
Special Cycle operations ignored
Memory Write and Invalidate enabled
Palette Snooping disabled
Parity Error Response disabled
Data/Address stepping disabled
SERR# driver disabled
Fast back-to-back transactions to different agents disabled
Header type    = 0h Single-function
BIST           = 0h Build-in-self-test not supported
Latency Timer  = 40h
Cache Line Size= 8h un-cacheable
PCI Mem Address = febc0000h 32bit length 131072 enabled
PCI Mem Address = feba0000h 32bit length 131072 enabled
PCI IO Address  = ec00h length 64 enabled
Subsystem Vendor ID = 8086h
Subsystem ID        = 1376h
PCI Expansion ROM = feb80000h length 131072 disabled
Max Lat        = 0ns
Min Gnt        = 255ns
PCI Int Pin    = INT A
Interrupt line = 5
CPU Interrupt  = 5h
Capabilities Pointer = dch
Capability ID        = 1h - Power Management
Capabilities         = c822h - 28002000h
Capability ID        = 7h - PCI-X
Capabilities         = 2h - 400000h
Device Dependent Registers:
0x040:  0000 0000 0000 0000   0000 0000 0000 0000 
...
0x0d0:  0000 0000 0000 0000   0000 0000 01e4 22c8 
0x0e0:  0020 0028 0700 0200   0000 4000 0000 0000 
0x0f0:  0500 8000 0000 0000   0000 0000 0000 0000 

and 'nicinfo' outputs:

wm1: 
    INTEL 82544 Gigabit (Copper) Ethernet Controller

    Physical Node ID ........................... 000E0C C5F6DD
    Current Physical Node ID ................... 000E0C C5F6DD
    Current Operation Rate ..................... 100.00 Mb/s full-duplex
    Active Interface Type ...................... MII
    Active PHY address ....................... 0
    Maximum Transmittable data Unit ............ 1500
    Maximum Receivable data Unit ............... 0
    Hardware Interrupt ......................... 0x5
    Memory Aperture ............................ 0xfebc0000 - 0xfebdffff
    Promiscuous Mode ........................... Off
    Multicast Support .......................... Enabled

Thanks for reading!

With asked 8/8, 2012 at 15:22 Comment(5)
'The problem is that recv() only checks at each timer tick of the system if there is a new packet available' - why would it do that? I don't know any QNX - does the network driver not work sanely? The interrupt driver should set an event/semaphore and exit via the OS so it can set the recv() thread ready 'immediately'. There should be no need for any 'wait until timer tick' - that's just hopeless - might as well use a cooperative polling loop :( (Lorenzoloresz)
Implementation details like Martin mentioned may vary with driver and even specific model of card, but you haven't listed either. (Knifeedged)
@MartinJames and Ben Voigt: Sorry, I wasn't aware of the fact that my problem could be driver related. 'nicinfo' says "INTEL 82544 Gigabit (Copper) Ethernet Controller" and 'pci -vvv' outputs "Vendor ID = 8086h, Intel Corporation Device ID = 107ch, 82541PI Gigabit Ethernet Controller" (With)
Is it useful (or even possible) to timestamp the packets at the source so that you don't need to rely on timing of packet receipt? What if you do a busy wait using MSG_PEEK instead of MSG_WAITALL? (Southbound)
Is the interrupt edge-sensitive or level-sensitive? (Hertahertberg)

I am not quite sure why the statement "The problem is that recv() only checks at each timer tick of the system if there is a new packet available. The timer tick is usually 1ms." would be true for a preemptive OS. There must be something in the system configuration, or the network protocol stack implementation has some issues.

Years ago, when I was working on an IPTV STB project for Yahoo BB Japan, I ran into an issue with RTP receiving. The issue was not delay or jitter, but the overall system performance of the STB after we added an NDS algorithm. We were using VxWorks, and VxWorks supports an Ethernet hook interface, which is called each time an Ethernet packet is received by the driver.

I hooked an API into it and parsed the UDP packets with the specified port directly out of the Ethernet frames. Of course, we assumed there was no fragmentation, which was guaranteed by the network setup for performance reasons. Maybe you can check whether you can get a similar hook in the QNX Ethernet driver. At least you would find out whether the jitter comes from the driver or not.
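
I don't know the QNX driver internals, but if your QNX version uses the io-pkt network stack, the BPF interface it inherits from NetBSD might give you a similar low-level hook. A rough, untested sketch (the device path and interface name are assumptions):

// Rough sketch: reading raw Ethernet frames via BPF (io-pkt). Untested.
#include <sys/ioctl.h>
#include <net/bpf.h>
#include <net/if.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int OpenBpf(const char *ifname) {          // e.g. "wm1" from nicinfo
    int fd = open("/dev/bpf", O_RDONLY);   // may be /dev/bpf0 on some versions
    if (fd == -1) return -1;

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name) - 1);
    if (ioctl(fd, BIOCSETIF, &ifr) == -1) { close(fd); return -1; }

    unsigned immediate = 1;                // deliver each frame as soon as it arrives
    if (ioctl(fd, BIOCIMMEDIATE, &immediate) == -1) { close(fd); return -1; }

    return fd;                             // read() now returns bpf_hdr-prefixed frames
}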

Weatherboard answered 27/2, 2013 at 5:30 Comment(0)

How big are your UDP packets? If the packet size is small, you will gain greater efficiency by packing more data into a single packet and decreasing the transmission rate.
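
For illustration (the sample layout and batch size here are made up, not from the question): packing 8 samples into one datagram would drop the packet rate from 819.2 Hz to about 102.4 Hz on the sender side.

// Illustration only: batch several fixed-size samples into one datagram.
#include <sys/socket.h>
#include <netinet/in.h>
#include <algorithm>
#include <cstdint>
#include <vector>

struct Sample { uint32_t sequence; float value; };   // hypothetical payload layout

void SendBatched(int fd, const struct sockaddr_in &dest,
                 const std::vector<Sample> &samples) {
    const size_t kSamplesPerPacket = 8;              // 819.2 Hz / 8 ~= 102.4 packets/s
    for (size_t i = 0; i < samples.size(); i += kSamplesPerPacket) {
        size_t n = std::min(kSamplesPerPacket, samples.size() - i);
        sendto(fd, &samples[i], n * sizeof(Sample), 0,
               (const struct sockaddr *)&dest, sizeof(dest));
    }
}

The trade-off is latency: each sample then waits up to 8 sample periods before it is sent.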

Grishilda answered 7/9, 2012 at 9:48 Comment(0)

I suspect the interrupt service routine (ISR) is not masking the interrupt. Perhaps it is designed for edge sensitivity while the interrupt is level-sensitive.

Hertahertberg answered 9/10, 2012 at 22:22 Comment(0)

Sorry I'm a bit late to the party, but I came across your question and saw that it was similar to a situation I encountered. Instead of hardware interrupts, you could try a software interrupt using signals. QNX has some documentation here: http://www.qnx.com/developers/docs/qnx_4.25_docs/qnx4/sysarch/microkernel.html#IPCSIGNALS . I was using CentOS at the time, but the theory is the same.

According to http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/s/socket.html you can use ioctl() to set up a receive group for the SIGIO signal for a given file descriptor, in your case a UDP socket. When the socket has data ready for reading, a SIGIO signal is sent to the process indicated by ioctl(). Use sigaction() to tell the OS which signal-handling function to use. In your case, the signal handler can read the data off the socket and store it in a buffer for processing. Use pause() to suspend the process until it handles the SIGIO signal; when the signal handler returns, the process wakes up and you can process the data in the buffer.

That should let you process your data as it comes in without having to deal with timers or hardware interrupts. One thing to verify is that your system can process those signals as fast as the UDP traffic comes in.
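
A rough, untested sketch of that setup (FIOASYNC and SIOCSPGRP are the BSD-style ioctls; the exact calls and headers on your QNX version may differ):

// Rough sketch: SIGIO-driven receive. Untested; ioctl names are the BSD-style ones.
#include <sys/ioctl.h>
#include <sys/sockio.h>     // SIOCSPGRP (header location may differ)
#include <sys/socket.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t dataReady = 0;

static void OnSigio(int) { dataReady = 1; }   // keep the handler itself minimal

int EnableSigio(int fd) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = OnSigio;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGIO, &sa, NULL) == -1) return -1;

    int on = 1;
    pid_t pid = getpid();
    if (ioctl(fd, FIOASYNC, &on) == -1) return -1;    // enable SIGIO on this socket
    if (ioctl(fd, SIOCSPGRP, &pid) == -1) return -1;  // deliver SIGIO to this process
    return 0;
}

// Main loop, as described above:
// while (true) {
//     pause();                                  // sleep until SIGIO arrives
//     if (dataReady) {
//         dataReady = 0;
//         length = recv(fd, buffer, bufferSize, 0);
//         processPacket(buffer);
//     }
// }

In this sketch the handler only sets a flag and the recv() happens after pause() returns; reading inside the handler, as described above, is also possible, but keeping the handler short avoids async-signal-safety concerns.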

Macomber answered 20/3, 2013 at 4:6 Comment(0)
