How to Avoid a DoS Attack Using Berkeley Sockets in C++
I'm working my way through UNIX Network Programming Volume 1 by Richard Stevens and attempting to write a TCP Echo Client that uses the Telnet protocol. I'm still in the early stages and attempting to write the read and write functions.

I'd like to write it to use I/O multiplexing and the select function, because it needs to be multi-client and I don't want to try to tackle learning C++ threads while I'm learning the Berkeley Sockets library at the same time. At the end of the chapter on I/O multiplexing, Stevens has a small section on DoS attacks, where he says that the method I was planning to use is vulnerable to a DoS attack in which a client simply sends a single byte after connecting and then hangs. He mentions three possible solutions afterwards: nonblocking I/O, threading (which I've already ruled out), and placing a timeout on the I/O operations.

My question is: are there any other ways of avoiding such an attack? And if not, which of these is the best? I glanced over the section on placing a timeout on the operations, but it doesn't look like something I want to do; the methods he suggests look fairly complex, and I'm not sure how to work them into what I already have. I've only skimmed the chapter on nonblocking I/O; it looks like the way to go right now, but I'd like to see if there are any other ways around this before I spend another couple of hours plowing through the chapter.

Any ideas?

Waiwaif answered 22/8, 2009 at 21:46 Comment(0)

... are there any other ways of avoiding such an attack?

Yes, asynchronous I/O is another general approach.

If the problem is that a blocking read() may suspend your execution indefinitely, your general countermeasures are then:

  1. Have multiple threads of execution

    multi-threaded, multi-process, both.

  2. Time-limit the blocking operation

    e.g., instantaneous (non-blocking I/O), or not (SO_RCVTIMEO, alarm(), etc.)

  3. Operate asynchronously

    e.g., aio_read
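Option 2 can be as small as a single setsockopt() call. A minimal sketch, assuming a connected descriptor `fd` (the helper name and the five-second value are illustrative):

```cpp
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <cassert>

// Put a receive timeout on a socket. Afterwards, a blocked read()/recv()
// returns -1 with errno set to EWOULDBLOCK/EAGAIN once the timeout expires,
// instead of hanging forever on a client that went silent.
bool set_recv_timeout(int fd, long seconds) {
    timeval tv{};
    tv.tv_sec  = seconds;
    tv.tv_usec = 0;
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == 0;
}
```

The caller still has to decide what to do when the timeout fires (retry, or close the connection), so this is a building block rather than a complete defense.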

... which of these is the best?

For the newcomer, I'd suggest non-blocking I/O combined with a time-limited select()/poll(). Your application can keep track of whether or not a connection has generated "enough data" (e.g., an entire line) in a "short enough time."

This is a powerful, mostly portable and common technique.

However, the better answer is, "it depends." Platform support and, more importantly, design ramifications from these choices have to be assessed on a case-by-case basis.

Panthea answered 23/8, 2009 at 3:7 Comment(0)

Essential reading: The C10K Problem

Using threads (or processes) per connection makes for very straightforward code. The limit to the number of connections is really the limit to the number of threads your system can comfortably multi-task.

Using asynchronous I/O to put all sockets in a single thread is not such straightforward code (it's nicely wrapped by libraries such as libevent and libev), but it is much more scalable: it's limited by the number of open file handles your system allows, and on recent Linux builds, for example, that can be measured in the millions. Most web servers and other high-concurrency servers use asynchronous I/O for this reason.
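That per-process file-handle ceiling can be inspected and raised at startup. A sketch using the POSIX resource-limit API (the soft limit may always be raised up to the hard limit without privileges):

```cpp
#include <sys/resource.h>
#include <cstdio>
#include <cassert>

// Raise this process's open-file soft limit to its hard limit and report it.
// Returns false if either getrlimit() or setrlimit() fails.
bool raise_fd_limit() {
    rlimit rl{};
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) return false;
    rl.rlim_cur = rl.rlim_max;               // soft limit up to the hard cap
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) return false;
    std::printf("fd limit now %llu\n", (unsigned long long)rl.rlim_cur);
    return true;
}
```

Going beyond the hard limit requires administrator action (e.g., ulimit or /etc/security/limits.conf on Linux), which is why very large deployments tune it at the system level.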

However, your server is still a finite resource that can be exhausted, and there are much nastier attacks than simply running out of capacity to handle new connections.

Firewalls and damage limitation (e.g., backups, DMZs) are essential elements of real internet-facing services.

Lemal answered 22/8, 2009 at 22:43 Comment(0)

If you're just getting started learning socket programming, you probably would be better off concentrating on the basic functionality of sockets, and not worrying so much about security issues just yet. When you've written a few client-server applications and understand thoroughly how they work, you'll be in a better position to understand how they break.

Securing an internet-facing network application against malicious clients is not at all trivial, and probably involves all the advanced techniques you mentioned, and then some! For example, it's common to move some responsibility from the application code to the router or firewall level. You could restrict access to only trusted hosts, or detect excessive connection attempts and throttle or block them before the traffic ever hits your application.

Psalmbook answered 22/8, 2009 at 22:20 Comment(1)
I'm just getting started with the core of Berkeley Sockets. I've written a couple of network apps in Java using Java sockets, and I've been running and coding a MUD using sockets for years, though I used a code base for that and only modified it; I didn't write the socket portion. I've got a pretty good understanding of how network programming is supposed to work and the theory behind it; it's just a matter of learning the quirks of C/C++ sockets. Also, I'm not asking about all DoS attacks or all attacks, just the one that causes blocking.Waiwaif

My question is, are there any other ways of avoiding such an attack?

For a server I'd want a timer at the application level:

  • Input data buffer per connection
  • Dumb socket-reading code reads data from the socket into the input buffer
  • Application-specific code parses the content of the input buffer

The application-specific code can terminate the connection associated with input buffers which have been allowed to idle for 'too long'.

Doing this implies async I/O, or dedicated I/O thread(s).
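The layering above can be sketched as a small per-connection record: the dumb reading code only appends bytes and stamps the time, while the application layer decides when a connection has idled too long. All names here are illustrative:

```cpp
#include <string>
#include <ctime>
#include <cstddef>
#include <cassert>

struct Connection {
    int         fd;
    std::string inbuf;          // bytes read but not yet parsed
    time_t      last_activity;  // stamped by the socket-reading code
};

// Called by the reading layer after every successful read().
void on_data(Connection& c, const char* data, size_t n, time_t now) {
    c.inbuf.append(data, n);
    c.last_activity = now;
}

// Called periodically by the application-specific code; connections for
// which this returns true get closed and discarded.
bool should_evict(const Connection& c, time_t now, int idle_limit_secs) {
    return now - c.last_activity > idle_limit_secs;
}
```

The separation means the timeout policy (how long is "too long", whether partial lines count as progress) lives entirely in application code, not in the socket plumbing.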

Serles answered 22/8, 2009 at 22:50 Comment(0)

What I have done before, to help with this (circa 1997 :) was to require that a magic number be sent within a certain amount of time else the connection was closed.

If you have an asynchronous connection, the socket won't block, and you would need a thread that polls through the list of current connections that haven't yet sent a valid command; if, after about 20 ms, a message signifying a valid command hasn't been received, close that connection and do whatever cleanup you need to do.

This isn't perfect, but for your current concern it may help, and it stops resources from being consumed by clients making too many idle connections.

It does require a main thread plus a second thread for cleanup, so it isn't single-threaded.
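A hypothetical sketch of that handshake check (the magic value, field names, and 4-byte size are all made up for illustration; the sweeper thread would call `expired` and `magic_complete` over its list of pending connections):

```cpp
#include <cstdint>
#include <cstring>
#include <ctime>
#include <cstddef>
#include <cassert>

constexpr uint32_t kMagic = 0xDEADBEEF;   // made-up value for illustration

struct Pending {
    int     fd;
    uint8_t buf[4];     // bytes of the magic collected so far
    size_t  got;        // how many of those bytes have arrived
    time_t  deadline;   // close the socket if the magic isn't seen by then
};

// True once the full, correct magic number has arrived.
bool magic_complete(const Pending& p) {
    if (p.got < sizeof(p.buf)) return false;
    uint32_t v;
    std::memcpy(&v, p.buf, sizeof(v));
    return v == kMagic;
}

// True when the connection has run out of time; the sweeper closes it.
bool expired(const Pending& p, time_t now) { return now >= p.deadline; }
```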

Thesaurus answered 23/8, 2009 at 1:59 Comment(0)