How does sshuttle avoid the TCP-over-TCP curse?
sshuttle claims that it solves the much-discussed problem of TCP-over-TCP meltdown.

sshuttle assembles the TCP stream locally, multiplexes it statefully over an ssh session, and disassembles it back into packets at the other end. So it never ends up doing TCP-over-TCP. It’s just data-over-TCP, which is safe.

But from the point of view of a program, it maintains a TCP connection to a target server with everything that comes with it (read: exponential timeouts), and that connection is layered on top of another TCP session, since SSH doesn't yet just work over UDP. This very much looks like TCP-over-TCP.

What is the trick here? Is the problem really solved by sshuttle?

I tried reading the source code, but so far I haven't found the answer.

More importantly, how exactly do they do it? If one wants to reimplement it from scratch, where should one look for inspiration?

Domineca answered 2/1, 2017 at 12:46 Comment(0)

The sshuttle client sets up firewall rules (iptables on Linux, which is why the sshuttle client needs root privileges) to redirect certain outgoing TCP connections to a local port (12300 by default). You can see this process when starting sshuttle:

firewall manager: starting transproxy.
>> iptables -t nat -N sshuttle-12300
>> iptables -t nat -F sshuttle-12300
>> iptables -t nat -I OUTPUT 1 -j sshuttle-12300
>> iptables -t nat -I PREROUTING 1 -j sshuttle-12300
>> iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp
>> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 0.0.0.0/0 -p tcp --to-ports 12300 -m ttl ! --ttl 42
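The `-m ttl ! --ttl 42` match on the REDIRECT rule is worth noting: sshuttle marks packets belonging to its own tunnel with a TTL of 42 so this rule skips them, which prevents a redirect loop. Setting a TTL on a socket is a one-line setsockopt; a minimal Python sketch of the idea (my own illustration, not sshuttle's exact code):

```python
import socket

def mark_socket(sock, ttl=42):
    # Give the socket's outgoing packets TTL 42 so an iptables rule like
    #   -m ttl ! --ttl 42
    # can exclude them from redirection -- the trick sshuttle uses to keep
    # its own tunnel traffic from being redirected back into itself.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL)
```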

When sshuttle exits, it removes the iptables NAT rules:

>> iptables -t nat -D OUTPUT -j sshuttle-12300
>> iptables -t nat -D PREROUTING -j sshuttle-12300
>> iptables -t nat -F sshuttle-12300
>> iptables -t nat -X sshuttle-12300

The TCP contents are picked up and multiplexed over the ssh connection to the sshuttle server, then de-multiplexed into separate connections again. The function onaccept_tcp in client.py does the muxing:

def onaccept_tcp(listener, method, mux, handlers):
    global _extra_fd
    try:
        sock, srcip = listener.accept()
    except socket.error as e:
        if e.args[0] in [errno.EMFILE, errno.ENFILE]:
            debug1('Rejected incoming connection: too many open files!\n')
            # free up an fd so we can eat the connection
            os.close(_extra_fd)
            try:
                sock, srcip = listener.accept()
                sock.close()
            finally:
                _extra_fd = os.open('/dev/null', os.O_RDONLY)
            return
        else:
            raise

    dstip = method.get_tcp_dstip(sock)
    debug1('Accept TCP: %s:%r -> %s:%r.\n' % (srcip[0], srcip[1],
                                              dstip[0], dstip[1]))
    if dstip[1] == sock.getsockname()[1] and islocal(dstip[0], sock.family):
        debug1("-- ignored: that's my address!\n")
        sock.close()
        return
    chan = mux.next_channel()
    if not chan:
        log('warning: too many open channels.  Discarded connection.\n')
        sock.close()
        return
    mux.send(chan, ssnet.CMD_TCP_CONNECT, b'%d,%s,%d' %
             (sock.family, dstip[0].encode("ASCII"), dstip[1]))
    outwrap = MuxWrapper(mux, chan)
    handlers.append(Proxy(SockWrapper(sock, sock), outwrap))
    expire_connections(time.time(), mux)
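The dstip recovered by method.get_tcp_dstip(sock) above is the address the application originally tried to reach, before the REDIRECT rule rewrote it. On Linux, the nat method reads this back from the kernel with the SO_ORIGINAL_DST getsockopt. A minimal sketch of that lookup (Linux only; the helper names are mine, not sshuttle's):

```python
import socket
import struct

SO_ORIGINAL_DST = 80  # from <linux/netfilter_ipv4.h>

def parse_sockaddr_in(raw):
    # raw is a struct sockaddr_in: 2-byte family, 2-byte port in network
    # byte order, 4-byte IPv4 address, then padding
    port, packed_ip = struct.unpack_from('!2xH4s', raw)
    return socket.inet_ntoa(packed_ip), port

def get_original_dst(sock):
    """Return the (ip, port) the client originally connected to, before
    an iptables REDIRECT rewrote the destination (Linux only)."""
    raw = sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    return parse_sockaddr_in(raw)
```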

You can see how the data is packed in ssnet.py.
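The core idea of that packing is simple: every chunk of data travels in a small frame that names its channel, a command, and a payload length, so many TCP connections can share one ordered byte stream. A simplified sketch of such framing (the header layout here is illustrative, not ssnet.py's exact wire format):

```python
import struct

# Hypothetical frame header: 2-byte channel id, 2-byte command,
# 2-byte payload length, all big-endian, followed by the payload.
HDR = struct.Struct('!HHH')
CMD_TCP_CONNECT = 0x0102  # illustrative value, not sshuttle's

def pack_frame(channel, cmd, payload):
    return HDR.pack(channel, cmd, len(payload)) + payload

def unpack_frames(buf):
    # Pull complete frames off the front of buf; return (frames, leftover).
    # Partial frames stay in the leftover until more bytes arrive.
    frames = []
    while len(buf) >= HDR.size:
        channel, cmd, length = HDR.unpack_from(buf)
        if len(buf) < HDR.size + length:
            break
        frames.append((channel, cmd, buf[HDR.size:HDR.size + length]))
        buf = buf[HDR.size + length:]
    return frames, buf
```

Because frame boundaries are explicit, a stall on one channel never corrupts another; the receiver just keeps buffering until a full frame is available.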

I've seen the same strategy (I mean setting up firewall rules) in redsocks, which aims at redirecting any TCP connection to a SOCKS or HTTPS proxy.

Crowbar answered 2/1, 2017 at 17:13 Comment(0)

As the statement says, it's not TCP-over-TCP.

This is TCP-over-TCP:

First application
  First end of outer TCP connection
    First end of inner TCP connection
      Datagram/packet link
    Second end of inner TCP connection
  Second end of outer TCP connection
Second application

Notice how the outer TCP connection is carried over the inner TCP connection?

This is what they're doing:

First application
  Outer end of first TCP connection
  Inner end of first TCP connection
    Byte stream link
  Inner end of second TCP connection
  Outer end of second TCP connection
Second application

Notice that there is no outer TCP connection being transported over an inner TCP connection? There is no TCP-over-TCP.
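The "byte stream link" in the middle can be demonstrated with an ordinary userspace relay: both TCP connections terminate locally inside the relay, and only raw payload bytes cross between them, never TCP segments. A toy single-shot relay in Python (my own illustration, not sshuttle code):

```python
import socket
import threading

def run_echo_server(srv_sock):
    # the "second application": echoes back whatever it receives
    conn, _ = srv_sock.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

def run_relay(relay_sock, target_addr):
    # Terminate the first TCP connection locally, open a second one,
    # and copy raw bytes between them -- no TCP headers cross this gap,
    # so neither connection's retransmission timers stack on the other's.
    conn, _ = relay_sock.accept()
    upstream = socket.create_connection(target_addr)
    upstream.sendall(conn.recv(1024))
    conn.sendall(upstream.recv(1024))
    upstream.close()
    conn.close()
```

In sshuttle the "copy raw bytes" step runs over the multiplexed ssh session rather than a direct connection, but the layering is the same: each TCP state machine lives entirely on one host.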

There are four obvious ways you could do it:

  1. Induce the application to make the TCP connection to an IP address already assigned to the system.
  2. Assign to the system the IP address the application tries to connect to.
  3. NAT the outbound TCP connection to a process running on the local system. (jfly's answer hints that this is what they do.)
  4. Make the OS route the TCP packets to you and terminate them with your own implementation of TCP in user space.
Aviles answered 2/1, 2017 at 12:55 Comment(8)
That's a great explanation! But how exactly do they do it?Domineca
Which part don't you understand? It should seem clear from my diagram that each application has an entirely local TCP connection to one end of a byte stream link.Aviles
How do they "end" the first connection so that the kernel stops caring about it? See the updated question.Domineca
As I said, the TCP connection is entirely local, just like if you run a browser and a web server on the same machine. Just like you do telnet localhost.Aviles
OK, how do they make it so?Domineca
I don't know the specifics of how they do it, but there are four obvious ways you could do it: 1) Induce the application to make the TCP connection to an IP address already assigned to the system. 2) Assign to the system the IP address the application tries to connect to. 3) NAT the outbound TCP connection to a process running on the local system. (I suspect this is the method it uses.) 4) Make the OS route the TCP packets to you and terminate it with your implementation of TCP in user space.Aviles
Jfly's answer says it's option 3.Aviles
I rather hope you wouldn't mind my edit. If not, please revert. You have my upvote, but jfly's answer got it to the point, so I had to accept it.Domineca
