spray-client throwing "Too many open files" exception when giving more concurrent requests
I have a spray http client running on server X, which makes connections to server Y. Server Y is quite slow (it takes 3+ seconds per request).

This is my http client code invocation:

def get() {
    val result = for {
       response <- IO(Http).ask(HttpRequest(GET, Uri(getUri(msg)), headers)).mapTo[HttpResponse]
    } yield response

    result onComplete {
      case Success(res)   => sendSuccess(res)
      case Failure(error) => sendError(error) // was sendError(res); `res` is not in scope here
    }
}

These are the configurations I have in application.conf:

spray.can {
    client {
        request-timeout = 30s
        response-chunk-aggregation-limit = 0
        max-connections = 50
        warn-on-illegal-headers = off
    }
    host-connector {
        max-connections = 128
        idle-timeout = 3s
    }
}

Now I tried to stress server X with a large number of concurrent requests (using ab with n=1000 and c=100).

Up to about 900 requests it went fine. After that the server threw a lot of exceptions, and I couldn't hit the server any more. These are the exceptions:

[info] [ERROR] [03/28/2015 17:33:13.276] [squbs-akka.actor.default-dispatcher-6] [akka://squbs/system/IO-TCP/selectors/$a/0] Accept error: could not accept new connection

[info] java.io.IOException: Too many open files
[info]   at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
[info]   at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
[info]   at akka.io.TcpListener.acceptAllPending(TcpListener.scala:103)

On further hitting the same server, it threw the exception below:

[info] [ERROR] [03/28/2015 17:53:16.735] [hcp-client-akka.actor.default-dispatcher-6] [akka://hcp-client/system/IO-TCP/selectors] null
[info] akka.actor.ActorInitializationException: exception during creation

[info] at akka.actor.ActorInitializationException$.apply(Actor.scala:164)

[info] at akka.actor.ActorCell.create(ActorCell.scala:596)

[info] Caused by: java.lang.reflect.InvocationTargetException

[info] at sun.reflect.GeneratedConstructorAccessor59.newInstance(Unknown Source)

[info] Caused by: java.io.IOException: Too many open files
[info]   at sun.nio.ch.IOUtil.makePipe(Native Method)

I was previously using the Apache HTTP client (which is synchronous), and it was able to handle 10000+ requests with a concurrency of 100.

I'm not sure what I'm missing. Any help would be appreciated.

Adrianople answered 29/3, 2015 at 1:18 Comment(0)

The problem is that every time you call the get() method it creates a new actor that opens at least one connection to the remote server. Furthermore, you never shut that actor down, so each such connection lives until it times out.

You only need a single such actor to manage all your HTTP requests. To fix it, take IO(Http) out of the get() method and call it only once, then reuse the returned ActorRef for all your requests to that server. Shut it down on application shutdown.

For example:

val system: ActorSystem = ...

// Create the manager reference once, at startup, and reuse it for
// every request. (Http.Bind is server-side only; no bind step is
// needed on the client.)
val io = IO(Http)(system)

def get(): Unit = {
  ...
  io.ask ...
  // or
  io.tell ...
}
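As a side note (my addition, not part of the original answer): since the root cause is the OS file-descriptor limit, it can help to confirm how close the process is to that limit under load. A rough diagnostic sketch, assuming a Unix-like system with `lsof` installed:

```shell
# Per-process open-file limit for processes started from this shell
ulimit -n

# Count the file descriptors currently held by a process;
# $$ is this shell's own PID -- substitute the JVM's PID in practice
lsof -p $$ | wc -l
```

If the descriptor count climbs toward the limit during the benchmark, either connections are not being reused (the bug above) or the limit itself needs raising, e.g. via `ulimit -n` in the service's startup script.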
Apocarpous answered 29/3, 2015 at 7:5 Comment(7)
Please correct me if I'm wrong, but aren't akka actors supposed to be instantiated in the millions? Why would 1000 concurrent actors saturate the system? – Kristof
Yes, you are right, but it depends on the type of actor. If your actor manages a limited resource like IO, then those resource limits apply correspondingly. In this case your OS is limiting the number of open FDs/sockets to the server. There is no point in opening thousands of TCP connections to the same server; just a few suffice to send thousands of HTTP requests. – Handtohand
It worked. But now I'm getting a different exception. [error] akka.ConfigurationException: Logger specified in config can't be loaded [akka.event.Logging$DefaultLogger] due to [akka.event.Logging$LoggerInitializationException: Logger log1-Logging$DefaultLogger did not respond with LoggerInitialized, sent instead [TIMEOUT]] – Adrianople
Perhaps you need to configure the logger in your conf and also make sure you have the sbt dependency for that logger implementation. – Handtohand
The logger was configured. It was showing those exceptions only after 1000+ requests. I fixed it by increasing the timeout in the configuration. – Adrianople
I'm using a single actor for all the concurrent requests (as you mentioned). I'm trying to send 10000 requests with a concurrency of 100. The problem now is that up to 2000 requests the http client runs fine, but after that it becomes very slow and the apache benchmarking tool stops with a timeout. Is there anything I can fix in the above code? – Adrianople
Enable all sorts of debugging for Akka and see if you spot a problem. Hard to say. Check that all your code is non-blocking, e.g. sendSuccess – what does it do? Test against another, fast site too. Good luck. I'll be curious to know what the problem was. – Handtohand
