Maximum concurrent Socket.IO connections
This question has been asked previously but not recently and not with a clear answer.

Using Socket.io, is there a maximum number of concurrent connections that one can maintain before you need to add another server?

Does anyone know of any active production environments that are using WebSockets (particularly socket.io) on a massive scale? I'd really like to know what sort of setup is best for maximum connections.

Because Websockets are built on top of TCP, my understanding is that unless ports are shared between connections you are going to be bound by the 64K port limit. But I've also seen reports of 512K connections using Gretty. So I don't know.

Compressed answered 8/4, 2013 at 6:40 Comment(10)
Trello use sockets on a massive scale (specifically, socket.io).Wrath
I read that Trello had to modify Socket.io code because of a 10,000 connection cap and were able to maintain 'many thousands' of connections before adding servers. Still a huge gulf between that and 512K of other server systems.Compressed
How old is that article though? Trello has just recently reached over 1 million active users per month so I would imagine they are now running more than 10,000 active sockets. Trello use Redis to sit on top of socket.io for scalabilityWrath
Trello now apparently has over 4 million users, but surely they are running that on a large number of servers, right? That brings me back to my original question: what's their (or anyone else's) actual peak concurrent user count per server? It would also be good to know what kind of server/container they use. And are they still running their own fork, or are they back to the origin/master? My only purpose in asking this question was in trying to gauge if my company (at the time) could afford to maintain a Socket.io application for probably 120,000 concurrent connections.Compressed
Yeah they have a 4 million user base but around 1 million active users each month. I couldn't really comment on how Trello have scaled since their initial stack because I don't know where their bottlenecks are. They did clearly have scaling issues with socket.io (which is I/O bound) therefore one could assume they have scaled out more than they have up. However, you can run multiple instances of Node on one machine therefore it wouldn't surprise me if they were running just a couple of powerful machines with multiple node processes.Wrath
Any issues the guys at Trello encountered with socket.io they said they have raised against the project and looking at a general fix. However, as far as I could tell their solution was to disable all transports other than web sockets to prevent socket.io being too "chatty". To answer your question, 120,000 concurrent socket connections are certainly viable - this guy managed 1 million raw connections, whether socket.io is capable of that I guess is a different story, you would need to benchmark it.Wrath
Regarding the port limit, I think the explanation for why that is not an issue is explained here. Basically, the only port used on your system is the one on which you are listening. Sockets are created for each connection, and those use file descriptors, but they don't use ports on your box.Chord
12 million connections? mrotaru.wordpress.com/2013/10/10/…Tabling
Please don't use @MajidJafari answer in RHEL or CENTOS. You won't be able to sudo due to corrupt /etc/sysctl.conf. Meaning you will be locked out of your EC2 instance or PC. I had to detach volume, mount a backup volume as root volume and edit the /etc/sysctl.conf and /etc/security/limits.conf. Please use his answer if you know what you are doing.Kerrikerrie
Thanks @frank. I added your comment to his answer. I'm sorry you had to end up doing that. Sounds very gross, but I'm glad you got out of it okay.Compressed

This article may help you along the way: http://drewww.github.io/socket.io-benchmarking/

I wondered the same question, so I ended up writing a small test (using XHR-polling) to see when the connections started to fail (or fall behind). I found (in my case) that the sockets started acting up at around 1400-1800 concurrent connections.

This is a short gist I made, similar to the test I used: https://gist.github.com/jmyrland/5535279

Cannes answered 7/5, 2013 at 19:19 Comment(6)
I realize this is an older topic but I found it first when searching for a question to my answer and ultimately discovered this to be helpful: rtcamp.com/tutorials/linux/increase-open-files-limit The open file limit per process may default to a soft limit of 1024 and hard limit of 4096 and since every open TCP port represents a file, it's important to consider these limits when determining how many open sockets a machine will allow before trying to max out the library.Vaporous
@Cannes Did you ever discover why your web sockets were acting up around 1400-1800 connections? I am having the identical issue, and my file limits are set to 100,000, so I know that is not the issue. Any help would be greatly appreciated. Thank you.Dewar
@seth: it has been a while since I last reviewed this, but I think this was the conclusion: XHR polling took up too many resources (relative to other transport methods). When using WebSockets, the number of concurrent connections was higher.Cannes
@Cannes thank you for the answer. I am seeing the same issues using the ws module, not socket.io, so there shouldn't be any XHR polling with the ws module. That is where I'm having problems troubleshooting. The search continues.Dewar
This is a good, clean answer, and correct in that it is case by case. Personally, I suggest people write their own benchmarks or connection simulators. While a test from someone else might be good, it does not represent your real-world environment. Once you have a client simulator capable of handling any number of clients with various real-world faults, you can benchmark after major changes and also update your simulator as you go. Operating a user chat interface is different from monitoring users' browsers, and so on. I found Python very handy for scripting a simulator.Lovieloving
Hi Jam, doesn't it depend on the server it runs on? What kind of server did you test on?Maffick

I tried to use socket.io on AWS, and I could keep at most around 600 connections stable.

I found out that this is because socket.io uses long polling first and upgrades to WebSocket later.

After I set the config to use WebSocket only, I could keep around 9,000 connections.

Set this config on the client side:

// Skip the long-polling handshake and connect over WebSocket only
const socket = require('socket.io-client')
const conn = socket(host, { upgrade: false, transports: ['websocket'] })
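For completeness, the server can be locked to WebSocket as well, so clients cannot fall back to polling at all. A sketch assuming Socket.IO's documented `transports` server option (the option objects below are illustrative; pass them to your own server/client constructors):

```javascript
// Server-side counterpart: omit 'polling' from transports so long-polling
// connections are refused outright. Pass when constructing the server, e.g.
//   const io = require('socket.io')(httpServer, serverOptions);
const serverOptions = {
  transports: ['websocket'],  // no 'polling' entry
};

// Client options matching the answer above:
const clientOptions = { upgrade: false, transports: ['websocket'] };

console.log(serverOptions.transports.includes('polling'));  // → false
```

Note the trade-off mentioned elsewhere in this thread: old browsers without WebSocket support will no longer be able to connect at all.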
Amoritta answered 14/3, 2019 at 7:31 Comment(6)
did you use EC2, which kind of instance ? t2.micro, t2.nano ?Planarian
Did you notice a difference in responsiveness when you force websockets?Hyunhz
Do you know what size your instance was? Also just so anyone in the future knows some old browsers don't support WebSockets which is why the upgrade may be important for some.Palinode
How can we test how many connections the server supports? How did you measure the 9,000 connections? Please suggest.Fastigiate
This thread is old, but did anyone figure out what kind of EC2 instance to use? I am trying to figure out what instance to choose for: 50, 100, 200, and 300 concurrent connections. Hopefully, somebody will answer.Erastatus
I'm using websocket only (with nginx reverse proxy). My app gets unresponsive after 1200 connections. I have set all the ulimits, but nothing works.Secateurs

GO THROUGH THE COMMENTS ON THIS ANSWER BEFORE PROCEEDING FURTHER

The question asks about socket.io sockets; this answer is about native sockets. These changes are dangerous because they apply to everything on the system, not just socket.io sockets. Besides, today the network is rarely the bottleneck for socket.io. Do not make these changes to your system without understanding the implications first.

For +300k concurrent connection:

Set these variables in /etc/sysctl.conf:

fs.file-max = 10000000 
fs.nr_open = 10000000

Also, change these variables in /etc/security/limits.conf:

* soft nofile 10000000
* hard nofile 10000000
root soft nofile 10000000
root hard nofile 10000000

And finally, increase TCP buffers in /etc/sysctl.conf, too:

net.ipv4.tcp_mem = 786432 1697152 1945728
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216

For more information, please refer to this.
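Given the warnings in the comments about locking yourself out, a safer workflow is to inspect the current values read-only first, and to apply changes through `sysctl` so a bad entry in `/etc/sysctl.conf` surfaces immediately rather than at reboot. A sketch (the `/proc` paths and keys are standard Linux; the 10000000 figure is this answer's, not a recommendation):

```shell
# Inspect the current limits first (read-only, safe on any Linux box):
cat /proc/sys/fs/file-max   # system-wide open-file cap
cat /proc/sys/fs/nr_open    # per-process cap (hard-capped at 1048576 in some kernels)
ulimit -Sn                  # this shell's soft nofile limit
ulimit -Hn                  # this shell's hard nofile limit

# To apply changes at runtime and validate the config file (these WRITE to the system):
# sudo sysctl -w fs.file-max=10000000
# sudo sysctl -p   # reload /etc/sysctl.conf; an error here exposes a bad entry
#                  # before a reboot leaves you unable to sudo
```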

Abebi answered 20/11, 2019 at 8:8 Comment(4)
Can you edit your answer and shortly explain what is going on here?Ilonailonka
I can't attest to this personally, but according to a comment by @Kerrikerrie above: "Please don't use (this) answer in RHEL or CENTOS. You won't be able to sudo due to corrupt /etc/sysctl.conf. Meaning you will be locked out of your EC2 instance or PC. I had to detach volume, mount a backup volume as root volume and edit the /etc/sysctl.conf and /etc/security/limits.conf. Please use his answer if you know what you are doing."Compressed
The question asks about socket.io sockets; this answer is about native sockets. These changes are dangerous because they apply to everything on the system, not just socket.io sockets. Besides, today the network is rarely the bottleneck for socket.io. Do not make these changes to your system without understanding the implications first.Preciousprecipice
Use sysctl to modify settings, and run a check before making any changes to see whether the value is already set higher. Also, nr_open has a hardcoded kernel limit of 1024*1024, so increasing it beyond that will either have no effect or produce an error.Withstand

This guy appears to have succeeded in having over 1 million concurrent connections on a single Node.js server.

http://blog.caustik.com/2012/08/19/node-js-w1m-concurrent-connections/

It's not clear to me exactly how many ports he was using though.

Role answered 24/10, 2014 at 16:52 Comment(1)
Not with socket.io, I think, and not even with WebSockets. The guy seemed to be using long polling, which I guess is less resource-hungry.Cayuse

I would like to provide yet another answer in 2023.

We only use WebSocket in socket.io-client. We have done two types of performance tests:

  1. My test team uses JMeter to test up to 5,000 concurrent connections. Due to the nature of our product, 5,000 connections is enough for us, so we didn't go higher.

  2. I use https://a.testable.io/ to do another kind of performance test. The reason I use Testable (this is NOT a sales pitch for them, lol) is that I can choose ws clients from different locations, e.g. I chose three locations in NA and one in Asia. I believe this is closer to a real-life scenario than just running a test script from my local machine (which I do have too). This kind of test costs money; to quote their technical support after I ran my test: "I see you ran a 20,000 user test successfully today too, great! Less than $20 for a test of this size is by far the best pricing out there :)."

BTW, you can also refer to https://ably.com/topic/scaling-socketio, which is the latest published article about socket.io performance I could find.

So in summary, I would argue that if you only use WebSocket, 5,000 to 10,000 concurrent connections should not be too hard to achieve.

Fogged answered 5/1, 2023 at 9:57 Comment(0)

After making these configuration changes, you can verify them by running this command in a terminal:

sysctl -a | grep file
Caputto answered 14/8, 2020 at 11:22 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.