Concurrent users per physical server



  • There are a ton of variables that complicate this answer - application design, app server software, database, processor, operating system, etc. - but what would you say is the number of concurrent users a physical machine running a web server can handle?

    Say IIS 6 on a Dell PowerEdge 2970, hitting an unremarkable .aspx page pulling some content from SQL Server.

    Forgive the general vagueness of this question. TIA



  • Concurrent users or concurrent connections?  I don't know about IIS and SQL Server, but with Apache and Postgres/MySQL you should be able to easily handle 90+ dynamic page views per second.

     

    EDIT: Meant to point out that IIS and Apache are going to be about equally matched and that I am assuming the DB is on the same physical machine. 



  • Sorry if that wasn't clear. I meant users accessing the same .aspx page over HTTP at the same time.

    With light traffic requirements I would use one machine for web and DB, but if the needs were greater I'd use a load-balanced scheme with separate app servers and a separate database server.



  • @chebrock said:

    Sorry if that wasn't clear. I meant users accessing the same .aspx page over HTTP at the same time.

    In that case you can generally assume an easy sustained 100 concurrent requests with peaks up to 150 or so.  This is with one server for web and DB.  If you start scaling up you can go quite a bit beyond this.


  • :belt_onion:

    If you have time, build a simple page that accesses your database and stress test it. You could use a tool such as HP LoadRunner (formerly known as Mercury Interactive LoadRunner).

    Measuring is the only way to know for sure.
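
    If you don't have LoadRunner handy, even a throwaway script will give you a rough number. Here is a minimal sketch in Python - the URL, worker count, and request count are placeholders to swap for your own setup:

        # Crude load test: hammer one page from many threads and report
        # throughput.  A rough sketch, not a LoadRunner substitute.
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://localhost/test.aspx"   # hypothetical test page
        WORKERS = 50                         # simulated concurrent users
        REQUESTS = 1000

        def hit(_):
            start = time.time()
            with urllib.request.urlopen(URL) as resp:
                resp.read()                  # drain the response body
            return time.time() - start

        t0 = time.time()
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            latencies = list(pool.map(hit, range(REQUESTS)))
        elapsed = time.time() - t0

        print(f"{REQUESTS / elapsed:.1f} req/s, "
              f"avg latency {sum(latencies) / len(latencies) * 1000:.0f} ms")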



  • @morbiuswilters said:

    This is with one server for web and DB.  If you start scaling up you can go quite a bit beyond this.
     

    Do you have any estimates of what 2 servers with a load balancer could handle? 3 servers, etc.?

    Let's say a website was advertised during the Super Bowl - what kind of architecture could handle that? Assume said advertisement had successfully lured the viewer to go to the site using popular imagery (breasts, monster trucks, close-ups of pizza with a lot of cheese on it, more breasts).



  • @chebrock said:

    Do you have any estimates of what 2 servers with a load balancer could handle? 3 servers, etc.?

    At that point it becomes more speculative, but the next bottleneck will be your DB server, since you can add many web servers before you hit the limits of load-balancing scalability.  Each web server will probably add a sustained 100 page renders per second, with peaks of 150 or so.  From my experience with MySQL, a hefty DB server (with 15k RAID 10 disks, lots of RAM, powerful procs) can serve data for 5-10 web servers for a moderately query-heavy site.  If your queries are smaller, or you run fewer than 10 or so queries per page, you can probably get more out of a DB server.  Bigger, slower queries (especially for things like report generation) can bog a DB down heavily, but your average "log the hit, pull the session/user data, pull page content based off of a primary-key ID" page doesn't tend to make use of complex or slow queries.
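
    As a concrete illustration of that cheap query pattern - one logged hit plus a couple of primary-key lookups - here is a sketch (table and column names are invented, and sqlite3 stands in for the real DB):

        # The "cheap" per-page pattern: one insert plus two indexed lookups.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE hits     (page_id INTEGER, ts DATETIME DEFAULT CURRENT_TIMESTAMP);
            CREATE TABLE sessions (session_id TEXT PRIMARY KEY, user_name TEXT);
            CREATE TABLE pages    (page_id INTEGER PRIMARY KEY, content TEXT);
            INSERT INTO sessions VALUES ('abc123', 'chebrock');
            INSERT INTO pages VALUES (42, 'Hello, world');
        """)

        def render_page(session_id, page_id):
            db.execute("INSERT INTO hits (page_id) VALUES (?)", (page_id,))   # log the hit
            user = db.execute("SELECT user_name FROM sessions WHERE session_id = ?",
                              (session_id,)).fetchone()                       # pull session/user data
            page = db.execute("SELECT content FROM pages WHERE page_id = ?",
                              (page_id,)).fetchone()                          # pull content by primary key
            return f"{user[0]}: {page[0]}"

        print(render_page("abc123", 42))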

     

    Once you reach the limit of the DB server, you will have to start looking at partitioning the data across multiple DB servers.  Generally you want to keep the queries for a single page hit confined to one server, so you aren't making 2+ DB connections for every page view, which will eat up DB resources and reduce scalability.  Partitioning can be accomplished in a number of ways, but the general idea is to keep commonly-grouped data on the same server.  That way 95% of your page views hit only one DB server and the other 5% connect to 2 or more DB servers and aggregate data in code.
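
    A bare-bones sketch of that routing idea, with invented host names - hash the user ID so all of one user's data lives on a single server:

        # Keep commonly-grouped data together: pick one DB server per user.
        DB_SHARDS = ["db1.example.com", "db2.example.com", "db3.example.com"]

        def shard_for(user_id: int) -> str:
            # A normal page view only ever needs the one shard this returns.
            return DB_SHARDS[user_id % len(DB_SHARDS)]

        print(shard_for(1001))   # -> db3.example.com (1001 % 3 == 2)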

     

    @chebrock said:

    Let's say a website was advertised during the Super Bowl - what kind of architecture could handle that? Assume said advertisement had successfully lured the viewer to go to the site using popular imagery (breasts, monster trucks, close-ups of pizza with a lot of cheese on it, more breasts).

    You had me until "monster trucks" and then you had me again.  I have no idea what "Super Bowl"-level traffic would be, but in my experience you can serve a pretty hefty app (100+ big queries per page) on mediocre hardware (6x PowerEdge 750 web servers, 4x PE 2850 DB servers) and comfortably serve up 7-8 million dynamic page views a day with extra capacity for bursts.  Most of those 7-8 million pages are rendered between normal business hours (8am EST to 6pm PST), so it is not an even distribution of load.  These machines were a few years old at the time and were pretty mid-range when they were new (2.4GHz single-core procs, 1GB RAM).
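
    For a rough sense of what those numbers work out to per box (assuming most of the 7-8 million pages land in that 13-hour window):

        # Back-of-envelope check of the figures above.
        pages_per_day = 7.5e6      # middle of the 7-8 million range
        window_hours = 13          # 8am EST through 6pm PST (= 9pm EST)
        web_servers = 6

        avg_rps = pages_per_day / (window_hours * 3600)
        print(f"{avg_rps:.0f} req/s total, {avg_rps / web_servers:.0f} per web server")
        # -> ~160 req/s total, ~27 per server: plenty of headroom for bursts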



  • @morbiuswilters said:

    @chebrock said:

    Let's say a website was advertised during the Super Bowl - what kind of architecture could handle that? Assume said advertisement had successfully lured the viewer to go to the site using popular imagery (breasts, monster trucks, close-ups of pizza with a lot of cheese on it, more breasts).

    You had me until "monster trucks" and then you had me again.

    You're that fond of cheese? 



  • I have no idea what level of traffic a simple web/DB server can handle; as has already been said, you will need to measure it.

    For what it's worth, I look after an ASP.NET site at work which is running on a virtual server.  It is emulated as a single processor and never goes over 15% CPU.  It handles about 10 requests a second normally, less overnight.  However, the databases behind this site are on a separate MSSQL server cluster (4x dual-core 3GHz Xeons, 32GB RAM).  I know SQL Server likes memory; I don't know how much the contention between IIS and SQL will hurt throughput.

    If you want some serious scaling examples, take a look at this: http://dammit.lt/uc/workbook2007.pdf.  It is a document discussing the architecture behind Wikipedia.



  • @morbiuswilters said:

    but the next bottleneck will be your DB server
     

    Can you prerender the pages to HTML?
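
    Something like this, say - a rough sketch where render_page() and the page IDs are placeholders for whatever the app actually does:

        # Prerender DB-backed pages to static HTML once, so the web server
        # can serve them without touching the database on every hit.
        from pathlib import Path

        def render_page(page_id: int) -> str:
            # stand-in for the real DB-backed rendering
            return f"<html><body>Page {page_id}</body></html>"

        out = Path("static")
        out.mkdir(exist_ok=True)
        for page_id in (1, 2, 3):
            (out / f"page{page_id}.html").write_text(render_page(page_id))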



  • guys, what about bandwidth??

    It's a related issue...

    100 req/second with an average page size of 200kb (kilobits) = 20,000kb/s - more or less 20Mbps to serve 1000 users...

    What do you think the normal situation should be? (A smaller page, of course.)

     

    Simone



  • @simone basso said:

    guys, what about bandwidth??

    It's a related issue...

    100 req/second with an average page size of 200kb (kilobits) = 20,000kb/s - more or less 20Mbps to serve 1000 users...

    What do you think the normal situation should be? (A smaller page, of course.)

     

    Simone

    This is a 3-month-old thread you resurrected, but I will go ahead and respond to you.

     

    Nobody on here said the pages were 200k each.  That's quite massive, but if that's what he's serving up, then that's what he's serving up.  20Mbps is nothing a modern server can't handle: the bottleneck will almost always be the database, not the bandwidth to the user.
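
    For what it's worth, the arithmetic above checks out if the 200k is read as kilobits - and kb vs kB makes an 8x difference here:

        req_per_sec = 100
        page_kilobits = 200                    # Simone's figure, read as kilobits

        mbps = req_per_sec * page_kilobits / 1000
        print(f"{mbps:.0f} Mbps")              # -> 20 Mbps, as stated
        # If those were kilobytes instead, it would be 8x that: 160 Mbps.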



  • If the users are not doing stuff that requires session management, you can cache a lot. I have seen a portal-type solution handle 300 concurrent users while the server was using 0.2 CPUs in total.

    The more actual logic you have - elements that cannot be shared between users and user sessions - the less you can cache and the more you need real hardware. Sometimes this is even hard to estimate. I was on a project where we built an intranet solution for 15,000 daily users. By accident we noticed that the servers we had built could handle 100,000 daily users...
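
    As a sketch of the kind of caching that makes session-less pages that cheap - memoize rendered output by URL with a short TTL (names here are illustrative, not from any particular framework):

        import time

        CACHE_TTL = 60            # seconds; fine when pages aren't user-specific
        _cache = {}               # url -> (expiry, rendered html)

        def cached_render(url, render):
            now = time.time()
            hit = _cache.get(url)
            if hit and hit[0] > now:
                return hit[1]                 # shared across all users
            html = render(url)                # only cache misses touch the DB
            _cache[url] = (now + CACHE_TTL, html)
            return html

        print(cached_render("/frontpage", lambda u: f"<html>{u}</html>"))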

     

