Accurately measuring required bandwidth?



  • In the ongoing series of prototyping my hardware/software needs, I've come to the point where I've got a basic framework for what I want, but I'm unsure what the final version would cost if deployed to the internets (think something like Amazon EC2 or similar).

    So I want to measure things like CPU usage, memory usage, bandwidth used, etc. I have some solid ideas for CPU / memory measurement (but suggestions are welcome, since you guys have pointed me at better solutions than what I've been picking).
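    For CPU / memory, what I've got so far is basically polling the stock Windows performance counters. A rough sketch of the idea (the counter names are the built-in Windows ones; the one-second sampling loop is just illustrative):

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Threading;

    class ResourceSampler
    {
        static void Main()
        {
            // Built-in Windows counters: total CPU % and free physical memory.
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            var freeMem = new PerformanceCounter("Memory", "Available MBytes");

            // The first read of a rate counter always returns 0, so prime it.
            cpu.NextValue();

            while (true)
            {
                Thread.Sleep(1000);
                Console.WriteLine("CPU: {0:F1}%  Free RAM: {1:F0} MB",
                    cpu.NextValue(), freeMem.NextValue());
            }
        }
    }
    ```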

    But bandwidth utilization is a bit tricky for me, as I've got a few things I need to measure:

    1. The load-balancing/caching web server (NGINX)
    2. The ASP.NET MVC 4 application on IIS
    3. A C# application with persistent connections (always-on services) to third-party apps

    The best approach I can think of is to track activity across the different apps, associate a cost with each one, and sum the results. But I'm not sure how to measure the true costs of NGINX / ASP.NET, because I don't know how bandwidth gets charged when caching is involved. (i.e., does it cost the same to deliver a cached file as it does to generate and send new data?)
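    One idea I've had for the NGINX piece: sum bytes out of the access log. The default "combined" log format writes $body_bytes_sent per request (there's also $bytes_sent if headers should count), and it gets logged whether or not the response came from cache, so it should capture actual egress. A rough sketch (the log path is made up; it assumes the default combined format):

    ```csharp
    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class NginxEgress
    {
        static void Main()
        {
            // Made-up path -- point this at the real access log.
            const string logPath = @"C:\logs\nginx\access.log";

            // The combined format puts status and body bytes right after the
            // quoted request line:  ... "GET /x HTTP/1.1" 200 2326 ...
            var statusAndBytes = new Regex(@""" (\d{3}) (\d+|-) ");

            long totalBytes = 0;
            foreach (string line in File.ReadLines(logPath))
            {
                Match m = statusAndBytes.Match(line);
                if (m.Success && m.Groups[2].Value != "-")
                    totalBytes += long.Parse(m.Groups[2].Value);
            }

            Console.WriteLine("Total body bytes served: {0} ({1:F2} GB)",
                totalBytes, totalBytes / (1024.0 * 1024.0 * 1024.0));
        }
    }
    ```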

    Any recommendations or suggestions on how to measure bandwidth are greatly appreciated. The results of these tests are going to drive how I set up pricing. The service I'm looking to launch isn't out to make money, but it shouldn't lose money either. (I'm trying to create a self-sustaining app in terms of hosting costs.)



  • Maybe do a dry run with an Azure free trial, splitting each of your pieces onto separate servers, and see what usage you get?



  • Cached data will cost a lot less. All you send is a few bytes (an HTTP 304 Not Modified response) saying the file hasn't changed, and the browser keeps using its cached copy instead of downloading it again.

    So, as far as static content goes, you only have to pay to send it once per user device, plus a small revalidation cost per request.
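    If you want to see how little a revalidation actually transfers, it's easy to poke at yourself. A quick sketch (the URL is just a placeholder, and it assumes the server hands out an ETag): fire a normal GET, then repeat it conditionally and watch the 304 come back with no body.

    ```csharp
    using System;
    using System.Net.Http;

    class ConditionalGetDemo
    {
        static void Main()
        {
            string url = "http://example.com/static/site.css"; // placeholder URL
            var client = new HttpClient();

            // First fetch: full body, plus whatever validator the server sends.
            var first = client.GetAsync(url).Result;
            Console.WriteLine("First fetch:  {0}, {1} bytes",
                first.StatusCode, first.Content.Headers.ContentLength);

            var etag = first.Headers.ETag;
            if (etag == null)
            {
                Console.WriteLine("Server sent no ETag; nothing to revalidate against.");
                return;
            }

            // Second fetch: conditional GET. An unchanged file comes back as
            // 304 Not Modified with headers only -- the "few bytes" in question.
            var request = new HttpRequestMessage(HttpMethod.Get, url);
            request.Headers.IfNoneMatch.Add(etag);
            var second = client.SendAsync(request).Result;
            Console.WriteLine("Second fetch: {0} (no body)", second.StatusCode);
        }
    }
    ```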

    You can also save money by using a CDN for things like JavaScript and common CSS. This can also make your site faster. For example, nearly everybody already has jQuery in their cache, so everybody is better off if you point the browser at a CDN copy of jQuery so it uses the cached version.

    As far as estimating... I don't know. It depends on the site, what it serves, and so on. Your best bet is to overestimate, at least for the first month. Bandwidth is cheap if you're not Netflix.



  • My prototype is an application that connects to a third-party service that delivers multimedia (though my application is strictly text communication with the service, with the possible exception of embedding a player - TBD).

    But the service will also be an analytics platform doing data analysis on the received text, so I'm expecting at least moderate CPU and RAM usage, but I don't know how that translates to actual bandwidth, since it's mostly internal calculations / LAN queries (though if I move to a web platform...)



  • So it's Twitch Plays Pokemon.



  • Most sane platforms don't charge for bandwidth used within the same data center. (You'll want to confirm that with whoever you pick.) So you can have one big beefy box that does the database or app layer, and lots of proxies.

    So, basically, you'll be paying for the bandwidth it takes to get data from the third party, and the bandwidth to serve the users.

    I just checked Linode's pricing. Their cheapest plan gives you 2TB of bandwidth ($10/month). A plan with 4GB of RAM gives you 4TB ($40/month). Now, you might pay more for Windows VMs or hosting, but it won't be that much more.

    I would be very surprised if you came anywhere near 2TB without having video physically located on your server, and serving it to the public.
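    Back of the envelope: 2TB/month is roughly 2,000,000,000,000 bytes / 2,592,000 seconds ≈ 770 KB/s, or about 6 Mbit/s sustained around the clock. A mostly-text service won't get anywhere near that.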



  • Uh, no.

    Think more like NightBot & NightBot services, but made by a guy who does massive amounts of analytics for VPs, knows how to integrate wildly varying technologies, and knows how to facilitate taxes.



  • I'd be surprised too, but I like to plan out my scenarios in fair detail so I at least have decent control over which direction my projects take in terms of reliability. Right now, my personal hosting is probably about on par with most free or basic plans, and starts getting outclassed around standard/pro-type plans (in terms of computing power I still win, but in terms of distribution I drop off significantly).

    The reason I'm looking into this early is to know how to tell when I'll need to upgrade to those more expensive plans, or at least anticipate when things are likely to start dying. (But I'm still in the prototyping phase, so that should still be a ways off.)



  • What kind of hosting do you have locally? Business class home internet?

    Most providers are CPU-bound, but many have recently invested in fast networks, so bandwidth has a very low marginal cost per customer. When you get a VM, CPU is what you're really paying for.



  • Lower-end fiber that should support the prototype, with some better fiber options (100/100 or 1000/1000) if I'm willing to shell out some cash. (The measurements are to determine whether shelling out the cash would be worth it, since the hardware I have would beat most of the plans I've seen in the $60-75 range.)



  • In your position, I would set up 2-3 VMs with 512MB of RAM each to proxy the app server at home. That would probably cost $10-20 per month, plus your local internet costs, though that's not a marginal cost if you're paying for it anyway.



  • You should build Twitch Plays Pokemon, that's pretty great.



  • If it makes you feel better, my platform will support Unity3D integration after I finish the broadcaster dashboards.

    I'm also laughing at @Captain's 4 edits.



