Part 2: Lessons learned tuning TCP and Nginx in EC2

February 12th, 2014 by Justin

EDIT 2/20/14: Updated to reflect correct response time metric

In part 1 of our post, one of the items we discussed was our issues with using DNS as a load balancing solution. To recap, at the end of our last post we were still set up with Dyn’s load balancing solution and our servers were receiving a disproportionate amount of traffic. Server failover was still not as seamless as we wanted, due to DNS TTLs not always being obeyed, and our response times were much higher than we wanted them to be, hovering around 200-250 ms.

In part 2 of this post, I’ll cover the following:

  • How we improved our issues with server failure and response time by using Amazon’s ELB service
  • The performance gains we saw from enabling HTTP keepalives

  • Future steps for performance improvements

But before I dive into the ELB, there’s one topic I left out of my last post that I wanted to mention.

TCP Congestion Window (cwnd)

In TCP congestion control, there exists a variable called the “congestion window”, commonly referred to as cwnd.  The initial value of cwnd is referred to as “initcwnd”.  Once the initial TCP handshake is done and we begin to send data, the cwnd determines how many bytes we can send before the client needs to respond with an ACK.  Let’s look at a graphic of how different initcwnd values affect TCP latency, from a paper Google released.

Latency vs initcwnd size

At Chartbeat, we’re currently running Ubuntu 10.04 LTS (I know, I know, we’re in the process of upgrading to 12.04 as this is being written), which ships with kernel 2.6.32.  Starting in kernel 2.6.39, thanks to some research from Google, the default initcwnd was changed from 3 to 10.  If you are serving up content greater than 4380 bytes (3 * 1460), you will benefit from increasing your initcwnd, due to the ability to have more data in flight (BDP, or bandwidth delay product) before the client has to reply with an ACK.  The average response size from ping.chartbeat.net is way under that, at around 43 bytes, so this change had no benefit to us at the time, when the servers were not behind the ELB.  We’ll see why increasing the initcwnd helped us later in the post when we discuss HTTP keepalives.
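
For anyone stuck on an older kernel, the initial congestion window can also be raised per-route with iproute2 rather than waiting on a kernel upgrade. Below is a minimal sketch, not our exact setup; the gateway and interface are placeholders you would replace with values from your own routing table.

    # Show current routes; on kernels before 2.6.39 the default initcwnd is 3
    # (it is not printed unless it has been explicitly overridden).
    ip route show

    # Raise the initial congestion window to 10 segments on the default route.
    # "10.0.0.1" and "eth0" are placeholders; use your own gateway and interface.
    sudo ip route change default via 10.0.0.1 dev eth0 initcwnd 10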

ELB (Elastic Load Balancer)

The options for load balancing traffic on AWS are fairly limited.  Your choices are:

  • An ELB

  • DNS load balancing service such as Dyn

  • Homegrown solution using HAProxy, nginx or <insert favorite load balancing software here>

Each of these solutions has its limitations, and depending on your requirements, some may not be suitable at all for you.  I won’t go into all of the pros and cons of each solution here, since there are plenty of articles on the web discussing these already.  I’ll just go over a few that directly affected our choice.

With a homegrown solution, supporting high availability and scalability is difficult.  Currently AWS has no support for gratuitous ARP, which is traditionally used to handle failover in both software and hardware load balancers.  To work around this, you can utilize Elastic IPs and homegrown scripts to move the Elastic IP between instances when a failure is detected.  In our experience we’ve seen lag times from 30 seconds to a few minutes when moving an Elastic IP.  During this time, you would be down hard and not serving any traffic.  This approach also only works when all your traffic can be handled by one host and you can accept the small period of downtime during failover.
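
To make the Elastic IP approach concrete, here is a hypothetical sketch of the kind of failover script involved; it is not something we run in production, and the health check path, instance ID and address below are made-up placeholders.

    #!/bin/bash
    # Hypothetical Elastic IP failover: if the active instance stops answering
    # a health check, re-associate the Elastic IP with a standby instance.
    ELASTIC_IP="203.0.113.10"      # placeholder Elastic IP
    STANDBY_INSTANCE="i-0bbbbbbb"  # placeholder standby instance ID

    if ! curl -sf --max-time 5 "http://${ELASTIC_IP}/healthcheck" > /dev/null; then
        # Re-association is one API call, but in our experience the cutover can
        # take anywhere from ~30 seconds to a few minutes to take effect.
        aws ec2 associate-address --instance-id "$STANDBY_INSTANCE" --public-ip "$ELASTIC_IP"
    fi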

But how would you handle a situation where your traffic was too high for one host?  You could launch multiple instances of your home grown solution but you would then need to handle balancing the traffic between these instances.  We already discussed in part 1 the issue we had with using DNS to handle the balancing of traffic.  The only other solution would be to actually use an ELB in front of these instances.  If we went with this solution, it meant adding another layer of latency to the request.  Did we really need to do something like this?

The reason why most people end up going with a solution like HAProxy is because they have more advanced load balancing requirements.  ELB only supports round robin request balancing and sticky sessions.  Some folks require the ability to do request routing based on URI, weight based routing or any of the other various algorithms that HAProxy supports.  Our requirements for a load balancing solution were fairly straightforward:

  • Evenly distribute traffic (better than our current DNS solution)

  • Highly available

  • Handle our current traffic peak (200k req/sec) and scale beyond that

  • End-to-End SSL support

ELB best met all these requirements for us, and a homegrown solution would have been overkill for our needs.  We didn’t need any of the advanced load balancing features; SSL is currently only supported natively in HAProxy’s development branch (1.5.x) and requires stunnel or Nginx alongside the stable branch (1.4.x); and we didn’t want to add any additional layers that would increase our latency even further.

Moving to ELB

The move to using an ELB was fairly straightforward.  We contacted Amazon support and our technical account manager to coordinate pre-warming the ELB.  According to the ELB best practices guide, ELBs will scale gradually as your traffic grows (they should handle a 50% increase in traffic every 5 minutes), but if we suddenly switched 100% of our traffic to the ELB, it would not be able to scale quickly enough and would start throwing errors.  We weren’t planning on doing a cutover in that fashion anyway, but to be safe we wanted to ensure the ELB was pre-warmed ahead of time even as we slowly moved over traffic.  We added all the servers into the ELB and then did a slow rollout utilizing Dyn’s traffic director solution, which allowed us to weight DNS records.  We were able to raise the weight of the ELB record and slowly remove the individual servers’ IPs from ping.chartbeat.net to control the amount of traffic flowing through the ELB.

Performance gains

We saw large, immediate improvements in our performance with the cutover to the ELB.  We saw fewer TCP timeouts and a decrease in our average response time.

We went from roughly 200 ms average response times to 30 ms response times.  That’s an 85% decrease in response time! (EDIT 2/20/2014) Thanks to Disqus commenter Mxx for pointing out that we measured the response time incorrectly here.  Moving behind the ELB changed the metric from a measure of response time between our servers and clients to a measure of response time between the ELB and our servers.  Comparing external data from Pingdom, we still saw a decrease in response time of about 20% at peak traffic times, going from 270 ms to 212 ms.  Apologies for the earlier incorrect statement.

Our traffic was now more evenly distributed than with our previous DNS-based solution.  We were able to further distribute our traffic shortly after, when Amazon released “Cross-Zone Load Balancing”.

Enabling cross-zone load balancing got our request count distribution extremely well balanced; the max difference in requests between hosts currently sits at around 13k requests over a minute.

Request Count Distribution ELB cutover

KeepAlives

With our servers now behind the ELB, we had one last performance tweak we wanted to enable: HTTP keepalives between our servers and the ELB.  Keepalives work by allowing multiple requests over a single connection.  In cases where users are loading many objects off your site, this can greatly reduce latency by removing the overhead of re-establishing a connection for each object.  CPU savings are seen on the server side, since less time is spent opening and closing connections.  All this sounds pretty great, so why didn’t we have it enabled beforehand?

There are a few cases where you may not want keepalives enabled on your web server.  If you’re only serving up one object from your domain, it doesn’t make much sense to keep a connection hanging around for more requests.  Each connection uses up a small amount of RAM.  If your web servers don’t have a large amount of RAM and you have a lot of traffic, enabling keepalives could put you in a situation where you consume all the RAM on the server, especially with a high default timeout for the keepalive connection.  For Chartbeat, our data comes from clients every 15 seconds; holding a connection open just to get a small amount of data every 15 seconds would be a waste of resources for us.  Fortunately we were able to offload that to the ELB, which enables keepalive connections by default for any HTTP 1.1 client.

With our servers no longer directly exposed to the clients, we could revisit enabling keepalives.  We are doing a high volume of requests between the ELB and our servers, with the connections coming from a limited set of servers on Amazon’s end.  We want the ELBs to be able to proxy as much information as possible to us over one connection and keep that connection open for as long as possible.  This is where having a larger initcwnd comes into play: a larger initcwnd lowers our latency and gets our bandwidth up to full speed between the servers and the ELB.  We expected to see a drop-off in the amount of traffic going through the servers as well as some CPU savings.  To ensure there were no issues, we did a “canary” test with keepalive enabled on one server and put it into production.  The results were not at all what we expected.  Traffic to the server became extremely spiky and average response time increased a bit when keepalives were enabled on the canary server.  After talking to Amazon about the issue, we learned that the ELB was favoring the host with keepalive enabled.  More traffic was being sent to that host, causing its latency to increase.  When the latency increased, the ELB would then send less traffic through the host and the cycle would start over again.  Once we confirmed what the issue was, we proceeded with the keepalive rollout and the traffic went back to being evenly distributed.  The number of sockets we had sitting in TIME_WAIT went from around 200k to 15k after enabling keepalives, and CPU utilization dropped by about 20%.
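
For reference, a quick way to eyeball the TIME_WAIT numbers we were watching during the rollout; this is just standard tooling on a stock Ubuntu box, nothing specific to our setup.

    # Rough count of sockets currently in TIME_WAIT (subtract one for the header line)
    ss -tan state time-wait | wc -l

    # Or the summary view, which breaks out timewait explicitly
    ss -s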

Keepalives and Timeouts

There are a few important things to be aware of when configuring keepalives with your ELB with regard to timeouts.  Unfortunately there’s a lack of official documentation on ELB keepalive configuration and behavior, so the information below could only be found through various posts on the official AWS forums.

  • The default keepalive idle connection timeout is 60 seconds
  • The keepalive idle connection timeout can be changed to values as low as 1 second and as high as 17 minutes with a support ticket
  • The keepalive timeout value on your backend server must be higher than your ELB connection timeout.  If it is lower, the ELB will re-use an idle connection after your server has already dropped it, resulting in the client being served a blank response.  The default Nginx keepalive_timeout of 75 seconds is safe with the default ELB timeout of 60 seconds (see the sketch below).
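
To make that relationship concrete, here is a minimal sketch of how you might verify the backend side of this on a stock Ubuntu/Nginx box; the config path is the default Ubuntu location and the value shown is Nginx’s built-in default, not a recommendation specific to our setup.

    # The ELB's idle timeout defaults to 60 seconds; the backend must keep idle
    # connections open longer than that, or the ELB may re-use a connection the
    # backend has already closed.
    grep -R "keepalive_timeout" /etc/nginx/

    # If nothing is set, Nginx's compiled-in default of 75s applies, which is
    # safe against the 60s ELB default. To be explicit you could set, for example:
    #     keepalive_timeout 75s;    # inside the http {} block of nginx.conf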

Downsides

While the ELB has worked out great for us and we’ve seen huge performance improvements from switching to using one in front of our servers, there are a few issues we’d love to see addressed in future roll-outs of the ELB:

  1. Lack of bandwidth graphs in CloudWatch.  I’m surprised the ELB has been around this long without this CloudWatch metric.  You get charged per GB processed through the ELB, yet there’s no way to see, from Amazon’s side, how much bandwidth is going through your ELB.  This could also help identify DoS attacks that don’t involve making actual requests to the ELB.

  2. No ability to pre-warm an ELB without going through support.  Right now it’s a process of contacting Amazon support to get an ELB pre-warmed and answering a bunch of questions about your traffic.  Even if this process were moved to a web form, like the one used for service limit increase requests, it would be better than the current method.

  3. No ability to clone an ELB.  Why would you want that?  If you have an ELB that is handling a large amount of traffic and you are experiencing issues with it, you cannot easily replace the faulty ELB in a hurry, due to the need for new ELBs to scale up slowly.  It would be extremely useful to clone an existing one, capturing its fully warmed configuration, and then be able to flip traffic over to it.  Right now if there’s an issue, AWS support needs to get involved, and unless you are paying for higher-end support, you may not get a fast enough response.

  4. No access to the raw logs.  A feature to send the ELB logs to an S3 bucket would be very valuable.  This would open up a bunch of doors with the ability to setup AWS Data Pipeline to fire off an EMR job or move data into Redshift.  Currently all that must be done on the servers behind the ELB.

  5. No official documentation on keepalive configuration or behavior.
  6. The ability to change the default keepalive timeout value is not exposed through the API and requires a support ticket.

Conclusions

We learned an important lesson by not monitoring some key metrics on our servers that were having an effect on our performance and reliability.  With increasing traffic, it’s important to re-evaluate your settings periodically to see if they still make sense for the level of traffic you are receiving.  The default TCP sysctl settings will work just fine for a majority of workloads, but when you begin to push your server resources to their limits, you can see big performance increases by making some adjustments to variables in sysctl (a small sketch of the kind of changes we mean follows the list below).  Through TCP tuning and utilizing AWS Elastic Load Balancer we were able to:

  • Decrease our traffic response time by 20%
  • Decrease our server footprint by 20% on our front end servers
  • Have failed servers removed from service within seconds
  • Eliminate dropped packets due to listen queue socket overflows
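
As an illustration of the kind of sysctl adjustments referred to above, here are a couple of listen-queue-related knobs; the values are examples only, not necessarily the exact variables or numbers we changed, and the right settings depend on your traffic and hardware.

    # Raise the accept/listen backlog limits so connection bursts don't overflow
    # the listen queue (values are illustrative, not a recipe).
    sudo sysctl -w net.core.somaxconn=1024
    sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096

    # Add the same keys to /etc/sysctl.conf so they survive a reboot.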

Next Steps

Since the writing of this article, we’ve done some testing with Amazon’s new C3 instance types and are planning to move from the m1.large instance type to the c3.large.  The c3.large is almost 50% cheaper and gives us more compute units, which in turn yields slightly better response times.

Our traffic is very cyclical, which lends itself perfectly to taking advantage of Amazon’s auto scaling feature.  Take a look at a graph of a week’s worth of traffic.

Total concurrents over one week

In the middle of the night (EDT), we see half of what our peak traffic was earlier in the day and on weekends we see about 1/3 less traffic than a weekday.

In the coming months we’ll be looking to implement auto scaling to achieve additional cost savings and better handle large, unexpected spikes of traffic.


Special thanks to the following folks for feedback and guidance on this post

  • igor47

    ELBs have many other problems; for instance, they’re public-ip only unless you’re in a VPC, which makes them useless for internal load balancing.

    You guys should check out SmartStack: http://nerds.airbnb.com/smartstack-service-discovery-cloud/ . It’s one of those “home-grown” solutions. However, we found that additional latency from a combination of nginx and haproxy on the frontend is minimal, around 3ms (as measured via newrelic’s queue time functionality, so that’s even an overestimation).

    • jlintz

      Yes, ELBs unless inside a VPC are public facing only, but this was not a requirement for us.

      SmartStack looks very interesting, we’ll check it out, thanks!

      • jmason

        fwiw, we’ve had great results with SmartStack (albeit on low-rps internal services so far).

        BTW, the ‘raw logs from ELB’ point has been implemented: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/access-log-collection.html

        Finally, another upside of ELB which you hadn’t mentioned was the price — effectively free 😉 This compares pretty well to the cost of provisioning a fleet of nginx/haproxy hosts…

        • https://coderwall.com/team/sonru mikhailov

          Provisioning an Nginx host is not as difficult as it sounds. I don’t think ELB is totally free; everything has its own cost, which is the sum of cold setup, maintenance and support. Raw logs have been implemented recently, which reduces the support cost; it is awesome but it is not enough. How do you tune sysctl on ELB without full root access? It’s not possible to enable TCP Fast Open on a new kernel or even to increase the TCP congestion window, and I’m not talking about upgrading the Linux kernel to get any other advantages.

          Let’s make things clear: ELB = micro/medium EC2 + open source software + nice GUI.

          Network optimisation’s first enemy is a narrow congestion window and slow start after idle (net.ipv4.tcp_slow_start_after_idle). How do you handle that with a black box and no root access? It’s nearly impossible. But I agree, not everybody needs that.

    • https://coderwall.com/team/sonru mikhailov

      A homegrown solution? An Nginx config of 25 LoC with upstream max_fails, that’s it.

  • J.A. de los Palotes Machado

    In the “Performance gains” section you mention going from ~200ms to ~3ms response time, but the graphic on its right shows from ~200ms to ~30ms. Am I missing something?

    • jlintz

      doh! Updated the numbers, thanks for catching that.

  • Alex Corley

    You should be using CloudFormation templates to set up your ELBs and web worker setups…. then *YOU* can clone *ANY* resources, in any region/AZ where they are available.

    • jlintz

      The ELB would still need to be pre-warmed even if using CloudFormation templates.

      • https://coderwall.com/team/sonru mikhailov

        ELB (a micro instance by default) has very unstable CPU performance. It can be promoted to medium via a support request, but you lose SPDY advantages when using ELB.

        Not sure if you can tune TCP Congestion Window on ELB, the same story about tcp_slow_start_after_idle kernel setting.

  • jmason

    Since you have a very clear day/night cycle, I’d be curious if you’ve seen cases where your servers are marked as unhealthy during times when the ELB is scaling up or scaling down — we ran into this, and it appears to be an ELB bug. Also, igor47, SmartStack looks very promising for internal LBs…

    • Mxx

      I doubt they have clear day/night cycles since their customers and customer’s visitors are global…

      • jmason

        take a look at the graph in the “Next Steps” section.

        • Mxx

          I stand corrected..and surprised. 🙂 I guess most of their traffic is from US..

    • jlintz

      @jmason:disqus haven’t seen that issue thankfully, did you speak to AWS support about it?

      • jmason

        Yep, we did — after quite a bit of back-and-forth, they offered to effectively turn off downscaling on the ELB which was suffering the issue. As far as I know, it hasn’t recurred since then.

  • http://dataddict.wordpress.com/ Marcos Ortiz

    You need to move to a new Ubuntu version with a kernel 3.X version. This version is great for its performance and optimization gains.

    • jlintz

      We recently completed the move to Ubuntu 12.04 for these servers. Can you expand on what you mean by performance and optimization gains?

      • https://coderwall.com/team/sonru mikhailov

        Older kernel versions have a broken implementation of the TCP congestion mechanism.

        • jlintz

          Do you have a link discussing this? It’s my understanding they changed the default TCP congestion control method from reno to cubic on the kernels in 10.04 vs 12.04 but nothing was “broken” about reno

          • https://coderwall.com/team/sonru mikhailov

            can’t find it, but it was definitely broken on some of my CentOS 5.* installations. A retrospective of TCP congestion mechanism evolution is available here: http://en.wikipedia.org/wiki/TCP_congestion-avoidance_algorithm

            Kernel 2.6.19+ is preferred. If you want to get max performance, take a look at TCP Fast Open (kernel 3.7+).

  • Mxx

    “We saw large, immediate improvements in our performance with the cutover to the ELB. We saw less TCP timeouts”
    Isn’t it because you are now measuring timeouts between your servers and ELB, and not timeouts to browsers, which are now invisible to you? Seems like rather than fixing this problem(if it’s possible to fix(?)) you swept it under the rug…

    “We went from roughly 200 ms average response times, to 3 ms response times.”
    Again, what exactly are you measuring there? Time to the last byte? To a browser? To ELB? With such huge drop in response time it seems like you are measuring time to your ELB and not what your users actually see…

    KeepAlives
    It is my understanding that when it comes to “keepalive” load balancers still maintain a single connection _per visitor_. What you described sounds more like request multiplexing, which afaik is supported by very few products(Zeus/Stingray and F5) and ELB is not one of them.

    “ELB was favoring the host with keepalive enabled. More traffic was being sent to that host causing its latency to increase. When the latency increased, the ELB would then send less traffic through the host and the cycle would start over again.”
    This sounds like “least response time” load balancing, not round robin.. Is that tunable in some way?

    You talked about drawbacks of DNS-based load balancing where you have traffic still going to expired records.
    But ELB did not fix this problem. Your ping.chartbeat.net is pointing to some CNAME which is then expanded into multiple IPs. So the same broken DNS servers will still send traffic to now-dead post-scaledown ELB IPs. The only difference now is that you don’t actually see that traffic at all.

    It’s cool that ELB is working for you guys, but to me it looks like you have chosen convenience and would rather not have visibility into all the same problems you had before.

    For such a critical part of your infrastructure you gave up visibility and control in favor of simplicity.. :/

    It doesn’t seem all that more complicated to point your domain to a bunch of Elastic IPs which would sit on load-balancer servers, and then those servers would BALANCE the load to your web servers based on whatever metrics you wanted. The difference between this and ELB is that you would actually see what’s happening on the very edge of your infrastructure and have control over it. With ELB the best you can do is submit tickets to Amazon and hope they know what they are doing and will understand what’s going on (rather than just blow you off to close the ticket).

    • jlintz

      Mxx,

      “Isn’t it because you are now measuring timeouts between your servers and ELB, and not timeouts to browsers, which are now invisible to you? Seems like rather than fixing this problem(if it’s possible to fix(?)) you swept it under the rug…”

      Some of the timeouts can be related to servers being overloaded, as well as clients on slow/unstable connections. We actually still saw timeouts pre-keepalive with the servers behind the ELB, and enabling keepalive reduced them further. I don’t believe there’s a way to completely eliminate these, but the timeouts are also related to downstream proxying, not just between the servers and the ELB. It’s definitely possible that a portion of those timeouts have shifted to the ELB itself, which we unfortunately don’t have insight into.

      “Again, what exactly are you measuring there? Time to the last byte? To a browser? To ELB? With such huge drop in response time it seems like you are measuring time to your ELB and not what your users actually see…”

      This is $request_time being reported by Nginx. So that’s the time Nginx spent processing the request.

      “It is my understanding that when it comes to “keepalive” load balancers still maintain a single connection _per visitor_. What you described sounds more like request multiplexing, which afaik is supported by very few products(Zeus/Stingray and F5) and ELB is not one of them.”

      I was referring to HTTP KeepAlive. The ELB supports this behavior.

      “This sounds like “least response time” load balancing, not round robin.. Is that tunable in some way?”

      Unfortunately I don’t have more insight into this behavior nor does there seem to be a way to tune it =(

      “You talked about drawbacks of DNS-based load balancing where you have traffic still going to expired records.But ELB did not fix this problem. Your ping.chartbeat.net is pointing to some CNAME which is then expanded into multiple IPs. So the same broken DNS servers will still send traffic to now-dead post-scaledown ELB IPs. The only difference now is that you don’t actually see that traffic at all.”

      This is certainly a good point. From talking to AWS, the ELB IPs are kept around for a bit afterwards and still send traffic to your ELB. They will also recycle the IPs if you are scaling up and down. If they do eventually release the IPs back into the ELB pool and clients are still sending traffic to them, then it’s certainly an unavoidable issue, but we still feel like we are in a better place than previously with regard to this.

      The move was more than just simplicity, we now have better failure handling of our individual servers and were able to reduce our server count by 20% from the switch. We definitely lose insight with the ELB in front but the gains outweighed that for us.

      Appreciate the feedback

      • Mxx

        This is $request_time being reported by Nginx. So that’s the time Nginx spent processing the request.

        I don’t have much 1st hand experience with Nginx, but reading its docs it sounds like this parameter is similar to Apache log’s %D, which is essentially “time to LAST byte”. It seems like with ELB in the picture you are now not exactly measuring the same things. If before your number was in ~200ms range, it was a real number of how long actual browsers took to download your content. That included every client, even the ones on horribly bad connections. I’m sure you know that “average” number here is very misleading. I’m sure if you were to look at a bucket/histogram of your delivery times you’d have a much more accurate picture. You probably have a spike at 0ms(errors), something similar to a bell curve and then a long tail which skews your results. I’m sure your 95th% or even 99th% look much better.

        With ELB in the picture you are now measuring how long it takes your servers to throw content into ELB, which is probably local or very close to your servers. But you are now NOT measuring how long it takes ELB to deliver the same content to your clients. Realistically speaking, unless ELB does something magical, it’s probably not much different from your previous delivery times (hopefully not worse).

        From talking to AWS, the ELB IPs are kept around for a bit afterwards and still send traffic to your ELB. They will also recycle the IPs if you are scaling up and down. If they do eventually release the IPs back into the ELB pool, and clients are still sending to that traffic, then it’s certainly an unavoidable issue but we still feel like we are in a better place than previously with regards to this.

        You could’ve allocated 10(arbitrary number) ElasticIPs for your load balancer tier. At peak usage times you would have 10 separate LB servers with 1 IP on each. At lower usage periods you could consolidate those ElasticIPs back to your ‘steady state’ LB servers. That way you won’t be in a situation where stupid DNS servers keep sending traffic to non-existing IPs. You’ll have a fixed set of IPs that always respond to traffic, yet a variable number of LB servers to save on cost.

        • https://coderwall.com/team/sonru mikhailov

          I don’t see a real reason to switch to “just a black box” without any option to tune and optimise it. Nginx can be configured with a range of IPs (Elastic or not, it doesn’t matter), and Nginx marks a dead node as failed and doesn’t send any further traffic to it. As for DNS balancing, it looks like an anti-pattern and should be avoided unless it’s really needed.

        • https://coderwall.com/team/sonru mikhailov

          we do use New Relic and gather some timers via a special header when proxying:

          proxy_set_header X-Queue-Start "t=${msec}000";

        • jlintz

          Mxx,

          Sorry didn’t get back to you sooner on this. I was digging into this more, and you are correct. The ELB latency is measuring the time between the servers and the ELB. I’ll be posting a correction shortly. In comparing some external monitoring, we did still see an improvement in response time, but the numbers we compared in the article were wrong. Thanks for bringing this to our attention.

          • Mxx

            I look forward to learn more. 🙂

    • https://coderwall.com/team/sonru mikhailov

      the “Part 1” was amazing, fantastic research, but after all you learned I don’t see the reason to go away and use “one more magic cloud box”. I had a talk with an AWS engineer, and he said that each their cloud service is EC2 + appropriate Open Source tool. I guess they have a general stack configuration without anything specific like @jlintz:disqus investigated in the previous post.

      • jmason

        ‘each their cloud service is EC2 + appropriate Open Source tool’

        just to respond to this — speaking as an ex-employee of Amazon, this is most definitely not accurate.

    • Vitaly Karasik

      Thank you for interesting articles!
      Did you review Route 53 as a Dyn alternative for LB?

  • https://coderwall.com/team/sonru mikhailov

    “In order to work around this issue, you can utilize Elastic IPs and homegrown scripts to move the Elastic IP between instances when it detects a failure.”

    ELB is a micro EC2 instance; you can “pre-warm” it, but that only means making it “medium”, that’s it. I contacted AWS engineers, and that’s the only magic that happens. No magic, read again please. A homegrown solution, Nginx upstream with max_fails, gets the job done.

    “Keepalives work by allowing multiple requests over a single connection”

    You are talking about multiplexing (a shared HTTP connection). Keep-alive is another thing (a shared transport layer TCP connection after SYN-ACK). http://en.wikipedia.org/wiki/HTTP_persistent_connection
    Multiplexing works at the SPDY level only, so you have to get back to Nginx as a LB.

    You are also talking about “initcwnd” but forget about “initrwnd”, which is important as well.

    • jlintz

      ELB scales beyond medium sized instances, we’ve verified this with AWS.

      I was referring to HTTP Keep-alive, sorry if that wasn’t clear in the post.

      Good point about initrwnd, I’ll see if I can gather a list of potential updates for this and make edits. Thanks!

      • https://coderwall.com/team/sonru mikhailov

        hey, the part 1 was amazing, really! But the part 2 needs to be updated a bit.

        “In cases where users are loading many objects off your site, this can greatly reduce latency by removing the overhead of having to re-establish a connection for each object you are loading off the site.”

        Right, that refers to HTTP persistent connections, which allow you to share a connection, but in HTTP 1.1 all connections are persistent unless declared otherwise. The real CPU cost savings come from the SSL session cache and multiplexing with SPDY. That brings real changes to serving data.

        • jlintz

          Sorry, I’m not sure what you’re suggesting here to be updated. We’re not using SPDY and our SSL traffic is under 1% of our total traffic. HTTP keepalives certainly improve SSL performance, but I don't think it's inaccurate for us to say that CPU is reduced from also reducing the number of connections opening and closing.

          • https://coderwall.com/team/sonru mikhailov

            Eventually the number of HTTPS clients will grow; even your homepage (chartbeat.com) is HTTPS only.

      • https://coderwall.com/team/sonru mikhailov

        How do you enable OCSP stapling, forward secrecy, or optimised SSL ciphers on the ELB side?
        And what about TCP/IP stack optimisation? It is not so useful if you can’t get root access to the ELB instance.

        • Mxx

          They don’t. ELBs SSL support is rather weak. See https://www.ssllabs.com/ssltest/analyze.html?d=ping.chartbeat.net

          • https://coderwall.com/team/sonru mikhailov

            oh, crazy! It’s unbelievable! B rating is completely unacceptable.

          • jlintz

            Looks like AWS was listening , they’ve announced improvements to their SSL support today, including support for Forward Secrecy http://bit.ly/1maCD0M

          • Mxx

            Yup, looks like you already applied new policies and qualys ssl labs scan looks much better now. 🙂

          • https://coderwall.com/team/sonru mikhailov

            Very cool! 🙂 But it’s not enough…
            OCSP stapling and Next Protocol Negotiation are both impossible to adjust.

          • jlintz

            For some reason mikhailov, I have a feeling if AWS gave you root on the ELBs, it still wouldn’t be enough 😉

          • https://coderwall.com/team/sonru mikhailov

            ah, of course, only best or nothing! 🙂

          • Mxx

            @jlintz:disqus And now you’ll be able to see even more accurate timing. 🙂 http://aws.typepad.com/aws/2014/03/access-logs-for-elastic-load-balancers.html

        • jlintz

          As Mxx already stated, you cannot. Obviously we will never have root access to the ELB, so we are bound to the optimizations that Amazon has in place. I did not claim that the ELB is the end-all solution to the problem of load balancing in AWS, but it met and solved most of our needs. There are many features the ELB does not support, but listing all those features extends beyond the scope of this post.

          • https://coderwall.com/team/sonru mikhailov

            sorry, I don’t understand why you did such a great investigation of TCP/IP stack optimisation (part 1) if you decided to go with ELB.

            “we are bound to the optimizations that Amazon has in place.”
            I don’t see TCP/IP optimisation and efficient TCP Congestion Window size here.

            “But how would you handle a situation where your traffic was too high for one host? You could launch multiple instances of your home grown solution but you would then need to handle balancing the traffic between these instances.”

            Yes, this is the most basic LB setup: an upstream connection with an IP range. If one node goes down, traffic stops going there. You can adjust the settings for your needs.

    • Mxx

      Multiplexing works on SPDY level only

      Not exactly. SPDY takes it to the fullest extent; however Netscaler, Zeus and F5 (and maybe others I’m not aware of) LBs support pipelining requests from different clients.
      If you have 100 different clients doing 10 requests each, ‘regular’ LBs will still open 100 different connections. LBs that support http multiplexing can pipeline it down into 50 or even 10 different connections. You don’t want to restrict it too much because of h-o-l-b.
      SPDY LBs can do it in 1 connection without holb problem.
      …At least that’s what I’ve learned while looking into our loadbalancing issues.