
An Unexpected Bug Cropped Up After New Hardware Was Installed

An unexpected bug cropped up after new hardware was installed in one of Amazon Web Services' Northern Virginia data centers, causing the more than 12-hour outage last week that brought down popular sites such as Reddit, Imgur, Airbnb and Salesforce.com's Heroku platform, according to a post-mortem issued by Amazon.

In response, AWS says it is refunding certain charges to customers affected by the outage, specifically those who had trouble accessing AWS application programming interfaces (APIs) during the height of the downtime event.

AWS says the latest outage was limited to a single availability zone in the US-East-1 region, but an overly aggressive throttling policy, which it has also vowed to fix, spread the impact into multiple zones for some customers.

The problem arose Oct. 22 from what AWS calls a "latent memory bug" that surfaced after a failed piece of hardware had been replaced in one of Amazon's data centers. The DNS update pointing to the replacement server did not propagate everywhere, so part of the system never recognized the new hardware. That set off a chain reaction inside AWS's Elastic Block Store (EBS) service and eventually spread to its Relational Database Service (RDS) and its Elastic Load Balancers (ELBs): reporting agents inside the EBS servers kept attempting to use the failed server that had been removed.

"Rather than gracefully deal with the failed connection,the reporting agent continued trying to contact the collection server in a way that slowly consumed system memory,"the post-mortem reads.It goes on to note that"our monitoring failed to alarm on this memory leak."

AWS says it's difficult to set accurate alarms for memory usage because the EBS system dynamically uses resources as needed, so memory usage fluctuates frequently. The system is designed to tolerate a degree of missing servers, but eventually the memory leak became so severe that it started impacting customer requests. From there, the issue snowballed: "the number of stuck volumes increased quickly," AWS reports.
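
The thresholding problem AWS describes can be made concrete with a small, hypothetical example: instead of alarming on any single high memory reading, which normal fluctuation would trip constantly, a monitor can alarm on sustained growth across a window of samples. This is a sketch of the general idea only, not AWS's actual monitoring.

```python
from collections import deque

# Hypothetical leak detector: alarm on a sustained upward trend in memory
# usage rather than on any one reading, so routine fluctuation is tolerated
# but a slow, continuous leak is eventually caught.
class LeakTrendAlarm:
    def __init__(self, window=60, min_growth_mb=1.0):
        self.samples = deque(maxlen=window)  # recent memory readings, in MB
        self.min_growth_mb = min_growth_mb   # average per-sample growth to alarm on

    def observe(self, used_mb: float) -> bool:
        self.samples.append(used_mb)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history to judge a trend yet
        avg_growth = (self.samples[-1] - self.samples[0]) / (len(self.samples) - 1)
        # Fires only if memory climbed steadily across the whole window.
        return avg_growth >= self.min_growth_mb
```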

AWS first reported a small issue at 10 a.m. PT, but within an hour said the issue was impacting a "large number of volumes" in the affected availability zone. This appears to be the point when major sites such as Reddit, Imgur, Airbnb and Salesforce.com's Heroku platform all went down. By 1:40 p.m. PT, AWS said, 60% of the impacted volumes had recovered, but AWS engineers were still baffled as to why.

"The large surge in failover and recovery activity in the cluster made it difficult for the team to identify the root cause of the event,"the report reads.Two hours later the team figured out the problem and restoration of the remaining impacted services continued until it was almost fully complete by 4:15 p.m.PT.

AWS vows not to get caught twice by the same bug. It has instituted new alarms to prevent this specific incident from happening again, and has also modified the broader EBS memory monitoring and alerting to detect when new hardware is not being accepted into the system. "We believe we can make adjustments to reduce the impact of any similar correlated failure or degradation of EBS servers within an Availability Zone," AWS says.

Gartner analyst Kyle Hilgendorf says it's slightly surprising that human error caused a DNS propagation issue, which led to much of an availability zone going down. "They're supposed to be deploying the best and brightest to handle these systems," he says. But accidents happen. The bigger flaw, he says, is that AWS did not have alerts in place to catch the issue earlier. "That's the damaging part," he says. "A week passed and no one noticed memory was continuously leaking. That was the unacceptable part of this."

So could it have been prevented?

AWS says that customers who heeded the company's advice to use multiple availability zones were, for the most part, able to tolerate the outage. But some customers, including at least one Network World reader (see the comments section of this story), reported that even with a multi-AZ architecture they still had problems moving workloads into healthy AZs. AWS says it mishandled this as well.
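
The multi-AZ advice AWS points to comes down to not placing all capacity in one zone. As a rough illustration only, written with today's boto3 SDK (which post-dates this outage), the hypothetical helper below spreads instances across every available zone in a region; the AMI ID, instance type and per-zone count are placeholders, not anything AWS prescribes.

```python
import boto3

# Illustrative multi-AZ placement: launch the same number of instances in
# each available zone so a single-AZ outage leaves the others running.
ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_across_zones(ami_id, instance_type="t3.micro", count_per_zone=1):
    zones = [
        z["ZoneName"]
        for z in ec2.describe_availability_zones(
            Filters=[{"Name": "state", "Values": ["available"]}]
        )["AvailabilityZones"]
    ]
    launched = []
    for zone in zones:
        # Pin each batch to a different zone rather than letting the
        # scheduler place everything wherever capacity happens to be.
        resp = ec2.run_instances(
            ImageId=ami_id,
            InstanceType=instance_type,
            MinCount=count_per_zone,
            MaxCount=count_per_zone,
            Placement={"AvailabilityZone": zone},
        )
        launched.extend(i["InstanceId"] for i in resp["Instances"])
    return launched
```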

The company uses a throttling system that prevents individual users from overwhelming the system by moving large workloads all at once. During the outage, AWS enabled a heightened throttling policy to keep the system stable. "Unfortunately, the throttling policy that was put in place was too aggressive," AWS admits. It says its policies have been changed so that throttling will not be as aggressive during future incidents.
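
For clients on the receiving end of such a throttling policy, the standard defensive pattern is to retry with exponential backoff and jitter rather than hammering an already stressed API. The sketch below, again written against the modern boto3 SDK with assumed retry limits, shows the idea; it is not something AWS's post-mortem prescribes.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

def describe_instances_with_backoff(max_attempts=6):
    """Call the EC2 API, backing off when requests are throttled."""
    for attempt in range(max_attempts):
        try:
            return ec2.describe_instances()
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code not in ("RequestLimitExceeded", "Throttling"):
                raise  # only retry throttling errors, not real failures
            # Exponential backoff with jitter: wait longer after each
            # throttled attempt instead of retrying immediately.
            time.sleep(min(2 ** attempt, 30) + random.random())
    raise RuntimeError("EC2 API still throttled after retries")
```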

AWS is making it up to customers, too. The company is issuing an automatic credit to any customers who were subject to its aggressive throttling policy between 12:06 and 2:33 p.m. on Oct. 22, automatically crediting their entire EC2, EBS and ELB usage during that period on their October bill.

Some have been concerned about the US-East-1 region in Northern Virginia, as it has been the site of three of the company's recent major outages. Hilgendorf says it is AWS's oldest, largest and cheapest region, so an issue there can affect a disproportionately large number of customers.

The latest outage is the third major one in two years, which means downtime events at AWS may be starting to add up, according to one analyst. While acknowledging that AWS is the "clear leader" in the infrastructure-as-a-service (IaaS) market, Technology Business Research's Jillian Mirandi says that if major outages keep happening at AWS, some of its biggest customers, like Netflix, Foursquare, Pinterest and Heroku, could start looking elsewhere. "If major companies such as these continue to experience outages, they will be tempted to move services onto competing IaaS products," she recently wrote.

Source: "Amazon Cites Cause of Recent Outage, Issues Refunds," http://www.computerworld.com/s/article/9233199/Amazon_cites_cause_of_recent_outage_issues_refunds