Surprised by the title? Well, this is a tour of how we cracked the scalability jinx, from handling a meagre 40 records per second to 500 records per second. Beware: most of the problems we faced were straightforward, so experienced people might find this superfluous.
Contents
* 1.0 Where were we?
  * 1.1 Memory hits the sky
  * 1.2 Low processing rate
  * 1.3 Data loss :-(
  * 1.4 Mysql pulls us down
  * 1.5 Slow Web Client
* 2.0 Road to Nirvana
  * 2.1 Controlling memory!
  * 2.2 Streamlining processing rate
  * 2.3 What data loss uh-uh?
  * 2.4 Tuning SQL Queries
  * 2.5 Tuning database schema
  * 2.6 Mysql helps us forge ahead!
  * 2.7 Faster Web Client
* 3.0 Bottom line
Where were we?
Initially we had a system which could scale only up to 40 records/sec. I can even recollect the discussion about "what should be the ideal rate of records?". Finally we decided that 40/sec was the ideal rate for a single firewall. So when we went out, we needed to support at least 3 firewalls; hence we decided that 120/sec would be the ideal rate. Based on the data from our competitor(s), we came to the conclusion that they could support around 240/sec. We thought that was OK, as it was our first release: all the competitors talked about the number of firewalls they supported, but never about the rate.
Memory hits the sky
Our memory was always hitting the sky, even at 512MB! (OutOfMemory exception) We blamed cewolf's in-memory caching of the generated images, but we could not escape for long! Whether or not the client was connected, we used to hit the sky in a couple of days, 3-4 days flat at most! Interestingly, this was reproducible when we sent data at very high rates (for then) of around 50/sec. You guessed it right: an unlimited buffer which grows until it hits the roof.
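To make the flaw concrete, here is a minimal Java sketch of the pattern that bit us: a producer appending to an unbounded in-memory queue faster than the consumer drains it. The class, rates and record type are illustrative assumptions, not our actual code.

    import java.util.LinkedList;
    import java.util.Queue;

    // Illustrative only: an unbounded buffer between a fast producer
    // (incoming records) and a slower consumer (the database writer).
    public class UnboundedBuffer {
        private final Queue<String> records = new LinkedList<String>();

        // Called ~50 times/sec by the collector thread.
        public synchronized void add(String record) {
            records.add(record); // no upper bound -- the heap is the limit
        }

        // Drained at only ~40 times/sec by the database writer thread.
        public synchronized String poll() {
            return records.poll();
        }
    }

At a net growth of 10 records/sec, the queue gains roughly 864,000 entries a day, and a few days later the JVM dies with an OutOfMemoryError: exactly the 3-4 day pattern we saw.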
Low processing rate
We were processing records at the rate of 40/sec. We were using bulk updates of data object(s), but it did not give the expected speed! Because of this we started to hoard data in memory, which in turn resulted in hoarding memory!
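For context, a "bulk insert via the database driver" here means the usual JDBC batching, along these lines (the table and column names are assumptions for illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    // Classic JDBC batch insert: one round trip per batch instead of per row.
    public class BulkInserter {
        public static void insertBatch(Connection conn, String[] lines) throws Exception {
            conn.setAutoCommit(false);
            PreparedStatement ps =
                    conn.prepareStatement("INSERT INTO records (raw_line) VALUES (?)");
            try {
                for (String line : lines) {
                    ps.setString(1, line);
                    ps.addBatch();     // queue the row on the client side
                }
                ps.executeBatch();     // send the whole batch at once
                conn.commit();
            } finally {
                ps.close();
            }
        }
    }

Even batched like this, every row still passes through the driver and the SQL layer, which is why it could not keep up.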
Data Loss :-(
At very high speeds we used to miss many packets. At first we seemed to have little data loss, but buffering everything resulted in a memory hog. When we tweaked the buffer down to a limited size, we started having a steady data loss of about 20% at very high rates.
Mysql pulls us down
We were facing a tough time when we imported a log file of about 140MB. Mysql started to hog, the machine started crawling, and sometimes it even stopped responding. Above all, we started getting deadlock(s) and transaction timeout(s), which eventually reduced the responsiveness of the system.
Slow Web Client
Here again we blamed the number of graphs we showed on a page as the bottleneck, ignoring the fact that there were many other factors pulling the system down. A page with 6-8 graphs and tables used to take 30 seconds to load after 4 days at the Internet Data Center.
Road To Nirvana
Controlling Memory!
We tried to put a limit on the buffer size of 10,000, but it did not last for long. The major flaw in the design was that we assumed a buffer of around 10,000 would suffice, i.e. that we would have processed the records before the buffer of 10,000 was reached. In line with the principle "if something can go wrong, it will go wrong!", it went wrong. We started losing data. Subsequently we decided to go with flat-file based caching, wherein the data was dumped into a flat file and then loaded into the database using "load data infile". This was many times faster than a bulk insert via the database driver. You might also want to check out some possible optimizations with load data infile. This fixed our problem of the ever-increasing buffer of raw records.
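The gist of the flat-file approach, as a sketch (the file path, table name and field delimiter are assumptions, not our production values):

    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.Statement;

    // Dump buffered records to a flat file, then hand the whole file to
    // MySQL in one shot with LOAD DATA INFILE.
    public class FlatFileLoader {
        public static void dumpAndLoad(Connection conn, String[] records) throws Exception {
            String path = "/tmp/records.dump";
            PrintWriter out = new PrintWriter(new FileWriter(path));
            try {
                for (String record : records) {
                    out.println(record); // one tab-separated row per line
                }
            } finally {
                out.close();
            }
            Statement stmt = conn.createStatement();
            try {
                // LOCAL makes the client stream the file to the server.
                stmt.execute("LOAD DATA LOCAL INFILE '" + path + "'"
                        + " INTO TABLE records FIELDS TERMINATED BY '\\t'");
            } finally {
                stmt.close();
            }
        }
    }

Note that depending on the MySQL and Connector/J versions, LOCAL loading may have to be enabled explicitly (for example via the allowLoadLocalInfile connection property).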
The second problem we faced was the growth of cewolf's in-memory caching mechanism. By default it used "TransientSessionStorage", which caches the image objects in memory; there seemed to be some problem in cleaning up the objects, even after the references were lost! So we wrote a small "FileStorage" implementation which stores the image objects in local files, to be served as and when a request comes in. Moreover, we also implemented a cleanup mechanism to clean up stale images (images older than 10 minutes).
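In spirit, our "FileStorage" looked something like the sketch below. This is a generic reconstruction: it does not reproduce cewolf's actual storage interface, and the directory, method names and id scheme are assumptions.

    import java.io.DataInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;

    // File-backed image cache: chart images live on disk instead of in the
    // session, and a sweeper deletes anything older than 10 minutes.
    public class FileStorage {
        private static final long MAX_AGE_MS = 10L * 60 * 1000; // 10 minutes
        private final File dir = new File("/tmp/chart-cache");

        public FileStorage() {
            dir.mkdirs();
        }

        public void store(String id, byte[] image) throws Exception {
            FileOutputStream out = new FileOutputStream(new File(dir, id));
            try {
                out.write(image); // the image lives on disk, not in the heap
            } finally {
                out.close();
            }
        }

        public byte[] fetch(String id) throws Exception {
            File f = new File(dir, id);
            byte[] data = new byte[(int) f.length()];
            DataInputStream in = new DataInputStream(new FileInputStream(f));
            try {
                in.readFully(data);
            } finally {
                in.close();
            }
            return data;
        }

        // Run periodically (e.g. from a java.util.Timer) to drop stale images.
        public void cleanup() {
            long now = System.currentTimeMillis();
            File[] files = dir.listFiles();
            if (files == null) return;
            for (File f : files) {
                if (now - f.lastModified() > MAX_AGE_MS) {
                    f.delete();
                }
            }
        }
    }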