Back in 2014 I faced an interesting challenge in the way we were handling millions of concurrent events. Those events are aggregated in real time and then fed into a downstream process that ultimately drives our final decisioning (that's the TL;DR version, more stuff happens).
Because we had introduced a new product that behaved differently, none of this was working the way we expected, and on top of that requests were increasing significantly day by day. We were running out of time.
Perfect storm to say the least.
The funny thing is that I had the solution right in front of my nose: Lua. We had already been running Redis in production for a few years, and I knew Redis worked nicely, but this was our first time using its embedded Lua. Long story short: it was a great decision.
It will be three years this December and the honeymoon is not over yet; as a matter of fact, I recently replaced some sections of our decisioning code with Lua and Redis. The results were phenomenal: latency dropped by roughly 50%, overall resource usage went down (EC2 instances and internal network traffic), throughput doubled, and our Apdex went from 0.5 to 0.95.
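To give a feel for what embedded Lua in Redis looks like, here is a minimal sketch of an event-aggregation script of the kind described above. The key names and arguments are hypothetical, not taken from the actual system; the point is that several Redis commands run atomically in a single round trip.

```lua
-- Hypothetical aggregation script, executed inside Redis via EVAL.
-- KEYS[1]: a hash holding per-event-type counters for a time bucket
-- KEYS[2]: a set of event types seen in that bucket
-- ARGV[1]: the event type, ARGV[2]: TTL for the bucket in seconds
local count = redis.call('HINCRBY', KEYS[1], ARGV[1], 1)
redis.call('SADD', KEYS[2], ARGV[1])
redis.call('EXPIRE', KEYS[1], tonumber(ARGV[2]))
-- the caller gets back the updated counter for this event type
return count
```

You would invoke it with something like `redis-cli EVAL "$(cat aggregate.lua)" 2 bucket:counters bucket:types web_click 120`. Because the whole script runs atomically on the server, there is no race between the increment and the bookkeeping calls.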
Developing in Lua is not complex: the language is simple and the documentation is easy to follow. A few important things to remember:
Regarding tools, everything is available for Mac (via Homebrew); in the end everything has to be installed using LuaRocks, and this seems to be the case on Linux as well.
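On macOS that setup can be sketched as the following commands; package names may differ slightly on Linux distributions, where LuaRocks usually comes from the distro's package manager instead of Homebrew.

```shell
# Install the interpreter and the package manager (macOS / Homebrew)
brew install lua luarocks

# Everything else comes from LuaRocks
luarocks install busted   # test framework
luarocks install luacov   # code coverage
```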
For testing, busted is the way to go, and for code coverage, luacov. The trick for making sure your Lua code works correctly is to write your tests in Lua (obviously) and then mimic the internal Redis calls through a function, basically using the work done by Andrew Newdigate as a template.
Word of advice: not all Redis commands are available in that harness function, so if you happen to need a command it lacks, like exists, you will need to write a tiny wrapper around it; it shouldn't be that difficult.
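The harness idea can be sketched like this; the names (`fake_redis`, the in-memory `data` table) are illustrative, not from a real library, and only a handful of commands are mocked. The `EXISTS` branch is exactly the kind of tiny wrapper the advice above refers to.

```lua
-- Hypothetical in-memory stand-in for redis.call, so scripts can be
-- exercised from busted without a running Redis server.
local fake_redis = { data = {} }

function fake_redis.call(cmd, key, ...)
  local args = { ... }
  cmd = string.upper(cmd)
  if cmd == 'SET' then
    fake_redis.data[key] = args[1]
    return 'OK'
  elseif cmd == 'GET' then
    return fake_redis.data[key]
  elseif cmd == 'EXISTS' then
    -- a command missing from the original harness: adding it is just
    -- one more branch returning what Redis would (1 or 0)
    return fake_redis.data[key] ~= nil and 1 or 0
  end
  error('unsupported command: ' .. cmd)
end

-- expose the mock under the global name scripts expect
redis = { call = fake_redis.call }
```

A busted spec can then load the script under test and assert on what it wrote through `redis.call`, with each unsupported command failing loudly until you add its branch.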