MemCachier vs AWS ElastiCache

Memcache is a technology that improves the performance and scalability of web apps and mobile app backends. It alleviates specific bottlenecks such as slow database queries or high CPU usage. This is in contrast to horizontal scaling, where all resources are multiplied at once and you can easily end up overprovisioning a resource you don't need more of, such as network bandwidth. Memcache helps you scale by relieving the resource that is actually under pressure, which makes it a perfect addition to your scaling toolbox.

When it comes to adding Memcache to your applications you have several options. You can, for example, set up your own Memcached server. However, if you are a developer who loves building apps and is wary of setting up and managing your own Memcached server, chances are you have stumbled upon ElastiCache. It may seem like a better option: you only need to tell ElastiCache which instance type you want and how many instances, and it will set up a Memcached cluster for you.

Unfortunately, with ElastiCache you are still stuck dealing directly with instances. There should be an easier way! Enter MemCachier, a SaaS offering for managed Memcache. With just a click of a button you get a cache of whatever size you want. Simplicity, however, is just the tip of the iceberg in terms of the benefits a SaaS offering can provide. At MemCachier, we wanted not only to make Memcache simpler, but also to make it better.
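To give a sense of the simplicity, connecting an application to a cache is typically just a few lines of client configuration. Here is a minimal sketch using Python with the pylibmc client; the server address and credentials are placeholders you would replace with the ones shown for your cache.

```python
# Minimal sketch: connect to a SASL-protected Memcache server with pylibmc.
# The server address, username, and password below are placeholders.
import pylibmc

cache = pylibmc.Client(
    ["mcX.dev.ec2.memcachier.com:11211"],  # placeholder server address
    binary=True,                           # SASL auth requires the binary protocol
    username="placeholder-username",
    password="placeholder-password",
)

cache.set("greeting", "hello", time=60)    # cache the value for 60 seconds
print(cache.get("greeting"))
```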

The benefits of MemCachier

Since our start in 2012, our Memcache service has been based on a custom-built architecture that provides some key benefits over traditional Memcached clusters such as ElastiCache.

Multiple proxy servers

This is a subtle but important advantage of MemCachier. In an ElastiCache cluster your cache is spread out over multiple servers, whereas all of MemCachier's proxy servers share a consistent view of the cache. If you have two nodes in your ElastiCache cluster, each sees just half of the cache. In a MemCachier cache with two proxy servers, each proxy can see and interact with the whole cache, as the sketch below illustrates.
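The following small Python sketch uses a naive hash-mod scheme (not any particular client's algorithm) to show how client-side sharding splits the key space: with two nodes, each node only ever holds roughly half of the keys.

```python
# Sketch of client-side sharding: each key is assigned to exactly one node,
# so a two-node cluster splits the key space roughly in half.
import hashlib
from collections import Counter

def pick_node(key, nodes):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]           # naive mapping, for illustration only

nodes = ["cache-node-a", "cache-node-b"]   # hypothetical node names
counts = Counter(pick_node(f"user:{i}", nodes) for i in range(10_000))
print(counts)  # each node ends up with roughly 5,000 of the 10,000 keys
```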

This matters because, in our experience serving thousands of customers, 99% of the time an application has trouble reaching a Memcached server it is due to a network issue. With MemCachier's proxies, the temporary unreachability of a particular server does not impact your application's performance: you get the same data from the fallback proxies until the network recovers.

You might be tempted to configure your client to fail over to another server in your cluster to avoid a loss of performance in such a situation. With a MemCachier cache this works perfectly and is the recommended setting. Attempting the same in an ElastiCache cluster, however, can result in an inconsistent cache that returns stale data. Imagine an ElastiCache cluster with servers A and B, where server A is temporarily unavailable due to a network issue. With failover enabled, you might update data on server B that originally lived on server A. Once server A recovers, reads for those keys go back to server A and return the stale, pre-update values.
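With pylibmc, for instance, failover is controlled through libmemcached behaviors. The snippet below is a sketch of the kind of settings that work well against a MemCachier cache; check your client's documentation for the exact behavior names and values it supports.

```python
# Sketch: enable failover in pylibmc via libmemcached behaviors.
# Against MemCachier's proxies this is safe; against a raw ElastiCache
# cluster it can lead to stale data once a failed node comes back.
import pylibmc

cache = pylibmc.Client(
    ["proxy1.example.memcachier.com:11211",   # placeholder addresses
     "proxy2.example.memcachier.com:11211"],
    binary=True,
    username="placeholder-username",
    password="placeholder-password",
    behaviors={
        "ketama": True,        # consistent hashing across servers
        "remove_failed": 1,    # drop a server after one failed request ...
        "retry_timeout": 1,    # ... and retry it after 1 second
        "dead_timeout": 60,    # treat it as dead for 60s if it keeps failing
    },
)
```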

24/7 monitoring

Running your own ElastiCache cluster can be smooth sailing until an instance fails, especially when it fails in the middle of the night. Dealing with failed instances is never fun, and setting up your own monitoring can be costly. With MemCachier you never have to worry about such issues because all our caches are monitored 24/7.

Seamless scaling

You can scale an ElastiCache cluster, but it is far from seamless. Each time you add or remove a server you lose a random portion of your cache. For example, if you add a third server to your cluster you lose a third of the data in the cache, because a third of the key space is remapped to the new server. With MemCachier you can add or remove memory from your cache and never lose your data.
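The fraction of the cache you lose follows directly from how clients map keys to servers. The sketch below builds a toy consistent-hashing ring (a simplified, hypothetical version of what Memcached clients do) and counts how many keys change owner when a third node joins a two-node cluster.

```python
# Toy consistent-hashing ring: adding a node remaps only the slice of the
# key space the new node takes over (roughly a third when going 2 -> 3).
import bisect
import hashlib

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def build_ring(servers, points_per_server=160):
    ring = sorted((_hash(f"{s}#{i}"), s)
                  for s in servers for i in range(points_per_server))
    return [h for h, _ in ring], [s for _, s in ring]

def owner(ring, key):
    hashes, servers = ring
    idx = bisect.bisect(hashes, _hash(key)) % len(hashes)
    return servers[idx]

keys = [f"user:{i}" for i in range(10_000)]
two = build_ring(["node-a", "node-b"])
three = build_ring(["node-a", "node-b", "node-c"])

moved = sum(owner(two, k) != owner(three, k) for k in keys)
print(f"{moved / len(keys):.0%} of keys now live on a different node")
```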

Seamless scaling not only lets you grow your cache with increasing demand without disruption, it also makes it much easier to find out how much memory your cache needs in the first place. The only reliable way to determine the right cache size is to gradually increase it over the span of several days and observe how the hit rate reacts; a hit rate above 80% is desirable. This experiment goes much more smoothly when scaling is seamless.
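The hit rate itself is easy to compute from Memcached's raw counters. A minimal sketch, assuming a stats dictionary like the one the stats command returns (the numbers below are made up):

```python
# Sketch: compute the cache hit rate from Memcached's counters and decide
# whether the cache is likely big enough.
def hit_rate(stats):
    hits = int(stats["get_hits"])
    misses = int(stats["get_misses"])
    total = hits + misses
    return hits / total if total else 0.0

stats = {"get_hits": "91230", "get_misses": "14875"}  # example values
rate = hit_rate(stats)
print(f"hit rate: {rate:.1%}")
if rate < 0.80:
    print("consider growing the cache and re-checking in a day or two")
```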

Free development cache

Any serious development team has a staging application to test features before they are deployed to production. In theory you could use an in-memory cache for your staging application, but ideally you want the staging setup to mirror production as closely as possible; any deviation might lead to surprise bugs that staging does not catch. With ElastiCache this means launching a separate cluster and, depending on how often you test on staging, launching a new one for each test run to avoid paying for a constantly running cluster. With MemCachier, you can create as many development caches as you want, and they are always free.
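A convenient pattern is to keep the cache code identical in staging and production and vary only the credentials through environment variables. The sketch below assumes the MEMCACHIER_SERVERS, MEMCACHIER_USERNAME, and MEMCACHIER_PASSWORD variables are set to the appropriate cache in each environment.

```python
# Sketch: identical cache setup for staging and production, differing only
# in the MEMCACHIER_* environment variables set for each environment.
import os
import pylibmc

cache = pylibmc.Client(
    os.environ["MEMCACHIER_SERVERS"].split(","),
    binary=True,
    username=os.environ["MEMCACHIER_USERNAME"],
    password=os.environ["MEMCACHIER_PASSWORD"],
)
```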

Simple analytics

The stats from a Memcached server can be a bit overwhelming: there are dozens of values, and it is unclear which ones matter and which can safely be ignored. MemCachier offers a dashboard for your cache that presents simple analytics. The centerpiece is a graph of your usage and hit rate, arguably the most important stats for understanding how your cache is performing.
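If you do look at the raw stats output, a handful of values already tell most of the story. Here is a sketch of paring a full stats dump down to memory usage, hit rate, and evictions; the field names are standard Memcached stats, and the example numbers are made up.

```python
# Sketch: reduce a full Memcached stats dump to the few values that matter
# for day-to-day monitoring.
def summarize(stats):
    hits = int(stats["get_hits"])
    misses = int(stats["get_misses"])
    return {
        "usage": int(stats["bytes"]) / int(stats["limit_maxbytes"]),
        "hit_rate": hits / max(hits + misses, 1),
        "evictions": int(stats["evictions"]),   # items pushed out before expiry
    }

print(summarize({
    "bytes": "83886080", "limit_maxbytes": "104857600",
    "get_hits": "91230", "get_misses": "14875", "evictions": "120",
}))
```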

Expert support

Memcache can be a complicated business: your application misbehaves, you suspect the cache is involved, but you are not sure. At MemCachier, our engineers are here to help. Whenever you contact our support, you get an answer from an engineer who is deeply familiar with Memcache in general and with MemCachier's architecture in particular.

So why use ElastiCache?

ElastiCache is a lower-level Memcache service, and as such it gives you more control at a lower price. If you know exactly what you need from Memcache and you already have a team monitoring your infrastructure, the cost savings might be worth the missing features. In most cases, however, time is money, and your time is probably better spent developing new features and a better application. It is also far more fun than managing your own Memcache cluster.