MemCachier Blog
Announcements, New Features, and How Tos
This guide shows how to create a simple Django 2.1 application on PythonAnywhere and then add Memcache to alleviate a performance bottleneck.
We’ll walk you through creating the application from start to finish, but you can view the finished product source code here.
In this tutorial you will learn how to create a simple Flask 1.0 application on PythonAnywhere and then add Memcache to alleviate a performance bottleneck.
We’ll walk you through creating the application from start to finish, but you can view the finished product source code here.
PythonAnywhere, a popular platform to deploy Python apps, released their new image, called earlgrey, which is based on Ubuntu 16.04. This new image finally includes libmemcached with SASL support. This means you can now use pylibmc to connect to cloud Memcache providers such as MemCachier.
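As a quick illustration of what this unlocks, the sketch below connects pylibmc to a SASL-protected MemCachier cache. The server address and credentials are placeholders; MemCachier shows the real values on your cache’s dashboard, and we assume here that you have stored them in MEMCACHIER_* environment variables.

```python
import os
import pylibmc

# Placeholder configuration: set these environment variables to the server,
# username, and password shown on your MemCachier dashboard.
mc = pylibmc.Client(
    servers=os.environ["MEMCACHIER_SERVERS"].split(","),
    username=os.environ["MEMCACHIER_USERNAME"],
    password=os.environ["MEMCACHIER_PASSWORD"],
    binary=True,  # SASL authentication requires the binary protocol
    behaviors={"tcp_nodelay": True, "ketama": True},
)

mc.set("greeting", "hello from earlgrey")
print(mc.get("greeting"))  # -> "hello from earlgrey"
```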
Caching is a technique used in a wide range of contexts to increase the performance of storage systems. For example, modern microprocessors all have fast cache memory built into them, so that repeated accesses to the same memory addresses come from the fast cache storage, rather than from the slower main memory. In a similar way, operating systems maintain a cache of disk buffers in main memory, so that repeated reads from the same disk blocks come from the cached data rather than needing to be read from disk, which is much slower than memory.
The same kinds of considerations apply to caching for web and mobile applications with Memcache. In this case, it’s common to cache the results of database queries, partial page renders, or the results of other application-specific computations.
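In practice, most of this boils down to the “cache-aside” pattern: check the cache first and fall back to the slow path only on a miss. The sketch below assumes a configured memcache client `mc` (such as pylibmc) and a hypothetical `fetch_user_from_db` query; the key name and expiry are illustrative.

```python
def get_user(mc, user_id):
    """Return a user record, serving repeated requests from the cache."""
    key = f"user:{user_id}"
    user = mc.get(key)                       # fast path: cache hit
    if user is None:
        user = fetch_user_from_db(user_id)   # slow path: hypothetical database query
        mc.set(key, user, time=300)          # keep the result for 5 minutes
    return user
```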
In any caching application, there are two questions that have to be answered. First, “What should I cache?” and second, “How big a cache do I need?”.
In many cases, the answer to the first question is determined by the web framework you use. For example, Django, Laravel, Spring Boot and others all have integrated caching systems that simplify the caching of database query results and HTML page renders. These frameworks usually also expose a lower-level API for doing application-specific custom caching.
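Django illustrates both levels: the cache_page decorator caches a whole rendered view, while django.core.cache exposes get and set for custom values. A rough sketch, with an illustrative view and a stand-in for the expensive work:

```python
from django.core.cache import cache
from django.http import HttpResponse
from django.views.decorators.cache import cache_page

# High level: cache this view's rendered response for 5 minutes.
@cache_page(60 * 5)
def homepage(request):
    return HttpResponse("rendered page")

# Low level: cache an application-specific value by key.
def expensive_value():
    value = cache.get("stats:total")
    if value is None:
        value = 42  # stand-in for an expensive computation
        cache.set("stats:total", value, timeout=600)  # cache for 10 minutes
    return value
```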
However, the second question is trickier. At MemCachier, we get questions about cache sizing from customers all the time. Fortunately, there is a simple and systematic way to go about choosing a cache size, which we’re going to describe in this article.
Memcache is a technology that improves the performance and scalability of web apps and mobile app backends. It can alleviate bottlenecks such as slow database queries or high CPU usage. This is in contrast to horizontal scaling, where all resources are multiplied at once, which can easily lead to overprovisioning a particular resource such as network bandwidth. Memcache helps you scale by relieving the resource under pressure and is thus a perfect addition to your scaling toolbox for optimizing resource consumption.
When it comes to adding Memcache to your applications you have several options. You can, for example, set up your own Memcached server. However, if you are a developer who loves building apps and is wary of setting up and managing your own Memcached server, chances are you have stumbled upon ElastiCache. This might seem like a better option, since you only need to tell ElastiCache which instances you want and how many, and it will set up a Memcached cluster for you.
Unfortunately, with ElastiCache you’re still stuck dealing directly with instances. There should be an easier way! Enter MemCachier, a SaaS offering for managed Memcache. With just a click of a button you get a cache in any size you want. Simplicity, however, is just the tip of the iceberg in terms of the benefits a SaaS offering can provide. At MemCachier, we wanted not only to make Memcache simpler, but also to make it better.
We’ll walk through how to create a simple Gin Gonic application, and how to deploy it using Amazon Elastic Beanstalk. Once the application is set up and deployed, we’ll explore ways that using Memcache can provide a solution to some common performance bottlenecks you might come across.
We’ll walk you through creating the application from start to finish, but you can view the finished product source code here.
In this tutorial we’ll learn how to create a simple Spring Boot 2 application (based on the Spring Framework 5), deploy it to Pivotal Web Services, then add Memcache to alleviate a performance bottleneck.
This post is out of date. Instead, read Deploy an Express.js application on AWS Elastic Beanstalk and scale it with Memcached.
In this guide, we’ll explore how to create a simple Express 4 application, deploy it using Amazon Elastic Beanstalk, then add Memcache to alleviate a performance bottleneck.
In this tutorial we’re going to go a little beyond the usual “getting started” Django tutorials and look at some of the things you might want to do to deploy applications to production.
If you’re using Memcache in Java, chances are you’re using the SpyMemcached client. It has been the most popular Java client for years. Unfortunately, at MemCachier we have received a lot of customer reports about problems with SpyMemcached. For this reason we now recommend using XMemcached with Java.