Scaling a Laravel Application with Memcache on DigitalOcean
In this tutorial, we’re going to use DigitalOcean’s pre-built LEMP stack image to deploy a simple Laravel application and demonstrate how to use Memcache to improve application performance. (LEMP is Linux + Nginx + MySQL + PHP.)
Memcache is a fast caching layer that sits behind web applications and mobile application backends. It can be used to cache the results of database queries, page or page fragment renders, or the results of any other long-running computation that your application might need to reuse for multiple client requests. Using Memcache can help speed up responses to client requests and can help with horizontal scaling by reducing the load on database backends and application servers.
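The usual pattern here is often called “cache-aside”: look for a result in the cache; on a miss, compute it and store it so the next request finds it. Here is a toy sketch of the idea in shell, where a temporary file stands in for Memcache and a sleep stands in for a slow database query (all names here are illustrative, not part of the tutorial’s code):

```shell
#!/bin/sh
# Toy cache-aside sketch: a file stands in for Memcache,
# and `sleep` stands in for an expensive database query.
CACHE=/tmp/all-tasks-cache

get_tasks() {
  if [ -f "$CACHE" ]; then
    cat "$CACHE"                            # cache hit: no expensive work
  else
    sleep 1                                 # "expensive" work on a cache miss
    printf 'task1\ntask2\n' | tee "$CACHE"  # store the result for next time
  fi
}

get_tasks   # first call: cache miss, runs the slow "query"
get_tasks   # second call: cache hit, answered from the cache
```

The second call returns immediately because the result is already stored; this is exactly the saving Memcache provides at scale, with the cache shared across many application servers.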
Prerequisites
The prerequisites for this tutorial are minimal, since we’re going to do everything on a single DigitalOcean droplet that we’ll set up from scratch.
You will need a DigitalOcean account and some familiarity with PHP and Laravel (I used version 5.8 for this tutorial), but that’s about all. (You’ll also need an SSH key to set up the droplet.)
Let’s get started.
Setting up a DigitalOcean droplet
In this section, we’ll do the initial setup of our DigitalOcean droplet.
Launch droplet from LEMP Ubuntu 18.04 image
From the DigitalOcean dashboard, launch a single droplet using the following configuration choices:
- Image Under the “Choose an image” heading on the droplet launch page, choose the “Marketplace” tab and select the “LEMP on 18.04” image. DigitalOcean provides a range of images in their marketplace that have common software stacks pre-installed. This can save a lot of time: instead of installing Nginx, MySQL and PHP yourself (and setting them up), you launch a droplet from the LEMP image and you have everything there ready to go.
- Plan Choose the smallest droplet size for this tutorial: you don’t need anything larger. (Currently the smallest droplet size is a 1 GB “Standard” droplet, which costs $5/month.)
- Datacenter Choose a data center that’s geographically close to where you are. For this tutorial, it doesn’t matter which one.
- SSH key If you’ve used DigitalOcean before, you probably already have an SSH key set up that you can choose to use in this part of the droplet setup page. If not, choose “New SSH Key”, paste your public key into the dialog that pops up and give the SSH key a name. Once the droplet is created, this public key will end up in the /root/.ssh/authorized_keys file on the droplet, allowing you to SSH into the droplet as the root user. (We’ll set up an application user once the droplet is running so that we don’t need to use the root user.)
- Other options Leave all the other options at their default values, give your droplet a name (lemp-test or something like that), and hit the “Create” button.
SSH into droplet as root
It takes a minute or two to create your droplet. Once the droplet is there, you can view it in the DigitalOcean dashboard. From the droplet view, you can copy the IP address of the droplet. You can then SSH into the droplet with:
ssh root@<ip-address>
where <ip-address> is the droplet’s IP address from the dashboard view. (If you’re using an SSH key other than your normal one, you might need to add the key to your SSH key management agent, or use the -i flag to ssh to tell it which key to use.)
If everything is set up right, you’ll end up with a root shell prompt on your new droplet.
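If you expect to SSH into the droplet often, it can also be convenient to add a host entry to your local ~/.ssh/config so that you don’t need to remember the IP address. A hypothetical entry (the host alias and key path are up to you):

```text
Host lemp-test
    HostName <ip-address>
    User root
    IdentityFile ~/.ssh/id_rsa
```

With this in place, ssh lemp-test is all you need to type.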
In this tutorial, we’re just going to refer to our droplet using its IP address. If we were doing all this for real, we would set up a DNS name to point to the droplet. (We’re also not going to set up HTTPS, which is something that you should do for all production systems.)
Set up demo user
We don’t want to use the root user for further steps, so we’ll create a user called demo. There are a few steps to this, to make things convenient in what follows. As the root user on your droplet, do the following:
First, create the demo user account (the options here are mostly to fill in default values for some uninteresting fields in the password file):
adduser --disabled-password --quiet --gecos '' demo
Then copy the SSH keys from root to the demo user so that we can use the same SSH keys to log in as demo:
cp -r /root/.ssh /home/demo
chown -R demo:demo /home/demo/.ssh
Allow the demo user to use sudo, and modify the sudo configuration to allow members of the sudo group to use sudo without supplying a password:
gpasswd -a demo sudo
sed -i -e '/%sudo/s/) ALL/) NOPASSWD: ALL/' /etc/sudoers
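If you’d like to see what that sed edit does before pointing it at the real /etc/sudoers, you can run it against a throwaway stand-in file first (the file name and contents here are just for illustration; the line mimics the default %sudo entry on Ubuntu 18.04):

```shell
# Preview the sudoers edit on a throwaway copy (illustrative file name)
printf '%%sudo\tALL=(ALL:ALL) ALL\n' > /tmp/sudoers-demo
sed -i -e '/%sudo/s/) ALL/) NOPASSWD: ALL/' /tmp/sudoers-demo
cat /tmp/sudoers-demo   # the %sudo line should now end with NOPASSWD: ALL
```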
At this point, the demo user is set up, so log out of the droplet and log back in as demo:
ssh demo@<ip-address>
From here on, all commands should be run as the demo user on the droplet.
Setting up the application
The next step is to set up our basic Laravel application. We’re going to use a simple task list based on the Laravel tutorial. The code for this is in a GitHub repo, with one branch for a version with no caching and another with database query caching.
OS package installation
We need a few extra operating system packages beyond those installed by default in the DigitalOcean LEMP image. We can install these by doing:
sudo apt update
sudo apt install curl php-cli php-mbstring php-xml git unzip
Install Composer
We’ll manage PHP dependencies using the Composer tool. We can install this by doing the following:
cd
curl -sS https://getcomposer.org/installer -o composer-setup.php
sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer
Once this is done, just running the composer command should give you Composer’s help message.
Set up to run application from Nginx
We’re going to be running our Laravel code using PHP-FPM behind Nginx. This is already set up in the LEMP stack, but we need to do a few things to make Nginx play nicely with our application code.
First we’ll create a directory to put our application in:
sudo mkdir /var/www/html/demo
sudo chown demo:demo /var/www/html/demo
cd /var/www/html/demo
Next, we need to change some of the configuration settings for Nginx to make it default to looking in our application directory to serve our application views, and to use an index.php file without clients needing to add it to the URL. The following sed commands make the necessary changes (and then we force Nginx to reload its configuration data). If you prefer to edit the /etc/nginx/sites-available/digitalocean file by hand, go ahead and do that:
sudo sed -i -e 's/root \/var\/www\/html/root \/var\/www\/html\/demo\/public/' /etc/nginx/sites-available/digitalocean
sudo sed -i -e 's/try_files .*/try_files \$uri \$uri\/ \/index.php\?\$query_string;/' /etc/nginx/sites-available/digitalocean
sudo nginx -s reload
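For reference, after those two substitutions the affected lines in /etc/nginx/sites-available/digitalocean should look something like this (the rest of the server block is left untouched; this is only to check the result, not something to paste in):

```nginx
root /var/www/html/demo/public;
try_files $uri $uri/ /index.php?$query_string;
```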
Set up application code
We don’t have any application code in place yet, so in the /var/www/html/demo directory, we next clone the tutorial repository and switch to the branch that doesn’t have any caching set up:
git clone https://github.com/memcachier/examples-laravel-lemp-do.git .
git checkout no-caching
Now we install the PHP dependencies using Composer, and set some permissions to allow the PHP-FPM process to write logs and other data (this step is important and nothing will work if you don’t do it!):
composer install
sudo chgrp -R www-data storage bootstrap/cache
sudo chmod -R ug+rwx storage bootstrap/cache
Laravel reads a number of configuration settings from a .env file that we need to set up. Replace <ip-address> in the following with the IP address of your droplet:
echo APP_ENV=production > .env
echo APP_DEBUG=false >> .env
echo APP_KEY=$(php artisan key:generate --show) >> .env
echo APP_URL=http://<ip-address> >> .env
echo DB_HOST=127.0.0.1 >> .env
echo DB_DATABASE=demo >> .env
echo DB_USERNAME=demo >> .env
echo DB_PASSWORD=demopassword >> .env
All the possible configuration settings are explained in the Laravel documentation, but what we have here is a minimal sort of configuration to allow us to connect to a local MySQL server. We’ll create the database next.
Create database
The LEMP stack includes a pre-configured MySQL installation, so we just need to create a database for our application. The DigitalOcean droplet setup code puts the root password for the MySQL server into a file on the droplet (/root/.digitalocean_password), so we can cut and paste the password from there to log into the MySQL server:
sudo cat /root/.digitalocean_password
sudo mysql -u root -p
Now paste in the MySQL root password. The following commands are then run from the MySQL prompt:
mysql> create database demo default character set utf8 collate utf8_unicode_ci;
mysql> grant all on demo.* to 'demo'@'localhost' identified by 'demopassword';
mysql> flush privileges;
mysql> exit;
At this point we have a database called demo accessible by a MySQL user called demo using the password demopassword, all of which matches what we wrote in the .env configuration file.
We now use the Artisan tool to run the database migrations to set up the application’s model tables:
php artisan migrate --force
(We need to say --force because we’re setting this up as a production application.) Note that Artisan also picks up the database name and credentials from the .env file.
Test basic Laravel app
At this point, the basic Laravel application should be working. If you visit http://<ip-address>/ in your browser (replace <ip-address> with your droplet’s IP address), you should see a (very simple) task manager application.
If you follow all the steps above, this should work, and you should be able to add and delete tasks. If things don’t work, you can look in the Laravel logs (in /var/www/html/demo/storage/logs) or the Nginx logs (in /var/log/nginx) to see what’s going on.
In the source for the application, you might want to take a look at:
- resources/views/tasks.blade.php: the view for the task list;
- routes/web.php: contains all the controller code for the task list, with routes to list all tasks, create a new task and delete an existing task;
- database/migrations/2019_..._create_tasks_table.php: the database migration to create the task table.
Next, we’ll extend the controller code to make use of a Memcache cache to cache the results of our task list database query.
Caching with Laravel and MemCachier
Now that we have a working application, it’s time to explore some caching options. In this section, we’re going to set up a MemCachier cache and add the relevant configuration to our Laravel application to use the cache. Then we’ll demonstrate how to cache the results of database queries and how to invalidate cached values at the appropriate time.
Cache configuration
The code demonstrating database caching is on the db-caching branch in the repository, so in the /var/www/html/demo directory, switch to this branch with:
git checkout db-caching
We now need to install the PHP requirements to work with Memcache:
sudo apt install php-memcached
composer install
Installing the php-memcached package changes the PHP configuration under the /etc/php directory to include the shared object files needed for the PHP interpreter to talk to Memcache. We need to restart the PHP-FPM process to pick up this change in configuration:
sudo systemctl restart php7.2-fpm
This step is important: caching will not work without doing it, and you will see lots of errors in the Laravel logs about not being able to find things called Memcached::something.
We can now create a cache on MemCachier: create an account, then add a free development cache (25 MB in size, plenty big enough for experimentation), choosing DigitalOcean as the provider and the same region as the droplet you created earlier.
Your MemCachier dashboard will show you the server name and credentials to use to connect to your cache. Add these to your .env file:
echo MEMCACHIER_SERVERS=... >> .env
echo MEMCACHIER_USERNAME=... >> .env
echo MEMCACHIER_PASSWORD=... >> .env
(Fill in the ... from the values provided by MemCachier.)
The Laravel configuration changes needed to enable Memcache support are in the config/cache.php file in the repository. The configuration options used there are good choices for operation with MemCachier.
Caching database queries
One of the most common uses for a caching layer like Memcache is to cache the results of expensive database queries that are needed by multiple client requests. For example, a news site might show the last ten stories on its front page, and all clients viewing the front page need to see the same set of stories. In this case, it makes sense to cache the result of the query used to make the story list so that it can be reused by multiple client requests without needing to touch the database again.
Let’s do something like this for the task list. Although the database query there is very simple and low-cost, we can demonstrate how caching works.
The changes to make this work are all in the routes/web.php file. First, we use the Cache::rememberForever function in the GET route to handle the caching:
Route::get('/', function () {
    $tasks = Cache::rememberForever('all_tasks', function () {
        return Task::orderBy('created_at', 'asc')->get();
    });
    $stats = Cache::getMemcached()->getStats();
    return view('tasks', [
        'tasks' => $tasks,
        'stats' => array_pop($stats)
    ]);
});
The rememberForever function takes as arguments a cache key to identify the information we’re caching, and a function to generate the data we want to cache. Here, we use “all_tasks” as the cache key, and use the same database query as in the non-caching code to get the query results. When the rememberForever function is called, it checks the cache to see if there is data stored under the all_tasks key. If there is data there, it’s returned immediately (so no database query is needed). If the data is not found in the cache, the database query is run and the result is stored in the cache. This means that the database query is only run once.
To make the behaviour of the caching code visible, we include some statistics in the task list view, which we retrieve and pass into the view. This allows us to count how often data is found in the cache (a “cache hit”) and how often the database query has to be run (a “cache miss”).
Of course, if we change the list of tasks by creating a new one or deleting an existing one, the cached task list data becomes invalid. We thus need a way to clear the cached data whenever these changes happen. We can do this using the Cache::forget function, which we add to the POST and DELETE routes:
Cache::forget('all_tasks');
This need to invalidate cached data is often the greatest difficulty in adding a caching layer to an existing application. Here the invalidation logic is simple (invalidate any time the task list has an entry added or removed), but in more complicated cases it can require some thought.
Using the code on the db-caching branch, we can view the task list and add and remove tasks as before. The behaviour of the application is substantially unchanged, but behind the scenes, far fewer database queries are happening. Any time we reload the task list page, the task list is served directly from the cached data, not the database. The effect of adding or deleting tasks on the cache behaviour can be seen by looking at the “Set commands”, “Get hits” and “Get misses” counters at the bottom of the page.
Clean up and conclusions
The only cleanup needed after this tutorial is to destroy the DigitalOcean droplet you created at the beginning (go to the “Destroy” tab on the droplet view in the DigitalOcean dashboard).
Here, we’ve demonstrated only the most common use of caching: avoiding rerunning expensive database queries. However, caching can be used in any situation where you have an expensive computation whose result you might want to reuse.
For example, a second common use of a caching layer is to cache fragments of rendered pages. As an example, think of a microblogging site like Twitter, where individual posts need to be rendered to HTML and displayed in different settings on many different pages (different users’ timelines, search results, results of hashtag queries, and so on). Depending on the exact performance of the rendering code, it may make sense to render each individual post once, cache the renders, then serve the cached renders to make the composite pages that users see. (This can be achieved using the laravel-partialcache package.)
Laravel’s caching API is flexible enough to support both these common use cases and more unusual applications.