Using multiple databases in Laravel for all queries

As we continue to grow our hosted Invoice Ninja platform we've been looking into both vertical scaling (bigger servers) and horizontal scaling (more servers).

Initially we added more web servers behind a load balancer. This was relatively easy to set up and, as an added bonus, provided higher availability. Over time, however, we've started to see our single database server experience increased memory usage, particularly after releasing our mobile apps.

Our plan was to simply divide our users across multiple databases: one database would function as the lookup server, while the others would store the actual data.
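Under this scheme the connections might be declared as follows in config/database.php. This is only a sketch of the idea: the connection names, database names, and env keys are illustrative, not Invoice Ninja's actual configuration.

```php
// config/database.php (sketch — names and env keys are illustrative)
'connections' => [

    // Holds only the mapping from a user to the server storing their data.
    'lookup' => [
        'driver'   => 'mysql',
        'host'     => env('DB_LOOKUP_HOST'),
        'database' => 'lookup',
        'username' => env('DB_USERNAME'),
        'password' => env('DB_PASSWORD'),
    ],

    // The servers that hold the actual user data.
    'db1' => [
        'driver'   => 'mysql',
        'host'     => env('DB1_HOST'),
        'database' => 'ninja',
        'username' => env('DB_USERNAME'),
        'password' => env('DB_PASSWORD'),
    ],

    // 'db2', 'db3', ... follow the same shape.
],
```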

Searching Google for ‘multiple databases in Laravel’ mainly returns examples of either specifying the connection for a particular query:

DB::connection(...)->select(...)

Or defining it in the model by setting:

protected $connection = ...;

However, if you’d like to use a different connection for all queries you can use:

config(['database.default' => ...]);
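A minimal sketch of switching the default connection mid-request (the connection names are illustrative; the config() call, DB::setDefaultConnection(), and DB::purge() are standard Laravel APIs):

```php
use Illuminate\Support\Facades\DB;

// Point the default connection at the user's data server.
// 'db2' is an illustrative name from config/database.php.
config(['database.default' => 'db2']);

// Laravel also offers a helper that does the same thing:
DB::setDefaultConnection('db2');

// If queries already ran on the old default, disconnect it so a
// stale connection isn't reused (pass the old connection's name).
DB::purge('db1');

// From here on, Eloquent models without an explicit $connection,
// and plain DB:: calls, all hit 'db2'.
```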

For our implementation we created a middleware class to look up the database server for the provided credentials. The server is stored in the cache, so the lookup is only required once every two hours; this is particularly helpful for the API, where there’s no session.
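A middleware along these lines could look like the sketch below. The class, table, and column names are illustrative, not Invoice Ninja's code; note that on Laravel versions of this era Cache::remember() takes its TTL in minutes (newer releases use seconds).

```php
<?php
// app/Http/Middleware/DatabaseLookup.php (sketch — names are illustrative)

namespace App\Http\Middleware;

use Closure;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

class DatabaseLookup
{
    public function handle($request, Closure $next)
    {
        $key = 'db-server:' . $request->user()->id;

        // Resolve the user's database server, hitting the lookup
        // database at most once every two hours (120 minutes).
        $server = Cache::remember($key, 120, function () use ($request) {
            return DB::connection('lookup')
                ->table('lookup_users')
                ->where('user_id', $request->user()->id)
                ->value('db_server');
        });

        // All subsequent queries in this request use the user's server.
        config(['database.default' => $server]);

        return $next($request);
    }
}
```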

To keep the lookup tables up to date we’re using Eloquent created and deleted model events. Finally, to ensure our queued jobs use the right database, we’re using Queue::before(...) to set it.
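The two pieces above might be wired up in a service provider's boot() method roughly as follows. This is a sketch under assumptions: the lookup_users table, the db_server column, and carrying the server name in the job payload are all illustrative choices, not necessarily how Invoice Ninja does it.

```php
use App\User;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Queue;
use Illuminate\Queue\Events\JobProcessing;

// Keep the lookup table in sync with user creation and deletion.
User::created(function ($user) {
    DB::connection('lookup')->table('lookup_users')->insert([
        'user_id'   => $user->id,
        'db_server' => config('database.default'),
    ]);
});

User::deleted(function ($user) {
    DB::connection('lookup')->table('lookup_users')
        ->where('user_id', $user->id)
        ->delete();
});

// Before each queued job runs, restore the database it was queued
// against. Here we assume the server name was stored on the job's
// payload when it was dispatched.
Queue::before(function (JobProcessing $event) {
    $server = $event->job->payload()['db_server'] ?? null;

    if ($server) {
        config(['database.default' => $server]);
    }
});
```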

3 responses to “Using multiple databases in Laravel for all queries”

  1. Rob says :

    Why can’t you split it into multiple, independent installations with independent load balancers/web/DB servers? In case of the data centre outage, only some of your clients won’t be able to access their data rather than all of them.

    • Hillel says :

      Do you mean create a separate database for each customer? We considered it but it could become a challenge managing all of the databases, for example running migrations.

      We’re setting up master/slave database replication to provide fail-over in case of a server failure. In the past I’ve had bad experiences with replication, here’s hoping it goes better this time 🙂

      • Rob says :

Not necessarily DB per client but let’s say DB per 20-30 clients or so. Personally I have never had problems with postgresql when it comes to replication. Some of our databases were around 800GB and streaming replication worked perfectly. Automatic master/slave switchover is also a breeze. Last time I tried MySQL replication was 4-5 years ago and it was never as good as postgres but I have no idea how robust it is today.
