Architecture of LionsBase

Overview

LionsBase is a hosted application composed of three main components:

  1. A web server: this component receives HTTP requests from the clients (typically a web browser). Some requests, typically those for static content such as pictures, are handled directly by the web server. Requests for web pages are forwarded to an application server for further processing. LionsBase uses Nginx as its web server.
  2. An application server: this component is responsible for the business logic. It dynamically builds the web pages that the web server then serves to the clients. PHP-FPM is our application server of choice.
  3. A database server: permanent data, such as member records, is stored in a database. LionsBase uses MySQL as its database.

The components are independent and can be hosted on separate servers.
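
The request flow between these three components can be pictured with a minimal sketch. The code below is purely illustrative: LionsBase actually relies on Nginx, PHP-FPM and MySQL, and the function and path names used here are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Request:
        path: str

    def web_server(request: Request) -> str:
        # Static content (pictures, CSS, JS) is answered directly by the web server.
        if request.path.startswith("/static/"):
            return f"serve file {request.path}"
        # Requests for web pages are forwarded to the application server.
        return application_server(request)

    def application_server(request: Request) -> str:
        # The application server builds the page from data held by the database server.
        member = database_server("SELECT name FROM members LIMIT 1")
        return f"rendered page for {request.path} using {member}"

    def database_server(query: str) -> str:
        # Permanent data (e.g. member records) lives in the database.
        return "member record"

    print(web_server(Request("/static/logo.png")))
    print(web_server(Request("/club/events")))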

Architecture

LionsBase runs on the cloud computing platform provided by Amazon Web Services (AWS). Currently, LionsBase is deployed on multiple servers located in two “availability zones” of AWS. A web server and an application server are hosted on the same machine for better performance, while the database server runs on a dedicated machine.

Figure 1: Technical architecture overview

As shown in Figure 1, the architecture can easily cope with a growing number of clubs, first by scaling up and then by scaling out:

Scaling Up (aka Vertical Scalability)
As the infrastructure is virtualized, the capacity of the servers (i.e., virtual machines) is increased as soon as the load or the number of clubs grows (by adding more disk space, CPU or RAM).
Scaling Out (aka Horizontal Scalability)
Besides scaling up, new servers can be launched and fully configured within minutes. Requests for the clubs are then routed according to their domain name using DNS. We rely on a globally distributed anycast DNS service.

Scaling the web and application layer is not difficult. Scaling the database, however, is more challenging. Currently, our architecture supports one hot replica of the master server and several read replicas (across all availability zones). Scaling further is simply a matter of deploying a second, similar setup. The architecture can therefore easily follow the growth of our customer base.
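
The split between one writable master and several read replicas can be illustrated with a small routing sketch. This is a hypothetical example, not the actual LionsBase code: the host names are invented, and the real routing happens at the database layer.

    import random

    PRIMARY = "db-primary.internal"          # hypothetical master host
    READ_REPLICAS = ["db-replica-a.internal", "db-replica-b.internal"]

    def pick_server(sql: str) -> str:
        """Send writes to the master, spread reads over the replicas."""
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in {"INSERT", "UPDATE", "DELETE"}:
            return PRIMARY
        return random.choice(READ_REPLICAS)

    print(pick_server("SELECT * FROM members"))
    print(pick_server("UPDATE members SET email = 'x@example.org' WHERE uid = 1"))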

Security

Our infrastructure follows best practices in terms of security. Every server is kept up to date, fully patched and secured (access control, firewall, etc.). To further increase the security level, a reverse proxy analyzes every request coming to our infrastructure:

  • A Web Application Firewall (WAF) monitors application-level requests and prevents various attacks such as SQL injection, XSS, bots, scrapers, DDoS, etc.
  • The proxy puts static resources (images, CSS files, JS files, etc.) in a globally distributed cache (i.e., a CDN) to improve the websites' performance.
  • The proxy also takes care of SSL management.

The reverse proxy is handled by a third-party provider (mostly to protect our infrastructure against DDoS).
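
To give a rough idea of what the WAF layer does conceptually, here is a deliberately simplified request filter. The real WAF is a managed third-party service with far more sophisticated rules; the patterns below are toy examples only.

    import re

    BLOCK_PATTERNS = [
        re.compile(r"union\s+select", re.IGNORECASE),  # crude SQL injection signature
        re.compile(r"<script\b", re.IGNORECASE),       # crude XSS signature
    ]

    def is_suspicious(query_string: str) -> bool:
        """Return True if the query string matches any blocking pattern."""
        return any(pattern.search(query_string) for pattern in BLOCK_PATTERNS)

    print(is_suspicious("tx_news[id]=42"))                          # False -> forwarded
    print(is_suspicious("q=1' UNION SELECT password FROM users"))   # True  -> blocked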

DNS

LionsBase, being a hosted application, needs to be in control of the DNS settings of your domain name in order to provide security (SSL, WAF, anti-DDoS, etc.) and a transparent experience for the end users. To this end, you only have to change the DNS servers used by your domain at your registrar.
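
After changing the name servers at the registrar, you can verify that the delegation took effect. The snippet below is one possible check and assumes the third-party dnspython package is installed; the expected name server names are placeholders, not the actual LionsBase values.

    import dns.resolver  # third-party package "dnspython"

    # Placeholder values; replace with the name servers communicated to you.
    EXPECTED_NS = {"ns1.example-dns-provider.net.", "ns2.example-dns-provider.net."}

    def check_delegation(domain: str) -> bool:
        """Print the domain's current NS records and compare them to the expected set."""
        answer = dns.resolver.resolve(domain, "NS")
        actual = {record.target.to_text() for record in answer}
        print(f"{domain} delegates to: {', '.join(sorted(actual))}")
        return EXPECTED_NS.issubset(actual)

    check_delegation("example.org")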

Monitoring

Each server is monitored every 5 minutes and alerts are raised in case of trouble.
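
As a rough idea of what such a check looks like, here is a minimal sketch of a 5-minute health check. The actual monitoring is performed by a dedicated system; the URL and alerting below are placeholders.

    import time
    import urllib.request

    CHECK_INTERVAL = 5 * 60                    # seconds between checks
    HEALTH_URL = "https://www.example.org/"    # placeholder URL

    def alert(message: str) -> None:
        # In production this would page an operator (e-mail, SMS, ...).
        print(f"ALERT: {message}")

    def check_once(url: str) -> None:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                if response.status != 200:
                    alert(f"{url} returned HTTP {response.status}")
        except Exception as exc:  # timeouts, DNS failures, HTTP errors, ...
            alert(f"{url} unreachable: {exc}")

    while True:
        check_once(HEALTH_URL)
        time.sleep(CHECK_INTERVAL)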

Backup

Each server uses highly available, highly reliable block-level storage volumes that can be attached to a running server. A snapshot of every volume is taken every day and stored off-site (in Amazon S3). Snapshots, which allow a failed server to be restored very quickly, are ideal for efficient disaster recovery. Besides snapshots, regular incremental backups are made daily and stored encrypted off-site (in Rackspace Cloud Files). A 30-day history is kept for backups and a 10-day history for snapshots.
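
The retention policy can be expressed as a small rule. The sketch below is hypothetical: the actual pruning is handled by the backup tooling on the respective storage back-ends, and the item names and dates are invented.

    from datetime import datetime, timedelta

    # Keep daily backups for 30 days and volume snapshots for 10 days.
    RETENTION = {"backup": timedelta(days=30), "snapshot": timedelta(days=10)}

    def to_delete(items, now):
        """Return the names of items older than their kind's retention window."""
        return [name for name, kind, created in items if now - created > RETENTION[kind]]

    now = datetime(2024, 6, 1)
    items = [
        ("db-backup-2024-04-20", "backup", datetime(2024, 4, 20)),
        ("db-backup-2024-05-25", "backup", datetime(2024, 5, 25)),
        ("volume-snap-2024-05-15", "snapshot", datetime(2024, 5, 15)),
    ]
    print(to_delete(items, now))  # ['db-backup-2024-04-20', 'volume-snap-2024-05-15']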

Exchange of Information

Figure 2 depicts the general workflow when a user agrees to share her information (please read the section Exchange of information in the mobile application for details).

Figure 2: Overview of the exchange of information between multi districts

It is really important to understand that the LionsBase data associated with a given multi district is completely separate from that of any other multi district. In short, each multi district runs its own dedicated LionsBase database, code (application) and, naturally, assets (documents, images, photos, …).

When members choose to share their information, a flag in the source database records that choice, together with the list of fields that may be shared. A daily task exports the corresponding data as a flat file and puts it in an “import” directory of the other multi district sandboxes. There, another task iterates over the available external sources of data and imports them into a separate space of its own LionsBase database.
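
A rough sketch of the export side of this exchange is shown below. It is a hypothetical illustration, not the actual LionsBase task: field names, the file layout and the directory path are invented, and the real export format may differ.

    import csv
    from pathlib import Path

    # Invented sample data: only members flagged as sharing are exported,
    # and only the fields they agreed to share.
    members = [
        {"uid": 1, "name": "A. Example", "email": "a@example.org",
         "shares_profile": True, "shared_fields": ["name", "email"]},
        {"uid": 2, "name": "B. Example", "email": "b@example.org",
         "shares_profile": False, "shared_fields": []},
    ]

    def export_shared(members, import_dir: Path) -> Path:
        """Write one row per consenting member, restricted to the shared fields."""
        import_dir.mkdir(parents=True, exist_ok=True)
        target = import_dir / "shared-members.csv"
        with target.open("w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=["uid", "name", "email"])
            writer.writeheader()
            for member in members:
                if member["shares_profile"]:
                    writer.writerow({f: member[f] for f in ["uid"] + member["shared_fields"]})
        return target

    path = export_shared(members, Path("/tmp/other-md/import"))
    print(path.read_text())

On the target side, the daily import task reads such files and keeps the imported records in a separate space of its database, so data from other multi districts never mixes with the local records.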

Note

Members who change their mind and no longer share their profile will have their data wiped from the other multi districts after the next daily import of data.
