How best to scale a PHP application?



@KonoMi

Software environment:
1. PHP (7.2) application on Laravel (6.0): backend API
2. Vue.js: frontend
3. Postgres (10.9)
4. Redis (5.0.7)
5. ClickHouse
6. RabbitMQ (3.8.4) (for web sockets)
7. five Go applications (1.13)
8. Nginx
9. Docker applications

System hardware:
1. 8 vCPUs
2. 32 GB RAM
3. 500 GB SSD
4. Ubuntu 18.04.1 LTS
5. AWS Lightsail

Dependencies:
1. Redis and RabbitMQ are used by both the Go and PHP applications.
2. The Go apps use ClickHouse.
3. PHP uses Postgres and also calls the Go applications via gRPC.
4. Nginx fronts only PHP and Vue.

Let me describe the situation a little.
There is a website written in PHP, plus several microservices in Go. At peak load the site withstands about 250 active users (the site's specifics imply active page navigation and many requests). There is enough RAM, but not enough CPU. At the peak the load is typically split like this (%CPU): PHP 400%, RabbitMQ 100%, Go 25%, Postgres 100%, ClickHouse 75%; the rest is small stuff. Now the question is how to scale up to 1000 active users.

I'm not very good at DevOps (I suppose experts figured this out long ago), but the situation obliges me to deal with it. What is the best way to scale the application in the current situation? My first idea is to take an identical image, or build an EC2 instance on AWS with more suitable parameters (more vCPUs, less RAM), put only PHP there, and enjoy life. But I'm afraid that the dependencies (request execution time) that become external could slow the application down a lot. The second idea is simply to move the current setup to a more powerful EC2 machine, but then the monthly bill becomes quite large.
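If I go the first route, the PHP instance would talk to Postgres and Redis over the private network, which in Laravel is purely a configuration change. A minimal sketch of the relevant excerpt of config/database.php; the private IPs below are made up for illustration:

    <?php
    // config/database.php (excerpt) on the PHP-only instance.
    // The 10.0.x.x addresses are hypothetical private IPs; in practice
    // they would come from .env (DB_HOST, REDIS_HOST, ...).
    return [
        'connections' => [
            'pgsql' => [
                'driver'   => 'pgsql',
                'host'     => env('DB_HOST', '10.0.1.20'), // Postgres instance
                'port'     => env('DB_PORT', '5432'),
                'database' => env('DB_DATABASE', 'app'),
                'username' => env('DB_USERNAME', 'app'),
                'password' => env('DB_PASSWORD', ''),
            ],
        ],
        'redis' => [
            'client' => 'phpredis',
            'default' => [
                'host' => env('REDIS_HOST', '10.0.1.30'), // Redis instance
                'port' => env('REDIS_PORT', 6379),
            ],
        ],
    ];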

Few details, focused on question:
— «1000 active users» means 1000 users within the last 5 minutes. In terms of rps, at the moment about 75 requests per second can be processed at the peak; I would like to reach ~300 rps.
— how much will the interaction time between PHP and Postgres/Redis/RabbitMQ grow if I move them out to separate instances? (see the sketch below)
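To put a number on that second point before committing to anything, a rough probe like the following can be run from the PHP machine against the would-be remote services (the hosts and credentials here are hypothetical; it requires pdo_pgsql and phpredis):

    <?php
    // latency_probe.php -- a rough sketch measuring the per-query network
    // round trip to Postgres and Redis from the PHP instance.
    $pdo = new PDO('pgsql:host=10.0.1.20;dbname=app', 'app', 'secret');
    $redis = new Redis();
    $redis->connect('10.0.1.30', 6379);

    $n = 1000;

    $t = microtime(true);
    for ($i = 0; $i < $n; $i++) {
        $pdo->query('SELECT 1')->fetch();
    }
    printf("Postgres round trip: %.3f ms\n", (microtime(true) - $t) / $n * 1000);

    $t = microtime(true);
    for ($i = 0; $i < $n; $i++) {
        $redis->ping();
    }
    printf("Redis round trip: %.3f ms\n", (microtime(true) - $t) / $n * 1000);

Within a single AWS availability zone the round trip is typically a fraction of a millisecond, so the real cost depends mostly on how many queries a page makes.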

Thank you to everyone who responds!


Solutions to the question: 1



@FanatPHP

You need to make up your mind first: do you actually want to «scale out», that is, split the processing between different servers, or are you not even sure yet whether adding more instances is a good thing at all?

Generally, scaling a web application out does indeed mean providing dedicated hardware for every major service involved. For your setup I'd make it four different servers:
— a PHP/Nginx instance
— a database backend dedicated to Postgres
— a column-store database backend to host ClickHouse
— a processing backend to host RabbitMQ and the Go microservices

The good thing is that each of these can in turn be easily scaled out further by simply adding more instances of the same kind (though I'm not sure how that is done for ClickHouse).
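One prerequisite for the «just add more PHP instances» part: the PHP nodes must be stateless, meaning sessions and cache live in Redis rather than on the local disk. In Laravel that is a driver setting; an excerpt of config/session.php as a sketch, assuming the default Redis connection:

    <?php
    // config/session.php (excerpt) -- a sketch. With sessions stored in
    // Redis, any PHP instance behind the load balancer can serve any
    // request, so instances can be added without sticky sessions.
    return [
        'driver'     => env('SESSION_DRIVER', 'redis'),
        'connection' => env('SESSION_CONNECTION', 'default'),
        'lifetime'   => env('SESSION_LIFETIME', 120), // minutes
    ];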

But I have a feeling that you are probably confusing scaling with plain performance optimization, and you need to start with the latter. Simply checking the CPU load is too rough a measurement; you need proper profiling to pinpoint the actual bottlenecks.
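Short of a full profiler (Xdebug, Blackfire and the like), even a crude timing middleware will show which endpoints burn the CPU. A hypothetical Laravel sketch, with an arbitrary 200 ms threshold:

    <?php
    // app/Http/Middleware/LogSlowRequests.php -- a hypothetical sketch,
    // not a substitute for a real profiler: it only flags the endpoints
    // worth profiling properly afterwards.
    namespace App\Http\Middleware;

    use Closure;
    use Illuminate\Support\Facades\Log;

    class LogSlowRequests
    {
        public function handle($request, Closure $next)
        {
            $start = microtime(true);
            $response = $next($request);
            $elapsedMs = (microtime(true) - $start) * 1000;

            if ($elapsedMs > 200) {
                Log::warning(sprintf('Slow request: %s %s took %.1f ms',
                    $request->method(), $request->path(), $elapsedMs));
            }

            return $response;
        }
    }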

From such incomplete data I can only guess that the Laravel configuration is not quite optimal: 75 rps shouldn't be a problem for such hardware. Make sure all production settings are configured properly, with all caching turned on (php artisan config:cache, php artisan route:cache), the Composer autoloader optimized, and so on. Switching to PHP 7.4 with preloading could also help.
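On preloading: it is a PHP 7.4+ feature enabled via opcache.preload in php.ini. A minimal sketch of a preload script follows; the paths are assumptions, and real preload scripts usually skip files whose dependencies cannot be resolved at startup:

    <?php
    // preload.php -- a minimal sketch for PHP >= 7.4. Enabled in php.ini:
    //   opcache.preload=/var/www/app/preload.php
    //   opcache.preload_user=www-data
    // Compiles the framework sources once at server start, so workers
    // do not recompile them on every request.
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator('/var/www/app/vendor/laravel/framework/src')
    );

    foreach ($files as $file) {
        if ($file->isFile() && $file->getExtension() === 'php') {
            opcache_compile_file($file->getPathname());
        }
    }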





Answers to the question: 1



@oxidmod

I would start by profiling your PHP application to find the bottlenecks.
