The Integration of Laravel with Swoole (Part 1)

In my previous article, Speed Up Laravel on Top of Swoole, on Laravel News, I gave you a big-picture view of why Laravel performs slowly in the traditional PHP lifecycle.

In this series of articles, I'm going to give you more details about how this package (swooletw/laravel-swoole) works, the difficulties I encountered during the integration, and the solutions I came up with for these cases.

Why Does Laravel Run Slowly?

This is the rough process PHP goes through when Laravel receives a request.
A large number of files need to be loaded for every single request in the Laravel framework. PHP has an interesting native function called get_included_files. If you call this function in your business logic, you will find something interesting.

Route::post('/entry', function (Request $request) {
    return count(get_included_files());
});

Tested with Laravel 5.6.3, 218 files are included for just one simple request. If you install other packages, the number goes even higher.
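You can observe the same effect outside Laravel with plain PHP. Here is a minimal sketch (the temporary file is fabricated for illustration) showing that every require adds an entry to get_included_files:

```php
<?php
// Count the files PHP has included so far in this script.
$before = count(get_included_files());

// Write a throwaway PHP file and require it, simulating one more
// framework file being loaded during a request.
$tmp = sys_get_temp_dir() . '/included_demo_' . uniqid() . '.php';
file_put_contents($tmp, "<?php return 42;\n");
$value = require $tmp;

$after = count(get_included_files());

// Each required file appears once in get_included_files(),
// so $after is exactly one higher than $before.
echo "before: $before, after: $after\n";

unlink($tmp);
```

In a full Laravel request the same mechanism repeats hundreds of times, once per framework and vendor file.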

Each PHP file also needs to be compiled into opcodes and then executed by the Zend Engine. Even if you enable OPcache to cache the compiled opcodes, you still can't avoid the I/O cost of loading files.
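For reference, OPcache behavior is controlled in php.ini. A typical configuration might look like the fragment below (the values are illustrative, not a recommendation from this article); note that even with the cache enabled, timestamp validation still touches the file system:

```ini
; Cache compiled opcodes in shared memory.
opcache.enable=1
opcache.memory_consumption=128
; With validate_timestamps=1 (the default), PHP still stats each
; cached file periodically to check for changes, so some
; file-system I/O remains even on cache hits.
opcache.validate_timestamps=1
opcache.revalidate_freq=2
```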

Also, Laravel's own lifecycle is complicated. Hundreds of classes are created for every request and then destroyed after the request is done.

Let's draw some simple conclusions about why Laravel works slowly in the traditional lifecycle:

  • A large number of files are required for every request.
  • Each file needs to be parsed and compiled.
  • Compiled results are destroyed after the request.
  • None of the created resources can be reused by other requests.
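To make the contrast concrete, here is a plain-PHP sketch of the two lifecycles (the App class and the request list are hypothetical stand-ins; constructing App represents the expensive bootstrap):

```php
<?php
// A stand-in for a framework application that is expensive to build.
class App
{
    public static int $bootCount = 0;

    public function __construct()
    {
        self::$bootCount++; // pretend this loads hundreds of files
    }

    public function handle(string $request): string
    {
        return "handled: $request";
    }
}

$requests = ['GET /a', 'GET /b', 'GET /c'];

// Traditional lifecycle: build and destroy the app for every request.
foreach ($requests as $r) {
    $app = new App();
    $app->handle($r);
    unset($app); // everything is thrown away after the response
}
$traditionalBoots = App::$bootCount;

// Persistent worker: boot once, then reuse the same app in memory.
App::$bootCount = 0;
$app = new App();
foreach ($requests as $r) {
    $app->handle($r);
}
$workerBoots = App::$bootCount;

echo "traditional: $traditionalBoots boots, worker: $workerBoots boot\n";
```

The second loop is exactly what a Swoole Worker process makes possible: the bootstrap cost is paid once per Worker instead of once per request.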

How Does Laravel Run on Swoole?

This is the main structure of Swoole. Basically, there are a few components you need to keep in mind:

  • Master Process: the original process created when you execute your PHP script. It forks the main Reactor and the Manager, and it is the root process of the whole application.
  • Main Reactor: the Reactor in Swoole is multi-threaded and fully asynchronous, implemented with epoll in the Linux kernel or kqueue on macOS. The Reactor is in charge of accepting connections and delivering requests to Worker processes. In simple terms, it works much like an Nginx server.
  • Manager: the Manager process forks multiple Worker processes. Whenever a Worker terminates, the Manager automatically forks a new one to keep the number of Workers constant.
  • Worker: this is the part you should really care about. All requests (your main logic) are processed in Worker processes.
  • Task Worker: similar to a Worker process, but dedicated to task handling. Workers can push tasks onto a task queue asynchronously, and Task Workers are in charge of consuming tasks from that queue.
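A minimal Swoole HTTP server wiring these components together might look like the sketch below. The worker counts, port, and handler are illustrative; the server only starts when the swoole extension is actually loaded:

```php
<?php
// Pure request handler, kept separate so the logic works without Swoole.
function handleRequest(string $path): string
{
    return "Hello from a Worker, path: $path";
}

if (extension_loaded('swoole')) {
    $server = new Swoole\Http\Server('0.0.0.0', 9501);

    $server->set([
        'worker_num'      => 4, // Worker processes forked by the Manager
        'task_worker_num' => 2, // Task Workers consuming the task queue
    ]);

    // Runs in a Worker process for every incoming request.
    $server->on('request', function ($request, $response) {
        $response->end(handleRequest($request->server['request_uri']));
    });

    // Runs in a Task Worker when a Worker calls $server->task(...).
    $server->on('task', function ($server, $taskId, $fromWorkerId, $data) {
        // ... process $data asynchronously ...
        return $data;
    });

    // The originating Worker is notified when a task completes.
    $server->on('finish', function ($server, $taskId, $data) {
    });

    $server->start();
}
```

Note that the 'request' callback runs inside a long-lived Worker process, which is precisely where the package keeps the Laravel application alive between requests.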

The Laravel application lives inside the Worker processes. Each Laravel application is loaded and bootstrapped only once, when its Worker process starts up. That means Laravel can be kept in memory: there's no need to load the whole framework every time you process a request.