Tag Archives: performance

lazy php profiler

Caveman profiling with a side of “where were you at 9pm on the night in question?” As always, season to taste.

Continue reading lazy php profiler

php autoload

Since version 5.0, PHP has been able to dynamically include required classes on demand – without the developer manually including every possible dependency beforehand. This means that in cases where your code execution never touches 39 of the 40 classes in the project, it loads, parses, and runs that much faster.

There is a performance hit for actually having to call the __autoload() function, but if you’re in a situation where the hit for executing a few extra comparison calls is unacceptable… you probably aren’t developing in PHP in the first place 😉

Almost all of the php I’ve written in the last 2-3 years uses autoloading, and it has probably saved me hundreds of hours of aggravation.

In most of my projects, the first line of any script or class usually looks something like this:
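Something like this, sketched from memory (lib.php is just this article’s name for the bootstrap file):

```php
<?php
// Entry-point boilerplate: one require for the bootstrap file that
// defines the autoloader. Nothing else needs to be included by hand.
require_once('lib.php');
```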

Then lib.php usually reads something like this:
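A minimal sketch of what lib.php can contain. The article predates PHP 8, where the magic __autoload() hook was removed, so this version registers the same one-liner through spl_autoload_register():

```php
<?php
// Map the class name straight to a filename – Frog -> Frog.php –
// resolved against the include_path. The era-appropriate original
// would have defined the magic __autoload() function directly.
spl_autoload_register(function ($className) {
    require_once($className . '.php');
});
```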

And that is all that is strictly required to make the magic happen. It is fast, it is easy to understand, it is easy to use. You can use require_once() or include_once() and there is very little meaningful difference.

I’ve looked around the net and found several other attempts at improving on this simple mechanism. But they invariably overcomplicate things. They attempt to recurse source directories, cache filename->class differences to the filesystem, and otherwise turn what should be a simple filesystem operation that the php environment supports natively into a mess of exception handling and wheel reinvention.

There are obviously theoretical instances where you might want to have more than the one require_once/include_once line… but I’ve honestly never encountered one myself.

I mean, you could try to throw an exception if the file didn’t exist or otherwise failed to load… but it would accomplish nothing. Failure to instantiate a nonexistent class is a fatal error in PHP, and will be handled as such with or without you – preempting any attempt at throwing an exception.

The only thing you can add is a bit of extra diagnostics or maybe logging to a separate location.

Assume that we have a file ‘test.php’:
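A minimal sketch of what test.php might contain, with the doomed new Frog() landing on line 4 to match the fatal error’s line number:

```php
<?php
require_once('autoload.php');   // defines the autoloader

$f = new Frog();                // line 4: Frog.php does not exist anywhere
```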

If autoload.php contains a simple autoload function that uses require_once(), and Frog.php doesn’t exist anywhere in your include path, the results will look something like this:
ammon@kif:~$ php test.php

Warning: require_once(Frog.php): failed to open stream: No such file or directory in /home/ammon/autoload.php on line 3

Fatal error: require_once(): Failed opening required 'Frog.php' (include_path='.:/usr/share/php:/usr/share/pear') in /home/ammon/autoload.php on line 3
If we use an include_once() call instead, the output is similar, but slightly more informative:
ammon@kif:~$ php test.php

Warning: include_once(Frog.php): failed to open stream: No such file or directory in /home/ammon/autoload.php on line 3

Warning: include_once(): Failed opening 'Frog.php' for inclusion (include_path='.:/usr/share/php:/usr/share/pear') in /home/ammon/autoload.php on line 3

Fatal error: Class 'Frog' not found in /home/ammon/test.php on line 4
So that’s probably a bit more useful in tracking down the error. Require calls don’t return anything – they trigger a fatal error on failure. Include calls, however, return FALSE on failure and TRUE if the file is (or, in the case of include_once(), has already been) successfully included. So you can include_once() and write to a separate logfile (or to the output stream…) if you need more information than the fatal error already provides.
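A sketch of that idea (function and log wording are mine, not the original’s): check include_once()’s return value and log the failure before PHP’s unavoidable fatal error fires.

```php
<?php
// Use include_once()'s boolean return to add our own diagnostics.
// The fatal "Class not found" error will still follow; we just get a
// log entry in first.
function try_autoload($class)
{
    // @ silences the stock warnings shown above; drop it to keep them.
    if (!@include_once($class . '.php')) {
        error_log("autoload: could not load class '{$class}'");
        return false;
    }
    return true;
}
spl_autoload_register('try_autoload');
```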


To those who insist on giving your classes and their containing files different names… umm. Wow.

If I have a class called DatabaseConnection, I’m going to put it in a file called DatabaseConnection.php. If I’m working with strange people who somehow don’t think that is explicit enough, I might call it DatabaseConnection.class.php and tweak the autoload method ever so slightly to compensate. There’s no good reason to put it in a file called projx-database_connection.incl or something. No. There isn’t.
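The “ever so slightly” tweak really is just the suffix (sketched again with spl_autoload_register(); names are illustrative):

```php
<?php
// DatabaseConnection now lives in DatabaseConnection.class.php;
// only the suffix appended in the autoloader changes.
spl_autoload_register(function ($className) {
    require_once($className . '.class.php');
});
```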

If you want to organize your classes into a meaningful directory structure… good for you. Use PHP’s built-in include_path ini option. Don’t waste time trying to cascade down a directory structure searching for the classes – just make sure your includes are all in a set of reliable locations. You don’t actually have to edit the php.ini file and bounce Apache or your php-cgi processes, just define the additional include paths in the same file where you define your autoloader:
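For example (the directory names here are made up), in the same file that registers the autoloader:

```php
<?php
// Append our class directories to whatever include_path is already
// set – no php.ini edit, no web server restart required.
set_include_path(implode(PATH_SEPARATOR, array(
    get_include_path(),
    '/var/www/projx/lib',
    '/var/www/projx/models',
)));
```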

Naturally, you could turn that into a set of function calls to dynamically register and unregister directories, etc… but at that point, you’re probably hurting yourself again. If your codebase is being reorganized so often that maintaining the list of include dirs is onerous without full-time intervention, something else has already gone very wrong. At best, the code probably doesn’t work anyway, so any brief delay in updating the list can’t hurt any more than whatever else is happening.


But seriously. __autoload() is your friend. It will help clean up your code if you let it. It can help enforce naming conventions. It can even improve performance… so long as you refrain from using it to shoot yourself in the foot. 😉

php tail

I have a php script that frequently needs to email me the last few lines of a log file. I can’t afford to exec() a binary tail process, so the solution has to be in pure php.

Originally, the files in question never exceeded a few thousand lines. Unfortunately, I am now encountering cases where the files are occasionally 50,000 lines or longer. This causes PHP’s memory consumption to explode.

Note: the code snippets provided here are not fully functional standalone shell scripts. The scripts I ran to benchmark the algorithms contain some rudimentary setup logic that isn’t important here, so it has not been included.

My original method:
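Reconstructed as a sketch (the function name is mine): slurp every line into an array with file(), then keep the last few.

```php
<?php
// Load the whole file as an array of lines, then slice off the tail.
// Simple and fast – but the array holds every line in memory at once.
function tail_file($path, $lines = 20)
{
    $all = file($path);              // one array element per line
    return implode('', array_slice($all, -$lines));
}
```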

This is easy to understand and is pretty fast, all things considered. Unfortunately, the memory footprint for loading a file into an array is obscene. Loading a 4400 line log file with this method could consume more than 17mb of ram. 50,000 line files easily stressed the 256mb limit I am able to provide the process.

So, the obvious solution to the memory consumption is to avoid loading the entire file at once. What if we kept a rotating list of lines in the file?
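A sketch of that rotating-buffer idea (again, names are mine):

```php
<?php
// Read line by line, keeping only the most recent $lines lines.
// Memory stays flat, but every line past the first $lines costs an
// array_push() plus an array_shift(), which reindexes the buffer.
function tail_array($path, $lines = 20)
{
    $buf = array();
    $fh  = fopen($path, 'r');
    while (($line = fgets($fh)) !== false) {
        array_push($buf, $line);
        if (count($buf) > $lines) {
            array_shift($buf);
        }
    }
    fclose($fh);
    return implode('', $buf);
}
```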

This method works by keeping the $lines-many most recent lines of the file in an array. Memory consumption remains sane, but the performance hit for performing so many array pushes and shifts is bad. Really bad. With small files, I can’t notice any difference between this method and the file() method… but with longer files, it adds up quickly.

Given a 51 line, 4kb file, an average execution ($lines = 20) might look like this:
ammon@zapp:~$ time ./tail-file.php a.log >/dev/null

real 0m0.015s
user 0m0.009s
sys 0m0.007s

ammon@zapp:~$ time ./tail-array.php a.log >/dev/null

real 0m0.016s
user 0m0.010s
sys 0m0.006s

Comparable enough. But given a 50,004 line (3.3mb) log file:

The difference becomes quite clear. However… what if my log file grows obscenely large? I’ve got a 9 million line log file (1.6gb) lying around to test with…

The file() method crashes because it can’t allocate enough ram to hold a 9-million-element array, and the array method takes almost 20 seconds to execute. It’s slow… but at least it works.

Of course, there are other methods. The one I finally settled on is this:
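A sketch of that seek-from-the-end approach (the chunk size and function name are my guesses, not the original’s):

```php
<?php
// Seek to EOF and read backward in fixed-size chunks until we have
// seen enough newlines, then keep only the last $lines lines.
function tail_seek($path, $lines = 20, $chunk = 4096)
{
    $fh = fopen($path, 'r');
    fseek($fh, 0, SEEK_END);
    $pos      = ftell($fh);
    $data     = '';
    $newlines = 0;
    // Stop once we hold more newlines than lines requested (so the
    // first kept line is complete), or we hit the start of the file.
    while ($pos > 0 && $newlines <= $lines) {
        $read = min($chunk, $pos);
        $pos -= $read;
        fseek($fh, $pos, SEEK_SET);
        $data = fread($fh, $read) . $data;
        $newlines = substr_count($data, "\n");
    }
    fclose($fh);
    $all = explode("\n", rtrim($data, "\n"));
    return implode("\n", array_slice($all, -$lines)) . "\n";
}
```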

This method doesn’t waste time reading the bulk of the file. It jumps to the end and scans backward until enough newlines have been located. The only problem here is that your average filesystem isn’t optimized for reading backwards… but since we’re not really reading very much data, it doesn’t much matter.

Performance is a trifle slower on small files, but it’s astronomically better on long ones. This is similar to the method used by most unix ‘tail’ commands, and is the clear winner for actual use in my application.

Of course, it needs a bit of cleanup from the state I’ve provided it in, and isn’t appropriate for all environments… but it’s a trifle better than requiring 20 seconds and 20gb of ram to execute 😉