
report-storage.sh

In the grand tradition of my publishing little building-block shell scripts of interest, here goes another one. This is a simple cron job that I run daily on a number of hosts to record storage usage growth. (This is in addition to Cacti and Nagios, which already poll some of this data, but for different reasons and at different granularity.)

The FILES variable should be populated with a whitespace-separated list of files, directories, and block devices to track.

The DB_ABCD variables should be populated with appropriate credentials to talk to a MySQL server.

The actual script looks something like this:
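(In outline – the du/blockdev sizing and the column names are assumptions, matched to the schema below:)
[code]
#!/bin/sh
# report-storage.sh - daily storage usage snapshot (a sketch).
# Assumes GNU du, blockdev(8), and the mysql command-line client.

FILES="/var /home /dev/sda1"        # whitespace-separated paths to track

DB_HOST="localhost"
DB_USER="metrics"
DB_PASS="secret"
DB_NAME="metrics"

HOST=`hostname -s`
TODAY=`date +%F`

for f in $FILES ; do
    if [ -b "$f" ] ; then
        # block device: ask the kernel for its size in bytes
        BYTES=`blockdev --getsize64 "$f"`
    else
        # plain file or directory: let du total it up
        BYTES=`du -sb "$f" | cut -f1`
    fi
    # this echo is what produces the email report from cron
    echo "$HOST $f $BYTES"
    echo "REPLACE INTO storage_usage (host, path, bytes, day) VALUES ('$HOST', '$f', $BYTES, '$TODAY');" \
        | mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME"
done
[/code]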

I am putting my data into a table called “storage_usage” in a database called “metrics”:
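(Column names and types here are illustrative:)
[code]
-- one row per host/path/day; REPLACE in the cron job keeps it that way
CREATE TABLE storage_usage (
    host  VARCHAR(64)     NOT NULL,
    path  VARCHAR(255)    NOT NULL,
    bytes BIGINT UNSIGNED NOT NULL,
    day   DATE            NOT NULL,
    PRIMARY KEY (host, path, day)
);
[/code]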

Obviously, this could be tweaked in any number of different ways, based on your needs. One tweak you might want to consider if you’re running it in a daily cron is to remove the echo so you don’t get an email report of every run. Also, you might want to record more than one snapshot per file per host per day – in which case you probably need to change the type of the timestamp column to a datetime. Or there might be cases where you want to change the replace to an insert or… whatever 😉

init.d template

This is a rudimentary template that I’ve been using for very quick and dirty /etc/init.d scripts recently.

It works under the assumption that your server daemon has a unique name and only ever runs a single instance – this also means that the binary and the init.d script cannot share a name – otherwise strange things happen 😉

Actual invocation logic may need to be updated on a per-service basis and chkconfig style headers would have to be added manually, but it works well for what it is.
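The skeleton, with the daemon name and path as placeholders (it leans on pidof/killall matching the unique process name, which is why the naming assumption matters):
[code]
#!/bin/sh
# init.d template - assumes a single-instance daemon with a unique
# process name that does NOT match the name of this script.

NAME=mydaemon                      # placeholder
DAEMON=/usr/local/sbin/$NAME       # placeholder
ARGS=""

case "$1" in
    start)
        if pidof "$NAME" >/dev/null ; then
            echo "$NAME is already running"
        else
            echo "Starting $NAME"
            $DAEMON $ARGS
        fi
        ;;
    stop)
        echo "Stopping $NAME"
        killall "$NAME" 2>/dev/null
        ;;
    restart)
        $0 stop
        sleep 1
        $0 start
        ;;
    status)
        if pidof "$NAME" >/dev/null ; then
            echo "$NAME is running"
        else
            echo "$NAME is not running"
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac

exit 0
[/code]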

gearman 0.3 php extension api

No real preamble to be made here. Gearman is a distributed job queuing system by the fine folks who brought us memcached. It is nicer than anything else I’ve looked at. I am attempting to switch one of my projects over to it (replacing a crufty curl + unix sockets + memcached monstrosity that attempted to do the same job).

The documentation is lacking, but if the discussion group is any indication, real docs are a high priority for the project team. Today, I visited the IRC channel to ask for a status update on docs for the PHP extension api (as opposed to the PEAR all-script api, whose auto-generated docs are broken). Turns out my suspicions were right: documentation is a high priority, and none currently exists for the api in question. However… I was informed that the classes support reflection… so 🙂

A quick grep of the source for the extension tells me that I am looking at four classes: GearmanClient, GearmanWorker, GearmanJob, and GearmanTask. A ridiculously short php script later…
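It went more or less like this:
[code]
<?php
// dump the methods (and parameter names) of the gearman extension classes
$classes = array('GearmanClient', 'GearmanWorker', 'GearmanJob', 'GearmanTask');
foreach ($classes as $class) {
    $r = new ReflectionClass($class);
    echo $class, "\n";
    foreach ($r->getMethods() as $method) {
        $params = array();
        foreach ($method->getParameters() as $param) {
            $params[] = '$' . $param->getName()
                      . ($param->isOptional() ? ' (optional)' : '');
        }
        echo '  ', $method->getName(), '( ', implode(', ', $params), " )\n";
    }
    echo "\n";
}
?>
[/code]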

And I can at least try to make a human-readable list of available methods.

GearmanWorker

  • __construct()
  • clone()
  • error()
  • returnCode()
  • setOptions( $option, $data )
  • addServer( $host, $port ) – both args optional, examples say defaults are localhost on port 4730.
  • addFunction( $function_name, $function, $data, $timeout ) – data and timeout optional
  • work()

GearmanClient

  • __construct()
  • clone()
  • error()
  • setOptions( $option, $data )
  • addServer( $host, $port ) – reflection says REQUIRED; however, the provided examples and personal experience say otherwise
  • do( $function_name, $workload, $unique ) – unique is optional
  • doHigh( $function_name, $workload, $unique ) – unique is optional
  • doLow( $function_name, $workload, $unique ) – unique is optional
  • doJobHandle()
  • doStatus()
  • doBackground( $function_name, $workload, $unique ) – unique is optional
  • doHighBackground( $function_name, $workload, $unique ) – unique is optional
  • doLowBackground( $function_name, $workload, $unique ) – unique is optional
  • jobStatus( $job_handle )
  • echo( $workload )
  • addTask( $function_name, $workload, $data, $unique ) – data and unique are optional
  • addTaskHigh( $function_name, $workload, $data, $unique ) – data and unique are optional
  • addTaskLow( $function_name, $workload, $data, $unique ) – data and unique are optional
  • addTaskBackground( $function_name, $workload, $data, $unique ) – data and unique are optional
  • addTaskHighBackground( $function_name, $workload, $data, $unique ) – data and unique are optional
  • addTaskLowBackground( $function_name, $workload, $data, $unique ) – data and unique are optional
  • addTaskStatus( $job_handle, $data ) – data is optional
  • setWorkloadCallback( $callback )
  • setCreatedCallback( $callback )
  • setClientCallback( $callback )
  • setWarningCallback( $callback )
  • setStatusCallback( $callback )
  • setCompleteCallback( $callback )
  • setExceptionCallback( $callback )
  • setFailCallback( $callback )
  • clearCallbacks()
  • data()
  • setData( $data )
  • runTasks()

GearmanJob

  • __construct()
  • returnCode()
  • workload()
  • workloadSize()
  • warning( $warning )
  • status( $numerator, $denominator )
  • handle()
  • unique()
  • data( $data )
  • complete( $result )
  • exception( $exception )
  • fail()
  • functionName()
  • setReturn( $gearman_return_t )

GearmanTask

  • __construct()
  • returnCode()
  • create()
  • free()
  • function()
  • uuid()
  • jobHandle()
  • isKnown()
  • isRunning()
  • taskNumerator()
  • taskDenominator()
  • data()
  • dataSize()
  • takeData( $task_object ) – optional
  • sendData( $data )
  • recvData( $data_len )

The extension also appears to expose all constants defined in the C api.

I have since added this to the official wiki – so there are at least SOME docs on the site now 😉

php autoload

As of version 5.0, PHP has had the ability to dynamically include required classes as needed – without requiring the developer to manually include all possible dependencies beforehand. This means that in cases where your code execution never touches 39 of the 40 classes in the project, it loads, parses, and runs that much faster.

There is a performance hit for actually having to call the __autoload() method, but if you’re in a situation where the hit for executing a few extra comparison calls is unacceptable… you probably aren’t developing in PHP in the first place 😉

Almost all of the php I’ve written in the last 2-3 years uses autoloading, and it has probably saved me hundreds of hours of aggravation.

In most of my projects, the first line of any script or class usually looks something like this:
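That is, a single require of the file that defines the autoloader:
[code]
<?php
require_once('lib.php');
[/code]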

Then lib.php usually reads something like this:
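(A minimal sketch:)
[code]
<?php
// lib.php - class Foo is expected to live in Foo.php on the include path
function __autoload($class_name)
{
    require_once($class_name . '.php');
}
[/code]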

And that is all that is strictly required to make the magic happen. It is fast, it is easy to understand, it is easy to use. You can use require_once() or include_once() and there is very little meaningful difference.

I’ve looked around the net and found several other attempts at improving on this simple mechanism. But they invariably overcomplicate things. They attempt to recurse source directories, cache filename-to-class mappings to the filesystem, and otherwise turn what should be a simple filesystem operation that the php environment supports natively into a mess of exception handling and wheel reinvention.

There are obviously theoretical instances where you might want to have more than the one require_once/include_once line… but I’ve honestly never encountered one myself.

I mean, you could try to throw an exception if the file didn’t exist or otherwise failed to load… but nothing will happen. Failure to instantiate a nonexistent class is a fatal error in PHP, and will be handled as such with or without you – preempting any attempt at throwing an exception.

The only thing you can add is a bit of extra diagnostics or maybe logging to a separate location.

Assume that we have a file ‘test.php’:
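(Laid out so the instantiation lands on line 4, which matters for the errors below:)
[code]
<?php
require_once('autoload.php');

$frog = new Frog();
[/code]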

If autoload.php contains a simple autoload function that uses require_once(), and Frog.php doesn’t exist anywhere in your include path, the results will look something like this:
[code]
ammon@kif:~$ php test.php

Warning: require_once(Frog.php): failed to open stream: No such file or directory in /home/ammon/autoload.php on line 3

Fatal error: require_once(): Failed opening required 'Frog.php' (include_path='.:/usr/share/php:/usr/share/pear') in /home/ammon/autoload.php on line 3
[/code]
If we had used an include_once() call, the output is similar, but slightly more informative:
[code]
ammon@kif:~$ php test.php

Warning: include_once(Frog.php): failed to open stream: No such file or directory in /home/ammon/autoload.php on line 3

Warning: include_once(): Failed opening 'Frog.php' for inclusion (include_path='.:/usr/share/php:/usr/share/pear') in /home/ammon/autoload.php on line 3

Fatal error: Class 'Frog' not found in /home/ammon/test.php on line 4
[/code]
So that’s probably a bit more useful in tracking down the error. Require calls don’t return anything – they throw a fatal error on failure. Include calls, however, return FALSE on failure and TRUE if the file is (or, in the case of include_once, has already been) successfully included. So you can include_once() and write to a separate logfile (or to the output stream…) if you need more information than the fatal error already provides you.

<rant>

To those who insist on giving your classes and their containing files different names… umm. Wow.

If I have a class called DatabaseConnection, I’m going to put it in a file called DatabaseConnection.php. If I’m working with strange people who somehow don’t think that is explicit enough, I might call it DatabaseConnection.class.php and tweak the autoload method ever so slightly to compensate. There’s no good reason to put it in a file called projx-database_connection.incl or something. No. There isn’t.

If you want to organize your classes into a meaningful directory structure… good for you. Use PHP’s built-in include_path ini option. Don’t waste time trying to cascade down a directory structure searching for the classes – just make sure your includes are all in a set of reliable locations. You don’t actually have to edit the php.ini file and bounce Apache or your php-cgi processes – just define the additional include paths in the same file where you define your autoloader:
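For example (the directory names are placeholders):
[code]
<?php
// prepend our class directories to the existing include path
set_include_path('/srv/projx/classes' . PATH_SEPARATOR .
                 '/srv/shared/classes' . PATH_SEPARATOR .
                 get_include_path());
[/code]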

Naturally, you could turn that into some function calls to dynamically register and unregister directories, etc… but at that point, you’re probably hurting yourself again. If your codebase is being reorganized enough to make maintaining the list of include dirs onerous without full-time intervention, something else has probably already gone very wrong. At best, the code probably doesn’t work anyway, so any brief delay in updating the list can’t hurt any more than whatever else is happening.

</rant>

But seriously. __autoload() is your friend. It will help clean up your code if you let it. It can help enforce naming conventions. It can even improve performance… so long as you refrain from using it to shoot yourself in the foot. 😉

php signals while selecting

So a fairly longstanding gripe of mine has been that PHP fails to execute registered signal handlers when it receives a signal in the middle of a blocking select call. Today, I finally bumped into a situation where I couldn’t just change the spec to avoid the problem… and I’ve figured out how to make it work.

The bug has been reported here, where it was ignored for a few months before being shot down and ignored some more as per php dev team regulations.

Sample code given by the reporter of the bug is markedly similar to the situations in which I’ve encountered the problem.

By filling in his blanks, my first test case looks something like this:
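(The port is arbitrary; the layout puts socket_select() on line 13, which matters for the warning below:)
[code]
<?php
// sigtest.php - does the handler fire while socket_select() blocks?
function sig_handler($signo) {
    echo "received sig #$signo\n";
}
pcntl_signal(SIGINT, 'sig_handler');

$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($sock, '127.0.0.1', 10000);
socket_listen($sock);

$read = array($sock); $write = NULL; $except = NULL;
$ret = socket_select($read, $write, $except, 60);
echo "select returned '$ret'\n";
[/code]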

When executing the script and pressing ^C (which sends SIGINT), the following occurs:
[code]
ammon@morbo:~$ php sigtest.php
PHP Warning: socket_select(): unable to select [4]: Interrupted system call in /home/ammon/sigtest.php on line 13
select returned ''
[/code]

Ok, so the warning is to be expected, and we can easily squelch that.

The real problem is that the signal handler never runs.

However… for the first time in my life, a response to a php bug report proves enlightening. The dev who answered this ticket provides his sample code and says he can’t duplicate the bug. Comparing his code with mine, only one difference stands out:
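His test case declares ticks; mine didn’t:
[code]
<?php
declare(ticks = 1);
[/code]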

The declare(ticks) directive is deprecated as of php 5.3 and will not be with us in php 6.0. Ticks are an unreliable, unpredictable, and generally bad thing in php. I’ve neither successfully used them nor seen a successful and justified use.

That being said… turning ticks on, but not telling them to do anything, appears to address the problem of discarded interrupts:
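(The same test case, with the declare added and the warning squelched:)
[code]
<?php
declare(ticks = 1);   // the only meaningful change

function sig_handler($signo) {
    echo "received sig #$signo\n";
}
pcntl_signal(SIGINT, 'sig_handler');

$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($sock, '127.0.0.1', 10000);
socket_listen($sock);

$read = array($sock); $write = NULL; $except = NULL;
$ret = @socket_select($read, $write, $except, 60);   // @ squelches the EINTR warning
echo "select returned '$ret'\n";
[/code]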

And execution:
[code]
ammon@morbo:~$ php sigtest.php
received sig #2
select returned ''
[/code]
Which is precisely the desired behavior.

I don’t know what the performance hit for turning ticks on is; I haven’t had time to research it. But I can confirm that, by declaring ticks globally, it does work in an OO environment as well:
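(A sketch; the class name and layout are illustrative:)
[code]
<?php
declare(ticks = 1);   // must be declared globally, not inside the class

class SelectLoop
{
    private $sock;

    public function __construct()
    {
        pcntl_signal(SIGINT, array($this, 'handleSignal'));
        $this->sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
        socket_bind($this->sock, '127.0.0.1', 10000);
        socket_listen($this->sock);
    }

    public function handleSignal($signo)
    {
        echo "received sig #$signo\n";
    }

    public function run()
    {
        $read = array($this->sock);
        $write = NULL;
        $except = NULL;
        $ret = @socket_select($read, $write, $except, 60);
        echo "select returned '$ret'\n";
    }
}

$loop = new SelectLoop();
$loop->run();
[/code]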

Executing and hitting ^C:
[code]
ammon@morbo:~$ php sigtest.php
received sig #2
select returned ''
[/code]
After a few minutes of largely unscientific testing, it appears that turning ticks on globally costs a whopping 4 bytes of ram and causes the script to occasionally consume more cpu than the top process I used to monitor it. So… at first glance, the cost is pretty negligible. All I can say is that if you ever need to handle signals (SIGTERM, SIGHUP, etc…) from within a blocking select call in php, declare ticks looks like the only option for now.

I did the initial tests in 5.1.6, but can confirm the same behavior in 5.2.5. I don’t know how it will behave in 5.3, since I don’t run alpha releases on my servers, but my gut likes to think that it will continue to work the same for now… and will hopefully not break until 6.0 (when everything else will explode for a few years). Shrug.

php tail

I have a php script that frequently needs to email me the last few lines of a log file. I can’t afford to exec() a binary tail process, so the solution has to be in pure php.

Originally, the files in question never exceeded more than a few thousand lines. Unfortunately, I am now encountering cases where the files are occasionally 50,000 lines or longer. This causes PHP’s memory consumption to explode.

Note: Code snippets provided here are not fully functional standalone scripts. The scripts I ran to benchmark the algorithms contain some rudimentary setup logic that is not important here, and so it has not been included.

My original method:
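(In sketch form – slurp the whole file with file() and slice off the tail:)
[code]
<?php
// tail-file.php - the naive approach: the whole file becomes an array
function tail_file($path, $lines)
{
    $data = file($path);
    return implode('', array_slice($data, -$lines));
}
[/code]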

This is easy to understand and is pretty fast, all things considered. Unfortunately, the memory footprint for loading a file into an array is obscene. Loading a 4400 line log file with this method could consume more than 17mb of ram. 50,000 line files easily stressed the 256mb limit I am able to provide the process.

So, the obvious solution to the memory consumption is to avoid loading the entire file at once. What if we kept a rotating list of lines in the file?
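(Again a sketch – push new lines onto the end of a buffer and shift old ones off the front:)
[code]
<?php
// tail-array.php - keep only the most recent $lines lines while reading
function tail_array($path, $lines)
{
    $buf = array();
    $fp = fopen($path, 'r');
    while (($line = fgets($fp)) !== false) {
        array_push($buf, $line);
        if (count($buf) > $lines) {
            array_shift($buf);                  // drop the oldest line
        }
    }
    fclose($fp);
    return implode('', $buf);
}
[/code]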

This method works by keeping the $lines-many most recent lines of the file in an array. Memory consumption remains sane, but the performance hit for performing so many array pushes and shifts is bad. Really bad. With small files, I can’t notice any difference between this method and the file() method… but with longer files, it adds up quickly.

Given a 51 line, 4kb file, an average execution ($lines = 20) might look like this:
[code]
ammon@zapp:~$ time ./tail-file.php a.log >/dev/null

real 0m0.015s
user 0m0.009s
sys 0m0.007s

ammon@zapp:~$ time ./tail-array.php a.log >/dev/null

real 0m0.016s
user 0m0.010s
sys 0m0.006s
[/code]

Comparable enough. But given a 50,004 line (3.3mb) log file, the difference becomes quite clear. However… what if my log file grows obscenely large? I’ve got a 9 million line log file (1.6gb) lying around to test with…

The file() method crashes because it can’t allocate enough ram to hold a 9 million element array and the array method takes almost 20 seconds to execute. It’s slow… but at least it works.

Of course, there are other methods. The one I finally settled on is this:
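(A sketch; the chunk size is arbitrary:)
[code]
<?php
// tail-seek.php - jump to EOF and read backward until we have enough newlines
function tail_seek($path, $lines)
{
    $fp = fopen($path, 'r');
    fseek($fp, 0, SEEK_END);
    $pos = ftell($fp);

    $chunk = 4096;                              // arbitrary read size
    $data = '';
    while ($pos > 0 && substr_count($data, "\n") <= $lines) {
        $read = min($chunk, $pos);
        $pos -= $read;
        fseek($fp, $pos);
        $data = fread($fp, $read) . $data;      // prepend each chunk
    }
    fclose($fp);

    // trim any excess from the front of the buffer
    $pieces = explode("\n", $data);
    return implode("\n", array_slice($pieces, -($lines + 1)));
}
[/code]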

This method doesn’t waste time reading the bulk of the file. It jumps to the end and scans backward until enough newlines have been located. The only problem here is that your average filesystem isn’t optimized for reading backwards… but since we’re not really reading very much data, it doesn’t much matter.

Performance is a trifle slower on small files, but it’s astronomically better on long ones. This is similar to the method used by most unix ‘tail’ commands, and is the clear winner for actual use in my application.

Of course, it needs a bit of cleanup from the state I’ve provided it in, and isn’t appropriate for all environments… but it’s a trifle better than requiring 20 seconds and 20gb of ram to execute 😉