in_array for multiple checks

I’ve come across if statements like this numerous times now.

if ($var == 'value1' or $var == 'value2' or $var == 'value3' or $var == 'value4' or $var == 'value5') {


Not only does a condition like this take time to write, it also runs you out of space in the editor. Use in_array and it’ll look much nicer and cleaner.

if (in_array($var, ['value1', 'value2', 'value3', 'value4', 'value5'])) {
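One caveat worth knowing, not covered above: in_array uses loose (==) comparison by default, so when your values can cross types it's worth passing the optional third "strict" flag.

```php
<?php
// in_array() takes an optional third "strict" flag; without it PHP
// uses loose (==) comparison, which can match across types.
$var = '2';

var_dump(in_array($var, [1, 2, 3]));       // bool(true)  - '2' == 2
var_dump(in_array($var, [1, 2, 3], true)); // bool(false) - '2' !== 2
```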


When Request::route() doesn’t give you Controller and Action

Grabbing the controller and action out of a request is easy in Laravel 3: it’s just a matter of grabbing the route object using Request::route() and accessing its controller and action properties. The problem is that in real-world use cases this doesn’t work very well, especially when detecting the controller and action inside a route filter. Here’s why.

Laravel’s route handling is done by a few classes. If you do a backtrace:

$uri = URI::current();
Request::$route = Router::route(Request::method(), $uri);
$response = Request::$route->call();

laravel/laravel.php (stripped down version)

    • First Laravel gets the current URI using URI::current(). This returns the URL path for the current request, without the scheme or query string. Something like post/create.
    • Then it creates a Route object by calling Router::route($method, $current_uri), passing the method of the current request (GET/POST) and the URI. The method is grabbed using Request::method().
    • After that it gets a Response object by calling the call method of the route object.
      public function call()
      {
      	// runs "before" filters
      	$response = Filter::run($this->filters('before'), array(), true);

      	if (is_null($response))
      		$response = $this->response(); // this is where the controller is resolved

      	// more stuff
      	Filter::run($this->filters('after'), array(&$response));

      	return $response;
      }

      laravel/routing/route.php (stripped down version)

      Inside $this->response();

      public function response()
      {
      	return Controller::call($delegate, $this->parameters);
      }

      laravel/routing/route.php (stripped down version)

      Inside Controller::call

      public static function call($destination, $parameters = array())
      {
      	static::references($destination, $parameters); // finally, sets the controller and action
      }

      laravel/routing/controller.php (stripped down version)

      If you do a dd($destination) after the static::references call, it will show the complete path for the route’s controller and action. Something like posts@new, or users.posts@new if it’s a nested controller.

    • Finally, it calls the render method of the Response object.

This whole process is visible in laravel/laravel.php

Inside Route::call it runs the “before” filters and THEN resolves the controller and action for the specific request. This is when the controller name and action are assigned to the Request::route() object. So if you try to access Request::route()->controller from a before filter, it returns NULL. The workaround I did was to run the Controller::references logic to resolve the controller and action inside my custom filter, which is executed before the route call.

Controller::references is protected, and the only ways to access it are to make it public, which involves a core file edit, or to use the Reflection API; I’ve done the latter.

This will output the full path to the controller and action; if you want them separately, just explode the $d on @: the first part will be the controller, the second the action.

Note that my solution specifically targeted controller-based routes. You will need to add checks if you have Closure-based routes; improve my gist.
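Here’s a self-contained sketch of the Reflection trick. Since the real method lives in Laravel’s routing internals, the Resolver class below is a made-up stand-in (its references() body is invented for illustration); only the Reflection calls mirror the actual technique.

```php
<?php
// "Resolver" stands in for Laravel's Controller class; its references()
// method mimics a protected static helper that normalizes a
// "controller@action" destination string in place.
class Resolver
{
    protected static function references(&$destination)
    {
        $destination = strtolower($destination);

        return $destination;
    }
}

$destination = 'Users.Posts@New';

// Reflection lets us call the protected method without a core file edit.
$method = new ReflectionMethod('Resolver', 'references');
$method->setAccessible(true);

// First argument is null because the method is static; the parameter
// must be passed by reference, hence the explicit &.
$d = $method->invokeArgs(null, array(&$destination));

// Split "controller@action" into its two parts.
list($controller, $action) = explode('@', $d);

echo $controller . "\n"; // users.posts
echo $action . "\n";     // new
```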

Worth checking:

Lifecycle of a Laravel Request #1

Lifecycle of a Laravel Request #2

Reflection API

Multiple File Uploads in Laravel

Out of the box Laravel doesn’t support multiple file uploads in one go. However, since Laravel uses Symfony’s HttpFoundation library under the hood, handling a multiple file upload is easy.

Assuming my file input field’s name is “file[]”

foreach ((array) Request::foundation()->files->get('file') as $file) {
	if ( ! is_null($file)) {
		$file->move('save_directory', 'new_file_name_with_ext');
	}
}

So, the obvious: Request::foundation()->files->get('file') returns an array of “UploadedFile” objects. You can do neat things with these objects; the class is located at laravel/vendor/Symfony/Component/HttpFoundation/File/UploadedFile.php.

$file->getClientOriginalName(); //grab the original file name
$file->guessExtension(); //grab the ext

This is for Laravel 3, and the technique should be applicable to Laravel 4 as well.

Laravel and NewRelic

NewRelic, where can I start… it’s the best freakin’ thing I’ve come across (recently). Since you’re here I’m sure you’re already using NewRelic. NewRelic does support some PHP frameworks out of the box, but at the time I’m writing this it doesn’t take Laravel’s routing seriously, resulting in all of your requests showing up as index.php.

I rolled my own helper library to use NewRelic’s core features. If you want to learn the know-how, refer to this.

Get the code and put it inside application/libraries/newrelic.php (I’m talking about Laravel v3).

To get things rolling, you need to hook into one of the core events. This will tell NewRelic the correct route.

Event::listen('laravel.done', function($request) {
	$route = Request::route();
	$transaction = $route->controller . '/' . $route->controller_action;
	// hand the name to NewRelic; newrelic_name_transaction() is the
	// underlying agent function the helper wraps
	newrelic_name_transaction($transaction);
});

To send errors to NewRelic, update the logger at config/error.php:

'logger' => function($exception)
{
	// Log::exception($exception); // uncomment this if you want to keep the default logging alongside newrelic
	// report $exception to newrelic here
},

To get the JavaScript links (so you can monitor performance from the end user’s point of view as well), call the browserHeader and browserFooter methods from your views.

If you have artisan tasks or cron jobs running, call the markBackgroundTask method at the beginning of those tasks/requests so NewRelic knows it’s a background task.
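For reference, here’s a minimal sketch of what such a wrapper class might look like. The method names follow the ones mentioned in this post and the newrelic_* functions are the agent extension’s real API, but the structure is an assumption of mine, not the actual library.

```php
<?php
// Minimal sketch: every call no-ops when the newrelic extension
// isn't loaded, so the app still runs without the agent installed.
class NewRelic
{
    protected $loaded;

    public function __construct()
    {
        $this->loaded = extension_loaded('newrelic');
    }

    // Name the current transaction, e.g. "posts/new"
    public function nameTransaction($name)
    {
        if ($this->loaded) newrelic_name_transaction($name);
    }

    // Report an exception to NewRelic
    public function captureException($exception)
    {
        if ($this->loaded) newrelic_notice_error($exception->getMessage(), $exception);
    }

    // Flag the current request as a background job (artisan/cron)
    public function markBackgroundTask()
    {
        if ($this->loaded) newrelic_background_job(true);
    }

    // Real User Monitoring snippets for your views
    public function browserHeader()
    {
        return $this->loaded ? newrelic_get_browser_timing_header() : '';
    }

    public function browserFooter()
    {
        return $this->loaded ? newrelic_get_browser_timing_footer() : '';
    }
}
```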

Using the class directly in your code introduces a new dependency, so if you want, you can use the IoC container to store an instance of the class.

IoC::singleton('monitor', function() {
	return new NewRelic;
});

You can call the methods as usual: IoC::resolve('monitor')->captureException($throw);

Don’t forget to contribute to the code!

static, not just for classes

We’ve seen static inside classes, “a lot”. In the context of OOP, static is used to declare static properties and methods, which don’t need an instance of a class to work with. When extending classes, static is also used to access static properties and methods of the parent class. Anyhow, static can be used for variable scoping as well: when a variable is declared static inside a function, its value is preserved across multiple function calls.

function test()
{
    static $a = 5;
    echo $a . "\n";
    $a++;
}

test(); //5
test(); //6
test(); //7

This is from the manual; check it for the 101.

The popular DIC library “Pimple” uses a static variable to store shared “objects”, clever. Here’s an excerpt from the source.

public static function share(Closure $callable)
{
    return function ($c) use ($callable) {
        static $object;

        if (null === $object) {
            $object = $callable($c);
        }

        return $object;
    };
}
By default, each time you get an object, Pimple returns a new instance of it. If you want the same instance to be returned for all calls, wrap your anonymous function with the share() method:

$c['session'] = $c->share(function ($c) {
    return new Session($c['session_storage']);
});

Since Pimple stores shareable objects as static, rest assured that only one instance of each object remains. Singletons, nicely done!
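To see the effect in isolation, here’s a minimal standalone version of the same trick, outside Pimple:

```php
<?php
// The static $object inside the returned closure survives across
// calls, so the factory closure runs only once.
function share(Closure $callable)
{
    return function () use ($callable) {
        static $object;

        if (null === $object) {
            $object = $callable();
        }

        return $object;
    };
}

$factory = share(function () { return new stdClass; });

// Every call now returns the exact same instance.
var_dump($factory() === $factory()); // bool(true)
```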

5 Services and Few Lessons I Learned

Since there are loads of APIs around, as developers we’re able to plug them into our projects within minutes, praise “REST”. I just plugged the 5th service into the app I’m working on (no, I’m not proud), it’s an all-time high! I want to share some of the troubles I ran into.

We had Twilio & SendGrid

This is a SaaS telephony/email marketing app. For telephony we’re using Twilio and for email SendGrid as the service providers, so we had the Twilio API from the beginning; users send SMS through the web app, they can send SMS in bulk, and for larger quantities job queues and workers come into play. Technically we had SendGrid as well, but it was through the SMTP protocol, does that count?


Time went by and we needed realtime notifications, like Facebook’s. “Web sockets” and “Node” were the first two things that came to my mind; since I’m not a fan of AJAX long polling, I decided to go with Pusher. Pusher uses web sockets by default, but if the user’s browser doesn’t support them it will switch to a Flash-based fallback. They have a good API with PHP wrappers. By the way, as of now, Facebook uses AJAX polling.

Couch DB (DBAAS)

The users mainly use the app to manage their leads/customers, and each lead has an activity feed, which basically records every interaction between the app and the lead. Most of this data is “not critical” and it’s not worth putting inside the main MySQL database. So I decided to try a NoSQL alternative this time (we already had some benchmarks using this data inside the main MySQL db). I looked at MongoDB and CouchDB; at the time neither Mongo nor CouchDB supported full-text searching, but I stumbled upon this project on GitHub. Luckily a company called Cloudant was already offering CouchDB with Lucene as a hosted service (the guy who created Lucene for CouchDB works @ Cloudant). I went with Cloudant.

Sentry for Error Reporting

Crawling through the logs ain’t a good experience, especially on a server (at least for me), so there I went and plugged in another service for error reporting: “Sentry”. “Sentry notifies the developers when users experience errors in your web and mobile apps.” For PHP apps, getsentry’s library registers an exception handler to capture all the errors for reporting back. I get email notifications when errors occur, plus I have a nice view of the errors (trust me, you want that when checking errors :D), a big time saver.

2X Performance Drop

After integrating CouchDB (Cloudant), the app got very slow, slow as hell. Some pages took almost 4-5 seconds to load on a decent internet connection. A ping to the Cloudant servers from our server gave us around 300ms, which is very, very bad for something like database communication. It turned out our Cloudant account was in Singapore while the app’s server is in the US. Duh! We were on a HostGator VPS at this time, so we moved the Cloudant account to the USA (Chicago), thank you Cloudant team! That gave us a 25ms ping, not bad. Recently we moved the server from HostGator to Rackspace. It turns out we’re in the same datacenter as Cloudant now, yippee! The ping dropped to 0.2ms and the app is fast as hell.

Moving from the HostGator VPS to a Rackspace dedicated server also reduced the call times to all of our APIs: before, we had around 1000ms durations for some Twilio calls, and around 2000-3000ms for Pusher calls (all from PHP). After moving to Rackspace those numbers dropped to around 200-300ms. So if you’re thinking about using a VPS with lots of 3rd party services, think twice.

Background Jobs/Message Queues

The main downside of using 3rd party services is that you’re going to rely on someone else, a dependency if you will. And we can’t guarantee that it will be online when we need it, or that it will always respond to our requests instantly. Some parts of my app had logic which called the Twilio API directly on the user’s request, and on the VPS these requests occasionally got slow as hell (so far so good under the dedicated environment). The thing is, it’s always better if you can abstract the 3rd party service away from the end user; using job queues has enabled us to reduce the waiting time significantly throughout the whole app, since the API calls are not processed on the request itself but added to a queue which is processed by a worker later on. Sure, it’s not actually faster, but it’s “fast” for the user.

But Why Not Locally?

Most of the services I mentioned can be implemented locally: instead of Pusher I could roll my own web-socket setup on Node, without Cloudant I could run my own CouchDB instance on the server, and there are lots of self-hosted logging platforms so I could get rid of Sentry. But given the time and manpower, it’s not always a good call to do all of these things ourselves, and most importantly, what I learned during the past few years is that when working on start-up projects you should ONLY concentrate on the core features. If we hit a record on Pusher, sure, we can roll our own implementation later. But not now; development time is far more important than infrastructure costs, it really is. So let the services handle all the tough stuff for a while ;)

It’s Not That Expensive

None of the services I mentioned costs a dime to start off; don’t forget to test them.



“A” record can do that

I constantly see people doing this, so I decided to write a post.

tl;dr: if you want to change your web host you don’t need to change your domain’s nameservers. Just add/modify an A record that points to the hosting server’s IP.

98% of the time, your domain has a fair number of records associated with it: MX records for Google Apps (not anymore?), CNAMEs for other hosted services. If you were to change nameservers, you would have to add all these records again in the new location, which is overkill. So just change the A record.

Property Overloading and empty()

While I was working on a Laravel-based app, I ran into a weird behavior. Even though this happened with my Laravel models, it’s something to consider whenever you’re doing OOP in PHP.

I had a User model defined with a has_many relation to a Post model. Straightforward: a User has many Posts. In my logic I was checking whether the user has posts or not using empty(), but for some reason it wasn’t returning what I expected.

$u = User::first();

var_dump(empty($u->posts)); // bool(true)

var_dump(count($u->posts)); // int(7)

So if I loop through $u->posts I’ll get 7 posts, but when I check if it’s empty it says “yes”.

<few hrs later>

It is not possible to use overloaded properties in other language constructs than isset(). This means if empty() is called on an overloaded property, the overloaded method is not called.

To workaround that limitation, the overloaded property must be copied into a local variable in the scope and then be handed to empty().

In a nutshell, you can’t pass overloaded properties (those served through __get()) to empty(). Since both relations and attributes are overloaded inside the model (they have to be), empty() was doing what it’s supposed to do (return an unexpected result :D).

As the manual says, the workaround is to copy the result of the overloaded property into a variable and pass that to empty(). So I guess this counts as a best practice: create a local variable before passing an object property to empty(), who knows which is overloaded and which is not! (Defining __isset() on the class is the other fix, since empty() consults it.)

Here’s the “working” copy.

$u = User::first();
$posts = $u->posts;

var_dump(empty($posts)); // bool(false)

var_dump(count($posts)); // int(7)
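The behavior is easy to reproduce outside Laravel with a few lines of plain PHP; the Bag class below is a made-up example, not Laravel code:

```php
<?php
// A class exposing data only through the __get() magic method,
// with no __isset() defined.
class Bag
{
    private $data = ['items' => [1, 2, 3]];

    public function __get($name)
    {
        return isset($this->data[$name]) ? $this->data[$name] : null;
    }
}

$bag = new Bag;

// empty() checks the property via __isset(), which doesn't exist here,
// so the overloaded property looks empty...
var_dump(empty($bag->items)); // bool(true)

// ...even though reading it normally works fine.
var_dump(count($bag->items)); // int(3)

// The workaround: copy into a local variable first.
$items = $bag->items;
var_dump(empty($items)); // bool(false)
```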

P.S: This is why you need to do unit testing before deploying.

Hiring Through Top Jobs – Is It Really Worth It?

Topjobs is one of the biggest job portals in Sri Lanka. Our company recently advertised on topjobs to fill a few positions. The service is good; since there’s not much human interaction I can’t say much, but we did contact them a few times. Make the payment and send them the ad, that’s it. The first ad brought us a load of applications within a few minutes of posting, which is kinda great (not for the HR guy :D). One guy called the company and explained his qualifications and experience on and on; when asked “Did you send us a CV?” he said NO! Following up is great, but do it correctly.

The 2nd ad had some requirements. As the main requirement, we clearly mentioned that we are looking for females. The sad part is that most of the applications were from males; obviously they didn’t read the requirements correctly. I’m sure this is happening with 90% of the ads: people are sending CVs like crazy, looking for a job for the sake of having a job, not because they really want it. There might be a perfect explanation from the applicants’ end as well: “we send CVs daily and nobody gets back to us, so we’re tired of reading an ad properly before taking action”?

How has topjobs worked for you?

And by the way, we didn’t hire anyone through topjobs. Yes, we did do an interview.

Scheduled Content

Scheduled content enables you to schedule portions of a post.

After installing and activating the plugin you can use the shortcode [schedule]. You can pass 2 parameters, the date and the time. You have to pass at least one parameter.

Sample -

[schedule on='2011-01-04' at="13:16"]

the content you want to hide is here

[/schedule]


The 2 parameters are “on” and “at”. Simply, “on” accepts a date; until that date arrives the content will be hidden. “at” accepts a time. The time has to be in 24-hour format and the date should be yyyy-mm-dd. If you only pass a time, the content will be shown after that time every day. If you only pass a date, the content will be shown on and after that specific date.

The time is compared against your blog’s time zone settings, so if you set the time zone to your local one it will be easy for you. Go to the “General” settings of your WordPress dashboard; under “Time Zone” you can set the time zone you prefer.

Visit To Download