<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>John Veldboom</title>
    <description>Write an awesome description for your new site here. You can edit this line in _config.yml. It will appear in your document head meta (for Google search results) and in your feed.xml site description.
</description>
    <link>http://0.0.0.0:4000/</link>
    <atom:link href="http://0.0.0.0:4000/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Sun, 05 Feb 2017 00:47:20 +0000</pubDate>
    <lastBuildDate>Sun, 05 Feb 2017 00:47:20 +0000</lastBuildDate>
    <generator>Jekyll v3.4.0</generator>
    
      <item>
        <title>Laravel 4.2 updated_at Table Field Always Updating</title>
        <description>&lt;p&gt;
Here's the fix to the issue if you find the &quot;updated_at&quot; field in your database table always being updated even though the values do not actually change.
&lt;/p&gt;

&lt;p&gt;The &quot;issue&quot; is found within the &lt;kbd&gt;getDirty()&lt;/kbd&gt; function within Eloquent\Model. It's caused by the way it compares the current value with the value being inserted/updated. The comparison is type-sensitive, which can cause problems for certain values. &lt;a href=&quot;https://github.com/laravel/framework/issues/1429&quot;&gt;Here's a GitHub issue&lt;/a&gt; explaining the reasoning behind this from Taylor Otwell.
&lt;/p&gt;

&lt;p&gt;
The solution we decided on was to ensure the value types being inserted or updated match what was already stored, by explicitly casting to integers or doubles. It also requires using 1 or 0 instead of true or false for boolean values.
&lt;/p&gt;

&lt;pre class=&quot;prettyprint&quot;&gt;
$int = 235;
// force it to be an integer
$int = (int) 235;

$bool = true;
// booleans become 1 or 0
$bool = 1;
&lt;/pre&gt;</description>
        <pubDate>Fri, 18 Sep 2015 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/laravel-updated-at-date-always-updating</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/laravel-updated-at-date-always-updating</guid>
        
        
        <category>laravel</category>
        
      </item>
    
      <item>
        <title>AWS CLI Ansible Playbook on Ubuntu 14.04</title>
        <description>&lt;p&gt;
Here's a simple &lt;a href=&quot;http://www.ansible.com/home&quot;&gt;Ansible&lt;/a&gt; playbook that installs the &lt;a href=&quot;http://aws.amazon.com/cli/&quot;&gt;AWS CLI&lt;/a&gt; and all the required dependencies on Ubuntu 14.04 - should work on other versions of Ubuntu too.
&lt;/p&gt;

&lt;h3&gt;Setup&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Clone or download the &lt;a href=&quot;https://github.com/jveldboom/ansible-aws-cli&quot;&gt;jveldboom/ansible-aws-cli&lt;/a&gt; repo to your Ansible machine&lt;/li&gt;
&lt;li&gt;Add your host information within &lt;kbd&gt;inventories/hosts&lt;/kbd&gt;&lt;/li&gt;
&lt;li&gt;Add your AWS credentials within &lt;kbd&gt;vars/vars.yml&lt;/kbd&gt;&lt;/li&gt;
&lt;/ul&gt;
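&lt;p&gt;For reference, &lt;kbd&gt;vars/vars.yml&lt;/kbd&gt; is a plain YAML file of key/value pairs. The exact variable names depend on the repo, so treat the names below as illustrative:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# vars/vars.yml - variable names are examples; check the repo for the real ones
aws_access_key: YOUR_ACCESS_KEY
aws_secret_key: YOUR_SECRET_KEY
aws_region: us-east-1
&lt;/pre&gt;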
&lt;!--[break]--&gt;
&lt;br&gt;
&lt;h3&gt;Run Playbook&lt;/h3&gt;
&lt;p&gt;Below is how I prefer to run Ansible playbooks since it allows you to choose the host and SSH user dynamically.&lt;/p&gt;

&lt;pre class=&quot;prettyprint&quot;&gt;
ansible-playbook -i inventories/hosts playbook.yml --extra-vars=&quot;hosts=vagrant user=vagrant&quot; --ask-pass
&lt;/pre&gt;

&lt;p&gt;The &lt;kbd&gt;--extra-vars&lt;/kbd&gt; parameter sets the &quot;hosts&quot; and &quot;user&quot; variables in the playbook file. And &lt;kbd&gt;--ask-pass&lt;/kbd&gt; will prompt you for the SSH password.&lt;/p&gt;
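&lt;p&gt;For the &lt;kbd&gt;--extra-vars&lt;/kbd&gt; approach to work, the playbook has to reference those variables in its header. Here's a minimal sketch - the repo's actual playbook, role name, and structure may differ:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# playbook.yml - a sketch; role name and layout are assumptions
- hosts: &quot;{{ hosts }}&quot;
  remote_user: &quot;{{ user }}&quot;
  sudo: yes
  vars_files:
    - vars/vars.yml
  roles:
    - aws-cli
&lt;/pre&gt;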

</description>
        <pubDate>Thu, 30 Jul 2015 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/aws-cli-ansible-playbook</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/aws-cli-ansible-playbook</guid>
        
        
        <category>server</category>
        
      </item>
    
      <item>
        <title>GoAccess Automated Reports - Last 30+ Days via Cron</title>
        <description>&lt;p&gt;
First, if you're not familiar with GoAccess, here's a quick description from their website &lt;a href=&quot;http://goaccess.io/&quot;&gt;http://goaccess.io/&lt;/a&gt;:
&lt;/p&gt;

&lt;blockquote&gt;
GoAccess is an open source real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems. It provides fast and valuable HTTP statistics for system administrators that require a visual server report on the fly.
&lt;/blockquote&gt;

&lt;p&gt;Typically you run goaccess on a single log file like this: &lt;kbd&gt;goaccess -f access.log&lt;/kbd&gt;. But in our case we wanted to run it on multiple log files for a month-end report. This posed some issues, since we needed to pass multiple files and report only within a date range.&lt;/p&gt;
&lt;p&gt;Below is the script we came up with. It's not perfect, but it's simple and does a pretty good job of what we needed. It finds all the gzipped &lt;kbd&gt;access.log&lt;/kbd&gt; files modified within the past 35 days and pipes them into goaccess. We chose 35 days since some files may span multiple days, so looking back 35 days should always cover at least the last 30. Finally we save the report as an HTML file named by date, e.g. &lt;kbd&gt;monthly-2015.05.html&lt;/kbd&gt;.&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
#!/bin/bash
DATE=$(date +'%Y.%m')
zcat `find /var/log/apache2/ -name &quot;access.log.*.gz&quot; -mtime -35` | goaccess &gt; /dir/monthly-$DATE.html
&lt;/pre&gt;

&lt;p&gt;Then just save the file and add the cron job to run at midnight on the first of each month.&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# as a shell script
00 00 01 * * /bin/bash /dir/goaccess-monthly.sh

# or as a single cron job line
00 00 01 * *  zcat `find /var/log/apache2/ -name &quot;access.log.*.gz&quot; -mtime -35` | goaccess &gt; /dir/monthly-$(date +'%Y.%m').html
&lt;/pre&gt;
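&lt;p&gt;Before scheduling it, you can dry-run the &lt;kbd&gt;find&lt;/kbd&gt; portion on its own to confirm which log files will be included in the report:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# lists the gzipped logs modified in the last 35 days - nothing is changed
find /var/log/apache2/ -name &quot;access.log.*.gz&quot; -mtime -35
&lt;/pre&gt;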

&lt;p&gt;Also check out the &lt;a href=&quot;http://goaccess.io/man&quot;&gt;man page&lt;/a&gt; for more information on various options and settings.&lt;/p&gt;</description>
        <pubDate>Sat, 30 May 2015 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/goaccess-automated-reports-last-30-days-via-cron</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/goaccess-automated-reports-last-30-days-via-cron</guid>
        
        
        <category>server</category>
        
      </item>
    
      <item>
        <title>EC2 High CPU Wait and the EBS Provisioned IOPS Difference</title>
        <description>On our small Graphite monitoring server (c4.large), we kept seeing very high CPU wait times, hovering around 80% almost continuously. I could scrub back through the graph timeline to see where it started, and no major changes had been made that day (like adding a fleet of new servers or extra data points), so I was pretty confident it was not an application issue.
&lt;br&gt;&lt;br&gt;
&lt;img src=&quot;http://i.imgur.com/t1JquM0.png&quot;&gt;
&lt;!--[break]--&gt;
&lt;br&gt;&lt;br&gt;
Turns out the issue was caused by our server running on a General Purpose SSD volume that was hitting its max input/output operations per second (IOPS) limit, causing requests to back up in a queue.
&lt;br&gt;&lt;br&gt;
So the next step was switching over to a Provisioned IOPS SSD, but first we needed to determine how many IOPS we need.
&lt;br&gt;&lt;br&gt;
Within your AWS console under the Elastic Block Store (EBS), you can monitor each volume's performance. So we're going to use the Read Throughput and Write Throughput to estimate our IOPS needs.
&lt;br&gt;&lt;br&gt;
&lt;img src=&quot;http://i.imgur.com/of7GNMz.png&quot;&gt;
&lt;br&gt;&lt;br&gt;
For example, from the graph above we'll round the Read Throughput up to 1, and the Write Throughput maxes out at 100. So in theory we could set the IOPS to 101 and be &quot;ok&quot;, but we'll likely want to give ourselves a little more head room and round up to at least 200 or even 300 to be safe.
&lt;br&gt;&lt;br&gt;
After changing our volume to use the provisioned SSD, we saw an immediate difference in the CPU wait.
&lt;br&gt;&lt;br&gt;
&lt;img src=&quot;http://i.imgur.com/Uove55E.png&quot;&gt;
&lt;br&gt;&lt;br&gt;
To convert an existing General Purpose SSD volume (or a magnetic one) to Provisioned IOPS, you'll need to complete a couple of steps.
&lt;br&gt;&lt;br&gt;
&lt;ol&gt;
&lt;li&gt;Stop the EC2 instance - optional, but it will prevent any data loss&lt;/li&gt;
&lt;li&gt;Create a snapshot of the volume you're changing&lt;/li&gt;
&lt;li&gt;Create a new Provisioned IOPS volume from the snapshot&lt;/li&gt;
&lt;li&gt;Detach the old non-provisioned volume from the EC2 instance&lt;/li&gt;
&lt;li&gt;Attach the new provisioned volume&lt;/li&gt;
&lt;li&gt;Start the EC2 instance&lt;/li&gt;
&lt;/ol&gt;
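&lt;br&gt;&lt;br&gt;
For reference, the same steps can be scripted with the AWS CLI. All of the IDs, the device name, availability zone, and IOPS value below are placeholders - substitute your own:
&lt;br&gt;&lt;br&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# placeholders throughout: i-xxxx, vol-xxxx, snap-xxxx, AZ, device, IOPS value
aws ec2 stop-instances --instance-ids i-xxxxxxxx
aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description &quot;pre-iops-migration&quot;
aws ec2 create-volume --snapshot-id snap-xxxxxxxx --volume-type io1 --iops 200 --availability-zone us-east-1a
aws ec2 detach-volume --volume-id vol-xxxxxxxx
aws ec2 attach-volume --volume-id vol-yyyyyyyy --instance-id i-xxxxxxxx --device /dev/sdf
aws ec2 start-instances --instance-ids i-xxxxxxxx
&lt;/pre&gt;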



</description>
        <pubDate>Mon, 06 Apr 2015 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/ec2-high-cpu-wait-and-the-ebs-provisioned-iops-difference</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/ec2-high-cpu-wait-and-the-ebs-provisioned-iops-difference</guid>
        
        
        <category>server</category>
        
      </item>
    
      <item>
        <title>Heka JSON Decoder using a SandboxDecoder and Lua</title>
        <description>&lt;p&gt;First let me say, if you're looking for help on Heka, check out their IRC channel - it's full of great folks who are extremely helpful! [IRC: #heka on irc.mozilla.org]&lt;/p&gt;

&lt;p&gt;This Heka JSON decoder converts any simple key/value JSON payload into Heka fields.&lt;/p&gt;
&lt;!--[break]--&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
// sample JSON payload
[{&quot;name&quot;:&quot;John Doe&quot;,&quot;title&quot;:&quot;Sysadmin&quot;,&quot;@timestamp&quot;:&quot;2014-09-02T22:10:28Z&quot;}]
&lt;/pre&gt;

&lt;script src=&quot;https://gist.github.com/jveldboom/763556cdba6a012843d7.js&quot;&gt;&lt;/script&gt;

&lt;p&gt;I hope to write up a sample use case soon - we used this with a PHP application to import data into Elasticsearch. Pretty cool stuff, and super easy to set up with both new and (in our case) legacy systems.&lt;/p&gt;</description>
        <pubDate>Wed, 03 Sep 2014 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/heka-json-encoder</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/heka-json-encoder</guid>
        
        
        <category>server</category>
        
      </item>
    
      <item>
        <title>Drop MySQL Primary Key  with Foreign Key using Laravel 4 (error 1025 &amp; 105)</title>
        <description>&lt;p&gt;This issue isn't really specific to Laravel, but the code below shows how to handle it within Laravel 4.&lt;/p&gt;

&lt;p&gt;The issue comes from trying to delete a primary key that's also a foreign key. MySQL would spit out the following &quot;useful&quot; error:&lt;/p&gt;

&lt;pre class=&quot;prettyprint&quot;&gt;
SQLSTATE[HY000]: General error: 1025 Error on rename of './database/#sql-10e9_9c' to './database/table' (errno: 150) (SQL: alter table `table` drop primary key)
&lt;/pre&gt;

&lt;p&gt;To solve this you just need to &lt;strong&gt;delete the foreign key first and then the primary key.&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=&quot;prettyprint&quot;&gt;
Schema::table('products_fulltext', function(Blueprint $table) {
   $table-&gt;dropForeign('table_field_foreign');
   $table-&gt;dropPrimary('PRIMARY');
});
&lt;/pre&gt;
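&lt;p&gt;If you're not sure of the constraint name, you can look it up directly in MySQL. The table name here is an example:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
SHOW CREATE TABLE products_fulltext;
-- look for a line like:
-- CONSTRAINT `products_fulltext_field_foreign` FOREIGN KEY (`field`) REFERENCES ...
&lt;/pre&gt;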

</description>
        <pubDate>Fri, 21 Feb 2014 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/drop-primary-key-1025-105-errors-within-laravel-4</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/drop-primary-key-1025-105-errors-within-laravel-4</guid>
        
        
        <category>mysql</category>
        
      </item>
    
      <item>
        <title>Laravel 4 Unable to Read Package Configuration File</title>
        <description>&lt;p&gt;
	I was trying to add a configuration file to an existing Laravel 4 package (&lt;a href=&quot;https://github.com/VentureCraft/revisionable&quot;&gt;revisionable&lt;/a&gt;) to help improve the functionality. But no matter what I tried I could not get the package to read from the &lt;kbd&gt;src/config/config.php&lt;/kbd&gt; file. 
&lt;/p&gt;
&lt;!--[break]--&gt;
&lt;p&gt;
Turns out the issue was caused by the package not having a service provider. I'm not 100% sure why this makes a difference, but after adding the service provider file and registering it in the providers array within &lt;kbd&gt;app/config/app.php&lt;/kbd&gt;, the config file could be read.
&lt;/p&gt;
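&lt;p&gt;For completeness, registering the provider is a single line added to the &lt;kbd&gt;providers&lt;/kbd&gt; array in &lt;kbd&gt;app/config/app.php&lt;/kbd&gt;:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
// app/config/app.php
'providers' =&gt; array(
    // ...
    'Venturecraft\Revisionable\RevisionableServiceProvider',
),
&lt;/pre&gt;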
&lt;p&gt;
	Here's the service provider that actually made it work - specifically the &lt;kbd&gt;$this-&gt;package('venturecraft/revisionable');&lt;/kbd&gt; call within the &lt;kbd&gt;register()&lt;/kbd&gt; function.
&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
&amp;lt;?php
namespace Venturecraft\Revisionable;

use Illuminate\Support\ServiceProvider;

class RevisionableServiceProvider extends ServiceProvider {

	/**
	 * Indicates if loading of the provider is deferred.
	 *
	 * @var bool
	 */
	protected $defer = false;

	/**
	 * Register the service provider.
	 *
	 * @return void
	 */
	public function register()
	{
		$this-&gt;package('venturecraft/revisionable');
	}

	/**
	 * Get the services provided by the provider.
	 *
	 * @return array
	 */
	public function provides()
	{
		return array();
	}
}
&lt;/pre&gt;

&lt;p&gt;Then to read the config file, you can just use:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
Config::get('revisionable::file.key');

// or if you only have one config file named config.php
Config::get('revisionable::key');
&lt;/pre&gt;

&lt;p&gt;I'd love to know exactly why it works like this so please feel free to shed your Laravel knowledge in the comments below.&lt;/p&gt;
&lt;p&gt;Thanks to &lt;a href=&quot;https://coderwall.com/p/svocrg&quot;&gt;Zennon Gosalvez's article&lt;/a&gt; which helped lead me in the right direction. &lt;/p&gt;
</description>
        <pubDate>Sat, 11 Jan 2014 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/laravel-4-unable-to-read-package-configuration-file</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/laravel-4-unable-to-read-package-configuration-file</guid>
        
        
        <category>laravel</category>
        
      </item>
    
      <item>
        <title>How to create a temporary storage directory that automatically deletes contents after X days (Mac)</title>
        <description>&lt;p&gt;Here's a quick way to have a directory that allows you to store files and other subdirectories for temporary usage. For example, I use this to save all my downloads and other files that I only need for the next day or two.&lt;/p&gt;

&lt;p&gt;We first need to create a new cron job. If you're not familiar with cron, it's basically a set of instructions to the cron daemon of the general form &quot;run this command at this time on this date&quot; (&lt;a href=&quot;http://unixhelp.ed.ac.uk/CGI/man-cgi?crontab+5&quot;&gt;or for some light reading&lt;/a&gt;). We're going to use cron to run a command that moves all the contents of a directory to the trash.&lt;/p&gt;

&lt;pre class=&quot;prettyprint&quot;&gt;
# from the terminal, run this to create a new crontab or open the existing one
crontab -e

# next press i to enter &quot;INSERT&quot; mode, which will allow you to enter text
00  */2  *  *  *  find /path/to/temp/ -mtime +1 -exec mv {} ~/.Trash \; &gt;/dev/null 2&gt;&amp;1

# then save the crontab by pressing Esc, then :, w, q (write &amp; quit)
&lt;/pre&gt;
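&lt;p&gt;Before trusting the cron job, it's worth previewing what the &lt;kbd&gt;find&lt;/kbd&gt; command would move by swapping the &lt;kbd&gt;-exec&lt;/kbd&gt; clause for a plain &lt;kbd&gt;-print&lt;/kbd&gt;:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# lists what would be moved, without touching anything
find /path/to/temp/ -mtime +1 -print
&lt;/pre&gt;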

&lt;p&gt;Now let's explain what's going on with the cron job.&lt;/p&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;td&gt;00&lt;/td&gt;
&lt;td&gt;minutes (ie: 2:&lt;b&gt;00&lt;/b&gt;,18:&lt;b&gt;00&lt;/b&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;*/2&lt;/td&gt;
&lt;td&gt;every 2 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;*  *  *&lt;/td&gt;
&lt;td&gt;every day of the month, month, and day of the week&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;find /path/to/temp/&lt;/td&gt;
&lt;td&gt;finds all contents of directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-mtime +1&lt;/td&gt;
&lt;td&gt;where the file's modification time is more than 1 day old&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-exec mv {}&lt;/td&gt;
&lt;td&gt;takes each item found by the find command and executes the mv (move) command on it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;~/.Trash \;&lt;/td&gt;
&lt;td&gt;the user's trash can (could be any directory); the \; terminates the -exec command&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&gt;/dev/null 2&gt;&amp;1&lt;/td&gt;
&lt;td&gt;suppresses any output (which would otherwise fill up your user's mailbox)&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;Also, if you're looking for a quick way to add extra storage to your MacBook, check out the &lt;a href=&quot;http://theniftyminidrive.com/&quot;&gt;Nifty Drives&lt;/a&gt;. This is what I use for my temporary storage.&lt;/p&gt;</description>
        <pubDate>Thu, 14 Nov 2013 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/how-to-create-a-temporary-storage-directory-that-automatically-deletes-contents-after-x-days</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/how-to-create-a-temporary-storage-directory-that-automatically-deletes-contents-after-x-days</guid>
        
        
        <category>server</category>
        
      </item>
    
      <item>
        <title>PHP session_start() failed no space left on device (Plesk plesk-php-cleanuper)</title>
        <description>&lt;p&gt;
	I kept receiving intermittent PHP warnings saying something like &lt;kbd&gt;E_WARNING: session_start(): open(/var/lib/php/session/sess_ji9k4chqke3pde98a5n5m1vca5, O_RDWR) failed: No space left on device (28)&lt;/kbd&gt;. Usually this would indicate that the disk where the sessions are being stored is full, so the best place to start is to check whether that disk actually is full.
&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# to check disk space usage
df -h

# which should display something like this
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              4.0G  883M  3.2G  22% /
/dev/mapper/vg00-usr  4.0G  1.8G  2.0G  49% /usr
/dev/mapper/vg00-var  890G  231G  614G  28% /var
none                  5.9G  116K  5.9G   1% /tmp
&lt;/pre&gt;

&lt;p&gt;In my case the disk where the sessions were being stored had plenty of space - &lt;b&gt;600GB free!&lt;/b&gt;&lt;/p&gt;

&lt;!--[break]--&gt;
&lt;p&gt;It turns out the issue was caused by Plesk's hourly PHP cleanup script, &lt;kbd&gt;/etc/cron.hourly/plesk-php-cleanuper&lt;/kbd&gt;, not being able to finish because the sessions directory was too full. My guess is that somewhere in the update from Plesk 10.4 to 11 the script got turned off or removed, and a later minor update turned it back on. During that time the sessions folder grew to around 800MB of tiny session files - the average session file is less than 1 KB, so there must have been over a million of them. Once &lt;kbd&gt;plesk-php-cleanuper&lt;/kbd&gt; was turned back on, it could never finish clearing the old session files.
&lt;/p&gt;
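&lt;p&gt;To get a sense of how bad it is, you can count the session files first - though on a directory this large, even this can take a while:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# count the session files without listing them all
find /var/lib/php/session -type f | wc -l
&lt;/pre&gt;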


	&lt;h3&gt;tl;dr - now to the fix&lt;/h3&gt;
	
&lt;pre class=&quot;prettyprint&quot;&gt;
# 1. Gracefully stop Apache so no incoming requests come in during these changes.
apachectl -k graceful-stop

# 2. Rename the current PHP session directory
mv /var/lib/php/session /var/lib/php/session.old

# 3. Recreate PHP session directory and set permissions
mkdir /var/lib/php/session
chmod 1777 /var/lib/php/session

# 4. Start Apache
apachectl -k start
 
# 5. Delete old session files (optional)*
mkdir /var/lib/php/empty
rsync -a --delete /var/lib/php/empty/ /var/lib/php/session.old/
&lt;/pre&gt;

&lt;p&gt;Step 5 is optional: on the server I was working on, I let the &lt;kbd&gt;rsync&lt;/kbd&gt; command run for around 24 hours and it still did not finish. The rsync will eat up your disk I/O, and since this was a production server I could not let it run slow for that long - plus disk space was not really an issue. But if you need the disk space and can allow your server to run slow for a day or two, the rsync method is the &lt;a href=&quot;http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html&quot;&gt;fastest way to delete millions of files&lt;/a&gt;.&lt;/p&gt;</description>
        <pubDate>Wed, 16 Oct 2013 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/php-sessionstart-failed-no-space-left-on-device-plesk-plesk-php-cleanuper</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/php-sessionstart-failed-no-space-left-on-device-plesk-plesk-php-cleanuper</guid>
        
        
        <category>server</category>
        
        <category>php</category>
        
      </item>
    
      <item>
        <title>Authenticate Only Certain HTTP Verbs (POST, PUT, DELETE) in Laravel 4</title>
        <description>&lt;p&gt;
	In working on the API for an upcoming project, I needed a way to allow all reads (GET) without authentication, but require authentication for any request (POST, PUT, or DELETE) that changes data.
&lt;/p&gt;
&lt;!--[break]--&gt;
&lt;p&gt;
	I wanted to use the &lt;kbd&gt;Route::group()&lt;/kbd&gt; to help keep the &lt;kbd&gt;routes.php&lt;/kbd&gt; file simple by grouping all the API calls together. I also wanted to use the &lt;kbd&gt;Route::resource&lt;/kbd&gt; to keep the controller relatively simple too.
&lt;/p&gt;

&lt;pre class=&quot;prettyprint&quot;&gt;
// routes.php
Route::group(array('prefix' =&gt; 'api/v1'), function()
{
    Route::resource('tweets', 'ApiTweetsController');
});
&lt;/pre&gt;

&lt;p&gt;Next, within the controller I added the following to the &lt;kbd&gt;__construct()&lt;/kbd&gt; method to require &lt;kbd&gt;auth.basic&lt;/kbd&gt; authentication on all functions except 'index' and 'show'. Since we're using a &lt;kbd&gt;resource&lt;/kbd&gt; route, the 'index' and 'show' functions represent the HTTP &lt;kbd&gt;GET&lt;/kbd&gt; requests. So any POST, PUT or DELETE request will require authentication - which is exactly what we're looking for!
&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
class ApiTweetsController extends BaseController {

   public function __construct()
   {
      $this-&gt;beforeFilter('auth.basic', array('except' =&gt; array('index','show')));
   }

   ....
&lt;/pre&gt;
&lt;p&gt;&lt;small&gt;Note: you could also add 'edit' and 'create' to the 'except' array but I did not need those methods for the API I was working on. &lt;kbd&gt;array('except' =&gt; array('index','show','edit','create'))&lt;/kbd&gt;.&lt;/small&gt;&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Update:&lt;/b&gt; If you wanted to use the actual HTTP verbs, you can use the following instead. Thanks &lt;a href=&quot;http://www.reddit.com/r/laravel/comments/1nwkzz/authentic_only_certain_http_verbs_post_put_delete/ccmreyq&quot;&gt;ericbarnes&lt;/a&gt; for pointing this out.&lt;/p&gt;

&lt;pre class=&quot;prettyprint&quot;&gt;
$this-&gt;beforeFilter('auth.basic', array('on' =&gt; array('post','put','patch','delete')));
&lt;/pre&gt;
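&lt;p&gt;You can verify the filter from the command line with curl. The URL and credentials below are placeholders for your own app:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;
# unauthenticated GET - should succeed
curl -i http://localhost/api/v1/tweets

# unauthenticated POST - should return 401 Unauthorized
curl -i -X POST http://localhost/api/v1/tweets

# authenticated POST - should pass the auth.basic filter
curl -i -X POST -u user:password http://localhost/api/v1/tweets
&lt;/pre&gt;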


&lt;p&gt;Please let me know if you have any questions about this or if you've found a different/better way of doing this.&lt;/p&gt;</description>
        <pubDate>Sun, 06 Oct 2013 00:00:00 +0000</pubDate>
        <link>http://0.0.0.0:4000/posts/authentic-only-certain-http-verbs-post-put-delete-in-laravel-4</link>
        <guid isPermaLink="true">http://0.0.0.0:4000/posts/authentic-only-certain-http-verbs-post-put-delete-in-laravel-4</guid>
        
        
        <category>laravel</category>
        
      </item>
    
  </channel>
</rss>
