<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[TommyBlue.it]]></title><description><![CDATA[Il blog di Tommaso Visconti]]></description><link>https://www.tommyblue.it/</link><image><url>https://www.tommyblue.it/favicon.png</url><title>TommyBlue.it</title><link>https://www.tommyblue.it/</link></image><generator>Ghost 2.0</generator><lastBuildDate>Tue, 09 Oct 2018 22:19:26 GMT</lastBuildDate><atom:link href="https://www.tommyblue.it/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[API Authentication with Phoenix and React - part 2]]></title><description><![CDATA[<p><a href="https://www.tommyblue.it/2018/03/29/api-authentication-with-phoenix-and-react-part-1/">In the first part of this post</a> I've shown how to configure the API server to let the user authenticate, return an authentication token, and request it to access protected routes.<br>
Now I'm going to configure a <a href="https://reactjs.org/">React</a> app to consume that API and manage authentication.</p>
<p>The app uses <a href="https://reacttraining.com/react-router/">React Router</a></p>]]></description><link>https://www.tommyblue.it/2018/03/31/api-authentication-with-phoenix-and-react-part-2/</link><guid isPermaLink="false">5b807014e04575000159f4ba</guid><category><![CDATA[elixir]]></category><category><![CDATA[phoenix]]></category><category><![CDATA[react]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Sat, 31 Mar 2018 21:34:35 GMT</pubDate><media:content url="https://www.tommyblue.it/content/images/2018/03/1_-MTuYZ4k46A8JJdWlq_x5A-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.tommyblue.it/content/images/2018/03/1_-MTuYZ4k46A8JJdWlq_x5A-1.png" alt="API Authentication with Phoenix and React - part 2"><p><a href="https://www.tommyblue.it/2018/03/29/api-authentication-with-phoenix-and-react-part-1/">In the first part of this post</a> I've shown how to configure the API server to let the user authenticate, return an authentication token, and require it to access protected routes.<br>
Now I'm going to configure a <a href="https://reactjs.org/">React</a> app to consume that API and manage authentication.</p>
<p>The app uses <a href="https://reacttraining.com/react-router/">React Router</a> to manage routes and <a href="https://redux.js.org/">Redux</a> for the state of the app.</p>
<h2 id="protectprivateroutes">Protect private routes</h2>
<p>I'm going to define a <code>PrivateRoute</code> component as a wrapper around <code>Route</code>. The component will check whether the user is authenticated.</p>
<p>The router configuration will have a standard <code>Route</code> component for the <code>Login</code> page and will use <code>PrivateRoute</code> for the rest of the routes:</p>
<pre><code class="language-javascript">&lt;Router&gt;
    &lt;Switch&gt;
        &lt;Route path='/login' component={Login} /&gt;
        &lt;PrivateRoute path='/private' component={PrivateComponent}/&gt;
    &lt;/Switch&gt;
&lt;/Router&gt;
</code></pre>
<p>The <code>PrivateRoute</code> component will check the <code>isAuthenticated</code> flag in the state and will redirect back to login if <code>false</code> or will render the private component otherwise:</p>
<pre><code class="language-javascript">import React from 'react';
import { Route, Redirect } from 'react-router-dom';
import { connect } from 'react-redux';

const mapStateToProps = state =&gt; {
    return {
        isAuthenticated: state.isAuthenticated,
    };
};

class PrivateRoute extends React.Component {
    render() {
        if (!this.props.isAuthenticated) {
            return (
                &lt;Redirect
                    to={{
                    pathname: &quot;/login&quot;,
                    state: { from: this.props.location }
                    }}
                /&gt;
            );
        }

        return (
            &lt;Route {...this.props} /&gt;
        );
    }
}

export default connect(mapStateToProps)(PrivateRoute);
</code></pre>
<h2 id="signinandreceivethetokenfromtheserver">Sign in and receive the token from the server</h2>
<p>The <code>Login</code> component will simply show a form and will manage the initial authentication, saving the token in a cookie for later use:</p>
<pre><code class="language-javascript">import React from 'react';
import { connect } from 'react-redux';

import {
    signIn,
} from '../actions';

const mapStateToProps = state =&gt; {
    return {
        isAuthenticated: state.isAuthenticated,
    };
};

const mapDispatchToProps = dispatch =&gt; {
    return {
        onSignIn: (email, password) =&gt; dispatch(signIn(email, password)),
    };
};

class Login extends React.Component {
    constructor(props) {
        super(props);
        this.onSignIn = this.onSignIn.bind(this);
        this.state = {email: &quot;&quot;, password: &quot;&quot;};
    }

    render() {
        return (
            &lt;div className=&quot;container&quot;&gt;
                &lt;h1 className=&quot;title&quot;&gt;Login&lt;/h1&gt;
                {this.props.isAuthenticated ? this.alreadyAuthenticated() : this.form()}
            &lt;/div&gt;
        );
    }

    alreadyAuthenticated() {
        return (&quot;You're already authenticated.&quot;)
    }

    form() {
        return (
            &lt;form&gt;
                &lt;div className=&quot;field&quot;&gt;
                    &lt;label className=&quot;label&quot;&gt;Email&lt;/label&gt;
                    &lt;div className=&quot;control&quot;&gt;
                        &lt;input
                            className=&quot;input&quot;
                            type=&quot;email&quot;
                            placeholder=&quot;Your email address&quot;
                            value={this.state.email}
                            autoFocus={true}
                            onChange={(e) =&gt; this.setState({...this.state, email: e.target.value})}
                        /&gt;
                    &lt;/div&gt;
                &lt;/div&gt;

                &lt;div className=&quot;field&quot;&gt;
                    &lt;label className=&quot;label&quot;&gt;Password&lt;/label&gt;
                    &lt;div className=&quot;control&quot;&gt;
                        &lt;input
                            className=&quot;input&quot;
                            type=&quot;password&quot;
                            placeholder=&quot;Your password&quot;
                            value={this.state.password}
                            onChange={(e) =&gt; this.setState({...this.state, password: e.target.value})}
                        /&gt;
                    &lt;/div&gt;
                &lt;/div&gt;

                &lt;button
                    type=&quot;button&quot;
                    className=&quot;button is-primary&quot;
                    onClick={this.onSignIn}
                &gt;Sign in&lt;/button&gt;
            &lt;/form&gt;
        );
    }

    onSignIn() {
        this.props.onSignIn(this.state.email, this.state.password);
    }
}

export default connect(
    mapStateToProps,
    mapDispatchToProps
)(Login);
</code></pre>
<p>The <code>signIn</code> action is where the &quot;magic&quot; happens:</p>
<pre><code class="language-javascript">export const signIn = (email, password) =&gt; ((dispatch) =&gt; {
    return fetch(`http://&lt;server_url&gt;/api/sessions/sign_in`, {
        method: &quot;POST&quot;,
        headers: {
            'Accept': 'application/json',
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({email, password}),
      }).then(
        response =&gt; {
            if (!response.ok) {
                // Manage error
                return dispatch(errorOnFetch(response.statusText));
            }
            return response.json().then(response =&gt; dispatch(signInSuccessfull(response.data)));
        },
        error =&gt; {
            return dispatch(errorOnFetch(error))
        }
    );
});

const signInSuccessfull = (data) =&gt; {
    setAuthToken(data.token);
    return {
        type: AUTHENTICATION_SUCCEDED,
    }
};
</code></pre>
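<p>The <code>errorOnFetch</code> action creator isn’t shown in the post; a minimal version could look like the sketch below (the <code>FETCH_ERROR</code> type name is my assumption, adapt it to your own constants):</p>

```javascript
// The FETCH_ERROR type name is an assumption; adapt it to your constants.
const FETCH_ERROR = 'FETCH_ERROR';

// Wraps a fetch/server error so the reducer (or the UI) can react to it.
const errorOnFetch = (error) => ({
    type: FETCH_ERROR,
    error: String(error),
});

console.log(errorOnFetch('Unauthorized'));
// { type: 'FETCH_ERROR', error: 'Unauthorized' }
```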
<p>Two main things happen in the <code>signInSuccessfull</code> action creator: the token returned by the server is passed to the <code>setAuthToken</code> function and the <code>AUTHENTICATION_SUCCEDED</code> action is dispatched to the Redux reducer.</p>
<p>The reducer sets the <code>isAuthenticated</code> flag to <code>true</code> (do you remember the check in the <code>PrivateRoute</code> component?):</p>
<pre><code class="language-javascript">const mainReducer = (state = initialState, action) =&gt; {
    switch (action.type) {
        case AUTHENTICATION_SUCCEDED:
            return ({...state,
                isAuthenticated: true,
            });
        case AUTHENTICATION_SIGNOUT:
            return ({...state,
                isAuthenticated: false,
            });
        default:
            return state;
    }
}
</code></pre>
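<p>The reducer relies on action-type constants and an <code>initialState</code> that aren’t shown in the post. Here’s a self-contained sketch (the constant values and file layout are assumptions) with a quick round-trip through a compact version of the reducer:</p>

```javascript
// Assumed constants file: type names spelled exactly as in the snippets.
const AUTHENTICATION_SUCCEDED = 'AUTHENTICATION_SUCCEDED';
const AUTHENTICATION_SIGNOUT = 'AUTHENTICATION_SIGNOUT';

// Minimal initial state: the app starts unauthenticated.
const initialState = {
    isAuthenticated: false,
};

// Compact version of mainReducer so the round-trip can run in isolation
// (with an explicit default branch returning the unchanged state).
const mainReducer = (state = initialState, action) => {
    switch (action.type) {
        case AUTHENTICATION_SUCCEDED:
            return { ...state, isAuthenticated: true };
        case AUTHENTICATION_SIGNOUT:
            return { ...state, isAuthenticated: false };
        default:
            return state;
    }
};

const signedIn = mainReducer(initialState, { type: AUTHENTICATION_SUCCEDED });
console.log(signedIn.isAuthenticated); // true
console.log(mainReducer(signedIn, { type: AUTHENTICATION_SIGNOUT }).isAuthenticated); // false
```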
<p>The <code>setAuthToken</code> method saves the token in a cookie, so that it will be then available for the next requests:</p>
<pre><code class="language-javascript">import Cookies from 'universal-cookie';

const setAuthToken = (token) =&gt; {
    const cookies = new Cookies();
    cookies.set('my_auth_token', token, {
        path: '/'
    });
};
</code></pre>
<p>I'm using the <a href="https://github.com/reactivestack/cookies/tree/master/packages/universal-cookie">universal-cookie</a> package here, so we need to install it:</p>
<pre><code>yarn add universal-cookie
</code></pre>
<p>Other helper functions let us get or delete the cookie:</p>
<pre><code class="language-javascript">export const getAuthToken = () =&gt; {
    const cookies = new Cookies();
    return cookies.get('my_auth_token');
};

const removeAuthToken = () =&gt; {
    const cookies = new Cookies();
    cookies.remove('my_auth_token', {
        path: '/',
    });
};
</code></pre>
<h2 id="usethetokenforprivateroutes">Use the token for private routes</h2>
<p>At this point we have a valid token saved in a cookie. We just need to use it when making a request for a private API endpoint.</p>
<p>I'll use a wrapper function around <code>fetch</code> to add the Authorization header to the requests:</p>
<pre><code class="language-javascript">const authFetch = (url, options) =&gt; (
    fetch(url, mergeAuthHeaders(options)).then(
        response =&gt; {
            // Sign out if we receive a 401!
            if (response.status === 401) {
                store.dispatch(signOut());
                throw new Error(&quot;Unauthorized&quot;);
            }
            return response;
        },
        error =&gt; { throw error; }
    )
);

// _.isUndefined and _.has come from lodash
const mergeAuthHeaders = (baseOptions) =&gt; {
    const options = _.isUndefined(baseOptions) ? {} : baseOptions;
    if (!_.has(options, 'headers')) {
        options.headers = {};
    }
    options.headers = {
        ...options.headers,
        'Authorization': `Bearer ${getAuthToken()}`,
    };
    return options;
}
</code></pre>
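<p>To make the header merging concrete, here’s a self-contained sketch of the same logic with a stubbed <code>getAuthToken</code> and without the lodash helpers (both simplifications are mine, for illustration only):</p>

```javascript
// Stub for illustration: the real getAuthToken reads the cookie.
const getAuthToken = () => 'SFMyNTY.fake-token';

// Same merging behaviour as mergeAuthHeaders above, without lodash.
const mergeAuthHeaders = (baseOptions) => {
    const options = baseOptions === undefined ? {} : baseOptions;
    options.headers = {
        ...(options.headers || {}),
        'Authorization': `Bearer ${getAuthToken()}`,
    };
    return options;
};

const opts = mergeAuthHeaders({
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
});

console.log(opts.headers['Authorization']); // Bearer SFMyNTY.fake-token
```

<p>Any headers the caller passes survive the merge; only <code>Authorization</code> is added (or overwritten).</p>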
<p>The <code>authFetch</code> function receives a URL and the options for <code>fetch</code>. It merges the authentication header into the options and makes the request.<br>
If it receives a 401 response, it signs the user out, deleting the cookie and setting the <code>isAuthenticated</code> flag to <code>false</code>:</p>
<pre><code class="language-javascript">export const signOut = () =&gt; {
    removeAuthToken();
    return {
        type: AUTHENTICATION_SIGNOUT,
    }
};
</code></pre>
<p>That's it. You should probably add more logic to handle edge cases and errors, but this is enough to consume the APIs we built.</p>
]]></content:encoded></item><item><title><![CDATA[API Authentication with Phoenix and React - part 1]]></title><description><![CDATA[Scenario: you just wrote a cool web app using React for the frontend part and Phoenix as the API server. 
Then you realize everybody can poke around your stuff and you decide it’s time to restrict access to known users. How do you do it?]]></description><link>https://www.tommyblue.it/2018/03/29/api-authentication-with-phoenix-and-react-part-1/</link><guid isPermaLink="false">5b807014e04575000159f4b9</guid><category><![CDATA[elixir]]></category><category><![CDATA[phoenix]]></category><category><![CDATA[react]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Wed, 28 Mar 2018 22:08:09 GMT</pubDate><media:content url="https://www.tommyblue.it/content/images/2018/03/1_-MTuYZ4k46A8JJdWlq_x5A.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.tommyblue.it/content/images/2018/03/1_-MTuYZ4k46A8JJdWlq_x5A.png" alt="API Authentication with Phoenix and React - part 1"><p><strong>Scenario:</strong> you just wrote a cool web app using React for the frontend part and Phoenix as the API server.<br>
Then you realize everybody can poke around your stuff and you decide <strong>it’s time to restrict access to known users</strong>. How do you do it?</p>
<p>I’ll configure a <a href="http://phoenixframework.org/">Phoenix</a> server to manage <a href="https://tools.ietf.org/html/rfc6750">access tokens</a>, used by a <a href="https://reactjs.org/">React</a> app to make authenticated calls.</p>
<p>This blog post only deals with the backend part and consists of these steps:</p>
<ul>
<li>add users and give them the ability to sign in</li>
<li>manage authentication tokens for the users</li>
<li>define a pipeline to grant access to restricted routes only to authenticated requests</li>
</ul>
<p><strong>I’m not going to cover the SSL configuration here, but it’s fundamental to only serve the endpoints over HTTPS. You can check out <a href="https://spin.atomicobject.com/2018/03/07/force-ssl-phoenix-framework/">this article</a> which explains how to force SSL in Phoenix.</strong></p>
<h2 id="createtheusers">Create the Users</h2>
<p>Let’s create the schemas for the User:</p>
<pre><code>$ mix phx.gen.schema User users email:string:unique password_hash:string
</code></pre>
<p>The <code>mix</code> command doesn’t accept options to disallow null values, so the migration file must be edited by hand.<br>
This is the final version of the migration file (only the relevant parts):</p>
<pre><code class="language-elixir">create table(:users) do
  add :email, :string, null: false
  add :password_hash, :string, null: false
  timestamps()
end
create unique_index(:users, [:email])
</code></pre>
<p>We're going to save hashed passwords, not clear-text ones, so our schema will have a virtual <code>password</code> field which, behind the scenes, will be hashed and saved into <code>password_hash</code>.</p>
<p>To hash passwords we’re going to use the <a href="https://github.com/riverrun/comeonin/">Comeonin</a> library, which must be added to the dependencies together with Bcrypt (don’t forget to run <code>mix deps.get</code> after making the changes):</p>
<pre><code class="language-elixir"># mix.exs
defp deps do
  [...]
  {:comeonin, &quot;~&gt; 4.0&quot;},
  {:bcrypt_elixir, &quot;~&gt; 1.0&quot;}
end
</code></pre>
<p>Let’s now see the <code>User</code> module:</p>
<pre><code class="language-elixir">defmodule MyApp.User do
  use Ecto.Schema
  import Ecto.Changeset
  alias MyApp.User
  
  schema &quot;users&quot; do
    field :email, :string
    field :password_hash, :string
    field :password, :string, virtual: true
    timestamps()
  end
  
  def changeset(%User{} = user, attrs) do
    user
    |&gt; cast(attrs, [:email, :password])
    |&gt; validate_required([:email, :password])
    |&gt; unique_constraint(:email, downcase: true)
    |&gt; put_password_hash()
  end
  
  defp put_password_hash(changeset) do
    case changeset do
      %Ecto.Changeset{valid?: true, changes: %{password: pass}} -&gt;
        put_change(
            changeset, 
            :password_hash, 
            Comeonin.Bcrypt.hashpwsalt(pass)
        )
      _ -&gt;
        changeset
    end
  end
end
</code></pre>
<p>At this point we can create new users:</p>
<pre><code class="language-elixir">$ iex -S mix
iex(1)&gt; MyApp.Repo.insert!(MyApp.User.changeset(
  %MyApp.User{}, %{
    email: &quot;my_email@provider.com&quot;,
    password: &quot;s3cr3t&quot;
  }
))
[..]
%MyApp.User{
  __meta__: #Ecto.Schema.Metadata&lt;:loaded, &quot;users&quot;&gt;,
  email: &quot;my_email@provider.com&quot;,
  id: 1,
  inserted_at: ~N[2018-03-24 22:47:37.981969],
  password: &quot;s3cr3t&quot;,
  password_hash: &quot;&lt;cut&gt;&quot;,
  updated_at: ~N[2018-03-24 22:47:37.984213]
}
</code></pre>
<h2 id="usertokens">User tokens</h2>
<p>Ok, now that we have users, we must generate tokens for them, so that they can access restricted routes.</p>
<p>The first step is to create the schema:</p>
<pre><code>$ mix phx.gen.schema AuthToken auth_tokens user_id:references:users token:text:unique revoked:boolean revoked_at:utc_datetime
</code></pre>
<p>As before, we must edit the migration to add missing <code>null: false</code>:</p>
<pre><code class="language-elixir">create table(:auth_tokens) do
  add :user_id, references(:users, on_delete: :nothing), null: false
  add :token, :text, null: false
  add :revoked, :boolean, default: false, null: false
  add :revoked_at, :utc_datetime
  timestamps()
end
create unique_index(:auth_tokens, [:token])
create index(:auth_tokens, [:user_id])
</code></pre>
<p>The schema is the following:</p>
<pre><code class="language-elixir">defmodule MyApp.AuthToken do
  use Ecto.Schema
  import Ecto.Changeset
  alias MyApp.AuthToken
  alias MyApp.User
  
  schema &quot;auth_tokens&quot; do
    belongs_to :user, User
    field :revoked, :boolean, default: false
    field :revoked_at, :utc_datetime
    field :token, :string
    timestamps()
  end
  
  def changeset(%AuthToken{} = auth_token, attrs) do
    auth_token
    |&gt; cast(attrs, [:token])
    |&gt; validate_required([:token])
    |&gt; unique_constraint(:token)
  end
end
</code></pre>
<p>We’ve added the <code>belongs_to</code> relationship here, so we must also edit the <code>User</code> schema, adding:</p>
<pre><code class="language-elixir">schema &quot;users&quot; do
  has_many :auth_tokens, MyApp.AuthToken
  [...]
end
</code></pre>
<p>We’re going to need a bunch of functions to deal with authorization headers and tokens, so a service could be useful.<br>
Let’s create an <code>Authenticator</code> service with the first functions we’ll use to generate and verify tokens with <a href="https://hexdocs.pm/phoenix/Phoenix.Token.html">Phoenix.Token</a>:</p>
<pre><code class="language-elixir">defmodule MyApp.Services.Authenticator do
  # These values should be moved to a configuration file
  @seed &quot;user token&quot;
  # good way to generate: 
  # :crypto.strong_rand_bytes(30) 
  # |&gt; Base.url_encode64 
  # |&gt; binary_part(0, 30)
  @secret &quot;CHANGE_ME_k7kTxvFAgeBvAVA0OR1vkPbTi8mZ5m&quot;
  
  def generate_token(id) do
    Phoenix.Token.sign(@secret, @seed, id, max_age: 86400)
  end
  
  def verify_token(token) do
    case Phoenix.Token.verify(@secret, @seed, token, max_age: 86400) do
      {:ok, _id} -&gt; {:ok, token}
      error -&gt; error
    end
  end
end
</code></pre>
<h2 id="signinandouttheusers">Sign in and out the users</h2>
<p>We now need to let the users sign in (create a token for the user) and sign out (delete the token).</p>
<p>We’ll manage the logic inside the User module:</p>
<pre><code class="language-elixir">defmodule MyApp.User do
  [...]
  alias MyApp.Services.Authenticator
  
  def sign_in(email, password) do
    case Comeonin.Bcrypt.check_pass(Repo.get_by(User, email: email), password) do
      {:ok, user} -&gt;
        token = Authenticator.generate_token(user.id)
        Repo.insert(Ecto.build_assoc(user, :auth_tokens, %{token: token}))
      err -&gt; err
    end
  end
  
  def sign_out(conn) do
    case Authenticator.get_auth_token(conn) do
      {:ok, token} -&gt;
        case Repo.get_by(AuthToken, %{token: token}) do
          nil -&gt; {:error, :not_found}
          auth_token -&gt; Repo.delete(auth_token)
        end
      error -&gt; error
    end
  end
end
</code></pre>
<p>The first line of the <code>sign_in</code> function looks for the user in the <code>Repo</code>, then passes it to <code>Bcrypt.check_pass</code> together with the provided password to verify it.</p>
<p>If the user can’t be found, <code>check_pass</code> receives <code>nil</code> and returns <code>{:error, &quot;invalid user-identifier&quot;}</code>, while if the password verification fails it returns <code>{:error, &quot;invalid password&quot;}</code>.<br>
So, in both cases, we return an <code>{:error, reason}</code> tuple (we’ll use this later in the controller).</p>
<p>If the user is found and the password is valid, we create a token for the user and return it.</p>
<p>The <code>sign_out</code> function looks for the token in the header and deletes it if found.</p>
<p>The function that extracts the token is based on a simple regexp:</p>
<pre><code class="language-elixir">defmodule MyApp.Services.Authenticator do
  [...]
  def get_auth_token(conn) do
    case extract_token(conn) do
      {:ok, token} -&gt; verify_token(token)
      error -&gt; error
    end
  end
  
  defp extract_token(conn) do
    case Plug.Conn.get_req_header(conn, &quot;authorization&quot;) do
      [auth_header] -&gt; get_token_from_header(auth_header)
      _ -&gt; {:error, :missing_auth_header}
    end
  end
  
  defp get_token_from_header(auth_header) do
    {:ok, reg} = Regex.compile(&quot;Bearer:?\\s+(.*)$&quot;, &quot;i&quot;)
    case Regex.run(reg, auth_header) do
      [_, match] -&gt; {:ok, String.trim(match)}
      _ -&gt; {:error, &quot;token not found&quot;}
    end
  end
end
</code></pre>
<p>At this point all the underlying pieces are in place, but we still need to create the endpoints that let the user perform these actions.</p>
<p>First, add the required routes:</p>
<pre><code class="language-elixir">scope &quot;/sessions&quot; do
  post &quot;/sign_in&quot;, SessionsController, :create
  delete &quot;/sign_out&quot;, SessionsController, :delete
end
</code></pre>
<p>We can check the result with <code>mix phx.routes</code>:</p>
<pre><code>sessions_path  POST    /sessions/sign_in    MyApp.SessionsController :create
sessions_path  DELETE  /sessions/sign_out    MyApp.SessionsController :delete
</code></pre>
<p>We must then create the <code>SessionsController</code>:</p>
<pre><code class="language-elixir">defmodule MyAppWeb.SessionsController do
  use MyAppWeb, :controller
  alias MyApp.User
  
  def create(conn, %{&quot;email&quot; =&gt; email, &quot;password&quot; =&gt; password}) do
    case User.sign_in(email, password) do
      {:ok, auth_token} -&gt;
        conn
        |&gt; put_status(:ok)
        |&gt; render(&quot;show.json&quot;, auth_token)
      {:error, reason} -&gt;
        conn
        |&gt; send_resp(401, reason)
    end
  end
  
  def delete(conn, _) do
    case User.sign_out(conn) do
      {:error, reason} -&gt; conn |&gt; send_resp(400, reason)
      {:ok, _} -&gt; conn |&gt; send_resp(204, &quot;&quot;)
    end
  end
end
</code></pre>
<p>and its view:</p>
<pre><code class="language-elixir">defmodule MyAppWeb.SessionsView do
  use MyAppWeb, :view
  def render(&quot;show.json&quot;, auth_token) do
    %{data: %{token: auth_token.token}}
  end
end
</code></pre>
<p>Done, it’s now time to make some tests calling these endpoints. I personally use <a href="https://install.advancedrestclient.com/#/install">Advanced Rest Client</a> (aka ARC), a Chrome extension to make HTTP calls.</p>
<p>To test sign in, we must make a <code>POST</code> call to <code>http://localhost:4000/sessions/sign_in</code> with the following JSON body:</p>
<pre><code>{
  &quot;email&quot;:&quot;my_email@provider.com&quot;,
  &quot;password&quot;: &quot;s3cr3t&quot;
}
</code></pre>
<p>If we didn’t make any mistake, we’ll get back the token in a JSON structure, as defined in <code>show.json</code>:</p>
<pre><code>{
  &quot;data&quot;: {
    &quot;token&quot;: &quot;SFMyNTY.g3QAAAAC[...cut...]&quot;
  }
}
</code></pre>
<p>Now make a <code>DELETE</code> call against <code>http://localhost:4000/sessions/sign_out</code>, adding an authorization header in the form: <code>Authorization: Bearer SFMyNTY.g3QAAAAC[…cut…]</code>. You should receive a 204 response.</p>
<p>Take a look at the database for further feedback. A new token for the user must be created at sign in and it must be deleted at sign out.</p>
<h2 id="requirethetokentoaccessrestrictedroutes">Require the token to access restricted routes</h2>
<p>We’re almost there: users are able to sign in and receive an authentication token, we should now restrict the access to private routes requiring an authorization token.</p>
<p>The key is a basic component of Phoenix: the <a href="https://hexdocs.pm/phoenix/plug.html">Plug</a>.</p>
<p>To apply one or more plugs to routes, we need to create a pipeline and pipe the routes through it:</p>
<pre><code class="language-elixir">defmodule MyAppWeb.Router do
  pipeline :authenticate do
    plug MyAppWeb.Plugs.Authenticate
  end
  scope &quot;/restricted&quot;, Restricted do
    pipe_through :authenticate
    resources &quot;/private&quot;
    # more routes
  end
  [...]
end
</code></pre>
<p>The Authenticate plug will look for the authorization token in the request headers and will validate it. Only requests with valid tokens will go through. Invalid requests will get a 401 response.</p>
<p>This is the plug file:</p>
<pre><code class="language-elixir">defmodule MyAppWeb.Plugs.Authenticate do
  import Plug.Conn
  def init(default), do: default
  
  def call(conn, _default) do
    case MyApp.Services.Authenticator.get_auth_token(conn) do
      {:ok, token} -&gt;
        case MyApp.Repo.get_by(MyApp.AuthToken, %{token: token, revoked: false})
             |&gt; MyApp.Repo.preload(:user) do
          nil -&gt; unauthorized(conn)
          auth_token -&gt; authorized(conn, auth_token.user)
        end
      _ -&gt; unauthorized(conn)
    end
  end
  
  defp authorized(conn, user) do
    # Add values to `conn` so downstream plugs and controllers can read them
    conn
    |&gt; assign(:signed_in, true)
    |&gt; assign(:signed_user, user)
  end
  
  defp unauthorized(conn) do
    conn |&gt; send_resp(401, &quot;Unauthorized&quot;) |&gt; halt()
  end
end
</code></pre>
<h2 id="revokeacompromisedtoken">Revoke a compromised token</h2>
<p>If a token is somehow “compromised”, the user can revoke it.</p>
<p>We need a new restricted route which updates the compromised token, setting <code>revoked=true</code> and <code>revoked_at=&lt;current timestamp&gt;</code>.</p>
<p>I’m going to leave this as an exercise for the readers.</p>
<h2 id="consumetheapiswithreact">Consume the APIs with React</h2>
<p>In the <a href="https://www.tommyblue.it/2018/03/31/api-authentication-with-phoenix-and-react-part-2/">next part of this guide</a>, I’ll show how to use what we’ve built here in a frontend app written in React. <a href="https://www.tommyblue.it/2018/03/31/api-authentication-with-phoenix-and-react-part-2/">Read it here</a>.</p>
<h2 id="notejwtandwhyididntuseit">Note: JWT and why I didn’t use it</h2>
<p>In the first iteration of the code I decided to use <a href="https://github.com/ueberauth/guardian">Guardian</a> and <a href="https://jwt.io/">JWT</a> (JSON Web Tokens), but then I realized I couldn’t revoke tokens without storing them in the DB and making a query at each API call (and avoiding that query was the main reason that led me to use JWT in the first place). So I decided it was an over-engineered solution and moved to the built-in <a href="https://hexdocs.pm/phoenix/Phoenix.Token.html">Phoenix.Token</a>.</p>
<p>If you’re interested in the JWT revoke topic, check the <a href="https://github.com/ueberauth/guardian_db/blob/master/README.md#disadvantages">GuardianDB README</a> which has a good explanation:</p>
<blockquote>
<p>In other words, once you have reached a point where you think you need Guardian.DB, it may be time to take a step back and reconsider your whole approach to authentication!</p>
</blockquote>
<h2 id="references">References</h2>
<ul>
<li><a href="http://learningwithjb.com/posts/authenticating-users-using-a-token-with-phoenix">http://learningwithjb.com/posts/authenticating-users-using-a-token-with-phoenix</a></li>
<li><a href="https://dennisreimann.de/articles/phoenix-passwordless-authentication-magic-link.html">https://dennisreimann.de/articles/phoenix-passwordless-authentication-magic-link.html</a></li>
<li><a href="http://whatdidilearn.info/2018/02/18/authentication-in-phoenix.html">http://whatdidilearn.info/2018/02/18/authentication-in-phoenix.html</a></li>
<li><a href="https://itnext.io/authenticating-absinthe-graphql-apis-in-phoenix-with-guardian-d647ea45a69a">https://itnext.io/authenticating-absinthe-graphql-apis-in-phoenix-with-guardian-d647ea45a69a</a></li>
<li><a href="https://medium.freecodecamp.org/authentication-using-elixir-phoenix-f9c162b2c398">https://medium.freecodecamp.org/authentication-using-elixir-phoenix-f9c162b2c398</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[React+Sass+Typescript with Phoenix framework using Webpack]]></title><description><![CDATA[<p>If you’re playing with <a href="https://elixir-lang.org/">Elixir</a> and <a href="http://phoenixframework.org/">Phoenix</a> you’ll probably already know that Phoenix uses Brunch.io to build the assets pipeline.<br>
I initially started building my app with React / Redux + SASS and I was quite happy, but when I decided to add Typescript to the recipe, I found</p>]]></description><link>https://www.tommyblue.it/2017/09/05/react-sass-typescript-with-phoenix-framework-using-webpack/</link><guid isPermaLink="false">5b807014e04575000159f4b8</guid><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Tue, 05 Sep 2017 21:37:00 GMT</pubDate><media:content url="https://www.tommyblue.it/content/images/2018/03/1_ffJ5VWBKVuOnBBxhrKfXKQ-1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://www.tommyblue.it/content/images/2018/03/1_ffJ5VWBKVuOnBBxhrKfXKQ-1.jpeg" alt="React+Sass+Typescript with Phoenix framework using Webpack"><p>If you’re playing with <a href="https://elixir-lang.org/">Elixir</a> and <a href="http://phoenixframework.org/">Phoenix</a> you’ll probably already know that Phoenix uses Brunch.io to build the assets pipeline.<br>
I initially started building my app with React / Redux + SASS and I was quite happy, but when I decided to add Typescript to the recipe, I found Brunch.io wasn’t very helpful!</p>
<p>I’ve already used these tools using <a href="https://webpack.github.io/">Webpack</a> as building tool, so I decided to switch to it.<br>
I had a working Webpack configuration I was using in other projects, so I only had to find out how to apply it to Phoenix.</p>
<p>When I initially generated the Phoenix app, I didn’t use the <code>--no-brunch</code> command line flag to generate it without Brunch, and the app itself was already working. So I wanted to replace the existing Brunch config with the new one without nuking any existing feature.</p>
<p>Below you can find the (few) steps required for this upgrade.</p>
<h2 id="removebrunchandinstallwebpack">Remove Brunch and install Webpack</h2>
<p>To remove Brunch, just remove the <code>assets/brunch-config.js</code> file and uninstall all brunch-related packages. In my case:</p>
<pre><code>yarn remove brunch babel-brunch clean-css-brunch sass-brunch uglify-js-brunch
</code></pre>
<p>In the <code>assets/package.json</code> file you can also see that Brunch is mentioned in the scripts commands:</p>
<pre><code>&quot;scripts&quot;: {
  &quot;deploy&quot;: &quot;brunch build --production&quot;,
  &quot;watch&quot;: &quot;brunch watch --stdin&quot;
},
</code></pre>
<p>Replace them with the Webpack commands (to run the <code>webpack</code> command you’d probably need to globally install it with <code>yarn global add webpack</code>):</p>
<pre><code>&quot;scripts&quot;: {
  &quot;deploy&quot;: &quot;webpack -p&quot;,
  &quot;compile&quot;: &quot;webpack --progress --color&quot;,
  &quot;watch&quot;: &quot;webpack --watch-stdin --progress --color&quot;
},
</code></pre>
<p>You’re now ready to install the webpack ecosystem we’re going to use:</p>
<pre><code>yarn add -D webpack babel-core babel-loader babel-preset-es2015 copy-webpack-plugin css-loader extract-text-webpack-plugin file-loader node-sass sass-loader style-loader webpack-notifier
</code></pre>
<h2 id="addtypescripttotheproject">Add Typescript to the project</h2>
<p>To add Typescript support we need a few more packages:</p>
<pre><code>yarn add -D typescript ts-loader tslint tslint-react @types/phoenix @types/react @types/react-dom @types/react-redux
</code></pre>
<p>The Typescript configuration file is <code>assets/tsconfig.json</code>:</p>
<pre><code>{
  &quot;compilerOptions&quot;: {
    &quot;target&quot;: &quot;es2015&quot;,
    &quot;module&quot;: &quot;es2015&quot;,
    &quot;jsx&quot;: &quot;preserve&quot;,
    &quot;moduleResolution&quot;: &quot;node&quot;,
    &quot;baseUrl&quot;: &quot;js&quot;,
    &quot;outDir&quot;: &quot;ts-build&quot;,
    &quot;allowJs&quot;: true
  },
  &quot;exclude&quot;: [
    &quot;node_modules&quot;,
    &quot;priv&quot;,
    &quot;ts-build&quot;
  ]
}
</code></pre>
<p>As you probably noticed in the previous command, I also installed <a href="https://palantir.github.io/tslint/">TSLint</a> support for linting. Add the related extension to your editor (like vscode-tslint for VSCode) to get (almost) real-time linting warnings.<br>
You also need a configuration file; here’s my <code>tslint.json</code> file:</p>
<pre><code>{
  &quot;extends&quot;: [&quot;tslint:recommended&quot;, &quot;tslint-react&quot;],
  &quot;rules&quot;: {
    &quot;no-console&quot;: [false]
  }
}
</code></pre>
<h2 id="webpackconfiguration">Webpack configuration</h2>
<p>Before configuring Webpack, let’s configure Babel, using the <code>assets/.babelrc</code> file:</p>
<pre><code>{
  &quot;presets&quot;: [&quot;es2015&quot;, &quot;react&quot;]
}
</code></pre>
<p>And now, finally, the big part, the Webpack configuration file, <code>assets/webpack.config.js</code>:</p>
<pre><code>const env = process.env.NODE_ENV
const path = require(&quot;path&quot;)
const ExtractTextPlugin = require(&quot;extract-text-webpack-plugin&quot;);
const CopyWebpackPlugin = require(&quot;copy-webpack-plugin&quot;)
const config = {
  entry: [&quot;./css/app.scss&quot;, &quot;./js/app.js&quot;],
  output: {
    path: path.resolve(__dirname, &quot;../priv/static&quot;),
    filename: &quot;js/app.js&quot;
  },
  resolve: {
    extensions: [&quot;.ts&quot;, &quot;.tsx&quot;, &quot;.js&quot;, &quot;.jsx&quot;],
    modules: [&quot;deps&quot;, &quot;node_modules&quot;]
  },
  module: {
    rules: [{
      test: /\.tsx?$/,
      use: [&quot;babel-loader&quot;, &quot;ts-loader&quot;]
    }, {
      test: /\.jsx?$/,
      use: &quot;babel-loader&quot;
    }, {
      test: /\.scss$/,
      use: ExtractTextPlugin.extract({
        use: [{
          loader: &quot;css-loader&quot;,
          options: {
            minimize: true,
            sourceMap: env === 'production',
          },
        }, {
          loader: &quot;sass-loader&quot;,
          options: {
            includePaths: [path.resolve('node_modules')],
          }
        }],
        fallback: &quot;style-loader&quot;
      })
    }, {
      test: /\.(ttf|otf|eot|svg|woff(2)?)(\?[a-z0-9]+)?$/,
      // put fonts in assets/static/fonts/
      loader: 'file-loader?name=/fonts/[name].[ext]'
    }]
  },
  plugins: [
    new ExtractTextPlugin({
      filename: &quot;css/[name].css&quot;
    }),
    new CopyWebpackPlugin([{ from: &quot;./static&quot; }])
  ]
};
module.exports = config;
</code></pre>
<h2 id="finalpolishingandtests">Final polishing and tests</h2>
<p>The final step is to update the main template file <code>app.html.eex</code> to use the generated files (that you can find in the <code>priv/static</code> folder once compiled):</p>
<pre><code class="language-html">[..]
&lt;link rel=&quot;stylesheet&quot; href=&quot;&lt;%= static_path(@conn, &quot;/css/main.css&quot;) %&gt;&quot;&gt;
[..]
&lt;script src=&quot;&lt;%= static_path(@conn, &quot;/js/app.js&quot;) %&gt;&quot;&gt;&lt;/script&gt;
</code></pre>
<p>The whole pipeline is now ready, except that we still have to tell Phoenix to run Webpack when the server is launched.<br>
You can already test the pipeline by running <code>yarn run compile</code> from the <code>assets/</code> folder.</p>
<p>To configure Phoenix, update the <code>config/dev.exs</code> file, replacing the Brunch command with the Webpack one:</p>
<pre><code>watchers: [
  node: [
    &quot;node_modules/webpack/bin/webpack.js&quot;, &quot;--watch-stdin&quot;, &quot;--progress&quot;, &quot;--color&quot;,
    cd: Path.expand(&quot;../assets&quot;, __DIR__)
  ]
]
</code></pre>
<p>That’s it.<br>
Run <code>mix phx.server</code> and you’ll have both the Phoenix server running and the assets pipeline compiled and watching for file updates.</p>
]]></content:encoded></item><item><title><![CDATA[How to access docker containers with nsenter]]></title><description><![CDATA[<p>Since the 0.9 version, <a href="http://docker.io">Docker</a> is shipped with the <a href="http://blog.docker.com/2014/03/docker-0-9-introducing-execution-drivers-and-libcontainer/">libcontainer execution driver</a> and the containers can be accessed with the <a href="http://man7.org/linux/man-pages/man1/nsenter.1.html">nsenter</a> util (e.g. you don't need to install SSH in a container anymore!).</p>
<p>Nsenter is included in the <strong>util-linux</strong> package, from version 2.23.</p>
<p>If your distribution has</p>]]></description><link>https://www.tommyblue.it/2014/07/25/how-to-access-docker-containers-with-nsenter/</link><guid isPermaLink="false">5b807014e04575000159f4b7</guid><category><![CDATA[container]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Fri, 25 Jul 2014 13:09:57 GMT</pubDate><content:encoded><![CDATA[<p>Since the 0.9 version, <a href="http://docker.io">Docker</a> is shipped with the <a href="http://blog.docker.com/2014/03/docker-0-9-introducing-execution-drivers-and-libcontainer/">libcontainer execution driver</a> and the containers can be accessed with the <a href="http://man7.org/linux/man-pages/man1/nsenter.1.html">nsenter</a> util (e.g. you don't need to install SSH in a container anymore!).</p>
<p>Nsenter is included in the <strong>util-linux</strong> package, from version 2.23.</p>
<p>If your distribution has an older version of util-linux, you can compile it:</p>
<pre><code class="language-prettyprint">~$ curl https://www.kernel.org/pub/linux/utils/util-linux/v2.24/util-linux-2.24.tar.gz | tar -zxf-
~$ cd util-linux-2.24
~$ ./configure --without-ncurses
~$ make nsenter
~$ sudo cp nsenter /usr/local/bin
</code></pre>
<p>To enter a container you need to know its PID, which you can get with <code>docker inspect</code> given the container ID:</p>
<pre><code class="language-prettyprint">~$ PID=$(docker inspect --format '{{.State.Pid}}' CONTAINER_ID)
</code></pre>
<p>Using the PID you can then enter the container:</p>
<pre><code class="language-prettyprint">~$ sudo nsenter --target $PID --mount --uts --ipc --net --pid /bin/bash
</code></pre>
<p>If you don't specify which program to launch inside the container, <code>${SHELL}</code> is run. I prefer to specify it (<code>/bin/bash</code>) because I use ZSH but I don't usually want to install it inside the containers.</p>
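The two steps above can be wrapped in a small shell helper. The `docker-enter` name is my own invention, not a standard tool, and it assumes `docker inspect` and `nsenter` are available as shown earlier:

```shell
# docker-enter: look up a container's init PID, then enter its namespaces.
# (A sketch of the two-step recipe above; the name is mine, not a standard tool.)
docker-enter() {
  if [ $# -lt 1 ]; then
    echo "usage: docker-enter CONTAINER_ID [command...]" >&2
    return 1
  fi
  cid="$1"; shift
  pid=$(docker inspect --format '{{.State.Pid}}' "$cid") || return 1
  # Fall back to /bin/bash when no command is given
  sudo nsenter --target "$pid" --mount --uts --ipc --net --pid "${@:-/bin/bash}"
}
```

Then `docker-enter CONTAINER_ID` drops you into a bash shell inside the container.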
]]></content:encoded></item><item><title><![CDATA[Basic setup for a Node.js-based TDD Code Kata]]></title><description><![CDATA[<p>In the <a href="http://www.tommyblue.it/2014/06/27/basic-setup-for-a-ruby-based-tdd-code-kata/">last post</a> I suggested a minimal setup to begin with ruby-based TDD. In this post I want to show a possible minimal setup for node.js-based TDD (node.js and npm must be installed). The kata will be again <a href="http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">The Game of Life</a>.</p>
<p><em>I'm not an expert on</em></p>]]></description><link>https://www.tommyblue.it/2014/06/27/basic-setup-for-a-node-js-based-tdd-code-kata/</link><guid isPermaLink="false">5b807014e04575000159f4b6</guid><category><![CDATA[node]]></category><category><![CDATA[tdd]]></category><category><![CDATA[testing]]></category><category><![CDATA[kata]]></category><category><![CDATA[mocha]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Fri, 27 Jun 2014 15:36:57 GMT</pubDate><content:encoded><![CDATA[<p>In the <a href="http://www.tommyblue.it/2014/06/27/basic-setup-for-a-ruby-based-tdd-code-kata/">last post</a> I suggested a minimal setup to begin with ruby-based TDD. In this post I want to show a possible minimal setup for node.js-based TDD (node.js and npm must be installed). The kata will be again <a href="http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">The Game of Life</a>.</p>
<p><em>I'm not an expert on <a href="http://nodejs.org/">Node.js</a>, so I hope what I'm writing is correct :)</em></p>
<p>I'll use the test framework <a href="http://visionmedia.github.io/mocha/">Mocha</a> and <a href="https://github.com/LearnBoost/expect.js/">expect.js</a>, a <em>&quot;Minimalistic BDD-style assertions for Node.JS and the browser&quot;</em>.</p>
<p>Let's begin with the <code>package.json</code> file which tells <a href="https://www.npmjs.org/">npm</a> what to install:</p>
<pre><code>{
  &quot;name&quot;: &quot;game-of-life&quot;,
  &quot;version&quot;: &quot;0.0.1&quot;,
  &quot;dependencies&quot;: {
    &quot;mocha&quot;: &quot;*&quot;,
    &quot;expect.js&quot;: &quot;*&quot;
  }
}
</code></pre>
<p>With this file in the project folder you can run <code>npm install</code> to install the libraries. Then create the <code>test/</code> folder with the <code>mocha.opts</code> file, where you can specify various options, like the <a href="http://visionmedia.github.io/mocha/#reporters">reporter</a> to use:</p>
<pre><code>--reporter spec
</code></pre>
<p>With this file in place, the <code>mocha</code> command will launch the tests.<br>
Now write a minimal js file and its corresponding test file:</p>
<p><code>test/game_of_life_test.js</code>:</p>
<pre><code class="language-prettyprint">var expect = require('expect.js'),
  GameOfLife = require('../lib/game_of_life');

describe('Universe', function(){
  it('should have an initial size', function() {
    var u = new GameOfLife(6)
    expect(u.getSize()).to.equal(36);
  });
})
</code></pre>
<p><code>lib/game_of_life.js</code>:</p>
<pre><code class="language-prettyprint">function GameOfLife(side){
  this.size = side * side;
}
GameOfLife.prototype.getSize = function() { return this.size; }

module.exports = GameOfLife;
</code></pre>
<p>Now launch <code>mocha</code> and the first test should pass:</p>
<pre><code>~$ mocha

  Universe
    ✓ should have an initial size 

  1 passing
</code></pre>
<p>To learn more about TDD and Node.js, start by reading <a href="http://webapplog.com/test-driven-development-in-node-js-with-mocha/">this post</a> by <a href="http://azat.co/">Azat Mardan</a>.</p>
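As a hint for the next red-green cycle (everything beyond `getSize()` below is my own sketch, not from the post), the universe could start with all cells dead and let you bring individual cells to life:

```javascript
// Hypothetical next iteration of lib/game_of_life.js;
// getAliveCount() and bringToLife() are my own additions.
function GameOfLife(side) {
  this.size = side * side;
  this.cells = new Array(this.size).fill(false); // every cell starts dead
}

GameOfLife.prototype.getSize = function () {
  return this.size;
};

GameOfLife.prototype.getAliveCount = function () {
  return this.cells.filter(function (alive) { return alive; }).length;
};

GameOfLife.prototype.bringToLife = function (index) {
  this.cells[index] = true;
};

module.exports = GameOfLife;
```

The matching Mocha spec would assert that a fresh universe has `getAliveCount()` equal to 0, and 1 after a single `bringToLife(0)`.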
]]></content:encoded></item><item><title><![CDATA[Basic setup for a Ruby-based TDD Code Kata]]></title><description><![CDATA[<p>In the last weeks with the guys of the <a href="http://firenze.ruby-it.org/">Firenze Ruby Social Club</a> we started to think about organizing some <a href="http://en.wikipedia.org/wiki/Kata_%28programming%29">Code Katas</a> to play with <a href="http://en.wikipedia.org/wiki/Test-driven_development">test-driven development</a> and yesterday we met to play with <a href="http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">The Game of Life</a>.</p>
<p>We used Ruby and RSpec with a simple setup that I want</p>]]></description><link>https://www.tommyblue.it/2014/06/24/basic-setup-for-a-ruby-based-tdd-code-kata/</link><guid isPermaLink="false">5b807014e04575000159f4b5</guid><category><![CDATA[ruby]]></category><category><![CDATA[tdd]]></category><category><![CDATA[testing]]></category><category><![CDATA[rspec]]></category><category><![CDATA[kata]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Tue, 24 Jun 2014 12:53:00 GMT</pubDate><content:encoded><![CDATA[<p>In the last weeks with the guys of the <a href="http://firenze.ruby-it.org/">Firenze Ruby Social Club</a> we started to think about organizing some <a href="http://en.wikipedia.org/wiki/Kata_%28programming%29">Code Katas</a> to play with <a href="http://en.wikipedia.org/wiki/Test-driven_development">test-driven development</a> and yesterday we met to play with <a href="http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">The Game of Life</a>.</p>
<p>We used Ruby and RSpec with a simple setup that I want to share here in case you'd like to play with some katas.</p>
<p>Although a kata exercise probably won't need many gems, in Ruby projects I always like to use a <code>Gemfile</code> with the required gems:</p>
<pre><code class="language-prettyprint">ruby '2.1.2'
source 'https://rubygems.org'
gem 'rspec'
</code></pre>
<p>After a <code>bundle install</code> you're ready to start writing some code (if you don't have the <code>bundle</code> command, install the bundler gem with <code>gem install bundler</code>).</p>
<p>A basic example to begin TDD with <a href="http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">the Game of Life</a> could be this:</p>
<p><code>game_of_life_spec.rb</code>:</p>
<pre><code class="language-prettyprint">require './game_of_life'

describe GameOfLife::Universe do
  it &quot;should have an initial size&quot; do
    u = GameOfLife::Universe.new(6)
    expect(u.size).to eq(36)
  end
end
</code></pre>
<p><code>game_of_life.rb</code>:</p>
<pre><code class="language-prettyprint">module GameOfLife
  class Universe
    attr_reader :size
    def initialize(side)
      @size = side**2
    end
  end
end
</code></pre>
<p>With this minimalistic setup, the test passes:</p>
<pre><code class="language-prettyprint">~$ bundle exec rspec --color game_of_life_spec.rb
.

Finished in 0.00095 seconds (files took 0.0966 seconds to load)
1 example, 0 failures
</code></pre>
<p>You're now ready to start playing with TDD in Ruby :)</p>
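If you want a hint for the next red-green cycle (the methods below are my own sketch, not what we wrote at the meetup), the universe could track dead and live cells:

```ruby
# Hypothetical next iteration of game_of_life.rb;
# everything beyond Universe#size is my own addition.
module GameOfLife
  class Universe
    attr_reader :size

    def initialize(side)
      @size = side**2
      @cells = Array.new(@size, false) # every cell starts dead
    end

    def alive_count
      @cells.count(true)
    end

    def bring_to_life(index)
      @cells[index] = true
    end
  end
end
```

The matching spec would expect `alive_count` to be 0 on a fresh universe and 1 after a single `bring_to_life`.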
]]></content:encoded></item><item><title><![CDATA[How to logrotate rails logs]]></title><description><![CDATA[<p>If you deploy a Rails app and forget to configure automatic log rotation, a few weeks later it won't be difficult to find something like this:</p>
<pre><code>$ ls -lh log/production.log
  -rw-rw-r-- 1 www-data www-data 93,2M apr 10 17:49 production.log
</code></pre>
<p>Think if you have to find some error log inside</p>]]></description><link>https://www.tommyblue.it/2014/04/11/how-to-logrotate-rails-logs/</link><guid isPermaLink="false">5b807014e04575000159f4b4</guid><category><![CDATA[rails]]></category><category><![CDATA[logs]]></category><category><![CDATA[deploy]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Fri, 11 Apr 2014 08:59:27 GMT</pubDate><content:encoded><![CDATA[<p>If you deploy a Rails app and forget to configure automatic log rotation, a few weeks later it won't be difficult to find something like this:</p>
<pre><code>$ ls -lh log/production.log
  -rw-rw-r-- 1 www-data www-data 93,2M apr 10 17:49 production.log
</code></pre>
<p>Imagine having to find an error log inside a 100MB file: not easy... :)</p>
<p>Setting log rotation isn't difficult at all. I know two main ways.</p>
<h2 id="usesyslog">Use syslog</h2>
<p>This is a really easy solution. Rails will use standard syslog as logger, which means the logs will rotate automatically.</p>
<p>Open <code>config/environments/production.rb</code> and add this line:</p>
<pre><code class="language-prettyprint">config.logger = SyslogLogger.new
</code></pre>
<p>If you want to keep your logs from being mixed with the system logs, pass the log path as a parameter:</p>
<pre><code class="language-prettyprint">config.logger = SyslogLogger.new('/var/log/&lt;APP_NAME&gt;.log')
</code></pre>
<h2 id="uselogrotate">Use logrotate</h2>
<p>This is the cleaner way, but it requires creating a file on the server, inside the <code>/etc/logrotate.d/</code> folder. This is a possible content of the <code>/etc/logrotate.d/rails_apps</code> file:</p>
<pre><code>/path/to/rails/app/log/*.log {
    weekly
    missingok
    rotate 28
    compress
    delaycompress
    notifempty
    copytruncate
}
</code></pre>
<p>The <code>copytruncate</code> option is required unless you restart the Rails app after each rotation: without it the app keeps writing to the old log file, if it still exists, or stops logging (or, worse, crashes) if the file is deleted.<br>
Below are the <code>copytruncate</code> details from <a href="http://linuxcommand.org/man_pages/logrotate8.html">the logrotate man page</a>:</p>
<pre><code>copytruncate
      Truncate  the  original log file in place after creating a copy,
      instead of moving the old log file and optionally creating a new
      one,  It  can be used when some program can not be told to close
      its logfile and thus might continue writing (appending)  to  the
      previous log file forever.  Note that there is a very small time
      slice between copying the file and truncating it, so  some  log-
      ging  data  might be lost.  When this option is used, the create
      option will have no effect, as the old log file stays in  place.
</code></pre>
<p>To check the logrotate script you can use the <code>logrotate</code> command with the debug (<code>-d</code>) option, which executes a dry-run:</p>
<pre><code class="language-prettyprint">sudo logrotate -d /etc/logrotate.d/rails_apps
</code></pre>
<p>If everything seems ok you can wait until the next day or manually launch the rotation with:</p>
<pre><code class="language-prettyprint">sudo logrotate -v /etc/logrotate.d/rails_apps
</code></pre>
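For completeness, Ruby's standard library Logger can also rotate files by itself, which may be enough for a small app. A minimal sketch (the path and sizes here are illustrative, not from the post):

```ruby
require 'logger'
require 'tmpdir'

# Illustrative path; in a Rails app this would be log/production.log
log_path = File.join(Dir.mktmpdir, 'production.log')

# Keep at most 5 rotated files, starting a new one at ~10 MB.
logger = Logger.new(log_path, 5, 10 * 1024 * 1024)
logger.info('this file is rotated by Logger itself')
logger.close
```

In Rails you would wire this up through <code>config.logger</code> in <code>config/environments/production.rb</code>, like the syslog example above.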
]]></content:encoded></item><item><title><![CDATA[Security checks for Ruby apps]]></title><description><![CDATA[<p>If you, like me, have a lot of ruby apps and want to check if the code is vulnerable, <a href="https://github.com/codesake/codesake-dawn">Codesake::Dawn</a> could be a useful gem.</p>
<p>This gem supports Rails, Sinatra and Padrino apps. To install it in a Rails app, add the gem to the <code>development</code> group in <code>Gemfile</code></p>]]></description><link>https://www.tommyblue.it/2014/04/04/security-checks-your-ruby-apps/</link><guid isPermaLink="false">5b807014e04575000159f4b3</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[sinatra]]></category><category><![CDATA[owasp]]></category><category><![CDATA[security]]></category><category><![CDATA[padrino]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Fri, 04 Apr 2014 13:51:10 GMT</pubDate><content:encoded><![CDATA[<p>If you, like me, have a lot of ruby apps and want to check if the code is vulnerable, <a href="https://github.com/codesake/codesake-dawn">Codesake::Dawn</a> could be a useful gem.</p>
<p>This gem supports Rails, Sinatra and Padrino apps. To install it in a Rails app, add the gem to the <code>development</code> group in <code>Gemfile</code>:</p>
<pre><code class="language-prettyprint">group :development do
  gem 'codesake-dawn', require: false
end
</code></pre>
<p>then run <code>bundle install</code>.<br>
Now add this line to the <code>Rakefile</code>:</p>
<pre><code class="language-prettyprint">require 'codesake/dawn/tasks'
</code></pre>
<p>Installation finished. To check the app, just run <code>rake dawn:run</code>:</p>
<pre><code class="language-prettyprint">~$ rake dawn:run
15:27:03 [*] dawn v1.1.0 is starting up
15:27:04 [$] dawn: scanning .
15:27:04 [$] dawn: rails v4.0.3 detected
15:27:04 [$] dawn: applying all security checks
15:27:04 [$] dawn: 171 security checks applied - 0 security checks skipped
15:27:04 [$] dawn: 1 vulnerability found
15:27:04 [!] dawn: Owasp Ror CheatSheet: Session management check failed
15:27:04 [$] dawn: Severity: info
15:27:04 [$] dawn: Priority: unknown
15:27:04 [$] dawn: Description: By default, Ruby on Rails uses a Cookie based session store. What that means is that unless you change something, the session will not expire on the server. That means that some default applications may be vulnerable to replay attacks. It also means that sensitive information should never be put in the session.
15:27:04 [$] dawn: Solution: Use ActiveRecord or the ORM you love most to handle your code session_store. Add &quot;Application.config.session_store :active_record_store&quot; to your session_store.rb file.
15:27:04 [$] dawn: Evidence:
15:27:04 [$] dawn: 	In your session_store.rb file you are not using ActiveRercord to store session data. This will let rails to use a cookie based session and it can expose your web application to a session replay attack.
15:27:04 [$] dawn: 	{:filename=&gt;&quot;./config/initializers/session_store.rb&quot;, :matches=&gt;[]}
15:27:04 [*] dawn is leaving
</code></pre>
]]></content:encoded></item><item><title><![CDATA[How to avoid mysqldump --events warning]]></title><description><![CDATA[<p>Since MySQL v.5.5.29 the <code>mysqldump</code> command can generate the following error:</p>
<pre><code>-- Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.
</code></pre>
<p>An example of mysqldump for full dumps is this:</p>
<pre><code>mysqldump --opt -u &lt;USERNAME&gt; -p&lt;PASSWORD&gt; --all-databases | gzip &gt;</code></pre>]]></description><link>https://www.tommyblue.it/2014/04/03/how-to-avoid-mysqldump-events-warning/</link><guid isPermaLink="false">5b807014e04575000159f4b2</guid><category><![CDATA[linux]]></category><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Thu, 03 Apr 2014 15:54:53 GMT</pubDate><content:encoded><![CDATA[<p>Since MySQL v5.5.29 the <code>mysqldump</code> command can emit the following warning:</p>
<pre><code>-- Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.
</code></pre>
<p>An example of mysqldump for full dumps is this:</p>
<pre><code>mysqldump --opt -u &lt;USERNAME&gt; -p&lt;PASSWORD&gt; --all-databases | gzip &gt; full_dump.sql.gz
</code></pre>
<p>On a server that runs it during periodic backups, this means a warning email from the cron daemon every time, which is very annoying.</p>
<p>If you add the <code>--events</code> option as suggested, you may receive this error:</p>
<pre><code>mysqldump: Couldn't execute 'show events': 
  Access denied for user '&lt;USERNAME&gt;'@'localhost' to database '&lt;DATABASE&gt;' (1044)
</code></pre>
<p>The solution is to grant the <code>EVENT</code> permission to the user:</p>
<pre><code>GRANT EVENT ON &lt;DATABASE&gt;.* to '&lt;USERNAME&gt;'@'localhost' with grant option;
</code></pre>
<p>If you don't care about events, there is apparently no <code>--no-events</code> option to suppress the warning.<br>
There's an interesting discussion about this (maybe-not-a-)bug <a href="http://bugs.mysql.com/bug.php?id=68376">here</a>.</p>
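Putting it together, once the <code>GRANT</code> above is in place the scheduled backup can include the <code>--events</code> flag. A sketch of a cron entry (the schedule and paths are illustrative, not from the post):

```shell
# /etc/cron.d/mysql-backup (illustrative): nightly full dump at 03:00,
# now including the mysql.event table thanks to --events.
0 3 * * * root mysqldump --opt --events -u <USERNAME> -p<PASSWORD> --all-databases | gzip > /var/backups/full_dump.sql.gz
```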
]]></content:encoded></item><item><title><![CDATA[Generate the sitemap of a Ghost blog during deploy]]></title><description><![CDATA[<p>Waiting for a sitemap generator inside the core of <a href="https://ghost.org/">Ghost</a> (planned as <em><a href="https://github.com/TryGhost/Ghost/wiki/Planned-Features">&quot;future implementation&quot;</a></em>) I decided to implement a way to generate an up-to-date <code>sitemap.xml</code> during deployment.<br>
As you can read in the <a href="https://www.tommyblue.it/2014/04/01/deploy-ghost-blog-with-capistrano-rbenv-and-nvm/">previous post</a> I'm deploying this blog with <a href="http://capistranorb.com/">Capistrano</a> and <a href="https://github.com/loopj/capistrano-node-deploy">capistrano-node-deploy</a>.<br>
So I added a</p>]]></description><link>https://www.tommyblue.it/2014/04/02/generate-the-sitemap-of-a-ghost-blog-during-deploy/</link><guid isPermaLink="false">5b807014e04575000159f4b1</guid><category><![CDATA[ghost]]></category><category><![CDATA[capistrano]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Wed, 02 Apr 2014 09:53:27 GMT</pubDate><content:encoded><![CDATA[<p>Waiting for a sitemap generator inside the core of <a href="https://ghost.org/">Ghost</a> (planned as <em><a href="https://github.com/TryGhost/Ghost/wiki/Planned-Features">&quot;future implementation&quot;</a></em>) I decided to implement a way to generate an up-to-date <code>sitemap.xml</code> during deployment.<br>
As you can read in the <a href="https://www.tommyblue.it/2014/04/01/deploy-ghost-blog-with-capistrano-rbenv-and-nvm/">previous post</a> I'm deploying this blog with <a href="http://capistranorb.com/">Capistrano</a> and <a href="https://github.com/loopj/capistrano-node-deploy">capistrano-node-deploy</a>.<br>
So I added a <code>deploy:generate_sitemap</code> task which is executed at the end of the deployment process.</p>
<p>This is the <code>Capfile</code> extract:</p>
<pre><code class="language-prettyprint">namespace :deploy do  
  task :generate_sitemap do
    run &quot;cd #{latest_release} &amp;&amp; ./ghost_sitemap.sh #{latest_release}&quot;
  end
end
after &quot;node:restart&quot;, &quot;deploy:generate_sitemap&quot;  
</code></pre>
<p>So at the end of the deployment the <code>ghost_sitemap.sh</code> script is executed. The script is placed in the blog root and is a personalized version of the code you can find here: <a href="http://ghost.centminmod.com/ghost-sitemap-generator/">http://ghost.centminmod.com/ghost-sitemap-generator/</a></p>
<p>It essentially does 3 things:</p>
<ul>
<li>Puts the <code>sitemap.xml</code> link in the <code>robots.txt</code> file</li>
<li>Scans (using <code>wget</code>) the website and generates the <code>sitemap.xml</code> file in the <code>content</code> folder</li>
<li>Notifies <a href="https://www.google.com/webmasters/tools/home">Google Webmaster Tools</a></li>
</ul>
<p>What I changed in the original script is:</p>
<pre><code>url=&quot;www.tommyblue.it&quot;
webroot=&quot;${1}/content&quot;
path=&quot;${webroot}/sitemap.xml&quot;
user='&lt;USER&gt;'
group='&lt;GROUP&gt;'
</code></pre>
<p><code>user</code> and <code>group</code> are used to <code>chown</code> the <code>sitemap.xml</code> file, so check that the web user (probably <code>www-data</code>) can read that file.</p>
<p>This process has a big problem: the sitemap is generated only during deploy, not when I publish a new post. A workaround is to run <code>cap deploy:generate_sitemap</code> after a new post is published.</p>
<p>It works, but I'd like an automatic way. Any ideas?</p>
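One semi-automatic workaround is to schedule the existing Capistrano task with cron, so the sitemap is regenerated periodically regardless of deploys. An illustrative crontab entry (the path and schedule are my own assumptions):

```shell
# Regenerate the sitemap every night at 04:00 (illustrative schedule and path),
# reusing the deploy:generate_sitemap task from the Capfile above.
0 4 * * * cd /path/to/local/blog/checkout && bundle exec cap deploy:generate_sitemap
```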
]]></content:encoded></item><item><title><![CDATA[Deploy ghost blog with capistrano, rbenv and nvm]]></title><description><![CDATA[<p>I just moved this blog from <a href="http://jekyllrb.com/">Jekyll</a> to <a href="https://ghost.org/">Ghost</a> (<strong>v.0.4.2</strong> while writing this post) and I had to find a fast way to deploy new changes to the server.<br>
I'm pretty confident with <a href="http://capistranorb.com/">Capistrano</a> so, although Ghost doesn't use Ruby, I decided to use it to manage</p>]]></description><link>https://www.tommyblue.it/2014/04/01/deploy-ghost-blog-with-capistrano-rbenv-and-nvm/</link><guid isPermaLink="false">5b807014e04575000159f4b0</guid><category><![CDATA[ghost]]></category><category><![CDATA[capistrano]]></category><category><![CDATA[rbenv]]></category><category><![CDATA[nvm]]></category><category><![CDATA[node]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Tue, 01 Apr 2014 14:59:14 GMT</pubDate><content:encoded><![CDATA[<p>I just moved this blog from <a href="http://jekyllrb.com/">Jekyll</a> to <a href="https://ghost.org/">Ghost</a> (<strong>v.0.4.2</strong> while writing this post) and I had to find a fast way to deploy new changes to the server.<br>
I'm pretty confident with <a href="http://capistranorb.com/">Capistrano</a> so, although Ghost doesn't use Ruby, I decided to use it to manage deployments.<br>
A cool gem allows node apps to be deployed with Capistrano: <a href="https://github.com/loopj/capistrano-node-deploy">capistrano-node-deploy</a>.</p>
<p>This is the <code>Gemfile</code>:</p>
<pre><code>source 'https://rubygems.org'
gem 'capistrano', '~&gt; 2.15.5'
gem 'capistrano-node-deploy', '~&gt; 1.2.14'
gem 'capistrano-shared_file', '~&gt; 0.1.3'
gem 'capistrano-rbenv', '~&gt; 1.0.5'
</code></pre>
<p>If you don't use <a href="https://github.com/sstephenson/rbenv">rbenv</a> just remove the related line in the <code>Gemfile</code> and change the <code>Capfile</code> accordingly.</p>
<p>This configuration works well, but it has some problems if you use <a href="https://github.com/creationix/nvm">nvm</a> instead of a system-wide installation of node and npm.</p>
<p>To fix them I had to add some variables (<code>nvm_path</code>, <code>node_binary</code> and <code>npm_binary</code>) and totally override the <code>node:install_packages</code> task. Without these changes the deploy task fails with messages like:</p>
<pre><code>/usr/bin/env: node
No such file or directory
</code></pre>
<p>or:</p>
<pre><code>node: not found
</code></pre>
<p>This isn't really a good way, because you must change the <code>nvm_path</code> every time you upgrade node through nvm, but it's the only way I actually found :)</p>
<p>I also changed the <code>app_command</code> variable to launch <code>node ~/apps/tommyblue.it/current/index</code> instead of <code>node ~/apps/tommyblue.it/current/core/index</code> in the upstart script. The second command doesn't actually work, although it is the gem's default.</p>
<p>This is the full content of the <code>Capfile</code> (remember to change the &lt;UPPERCASE&gt; values to your own):</p>
<pre><code class="language-prettyprint">require &quot;capistrano/node-deploy&quot;
require &quot;capistrano/shared_file&quot;
require &quot;capistrano-rbenv&quot;
set :rbenv_ruby_version, &quot;2.1.1&quot;

set :application, &quot;tommyblue.it&quot;
set :user, &quot;&lt;USERNAME&gt;&quot;
set :deploy_to, &quot;/home/#{user}/apps/#{application}&quot;

set :app_command, &quot;index&quot;

set :node_user, &quot;&lt;USERNAME&gt;&quot;
set :node_env, &quot;production&quot;
set :nvm_path, &quot;/home/&lt;USERNAME&gt;/.nvm/v0.10.26/bin&quot;
set :node_binary, &quot;#{nvm_path}/node&quot;
set :npm_binary, &quot;#{nvm_path}/npm&quot;

set :use_sudo, false
set :scm, :git
set :repository,  &quot;&lt;GIT REPO URL&gt;&quot;

default_run_options[:pty] = true
set :ssh_options, { forward_agent: true }

server &quot;&lt;SERVER HOSTNAME OR IP&gt;&quot;, :web, :app, :db, primary: true

set :shared_files,    [&quot;config.js&quot;]
set :shared_children, [&quot;content/data&quot;, &quot;content/images&quot;]

set :keep_releases, 3

namespace :deploy do
  task :mkdir_shared do
    run &quot;cd #{shared_path} &amp;&amp; mkdir -p data images files&quot;
  end

  task :generate_sitemap do
    run &quot;cd #{latest_release} &amp;&amp; ./ghost_sitemap.sh #{latest_release}&quot;
  end
end

namespace :node do
  desc &quot;Check required packages and install if packages are not installed&quot;
  task :install_packages do
    run &quot;mkdir -p #{previous_release}/node_modules ; cp -r #{previous_release}/node_modules #{release_path}&quot; if previous_release
    run &quot;cd #{release_path} &amp;&amp; PATH=#{nvm_path}:$PATH #{npm_binary} install --loglevel warn&quot;
  end
end

after &quot;deploy:create_symlink&quot;, &quot;deploy:mkdir_shared&quot;
after &quot;node:restart&quot;, &quot;deploy:generate_sitemap&quot;
after &quot;deploy:generate_sitemap&quot;, &quot;deploy:cleanup&quot;
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Una nuova casa?]]></title><description><![CDATA[<p>Recentemente non sono riuscito ad aggiornare molto questo blog. Spesso la causa non è stata la mancanza di contenuti, ma la non immediatezza della piattaforma. àˆ vero, con <a href="http://jekyllrb.com/">Jekyll</a> mi sono divertito e l'idea di servire un sito statico secondo me è grandiosa, ma l'implementazione è effettivamente molto scarna e</p>]]></description><link>https://www.tommyblue.it/2014/03/18/una-nuova-casa/</link><guid isPermaLink="false">5b807014e04575000159f4af</guid><category><![CDATA[jekyll]]></category><category><![CDATA[ghost]]></category><category><![CDATA[blog]]></category><category><![CDATA[emberjs]]></category><category><![CDATA[sito]]></category><category><![CDATA[informatica]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Tue, 18 Mar 2014 15:19:00 GMT</pubDate><content:encoded><![CDATA[<p>Recentemente non sono riuscito ad aggiornare molto questo blog. Spesso la causa non è stata la mancanza di contenuti, ma la non immediatezza della piattaforma. àˆ vero, con <a href="http://jekyllrb.com/">Jekyll</a> mi sono divertito e l'idea di servire un sito statico secondo me è grandiosa, ma l'implementazione è effettivamente molto scarna e questo mi ha spesso fermato quando avrei voluto iniziare a scrivere un articolo e basta.</p>
<p>So for some time now I've been looking around for a new platform, convinced that, in any case, I don't want to use <a href="http://wordpress.org/">Wordpress</a>.</p>
<p>As it happens, in the last few weeks I started rewriting the interface of <a href="http://rubyfatt.kreations.it/">Rubyfatt</a> using <a href="http://emberjs.com/">Ember</a> and, almost at the same time, I started hearing about <a href="https://ghost.org/">Ghost</a>, which has in fact decided to <a href="http://dev.ghost.org/hello-ember/">rewrite its admin interface with Ember</a>. It immediately struck me as an opportunity not to be missed.</p>
<p>I'm working on the migration from Jekyll to Ghost, so the blog pages may soon change their look for <a href="https://www.tommyblue.it/2013/07/01/come-migrare-da-wordpress-a-jekyll-ed-essere-felici">the umpteenth time</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Git: Content-addressable filesystem and Version Control System]]></title><description><![CDATA[<iframe src="http://www.slideshare.net/slideshow/embed_code/28733271?rel=0" width="900" height="730" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC;border-width:1px 1px 0;margin-bottom:5px" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="https://www.slideshare.net/tommasovisconti/git-contentaddressable-filesystem-and-version-control-system" title="GIT: Content-addressable filesystem and Version Control System" target="_blank">GIT: Content-addressable filesystem and Version Control System</a> </strong> from <strong><a href="http://www.slideshare.net/tommasovisconti" target="_blank">Tommaso Visconti</a></strong> </div>]]></description><link>https://www.tommyblue.it/2013/11/29/git-presentation/</link><guid isPermaLink="false">5b807014e04575000159f4ae</guid><category><![CDATA[informatica]]></category><category><![CDATA[git]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Fri, 29 Nov 2013 11:03:40 GMT</pubDate><content:encoded><![CDATA[<iframe src="http://www.slideshare.net/slideshow/embed_code/28733271?rel=0" width="900" height="730" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC;border-width:1px 1px 0;margin-bottom:5px" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="https://www.slideshare.net/tommasovisconti/git-contentaddressable-filesystem-and-version-control-system" title="GIT: Content-addressable filesystem and Version Control System" target="_blank">GIT: Content-addressable filesystem and Version Control System</a> </strong> from <strong><a href="http://www.slideshare.net/tommasovisconti" target="_blank">Tommaso Visconti</a></strong> </div>
]]></content:encoded></item><item><title><![CDATA[Automagically create and bootstrap Chef nodes on VMware vSphere with knife esx]]></title><description><![CDATA[<p>Recently I decided to stop bothering with &quot;standard&quot; system administration and switched to a more DevOps-oriented approach using <a href="http://www.opscode.com/chef/">Chef</a>.</p>
<p>After a few days I found a very useful gem, <a href="https://github.com/maintux/knife-esx">knife-esx</a>, which is a <a href="http://docs.opscode.com/knife.html">Knife</a> plugin to create and bootstrap new VMware virtual machines on the fly.</p>]]></description><link>https://www.tommyblue.it/2013/08/28/automagically-create-and-bootstrap-chef-nodes-on-vmware-vsphere-with-knife-esx/</link><guid isPermaLink="false">5b807014e04575000159f4ad</guid><category><![CDATA[informatica]]></category><category><![CDATA[vmware]]></category><category><![CDATA[esx]]></category><category><![CDATA[esxi]]></category><category><![CDATA[vsphere]]></category><category><![CDATA[knife]]></category><category><![CDATA[chef]]></category><category><![CDATA[how-to]]></category><category><![CDATA[software libero]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Wed, 28 Aug 2013 09:21:45 GMT</pubDate><content:encoded><![CDATA[<p>Recently I decided to stop bothering with &quot;standard&quot; system administration and switched to a more DevOps-oriented approach using <a href="http://www.opscode.com/chef/">Chef</a>.</p>
<p>After a few days I found a very useful gem, <a href="https://github.com/maintux/knife-esx">knife-esx</a>, which is a <a href="http://docs.opscode.com/knife.html">Knife</a> plugin to create and bootstrap new VMware virtual machines on the fly. I patched it and its dependency gem <a href="https://github.com/maintux/esx">esx</a> because it wasn't possible to set the number of cores per virtual CPU, but now it is. So, to create and bootstrap a new virtual machine you just need to type:</p>
<pre><code> knife esx vm create \
       --free-license \
       --esx-host &lt;MY_ESXI_HOST&gt; \
       --esx-templates-dir /vmfs/volumes/datastore1/esx-gem/templates \
       --vm-name &lt;NEW_VM_NAME&gt; \
       --guest-id ubuntu64Guest \
       --use-template ubuntu-12.04-x64_template.vmdk \
       --distro ubuntu12.04-gems \
       --vm-memory 512 \
       --vm-cpus 8 \
       --vm-cpu-cores 4 \
       -x &lt;SSH_USERNAME&gt; \
       -i ~/.ssh/id_rsa \
       --node-name &lt;CHEF_NEW_NODE_NAME&gt; \
       -r 'role[base],role[esx-vm]'
</code></pre>
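<p>Once the machine is up, you can verify that the node was registered on the Chef server (this assumes knife is already configured on your workstation; the node name placeholder mirrors the one used above):</p>
```
# list the nodes known to the Chef server, then inspect the new one
knife node list
knife node show &lt;CHEF_NEW_NODE_NAME&gt;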
]]></content:encoded></item><item><title><![CDATA[How I deploy Rails apps]]></title><description><![CDATA[<p>In various mailing lists I read a lot of threads about deploying a Rails app. I want to contribute to the topic with this post, where I'll describe how I'm now deploying my Rails apps on a VPS (actually it's not a virtual but a physical server, but it's the</p>]]></description><link>https://www.tommyblue.it/2013/07/17/how-i-deploy-rails-apps/</link><guid isPermaLink="false">5b807014e04575000159f4ac</guid><category><![CDATA[informatica]]></category><category><![CDATA[how-to]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[nginx]]></category><category><![CDATA[unicorn]]></category><category><![CDATA[supervise]]></category><category><![CDATA[capistrano]]></category><category><![CDATA[rbenv]]></category><dc:creator><![CDATA[Tommaso Visconti]]></dc:creator><pubDate>Wed, 17 Jul 2013 15:18:07 GMT</pubDate><content:encoded><![CDATA[<p>In various mailing lists I read a lot of threads about deploying a Rails app. I want to contribute to the topic with this post, where I'll describe how I'm now deploying my Rails apps on a VPS (actually it's not a virtual but a physical server, but it's all the same).</p>
<p>In the past I used <a href="https://www.phusionpassenger.com/">Phusion Passenger</a>, but it was a very young project and when <a href="http://unicorn.bogomips.org/">Unicorn</a> showed up, I fell in love :)<br>
I wrote a <a href="http://www.tommyblue.it/2009/11/14/deploy-di-applicazioni-rails-con-unicorn-e-nginx">similar post</a> some years ago; the idea is the same, but the structure is now more solid.</p>
<p>The tools I'm now using are:</p>
<ul>
<li>Unicorn as Rack HTTP server</li>
<li><a href="http://nginx.org/">Nginx</a> as proxy server</li>
<li>Supervise (part of Daemontools) to monitor the unicorn app</li>
<li><a href="https://github.com/capistrano/capistrano">Capistrano</a> to manage the deploy</li>
<li><a href="https://github.com/sstephenson/rbenv">Rbenv</a> to manage the ruby environment</li>
</ul>
<p>The server's OS is Ubuntu 12.04 LTS.</p>
<h2 id="rbenv">Rbenv</h2>
<p>To install rbenv and ruby-build:</p>
<pre><code>sudo apt-get install build-essential zlib1g-dev openssl libopenssl-ruby1.9.1 libssl-dev libruby1.9.1 libreadline-dev git-core
git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
echo 'export PATH=&quot;$HOME/.rbenv/bin:$PATH&quot;' &gt;&gt; ~/.bashrc
echo 'eval &quot;$(rbenv init -)&quot;' &gt;&gt; ~/.bashrc
exec $SHELL -l
mkdir -p ~/.rbenv/plugins
cd ~/.rbenv/plugins
git clone git://github.com/sstephenson/ruby-build.git
rbenv install 2.0.0-p247
rbenv rehash
rbenv global 2.0.0-p247
rbenv local 2.0.0-p247
</code></pre>
<p>Just check if everything went ok:</p>
<pre><code>$ ruby -v
ruby 2.0.0p247 (2013-06-27 revision 41674) [x86_64-linux]
</code></pre>
<p>Read <a href="http://robots.thoughtbot.com/post/47273164981/using-rbenv-to-manage-rubies-and-gems">this post</a> to switch to Rbenv if you're using <a href="https://rvm.io/">RVM</a>.</p>
<h2 id="capistrano">Capistrano</h2>
<p>Create the required folder in the server:</p>
<pre><code>mkdir ~/apps
</code></pre>
<p>Now configure your app to be deployed:</p>
<pre><code>cd ~/my_app_path
echo &quot;gem 'capistrano'&quot; &gt;&gt; Gemfile
bundle install
capify .
</code></pre>
<p>Edit the <em>Capfile</em> if you need to, then edit <em>config/deploy.rb</em>. This is a working example:</p>
<pre><code class="language-prettyprint">
require &quot;bundler/capistrano&quot;
require &quot;capistrano-rbenv&quot;
set :rbenv_ruby_version, &quot;2.0.0-p247&quot;

set :user, &quot;server_username&quot;
set :application, &quot;my_app&quot;
set :deploy_to, &quot;/home/#{user}/apps/#{application}&quot;
set :deploy_via, :remote_cache

set :use_sudo, false
set :scm, :git
set :repository,  &quot;your_app_git_repo&quot;

default_run_options[:pty] = true
set :ssh_options, { forward_agent: true }

server &quot;my_server.my_domain&quot;, :web, :app, :db, primary: true

set :branch, &quot;master&quot;
set :rails_env, &quot;production&quot;

after &quot;deploy&quot;, &quot;deploy:cleanup&quot; # keep only the last 5 releases

# Daemontools start/stop
namespace :deploy do
  %w[start stop restart].each do |command|
    desc &quot;#{command} unicorn server&quot;
    task command, roles: :app, except: {no_release: true} do
      if command == &quot;start&quot;
        sudo &quot;/usr/bin/svc -u /etc/service/my_app&quot;
      elsif command == &quot;stop&quot;
        sudo &quot;/usr/bin/svc -d /etc/service/my_app&quot;
      else
        sudo &quot;/usr/bin/svc -t /etc/service/my_app&quot;
      end
    end
  end

  task :setup_config, roles: :app do
    run &quot;mkdir -p #{shared_path}/config&quot;
  end
  after &quot;deploy:setup&quot;, &quot;deploy:setup_config&quot;

  task :symlink_config, roles: :app do
    run &quot;ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml&quot;
  end
  after &quot;deploy:finalize_update&quot;, &quot;deploy:symlink_config&quot;

  desc &quot;Make sure local git is in sync with remote.&quot;
  task :check_revision, roles: :web do
    unless `git rev-parse HEAD` == `git rev-parse origin/#{branch}`
      puts &quot;WARNING: HEAD is not the same as origin/#{branch}&quot;
      puts &quot;Run `git push` to sync changes.&quot;
      exit
    end
  end
  before &quot;deploy&quot;, &quot;deploy:check_revision&quot;
end

</code></pre>
<p>You can create the required folders with:</p>
<pre><code>cap deploy:setup
</code></pre>
<p>Log in to the server and check the <em>~/apps/my_app/shared</em> folder. Add these folders if they don't exist:</p>
<pre><code>cd ~/apps/my_app/shared
mkdir config log pids sockets
</code></pre>
<p>In the <em>config</em> folder, create a <em>database.yml</em> file with the Rails production environment configuration.</p>
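<p>As a minimal sketch (run it on the server; the mysql2 adapter, database name and credentials are placeholders for your real settings):</p>

```shell
# create the shared config folder (if missing) and a minimal database.yml
mkdir -p ~/apps/my_app/shared/config
cat > ~/apps/my_app/shared/config/database.yml <<'EOF'
production:
  adapter: mysql2
  database: my_app_production
  username: my_app
  password: change_me
  host: localhost
  pool: 5
EOF
```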
<h2 id="unicorn">Unicorn</h2>
<p>Add the unicorn gem to the rails app:</p>
<pre><code>cd ~/my_app_path
echo &quot;gem 'unicorn'&quot; &gt;&gt; Gemfile
bundle install
</code></pre>
<p>Add the unicorn configuration in the <em>shared/config/unicorn.rb</em> file (in the server):</p>
<pre><code class="language-prettyprint">
worker_processes 2
working_directory &quot;/home/my_user/apps/my_app/current&quot; # available in 0.94.0+
listen &quot;/home/my_user/apps/my_app/shared/sockets/my_app.sock&quot;, :backlog =&gt; 64
timeout 30
pid &quot;/home/my_user/apps/my_app/shared/pids/unicorn.pid&quot;
stderr_path &quot;/home/my_user/apps/my_app/shared/log/unicorn.stderr.log&quot;
stdout_path &quot;/home/my_user/apps/my_app/shared/log/unicorn.stdout.log&quot;

preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
  GC.copy_on_write_friendly = true

before_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end

</code></pre>
<p>To launch unicorn I create the <em>~/service</em> folder, with a subfolder for each project:</p>
<pre><code>mkdir -p ~/service/my_app
</code></pre>
<p>Then the required files.</p>
<p><strong>~/service/my_app/run (must be executable)</strong></p>
<pre><code class="language-prettyprint">
#!/bin/bash

exec su - my_user -c '/home/my_user/service/load_my_app.sh bundle exec unicorn_rails -E production -c /home/my_user/apps/my_app/shared/config/unicorn.rb'
# If you want to launch unicorn manually use the line below instead of the line above (use sudo!). Useful for debugging
# exec su - my_user -c '/home/my_user/service/load_my_app.sh bundle exec unicorn_rails -E production -l /home/my_user/apps/my_app/shared/sockets/my_app.sock'

</code></pre>
<p><strong>~/service/load_my_app.sh</strong></p>
<pre><code class="language-prettyprint">
#!/bin/bash

export RAILS_ENV=&quot;production&quot;
export PATH=&quot;$HOME/.rbenv/bin:$PATH&quot;
eval &quot;$(rbenv init -)&quot;
cd /home/my_user/apps/my_app/current/
exec "$@"

</code></pre>
<p>As noted in the comment, you can use the <em>run</em> file to test the app: just modify the file, then launch it as root:</p>
<pre><code>cd ~/service/my_app
sudo ./run
</code></pre>
<p>You'll see the familiar unicorn startup process, then it will listen for connections in the given socket.</p>
<p>That's it, now move on to supervise.</p>
<h2 id="daemontools">Daemontools</h2>
<p>Install the required packages:</p>
<pre><code>sudo apt-get install daemontools daemontools-run
</code></pre>
<p>After this command you'll have the <em>svc</em> executable. Before using it, create the symbolic link in the <em>/etc/service</em> folder:</p>
<pre><code>cd /etc/service
sudo ln -s /home/my_user/service/my_app
</code></pre>
<p>At server startup, supervise automatically launches the <em>run</em> executable found in each folder under <em>/etc/service/</em>.</p>
<p>To manually start up the app, use <em>svc</em>:</p>
<pre><code>sudo svc -u /etc/service/my_app
</code></pre>
<p>This is the same command used by Capistrano during deploy (see the configuration above).</p>
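<p>To check whether the app is actually up, daemontools also ships <em>svstat</em>, which prints the service status:</p>
```
sudo svstat /etc/service/my_app
# reports whether the service is up or down, its pid and its uptime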
<h2 id="nginx">Nginx</h2>
<p>If everything went as expected, the rails app is running and listening for connections in the unix socket at <em>/home/my_user/apps/my_app/shared/sockets/my_app.sock</em>. Now configure Nginx to use that socket.</p>
<p><strong>/etc/nginx/sites-available/www.my_app.my_domain</strong></p>
<pre><code class="language-prettyprint">
upstream backend_my_app {
  server unix:/home/my_user/apps/my_app/shared/sockets/my_app.sock fail_timeout=0;
}

server {
  listen [::]:80;

  client_max_body_size 4G;
  keepalive_timeout 5;

  try_files $uri/index.html $uri.html $uri @app;

  root /home/my_user/apps/my_app/current/public/;
  index index.html index.htm;

  server_name my_app.my_domain www.my_app.my_domain;

  location @app {
    gzip_static on;
    proxy_pass http://backend_my_app;
    proxy_redirect off;

    proxy_set_header        Host    $host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;

    root /home/my_user/apps/my_app/current/public/;
    index  index.html index.htm;
  }

  location ~* ^/font.+\.(svg|ttf|woff|eot)$ {
    root /home/my_user/apps/my_app/current/public/;
  }

  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   /var/www/nginx-default;
  }

  access_log  /var/log/nginx/access.log;
  error_log  /var/log/nginx/error.log;
}

</code></pre>
<p>Symlink this file in <em>/etc/nginx/sites-enabled/</em> and restart nginx; your app should be online.</p>
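<p>Enabling the site boils down to a symlink plus a restart; running <em>nginx -t</em> first is worth it to catch syntax errors (the file name matches the one created above):</p>
```
sudo ln -s /etc/nginx/sites-available/www.my_app.my_domain /etc/nginx/sites-enabled/
sudo nginx -t                 # validate the configuration before restarting
sudo service nginx restart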
<p>When you deploy a new version of the app, Capistrano will ask for the sudo password to send a TERM signal via <em>svc</em>, and supervise will restart the Rails app.</p>
<p>That's it. It seems like a lot of configuration (and maybe it is), but it works great and there are very few differences between projects, so <strong>CTRL-C+CTRL-V</strong> does the rest! :)</p>
]]></content:encoded></item></channel></rss>