All posts by zaid

Anime watch list – Spring season 2014

Here is a list of the anime shows I watched in the spring 2014 season, ranked from least to most enjoyable in my opinion:

  • 5. Bokura wa Minna Kawaisou

    English Name: The Kawai Complex Guide to Manors and Hostel Behavior


    Genres : Comedy, Romance, School, Slice of Life.
    Brief : High school freshman Kazunari Usa moves out to live on his own at the Kawai Complex boarding house, where his senpai Ritsu – whom he admires – also lives, along with other “unique” individuals.

    My opinion : A good, light romantic comedy with a simple plot but a funny atmosphere. The events can feel predictable at times, yet it is worth watching if you enjoy simple, funny anime.

  • 4. Soredemo Sekai wa Utsukushii

    English name: The World is Still Beautiful


    Genres : Adventure, Fantasy, Romance, Shoujo.
    Brief : Princess Nike, the youngest daughter of the Rain Dukedom who can call the rain, travels to the Sun Kingdom to marry Sun King Livius for the sake of her country, only to find that he is just a demanding child!

    My opinion : While its artwork and other technical aspects are about average, the storyline does a very good job of delivering the main message of the anime, which is the same as its title: regardless of what happens to you in this life, the world is still beautiful. It is an enjoyable show with a charm of its own.

  • 3. Isshuukan Friends.

    English Name: One Week Friends


    Genres : Comedy, School, Shounen, Slice of Life.
    Brief : Kaori Fujimiya is a lonely girl with no friends, because her memories of her friends disappear every week, so Yuuki Hase – who wants to be her friend – introduces himself to her anew each week.

    My opinion : This is a lovely, slow-paced slice-of-life anime. There are no dramatic moments or intricate plot, yet it is really warm, simple and lovely.

  • 2. Mahouka Koukou no Rettousei

    English Name: The Irregular at Magic High School


    Genres : Magic, Romance, School, Sci-Fi, Shounen, Supernatural.
    Brief : In a world where magic is recognized as a formal technology, a brother and sister attend a high school affiliated with the National Magic University, and many stormy incidents begin to unfold…

    My opinion : This anime is very interesting, with great artwork and a good soundtrack; the fight scenes are intense, with plenty of strong characters. The only thing I would consider a disadvantage is that the protagonist (the big brother, Tatsuya) is too strong, and no other character in the show comes anywhere near his strength and skill level. Overall it is a very good supernatural anime.

  • 1. Haikyuu!!

    English Name: High Jump


    Genres : Comedy, Drama, School, Shounen, Sports.
    Brief : Shouyou Hinata loves volleyball, but because his team lacked real players, he was easily defeated in his first middle school match by the genius player “Tobio Kageyama”. So he joins the Karasuno High School volleyball club to take his revenge on Kageyama.

    My opinion : This is the first sports anime I have watched in a long time, but it does a great job of expressing what matters most in such sports: team spirit first, then hard training. Even if individual players are strong (or even geniuses like Kageyama), that strength is useless if they cannot coordinate efficiently with their teammates. A very amusing anime with great events and storyline.

Server-Sent Events example with Laravel

Recently I read about HTML5 Server-Sent Events and liked the concept of establishing a long-lived connection to the server instead of performing frequent Ajax calls to pull updates. I wanted to put it into action by implementing a live currency rates widget with Laravel as the backend PHP application.

Basic Introduction

What are “Server-Sent Events”?
As Wikipedia defines them:

Server-sent events (SSE) is a technology for a browser to get automatic updates from a server via HTTP connection. The Server-Sent Events EventSource API is standardized as part of HTML5 by the W3C.

Basically, it is an HTML5 technology that lets the web client receive data from the server over a single connection that stays open for a long interval, with the server sending a stream of data to the browser without closing the connection (essentially, the connection remains active until the browser closes it). Such a technique is useful for pushing news updates, sending automatic updates in a social network, populating live price components, etc.

The older approach is called Ajax long polling, which requests the updates from the web client by issuing frequent separate requests (initiating Ajax requests recursively with a timeout), like the following example:
// poll the server every 30 seconds; each successful response schedules the next request
(function poll(){
    setTimeout(function(){
        $.ajax({
            url: "/path/to/url",
            dataType: "json",
            success: function(data){
                console.log(data);
                poll();
            }
        });
    }, 30000);
})();

To make the idea clearer, I will use a live currency rates widget as an example; this widget gets the rates to convert from one currency to another, displaying up and down arrows to indicate price changes.

Basic Usage

The following snippet shows the basic usage of SSE with JavaScript:

<script type="text/javascript">
var es = new EventSource("/path/to/url");
es.addEventListener("message", function(e) {
            console.log(e.data);
}, false);
</script>

This piece of JavaScript code initializes an EventSource object that listens on the specified URL and processes the data as the server sends it back to the browser. Each time the server sends new data, the event listener is called and processes the information according to the callback function implementation.

The code

As I said, Laravel will be used to implement this example. I will implement two actions: one for rendering the whole page, and the other for sending only the modified data in JSON format to the EventSource, as follows:

First, I defined the routes in routes.php:

// in app/routes.php
Route::get('/prices-page', 'HomeController@pricesPage');
Route::get('/prices-values', 'HomeController@pricesValues');

Then, I implemented a method to retrieve the rate values (I used the Yahoo service as a free feed source):
    /**
     * retrieve rates of currencies from feed
     * @return array
     */
    protected function getCurrencyRates() {
        $pair_arr = array('EURUSD', 'GBPUSD', 'USDJPY', 'XAUUSD', 'XAGUSD', 'USDJOD');
        $currencies_arr = array();

        foreach ($pair_arr as $pair) {
            try {
                $price_csv = @file_get_contents("http://finance.yahoo.com/d/quotes.csv?e=.csv&f=sl1d1t1&s=$pair=X");
                if ($price_csv === false) {
                    // file_get_contents() does not throw on failure, so convert it into an exception
                    throw new Exception("could not fetch rates for $pair");
                }
                $price_data = explode(',', $price_csv);
                $currencies_arr[$pair]['price'] = $price_data[1];
                $currencies_arr[$pair]['status'] = '';
            } catch (Exception $ex) {
                $currencies_arr['error'] = $ex->getMessage();
            }
        }
        return $currencies_arr;
    }

It is not efficient to fetch a file from an external source inside a controller, but I use it here for the purpose of the example. Usually, I write a backend command to get the prices from the external source (usually a trading server), and the controller methods retrieve the data from the database, as sketched below.
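For illustration, here is a minimal sketch of such a backend command, assuming a hypothetical rates table with pair and price columns (the command name and schema are assumptions, not part of the original example):

<?php
// app/commands/FetchRatesCommand.php (hypothetical)
use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class FetchRatesCommand extends Command {

    protected $name = 'rates:fetch';
    protected $description = 'Fetch currency rates from the external feed and store them.';

    public function fire() {
        $pairs = array('EURUSD', 'GBPUSD', 'USDJPY');

        foreach ($pairs as $pair) {
            $csv = @file_get_contents("http://finance.yahoo.com/d/quotes.csv?e=.csv&f=sl1d1t1&s=$pair=X");
            if ($csv === false) {
                $this->error("could not fetch $pair");
                continue;
            }
            $data = explode(',', $csv);

            // replace the stored price so the controller only ever reads from the database
            DB::table('rates')->where('pair', $pair)->delete();
            DB::table('rates')->insert(array('pair' => $pair, 'price' => $data[1]));
        }

        $this->info('rates updated');
    }
}

Such a command would then be registered with Artisan and run from a cron job, so the web request never waits on the external feed.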

Second, I will implement the action to render the whole price block:

public function pricesPage() {
    $prices = $this->getCurrencyRates();
    return View::make('pricesPage', array('prices' => $prices));        
}

and here is the template:
<h1>Prices here</h1>
<table>
    <thead>
        <tr>
            <th>Currency</th>
            <th>Rate</th>
            <th>status</th>
        </tr>
    </thead>
    <tbody>
        <?php foreach($prices as $currency=>$price_info){?>
        <tr class="price-row">
            <td><?php echo $currency?></td>
            <td data-symbol-price="<?php echo $currency; ?>"><?php echo $price_info['price']; ?></td>
            <td data-symbol-status="<?php echo $currency; ?>"><?php echo $price_info['status']; ?></td>
        </tr>
        <?php }?>
    </tbody>
</table>

<script type="text/javascript">
        var es = new EventSource("<?php echo action('HomeController@pricesValues'); ?>");
        es.addEventListener("message", function(e) {
            arr = JSON.parse(e.data);
            
            for (x in arr) {    	
                $('[data-symbol-price="' + x + '"]').html(arr[x].price);
                $('[data-symbol-status="' + x + '"]').html(arr[x].status);
                //apply some effect on change, like blinking the color of modified cell...
            }
        }, false);
</script>    

And now I will implement the pricesValues() action that will push the data to the browser, as follows:

    /**
     * action to handle streamed response from laravel
     * @return \Symfony\Component\HttpFoundation\StreamedResponse
     */
    public function pricesValues() {

        $response = new Symfony\Component\HttpFoundation\StreamedResponse(function() {
            $old_prices = array();

            while (true) {
                $new_prices = $this->getCurrencyRates();
                $changed_data = $this->getChangedPrices($old_prices, $new_prices);

                if (count($changed_data)) {
                    echo 'data: ' . json_encode($changed_data) . "\n\n";
                    ob_flush();
                    flush();
                }
                sleep(3);
                $old_prices = $new_prices;
            }
        });

        $response->headers->set('Content-Type', 'text/event-stream');
        return $response;
    }
    

    /**
     * comparing old and new prices and return only changed currency rates
     * @param array $old_prices
     * @param array $new_prices
     * @return array
     */
    protected function getChangedPrices($old_prices, $new_prices) {
        $ret = array();
        foreach ($new_prices as $curr => $curr_info) {
            if (!isset($old_prices[$curr])) {
                $ret[$curr]['status'] = '';
                $ret[$curr]['price'] = $curr_info['price'];                
            } elseif ($old_prices[$curr]['price'] != $curr_info['price']) {
                $ret[$curr]['status'] = $old_prices[$curr]['price']>$curr_info['price']?'down':'up';
                $ret[$curr]['price'] = $curr_info['price']; 
            }
        }

        return $ret;
    }

As you may notice, the action that pushes data to the EventSource has the following properties:

  1. The content type of the response is text/event-stream.
  2. The response returned here is of type “StreamedResponse”, which is part of the Symfony HttpFoundation component; this type of response enables the server to return data to the client in chunks. The StreamedResponse object accepts a callback function that outputs the transferred data chunks.
  3. Only the prices that have changed since the latest push are sent back to the browser (comparing the old and new prices is easy since they reside in the same action), so if the prices did not change, nothing is sent back to the browser.
  4. The data returned is prefixed with “data:” and terminated with the “\n\n” characters (see the example message below).
  5. flush() and ob_flush() are called to trigger sending the data back to the browser.
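For illustration, a single pushed message on the stream would look roughly like the following (the JSON payload is made-up data, and the trailing blank line is the “\n\n” terminator):

data: {"EURUSD":{"price":"1.3721","status":"up"},"USDJPY":{"price":"102.15","status":"down"}}
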
For browsers that do not support HTML5 features, you can apply a simple fallback as follows:
<script type="text/javascript">
if(window.EventSource !== undefined){
    // EventSource supported, go ahead...
} else {
    // EventSource not supported,
    // apply the Ajax polling fallback, e.g. the poll() function shown earlier
}
</script>

The final output

Now the live currency rates widget is ready: the widget will auto-refresh prices every 3 seconds, and the server will send only the rates that have changed, so the operation is optimized and no unnecessary requests/responses are exchanged.

* Screenshot of the final live rates component.

Review of Tokyo Ravens

Winter 2014 was a really good anime season; it had a dozen anime shows worth watching, with genres varying between magic, drama, supernatural and others. Among those shows, I really enjoyed watching “Tokyo Ravens” every week and wanted to blog a brief post about it:

Genres: Comedy, School, Super Power, Supernatural.
Season: Winter 2013/2014

Details

This show takes place in Japan, in a world where magic plays a major role: there are laws to regulate the use of magic, schools that teach it, and the Onmyo Agency, which acts as a police force to keep magic issues under control. The main character of the story is Harutora, a young man born into an important, powerful and prestigious Onmyoji family, but he cannot see “spirit power” and as a result cannot be an Onmyo fighter. However, his cousin Natsume, a girl from the head family, causes him to enroll in the main Onmyo school in the Japanese capital, and that is when things begin to change!

My Review

Usually super power anime are not my favorite, but this anime has a really good storyline, and the fighting scenes and music are simply excellent. Every character has his own power and influence in the show, so the story is not centered around a couple of characters like other single-hero stories, and when you think a certain character is not that strong, you will be surprised by his skills (as happened with what I thought about the main character, Harutora).

Screenshots

Tokyo Ravens is one of the few anime shows that made me lose control of myself and watch the last several episodes back to back; the episode themes vary from comedy and simple, funny school life to intense fighting scenes and life-and-death events.
I really enjoyed my time watching it, and I really wish that a second season, or at least an OVA, comes out really soon 🙂

Pagination optimization for Symfony2 & Doctrine

Last week, one of my websites – which launched recently – suffered from major performance problems, although there is little traffic on it. The website is built on Symfony2 and acts as a directory for financial/forex companies. I started investigating the problem, and the reason was quickly identified: the pagination was the culprit!

One of the tables pulled RSS feeds from multiple financial news sources and saved them in our CMS content table, and in about one month the table contained approximately 50,000 rows… the number of rows is relatively modest, but the queries used by the pagination functionality (the knp_paginator bundle) caused me some problems, especially since I am deploying the site on a VPS with 2GB of RAM.

Background on limit/offset performance

Many developers dealing with MySQL misunderstand limit/offset functionality: a query does not necessarily become efficient just because you applied a limit to it. MySQL processes the query conditions and applies the ordering, and only then retrieves the desired number of rows; so internally MySQL may perform a full table scan and use a filesort of the resultset in a file or temporary table before retrieving the desired number of rows. In other words, applying limit/offset comes at a late stage of query execution.

If you are using an index on your columns (especially for the “order by” clause), you will get better performance, but still, when applying a large offset (say 100,000 rows with a limit of 10), MySQL will have to process 100,000 rows and then throw away the unneeded ones just to return your desired 10 rows!

That is why it is problematic to apply pagination to a large dataset in MySQL, and when you are using an ORM you have to make some extra effort to optimize your pagination queries.

Paging Problems in Symfony2 knp_paginator

Usually, when we need to paginate one of our tables in Doctrine ORM with Symfony2, we use knp_paginator, since it is easy to use and provides simple paging functionality with just a couple of lines of code (see the sketch below). However, when I looked more closely at how it performs, I found some points that form performance bottlenecks in the way it operates, especially for large data sets.
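For reference, a typical knp_paginator call inside a controller looks roughly like this (a hedged sketch; the entity alias, template name and page size are assumptions):

public function listAction($page = 1)
{
    $em = $this->getDoctrine()->getManager();
    $query = $em->createQuery(
        'SELECT c FROM WebitCMSBundle:Content c
         WHERE c.isPublished = true AND c.categoryId = :cat
         ORDER BY c.createdAt DESC'
    )->setParameter('cat', 3);

    // knp_paginator builds its own count and limit/offset queries behind the scenes
    $pagination = $this->get('knp_paginator')->paginate($query, $page, 10);

    return $this->render('WebitCMSBundle:Content:list.html.twig', array('pagination' => $pagination));
}

The two queries it generates behind the scenes are exactly the ones examined below.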

For the purpose of clarification, I will be using a cms_content table that has the following structure:

mysql> desc cms_content;
+--------------+--------------+------+-----+---------+----------------+
| Field        | Type         | Null | Key | Default | Extra          |
+--------------+--------------+------+-----+---------+----------------+
| id           | int(11)      | NO   | PRI | NULL    | auto_increment |
| category_id  | int(11)      | NO   | MUL | NULL    |                |
| is_published | tinyint(1)   | NO   | MUL | NULL    |                |
| slug         | varchar(255) | NO   | MUL | NULL    |                |
| created_at   | datetime     | NO   | MUL | NULL    |                |
....
+--------------+--------------+------+-----+---------+----------------+

The columns that I frequently use in the queries are is_published and category_id, and the ordering is usually based on the created_at column.

Counting Query:

In order to get the number of pages available, any paging library constructs a query that counts the results based on the parameters passed; the simplest counting query will look something like this:

SELECT COUNT(id)
FROM cms_content
where
category_id = 3 and
is_published = true
order by created_at desc;

When you run EXPLAIN on this query to check its performance, you will see:
+----+-------------+-------------+------+--------------------------------------+-----------------+---------+-------------+-------+-------------+
| id | select_type | table       | type | possible_keys                        | key             | key_len | ref         | rows  | Extra       |
+----+-------------+-------------+------+--------------------------------------+-----------------+---------+-------------+-------+-------------+
|  1 | SIMPLE      | cms_content | ref  | IDX_A0293FB812469DE2,secondary_index | secondary_index | 5       | const,const | 13972 | Using index |
+----+-------------+-------------+------+--------------------------------------+-----------------+---------+-------------+-------+-------------+

As you can see, this query is fairly optimized, because it uses a covering index (see “Using index” in the Extra cell of the EXPLAIN result), which means that MySQL answers this query by looking only at the index, without reading the full data rows of the table.

Here I have created an index on the frequently used columns, which are is_published, category_id and created_at, to utilize indexing and improve all queries; see the indexes applied:
mysql> show index from cms_content;
+-------------+------------+----------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table       | Non_unique | Key_name             | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------------+------------+----------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| cms_content |          0 | PRIMARY              |            1 | id           | A         |       27620 |     NULL | NULL   |      | BTREE      |         |               |
| cms_content |          1 | IDX_A0293FB812469DE2 |            1 | category_id  | A         |           1 |     NULL | NULL   |      | BTREE      |         |               |
| cms_content |          1 | secondary_index      |            1 | is_published | A         |           1 |     NULL | NULL   |      | BTREE      |         |               |
| cms_content |          1 | secondary_index      |            2 | category_id  | A         |           1 |     NULL | NULL   |      | BTREE      |         |               |
| cms_content |          1 | secondary_index      |            3 | created_at   | A         |       27620 |     NULL | NULL   |      | BTREE      |         |               |
+-------------+------------+----------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
6 rows in set (0.25 sec)

However, when I searched for the count query used by the knp_paginator bundle, I found this complex query:

SELECT 
  COUNT(*) AS dctrn_count 
FROM 
  (
    SELECT 
      DISTINCT id0 
    FROM 
      (
        SELECT 
          c0_.id AS id0, 
          c0_.is_published AS is_published1, 
          c0_.slug AS slug2, 
          c0_.guid AS guid3, 
          c0_.rss_link AS rss_link4, 
          c0_.category_id AS category_id5, 
          c0_.created_at AS created_at6 
        FROM 
          cms_content c0_ 
        WHERE 
          c0_.is_published = true 
          AND c0_.category_id = 3 
        ORDER BY 
          c0_.created_at DESC
      ) dctrn_result
  ) dctrn_table

and here is the explain result for the previous query:
+----+-------------+------------+------+--------------------------------------+----------------------+---------+------+-------+------------------------------+
| id | select_type | table      | type | possible_keys                        | key                  | key_len | ref  | rows  | Extra                        |
+----+-------------+------------+------+--------------------------------------+----------------------+---------+------+-------+------------------------------+
|  1 | PRIMARY     | NULL       | NULL | NULL                                 | NULL                 | NULL    | NULL |  NULL | Select tables optimized away |
|  2 | DERIVED     | <derived3> | ALL  | NULL                                 | NULL                 | NULL    | NULL | 26775 | Using temporary              |
|  3 | DERIVED     | c0_        | ALL  | IDX_A0293FB812469DE2,secondary_index | IDX_A0293FB812469DE2 | 4       |      | 27944 | Using where; Using filesort  |
+----+-------------+------------+------+--------------------------------------+----------------------+---------+------+-------+------------------------------+

As shown, the query is not optimized at all! The use of a subquery eliminated the use of the indexes and required a filesort to process its result; there is also an unnecessary DISTINCT keyword that imposes some overhead on the query.
On my machine this query took 1.04 seconds to execute compared with 0.01 seconds for the previous count query, although they have the exact same logic.

Fetching Data Query:

The query that retrieves the pagination data is usually implemented by simply adding limit/offset clauses with the desired parameters. As I explained earlier, this can be a performance problem for a large number of pages, because it causes MySQL to process too many rows and then discard most of them.
For example, here is the default query to retrieve data of the page number 5,000 of my cms_content table:

SELECT 
  DISTINCT c0_.id AS id0, 
  c0_.created_at AS created_at1,
  c0_.slug as slug2
FROM 
  cms_content c0_ 
WHERE 
  c0_.is_published = ? 
  AND c0_.category_id = ? 
ORDER BY 
  c0_.created_at DESC 
LIMIT 
  5 OFFSET 24995

This query may seem simple, but it will cause MySQL to process a large number of rows and perform a filesort, and it will lead to heavy random I/O, especially if you want to retrieve more data columns such as title, guid, etc.; this cost increases as the page number gets larger.

One of the best ways to optimize such queries is by using a deferred join, that is, joining with a smaller resultset, as follows:

select id, created_at, slug
from cms_content
inner join
(
    select id
    from cms_content
    where category_id = 3 and is_published = 1
    order by created_at desc
    limit 5 offset 24995
) as small_tbl
using (id)

This method makes MySQL read as little data as possible: if you check the sub-query, it uses a covering index and retrieves a small resultset at very low cost (in I/O operations); then the full data rows are fetched only for the small number of IDs returned by the sub-query.

Despite how this query is structured, it is a very efficient one because it minimizes the cost of I/O operations. You can verify that by checking last_query_cost after executing it and comparing it with the other approach:
show status like 'last_query_cost';

Doctrine Code

In order to apply the optimized pagination queries in my Symfony project, I had to write my own code, and remove knp_paginator calls.
First, I wrote a method to get the count of results used in paging, placing it in the ContentRepository class:

    public function getContentCountByCategory($cat_id){
        
        $qb = $this->createQueryBuilder('c')
                ->select('count(c.id)')                                
                ->where('c.categoryId = :category_id')
                ->andWhere('c.isPublished = :published')
                ->setParameter('category_id', $cat_id)
                ->setParameter('published', true)
                ;
        
        return $qb->getQuery()->getSingleScalarResult();                
    }

this method produces an optimized version of the count query.

Second, I had to write a method to retrieve results with a deferred join. I tried to use the query builder, but unfortunately Doctrine does not support subqueries in the “from” clause, so I wrote a native SQL query with a ResultSetMapping, as follows:

    public function getContentQueryPyCategory($cat_id, $lang, $page=1, $limit=10) {

        $offset = ($page-1)*$limit;
        
        $rsm = new ResultSetMapping(); // requires: use Doctrine\ORM\Query\ResultSetMapping;
        $rsm->addEntityResult('Webit\CMSBundle\Entity\Content', 'c');
        $rsm->addFieldResult('c', 'id', 'id');
        $rsm->addFieldResult('c', 'created_at', 'createdAt');
        $rsm->addFieldResult('c', 'slug', 'slug');
        
        $query = $this->getEntityManager()->createNativeQuery("select id, created_at, slug from cms_content inner join "
                . "(select id from cms_content where category_id = ? and is_published=? order by created_at desc limit ? offset ?) as small_tbl"
                . " using (id)", $rsm);
        $query->setParameter(1, $cat_id);
        $query->setParameter(2, true);
        $query->setParameter(3, $limit);
        $query->setParameter(4, $offset);
        
        
        return $query->getResult();
    }

This method produces an efficient query that reduces I/O operations inside MySQL, and the query cost stays minimal even as the number of pages increases.
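To tie the two repository methods together, a controller action could use them roughly as follows (a minimal sketch; the route, entity alias, template name and page size are assumptions):

public function categoryAction($cat_id, $page = 1)
{
    $limit = 10;
    $repo = $this->getDoctrine()->getRepository('WebitCMSBundle:Content');

    // optimized count query, used to build the pager links
    $total = $repo->getContentCountByCategory($cat_id);
    $pages_count = (int) ceil($total / $limit);

    // deferred-join query for the current page of results
    // (null stands in for the $lang parameter of the method signature above)
    $contents = $repo->getContentQueryPyCategory($cat_id, null, $page, $limit);

    return $this->render('WebitCMSBundle:Content:category.html.twig', array(
        'contents'    => $contents,
        'page'        => $page,
        'pages_count' => $pages_count,
    ));
}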

After implementing this solution, the queries used to produce page number 3,000 took about 46ms, compared to approximately 440ms with the original paginator code; more importantly, MySQL's memory is better utilized because the queries use the indexes in a more efficient way.

Conclusion

In this post, I discussed the performance problems of pagination in a Doctrine & Symfony2 project, and applied some techniques that handle limit/offset operations in pagination more efficiently. There are also other workarounds that give good performance, like the one described in this good post. This piece of code made my application perform better; I hope it helps others who face a similar situation.

Look at materialized views in PostgreSQL 9.3

I needed to use materialized views for the first time in order to handle performance problems that our team encountered when we were developing a reporting tool for one of our clients.

The database, which was MySQL 5.1, already contained several views, and this caused us a lot of performance bottlenecks, especially as the data continued to grow. It took me some time to create and maintain materialized views to replace the normal views so we could get good performance, because MySQL does not have an out-of-the-box materialized view feature.

When I started to learn PostgreSQL a couple of weeks ago, the most interesting thing that attracted me was its support for creating and maintaining materialized views.

Database Views

Most people are familiar with the concept of database views, which create a virtual table from the result of a select query; a view is usually used for:

  • Security purposes: in case you want to give a certain user permission on a subset of table data rather than the whole table.
  • Hiding the complexity of a query: by encapsulating all the complex query parts in the view definition.

But views can be a major performance bottleneck in databases such as MySQL – especially when built upon other views – so the concept of materialized views came to light in order to keep the advantages of views while eliminating the performance troubles they may cause.

What are materialized views and what are their benefits?

The definition of a materialized view (abbreviated as MV) in Wikipedia is:

A materialized view is a database object that contains the results of a query. For example, it may be a local copy of data located remotely, or may be a subset of the rows and/or columns of a table or join result, or may be a summary based on aggregations of a table’s data

So, in short, materialized views are objects stored in the database (like an actual table) that hold the precomputed result of a select query as their data. Being an object actually stored on disk allows queries based on it to benefit from cached results without re-executing the complex underlying query each time you use the materialized view, and, more importantly, allows you to build indices on it.

The concept in theory is simple, and an MV can be built initially (as a simple table) using a simple query like this:

create table test_mv as
    select *
    from test
    where
        col1 = 1
        -- ... any more complicated conditions
    group by col_name

But the most important issue is how to get updated data into the materialized view (refreshing the MV). Materialized views can be refreshed either by a complete refresh (executing the underlying query again) or incrementally (executing the query only against the subset of records that changed recently).

Unfortunately, MySQL does not have built-in materialized view functionality to date, although there are some interesting open source projects like Flexviews.

Example database structure

In order to try out materialized views in PostgreSQL, I migrated a subset of my testing data to Postgres; here is the structure:

I have a users table that has the following structure:

mt4data=> \d+ users;
                                                    Table "public.users"
       Column       |            Type             |              Modifiers               | Storage  | Stats target | Description
--------------------+-----------------------------+--------------------------------------+----------+--------------+-------------
 login              | numeric                     | not null                             | main     |              |
 GROUP              | character(16)               | not null                             | extended |              |
 zipcode            | character(16)               | not null                             | extended |              |
 address            | character(128)              | not null                             | extended |              |
 phone              | character(32)               | not null                             | extended |              |
 email              | character(48)               | not null                             | extended |              |
 comment            | character(64)               | not null                             | extended |              |
 id                 | character(32)               | not null                             | extended |              |
 agent_account      | numeric                     | not null                             | main     |              | 
 modify_time        | timestamp without time zone | not null                             | plain    |              |
-- Other columns

And here is the structure of the trades table, which holds all trades done by users in the trading system; the sample data that I tested on for this article contained about 700,000 rows:
mt4data=> \d+ trades;
                                              Table "public.trades"
      Column      |            Type             |          Modifiers          | Storage  | Stats target | Description
------------------+-----------------------------+-----------------------------+----------+--------------+-------------
 ticket           | numeric                     | not null                    | main     |              |
 login            | numeric                     | not null                    | main     |              |
 symbol           | character(16)               | not null                    | extended |              |
 cmd              | numeric                     | not null                    | main     |              |
 volume           | numeric                     | not null                    | main     |              |
 open_time        | timestamp without time zone | not null                    | plain    |              |
 open_price       | double precision            | not null                    | plain    |              |
 close_time       | timestamp without time zone | not null                    | plain    |              |
 profit           | double precision            | not null                    | plain    |              |
 comment          | character(32)               | not null                    | extended |              |
 modify_time      | timestamp without time zone | not null                    | plain    |              |

Now, I want to generate some reports about a special type of trades, which are “deposits”; here is the query:

select trades.TICKET AS ticket,
    trades.LOGIN AS login,
    users."GROUP" AS group_name,
    trades.VOLUME AS volume,
    users.AGENT_ACCOUNT AS agent_account,
    users.ZIPCODE AS zipcode,
    users.STATUS AS status,
    trades.CLOSE_TIME AS close_time,
    trades.PROFIT AS amount,
    trades.COMMENT AS comment ,
    trades.MODIFY_TIME
    from
    trades, users
where
    (users.LOGIN = trades.LOGIN)
        and (trades.CMD = 6)
        and (trades.PROFIT > 0)
and (
 trades.comment like 'DPST%' or
 trades.comment like 'Bonus%' or
 trades.comment like 'Credit%' or
 trades.comment like 'Deposit%'
)

As you can see, the query used to retrieve this data contains many conditions and can be troublesome to execute. Here is the complexity of the query as shown in the result of EXPLAIN:
 Nested Loop  (cost=4874.69..67486.46 rows=42 width=127)
   ->  Bitmap Heap Scan on trades  (cost=4874.41..67189.44 rows=42 width=72)
         Recheck Cond: (cmd = 6::numeric)
         Filter: ((profit > 0::double precision) AND ((comment ~~ 'DPST%'::text) OR (comment ~~ 'Bonus%'::text) OR (comment ~~ 'Credit%'::text) OR (comment ~~ 'Deposit
%'::text)))
         ->  Bitmap Index Scan on i_cmd  (cost=0.00..4874.40 rows=70186 width=0)
               Index Cond: (cmd = 6::numeric)
   ->  Index Scan using login_index on users  (cost=0.28..7.06 rows=1 width=60)
         Index Cond: (login = trades.login)
(8 rows)

The query needed about 25 seconds to execute on my data sample, so on the actual data it will take much more time.

In Postgres 9.3, I can create the materialized view by issuing:

create materialized view deposits as 
select trades.TICKET AS ticket,
    trades.LOGIN AS login,
    users."GROUP" AS group_name,
    trades.VOLUME AS volume,
    users.AGENT_ACCOUNT AS agent_account,
    users.ZIPCODE AS zipcode,
    users.STATUS AS status,
    trades.CLOSE_TIME AS close_time,
    trades.PROFIT AS amount,
    trades.COMMENT AS comment ,
    trades.MODIFY_TIME
    from
    trades, users
where
    (users.LOGIN = trades.LOGIN)
        and (trades.CMD = 6)
        and (trades.PROFIT > 0)
and (
 trades.comment like 'DPST%' or
 trades.comment like 'Bonus%' or
 trades.comment like 'Credit%' or
 trades.comment like 'Deposit%'
)

The initial build of my materialized view took almost 20 seconds on my sample data (which is roughly the same as the initial build on MySQL).

Here I added my required indices on the new MV:

mt4data=> create index login_deposit_index on deposits  using btree(login);
CREATE INDEX
mt4data=> create unique index ticket_deposit_index on deposits  using btree(ticket);
CREATE INDEX                              
mt4data=> create index close_time_index on deposits  using btree(close_time);
CREATE INDEX

The number of records in the deposits materialized view is:
mt4data=> select count(*) from deposits;
 count
-------
   176
(1 row)

Now I can issue some queries on the materialized view at low cost, like this query that gets the number and total amount of deposits per group:
mt4data=> explain select group_name, count(*), sum(amount) from deposits group by group_name;
                           QUERY PLAN
-----------------------------------------------------------------
 HashAggregate  (cost=7.08..7.60 rows=52 width=25)
   ->  Seq Scan on deposits  (cost=0.00..5.76 rows=176 width=25)
(2 rows)

As the EXPLAIN command result shows, the cost indicates fast execution of the query.

The biggest advantage of materialized views in PostgreSQL 9.3 is the refresh of the MV data. Prior to this version, the refresh process was done either by monitoring changes on the underlying tables using triggers (which can add performance overhead in environments with a high write rate) or by writing a custom script or stored procedure that checks for modifications based on timestamps; this can be troublesome and a little complicated depending on your table and query structure.

After I modified the underlying trades table by inserting another chunk of data, I refreshed the materialized view using the “refresh materialized view” command, as follows:

mt4data=> refresh materialized view deposits;
REFRESH MATERIALIZED VIEW

now, the materialized view is updated, and I can see the new number of records:
mt4data=> select count(*) from deposits;
 count
-------
   731
(1 row)

The rebuild of the materialized view took about 3-5 seconds on my machine, which is fast (compared to a custom procedure I wrote in MySQL to perform similar functionality).

According to the PostgreSQL documentation, the refresh performed on an MV is currently a complete refresh, meaning that the query used in the MV definition is re-executed and the data is re-filled; incremental refresh is expected in future releases (or as a patch), which should make the refresh process much more efficient.
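Since the refresh is currently a full rebuild, a simple way to keep the view reasonably fresh is to run it from a scheduled script right after new data is loaded; here is a minimal PHP/PDO sketch (the connection parameters and file name are hypothetical):

<?php
// refresh_deposits.php - run from cron after the trades table has been loaded
$pdo = new \PDO('pgsql:host=localhost;dbname=mt4data', 'db_user', 'db_pass');
$pdo->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);

// re-runs the query stored in the materialized view definition
$pdo->exec('REFRESH MATERIALIZED VIEW deposits');

echo "deposits materialized view refreshed" . chr(10);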

Finally,

As you can see in this post, PostgreSQL 9.3 offers a simple and efficient way to handle materialized views, although more performance improvements can be attained once the incremental refresh feature is implemented. I hope MySQL offers a similar feature in the near future, since it is a very useful feature frequently needed by database developers and administrators.

PostgreSQL quick start with PHP

Last week, I accidentally stumbled upon a blog post describing the features of the new version of PostgreSQL, and I found it pretty interesting: a couple of useful features that do not exist in MySQL are now implemented in PostgreSQL 9.3 (I am especially interested in materialized views), so I wanted to learn more about this database.

Unfortunately, I had not used Postgres before (although I have several years of experience as a MySQL developer and administrator), so I had to learn the basics of Postgres, and I wanted to share this experience:

Installing PostgreSQL

In order to get the latest version on my CentOS machine, I compiled Postgres from source as follows:

First, I got the source files for the desired version from the Postgres site (I used v9.3.2):

wget http://ftp.postgresql.org/pub/source/v9.3.2/postgresql-9.3.2.tar.bz2

then, uncompress the file:
tar xvjf postgresql-9.3.2.tar.bz2

then, compile the source files using this simple command inside the extracted folder:
./configure && make && make install

Now the Postgres files should be placed at /usr/local/pgsql.
Postgres operates by default under a user named postgres, so we should create that user, create the data directory, and assign ownership of the folder to the created user:
adduser postgres 
mkdir /usr/local/pgsql/data 
chown postgres:postgres /usr/local/pgsql/data

Then we should initialize the data storage (the “database cluster”) for the server by calling initdb; but first I switched to the postgres user, because you cannot run this command as root:
[root@sub bin]# su - postgres
-bash-4.1$ /usr/local/pgsql/bin/initdb -D  /usr/local/pgsql/data/

A database cluster is the collection of databases that Postgres manages. Creating the database cluster fills the data directory with database files and creates the default databases, such as postgres and template1.

now, I can start postgres server by typing:

/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
The -D parameter specifies the data directory location, which also contains the Postgres configuration file, named postgresql.conf by default (analogous to my.cnf in MySQL).

Now Postgres server is running and we can begin working with sql commands.

PostgreSQL Client

Now let us enter the Postgres client by executing the psql program, which is the interactive terminal for Postgres:

/usr/local/pgsql/bin/psql -hlocalhost -U postgres -w
Here I am connecting to the database using its superuser, “postgres”.
I will issue the \list command to see the installed databases:
postgres=# \list
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)
As shown in the snippet, there are three databases here:

  • postgres database: the default database for Postgres (similar to the mysql database in MySQL).
  • template0 and template1: two template databases.
Template databases are a very useful feature in Postgres: they enable the administrator to create a database by copying all the content from another (template) database; by default, any newly created database uses template1 as its template.

I created a new database:

postgres=# create database test_pg;
CREATE DATABASE

If you want to create a database using a template other than the default, you can use the template keyword at the end of the create command, like this:
create database test_pg2 template template_database;

Now if you run the \list command, you will see the new database there:

postgres=# \list
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test_pg   | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
           |          |          |             |             | postgres=CTc/postgres+
           |          |          |             |             | test_usr=CTc/postgres
(4 rows)

Then, I created a new user and granted it privileges on that database:

postgres=# create user test_usr with password 'test_pass';
CREATE ROLE
postgres=# grant all privileges on database test_pg to test_usr;
GRANT

Now, I will exit (by typing \q), then connect to the new database using the user I have created in the previous step:
-bash-4.1$ /usr/local/pgsql/bin/psql test_pg -Utest_usr -W
Password for user test_usr:
psql (9.3.2)
Type "help" for help.

test_pg=>

Next, create a table to use for testing:
test_pg=# create table test_tbl( id serial primary key, name varchar(255) );
CREATE TABLE

The serial keyword is similar to the auto-increment attribute in other databases and is used to create a unique identifier for the table records.

Unlike MySQL, Postgres does not have different types of storage engines (like MyISAM or InnoDB); it has a single, unified storage engine.

Then I inserted some sample data into the table:

test_pg=# insert into test_tbl(name) values('test1'), ('test2'), ('test3');
INSERT 0 3

PHP Script

I will use PDO to test PHP connectivity to Postgres, but first the php-pgsql package must be installed:

yum install php-pgsql.x86_64

then I wrote this simple script:
<?php
try{
   $dbh = new \PDO('pgsql:host=localhost;dbname=test_pg', 'test_usr', 'test_pass');
}catch(Exception $ex){
   die('Error in connecting: '.$ex->getMessage());
}
$stmt = $dbh->prepare("select * from test_tbl");
$stmt->execute();

echo $stmt->rowCount(). " records fetched.".chr(10);
while($row = $stmt->fetch()){
    echo "id:".$row['id'].', name:'.$row['name'].chr(10);
}

now you can run the script to see the results:
[root@sub pg_test]# php -f test.php
3 records fetched.
id:1, name:test1
id:2, name:test2
id:3, name:test3

Now you can use Postgres as your data store in a very similar way to MySQL.
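As a further example, here is a hedged sketch of inserting a new row with a prepared statement and reading back the generated id using Postgres' RETURNING clause (reusing the test_tbl table and the credentials from above):

<?php
$dbh = new \PDO('pgsql:host=localhost;dbname=test_pg', 'test_usr', 'test_pass');
$dbh->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);

// parameterized insert; RETURNING hands back the value generated by the serial column
$stmt = $dbh->prepare("insert into test_tbl(name) values(:name) returning id");
$stmt->execute(array(':name' => 'test4'));

echo "inserted row with id: " . $stmt->fetchColumn() . chr(10);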

Conclusion

In this post, I gave a quick introduction to the Postgres database, from installation and creating databases and roles to writing a simple PHP script that retrieves data from Postgres. PostgreSQL has great features that I intend to learn more about in order to get the most value out of it.

Simple web spider with PHP Goutte

Last week we got an SEO analysis of one of our portals; the analysis included thorough statistics about website SEO measures, like missing and duplicate <title>, <h1> and meta tags, broken and invalid links, duplicate content percentage, etc. It appears that the SEO agency that prepared the analysis used some sort of crawler to extract this information.

I liked that crawler idea and wanted to implement it in PHP. After some reading about web scraping and Goutte, I was able to write a similar web spider that extracts the needed information, and I wanted to share it in this post.

About web scraping and Goutte

Web scraping is a technique for extracting information from websites; it is very close to web indexing, because the bot or web crawler that search engines use performs some sort of scraping of web documents by following links, analyzing keywords, meta tags and URLs, and ranking pages according to relevancy, popularity, engagement, etc.

Goutte is a screen scraping and web crawling library for PHP; it provides an API to crawl websites and extract data from HTML/XML responses. Goutte is a wrapper around Guzzle and several Symfony components, such as BrowserKit, DOMCrawler and CSSSelector.

Here is a small description about some libraries that Goutte wraps:

    1. Guzzle: a framework for building RESTful web service clients; it provides a simple interface for performing HTTP requests via cURL, along with other important features like persistent connections and streaming request and response bodies.
    2. BrowserKit: simulates the behaviour of a web browser, providing an abstract HTTP layer (request, response, cookies, etc.).
    3. DOMCrawler: provides easy methods for DOM navigation and manipulation.
    4. CSSSelector: provides an API to select elements using the same selectors used for CSS (it becomes extremely easy to select elements when combined with DOMCrawler).
* These are the main components I am interested in for this post; however, other components, like Finder and Process, are also used in Goutte.

 

Basic usage

Once you download Goutte (from here), you should define a Client object; the client is used to send requests to a website and returns a crawler object, as in the snippet below:

require_once 'goutte.phar';
use Goutte\Client;

$url_to_traverse = 'http://zrashwani.com';

$client = new Client();
$crawler = $client->request('GET', $url_to_traverse);

Here I declared a Client object and called request() to simulate a browser requesting the URL “http://zrashwani.com” using the GET HTTP method.
The request() method returns an object of type Symfony\Component\DomCrawler\Crawler, which can be used to select elements from the fetched HTML response.

But before processing the document, let's ensure that this URL is a valid link, meaning that it returned a (200) response code, using:

$status_code = $client->getResponse()->getStatus();
if($status_code==200){
    //process the documents
}

The $client->getResponse() method returns a BrowserKit Response object that contains information about the response the client got, like the headers (including the status code used here), the response content, etc.

In order to extract the document title, you can filter either by XPath or by CSS selector to get the value of your target HTML DOM element:

$crawler->filterXPath('html/head/title')->text()
// $crawler->filter('title')->text()

In order to get the number of <h1> tags, and the contents of those tags in the page:

$h1_count = $crawler->filter('h1')->count();
$h1_contents = array();
if ($h1_count) {
    // note: $h1_contents is imported by reference (&) so the closure can fill it
    $crawler->filter('h1')->each(function(Symfony\Component\DomCrawler\Crawler $node, $i) use(&$h1_contents) {
        $h1_contents[$i] = trim($node->text());
    });
}

For SEO purposes, there should be exactly one h1 tag in a page, and its content should contain the main keywords of the page. Here the each() function is quite useful: it can be used to loop over all matching elements. each() takes a closure as a parameter to perform some callback operation on each node.

PHP closures are anonymous functions introduced in PHP 5.3; they are very useful for callback functionality, and you can refer to the PHP manual if you are new to closures.
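One detail worth noting: variables imported into a closure with use are copied by value by default, so to collect results inside the closure you must import them by reference, as in this tiny sketch:

<?php
$collected = array();

// without the & the closure would only modify its own copy of $collected
$append = function($value) use (&$collected) {
    $collected[] = $value;
};

$append('first');
$append('second');
print_r($collected); // Array ( [0] => first [1] => second )

This is why the h1 example above imports $h1_contents with use(&$h1_contents).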

Application goals

After this brief introduction, I can begin explaining the spider functionality: this crawler will detect broken/invalid links in the website, along with extracting the <h1> and <title> tag values that are important for the SEO issue I have.

My simple crawler implements a depth-limited search, in order to avoid crawling large amounts of data, and works as follows:

      1. Read the initial URL to crawl along with the depth of links to be visited.
      2. Crawl the URL and check the response code to determine that the link is not broken, then add it to an array containing the site links.
      3. Extract the <title> and <h1> tag contents in order to use their values later for reporting.
      4. Loop over all <a> tags inside the fetched document to extract their href attribute along with other data.
      5. Check that the depth limit is not reached, that the current href has not been visited before, and that the link URL does not belong to an external site.
      6. Crawl the child link by repeating steps (2-5).
      7. Stop when the link depth limit is reached.

 

These steps are implemented in the SimpleCrawler class that I wrote (it is still a basic version and should be optimized further):

<?php

require_once 'goutte.phar';

use Goutte\Client;

class simpleCrawler {

    private $base_url;
    private $site_links;
    private $max_depth;

    public function __construct($base_url, $max_depth = 10) {
        if (strpos($base_url, 'http') === false) { // http protocol not included, prepend it to the base url
            $base_url = 'http://' . $base_url;
        }

        $this->base_url = $base_url;
        $this->site_links = array();
        $this->max_depth = $max_depth;
    }

    /**
     * checks the uri if can be crawled or not
     * in order to prevent links like "javascript:void(0)" or "#something" from being crawled again
     * @param string $uri
     * @return boolean
     */
    protected function checkIfCrawlable($uri) {
        if (empty($uri)) {
            return false;
        }

        $stop_links = array(//returned deadlinks
            '@^javascript\:void\(0\)$@',
            '@^#.*@',
        );

        foreach ($stop_links as $ptrn) {
            if (preg_match($ptrn, $uri)) {
                return false;
            }
        }

        return true;
    }

    /**
     * normalize link before visiting it
     * currently just remove url hash from the string
     * @param string $uri
     * @return string
     */
    protected function normalizeLink($uri) {
        $uri = preg_replace('@#.*$@', '', $uri);

        return $uri;
    }

    /**
     * initiate the crawling mechanism on all links
     * @param string $url_to_traverse
     */
    public function traverse($url_to_traverse = null) {
        if (is_null($url_to_traverse)) {
            $url_to_traverse = $this->base_url;

            $this->site_links[$url_to_traverse] = array(//initialize first element in the site_links 
                'links_text' => array("BASE_URL"),
                'absolute_url' => $url_to_traverse,
                'frequency' => 1,
                'visited' => false,
                'external_link' => false,
                'original_urls' => array($url_to_traverse),
            );
        }

        $this->_traverseSingle($url_to_traverse, $this->max_depth);
    }

    /**
     * crawling single url after checking the depth value
     * @param string $url_to_traverse
     * @param int $depth
     */
    protected function _traverseSingle($url_to_traverse, $depth) {
        //echo $url_to_traverse . chr(10);

        try {
            $client = new Client();
            $crawler = $client->request('GET', $url_to_traverse);

            $status_code = $client->getResponse()->getStatus();
            $this->site_links[$url_to_traverse]['status_code'] = $status_code;

            if ($status_code == 200) { // valid url and not reached depth limit yet            
                $content_type = $client->getResponse()->getHeader('Content-Type');                
                if (strpos($content_type, 'text/html') !== false) { //traverse children in case the response in HTML document 
                   $this->extractTitleInfo($crawler, $url_to_traverse);

                   $current_links = array();
                   if (@$this->site_links[$url_to_traverse]['external_link'] == false) { // for internal uris, get all links inside
                      $current_links = $this->extractLinksInfo($crawler, $url_to_traverse);
                   }

                   $this->site_links[$url_to_traverse]['visited'] = true; // mark current url as visited
                   $this->traverseChildLinks($current_links, $depth - 1);
                }
            }
            
        } catch (Guzzle\Http\Exception\CurlException $ex) {
            error_log("CURL exception: " . $url_to_traverse);
            $this->site_links[$url_to_traverse]['status_code'] = '404';
        } catch (Exception $ex) {
            error_log("error retrieving data from link: " . $url_to_traverse);
            $this->site_links[$url_to_traverse]['status_code'] = '404';
        }
    }

    /**
     * after checking the depth limit of the links array passed
     * check if the link if the link is not visited/traversed yet, in order to traverse
     * @param array $current_links
     * @param int $depth     
     */
    protected function traverseChildLinks($current_links, $depth) {
        if ($depth == 0) {
            return;
        }

        foreach ($current_links as $uri => $info) {
            if (!isset($this->site_links[$uri])) {
                $this->site_links[$uri] = $info;
            } else{
                $this->site_links[$uri]['original_urls'] = isset($this->site_links[$uri]['original_urls'])?array_merge($this->site_links[$uri]['original_urls'], $info['original_urls']):$info['original_urls'];
                $this->site_links[$uri]['links_text'] = isset($this->site_links[$uri]['links_text'])?array_merge($this->site_links[$uri]['links_text'], $info['links_text']):$info['links_text'];
                if(@$this->site_links[$uri]['visited']) { //already visited link)
                    $this->site_links[$uri]['frequency'] = @$this->site_links[$uri]['frequency'] + @$info['frequency'];
                }
            }

            if (!empty($uri) && 
                !$this->site_links[$uri]['visited'] && 
                !isset($this->site_links[$uri]['dont_visit'])
                ) { //traverse those that not visited yet                
                $this->_traverseSingle($this->normalizeLink($current_links[$uri]['absolute_url']), $depth);
            }
        }
    }

    /**
     * extracting all <a> tags in the crawled document, 
     * and return an array containing information about links like: uri, absolute_url, frequency in document
     * @param Symfony\Component\DomCrawler\Crawler $crawler
     * @param string $url_to_traverse
     * @return array
     */
    protected function extractLinksInfo(Symfony\Component\DomCrawler\Crawler &$crawler, $url_to_traverse) {
        $current_links = array();
        $crawler->filter('a')->each(function(Symfony\Component\DomCrawler\Crawler $node, $i) use (&$current_links) {
                    $node_text = trim($node->text());
                    $node_url = $node->attr('href');
                    $hash = $this->normalizeLink($node_url);

                    if (!isset($this->site_links[$hash])) {
                        $current_links[$hash]['original_urls'][$node_url] = $node_url;
                        $current_links[$hash]['links_text'][$node_text] = $node_text;

                        if ($this->checkIfCrawlable($node_url)) {
                            if (!preg_match("@^http(s)?@", $node_url)) { // relative link, prepend the base url
                                $current_links[$hash]['absolute_url'] = $this->base_url . $node_url;
                            } else {
                                $current_links[$hash]['absolute_url'] = $node_url;
                            }
                        }

                        if (!$this->checkIfCrawlable($node_url)) { // links that cannot be crawled are stored but marked not to be visited
                            $current_links[$hash]['dont_visit'] = true;
                            $current_links[$hash]['external_link'] = false;
                        } elseif ($this->checkIfExternal($current_links[$hash]['absolute_url'])) { // mark the url as external
                            $current_links[$hash]['external_link'] = true;
                        } else {
                            $current_links[$hash]['external_link'] = false;
                        }
                        $current_links[$hash]['visited'] = false;

                        // increase the counter each time the same link appears in the page
                        $current_links[$hash]['frequency'] = isset($current_links[$hash]['frequency']) ? $current_links[$hash]['frequency'] + 1 : 1;
                    }
                    
                });

        if (isset($current_links[$url_to_traverse])) { // if page is linked to itself, ex. homepage
            $current_links[$url_to_traverse]['visited'] = true; // avoid cyclic loop                
        }
        return $current_links;
    }

    /**
     * extract information about document title, and h1
     * @param Symfony\Component\DomCrawler\Crawler $crawler
     * @param string $url
     */
    protected function extractTitleInfo(Symfony\Component\DomCrawler\Crawler &$crawler, $url) {
        $this->site_links[$url]['title'] = trim($crawler->filterXPath('html/head/title')->text());

        $h1_count = $crawler->filter('h1')->count();
        $this->site_links[$url]['h1_count'] = $h1_count;
        $this->site_links[$url]['h1_contents'] = array();

        if ($h1_count) {
            $crawler->filter('h1')->each(function(Symfony\Component\DomCrawler\Crawler $node, $i) use($url) {
                        $this->site_links[$url]['h1_contents'][$i] = trim($node->text());
                    });
        }
    }

    /**
     * getting information about links crawled
     * @return array
     */
    public function getLinksInfo() {
        return $this->site_links;
    }

    /**
     * check if the link leads to external site or not
     * @param string $url
     * @return boolean
     */
    public function checkIfExternal($url) {
        $base_url_trimmed = str_replace(array('http://', 'https://'), '', $this->base_url);

        if (preg_match("@http(s)?\://$base_url_trimmed@", $url)) { // base url is the first portion of the url, so it is an internal link
            return false;
        } else {
            return true;
        }
    }

}

?>

You can try out this class as follows:

$simple_crawler = new simpleCrawler($url_to_crawl, $depth);    
$simple_crawler->traverse();    
$links_data = $simple_crawler->getLinksInfo();
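
For reference, each element of the returned array is keyed by the normalized link and, judging from the class code above, carries roughly the fields shown below; this is only an illustrative sketch with made-up values:

// illustrative sketch of one getLinksInfo() entry (keys come from the class above, values are made up)
$links_data['http://zrashwani.com/'] = array(
    'original_urls' => array('/' => '/'),          // href attributes as they appeared in the <a> tags
    'links_text'    => array('Home' => 'Home'),    // anchor texts pointing to this link
    'absolute_url'  => 'http://zrashwani.com/',
    'external_link' => false,
    'visited'       => true,
    'frequency'     => 1,
    'status_code'   => 200,
    'title'         => 'Z.Rashwani Blog',
    'h1_count'      => 1,
    'h1_contents'   => array(0 => 'Z.Rashwani Blog'),
);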

The getLinksInfo() method returns an associative array containing information about each page crawled, such as the page url, <title> and <h1> tag contents, status_code…etc. You can store these results any way you like; I prefer MySQL for simplicity, so that I can get the desired results using queries, so I created the pages_crawled table as follows:

CREATE TABLE `pages_crawled` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `url` varchar(255) DEFAULT NULL,
  `frequency` int(11) unsigned DEFAULT NULL,
  `title` varchar(255) DEFAULT NULL,
  `status_code` int(11) DEFAULT NULL,
  `h1_count` int(11) unsigned DEFAULT NULL,
  `h1_content` text,
  `source_link_text` varchar(255) DEFAULT NULL,
  `original_urls` text,
  `is_external` tinyint(1) DEFAULT '0',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=37 DEFAULT CHARSET=utf8

and here is the script I use to store the traversed links into the MySQL table:

<?php 
error_reporting(E_ALL);
set_time_limit(300);
include_once ('../src/SimpleCrawler.php');

$url_to_crawl = $argv[1];
$depth = isset($argv[2])?$argv[2]:3;

if($url_to_crawl){
    
    echo "Begin crawling ".$url_to_crawl.' with links in depth '.$depth.chr(10);
    
    $start_time = time();    
    $simple_crawler = new simpleCrawler($url_to_crawl, $depth);    
    $simple_crawler->traverse();    
    $links_data = $simple_crawler->getLinksInfo();
       
    $end_time = time();
    
    $duration = $end_time - $start_time;
    echo 'crawling approximate duration, '.$duration.' seconds'.chr(10);
    echo count($links_data)." unique links found".chr(10);
    
    mysql_connect('localhost', 'root', 'root');
    mysql_select_db('crawler_database');
    foreach($links_data as $uri=>$info){
        
        if(!isset($info['status_code'])){
            $info['status_code']=000;//tmp
        }
        
        // join multi-valued fields and escape textual values so quotes don't break the insert statement
        $h1_contents = mysql_real_escape_string(implode("\r\n", isset($info['h1_contents'])?$info['h1_contents']:array() ));
        $original_urls = mysql_real_escape_string(implode("\r\n", isset($info['original_urls'])?$info['original_urls']:array() ));
        $links_text = mysql_real_escape_string(implode("\r\n",  isset($info['links_text'])?$info['links_text']:array() ));
        $is_external = $info['external_link']?'1':'0';
        $title = mysql_real_escape_string(@$info['title']);
        $h1_count = isset($info['h1_count'])?$info['h1_count']:0;
        
        $sql_query = "insert into pages_crawled(url, frequency, status_code, is_external, title, h1_count, h1_content, source_link_text, original_urls)
values('$uri', {$info['frequency']}, {$info['status_code']}, {$is_external}, '{$title}', {$h1_count}, '$h1_contents', '$links_text', '$original_urls')";
        
        mysql_query($sql_query) or die($sql_query);
    }
}

 

Running the spider

Now let me try out the spider on my blog url, with a link depth of 2:

C:\xampp\htdocs\Goutte\web>php -f test.php zrashwani.com 2

Now I can get the important information that I need with simple SQL queries on the pages_crawled table, as follows:

mysql> select count(*) from pages_crawled where h1_count >1;
+----------+
| count(*) |
+----------+
|       30 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) as c, title from pages_crawled group by title having c>1;

+---+----------------------------------------------------------+
| c | title                                                    |
+---+----------------------------------------------------------+
| 2 | Z.Rashwani Blog | I write here whatever comes to my mind |
+---+----------------------------------------------------------+
1 row in set (0.02 sec)

In the first query, I returned the number of pages with duplicate h1 tags (I found a lot, so I will consider changing the HTML structure of my blog a little bit);
in the second one, I returned the duplicated page titles.
Now we can get many other statistics on the crawled pages using the information we collected.
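
Some quick statistics can also be computed directly from the getLinksInfo() output without going through MySQL; here is a small sketch (assuming the $links_data variable from the script above) that counts links returning a non-200 status code:

// count crawled links that did not return HTTP 200 (broken or redirected links)
$broken_links = array_filter($links_data, function($info) {
    return isset($info['status_code']) && $info['status_code'] != 200;
});
echo count($broken_links) . ' links returned a non-200 status code' . chr(10);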

Conclusion

In this post I explained how to use Goutte for web scraping, using a real-world example that I encountered in my job. Goutte can easily be used to extract a great amount of information about any webpage using its easy API for requesting pages, analyzing the response and extracting specific data from the DOM document.

I used Goutte to extract some information that can serve as SEO measures for the specified website, and stored it in a MySQL table so that any report or statistic can be derived from it with a query.

Update

Thanks to Josh Lockhart, this code has been modified for Composer and Packagist and is now available on GitHub: https://github.com/codeguy/arachnid

Introduction to sphinx with PHP – part2

In Part1, I explained how to install sphinx and configure it to index the data from MySQL source, and use the searchd daemon from command line to retrieve data from defined indexes.

In this post, I will go through a PHP example of how to use the Sphinx API.

The following script is based on the database structure and sphinx config file I used in Part 1 of this sphinx introduction.

Example PHP Script

<?php

header('Content-type: text/html; charset=utf8');
include ( "sphinxapi.php" );

mysql_connect('localhost', 'root', 'root');
mysql_select_db('your_database_here');        
mysql_query('set names utf8');        

$phrase = @$_GET['phrase'];
$page = isset($_GET['page']) ? $_GET['page'] : 1;
$date_start = @$_GET['date_start'];
$date_end = @$_GET['date_end'];

$client = new SphinxClient();
$client->SetLimits(($page - 1) * 10, 10);
$client->SetSortMode(SPH_SORT_EXTENDED, '@weight desc, created_time desc');
$client->SetMatchMode(SPH_MATCH_ANY);
$client->SetFieldWeights(array('title'=>4, 'keywords'=>2, 'body'=>1 ));

if(isset($date_start) || isset($date_end)){    
    $start_time = isset($date_start)?strtotime($date_start):null;
    $end_time = isset($date_end)?strtotime($date_end):null;    
    $client->SetFilterRange('created_time', $start_time, $end_time);
}

$res = $client->Query($phrase, 'content_index');


if (!$res) {
    echo 'error: ' . $client->GetLastError();
} else {

    if ($res['total'] == 0 || !isset($res['matches'])) {
        echo 'No results retrieved from Search engine';
    } else {
        echo "Displaying " . (($page - 1) * 10+1).'-'.(min($res['total'],$page * 10)) . " out of " . $res['total_found'] . ' total results';
                
        //var_dump($res);
        $ids_str = implode(', ', array_keys($res['matches']));
        $res_db = mysql_query('select id, title, created_at from content where id in  (' . $ids_str . ') order by field(id,'.$ids_str.')');
        if ($res_db === false) {
            echo "Error in mysql query #" . mysql_errno() . ' - ' . mysql_error();
        } else {
            echo '<ul>';
            while ($row = mysql_fetch_assoc($res_db)) {
                echo '<li>'
                . '<a href="show.php?id=' . $row['id'] . '&phrase='.$phrase.'">' . $row['title'] . '</a>'
                . '<br/> [relevancy: '.$res['matches'][$row['id']]['weight'].']'
                . '<br/> [created_at: '.$row['created_at'].']'        
                . '</li>';
            }
            echo '</ul>';
        }

        echo '<br/><br/>Total Time: ' . $res['time'] . 's';
    }
}

This simple script takes parameters from the webpage, then issues a search request with the specified phrase and conditions to the searchd daemon.

In the first lines (1-13), I declared the database connection along with the parameters that I will use in the search; after that, I initialized the sphinx client and applied the main configurations to it, as explained in the next section.

Main SphinxClient Methods

Here is a list of the main methods used to configure SphinxClient:

1- SetSortMode():
Sphinx supports multiple flexible sort modes which control the ordering criteria of the retrieved results.
I will give brief information about each sort mode, since I consider them one of the most important features in sphinx:

a- SPH_SORT_RELEVANCE: it is the default sort mode, which sorts the results according to their relevancy to the search query passed.

$client->SetSortMode(SPH_SORT_RELEVANCE);

Sphinx ranks the results by default using phrase proximity, which takes into consideration the phrase word order along with word frequency. We can control the way sphinx computes relevancy by changing the ranking mode (using the SetRankingMode function).
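
For instance, here is a minimal sketch of switching the ranking mode, using constants that ship with sphinxapi.php (note that ranking modes take effect with the extended matching mode):

// ranking modes apply to the extended matching mode
$client->SetMatchMode(SPH_MATCH_EXTENDED2);
// rank results by keyword occurrence count instead of the default proximity + BM25
$client->SetRankingMode(SPH_RANK_WORDCOUNT);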

b- SPH_SORT_ATTR_ASC / SPH_SORT_ATTR_DESC: sort the results in ascending or descending order according to a predefined attribute; for example, you can change line 17 to be:

$client->SetSortMode(SPH_SORT_ATTR_DESC, 'created_time');
in this way, the newest articles will come first in the results page.

c- SPH_SORT_TIME_SEGMENTS: sorts by time segments (recency ranges) first, then by relevancy

$client->setSortMode(SPH_SORT_TIME_SEGMENTS, 'created_time');

d- SPH_SORT_EXTENDED: sort by a combination of attributes, ascending or descending, in an SQL-like format, as I used in the script above:

$client->SetSortMode(SPH_SORT_EXTENDED, '@weight desc, created_time desc');
Here I sorted descending according to relevancy (represented by the @weight computed attribute), then descending according to creation time (in case two results have the same weight).

e- SPH_SORT_EXPR: sort using an arithmetic expression; for example, you can use a combination of the relevancy and the popularity represented by page_views:

$client->SetSortMode(SPH_SORT_EXPR, '@weight * page_views/100');

unlike MySQL, putting an expression in the sort mode (analogous to the order by clause) won’t affect the performance negatively.

2- SetMatchMode():
used to control how sphinx performs matching for the query phrase; here are the most important options:
a- SPH_MATCH_ALL: matches all keywords in the search query.
b- SPH_MATCH_ANY: matches any keyword.
c- SPH_MATCH_PHRASE: matches the whole phrase, which requires an exact phrase match.
All matching modes can be found in the Sphinx documentation.
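
As a small example, requiring the results to contain the search words as an exact phrase would look like this:

// return only documents containing the query words as an exact phrase
$client->SetMatchMode(SPH_MATCH_PHRASE);
$res = $client->Query($phrase, 'content_index');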

3- SetFieldWeights():
Using this function, you can distribute the relevancy weight among the fields; in the script above, I used this line:

$client->SetFieldWeights(array('title'=>4, 'keywords'=>2, 'body'=>1 ));

in order to indicate that the “title” field is more important than the “keywords” and “body” fields, so results that match the query phrase in the title will appear before those that match it only in the body. This option is very useful for controlling the relevancy of results.

4- SetFilterRange():
Here you can add a filter based on one of the attributes defined in the sphinx index (analogous to adding a where condition to an SQL statement). I used it to filter according to the creation time:

$client->SetFilterRange('created_time', $start_time, $end_time);
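
Relatedly, exact-value filtering on an attribute is done with SetFilter(); here is a small sketch, assuming the category_id attribute declared in the Part 1 config:

// keep only results whose category_id attribute is 11 or 12
$client->SetFilter('category_id', array(11, 12));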

5- Query():
after configuring the sphinx search query, this method is used to send the request to the searchd daemon and get the results back:

$res = $client->Query($phrase, 'content_index');

The Query() method takes the search phrase as the first parameter, and the name of the index(es) to match against as the second parameter.
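
If you also built the delta index from Part 1, the second parameter can list several indexes so that both are searched together; here is a sketch assuming the index names used in Part 1:

// search the main index together with the delta index defined in Part 1
$res = $client->Query($phrase, 'content_index delta');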

After calling the Query() method on the SphinxClient, a result array is returned containing information about the matching records. If we dump the “matches” index of the result array, we get data similar to this:

var_dump($res['matches']);
/*********/

  array(2) {
    [181916]=>
    array(2) {
      ["weight"]=>
      string(1) "1"
      ["attrs"]=>
      array(3) {
        ["status"]=>
        string(1) "1"
        ["category_id"]=>
        string(2) "11"
        ["created_time"]=>
        string(10) "1386946964"
      }
    }
    [181915]=>
    array(2) {
      ["weight"]=>
      string(1) "7"
      ["attrs"]=>
      array(3) {
        ["status"]=>
        string(1) "1"
        ["category_id"]=>
        string(2) "12"
        ["created_time"]=>
        string(10) "1386368157"
      }
    }

The data returned for each matched element is:
– documentID (as the key of the array element)
– weight (calculated dynamically according to the SetSortMode() and SetFieldWeights() functions we used earlier)
– attribute values, under the “attrs” index (ex. created_time, status…etc), containing the sphinx attributes defined in the config file.

Note that sphinx will not return the textual data itself, because it only indexes textual data and doesn’t store it, so we have to fetch it from our MySQL database:

$ids_str = implode(', ', array_keys($res['matches']));
$res_db = mysql_query('select id, title, created_at from content where id in  (' . $ids_str . ') order by field(id,'.$ids_str.')');

In this line, I got the records from MySQL using the DocumentIDs, and kept the same ordering as Sphinx by using “FIELD(id, val1, val2, …)” in the order by clause.

Now I have the result IDs from sphinx, fetched the associated textual data from MySQL and displayed them on the webpage.

Running the code

Now, I would like to query all records containing the word “syria” published in the last two weeks, and here are the results:
Screenshot from 2013-12-14 00:02:11

You can see that articles where the word “syria” appeared in the title got a higher rank than those where it appeared only in the body, because of the field weights I used in the script above. Also, sphinx took about 0.015 seconds to get those results out of about 150,000 records, which is extremely fast.

Another execution here, searching for the “syria” phrase without any additional filters:
Screenshot from 2013-12-14 00:20:34
and that took about 0.109 seconds to execute!

Quick MySQL comparison

I just wanted to compare sphinx with MySQL in terms of performance here:
I executed a MySQL query with a condition similar to the one I ran on sphinx in the previous section, and here is the result:

mysql> select id from content where match(body) against('*syria*' in boolean mode) and status=1;
+--------+
| id     |
+--------+
| 145805 |
| 142579 |
| 133329 |
|  59778 |
|  95318 |
|  94979 |
|  83539 |
|  56858 |
| 181915 |
| 181916 |
| 181917 |
| 181918 |
+--------+
12 rows in set (10.74 sec)

MySQL took about 10 seconds to execute the same query, compared to about 0.1 seconds using sphinx.

Conclusion

Now the simple PHP script is running with sphinx and MySQL, and I have explained the main functions to control Sphinx using the PHP API, including sorting, matching and filtering.
There are many other powerful features of sphinx, like MultiQuery, MVA (multi-valued attributes), grouping…etc, that I may write about in the future.

Introduction to sphinx with PHP – Part 1

I was using MySQL full-text indexing in one of my projects, but I noticed that after the data size grew beyond several gigabytes, MySQL wouldn’t scale well in using the index, and the queries became too slow for production environments, especially for high-traffic websites. So I read about the sphinx search engine and found it quite powerful for textual search functionality.

What is Sphinx?

As the official sphinx site defines it:

Sphinx is an open source full text search server, designed from the ground up with performance, relevance (aka search quality), and integration simplicity in mind. It’s written in C++ and works on Linux (RedHat, Ubuntu, etc), Windows, MacOS, Solaris, FreeBSD, and a few other systems.

 

Some sphinx advantages

Sphinx has many features that make it an excellent option for textual search over large data sizes; some of these advantages (specifically compared to MySQL) are:

  1. Scalable over large data sizes (both horizontally and vertically, using features like distributed indexes)
  2. Advanced ranking algorithm, calculating relevancy of data based on analyzing the keywords (ability to set customized weight -relevancy importance- for each field)
  3. Index data from multiple sources, including different database types (or different storage engines)
  4. Other important enhancements, like: parallel results, batch queries…etc

Install Sphinx

Sphinx is available on many platforms (including Linux, MacOS, Windows), and it’s relatively easy to install.
I will cover compiling Sphinx from source code here (since I use Fedora and didn’t find a ready-made RPM package for my OS).

1- Go to sphinx download page:
http://sphinxsearch.com/downloads/release/

and download the package suitable for your OS; as I mentioned earlier, I will go for the first option, which is downloading the sphinx source files.

If you find a package for your OS, install it and you can skip the remaining installation steps and jump to the next section.

2- Compile Sphinx from source code:
After downloading the source files, extract them and run:

./configure

then
make install

In case you want sphinx to work with MySQL, you should have the MySQL development package installed on your system before compiling; you can install it by running:
yum install mysql-devel

Once these steps complete successfully, Sphinx should be installed on your machine. The next step is to configure Sphinx sources and indexes to read from MySQL.

Sphinx Configuration

After installing sphinx, we have to configure the data source and indexes for it.

First, let me introduce a sample MySQL table which I will run the sphinx indexer against. It’s a table called “content” which stores news portal articles, containing standard news fields (like title, body, author…etc):

CREATE TABLE `content` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `title` varchar(500) DEFAULT NULL,
  `author` varchar(255) DEFAULT NULL,
  `category_id` int(11) NOT NULL,  
  `status` int(11) NOT NULL, -- used to indicate if content is published, pending...etc  
  `body` longtext,
  `keywords` varchar(255) DEFAULT NULL,
  `is_deleted` int(11) DEFAULT NULL,
  `slug` varchar(255) DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  `created_at` datetime DEFAULT NULL,   
   FULLTEXT KEY `INDEX_TEXT` (`title`,`author`),
   FULLTEXT KEY `title` (`title`),
   FULLTEXT KEY `body` (`body`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

This table is a MyISAM table in order to benefit from MySQL full-text indexes prior to the use of Sphinx. We will use these indexes to compare sphinx and MySQL performance later.

Now let’s look at the sphinx configuration file.
The sphinx configuration file is named sphinx.conf; usually you will find it in /usr/local/etc (in case you compiled sphinx from sources), or in /etc/sphinx (in case you installed an RPM package). In the same directory you will also find sphinx.conf.dist, which is a standard configuration file containing all the configuration options.

Below are the contents of the sphinx.conf file I used with my database:

source content
{
        #data source type
        type                    = mysql
        sql_host                = localhost
        sql_user                = root
        sql_pass                = root
        sql_db                  = your_db
        sql_port                = 3306

        #main document fetch query
        sql_query_pre           =   SET NAMES utf8
        sql_query               = \
                                SELECT id, title, author, status, body, keywords, category_id, unix_timestamp(created_at) as created_time  \
                                from content \
                                where is_deleted=0 and status=1 \
                                ;
        #attribute declaration        
        sql_attr_uint           = status
        sql_attr_uint           = category_id
        sql_attr_timestamp      = created_time

}

#defining delta source
source delta: content {
       sql_query                = \
                                SELECT id, title, author, status, body, keywords, category_id, unix_timestamp(created_at) as created_time  \
                                from content \
                                where is_deleted=0 and status=1 and created_at >= CURRENT_TIMESTAMP() - INTERVAL 15 MINUTE\
                                ;
}

#defining indexes
index content_index
{
       source           = content
       path             = /home/zaid/sphinx/data/mdn_content
       docinfo          = extern
}
index delta: content_index
{
      source            = delta
      path              = /home/zaid/sphinx/data/mdn_content_delta
      docinfo           = extern
}

#searchd daemon configurations
searchd
{
     listen   = localhost:9312
     listen   = 9312
     log      = /var/log/searchd.log
     pid_file = /var/log/searchd.pid
}

Here is the explanation of the configuration file parts:

1- Main data source block (mandatory):

source content
{
....
}

This block defines the data source whose contents sphinx will index. In this block you mainly define three groups of information:

  •     data source connection parameters: containing the information needed to connect to your database, including: database type, username, password, port, database name…etc
    source content
    {
            #data source type
            type                    = mysql
            sql_host                = localhost
            sql_user                = root
            sql_pass                = root
            sql_db                  = your_db
            sql_port                = 3306
    
            ....
    }
  • Query fetch configurations: containing the main query that fetches the data from your source in order to be indexed by Sphinx. I used
    sql_query_pre to set UTF8 encoding for the incoming data, and
    sql_query to fetch the main data to be indexed, which are -in my case- the non-deleted, approved news articles.
    source content
    {
            ....
            #main document fetch query
            sql_query_pre           =   SET NAMES utf8
            sql_query               = \
                                    SELECT id, title, author, status, body, keywords, category_id, unix_timestamp(created_at) as created_time  \
                                    from content \
                                    where is_deleted=0 and status=1 \
            ....
    }
  • Attribute declaration: mainly, the data fetched into Sphinx will be full-text indexed; however, you can define other attributes to be used for filtering, ordering and grouping as non-text fields. Here I used
    sql_attr_uint to define the status and category_id columns as unsigned integer attributes, and
    sql_attr_timestamp to define created_time as a timestamp.
    source content
    {
            ....
            #attribute declaration        
            sql_attr_uint           = status
            sql_attr_uint           = category_id
            sql_attr_timestamp      = created_time
    }

    You don’t have to define any attributes if there is no need for them.

 

2- Delta Source Block:
This block defines the data that was recently updated, so we don’t have to run the Sphinx indexer over all the data; we will run the indexer periodically only on the recently added contents (the delta) in order to add them to the index.

#defining delta source
source delta: content {
       sql_query                = \
                                SELECT id, title, author, status, body, keywords, category_id, unix_timestamp(created_at) as created_time  \
                                from content \
                                where is_deleted=0 and status=1 and created_at >= CURRENT_TIMESTAMP() - INTERVAL 15 MINUTE\
                                ;
}

3- Index Block(s):
Defines the indexes associated with the data sources. I defined one index for the main source and another for the delta source; each block contains the path where the index will be stored.

index content_index
{
       source           = content
       path             = /home/zaid/sphinx/data/mdn_content
       docinfo          = extern
}
index delta: content_index
{
      source            = delta
      path              = /home/zaid/sphinx/data/mdn_content_delta
      docinfo           = extern
}

4- Searchd Daemon Block:
searchd is the daemon that serves the search queries issued by the clients and retrieves the results; here you define the port to listen on, the log file path and the PID (process ID) file path.

#searchd daemon configurations
searchd
{
     listen   = localhost:9312
     listen   = 9312
     log      = /var/log/searchd.log
     pid_file = /var/log/searchd.pid
}

Running Sphinx

Once you have your sphinx config file in place, you are ready to start indexing your data and issuing search queries to Sphinx.

To index your data sources, run the indexer:

indexer --all

I found the indexer pretty fast; it indexed my data source (which is about 1.5G) in about 90 seconds!

After the data has been indexed, start the searchd daemon by simply typing:

searchd

To make sure that your searchd daemon is running, you can type:
netstat -nlp | grep searchd
netstat -nlp | grep 9312

Now we need the delta index to pick up new data automatically and merge it into the main index; place these entries in your crontab:

#I added these cron jobs to run every 15 minutes, matching my delta query
*/15 * * * * /usr/bin/indexer --rotate --config /usr/local/etc/sphinx.conf delta
*/15 * * * * /usr/bin/indexer --config /usr/local/etc/sphinx.conf --merge content_index delta --rotate

Now, you are ready to go!

Running sphinx from Command Line

Now you can query search phrases against your sphinx indexes using the search command,

in the following format:

 search -i NAME_OF_INDEX -l LIMIT -o OFFSET SEARCH_PHRASE

Here is an example; this command searches for “jordan” in the content_index that we have just defined.

[zaid@localhost tmp]$ search -i content_index -l 10 -o 20 jordan
Sphinx 2.2.1-id64-dev (r4310)
Copyright (c) 2001-2013, Andrew Aksyonoff
Copyright (c) 2008-2013, Sphinx Technologies Inc (http://sphinxsearch.com)

using config file '/usr/local/etc/sphinx.conf'...
index 'content_index': query 'jordan ': returned 62 matches of 62 total in 0.000 sec

displaying matches:
21. document=136385, category_id=11, created_time=Sun Feb 26 14:27:32 2012
22. document=138933, category_id=11, created_time=Mon Mar 12 15:39:15 2012
23. document=142949, category_id=11, created_time=Wed Apr  4 13:23:04 2012
24. document=152446, category_id=19, created_time=Sun May 27 14:41:34 2012
25. document=156444, category_id=11, created_time=Sun Jun 17 00:40:47 2012
26. document=180436, category_id=11, created_time=Mon Oct 22 11:03:01 2012
27. document=57574, category_id=1, created_time=Sun Oct  3 18:05:58 2010
28. document=62989, category_id=53, created_time=Tue Nov 30 19:11:22 2010
29. document=76606, category_id=11, created_time=Sat Mar 12 11:29:13 2011
30. document=80203, category_id=17, created_time=Wed Apr  6 23:59:56 2011

words:
1. 'jordan': 62 documents, 164 hits

Note that the results return the DocumentID (which is analogous to the id column in our SQL table), along with the other attributes we defined in the config file, which include category_id and created_time. The search was pretty fast, taking 0.000 sec for this query.

I am aware that I didn’t write any PHP code in this article; I will leave that to part 2 🙂

In the next article, I will write a simple PHP script that queries results from the indexes we created, and talk a little bit about filtering, grouping and ranking results in sphinx.

Optimizing and Compiling js/css files in php

In the last month, my team in the company has been working on applying a new theme to an old project of ours; this project is more than 3 years old and is written in relatively old technology (symfony 1.4/Propel ORM).

I wanted to find an automated method to optimize the javascript and stylesheet files served by this project (similar to the functionality of assetic in symfony2), so I wrote a couple of files to automate this optimization, which do the following:

  1. Scan the stylesheet folder and optimize its files using the CssMin project:
    which compresses each css file by removing whitespace and comments, then minifies it.
  2. Scan the javascript folder and optimize its files using the google closure compiler:
    which parses the javascript files and converts them into a better optimized form; as the closure page states:

    It parses your JavaScript, analyzes it, removes dead code and rewrites and minimizes what’s left. It also checks syntax, variable references, and types, and warns about common JavaScript pitfalls.

    note: I used CSSMin and the google closure compiler since they have the fewest dependencies, so I can use them without installing additional packages; other options like Grunt or UglifyJS are really powerful but require npm to be installed.
  3. Create new unique file names, using the md5 hash of the compiled file contents, and copy each file to the destination folder under its new name.
    This prevents the browser from serving stale cached copies of modified files.

    note: another method for preventing browser caching is “cache busting”, where you append a changing query string to the resource file, like:

    <link rel="stylesheet" type="text/css" media="screen" href="/css/style.css?v=8" />
  4. Add the association between the original file and the compiled file name to an array that will be used for rendering the resource path.

 

and here is the code of the task that performs the optimization:

<?php
include('CSSMinify.php'); //download CSS Min from http://code.google.com/p/cssmin/
class OptimizeResourcesTask {

    private static $RESOURCES_ASSOCIATION = array();  //array to hold  mapping between original files and compiled ones
    
    private $source_js_folder = 'js/'; //the relative path of the source files for javascript directory, it will be scanned and its individual js files will be optimized
    private $target_js_folder = 'js-compiled/';    //result js files will be stored in this directory
    
    //target and source CSS folders preferred to be on the same folder level, 
    //otherwise path rewrite should be handled in the contents of the css files
    private $source_css_folder = 'css/'; //the relative path of the source files for stylesheet directory, it will be scanned and its individual css files will be optimized
    private $target_css_folder = 'css-compiled/';  //result css files will be stored in this directory   
    
    //path of the file that will hold associative array containing mapping between original files and compiled ones
    private $resource_map_file = '_resource_map.php';
    


    public function run() {
        // initialize the database connection

        $css_dir = __DIR__ .'/'. $this->source_css_folder;
        $this->optimizeCSSResources($css_dir);

        $js_dir = __DIR__.'/'.$this->source_js_folder;
        $this->optimizeJSResources($js_dir);

        $this->writeMappingData();
        
        
        $this->cleanupOldData($this->target_css_folder, 'css');
        $this->cleanupOldData($this->target_js_folder, 'js');
    }

    /**
     * iterate over the CSS directory and optimize all of its contents;
     * every single CSS file found in this directory will be passed to the optimizeOneCSS() method in order to be optimized
     * @param string $dir
     */
    protected function optimizeCSSResources($dir = null) {
        if (is_null($dir)) {
            $dir = __DIR__ . '/'.$this->source_css_folder;
        }

        if ($handle = opendir($dir)) {
            while (false !== ($entry = readdir($handle))) {
                if ($entry != "." && $entry != "..") {
					
                    if (is_dir($dir . $entry)) {
                        $this->optimizeCSSResources($dir . $entry . '/'); // keep the trailing slash so nested paths concatenate correctly
                    } else {
                        $this->optimizeOneCSS($dir . $entry);
                    }
                }
            }
        }
    }


    /**
     * optimize one CSS file by using CSSMin library to minify the contents of the file
     * generate new file name using hash of its file contents
     * add the new file name association to $RESOURCES_ASSOCIATION static variable in order to write resource association array later
     * @link "http://code.google.com/p/cssmin/" CSSMin documentation
     * @param string $file css file absolute path to minify
     */
    protected function optimizeOneCSS($file) {
	
        print('trying to optimize css file ' . $file. chr(10));
        $info = pathinfo($file);
        if ($info['extension'] == 'css') {
            $optimized_css = CssMin::minify(file_get_contents($file));

            $target_css_dir_absolute = __DIR__ . '/' . $this->target_css_folder;
            if (!is_dir($target_css_dir_absolute)) {
                mkdir($target_css_dir_absolute);
                chmod($target_css_dir_absolute, 0777);
            }

            $new_name = md5($optimized_css) . '.css';
            file_put_contents($target_css_dir_absolute .  $new_name, $optimized_css);


            $file_relative_path = str_replace(__DIR__ , '', $file);
			
            self::$RESOURCES_ASSOCIATION[$file_relative_path] = '/' . $this->target_css_folder .  $new_name;

            print('CSS FILE: ' . $file . ' has been optimized to ' . $target_css_dir_absolute .  $new_name. chr(10));
			
        } else {
            print("skipping $file from optimization, not stylesheet file, just copying it". chr(10));
            
            $file_relative_path = str_replace(__DIR__ . '/' . $this->source_css_folder, '/', $file);
            
            $target_css_dir_absolute = __DIR__ . '/' . $this->target_css_folder . dirname($file_relative_path);
            
            if (!is_dir($target_css_dir_absolute)) {
                mkdir($target_css_dir_absolute, 0777, true); // create nested directories if needed
                chmod($target_css_dir_absolute, 0777);
            }
            
            copy($file, $target_css_dir_absolute.'/'.basename($file));
        }
    }
	
	
    /**
     * iterate over the JS directory and optimize all of its files' contents;
     * every single JS file found in this directory will be passed to the optimizeOneJS() method in order to be optimized/minimized
     * @param string $dir
     */
    protected function optimizeJSResources($dir = null) {

        if (is_null($dir)) {
            $dir = __DIR__ . '/'.$this->source_js_folder;
        }
        print('getting JS inside ' . $dir. chr(10));

        if ($handle = opendir($dir)) {
            while (false !== ($entry = readdir($handle))) {
                
                if ($entry != "." && $entry != "..") {

                    if (is_dir($dir . $entry)) {
                        $this->optimizeJSResources($dir . $entry . '/'); // keep the trailing slash so nested paths concatenate correctly
                    } else {
                        $file_path = $dir . $entry;
                        $pathinfo = pathinfo($file_path);
                        if($pathinfo['extension']=='js'){
                            $this->optimizeOneJS($file_path);
                        }else{
                            print($file_path.' is not passed to optimization, its not a valid js file'. chr(10));
                        }
                    }
                }
            }
        }
    }

    /**
     * optimize one JS File using "Google Closure Compiler", 
     * store the optimized file in target directory named as hash of the file contents
     * add the new file name association to $RESOURCES_ASSOCIATION static variable in order to write resource association array later
     * @link  "https://developers.google.com/closure/compiler/docs/gettingstarted_api" "Google Closure Compiler API"
     * @param string $file js file absolute path to optimize/minify
     */
    protected function optimizeOneJS($file) {
	
        print("trying to optimize js ". $file. chr(10));

        $post_fields = array(
            'js_code' => file_get_contents($file),
            'compilation_level' => 'SIMPLE_OPTIMIZATIONS',
            'output_format' => 'text',
            'output_info' => 'compiled_code',
        );



        $ch = curl_init("http://closure-compiler.appspot.com/compile");
        curl_setopt($ch, CURLOPT_POST, count($post_fields));
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($post_fields));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

        $optimized_js = curl_exec($ch);
        
        if(strpos($optimized_js,'Error(22): Too many compiles performed recently.') !==false){ //Google Closure API returned error on too many compilation
            trigger_error($file.' is failed to be compiled, skipped...');
            return;
        }
        
        curl_close($ch);

        $target_js_dir_absolute = __DIR__ . '/' . $this->target_js_folder;
        if (!is_dir($target_js_dir_absolute)) {
            mkdir($target_js_dir_absolute);
            chmod($target_js_dir_absolute, 0777);
        }

        $new_name = md5($optimized_js) . '.js';
        file_put_contents($target_js_dir_absolute . '/' . $new_name, $optimized_js);


        $file_relative_path = str_replace(__DIR__ , '', $file);
        self::$RESOURCES_ASSOCIATION[$file_relative_path] = '/' . $this->target_js_folder . $new_name;

        print('JS FILE: ' . $file . ' has been optimized to ' . $new_name. chr(10));
    }

    /**
     * write $resources_map array stored in $RESOURCES_ASSOCIATION into $resource_map_file file that will be used in generating
     * the association between original JS/CSS files and the optimized/minimized ones
     */
    protected function writeMappingData() {
        $str = "<?php \$resources_map = array(";
        foreach (self::$RESOURCES_ASSOCIATION as $original_file => $optimized_file) {
            $str .= "'$original_file'=>'$optimized_file', " . chr(10);
        }
        $str .= "); ";

        $f = fopen(__DIR__ . '/' . $this->resource_map_file, 'w+');
        fwrite($f, $str);
        fclose($f);

        echo 'mapping data written to ' . $this->resource_map_file . chr(10);
    }

    /**
     * this function will remove any file that exists in $target_js_folder or $target_css_folder
     * and does not exist in the $RESOURCES_ASSOCIATION array; most probably these were generated by old builds and are not used anymore
     * @param string $dir the relative path of the directory to clean up
     * @param string $extension_to_filter the extension to be cleaned (either css or js); the idea is to avoid cleaning static resources like font files, ex. woff, eot
     */
    protected function cleanupOldData($dir, $extension_to_filter){
        $dir_absolute = __DIR__.'/'.$dir;
		
        if ($handle = opendir($dir_absolute)) {
            while (false !== ($entry = readdir($handle))) {
                if ($entry != "." && $entry != "..") {
                    
                    if (is_dir($dir_absolute . $entry)) {
                        $this->cleanupOldData($dir . $entry . '/', $extension_to_filter); // keep the trailing slash so nested paths concatenate correctly
                    }else{
                        $file_path = $dir_absolute .  $entry;
                        $pathinfo = pathinfo($file_path);
                        print('examining   /'.$dir .  $entry. chr(10));
                        
                        // delete generated files (including backup copies ending with ~) that are no longer referenced in the resource map
                        if(in_array($pathinfo['extension'], array($extension_to_filter, $extension_to_filter.'~')) && !in_array('/'.$dir . $entry, self::$RESOURCES_ASSOCIATION)){                            
                            unlink($file_path);
                            print($file_path.' is deleted....'. chr(10));
                        }
                    }
                }
            }
        }        
    }
}


$task = new OptimizeResourcesTask();
$task->run();
echo 'optimization done...';

After running this class, it will generate the “_resource_map.php” file, which contains an array storing the mapping between the original resources and the compiled ones; its contents will be similar to this:

<?php $resources_map = array('/css/main.css'=>'/css-compiled/d41d8cd98f00b204e9800998ecf8427e.css', 
'/css/redmondjquery-ui-1.8.14.custom.css'=>'/css-compiled/d41d8cd98f00b204e9800998ecf8427e.css', 
'/css/style.css'=>'/css-compiled/f866be09baee73d596cb578b02d37d29.css', 
'/js/jquery-1.5.1.min.js'=>'/js-compiled/6c1b3f8d121bfefdad82fb4854a8f254.js', 
'/js/jquery-ui-1.8.14.custom.min.js'=>'/js-compiled/e34d1750b1305e35327964b7f0ea6bb9.js', 
'/js/jquery.cookie.js'=>'/js-compiled/08bf7e471064522f8e45c382b2b93550.js', 
'/js/jquery.easing-1.3.pack.js'=>'/js-compiled/0301f5ff89729b3c0fc5622b7633f4b8.js', 
'/js/jquery.fancybox-1.3.4.js'=>'/js-compiled/cb707a9b340d624510e1fa27d3692f0e.js', 
'/js/jquery.fancybox-1.3.4.pack.js'=>'/js-compiled/f58ec8d752b6148925d6a3f14061c269.js', 
'/js/jquery.min.js'=>'/js-compiled/5ee7bdd2dbbdec528925cb61c3010598.js', 
'/js/jquery.validate.min.js'=>'/js-compiled/9d28b87b0ec7b4e3195665adbd6918be.js', 
); 

Now we need a function to get the optimized version of the files (in the production environment only):

<?php 
function resource_path($file){
    global $config;
    if($config['env'] == 'prod'){ // serve compiled resources only in the production environment
        include '_resource_map.php';
        if(isset($resources_map[$file])){
            return $resources_map[$file]; // return the compiled version of the file
        }
    }
    return $file;
} ?>

and here is an example of using it for css/js files in “header.php”:

<link href="<?php echo resource_path('/css/style.css') ?>" rel="stylesheet" type="text/css" />
<script src="<?php echo resource_path('/js/jquery.min.js') ?>" type="text/javascript" ></script>

Now, once you render the page in the production environment, the optimized css/js files will be served instead of the original ones, as follows:
<link href="/css-compiled/f866be09baee73d596cb578b02d37d29.css" rel="stylesheet" type="text/css" />
<script src="/js-compiled/5ee7bdd2dbbdec528925cb61c3010598.js" type="text/javascript" ></script>

Now everything works well, and you can serve optimized versions of your resource files with minimal effort upon each update of your website. Whenever there are amendments to the website theme, I only need to run OptimizeResourcesTask to optimize the files, and they are served automatically in production environments.

I used this code for my projects that are written in native php or an old symfony version, but as I mentioned earlier, some frameworks like symfony2’s assetic provide similar functionality with a long list of optimizers available.