2012-10-11

Map/Reduce and GNU Parallel

Map/Reduce and GNU Parallel

This week I attended a meeting organized by DKUUG. The topic was GNU Parallel, and the speaker was Ole Tange - the developer behind GNU Parallel.

To be honest, I have not used GNU Parallel before. Of course, I have heard about it as Ole always talks about the program when I meet him. His introduction to the program was great - enjoy it when DKUUG releases the video.

Simply put, GNU Parallel runs tasks in parallel, either locally or on remote machines. In other words, GNU Parallel can help you transform your command line into a parallel computational engine.

Lately, I have been studying Apache Hadoop. Currently, Hadoop is probably the most popular implementation of the Map/Reduce programming paradigm. GNU Parallel offers a way of specifying the Map component; it is activated by using the --pipe option. On my way home, I was thinking about how to implement a simple Map/Reduce-based analysis using GNU Parallel.

I have used the On Time Performance data set more than once. It is a good data set as it is highly regular and large (500,000-600,000 rows every month). The data set records every flight within the USA, and you can find information about the destination airport, delays, and 120 other data points. Six months of data result in a 1.3 GB comma-separated values (CSV) file.

A simple analysis of the data set is to generate a table of the number of times an airport is used by a flight. The three-letter airport code is unique, e.g., LAX is Los Angeles International Airport. It is possible to do the analysis in parallel by breaking the data file into smaller parts. This is the map task. Each map task will produce a table, and the reduce task will combine the output of the map tasks into the final table.

In order to use GNU Parallel as the driver for Map/Reduce, I have implemented the mapper and the reducer in Perl. The mapper is:

#!/usr/bin/perl -w

use strict;

my %dests;
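# count how many times each airport code (CSV field index 14) occurs
# in the chunk of rows read from STDIN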
while (<>) {
    my @data = split /,/;
    my $airport = $data[14];
    $dests{$airport} = 0 if (not exists $dests{$airport});
    $dests{$airport}++;
}

foreach my $airport (keys %dests) {
    printf "$airport $dests{$airport}\n";
}


The reducer is also simple:

#!/usr/bin/perl -w

use strict;

my %dests;
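# merge the partial airport counts produced by the map tasks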
while (<>) {
    chomp;
    my ($airport, $count) = split / /;
    $dests{$airport} = 0 if (not exists $dests{$airport});
    $dests{$airport} += $count;
}

my $total = 0;
foreach my $airport (sort keys %dests) {
    $total += $dests{$airport};
    print "$airport $dests{$airport}\n";
}
print "Total: $total\n";


It is possible to run the Map/Reduce analysis from the command line:

cat On_Time_Performance_1H2012.csv | parallel --pipe --blocksize 64M ./map.pl | ./reduce.pl

The input file is broken down into 64 MB chunks. GNU Parallel is line-oriented, so a chunk will not be exactly 64 MB, but close. By default, GNU Parallel runs one job per CPU core; my laptop has four cores, and they are fully utilized.

It seems to me that GNU Parallel offers a simple approach to Map/Reduce for people who live much of their life on the command line.

2012-10-08

GotoCon 2012

GotoCon 2012 in Århus

I attended GotoCon 2012 in Århus last week. To be more precise, I attended the Big Data track on Monday. My educational background is somewhat related to supercomputing (yeah, I did my share of Fortran-77 programming as a graduate student), and over the years I have worked on various "Big Data" projects: bioinformatics, credit card fraud detection, building Linux clusters, etc. Currently, I work for a small NoSQL database vendor, and our solution could fit the Big Data space quite nicely. Given all this, going to Århus and spending a day listening to the open source Big Data vendors sounded like a treat.

Neo4j - Jim Webber

The first speaker was Jim Webber from Neo4j. Neo4j is a graph database, and I like graph theory a lot. Jim is not just a speaker - he is an entertainer. He began his talk by looking back on database history. As you might be aware, relational databases were developed in the 1970s, and through the 1980s, Oracle became The Database. In the last decade, a number of companies have gone "google scale", that is, they have vast amounts of data. Handling big data requires data models different from the relational one.
Jim pointed out that the complexity of an application is a function of data size, connectedness, and uniformity. Of course, his graph database can help you if your data is highly connected. He demonstrated a number of data sets and how to query them using Neo4j.

Couchbase - Chris Andersen

Chris noted that the database industry is changing rapidly these years. Companies are required to shrink and grow databases on demand, in particular in the mobile app world where an app can go viral overnight. As an example, he mentioned Instagram, which gained about one million new users in one day when they released their Android app.
He boils down the requirements to:
  • grow and shrink databases on demand
  • easy cluster node deployment
  • multiple masters in the cluster (for availability)
  • multiple datacenters (for availability)
  • auto sharding
The old-school approach to scalability of the data layer is sharded MySQL instances combined with memcached. But this approach typically requires some high-availability handling in the application code. With Couchbase (and probably other NoSQL databases), the applications are freed of this complexity.

Couchbase has conducted a survey among NoSQL users, and flexible schemas are reported as the most important feature.

Cassandra - Matthew Dennis

Matthew began his talk by discussing the relevance of Big Data. According to studies, companies using Big Data products and techniques seem to outperform companies that do not. Cassandra offers a master-free clustered database, and it scales well. When data sets are larger than physical memory, it slows down gracefully. A personal note: data sets smaller than physical memory are not big data.
According to Matthew, solid-state disks (SSDs) are pushing the limits as they lower latency. Cassandra is SSD aware: it does not overwrite data but only appends data. Files (probably their blocks) are reclaimed by the database when they are no longer used. Latency is important for Matthew, and he argues that linear scalability means that latency is constant when:
  • increasing throughput and increasing number of nodes
  • increasing load and increasing nodes
The nodes in a Cassandra cluster are equal, and queries are proxied from node to node. This implies that Cassandra is eventually consistent.

MongoDB - Alvin Richards

The last talk in the track was about MongoDB. Alvin talked about scaling up without spending (too much) money, and that is where the NoSQL revolution fits in. In the last 40 years, we have seen a dramatic growth in computing power; for example, memory has increased from 1 KB to 4 GB, and disk space has gone from 100 MB to 3 TB.
For MongoDB, availability is more important than consistency. Together with the CAP theorem, this has consequences for the architectural choices in MongoDB. The asynchronous replication within a MongoDB cluster implies that MongoDB is eventually consistent.
Storing and analyzing time series is an important use case for NoSQL.

Panel discussion

The organizers had arranged a panel discussion with the major NoSQL vendors - and Martin Fowler. Martin has just published a book about NoSQL. I have read it - and I will return with a review in the near future.
Martin noted that NoSQL is still immature. But the good news is that NoSQL brings us past "one model fits all". It is worth noticing that most NoSQL databases are released under an open source license - and backed by companies.
The panellists agreed that the CAP theorem and the application should drive which database to choose. The major issue here is that the majority of software developers only read one computer science book per year.

2012-10-05

Playing with TypeScript

Taking TypeScript out for a spin

Microsoft announced a new programming language by the name TypeScript this week. The driving philosophy behind the language is to extend JavaScript. Since I have done my share of JavaScript programming - and given talks, taught classes, and written articles and a book on the topic - I decided to take TypeScript out for a spin.
The language comes with a specification (97 pages) and a compiler. The compiler compiles TypeScript to JavaScript. In this way, Microsoft can support virtually any platform in the world. The big trend at the moment is that JavaScript is used as a server-side programming language - node.js is popular. In my day work, I have recently worked on an extension (in C++) for node.js for my employer's database.

Open Source

One of the interesting developments around TypeScript is that Microsoft has released everything as free or open source software. According to Microsoft, they are an operating systems company and are not focusing on development tools. Today, developers are used to open source software, and in order to succeed within the developer community, you are forced to release under these terms.
The TypeScript compiler has a command-line interface. Basically, it is simply a node.js application. I have tested the compiler on my laptop, which is currently running Fedora 17.
As an Emacs user, I am happy to see that Microsoft has released a TypeScript mode for Emacs. It is a simple mode, as it primarily does syntax highlighting and indentation. I have briefly looked at the elisp source code, and it seems to be pretty well organized. If TypeScript becomes popular, I do hope that we will see a more advanced mode like js2.

Classes and strong typing

As you might know, JavaScript is an object-oriented programming language. But it does not have classes. Over the years, various techniques for encapsulating data and emulating classes have been developed. For example, you can use a function as a constructor. TypeScript introduces classes, and it is possible to inherit from a class. This implies that common design patterns can probably be implemented easily in TypeScript.
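
As a quick illustration of the class syntax and inheritance, here is a minimal sketch of my own (the Shape and Circle classes are made up for this example and are not part of the graph program below):

class Shape {
    name: string;

    constructor(name : string) {
        this.name = name;
    }

    describe() : string {
        return "I am a " + this.name;
    }
}

class Circle extends Shape {
    radius: number;

    constructor(radius : number) {
        // call the constructor of the base class
        super("circle");
        this.radius = radius;
    }

    area() : number {
        return 3.14159 * this.radius * this.radius;
    }
}

var c = new Circle(2);
console.log(c.describe() + " with area " + c.area());
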
JavaScript is a dynamically typed language, and TypeScript brings strong typing to JavaScript. You can declare variables with associated types. It is possible for the TypeScript compiler to infer types in many cases, and the any type gives you the flexibility of weak typing.
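
A small sketch of my own showing the typing (again, the variable names are made up for this example):

var airport: string = "LAX";      // explicit type annotation
var delay = 42;                   // the compiler infers the type number
var anything: any = "whatever";   // any behaves like a plain JavaScript variable
anything = 17;                    // allowed, since anything has type any
// delay = "late";                // the compiler would reject this assignment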

An example

I am fond of graph theory, and I have written a simple graph class in TypeScript. I implemented topological sorting in JavaScript a few years ago as a non-trivial example for a course I was teaching.
The TypeScript example uses two classes. The MyNode class is actually just a convenient way to store a collection of data items; for a C programmer, it is just a struct. The Graph class is somewhat longer. As you can see, a TypeScript class has attributes, a constructor, and methods. This is different from JavaScript, where you do not have classes and objects do not have methods (but properties can be anonymous functions).

class MyNode {
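    // a plain record: id, visited flag, indices of outgoing edges, and a payload string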
    id: string;
    visited: bool;
    edges: number[];
    payload: string;

    constructor(id : string, payload : string) {
        this.id = id;
        this.payload = payload;
        this.visited = false;
        this.edges = [];
    }
};
   

class Graph {
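    // a directed graph stored as an array of nodes; edges are kept as indices into that array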
    name: string;
    nodes: MyNode[];
   
    constructor(name : string) {
        this.name = name;
        this.nodes = [];
    }

    find_node(id : string) : number {
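        // linear search: return the index of the node with the given id, or -1 if not found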
        var i;
        var found;

        if (this.nodes.length == 0) {
            return -1;
        }

        i = 0;
        found = false;
        while (i<this.nodes.length) {
            if (this.nodes[i].id == id) {
                found = true;
            }
            if (found) {
                break;
            }
            i++;
        }
        if (found) {
            return i;
        } else {
            return -1;
        }
    }

    // add nodes - id is a unique string for identification
    add_node(id : string, payload : string) {
        if (this.find_node(id) < 0) {
            var n = new MyNode(id, payload);
            this.nodes.push(n);
        }
    }

    // add edge - work well for DAGs
    // if you don't work with DAGs, then use
    // add_edge(i, j) ; add_edge(j, i);
    add_edge(src : string, dst : string) {
        var i = this.find_node(src);
        var j = this.find_node(dst);
        if (i>=0 && j>=0) {
            this.nodes[i].edges.push(j);
        }
    }

    // topological sorting - return array of IDs
    // see http://en.wikipedia.org/wiki/Topological_sort
    // (alternative algorithm)
    sort() : string[] {
        var i, j, m;
        var L = [], Lid = [];
        var that = this;

        for (i in this.nodes) {
            this.nodes[i].visited = false;
        }

        function _visit(n) {
            var j;
            if (!that.nodes[n].visited) {
                that.nodes[n].visited = true;
                for (j in that.nodes[n].edges) {
                    m = that.nodes[n].edges[j];
                    _visit(m);
                }
                L.push(n);
            }
        }
        for (i in this.nodes) {
            _visit(i);
        }

        for (i in L) {
            j = L[i];
            Lid.push(this.nodes[j].id);
        }
        return Lid;
    }
}

// Test case
var g = new Graph("Software");
g.add_node('Emacs', '23.1');
g.add_node('GTK',   '2.18.3');
g.add_node('Cairo', '1.8.8');
g.add_node('glib',  '2.22.3');
g.add_node('libc',  '2.10.1');
g.add_edge('Emacs', 'libc');
g.add_edge('Emacs', 'GTK');
g.add_edge('GTK',   'glib');
g.add_edge('GTK',   'Cairo');
g.add_edge('Cairo', 'libc');
g.add_edge('glib',  'libc');
var list = g.sort();
console.log('Topological sort: '+list.join('->'));



The TypeScript language is an interesting development, as it offers (web) programmers a more mainstream approach. To be honest, when I have been teaching JavaScript to seasoned programmers, they have often been puzzled by the odd object system behind JavaScript. The scope rules of JavaScript are likewise often regarded as odd. I have not yet examined the scope rules in TypeScript, but I hope to return to the topic in the near future.