

NLTK vs. spaCy: Natural Language Processing in Python

The venerable NLTK has long been the standard tool for natural language processing in Python. It contains an amazing variety of tools, algorithms, and corpora. Recently, a competitor has arisen in the form of spaCy, whose goal is to provide powerful, streamlined language processing. Let’s see how the two toolkits compare.

Philosophy

NLTK provides a number of algorithms to choose from. For a researcher, this is a great boon. Its nine different stemming libraries, for example, allow you to finely customize your model. For the developer who just wants a stemmer to use as part of a larger project, this tends to be a hindrance. Which algorithm performs best? Which is the fastest? Which is still being maintained?
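
To make the choice concrete, here is a minimal sketch (assuming NLTK is installed) of three of those stemmers disagreeing on a single word:

    from nltk.stem import LancasterStemmer, PorterStemmer, SnowballStemmer

    # The same word can come out three different ways, depending on the algorithm.
    for stemmer in (PorterStemmer(), SnowballStemmer("english"), LancasterStemmer()):
        print(type(stemmer).__name__, stemmer.stem("generously"))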

In contrast, spaCy implements a single stemmer, the one its developers believe to be best. They promise to keep it updated, and may replace it with an improved algorithm as the state of the art progresses. You may update your version of spaCy and find that improvements to the library have boosted your application without any work on your part. (The downside is that you may need to rewrite some test cases.)

As a quick glance through the NLTK documentation demonstrates, different languages may need different algorithms. NLTK lets you mix and match the algorithms you need, but spaCy has to make a choice for each language. Making that choice well takes time, and spaCy currently supports only English.

Strings versus objects

NLTK is essentially a string processing library. All the tools take strings as input and return strings or lists of strings as output. This is simple to deal with at first, but it requires the user to explore the documentation to discover the functions they need.
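
For example (assuming NLTK is installed and its tokenizer and tagger models have been fetched with nltk.download()), a typical pipeline passes plain strings from function to function:

    from nltk import pos_tag, sent_tokenize, word_tokenize

    text = "NLTK is a string processing library. It takes strings as input."

    # Each step consumes and produces plain strings or lists of strings.
    sentences = sent_tokenize(text)       # list of str
    tokens = word_tokenize(sentences[0])  # list of str
    tagged = pos_tag(tokens)              # list of (str, str) tuples
    print(tagged)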

In contrast, spaCy uses an object-oriented approach. Parsing some text returns a document object, whose words and sentences are represented by objects themselves. Each of these objects has a number of useful attributes and methods, which can be discovered through introspection. This object-oriented approach lends itself much better to modern Python style than does the string-handling system of NLTK.
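
Here is a minimal sketch of the same pipeline in spaCy (the argument to spacy.load depends on which English model you have installed):

    import spacy

    nlp = spacy.load('en')
    doc = nlp(u"spaCy returns a document object. Its tokens are objects too.")

    for sent in doc.sents:   # sentences are spans over the document
        for token in sent:   # each token carries its own attributes
            print(token.text, token.pos_, token.lemma_)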

A more detailed comparison between these approaches is available in this notebook.

Performance

An important part of a production-ready library is its performance, and spaCy advertises itself as ready for real work. We’ll run some tests on the text of the Wikipedia article on NLP, which contains about 10 kB of text. The tests are word tokenization (splitting a document into words), sentence tokenization (splitting a document into sentences), and part-of-speech tagging (labeling the grammatical function of each word).
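
The exact benchmarking harness isn’t shown here, but a minimal timeit sketch along these lines captures the idea (the file name is a stand-in for wherever you saved the article text, and nlp(text) runs spaCy’s full pipeline rather than tokenization alone):

    import textwrap
    import timeit

    setup = textwrap.dedent('''
        import nltk, spacy
        nlp = spacy.load('en')
        text = open('nlp_wikipedia.txt').read()  # stand-in for the saved article text
    ''')

    # Rough timings; absolute numbers will vary by machine and library version.
    print('NLTK word_tokenize:', timeit.timeit('nltk.word_tokenize(text)', setup=setup, number=10))
    print('spaCy full pipeline:', timeit.timeit('nlp(text)', setup=setup, number=10))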

[Figure: benchmark timings for NLTK and spaCy on word tokenization, sentence tokenization, and part-of-speech tagging]

It is fairly obvious that spaCy dramatically outperforms NLTK in word tokenization and part-of-speech tagging. Its apparently poor performance in sentence tokenization is the result of a different approach: NLTK simply attempts to split the text into sentences, whereas spaCy constructs a syntactic tree for each sentence, a more robust method that yields much more information about the text. (You can see a visualization of the result here.)
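
Reusing the doc object from the spaCy snippet above, you can see the extra information that falls out of the parse: every token already knows its syntactic role and its head by the time sentence boundaries exist.

    for sent in doc.sents:
        print(sent.text)
        # Each token's dependency label and head come from the syntactic tree.
        print([(token.text, token.dep_, token.head.text) for token in sent])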

Conclusion

While NLTK is certainly capable, I feel that spaCy is a better choice for most common uses. It makes the hard choices about algorithms for you, providing state-of-the-art solutions. Its Pythonic API will fit in well with modern Python programming practices, and its fast performance will be much appreciated.

Unfortunately, spaCy is English-only at the moment, so developers concerned with other languages will need to use NLTK. Developers who need to ensure that a particular algorithm is being used will also want to stick with NLTK. Everyone else should take a look at spaCy.


Automatically Generating License Data from Python Dependencies

We all know how important it is for the average startup to keep track of its open-source licensing.  While most people think of open-source licenses as interchangeable, there are meaningful differences that can have serious legal implications for your code base.  From permissive licenses like MIT or BSD to so-called “reciprocal” or “copyleft” licenses, keeping track of the alphabet soup of dependencies in your source code can be a pain.

Today, we’re releasing pylicense, a simple Python module that will add license data as comments directly from your requirements.txt or environment.yml files.
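
pylicense’s own implementation isn’t shown here, but the core idea can be sketched against PyPI’s JSON metadata API (the endpoint, parsing, and requirements handling below are illustrative, not pylicense’s actual code):

    import json
    import urllib.request

    def license_for(package):
        '''Look up a package's declared license in PyPI's JSON metadata.'''
        url = 'https://pypi.org/pypi/%s/json' % package
        with urllib.request.urlopen(url) as response:
            info = json.load(response)['info']
        return info.get('license') or 'UNKNOWN'

    # Annotate each pinned dependency in requirements.txt with its license.
    with open('requirements.txt') as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#'):
                name = line.split('==')[0]
                print('%s  # License: %s' % (line, license_for(name)))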



Painlessly Deploying Data Apps with Bokeh, Flask, and Heroku

Here at The Data Incubator, our Fellows deploy their own fully functional, public-facing web app to showcase their data science skills to employers. This not only gives them valuable experience dynamically fetching and displaying data, but also encourages them to think about end-user interaction. To demo the process, we decided to marry some of our favorite technologies:

  • Flask, a slick web framework for Python
  • Heroku for cloud-based app deployment
  • Bokeh for interactive, D3.js-style visualizations
  • Git for version control and distributing code

The goal is to create some distant ancestor of Google Finance: a form capable of accepting a stock ticker as input and producing a plot of the daily close price. Here’s the finished product. So how do we get there?
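
The full walkthrough is in the post, but a toy version of the pattern might look like this (random numbers stand in for the real price data, which the actual app fetches from a market-data API):

    import datetime
    import random

    from bokeh.embed import components
    from bokeh.plotting import figure
    from bokeh.resources import CDN
    from flask import Flask, request

    app = Flask(__name__)

    def fetch_close_prices(ticker):
        '''Stand-in for a real data source: a 90-day random walk.'''
        today = datetime.date.today()
        dates = [today - datetime.timedelta(days=i) for i in range(90, 0, -1)]
        closes = [100.0]
        for _ in range(89):
            closes.append(closes[-1] + random.gauss(0, 1))
        return dates, closes

    @app.route('/', methods=['GET', 'POST'])
    def index():
        ticker = request.form.get('ticker', 'GOOG')
        dates, closes = fetch_close_prices(ticker)
        plot = figure(title=ticker, x_axis_type='datetime')
        plot.line(dates, closes)
        script, div = components(plot)  # embeddable <script> and <div> for the plot
        bokeh_js = '\n'.join('<script src="%s"></script>' % u for u in CDN.js_files)
        return '''<html><head>%s</head><body>
            <form method="post">Ticker: <input name="ticker" value="%s">
            <input type="submit" value="Plot"></form>
            %s
            %s
            </body></html>''' % (bokeh_js, ticker, div, script)

    if __name__ == '__main__':
        app.run(debug=True)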

