PHP NlpTools

NlpTools is a set of PHP 5.3+ classes for beginner to
semi-advanced natural language processing work.

Documentation

You can find documentation and code examples at the project’s homepage.

Contents

Classification Models

  1. Multinomial Naive Bayes
  2. Maximum Entropy (Conditional Exponential model)

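To illustrate how the classification models are used, here is a minimal Naive Bayes sketch modeled on the getting-started example from the project documentation; it assumes the library is installed via Composer, and the tiny training set below is made up for illustration.

    require 'vendor/autoload.php';

    use NlpTools\Tokenizers\WhitespaceTokenizer;
    use NlpTools\Documents\TokensDocument;
    use NlpTools\Documents\TrainingSet;
    use NlpTools\FeatureFactories\DataAsFeatures;
    use NlpTools\Models\FeatureBasedNB;
    use NlpTools\Classifiers\MultinomialNBClassifier;

    // Made-up training data: array(class, text)
    $training = array(
        array('usa', 'new york is a very big town'),
        array('usa', 'the statue of liberty in new york'),
        array('greece', 'the acropolis of athens'),
        array('greece', 'a greek salad with feta'),
    );

    $tok = new WhitespaceTokenizer();
    $ff = new DataAsFeatures();

    $tset = new TrainingSet();
    foreach ($training as $d) {
        $tset->addDocument(
            $d[0],                                    // the known class
            new TokensDocument($tok->tokenize($d[1])) // the document as a bag of tokens
        );
    }

    // Train a multinomial Naive Bayes model and classify a new document
    $model = new FeatureBasedNB();
    $model->train($ff, $tset);

    $cls = new MultinomialNBClassifier($ff, $model);
    echo $cls->classify(
        array('usa', 'greece'), // candidate classes
        new TokensDocument($tok->tokenize('the parthenon in athens'))
    ), PHP_EOL;
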
Topic Modeling

LDA is still experimental and quite slow, but it works. See an example.

  1. Latent Dirichlet Allocation

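The sketch below follows the shape of the topic-modeling example in the documentation: the Lda constructor takes a feature factory, the number of topics and the two Dirichlet priors, and training runs a fixed number of Gibbs sampling iterations. Method names such as getWordsPerTopicsProbabilities are recalled from that example and may differ between versions.

    require 'vendor/autoload.php';

    use NlpTools\Tokenizers\WhitespaceTokenizer;
    use NlpTools\Documents\TokensDocument;
    use NlpTools\Documents\TrainingSet;
    use NlpTools\FeatureFactories\DataAsFeatures;
    use NlpTools\Models\Lda;

    // LDA is unsupervised, so the class label passed to addDocument() is not used
    $tok = new WhitespaceTokenizer();
    $tset = new TrainingSet();
    $texts = array(
        'the cat sat on the mat',
        'dogs and cats make good pets',
        'stocks fell sharply on monday',
        'the market rallied after the report',
    );
    foreach ($texts as $text) {
        $tset->addDocument('', new TokensDocument($tok->tokenize($text)));
    }

    $lda = new Lda(
        new DataAsFeatures(), // use the tokens themselves as features
        2,                    // number of topics
        1,                    // alpha: document-topic Dirichlet prior
        1                     // beta: topic-word Dirichlet prior
    );
    $lda->train($tset, 50); // 50 Gibbs sampling iterations

    // Top words per topic (method name as in the documentation example)
    print_r($lda->getWordsPerTopicsProbabilities(5));
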
Clustering

  1. K-Means
  2. Hierarchical Agglomerative Clustering

    • SingleLink
    • CompleteLink
    • GroupAverage

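A sketch of K-Means clustering, assuming the constructor signature used in the project's clustering examples (number of clusters, a distance metric, a centroid factory, and a convergence cutoff); the exact shape of the value returned by cluster() may differ between versions.

    require 'vendor/autoload.php';

    use NlpTools\Tokenizers\WhitespaceTokenizer;
    use NlpTools\Documents\TokensDocument;
    use NlpTools\Documents\TrainingSet;
    use NlpTools\FeatureFactories\DataAsFeatures;
    use NlpTools\Clustering\KMeans;
    use NlpTools\Similarity\Euclidean;
    use NlpTools\Clustering\CentroidFactories\Euclidean as EuclideanCF;

    $tok = new WhitespaceTokenizer();
    $tset = new TrainingSet();
    $texts = array(
        'goal scored in the final minute',
        'the team won the championship',
        'parliament passed the new budget',
        'the minister announced tax cuts',
    );
    foreach ($texts as $text) {
        $tset->addDocument('', new TokensDocument($tok->tokenize($text)));
    }

    $clust = new KMeans(
        2,                 // number of clusters
        new Euclidean(),   // distance metric
        new EuclideanCF(), // how to compute each cluster's centroid
        0.001              // stop once the centroids barely move
    );

    // The first element of the result holds the clusters
    // as groups of document indices into the training set
    $result = $clust->cluster($tset, new DataAsFeatures());
    print_r($result[0]);
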
Tokenizers

  1. WhitespaceTokenizer
  2. WhitespaceAndPunctuationTokenizer
  3. PennTreebankTokenizer
  4. RegexTokenizer
  5. ClassifierBasedTokenizer
    This tokenizer allows us to build much more complex tokenizers
    than the previous ones

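A short sketch contrasting the two simplest tokenizers; all of the tokenizers implement the same tokenize() method, so they can be swapped freely.

    require 'vendor/autoload.php';

    use NlpTools\Tokenizers\WhitespaceTokenizer;
    use NlpTools\Tokenizers\WhitespaceAndPunctuationTokenizer;

    $s = "Please don't tokenize me, I'm fragile!";

    $whitespace = new WhitespaceTokenizer();
    $punctuation = new WhitespaceAndPunctuationTokenizer();

    print_r($whitespace->tokenize($s));  // punctuation stays attached to the words
    print_r($punctuation->tokenize($s)); // punctuation becomes separate tokens
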
Documents

  1. TokensDocument
    Represents a bag-of-words model of a document.
  2. WordDocument
    Represents a single word together with its context in a larger document.
  3. TrainingDocument
    Represents a document whose class is known.
  4. TrainingSet
    A collection of TrainingDocuments.

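A brief sketch of how the document classes fit together: a TokensDocument wraps an array of tokens, and a TrainingSet stores documents whose class is known (wrapping each one in a TrainingDocument).

    require 'vendor/autoload.php';

    use NlpTools\Tokenizers\WhitespaceTokenizer;
    use NlpTools\Documents\TokensDocument;
    use NlpTools\Documents\TrainingSet;

    $tok = new WhitespaceTokenizer();

    // A TokensDocument is a bag-of-words view over an array of tokens
    $doc = new TokensDocument($tok->tokenize('The quick brown fox jumps over the lazy dog'));
    print_r($doc->getDocumentData()); // the token array

    // A TrainingSet collects documents together with their known class
    $tset = new TrainingSet();
    $tset->addDocument('pangrams', $doc);
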
Feature factories

  1. FunctionFeatures
    Allows the creation of a feature factory from a number of callables
  2. DataAsFeatures
    Simply returns the data as features.

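As a sketch of a custom feature factory, FunctionFeatures is built from callables; each callable receives the candidate class and the document and returns the features it contributes. The second callable below is made up purely for illustration.

    require 'vendor/autoload.php';

    use NlpTools\FeatureFactories\FunctionFeatures;
    use NlpTools\Documents\TokensDocument;

    $ff = new FunctionFeatures(array(
        // Every token becomes a feature (this is essentially what DataAsFeatures does)
        function ($class, $d) {
            return $d->getDocumentData();
        },
        // Made-up extra feature: a coarse document-length indicator
        function ($class, $d) {
            return count($d->getDocumentData()) > 10 ? 'long_doc' : 'short_doc';
        },
    ));

    $doc = new TokensDocument(array('win', 'money', 'now'));
    print_r($ff->getFeatureArray('SPAM', $doc));
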
Similarity

  1. Jaccard Index
  2. Cosine similarity
  3. Simhash
  4. Euclidean
  5. HammingDistance

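A small sketch of the similarity metrics; each class exposes a similarity() method over two token arrays (the distance-style metrics also provide a distance method). The 64-bit length passed to Simhash is just an example value.

    require 'vendor/autoload.php';

    use NlpTools\Similarity\JaccardIndex;
    use NlpTools\Similarity\CosineSimilarity;
    use NlpTools\Similarity\Simhash;

    $a = array('my', 'name', 'is', 'john');
    $b = array('my', 'name', 'is', 'joe');

    $jaccard = new JaccardIndex();
    $cosine = new CosineSimilarity();
    $simhash = new Simhash(64); // hash length in bits

    echo 'Jaccard: ', $jaccard->similarity($a, $b), PHP_EOL;
    echo 'Cosine:  ', $cosine->similarity($a, $b), PHP_EOL;
    echo 'Simhash: ', $simhash->similarity($a, $b), PHP_EOL;
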
Stemmers

  1. PorterStemmer
  2. RegexStemmer
  3. LancasterStemmer
  4. GreekStemmer

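A sketch of the stemmer interface, shown with the Porter stemmer; the other stemmers expose the same methods, so they can be substituted directly.

    require 'vendor/autoload.php';

    use NlpTools\Stemmers\PorterStemmer;

    $stemmer = new PorterStemmer();

    echo $stemmer->stem('running'), PHP_EOL; // "run"

    // Stem a whole token list at once
    print_r($stemmer->stemAll(array('fishing', 'fished', 'fisher')));
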
Optimizers (MaxEnt only)

  1. A gradient descent optimizer (written in PHP) for educational use.
    It is a simple implementation for anyone who wants to learn a bit
    more about either gradient descent or MaxEnt models.
  2. A fast, parallel gradient descent optimizer written in Go
    (faster than the NLTK/SciPy implementation). This optimizer
    resides in another repository and is used via the external optimizer.
    TODO: At least write a README for the optimizer written in Go.

Other

  1. Idf (inverse document frequency)
  2. Stop words
  3. Language-based normalizers
  4. Classifier-based transformations for creating flexible preprocessing pipelines
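
As a hedged sketch of the stop-word support: StopWords is a transformation (like the normalizers and stemmers) that can be applied to a document before feature extraction; the applyTransformations() method name is recalled from the project documentation and may differ between versions.

    require 'vendor/autoload.php';

    use NlpTools\Utils\StopWords;
    use NlpTools\Tokenizers\WhitespaceTokenizer;
    use NlpTools\Documents\TokensDocument;

    $tok = new WhitespaceTokenizer();
    $doc = new TokensDocument($tok->tokenize('the quick brown fox and the lazy dog'));

    // Drop common function words before extracting features
    $stop = new StopWords(array('the', 'and', 'a', 'of'));
    $doc->applyTransformations(array($stop));

    print_r($doc->getDocumentData()); // stop words removed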