Visualise distances between texts

I'm working on a research project for school. I've written some text mining software that analyzes legal texts in a collection and spits out a score that indicates how similar they are. I ran the program to compare each text with every other text, and I have data like this (although with many more points):

codeofhammurabi.txt crete.txt      0.570737
codeofhammurabi.txt iraqi.txt      1.13475
codeofhammurabi.txt magnacarta.txt 0.945746
codeofhammurabi.txt us.txt         1.25546
crete.txt iraqi.txt                0.329545
crete.txt magnacarta.txt           0.589786
crete.txt us.txt                   0.491903
iraqi.txt magnacarta.txt           0.834488
iraqi.txt us.txt                   1.37718
magnacarta.txt us.txt              1.09582

Now I need to plot them on a graph. I can easily invert the scores so that a small value now indicates texts that are similar and a large value indicates texts that are dissimilar: the value can be the distance between points on a graph representing the texts.

codeofhammurabi.txt crete.txt      1.75212
codeofhammurabi.txt iraqi.txt      0.8812
codeofhammurabi.txt magnacarta.txt 1.0573
codeofhammurabi.txt us.txt         0.7965
crete.txt iraqi.txt                3.0344
crete.txt magnacarta.txt           1.6955
crete.txt us.txt                   2.0329
iraqi.txt magnacarta.txt           1.1983
iraqi.txt us.txt                   0.7261
magnacarta.txt us.txt              0.9125

SHORT VERSION: Those values directly above are distances between points on a scatter plot (1.75212 is the distance between the codeofhammurabi point and the crete point). I can imagine a big system of equations with circles representing the distances between points. What's the best way to make this graph? I have MATLAB, R, Excel, and access to pretty much any software I might need.

If you can even point me in a direction, I'll be infinitely grateful.

Cradling answered 14/4, 2013 at 21:57 Comment(6)
I can't think of anything else but a symmetrical 3D bar plot, X and Y being indices for your bodies of text (like 1=codeofhammurabi.txt, 2=crete.txt, ...), and Z being the similarity factor. Would this do? – Celeriac
Or maybe a color map? (similarity coded as color) – Celeriac
Have a look at pheatmap in the pheatmap package? – Ghee
How would you draw this graph? Every node would be connected to every other node. Possible, but probably not pretty with a large number of nodes and different spacing between each. A 3D graph or colour map is probably your best bet. – Colonic
Good thinking; I'll definitely give the color-coding a try. What I really had in mind was something like what this guy did. – Cradling
Well, I suppose you could define a threshold value above which you consider two texts 'connected' and place an edge between those nodes. You could lay out and draw the graph with GraphViz4Matlab. – Colonic
C
10

Your data are really distances (of some form) in the multivariate space spanned by the corpus of words contained in the documents. Dissimilarity data such as these are often ordinated to provide the best k-d mapping of the dissimilarities. Principal coordinates analysis and non-metric multidimensional scaling are two such methods. I would suggest you plot the results of applying one or the other of these methods to your data. I provide examples of both below.

First, load in the data you supplied (without labels at this stage)

con <- textConnection("1.75212
0.8812
1.0573
0.7965
3.0344
1.6955
2.0329
1.1983
0.7261
0.9125
")
vec <- scan(con)
close(con)

What you effectively have is the following distance matrix:

mat <- matrix(ncol = 5, nrow = 5)
mat[lower.tri(mat)] <- vec
colnames(mat) <- rownames(mat) <-
  c("codeofhammurabi","crete","iraqi","magnacarta","us")

> mat
                codeofhammurabi  crete  iraqi magnacarta us
codeofhammurabi              NA     NA     NA         NA NA
crete                   1.75212     NA     NA         NA NA
iraqi                   0.88120 3.0344     NA         NA NA
magnacarta              1.05730 1.6955 1.1983         NA NA
us                      0.79650 2.0329 0.7261     0.9125 NA

R, in general, needs a dissimilarity object of class "dist". We could use as.dist(mat) now to get such an object, or we could skip creating mat and go straight to the "dist" object like this:

class(vec) <- "dist"
attr(vec, "Labels") <- c("codeofhammurabi","crete","iraqi","magnacarta","us")
attr(vec, "Size") <- 5
attr(vec, "Diag") <- FALSE
attr(vec, "Upper") <- FALSE

> vec
           codeofhammurabi   crete   iraqi magnacarta
crete              1.75212                           
iraqi              0.88120 3.03440                   
magnacarta         1.05730 1.69550 1.19830           
us                 0.79650 2.03290 0.72610    0.91250
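As the paragraph above notes, if you did build mat, the same object can be obtained in a single step; a minimal equivalent (d is a new name used only for this illustration):

d <- as.dist(mat)  # same lower-triangle "dist" object; labels are taken from the row names of mat
d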

Now that we have an object of the right type, we can ordinate it. R has many packages and functions for doing this (see the Multivariate or Environmetrics Task Views on CRAN), but I'll use the vegan package as I am somewhat familiar with it...

require("vegan")

Principal coordinates

First I illustrate how to do principal coordinates analysis on your data using vegan.

pco <- capscale(vec ~ 1, add = TRUE)
pco

> pco
Call: capscale(formula = vec ~ 1, add = TRUE)

              Inertia Rank
Total           10.42     
Unconstrained   10.42    3
Inertia is squared Unknown distance (euclidified) 

Eigenvalues for unconstrained axes:
 MDS1  MDS2  MDS3 
7.648 1.672 1.098 

Constant added to distances: 0.7667353

The first PCO axis is by far the most important for explaining the between-text differences, as shown by the eigenvalues. An ordination plot can now be produced by plotting the eigenvectors of the PCO, using the plot method

plot(pco)

which produces

[Figure: PCO ordination plot of the five texts]
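If you want the numbers behind that plot, both the importance of each axis and the coordinates themselves can be extracted from the fitted object. A minimal sketch using vegan's eigenvals() and scores() (object names follow the code above; scrs is a new name used just here):

## proportion of the total inertia explained by each PCO axis
summary(eigenvals(pco))

## site (text) scores on the first two axes, for a hand-rolled labelled plot
scrs <- scores(pco, display = "sites", choices = 1:2)
plot(scrs, type = "n", asp = 1)
text(scrs, labels = attr(vec, "Labels"))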

Non-metric multidimensional scaling

A non-metric multidimensional scaling (nMDS) does not attempt to find a low-dimensional representation of the original distances in a Euclidean space. Instead it tries to find a mapping in k dimensions that best preserves the rank ordering of the distances between observations. There is no closed-form solution to this problem (unlike the PCO applied above), and an iterative algorithm is required to provide a solution. Random starts are advised to assure yourself that the algorithm hasn't converged to a sub-optimal, locally optimal solution. Vegan's metaMDS function incorporates these features and more besides. If you want plain old nMDS, then see isoMDS in package MASS.

set.seed(42)
sol <- metaMDS(vec)

> sol

Call:
metaMDS(comm = vec) 

global Multidimensional Scaling using monoMDS

Data:     vec 
Distance: user supplied 

Dimensions: 2 
Stress:     0 
Stress type 1, weak ties
No convergent solutions - best solution after 20 tries
Scaling: centring, PC rotation 
Species: scores missing

With this small data set we can essentially represent the rank ordering of the dissimilarities perfectly (hence the warning, not shown). A plot can be achieved using the plot method

plot(sol, type = "text", display = "sites")

which produces

[Figure: nMDS ordination plot of the five texts]

In both cases the distance on the plot between samples is the best 2-d approximation of their dissimilarity. In the case of the PCO plot, it is a 2-d approximation of the real dissimilarity (3 dimensions are needed to represent all of the dissimilarities fully), whereas in the nMDS plot, the distance between samples on the plot reflects the rank dissimilarity, not the actual dissimilarity between observations. But essentially, distances on the plot represent the computed dissimilarities: texts that are close together are most similar, and texts located far apart on the plot are the most dissimilar to one another.
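If you want a quick numerical check on how faithfully each 2-d configuration reproduces the supplied dissimilarities, you can correlate the plot distances with the input values. A small sketch, reusing the vec, pco and sol objects from above (d_pco and d_nmds are new names used just here):

## distances between the points as drawn in each 2-d ordination
d_pco  <- dist(scores(pco, display = "sites", choices = 1:2))
d_nmds <- dist(scores(sol, display = "sites"))

## PCO tries to reproduce the dissimilarities themselves, nMDS only their rank order
cor(as.numeric(vec), as.numeric(d_pco))                         # linear agreement
cor(as.numeric(vec), as.numeric(d_nmds), method = "spearman")   # rank agreement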

Cassiopeia answered 14/4, 2013 at 23:0 Comment(0)
L
12

If the question is 'how can I do something like this guy did?' (from xiii1408's comment on the question), then the answer is to use Gephi's built-in Force Atlas 2 algorithm on Euclidean distances of the documents' topic posterior probabilities.

"This guy" is Matt Jockers, who is an innovative scholar in the digital humanities. He has documented some of his methods on his blog and else where, etc. Jockers mostly works in R and shares some of his code. His basic work flow seems to be:

  1. break plain text into 1000-word chunks,
  2. remove stopwords (don't stem),
  3. do part-of-speech tagging and keep nouns only,
  4. build a topic model (using LDA),
  5. calculate Euclidean distances between documents based on topic proportions, subset the distances to keep only ones below a certain threshold, and then
  6. visualise with a force-directed graph

Here's a small-scale reproducible example in R (with an export to Gephi) that might be close to what Jockers did:

#### prepare workspace
# delete current objects and clear RAM
rm(list = ls(all.names = TRUE))
gc()

Get data...

#### import text
# working from the topicmodels package vignette
# using collection of abstracts of the Journal of Statistical Software (JSS) (up to 2010-08-05).
install.packages("corpus.JSS.papers", repos = "http://datacube.wu.ac.at/", type = "source")
data("JSS_papers", package = "corpus.JSS.papers")
# For reproducibility of results we use only abstracts published up to 2010-08-05 
JSS_papers <- JSS_papers[JSS_papers[,"date"] < "2010-08-05",]

Clean and reshape...

#### clean and reshape data
# Omit abstracts containing non-ASCII characters (e.g. Greek letters from math notation)
JSS_papers <- JSS_papers[sapply(JSS_papers[, "description"], Encoding) == "unknown",]
# strip any HTML markup from the abstracts
library("tm")
library("XML")
remove_HTML_markup <- function(s) tryCatch({
    doc <- htmlTreeParse(paste("<!DOCTYPE html>", s),
                         asText = TRUE, trim = FALSE)
    xmlValue(xmlRoot(doc))
  }, error = function(s) s)
# create corpus
corpus <- Corpus(VectorSource(sapply(JSS_papers[, "description"], remove_HTML_markup)))
# clean corpus by removing stopwords, numbers, punctuation, whitespace, and words <3 characters long
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
corpus_clean <- tm_map(corpus, wordLengths=c(3,Inf), FUN = tm_reduce, tmFuns = funcs)

Part-of-speech tagging and subsetting of nouns...

#### Part-of-speech tagging to extract nouns only
library("NLP")
library("openNLP")
# function for POS tagging
tagPOS <-  function(x) {

  s <- NLP::as.String(x)
  ## Need sentence and word token annotations.

  a1 <- NLP::Annotation(1L, "sentence", 1L, nchar(s))
  a2 <- NLP::annotate(s, openNLP::Maxent_Word_Token_Annotator(), a1)
  a3 <- NLP::annotate(s,  openNLP::Maxent_POS_Tag_Annotator(), a2)

  ## Determine the distribution of POS tags for word tokens.
  a3w <- a3[a3$type == "word"]
  POStags <- unlist(lapply(a3w$features, `[[`, "POS"))

  ## Extract token/POS pairs (all of them): easy - not needed
  # POStagged <- paste(sprintf("%s/%s", s[a3w], POStags), collapse = " ")
  return(unlist(POStags))
} 
# loop to do POS tagging on each document, with garbage collection after each one
# first prepare vector to hold results (for optimal loop speed)
corpus_clean_tagged <- vector(mode = "list",  length = length(corpus_clean))
# then loop through each doc and do POS tagging
# warning: this may take some time!
for(i in 1:length(corpus_clean)){
  corpus_clean_tagged[[i]] <- tagPOS(corpus_clean[[i]])
  print(i) # nice to see what we're up to
  gc()
}

# subset nouns
wrds <- lapply(unlist(corpus_clean), function(i) unlist(strsplit(i, split = " ")))
NN <- lapply(corpus_clean_tagged, function(i) i == "NN")
Noun_strings <- lapply(1:length(wrds), function(i) unlist(wrds[i])[unlist(NN[i])])
Noun_strings <- lapply(Noun_strings, function(i) paste(i, collapse = " "))
# have a look to see what we've got
Noun_strings[[1]]
[8] "variogram model splus user quality variogram model pairs locations measurements variogram nonstationarity outliers variogram fit sets soil nitrogen concentration"

Topic modelling with latent Dirichlet allocation...

#### Topic modelling with LDA
# (Jockers uses the lda package and MALLET, maybe topicmodels too, I'm not sure.
#  I'm most familiar with the topicmodels package, so here it is. Note that MALLET
#  can also be run from R: https://gist.github.com/benmarwick/4537873)
# put the cleaned documents back into a corpus for topic modelling
corpus <- Corpus(VectorSource(Noun_strings))
# create document term matrix 
JSS_dtm <- DocumentTermMatrix(corpus)
# generate topic model 
library("topicmodels")
k <- 30 # arbitrary number of topics (there are ways to optimise this)
JSS_TM <- LDA(JSS_dtm, k) # make topic model
# make data frame where rows are documents, columns are topics and cells 
# are posterior probabilities of topics
JSS_topic_df <- setNames(as.data.frame(JSS_TM@gamma),  paste0("topic_",1:k))
# add row names that link each document to a human-readable bit of data
# in this case we'll just use a few words of the title of each paper
row.names(JSS_topic_df) <- lapply(1:length(JSS_papers[,1]), function(i) gsub("\\s","_",substr(JSS_papers[,1][[i]], 1, 60)))

Calculate Euclidean distances between documents, using the topic probabilities as each document's 'DNA'...

#### Euclidean distance matrix
library(cluster)
JSS_topic_df_dist <-  as.matrix(daisy(JSS_topic_df, metric =  "euclidean", stand = TRUE))
# Change row values to zero if greater than the row minimum plus the row standard deviation
# This is how Jockers subsets the distance matrix to keep only
# closely related documents and avoid a dense spaghetti diagram
# that's difficult to interpret (hat-tip: https://mcmap.net/q/1014638/-change-row-values-to-zero-if-less-than-row-standard-deviation)
JSS_topic_df_dist[ sweep(JSS_topic_df_dist, 1, (apply(JSS_topic_df_dist,1,min) + apply(JSS_topic_df_dist,1,sd) )) > 0 ] <- 0

Visualize using a force-directed graph...

#### network diagram using Fruchterman & Reingold algorithm (Jockers uses the ForceAtlas2 algorithm which is unique to Gephi)
library(igraph)
g <- as.undirected(graph.adjacency(JSS_topic_df_dist))
layout1 <- layout.fruchterman.reingold(g, niter=500)
plot(g, layout=layout1, edge.curved = TRUE, vertex.size = 1,  vertex.color= "grey", edge.arrow.size = 0.1, vertex.label.dist=0.5, vertex.label = NA)

[Figure: igraph network plot (Fruchterman-Reingold layout)]

And if you want to use the Force Atlas 2 algorithm in Gephi, you simply export the R graph object to a GraphML file, open it in Gephi, and set the layout to Force Atlas 2:

# this line will export from R and make the file 'JSS.graphml' in your working directory ready to open with Gephi
write.graph(g, file="JSS.graphml", format="graphml") 

Here's the Gephi plot with the Force Atlas 2 algorithm:

[Figure: Gephi network plot (Force Atlas 2 layout)]

Lavona answered 15/4, 2013 at 8:19 Comment(6)
I know this is trivial, but for some reason I'm having difficulties: how can you modify the plot command to have labels printed with the points? I know it's not useful for this example, but some of my data sets are smaller, and this would be handy. – Cradling
Yes, in the plot function you include vertex.label=names(X), where X is the data frame, or you can use any other vector of your labels in place of names(X). Here's a simple example: X <- data.frame(matrix(sample(c(0,0,1,2), 25, replace=TRUE), ncol=5)); names(X) <- LETTERS[1:5]; X; str(X); g <- graph.adjacency(X); plot(g, layout=layout.fruchterman.reingold, vertex.size=4, edge.arrow.size = 0.01, vertex.label=names(X), vertex.label.dist=0.5) – Lavona
And to adjust the size of the point labels, you can experiment with vertex.label.cex = 0.1 in the plot function. – Lavona
@Lavona Thank you for sharing this. I'm keen to try this out, but I appear to be having trouble with the openNLP package: am I correct that this has changed radically since you wrote your answer? tmTagPOS appears to have been replaced by Maxent_POS_Tag_Annotator, and indeed much support for tm seems to have disappeared. Working my way through it now, but (on the assumption you've already solved this) I would love any pointers. – Nightingale
Yes, POS tagging in R has all changed recently. I've updated my answer to use the current POS tagger from NLP and openNLP. Let me know if you have any problems. – Lavona
That's amazing, thank you. It works for the data set given. Trying to reproduce your Gephi plot, I notice that the underlying igraph and Gephi data sets appear to be different... Once again, thank you for such a speedy response. – Nightingale
K
2

You could draw a network graph using igraph. The Fruchterman-Reingold layout has a parameter for edge weights: weights bigger than 1 result in more "attraction" along the edges, weights less than 1 do the opposite. In your example, crete.txt has the lowest distances, so it sits in the middle with shorter edges to the other vertices; in fact, it is closest to iraqi.txt. Note that you have to invert the data for E(g)$weight to get the correct distances.

data1 <- read.table(text="
codeofhammurabi.txt crete.txt      0.570737
codeofhammurabi.txt iraqi.txt      1.13475
codeofhammurabi.txt magnacarta.txt 0.945746
codeofhammurabi.txt us.txt         1.25546
crete.txt iraqi.txt                0.329545
crete.txt magnacarta.txt           0.589786
crete.txt us.txt                   0.491903
iraqi.txt magnacarta.txt           0.834488
iraqi.txt us.txt                   1.37718
magnacarta.txt us.txt              1.09582")
par(mar=c(3,7,3.5,5), las=1)

library(igraph)
g <- graph.data.frame(data1, directed = FALSE)
E(g)$weight <- 1/data1[,3] # inverted: high weights = more attraction along the edges
l <- layout.fruchterman.reingold(g, weights=E(g)$weight)
plot(g, layout=l)

[Figure: igraph network with Fruchterman-Reingold layout, edges weighted by inverse distance]
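If you also want the raw scores printed on the graph, plot.igraph accepts edge labels; a small addition to the plot call above (the edge order produced by graph.data.frame follows the row order of data1):

# same layout, but print the value from the first table on each edge
plot(g, layout = l, edge.label = round(data1[, 3], 2), edge.label.cex = 0.8)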

Kyser answered 14/4, 2013 at 23:18 Comment(0)
G
0

Are you making all pairwise comparisons? Depending on how you calculate the distance (similarity), I am not sure it is possible to make such a scatter plot. When you have only 3 text files to consider, the scatter plot is easy to make (a triangle with sides equal to the distances), but when you add a fourth point you might not be able to place it anywhere such that its distances to the existing 3 points satisfy all the constraints.

But if you can do that, then you have a solution: just keep adding new points, I think. Or, if you don't need the distances on the scatter plot to be precise, you can simply draw a web and label each edge with its distance.
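One quick way to check whether the distances can be placed exactly in a flat scatter plot is classical (metric) MDS: if more than two eigenvalues are clearly positive, two dimensions can only approximate the distances. A small sketch in R using the inverted scores from the question (the object names labs, m, d and fit are made up for this example):

labs <- c("codeofhammurabi", "crete", "iraqi", "magnacarta", "us")
m <- matrix(0, 5, 5, dimnames = list(labs, labs))
m[lower.tri(m)] <- c(1.75212, 0.8812, 1.0573, 0.7965,   # hammurabi vs the rest
                     3.0344, 1.6955, 2.0329,            # crete vs the rest
                     1.1983, 0.7261,                    # iraqi vs the rest
                     0.9125)                            # magnacarta vs us
d <- as.dist(m)
fit <- cmdscale(d, k = 2, eig = TRUE)   # classical MDS into 2 dimensions
round(fit$eig, 3)   # more than two clearly positive values => a 2-d plot is only approximate
plot(fit$points, type = "n", asp = 1)
text(fit$points, labels = labs)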

Gittens answered 14/4, 2013 at 22:19 Comment(0)
I
0

Here's a potential solution for MATLAB:

You can arrange your data into a formal 5x5 similarity matrix S, where element S(i,j) represents the similarity (or dissimilarity) between document i and document j. Assuming your distance measure is an actual metric, you can apply multidimensional scaling to this matrix via mdscale(S,2).

This function will attempt to find a 5x2 representation of your data that preserves the similarities (or dissimilarities) between your documents found in the higher-dimensional space. You can then visualize this data as a scatter plot of 5 points.

You could also try mdscale(S,3) to project into a 5x3 matrix, which you can then visualize with plot3().

Installation answered 14/4, 2013 at 22:58 Comment(0)
K
0

If you want circles representing the distances between points, this would work in R (I used the first table in your example):

data1 <- read.table(text="
codeofhammurabi.txt crete.txt      0.570737
codeofhammurabi.txt iraqi.txt      1.13475
codeofhammurabi.txt magnacarta.txt 0.945746
codeofhammurabi.txt us.txt         1.25546
crete.txt iraqi.txt                0.329545
crete.txt magnacarta.txt           0.589786
crete.txt us.txt                   0.491903
iraqi.txt magnacarta.txt           0.834488
iraqi.txt us.txt                   1.37718
magnacarta.txt us.txt              1.09582")
par(mar=c(3,7,3.5,5), las=1)

symbols(data1[,1],data1[,2], circles=data1[,3], inches=0.55, bg="lightblue", xaxt="n", yaxt="n", ylab="")
axis(1, at=data1[,1],labels=data1[,1])
axis(2, at=data1[,2],labels=data1[,2])
text(data1[,1], data1[,2], round(data1[,3],2), cex=0.9)

[Figure: bubble plot of pairwise distances drawn with symbols()]

Kyser answered 14/4, 2013 at 22:58 Comment(0)
C
0

This MATLAB snippet should work if you want to try a 3D bar view:

% Load data from file 'dist.dat', with values separated by spaces
fid = fopen('dist.dat');
data = textscan(                            ...
        fid,                   '%s%s%f', ...
        'Delimiter',           ' ',      ...
        'MultipleDelimsAsOne', true      ...
);
fclose(fid);

% Find all unique sources
text_bodies = unique(reshape([data{1:2}],[],1));

% Iterate through the records and complete the similarity matrix
N = numel(text_bodies);
similarity = NaN(N,N);
for k = 1:size(data{1},1)
        n1 = find(strcmp(data{1}{k}, text_bodies));
        n2 = find(strcmp(data{2}{k}, text_bodies));

        similarity(n1, n2) = data{3}(k); % Symmetrical part ignored
end;

% Display 3D bar chart
bar3(similarity);
Celeriac answered 14/4, 2013 at 23:2 Comment(0)
