22 Dec 2012

Convert OpenStreetMap Objects to KML with R

A quick geo-tip:
With the osmar and maptools packages you can easily pull an OpenStreetMap object and convert it to KML, like below (thanks to adibender for helping out on SO). I found the relation ID by googling for it (www.google.at/search?q=openstreetmap+relation+innsbruck).

# get OSM data
library(osmar)
library(maptools)
 
innsbruck <- get_osm(relation(113642), full = T)
sp_innsbruck <- as_sp(innsbruck, what = "lines")
 
# convert to KML
for( i in seq_along(sp_innsbruck) ) {
      kmlLine(sp_innsbruck@lines[[i]], kmlfile = "innsbruck.kml",
               lwd = 3, col = "blue", name = "Innsbruck") 
      }
 
shell.exec("innsbruck.kml")

16 Dec 2012

Taxonomy with R: Exploring the Taxize-Package

First off, I'd really like to give a shout-out to the brave people who have created and maintain this great package - the fame is yours!

So, while exploring the capabilities of the package, some issues with the ITIS server arose, and with large datasets things weren't working out well for me.
I then switched to the NCBI API and found that the results are much better there (way quicker and, at first glance, with higher coverage).
At the time of writing there is no taxize function that pulls taxonomic details from a classification returned by NCBI, so I plugged together a little wrapper - see here:

# some species data:
spec <- data.frame("Species" = I(c("Bryum schleicheri", "Bryum capillare", "Bryum argentum", "Escherichia coli", "Glis glis")))
spl <- strsplit(spec$Species, " ")
spec$Genus <- as.character(sapply(spl, "[[", 1))

# for pulling taxonomic details we'd best submit higher rank taxons
# in this case Genera. Then we'll submit Genus Bryum only once and 
# save some computation time (might be an issue if you deal 
# with large datasets..)

gen_uniq <- unique(spec$Genus)

# function for pulling classification details ("phylum" in this case)
get_sys_level <- function(x){ require(taxize)
                              a <- classification(get_uid(x))
                              y <- data.frame(a[[1]])                                        # if there are multiple results, take the first..
                              z <- tryCatch(as.character(y[which(y[,2] == "phylum"), 1]),    # in case of any other errors put NA
                                            error = function(e) NA)
                              z <- ifelse(length(z) != 0, z, NA)                             # if the taxonomic detail is not covered return NA
                              return(data.frame(Taxon = x, Syslevel = z))
                             }

# call function and rbind the returned values 
result <- do.call(rbind, lapply(gen_uniq, get_sys_level))
print(result)
#         Taxon       Syslevel
# 1       Bryum   Streptophyta
# 2 Escherichia Proteobacteria
# 3        Glis       Chordata

# now merge back to the original data frame
spec_new <- merge(spec, result, by.x = "Genus", by.y = "Taxon")
print(spec_new)
#         Genus           Species       Syslevel
# 1       Bryum Bryum schleicheri   Streptophyta
# 2       Bryum   Bryum capillare   Streptophyta
# 3       Bryum    Bryum argentum   Streptophyta
# 4 Escherichia  Escherichia coli Proteobacteria
# 5        Glis         Glis glis       Chordata
#

28 Nov 2012

So, What Are You? ..A Plant? ..An Animal? -- Nope, I'm a Fungus!


Lately I had a list of about 1000 species names and I wanted to filter out only the plants, as that is where I come from. I knew that Scott Chamberlain has put together the ritis package, which obviously can do such things. However, I knew of ITIS before and was keen to give it a shot..

Here's what I've come up with (using the ITIS API; updated on 11 Dec 2012 - the previous version had a flaw with indefinite matches, which should be fixed now. Note that there are of course species that are not covered by the database, e.g. Ixodes, see below):



library(XML)
get_tsn <- function(sp_name) {
           require(XML)
           units <- tolower(unlist(strsplit(sp_name, " ")))

           # valid string?
           if (length(units) > 2) { stop("...No valid search string submitted (two words separated by one space)!") }

           itis_xml <- htmlParse(paste("http://www.itis.gov/ITISWebService/services/ITISService/searchByScientificName?srchKey=", 
                                       sp_name, sep=""))
           tsn <- xpathSApply(itis_xml, "//tsn", xmlValue)
           unitname1 <- tolower(gsub("\\s+", "", xpathSApply(itis_xml, "//unitname1", xmlValue)))
           unitname2 <- tolower(gsub("\\s+", "", xpathSApply(itis_xml, "//unitname2", xmlValue)))
           unitname3 <- tolower(gsub("\\s+", "", xpathSApply(itis_xml, "//unitname3", xmlValue)))

           # sp_name = only Genus; get tsn where sp_name matches perfectly and unitname2 (lower level taxon) is absent 
           if (length(units) == 1) {
               return(tsn[tolower(sub("\\s+", "", unitname1)) == tolower(sp_name) & unitname2 == ""]) }

           # sp_name = Genus and epithet; get tsn where both match perfectly and unitname3 (lower level taxon) is absent 
           if (length(units) == 2) {
               return(tsn[unitname1 == units[1] & 
                          unitname2 == units[2] &
                          nchar(unitname3) == 0]) }
           }

get_kngdm <- function(tsn) {
                   kngdm <- xpathSApply(htmlParse(paste("http://www.itis.gov/ITISWebService/services/ITISService/getKingdomNameFromTSN?tsn=", 
                                                       tsn, sep="")), 
                                                  "//kingdomname", xmlValue)
           return(kngdm)
           }

get_tsn_kngdm <- function(x) {y = get_tsn(x)
                              z = get_kngdm(y)
                              return(list(Name = x, TSN = y, Kingdom = z))
                              }

# I had some API-related errors (I guess it was mysteriously not answering in 
# some cases). I couldn't resolve this and thus implemented tryCatch
get_tsn_kngdm_try <- function(x) tryCatch(get_tsn_kngdm(x), error = function(e) NULL)

sp_names <- c("Clostridium", "Physcia", "Viola", "Ixodes", "LYNX", "Homo sapiens", "Canis lupus")

system.time(result <- data.frame(do.call(rbind, lapply(sp_names, FUN = get_tsn_kngdm_try))))
result
#        user      system     elapsed 
#        1.54        0.01       33.66 
#           Name    TSN  Kingdom
# 1  Clostridium 555645   Monera
# 2      Physcia  14024    Fungi
# 3        Viola  22030  Plantae
# 4       Ixodes                
# 5         LYNX 180581 Animalia
# 6 Homo sapiens 180092 Animalia
# 7  Canis lupus 180596 Animalia
#

20 Nov 2012

Add Comments in MS-Word using VBA

This VBA procedure (1) searches for words ("Str1", "Str2", ..) and adds comment boxes to them, (2) changes the author name and initials used in the boxes, and (3) formats the comment text style (addressed here by its German built-in name "Kommentartext"):

Sub CommentBox()

    Dim range As range
    Dim i As Long
    Dim TargetList
    Dim cm As Comment
    
    TargetList = Array("Str1", "Str2")
    
    For i = 0 To UBound(TargetList)
    
        Set range = ActiveDocument.range
        
        With range.Find
            .Text = TargetList(i)
            .Format = True
            .MatchCase = False
            .MatchWholeWord = True
            .MatchWildcards = False
            .MatchSoundsLike = False
            .MatchAllWordForms = False
            
            Do While .Execute(Forward:=True) = True
                Set cm = range.Comments.Add(range:=range, Text:="Hallo")
                cm.Author = "InternalName"
                cm.Initial = "Initials"
            Loop
        
        End With
    
    Next

With ActiveDocument.Styles("Kommentartext").Font
        .Size = 12
        .Bold = True
        .Italic = True
        .Color = wdColorBlue
End With

End Sub


12 Nov 2012

WIKI Search in Excel with VBA

Here's a VBA code snippet that searches WIKIPEDIA for the string in the active cell:

Sub wiki()

Dim searchstr As String
Dim searchadd As String

searchstr = ActiveCell.Value
searchadd = "http://en.wikipedia.org/w/index.php?title=Special%3ASearch&profile=default&search=" & searchstr & "&fulltext=Search"
  
  If Len(searchstr) = 0 Then
    MsgBox "The active cell is empty.. Nothing to search for..", vbOKOnly, "WIKIPEDIA"

  Else:
    ActiveWorkbook.FollowHyperlink searchadd, NewWindow:=True
    Application.StatusBar = "WIKI search for: " & searchstr
  End If
  
End Sub

6 Nov 2012

Calculate Single Contour-Line from DEM with QGIS / GDAL

In QGIS:

- from menu: Raster / Extraction / Contour

- define output name path/to/output.shp

- alter GDAL call for single contour line at 900 m asl:
gdal_contour -fl 900 "path/to/dem_raster.asc" "path/to/output.shp"


- for removing small polygons or lines, add an area or length field (attribute table / field calculator, or Vector / Geometry / Add geometry)

- query by length or area to deselect unwanted iso-lines


Finally, you can export the contours as KML and check it in Google Earth:


I downloaded my DEM-rasters (Nord-Tirol, Österreich) HERE

29 Sep 2012

Merging Dataframes by Partly Matching String

The latest posting by Tony Hirst sparked my attention because I was thinking about a very similar issue recently.

I was also fiddling around with agrep and adist until I realised that for this particular issue matching substrings is not as important as matching multiple words. With this different approach I quite easily matched all but 3 countries.

See what I did:

## look up matches of one dataframe in another dataframe.
## the strings to be matched are comprised of 1 or more words 
## and separated by white space.
## method: match strings that have the highest fraction of words that match up

d1 <- read.csv("http://s.telegraph.co.uk/graphics/conrad/PercentageUsingTheNet.csv", 
               header = T, sep = ",", encoding = "UTF-8")
d2 <- read.csv("http://www.iso.org/iso/country_names_and_code_elements_txt",
               header = T, sep = ";", encoding = "UTF-8")

## strings to be compared d1$ECONOMY and d2$Country.Name
mystr.1 <- as.character(d1$ECONOMY)
mystr.2 <- as.character(d2$Country.Name)
mystr.3 <- as.character(d2$ISO.3166.1.alpha.2.code)

## remove punctuation and multiple spaces
mystr.1 <- tolower(gsub("[^[:alnum:][:space:]]", "", mystr.1))
mystr.1 <- gsub("\\s+", " ", mystr.1)
mystr.2 <- tolower(gsub("[^[:alnum:][:space:]]", "", mystr.2))
mystr.2 <- gsub("\\s+", " ", mystr.2)

## function that finds matching words in strings (words separated by single space!)
n.wordshared <- function(x, y) {
    sum(!is.na(match(unlist(strsplit(x, " ")),
                     unlist(strsplit(y, " ")))
         )
        )
    }
## example
n.wordshared(x = "hello world", y = "good bye world")
## [1] 1

## function that calculates fraction of shared words
fr.wordshared <- function(x, y) {
                     n.wordshared(x, y) / (length(unique(unlist(strsplit(x, " "))))
                                           + length(unique(unlist(strsplit(y, " ")))))
                          }
## example
fr.wordshared(x = "hello world", y = "good bye world")
## [1] 0.2

mydf <- data.frame(str1 = mystr.1, mymatch = "", match.iso = "",
                   stringsAsFactors = F)

## now look up every element of string 1 in string 2
## and if there are matching words assign match to dataframe
for (i in 1:nrow(mydf)) {
   xx <- sapply(mystr.2, fr.wordshared, y = mystr.1[i])
   if (sum(xx) == 0) {
     mydf$mymatch[i] <- NA
     mydf$match.iso[i] <- NA
     } else {
     mydf$mymatch[i] <- paste(names(which(xx == max(xx))), collapse = "; ")
     mydf$match.iso[i] <- paste(mystr.3[as.integer(which(xx == max(xx)))], collapse = "; ")
   }
}

## see result
print(mydf)

## these are the multiple matches
(aa <- mydf[grep(";", mydf$mymatch), ])
##
##               str1                            mymatch match.iso
## 28 slovak republic czech republic; dominican republic    CZ; DO


## these were not matched
(bb <- mydf[is.na(mydf$mymatch), ])
##      str1 mymatch match.iso
##
## 61  russia     NA        NA
## 108  syria     NA        NA

Now, expanding on this concept by introducing partial matching would most probably result in a 100% match...

28 Sep 2012

Reading and Text Mining a PDF-File in R

I just added an R-script to my Github repo that reads a PDF-file into R and does some text mining with it..
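In a nutshell, the approach is to convert the PDF to plain text first and then do the text mining on that. Here is a rough, minimal sketch (not the exact script from the repo) - it assumes the command line utility pdftotext (xpdf/poppler) is installed and on the PATH, and "my_paper.pdf" is a placeholder for your own file:

# convert the PDF to plain text (pdftotext must be installed):
system("pdftotext my_paper.pdf my_paper.txt")

# read the text and compute simple word frequencies with base R:
txt <- tolower(readLines("my_paper.txt", warn = FALSE))
words <- unlist(strsplit(gsub("[^[:alnum:][:space:]]", " ", txt), "\\s+"))
words <- words[nchar(words) > 3]
head(sort(table(words), decreasing = TRUE), 10)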


17 Sep 2012

Online Questionnaire & Report Generation with Google Drive & R

Here's how I did it in 3 easy steps:

(1) Set up a form in Google Docs/Drive.

(2) Choose "Actions" and "Embed in Website" to get the URL for the iframe and put it in a post, like below. Then, go to the spreadsheet view of the form on Google Docs/Drive and set the share option to "everyone with the link" or "public" and copy the document key from this URL: https://docs.google.com/spreadsheet/ccc?key=0AmwAunwURQNsdFplUTBZUTRLREtLUDhabGxBMHBRWmc

(3) Make a report with knitr in R using the responses that were put into a spreadsheet on Google Docs/Drive. Do (knit) something like THIS to produce a markdown-file (I pushed the md-file to Github for publishing - see it rendered to HTML HERE)..
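For step (3), here is a minimal sketch of what the R side could look like. The CSV-export URL reflects the old Google Docs spreadsheet publishing scheme and is an assumption (it may have changed since), and "my_report.Rmd" is a hypothetical report template that uses the 'responses' dataframe:

library(RCurl)
library(knitr)

# document key taken from the spreadsheet URL (see step 2):
key <- "0AmwAunwURQNsdFplUTBZUTRLREtLUDhabGxBMHBRWmc"
csv <- getURL(paste("https://docs.google.com/spreadsheet/pub?key=", key,
                    "&output=csv", sep = ""),
              ssl.verifypeer = FALSE, followlocation = TRUE)
responses <- read.csv(textConnection(csv))

# knit the report template that summarises the responses:
knit("my_report.Rmd")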






      6 Sep 2012

      Get Long-Term Climate Data from KNMI Climate Explorer

      You can query global climate data from the KNMI Climate Explorer (KNMI is the Royal Netherlands Meteorological Institute) with R.


      Here's a little example of how I retrieved data for my hometown Innsbruck, Austria and plotted the annual total precipitation. You can choose station data by pointing at a map, by setting coordinates, etc.
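      As a rough sketch of the idea: the URL below is a placeholder for the raw-data link you get from the Climate Explorer after selecting a station, and the assumed file layout (one row per year with 12 monthly columns, missing values coded as -999.9) must be checked against your actual download:

      # read the monthly precipitation series (layout and NA-code are assumptions!):
      url <- "http://climexp.knmi.nl/data/your_station_series.dat"
      prec <- read.table(url, na.strings = "-999.9",
                         col.names = c("year", month.abb))
      
      # annual totals and a simple plot:
      annual <- data.frame(year = prec$year,
                           total = rowSums(prec[, month.abb]))
      plot(total ~ year, data = annual, type = "h",
           xlab = "Year", ylab = "Annual total precipitation (mm)")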

      25 Aug 2012

      Toy Example with GScholarScraper_3.1

      A commentator on my blog brought up this nice idea of how to use the GScholarScraper function for bibliometrics..
      I altered the code a little bit so that you can set a year since which results should be returned, and added a field to the output collecting the year of publication. With this you can simply do something like this:

      input <- "intitle:metapopulation"
      df <- GScholar_Scraper(input, since = 1980, citation = 1)
      nrow(df)
      hist(df$YEAR, xlab = "Year", 
           main = "Frequency of Publications with\n\"METAPOPULATION\" in Title")

      22 Aug 2012

      Web-Scraper for Google Scholar Updated!

      I have updated the Google Scholar web-scraper function GScholarScraper_2 to GScholarScraper_3 (and GScholarScraper_3.1) as it was outdated due to changes in the Google Scholar html-code. The new script is leaner and faster. It returns a dataframe or, optionally, a CSV-file with the titles, authors, publications & links. Feel free to report bugs, etc.



      Update 11-07-2013: bug fixes due to Google Scholar code changes - https://github.com/gimoya/theBioBucket-Archives/blob/master/R/Functions/GScholarScraper_3.2.R. Note that lately Google blocks your IP at about the 1000th (cumulative) search result - so there's not much fun to be had if you want to do some extensive bibliometrics..

      11 Aug 2012

      Tcl/Tk GUI Example with Variable Input by User

      I recently used R with GUI-elements for the first time and browsed through the available online resources, but I didn't quite find what I was searching for: the user should be able to enter some variables and call a function with a button. In the end I did it with a little help from SO. Here is the working example that I eventually plugged together:
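      For illustration, here is a minimal sketch of the pattern (not the original code from the post): two entry fields whose values are read with tclvalue() when the button is pressed.

      library(tcltk)
      
      tt <- tktoplevel()
      tkwm.title(tt, "Toy example")
      
      x_var <- tclVar("1")
      y_var <- tclVar("2")
      
      # function called by the button - reads the user input and shows the result:
      calc <- function() {
          x <- as.numeric(tclvalue(x_var))
          y <- as.numeric(tclvalue(y_var))
          tkmessageBox(message = paste("x + y =", x + y))
      }
      
      tkgrid(tklabel(tt, text = "x:"), tkentry(tt, textvariable = x_var))
      tkgrid(tklabel(tt, text = "y:"), tkentry(tt, textvariable = y_var))
      tkgrid(tkbutton(tt, text = "Calculate", command = calc), columnspan = 2)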

       

      29 Jun 2012

      Use IUCN API with R & XPath

      Thanks to a posting on R-sig-eco mailing list I learned of the IUCN-API. Here's a simple example for what can be done with it (output as pdf is HERE):
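      As a rough sketch of the idea (the endpoint and the XPath query below are assumptions based on the IUCN web service as it was at the time of writing and may well have changed since):

      library(XML)
      
      # query the IUCN Red List web service for a species and parse the html:
      spec <- "panthera-uncia"
      doc <- htmlParse(paste("http://api.iucnredlist.org/index/species/", spec,
                             ".html", sep = ""))
      
      # pull the Red List category with XPath (the node id is an assumption):
      xpathSApply(doc, "//div[@id='red_list_category_code']", xmlValue)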


      25 Jun 2012

      Avoid Overplotting of Text in Ordination Diagram

      Referring to a recent posting on r-sig-eco mailing list I'll add this example to theBioBucket:

      library(vegan)
      data(dune)
      sol <- metaMDS(dune)
       
      # use ordipointlabel -
      # here is an example where I added cex according to species frequencies:
      plot(sol, type = "n")
      cex.lab = colSums(dune > 0) / nrow(dune) + 1
      col.lab = rgb(0.2, 0.5, 0.4, alpha = 0.6)
      ordipointlabel(sol, displ = "sp", col = col.lab, cex = cex.lab)
       
       
      # you could also use pointLabel() from maptools package:
      library(maptools)
      x = as.vector(sol$species[,1])
      y = as.vector(sol$species[,2])
      w = row.names(sol$species)
       
      plot(sol, type = "n")
      points(sol, displ = "species", cex = 1, pch = 4, col = 3)
      pointLabel(x, y, w, col = col.lab, cex = cex.lab)

      19 Jun 2012

      A Wrapper Function for Instant Package Installation / Loading

      Since library() and require() only accept a single package name as input, repeated calls are necessary when loading several packages - this can be quite annoying.. So, HERE is a little wrapper function for convenient package installation / loading. It installs packages if they are missing and loads them if they are not loaded yet. I have put it into my Rprofile.site so I can conveniently install / load packages with only one call, and am quite content with it..
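      For illustration, a minimal sketch of such a wrapper (the version linked above may differ in details; the function name here is made up):

      instant_pkgs <- function(pkgs) {
          # install what is missing:
          missing_pkgs <- pkgs[!pkgs %in% installed.packages()[, "Package"]]
          if (length(missing_pkgs) > 0) install.packages(missing_pkgs)
          # load everything that is not attached yet:
          invisible(lapply(pkgs, require, character.only = TRUE))
      }
      
      # usage:
      instant_pkgs(c("vegan", "XML", "stringr"))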

      11 Jun 2012

      FloraWeb Plant Species Report via R

      For German-speaking users I added the function floraweb_scrape.R that allows you to conveniently collect species data and print them to a PDF-file (see this example output). The function accesses data provided by the website FloraWeb.de (BfN - Bundesamt für Naturschutz).
      You can use it as an interactive version (RTclTk) which I have put into a Github repository HERE.

      Preview:



      14 May 2012

      Source R-Script from Dropbox

      A quick tip on how to source R-scripts from a Dropbox-account:

      (1) Upload the script..

      (2) Get link with the "get link" option. The link should look like "https://www.dropbox.com/s/XXXXXX/yourscript.R"..

      (3) Grab the part "XXXXXX/yourscript.R" and append it to "http://dl.dropbox.com/s/"..





      (4) the final URL that can be sourced:
      source("http://dl.dropbox.com/s/XXXXXX/yourscript.R")
      ..an example with this script stored at my Dropbox account:
      source("http://dl.dropbox.com/s/c18lcwnnrodsevt/test_dropbox_source.R")
      

      EDIT, March 2013:
      This method is not working anymore. You can use the following approach instead:

      library(RCurl)
      setwd(tempdir())
      
      destfile = "test.txt"
      x = getBinaryURL("https://dl.dropbox.com/u/68286640/test_dropbox_source.R", followlocation = TRUE, ssl.verifypeer = FALSE)
      writeBin(x, destfile, useBytes = TRUE)
      source(paste(tempdir(), "/test.txt", sep = ""))
      
      # remove files from tempdir:
      unlink(dir())
      

      2 May 2012

      knitr-Example: Use World Bank Data to Generate Report for Threatened Bird Species

      I'll use the script below, which retrieves data on threatened bird species from the World Bank via its API and does some processing, plotting and analysis. The WDI package allows you to access the data easily.

      # world bank indicators for species - 
      # I'll check bird species:
      library(WDI)
      code <- as.character(WDIsearch("bird")[1,1])
      bird_data <- WDI(country="all", indicator=code, start=2010, end=2012)
      
      # remove NAs and select values in the range 50 - 1000:
      bird_data_sub <- bird_data[!is.na(bird_data$EN.BIR.THRD.NO)&
                                 bird_data$EN.BIR.THRD.NO < 1000&
                                 bird_data$EN.BIR.THRD.NO > 50, ]
      
      # change in numbers across years 2010 and 2011:
      change.no <- aggregate(EN.BIR.THRD.NO ~ country, diff,
                             data = bird_data_sub)
      # plot:
      par(mar = c(3, 3, 5, 1))
      plot(x = change.no[,2], y = 1:nrow(change.no),
           xlim = c(-12, 12), xlab = "", ylab = "",
           yaxt = "n")
      abline(v = 0, lty = 2, col = "grey80")
      title(main = "Change in Threatened Bird Species in\nCountries with Rich Avifauna (>50)")
      text(y = 1:nrow(change.no), 
           x = -2, adj = 1,
           labels = change.no$country)
      segments(x0 = 0, y0 = 1:nrow(change.no),
               x1 = change.no[, 2], y1 =  1:nrow(change.no))
      
      # test hypothesis that probability of species decrease is
      # equal to probability of increase:
      binom.test(sum(change.no[, 2] < 0), sum(change.no[, 2] != 0))
      
      For generating the report you can source the script from dropbox.com and stitch it in this fashion:
      stitch("http://dl.dropbox.com/s/ga0qbk1o17n17jj/Change_threatened_species.R")
      ..this is one line of code - can you dig it?..
      BTW, for simplicity I use knitr::stitch with its default template...

      You should get something like THIS PDF.

      EDIT, MARCH 2013
      OUTDATED! you can use this approach instead:

      library(knitr); library(RCurl); library(WDI)
      
      setwd(tempdir())
      destfile = "script.txt"
      x = getBinaryURL("https://dl.dropbox.com/s/ga0qbk1o17n17jj/Change_threatened_species.R", followlocation = TRUE, ssl.verifypeer = FALSE)
      writeBin(x, destfile, useBytes = TRUE)
      source(paste(tempdir(), "/script.txt", sep = ""))
      
      stitch(paste(tempdir(), "/script.txt", sep = ""))
      

      Playing with knitr: Create Report with Dynamic List

      Here is a little toy example using knitr, LaTeX/MiKTeX and Google Docs.
      Say you had a list on Google Docs (say a list of attendants) and you want to print a report with it..
      Then see this example using this Rnw-file and the output...

      make the tex-file with:
      library(knitr)
      knit("knitr_list_of_attendants.Rnw")
      
      ..then compile the tex-file with MiKTeX.

      or with this shortcut:
      knit2pdf("knitr_list_of_attendants.Rnw")
      browseURL("knitr_list_of_attendants.pdf") 
      

      1 May 2012

      Quick Tip: Replace Values in Dataframe on Condition with Random Numbers

      This one took me some time - though, in fact it is plain simple:
      > options(scipen=999)
      > (my_df <- data.frame(matrix(sample(c(0,1), 100, replace = T), 10, 10)))
         X1 X2 X3 X4 X5 X6 X7 X8 X9 X10
      1   0  0  1  0  1  1  1  1  0   1
      2   0  0  1  1  1  0  0  0  0   0
      3   0  1  1  0  0  1  1  0  0   1
      4   1  1  0  0  0  0  0  0  0   0
      5   0  0  1  1  1  0  0  1  1   1
      6   0  0  1  1  0  0  1  1  0   1
      7   0  0  1  1  1  0  1  1  1   1
      8   1  1  1  0  0  0  0  1  1   0
      9   0  0  0  0  1  0  1  0  1   0
      10  1  1  1  1  1  0  1  1  0   1
      > my_df[my_df == 0] <- runif(sum(my_df==0), 0, 0.001)
      > my_df
               X1       X2       X3       X4       X5       X6       X7       X8
      1  0.000268 0.000926 1.000000 2.00e-05 1.000000 1.00e+00 1.00e+00 1.00e+00
      2  0.000531 0.000882 1.000000 1.00e+00 1.000000 4.66e-04 3.96e-04 6.70e-04
      3  0.000785 1.000000 1.000000 5.03e-04 0.000164 1.00e+00 1.00e+00 2.98e-04
      4  1.000000 1.000000 0.000336 8.71e-04 0.000770 7.44e-05 6.49e-05 1.01e-04
      5  0.000168 0.000674 1.000000 1.00e+00 1.000000 6.49e-04 2.26e-04 1.00e+00
      6  0.000404 0.000950 1.000000 1.00e+00 0.000735 7.59e-04 1.00e+00 1.00e+00
      7  0.000472 0.000516 1.000000 1.00e+00 1.000000 1.37e-04 1.00e+00 1.00e+00
      8  1.000000 1.000000 1.000000 6.30e-06 0.000972 3.97e-04 5.46e-05 1.00e+00
      9  0.000868 0.000577 0.000347 7.21e-05 1.000000 2.25e-04 1.00e+00 7.19e-05
      10 1.000000 1.000000 1.000000 1.00e+00 1.000000 5.80e-05 1.00e+00 1.00e+00
               X9      X10
      1  0.000880 1.00e+00
      2  0.000754 7.99e-04
      3  0.000817 1.00e+00
      4  0.000982 7.85e-04
      5  1.000000 1.00e+00
      6  0.000104 1.00e+00
      7  1.000000 1.00e+00
      8  1.000000 9.43e-06
      9  1.000000 7.79e-04
      10 0.000099 1.00e+00
      
      

      25 Apr 2012

      Reproducible Research: Running odfWeave with 7-zip

      odfWeave is an R-package that is used for making dynamic reports by Sweave processing of Open Document Format (ODF) files. For anyone new to report generation and lacking knowledge of markup languages this might be a good starting point or even a true alternative to Sweave / LaTeX and others.

      Now, anyone who recently tried to install the required zipping program for odfWeave might have noticed that there are currently no info-zip executables available (zip and unzip by info-zip are suggested in the odfWeave manual). There are several other free zipping programs - but if you use one of these, the default syntax for odfWeave changes. Looking into the internals reveals that the OS command specified for running the zipping program has to be adapted. There are some postings on the R-help mailing list concerning this topic, but none of them worked for me. After some trial and error I managed to get around the problem by using 7-zip with an adapted syntax, which I'll share here:
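      In essence you point the zipCmd argument of odfWeaveControl() at the 7-zip executable, roughly like below - the path and flags shown are assumptions that need to be adjusted for your own system, and the file names are placeholders:

      library(odfWeave)
      
      # tell odfWeave to use 7-zip for packing / unpacking the ODF container:
      zip_cmd <- c("C:\\Progra~1\\7-Zip\\7z.exe a -tzip $$file$$ .",
                   "C:\\Progra~1\\7-Zip\\7z.exe x $$file$$ -y")
      ctrl <- odfWeaveControl(zipCmd = zip_cmd)
      
      odfWeave("report_template.odt", "report_out.odt", control = ctrl)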

      20 Apr 2012

      Reproducible Research: Export Regression Table to MS Word

      Here's a quick tip for anyone wishing to export results, say a regression table, from R to MS Word:
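      One possible way (not necessarily the approach from the original post) is to write the table as HTML with the xtable package - MS Word opens HTML files directly, so you can grab the formatted table from there:

      library(xtable)
      
      # fit a toy regression and export its coefficient table:
      fit <- lm(mpg ~ wt + hp, data = mtcars)
      print(xtable(summary(fit)), type = "html", file = "regression_table.html")
      
      # now open regression_table.html in MS Word (or copy/paste from a browser)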

      6 Apr 2012

      R-Bloggers' Web-Presence

      We love them, we hate them: RANKINGS!

      Rankings are an inevitable tool to keep the human rat race going. In this regard I'll pick up my last two posts (HERE & HERE) and have some fun with them by using them to analyse R-Bloggers' web presence. I will use the number of hits in Google Search as an indicator.

      I searched for URLs like this: https://www.google.com/search?q="http://www.twotorials.com" - meaning that only the exact blog-URL is searched.

      5 Apr 2012

      A Little Web Scraping Exercise with XML-Package

      Some months ago I posted an example of how to get the links of the contributing blogs on the R-Blogger site. I used readLines() and did some string processing using regular expressions.

      With package XML this can be drastically shortened -
      see this:
      # get blogger urls with XML:
      library(RCurl)
      library(XML)
      script <- getURL("www.r-bloggers.com")
      doc <- htmlParse(script)
      li <- getNodeSet(doc, "//ul[@class='xoxo blogroll']//a")
      urls <- sapply(li, xmlGetAttr, "href")
      
      With only a few lines of code this gives the same result as in the original post! Here I will also process the urls for retrieving links to each blog's start page:
      # get ids for those with only 2 slashes (no 3rd in the end):
      id <- which(nchar(gsub("[^/]", "", urls )) == 2)
      slash_2 <- urls[id]
      
      # find position of 3rd slash occurrence in strings:
      library(stringr)  # str_locate_all() comes from the stringr package
      slash_stop <- unlist(lapply(str_locate_all(urls, "/"),"[[", 3))
      slash_3 <- substring(urls, first = 1, last = slash_stop - 1)
      
      # final result, replace the ones with 2 slashes,
      # which are lacking in slash_3:
      blogs <- slash_3; blogs[id] <- slash_2
      
      p.s.: Thanks to Vincent Zoonekynd for helping out with the XML syntax.

      28 Mar 2012

      Applying Same Changes to Multiple Dataframes

      How to apply the same changes to several dataframes and
      save them to CSV:

      # a dataframe
      a <- data.frame(x = 1:3, y = 4:6)
      
      # make a list of several dataframes, then apply function (change column names, e.g.):
      my.list <- list(a, a)
      my.list <- lapply(my.list, function(x) {names(x) <- c("a", "b") ; return(x)})
      
      # save dfs to csv with similar lapply-call:
      n <- 1:length(my.list)
      lapply(n, function(ni) {
                     write.table(file = paste(ni, ".csv", sep = ""), 
                     my.list[[ni]], sep = ";", row.names = F)
                     }
             )
      

      Edit:

      I'll extend this to a script that reads several files from a directory, applies changes to the files in the same fashion and finally saves files back to the directory (as HERE)

      # clean up
      rm(list = ls())
      setwd(tempdir())
      unlink(dir(tempdir()))
      
      # create some files in tempdir:
      a <- data.frame(x = 1:3, y = 4:6)
      b <- data.frame(x = 10:13, y = 14:15)
      write.csv(a, "file1.csv", row.names = F)
      write.csv(b, "file2.csv", row.names = F)
      
      # now read all files to list:
      mycsv = dir(pattern=".csv")
      
      n <- length(mycsv)
      mylist <- vector("list", n)
      
      for(i in 1:n) mylist[[i]] <- read.csv(mycsv[i])
      
      # now change something in all dfs in list:
      mylist <- lapply(mylist, function(x) {names(x) <- c("a", "b") ; return(x)})
      
      # then save back dfs:
      for(i in 1:n) {
         write.csv(file = sub(".csv", "_new.csv", mycsv[i]),
                   mylist[[i]], row.names = F)
      }
      

      26 Mar 2012

      How to Extract Citation from a Body of Text

      Say you have a text and you want to retrieve the cited names and years of publication. You wouldn't want to do this by hand, would you?

      Try the following approach:
      (the text sample comes from THIS freely available publication)

      library(stringr)
      
      (txt <- readLines("http://dl.dropbox.com/u/68286640/Test_Doc.txt"))
      [1] "1  Introduction"                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             
      [2] ""                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
      [3] "Climate projections of the Intergovernmental Panel on Climate Change (IPCC) forecast a general increase of seasonal temperatures in the present century across the temperate zone, aggravated by decreasing amounts of summer rainfall in certain regions at lower latitudes (Christensen et al. 2007). These changes imply serious ecological consequences, especially in biome transition zones (Fischlin et al. 2007). Due to their economic importance, as well as their major contribution to supporting, regulating and cultural ecosystem services, predicted changes and shifts in temperate forest ecosystems receive wide public attention. It’s no surprise that dominant forest tree species are frequently modelled in bioclimatic impact studies (e.g., Sykes et al. 1996; Iverson, Prasad 2001; Rehfeldt et al. 2003; Ohlemüller et al. 2006). However, most studies focus on continental-scale effects of climate change, using low resolution climatic and species distribution data. More detailed regional studies focussing on specific endangered regions are also needed (Benito Garzón et al. 2008). Such regional studies have already been prepared for several European regions, including the Swiss Alps (Bolliger et al. 2000), the British Isles (Berry et al. 2002) and the Iberian Peninsula (Benito Garzón et al. 2008)."                                                                                                                    
      [4] ""                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
      [5] "In this study, we aim to (1) identify the limiting macroclimatic factors and to (2) predict the future boundaries of beech (Fagus sylvatica L.) and sessile oak (Quercus petraea (Mattuschka) Liebl.) forests in a region highly vulnerable to climatic extremes. Both tree species form extensive zonal forests throughout Central Europe and reach their low altitude/low latitude, xeric (Mátyás et al. 2009) distributional limits within the forest-steppe biome transition zone of Hungary. The rise of temperature, and especially summer rainfall deficits expected for the twenty-first century, may strongly affect both species. Nevertheless, regarding the potential future distribution of these important forest tree species along their xeric boundaries in Central Europe, there has been no detailed regional analysis before. Experimental studies and field survey data suggest a strong decline in beech regeneration (Czajkowski et al. 2005; Penuelas et al. 2007; Lenoir et al. 2009) and increased mortality rates following prolonged droughts (Berki et al. 2009). Mass mortality and range retraction are potential consequences, which have been already sporadically observed in field survey studies (Jump et al. 2009; Allen et al. 2010; Mátyás et al. 2009). With the study, we intend to assist in assessing overall risks, locating potentially affected regions and supporting the formulation of appropriate measures and strategies."
      [6] ""                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
      [7] "Beech and sessile oak forests of Hungary are to a large extent “trailing edge” populations (Hampe and Petit 2005), which should be preferably modelled using specific modelling strategies (Thuiller et al. 2008). Most modelling studies do not differentiate between leading and trailing edges and rely on assumptions and techniques which are intrinsically more appropriate for “leading edge” situations. Being aware of these challenges, we compiled a statistical methodology customized to yield inference on influential variables and providing robust and reliable predictions for climate-dependent populations near their xeric limits. We laid special emphasis on three features in the course of the modelling process: (1) screening of the occurrence data in order to limit modelling to plausible zonal (i.e. macroclimatically determined) occurrences, (2) avoiding pitfalls of statistical pseudoreplication caused by spatial autocorrelation (a problem to which regional distribution modelling studies are particularly prone; Dormann 2007) and (3) simultaneous use of several initial and boundary conditions in an ensemble modelling framework (Araújo et al. 2005; Araújo and New 2007; Beaumont et al. 2007). "                                                                                                                                                                                                                         
      
      # retrieve text inbetween parantheses:
      extr1 <- unlist(str_extract_all(txt, pattern = "\\(.*?\\)"))
      
      # keep only those elements which have four digit strings (years):
      extr2 <- extr1[grep("[0-9]{4}", extr1)]
      
      # extract partial strings starting with uppercase letter (name)
      # and end in a four digit string (year):
      (str_extract(extr2, "[A-Z].*[0-9]"))
       [1] "Christensen et al. 2007"                                                              
       [2] "Fischlin et al. 2007"                                                                 
       [3] "Sykes et al. 1996; Iverson, Prasad 2001; Rehfeldt et al. 2003; Ohlemüller et al. 2006"
       [4] "Benito Garzón et al. 2008"                                                            
       [5] "Bolliger et al. 2000"                                                                 
       [6] "Berry et al. 2002"                                                                    
       [7] "Benito Garzón et al. 2008"                                                            
       [8] "Mátyás et al. 2009"                                                                   
       [9] "Czajkowski et al. 2005; Penuelas et al. 2007; Lenoir et al. 2009"                     
      [10] "Berki et al. 2009"                                                                    
      [11] "Jump et al. 2009; Allen et al. 2010; Mátyás et al. 2009"                              
      [12] "Hampe and Petit 2005"                                                                 
      [13] "Thuiller et al. 2008"                                                                 
      [14] "Dormann 2007"                                                                         
      [15] "Araújo et al. 2005; Araújo and New 2007; Beaumont et al. 2007"                        
      
      # as proposed by a commentator -
      # do this if you want each citation separately:
      (unlist(str_extract_all(extr2, "[A-Z].*?[0-9]{4}")))
       [1] "Christensen et al. 2007"   "Fischlin et al. 2007"     
       [3] "Sykes et al. 1996"         "Iverson, Prasad 2001"     
       [5] "Rehfeldt et al. 2003"      "Ohlemüller et al. 2006"   
       [7] "Benito Garzón et al. 2008" "Bolliger et al. 2000"     
       [9] "Berry et al. 2002"         "Benito Garzón et al. 2008"
      [11] "Mátyás et al. 2009"        "Czajkowski et al. 2005"   
      [13] "Penuelas et al. 2007"      "Lenoir et al. 2009"       
      [15] "Berki et al. 2009"         "Jump et al. 2009"         
      [17] "Allen et al. 2010"         "Mátyás et al. 2009"       
      [19] "Hampe and Petit 2005"      "Thuiller et al. 2008"     
      [21] "Dormann 2007"              "Araújo et al. 2005"       
      [23] "Araújo and New 2007"       "Beaumont et al. 2007"
      
      

      25 Mar 2012

      Classification Trees and Spatial Autocorrelation

      I'm currently trying to model species presence / absence data (N = 523) that were collected over a geographic area and are possibly spatially autocorrelated. Samples come from preferential sites (elevation > 1200 m a.s.l., obligatory presence of permanent waterbodies, etc.). My main goal is to make inferences about the environmental factors determining the occurrence rate of several (amphibian) species and to rule out spatial autocorrelation.
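      For context, a rough sketch of how one might look at this (with simulated stand-in data - this is not the analysis from the post): fit a classification tree and check the residuals for spatial autocorrelation with Moran's I.

      library(rpart)
      library(spdep)
      
      # simulated stand-in data (coordinates plus two predictors):
      set.seed(1)
      n <- 200
      dat <- data.frame(x = runif(n), y = runif(n),
                        temp = rnorm(n), wetland = rbinom(n, 1, 0.5))
      dat$presence <- factor(rbinom(n, 1, plogis(dat$temp)))
      
      # classification tree for presence / absence:
      fit <- rpart(presence ~ temp + wetland, data = dat, method = "class")
      
      # residuals: observed presence minus predicted probability of presence
      res <- as.numeric(dat$presence == "1") - predict(fit, type = "prob")[, "1"]
      
      # neighbourhood from the sample coordinates and Moran's I test on residuals:
      nb <- dnearneigh(as.matrix(dat[, c("x", "y")]), 0, 0.2)
      moran.test(res, nb2listw(nb, zero.policy = TRUE), zero.policy = TRUE)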



      24 Mar 2012

      Custom Summary Stats as Dataframe or List

      On Stack Overflow I found this useful example of how to apply custom statistics to a dataframe and return the results as a list or dataframe:
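      The gist of it is something like the following generic sketch (with a built-in dataset, not the original SO code):

      # custom stats function returning several values:
      mystats <- function(x) c(n = length(x), mean = mean(x), sd = sd(x))
      
      # apply it per group and keep the result as a list:
      res_list <- lapply(split(iris$Sepal.Length, iris$Species), mystats)
      
      # ..or bind it into a dataframe (one row per group):
      res_df <- data.frame(do.call(rbind, res_list))
      res_df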

      14 Mar 2012

      Creating a Stratified Random Sample of a Dataframe

      Expanding on a question on Stack Overflow I'll show how to make a stratified random sample of a certain size:
      d <- expand.grid(id = 1:35000, stratum = letters[1:10])
      
      p = 0.1
      
      dsample <- data.frame()
      
      system.time(
      for(i in levels(d$stratum)) {
        dsub <- subset(d, d$stratum == i)
        B = ceiling(nrow(dsub) * p)
        dsub <- dsub[sample(1:nrow(dsub), B), ]
        dsample <- rbind(dsample, dsub) 
        }
      )
      
      # size per stratum in resulting df is 10 % of original size:
      table(dsample$stratum)
      

      13 Mar 2012

      R-Function to Read Data from Google Docs Spreadsheets

      I used this idea posted on Stack Overflow to plug together a function for reading data from Google Docs spreadsheets into R.
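      The function boils down to something like the following sketch - the CSV-export URL reflects the old Google Docs publishing scheme and is an assumption; the function name is made up:

      library(RCurl)
      
      read_gdoc_csv <- function(key) {
          url <- paste("https://docs.google.com/spreadsheet/pub?key=", key,
                       "&output=csv", sep = "")
          csv <- getURL(url, ssl.verifypeer = FALSE, followlocation = TRUE)
          read.csv(textConnection(csv))
      }
      
      # usage, with the document key from the spreadsheet's share-URL:
      # mydat <- read_gdoc_csv("0Am...your_document_key...")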


      28 Feb 2012

      Apprentice Piece with Lattice Graphs

      Lattice graphs can be quite tedious to learn. I don't use them too often, and when I need them I usually have to dig deep into the archives for details on the parameters.
      The example presented here may serve as a welcome template for the usage of panel functions, panel ordering, the drawing of lattice keys, etc.
      You can download the example data HERE.

      (Also, check this resource with examples by the lattice-author). 
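      As a small generic appetizer for the panel-function part (with a built-in dataset instead of the example data):

      library(lattice)
      
      xyplot(Sepal.Length ~ Petal.Length | Species, data = iris,
             panel = function(x, y, ...) {
                 panel.xyplot(x, y, ...)
                 panel.lmline(x, y, lty = 2)   # add a per-panel regression line
             })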

      27 Feb 2012

      Testing the Effect of a Factor within each Level of another Factor with R-Package {contrast}

      This is a small example of how custom contrasts can easily be applied with the contrast-package. The package manual has several useful explanations, and the example below was actually grabbed from there.
      This example could also be applied to a GLM, but I chose to use an LM because the coefficients are more easily interpreted.
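      In the spirit of the manual example, here is a self-contained sketch with simulated data (all variable names are made up) that tests the treatment effect separately within each level of sex:

      library(contrast)
      
      # simulate a small two-factor dataset:
      set.seed(1)
      d <- expand.grid(sex = c("female", "male"),
                       treatment = c("ctrl", "trt"),
                       rep = 1:10)
      d$response <- rnorm(nrow(d),
                          mean = ifelse(d$sex == "female" & d$treatment == "trt", 2, 0))
      
      fit <- lm(response ~ sex * treatment, data = d)
      
      # treatment effect within females:
      contrast(fit, list(sex = "female", treatment = "trt"),
                    list(sex = "female", treatment = "ctrl"))
      
      # treatment effect within males:
      contrast(fit, list(sex = "male", treatment = "trt"),
                    list(sex = "male", treatment = "ctrl"))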



      1 Feb 2012

      Transformation of Several Variables in a Dataframe

      This is how I transform several columns of a dataframe, e.g. count data into binary coded data (the same would apply for any other conversion..).

      count1 <- count2 <- count3 <- count4 <- sample(c(rep(0, 10), 1:10))
      some <- LETTERS[1:20]  
      thing <- letters[1:20]  
      mydf <- data.frame(count1, count2, count3, count4, some, thing)
      
      ids <- grep("count", names(mydf))
      myfun <- function(x) {ifelse(x > 0, 1, 0)}
      mydf[, ids] <- lapply(mydf[, ids], myfun)
      

      p.s.: Let me know if you know of a slicker way.

      26 Jan 2012

      A Short Example with R-Package osmar..

      Following up on my last post, in which I praised the capabilities of the osmar-package, I'll give a short example...
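      For a flavour of what the package can do, here is a small sketch (not the exact example from the GitHub script): pull everything within a ~1 km bounding box around the centre of Innsbruck and plot the ways.

      library(osmar)
      
      src <- osmsource_api()
      bb <- center_bbox(11.392, 47.268, 1000, 1000)   # lon, lat, width, height (m)
      ua <- get_osm(bb, source = src)
      
      summary(ua)
      plot_ways(ua)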

      ps: You can also find this example at GitHub HERE.







      osmar - Don't Miss this New R-Geo-Package!

      The osmar-package enables you to retrieve all geographic elements of OpenStreetMap via its API.
      E.g., you can retrieve a street, river, state-boundary or whatever and use it as a spatial object in R.

      It's overwhelming thinking of the endless playground that is opened for R-users by this package!

      And, owing to altruistic R-package authors (like the ones of osmar, Thomas Schlesinger and Manuel J. A. Eugster) the oligarch's (ESRI) power evermore crumbles away..

      1 Jan 2012

      R-Function to Source all Functions from a GitHub Repository

      Here's a function that sources all scripts from an arbitrary GitHub repository. At the moment the function downloads the whole repo and sources the functions kept in a folder named "Functions" - this may be adapted for your own purposes.
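      The basic mechanics look roughly like the sketch below (not the original function - the archive-URL format and the use of "master" as branch name are assumptions; user and repo names are placeholders):

      source_github_functions <- function(user, repo, folder = "Functions") {
          # download the zipped repository to a temporary file
          # (on older R versions https-downloads may need method = "curl"):
          zipfile <- tempfile(fileext = ".zip")
          download.file(paste("https://github.com/", user, "/", repo,
                              "/archive/master.zip", sep = ""),
                        destfile = zipfile, mode = "wb")
          
          # unzip and source all R-scripts found in the given folder:
          files <- unzip(zipfile, exdir = tempdir())
          scripts <- files[grepl(folder, files) & grepl("\\.[rR]$", files)]
          invisible(lapply(scripts, source))
      }
      
      # usage:
      # source_github_functions("some_user", "some_repo")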