(This article was first published on ** What You're Doing Is Rather Desperate » R**, and kindly contributed to R-bloggers)

Last week, I was listening to episode 337 of the podcast This Week in Virology. At one point the host and guests discussed the idea of searching for *Steamer*-like sequences in the data from ocean metagenomics projects, such as the Global Ocean Sampling expedition. Sounds like fun. So I made an initial attempt, using R/ggplot2 to visualise the results.

To make a long story short: the initial BLAST results are not super-convincing, the visualisation could use some work, and the code/data are all public on GitHub, summarised in this report. It made for a fun, relatively quick side project.

Filed under: bioinformatics, R, statistics Tagged: cancer, clam, GOS, metagenomics, ocean, retroelement, steamer, twiv, virus

To **leave a comment** for the author, please follow the link and comment on his blog: ** What You're Doing Is Rather Desperate » R**.

R-bloggers.com offers

(This article was first published on ** Yet Another Blog in Statistical Computing » S+/R**, and kindly contributed to R-bloggers)

# READ QUARTERLY DATA FROM CSV
library(zoo)
ts1 <- read.zoo('Documents/data/macros.csv', header = T, sep = ",", FUN = as.yearqtr)

# CONVERT THE DATA TO STATIONARY TIME SERIES
ts1$hpi_rate <- log(ts1$hpi / lag(ts1$hpi))
ts1$unemp_rate <- log(ts1$unemp / lag(ts1$unemp))
ts2 <- ts1[1:nrow(ts1) - 1, c(3, 4)]

# METHOD 1: LMTEST PACKAGE
library(lmtest)
grangertest(unemp_rate ~ hpi_rate, order = 1, data = ts2)
# Granger causality test
#
# Model 1: unemp_rate ~ Lags(unemp_rate, 1:1) + Lags(hpi_rate, 1:1)
# Model 2: unemp_rate ~ Lags(unemp_rate, 1:1)
#   Res.Df Df      F  Pr(>F)
# 1     55
# 2     56 -1 4.5419 0.03756 *
# ---
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

# METHOD 2: VARS PACKAGE
library(vars)
var <- VAR(ts2, p = 1, type = "const")
causality(var, cause = "hpi_rate")$Granger
# Granger causality H0: hpi_rate do not Granger-cause unemp_rate
#
# data: VAR object var
# F-Test = 4.5419, df1 = 1, df2 = 110, p-value = 0.0353

# AUTOMATICALLY SEARCH FOR THE MOST SIGNIFICANT RESULT
for (i in 1:4) {
  cat("LAG =", i, "\n")
  print(causality(VAR(ts2, p = i, type = "const"), cause = "hpi_rate")$Granger)
}

To **leave a comment** for the author, please follow the link and comment on his blog: ** Yet Another Blog in Statistical Computing » S+/R**.


(This article was first published on ** Revolutions**, and kindly contributed to R-bloggers)

R is an environment for programming with data, so unless you're doing a simulation study you'll need some data to work with. If you don't have data of your own, we've made a list of open data sets you can use with R to accompany the latest release of Revolution R Open.

At the Data Sources on the Web page on MRAN, you can find links to dozens of open data sources both large and small. You'll find some classics of data science and machine learning, like the Enron emails data set and the famous Airlines data. You can find official statistics on economics and government from countries around the world, including links to every country's official data repositories at UNdata. There are links to scientific data, including several sources from the social sciences. And of course you'll find links to various financial data sources (though not all of these are 100% free to use).

Many of the data sets are indicated as ready-to-use in R format; for the others, you can use R's various data import tools to access the data (for which there is a great guide at ComputerWorld).
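As a toy sketch of what such an import step looks like (using a temporary local file rather than one of the listed sources, so the snippet is self-contained):

```r
# Write a small CSV to a temporary file, then read it back with read.csv,
# the most basic of R's data import tools
tmp <- tempfile(fileext = ".csv")
write.csv(data.frame(x = 1:3, y = c("a", "b", "c")), tmp, row.names = FALSE)

dat <- read.csv(tmp, stringsAsFactors = FALSE)
nrow(dat)  # 3 rows, as written above
```

For a remote data set, the same read.csv call accepts a URL in place of the file path.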

Got other suggestions for great open data sources? Let us know in the comments below, or send an email to mran@revolutionanalytics.com.

MRAN: Data Sources on the Web

To **leave a comment** for the author, please follow the link and comment on his blog: ** Revolutions**.


(This article was first published on ** R tutorial for Spatial Statistics**, and kindly contributed to R-bloggers)

In the previous post we looked at ways to perform some introductory point pattern analysis of open data downloaded from Police.uk. As you may remember, we subset the dataset of crimes in the Greater London area, extracting only the drug-related ones, and then looked at ways to analyse those data with the package spatstat. In this post I will briefly discuss ways to create interactive plots of the results of the point pattern analysis, using the Google Maps API and Leaflet from R.

In the previous post we also looped through the borough names to compute the local intensity of crimes in each borough, stored in the object Local.Intensity used below.

This post is intended as a continuation of the previous one, so I will not present again the methods and objects introduced there. To make this code work, you can simply copy and paste it below the code you created before and it should work just fine.

First of all, let's create a new object including only the names of the boroughs from the object GreaterLondonUTM:

GreaterLondon.Google <- GreaterLondonUTM[,"name"]

The new object has only one column with the name of each borough.

Now we can create a loop to iterate through these names and calculate the intensity of the crimes:

Borough <- GreaterLondonUTM[,"name"]

for(i in unique(GreaterLondonUTM$name)){
  sub.name <- Local.Intensity[Local.Intensity[,1]==i,2]
  Borough[Borough$name==i,"Intensity"] <- sub.name
  Borough[Borough$name==i,"Intensity.Area"] <- round(sub.name/(GreaterLondonUTM[GreaterLondonUTM$name==i,]@polygons[[1]]@area/10000),4)
}

As you can see, this loop selects one name at a time, subsets the object Local.Intensity to extract the intensity for that borough, and attaches to Borough both the number of crimes and the number of crimes per hectare of the borough's area.

Now we can again use the package plotGoogleMaps to plot the object Borough on top of Google Maps.

The code for doing that is very simple and it is presented below:

plotGoogleMaps(Borough,zcol="Intensity",filename="Crimes_Boroughs.html",layerName="Number of Crimes", fillOpacity=0.4,strokeWeight=0,mapTypeId="ROADMAP")

I decided to plot the polygons on top of the roadmap and not on top of the satellite image, which is the default for the function. Thus I added the option mapTypeId="ROADMAP".

The result is the map shown below and at this link: Crimes on GoogleMaps

In the post Interactive Maps for the Web in R I received a comment from Gerardo Celis, whom I thank, telling me that the package leafletR is now also available for R.

I started from the sample code presented here: https://github.com/chgrl/leafletR and adapted it to my data with very few changes.

The first step is to transform the object Borough into a GeoJSON file, using the function toGeoJSON:

Borough.Leaflet <- toGeoJSON(Borough)

Extremely simple!!

Now we need to set the style to use for plotting the polygons. Several options need to be set in the style function, and this is important!!

After we set the style we can simply call the function leaflet:

leaflet(Borough.Leaflet,popup=c("name","Intensity","Intensity.Area"),style=map.style)

In this function we need to input the GeoJSON object, the attributes to show in the popups (here the borough name, the intensity and the intensity per area) and the style to use.

The result is the map shown below and available at this link: Leaflet Map

I must say this function is very neat. However, I noticed that I cannot see the map if I open the HTML files locally on my PC; I had to upload the file to my website every time I changed it to actually see the changes and how they affected the plot. This may be something related to my PC, however.

As you may remember from the previous post, one of the steps included in a point pattern analysis is the computation of the spatial density of the events. One of the techniques to do that is the kernel density, which basically calculates the density continuously across the study area, thus creating a raster.

We already looked at the kernel density in the previous post, so I will not go into details here; the code for computing the density and transforming it into a raster is the following:
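A minimal sketch of this step (the point pattern, the bandwidth sigma and the CRS string are illustrative stand-ins, not the exact values of the original post, where the ppp object was built from the drug-related crime coordinates):

```r
library(spatstat)  # density.ppp for kernel density estimation
library(raster)    # to convert the result into a raster

# Illustrative stand-in for the crime point pattern of the previous post;
# coordinates are in metres, as with UTM-projected data
set.seed(1)
Drugs.ppp <- runifpoint(200, win = owin(c(0, 10000), c(0, 10000)))

# Kernel density, computed continuously across the window
# (sigma is the bandwidth of the kernel, in map units)
Density <- density.ppp(Drugs.ppp, sigma = 500)

# Transform the spatstat 'im' object into a raster and assign a UTM projection
Density.raster <- raster(Density)
projection(Density.raster) <- "+proj=utm +zone=30 +datum=WGS84"
```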

The first lines are basically the same as the ones we used in the previous post; the only difference is one additional option I set here.

Then I simply transformed the first object into a raster and assigned to it the same UTM projection as the object GreaterLondonUTM.

Now we can create the map. As far as I know (and from what I tested), plotGoogleMaps can plot raster objects directly:

plotGoogleMaps(Density.raster,filename="Crimes_Density.html",layerName="Number of Crimes", fillOpacity=0.4,strokeWeight=0,colPalette=rev(heat.colors(10)))

When we use this function to plot a raster we clearly do not need to specify the zcol option.

The raster presented above can also be represented as contour lines. The advantage of this type of visualization is that it is less intrusive, compared to a raster, and can also be better suited to pinpoint problematic locations.

Doing this in R is extremely simple, since there is a dedicated function in the package raster:

Contour <- rasterToContour(Density.raster,maxpixels=100000,nlevels=10)

This function transforms the raster above into a series of 10 contour lines (we can change the number of lines with the option nlevels).

Now we can plot these lines on an interactive web map. I first tested plotGoogleMaps again, but I was not satisfied with the result, so below I present the lines to plot the contour lines using leafletR.

library(leafletR)  # toGeoJSON, styleGrad and leaflet
library(plotrix)   # color.scale

Contour.Leaflet <- toGeoJSON(Contour)

colour.scale <- color.scale(1:(length(Contour$level)-1),color.spec="rgb",extremes=c("red","blue"))

map.style <- styleGrad(pro="level",breaks=Contour$level,style.val=colour.scale,leg="Number of Crimes", lwd=2)

leaflet(Contour.Leaflet,style=map.style,base.map="tls")

As mentioned, the first thing to do in order to use leafletR is to transform the object Contour into a GeoJSON file with the function toGeoJSON.

The next step is again to set the style of the map and then plot it. In this code I changed a few things just to show some more options. The first is the custom colour scale, created with the function color.scale from the package plotrix, ranging here from red to blue.

In the function leaflet the only thing I changed is the base.map option, which can take any of the following values: `"osm"` (OpenStreetMap standard map), `"tls"` (Thunderforest Landscape), `"mqosm"` (MapQuest OSM), `"mqsat"` (MapQuest Open Aerial), `"water"` (Stamen Watercolor), `"toner"` (Stamen Toner), `"tonerbg"` (Stamen Toner background), `"tonerlite"` (Stamen Toner lite), `"positron"` (CartoDB Positron) or `"darkmatter"` (CartoDB Dark matter).

These lines create the following image, available as a webpage here: Contour


To **leave a comment** for the author, please follow the link and comment on his blog: ** R tutorial for Spatial Statistics**.


(This article was first published on ** Wiekvoet**, and kindly contributed to R-bloggers)

Last week I created a JAGS model combining data from two paper-helicopter datasets. This week, I will use the model to find the longest-flying helicopter. In step one, predictions from the whole design space were combined; to keep the number of predictions at least somewhat limited, only a few levels were used for the continuous variables. This step was used to select the best region within the whole space. Step two focuses on that best region and provides more detailed predictions.

It was decided to focus on the region top right. At least 2.7 for the lower 5% limit, and at least 3.7 for the mean time. The associated settings are summarized below.

        PaperType      WingLength      BodyWidth      BodyLength     TapedBody
 bond        :72   Min.   : 7.408   Min.   :2.540   Min.   : 3.810   No :114
 regular1    :72   1st Qu.:12.065   1st Qu.:3.387   1st Qu.: 6.562   Yes: 54
 construction:24   Median :12.065   Median :4.233   Median : 6.562
                   Mean   :11.871   Mean   :4.163   Mean   : 6.955
                   3rd Qu.:12.065   3rd Qu.:5.080   3rd Qu.: 9.313
                   Max.   :12.065   Max.   :5.080   Max.   :12.065

 TapedWing   PaperClip   PaperClip2   Fold     test        Time
 No : 68     No :84      No : 20      No:168   WH:168   Mode:logical
 Yes:100     Yes:84      WH :148                        NA's:168
             RH :  0

      Mean            u95             l05
 Min.   :3.203   Min.   :3.574   Min.   :2.701
 1st Qu.:3.327   1st Qu.:3.808   1st Qu.:2.776
 Median :3.465   Median :3.975   Median :2.847
 Mean   :3.486   Mean   :4.108   Mean   :2.873
 3rd Qu.:3.636   3rd Qu.:4.388   3rd Qu.:2.952
 Max.   :3.877   Max.   :5.044   Max.   :3.165

It is my choice to avoid the more uncertain region. Hence I will base my choice on the lower limit. Here we can see that there is a tradeoff. The bond paper needs a slightly longer BodyLength, while Regular paper can have a short BodyLength. BodyWidth should be maximized, but that is not a sensitive parameter.

For completeness, here is the mean prediction. It shows hardly any interaction, so the apparent need for a higher BodyLength with bond paper is due to a lack of experiments in that region. A few confirming final experiments seem to be in order. Within those, we could also include a low BodyWidth, since the models are unclear on whether it should be maximized or minimized.

helis <- rbind(h1,h2)
helis$test <- factor(helis$test)
helis$PaperClip2 <- factor(ifelse(helis$PaperClip=='No','No',as.character(helis$test)),
    levels=c('No','WH','RH'))

library(R2jags)
library(ggplot2)

helispred <- expand.grid(
    PaperType=c('bond','regular1','construction'),
    WingLength=seq(min(helis$WingLength),max(helis$WingLength),length.out=4),
    BodyWidth=seq(min(helis$BodyWidth),max(helis$BodyWidth),length.out=4),
    BodyLength=seq(min(helis$BodyLength),max(helis$BodyLength),length.out=4),
    TapedBody=c('No','Yes'),
    TapedWing=c('No','Yes'),
    PaperClip=c('No','Yes'),
    PaperClip2=c('No','WH','RH'),
    Fold='No',
    test='WH',
    Time=NA)
helisboth <- rbind(helis,helispred)

#################################
datain <- list(
    PaperType=c(2,1,3,1)[helisboth$PaperType],
    WingLength=helisboth$WingLength,
    BodyLength=helisboth$BodyLength,
    BodyWidth=helisboth$BodyWidth,
    PaperClip=c(1,2,3)[helisboth$PaperClip2],
    TapedBody=c(0,1)[helisboth$TapedBody],
    TapedWing=c(0,1)[helisboth$TapedWing],
    test=c(1,2)[helisboth$test],
    Time=helisboth$Time,
    n=nrow(helis),
    m=nrow(helispred))

parameters <- c('Mul','WL','BL','PT','BW','PC','TB','TW','StDev',
    'WLBW','WLPC','WLWL',
    'BLPT','BLPC','BLBL',
    'BWPC','BWBW','other','pred')

jmodel <- function() {
  for (i in 1:(n+m)) {
    premul[i] <- (test[i]==1) + Mul*(test[i]==2)
    mu[i] <- premul[i] * (
      WL*WingLength[i] +
      BL*BodyLength[i] +
      PT[PaperType[i]] +
      BW*BodyWidth[i] +
      PC[PaperClip[i]] +
      TB*TapedBody[i] +
      TW*TapedWing[i] +
      WLBW*WingLength[i]*BodyWidth[i] +
      WLPC[1]*WingLength[i]*(PaperClip[i]==2) +
      WLPC[2]*WingLength[i]*(PaperClip[i]==3) +
      BLPT[1]*BodyLength[i]*(PaperType[i]==2) +
      BLPT[2]*BodyLength[i]*(PaperType[i]==3) +
      BLPC[1]*BodyLength[i]*(PaperClip[i]==2) +
      BLPC[2]*BodyLength[i]*(PaperClip[i]==3) +
      BWPC[1]*BodyWidth[i]*(PaperClip[i]==2) +
      BWPC[2]*BodyWidth[i]*(PaperClip[i]==3) +
      WLWL*WingLength[i]*WingLength[i] +
      BLBL*BodyLength[i]*BodyLength[i] +
      BWBW*BodyWidth[i]*BodyWidth[i]
    )
  }
  for (i in 1:n) {
    Time[i] ~ dnorm(mu[i], tau[test[i]])
  }
  # residual[i] <- Time[i]-mu[i]
  for (i in 1:2) {
    tau[i] <- pow(StDev[i], -2)
    StDev[i] ~ dunif(0, 3)
    WLPC[i] ~ dnorm(0, 1)
    BLPT[i] ~ dnorm(0, 1)
    BLPC[i] ~ dnorm(0, 1)
    BWPC[i] ~ dnorm(0, 1)
  }
  for (i in 1:3) {
    PT[i] ~ dnorm(PTM, tauPT)
  }
  tauPT <- pow(sdPT, -2)
  sdPT ~ dunif(0, 3)
  PTM ~ dnorm(0, 0.01)
  WL ~ dnorm(0, 0.01)
  BL ~ dnorm(0, 0.01)
  BW ~ dnorm(0, 0.01)
  PC[1] <- 0
  PC[2] ~ dnorm(0, 0.01)
  PC[3] ~ dnorm(0, 0.01)
  TB ~ dnorm(0, 0.01)
  TW ~ dnorm(0, 0.01)
  WLBW ~ dnorm(0, 1)
  WLTW ~ dnorm(0, 1)
  WLWL ~ dnorm(0, 1)
  BLBL ~ dnorm(0, 1)
  BWBW ~ dnorm(0, 1)
  other ~ dnorm(0, 1)
  Mul ~ dnorm(1, 1) %_% I(0, 2)
  for (i in 1:m) {
    pred[i] <- mu[i+n]
  }
}

jj <- jags(model.file=jmodel,
    data=datain,
    parameters=parameters,
    progress.bar='gui',
    n.chain=5,
    n.iter=4000,
    inits=function() list(Mul=1.3,WL=0.15,BL=-.08,PT=rep(1,3),
        PC=c(NA,0,0),TB=0,TW=0))
#jj
predmat <- jj$BUGSoutput$sims.matrix[,grep('pred',dimnames(jj$BUGSoutput$sims.matrix)[[2]],value=TRUE)]
helispred$Mean <- colMeans(predmat)
helispred$u95 <- apply(predmat,2,function(x) quantile(x,.95))
helispred$l05 <- apply(predmat,2,function(x) quantile(x,.05))

png('select1.png')
qplot(y=Mean,x=l05,data=helispred)
dev.off()

select <- helispred[helispred$Mean>3.2 & helispred$l05>2.7,]
summary(select)

########
helispred <- expand.grid(
    PaperType=c('bond','regular1'),
    WingLength=12.065,
    BodyWidth=seq(2.5,5,length.out=11),
    BodyLength=seq(3.8,12,length.out=11),
    TapedBody=c('No'),
    TapedWing=c('No','Yes'),
    PaperClip='No',
    PaperClip2=c('WH'),
    Fold='No',
    test='WH',
    Time=NA)
helisboth <- rbind(helis,helispred)

datain <- list(
    PaperType=c(2,1,3,1)[helisboth$PaperType],
    WingLength=helisboth$WingLength,
    BodyLength=helisboth$BodyLength,
    BodyWidth=helisboth$BodyWidth,
    PaperClip=c(1,2,3)[helisboth$PaperClip2],
    TapedBody=c(0,1)[helisboth$TapedBody],
    TapedWing=c(0,1)[helisboth$TapedWing],
    test=c(1,2)[helisboth$test],
    Time=helisboth$Time,
    n=nrow(helis),
    m=nrow(helispred))

jj <- jags(model.file=jmodel,
    data=datain,
    parameters=parameters,
    progress.bar='gui',
    n.chain=5,
    n.iter=4000,
    inits=function() list(Mul=1.3,WL=0.15,BL=-.08,PT=rep(1,3),
        PC=c(NA,0,0),TB=0,TW=0))
#jj
predmat <- jj$BUGSoutput$sims.matrix[,grep('pred',dimnames(jj$BUGSoutput$sims.matrix)[[2]],value=TRUE)]
helispred$Mean <- colMeans(predmat)
helispred$u95 <- apply(predmat,2,function(x) quantile(x,.95))
helispred$l05 <- apply(predmat,2,function(x) quantile(x,.05))

png('select2.png')
qplot(y=Mean,x=l05,data=helispred)
dev.off()

png('l05.png')
v <- ggplot(helispred, aes(BodyLength, BodyWidth, z = l05))
v + stat_contour(aes(colour = ..level..)) +
    scale_colour_gradient(name='Time') +
    facet_grid(PaperType ~ TapedWing) +
    ggtitle('Lower 95% prediction')
dev.off()

png('mean.png')
v <- ggplot(helispred, aes(BodyLength, BodyWidth, z = Mean))
v + stat_contour(aes(colour = ..level..)) +
    scale_colour_gradient(name='Time') +
    facet_grid(PaperType ~ TapedWing) +
    ggtitle('Mean prediction')
dev.off()

To **leave a comment** for the author, please follow the link and comment on his blog: ** Wiekvoet**.


(This article was first published on ** geomorph**, and kindly contributed to R-bloggers)

Geomorph users,

We have uploaded version 2.1.5 of geomorph. New features:

- A new Auto Mode allows users to include pre-digitized landmarks in build.template() and digitsurface()
- gridPar() is a new function to customize plots made by plotRefToTarget()
- digit.curves() is a new function to calculate equidistant semilandmarks along 2D and 3D curves (based on the tpsDIG algorithm for 2D curves)
- define.sliders() is a new interactive function for defining sliding semilandmarks for 2D and 3D curves, plus an automatic mode when given a sequence of semilandmarks along a curve
- plotGMPhyloMorphoSpace() now has options to customise the plots

Important Bug Fixes:

- Corrected an error in plotAllometry() where verbose=T did not return

Other Changes:

- pairwiseD.test() and pairwise.slope.test() deprecated and replaced by advanced.procD.lm()
- Read functions now allow both tab and space delimited files
- define.sliders.2d() and define.sliders.3d() deprecated and replaced by define.sliders()

Emma

* geomorph: Geometric Morphometric Analyses of 2D/3D Landmark Data

Read, manipulate, and digitize landmark data, generate shape variables via Procrustes analysis for points, curves and surfaces, perform shape analyses, and provide graphical depictions of shapes and patterns of shape variation.

To **leave a comment** for the author, please follow the link and comment on his blog: ** geomorph**.


(This article was first published on ** Xi'an's Og » R**, and kindly contributed to R-bloggers)

After the Singapore Maths Olympiad birthday problem that went viral, here is a Vietnamese primary-school puzzle that made the front page of The Guardian. The question is: *Fill the empty slots with all integers from 1 to 9 for the equality to hold*. In other words, find *a,b,c,d,e,f,g,h,i* such that

*a* + 13×*b*:*c* + *d* + 12×*e* − *f* − 11 + *g*×*h*:*i* − 10 = 66.

With presumably the operation ordering corresponding to

*a* + (13×*b*:*c*) + *d* + (12×*e*) − *f* − 11 + (*g*×*h*:*i*) − 10 = 66

although this is not specified in the question. Which amounts to

*a* + (13×*b*:*c*) + *d* + (12×*e*) − *f* + (*g*×*h*:*i*) = 87

and implies that *c* divides *b* and *i* divides *g*×*h*. Rather than pursuing this analytical quest further, I resorted to R coding, checking by brute force whether or not a given sequence works.

baoloc <- function(ord = sample(1:9)) {
  if (ord[1] + (13*ord[2]/ord[3]) + ord[4] + 12*ord[5] - ord[6] - 11 +
      (ord[7]*ord[8]/ord[9]) - 10 == 66) return(ord)
}

I then applied this function to all permutations of {1,…,9} *[with the help of the permn function from the combinat package]* and found 128 distinct solutions, including some for which b:c is not an integer. (None of this obviously gives a hint as to how an 8-year-old could solve the puzzle.)
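For completeness, a small sketch of that exhaustive search (assuming the combinat package is available; the count of 128 is the one reported above):

```r
library(combinat)  # permn enumerates all permutations of a vector

# Same brute-force test as baoloc above, applied to every permutation of 1:9;
# the function returns the permutation when it satisfies the puzzle, else NULL
baoloc <- function(ord) {
  if (ord[1] + 13*ord[2]/ord[3] + ord[4] + 12*ord[5] - ord[6] - 11 +
      ord[7]*ord[8]/ord[9] - 10 == 66) ord
}
sols <- Filter(Negate(is.null), lapply(permn(1:9), baoloc))
length(sols)  # 128 distinct solutions, as found in the post
```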

Filed under: Books, Kids, R, University life Tagged: mathematical puzzle, permutation, primary school, The Guardian, Vietnam

To **leave a comment** for the author, please follow the link and comment on his blog: ** Xi'an's Og » R**.


(This article was first published on ** Burns Statistics » R language**, and kindly contributed to R-bloggers)

Surprisingly good.

And it’s not like my expectations were especially low.

There are 20 chapters. I mostly like the chapters and their order.

Hadley breaks the 20 chapters into 4 parts. He’s wrong. Figure 1 illustrates the correct way to formulate parts.

Figure 1: Chapters and Parts of Advanced R.

There are by now lots of introductions to R. There’s even an *R for Dummies*.

The introduction here is clean and serviceable. If you are an experienced programmer learning R, these chapters will provide you with most of the basics.

Chapter 5 on code style is a bit of fluff that insulates the introductory material from the more advanced part. I hope most people can just about cope with there being an extra space or new line compared to their preference.

However, the chapter does talk about one thing that I think is very important: consistent naming style. Consistent naming conserves a lot of energy for users — it allows them to think about what they are doing rather than trying to remember trivia. I believe my personal best for continuous consistent naming in R is 4.3 hours of coding time. In R it’s hard.

In which the balance of the universe is partially restored.

There are lots of places that talk about what a chaotic mess R is. A book-length dose of venom is *The R Inferno*.

These chapters, in contrast, show the elegance, the flexibility, the power of the R language. This is the mesmerizing part of the book.

Both views are valid: the mess is on the surface, the beauty is deeper.

Useful. But it includes the word “work” so it can’t be all good.

R doesn’t protect you from yourself: you can easily shoot yourself in the foot. As long as you don’t aim the gun at your foot and pull the trigger, you won’t have a problem.

On speed:

R was purposely designed to make data analysis and statistics easier for you to do. It was not designed to make life easier for your computer. While R is slow compared to other programming languages, for most purposes, it’s fast enough.

On object orientation:

S3 is informal and ad hoc, but it has a certain elegance in its minimalism: you can’t take away any part of it and still have a useful OO system.

Here is the code that created Figure 1:

P.advanced_R_table_of_contents <- function(filename = "advanced_R_table_of_contents.png", seed = 18) {
  if(length(filename)) {
    png(file=filename, width=512, height=700)
    par(mar=rep(0,4)+.1)
  }
  advR.chap <- c('Introduction', 'Data structures', 'Subsetting', 'Vocabulary',
    'Style guide', 'Functions', 'OO field guide', 'Environments',
    'Debugging, condition handling, and defensive programming',
    'Functional programming', 'Functionals', 'Function operators',
    'Non-standard evaluation', 'Expressions', 'Domain specific languages',
    'Performance', 'Optimising code', 'Memory',
    'High performance functions with Rcpp', "R's C interface")
  advR.chap[9] <- "Debugging, [...]"
  plot.new()
  plot.window(xlim=c(0,4), ylim=c(21,0))
  hl <- 1.7
  pl <- 2.3
  if(length(seed) && !is.na(seed)) set.seed(seed)
  text(c(.5, 2, 3.5), .5, c("Hadley part", "Chapter", "Pat part"), font=2)
  rect(0, 1.5, hl, 9.5, col=do.call('rgb', as.list(runif(3, .8, 1))), border=NA)
  rect(0, 9.5, hl, 12.5, col=do.call('rgb', as.list(runif(3, .8, 1))), border=NA)
  rect(0, 12.5, hl, 15.5, col=do.call('rgb', as.list(runif(3, .8, 1))), border=NA)
  rect(0, 15.5, hl, 20.5, col=do.call('rgb', as.list(runif(3, .8, 1))), border=NA)
  rect(pl, 1.5, 4, 4.5, col=do.call('rgb', as.list(runif(3, .8, 1))), border=NA)
  rect(pl, 5.5, 4, 14.5, col=do.call('rgb', as.list(runif(3, .8, 1))), border=NA)
  rect(pl, 14.5, 4, 20.5, col=do.call('rgb', as.list(runif(3, .8, 1))), border=NA)
  text(2, 1:20, advR.chap)
  text(hl/2, 5.5, "Foundations")
  text(hl/2, 11, "Functional\nprogramming")
  text(hl/2, 14, "Computing\non the\nlanguage")
  text(hl/2, 18, "Performance")
  text((4+pl)/2, 3, "Introductory\nR")
  text((4+pl)/2, 10, "Language\nR")
  text((4+pl)/2, 17.5, "Working with\nR")
  if(length(filename)) {
    dev.off()
  }
}

The part that may be of most interest is that the colors are randomly generated. Not all colors are allowed — they can only go so dark.
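The constraint sits entirely in the runif bounds: each RGB channel is drawn from [0.8, 1], so the palette can never get dark. A one-line sketch of the idea (the variable name is mine):

```r
# Draw one random light colour: every RGB channel is at least 0.8 of full
# intensity, so the resulting hex colour is always pale
light_col <- do.call('rgb', as.list(runif(3, .8, 1)))
light_col  # a "#RRGGBB" string with each channel >= 204 (0xCC)
```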

The post Review of ‘Advanced R’ by Hadley Wickham appeared first on Burns Statistics.

To **leave a comment** for the author, please follow the link and comment on his blog: ** Burns Statistics » R language**.
