(This article was first published on ** R – Xi'an's Og**, and kindly contributed to R-bloggers)

[A review of Bayesian Essentials that appeared in Technometrics two weeks ago, with the first author being rechristened Jean-Michael!]

“Overall this book is a very helpful and useful introduction to Bayesian methods of data analysis. I found the use of R, the code in the book, and the companion R package, bayess, to be helpful to those who want to begin using Bayesian methods in data analysis. One topic that I would like to see added is the use of Bayesian methods in change point problems, a topic that we found useful in a recent article and which could be added to the time series chapter. Overall this is a solid book and well worth considering by its intended audience.”

David E. BOOTH

Kent State University

Filed under: Books, R, Statistics, University life Tagged: Bayesian Core, Bayesian Essentials with R, book review, Jean-Michel Marin, Kent State University, R, Technometrics, time series

To **leave a comment** for the author, please follow the link and comment on their blog: ** R – Xi'an's Og**.

R-bloggers.com offers

(This article was first published on ** Revolutions**, and kindly contributed to R-bloggers)

Since it expanded its focus from predicting the US election, FiveThirtyEight has emerged as a prominent source of in-depth data journalism, with data-driven analysis of media, culture, politics and society. A recent feature combined CDC and independent data sources to break down the nearly 34,000 gun deaths in the US in 2014 by cause of death and the gender, age and race of the victim:

Other than a few hold-outs working in Stata and Excel, the data journalism team at FiveThirtyEight uses the open-source R language to perform the analysis. So that others can reproduce their results, the R code is usually published on GitHub (the R code for the gun deaths feature is here). In fact, R forms a key component of the data science workflow at FiveThirtyEight, as described in this presentation by quantitative editor Andrew Flowers at the useR! 2016 conference:

Andrew mentioned in the presentation that one of the many reasons why they use R is because of the flexibility of its graphics system, which allows them to produce attractive yet complex charts on deadline. In fact, they use a custom (unreleased) ggplot2 theme to give their graphics the recognizable FiveThirtyEight style (sometimes with some touching up in Illustrator):

If you haven't seen them before, check out some of the reports Andrew mentions in his presentation, all facilitated by R:

To **leave a comment** for the author, please follow the link and comment on their blog: ** Revolutions**.


By Kay Ewbank

If you've enjoyed books such as *Freakonomics* or *Outliers*, you'll feel at home reading this book as it uses a similar approach: take an interesting question, such as 'Does the higher price of cigarettes deter smoking?', and use it as the basis for some data analysis.

The aim is to teach you how to do your own analyses. Haider works through the examples in R, Stata, SPSS and SAS. Within the book the examples are worked mainly in R and one of the other languages; the code for the remaining languages is available for download from the IBM Press website, along with details of how to use it.

The book opens with a chapter called 'the bazaar of storytellers' that discusses what data science is and gives the author's definition of a data scientist. The next chapter, data in the 24/7 connected world, identifies sources of data that you can analyse, and also introduces the concept of big data. Chapter three looks at how data becomes meaningful when it is used as the basis for 'stories'. Haider's view is that the strength of data science lies in the power of the narrative, and that is what underpins most of the book.

From a practical perspective, the book begins to get useful in chapter four, which looks at how you can generate summary tables, including multi-dimensional tables. Next is a chapter on graphics and how to generate them. If you're thinking that it seems a bit odd to concentrate on the 'end result' first, you have to remember that the author's view is that data analysis is only useful if your audience actually looks at the results and understands them.

The next chapter gets more into the workings of data analysis with an examination of hypothesis testing using techniques such as t-tests and correlation analysis. Regression analysis is looked at next, based on the notion of "why tall parents don't have even taller children". This is a fun chapter, with examples including consumer spending on food and alcohol, housing markets, and whether the appearance of teachers affects their evaluations by students.

A chapter on analysis of binary variables considers logit and probit models using data from New York transit use. Categorical data and multinomial variables are the topic of the next chapter, which expands on the ideas of logit models.

Spatial data analysis is covered next, taking us into the use of GIS systems and how these have expanded the options for data analysis. There's a good chapter on time series analysis, looking at how regression models can be used with time series data, using the example of forecasting housing markets.

The final chapter introduces the field of data mining. It's more of a taster discussing some of the techniques that can be used, but fun anyway.

Overall, this is a book that is accessible, interesting and still manages to introduce the statistical techniques you need to use for real data analytical work. A good way to get into data analysis.

To keep up with our coverage of books for programmers, follow @bookwatchiprog on Twitter or subscribe to I Programmer's Books RSS feed for each day's new addition to Book Watch and for new reviews.

(This article was first published on ** eKonometrics**, and kindly contributed to R-bloggers)

I PROGRAMMER’s Kay Ewbank reviews *Getting Started with Data Science: Making Sense of Data with Analytics*.
## “*Overall, this is a book that is accessible, interesting and still manages to introduce the statistical techniques you need to use for real data analytical work. A good way to get into data analysis*.”


To **leave a comment** for the author, please follow the link and comment on their blog: ** eKonometrics**.


(This article was first published on ** R – Tech and Mortals**, and kindly contributed to R-bloggers)

I recently participated in a weekend-long data science hackathon titled ‘The Smart Recruits’. Organized by the amazing folks at Analytics Vidhya, it saw some serious competition. Although my performance can be classified as decent at best (47th out of 379 participants), it was among the more satisfying competitions I have taken part in on both AV (profile) and Kaggle (profile) over the last few months. Thus, I decided it might be worthwhile to try and share some insights as a data science autodidact.

The competition required us to use historical data to create a model to help an organization pick out better recruits. The evaluation metric to be used for judging the predictions was AUC (area under the ROC curve). You can read the problem statement on the competition page.
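Since AUC is the yardstick throughout this post, it is worth seeing how little code it takes: the rank-sum (Mann-Whitney) formulation gives it in a few lines of base R. This is my own illustrative sketch, not the organizers' implementation:

```r
# AUC via the rank-sum formulation: the probability that a random positive
# is scored above a random negative.
auc <- function(labels, scores) {
  r  <- rank(scores)                        # ranks of all predicted scores
  n1 <- sum(labels == 1)                    # number of positives
  n0 <- sum(labels == 0)                    # number of negatives
  (sum(r[labels == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

auc(c(0, 0, 1, 1), c(0.1, 0.4, 0.35, 0.8))  # 3 of 4 pos/neg pairs ordered correctly: 0.75
```

A score of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why AUC is a natural metric for an unbalanced 0/1 recruitment problem.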

The hackathon itself was a weekend-long sprint and the data reflected this. The training set (1.2 MB) comprised 9,527 observations of 23 variables, including the target variable. The test set (~600 KB) comprised 5,045 observations and 22 predictor variables.

My code for the competition can be found here. The code is in R although the description of my approach towards the problem in this post is presented in a language-agnostic manner.

As for the choice of language, I used R since I am more proficient with it than with Python, my other language of choice for data-intensive work, and that was an important factor considering the sprint nature of the competition. Also, I find R more suited than Python for data manipulation and visualization; I prefer Python when dabbling in deep learning or working with image and textual data. Then again, that’s just my opinion.

*Recommended read: R vs Python for Data Science: The Winner is …*

I spent some time exploring the data and the nature of the variables through statistical summaries and visualizations. Getting to know your data is extremely important, and just throwing it into a model straight away without doing so is a really bad approach! That can’t be stressed enough.

That being said, I didn’t spend as much time on this part as one normally would when participating in a month-long Kaggle competition or when working with real-life data. It’s a sprint after all.

*Recommended read: A Comprehensive Guide to Data Exploration*

If I were forced to choose just one significant lesson I have taken away from participating in such competitions, it has to be the importance of cross-validation.

So the next thing I did after playing around with data for a while was to try and set up a good CV framework. There are all sorts of complicated frameworks that are used by experienced campaigners; peer into an active Kaggle forum and you will know.

But k-fold cross-validation, which I used, is in general a good enough start for most competitions. The decision regarding how to perform the split is critical. Random splits might be good enough at times. Other times the classes (0/1) are unbalanced, so you might need to do stratified sampling, or time-based splits (month, quarter, year, etc.) will have to be made.
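A stratified k-fold split is only a few lines of base R. This is a minimal sketch with invented data, not the competition code: each class is spread evenly across the k folds so every fold sees the same class balance.

```r
# Assign each observation to one of k folds, stratified by class label.
stratified_folds <- function(y, k = 5, seed = 42) {
  set.seed(seed)
  folds <- integer(length(y))
  for (cls in unique(y)) {
    idx <- which(y == cls)
    # spread this class evenly over the k folds, in random order
    folds[idx] <- sample(rep(seq_len(k), length.out = length(idx)))
  }
  folds
}

y <- c(rep(0, 90), rep(1, 10))   # an unbalanced 0/1 target, like ours
f <- stratified_folds(y, k = 5)
table(y, f)                      # every fold gets 18 zeros and 2 ones
</tbody>
```

With a plain random split, a 5-fold partition of 10 positives could easily leave one fold with no positives at all, which makes the fold's AUC undefined; stratification avoids that.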

*Recommended read: Improve Your Model Performance using Cross Validation*

As a beginner, I used to stumble across this term quite a lot and was unable to find a good resource straight away. Over time, I have learnt what it represents and why it’s an indispensable part of any machine learning problem. A quote from this gem of an article, Discover Feature Engineering, sums it up nicely:

Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data.

The bulk of my time was spent on this part. Briefly, I encoded the categorical variables (such as Gender, Occupation, Qualification), performed imputation to deal with missing/NA values, created new variables and removed some variables.

*The encoding part* is fairly straightforward; a glance through the code and one can understand the process.
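For illustration, here is what the two usual encodings look like in base R. The Gender column name comes from the post; the values are invented, and this is a sketch rather than the competition code:

```r
# A toy categorical column like the Gender variable in the data
gender <- factor(c("M", "F", "F", "M"))

# Label encoding: each level becomes an integer (levels sort alphabetically,
# so F = 1 and M = 2)
as.integer(gender)

# One-hot encoding: one 0/1 indicator column per level (genderF, genderM)
model.matrix(~ gender - 1)
```

Tree-based models such as XGBoost are usually fine with integer labels, while linear models generally need the one-hot form so no spurious ordering is implied.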

*Dealing with missing values* is tricky for beginners and experienced campaigners alike. There are various methodologies one might employ. You can do away with incomplete observations altogether, but that isn’t a good idea, and in a case like ours, when we don’t have a lot of data points, it’s a very bad approach. Another usual approach is to use the mean/median/mode of observations for numeric variables, which is what I did. There are also sophisticated algorithms for imputation; MICE had given me good results in the past, but after trying it out, I chose not to implement it in the final model.
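The mean/median/mode approach fits in a few lines. This is an illustrative helper with toy data, not the competition code: median for numeric columns, the most frequent value otherwise.

```r
# Fill NAs with the column median (numeric) or modal value (everything else).
impute_simple <- function(df) {
  for (col in names(df)) {
    x <- df[[col]]
    if (is.numeric(x)) {
      x[is.na(x)] <- median(x, na.rm = TRUE)
    } else {
      x[is.na(x)] <- names(which.max(table(x)))  # most frequent value
    }
    df[[col]] <- x
  }
  df
}

df <- data.frame(age  = c(25, NA, 31, 27),
                 city = c("NY", "NY", NA, "SF"),
                 stringsAsFactors = FALSE)
impute_simple(df)   # NA age becomes 27 (median), NA city becomes "NY" (mode)
```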

*Creating new useful variables* was key in this competition. I managed to come up with some useful new features. For example, I split Applicant_DOB into three separate columns for date, month and year, and further used the year column to create Applicant_Age.
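The idea in miniature (the column names come from the post, the dates are invented, and 2016 was the year of the competition):

```r
# Split the date of birth into its components, then derive an age feature
Applicant_DOB  <- as.Date(c("1988-04-12", "1979-11-03"))
dob_day        <- as.integer(format(Applicant_DOB, "%d"))
dob_month      <- as.integer(format(Applicant_DOB, "%m"))
dob_year       <- as.integer(format(Applicant_DOB, "%Y"))
Applicant_Age  <- 2016 - dob_year
Applicant_Age   # 28 37
```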

*Recommended read: Discover Feature Engineering, How to Engineer Features and How to Get Good at It*

The final model employs XGBoost, one of the most popular algorithms in data science competitions. I also tried out a couple of models from the H2O package but went with XGBoost eventually since it provided better performance both locally and on the public leaderboard.

Simply put, hyperparameters can be thought of as cogs that can be turned to fine-tune the machine that is your algorithm. In the case of XGBoost, nrounds (the number of iterations performed during training) and max_depth (the maximum depth of a tree created during training) are examples of hyperparameters.
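For a binary problem scored on AUC, a parameter set might look like the following. The values are illustrative examples, not the configuration used in the competition:

```r
# An example XGBoost hyperparameter set for a 0/1 target scored on AUC
params <- list(objective   = "binary:logistic",  # binary classification
               eval_metric = "auc",              # the competition metric
               max_depth   = 6,                  # maximum depth of each tree
               eta         = 0.1)                # learning rate
nrounds <- 200                                   # number of boosting iterations

# With the xgboost package installed, training would then look like:
# model <- xgboost::xgboost(data = X, label = y, params = params, nrounds = nrounds)
```

Deeper trees and more rounds fit the training data more closely, which is exactly why a trustworthy cross-validation setup (above) matters when turning these cogs.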

There are automated methods such as GridSearch and RandomizedSearch which can be used. I used a manual approach, as described here by Davut Polat, a Kaggle master.

*Recommended read: How to Evaluate Machine Learning Models: Hyperparameter Tuning*

Solving a problem such as this takes time and perseverance, among other things. Analytics Vidhya’s hackathons have progressively become better in terms of the quality of competition on offer. Having Kaggle grandmasters and masters (including the eventual winner, Rohan Rao) in the discussion forums, and in the competition itself, helped.

It’s fun, so if you are looking to get your hands dirty in a data science competition, don’t think too much. Dive right in. Even if it seems overwhelming at first, you will end up having fun and learning something along the way.

**The code is available as a Github repo. If you read and liked the article, sharing it would be a good next step. Drop me a mail, or hit me up on Twitter or Quora in case you want to get in touch.**

To **leave a comment** for the author, please follow the link and comment on their blog: ** R – Tech and Mortals**.


(This article was first published on ** R – rud.is**, and kindly contributed to R-bloggers)

This is another `purrr`-focused post, but it's also an homage to the nascent `magick` package (R interface to ImageMagick) by @opencpu.

We're starting to see/feel the impact of the increasing drought up here in southern Maine. I've used the data from the U.S. Drought Monitor before on the blog, but they also provide shapefiles and this seemed like a good opportunity to further demonstrate the utility of `purrr` and make animations directly using `magick`. Plus, I wanted to see the progression of the drought. Putting `library()` statements for `purrr`, `magick` and `broom` together was completely random, but I now feel compelled to find a set of functions to put into a `cauldron` package. But, I digress.

Apart from giving you an idea of the extent of the drought, working through this will help you:

- use the `quietly()` function (which automagically turns off warnings for a function)
- see another example of a formula function
- illustrate the utility of `map_df()`, and
- see how to create an animation pipeline for `magick`

Comments are in the code and the drought gif is at the end. I deliberately only had it loop once, so refresh the image if you want to see the progression again. Also, drop a note in the comments if anything needs more exposition. (NOTE: I was fairly bad and did virtually no file cleanup in the function, so you'll have half a year's shapefiles in your `getwd()`. Consider the cleanup an exercise for the reader.)

```
library(rgdal)
library(sp)
library(albersusa) # devtools::install_github("hrbrmstr/albersusa")
library(ggplot2)   # devtools::install_github("hadley/ggplot2")
library(ggthemes)
library(rgeos)

# the witch's brew
library(purrr)
library(broom)
library(magick)

#' Get a drought map shapefile and turn it into a PNG
drought_map <- function(wk) {

  # need to hush some chatty functions
  hush_tidy <- quietly(tidy)

  # some are more stubborn than others
  old_warn <- getOption("warn")
  options(warn=-1)

  week <- format(wk, "%Y%m%d")

  # get the drought shapefile only if we don't have it already
  URL <- sprintf("http://droughtmonitor.unl.edu/data/shapefiles_m/USDM_%s_M.zip", week)
  (fil <- basename(URL))
  if (!file.exists(fil)) download.file(URL, fil)
  unzip(fil)

  # read in the shapefile and reduce the polygon complexity
  dr <- readOGR(sprintf("USDM_%s.shp", week),
                sprintf("USDM_%s", week),
                verbose=FALSE,
                stringsAsFactors=FALSE)

  dr <- SpatialPolygonsDataFrame(gSimplify(dr, 0.01, TRUE), dr@data)

  # separate out each drought level into its own fortified data.frame
  map(dr$DM, ~subset(dr, DM==.)) %>%
    map(hush_tidy) %>%
    map_df("result", .id="DM") -> m

  # get a conus base map (prbly cld have done map_data("usa"), too)
  usa_composite() %>%
    subset(!(iso_3166_2 %in% c("AK", "HI"))) %>%
    hush_tidy() -> usa
  usa <- usa$result # an artifact of using quietly()

  # this is all Ushey's fault. the utility of cmd-enter to run
  # the entire ggplot2 chain (in RStudio) turns out to have a
  # greater productivity boost (i measured it) than my shortcuts for
  # gg <- gg + snippets and hand-editing the "+" bits out when
  # editing old plot constructs. I'm not giving up on gg <- gg + tho

  # Note putting the "base" layer on top since we don't really
  # want to deal with alpha levels of the drought polygons and
  # we're only plotting the outline of the us/states, not filling
  # the interior(s).
  ggplot() +
    geom_map(data=m, map=m,
             aes(long, lat, fill=DM, map_id=id),
             color="#2b2b2b", size=0.05) +
    geom_map(data=usa, map=usa, aes(long, lat, map_id=id),
             color="#2b2b2b88", fill=NA, size=0.1) +
    scale_fill_brewer("Drought Level", palette="YlOrBr") +
    coord_map("polyconic", xlim=c(-130, -65), ylim=c(25, 50)) +
    labs(x=sprintf("Week: %s", wk)) +
    theme_map() +
    theme(axis.title=element_text()) +
    theme(axis.title.x=element_text()) +
    theme(axis.title.y=element_blank()) +
    theme(legend.position="bottom") +
    theme(legend.direction="horizontal") -> gg

  options(warn=old_warn) # put things back the way they were

  outfil <- sprintf("gg-dm-%s.png", wk)
  ggsave(outfil, gg, width=8, height=5)

  outfil
}

# - create a vector of weeks (minus the current one)
# - create the individual map PNGs
# - read the individual map PNGs into a list
# - join the images together
# - create the animated gif structure
# - write the gif to a file
seq(as.Date("2016-01-05"), Sys.Date(), by="1 week") %>%
  head(-1) %>%
  map(drought_map) %>%
  map(image_read) %>%
  image_join() %>%
  image_animate(fps=2, loop=1) %>%
  image_write("drought.gif")
```

To **leave a comment** for the author, please follow the link and comment on their blog: ** R – rud.is**.


(This article was first published on ** R – Longhow Lam's Blog**, and kindly contributed to R-bloggers)

Some time ago I had the honor of attending an interesting talk by Tijmen Blankevoort on neural networks and deep learning. Convolutional and recurrent neural networks had already caught my interest, and this talk inspired me to dive deeper into these topics and do some more experiments with them.

In the same session, organized by Martin de Lusenet for Ziggo (a Dutch cable company), I also had the honor of giving a talk; my presentation contained a text mining experiment that I did earlier on the Dutch TV soap GTST, *“Goede Tijden Slechte Tijden”*. A nice idea from Tijmen was: why not use deep learning to generate new episode plots for GTST?

So I did that, see my LinkedIn post on GTST. However, those episodes are in Dutch and I guess only interesting for people here in the Netherlands. So to make things **more international** and **spicier**, I generated *some new texts* based on deep learning and the erotic romance novel **50 shades of grey**.

In R or SAS you have long been able to train plain vanilla neural networks: the so-called fully connected networks, where all input nodes are connected to all nodes in the following hidden layer, and all nodes in a hidden layer are connected to all nodes in the following hidden layer or the output layer.

In more recent years, deep learning frameworks have become very popular, for example Caffe, Torch, CNTK, TensorFlow and MXNET. The additional value of these frameworks compared to, say, SAS is:

- They support more network types than plain vanilla networks. For example, convolutional networks, where not all input nodes are connected to a next layer. And recurrent networks, where loops are present. A nice introduction to these networks can be found here and here.
- They support computations on GPU’s, which could speed up things dramatically.
- They are open-source and free. No need for long sales and implementation cycles. Just download it and use it!

For my experiment I used the text of the erotic romance novel **50 shades of grey**. A pdf can be found here, I used

Moreover, the R example script of MXNET is ready to run; I just changed the input data and used more rounds of training and more hidden layers. The script and the data can be found on Github.

The LSTM model is fit at the character level; the complete romance novel contains 817,204 characters, and each character is mapped to a number (91 unique numbers). The first few numbers are shown in the following figure.
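The mapping can be reproduced in miniature with a toy string standing in for the novel text:

```r
# Map each character of a string to its index in the vocabulary of unique
# characters (the full novel yields a vocabulary of 91)
txt   <- "fifty shades"
chars <- strsplit(txt, "")[[1]]    # split into single characters
vocab <- sort(unique(chars))       # the vocabulary of unique characters
ids   <- match(chars, vocab)       # each character becomes its vocabulary index
ids
```

The LSTM then trains on these integer sequences and, when sampling, the predicted integers are mapped back through the vocabulary to characters.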

Once the model has been trained it can generate new text, character by character!

arsess whateveryuu’re still expeliar a sally.Reftion while break in a limot.”“Yes, ald what’s at my artmer and brow maned, but I’m so then for adinches suppretion. If you think vining. “Anastasia, and depregineon posing rave.He’d deharing minuld, him drits.“Miss Steele“Fasting at liptfel, Miss I’ve dacind her leaches reme,” he knimes.“I want to blight on to the wriptions of my great. I find sU she asks the stroke, to read with what’s old both – in our fills into his ear, surge • whirl happy, this is subconisue. Mrs. I can say about the battractive see. I sluesis her ever returns.“Anab.It’s too even ullnes.“By heaven. Greyabout his voice. “Rest of the meriction.”He scrompts to the possible. I shuke my too sucking four finishessaures. I need to fush quint the only more eat at me.“Oh my. Kate. He’s follower socks?“Lice in Quietly. In so morcieut wait to obsed teach beside my tired steately liked trying that.”Kate for new of its street of confcinged. I haven’t Can regree.“Where.” I fluscsup hwindwer-and I haveI’ll staring for conisure, pain!”I know he’s just doesn’t walk to my backeting on Kate has hotelby of confidered Christaal side, supproately. Elliot, but it’s the ESca, that feel posing, it make my just drinking my eyes bigror on my head. S I’ll tratter topality butterch,” I muda nevignes, bleamn.“It’s not by there soup. He’s washing, and I arms and have. I wave to make my eyes. It’s forgately? Dash I’d desire to come your drink my heathman legtyou hay D1 Eyep, Christian Gry, husder with a truite sippking, I coold behind, it didn’t want to mive not to my stop?”“Yes.”“Sire, stcaring it was do and he licks his viice ever.”I murmurs,most stare thut’s the then staraline for neced outsive. Sheso know what differ at,” he murmurs?“I shake my headanold.” Jeez.“Are you?” Eviulder keep “Oh,_ I frosing gylaced in – angred. I am most drink to start and try aparts through. 
I really thrial you, dly woff you stund, there, I care an right dains to rainer.” He likes his eye finally finally my eyes to over opper heaven, places my trars his Necked her jups.“Do you think your or Christian find at me, is so with that stand at my mouth sait the laxes any litee, this is a memory rude. Itflush,” He says usteer?”“Are so that front up.I preparraps. I don’t scomine Kneat for from Christian.“Christian,’! he leads the acnook. I can’t see. I breathing Kate’ve bill more over keen by. He releases?”“I’m kisses take other in to peekies my tipgents my

The generated text **does not make any sense**, nor will it win any literature prize soon. Keep in mind that the model is based on ‘only’ 817,204 characters (which is considered a small number), and I did not bother to fine-tune the model at all. But still it is **funny and remarkable** to see that when you use it to generate text, character by character, it can produce a lot of correct English words and even some correct basic grammar patterns!

cheers, Longhow.

To **leave a comment** for the author, please follow the link and comment on their blog: ** R – Longhow Lam's Blog**.


(This article was first published on ** R – Petr Keil**, and kindly contributed to R-bloggers)

This is my tribute to the fantastic R package spatstat. All the artwork was 100% done in R; the source code is here. Click the images for hi-res (6000 x 4000) versions.

License: This is a public domain work. Feel free to do absolutely whatever you want with the code or the images; there are no restrictions on their use.

**Figure 1A:**

**Figure 1B:**

**Figure 2A:**

**Figure 2B:**

To **leave a comment** for the author, please follow the link and comment on their blog: ** R – Petr Keil**.


(This article was first published on ** mages' blog**, and kindly contributed to R-bloggers)

The 4th R in Insurance conference took place at Cass Business School London on 11 July 2016. This one-day conference focused once more on the wide range of applications of R in insurance, actuarial science and beyond. The conference programme covered topics including reserving, pricing, loss modelling, the use of R in a production environment and much more.

The audience of the conference included both practitioners (c.80%) and academics (c.20%) who are active or interested in the applications of R in Insurance. It was a truly international event with speakers and delegates from Europe, Asia and the Americas. The coffee breaks and conference dinner offered great networking opportunities.

Mario Wüthrich, ETH Zürich

In the first plenary session Mario Wüthrich (RiskLab, ETH Zurich) spoke about the (new) challenges in actuarial science. While the fundamentals of analysing data have not changed over the years, the data and technology available have, and with that new challenges have emerged. Yet, as Mario pointed out, insurance is still often concerned with analysing ‘little’ data, as losses occur rarely. Furthermore, the bigger data sets, often generated by sensors, require careful calibration, monitoring and cleansing. These new challenges provide opportunities for new research (if data is made available) and for the industry, and the R community can provide links between the two. Mario would like to see more and better documentation of R packages, more insurance examples and better handling of big data.

Thereafter, the programme consisted of a combination of contributed presentations and lightning talks, as well as a panel discussion on how analytics is transforming the insurance business. Adrian Cuc (Verisk), Simon Brickman (Beazley), Roland Schmid (Mirai Solutions) and Markus Gesmann (Vario Partners) discussed the efforts made in bridging between data vendors, consultants and insurers, as well as the challenges of developing collaborative business models that respond to market needs.

Dan Murphy, Trinostics

In the closing plenary, Dan Murphy (Trinostics, San Francisco) gave an insight into his experience as an actuary of how to provide persuasive advice to senior management. He uses the three Cs: context, confidence and clarity. Context is about articulating the problem in a language senior management can understand: why does management need to worry about the problem? If you have a solution, you have to deliver it with conviction because, most importantly, it has to be actionable. Clarity of your actionable insight ensures that those actions can be delegated by management to the relevant team or employee without you in the room.

The slides of the conference are available on request.

The members of the scientific committee were: Katrien Antonio (KU Leuven, UvA), Christophe Dutang (Université du Maine), Markus Gesmann (Vario Partners), Giorgio Spedicato (UnipolSai) and Andreas Tsanakas (Cass Business School).

Finally, we are grateful to our sponsors Verisk, Mirai Solutions, Applied AI, RStudio, CYBAEA and Oasis, without whom the event would not have been possible.

We are delighted to announce next year’s event already. The conference will travel across the Channel to ENSAE, Paris, on 8 June 2017. Further details will be published on www.rininsurance.com.

To **leave a comment** for the author, please follow the link and comment on their blog: ** mages' blog**.
