Coding practice in R: what are the advantages and disadvantages of different styles?

The recent questions regarding the use of require versus :: raised the question of which programming styles people use when programming in R, and what their advantages and disadvantages are. Browsing through source code, or around the net, you see a lot of different styles on display.

The main trends in my code:

  • Heavy vectorization: I play a lot with indices (and nested indices), which sometimes results in rather obscure code but is generally a lot faster than other solutions, e.g. x[x < 5] <- 0 instead of x <- ifelse(x < 5, 0, x) (see the sketch after this list).

  • I tend to nest functions to avoid overloading memory with temporary objects that I then need to clean up. Especially with functions manipulating large datasets, this can be a real burden. E.g. z <- cbind(x, as.numeric(factor(x))) instead of y <- as.numeric(factor(x)); z <- cbind(x, y)

  • I write a lot of custom functions, even if I use the code only once, e.g. in an sapply call. I believe it keeps the code more readable without creating objects that remain lying around.

  • I avoid loops at all costs, as I consider vectorization a lot cleaner (and faster).
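
To make the first two bullets concrete, here is a minimal sketch (the vector x is invented for illustration):

x <- c(3, 7, 2, 9, 4)
# vectorized replacement: set every element below 5 to 0
x[x < 5] <- 0
# the same result via ifelse(), usually slower on long vectors
x <- ifelse(x < 5, 0, x)
# nesting avoids the temporary object y ...
z <- cbind(x, as.numeric(factor(x)))
# ... which the step-by-step version leaves lying around
y <- as.numeric(factor(x))
z <- cbind(x, y)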

Yet I've noticed that opinions on this differ, and some people tend to back away from what they would call my "Perl" way of programming (or even "Lisp", with all those brackets flying around in my code; I wouldn't go that far, though).

What do you consider good coding practice in R?

What is your programming style, and how do you see its advantages and disadvantages?

Crosshatch answered 10/12, 2010 at 8:16 Comment(1)
How many rows and columns do your datasets have, and do you do a lot of grouping/keying? If you do a lot of in-place mutation (x[x < 5] <- 0), especially on grouped data, I'd lean towards data.table's := operator. Is your priority fast code, dense compact code, or legibility at a slight performance penalty? Also, please show some examples of your custom functions so people can comment. – Titania

What I do will depend on why I am writing the code. If I am writing a data analysis script for my research (day job), I want something that works but that is readable and understandable months or even years later. I don't care too much about compute times. Vectorizing with lapply et al. can lead to obfuscation, which I would like to avoid.

In such cases I would use a loop for a repetitive process if lapply made me jump through hoops to construct the appropriate anonymous function, for example. I would use the ifelse() version in your first bullet because, to my mind at least, the intention of that call is easier to comprehend than the subset-and-replace version. With my data analysis I am more concerned with getting things correct than with compute time; there are always nights and weekends when I'm not in the office and can run big jobs.
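
A small invented example of that trade-off: the loop makes the iteration state explicit, while lapply needs an anonymous function just to carry the index along.

results <- vector("list", 10)
for (i in 1:10) {
  # the loop body reads top to bottom; i is right there
  results[[i]] <- rnorm(5, mean = i)
}
# the lapply equivalent wraps the body in an anonymous function
results <- lapply(1:10, function(i) rnorm(5, mean = i))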

For your other bullets: I would tend not to inline/nest calls unless they are very trivial. If I spell out the steps explicitly, I find the code easier to read and therefore less likely to contain bugs.

I write custom functions all the time, especially if I am going to be calling the code equivalent of the function repeatedly in a loop or similar. That way I have encapsulated the code out of the main data analysis script into its own .R file, which helps keep the intention of the analysis separate from how the analysis is done. And if the function is useful, I have it available for other projects, etc.
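
As an illustration (the file and function names here are hypothetical), the analysis script then only shows what is computed, not how:

# contents of meanCI.R: a normal-theory confidence interval for a mean
meanCI <- function(x, level = 0.95) {
  se <- sd(x) / sqrt(length(x))
  mean(x) + c(-1, 1) * qnorm(1 - (1 - level) / 2) * se
}

# in the analysis script
source("meanCI.R")
meanCI(rnorm(100))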

If I am writing code for a package, I might start with the same attitude as my data analysis (familiarity) to get something I know works, and only then go for the optimisation if I want to improve compute times.

The one thing I try to avoid, whatever I am coding for, is being too clever. Ultimately I am never as clever as I sometimes think I am, and if I keep things simple I tend not to fall on my face as often as I might if I were trying to be clever.

Ossieossietzky answered 10/12, 2010 at 9:44 Comment(3)
+1 for the point about being too clever. Although I've got so used to using indices that I can pretty easily read the code and see what it does. But I agree that it's not always as obvious for the one coming after me. – Crosshatch
Isn't that what comments are for? – Inwardness
Well, yes, but some R code can be very cryptic, so even though I know what something does, how it does it can be easier to divine if the code is readable and understandable too. – Ossieossietzky

I write functions (in standalone .R files) for the various chunks of code that each conceptually do one thing. This keeps things short and sweet. I also find debugging somewhat easier, because traceback() tells you which function produced the error.
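
A toy example of that debugging payoff (the functions are invented): when f() fails a call deep, traceback() shows exactly where.

f <- function(x) g(x)
g <- function(x) log(x) + 1
f("a")        # Error in log(x) : non-numeric argument to mathematical function
traceback()   # prints the call stack: 2: g(x)  1: f("a")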

I too tend to avoid loops, except when it's absolutely necessary. I feel somewhat dirty if I use a for() loop. :) I try really hard to do everything vectorized or with the apply family. This is not always the best practice, especially if you need to explain the code to someone who is not as fluent in apply or vectorization.
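
For readers less fluent in those idioms, a short invented sketch of the usual apply-family patterns:

m <- matrix(rnorm(20), nrow = 4)
apply(m, 1, max)       # one value per row (MARGIN = 1)
apply(m, 2, mean)      # one value per column (MARGIN = 2)
sapply(mtcars, class)  # applied over the columns of a data frame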

Regarding the use of require vs ::, I tend to use both. If I only need one function from a certain package I call it via ::, but if I need several functions, I load the entire package. If there's a conflict in function names between packages, I try to remember to use ::.
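
For instance (MASS is just an arbitrary example package):

MASS::ginv(diag(2))   # one-off call: no need to attach the package
library(MASS)         # several functions needed: attach it once
ginv(diag(2))
fractions(2/6)        # another MASS function, now directly available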

I try to find a function for every task I'm trying to achieve. I believe someone before me has thought of it and made a function that works better than anything I can come up with. This sometimes works, sometimes not so much.

I try to write my code so that I can understand it. This means I comment a lot and construct chunks of code so that they somehow follow the idea of what I'm trying to achieve. I often overwrite objects as the function progresses. I think this keeps the task transparent, especially if you're referring to these objects later in the function. I think about speed when computing time exceeds my patience. If a function takes so long to finish that I start browsing SO, I see if I can improve it.
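
"Exceeding my patience" can be made concrete with system.time(); a made-up comparison:

x <- rnorm(1e6)
system.time(y1 <- sapply(x, function(v) v^2))  # element-by-element
system.time(y2 <- x^2)                         # vectorized, far faster
all.equal(y1, y2)                              # same result either way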

I have found that a good editor with code folding and syntax coloring (I use Eclipse + StatET) saves me a lot of headaches.

Based on VitoshKa's post, I am adding that I use capitalizedWords (sensu Java) for function names and fullstop.delimited names for variables. I see that I could adopt yet another style for function arguments.

Witting answered 10/12, 2010 at 10:15 Comment(3)
+1 for code folding in the editor. That's something I should figure out in Tinn-R as well. It is possible, if only I knew how... But I'd like to stress that the apply family is also a loop structure, albeit with different side effects (see the discussion in this question: #2276396). – Crosshatch
Agreed, apply is basically "for each element in", which is exactly like a for loop. The only difference is that when you get used to apply, you don't have to think about certain details (like selecting by columns, rows, or list elements) and the code can be very readable. But maybe it's just me. – Serpentiform
Also, I come across situations daily where replacing a for loop with a good sapply/lapply has sped up execution by at least an order of magnitude without lessening readability. So I (almost) never "for" anymore. – Tryck

Naming conventions are extremely important for the readability of code. Inspired by R's internal S4 style, here is what I use (illustrated in the sketch after this list):

  • camelCase for global functions and objects (like doSomething, getXyyy, upperLimit)
  • functions start with a verb
  • non-exported and helper functions always start with "."
  • local variables and functions are all lowercase, in "_" syntax (do_something, get_xyyy). This makes it easy to distinguish local from global names and therefore leads to cleaner code.
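
A short sketch of these conventions side by side (the functions themselves are invented):

# global, exported: camelCase, starts with a verb
getUpperLimit <- function(x, cut_off = 0.95) {
  .checkInput(x)                 # helper, not exported: leading "."
  sorted_x <- sort(x)            # local variable: lowercase, "_" syntax
  quantile(sorted_x, probs = cut_off)
}
.checkInput <- function(x) stopifnot(is.numeric(x))
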
Graaf answered 12/12, 2010 at 9:57 Comment(0)

For data juggling I try to use as much SQL as possible, at least for basic things like GROUP BY averages. I like R a lot, but it's not always fun to realize that your search strategy was not good enough to find yet another function hidden in yet another package. For my cases the SQL dialects do not differ much, and the code is really transparent. Most of the time the threshold (when to switch to R syntax) is rather intuitive to discover, e.g.:

require(RMySQL)
# 'con' must be an open connection; the credentials here are placeholders
con <- dbConnect(MySQL(), dbname = "mydb", user = "me", password = "secret")
# selecting variables alongside conditions in SQL is really transparent,
# even if the conditional variables are not part of the selection
statement <- "SELECT id, v1, v2, v3, v4, v5 FROM mytable
              WHERE this = 5
              AND that != 6"
mydf <- dbGetQuery(con, statement)
# some simple things get really tricky (at least in MySQL) but stay simple in R,
# e.g. the standard deviation of each table row
mydf$rowsd <- apply(mydf[, -1], 1, sd)  # drop the id column, sd over v1..v5
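
And the GROUP BY averages mentioned above would look like this (table and column names are again made up):

# group means computed inside the database, returned as a small data frame
statement <- "SELECT grp, AVG(v1) AS mean_v1, AVG(v2) AS mean_v2
              FROM mytable
              GROUP BY grp"
group_means <- dbGetQuery(con, statement)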

So I consider it good practice, and really recommend using an SQL database for your data in most use cases. I am also looking into TSdbi for saving time series in a relational database, but cannot really judge that yet.

Bibliopole answered 10/12, 2010 at 10:27 Comment(5)
+1 for using SQL, given the fact that we're talking about huge datasets (100,000+ rows of many variables). On smaller datasets it seems pretty much overkill. – Crosshatch
See John Chambers' recent talk on 'Multilingualism and R'. – Shaver
@Dirk: care to share a link? – Crosshatch
Sure, sorry; given how Dave Smith blogged about it I assumed it was public knowledge around here: stat.stanford.edu/~jmc4/talks/Stanford2010_slides.pdf – Shaver
Who has -1'd this? I am not complaining, I just don't understand what's wrong with using SQL. Maybe that paper helps :) – Bibliopole