Functions are a fundamental building block of R: to master many of the more advanced techniques in this book, you need a solid foundation in how functions work. You’ve probably already created many R functions, and you’re familiar with the basics of how they work. The focus of this chapter is to turn your existing, informal knowledge of functions into a rigorous understanding of what functions are and how they work. You’ll see some interesting tricks and techniques in this chapter, but most of what you’ll learn will be more important as the building blocks for more advanced techniques.
The most important thing to understand about R is that functions are objects in their own right. You can work with them exactly the same way you work with any other type of object. This theme will be explored in depth in functional programming.
Answer the following questions to see if you can safely skip this chapter. You can find the answers at the end of the chapter in answers.
What are the three components of a function?
What does the following code return?
How would you more typically write this code?
How could you make this call easier to read?
Does the following function throw an error when called? Why/why not?
What is an infix function? How do you write it? What’s a replacement function? How do you write it?
What function do you use to ensure that a cleanup action occurs regardless of how a function terminates?
Function components describes the three main components of a function.
Lexical scoping teaches you how R finds values from names, the process of lexical scoping.
Every operation is a function call shows you that everything that happens in R is a result of a function call, even if it doesn’t look like it.
Function arguments discusses the three ways of supplying arguments to a function, how to call a function given a list of arguments, and the impact of lazy evaluation.
Special calls describes two special types of function: infix and replacement functions.
Return values discusses how and when functions return values, and how you can ensure that a function does something before it exits.
The only package you’ll need is `pryr`, which is used to explore what happens when modifying vectors in place. Install it with `install.packages("pryr")`.
All R functions have three parts:
the `body()`, the code inside the function.
the `formals()`, the list of arguments which controls how you can call the function.
the `environment()`, the “map” of the location of the function’s variables.
When you print a function in R, it shows you these three important components. If the environment isn’t displayed, it means that the function was created in the global environment.
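For example, inspecting a small function (the name `f` is used here only for illustration) shows all three components:

```r
f <- function(x) x^2

body(f)         # the code: x^2
formals(f)      # the argument list, with one argument x
environment(f)  # where f was created, e.g. <environment: R_GlobalEnv>
```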
The assignment forms of `body()`, `formals()`, and `environment()` can also be used to modify functions.
Like all objects in R, functions can also possess any number of additional attributes. One attribute used by base R is “srcref”, short for source reference, which points to the source code used to create the function. Unlike `body()`, this contains code comments and other formatting. You can also add attributes to a function. For example, you can set the `class()` and add a custom `print()` method.
There is one exception to the rule that functions have three components. Primitive functions, like `sum()`, call C code directly with `.Primitive()` and contain no R code. Therefore their `formals()`, `body()`, and `environment()` are all `NULL`:
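For example, `sum()` is a primitive, so inspecting its three components returns `NULL` for each:

```r
is.primitive(sum)  # TRUE

formals(sum)       # NULL
body(sum)          # NULL
environment(sum)   # NULL
```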
Primitive functions are only found in the base package, and since they operate at a low level, they can be more efficient (primitive replacement functions don’t have to make copies), and can have different rules for argument matching (e.g., `switch` and `call`). This, however, comes at a cost of behaving differently from all other functions in R. Hence the R core team generally avoids creating them unless there is no other option.
What function allows you to tell if an object is a function? What function allows you to tell if a function is a primitive function?
This code makes a list of all functions in the base package.
Use it to answer the following questions:
Which base function has the most arguments?
How many base functions have no arguments? What’s special about those functions?
How could you adapt the code to find all primitive functions?
What are the three important components of a function?
When does printing a function not show what environment it was created in?
Scoping is the set of rules that govern how R looks up the value of a symbol. In the example below, scoping is the set of rules that R applies to go from the symbol `x` to its value `10`:
Understanding scoping allows you to:
- build tools by composing functions, as described in functional programming
- overrule the usual evaluation rules and do non-standard evaluation, as described in non-standard evaluation
R has two types of scoping: lexical scoping, implemented automatically at the language level, and dynamic scoping, used in select functions to save typing during interactive analysis. We discuss lexical scoping here because it is intimately tied to function creation. Dynamic scoping is described in more detail in scoping issues.
Lexical scoping looks up symbol values based on how functions were nested when they were created, not how they are nested when they are called. With lexical scoping, you don’t need to know how the function is called to figure out where the value of a variable will be looked up. You just need to look at the function’s definition.
The “lexical” in lexical scoping doesn’t correspond to the usual English definition (“of or relating to words or the vocabulary of a language as distinguished from its grammar and construction”) but comes from the computer science term “lexing”, which is part of the process that converts code represented as text to meaningful pieces that the programming language understands.
There are four basic principles behind R’s implementation of lexical scoping:
- name masking
- functions vs. variables
- a fresh start
- dynamic lookup
You probably know many of these principles already, although you might not have thought about them explicitly. Test your knowledge by mentally running through the code in each block before looking at the answers.
The following example illustrates the most basic principle of lexical scoping, and you should have no problem predicting the output.
If a name isn’t defined inside a function, R will look one level up.
The same rules apply if a function is defined inside another function: look inside the current function, then where that function was defined, and so on, all the way up to the global environment, and then on to other loaded packages. Run the following code in your head, then confirm the output by running the R code.
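A minimal example of this nested lookup (the names `x`, `y`, `z`, `h`, and `i` are illustrative):

```r
x <- 1
h <- function() {
  y <- 2
  i <- function() {
    z <- 3
    # z is local, y comes from h(), and x comes from the global environment
    c(x, y, z)
  }
  i()
}
h()  # 1 2 3
```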
The same rules apply to closures, functions created by other functions. Closures will be described in more detail in functional programming; here we’ll just look at how they interact with scoping. The following function, `j()`, returns a function. What do you think this function will return when we call it?
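A sketch of such a closure (the names `j` and `k` are illustrative):

```r
j <- function(x) {
  y <- 2
  function() c(x, y)
}
k <- j(1)
# What does k() return, given that j() has already finished running?
k()  # 1 2
```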
This seems a little magical (how does R know what the value of `y` is after the function has been called?). It works because `k` preserves the environment in which it was defined and because the environment includes the value of `y`. Environments gives some pointers on how you can dive in and figure out what values are stored in the environment associated with each function.
Functions vs. variables
The same principles apply regardless of the type of associated value — finding functions works exactly the same way as finding variables:
For functions, there is one small tweak to the rule. If you are using a name in a context where it’s obvious that you want a function (e.g., `f(3)`), R will ignore objects that are not functions while it is searching. In the following example `n` takes on a different value depending on whether R is looking for a function or a variable.
However, using the same name for functions and other objects will make for confusing code, and is generally best avoided.
A fresh start
What happens to the values in between invocations of a function? What will happen the first time you run this function? What will happen the second time? (If you haven’t seen `exists()` before: it returns `TRUE` if there’s a variable of that name, otherwise it returns `FALSE`.)
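A sketch of the kind of function being described, assuming no object called `a` already exists in your workspace:

```r
g <- function() {
  if (!exists("a")) {
    a <- 1
  } else {
    a <- a + 1
  }
  a
}
g()
g()
```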
You might be surprised that it returns the same value, `1`, every time. This is because every time a function is called, a new environment is created to host execution. A function has no way to tell what happened the last time it was run; each invocation is completely independent. (We’ll see some ways to get around this in mutable state.)
Lexical scoping determines where to look for values, not when to look for them. R looks for values when the function is run, not when it’s created. This means that the output of a function can be different depending on objects outside its environment:
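For example, the same function can return different results as a global variable changes:

```r
f <- function() x

x <- 15
f()  # 15

x <- 20
f()  # 20
```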
You generally want to avoid this behaviour because it means the function is no longer self-contained. This is a common error — if you make a spelling mistake in your code, you won’t get an error when you create the function, and you might not even get one when you run the function, depending on what variables are defined in the global environment.
One way to detect this problem is the `findGlobals()` function from `codetools`. This function lists all the external dependencies of a function:
Another way to try and solve the problem would be to manually change the environment of the function to `emptyenv()`, an environment which contains absolutely nothing:
This doesn’t work because R relies on lexical scoping to find everything, even the `+` operator. It’s never possible to make a function completely self-contained because you must always rely on functions defined in base R or other packages.
You can use this same idea to do other things that are extremely ill-advised. For example, since all of the standard operators in R are functions, you can override them with your own alternatives. If you ever are feeling particularly evil, run the following code while your friend is away from their computer:
This will introduce a particularly pernicious bug: 10% of the time, 1 will be added to any numeric calculation inside parentheses. This is another good reason to regularly restart with a clean R session!
What does the following code return? Why? What does each of the three `c`’s mean?
What are the four principles that govern how R looks for values?
What does the following function return? Make a prediction before running the code yourself.
Every operation is a function call
“To understand computations in R, two slogans are helpful:
- Everything that exists is an object.
- Everything that happens is a function call."
— John Chambers
The previous example of redefining `(` works because every operation in R is a function call, whether or not it looks like one. This includes infix operators like `+`, control flow operators like `for`, `if`, and `while`, subsetting operators like `[` and `$`, and even the curly brace `{`. This means that each pair of statements in the following example is exactly equivalent. Note that the backtick, `` ` ``, lets you refer to functions or variables that have otherwise reserved or illegal names:
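For example (the vector `v` is used only for illustration):

```r
x <- 10; y <- 5
x + y      # 15
`+`(x, y)  # 15

for (i in 1:2) print(i)
`for`(i, 1:2, print(i))

v <- c(3, 1, 4)
v[2]       # 1
`[`(v, 2)  # 1
```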
It is possible to override the definitions of these special functions, but this is almost certainly a bad idea. However, there are occasions when it might be useful: it allows you to do something that would have otherwise been impossible. For example, this feature makes it possible for the `dplyr` package to translate R expressions into SQL expressions. Domain specific languages uses this idea to create domain specific languages that allow you to concisely express new concepts using existing R constructs.
It’s more often useful to treat special functions as ordinary functions. For example, we could use `sapply()` to add 3 to every element of a list by first defining a function `add()`, like this:
But we can also get the same effect using the built-in `+` function.
Note the difference between `` `+` `` and `"+"`. The first one is the value of the object called `+`, and the second is a string containing the character `+`. The second version works because `sapply()` can be given the name of a function instead of the function itself: if you read the source of `sapply()`, you’ll see the first line uses `match.fun()` to find functions given their names.
A more useful application is to combine `lapply()` or `sapply()` with subsetting:
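For example, to extract the second element of every vector in a list:

```r
x <- list(1:3, 4:9, 10:12)
sapply(x, "[", 2)  # 2 5 11
```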
Remembering that everything that happens in R is a function call will help you in metaprogramming.
It’s useful to distinguish between the formal arguments and the actual arguments of a function. The formal arguments are a property of the function, whereas the actual or calling arguments can vary each time you call the function. This section discusses how calling arguments are mapped to formal arguments, how you can call a function given a list of arguments, how default arguments work, and the impact of lazy evaluation.
When calling a function you can specify arguments by position, by complete name, or by partial name. Arguments are matched first by exact name (perfect matching), then by prefix matching, and finally by position.
Generally, you only want to use positional matching for the first one or two arguments; they will be the most commonly used, and most readers will know what they are. Avoid using positional matching for less commonly used arguments, and only use readable abbreviations with partial matching. (If you are writing code for a package that you want to publish on CRAN you cannot use partial matching, and must use complete names.) Named arguments should always come after unnamed arguments. If a function uses `...` (discussed in more detail below), you can only specify arguments listed after `...` with their full name.
These are good calls:
This is probably overkill:
And these are just confusing:
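To make the three categories concrete, here are some plausible calls to `mean()` in the spirit of the ones described (all of them are valid R):

```r
# good calls:
mean(1:10)
mean(1:10, trim = 0.05)

# probably overkill:
mean(x = 1:10)

# just confusing:
mean(1:10, n = T)              # partial-matches na.rm
mean(1:10, , FALSE)            # positional, skipping trim
mean(1:10, 0.05)               # trim supplied by position
mean(, TRUE, x = c(1:10, NA))  # x by name, na.rm by position
```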
Calling a function given a list of arguments
Suppose you had a list of function arguments:
How could you then send that list to `mean()`? You need `do.call()`:
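For example, assuming the list holds arguments intended for `mean()`:

```r
args <- list(1:10, na.rm = TRUE)
do.call(mean, args)
# equivalent to:
mean(1:10, na.rm = TRUE)  # 5.5
```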
Default and missing arguments
Function arguments in R can have default values.
Since arguments in R are evaluated lazily (more on that below), the default value can be defined in terms of other arguments:
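For example (the function `g` is illustrative):

```r
g <- function(a = 1, b = a * 2) {
  c(a, b)
}
g()    # 1 2
g(10)  # 10 20
```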
Default arguments can even be defined in terms of variables created within the function. This is used frequently in base R functions, but I think it is bad practice, because you can’t understand what the default values will be without reading the complete source code.
You can determine if an argument was supplied or not with the `missing()` function.
Sometimes you want to add a non-trivial default value, which might take several lines of code to compute. Instead of inserting that code in the function definition, you could use `missing()` to conditionally compute it if needed. However, this makes it hard to know which arguments are required and which are optional without carefully reading the documentation. Instead, I usually set the default value to `NULL` and use `is.null()` to check if the argument was supplied.
By default, R function arguments are lazy — they’re only evaluated if they’re actually used:
If you want to ensure that an argument is evaluated you can use `force()`:
This is important when creating closures with `lapply()` or a loop:
`x` is lazily evaluated the first time that you call one of the adder functions. At this point, the loop is complete and the final value of `i` is 10. Therefore all of the adder functions will add 10 on to their input, probably not what you wanted! Manually forcing evaluation fixes the problem:
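A sketch of the fix, using an illustrative `add()` factory built in a loop:

```r
add <- function(x) {
  force(x)  # evaluate x now, while i still has its loop-time value
  function(y) x + y
}
adders <- list()
for (i in 1:10) adders[[i]] <- add(i)

adders[[1]](10)   # 11
adders[[10]](10)  # 20
```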
This code is exactly equivalent to
because the force function is defined as `force <- function(x) x`. However, using this function clearly indicates that you’re forcing evaluation, not that you’ve accidentally typed `x`.
Default arguments are evaluated inside the function. This means that if the expression depends on the current environment the results will differ depending on whether you use the default value or explicitly provide one.
More technically, an unevaluated argument is called a promise, or (less commonly) a thunk. A promise is made up of two parts:
The expression which gives rise to the delayed computation. (It can be accessed with `substitute()`. See non-standard evaluation for more details.)
The environment where the expression was created and where it should be evaluated.
The first time a promise is accessed the expression is evaluated in the environment where it was created. This value is cached, so that subsequent access to the evaluated promise does not recompute the value (but the original expression is still associated with the value, so `substitute()` can continue to access it). You can find more information about a promise using `pryr::promise_info()`. This uses some C++ code to extract information about the promise without evaluating it, which is impossible to do in pure R code.
Laziness is useful in if statements — the second statement below will be evaluated only if the first is true. If it wasn’t, the statement would return an error because `NULL > 0` is a logical vector of length 0 and not a valid input to `if`.
We could implement “&&” ourselves:
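A sketch of such an implementation (the name `and2` is made up for this example):

```r
and2 <- function(x, y) {
  if (!x) return(FALSE)  # y is never evaluated when x is FALSE
  as.logical(y)
}

x <- NULL
and2(!is.null(x), x > 0)  # FALSE; x > 0 is never evaluated
```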
This function would not work without lazy evaluation because both `x` and `y` would always be evaluated, testing `x > 0` even when `x` was `NULL`.
Sometimes you can also use laziness to eliminate an if statement altogether. For example, instead of:
You could write:
There is a special argument called `...`. This argument will match any arguments not otherwise matched, and can be easily passed on to other functions. This is useful if you want to collect arguments to call another function, but you don’t want to prespecify their possible names. `...` is often used in conjunction with S3 generic functions to allow individual methods to be more flexible.
One relatively sophisticated user of `...` is the base `plot()` function. `plot()` is a generic method with arguments `x`, `y` and `...`. To understand what `...` does for a given function we need to read the help: “Arguments to be passed to methods, such as graphical parameters”. Most simple invocations of `plot()` end up calling `plot.default()` which has many more arguments, but also has `...`. Again, reading the documentation reveals that `...` accepts “other graphical parameters”, which are listed in the help for `par()`. This allows us to write code like:
This illustrates both the advantages and disadvantages of `...`: it makes `plot()` very flexible, but to understand how to use it, we have to carefully read the documentation. Additionally, if we read the source code for `plot.default()`, we can discover undocumented features. It’s possible to pass along other arguments to `Axis()` and `box()`:
To capture `...` in a form that is easier to work with, you can use `list(...)`. (See capturing unevaluated dots for other ways to capture `...` without evaluating the arguments.)
Using `...` comes at a price — any misspelled arguments will not raise an error, and any arguments after `...` must be fully named. This makes it easy for typos to go unnoticed:
It’s often better to be explicit rather than implicit, so you might instead ask users to supply a list of additional arguments. That’s certainly easier if you’re trying to use `...` with multiple additional functions.
Clarify the following list of odd function calls:
What does this function return? Why? Which principle does it illustrate?
What does this function return? Why? Which principle does it illustrate?
R supports two additional syntaxes for calling special types of functions: infix and replacement functions.
Most functions in R are “prefix” operators: the name of the function comes before the arguments. You can also create infix functions where the function name comes in between its arguments, like `+` or `-`. All user-created infix functions must start and end with `%`. R comes with the following infix functions predefined: `%%`, `%*%`, `%/%`, `%in%`, `%o%`, `%x%`. (The complete list of built-in infix operators that don’t need `%` is: `::, :::, $, @, ^, *, /, +, -, >, >=, <, <=, ==, !=, !, &, &&, |, ||, ~, <-, <<-`)
For example, we could create a new operator that pastes together strings:
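A minimal version might look like this:

```r
`%+%` <- function(a, b) paste0(a, b)
"new" %+% " string"  # "new string"
```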
Note that when creating the function, you have to put the name in backticks because it’s a special name. This is just syntactic sugar for an ordinary function call; as far as R is concerned there is no difference between these two expressions:
Or indeed between
The names of infix functions are more flexible than regular R functions: they can contain any sequence of characters (except “%”, of course). You will need to escape any special characters in the string used to define the function, but not when you call it:
R’s default precedence rules mean that infix operators are composed from left to right:
There’s one infix function that I use very often. It’s inspired by Ruby’s `||` logical or operator, although it works a little differently in R because Ruby has a more flexible definition of what evaluates to `TRUE` in an if statement. It’s useful as a way of providing a default value in case the output of another function is `NULL`:
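A common definition (often written as `%||%`) is:

```r
`%||%` <- function(a, b) if (!is.null(a)) a else b

NULL %||% "default"      # "default"
"present" %||% "default" # "present"
```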
Replacement functions act like they modify their arguments in place, and have the special name `xxx<-`. They typically have two arguments (`x` and `value`), although they can have more, and they must return the modified object. For example, the following function allows you to modify the second element of a vector:
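A version of that function might look like this:

```r
`second<-` <- function(x, value) {
  x[2] <- value
  x
}

x <- 1:10
second(x) <- 5L
x  # 1 5 3 4 5 6 7 8 9 10
```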
When R evaluates the assignment `second(x) <- 5`, it notices that the left hand side of the `<-` is not a simple name, so it looks for a function named `second<-` to do the replacement.
I say they “act” like they modify their arguments in place, because they actually create a modified copy. We can see that by using `pryr::address()` to find the memory address of the underlying object.
Built-in functions that are implemented using `.Primitive()` will modify in place:
It’s important to be aware of this behaviour since it has important performance implications.
If you want to supply additional arguments, they go in between `x` and `value`:
When you call `modify(x, 1) <- 10`, behind the scenes R turns it into ``x <- `modify<-`(x, 1, 10)``.
This means you can’t do things like `modify(get("x"), 1) <- 10`,
because that gets turned into the invalid code ``get("x") <- `modify<-`(get("x"), 1, 10)``.
It’s often useful to combine replacement and subsetting:
This works because the expression `names(x)[2] <- "two"` is evaluated as if you had written: `` `*tmp*` <- names(x) ``, then `` `*tmp*`[2] <- "two" ``, then ``names(x) <- `*tmp*` ``.
(Yes, it really does create a local variable named `` `*tmp*` ``, which is removed afterwards.)
Create a list of all the replacement functions found in the base package. Which ones are primitive functions?
What are valid names for user-created infix functions?
Create an infix operator.
Create infix versions of the set functions `intersect()`, `union()`, and `setdiff()`.
Create a replacement function that modifies a random location in a vector.
The last expression evaluated in a function becomes the return value, the result of invoking the function.
Generally, I think it’s good style to reserve the use of an explicit `return()` for when you are returning early, such as for an error, or a simple case of the function. This style of programming can also reduce the level of indentation, and generally make functions easier to understand because you can reason about them locally.
Functions can return only a single object. But this is not a limitation because you can return a list containing any number of objects.
The functions that are the easiest to understand and reason about are pure functions: functions that always map the same input to the same output and have no other impact on the workspace. In other words, pure functions have no side effects: they don’t affect the state of the world in any way apart from the value they return.
R protects you from one type of side effect: most R objects have copy-on-modify semantics. So modifying a function argument does not change the original value:
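For example:

```r
f <- function(x) {
  x[1] <- 10
  x
}
x <- c(1, 5, 9)
f(x)  # 10 5 9
x     # 1 5 9 -- the caller's copy is untouched
```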
(There are two important exceptions to the copy-on-modify rule: environments and reference classes. These can be modified in place, so extra care is needed when working with them.)
This is notably different to languages like Java where you can modify the inputs of a function. This copy-on-modify behaviour has important performance consequences which are discussed in depth in profiling. (Note that the performance consequences are a result of R’s implementation of copy-on-modify semantics; they are not true in general. Clojure is a new language that makes extensive use of copy-on-modify semantics with limited performance consequences.)
Most base R functions are pure, with a few notable exceptions:
`library()` which loads a package, and hence modifies the search path.
`setwd()`, `Sys.setenv()`, and `Sys.setlocale()` which change the working directory, environment variables, and the locale, respectively.
`plot()` and friends which produce graphical output.
`write()`, `write.csv()`, `saveRDS()`, etc. which save output to disk.
`options()` and `par()` which modify global settings.
S4 related functions which modify global tables of classes and methods.
Random number generators which produce different numbers each time you run them.
It’s generally a good idea to minimise the use of side effects, and where possible, to minimise the footprint of side effects by separating pure from impure functions. Pure functions are easier to test (because all you need to worry about are the input values and the output), and are less likely to work differently on different versions of R or on different platforms. For example, this is one of the motivating principles of ggplot2: most operations work on an object that represents a plot, and only the final `print` or `plot` call has the side effect of actually drawing the plot.
Functions can return invisible values, which are not printed out by default when you call the function.
You can force an invisible value to be displayed by wrapping it in parentheses:
The most common function that returns invisibly is `<-`:
This is what makes it possible to assign one value to multiple variables:
because this is parsed as:
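Pulling these pieces together (the function name `f2` is illustrative):

```r
f2 <- function() invisible(1)
f2()    # prints nothing
(f2())  # 1

a <- b <- c <- d <- 2   # works because `<-` invisibly returns its value
# which is parsed as:
(a <- (b <- (c <- (d <- 2))))
```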
As well as returning a value, functions can set up other triggers to occur when the function is finished using `on.exit()`. This is often used as a way to guarantee that changes to the global state are restored when the function exits. The code in `on.exit()` is run regardless of how the function exits, whether with an explicit (early) return, an error, or simply reaching the end of the function body.
The basic pattern is simple:
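A plausible version of the function being described, which runs code inside another working directory and then restores the original one:

```r
in_dir <- function(dir, code) {
  old <- setwd(dir)    # setwd() invisibly returns the *previous* directory
  on.exit(setwd(old))  # restore it no matter how the function exits
  force(code)
}

owd <- getwd()
in_dir(tempdir(), file.exists("."))
identical(getwd(), owd)  # TRUE: the working directory was restored
```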
We first set the directory to a new location, capturing the current location from the output of `setwd()`.
We then use `on.exit()` to ensure that the working directory is returned to the previous value regardless of how the function exits.
Finally, we explicitly force evaluation of the code. (We don’t actually need `force()` here, but it makes it clear to readers what we’re doing.)
Caution: If you’re using multiple `on.exit()` calls within a function, make sure to set `add = TRUE`. Unfortunately, the default in `on.exit()` is `add = FALSE`, so that every time you run it, it overwrites existing exit expressions. Because of the way `on.exit()` is implemented, it’s not possible to create a variant with `add = TRUE` as the default, so you must be careful when using it.
How does the `chdir` parameter of `source()` compare to `in_dir()`? Why might you prefer one approach to the other?
What function undoes the action of `library()`? How do you save and restore the values of `options()` and `par()`?
Write a function that opens a graphics device, runs the supplied code, and closes the graphics device (always, regardless of whether or not the plotting code worked).
We can use `on.exit()` to implement a simple version of `capture.output()`.
Compare `capture.output()` to `capture.output2()`. How do the functions differ? What features have I removed to make the key ideas easier to see? How have I rewritten the key ideas to be easier to understand?
The three components of a function are its body, arguments, and environment.
You’d normally write it in infix style: `1 + (2 * 3)`.
Rewriting the call as `mean(c(1:10, NA), trim = 0.1, na.rm = TRUE)` is easier to understand.
No, it does not throw an error because the second argument is never used so it’s never evaluated.
See infix and replacement functions.
You use `on.exit()`; see on exit for details.
It helps to think of `<<-` as equivalent to `assign()` (if you set the `inherits` parameter in that function to `TRUE`). The benefit of `assign()` is that it allows you to specify more parameters (e.g. the environment), so I prefer to use `assign()` over `<<-` in most cases.
Using `<<-` and `assign()` with `inherits = TRUE` means that "enclosing environments of the supplied environment are searched until the variable 'x' is encountered." In other words, it will keep going through the environments in order until it finds a variable with that name, and it will assign to it. This can be within the scope of a function, or in the global environment.
In order to understand what these functions do, you also need to understand R environments.
I regularly use these functions when I'm running a large simulation and I want to save intermediate results. This allows you to create the object outside the scope of the given function or loop. That's very helpful, especially if you have any concern about a large loop ending unexpectedly (e.g. a database disconnection), in which case you could lose everything in the process. This would be equivalent to writing your results out to a database or file during a long running process, except that it's storing the results within the R environment instead.
My primary warning with this: be careful, because you're now working with global variables, especially when using `<<-`. That means you can end up with situations where a function uses an object value from the environment when you expected it to use one supplied as a parameter. This is one of the main things that functional programming tries to avoid (see side effects). I avoid this problem by assigning my values to unique variable names (built with `paste()`) that are never used within the function, and are used only for caching in case I need to recover results later (or do some meta-analysis on the intermediate results).