[This article was first published on R on Zhenguo Zhang's Blog, and kindly contributed to R-bloggers.]
In one ggplot figure, you can normally use only one scale per aesthetic mapping. For example, if you use scale_color_manual() to set the color scale for a layer, you cannot use another scale_color_manual() for another layer, or set the color scale more than once in aes(). However, the new_scale_color() function from the ggnewscale package lets you add a new scale for the same aesthetic mapping in different layers.
In this post, I will showcase how to use the new_scale_color() function to add two different color scales in a ggplot figure. The first scale will be for a discrete variable (e.g., number of cylinders), and the second scale will be for a continuous variable (e.g., density level).
Load packages first.
library(ggplot2)
library(ggnewscale)
Use the mtcars dataset for the example
data(mtcars)
Create a plot with two color scales:
1. Points colored by ‘cyl’ (discrete)
2. Density contours colored by density level (continuous)
First, let’s make a scatter plot of mpg vs wt with points colored by the number of cylinders (cyl). We will use the geom_point() function for this layer.
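A minimal sketch of this first layer (the manual palette and point size below are my own choices, not necessarily the original ones):

```r
plt <- ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point(aes(color = factor(cyl)), size = 3) +
  scale_color_manual(
    name = "Cylinders",
    values = c("4" = "#1b9e77", "6" = "#d95f02", "8" = "#7570b3")
  )
```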
# Reset the color scale for the next layer
plt <- plt + new_scale_color()
Add a second layer: Density contours colored by density level (continuous variable)
plt <- plt +
  geom_density_2d(aes(color = after_stat(level))) +
  scale_color_viridis_c(name = "Density Level", option = "magma") +
  # Add labels and theme
  labs(
    title = "Dual Color Scales with new_scale_color()",
    x = "Weight (1000 lbs)",
    y = "Miles per Gallon"
  ) +
  theme_minimal()
plt
Here I demonstrated how to use the new_scale_color() function from the ggnewscale package; one can also use new_scale_fill() for fill aesthetics.
For other aesthetics, such as size, shape, etc., you can call new_scale("size"), new_scale("shape"), etc. to add new scales.
The text elaborates on using the new_scale_color() function from the ggnewscale package to add a new scale for the same aesthetic mapping in different layers of a ggplot. The feature allows more than one scale per aesthetic, which was previously not possible, and permits users to map multiple variables with different scales to a single aesthetic effectively.
Long-term Implications and Future Developments
This development in enhancing the aesthetic mapping capabilities of ggplot is a significant leap toward improving the visualization tools available in R programming. In the long run, it could accelerate data science progress since better and clearer visualization tools enable data scientists and researchers to extract more insights from their data effectively and efficiently. Expect to observe more advancements in this field in the form of improved or new functions that cater to a wider range of data types and categories, resulting in more informative, visually pleasing, and comprehensive graphical representations of complex data sets.
Actionable Advice
Here are some tips for utilizing this new feature:
Explore the Package: R users, particularly those involved in data analysis, should acquaint themselves with the ggnewscale package and its features to harness its full potential.
Practice Implementing: Implement the new_scale_color() in your visualizations. Try to recreate your existing plots using this functionality to compare, contrast, and appreciate its advantages.
Stay Updated: With constant updates to R packages and their functions, it’s crucial to stay current by regularly checking official documentation and community forums.
Help Evolve: If you spot any issues or have ideas for enhancements, contribute to the R community by reporting these issues or coming up with solutions.
In a world that increasingly relies on data, tools like ggnewscale that enable clearer, more dynamic visualizations, play a vital role. Leveraging these tools efficiently can dramatically enhance the ability to interpret and draw insights from complex datasets.
Read more on integration and usage of the ggnewscale package here.
STAR isn’t suitable for technical jobs, so how do you answer behavioral interview questions while still showing you’re a data scientist?
Data Science and Behavioral Interviews: Strategies and Implications
The roles and responsibilities of data scientists extend beyond mere technical knowledge and skills. As the domain of data science increasingly intersects with business decision-making, the need for data scientists with strong communication and interpersonal skills is becoming more critical. The ability to smoothly navigate behavioral interview questions is a big part of showcasing these skills. However, the realm of behavioral interviews can be challenging for individuals grounded deeply in technical roles, such as data scientists.
The STAR interview response method, which stands for Situation, Task, Action, Result, is commonly used to structure responses to behavioral interview questions. While it offers a robust framework for most job roles, it can fall short for technical jobs like data science. This brings up the challenging question: how does one answer behavioral interview questions while still highlighting their competency as a data scientist?
Long-term Implications and Future Developments
Demonstrating the ability to answer behavioral interview questions effectively signals to employers that a data scientist possesses not only the technical acumen necessary for the role but also interpersonal and communication skills, critical thinking, and the ability to work in a team. These qualities are becoming increasingly important in a corporate environment that is more collaborative and agile.
The demand for such well-rounded data scientists will continue to rise in the future. Companies seek individuals who can not only crunch numbers and interrogate datasets but also communicate complex data insights in simple, understandable language to various stakeholders. Thus, the ability to balance technical savvy with interpersonal skills won’t just be a plus—it could be a requirement.
Actionable Advice to Showcase Your Data Scientist Skills in Behavioral Interviews
Here are some key strategies for data scientists to shine in behavioral interviews:
Connect the dots: When asked about a situation or experience, try to relate it to your data science skills. Discuss how your analytical thinking helped resolve a problem or how your knack for detail-oriented work led to a particular outcome.
Speak their language: As a data scientist, you’re expected to discuss complex insights with non-technical team members. Practice explaining your work or projects in language simple enough for anyone to understand.
Share the spotlight: Emphasize your team-oriented skills. Talk about instances where you collaborated with others, showcasing your ability to contribute within a team and towards a collective outcome.
Highlight problem-solving: Demonstrate your analytical and problem-solving skills not just by discussing algorithms or equations, but by talking about real-world problems you’ve solved through your innovative approaches.
In conclusion, while the STAR approach might not perfectly fit the needs of technical interviews, with a bit of creativity, you can morph and mold it to showcase your competency as an effective and holistic data scientist. This approach will serve data scientists well as the landscape of their role continues to evolve over time.
[This article was first published on R Works, and kindly contributed to R-bloggers.]
A new, universal development environment
VS Code is Microsoft's open-source software development environment that can be customized for any language. It is arguably the general-purpose GUI programming tool of choice when working across multiple computer languages, and for a data scientist its R <-> Python integration deserves careful consideration. Its ecosystem is rich with Extensions that add language-specific features, including notebook and visualization tools for machine learning. For R language development, RStudio, developed by Posit, still sets the standard. VS Code has adopted most of the features available in RStudio, including the Quarto coding and publishing system. While RStudio is more "Quarto native", with all of Quarto's document publishing ability, VS Code makes debugging and Cloud integrations possible for more extensive programming projects. In this post we will take advantage of its file management and editing, its symbolic debugging, and the chameleonic VS Code Extensions by which it can be adapted to many languages. If you're not familiar with it, you should give it a try.
VS Code combines R and Python development, borrowing features from dedicated environments from both languages. This article reveals several ways to mix languages in a project.
Running VS Code
VS Code has a visual coding and development GUI that runs locally. It works at the level of a folder where your project code and other file artifacts live, similar to the way you'd organize a project with Git. In fact, it integrates smoothly with Git. As a Microsoft product, it is positioned for Azure Cloud interoperability, making it possible to build Cloud systems at scale, an entirely different topic covered in my article Simply Python.
So download VS Code for your Windows, Mac or Linux machine if you don’t have it already. This is its default layout:
The interface is partitioned into panes and bars. The main editor pane also shows previews and can be split horizontally and vertically ad infinitum. Below it is the terminal, which can also show other scrolling output. The icons on the far left compose the activity bar that determines what is displayed in the pane immediately to its right, which here shows the File Explorer and Outline. For a tour of the interface and its basic features, see the Getting Started page on the VS Code website.
How to run R with Python in VS Code
There is more than one way to do this. These are the various ways covered in this article:
Use VS Code’s R Extension and Debugger to run .R files.
Run R code in Jupyter Notebooks in VS Code.
Run R code using rpy2 in Python Jupyter Notebooks.
Run .qmd Quarto markdown documents.
1. Running R files
First, check your current R executable environment. Assuming you have a working R version that runs in the terminal, check that running
.libPaths()
returns a valid path.
On the Mac, VS Code finds the R executable and library paths automatically. You can check this by opening a shell (e.g., zsh or bash) in VS Code and invoking R.
The R Extension relies on the {languageserver} R package, which exposes an API for common text editors and is not specific to VS Code. Install it at the R prompt:
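```r
# The VS Code R extension depends on this package being installed
install.packages("languageserver")
```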
VS Code Extensions
Then, install the R Extension for Visual Studio Code. Find it by selecting the Extensions icon in the VS Code Activity Bar, the column of icons on the far left. (Don't confuse this Extension with "R Tools" or any of the several other R-related extensions available.) It needs to be enabled:
The Extension is configured from the drop-down menu attached to its "gear" icon, where Settings appears under the Extension header. The default settings, however, typically work fine.
The R Extension also recommends installing the R Debugger Extension to enable source-code debugging.
Now, when you do a New File... from the VS Code menu, you'll have a choice of both .R and .Rmd file options. R files get "IntelliSense" pop-up suggestions for command completion, in addition to other smart editor features. For now, try the .R file option; .Rmd R Markdown files need additional extensions enabled. The R Extension also adds an "R terminal" to the list of choices in the lower-right terminal dropdown:
You could just copy-paste your code into the terminal to run R. But you don’t need to create an active R terminal to execute R code.
How to execute .R files
The R Extension enables the keyboard shortcut cmd-return (Mac) to send the current line or selection to the R process running in the current terminal window, creating a process if none exists. I'm not sure this always finds the right R process, so to ensure that the R Extension has an attached R process, I suggest you start an R terminal before executing any R code to get the attached features to work.
Of course, you can interact directly with the R process at the prompt in the terminal. (This does not require you to load the aforementioned languageserver library in the R process.)
An alternative is the Run code command (keyboard shortcut: cmd-option-n), found in the "arrow" drop-down in the file title bar or in the file's right-click context menu, which runs the current file via Rscript. Be sure to save the file first.
Since there can be several active R processes, the current process PID is shown on the right in the Status bar, under the Terminal pane. Either method to run code will pop up a graphics pane if necessary, which can be saved as a .png file.
Other actions are analogous to those familiar in RStudio, but the interface is adapted to VS Code.
Executing .R code in the source debugger
Yet another way to run .R code is to use the VS Code debugger from the Run menu (keyboard shortcut: F5). This gives you the familiar breakpoint, variable inspection, and debug console features of VS Code. Apparently, “Run without Debugging” calls the debugger just the same.
Additionally, for any "attached" process, an icon in the far-left side pane brings up the "Workspace", a browsable list of global R objects similar to RStudio's Environment pane. It is a more full-featured inspector than the VS Code variable inspector, but it is not available while using the VS Code debugger.
Honestly, with the several execution methods provided, possibly attached to different R processes, it can get complicated to understand which process is running one’s code and what the current process state is.
Help Pages
Below the workspace inspector in the side pane is the help-page tool menu. Similarly, "Open help for selection" in a file's right-click context menu brings up the documentation page for an R object.
2. Running Jupyter R Notebooks
The second engine for running R uses the Jupyter Notebook integration in VS Code. Conventional Jupyter notebooks can run R if they have an R kernel installed that connects them with the R executable on your machine. To set this up, install the IRkernel package at the R prompt:
install.packages('IRkernel')
IRkernel::installspec() # to register the kernel, making it visible to Jupyter.
Now restart VS Code and create a Jupyter Notebook from the New File... menu item. To switch to the R kernel, click the "Select Kernel" button above the notebook on the right. Then, in the menu-bar popup, select the R executable; the "MagicPython" dropdown on the notebook cell will change to R. If no R kernel is shown, choose "Select Another Kernel…" to search the file system for one. Once you've found it, it will appear in the kernel selection menu next time.
As with Python, you have a choice with new cells to create them as either R or Markdown cells. In a notebook, a cell’s graphic output will appear below the cell rather than in a separate pane.
3. Notebooks for R <-> Python interoperability
Alas, your R notebook is for R code only; there is no option to mix notebook cells of different languages. This section discusses how to create notebooks whose cells can switch back and forth between languages within the same notebook.
Polyglot notebooks
You may notice that cells in VS Code Python notebooks do give you a choice of languages. This is Microsoft’s Polyglot Notebooks Extension built on top of .NET. When running a VS Code Python Notebook, here’s the drop-down for cell types. Sadly, R is not a choice.
Solution: The rpy2 Python module
As promised, there is a way to work freely between R and Python. There's no need to choose one over the other.
Polyglot VS Code Python notebooks running a Python kernel do allow mixing cells of different languages, but unfortunately this feature does not include R: you choose either an R kernel or a Python one. The solution is to load a special Python module that mediates between R and Python.
In Jupyter notebooks running the Python kernel, one can use the "R magics" exposed by the rpy2 Python package to execute native R code. This is a way to insert R code into a native Python notebook. Several cloud services ship with the rpy2 "magics" built in to transfer variables between the two languages. Think of these "magics" as commands that read a variable from one process and write it into the other. If you subsequently change a transferred variable's value in a cell in one language, it needs to be re-imported into the other language to pick up the change.
To use rpy2 in VS Code notebooks
Since the rpy2 "magics" are not built into VS Code kernels, you need to load rpy2 into the Python kernel. Of course, your Python environment needs the package installed:
pip install rpy2
First, configure your Python notebook by loading rpy2 with this cell magic in one of the notebook's cells, preferably the first:
%load_ext rpy2.ipython
Then in a cell where you want to run R and import an existing Python variable into the cell, say my_df, preface the cell with this magic command:
%%R -i my_df
Similarly, to export an R variable from an R cell so it’s visible in Python, preface it with:
%%R -o my_df
The analogous single-"%" line magic %R -i my_df performs the same import for a one-line R expression inside a Python cell. To see documentation for the set of magic commands, run %magic in a Python cell. The authors of the rpy2 Python package continue to support it for sharing data frames between notebook cells in the two languages. This blog explains how.
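Putting these magics together, a minimal notebook session might look like the sketch below. The three cells are shown in a single block with comment separators, and the data frame contents are invented for illustration.

```
# ---- Cell 1 (Python): enable the R magics shipped with rpy2 ----
%load_ext rpy2.ipython

# ---- Cell 2 (Python): build a data frame to hand off to R ----
import pandas as pd
my_df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [2.1, 3.9, 6.2, 8.0]})

# ---- Cell 3 (R, via the %%R cell magic): -i imports my_df into R ----
%%R -i my_df
fit <- lm(y ~ x, data = my_df)
print(coef(fit))
```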
4. Quarto: An alternative to multi-lingual notebooks
Quarto is designed to build beautiful documents interspersed with embedded, functioning code. In contrast, Jupyter notebooks are intended as a programming environment, a sequence of code cells alternating with text. The line between them blurs as more features are added to both. Quarto evolved from R Markdown as a multi-purpose document-generation tool, combining the best of existing tools with the ability to run code. In fact, .Rmd "R Markdown" files, an extension of standard Markdown, find a natural successor in Quarto .qmd files. Quarto can be run from the command line as a batch task that knits .qmd files into HTML, PDF, or other formats. However, it is fully integrated with RStudio and VS Code, so one never needs to resort to the command line.
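For instance, assuming a document named report.qmd (the filename is hypothetical), a batch render from the shell looks like:

```sh
quarto render report.qmd --to html   # or --to pdf, provided a LaTeX install is available
```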
Reticulate: Integrating Python and R with Quarto
An alternative to running R with Python in Jupyter Notebooks is running code in Quarto's .qmd files with the reticulate R library. Quarto supports code blocks in multiple languages, embedded in an extended, full-featured version of Markdown, and reticulate is the preferred solution for sharing R and Python data frames in a document. See this article about reticulate.
Python in .qmd markdown files
Analogous to how rpy2 makes it possible to share data frames between R and Python in Jupyter Notebooks, reticulate makes the same possible in Quarto, though the syntax differs. To retrieve a data frame in an R cell from a previous Python cell, use reticulate's py object (here assuming a Python cell above created python_df):
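```{r}
r_df <- py$python_df
```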
The opposite conversion, from R to Python in a Python cell, is:
```{python}
python_df = r.r_df
```
I sometimes find that loading library(reticulate) explicitly is necessary, although Quarto tries to figure this out for you, assuming the package is already installed in your R environment.
Once a Python object is created from an R object it is available in all subsequently executed cells. The same holds true for R objects created from Python.
Code display options
There is both a source mode and a visual editing mode for Quarto files. Cells in source mode (unlike in visual mode) have a live link along their upper edge to execute them. Unlike Jupyter notebooks, where the output of cell computations is included in-line following the cells, the output is shown in a separate pane when running cells in .qmd files. Running an individual R cell injects the cell’s code into a terminal running R. In Quarto, Python cell output can be displayed in various ways, sometimes in a separate pane called “Interactive”, or in a pop-up window.
When rendering a document, Quarto gives you the freedom to show the code, the output, or both. By default, both the code and output are included in the document. To omit the output, create a cell with the #| output: false option, as in this sketch (the summary() call is just a stand-in):
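```{r}
#| output: false
summary(cars)  # example code: it runs and is echoed, but its printed output is omitted
```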
Similarly, the option to hide the code in the document is #| echo: false. This works in both R and Python cells. To enforce these settings globally, they can also be included as YAML options in the .qmd file header, as shown here. For instance, here's another option that folds the code into a drop-down labeled "The code":
---
title: "Folded code example"
format: html
code-fold: true
code-summary: "The code"
---
See the Quarto documentation on Code Cells: Jupyter for all cell comment options.
Bottom line: Which environment to use for which purpose?
This post is a brief survey of the existing R <-> Python integrations available for the VS Code development environment. Here are my thoughts about when they come in useful.
Running R code in VS Code is a work-alike to running RStudio. RStudio is a dedicated R development environment with some recent additions to accommodate Python. Conversely, VS Code competes as a multi-language project development environment, with extensions for notebook, Python, and R programming. I find RStudio more intuitive and R-ish, and having to suffer multiple ways to accomplish similar tasks in VS Code adds needless complication to the interface. But if you are familiar with VS Code and don’t want to learn another tool, VS Code runs R code just fine.
When I need some R code capabilities not available in Python (yes, R still has better statistics and graphics libraries), I use rpy2 to extend my Jupyter notebooks. This way I can still use the extensive features of VS Code, such as evolving my (pure) notebooks into Python modules. On the other hand, just running R notebooks in VS Code has no advantages over running Quarto documents.
I’m learning Quarto not only for its “polyglot” language features but also for its extensive and varied document creation possibilities. This blog is actually created as a .qmd document! It remains to be seen if VS Code will equal the ease of Quarto development in RStudio as I get more proficient with it.
About the author
Recently, after departing his Data Scientist position at Microsoft, John Mark Agosta began teaching the “Math Foundations” course for the SJSU online Master’s program and founded Fondata, LLC to pursue his interest in probabilistic graphical models and related Bayesian modeling methods.
The usage of Visual Studio Code (VS Code) as a general-purpose graphical user interface (GUI) programming tool offers various benefits, particularly in data science. With its R and Python integration, the tool stands out for a comprehensive ecosystem of Extensions that add language-specific features.
Long-Term Implications
VS Code’s potential for customization across languages is likely to increase its adoption by data scientists and developers across the globe. The future development of VS Code’s ecosystem could result in a further enrichment of features, making it an even more powerful tool for machine learning.
Additionally, VS Code’s cloud integration capabilities could streamline the development process for extensive programming projects. As programming continues to evolve with more complex, language-agnostic projects, tools like VS Code will continue to emerge as industry standards.
Future Developments
The integration of Python and R represents a revolutionary approach to programming, promising more streamlined coding and debugging processes. The possibility of switching between languages within the same project may lead to an increase in the tool’s popularity, pushing developers towards comprehensive language knowledge.
Furthermore, given the growing interest in inter-language operability, the possibility of introducing more languages into VS Code’s ecosystem could soon be a reality. This would allow for an integrated, all-in-one coding environment that could not only simplify but also revolutionize how programming is done.
Actionable Advice
For Individuals:
If you are a data scientist or developer looking to work across multiple computer languages, do consider adopting VS Code.
For individuals working in R, it’s worthwhile to explore the VS Code options, even if you’re already using RStudio. While RStudio provides ready-made solutions and processes, the versatility and adaptability of VS Code could streamline extended programming projects and provide better debugging and Cloud integrations.
If you’re interested in creating a coding project in multiple languages, explore the “mixing languages” potential of VS Code. It offers an opportunity to leverage the power of various computer languages to solve complex programming problems.
For Organizations:
Organizations should consider training their developers and data scientists to use tools like VS Code. This tool improves productivity by enabling simultaneous work across multiple languages.
Given the rate at which this tool updates and evolves, it is essential for organizations to keep track of new features and extensions that could immediately improve team productivity and code quality.
As VS Code integrates smoothly with Git and aligns neatly with Azure Cloud interoperability, organizations can consider this platform as part of their cloud strategies.
Learn these 7 debugging moves and you’ll laugh at your old error messages.
Understanding Debugging Techniques: The Key to Swift Error Resolution
Every developer, at some point in their coding journey, experiences the frustration of an unhandled exception, a cryptic error message, or non-working code. It is at these moments that robust debugging skills come into play. Proficiency with debugging techniques paves the way for fluent problem solving, efficient coding, and rapid error resolution.
Perks of Debugging Proficiency
Error Resolution: The most obvious advantage of efficient debugging is the ability to resolve errors quickly, reducing downtime and accelerating development.
Smoother Workflow: Debugging skills can smooth your workflow, limiting time wasted on bug-hunting and leaving more room for productive problem-solving.
Code Quality Improvement: Debugging forces a deeper understanding of the code, which can lead to cleaner, more robust, and quality code.
Future Implications and Developments
As technology evolves and becomes more complex, so too does the depth of potential bugs and errors. The stakes are higher than ever, as bugs can result in significant financial loss or even a threat to personal security. In turn, the demand for proficient debuggers will only increase. Knowing which debugging techniques to employ and when can be the difference between a quick fix and a significant headache.
Possibility of Advanced Debugging Tools
As the world moves towards increased automation and artificial intelligence (AI), the future could hold advanced automated debugging tools. This would not only potentially save significant time and money but also vastly improve code quality by preventing bugs before they occur.
Actionable Advice: Enhancing Debugging Skills
To stay ahead in this ever-evolving digital world, consider taking the following steps:
Learn Debugging Techniques: Understand the various debugging techniques at your disposal, including breakpoints, step over, step into, and watch. Dive deeper into their strengths and weaknesses to know when to use which debugging method.
Practice: Debugging skills, like any others, need practice. Try debugging code of varying complexity to steadily improve your ability.
Stay Up-to-date: Always stay current with new debugging tools and software. Technologies continually progress, and newer debugging tools can offer more efficient ways to tackle bugs.
Remember, efficient debugging is not just about fixing the code; it's about understanding your code thoroughly. The better your understanding, the faster and more accurately you'll be able to diagnose and resolve issues.
A few years ago, the R community started using ORCID (“Open Researcher and Contributor ID”) to persistently and uniquely identify individual authors of packages in DESCRIPTION.
The idea is the following: you enter an author's ORCID as a specially named comment in their person() object.
For instance, an author can be represented by an entry like the following (the name and ORCID here are placeholders; substitute your own):
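```r
person(
  "Jane", "Doe",
  role = c("aut", "cre"),
  comment = c(ORCID = "0000-0000-0000-0000")  # placeholder ORCID
)
```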
Although anyone could use your ORCID, maliciously or inadvertently¹, you definitely benefit from using your ORCID in your work.
In the case of R packages, CRAN pages and pkgdown websites feature a pretty icon linking to your ORCID profile that in turn can link to your favorite online presence.
Recognition! Personal branding!
This year, the exact same idea was applied to organizations, using ROR ("Research Organization Registry") IDs.
Any organization, be it a research organization, an initiative or a company, can request to be listed in the registry.
A few months ago, it became possible to list ROR IDs in DESCRIPTION, which a few dozen CRAN packages currently do, although this is still far from the thousands of CRAN packages adopting ORCIDs.
Thanks to R Core for adding the feature² and to Achim Zeileis for spreading the news.
A package maintainer might need to list organizations in DESCRIPTION: for instance, a company that owns the copyright to the package ("cph" role), or an entity that funded work on the software ("fnd" role).
Adding the organization’s ROR ID to its person() object identifies it even more clearly.
As an illustration, rOpenSci can be represented by:
person("rOpenSci", role = "fnd",
comment = c("https://ropensci.org/", ROR = "019jywm96"))
The ROR icon, although less striking than the bright green ORCID icon, appears on the CRAN page of the package and links to the organization’s ROR page that in turn can link to the organization’s website:
In 2018, we reported on tooling for using ORCID.
This year, we’d like to explain the tooling for including ROR IDs.
ROR support in the {devtools} ecosystem
Once ROR IDs were supported by base R, the next technical step was support in Posit's "devtools ecosystem" too.
Even if devtools is not strictly necessary for developing packages, many package developers, including some in the rOpenSci community, do use devtools.
The code supporting ROR in desc, roxygen2 and pkgdown follows the code supporting ORCID in those packages.
It is very fortunate that ORCID support was added before ROR, because "orcid" is a better string to search for than "ror", which comes up in, say, "error".
ROR support in {desc}
The desc package, maintained by Gábor Csárdi, helps you manipulate DESCRIPTION files programmatically.
In its current development version, all functions handling authors (adding, searching or complementing entries) now feature a ror argument.
Furthermore, a new function, desc_add_ror(), was created.
For instance you can add a ROR ID to an author entry:
desc::desc_add_ror("019jywm96", given = "rOpenSci")
You can add an author entry including its ROR ID:
desc::desc_add_author(given = "rOpenSci", ror = "019jywm96", role = "fnd")
These functions can be handy to update a bunch of packages at once.
Even if packages are updated one by one, it is shorter to share and apply the instructions as a code snippet.
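For instance, here is a sketch that stamps the same ROR ID into every package checked out under a local directory (the ~/pkgs path is hypothetical, and I'm assuming desc_add_ror() accepts desc's usual file argument):

```r
# Loop over local package checkouts and add rOpenSci's ROR ID to each
for (dir in list.dirs("~/pkgs", recursive = FALSE)) {
  desc::desc_add_ror("019jywm96", given = "rOpenSci", file = dir)
}
```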
ROR support in {roxygen2}
The roxygen2 package, maintained by Hadley Wickham, generates your package’s NAMESPACE and manual pages using specially formatted comments.
Among those manual pages, your package might (and should, according to our dev guide) contain a package-level one.
You can create such a page using usethis::use_package_doc().
The following content will be added to R/package-name-package.R, for instance R/usethis-package.R.
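From recent usethis versions, that template looks roughly like this (the exact contents may vary by version):

```r
#' @keywords internal
"_PACKAGE"

## usethis namespace: start
## usethis namespace: end
NULL
```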
ROR support in {pkgdown}
The pkgdown package, maintained by Hadley Wickham, creates a documentation website for your package based on its metadata and documentation.
Since version 2.1.2, ROR IDs in DESCRIPTION are transformed into icons, just as ORCID IDs are.
The sidebar of tinkr's website includes a ROR icon near the rOpenSci name.
Support for ROR icons?
As of today, ROR icons like those on CRAN pages, pkgdown websites and our website's footer come from local image files. We have, however, opened an icon request for ROR in the Font Awesome repository, which you can upvote with a thumbs-up. This strategy worked for ORCID. There is already a ROR icon in the more specialized Academicons library.
Conclusion: go forth, register and use ROR IDs!
In this tech note, we explained what ROR IDs are: persistent IDs for organizations.
They are to organizations what ORCIDs are to individuals.
We’ve shown ROR IDs are supported in the base R and devtools ecosystems.
ROR IDs can help identify more clearly an entity you list in your package's DESCRIPTION because it, say, funded the work or owns the copyright to it.
We encourage you to register your organization to the Research Organization Registry and to use the resulting ID in your package’s DESCRIPTION.
Such a task could be tackled during a package spring cleaning.
¹ Don't we all resort to copy-pasting formatting from others' metadata files?
² Currently, packages on CRAN with a ROR ID in DESCRIPTION get a NOTE in CRAN checks, which can be ignored. Example.
The R community uses special identifiers such as ORCID ("Open Researcher and Contributor ID") and ROR ("Research Organization Registry") IDs to uniquely and persistently identify the individual authors and organizations involved in creating R packages. These identifiers offer recognition and a branding element for individuals and organizations alike, and can link to online profiles or websites.
Long-Term Implications
If used consistently and appropriately, ORCID and ROR IDs can greatly support the open science movement by ensuring clear attributions of contributions to scientific packages and results. This can foster transparency and collaboration within the scientific community, stimulating research and development. In the future, these IDs could become a standard tool for recognizing the work of researchers and organizations involved in the creation of scientific packages. It could also enhance the mobility and recognition of individual contributors across multiple projects.
Possible Future Developments
We may witness an expansion of these unique identifiers in other areas of open-source development, reaching beyond the scientific community. As these identifiers grow in popularity, they could integrate with other digital tools used by researchers, such as digital repositories, lab notebooks and bibliographic management tools. This would allow for a seamless tracking and crediting of research contributions, while also promoting open science practices.
Actionable Advice
If you’re a part of the R community or if you’re engaged with open source development, consider adopting the use of ORCID and ROR IDs. Registering your organization with the Research Organization Registry and using these IDs consistently can enhance visibility and recognition for your work. Also, take advantage of the tooling available for including ROR IDs such as in ‘devtools’, ‘desc’, ‘roxygen2’ and ‘pkgdown’ packages.
If you’re already using these identifiers, explore further how you can integrate them with other tools and platforms you use. And lastly, contribute to further enhancement of the system by submitting and voting for icon requests.
Tired of rewriting boilerplate code? These copy-ready custom decorators are reusable patterns that belong in every developer’s toolkit.
Long-term Implications and Future Developments
The long-term implications of using custom decorators in coding are profound. With the advancements in different programming paradigms, a trend towards less redundancy and complexity is becoming more evident. As custom decorators enable developers to avoid rewriting the same code over and over again, they are an incredible tool to simplify and accelerate the coding process significantly.
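As a minimal sketch of the pattern (the names here are invented for illustration), consider a decorator that times any function it wraps:

```python
import functools
import time

def timed(func):
    """Print how long the wrapped function takes to run."""
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timed
def slow_sum(n):
    return sum(range(n))

slow_sum(1_000_000)  # prints something like: slow_sum took 0.0214s
```

Because the timing logic lives in one place, every function that needs it gains it with a single @timed line instead of repeated boilerplate.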
Less Redundancy and More Efficiency
Implementing copy-ready custom decorators in your code brings efficiency to a whole new level. This means that you save time and resources, which can be spent on researching new features or sorting out bugs in your projects. This increase in productivity might even lead to shorter delivery times, creating a faster and more streamlined workflow.
Readability and Maintainability
With custom decorators, you make your code more readable and maintainable. This is a valuable property, especially in larger codebases or when working in a team. The reuse of decorators minimizes code complexity, which in turn reduces the chance of errors and makes the code more understandable to other developers.
Future Developments
As software development evolves, the use of such tools is likely to become an industry standard. By that time, developers who do not adopt these practices may struggle to keep up with the pace of the industry, especially as client and user expectations continue to rise.
Actionable Advice
Start Now: Start using custom decorators for your projects now. If you’re still writing boilerplate code, it’s time to change and build up a toolkit of reusable code.
Learn and Understand: Make sure to take the time to learn and understand how these patterns work. Custom decorators can be powerful, but they can also be confusing if used incorrectly.
Don’t Overuse Them: As with any tool, custom decorators should not be overused. Remember to only use them when necessary. Creating decorators for tasks that can be done simply without them can lead to unnecessary complexity.
Keep Up with the Latest: Technology is always evolving, so constantly keep an eye on the latest practices and trends. This can help ensure that you’re staying on top of your game.
The willingness to learn, understand, and implement custom decorators is a game-changer for developers striving for efficiency and productivity. Custom decorators have considerable potential to transform the way we approach coding by reducing redundancy, improving readability and maintainability, and increasing efficiency.