[This article was first published on R Code – Geekcologist, and kindly contributed to R-bloggers].
As an ecologist, I find that being able to easily visualize biodiversity occurrence data is essential: it provides critical insights into species distribution patterns and ecological requirements, which in turn underpin our understanding of biodiversity dynamics in space and time. On a more pragmatic level, fast and simple visualization of occurrence data helps us define sampling and monitoring strategies in the field, optimizing the allocation of time and resources. And as a passionate wildlife and nature enthusiast, I also find it particularly useful when planning a trip to a place I have never been, or when simply “virtually exploring” a far-off, unknown corner of the planet.
This simple R Shiny app lets users explore species distributions by selecting a taxonomic group and defining a geographic area of interest. The area is specified with minimum and maximum latitude and longitude, which define a bounding-box polygon (only these four values are used, so complex boundaries cannot be drawn here). The app then retrieves species occurrence records from GBIF within this quadrilateral and displays them on an interactive map built with the R package leaflet.
In the source code, you can also change the taxa of interest shown in the user interface and the months of the year in which the data were collected, which can be useful for seasonal species.
The user interface should look like this (see the screenshot in the original post):
Here is the R code!
# Install packages (dependencies = TRUE belongs outside the package vector)
install.packages(c("shiny", "rgbif", "leaflet"), dependencies = TRUE)
# Load packages
require(shiny)
require(rgbif)
require(leaflet)
# Define function to search for occurrences of a specified clade within a polygon (i.e., bounding box = bbox)
search_occurrences <- function(bbox, clade) {
  occ_search_result <- occ_search(
    geometry = paste("POLYGON((", bbox["min_longitude"], " ", bbox["min_latitude"], ",",
                     bbox["min_longitude"], " ", bbox["max_latitude"], ",",
                     bbox["max_longitude"], " ", bbox["max_latitude"], ",",
                     bbox["max_longitude"], " ", bbox["min_latitude"], ",",
                     bbox["min_longitude"], " ", bbox["min_latitude"], "))"),
    month = "1,12",          # define months of the year (range: January through December)
    scientificName = clade,
    hasCoordinate = TRUE
  )
  return(occ_search_result)
}
# Define user interface
ui <- fluidPage(
  titlePanel("Species Occurrence"),
  sidebarLayout(
    sidebarPanel(
      selectInput("clade", "Choose a clade:",
                  choices = c("Aves", "Coleoptera", "Amphibia", "Plantae", "Mammalia", "Actinopterygii", "Insecta"), # you can change the default clades according to your taste in biodiversity
                  selected = "Aves"), # first clade shown in the drop-down box
      numericInput("min_longitude", "Minimum Longitude:", value = -9), # the defaults approximate the borders of Portugal; change them in the user interface or directly here
      numericInput("max_longitude", "Maximum Longitude:", value = -6),
      numericInput("min_latitude", "Minimum Latitude:", value = 36),
      numericInput("max_latitude", "Maximum Latitude:", value = 42)
    ),
    mainPanel(
      leafletOutput("map")
    )
  )
)
# Define server logic
server <- function(input, output) {
  # Render the leaflet map based on user's clade selection and polygon coordinates
  output$map <- renderLeaflet({
    clade <- input$clade
    bbox <- c(
      min_longitude = input$min_longitude,
      min_latitude = input$min_latitude,
      max_longitude = input$max_longitude,
      max_latitude = input$max_latitude
    )
    occ_search_result <- search_occurrences(bbox, clade)
    leaflet() %>%
      addTiles() %>%
      addCircleMarkers(
        data = occ_search_result$data,
        lng = ~decimalLongitude,
        lat = ~decimalLatitude,
        popup = ~species,
        radius = 5,
        color = "blue",
        fillOpacity = 0.7
      ) %>%
      setView(
        lng = mean(bbox[c("min_longitude", "max_longitude")]),
        lat = mean(bbox[c("min_latitude", "max_latitude")]),
        zoom = 14
      )
  })
}
#et voilà! You can run the application
shinyApp(ui = ui, server = server)
Implications and Future Developments of Biodiversity Occurrence Visualization Using GBIF and R Shiny
Visualizing biodiversity occurrence data with the Global Biodiversity Information Facility (GBIF) and R Shiny is a valuable way to explore species distribution patterns and ecological requirements. The insights this kind of data visualization offers are critical for understanding biodiversity dynamics in both space and time. Furthermore, such visualization is extremely useful for the strategic planning of sampling and monitoring activities in the field, through optimized allocation of resources and time.
Long-term Implications
Implementing R Shiny apps to collect data from GBIF can have broad, long-term implications for ecological research and conservation. Being able to easily access and visualize over 2 billion records of species occurrence globally helps scientists monitor species distributions over space and time. This can be critical in detecting biodiversity changes due to climate change, habitat destruction, or invasive species. The app can be particularly useful for conservation ecologists who are planning fieldwork in new or remote areas, as it can help decide the best areas to sample based on known species occurrences.
Potential Future Developments
While the current system is already beneficial, there is room for significant improvements in the future. Here are a few potential developments:
Expanding the polygon specifications: Currently, the app only uses minimum and maximum latitude and longitude to define a polygon, restricting searches to a quadrilateral. In the future, more complex polygons could be passed so that searches match the actual area of interest (a sketch follows this list).
Including more filtering options: Filtering the data by additional parameters, such as collection date, would allow more specific temporal analyses, for example tracking species migration (also covered in the sketch below).
Enhancing the data presentation: Currently, the app displays the data in an interactive map with basic information. More detailed results, that possibly include the introduction of graphs or charts for data visualization, could be beneficial for complex analyses.
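As a rough sketch of the first two ideas, rgbif's occ_search() already accepts arbitrary WKT polygons and date-related filters such as year and month ranges. The polygon and values below are invented purely for illustration and are not part of the app:
# Illustrative only: an arbitrary (non-rectangular) polygon plus year/month filters.
# The coordinates and ranges are made up; adapt them to your own area of interest.
library(rgbif)

# Hypothetical triangle, vertices listed counter-clockwise (the winding GBIF recommends)
wkt_triangle <- "POLYGON((-9 38, -7 38, -8 40, -9 38))"

occ_custom <- occ_search(
  geometry       = wkt_triangle,   # any WKT polygon, not just a bounding box
  scientificName = "Aves",
  year           = "2015,2024",    # range filter on collection year
  month          = "4,9",          # April to September only
  hasCoordinate  = TRUE,
  limit          = 500             # cap the number of records returned
)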
Actionable Advice
Overall, using this R Shiny app for accessing GBIF data offers a great opportunity for researchers and biodiversity enthusiasts to visualize global biodiversity occurrence data. To make the most of this app, users should:
Ensure they have a good understanding of R programming to customize the data retrieval and visualization to better suit their specific needs.
Remember that the app queries GBIF live, so rerunning a search returns the most recent records available; keeping rgbif and the other dependencies up to date helps ensure those queries keep working.
Consider the potential future developments and how they might be implemented into the application to enhance usability and precision.
Future Trends in Accessibility for Neurodivergent Individuals: Improving Cultural Spaces
Introduction
In recent years, there has been an increasing recognition of the need to improve accessibility in cultural spaces for neurodivergent individuals. Neurodivergent people, including those on the autism spectrum, have unique sensory, cognitive, and social processing traits. Unfortunately, most cultural venues were not initially designed with their specific needs in mind. However, World Autism Acceptance Week serves as a timely reminder to work towards creating inclusive spaces. This article explores the key issues and potential future trends related to accessibility for neurodivergent individuals, along with unique predictions and recommendations for the industry.
The Current State of Cultural Venues and Neurodivergent Accessibility
While some cultural spaces have made efforts to improve accessibility for neurodivergent individuals, the majority still lag behind in understanding their specific needs. Sensory overload, crowded spaces, and limited accommodations can often make visits to theaters, cinemas, museums, and galleries overwhelming experiences for neurodivergent individuals and their families. Recognizing this problem is the first step towards achieving meaningful change.
1. Sensory-Friendly Experiences
One potential future trend is the development of sensory-friendly experiences in cultural venues. By creating designated sensory-friendly spaces or events, venues can cater to the needs of neurodivergent individuals. These spaces might include reduced lighting, quieter environments, and the provision of sensory accommodation kits, such as noise-cancelling headphones or fidget toys.
2. Digital and Virtual Accessibility
The advent of digital and virtual technologies presents significant opportunities to improve neurodivergent accessibility in cultural spaces. Virtual reality (VR) and augmented reality (AR) technologies can offer alternative ways for individuals to engage with exhibitions and performances. Additionally, the integration of digital platforms for pre-visit planning, online ticketing, and virtual tours can help reduce anxiety by providing individuals with the ability to familiarize themselves with the venue beforehand.
3. Staff Training and Sensitivity
Proper training and sensitivity among cultural venue staff is crucial to ensuring positive experiences for neurodivergent visitors. Implementing comprehensive training programs that educate staff about neurodiversity, sensory processing differences, and communication techniques will greatly enhance inclusivity. Staff members should be equipped with the necessary knowledge and tools to provide appropriate support and understanding to all visitors.
4. Collaborations and Partnerships
Collaborations and partnerships between cultural venues and organizations that specialize in supporting neurodivergent individuals can foster innovation and ensure sustained improvements in accessibility. By sharing expertise and resources, venues can learn from existing best practices and implement them effectively. Such collaborations can also help raise awareness and advocate for the rights of neurodivergent individuals on a broader scale.
Predictions for the Future
Looking ahead, the future holds promising prospects for enhanced accessibility in cultural spaces for neurodivergent individuals.
Progressive Legislation: Governments around the world are increasingly recognizing the importance of accessibility and inclusivity. Legislation and regulatory frameworks will play a pivotal role in ensuring that cultural venues proactively address the needs of neurodivergent individuals.
Technological Advancements: Continued advancements in technology have the potential to transform neurodivergent accessibility. Innovations in automated sensory control systems, personalized digital guides, and virtual reality experiences will greatly enhance the overall cultural experience for neurodivergent individuals.
Shift in Cultural Awareness: As awareness and understanding of neurodiversity continue to grow, cultural venues will place a greater emphasis on prioritizing the needs of neurodivergent individuals in their design and programming decisions. Inclusivity will become a fundamental aspect of cultural offerings.
Recommendations for the Industry
To foster positive change and improve accessibility for neurodivergent individuals, the industry should consider the following recommendations:
Invest in Research: Undertake comprehensive research to gain deeper insights into the specific needs and preferences of neurodivergent audiences. This will provide a basis for informed decision-making and drive targeted improvements in accessibility.
Collaborate with Specialists: Forge partnerships with organizations and professionals specializing in neurodivergent support to create tailored accessibility strategies. By tapping into their expertise, cultural venues can ensure that their efforts align with best practices and are effective in meeting the needs of the neurodivergent community.
Share Best Practices: Cultural venues should actively share information and best practices to facilitate knowledge exchange. By learning from one another, the industry can collectively evolve towards creating barrier-free, inclusive spaces for neurodivergent individuals.
Engage Neurodivergent Individuals: Involve neurodivergent individuals and their families in the design and evaluation of accessibility initiatives. Their firsthand experiences and insights are invaluable in shaping effective improvements and ensuring that the efforts resonate with the community they aim to serve.
Conclusion
As awareness grows surrounding the unparalleled experiences and perspectives neurodivergent individuals bring, the industry must continue striving for inclusivity in cultural spaces. By focusing on sensory-friendly experiences, embracing emerging technologies, providing comprehensive staff training, and fostering collaborations, cultural venues can create transformative spaces that welcome and cater to the needs of neurodivergent individuals. Together, we can build a more inclusive future.
“Accessibility is not an option, it is a fundamental human right.” – Unknown
[This article was first published on T. Moudiki’s Webpage – R, and kindly contributed to R-bloggers].
Predictive simulation of time series data is useful for many applications, such as risk management and stress-testing in finance or insurance, climate modeling, and electricity load forecasting. This (preprint) paper proposes a new approach to uncertainty quantification for univariate time series forecasting. The approach adapts split conformal prediction to sequential data: after training the model on a proper training set and computing residuals on a calibration set, out-of-sample predictive simulations are obtained through various parametric and semi-parametric simulation methods. Empirical results on uncertainty quantification scores are presented for more than 250 time series data sets, both real-world and synthetic, reproducing a wide range of time series stylized facts.
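To make the recipe concrete, here is a minimal sketch of the split-conformal idea (proper training set, calibration residuals, simulated future paths) in base R. It assumes an ARIMA(1,1,1) point forecaster and simple resampling of calibration residuals, so it is an illustration under simplifying assumptions, not the paper's exact method:
# Minimal illustration of split-conformal-style predictive simulation
# (not the paper's exact method)
set.seed(123)
y       <- as.numeric(AirPassengers)   # example series shipped with R
n       <- length(y)
n_train <- floor(0.6 * n)              # proper training set
n_calib <- floor(0.2 * n)              # calibration set
h       <- n - n_train - n_calib       # horizon held out at the end

# 1. Fit a point forecaster on the proper training set
fit <- arima(y[1:n_train], order = c(1, 1, 1))

# 2. Calibration residuals from forecasts over the calibration window
calib_pred  <- as.numeric(predict(fit, n.ahead = n_calib)$pred)
calib_resid <- y[(n_train + 1):(n_train + n_calib)] - calib_pred

# 3. Refit on training + calibration data, then simulate future paths by
#    adding resampled calibration residuals to the point forecast
#    (a semi-parametric choice; a parametric alternative would fit a
#    distribution to calib_resid instead)
point_fc <- as.numeric(predict(arima(y[1:(n_train + n_calib)],
                                     order = c(1, 1, 1)),
                               n.ahead = h)$pred)
sims <- replicate(250, point_fc + sample(calib_resid, h, replace = TRUE))
# each column of `sims` is one simulated future trajectory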
Implications and Future Developments of Predictive Simulation of Time Series Data
The text highlights the potential of predictive simulation of time series data in numerous applications, such as risk management and stress-testing in finance and insurance. Predictive simulations can also play a significant role in areas like climate modeling and electricity load forecasting. The paper focuses on a new methodology for quantifying uncertainty in univariate time series forecasting: an adaptation of split conformal prediction to sequential data.
Potential Long-term Implications
The long-term consequences of this new approach can be multifaceted. The ability to efficiently predict time series data has pronounced implications for sectors like finance, risk management, and insurance, where predictive accuracy drives decision making and carries significant financial consequences.
In the domain of climate modeling, advancing predictive simulation of time series data will prove invaluable in developing more accurate models and could potentially assist in mitigating the impact of climate change through timely interventions.
For electricity load forecasting, this could lead to improved operational efficiency and cost savings. A more reliable load forecast can help utility managers make better capacity planning decisions, thus reducing waste and improving service level.
Future Developments
Considering the demonstrated usefulness of predictive simulations for univariate time series data, it’s plausible that future research could focus on applying this technique to multivariate time series, thereby unlocking even greater predictive power. Furthermore, refining the parametric and semi-parametric simulation methods to deliver even more precise results will likely be a key focus of subsequent work in this field.
Actionable Advice
Based on the discussed points, the following actionable advice can be drawn:
Invest in predictive simulation training: Be it finance, insurance, or any other domain where time series data is used, investing in training relevant personnel in the methods and tools of predictive simulation can be beneficial.
Prioritize implementation: In sectors where time series data is critical, like climate modeling and load forecasting, initiatives should be in place to implement the latest predictive simulation techniques. This can lead to better decision-making, improved efficiency, and cost-effectiveness.
Encourage research and development: Given the promising advancements in this field, supporting further research and development into predictive simulations, especially with multivariate data, could certainly yield significant returns in the future.
# A tibble: 4 × 2
cup caffeine
<chr> <dbl>
1 Coffee 99
2 Tea 34
3 Green Tea 34
4 Decaf Tea 4
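The code that creates caffeine_meter is not part of this excerpt; assuming the values printed above, it can be rebuilt with tibble::tribble():
library(tibble)

# Recreate the data shown above (values taken from the printout)
caffeine_meter <- tribble(
  ~cup,        ~caffeine,
  "Coffee",           99,
  "Tea",              34,
  "Green Tea",        34,
  "Decaf Tea",         4
)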
Now we can plot the caffeine meter using ggplot2:
library(ggplot2)

g <- ggplot(caffeine_meter) +
  geom_col(aes(x = cup, y = caffeine, fill = cup))
g
Then I add the colours that I extracted with GIMP:
pal <- c("#f444b3", "#3004c9", "#85d26a", "#3a5dff")
g + scale_fill_manual(values = pal)
The Decaf Tea category should appear at the end of the plot, so I need to turn the “cup” column into a factor ordered by decreasing “caffeine”:
library(dplyr)   # for mutate() and the %>% pipe
library(forcats)

caffeine_meter <- caffeine_meter %>%
  mutate(cup = fct_reorder(cup, -caffeine))

g <- ggplot(caffeine_meter) +
  geom_col(aes(x = cup, y = caffeine, fill = cup)) +
  scale_fill_manual(values = pal)
g
Now I can change the background colour to a more blueish gray:
g +
  theme(panel.background = element_rect(fill = "#dcecfc"))
Now I need to add the title with a blue background, putting it all together.
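The original post's final code block is not reproduced in this excerpt. A plausible reconstruction (my assumption, not necessarily the author's approach) uses the ggtext package to draw the title on a blue banner:
library(ggtext)   # assumed dependency for the boxed title

g +
  labs(title = "Caffeine Meter") +   # title text is a guess
  theme(
    panel.background = element_rect(fill = "#dcecfc"),
    plot.title = element_textbox_simple(
      fill    = "#3004c9",           # blue banner behind the title
      colour  = "white",
      halign  = 0.5,                 # centre the title text
      padding = margin(6, 6, 6, 6)
    )
  )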
Understanding the Impact and Application of Data Visualization Techniques Using R Programming
Data visualization plays a crucial role in understanding complex data. The text discusses how one can use the R programming language and the ggplot2 package to recreate a caffeine meter originally found on Tetley tea boxes.
The process involved creating a tibble (a tidyverse take on R's data frame), plotting the caffeine values with ggplot2, and applying colours sampled from the original design with GIMP. The author also shows how to reorder categories and customize the plot’s aesthetics, such as changing the background colour or adding a title.
Implications and Future Developments
While seemingly simple, this step-by-step approach of recreating a caffeine meter not only shows the power of data visualization, but also how programmers can leverage R’s flexibility to customize and manipulate plots. The practicality and ease of use of the ggplot2 package make it a valuable tool for R users seeking to understand and present their data better.
In the long term, this technique could lead to more sophisticated data visualization projects. With the increasing complexity and volume of data, there will be a growing demand for data visualization skills. Enhancements in ggplot2 and similar packages would help create more intuitive and user-friendly graphics that make complicated data more understandable.
Moreover, considering the rapid progress within the R programming community, we may expect the release of new packages or functionalities that offer even more customization options and easier methods of plot manipulation.
Actionable advice
Based on the above insights, here are some suggestions for those interested in data visualization and R programming:
Start simple: Beginners should start with simple projects, like the one mentioned in the text, to understand the basics of data visualization using R and ggplot2.
Continuous learning: Stay updated with developments in the R community. The capabilities of R are continuously growing, and new packages and functionalities are regularly released.
Incorporate design principles: Despite the technical nature of data visualization, remember that plots are a form of communication. Learning basic design principles will go a long way toward making your plots easier to understand and more aesthetically pleasing.
Explore data: Try visualizing different parameters and variables of your data. Often, the best way to understand the dataset is to plot it.
Remember that data visualization, like any other skill, requires time and practice to master. So, patience is key! Get your hands dirty with code, make plenty of mistakes, and most importantly, keep having fun throughout your journey.
Introduction
As a data scientist or analyst, you often encounter situations where you need to combine data from multiple sources. One common task is merging data frames based on multiple columns. In this guide, we’ll walk through several step-by-step examples of how to accomplish this efficiently using R.
Understanding the Problem
Let’s start with a simple scenario. You have two data frames, and you want to merge them based on two columns: ID and Year. The goal is to combine the data where the ID and Year values match in both data frames.
Examples
Example Data
For demonstration purposes, let’s create two sample data frames:
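The post's original sample values aren't shown in this excerpt, so the frames below use made-up numbers (hypothetical, only so the merge calls that follow are runnable). Both frames share the ID and Year key columns:
# Hypothetical sample data; both frames share the ID and Year key columns
df1 <- data.frame(
  ID    = c(1, 2, 3, 4),
  Year  = c(2020, 2020, 2021, 2021),
  Sales = c(100, 150, 200, 250)
)

df2 <- data.frame(
  ID     = c(1, 2, 3, 5),
  Year   = c(2020, 2020, 2021, 2021),
  Profit = c(10, 15, 20, 25)
)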
Example 1: Inner Join
An inner join combines rows from both data frames where there is a match on the specified columns (ID and Year in this case). Rows with unmatched values are excluded.
# Merge based on ID and Year using inner join
merged_inner <- merge(df1, df2, by = c("ID", "Year"))
Example 2: Left Join
A left join retains all rows from the left data frame (df1), and includes matching rows from the right data frame (df2). If there is no match, NA values are filled in for the columns from df2.
# Merge based on ID and Year using left join
merged_left <- merge(df1, df2, by = c("ID", "Year"), all.x = TRUE)
Example 3: Right Join
A right join retains all rows from the right data frame (df2), and includes matching rows from the left data frame (df1). If there is no match, NA values are filled in for the columns from df1.
# Merge based on ID and Year using right join
merged_right <- merge(df1, df2, by = c("ID", "Year"), all.y = TRUE)
Example 4: Full Join
A full join retains all rows from both data frames, filling in NA values for columns where there is no match.
# Merge based on ID and Year using full join
merged_full <- merge(df1, df2, by = c("ID", "Year"), all = TRUE)
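With the hypothetical df1 and df2 defined earlier, you can compare how many rows each join keeps (the counts shown apply only to that sample data):
# Row counts for each join of the hypothetical df1/df2 defined above
sapply(list(inner = merged_inner,
            left  = merged_left,
            right = merged_right,
            full  = merged_full),
       nrow)
#> inner  left right  full
#>     3     4     4     5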
Conclusion
Merging data frames based on multiple columns is a common operation in data analysis. By using functions like merge() in R, you can efficiently combine data from different sources while retaining flexibility in how you handle unmatched values.
I encourage you to try these examples with your own data sets and explore the various options available for merging data frames. Understanding how to effectively merge data is an essential skill for any data professional, and mastering it will greatly enhance your ability to derive insights from your data. Happy merging!
Merging Data Frames Based on Multiple Columns in R: Future Implications and Advice
In the era of Big Data, data scientists and analysts often find themselves having to merge data from different sources. Merging data frames is a common operation in data analysis, typically carried out in software like R, as discussed in detail in the article from Steve’s Data Tips and Tricks. The article focuses on merging data frames based on multiple columns in R; this summary highlights the long-term implications and future developments of that process.
Understanding the Process
As the article shows, you may often need to combine two data frames based on two columns, in this case ‘ID’ and ‘Year’. The primary goal in these scenarios is to merge the data where the ‘ID’ and ‘Year’ values correspond in both data frames. The four types of merges covered, Inner Join, Left Join, Right Join, and Full Join, illustrate the options.
Inner Join: This merge combines rows from both data frames based on matching values on specified columns. Non-matching values are left out.
Left Join: This merge retains all rows from the left data frame and includes matching rows from the right one. Non-matching rows in the right are filled with NA values.
Right Join: This merge retains all rows from the right data frame, along with matching rows from the left one. Non-matching rows in the left are filled with NA values.
Full Join: This merge retains all rows from both data frames and fills in NA values for columns with non-matching values.
Future Implications
This article’s techniques underpin a significant capability for data scientists or any other data-related professionals. With our growing reliance on data, the ability to effectively merge and manipulate data will come to define future innovations. These merging techniques, in particular, will aid in the crucial task of data cleaning, which is paramount in the creation of accurate predictive models and statistics.
As we see a shift of data storage to cloud-based sources like AWS and Google Cloud, these techniques may also find practical applications in managing and integrating large datasets. Combining separate datasets is also a fundamental step in creating data lakes, which many businesses presently employ to analyze big data.
Actionable Advice
Understanding these merging techniques is indeed essential. The following actionable advice can be recommended:
Practice merging data frames with these techniques on a variety of data sets; this will help you learn and apply the functions effectively.
Keep abreast with changes and improvements related to these techniques in R. The R community is very active, and updates are frequent.
Consider familiarizing yourself with similar operations in other languages like Python. Techniques in data merging are quite standard and will commonly find application in any data analysis workflow.
In conclusion, the techniques highlighted in the article from Steve’s Data Tips and Tricks provide an insightful resource for data scientists. Effectively merging data is an essential process, aiding in the derivation of accurate insights from data. Happy merging!