DELTA: Decomposed Efficient Long-Term Robot Task Planning using…


Recent advancements in Large Language Models (LLMs) have sparked a revolution across various research fields. In particular, the integration of common-sense knowledge from LLMs into robot task and automation systems has opened up new possibilities for improving their performance and adaptability. This article explores the impact of incorporating common-sense knowledge from LLMs into robot task and automation systems, highlighting the potential benefits and challenges associated with this integration. By leveraging the vast amount of information contained within LLMs, robots can now possess a deeper understanding of the world, enabling them to make more informed decisions and navigate complex environments with greater efficiency. However, this integration also raises concerns regarding the reliability and biases inherent in these language models. The article delves into these issues and discusses possible solutions to ensure the responsible and ethical use of LLMs in robotics. Overall, the advancements in LLMs hold immense promise for revolutionizing the capabilities of robots and automation systems, but careful consideration must be given to the potential implications and limitations of these technologies.

Exploring the Power of Large Language Models (LLMs) in Revolutionizing Research Fields

Recent advancements in Large Language Models (LLMs) have sparked a revolution across various research fields. These models have the potential to reshape the way we approach problem-solving and knowledge integration in fields such as robotics, linguistics, and artificial intelligence. One area where the integration of common-sense knowledge from LLMs shows great promise is in robot task and interaction.

The Potential of LLMs in Robotics

Robots have always been limited in their ability to understand and interact with the world around them. Traditional approaches rely on predefined rules and structured data, which can be time-consuming to engineer and limited in their applicability. LLMs, however, offer a new avenue for robots to understand and respond to human commands and to navigate complex environments.

By integrating LLMs into robotics systems, robots can tap into vast amounts of common-sense knowledge, enabling them to make more informed decisions. For example, a robot tasked with household chores can utilize LLMs to understand and adapt to various scenarios, such as distinguishing between dirty dishes and clean ones or knowing how fragile certain objects are. This integration opens up new possibilities for robots to interact seamlessly with humans and their surroundings.

Bridging the Gap in Linguistics

LLMs also have the potential to revolutionize linguistics, especially in natural language processing (NLP) tasks. Traditional NLP models often struggle with understanding context and inferring implicit meanings. LLMs, on the other hand, can leverage their vast training data to capture nuanced language patterns and semantic relationships.

With the help of LLMs, linguists can gain deeper insights into language understanding, sentiment analysis, and translation tasks. These models can assist in accurately capturing fine-grained meanings, even in complex sentence structures, leading to more accurate and precise language processing systems.

Expanding the Horizon of Artificial Intelligence

Artificial Intelligence (AI) systems have always relied on structured data and predefined rules to perform tasks. However, LLMs offer a path towards more robust and adaptable AI systems. By integrating common-sense knowledge from LLMs, AI systems can overcome the limitations of predefined rules and rely on real-world learning.

LLMs enable AI systems to learn from vast amounts of unstructured text data, improving their ability to understand and respond to human queries or tasks. This integration allows AI systems to bridge the gap between human-like interactions and intelligent problem-solving, offering more effective and natural user experiences.

Innovative Solutions and Ideas

As the potential of LLMs continues to unfold, researchers are exploring various innovative solutions and ideas to fully leverage their power. One area of focus is enhancing the ethical considerations of LLM integration. Ensuring unbiased and reliable outputs from LLMs is critical to prevent reinforcing societal biases or spreading misinformation.

Another promising avenue is collaborative research between linguists, roboticists, and AI experts. By leveraging the expertise of these diverse fields, researchers can develop interdisciplinary approaches that push the boundaries of LLM integration across different research domains. Collaboration can lead to breakthroughs in areas such as explainability, human-robot interaction, and more.

Conclusion: Large Language Models have ushered in a new era of possibilities in various research fields. From robotics to linguistics and artificial intelligence, the integration of common-sense knowledge from LLMs holds great promise for revolutionizing research and problem-solving. With collaborative efforts and a focus on ethical considerations, LLMs can pave the way for innovative solutions, enabling robots to better interact with humans, linguists to delve into deeper language understanding, and AI systems to provide more human-like experiences.

The integration of common-sense knowledge from LLMs into robot task and automation systems has opened up new possibilities for intelligent machines. These models, such as OpenAI’s GPT-3, have shown remarkable progress in understanding and generating human-like text, enabling them to comprehend and respond to a wide range of queries and prompts.

The integration of common-sense knowledge into robot task and automation systems is a significant development. Common-sense understanding is crucial for machines to interact with humans effectively and navigate real-world scenarios. By incorporating this knowledge, LLMs can exhibit more natural and context-aware behavior, enhancing their ability to assist in various tasks.

One potential application of LLMs in robot task and automation systems is in customer service. These models can be utilized to provide personalized and accurate responses to customer queries, improving the overall customer experience. LLMs’ ability to understand context and generate coherent text allows them to engage in meaningful conversations, addressing complex issues and resolving problems efficiently.

Moreover, LLMs can play a vital role in autonomous vehicles and robotics. By integrating these language models into the decision-making processes of autonomous systems, machines can better understand and interpret their environment. This enables them to make informed choices, anticipate potential obstacles, and navigate complex situations more effectively. For example, an autonomous car equipped with an LLM can understand natural language instructions from passengers, ensuring a smoother and more intuitive human-machine interaction.

However, there are challenges that need to be addressed in order to fully leverage the potential of LLMs in robot task and automation systems. One major concern is the ethical use of these models. LLMs are trained on vast amounts of text data, which can inadvertently include biased or prejudiced information. Careful measures must be taken to mitigate and prevent the propagation of such biases in the responses generated by LLMs, ensuring fairness and inclusivity in their interactions.

Another challenge lies in the computational resources required to deploy LLMs in real-time applications. Large language models like GPT-3 are computationally expensive, making it difficult to implement them on resource-constrained systems. Researchers and engineers must continue to explore techniques for optimizing and scaling down these models without sacrificing their performance.

Looking ahead, the integration of LLMs into robot task and automation systems will continue to evolve. Future advancements may see the development of more specialized LLMs, tailored to specific domains or industries. These domain-specific models could possess even deeper knowledge and understanding, enabling more accurate and context-aware responses.

Furthermore, ongoing research in multimodal learning, combining language with visual and audio inputs, will likely enhance the capabilities of LLMs. By incorporating visual perception and auditory understanding, machines will be able to comprehend and respond to a broader range of stimuli, opening up new possibilities for intelligent automation systems.

In conclusion, the integration of common-sense knowledge from Large Language Models into robot task and automation systems marks a significant advancement in the field of artificial intelligence. These models have the potential to revolutionize customer service, autonomous vehicles, and robotics by enabling machines to understand and generate human-like text. While challenges such as bias mitigation and computational resources remain, continued research and development will undoubtedly pave the way for even more sophisticated and context-aware LLMs in the future.
Read the original article

“NeuroPrune: Efficient Sparsity Approaches for Transformer-based Language Models in NLP”


Abstract: This article discusses the use of sparsity approaches in Transformer-based Language Models to address the challenges of scalability and efficiency in training and inference. Transformer-based models have shown outstanding performance in Natural Language Processing (NLP) tasks, but their high resource requirements limit their widespread applicability. By examining the impact of sparsity on network topology, the authors draw inspiration from biological neuronal networks and propose NeuroPrune, a model-agnostic sparsity approach. Despite not focusing solely on performance optimization, NeuroPrune demonstrates competitive or superior performance compared to baselines on various NLP tasks, including classification and generation. Additionally, NeuroPrune significantly reduces training time and exhibits improvements in inference time in many cases.

Introduction

Transformer-based Language Models have revolutionized NLP with their exceptional performance across diverse tasks. However, their resource-intensive nature poses significant challenges in terms of training and inference efficiency. To overcome this hurdle, the authors explore the application of sparsity techniques inspired by biological networks.

Sparsity and Network Topology

The authors highlight the importance of understanding the impact of sparsity on network topology. They propose mechanisms such as preferential attachment and redundant synapse pruning that mimic the behavior of biological neuronal networks. By incorporating these principles into sparsity approaches, they aim to enhance the efficiency and performance of Transformer-based Language Models.

NeuroPrune: A Model-Agnostic Sparsity Approach

NeuroPrune is introduced as a principled, model-agnostic sparsity approach that leverages the insights from biological networks. It aims to address the challenges of scalability and efficiency in Transformer-based Language Models. Despite not solely focusing on performance optimization, NeuroPrune demonstrates competitive results compared to the baseline models on both classification and generation tasks in NLP.
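To make the idea of sparsification concrete, here is a generic magnitude-pruning sketch in R. It is not NeuroPrune's algorithm (which uses topology-inspired criteria such as preferential attachment and redundant synapse pruning); it only illustrates the simplest version of the underlying operation, zeroing out low-importance weights until a target sparsity is reached:

```r
# Generic magnitude pruning: zero out the weakest connections in a weight
# matrix until a target fraction of entries is zero.
prune_by_magnitude <- function(W, sparsity = 0.5) {
  threshold <- quantile(abs(W), probs = sparsity)
  W[abs(W) < threshold] <- 0
  W
}

set.seed(42)
W <- matrix(rnorm(64), nrow = 8)
W_sparse <- prune_by_magnitude(W, sparsity = 0.75)
mean(W_sparse == 0)  # approximately 0.75 of the entries are now zero
```

Approaches like NeuroPrune differ from this baseline in *which* connections they keep: rather than raw magnitude, they favor structural patterns observed in biological networks.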

Key Findings

NeuroPrune offers several noteworthy advantages over traditional models:

  1. Reduced Training Time: NeuroPrune achieves up to 10 times faster training time for a given level of sparsity compared to baselines. This improvement in efficiency is crucial for large-scale NLP applications.
  2. Improved Inference Time: In many cases, NeuroPrune exhibits measurable improvements in inference time. This benefit is particularly significant in real-time applications and systems where low latency is crucial.
  3. Competitive Performance: Despite not solely optimizing for performance, NeuroPrune performs on par with or surpasses baselines on various NLP tasks, including natural language inference, summarization, and machine translation.

Conclusion

The exploration of sparsity approaches in Transformer-based Language Models through the lens of network topology has yielded promising results. NeuroPrune, a model-agnostic sparsity approach inspired by biological networks, demonstrates competitive performance, reduced training time, and improvements in inference time. These findings open new avenues for addressing the scalability and efficiency challenges in NLP tasks, paving the way for broader applicability of Transformer-based models.

“By exploiting mechanisms seen in biological networks, NeuroPrune presents an innovative approach to sparsity in Transformer-based models. Its efficiency gains in training and inference time, coupled with its competitive performance, make it a compelling solution for large-scale NLP applications.”

– Expert Commentator

Read the original article

“Author Correction: SARS-CoV-2 Antibody Structures”


Analyzing Future Trends in Therapeutic Strategies using SARS-CoV-2 Neutralizing Antibody Structures

As the world continues to battle with the ongoing COVID-19 pandemic, scientific advancements in understanding the structure and function of neutralizing antibodies against SARS-CoV-2 have been crucial in shaping therapeutic strategies. A recent article published in Nature on April 2, 2024, titled “Author Correction: SARS-CoV-2 neutralizing antibody structures inform therapeutic strategies,” provides valuable insights into the key points and potential future trends related to this crucial area of research.

Understanding Neutralizing Antibodies against SARS-CoV-2

Neutralizing antibodies play a vital role in fighting viral infections by specifically targeting and neutralizing the infecting virus. In the case of SARS-CoV-2, the virus responsible for the COVID-19 pandemic, neutralizing antibodies have become an area of intense focus. This recent article highlights the importance of understanding the structures of these antibodies and how they interact with the virus.

The study utilizes advanced imaging techniques, such as cryo-electron microscopy, to determine the atomic-level structures of SARS-CoV-2 neutralizing antibodies. By determining these structures, researchers gain insights into the precise regions of the virus targeted by neutralizing antibodies, as well as the mechanisms by which they prevent infection.

Predicting Future Trends

Based on the key findings and insights provided in the article, several potential future trends can be anticipated in the field of therapeutic strategies against SARS-CoV-2 and other viral infections:

  1. Rational design of antibody-based therapeutics: The detailed understanding of the atomic-level structures of neutralizing antibodies will likely facilitate the rational design of antibody-based therapeutics. Researchers can now precisely engineer antibodies to enhance their neutralizing capabilities or modify them to target specific regions of the virus more effectively. This approach holds promise for developing highly potent and targeted therapies against SARS-CoV-2 and other viral pathogens.
  2. Combination therapies: Another future trend may involve the development of combination therapies that utilize multiple neutralizing antibodies with complementary mechanisms of action. By targeting different regions of the virus simultaneously, the chances of viral escape and resistance can be significantly reduced. This approach has shown promise in other viral infections, and the insights gained from neutralizing antibody structures can further fuel the development of combination therapies against SARS-CoV-2.
  3. Therapeutic antibody cocktails: Building on the concept of combination therapies, the article suggests the potential use of antibody cocktails consisting of multiple neutralizing antibodies. This strategy allows for the targeting of various viral variants and can be especially effective against rapidly evolving viruses such as SARS-CoV-2. By utilizing a diverse array of antibodies, these cocktails can provide a robust and adaptable defense against viral infections.
  4. Broad-spectrum antivirals: The detailed understanding of neutralizing antibody structures can also pave the way for the development of broad-spectrum antiviral therapies. By targeting conserved regions of viral proteins, these therapeutics can potentially combat a range of related viruses beyond SARS-CoV-2. The insights gained from this research can aid in the identification and design of broad-spectrum antivirals that can be deployed in future outbreaks.

Recommendations for the Industry

Based on the future trends discussed above, several recommendations can be made for the industry involved in therapeutic development against viral infections:

  1. Invest in structural biology research: Given the significant impact of understanding neutralizing antibody structures, increased investment in structural biology research is warranted. This includes the use of advanced imaging techniques and computational modeling to determine the atomic-level structures of neutralizing antibodies and viral targets. Such investments will provide a solid foundation for the development of innovative therapeutic strategies against both known and emerging viral infections.
  2. Establish collaborative networks: Collaboration between academia, industry, and government agencies is crucial for accelerating therapeutic development. The complex nature of viral infections necessitates multidisciplinary approaches and the sharing of knowledge, resources, and data. Establishing collaborative networks will foster synergy and enable faster translation of scientific discoveries into clinical applications.
  3. Encourage regulatory flexibility: As new therapeutic strategies emerge, it is essential for regulatory bodies to adapt and provide flexibility in evaluating and approving these innovations. The rapid development and deployment of therapeutics against emerging viral threats require streamlined regulatory processes that prioritize safety and efficacy while also allowing for timely access to potentially life-saving treatments.
  4. Invest in manufacturing capacity: With the potential for antibody cocktails and combination therapies, it is crucial to invest in manufacturing capacity to meet the increased demand. Flexibility in manufacturing platforms, scalability, and efficient production processes will be essential to ensure an adequate supply of these therapeutics during outbreaks.

Conclusion

The article “SARS-CoV-2 neutralizing antibody structures inform therapeutic strategies” published in Nature provides significant insights into the future trends of therapeutic strategies against SARS-CoV-2 and other viral infections. The understanding of neutralizing antibody structures offers opportunities for rational design, combination therapies, antibody cocktails, and broad-spectrum antivirals. The industry should prioritize investments in structural biology research, foster collaboration, encourage regulatory flexibility, and invest in manufacturing capacity to effectively respond to future viral outbreaks. By embracing these recommendations, we can enhance our ability to combat viral infections and minimize their global impact.

Reference: Nature. (2024, April 2). Author Correction: SARS-CoV-2 neutralizing antibody structures inform therapeutic strategies. Retrieved from https://doi.org/10.1038/s41586-024-07344-w

rOpenSci Monthly News: Leadership Changes, Multilingual Dev Guide, New Packages, and More


[This article was first published on rOpenSci – open tools for open science, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)



Dear rOpenSci friends, it’s time for our monthly news roundup!

You can read this post on our blog.
Now let’s dive into the activity at and around rOpenSci!

rOpenSci HQ

Leadership changes at rOpenSci

After 13 years at the helm of rOpenSci, our founding executive director Karthik Ram is stepping down.
Noam Ross, rOpenSci’s current lead for peer review, will be our new Executive Director.
Karthik will remain a key advisor to rOpenSci.
We thank him for his years of leadership and service to the community!

Read Karthik’s farewell post, and Noam’s post about his new role on our blog

rOpenSci Dev Guide 0.9.0: Multilingual Now! And Better

We’re delighted to announce we’ve released a new version of our guide,
“rOpenSci Packages: Development, Maintenance, and Peer Review”!

A highlight is that our guide is now bilingual (English and Spanish), thanks to the work of Yanina Bellini Saibene, Elio Campitelli, and Pao Corrales, and to the support of the Chan Zuckerberg Initiative, NumFOCUS, and the R Consortium.
Read the guide in Spanish.

Our guide is now also getting translated to Portuguese thanks to volunteers.
We are very grateful for their work!

Read more in the blog post about the release.
Thanks to all contributors who made this release possible.

Interview with Code for Thought podcast

Our community manager, Yanina Bellini Saibene, talked with Peter Schmidt of the Code for Thought podcast, about the importance of making computing materials accessible to non-English speaking learners.
Listen to the episode.
Find out more about the rOpenSci multilingual publishing project.

Coworking

Read all about coworking!

Join us for social coworking & office hours monthly on first Tuesdays!
Hosted by Steffi LaZerte and various community hosts.
Everyone welcome.
No RSVP needed.
Consult our Events page to find your local time and how to join.

And remember, you can always cowork independently on work related to R, work on packages that tend to be neglected, or work on whatever you need to get done!

Software 📦

New packages

The following three packages recently became a part of our software suite, or were recently reviewed again:

  • nuts, developed by Moritz Hennicke together with Werner Krause: Motivated by changing administrative boundaries over time, the nuts package can convert European regional data with NUTS codes between versions (2006, 2010, 2013, 2016 and 2021) and levels (NUTS 1, NUTS 2 and NUTS 3). The package uses spatial interpolation as in Lam (1983) doi:10.1559/152304083783914958 based on granular (100m x 100m) area, population and land use data provided by the European Commission’s Joint Research Center. It is available on CRAN. It has been reviewed by Pueyo-Ros Josep and Le Meur Nolwenn.

  • quadkeyr, developed by Florencia D’Andrea together with Pilar Fernandez: Quadkeyr functions generate raster images based on QuadKey-identified data, facilitating efficient integration of Tile Maps data into R workflows. In particular, Quadkeyr provides support to process and analyze Facebook mobility datasets within the R environment. It has been reviewed by Maria Paula Caldas and Vincent van Hees.

  • weatherOz, developed by Rodrigo Pires together with Anna Hepworth, Rebecca O’Leary, Jonathan Carroll, James Goldie, Dean Marchiori, Paul Melloy, Mark Padgham, Hugh Parsonage, Keith Pembleton, and Adam H. Sparks: Provides automated downloading, parsing, and formatting of weather data for Australia through API endpoints provided by the Department of Primary Industries and Regional Development (DPIRD) of Western Australia and by the Science and Technology Division of the Queensland Government’s Department of Environment and Science (DES), as well as the Australian government Bureau of Meteorology (BOM) précis and coastal forecasts and agriculture bulletin data, with support for downloading and importing radar and satellite imagery files. It has been reviewed by Laurens Geffert and Sam Rogers.

Discover more packages, read more about Software Peer Review.

New versions

The following nineteen packages have had an update since the last newsletter: frictionless (v1.0.3), aRxiv (0.10), cffr (v1.0.0), chromer (v0.8), drake (7.13.9), GSODR (v4.0.0), lightr (v1.7.1), lingtypology (v1.1.17), magick (2.8.3), melt (v1.11.2), nodbi (v0.10.4), nuts (v1.0.0), paleobioDB (v1.0.0), quadkeyr (v0.1.0), rtweet (v2.0.0), ruODK (v1.4.2), spocc (v1.2.3), tarchetypes (0.8.0), and targets (1.6.0).

Software Peer Review

There are thirteen recently closed and active submissions and six submissions on hold. Issues are at different stages.

Find out more about Software Peer Review and how to get involved.

On the blog

Software Review

Tech Notes

Calls for contributions

Calls for maintainers

If you’re interested in maintaining any of the R packages below, you might enjoy reading our blog post What Does It Mean to Maintain a Package?.

Calls for contributions

Also refer to our help wanted page – before opening a PR, we recommend asking in the issue whether help is still needed.

Package development corner

Some useful tips for R package developers. 👀

Reminder: R Consortium Infrastructure Steering Committee (ISC) Grant Program Accepting Proposals until April 1st!

The R Consortium Call for Proposal might be a relevant funding opportunity for your package!
Find out more in their post.
If you can’t prepare your proposal in time, the next call will start September 1st.

@examplesIf for conditional examples in package manuals

Do you know you can make some examples of your package manual conditional on, say, the session being interactive?
The @examplesIf roxygen2 tag is really handy.
What’s more, inside the examples of a single manual page, you can seamlessly mix and match @examples and @examplesIf pieces.
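As a sketch (the function and its examples are hypothetical), a manual page mixing unconditional and conditional examples might look like this:

```r
#' Prompt the user for confirmation
#'
#' @examples
#' # Runs everywhere, including during CRAN checks:
#' 1 + 1
#' @examplesIf interactive()
#' # Runs only when the session is interactive:
#' readline("Press enter to continue: ")
#' @export
confirm <- function() invisible(TRUE)
```

When roxygen2 builds the `.Rd` file, the `@examplesIf` chunk is wrapped so that it is skipped whenever the condition evaluates to `FALSE`.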

‘argument “..2” is missing, with no default’

Mike Mahoney posted an important PSA on Mastodon:

if you’re getting a new error message ‘argument “..2” is missing, with no default’ on #rstats 4.3.3, it’s likely because you have a trailing comma in a call to glue::glue()
seeing this pop up in a few Slacks so figured I’d share
https://github.com/tidyverse/glue/issues/320

Thanks, Mike!
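A minimal reproduction of the failure mode, assuming the glue package is installed:

```r
library(glue)

name <- "world"
# The trailing comma after the string is parsed as an empty second
# argument, which glue() then fails to evaluate on R 4.3.3:
glue("hello {name}", )
#> Error: argument "..2" is missing, with no default
```

Deleting the trailing comma resolves the error; see the linked glue issue for the details.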

Useful hack: a CRAN-specific .Rbuildignore

The .Rbuildignore file lists the files to not be included when building your package, such as your pkgdown configuration file.
Trevor L. Davis posted a neat idea on Mastodon: using a CRAN-specific .Rbuildignore, so that CRAN submissions omit some tests and vignettes to keep the package under the size limit.
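For illustration only, a CRAN-specific ignore file might add entries like the following (these paths are hypothetical; each line of `.Rbuildignore` is a Perl-compatible regular expression matched against file paths in the package):

```
^vignettes/articles$
^tests/testthat/test-slow\.R$
^data-raw$
```

You would then swap this file in (for instance via a Makefile rule) when building the tarball destined for CRAN, keeping the full test and vignette set for local and CI builds.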

Regarding tests themselves, remember that you can skip some or all of them on CRAN (but make sure you're still running them in continuous integration!).
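With testthat, skipping on CRAN is a one-liner via `skip_on_cran()` (the test below is a placeholder sketch):

```r
library(testthat)

# In a testthat test file: this test is skipped during CRAN checks but
# still runs locally and on CI, where the NOT_CRAN environment variable
# is typically set.
test_that("slow network-dependent behaviour", {
  skip_on_cran()
  expect_true(TRUE)  # stand-in for the real expectations
})
```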

Key advantages of using the keyring package

If your package needs the user to provide secrets, like API tokens, to work, you might be interested in wrapping or recommending the keyring package (maintained by Gábor Csárdi), that accesses the system credential store from R.
See this recent R-hub blog post.
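A minimal sketch of the pattern (`key_set()` and `key_get()` are keyring's functions; the "myapi" service name is a made-up example):

```r
library(keyring)

# Store a secret once; key_set() prompts the user, so the token never
# appears in scripts or in .Rhistory. "myapi" is a hypothetical service name.
key_set("myapi")

# Later, retrieve the token from the system credential store:
token <- key_get("myapi")
```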

A package for linting roxygen2 documentation

The compelling roxylint package by Doug Kelkhoff allows you to check some aspects of your roxygen2 docs, such as the use of full stops and sentence case.
See the list of current rules.

Last words

Thanks for reading! If you want to get involved with rOpenSci, check out our Contributing Guide that can help direct you to the right place, whether you want to make code contributions, non-code contributions, or contribute in other ways like sharing use cases.
You can also support our work through donations.

If you haven’t subscribed to our newsletter yet, you can do so via a form. Until it’s time for our next newsletter, you can keep in touch with us via our website and Mastodon account.


Continue reading: rOpenSci News Digest, March 2024

Long-term implications and possible future developments

With numerous intriguing updates and developments mentioned in the rOpenSci news article, here are several long-term implications and possible future directions.

Leadership Changes at rOpenSci

The change in leadership from Karthik Ram to Noam Ross, both of whom have been integral to rOpenSci, is likely to shift the approach and direction of the organization. As Noam takes the helm, the strategic roadmap for rOpenSci may change and the organization’s priorities may evolve, leading to new initiatives and modifications to existing practices.

Enhanced Guide in Multiple Languages

The fact that rOpenSci’s guide is now bilingual (English and Spanish) has the potential to dramatically expand the organization’s reach to non-English-speaking audiences. The ongoing translation into Portuguese suggests a broader aim of making rOpenSci accessible to as many users worldwide as possible, and more language versions may be developed in the future.

Coworking and Community Building

rOpenSci’s coworking initiative helps foster a sense of community in which users can collaborate, learn from one another, and help improve and maintain various R packages. It can enhance creativity, productivity, and knowledge exchange, fostering a more robust R user base.

New Packages

The inclusion of new packages like ‘nuts’, ‘quadkeyr’, and ‘weatherOz’ demonstrates the growth and adaptability of the open-source software suite that rOpenSci provides. It makes rOpenSci a more versatile and valuable platform for open science, particularly for researchers working with European regional data, QuadKey-identified data, and Australian weather data, respectively.

Actionable advice based on these insights

If you are an existing member of rOpenSci, explore any new strategic directions that Noam Ross plans to implement in light of the leadership change, and find out how you can align your contributions with those plans. For all users, the availability of the guide in different languages lowers the barriers to using rOpenSci’s resources, so take this opportunity to deepen your understanding.

Engage in rOpenSci’s coworking sessions, which offer an opportunity to learn from and connect with other users across the globe. Explore the newly added packages and check whether any could benefit your research or contributions. Lastly, consider contributing to rOpenSci, whether through code or non-code contributions; proactive participation will enhance your skills and deepen your understanding of open science.

Read the original article

Preserving the Penobscot River: A Journey with R


[This article was first published on R Consortium, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here)



Angie Reed sampling Chlorophyll on the Penobscot River where a dam was removed 

In a recent interview by the R Consortium, Angie Reed, Water Resources Planner for the Penobscot Indian Nation, shared her experience learning and using R in river conservation and helping preserve a whole way of life. Educated in New Hampshire and Colorado, Angie began her career with the Houlton Band of Maliseet Indians, later joining the Penobscot Indian Nation. Her discovery of R transformed her approach to environmental statistics, leading to the development of an interactive R Shiny application for community engagement. 

pαnawάhpskewi (Penobscot people) derive their name from the pαnawάhpskewtəkʷ (Penobscot River), and their view of the Penobscot River as a relative guides all of the Water Resources Program’s efforts. This perspective is also reflected in the Penobscot Water Song, which thanks the water and expresses love and respect.  Angie has been honored to:

  • work for the Water Resources Program, 
  • contribute to the Tribal Exchange Network Group,
  • engage young students in environmental stewardship and R coding, blending traditional views with modern technology for effective environmental protection and community involvement, and
  • work with Posit to develop the animated video about Penobscot Nation and show it at the opening of posit::conf 2024

Please tell us about your background and how you came to use R as part of your work on the Penobscot Indian Nation.

I grew up in New Hampshire and completed my Bachelor of Science at the University of New Hampshire, followed by a Master of Science at Colorado State University. After spending some time out west, I returned to the Northeast for work. I began by joining the Houlton Band of Maliseet Indians in Houlton, Maine, right after finishing my graduate studies in 1998. Then, in 2004, I started working with the Penobscot Indian Nation. Currently, I work for both tribes, full-time with Penobscot and part-time with Maliseet.

My first encounter with R was during an environmental statistics class taught by Dennis Helsel, a former USGS employee, through his business Practical Stats. He introduced us to an R package called R Commander. Initially, I only used it for statistics, but soon I realized there was much more to R. I began teaching myself how to use ggplot for graphing. I spent months searching and learning, often frustrated, but it paid off as I started creating more sophisticated graphs for our reports.
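A first step beyond R Commander’s default plots might look like the sketch below. The data frame, station names, and values are invented purely for illustration; only the ggplot2 calls themselves are standard.

```r
# Minimal sketch (hypothetical data): a water-quality graph of the kind
# described above, built with ggplot2 rather than R Commander defaults.
library(ggplot2)

# Invented example data: monthly dissolved oxygen at two monitoring stations
wq <- data.frame(
  station = rep(c("Station A", "Station B"), each = 6),
  month   = rep(1:6, times = 2),
  do_mgl  = c(11.2, 10.8, 10.1, 9.4, 8.7, 8.1,
              10.9, 10.5, 9.8, 9.0, 8.4, 7.9)
)

# One line and point series per station, with labeled axes
p <- ggplot(wq, aes(x = month, y = do_mgl, colour = station)) +
  geom_line() +
  geom_point() +
  labs(x = "Month", y = "Dissolved oxygen (mg/L)",
       title = "Dissolved oxygen by station (example data)") +
  theme_minimal()

# print(p) draws the plot in an interactive session
```

From here, faceting, custom themes, and report-ready styling are incremental additions rather than a rewrite, which is part of what makes the learning investment pay off.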

We often collaborate with staff from the Environmental Protection Agency (EPA) in Region One (New England, including Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont and 10 Tribal Nations). One of their staff, Valerie Bataille, introduced us to R Carpentries classes. She organized a free class for tribal staff in our region. Taking that class was enlightening; I realized there was so much more I could have learned earlier, making my journey easier. This experience was foundational for me, marking the transition from seeing R as an environmental statistics tool to recognizing its broader applications. It’s a bit cliché, but this journey typifies how many people discover and learn new skills in this field.

The Penobscot Nation views the Penobscot River as a relative or family. How does that make water management for the Penobscot River different from other water resource management?

If you watch The River is Our Relative, the video delves deeper into seeing the river as a relative, a perspective that is both beautiful and challenging. This view fundamentally shifts how I perceive my work, imbuing it with a deeper meaning that transcends typical Western scientific approaches to river conservation. It’s a constant reminder that my job aligns with everything I believe in, reinforcing that there’s a profound reason behind my feelings.

Working with the Penobscot Nation and other tribal nations to protect their waters and ways of life is an honor and has revealed the challenges of conveying the differences in perspective to others. Often, attempts to bridge the gap get lost in translation. Many see their work as just a job, but for the Penobscot people, it’s an integral part of their identity. It’s not merely about accomplishing tasks; it’s about their entire way of life. The river provides sustenance, acts as a transportation route, and is a living relative to whom they have a responsibility. 

How does using open source software allow better sharing of results with Penobscot Nation citizens?

My co-worker, Jan Paul, and I had the pleasure of attending and presenting at posit::conf 2023 and working with Posit staff to create an animated video that describes what we do and how open source and Posit tools help us do it. It was heart-warming to watch the video shown to all attendees at the start of conf, and it was a great introduction to my shameless ask for help during my presentation and through a table where I offered a volunteer sign-up sheet/form. I was humbled by the number of generous offers and am already receiving assistance on a project I’ve been eager to accomplish: Jasmine Kindness, of One World Analytics, is helping me recreate a Tableau viz I made years ago as an interactive, map-based R Shiny tool.

I find that people connect more with maps, especially when it comes to visualizing data that is geographically referenced. For instance, if there’s an issue in the water, people can see exactly where it is on the map. This is particularly relevant as people in this area are very familiar with the Penobscot River watershed.  My aim is to create tools that are not only interactive but also intuitive, allowing users to zoom into familiar areas and understand what’s happening there. 

This experience has really highlighted the value of the open source community. It’s not just about the tools; it’s also about the people and the generosity within this community. The Posit conference was a great reminder of this, and working with someone so helpful and skilled has truly reinforced how amazing and generous this community is.

How has your use of R helped to achieve more stringent protections for the Penobscot River?

Before we started using open source tools, my team and I had been diligently working to centralize our data management system, which significantly improved our efficiency. A major shift occurred when we began using R and RStudio (currently Posit) to extract data from this system to create summaries. This has been particularly useful in a biennial process where the State of Maine requests data and proposals for upgrading water quality classifications.

In Maine, water bodies are classified into four major categories: AA, A, B, and C. If new data suggests that a water body currently classified at a lower grade could qualify for a higher classification, we can submit a proposal for the upgrade. In the past we have facilitated upgrades for hundreds of miles of streams, but compiling the data took much longer. In 2018, for the first time, we used R and RStudio to prepare a proposal to the Maine Department of Environmental Protection (DEP) to upgrade the last segment of the Penobscot River from C to B. With open source tools, we were able to quickly summarize and compile the data into the format needed for the proposal, a task that previously took significantly longer. DEP accepted our proposal because our data clearly supported the upgrade. In 2019, the proposal was passed, and we anticipate this process continuing to get easier in the future.
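The summarize-and-compare workflow described here can be sketched in base R roughly as follows. The segments, readings, and the 8 mg/L threshold are hypothetical stand-ins, not Maine DEP’s actual classification criteria.

```r
# Sketch with invented data: summarizing dissolved-oxygen samples per river
# segment and flagging segments that meet a hypothetical Class B threshold.
# (Real Maine DEP criteria are more involved; this only illustrates the
# summarize-and-compare workflow.)
samples <- data.frame(
  segment = rep(c("Lower Penobscot", "Upper Penobscot"), each = 5),
  do_mgl  = c(8.9, 9.4, 8.6, 9.1, 8.8,
              10.2, 9.8, 10.5, 9.9, 10.1)
)

# Minimum and mean DO per segment, using base R's aggregate()
summary_tbl <- aggregate(do_mgl ~ segment, data = samples,
                         FUN = function(x) c(min = min(x), mean = mean(x)))
summary_tbl <- do.call(data.frame, summary_tbl)  # flatten the matrix column

# Hypothetical criterion: minimum observed DO must stay above 8 mg/L
summary_tbl$meets_class_b <- summary_tbl$do_mgl.min > 8

print(summary_tbl)
```

Once a summary like this is scripted, rerunning it for the next biennial cycle is a matter of pointing the script at the new data, which is where the time savings described in the interview come from.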

You are part of a larger network of tribal environmental professionals, working together to learn R and share data and insights. Can you share details about how that works?

Jan Paul, Water Quality Lab Coordinator at Penobscot Nation, sampling in field

I’m involved in the Tribal Exchange Network Group (TXG), which is a national group of tribal environmental professionals like myself and is funded by a cooperative agreement with the Office of Information Management (OIM) at the Environmental Protection Agency (EPA). We work in various fields, such as air, water, and fisheries, focusing on environmental protection. Our goal is to ensure that tribes are well-represented in EPA’s Exchange Network, and we also assist tribes individually with managing their data.

Since attending a Carpentries class, I’ve been helping TXG organize and host many of them. We’ve held one every year since 2019, and we’re now moving towards more advanced topics. In addition to trainings, TXG provides a variety of activities and support, including small group discussions, 1-on-1 assistance, and conferences. Although COVID-19 disrupted our schedule, we are planning our next conference for this year.

Our smaller, more conversational monthly data drop-in sessions always include the opportunity to have a breakout room to work on R. People can come with their R-related questions, or the host might prepare a demo.

Our 1-on-1 tribal assistance hours allow Tribes to sign up for help with issues related to their specific data. I work with individuals on R code for various tasks, such as managing temperature sensor data or generating annual assessment reports in R Markdown format. These one-on-one sessions have significantly improved skill building and confidence among participants, and they are particularly effective because they use real data and often result in a tangible product, like a table or graph, which is exciting for participants. We’ve also seen great benefits in terms of staff turnover: when staff members leave, the program still has well-documented code, making it easier for their successors to pick up where they left off.
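As one illustration of the temperature-sensor work mentioned above, here is a minimal sketch, with invented readings and an invented 20 °C threshold, of collapsing hourly logger data into the kind of daily summary an assessment report might start from.

```r
# Sketch with invented data: collapsing sub-daily temperature-logger readings
# into daily summaries, a common first step before reporting in R Markdown.
readings <- data.frame(
  timestamp = as.POSIXct("2023-07-01 00:00", tz = "UTC") +
              seq(0, by = 3600, length.out = 48),  # 2 days of hourly readings
  temp_c    = 18 + 3 * sin(seq(0, 4 * pi, length.out = 48))  # synthetic diel cycle
)

readings$date <- as.Date(readings$timestamp)

# Daily mean and maximum temperature per calendar day
daily <- aggregate(temp_c ~ date, data = readings,
                   FUN = function(x) c(mean = mean(x), max = max(x)))
daily <- do.call(data.frame, daily)  # flatten the matrix column

# Flag days whose maximum exceeds a hypothetical 20 °C threshold
daily$exceeds_20c <- daily$temp_c.max > 20

print(daily)
```

A table like `daily` can then be dropped straight into an R Markdown report chunk, which is what makes the workflow survive staff turnover: the code documents the processing steps.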

Additionally, I’ve been involved in forming a Pacific Northwest Tribal coding group, which still doesn’t have an official name as it is only a few months old. It began from discussions with staff from the Northwest Indian Fisheries Commission (NWIFC) and staff from member Tribes. And I am thrilled to say we’ve already attracted many new members from staff of the Columbia River Inter-Tribal Fish Commission (CRITFC). This group is a direct offshoot of the TXG efforts with Marissa Pauling of NWIFC facilitating, and we’re excited about the learning opportunities it presents.

Our work, including the tribal assistance hours, is funded through a grant that reimburses the Penobscot Nation for the time I spend on these activities. As we move forward with the coding group, planning to invite speakers and organize events, it’s clear there’s much to share with this audience, possibly in future blogs like this one. This work is all part of our broader effort to support tribes in their environmental data management endeavors. If anyone would like to offer their time toward these kinds of assistance, they can use the TXG Google form to sign up.

How do you engage with young people?

I am deeply committed to engaging the younger generation, especially the students at Penobscot Nation’s Indian Island school (pre-K through 8th grade). In our Water Resources Program at Penobscot Nation, we actively involve these students in our river conservation efforts. We see ourselves not just as their employees but as protectors of the river for their future.

Sampling for Bacteria 

Our approach includes hands-on activities like taking students to the river for bacteria monitoring. They participate in collecting samples and processing them in our lab, gaining practical experience in environmental monitoring. This hands-on learning is now being enhanced with the development of the R Shiny app I’m working on with Jasmine, to make data interpretation more interactive and engaging for the students.

Recognizing their budding interest in technology, I’m also exploring the possibility of starting a mini R coding group at the school. With students already exposed to basic coding through MIT’s Scratch, advancing to R seems a promising and exciting step.

Beyond the Penobscot Nation school, we’re extending our reach to nearby schools like Orono Middle School. We recently involved eighth graders, including two Penobscot Nation citizens, in our bacteria monitoring project. This collaboration has motivated me to consider establishing R coding groups in these local schools, allowing our students continuous access to these learning opportunities.

Processing bacteria sample

My vision is to create a learning environment in local high schools where students can delve deeper into data analysis and coding. This initiative aims to extend our impact, ensuring students have continuous access to educational opportunities that merge environmental knowledge with tech skills and an appreciation of Penobscot people, culture and the work being done in our program.

Over the years, witnessing the growth of students who participated in our programs has been immensely gratifying. A particularly inspiring example is a young Penobscot woman, Shantel Neptune, who did an internship with us through the Wabanaki Youth in Science (WaYS) Program a few years back, then a data internship through TXG, and is now a full-time employee in the Water Resources Program. Shantel is also now helping to teach another young Penobscot woman, Maddie Huerth, about data collection, management, analysis, and visualization while she is our temporary employee. We’re planning sessions this winter to further enhance their R coding skills, a critical aspect of their professional development.

It’s essential to me that these women, along with others, receive comprehensive training. Our program’s success hinges on it being led by individuals who are not only skilled but who also embody Penobscot Nation’s values and traditions. Empowering young Penobscot citizens to lead these initiatives is not just a goal but a necessity. Their growth and development are vital to the continuity and integrity of our work, and I am committed to nurturing their skills and confidence. This endeavor is more than just education; it’s about preserving identity  and ensuring our environmental efforts resonate with the Penobscot spirit and needs.

The post Aligning Beliefs and Profession: Using R in Protecting the Penobscot Nation’s Traditional Lifeways appeared first on R Consortium.


Integration of R in River Conservation and Environmental Management: The Penobscot Nation’s Approach

In a recent interview by the R Consortium, Angie Reed, Water Resources Planner for the Penobscot Indian Nation, shared her experience using R, a programming language, in river conservation. R has helped revolutionize the approach to environmental statistics and foster community engagement through interactive R Shiny applications.

R Shaping Environmental Conservation for the Penobscot Indian Nation

The R programming language has transformed the way environmental data is handled, organized, and shared at the Penobscot Indian Nation. Angie Reed’s introduction to R through a class taught by a former USGS employee set the stage for a programming revolution within her workplace. By using tools like R Commander and ggplot, Angie and her colleagues have been able to create sophisticated graphs for their reports and streamline their processes.

Angie’s use of R has been influential in establishing more stringent protections for the Penobscot River. R and RStudio have allowed streamlined access and summarization of vital data, making the process of submitting upgrade proposals for water quality classifications significantly more efficient.

R’s Impact on Community Engagement and Environmental Stewardship

R’s geographically referenced data visualization tools are connecting the community with their local environment in a more intuitive way. The broad applicability of these tools shows immense promise for future conservation efforts, both for the Penobscot Indian Nation and beyond.

Additionally, R coding has been introduced to local students, marrying traditional views with modern technology for effective environmental protection and community involvement.

The Future of R in Conservation

R’s integration into river conservation sets a precedent for other water bodies and environmental resources. As the Penobscot Indian Nation’s use of R shows, open-source tools can heighten the effectiveness of environmental protection efforts by simplifying and speeding up data management. This success suggests that more organizations globally may adopt R in their environmental conservation efforts.

Actionable Advice Based on These Insights

Organizations involved in environmental conservation should tap into the potential of open-source tools such as R. They can achieve this by investing in upskilling their employees to learn programming languages. Additionally, they should incorporate data-driven decision-making into the environmental planning process. This approach will improve their environmental protection and preservation endeavours.

Organizations should also follow in the steps of the Penobscot Indian Nation by pioneering interactive community engagement. This can be done by creating geographically referenced data visualization tools that make environmental data accessible and comprehensible to the general public.

Lastly, organizations should consider introducing young people to coding and data-focused tasks related to environmental conservation. This approach serves a dual purpose: nurturing the next generation of environmental conservation advocates and equipping them with the technical skills needed to navigate the increasingly digital landscape of environmental stewardship.

Essentially, the use of modern technology like R, in combination with traditional environmental stewardship, can make a significant contribution to protecting the environment and engaging the community. The success of the Penobscot Nation serves as a blueprint for other tribes and organizations globally.

Read the original article