Satellite Data with R: Unveiling Earth’s Surface Using the ICESat2R Package

[This article was first published on R Consortium, and kindly contributed to R-bloggers.]

The R Consortium recently connected with Lampros Sp. Mouselimis, the creator of the ICESat2R package, to discuss the ICESat-2 mission, a significant initiative for understanding Earth’s surface dynamics. This NASA mission, utilizing the Advanced Topographic Laser Altimeter System (ATLAS), provides in-depth altimetry data, capturing Earth’s topography with unparalleled precision.

Mouselimis’ contribution, the ICESat2R package, is an R-based tool designed to streamline the analysis of ICESat-2 data. It simplifies accessing, processing, and visualizing the vast datasets generated by ATLAS, which emits 10,000 laser pulses per second to measure aspects like ice sheet elevation, sea ice thickness, and global vegetation biomass. This package enables users to analyze complex environmental changes such as ice-sheet elevation change, sea-ice freeboard, and vegetation canopy height more efficiently and accurately. The R Consortium funded this project. 
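For readers who want to try it, the package can be installed from CRAN under the name shown on its CRAN page (IceSat2R) and loaded in the usual way:

# install the package from CRAN and load it
install.packages("IceSat2R")
library(IceSat2R)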

Lampros Sp. Mouselimis is an experienced Data Analyst and Programmer who holds a degree in Business Administration and has received post-graduate training in Data Processing, Analysis, and Programming. His preferred programming language is R, but he can also work with Python and C++. His open-source work can be found on GitHub. With over a decade of experience in data processing using programming, he mainly works as a freelancer and runs his own business, Monopteryx, based in Greece. Outside of work, Lampros enjoys swimming, cycling, running, and tennis. He also takes care of two small agricultural fields that are partly filled with olive trees.

You built an R package called ICESat2R based on data from the ICESat-2 satellite. Do you consider your ICESat2R project a success?

ICESat2R has 7,252 downloads, which, considering the smaller group of researchers who focus on ICESat-2 data, qualifies it as a popular tool. It’s not as popular as some other remote sensing packages, but I believe it’s been a success based on two main points:

  • Contribution to the R user community: I hope that the R programmers who use the IceSat2R package are now able to process altimetry data without any issues, and, if any arise, I’ll be able to resolve them by updating the code in the GitHub and CRAN repositories.
  • Personal and professional achievement: I applied to the R Consortium for a grant, and my application was accepted. Moreover, I implemented the code following the milestone timelines. Seeing a project through and making it publicly available is, I believe, a success.
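For readers who want to verify a download figure like the one quoted above, counts from the RStudio CRAN mirror can be queried with the cranlogs package. This is a minimal sketch; the date range is an assumption and should be adjusted to the period of interest.

library(cranlogs)

# daily downloads of IceSat2R from the RStudio CRAN mirror
# (the date range below is illustrative)
dl <- cran_downloads(packages = "IceSat2R",
                     from = "2022-01-01", to = "2023-12-31")
sum(dl$count)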

Who uses ICESat2R, and what are the main benefits? Any unique benefits compared to the Python and Julia interfaces?

The users of the ICESat2R package can be professionals, researchers, or R programming users in general. I assume that these users could be:

  • Ice scientists, ecologists, and hydrologists (to name a few) who would be interested in the altimeter data to perform their research
  • Public authorities or military personnel, who, for instance, would like to process data related to high-risk events such as floods
  • Policy and decision-makers (the ICESat-2 data can be used, for instance, in resource management)
  • R users that would like to “get their hands dirty” with altimeter data

I am aware of the Python and Julia interfaces, and to tell the truth, I looked at the authors’ code bases before implementing the code, mainly because I wanted to find out the exact source they used to download the ICESat-2 data. 

Based on the current implementation, I would say that the benefits of the ICESat2R package are the following:

  • It allows R users to access NASA’s OpenAltimetry interface without requiring any credentials
  • It comes with detailed documentation and vignettes

What is an interesting example of using ICESat2R?

There are many examples where the ICESat2R package can be used. For instance, a potential use case would be to display differences between a Digital Elevation Model (Copernicus DEM) and land-ice-height ICESat-2 measurements. The image below shows the ICESat-2 land-ice height in winter (green) and summer (orange) compared to a DEM.

From the package vignette: ‘ICESat-2 Atlas Products’

A more detailed explanation of this use case is available in the package vignette ‘ICESat-2 Atlas Products’.
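As a rough illustration of the comparison described above, the sketch below samples a DEM at ICESat-2 point locations with the terra package and computes elevation differences. The file name, coordinates, and column names are assumptions made for illustration; the vignette documents the actual workflow and variable names used by the package.

library(terra)
library(dplyr)

# hypothetical inputs: a local Copernicus DEM GeoTIFF and a small data frame
# of ICESat-2 land-ice-height points (paths, values, and column names are
# illustrative, not the package's exact output)
dem <- rast("copernicus_dem_30m.tif")

icesat_pts <- tibble(
  longitude = c(23.71, 23.72, 23.73),
  latitude  = c(37.97, 37.98, 37.99),
  h_li      = c(152.3, 148.9, 150.1)   # land-ice height in meters
)

pts <- vect(icesat_pts, geom = c("longitude", "latitude"), crs = "EPSG:4326")

# sample the DEM at the ICESat-2 footprints and compute the difference
dem_vals <- terra::extract(dem, pts)

comparison <- icesat_pts %>%
  mutate(dem_elev = dem_vals[[2]],
         diff_m   = h_li - dem_elev)

comparison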

Were there any issues using OpenAltimetry API (the “cyberinfrastructure platform for discovery, access, and visualization of data from NASA’s ICESat-2 mission”)? (NOTE: Currently, the OpenAltimetry API website appears to be down?)

At the beginning of October 2023, I was informed that the OpenAltimetry website (previously https://openaltimetry.org/) had migrated to https://openaltimetry.earthdatacloud.nasa.gov/. I then contacted the National Snow & Ice Data Center support team, which informed me about the migration of the API interface.

Currently, I have an open issue in my GitHub repo related to this migration. Once the OpenAltimetry API becomes functional again, I’ll submit the updated version of the ICESat2R package to CRAN.

In your blog post for the copernicusDEM package, you included a code snippet showing how to load files, iterate over them, and use a for-loop to grab all the data. Can you provide something similar for ICESat2R?

Whenever I submit an R package to CRAN, I include one (or more) vignettes that explain the package’s functionality. Once the package is accepted, I also upload one of the vignettes to my personal blog. This was the case for the CopernicusDEM R package, and also for the ICESat2R package.
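In that spirit, here is a minimal sketch of the same load-and-iterate pattern applied to ICESat-2 data that has already been downloaded locally, for example one CSV extract per track or date. The directory name and file layout are assumptions; the download step itself, using the package’s functions, is covered in the vignettes.

library(tidyverse)

# hypothetical directory of previously downloaded ICESat-2 extracts,
# e.g. one CSV per track or date (adjust the path and pattern as needed)
files <- list.files("icesat2_downloads", pattern = "\\.csv$", full.names = TRUE)

all_data <- list()
for (f in files) {
  dat <- readr::read_csv(f, show_col_types = FALSE)
  dat$source_file <- basename(f)   # record which file each row came from
  all_data[[f]] <- dat
}

combined <- dplyr::bind_rows(all_data)
dim(combined)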

The current version of IceSat2R on CRAN (https://CRAN.R-project.org/package=IceSat2R) is 1.04. Are you still actively supporting IceSat2R? Are you planning to add any major features?

Yes, I still actively support IceSat2R. I always respond to issues related to the package and fix potential bugs or errors. The NEWS page of the package includes the updates since the first upload of the code base to GitHub.
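For completeness, if the package is installed, its NEWS entries can also be browsed directly from an R session:

# show the NEWS entries of the installed package
news(package = "IceSat2R")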

I don’t plan to add any new features in the near future, but I’m open to pull requests in the GitHub repository if a user would like to include new functionality that could benefit the R programming community.

About ISC Funded Projects

A major goal of the R Consortium is to strengthen and improve the infrastructure supporting the R Ecosystem. We seek to accomplish this by funding projects that will improve both technical infrastructure and social infrastructure.


Understanding the Long-Term Implications of ICESat2R

The ICESat2R package creator Lampros Sp. Mouselimis and the R Consortium have shed light on NASA’s ICESat-2 mission and its Advanced Topographic Laser Altimeter System (ATLAS). This R-based tool streamlines the examination of ICESat-2 data, simplifying access to, processing of, and visualization of the large datasets generated by ATLAS. With this tool, users can better analyze environmental changes like ice-sheet elevation change, sea-ice freeboard, and vegetation canopy height.

Implications and Future Developments

As Mouselimis has reflected, ICESat2R contributes significantly to the R user community and represents a personal and professional achievement. The package’s usefulness for wider applications points to promising developments, especially in environmental studies. Data from ICESat-2 can aid research by ice scientists, ecologists, and hydrologists, and can assist decision-making by public authorities and policymakers. Even R users looking for hands-on experience with altimeter data can benefit from it.

Future developments of open-source packages like ICESat2R could enable broader data analysis capabilities. While existing interfaces in Python and Julia are effective, the ICESat2R package offers unique benefits: it allows R users to access NASA’s OpenAltimetry interface without requiring any credentials, and it comes with detailed documentation.

Actionable Advice

Despite the current unavailability of the OpenAltimetry API, users can expect prompt updates from Mouselimis once it becomes functional again. For those interested in better understanding the package, vignettes explaining its functionality ship with the package on CRAN, and one is also published on Mouselimis’ personal blog after each release is accepted. It’s crucial to stay updated and make the most of these resources.

Notably, Mouselimis is open to pull requests in the GitHub repository if users would like to include new functionality that could benefit the R programming community. For those passionate about data analysis, this is a prime opportunity to contribute to the evolution of the package and, by extension, the field.

Overall, packages such as ICESat2R underscore the importance of collaborative development and the power of technology in understanding and addressing complex environmental issues. For anyone interested in data analysis, whether novice or professional, engaging with such tools can lead not only to skills development but also a significant contribution to pressing global matters.

Read the original article

“Thriving in Data Science: Skill Assessment, Networking, and Future Developments”


From Skill Assessment to Networking: Your Roadmap to Thriving in the World of Data Science.

Long-term Implications and Future Developments in Data Science

The world is increasingly becoming data-driven, and mastering data science skills is of paramount importance. The focus on skill assessment, networking, and other distinct facets of data science is crucial for business growth, job creation, and societal development. A keen understanding of these trends can help identify interventions that will ensure long-term survival and prosperity in the data science world.

Imperative of Skill Assessment

As we move further into an increasingly data-centric world, the importance of data science skills continues to rise. Companies across all industries are recognizing the value of data science for driving their key decision-making processes. Strong expertise in data science can open up a range of opportunities.

The implication here is that, in the long term, professionals who are proficient in data science will continue to be highly sought after. The demand for these skills will only increase as more data is collected and the need to understand it grows.

The Power of Networking

Networking in the field of data science has significant long-term implications. As with any industry, having connections with professionals and experts can lead to opportunities that would otherwise go unnoticed. As technologies evolve and business needs shift, having a solid professional network can be a powerful career driver.

Technological Advancements and Future Developments

Data Science will continue to evolve with the rapid pace of technological advancements. Developments in areas such as Artificial Intelligence (AI), Machine Learning (ML), big data, and cloud computing will shape the future of the field.

Inevitably, professionals who keep up-to-date with these advancements and continuously develop their skills accordingly will have an edge in the long term.

Actionable Advice

  1. Continuous Learning: To stay competitive in this rapidly-evolving field, continuous learning is essential. This includes refining existing data science skills and acquiring new ones in line with technological advancements.
  2. Networking: Building a solid professional network within the field of data science should be a priority. Attending industry events, joining relevant online communities, and being active on professional networking sites can help.
  3. Staying Updated: Keeping tabs on the latest news, trends, and developments in data science is crucial. Materials such as online resources, industry reports, webinars, and white papers can keep you informed and ahead of the curve.

In conclusion, to thrive in the world of Data Science, a commitment to continuous skill development and networking is crucial. Those who stay updated with the industry’s latest advancements will undoubtedly succeed.

Read the original article

Customer success stories illustrate how hardware accelerators speed up the infrastructure needed to support all aspects of an accelerated AI and HPC datacenter.

Implications and Future Developments of Hardware Accelerators in AI and HPC Datacenters

Customer success stories continue to highlight the significant role of hardware accelerators in supercharging infrastructure in the realms of accelerated Artificial Intelligence (AI) and High-Performance Computing (HPC) datacenters.

Long-term Implications

The use of hardware accelerators in AI and HPC datacenters carries long-term implications that could transform the landscape of computing. Among the most important are:

  • Boosted Computing Power: Hardware accelerators enhance the processing capabilities of datacenters, making them more efficient and enabling quicker data processing.
  • Increased AI Adoption: As tasks become less expensive and more efficient, businesses can more feasibly adopt AI technologies, leading to widespread digitization and automation.
  • Implications for Workforce: The increasing reliance on automated systems may lead to a shift in workforce needs, emphasizing specialized skills related to maintaining these systems.

Potential Future Developments

As hardware accelerators continue to evolve, we can expect several potential developments that could change the way AI and HPC datacenters operate:

  1. Improved Energy Efficiency: Future hardware accelerators will likely strive for improved energy efficiencies, creating a more sustainable computing environment.
  2. Faster Data Processing: With continuous advancements, one can expect even faster data processing speeds, leading to more efficient work processes.
  3. Better Customizability: Future hardware accelerators might offer greater customizability, meeting specific business needs and allowing for faster integration with existing systems.

Actionable Advice

“Preparing for the future wave of AI and HPC acceleration isn’t just about adopting the latest technology. It’s about understanding and anticipating how these tools will transform your business requirements and workforce needs. Keep abreast of latest developments, understand their potential implications, and be ready to adapt.”

Therefore, it’s crucial to invest time in understanding how hardware accelerators work currently, along with their future developments. This can assist in shaping a forward-thinking strategy for maintaining competitiveness and relevancy in an increasingly digitalized world.

Read the original article

Machine Learning Mastery Expands into Computer Vision: Implications and Opportunities


Today, we’re proud to announce a significant addition to our catalog at Machine Learning Mastery. Known for our detailed, code-centric guides, we’re taking a leap further into the realms of Computer Vision with our latest offering.

Machine Learning Mastery Expands With New Computer Vision Offering

Machine Learning Mastery has recently announced a substantial addition to its highly regarded catalogue of resources. Known for providing technical, code-driven guides, the company is now expanding its horizons into a particularly exciting and advancing field of study – Computer Vision.

Long-term Implications and Future Developments

As technologies continue to evolve, the ability for machines to view, understand, and interpret their environments becomes ever more crucial. With Machine Learning Mastery delving deeper into the fascinating and ever-progressing realm of Computer Vision, users can expect a variety of long-term implications benefiting the wider tech community.

Advancing Learning Opportunities

A key impetus lies in the potential to advance learning opportunities for those interested in this sector of technology. As Machine Learning Mastery is known for its detailed, code-driven guides, its move to provide resources on Computer Vision will provide an additional platform for learning. This expansion might inspire an increasing number of academically-inclined individuals, tech enthusiasts and professionals to take advantage of these resources and deepen their understanding of the field.

Democratizing Knowledge

By providing resources on Computer Vision, Machine Learning Mastery helps democratize knowledge of this advanced technology. This enables a wider audience to comprehend, experiment with, and contribute to this evolving field. The direct implication is a more aware tech society that is more proficient in handling computer-vision-related work, including the programming involved and the issues that might arise.

Fostering Innovation

Lastly, such efforts foster innovation. As Machine Learning Mastery sheds more light on this specialized area of technology, it might uncover unknown aspects, spark new ideas, or even lead to novel ways of applying Computer Vision technology in various industries. These resources might therefore be catalysts for future breakthroughs.

Actionable Advice

With the advent of this new and exciting resource, here are a few actionable pointers for interested individuals:

  1. Leverage the Resources: Make the most of this new resource by staying informed about the latest updates from Machine Learning Mastery. Subscribe to newsletters, engage with their community, and take full advantage of these learning tools.
  2. Continuous Learning: Technology is an ever-evolving field; thus, continuous learning remains crucial. Keep abreast with advancements by indulging in continual study and research.
  3. Experiment and Build: Transform theoretical knowledge into practical skills. Experiment with concepts learned, take on new projects and venture into developing your own applications using Computer Vision.
  4. Participate in Community: Engage in online communities to discuss ideas, ask questions, and share insights. The more active you are, the quicker you can better your understanding and develop your skills.

In conclusion, Machine Learning Mastery’s move into the realm of Computer Vision offers substantial benefits empowering learners and tech enthusiasts alike. The industry should expect growth in knowledge, innovative solutions, and tech proficiency as a result of this development.

Read the original article

Explore the power of graph databases in deciphering unstructured data. Uncover hidden connections for valuable insights and innovation.

Understanding the Power of Graph Databases in Deciphering Unstructured Data

Unstructured data presents a treasure trove of potential insights and vital business intelligence, but unlocking these insights requires employing the right tools and technology – and that’s where graph databases come in. These are powerful tools that can help businesses uncover hidden connections and valuable insights for innovation.

The Long-Term Implications and Possible Future Developments

As companies continue to generate and collect large amounts of data, one can expect the emergence of refined graph database technologies to aid in transforming this raw, unstructured data into valuable business insight. The ability to understand and interpret data is likely to become even more critical in the coming years, driving increased investment and innovation in the field of graph databases.

This technology has considerable potential for a variety of industries, including healthcare, finance, and retail, where understanding complex relationships between different data points can yield significant benefits. In the longer term, graph databases could prove instrumental in accelerating scientific research or uncovering trends in social data, for example.

Actionable Advice: Leveraging Graph Databases

  • Invest in Education: The first step to leveraging graph databases is understanding how they work and how they can be used to decipher unstructured data. Businesses should consider investing in employee training or hiring professionals with expertise in this field.
  • Choose the Right Tools: Multiple graph databases are available on the market, each with its strengths and weaknesses. Companies need to select a system that aligns best with their specific business needs and capabilities.
  • Start Small: It can be beneficial to start with a small, manageable project when first exploring graph databases. This approach allows businesses to refine strategies and techniques before scaling up.
  • Continuous Improvement: Just like any other technology, graph databases continue to evolve. Businesses should keep an eye on advancements in this field to ensure they are using the most effective and efficient tools available.

“Unlocking the insights within unstructured data using graph databases has the potential to drive significant innovation and give companies a competitive edge. Leveraging this technology effectively requires an investment of time and resources but offers considerable long-term benefits.”

Read the original article

Insight Analysis and Predictions from Puzzles Utilizing R


[This article was first published on Numbers around us – Medium, and kindly contributed to R-bloggers.]

#147–148

Puzzles

Author: ExcelBI

All files (xlsx with the puzzle and R with the solution) for each and every puzzle are available on my GitHub. Enjoy.

Puzzle #147

The picture above, showing sudoku-like puzzles, looks difficult, but the puzzle itself is really hardcore. It is probably one of the toughest since I joined this puzzle-solving series. We get a table filled with data like an unsolved sudoku (that is the theme of the image). We need to populate it with the proper data, filling sometimes up and sometimes down. Easier said than done. Just look at this spreadsheet: there are even empty rows to fill.

So let’s check how I did it. I’ll try to explain my chain of thought while solving.

Load libraries and data

library(tidyverse)
library(readxl)

input = read_excel("Power Query/PQ_Challenge_147.xlsx", range = "A1:D17")
test  = read_excel("Power Query/PQ_Challenge_147.xlsx", range = "F1:I17") %>%
  janitor::clean_names()

Transformation

reshape <- function(input) {
  input %>%
    janitor::clean_names() %>%
    mutate(nr = row_number()) %>%
    # for each column enumerate not empty cells
    mutate(across(c(cust_id, cust_name, amount, type),
                  ~ ifelse(is.na(.), NA, cumsum(!is.na(.))),
                  .names = "index_{.col}"),
    # for each row find max index which will be used to find cust_id per row
           max_index = pmax(index_cust_id, index_cust_name, index_amount, index_type, na.rm = TRUE)) %>%
    group_by(max_index) %>%
    mutate(across(c(cust_id, cust_name, amount, type),
                  ~ max(., na.rm = TRUE)),
    # for each max_index get first and last row in which it occurs
           min_row = min(nr, na.rm = TRUE),
           max_row = max(nr, na.rm = TRUE)) %>%
    ungroup() %>%
    # remove originally empty rows
    filter(!is.na(max_index)) %>%
    select(-starts_with("index_"), -max_index, -nr) %>%
    distinct() %>%
    # using first and last row per index make sequence and unnest it to rows
    mutate(row_seq = map2(min_row, max_row, seq)) %>%
    unnest(row_seq) %>%
    select(-min_row, -max_row, -row_seq) %>%
    group_by(cust_id) %>%
    # final touch. add original row number to type
    mutate(type = paste0(type, row_number())) %>%
    ungroup()
}


result = reshape(input)

Validation

identical(result, test)
# [1] TRUE

Puzzle #148

After one hardcore puzzle comes one that is pretty nice and easy to solve, and it is about fruits. What we get is a column of strings containing names of fruits. We need to split them, count them, and put them in some kind of crosstab. It is a little bit weird because both the rows and the columns have the same dimension, the name of the fruit. Let’s get into it.

Load libraries and data

library(tidyverse)
library(readxl)

input = read_excel("Power Query/PQ_Challenge_148.xlsx", range = "A1:A12")
test  = read_excel("Power Query/PQ_Challenge_148.xlsx", range = "C1:N12") 

Transformation

result = input %>%
  separate_rows(Fruits, sep = ", ") %>%
  mutate(Fruits = str_remove_all(Fruits, " ")) %>%
  group_by(Fruits) %>%
  summarise(Count = n()) %>%
  ungroup() %>%
  mutate(Fruits2 = Fruits) %>%
  pivot_wider(names_from = Fruits2, values_from = Count) 

Validation

all.equal(result, test)
#> [1] TRUE

# Today, for the first time, I used all.equal() instead of identical().
# The main reason is that when NAs are present, comparing NA with NA
# returns NA instead of TRUE, so the final data frame is not identical
# to the given answer, but it is equal to it.
# To our eyes, they are the same.

Feel free to comment, share, and contact me with advice, questions, and your ideas on how to improve anything. You can also contact me on LinkedIn if you wish.



Insight Analysis and Predictions from Puzzles Utilizing R

In this article, we examine author ExcelBI’s puzzle solutions with the R programming language. Specifically, we evaluate puzzles #147 and #148. By doing this, we not only gain insight from the problem-solving methods used but also predict potential future developments in programming puzzles and provide actionable advice for R enthusiasts.

Puzzle #147

Puzzle #147 is described as an intensive, hardcore puzzle resembling an unsolved Sudoku. Its complexity lies in the need to populate a table accurately, filling values sometimes upwards and sometimes downwards. Often, entire rows are left empty.

The puzzle was solved using R libraries such as ‘tidyverse’ and ‘readxl’, known respectively for data manipulation and for reading Excel files. An extensive transformation function was devised to methodically clean, enumerate, adjust, and finally regenerate the data into the table.

Implications and Future Developments

The complexity of puzzles like #147 showcases the efficiency of R in addressing intricate data manipulation tasks. In the future, similar problems could be made even more challenging by incorporating additional variables or conditions that further complicate the data handling process. A natural progression would be developing algorithms that automate these processes more effectively.

Actionable Advice

For those interested in solving complex puzzles akin to #147, a deep understanding of R libraries such as ‘tidyverse’ and ‘readxl’ is essential. Prospective problem solvers should also practice formulating complex transformation functions to better equip themselves for similar challenges.

Puzzle #148

Contrasting sharply with its predecessor, Puzzle #148 is a simpler problem focused on splitting, counting, and tabulating words within a string. Focusing specifically on names of fruits, the values were separated, counted, and then presented in a crosstab-like format.

The author uses the same R libraries, ‘tidyverse’ and ‘readxl’, but employs different functions to solve the puzzle effectively. The data is transformed by separating rows and manipulating strings, followed by grouped summarization, and is finally pivoted into a wider table.

Implications and Future Developments

Puzzle #148 showcases R’s ability to handle data wrangling tasks effectively. Future developments could revolve around parsing complex strings, creating opportunities for developing more advanced text mining techniques.

Actionable Advice

For R enthusiasts, puzzles like #148 offer an opportunity to practice various data wrangling tasks. Familiarity with functions like ‘separate_rows’, ‘str_remove_all’, and ‘pivot_wider’ would be advantageous for these types of problems.

Concluding Thoughts

Both puzzles deliver different sets of insight, but both affirm the dexterity of R programming. As problem-solving with R continues to evolve, understanding these tactics and learning from this analysis could serve as an effective springboard for anyone interested in further honing their R puzzle-solving skills or even devising their own engaging puzzles.

Read the original article