“The Importance of R in Government: A Recap of the Ihaka Lectures in Auckland”


[This article was first published on free range statistics – R, and kindly contributed to R-bloggers.]



This week I was in Auckland, New Zealand to deliver the third and final lecture of the 2024 series of the Ihaka Lectures, named after legendary denizen of the University of Auckland’s statistics department Ross Ihaka, one of the two co-founders of the statistical computing language R.

I have added links to the video of the talk (it was live-streamed), my slides, and the ‘storyline’ summary I used to help me structure the talk to my mostly-neglected presentations page on this blog.

Here is perhaps the key image from the talk, a slide showing an all-purpose workflow for an analytical project, drawing on a large and persistent data warehouse, plus project specific data, and having a deliberate processing stage to combine the two into an analysis-ready “project-specific database”. I’ve been using variants of this diagram for more than 10 years now, and it will be familiar to anyone from my days with New Zealand’s Ministry of Business, Innovation and Employment, or international management consultancy Nous Group.
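That workflow can be sketched in miniature. The snippet below is my own illustration, not code from the talk: the table names, the survey figures, and the use of Python’s sqlite3 module are all assumptions made for the example.

```python
import sqlite3

# Illustration only: a stand-in for the large, persistent data warehouse
# (in practice this would be an existing database, not built in the script).
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE regions (region TEXT, population INTEGER)")
warehouse.executemany("INSERT INTO regions VALUES (?, ?)",
                      [("North", 120), ("South", 80)])

# Project-specific data, e.g. a one-off survey delivered alongside the project.
survey_respondents = {"North": 30, "South": 25}

# Deliberate processing stage: combine the two sources into a single
# analysis-ready table in a project-specific database.
project_db = sqlite3.connect(":memory:")
project_db.execute(
    "CREATE TABLE analysis_ready ("
    "region TEXT, population INTEGER, respondents INTEGER, response_rate REAL)")
for region, population in warehouse.execute(
        "SELECT region, population FROM regions"):
    respondents = survey_respondents[region]
    project_db.execute("INSERT INTO analysis_ready VALUES (?, ?, ?, ?)",
                       (region, population, respondents,
                        respondents / population))

for row in project_db.execute(
        "SELECT region, response_rate FROM analysis_ready"):
    print(row)
```

In a real project the warehouse connection would point at the organisation’s database, and the processing stage would live in version-controlled scripts rather than being typed interactively.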

Overall, I emphasised the importance of R being part of a broader toolkit and a broader transformation – with Git and SQL the two non-negotiable must-have partners to successfully make R work in government.

I also talked a bit about how errors in analysis are universal, invisible, and catastrophic. If that doesn’t motivate people to start doing some decent quality control, I don’t know what will!

The ‘storyline’ is a great technique I was trained on in a course on writing for the New Zealand public sector. I always find it helps to structure reports and presentations if I take the time to plan them first. In case people are interested in making their own summaries of this sort, you could use the RMarkdown source code of that storyline, which of course is available on GitHub (or I’d be a bit of a hypocrite, wouldn’t I?). It uses the flexdashboard template. Of course a storyline doesn’t need to be written in RMarkdown, but I find it a simple and disciplined way to write them without having to worry about formatting.
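For readers who want to try this, here is a minimal flexdashboard skeleton of the kind that source file uses. The section headings below are my own placeholder labels, not the author’s actual storyline:

```
---
title: "Project storyline"
output: flexdashboard::flex_dashboard
---

Column
-------------------------------------

### Context

One or two sentences on the situation.

### Complication

The problem that makes the work necessary.

Column
-------------------------------------

### Question

The single question the analysis answers.

### Answer

The answer, stated up front.
```

Rendering this with `rmarkdown::render()` gives a one-page dashboard where each `###` heading becomes a box, which keeps the storyline short and free of formatting fiddling.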

Big thanks to the University of Auckland Department of Statistics and all the good folks there for inviting me to give this talk and looking after me so nicely while in New Zealand.


Continue reading: Git, peer review, tests and toil by @ellis2013nz

Key Points from Ihaka Lectures 2024

The author delivered the final lecture in the 2024 series of the Ihaka Lectures in Auckland, New Zealand. Some key points from the lecture include the importance of the statistical computing language R as part of a larger toolkit, the significance of data warehousing and project-specific databases for analytical projects, and the universality and catastrophic implications of errors in analysis. The lecturer also highlighted techniques for structuring reports and presentations and thanked the University of Auckland and its faculty for the invitation.

Long-term implications and future developments

Based on the insights provided during the lecture, the use of the R programming language for analytical projects will continue to grow. The author emphasizes that R, together with Git and SQL, forms a non-negotiable toolkit for government work. This implies that these tools will remain in demand for complex data analysis and manipulation, so public sector professionals need to become comfortable with them and stay abreast of the latest developments.

Furthermore, given the author’s emphasis on the risks of analysis errors, it is likely we will see increased demand for robust quality-control measures in statistical programming.

Actionable advice

From the lecture insights, there are several actions to consider:

  • Invest in learning R, Git, and SQL: For data professionals working in the government sector, proficiency in these tools will become a necessity. Hence, it would be advisable to take up courses or online tutorials to learn these technologies.
  • Quality control is essential: understand the algorithms and calculations you rely on, and verify they perform as expected, to avoid disastrous errors.
  • Adopt the ‘storyline’ technique for structuring reports: This is a proven tool for helping to present analytical findings in a coherent and engaging way. You can use R Markdown source code to create your ‘storylines’.
  • Stay updated with future developments: As data science continues to evolve, it’s critical to be aware of, and ready to deploy, emerging tools and techniques.

Read the original article

“Last Chance: Get 30% Off on Coursera Course!”


Looking at a specific course on Coursera? Go for it with an additional 30% off on their last promotion day!

Deep Dive into the Long-term Implications and Potential Future Developments of Online Courses

With the rapid development of technology in education, platforms like Coursera have shown tremendous growth and potential. The surge in popularity for these online courses should not be overlooked, particularly when they offer discounts like 30% off. The question remains as to where these trends might lead in the future.

Long-term Implications

As online platforms continue to expand, it’s likely that more students and lifelong learners will turn to them for educational resources. It’s also possible that traditional educational institutions may begin to incorporate more online classes into their curriculum, merging the advantages of both traditional and virtual learning. This trend could democratize education, making it more accessible to people across the globe.

“Online learning is not the next big thing, it is the now big thing.” – Donna J. Abernathy

Potential Future Developments

Given the rise of online courses, it’s plausible that these platforms could begin to offer more in-depth, specialization courses. As technology continues to evolve, so too will the delivery of these courses – we might see more interactive, immersive, and tailor-made learning experiences in the future. This could also lead to increased collaboration between online platforms and prestigious educational institutions, providing even more opportunities for students everywhere.

Actionable Advice

  1. Embrace online learning: With the surge in popularity of online courses, take advantage of these resources to continue your professional development or pursue your passions.
  2. Look out for discounts: Platforms like Coursera often offer promotions and discounts on their courses, making this a cost-effective way to boost your skills and knowledge.
  3. Stay adaptable: As we move towards a more digital learning landscape, keep an open mind and adapt quickly to new learning methods and technologies.

Overall, the rise of online courses and platforms offers exciting possibilities for the future of education. Whether you’re a student or a professional, consider how you could take advantage of these resources to further your goals.

Read the original article

As both a major funder of climate change initiatives and one of the largest economic beneficiaries of OpenAI, Bill Gates has drawn understandable skepticism with his claim that climate-focused concerns about AI energy usage are being overblown. Is Gates just providing cover to protect his economic interests, or is AI actually… Read More »Are Bill Gates’ energy expectations for AI optimism or realism?

Bill Gates: AI Energy Usage Concerns and Climate Change

Bill Gates, a prominent funder of climate change initiatives and one of the largest economic beneficiaries of OpenAI, recently disputed claims that AI energy usage poses a significant risk to climate change. This has sparked a debate over whether Gates’ statements stem from optimism, realism, or an attempt to protect his economic interests.

Long-Term Implications

Some critics suspect that Gates may be trying to protect his financial interests in AI by dismissing concerns about the technology’s energy consumption. As AI becomes a dominant technological force, its environmental impact will inevitably become a more significant issue.

However, if Gates’s viewpoint proves correct and AI-related energy consumption is not as significant as currently feared, this could change projections of AI technology’s future environmental impact.

Potential Future Developments

Considering the burgeoning use of AI across industries, the question of its energy consumption and associated environmental impact will not go away soon. On the contrary, it is set to become a key point in debates around climate change.

Environmental sustainability is increasingly becoming a key requirement for technology growth and development. It’s essential that developers and AI researchers focus on creating more energy-efficient AI systems that do not compromise the efforts towards climate change mitigation.

Actionable Advice

  1. Stay informed: Keep up to date with the latest developments in AI technology and its environmental impact. Pay attention to the views of different stakeholders, from researchers to business leaders.
  2. Advocate for responsible technology use: Encourage the development and implementation of responsible AI systems that consider both societal and environmental impacts.
  3. Promote transparency: Demand transparency from AI companies and stakeholders regarding their energy consumption and environmental impact.
  4. Support green initiatives: Support legislation and initiatives aimed at promoting technology sustainability and confronting the challenges posed by climate change.

“The advancement of AI should not come at the expense of our planet. That’s why all stakeholders – from developers to users and policymakers – have a part to play in ensuring environmentally-friendly AI.”

Read the original article

Ophthalmic Biomarker Detection with Parallel Prediction of Transformer and Convolutional Architecture


arXiv:2409.17788v1 Announce Type: new Abstract: Ophthalmic diseases represent a significant global health issue, necessitating the use of advanced precise diagnostic tools. Optical Coherence Tomography (OCT) imagery which offers high-resolution cross-sectional images of the retina has become a pivotal imaging modality in ophthalmology. Traditionally physicians have manually detected various diseases and biomarkers from such diagnostic imagery. In recent times, deep learning techniques have been extensively used for medical diagnostic tasks enabling fast and precise diagnosis. This paper presents a novel approach for ophthalmic biomarker detection using an ensemble of Convolutional Neural Network (CNN) and Vision Transformer. While CNNs are good for feature extraction within the local context of the image, transformers are known for their ability to extract features from the global context of the image. Using an ensemble of both techniques allows us to harness the best of both worlds. Our method has been implemented on the OLIVES dataset to detect 6 major biomarkers from the OCT images and shows significant improvement of the macro averaged F1 score on the dataset.
The article “Ophthalmic Biomarker Detection Using an Ensemble of Convolutional Neural Network and Vision Transformer” addresses the pressing global health issue of ophthalmic diseases and the need for advanced diagnostic tools. Optical Coherence Tomography (OCT) imagery, which provides high-resolution cross-sectional images of the retina, has become a crucial imaging modality in ophthalmology. Traditionally, physicians manually detect diseases and biomarkers from this diagnostic imagery. However, recent advancements in deep learning techniques have enabled faster and more precise diagnoses. This paper presents a novel approach that combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers to detect ophthalmic biomarkers. CNNs excel at extracting features within the local context of an image, while transformers are known for their ability to extract features from the global context. By using an ensemble of both techniques, the authors aim to leverage the best of both worlds. The proposed method has been implemented on the OLIVES dataset and demonstrates a significant improvement in the macro averaged F1 score for detecting six major biomarkers from OCT images.

An Innovative Approach to Ophthalmic Biomarker Detection using Deep Learning

Ophthalmic diseases are a major global health concern, requiring advanced and precise diagnostic tools. Optical Coherence Tomography (OCT) imaging, which provides high-resolution cross-sectional images of the retina, has become a crucial imaging modality in ophthalmology. However, the traditional manual detection of diseases and biomarkers from OCT imagery is time-consuming and subject to human error.

In recent years, deep learning techniques have revolutionized the field of medical diagnostics, enabling faster and more accurate diagnoses. This paper presents a novel approach for ophthalmic biomarker detection using an ensemble of Convolutional Neural Network (CNN) and Vision Transformer.

CNNs are widely recognized for their ability to extract features within the local context of an image. They excel at capturing intricate details and patterns that are crucial for accurate biomarker detection in OCT images. On the other hand, Vision Transformer models are known for their exceptional capability to extract features from the global context of an image. They can analyze the overall structure and composition of the retina, providing a broader understanding of the biomarkers.

By combining the strengths of both CNNs and Vision Transformers, our approach achieves the best of both worlds. The ensemble model leverages the detailed local features extracted by the CNN, while also benefiting from the global context analysis performed by the Vision Transformer. This holistic approach significantly improves the accuracy and speed of biomarker detection in OCT images.
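The summary above does not specify how the two branches are fused; one simple and common ensembling scheme is to average the per-biomarker probabilities from each branch and then threshold the result. The sketch below is an illustration under that assumption, with made-up probabilities and an arbitrary 0.5 threshold:

```python
# Illustrative ensemble for multi-label biomarker detection: average the
# per-biomarker probabilities from a CNN branch and a transformer branch,
# then threshold. All probabilities here are invented for the example.

BIOMARKERS = ["B1", "B2", "B3", "B4", "B5", "B6"]

cnn_probs = [0.92, 0.10, 0.55, 0.30, 0.81, 0.47]   # local-context branch
vit_probs = [0.88, 0.20, 0.35, 0.40, 0.75, 0.60]   # global-context branch

# Element-wise average of the two branches' probabilities.
ensemble_probs = [(c + v) / 2 for c, v in zip(cnn_probs, vit_probs)]

# A biomarker is predicted present if its averaged probability clears 0.5.
predictions = [p >= 0.5 for p in ensemble_probs]

for name, p, pred in zip(BIOMARKERS, ensemble_probs, predictions):
    print(f"{name}: p={p:.3f} present={pred}")
```

In practice the fusion could also be a weighted average, a learned gating layer, or majority voting; the averaging above is only the simplest instance of the idea.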

To evaluate the effectiveness of our method, we implemented it on the OLIVES dataset, one of the largest and most diverse datasets in ophthalmology research. The dataset encompasses various disease conditions, including diabetic retinopathy, age-related macular degeneration, and glaucoma. Our ensemble model successfully detects six major biomarkers associated with these diseases.

The results of our experiments demonstrate a significant improvement in the macro averaged F1 score on the OLIVES dataset. This indicates that our approach outperforms traditional manual detection methods and other existing deep learning models for ophthalmic biomarker detection.
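For readers unfamiliar with the metric: the macro-averaged F1 score computes an F1 score for each biomarker separately and then takes their unweighted mean, so rare biomarkers count as much as common ones. A small self-contained sketch with invented labels:

```python
# Macro-averaged F1: compute F1 per biomarker, then average with equal
# weight per biomarker (regardless of how often each one occurs).

def f1(y_true, y_pred):
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Invented ground truth and predictions for 2 biomarkers over 4 images.
truth = {"B1": [1, 1, 0, 0], "B2": [0, 1, 1, 1]}
preds = {"B1": [1, 0, 0, 0], "B2": [0, 1, 1, 0]}

per_biomarker = {b: f1(truth[b], preds[b]) for b in truth}
macro_f1 = sum(per_biomarker.values()) / len(per_biomarker)
print(per_biomarker, round(macro_f1, 3))
```

This is equivalent to scikit-learn’s `f1_score(..., average="macro")` applied per label; the hand-rolled version is shown only to make the arithmetic explicit.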

Overall, the combination of CNNs and Vision Transformers presents a promising and innovative solution for ophthalmic biomarker detection. By exploiting the strengths of both techniques, we can enhance the precision and efficiency of diagnosing ophthalmic diseases, leading to improved patient outcomes and better overall global eye health.


The paper discusses the use of deep learning techniques for ophthalmic biomarker detection using an ensemble of Convolutional Neural Network (CNN) and Vision Transformer. This is a significant development in the field of ophthalmology, as it offers a fast and precise method for diagnosing various diseases and biomarkers from OCT images.

OCT imagery has become a pivotal imaging modality in ophthalmology, providing high-resolution cross-sectional images of the retina. Traditionally, physicians have manually detected diseases and biomarkers from these images. However, deep learning techniques have now been extensively used in medical diagnostics, offering the potential for more efficient and accurate diagnosis.

The authors of this paper propose a novel approach that combines the strengths of both CNNs and Vision Transformers. CNNs are well-known for their ability to extract features within the local context of an image, while Transformers excel at extracting features from the global context of an image. By using an ensemble of both techniques, the authors aim to harness the best of both worlds and improve the accuracy of biomarker detection.

The method has been implemented on the OLIVES dataset, which is a widely used dataset for ophthalmic biomarker detection. The results show a significant improvement in the macro averaged F1 score, indicating the effectiveness of the proposed approach.

This research has important implications for the field of ophthalmology. The ability to automatically detect biomarkers from OCT images can greatly aid physicians in diagnosing and monitoring ophthalmic diseases. The use of deep learning techniques, particularly the combination of CNNs and Transformers, offers a promising avenue for further research and development in this area.

In the future, it would be interesting to see how this approach performs on larger and more diverse datasets. Additionally, the authors could explore the possibility of extending the method to detect biomarkers for other ophthalmic diseases beyond the six major ones considered in this study. Furthermore, it would be valuable to evaluate the performance of this approach in a clinical setting, comparing it to traditional manual detection methods. Overall, this paper demonstrates the potential of deep learning techniques in improving ophthalmic diagnostics and opens up avenues for further advancements in the field.
Read the original article

“Climate Activists Sentenced for Vandalizing Van Gogh’s Sunflowers”


In a recent case that has gained considerable attention, two climate activists have been sentenced to prison for throwing soup over Van Gogh’s famous painting, Sunflowers (1888), at the National Gallery in October 2022. The activists, Phoebe Plummer and Anna Holland, who are associated with the organization Just Stop Oil, were found guilty of criminal damage in July this year.

Rising Activism and Civil Disobedience

This incident highlights the growing trend of activism and civil disobedience in the face of climate change. Climate activists like Plummer and Holland are becoming more vocal and assertive in their demands for government and corporate action to combat climate change. They believe that direct action and disruption are necessary to draw attention to the urgency of the crisis and force change.

This trend is likely to continue in the future as the effects of climate change become increasingly severe. Activists will employ more creative and high-profile methods to attract media attention and pressure decision-makers. Similar acts of civil disobedience, such as protests, blockades, and vandalism, may become more common as activists escalate their efforts to make their voices heard.

Art as a Political Battleground

The targeted attack on Van Gogh’s Sunflowers reflects a significant shift in how art is perceived in relation to political and environmental issues. Historically, art has been used to reflect and comment on societal issues. In recent years, however, it has become a battleground for political and ideological disputes.

With the proliferation of social media and rapid sharing of information, artworks can quickly become symbols of political resistance or controversy. Artists are increasingly using their platforms to address climate change and other pressing issues, and their work may become targets for activists seeking to draw attention to their cause.

Facing Challenges in Protecting Cultural Heritage

Moving forward, cultural institutions will face new challenges in protecting their collections from acts of vandalism and destruction. The case of the National Gallery incident serves as a reminder that even the most secure and prestigious institutions are vulnerable to attacks motivated by political or ideological beliefs.

Gallery administrators and curators will need to invest in enhanced security measures, including surveillance systems and protective barriers, to safeguard valuable artworks. Additionally, institutions may need to collaborate with law enforcement agencies and intelligence services to identify potential threats and intervene before any damage occurs.

Recommendations for the Industry

  1. Invest in Digital Preservation: In addition to physical security measures, cultural institutions should prioritize digitizing their collections. Digital preservation can ensure that artworks are accessible to the public even if physical copies are damaged or destroyed.
  2. Engage the Public: Cultural institutions can take proactive steps in engaging the public in discussions about art, politics, and the environment. By organizing exhibitions, panels, and workshops, they can foster dialogue and understanding between different perspectives.
  3. Collaborate with Activists: Rather than viewing activists as adversaries, cultural institutions can seek collaborations and partnerships to address shared concerns. By incorporating activists’ voices and perspectives, institutions can demonstrate their commitment to fostering positive change.
  4. Policy and Advocacy: Cultural institutions can use their influence to advocate for strong environmental policies and support initiatives that address climate change. By leveraging their reputations and networks, they can help shape policy discussions and drive meaningful action.

Conclusion

The incident at the National Gallery underscores the growing trend of activism and civil disobedience in the face of climate change. As art becomes increasingly entwined with political and environmental discourse, cultural institutions must adapt and take proactive measures to protect their collections. By investing in digital preservation, engaging the public, collaborating with activists, and advocating for environmental policies, the industry can navigate the challenges ahead and contribute to a more sustainable future.
