Creating Custom PowerPoints with {officer} in R

[This article was first published on The Jumping Rivers Blog, and kindly contributed to R-bloggers].





From a purely design perspective, Quarto’s standard PowerPoint output
falls short. It is limited to
seven layout
options, with the most complex being “Two Content.” The {officer} R
package offers a powerful alternative for those seeking full control and
customisation.

Why PowerPoint?

At work, I use a Linux operating system (OS), and at home, I use macOS.
Within my little bubble, it’s easy to forget how much of the market
share Microsoft still holds. It’s estimated that around 70% of the
desktop operating system market belongs to Microsoft.
Many of the clients I work with prefer Microsoft outputs, such as
PowerPoint, over HTML or PDF. Aside from company alignment with
Microsoft, there are a few practical reasons why using PowerPoint with
Quarto can be advantageous:

  • No need to be a CSS / LaTeX whizz-kid to produce professional-looking
    slides
  • Possible (and easy) to edit after rendering the doc!

What is {officer}?

From
davidgohel.github.io/officer

The officer package lets R users manipulate Word (.docx) and
PowerPoint (*.pptx) documents. In short, one can add images, tables
and text into documents from R. An initial document can be provided;
contents, styles and properties of the original document will then be
available.

This means for this workflow, Quarto is sidestepped altogether, and we
focus entirely on R scripts and R coding.

How?

There are a few ways to use {officer} – I’ll walk through the approach
that I’ve found to be most effective.

Layout templates

First, you’ll need a PowerPoint presentation that contains template
layout slides. There are no limits here: the layouts can be as custom
as you like, and there can be as many of them as you want. Remember,
this file doesn’t need any actual slides; it only needs layouts! To
create a layout:

  1. Enter “Slide Master” mode
  2. Add any content (headers, footers, styling etc) you want to appear
    on each slide to the “Slide Master”
  3. Create a new Layout Slide

Slide Master view in PowerPoint.

To insert content from R, the easiest way is via placeholders. These can
be text, tables, images and more. To add a placeholder:

  1. Click “Insert Placeholder” and choose the content type
  2. If it’s a text placeholder, you can customise the formatting of the
    text

You can see below that I’ve added some basic Jumping Rivers styling to
mine, and added two placeholders: a text placeholder for a title and an
image placeholder for a plot.

Slide Master view in PowerPoint.

In order to access these placeholders easily from R, it’s better to
rename them:

  1. Home tab
  2. Click the “Select” dropdown
  3. Click “Selection pane”
  4. Select your placeholder and rename

Changing your placeholder names via the selection pane in PowerPoint.

Here I’ve named my image placeholder “plot”, and my text placeholder
for the slide title, “title”. Note that it’s also a good idea to name
your layout – just right click and hit rename. In this demo I’ve just
left it as “Title Slide”.

The R code

Now that I’ve got my template set up, the rest is in R. First, we load
{officer} and read the PowerPoint document in as an R object.

library("officer")
doc = read_pptx("mytemplate.pptx")

If you’ve forgotten your layout or placeholder names, you can look them
up with layout_summary() and layout_properties():

layout_summary(doc)
layout_properties(doc, layout = "Title Slide", master = "Office Theme")

Document properties for an officer document object.
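If you only need the placeholder labels, the data frame returned by layout_properties() can be subset directly. A minimal sketch, assuming the layout is still named “Title Slide” as in the template above:

```r
library("officer")

doc = read_pptx("mytemplate.pptx")

# layout_properties() returns a data frame; the "ph_label" column holds
# the names assigned via the selection pane
props = layout_properties(doc, layout = "Title Slide", master = "Office Theme")
props[, c("ph_label", "type")]
```

This is a handy sanity check before calling ph_location_label(), which fails if the label doesn’t match exactly.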

Before any content can be added, content is needed! Let’s use the
{palmerpenguins} package to create a simple plot of the “Adelie”
penguin data:

library("palmerpenguins")
library("dplyr")
library("ggplot2")

adelie_plot = penguins |>
  filter(species == "Adelie") |>
  ggplot(aes(x = bill_length_mm, y = flipper_length_mm)) +
  geom_point() +
  theme_linedraw() +
  theme(
    # Make the background transparent
    plot.background = element_rect(fill = "transparent", colour = NA),
    # Match the panel colour to the slide
    panel.background = element_rect(fill = "#F1EADE", colour = NA)
  ) +
  labs(
    x = "Bill Length (mm)",
    y = "Flipper Length (mm)"
  )

I can add empty slides to the document using the add_slide() function.
Here I simply choose a layout from my .pptx file to use.

doc = add_slide(doc, layout = "Title Slide", master = "Office Theme")
doc

Then, using the ph_with() function, I can insert R objects into my
placeholders by name:

doc = ph_with(
  doc,
  value = "Adelie",
  location = ph_location_label("title")
)
# Add the plot
doc = ph_with(
  doc,
  value = adelie_plot,
  location = ph_location_label("plot")
)

To create the PowerPoint file, use print():

print(doc, "penguins.pptx")

An example of the PowerPoint output.

And there we have it! I’ve used only two placeholders here to keep the
example simple, but in reality there is no limit.
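Other content types work the same way. For instance, a data frame passed to ph_with() is rendered as a native PowerPoint table. The sketch below assumes a table placeholder renamed to “table” has been added to the layout (a hypothetical name, not one created in the demo above):

```r
library("officer")
library("palmerpenguins")

doc = read_pptx("mytemplate.pptx")
doc = add_slide(doc, layout = "Title Slide", master = "Office Theme")

# Mean body mass per species, inserted as an editable PowerPoint table
summary_tbl = aggregate(body_mass_g ~ species, data = penguins, FUN = mean)
doc = ph_with(
  doc,
  value = summary_tbl,
  location = ph_location_label("table") # hypothetical placeholder name
)
print(doc, "penguins_table.pptx")
```

Because the table is native PowerPoint, it remains fully editable after rendering.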

Looping

It’s easy to make use of programming when generating PowerPoints purely
from R code. For instance, we could put our code inside a for loop and
add a slide for each penguin species:

# Read in doc again
# this resets the doc object to the original file
doc = read_pptx("mytemplate.pptx")

for (penguin_species in c("Adelie", "Chinstrap", "Gentoo")) {
  doc = add_slide(doc, layout = "Title Slide", master = "Office Theme")
  # Add the title using the iterator value
  doc = ph_with(
    doc,
    value = penguin_species,
    location = ph_location_label("title")
  )
  # Create the plot using the iterator value
  penguin_plot = penguins |>
    filter(species == penguin_species) |>
    ggplot(aes(x = bill_length_mm, y = flipper_length_mm)) +
    geom_point() +
    theme_linedraw() +
    theme(
      plot.background = element_rect(fill = "transparent", colour = NA),
      panel.background = element_rect(fill = "#F1EADE", colour = NA)
    ) +
    labs(
      x = "Bill Length (mm)",
      y = "Flipper Length (mm)"
    )
  # Add the plot
  doc = ph_with(
    doc,
    value = penguin_plot,
    location = ph_location_label("plot")
  )
}
# Output to a file
print(doc, "penguins_loop.pptx")

An example output for when looping and iteratively adding slides / inserting content.
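The same idea can be written functionally: wrap the per-slide logic in a helper function and fold it over the species vector with base R’s Reduce(). A sketch of that refactor (styling omitted for brevity):

```r
library("officer")
library("dplyr")
library("ggplot2")
library("palmerpenguins")

# One function = one slide; keeps the per-slide logic tidy and reusable
add_species_slide = function(doc, penguin_species) {
  penguin_plot = penguins |>
    filter(species == penguin_species) |>
    ggplot(aes(x = bill_length_mm, y = flipper_length_mm)) +
    geom_point() +
    theme_linedraw()
  doc |>
    add_slide(layout = "Title Slide", master = "Office Theme") |>
    ph_with(value = penguin_species, location = ph_location_label("title")) |>
    ph_with(value = penguin_plot, location = ph_location_label("plot"))
}

# Fold the helper over the species, starting from a fresh template
doc = Reduce(add_species_slide, c("Adelie", "Chinstrap", "Gentoo"),
             init = read_pptx("mytemplate.pptx"))
print(doc, "penguins_reduce.pptx")
```

This works because every {officer} function takes the document as its first argument and returns the modified document, so the calls chain naturally.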

Conclusion

There are a few drawbacks to this method:

  • It is quite annoying to insert large amounts of text using just an R
    script
  • Content added to the “Slide Master” slide cannot be moved or edited in
    the output file
  • The web version of PowerPoint doesn’t support the Slide Master
    functionality
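The first drawback can be softened with {officer}’s block_list() and fpar() helpers, which let you build multi-paragraph formatted text in R before inserting it in one call. A sketch, assuming a text placeholder named “body” exists on the layout (hypothetical; the demo above only created “title” and “plot”):

```r
library("officer")

doc = read_pptx("mytemplate.pptx")
doc = add_slide(doc, layout = "Title Slide", master = "Office Theme")

# Two formatted paragraphs, built in R and inserted together
body_text = block_list(
  fpar(ftext("Key findings", fp_text(bold = TRUE, font.size = 20))),
  fpar(ftext("Flipper length increases with bill length in all species.",
             fp_text(font.size = 14)))
)
doc = ph_with(
  doc,
  value = body_text,
  location = ph_location_label("body") # hypothetical placeholder name
)
print(doc, "penguins_text.pptx")
```

It is still more effort than typing into a slide, but it keeps longer passages of text under version control alongside the rest of the script.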

However, I think the pros outweigh the cons.

For updates and revisions to this article, see the original post.

To leave a comment for the author, please follow the link and comment on their blog: The Jumping Rivers Blog.

R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you’re looking to post or find an R/data-science job.




Long-Term Implications and Future Developments of PowerPoint Customization with R

In simpler terms, the original article explained a method of customizing PowerPoint presentations programmatically using the {officer} R package. The possibilities this approach offers cannot be overstated, especially in an era where data and automation are fundamental. Here, we will further unpack the insights offered in the original article and discuss the potential implications.

Overcoming the Design Constraints of Quarto’s PowerPoint Output

Quarto, a popular tool for producing reports with data, restricts the variability in PowerPoint output. This limitation is resolved by the R package, {officer}, which bypasses Quarto’s restrictions, allowing extensive control and customization options on PowerPoint presentations.

{officer} and Its Value Proposition

The {officer} package essentially allows manipulation of Word and PowerPoint documents from the R programming language. The heart of this package lies in its ability to add images, tables, and text into these MS Office documents from R, and to reuse the content, styles, and properties of an existing document. This means customization can be achieved primarily through R scripts and coding, opening the doors for automation in report generation.

How does {officer} work?

The process involves the use of placeholder content on PowerPoint layout templates. The key lies in customizing such placeholders – be it for text, tables, images, etc., – in the PowerPoint presentation, which can later be populated programmatically through R scripts. This way, layout templates can be created in PowerPoint, media populated through R, and changes made even after the document is rendered, making the entire process highly flexible and adaptable.

Insights

1. Seamless Integration: {officer}’s ability to integrate PowerPoint with R reduces the workload when creating professional-looking slides. Moreover, users do not have to be proficient in CSS / LaTeX, making it more accessible.

2. Post-Rendering Edit: The rendered PowerPoint slide can still be edited. This offers unmatched flexibility, allowing users to make final adjustments even after automated rendering.

3. Powerful Looping: Being script-based, the R code can be looped to create multiple slides for different data sets efficiently.

4. Custom Styles: The layouts can be highly customized, with {officer} offering the ability to design completely bespoke slide templates.

Potential Drawbacks

However, some areas of concern stem from this approach. Inserting large amounts of text using only an R script can be cumbersome, while content added to the “Slide Master” slide cannot be edited in the output PowerPoint.

Suggestions Based on the Analysis

Based on the insights, here are a few suggestions:

  1. Training in the R language is vital, given its central role in this approach.
  2. Also, relevant personnel should be familiar with PowerPoint’s Slide Master functionality for effective layout template creation.
  3. Time should be allocated to refining text-insertion methods, given the notable downside mentioned above; effort in this area will markedly improve the overall workflow.

To conclude, the method discussed in the original article opens exciting possibilities in automating PowerPoint presentations. By incorporating more customization and control into the mix, there is the potential to revolutionize how slide presentations are created.

Read the original article
