I would like to make a .pdf report that alternates between portrait sections (for text) and landscape sections (for large figures). I use the officer package in R, which generates a .docx that I can then convert to .pdf using Word or LibreOffice.
I have made some attempts, but I run into the following problems: there is a blank portrait page at the end, which I would like to remove, and when I convert to pdf, blank pages are added between the portrait and landscape pages. You can also detect these blank pages in Word by numbering the pages (they jump from 1 to 3, skipping 2) or by looking at the print preview. This problem, and how to deal with it in Word, is explained at http://wordfaqs.ssbarnhill.com/BlankPage.htm, but I would like a solution that removes those blank pages using officer, because I will have hundreds of sections alternating between portrait and landscape to deal with.
Here is my attempt:
library(officer)
doc_1 <- read_docx()
doc_1 <- body_add_par(doc_1, value = "Portrait")
doc_1 <- body_end_block_section(doc_1, block_section(prop_section()))
doc_1 <- body_add_par(doc_1, value = "Landscape")
doc_1 <- body_end_section_landscape(doc_1)
temp <- tempfile(fileext = ".docx")
temp
print(doc_1, target = temp)
# system(paste0('open "', temp, '"'))
David's answer (below) improves the situation, but it drops some of the portrait orientations when I try to iterate it using body_add_docx (which I use for efficiency reasons, see https://github.com/davidgohel/officer/issues/184):
library(officer)
portrait_section_prop <- prop_section(page_size = page_size(orient = "portrait"))
landscape_section_prop <- prop_section(page_size = page_size(orient = "landscape"))
core <- function(i){
  doc_1 <- read_docx() |>
    body_add_par(value = paste("Portrait", i)) |>
    body_end_block_section(value = block_section(portrait_section_prop)) |>
    body_add_par(value = paste("Landscape", i)) |>
    body_end_block_section(value = block_section(landscape_section_prop)) |>
    body_set_default_section(landscape_section_prop)
  return(doc_1)
}

accu <- core(1)
for(i in 2:10){
  doc_1 <- core(i)
  temp <- tempfile(fileext = ".docx")
  print(doc_1, temp)
  accu <- body_add_docx(accu, temp)
}
print(accu, target = tempfile(fileext = ".docx")) |> browseURL()
Here is the code you need: you have to define the same default section as the one you end the document with, so that Word agrees not to add a page:
library(officer)
portrait_section_prop <- prop_section(page_size = page_size(orient = "portrait"))
landscape_section_prop <- prop_section(page_size = page_size(orient = "landscape"))
doc_1 <- read_docx() |>
  body_add_par(value = "Portrait") |>
  body_end_block_section(value = block_section(portrait_section_prop)) |>
  body_add_par(value = "Landscape") |>
  body_end_block_section(value = block_section(landscape_section_prop)) |>
  body_set_default_section(landscape_section_prop)
temp <- tempfile(fileext = ".docx")
print(doc_1, target = temp)
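If merging sub-documents with body_add_docx() drops orientations, as described in the edit to the question, an untested sketch of the same idea that builds all alternating sections inside a single rdocx, so no merging is needed (the 1:10 loop is only for illustration):
library(officer)

portrait_section_prop <- prop_section(page_size = page_size(orient = "portrait"))
landscape_section_prop <- prop_section(page_size = page_size(orient = "landscape"))

doc <- read_docx()
for(i in 1:10){
  doc <- doc |>
    body_add_par(value = paste("Portrait", i)) |>
    body_end_block_section(value = block_section(portrait_section_prop)) |>
    body_add_par(value = paste("Landscape", i)) |>
    body_end_block_section(value = block_section(landscape_section_prop))
}
# the document must end with the same default section as the last block section
doc <- body_set_default_section(doc, landscape_section_prop)

print(doc, target = tempfile(fileext = ".docx"))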
I have an rdocx object and I want to manipulate something in the XML code. This is my document:
library(officer)
library(magrittr)  # provides %>%
library(stringr)   # provides str_c()

doc <- read_docx() %>%
  body_add_par("centered text", style = "centered") %>%
  slip_in_seqfield("STYLEREF 1 \\s") %>%
  slip_in_text("\u2011") %>%
  slip_in_seqfield(sprintf("SEQ %s \\* ARABIC \\s 1", "Table")) %>%
  slip_in_text(str_c(": ", "My Caption")) %>%
  body_bookmark("my_bookmark")
With doc$doc_obj$get() I can get the XML code with classes xml_document and xml_node. Now I want to replace some of the code; specifically, I want the w:bookmarkEnd element to appear later so that the bookmarked part becomes bigger. How can I achieve this? If I could achieve this with str_replace it would be awesome.
You can use run_bookmark() as in the following example (the manual does not state that lists are supported; I'll add that info soon):
library(officer)
bkm <- run_bookmark(
  bkm = "test",
  list(
    run_word_field(field = "SEQ tab \\* ARABIC \\s"),
    ftext(" Table", prop = fp_text_lite(color = "red"))
  )
)
doc <- read_docx()

doc <- body_add_fpar(
  x = doc,
  value = fpar(bkm)
)

# how to make a reference to the bkm
doc <- body_add_fpar(
  x = doc,
  value = fpar(run_reference("test"))
)

print(doc, "zz.docx")
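To check where the bookmark starts and ends in the underlying XML, a quick sketch using xml2 together with the doc$doc_obj$get() accessor mentioned in the question:
library(xml2)

body_xml <- doc$doc_obj$get()
# both the SEQ field run and the red " Table" run should sit between these two nodes
xml_find_all(body_xml, "//w:bookmarkStart")
xml_find_all(body_xml, "//w:bookmarkEnd")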
I am very new to Quarto and I am trying to create a markdown document with it. Everything works well except that I am not able to render tables in the output HTML file.
The following is my code. The HTML document shows ':: {.cell-output-display}' where the table is supposed to be rendered. I would really appreciate it if you could help me out with this.
process_results <- function(value){
  results <- topTable(fit2, coef = value, n = Inf, sort.by = 'p')
  top_results <- head(results, n = 10) %>%
    kable(caption = value) %>% ### This works in traditional markdown.
    kable_styling()
  ...........
  ..............
}
process_results('GroupB_vs_GroupA')
The problem is that the function doesn't return a value.
library(kableExtra)

process_results <- function(value){
  #results <- topTable(fit2, coef = value, n = Inf, sort.by = 'p')
  top_results <- head(mtcars, n = 10) %>%
    kable(caption = value) %>% ### This works in traditional markdown.
    kable_styling()
  return(top_results)
}
process_results('GroupB_vs_GroupA')
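One caveat worth adding (this goes beyond the question, so treat it as a sketch): the implicit printing only happens when the call is the value of the chunk, so if you later call process_results() for several coefficients inside a for loop, nothing will render. In that case set the chunk option results: asis and cat() the output, which works because kable()/kable_styling() return character-based HTML (the second coefficient name below is made up for illustration):
# in a chunk with the option `results: asis`
for (value in c('GroupB_vs_GroupA', 'GroupC_vs_GroupA')) {
  cat(process_results(value), "\n\n")
}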
Is it possible to keep the original aspect ratio while inserting an image into a .docx document with the function body_replace_img_at_bkm() from the officer package?
My code looks like this:
library(officer)
library(magrittr)

img.file <- file.path(R.home("doc"), "html", "logo.jpg")

doc <- read_docx() %>%
  body_add_par("centered text", style = "centered") %>%
  slip_in_text(". How are you", style = "strong") %>%
  body_bookmark("text_to_replace") %>%
  body_replace_img_at_bkm("text_to_replace",
                          value = external_img(src = img.file, width = .03, height = .03)) %>%
  print(target = "yourpath/KeepAspectRatio.docx")
I tried this:
...
body_replace_img_at_bkm("text_to_replace", value = external_img(src = img.file)) %>%
...
This did not work. It shows the image with (I believe) the correct aspect ratio, but the image is not the desired size. I want to make it smaller without changing the aspect ratio.
Thank you very much in advance
p.s. I had to use width = .03, height = .03 in my case because the image was huge for some reason.
For png files I found the following solution/alternative:
library(png)

ImgZoom <- 3
temp <- readPNG("filepath.png", native = TRUE, info = TRUE)

body_replace_img_at_bkm("text_to_replace",
                        value = external_img(src = img.file,
                                             width  = ImgZoom * (attr(temp, "info")$dim[1] / attr(temp, "info")$dim[2]),
                                             height = ImgZoom * (attr(temp, "info")$dim[2] / attr(temp, "info")$dim[2])))
But this is a small detour and only works for png files so far.
I will also try this for other file types.
The best way would be an extra option in the function: body_replace_img_at_bkm().
Something like: body_replace_img_at_bkm(...,keep_aspect_ratio = TRUE)
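A more format-agnostic variant of the same idea (a sketch, assuming the magick package is available; it can read the pixel dimensions of jpg, png and most other common formats):
library(magick)
library(officer)

img.file <- file.path(R.home("doc"), "html", "logo.jpg")

info <- image_info(image_read(img.file))                    # pixel width/height of the image
target_height <- 3                                          # desired height in inches
target_width  <- target_height * info$width / info$height   # keep the aspect ratio

img <- external_img(src = img.file, width = target_width, height = target_height)
# img can then be passed as `value` to body_replace_img_at_bkm(), as in the question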
Hi, I am trying to write my own function that will automatically size and centre-align the images/tables that I am generating with officer.
The top/left arguments in add_image and add_table are fine when I add each table manually, since the tables differ in size, but I wasn't sure how to do this automatically for differently shaped images and tables.
I have the following code for images:
library(imager)    # for load.image(), width(), height()
library(officer)
library(magrittr)  # provides %>%

Image <- function(PP, title, footer, pageNO, path){
  im <- load.image(path)
  width  <- imager::width(im)
  height <- imager::height(im)
  ratio  <- width / height
  # fit the image inside a 9.15 x 3.6 inch area while keeping its aspect ratio
  if(ratio < 9.15 / 3.6){
    y <- 3.6
    x <- 3.6 * ratio
  }
  if(ratio >= 9.15 / 3.6){
    y <- 9.15 / ratio
    x <- 9.15
  }
  PP <- PP %>%
    add_slide(layout = "Title and Content", master = "Office Theme") %>%
    ph_with_text(type = "title", str = title) %>%
    ph_with_text(type = "dt", str = format(Sys.Date())) %>%
    ph_with_img(type = "body", src = path, height = y, width = x) %>%
    ph_with_text(type = "ftr", str = footer) %>%
    ph_with_text(type = "sldNum", str = pageNO)
  PP
}
I would then like to align this image in the centre of the slide but am not sure how I would do this.
Then I'd like the same for tables, but I have no idea how to do this: I would have to automatically rescale the width and height of the columns based on the original width and height of the flextable, since I don't want to lose the aspect ratio, unless there is a different way?
My current code for this is below, but it doesn't do any resizing:
Slide <- function(PP, title, footer, pageNO, table){
  PP <- PP %>%
    add_slide(layout = "Title and Content", master = "Office Theme") %>%
    ph_with_text(type = "title", str = title) %>%
    ph_with_text(type = "ftr", str = footer) %>%
    ph_with_text(type = "dt", str = format(Sys.Date())) %>%
    ph_with_flextable(value = table, type = "body") %>%
    ph_with_text(type = "sldNum", str = pageNO)
}
Many thanks in advance, any help would be greatly appreciated
Old question, but if anyone's looking, here's a reproducible example that does the following:
Create a new PowerPoint deck;
Create a flextable table that we'd like to include;
Put the table in the centre of a new slide; and,
Write it to file.
You can extend this to solve more particular problems, e.g. by using PowerPoint templates, using officer::layout_properties() to get more details about layouts, and so on.
library(tidyverse)
library(officer)
library(flextable)

# create a PowerPoint slide deck
deck <- officer::read_pptx()

# make a flextable with the first 5 rows of mtcars
deck_table <- mtcars %>%
  head(5) %>%
  flextable::flextable()

# get table dimensions
table_width <- sum(dim(deck_table)$widths)
table_height <- sum(dim(deck_table)$heights)

# get slide dimensions in inches
# these are the PowerPoint defaults for 4:3; widescreen width is 13.333 in
slide_width <- 10
slide_height <- 7.5

# find the left and top margins that centre the table
table_leftmargin <- (slide_width - table_width) / 2
table_topmargin  <- (slide_height - table_height) / 2

# add a slide, then add the table at the desired location
deck <- deck %>%
  officer::add_slide() %>%
  officer::ph_with(value = deck_table,
                   location = officer::ph_location(left = table_leftmargin,
                                                   top = table_topmargin))

# save to file
print(deck, "test.pptx")
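For the image half of the question, a similar sketch (untested; the helper name centre_image_slide() is made up): it reuses the fit-inside-9.15 x 3.6 in logic from the question's Image() function, but with the current ph_with()/ph_location() API instead of the older ph_with_img(), and centres the result on a default 10 x 7.5 in slide:
library(officer)
library(imager)   # for load.image(), width(), height()

centre_image_slide <- function(deck, path, slide_width = 10, slide_height = 7.5){
  im <- load.image(path)
  ratio <- imager::width(im) / imager::height(im)
  # fit the image inside a 9.15 x 3.6 inch area while keeping the aspect ratio
  if(ratio < 9.15 / 3.6){
    y <- 3.6
    x <- 3.6 * ratio
  } else {
    y <- 9.15 / ratio
    x <- 9.15
  }
  deck |>
    add_slide(layout = "Title and Content", master = "Office Theme") |>
    ph_with(value = external_img(src = path, width = x, height = y),
            location = ph_location(left = (slide_width - x) / 2,
                                   top  = (slide_height - y) / 2,
                                   width = x, height = y))
}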
I am writing up a report where the output gets pushed to an xlsx document via library(xlsx). This data then feeds into a table specially formatted with LaTeX code:
```{r import_results, echo = F}
if(!file.exists("Analysis/results.xlsx")){
  wb <- xlsx::createWorkbook(type = "xlsx")
  sheets <- xlsx::createSheet(wb, "data")
}else{
  wb <- loadWorkbook("Analysis/results.xlsx")
  sheets <- removeSheet(wb, "data")
  sheets <- xlsx::createSheet(wb, "data")
}
getSheets(wb)

addDataFrame(sheet = sheets, x = Results1)
addDataFrame(sheet = sheets, x = Results2, startRow = nrow(Results1) + 2)
addDataFrame(sheet = sheets, x = Results3, startRow = nrow(Results1) + nrow(Results2) + 4)

xlsx::saveWorkbook(wb, "Analysis/results.xlsx")
```
After writing to the sheet that the table data is linked to, I read it back into R, now with all the LaTeX in the cells. In essence I want to cat the results so they are emitted as LaTeX code, but it prints the data.frame as one long string when I knit:
```{r, echo = F, results='asis'}
wb <- read.xlsx("Analysis/results.xlsx", sheetName = "import", header=F)
row.names(wb) <-NULL
wb
```
What is the appropriate way to automate this cross-platform integration?
I got the example to work in a very "simplistic" manner. Although I don't think it's the cleanest code out there, it does work for what I need it to do:
```{r, echo = F, results = 'asis'}
library(dplyr)
library(xlsx)

wb <- read.xlsx("Analysis/results.xlsx", sheetName = "import", header = F, stringsAsFactors = F)
# replace NAs with empty strings so they are not printed as "NA" in the LaTeX output
wb <- mutate_each(wb, funs(replace(., which(is.na(.)), "")))

# cat each row so the LaTeX stored in the cells is emitted verbatim
for(i in 1:nrow(wb)){
  cat(unlist(wb[i, ]), "\n")
}
```
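The row loop can also be collapsed into a single cat() call that produces essentially the same output, in case you prefer it terser:

```{r, echo = F, results = 'asis'}
# paste each row into one string, then emit all rows separated by newlines
cat(apply(wb, 1, paste, collapse = " "), sep = "\n")
```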