This is a rather atypical scenario: I am using the R custom visual in Power BI to plot a raster, and the only way to pass data is via a dataframe.
This is what I have done so far:
1. generate a raster in R
2. save it to a file using saveRDS
3. encode the file as base64 and save it as a CSV
Now, using the code below, I manage to read the CSV, load it into a dataframe, and combine all the rows. My question is: how do I decode this back to a raster object?
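For context, the encoding side presumably looked something like this; a sketch with made-up file names, raster contents, and chunk size, since only the resulting CSV is shown in the question:
library(raster)
library(caTools)
r <- raster(matrix(runif(100), 10, 10))      # stand-in raster
saveRDS(r, "raster.rds", compress = FALSE)   # uncompressed keeps decoding simple
bytes <- readBin("raster.rds", "raw", n = file.size("raster.rds"))
b64 <- base64encode(bytes)                   # caTools encodes the raw vector
starts <- seq(1, nchar(b64), by = 30000)     # chunk so each CSV cell stays small
chunks <- substring(b64, starts, pmin(starts + 29999, nchar(b64)))
write.csv(data.frame(Index = seq_along(chunks), Value = chunks),
          "raster.csv", row.names = FALSE)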
Here is a reproducible example of the reading side:
# Input load. Please do not change #
`dataset` = read.csv('https://raw.githubusercontent.com/djouallah/keplergl/master/raster.csv', check.names = FALSE, encoding = "UTF-8", blank.lines.skip = FALSE);
# Original Script. Please update your script content here and once completed copy below section back to the original editing window #
library(caTools)  # provides base64decode()
dataset$Value <- as.character(dataset$Value)
dataset <- dataset[order(dataset$Index), ]         # restore the chunk order
z <- paste(dataset$Value, collapse = "")           # combine all rows into one string
Raster <- base64decode(z, "raw")                   # raw bytes of the original .rds
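For completeness, a minimal sketch of decoding those bytes back into a raster object, assuming the .rds was written with compress = FALSE (otherwise the bytes would need decompressing first):
library(raster)
# unserialize() reverses saveRDS() when the file was written uncompressed
background <- unserialize(Raster)
plotRGB(background)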
It turned out the solution is very easy: saveRDS has an option to save with ascii = TRUE.
saveRDS(background, 'test.rds', ascii = TRUE, compress = FALSE)
Now I just read it back as a human-readable format (which is easy to load into Power BI), and it works:
library(raster)                      # provides plotRGB()
fil <- 'https://raw.githubusercontent.com/djouallah/keplergl/master/test.rds'
cony <- gzcon(url(fil))              # gzcon() passes uncompressed data straight through
XXX <- readRDS(cony, refhook = NULL)
plotRGB(XXX)
Disclaimer: R noob here!
On a high level, I am trying to convert PDF to XLS ;) The PDF is well formatted, no surprises expected. At one point I am trying to modify multiple cells using the xlsx package in a loop. I have a variable list of 3-5 elements and want to change the content of the 7th column in the .xls file, starting with the 14th row. The list comes from a PDF file (src.pdf below).
Here's the code:
library(xlsx)
library(pdftools)
library(stringr)
library(tabulizer)
library(tidyverse)
# for example data, separately download src.xls from https://file-examples-com.github.io/uploads/2017/02/file_example_XLS_100.xls, e.g. using: wget -O src.xls https://file-examples-com.github.io/uploads/2017/02/file_example_XLS_100.xls
src <- xlsx::loadWorkbook(file = "src.xls")
sheets <- getSheets(src)
rows <- getRows(sheets$List1)
cc <- getCells(rows)
pdf_path <- "src.pdf"
# dest <- extract_tables("src.pdf", output="data.frame", area = list(c(163, 315, 217, 459)), guess = FALSE, header = FALSE)
dest <- extract_tables("https://unec.edu.az/application/uploads/2014/12/pdf-sample.pdf", output="data.frame", area=list(c(195,103,376,515)), guess = FALSE, header = FALSE)
#use as
#dest[[c(1,1)]][1]
#dest[[c(1,1)]][2]
#...
row = 0
for (i in 1:length(dest[[1]]$V1))
{
row = i+13
setCellValue(paste0("cc$`",row,".7`"), value = dest[[c(1,1)]][i])
}
This returns:
Error in .jcall(cell, "V", "setCellValue", value) :
RcallMethod: cannot determine object class
Any ideas on how to use setCellValue in a loop? I am open to using different packages as well, as long as they keep the formatting of the source .xls.
Thank you!
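The error most likely occurs because setCellValue expects a cell object, not a string containing the cell's name. A hedged sketch of the probable fix, assuming the list returned by getCells() is named in "row.column" format:
for (i in seq_along(dest[[1]]$V1)) {
  row <- i + 13
  cell <- cc[[paste0(row, ".7")]]              # look up the cell object by name
  setCellValue(cell, value = dest[[c(1, 1)]][i])
}
xlsx::saveWorkbook(src, "dest.xls")            # persist the changes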
I am trying to automatically download a bunch of zip files using R. These archives contain a wide variety of files, and I only need to load one of them as a data.frame to post-process it. It has a unique name, so I could catch it with str_detect(). However, using tempfile(), I cannot get a list of the files within the archive using list.files().
This is what I've tried so far:
temp <- tempfile()
download.file("https://url/file.zip", destfile = temp)
files <- list.files(temp) # this is where I only get "character(0)"
# After, I'd like to use something along the lines of:
data <- read.table(unz(temp, str_detect(files, "^file123.txt")), header = TRUE, sep = ";")
unlink(temp)
I know that the read.table() command probably won't work, but I think I'll be able to figure that out once I get a vector with the list of the files within temp.
I am on a Windows 7 machine and I am using R 3.6.0.
Following what was said before, this structure should allow you to check the correct download with a temporary file. Note that list.files() lists directories, not the contents of a zip archive, so use unzip(list = TRUE) to see inside the zip:
temp <- tempfile(fileext = ".zip")
download.file("https://url/file.zip", destfile = temp)
files <- unzip(temp, list = TRUE)$Name   # names of the files inside the archive
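From there, a minimal sketch of reading just the file of interest (the pattern "file123" is taken from the question):
library(stringr)
target <- files[str_detect(files, "^file123")]   # the uniquely named file
data <- read.table(unz(temp, target), header = TRUE, sep = ";")
unlink(temp)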
I am using R to do some work, but I'm having difficulties transposing data.
My data is in rows and the columns are different variables. When using the phyDat function, the author recommends a transpose step, because imported data are stored in columns. So I use the following code to finish this process:
# read file from local disk in CSV format; this format can be generated with Excel's Save As
origin <- read.csv(file.choose(),header = TRUE, row.names = 1)
origin <- t(origin)
events <- phyDat(origin, type="USER", levels=c(0,1))
When I check the data shown in RStudio, it is transposed, but the phyDat result is not. So I went back and modified the code as follows:
origin <- read.csv(file.choose(),header = TRUE, row.names = 1)
events <- phyDat(origin, type="USER", levels=c(0,1))
This time the data does not reflect transposed data, and the result is consistent with it.
How I currently solve the problem is by transposing the data in the CSV file before importing it into R. Is there something I can do to fix this problem?
I had the same problem and solved it by adding an extra step, as follows. t() returns a matrix, and converting it back with as.data.frame() presumably restores the structure phyDat expects:
# read file from local disk in CSV format; this format can be generated with Excel's Save As
origin <- read.csv(file.choose(),header = TRUE, row.names = 1)
origin <- as.data.frame(t(origin))
events <- phyDat(origin, type="USER", levels=c(0,1))
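A quick self-contained check with mock 0/1 data (all names made up):
library(phangorn)
# mock binary matrix: characters in rows, taxa in columns, as in the CSV
m <- matrix(sample(0:1, 20, replace = TRUE), nrow = 5,
            dimnames = list(paste0("char", 1:5), paste0("taxon", 1:4)))
origin <- as.data.frame(t(m))   # taxa now in rows, as phyDat expects
events <- phyDat(origin, type = "USER", levels = c(0, 1))
events                          # should report 4 sequences with 5 characters each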
Maybe it is too late, but I hope it helps other users with the same problem.
I export my output to a text file using two types of functions:
sink()
write.table()
My list content is exported using sink() and my data.frame content is exported using write.table().
Is it possible to open the text file automatically after it is created? Please give an example.
I create the text file in two ways:
write.table(x, file ="F:\\frequent itemset.txt",row.names=FALSE,sep="=")
Here x is a data frame. And:
sink("F:\\Large itemset.txt")
print(mylist)
sink()
Here mylist is a list data structure.
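A minimal sketch of two base R ways to open the file right after writing it (the path is the one from the question):
write.table(x, file = "F:\\frequent itemset.txt", row.names = FALSE, sep = "=")
file.show("F:\\frequent itemset.txt")    # opens the file in R's pager/viewer
shell.exec("F:\\frequent itemset.txt")   # Windows only: opens the associated app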
The code below is used to open a data.frame as a .csv file.
Is there any simpler way than this?
myView <- function(dframe) {
# RStudio does not have a good viewer for large data frames. This
# function writes a dataframe to a temporary .csv and then opens it,
# presumably in excel (if that is the file association).
csvName <- paste0(tempdir(), "\\myView-", substitute(dframe),
format(Sys.time(), "%H%M%S"), ".csv")
write.csv(dframe, file = csvName)
shell.exec(csvName)
}
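For example (iris is just a stand-in data frame):
myView(iris)   # writes a temporary CSV and opens it in the associated application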
This is how the RMark package opens Notepad on Windows every time you call print on a mark object. On a non-Windows OS you have to use another editor, obviously.
system(paste(shQuote("notepad"), "test.txt", sep = " "))
EDIT
Here is a self-contained example of how to make up mock data, save it to a table, and open it using notepad.exe.
mydf <- data.frame(x = runif(10), y = runif(10))
filename <- "test.csv"
write.table(mydf, file = filename, sep = ",", row.names = FALSE)
system(paste(shQuote("notepad"), filename, sep = " "), wait = FALSE, invisible = FALSE)
If your question is 'only' about viewing data frames, you may have a look at the gvisTable function in the googleVis package.
"The gvisTable function reads a data.frame and creates text output referring to the Google Visualisation API, which can be included into a web page, or as a stand-alone page. The actual chart is rendered by the web browser."
There are loads of nice tutorials on googleVis, e.g. the vignette. Here is just a very simple example.
library(googleVis)
gt <- gvisTable(iris)
plot(gt)
gt <- gvisTable(iris, options = list(page = 'enable', height = 300))
plot(gt)
I want to normalize data using RMA in an R package, but there is a problem: it does not read .txt files. What should I do to normalize data from a .txt file?
Basically all normalization methods in Bioconductor are based on the AffyBatch class. Therefore, you have to read your text file (probably a matrix) and create an AffyBatch manually:
AB <- new("AffyBatch", exprs = exprs, cdfName = cdfname, phenoData = phenoData,...)
RMA needs an ExpressionSet structure. After reading the file with read.table() and cleaning colnames and row.names, convert the data to a matrix and use:
a <- ExpressionSet(assayData = matrix)
If that does not work, import your *.txt data into the FlexArray software, which can read it and run RMA. This may work.
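A minimal end-to-end sketch of that answer, assuming a tab-delimited file with probes in rows and samples in columns ("expr.txt" is a made-up name):
library(Biobase)
mydata <- read.table("expr.txt", header = TRUE, row.names = 1, sep = "\t")
mat <- as.matrix(mydata)                 # ExpressionSet expects a numeric matrix
eset <- ExpressionSet(assayData = mat)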
I use the normalizeQuantiles() function from the limma package:
library(limma)
mydata <- read.table("RDotPsoriasisLogNatTranformedmanuallyTABExport.tab", sep = "\t", header = TRUE) # read from file
b <- as.matrix(mydata[, 2:11])           # the numeric part of the data set
m <- normalizeQuantiles(b, ties = TRUE)  # normalize
mydata_t <- t(m)                         # transpose if you need