Problems exporting Power BI with R Script

Can you help me? I get this error:
DataSource.Error: ADO.NET: A problem occurred while processing your R script.
Here are the technical details: Information about a data source is required.
Details:
DataSourceKind = R
DataSourcePath = R
Message = A problem occurred while processing your R script.
Here are the technical details: Information about a data source is required.
ErrorCode = -2147467259
ExceptionType = Microsoft.PowerBI.Scripting.R.Exceptions.RUnexpectedException
This error comes up when I try to export my table using an R script:
write.csv2(dataset, file = paste0("C:/Users/Acer/OneDrive/ALLPARTS/ALLPARTSNET_", format(Sys.time(), "%Y%m%d"), ".csv"), row.names = FALSE)
Already tried: reinstalling R, RStudio, and Power BI.
Is there a solution?
LINK POWER BI FILE
https://1drv.ms/u/s!AnziDWh5m2I1gplLfe9FM2M0EFqDyw

Click on File > Options & settings > Options.
In the Options window, you can change the privacy setting in the Global or in the Current File section. Click on Privacy.
Select "Ignore the Privacy Levels and potentially improve performance".
Then run the script again!

Power BI often changes the data type when importing data, and this can conflict with how R interprets that data. In my case, the error I was getting was exactly the same as yours.
I fixed this by removing the Applied Step where Power BI changed the data type of all of my columns. That instantly fixed my issue.
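If you would rather keep the Applied Step, a hedged alternative is to normalize the column types inside the R script itself before writing; the blanket coercion to character below is just for illustration:
# Coerce every column to character so Power BI's "Changed Type" step
# cannot conflict with what R expects when writing the file
dataset[] <- lapply(dataset, as.character)
write.csv2(dataset,
           file = paste0("C:/Users/Acer/OneDrive/ALLPARTS/ALLPARTSNET_",
                         format(Sys.time(), "%Y%m%d"), ".csv"),
           row.names = FALSE)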

Related

RStudio not recognizing files when compiling raw db

I have long-read RNA sequencing data from PacBio IsoSeq and am trying to use the IsoPops software (https://kellycochran.github.io/IsoPops/site_files/walkthrough.html) to generate graphs to interpret the data. I am new to RStudio, so I have limited proficiency. There is a walkthrough for the program which seems quite intuitive. I have successfully installed the package in RStudio, opened the library, and assigned variables to the files I have (which include a fasta file, a gff file, and abundance files). Once I've successfully assigned variables to the files, I'm supposed to generate a raw db using the function compile_raw_db. However, when I run this function I get the error:
Error in system(command = paste("head -n1 ", filename, step = ""), intern = T) :
'head' not found
I've used the head -n1 command on the Linux command line and can see that there definitely is a header and contents in the file. The error occurs for any file that I give to RStudio, so my reasoning is that the program is not recognizing the files.
In case the files were empty, I opened them on Ubuntu and head -n2 worked fine for each file, so I don't think it's an issue with the files.
My data was originally stored on an online shared network drive, so I moved the files to my local computer to eliminate any possibility of path errors. I still get the error message.
I have tried the same function with a number of different files and always get the same error message, so I think the issue is with how RStudio is reading the files, not the files themselves.
I would really appreciate some support with this; I am at the end of my PhD trying to analyse the last piece of my data and really struggling with this. Thanks in advance for any feedback!
> library(IsoPops)
Welcome to IsoPops version 0.3.1.
> transcript_AD3 <- "C:/R_Package_for_IsoSeq/IsoPops/IsoPops_Data/AD3-hq_transcripts.fasta"
> abundance_AD3 <- "C:/R_Package_for_IsoSeq/IsoPops/IsoPops_Data/AD3_Collapsed_Filtered_Isoform_Counts.abundance.txt"
> GFF_AD3 <- "C:/R_Package_for_IsoSeq/IsoPops/IsoPops_Data/AD3-collapse_isoforms.gff"
> rawDB <- compile_raw_db(transcript_AD3, abundance_AD3, GFF_AD3)
[1] "Loading sequences..."
Error in system(command = paste("head -n1 ", filename, step = ""), intern = T) :
'head' not found
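For what it's worth, the error text shows that compile_raw_db shells out to the Unix head utility via system(), and 'head' not found means Windows cannot find a head executable on the PATH - the files themselves are fine. A quick diagnostic sketch using base R (Sys.which and readLines):
# An empty string means R cannot find a 'head' executable on the PATH
Sys.which("head")
# Platform-independent peek at the file's first line, bypassing 'head'
readLines(transcript_AD3, n = 1)
If I remember correctly, one common workaround on Windows is to install Rtools (which bundles Unix utilities such as head) and add its usr/bin directory to the PATH.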

R tryCatch RODBC function issue

We have a number of MS Access databases on a server which are copies from remote locations which are updated overnight. We collate some of the data from these machines for reporting purposes on a daily basis. Sometimes the overnight update fails, meaning we don’t have access to all of the databases, so I am attempting to write an R script which will test if we can connect (using a list of the database paths), and output an updated version of the list including only those which we can connect to. This will then be used to run a further script which will only update the data related to the available databases.
This is what I have so far (I am new to R but reasonably proficient in SAS and SQL - attempting to use R both as a learning exercise and for potential cost savings):
{
  # Create store data locations listing
  A <- matrix(c(1000, 1, "One",   "//Server/Comms1/Access.mdb",
                2000, 2, "Two",   "//Server/Comms2/Access.mdb",
                3000, 3, "Three", "//Server/Comms3/Access.mdb"),
              nrow = 3, ncol = 4, byrow = TRUE)
  # Add column names
  colnames(A) <- c("Ref1", "Ref2", "Ref3", "Location")
  # Create summary for testing connections (Ref1 and Location)
  B <- A[, c(1, 4)]
  ConnectionTest <- function(Ref1, Location) {
    out <- tryCatch({
      ch <- odbcDriverConnect(paste("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=", Location))
      sqlQuery(ch, paste("select ", Ref1, " as Ref1, COUNT(variable) as Count from table"))
    },
    error = matrix(c(Ref1, 0), nrow = 1, ncol = 2, byrow = TRUE))
    return(out)
  }
  # Run function, using 'B' to provide arguments
  C <- apply(B, 1, function(x) do.call(ConnectionTest, as.list(x)))
  # Convert to matrix and add column names
  D <- matrix(unlist(C), ncol = 2, byrow = TRUE)
  colnames(D) <- c("Ref1", "Count")
}
When I run the script I get the following error message:
Error in value[3L] : attempt to apply non-function
I am guessing this is because I am using tryCatch incorrectly inside the UDF?
Does anyone have any advice on what I am doing incorrectly, or even if this is the best way to do what I am attempting?
Thanks
(Apologies if this is formatted incorrectly; I'm having to post on my phone because Stack Overflow is blocked here.)
Edit - I think I fixed the 'Error in value[3L]' issue by adding function(e) {} around the matrix function in the error part of the tryCatch.
The issue now is that the script just fails if it can't reach one of the databases, rather than running the matrix function. Do I need to add something else to make it ignore the error?
Edit 2 - it seems tryCatch does now work: it processes the alternate function upon error, but also shows warnings about the error, which makes sense.
As mentioned in the edit above, using 'function(e) {}' to wrap the matrix function in the error section of the tryCatch fixed the 'Error in value[3L]' issue, so the script now works, but it displays error messages if it can't access a particular channel. I am guessing the 'warning' section of the tryCatch can be used to adjust these as necessary.
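For reference, here is a minimal sketch of the corrected function, assuming the RODBC package (the question's script doesn't show the library call). The warning handler mirrors the final edit above, since odbcDriverConnect signals a warning rather than an error when it cannot reach a database:
library(RODBC)  # assumed: provides odbcDriverConnect and sqlQuery

ConnectionTest <- function(Ref1, Location) {
  tryCatch({
    ch <- odbcDriverConnect(paste0(
      "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=", Location))
    res <- sqlQuery(ch, paste("select", Ref1,
                              "as Ref1, COUNT(variable) as Count from table"))
    odbcClose(ch)
    res
  },
  # Each handler must be a function taking the condition object;
  # passing a bare matrix is what caused 'attempt to apply non-function'
  error = function(e) matrix(c(Ref1, 0), nrow = 1, ncol = 2),
  warning = function(w) matrix(c(Ref1, 0), nrow = 1, ncol = 2))
}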

Error in file(con, "w") : cannot open the connection [Using RStudio to plot interactive bar graphs using rCharts, knitr]

I am getting an error when I try to run the code below in RStudio 3.3.2 on a Mac (OS Sierra):
devtools::install_github('ramnathv/rCharts')
install.packages("knitr")
require(rCharts)
require(knitr)
haireye <- as.data.frame(HairEyeColor)
n1 <- nPlot(Freq ~ Hair, group = 'Eye', type = 'multiBarChart',
            data = subset(haireye, Sex == 'Male'))
n1$save('fig/n1.html', cdn = TRUE)
cat('<iframe src="fig/n1.html" width="100%" height="600"></iframe>')
Please see the output below:
Error in file(con, "w") : cannot open the connection
In addition: Warning message: In file(con, "w") : cannot open file 'fig/n1.html': No such file or directory
But I am able to generate the required bar graph in the viewer when I use:
n1$show(cdn = TRUE)
in lieu of n1$save('fig/n1.html', cdn = TRUE)
To take care of write permission issues (if any), I also tried including the line below, altering the working directory path wherever necessary.
knitr::knit2html('Users/documents/n1.html')
But it did not help. I see the n1.html file created, but it only opens an empty browser.
Any help to resolve this is appreciated.
Best,
S
A lot of the time we face this error due to caching in RStudio, in which case the actual code errors don't show up. Restart RStudio and this error will be gone; the actual code errors will then show.
You have two separate problems.
The connection error appears because the fig/ folder does not exist. Create the folder and the save command will work. R has functions to check the existence of directories and create new ones, if you would like to do it in your code.
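That check looks something like this - a minimal sketch using base R's dir.exists and dir.create:
# Make sure the output folder exists before saving into it
if (!dir.exists("fig")) dir.create("fig", recursive = TRUE)
n1$save('fig/n1.html', cdn = TRUE)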
The second problem comes from the way you save: you should use n1$save('fig/n1.html', standalone = TRUE).
As a side note, rCharts is not currently developed or maintained at all, so I would recommend using another library for your charts. In my opinion Plotly is quite nice. rCharts brought the NVD3 project to R, and the chart style is, in my opinion, really nice. However, as far as I know both projects are stopped, so I would look for a library that is still alive.
I have fixed this problem with good old rm(list=ls()). I have fallen into sequences where the error stops execution of my script; I fix the error, and then it won't run. This is likely due to lazy evaluation, but it is a near-impossible problem to diagnose, so the solution at the top works almost all the time.

Download a custom dataset in Azure ML Jupyter/iPython Notebook using R

I need to download a custom dataset in an Azure Jupyter/iPython Notebook.
My ultimate goal is to install an R package. To be able to do this, the package (the dataset) needs to be downloaded in code. I followed the steps outlined by Andrie de Vries in the comments section of this post: Jupyter Notebooks with R in Azure ML Studio.
Uploading the package as a ZIP file was without problems, but when I run the code in my notebook I get an error:
Error in curl(x$DownloadLocation, handle = h, open = conn): Failure when receiving data from the peer
Traceback:
download.datasets(ws, "plotly_3.6.0.tar.gz.zip")
lapply(1:nrow(datasets), function(j) get_dataset(datasets[j, . ], ...))
FUN(1L[[1L]], ...)
get_dataset(datasets[j, ], ...)
curl(x$DownloadLocation, handle = h, open = conn)
So I simplified my code into:
library("AzureML")
ws <- workspace()
ds <- datasets(ws)
ds$Name
data <- download.datasets(ws, "plotly_3.6.0.tar.gz.zip")
head(data)
Where "plotly_3.6.0.tar.gz.zip" is the name of my dataset of data type "Zip".
Unfortunately this results in the same error.
To rule out data type issues, I also tried to download another dataset of mine which is of data type "Dataset". Same error.
Next I changed the dataset I want to download to one of the sample datasets of Azure ML Studio.
"text.preprocessing.zip" is of datatype Zip
data <- download.datasets(ws, "text.preprocessing.zip")
"Flight Delays Data" is of datatype GenericCSV
data <- download.datasets(ws, "Flight Delays Data")
Both of the sample datasets can be downloaded without problems.
So why can't I download my own saved dataset?
I could not find anything helpful in the documentation of the download.datasets function, neither on rdocumentation.org nor on cran.r-project.org (pages 17-18).
Try this:
library(AzureML)
ws <- workspace(
  id = "your AzureML ID",
  auth = "your AzureML Key"
)
name <- "Name of your saved data"
data <- download.datasets(ws, name)
It seems the error I got was due to a bug in the (then early) Azure ML Studio.
I tried again after the reply of Daniel Prager, only to find out my code works as expected without any changes. Adding the id and auth parameters was not needed.

Error reading an R file while submitting an R job to Condor

I have an R job that is submitted to Condor. The R file (one.R) that is submitted to Condor reads another R file (two.R). When I submit the job to Condor it fails, and the reason is that the submitted R file (one.R) cannot read the called R file (two.R).
The error in the text file is:
Error in file(file, "rt") : cannot open the connection
Calls: read.table -> file
In addition: Warning message:
In file(file, "rt") :
cannot open file 'C:/Users/pcname/Desktop/test_case/two.R': Permission denied
Execution halted
and my submit file is:
#test_input.condor
#
executable = C:\R\R-2.10.1\bin\Rscript.exe
arguments = one.R
universe = vanilla
getenv = true
#requirements = ARCH == "INTEL" && OPSYS == "WINNT60"
input = one.R
should_transfer_files = yes
transfer_executable = false
when_to_transfer_output = ON_EXIT
transfer_input_files = C:/Users/OmegaAdmin/Desktop/test_case/two.R
log = test_input.log
output = test_input.out
error = test_input.err
queue
Appreciate any ideas on this.
Thanks,
This is not an R-related problem, but a problem of accessibility. The error message seems rather clear to me: the server has no read rights for that file. Make sure you share the file or folder you want to read in. I don't know what the setup is of the network and clusters wherever you're located, but you'd better contact the admins to ask how to get your files to the right places.
Also make sure that if you transfer files to the servers/cluster, you adapt your R script so it points to the right directories, which are probably not on your own hard drive.
When you say
transfer_input_files = C:/Users/OmegaAdmin/Desktop/test_case/two.R
That tells Condor to copy two.R into the current working directory when the job starts. The current working directory is a specially created workspace, not (usually) a home directory. Thus I would expect the full path to look something like
C:/condor/execute/dir_28412/two.R
However, R is actually looking in
C:/Users/pcname/Desktop/test_case/two.R
Why is R looking there? Does one.R perhaps say "find two.R in $HOME/Desktop/test_case"? Or does it say "look in Desktop/testcase/two.R", and R has configuration that resolves paths relative to the user's home directory?
The solution is almost certainly to modify one.R or your R configuration to look for two.R in the current working directory. If for some reason R changes its current working directory, the environment variable _CONDOR_SCRATCH_DIR should contain it.
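A minimal sketch of what that change inside one.R could look like (file names from the question; the fallback to getwd() is an assumption):
# Resolve two.R relative to Condor's scratch directory instead of a
# hard-coded path on the submit machine
scratch <- Sys.getenv("_CONDOR_SCRATCH_DIR", unset = getwd())
source(file.path(scratch, "two.R"))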
On a related note, you said:
arguments = one.R
input = one.R
The first is an argument passed to Rscript.exe, which I'm guessing tells R to load and run a file called one.R. Except that script isn't present! If you want that to work, you'll need to add it to transfer_input_files. But it obviously appears to work; why? Because "input=one.R" means "take the contents of one.R and pipe it in as standard input to Rscript.exe; the same as if you'd typed those contents in." I'm guessing you can remove the arguments, or remove the input and add one.R to your transfer_input_files, removing the ambiguity.
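A hedged sketch of that second option, reusing the lines from the submit file above (one.R is now transferred alongside two.R and the input line is dropped):
# test_input.condor (sketch)
executable = C:\R\R-2.10.1\bin\Rscript.exe
arguments = one.R
universe = vanilla
getenv = true
should_transfer_files = yes
transfer_executable = false
when_to_transfer_output = ON_EXIT
transfer_input_files = one.R, C:/Users/OmegaAdmin/Desktop/test_case/two.R
log = test_input.log
output = test_input.out
error = test_input.err
queue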
