BlueSky Statistics - File naming conventions - R

When I open the file 'TestFile.RData' in BlueSky Statistics, it is opened under that name PLUS a 'Dataset3' label attached; the tab reads TestFile.RData(Dataset3).
I would like to use my original name when writing R code in the R command editor, but as far as I can tell BlueSky wants me to use the Dataset3 name.
Please clarify this file-naming issue for me.
If my original name is replaced, I see problems reproducing my work, since the assigned name Dataset3 is not under my control.
Regards

Your observation is correct. Whenever a file that is not an R data file is opened in BlueSky Statistics, we create a data frame object in R. We name these objects sequentially: Dataset1, Dataset2, Dataset3, and so on. We could have used the name of the original file; however, we went with Dataset1, Dataset2, Dataset3 for compatibility with SPSS. Many of our users come from SPSS, and that is exactly what SPSS does. There is a simple workaround, described below.
To work around this, you need to change the default code we use to open the dataset. To see that code in the output window, go to the top-level menu Tools -> Configuration Settings, select the Output tab, and check the box next to "Show syntax in output window".
The code you will see in the output window when you open a dataset is:
BSkyloadDataset(fullpathfilename='C:/Users/Aaron_2/Documents/BlueSky Statistics/Sample Datasets/IRT/engagement.csv',
    filetype='CSV', worksheetName='', load.missing=FALSE, character.to.factor=FALSE,
    csvHeader=TRUE, isBasketData=FALSE, trimSPSStrailing=FALSE,
    sepChar=',', deciChar='.', datasetName='Dataset2')
All you need to do is change the datasetName parameter to the name you want to use.
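For example, to load the same CSV under the name engagement rather than Dataset2 (any valid R name of your choosing works here), edit the pasted call before running it in the R command editor:

BSkyloadDataset(fullpathfilename='C:/Users/Aaron_2/Documents/BlueSky Statistics/Sample Datasets/IRT/engagement.csv',
    filetype='CSV', worksheetName='', load.missing=FALSE, character.to.factor=FALSE,
    csvHeader=TRUE, isBasketData=FALSE, trimSPSStrailing=FALSE,
    sepChar=',', deciChar='.', datasetName='engagement')

The data frame should then be available in the R command editor as engagement.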
I will also log an enhancement to make the default dataset name when opening a file be the name of the file itself. This is easy to do.
With R data files this is not a problem, because we load all data frame objects into the grid, and the name of each dataset in the grid continues to be the name of the data frame object.

BlueSky is one of the few packages built on R that let you open and work on multiple data files at once. This naming approach is its way of supporting that for files that have not yet been stored as R data files (.RData). After importing data from a non-R file, simply use File > Save As and save it as an R object (.RData). The next time you open that file, it will keep the name you gave it.

Related

How to keep style format unchanged after writing data using openxlsx in R

I am using openxlsx in order to write the outputs of my data.
I have used the following code to read my data using readxl.
df1=read_excel("C:/my_data.xlsx",skip=2);
Now I want to write the output while keeping the original Excel file, using any possible package. I have used the following code, but it does not keep the original Excel file. Can we do this with R packages?
write.xlsx(df1, 'C:/mydata.xlsx',skip=2)
Given your code, you should have two different data files in your working directory:
"my_data.xlsx" (the one that you loaded) and "mydata.xlsx" (the one that you created through R). R shouldn't overwrite your files if you give them different names.
If there's only one file, are you sure you didn't use the same name for both files? If so, everything should work fine once you give the files different names (e.g. "my_file1.xlsx" and "my_file2.xlsx")!
Also, in general, it's a good idea to give data files informative names so that you don't accidentally delete or overwrite files that you need. For example, if the original Excel file holds your raw data, consider naming it "data_raw.xlsx", make sure that you only ever read it, and whenever you make changes to the data, save the result under a different name (e.g. "data_processed1.xlsx").
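A minimal sketch of that read-raw/write-processed pattern (the file names and the processing step here are just examples):

library(readxl)     # for read_excel()
library(openxlsx)   # for write.xlsx()

raw <- read_excel("data_raw.xlsx", skip = 2)   # raw file is only ever read
processed <- raw[!is.na(raw[[1]]), ]           # an arbitrary example processing step
write.xlsx(processed, "data_processed1.xlsx")  # new file; the raw file stays untouched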
You can also save data files in the native R format .rds using the saveRDS() function; this is especially helpful if you want to keep special attributes of variables, such as factors.
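For example, a small sketch using your df1, with readRDS() to load the object back:

saveRDS(df1, "my_data.rds")          # keeps column types, factor levels, etc.
df1_again <- readRDS("my_data.rds")  # restores the object exactly as saved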
Hope this helps!

How to investigate 5MB+ datasets in RStudio's source editor?

My question:
Can I change the parameters in R so that the source editor can also be used to view data sets larger than 5 MB?
If not, what is your advice?
Background:
I recently stopped looking at data in Excel and switched to R entirely. As I did in Excel and still prefer to do in R, I like to look at the entire frame and then decide on filters.
Problem: Working with the World Development Indicators (WDI) data set, which is over 100 MB, opening it in the source editor does not work; View(df) opens an empty tab in RStudio.
R threw another error when I selected the data set from the Files tab in the column on the right of RStudio, which read:
The selected file 'wdi.csv' is too large to open in the source editor (the file is 104.5 MB and the maximum file size is 5MB).
Solutions?
My alter ego would tell me to increase the source editor's file-size threshold so I could investigate the data there; in brief, change 5 MB to 200 MB. My alter ego would also tell me that I would probably encounter performance issues (since I am using a MacBook Air).
How I resolved the issue:
I used head() and dplyr's glimpse() to get a better idea of the data, but ended up looking at the wdi table in Excel and then filtering it in R. Newly created, smaller data frames could be opened in the source editor without any problems.
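A sketch of that workflow (the column name and filter value are hypothetical; WDI's real columns differ):

library(dplyr)
wdi <- read.csv("wdi.csv")
glimpse(wdi)        # column names and types at a glance
head(wdi, 20)       # first rows in the console
wdi_small <- filter(wdi, region == "Europe")  # hypothetical filter
View(wdi_small)     # now small enough for the viewer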
Thanks in advance!

RStudio: Save data from Viewer

Due to a stupid mistake and a defective USB stick, I lost a bunch of data and am now trying to recover it.
Some of the data is still displayed in the Viewer tabs when I open RStudio. However, I can only save R scripts and R Markdown files out of the Viewer. The displayed data frames are nice and complete, and I can sort and filter them in the Viewer; however, I cannot find a "save" option. Is there a way to save this displayed data to .RData or .csv or something similar?
I would suggest three different approaches, though none of them is guaranteed to work. I have sorted them according to my prior expectation of success.
1) You can copy your whole data frame from the Viewer and paste it into an external spreadsheet program to obtain a .csv file, e.g. via the "Text to Columns" button in MS Excel.
2) You can copy and paste the character string into an object that is passed to the text argument of read.table(), or to dput(); see the sketch after this list. Check out the "Copy your data" section of this famous SO question.
3) Finally, you can use Google Chrome's "Inspect Element" function to inspect the HTML code of the object in the Viewer. Once you find the table, you can copy, paste, and scrape it with an HTML parser, e.g. using the rvest package. Good luck!
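A minimal sketch of approach 2), assuming you have pasted a few rows of Viewer text into a string (the columns shown are made up):

copied <- "id value
1 3.2
2 5.1
3 4.7"
recovered <- read.table(text = copied, header = TRUE)   # rebuild the data frame
write.csv(recovered, "recovered.csv", row.names = FALSE)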
Thanks everybody, there is a way to access the data as .RData files, which was kindly explained to me here.
I used the second method and located the files in %localappdata%\RStudio-Desktop\viewer-cache.

Displaying PNG files from R in Spotfire

I want to pass data from Spotfire to R and then display the plot constructed by R.
What is the best way to do this?
I’ve figured out the trick of getting images into Spotfire. It’s not hard if you follow these directions, but it’s done in a way very different from how you would guess you’d do it in Spotfire, and that’s why it took me a while to figure out.
Here’s an overview of how to do it. You create a DocumentProperty which is a binary object, you write some Spotfire code that gives a value to that Document Property, and you display that binary object using a Spotfire Property Control of the “Label” type.
The confusing parts are that you DON’T use the Spotfire “Insert Image” tool at all, and that you DON’T use the filename generated inside the R code in Spotfire at all. Once you get used to the idea that the two most obvious ways you think you would approach the problem in Spotfire are entirely useless and wrong, you can make some progress.
I’ll leave out the spiderplot specifics because the code’s pretty long.
Here’s what you do.
1) Create a document Property in Spotfire of type “Binary”, e.g., “imageThatGoesBackToSpotfire”
2) You write some R code that generates an image and writes it to a file:
# Get a temporary file name on the local machine. You wouldn't need to do this if you were only
# going to run it yourself, but you do if you intend to let anybody else run it on their own machine.
tempfilebase <- tempfile()
# Append a file name to the temp base.
myFilename <- "someFileName.jpg"
myFullFilename <- paste(tempfilebase, myFilename, sep = "")
# Open a jpeg device that writes to that file.
jpeg(filename = myFullFilename)
# Generate the image, however you normally would in R.
plot(input)
# Close the device, flushing the image to the file.
dev.off()
# Open a binary connection to that file.
myConnection <- file(myFullFilename, open = "rb")
# Read the raw image bytes into a one-column data frame for the binary document property.
imageThatGoesBackToSpotfire <- data.frame(r = readBin(myConnection, what = "raw", n = file.info(myFullFilename)$size))
close(myConnection)
3) Run your R script above, selecting some columns as the "input" to the plot, and have the script return its output to the "imageThatGoesBackToSpotfire" DocumentProperty.
4) Create a text area in Spotfire.
5) Insert a Property Control of type "Label" into the text area. This opens a dialog.
You need to register a data function with inputs and outputs, and the specific PNG data needs to be returned as a binary label.
Some details: http://spotfire.tibco.com/tips/2014/02/25/dynamically-displaying-images-in-a-text-area/

How to write multiple tables, data frames, regression results etc. to one Excel file?

I am looking for an easy way to get objects into MS Excel.
(I am using the preinstalled "Puromycin" dataset for the examples.)
I would like to place the contents of these objects into a single Excel file:
Puromycin
summary(Puromycin$rate)
summary(Puromycin$conc)
table(Puromycin$state)
lm( conc ~ rate , data=Puromycin)
By "contents" i mean what is shown in the console when i press enter. I dont know what to call it.
I tried to do this:
sink("datafilewhichexcelhopefullyunderstands.csv")
Puromycin
summary(Puromycin$rate)
summary(Puromycin$conc)
table(Puromycin$state)
lm( conc ~ rate , data=Puromycin)
sink()
This gives me a file with the .csv extension; however, when I open the file in Notepad,
there is no comma separation. That means that I can't get Excel to open it properly. By properly
I mean that each number is in its own cell.
Others have suggested this for a similar problem
https://stackoverflow.com/a/13007555/1831980
But as a novice I feel that that solution is too complex, and I am hoping for a simpler method.
What I am doing now is this:
write.table(Puromycin, file="clipboard" , sep=";" , row.names=FALSE )
write.table(summary(Puromycin$conc), file="clipboard" , sep=";" , row.names=FALSE )
... etc...
But this requires a lot of copying and pasting, which I hope to eliminate.
Any help would be appreciated.
write.table and its friends are intended to write out columns of data separated by whatever separator is specified. Your clipboard ends up holding several different kinds of output because summary() returns its own type of object rather than a simple table.
For writing the data values out, you can use write.csv on a data frame and then open the result with Excel. For example, Puromycin is already a data frame (which you can see with str(Puromycin)), so you can write it out directly:
write.csv(file = "some file.csv", x = Puromycin)
The file will go into the current working directory (which can be determined with getwd()).
Writing out/saving the results of the regression model is a bit more of a challenge. You could certainly use sink as you did, but give your file a .txt extension so a text editor can open it. There are fancier methods (Sweave, knitr) which you might want to look into in the long run, as they can generate really nice reports automatically.
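A small sketch of that sink() variant, writing plain text rather than a pseudo-CSV:

sink("results.txt")                       # start capturing console output
print(summary(Puromycin$rate))
print(table(Puromycin$state))
print(lm(conc ~ rate, data = Puromycin))
sink()                                    # restore normal console output

Excel cannot parse this directly, but any text editor (or Word) can open it.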
In the meantime, get to know str() (applied to any R object), as it will be your friend. You can see all the objects in your workspace with ls().
This will only be helpful if you are prepared to use Excel's Data > Text to Columns feature:
capture.output(
    lapply(list(Puromycin,
                summary(Puromycin$rate),
                summary(Puromycin$conc),
                table(Puromycin$state),
                lm(conc ~ rate, data = Puromycin)),
           FUN = print),
    file = "datafilewhichexcelhopefullyunderstands.csv", append = TRUE)
The problem is that Excel will not read the whitespace as a cell separator unless you specifically tell it to. You can (and I have often done so) use the fixed-width field input offered by the Text to Columns dialog interface.
Your simplest option may be the RExcel tool, which transfers information between R and Excel; however, it is not free software.
The XLConnect package is another option; it can be used to write information directly to an Excel file.
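A minimal XLConnect sketch (the workbook and sheet names here are my own choices) that puts two of your objects into one workbook:

library(XLConnect)
wb <- loadWorkbook("puromycin_results.xlsx", create = TRUE)
createSheet(wb, name = "data")
writeWorksheet(wb, Puromycin, sheet = "data")
createSheet(wb, name = "state_counts")
writeWorksheet(wb, as.data.frame(table(Puromycin$state)), sheet = "state_counts")
saveWorkbook(wb)   # writes the .xlsx file to the working directory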
The tricky part is the lm call. lm does not return a simple vector, matrix, or data frame (all of which are easy to convert to CSV or send directly), and there is no obvious way to map the various parts of a list onto cells in a spreadsheet. A better approach is to use extractor functions to pull the important parts from the return of lm, or from the summary of the lm object, and send those to Excel using the other tools.
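For instance, a sketch using extractor functions on the fit (the output file name is arbitrary):

fit <- lm(conc ~ rate, data = Puromycin)
# coef(summary(fit)) is a plain numeric matrix of estimates, SEs, t- and p-values
coef_table <- as.data.frame(coef(summary(fit)))
write.csv(coef_table, file = "lm_coefficients.csv")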
If you can tell us more about why you want the numbers in Excel and what you plan to do with them afterwards, we may be able to offer better help (you may be able to skip Excel entirely).
If the main goal is to share output with others, then you should really look at the knitr package (or other related packages). This will not create Excel files, but it can be used (along with the pandoc program and possibly other tools) to create a report in a format that is easy to share with people not familiar with R. You could put everything into a .pdf file or a .docx file (the latter is read by MS Word and has tables which can be edited in Word). There is no simple way to get edits back into R, but with tracked changes you can easily see what changes have been made and hand-edit your R script/template accordingly.
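As one route, a minimal R Markdown sketch (the file name report.Rmd is my own, and this assumes the rmarkdown package and pandoc are installed) that renders all of the above straight to a Word file:

---
title: "Puromycin results"
output: word_document
---

```{r}
summary(Puromycin$rate)
table(Puromycin$state)
lm(conc ~ rate, data = Puromycin)
```

Render it from R with rmarkdown::render("report.Rmd") to produce report.docx.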
