Loading saved viewpoints from Navisworks into Forge

Is it possible to load saved viewpoints from Navisworks into the Forge Viewer? We are currently having to save a model multiple times and then load it multiple times. If we could bring in saved viewpoints, the process would be much better.
Thanks
Daniel

Related

Is there a way of rendering rmarkdown to an object directly without saving to disk?

We are working on a project where, following an analysis of data using R, we use rmarkdown to render an HTML report which will be returned to users who upload the original dataset. This will be part of a complex online system involving multiple steps. One of the requirements is that the rmarkdown HTML will be serialized and saved in a SQL database for the system to return to users.
My question is: is there a way to render the markdown directly to an object in R to allow for direct serialization? We want to avoid saving to disk unless absolutely needed, as there will be multiple parallel processes doing similar tasks and resources might be limited. From my research so far it doesn't seem possible, but I would appreciate any insight.
You are correct, it's not possible due to the architecture of rmarkdown.
If you have this level of control over your server, you could create a RAM disk, using part of your memory to simulate a hard drive. The actual hard disk won't be used.
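A minimal sketch of that idea, assuming tempdir() points at the RAM disk (e.g. a tmpfs mount set up by the admin) and that the report source is a hypothetical report.Rmd: render into the temporary directory, read the HTML back into memory as a raw vector, and remove the file.

    library(rmarkdown)

    rendered <- rmarkdown::render("report.Rmd", output_dir = tempdir(), quiet = TRUE)
    html_raw <- readBin(rendered, what = "raw", n = file.size(rendered))
    unlink(rendered)                      # nothing is left on the (RAM) disk

    # html_raw can now be written to a BLOB column, or passed to serialize()

The disk write still happens, but only to memory-backed storage, so the parallel processes never touch the physical drive.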

R - Debugging data-heavy scripts without reloading data

I have some algorithms that load quite large CSV files and was wondering if it is possible to debug some of the logic without having to reload the data every time?
In Spyder you can just debug a single cell; is there something similar in R?
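One common workaround (a sketch, not from the thread; the file and function names are placeholders) is to keep the loaded data in the workspace and guard the expensive read, so you can re-source and debug the downstream logic without rereading the CSV:

    if (!exists("big_df")) {
      big_df <- read.csv("big_file.csv")   # only runs when the object is missing
    }

    debug(my_algorithm)                    # step through the logic only
    result <- my_algorithm(big_df)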

Speeding up UI loading in Shiny

In the global.R file I am reading some 10-12 Excel files and some user-defined functions and modules, and doing some data manipulation (not a heavy task) on top of that. I want to speed up loading the Shiny app. I was thinking of saving everything in an .RData file and then doing load("mydata.RData", envir = .GlobalEnv) instead of reading the Excel files and sourcing the functions in global.R. Would that improve the loading time of the Shiny app? I am fine even if the UI appears while the server is still loading; I am more interested in showing the UI to the user instantly, and the user can wait for some calculation. I am using Docker for production, hence mainly interested in UI loading time, as the container takes some time to spin up, which the user has to wait for, and then loading the app also takes time.
That is a big topic, and there are several ways you can improve your Shiny app so that it runs faster.
The first idea would be to put every tab into a module. The code you normally run within your ui.R gets a lot shorter, and the app gets faster, since the plots, files, etc. needed within a module are only loaded once the user clicks on that tab.
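A minimal module skeleton showing that pattern (the tab name, IDs and data are placeholders):

    library(shiny)

    # UI side of the module: every output ID is namespaced via NS(id)
    salesTabUI <- function(id) {
      ns <- NS(id)
      tabPanel("Sales", plotOutput(ns("trend")))
    }

    # server side of the module
    salesTabServer <- function(id, sales_data) {
      moduleServer(id, function(input, output, session) {
        output$trend <- renderPlot(plot(sales_data$date, sales_data$amount))
      })
    }

    # in ui.R:      tabsetPanel(salesTabUI("sales"), ...)
    # in server.R:  salesTabServer("sales", sales_data)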
Make your app more efficient by using data.table. This package is specifically designed for speed, and you can even combine it with dplyr. In your case, try loading your files with data.table::fread().
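A small sketch of that suggestion, assuming the inputs have been exported to CSV (fread() reads delimited text, not .xlsx) and using hypothetical file and column names:

    library(data.table)

    dt <- fread("inputs/sales.csv")                       # much faster than read.csv
    monthly <- dt[, .(total = sum(amount)), by = month]   # data.table syntax; dplyr verbs also work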
When it comes to plotting, you can even use JavaScript's D3. There is a package called r2d3, which enables you to use D3 to plot within your Shiny app.
Convert the Excel files into a more machine-readable format, like .rds. This also increases the loading speed.
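A one-off conversion sketch, assuming readxl is available and using hypothetical file names:

    library(readxl)

    df <- read_excel("inputs/lookup.xlsx")   # slow Excel read, done once, offline
    saveRDS(df, "inputs/lookup.rds")

    # in global.R, replace the Excel read with the much faster:
    lookup <- readRDS("inputs/lookup.rds")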
I would suggest you run your Shiny app once with the profvis package. After you have loaded your app and closed it again, it shows you what exactly took so much time. Maybe it was not the loading after all, but a different problem instead? Then you could go from there.
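A minimal profiling sketch, assuming the app lives in a hypothetical app/ directory:

    library(profvis)
    library(shiny)

    profvis({
      runApp("app")    # interact with the app, then close it to stop profiling
    })

    # the flame graph that opens shows where the startup time is actually spent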

Project organization with R [duplicate]

Possible Duplicate:
Workflow for statistical analysis and report writing
I have been programming with R for not too long but am running into a project organization question that I was hoping somebody could give me some tips on. I am finding that a lot of the analysis I do is ad hoc: that is, I run something, think about the results, tweak it and run some more. This is conceptually different from a language like C++, where you think about the entire thing you want to run before coding. It is a huge benefit of interpreted languages. However, the issue that comes up is that I end up having a lot of .RData files that I save so I don't have to source my script every time. Does anyone have any good ideas about how to organize my project so I can return to it a month later and have a good idea of what each file is associated with?
This is sort of a documentation question, I guess. Should I document my entire project at each stage and be vigorous about cleaning up files that are no longer necessary but were a byproduct of the research? This is my current system, but it is a bit cumbersome. Does anyone else have any other suggestions?
Per the comment below: One of the key things that I am trying to avoid is the proliferation of .R analysis files and .RData sets that go along with them.
Some thoughts on research project organisation here:
http://software-carpentry.org/4_0/data/mgmt/
the take-home message being:
Use Version Control for your programs
Use sensible directory names
Use Version Control for your metadata
Really, Version Control is a good thing.
My analysis is a knitr document, with some external .R files which are called from it.
All data is in a database, but during my analysis the processed data are saved as .RData files. They are only recreated from the database when I delete the .RData files and run the analysis again. It works like a cache, saving database access and data-processing time when I rerun (parts of) my analysis.
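A sketch of that cache pattern, with a hypothetical database connection con, table name and processing function:

    cache_file <- "processed_data.RData"

    if (file.exists(cache_file)) {
      load(cache_file)                  # restores `processed` from the cache
    } else {
      raw <- DBI::dbGetQuery(con, "SELECT * FROM measurements")
      processed <- prepare_data(raw)    # hypothetical processing step
      save(processed, file = cache_file)
    }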
Using a knitr (Sweave, etc.) document for the analysis enables you to easily write a documented workflow with the results included. And knitr caches the results of the analysis, so small changes usually do not result in a full rerun of all R code, but only of a small section. That saves quite some running time for a bigger analysis.
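For example, marking an expensive chunk in the .Rmd source with cache=TRUE (the chunk label, model and data are placeholders) makes knitr reuse its results until the chunk's code changes:

    ```{r fit-model, cache=TRUE}
    fit <- lm(y ~ x, data = analysis_data)
    ```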
(Ah, and as said before: use version control. Another tip: working with knitr and version control is very easy with RStudio.)

How can I update .RData?

After reading this question I attempted to clean out my workspace and found that each time I opened R all the original items I had recently removed were restored. I then checked .RData and found that it had not been modified in a few weeks even though I repeatedly saved the workspace image. How often is .RData updated and how can I change when .RData is updated so that it reflects more recent changes?
It gets modified if and when you
use save.image()
use q() and answer yes
Otherwise it does not get changed.
My personal preference is to explicitly load and save data I want to cache across sessions or for further analysis.
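A minimal sketch of that explicit approach, with hypothetical object and file names:

    saveRDS(fit, file = "fit.rds")    # cache exactly one object, deliberately
    # ...later, in a fresh session:
    fit <- readRDS("fit.rds")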
