Xcos - Include an xcos model inside another? - scilab

Is it possible to place one xcos model inside another?
I'm working on a motor control simulation. I would like to put together a script which plots open and closed loop responses. The most straightforward way to do this is to put the motor model in one superblock used for the open loop simulation, then make a copy of that superblock for the closed loop simulation. Unfortunately, that means any time I change the motor model, I have to change both copies (or copy the block again, which breaks the connections).
Ideally, I would like the motor model to live in a "motor.zcos" file and be able to place two instances of it in a main.zcos file. Changing motor.zcos would then naturally affect both instances.
Is there any way to do this? Or is there another recommended way of solving the problem?
[Scilab 6.1.1, Windows 10]

You can use a single diagram for both simulations: just open the feedback loop by removing a link.

Related

Trying to find the cut-off point of a built-in function in R, since it currently fails to run

In R, I am trying to use the markov chain package to convert clickstream data to a Markov chain. I have 4 GB of RAM, but the command cannot finish (after a lot of time), because at some point the conversion cannot allocate more than 3969 MB of data (that is what the message on screen says). I am trying to find out up to what point the program will run: if I have, say, `n' nodes, for how many nodes (obviously fewer than n) or rows (the rows may contain the same or different nodes) will the program still run? I am trying to do attribution modelling in R. The conversion paths are converted from clickstream form to a Markov chain, and I am trying to find the transition matrix from that.
Attached are an image of the function with a sample dataset (here h, c, d and p are different nodes) and an image of the code for a small clickstream dataset.
The function converts this data into a Markov chain containing a lot of important pieces, of which I mainly need the transition matrix and the steady state. As I increase the data size (the number of different channel paths or users is not important; it is the number of different nodes that matters), the function fails because it cannot allocate more than the 4 GB of RAM. I tried trial and error to find the point beyond which the function stops working, but that did not help. Is there a way to know up to which node (or row) the function will work, so that I can generate the transition matrix up to that point? I would also like to know how memory usage grows with each additional node, as I suspect the relationship between the two is not linear.
Please let me know if the question is not specific enough and if it might need any more details.
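As a rough illustration of that last point (a hedged sketch only: it assumes the markovchain package and a dense numeric transition matrix, and the tiny sequence below is made up), the transition matrix is n x n, so memory should grow roughly quadratically with the number of distinct nodes:
library(markovchain)
# a toy sequence of nodes; real data would come from the clickstream paths
nodes <- c("h", "c", "d", "p", "c", "h", "d", "p", "h")
fit <- markovchainFit(data = nodes)      # maximum-likelihood fit
fit$estimate@transitionMatrix            # n x n transition matrix
steadyStates(fit$estimate)               # steady-state distribution
# a dense n x n matrix of doubles needs about 8 * n^2 bytes, so memory
# grows roughly quadratically with the number of distinct nodes:
n <- 20000
8 * n^2 / 2^30                           # ~3 GB for 20,000 distinct nodes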

Is it possible to get R intellisense or console output in PowerBI?

I've been trying to get started using R in Power BI. However, the lack of IntelliSense and of console output is hindering me, since I can't develop the script in RStudio or Visual Studio with those aids.
EDIT: Power BI does a nice thing where you can import data into the application and then use the drag-and-drop tools to play around with it; when you select data fields that you want to add to an R plot, it creates an R stub that pulls those fields into a data.frame, which makes things easy. However, that data lives "inside" Power BI; I can't do the same thing in RStudio because that data context doesn't exist there.
What options are there? Am I missing something?
Thanks.
I think what you are looking for is R Tools for Visual Studio. It should make IntelliSense pick up the context and tell you what you can do with each object. As for the output, you can print and check the results in the output window in VS.
It also has an R Interactive window, so when you debug you have a window to place code in and evaluate it in that context. Say you want to debug a statement plot(x, exp(x), type="l", col="green"); instead of making one fix at a time and re-running to check the results, you can just make the fix, say plot(x, exp(x), type="l", col="red"), and see how it evaluates right away. This comes in handy when you want to try a couple of things and check the results, since you can do it in one go instead of making one change and re-running x times.
Let me know if this does the job for you.
The only data context you have in R inside Power BI is the set of Tables you pass as parameters to the call of R.Execute. Behind the scenes, these Tables are simply dropped to disk as CSV files, and the R process then picks them up to do whatever you want. That is in fact the only relation between R and Power BI Desktop, if we are talking about transformations using R.
You can easily save such a context from Power BI using an R script of just one function call, save.image("filename.RData"), and then open it with load("filename.RData") in your target R development environment.
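A minimal sketch of that round trip (the path is illustrative, and `dataset` is the data.frame Power BI hands to the R script):
# inside the Power BI R script
save.image("C:/temp/powerbi_context.RData")
# later, in RStudio or RTVS, restore the same objects and keep developing
load("C:/temp/powerbi_context.RData")
str(dataset)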
While you should always test your R code elsewhere first, that is often not enough for debugging purposes.
For general-purpose debugging, you can nest your whole script in a block like this:
out <- capture.output({...})
Any intermediate value can be captured inside this block with a cat():
cat(intermediate_value_i_want_to_test, '\n')
After your script block is done, simply convert the output to a data.frame; each cat() call will then appear as a new row of out.
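A minimal sketch of that pattern (object names are made up):
out <- capture.output({
  x <- 1:10
  intermediate <- mean(x)
  cat("mean so far:", intermediate, "\n")        # captured instead of printed
  y <- cumsum(x)
  cat("last cumulative sum:", tail(y, 1), "\n")
})
# each cat() line becomes one row
data.frame(message = out)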

Spawn processes that run in parallel in R

I am writing a script that needs to run continuously, storing information in a MySQL database.
However, at some point in the day I would like to produce some summaries of the data being collected, but writing this in the same script would stop the data collection while the summaries are computed. Here's a sketch of the problem:
while (TRUE) {
  # get data and store it in the relational database

  # at some point of the day (or time interval) do some summaries
  if (time == certain_time) {
    source("analyze_data.R")
  }
}
The problem is that I would like the data collection not to stop, with the summaries being executed by another core of the computer.
I have seen references to the parallel and multicore packages, but my impression is that they are aimed at repetitive tasks applied over vectors or lists.
You can use parallel to fork a process, but you are right that the program will wait forever for all the forked processes to come back together before proceeding (that is essentially the use case parallel was designed for).
Why not run two separate R programs, one that collects the data and one that grabs it? Then, you simply run one continuously in the background and the other at set times. The problem then becomes one of getting the data out of the continuous data gathering program and into the summary program.
Do the logic outside of R:
Write two scripts: one with the while loop that stores data, the other with the check. Start the while loop in one process and just leave it running.
Meanwhile, run the other (checking) script on demand to crunch the data, or put it in a cron job.
There are robust tools outside of R to handle this kind of thing; why do it inside R?
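A sketch of that split (package choice, table names and schedule are assumptions, not from the question):
# collector.R -- start once and leave running:  Rscript collector.R
library(DBI)
con <- dbConnect(RMySQL::MySQL(), dbname = "mydb", user = "u", password = "p")
repeat {
  row <- data.frame(ts = Sys.time(), value = rnorm(1))   # placeholder for real data
  dbWriteTable(con, "measurements", row, append = TRUE, row.names = FALSE)
  Sys.sleep(60)
}

# summarize.R -- run on demand or from cron, e.g.
#   0 18 * * * Rscript /path/to/summarize.R
library(DBI)
con <- dbConnect(RMySQL::MySQL(), dbname = "mydb", user = "u", password = "p")
daily <- dbGetQuery(con, "SELECT DATE(ts) AS day, AVG(value) AS avg_value
                          FROM measurements GROUP BY DATE(ts)")
print(daily)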

Source scripts or certain lines of script

I have multiple R scripts for different models, and I need to make them easily accessible for other people to use. So I would like to have one script that only contains source() calls to run the other scripts, so that people don't have to search through many files to find the right one. Some of the scripts contain more than one model, so if possible I would like to source only specific blocks of lines from those scripts.
For example, to find the accuracy of ARIMA in different ways I have to run the following scripts in turn:
Read data
Arima
Accuracy of in-sample
Accuracy out Read data
Accuracy of out forced param
Accuracy out sample
The number of different scripts raises the risk of an error, especially as three of those scripts contain five other models; when running them myself I would just highlight the specific model I want and run it, but for other people that may be more confusing.
I know that I have to use source() to get the scripts to run, but I'm stuck on how to source only certain parts of a script, and on the correct way to source in general.
Rather than trying to source parts of scripts, move these bits of code into functions, and then just call the functions you need.
Start by searching around for how to write R functions.
You can put all your functions into a single file, source it, and then give other people simple recipes of which functions to call and in what order.
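A sketch of what that could look like (file, function and column names are illustrative):
# models.R -- one function per block you used to highlight and run by hand
read_data <- function(path) {
  read.csv(path)
}
fit_arima <- function(y) {
  arima(y, order = c(1, 1, 1))
}
accuracy_in_sample <- function(fit) {
  mean(abs(residuals(fit)))   # or forecast::accuracy() if you use that package
}

# run_arima.R -- the single entry point you hand to other people
source("models.R")
y   <- read_data("sales.csv")$value
fit <- fit_arima(y)
accuracy_in_sample(fit)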
You could write one script that automates the whole thing and then use knitr to create a Word or PDF document of it for other people to read easily.

writing functions vs. line-by-line interpretation in an R workflow

Much has been written here about developing a workflow in R for statistical projects. The most popular workflow seems to be Josh Reich's LCFD model. With a main.R containing code:
source('load.R')
source('clean.R')
source('func.R')
source('do.R')
so that a single source('main.R') runs the entire project.
Q: Is there a reason to prefer this workflow to one in which the line-by-line interpretive work done in load.R, clean.R, and do.R is replaced by functions which are called by main.R?
I can't find the link now, but I had read somewhere on SO that when programming in R one must get over the desire to write everything in terms of function calls---that R was MEANT to be written in this line-by-line interpretive form.
Q: Really? Why?
I've been frustrated with the LCFD approach and am going to probably write everything in terms of function calls. But before doing this, I'd like to hear from the good folks of SO as to whether this is a good idea or not.
EDIT: The project I'm working on right now is to (1) read in a set of financial data, (2) clean it (quite involved), (3) Estimate some quantity associated with the data using my estimator (4) Estimate that same quantity using traditional estimators (5) Report results. My programs should be written in such a way that it's a cinch to do the work (1) for different empirical data sets, (2) for simulation data, or (3) using different estimators. ALSO, it should follow literate programming and reproducible research guidelines so that it's simple for a newcomer to the code to run the program, understand what's going on, and how to tweak it.
I think that any temporary stuff created in source'd files won't get cleaned up. If I do:
x <- matrix(runif(big^2), big, big)
z <- sum(x)
and source that as a file, x hangs around although I don't need it. But if I do:
ff <- function(big) {
  x <- matrix(runif(big^2), big, big)
  z <- sum(x)
  return(z)
}
and instead of source, do z=ff(big) in my script, the x matrix goes out of scope and so gets cleaned up.
Functions enable neat little reusable encapsulations and don't pollute outside themselves. In general, they don't have side effects. Your line-by-line scripts could be using global variables and names tied to the data set currently in use, which makes them hard to reuse.
I sometimes work line-by-line, but as soon as I get more than about five lines I see that what I have really needs making into a proper reusable function, and more often than not I do end up re-using it.
I don't think there is a single answer. The best thing to do is keep the relative merits in mind and then pick an approach for that situation.
1) functions. The advantage of not using functions is that all your variables are left in the workspace and you can examine them at the end. That may help you figure out what is going on if you have problems.
On the other hand, the advantage of well designed functions is that you can unit test them. That is, you can test them apart from the rest of the code, which makes them easier to test. Also, when you use a function, modulo certain lower-level constructs, you know that the results of one function won't affect the others unless they are explicitly passed out, and this may limit the damage that one function's erroneous processing can do to another's. You can use the debug facility in R to debug your functions, and being able to single-step through them is an advantage.
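For example, a small hedged sketch of both points, using the testthat package for the unit test and base R's debug() for single-stepping (the function itself is made up):
clean_prices <- function(x) {
  x[!is.na(x) & x > 0]
}

# unit test: runs independently of the rest of the project
library(testthat)
test_that("clean_prices drops NA and non-positive values", {
  expect_equal(clean_prices(c(1, NA, -2, 3)), c(1, 3))
})

# single-step through the function when something looks wrong
debug(clean_prices)
clean_prices(c(1, NA, -2, 3))
undebug(clean_prices)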
2) LCFD. Whether you should use a load/clean/func/do decomposition at all, regardless of whether it's done via source or functions, is a second question. The problem with this decomposition, however it's done, is that you need to run one step just to be able to test out the next, so you can't really test them independently. From that viewpoint it's not the ideal structure.
On the other hand, it does have the advantage that you may be able to replace the load step independently of the other steps if you want to try it on different data and can replace the other steps independently of the load and clean steps if you want to try different processing.
3) Number of files. There may be a third question implicit in what you are asking: whether everything should be in one source file or several. The advantage of putting things in different source files is that you don't have to look at irrelevant items. In particular, routines that are not being used, or not relevant to the function you are currently looking at, won't interrupt the flow, since you can arrange for them to be in other files.
On the other hand, there may be an advantage in putting everything in one file from the viewpoint of (a) deployment, i.e. you can just send someone that single file, and (b) editing convenience as you can put the entire program in a single editor session which, for example, facilitates searching since you can search the entire program using the editor's functions as you don't have to determine which file a routine is in. Also successive undo commands will allow you to move backward across all units of your program and a single save will save the current state of all modules since there is only one. (c) speed, i.e. if you are working over a slow network it may be faster to keep a single file in your local machine and then just write it out occasionally rather than having to go back and forth to the slow remote.
Note: One other thing to think about is that using packages may be superior for your needs relative to sourcing files in the first place.
No one has mentioned an important consideration when writing functions: there's not much point in writing them unless you're repeating some action again and again. In some parts of an analysis, you'll be doing one-off operations, so there's not much point in writing a function for them. If you have to repeat something more than a few times, it's worth investing the time and effort to write a reusable function.
Workflow:
I use something very similar:
Base.r: pulls primary data, calls on other files (items 2 through 5)
Functions.r: loads functions
Plot Options.r: loads a number of general plot options I use frequently
Lists.r: loads lists, I have a lot of them because company names, statements and the like change over time
Recodes.r: most of the work is done in this file, essentially it's data cleaning and sorting
No analysis has been done up to this point. This is just for data cleaning and sorting.
At the end of Recodes.r I save the environment to be reloaded into my actual analysis.
save(list=ls(), file="Cleaned.Rdata")
With the cleaning done, functions and plot options ready, I start getting into my analysis. Again, I continue to break it up into smaller files that are focused on topics or themes, like: demographics, client requests, correlations, correspondence analysis, plots, etc. I almost always run the first 5 automatically to get my environment set up, and then I run the others on a line-by-line basis to ensure accuracy and explore.
At the beginning of every file I load the cleaned data environment and prosper.
load("Cleaned.Rdata")
Object Nomenclature:
I don't use lists, but I do use a nomenclature for my objects.
df.YYYY # Data for a certain year
demo.describe.YYYY ## Demographic data for a certain year
po.describe ## Plot option
list.describe.YYYY ## lists
f.describe ## Functions
Using a friendly mnemonic to replace "describe" in the above.
Commenting
I've been trying to get myself into the habit of using comment(x) which I've found incredibly useful. Comments in the code are helpful but oftentimes not enough.
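comment() is base R; a quick illustration (the object and note are made up):
prices <- data.frame(year = 2019:2021, value = c(10, 12, 9))
comment(prices) <- "Cleaned 2021-05-01; excludes negative values"
comment(prices)   # retrieve the note later; it is not shown when the object prints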
Cleaning Up
Again, here, I always try to use the same object names for easy cleanup: tmp, tmp1, tmp2, tmp3, for example, making sure to remove them at the end.
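For example (the intermediate steps are made up, following the df.YYYY naming above):
tmp  <- merge(df.2019, df.2020, all = TRUE)
tmp1 <- subset(tmp, !is.na(value))
# ... work with the tmp objects ...
rm(list = ls(pattern = "^tmp"))   # drop every tmp* object in one go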
Functions
There has been some commentary in other posts about only writing a function for something if you're going to use it more than once. I'd like to adjust this to say: if you think there's a possibility that you may EVER use it again, you should throw it into a function. I can't even count the number of times I wished I had written a function for a process I created on a line-by-line basis.
Also, BEFORE I change a function, I throw it into a file called Deprecated Functions.r, again, protecting against the "how the hell did I do that" effect.
I often divide up my code similarly to this (though I usually put Load and Clean in one file), but I never just source all the files to run the entire project; to me that defeats the purpose of dividing them up.
Like the comment from Sharpie, I think your workflow should depend a lot on the kind of work you're doing. I do mostly exploratory work, and in that context, keeping the data input (load and clean) separate from the analysis (functions and do) means that I don't have to reload and reclean when I come back the next day; I can instead save the data set after cleaning and then import it again.
I have little experience doing repetitive munging of daily data sets, but I imagine that I would find a different workflow helpful; as Hadley answers, if you're only doing something once (as I am when I load/clean my data), it may not be helpful to write a function. But if you're doing it over and over again (as it seems you would be) it might be much more helpful.
In short, I've found dividing up the code helpful for exploratory analyses, but would probably do something different for repetitive analyses, just like you're thinking about.
I've been pondering workflow tradeoffs for some time.
Here is what I do for any project involving data analysis:
Load and Clean: Create clean versions of the raw datasets for the project, as if I were building a local relational database. Thus, I structure the tables in third normal form where possible. I perform basic munging, but I do not merge or filter tables at this step; again, I'm simply creating a normalized database for a given project. I put this step in its own file, and I save the objects to disk at the end using save.
Functions: I create a function script with functions for data filtering, merging and aggregation tasks. This is the most intellectually challenging part of the workflow as I'm forced to think about how to create proper abstractions so that the functions are reusable. The functions need to generalize so that I can flexibly merge and aggregate data from the load and clean step. As in the LCFD model, this script has no side effects as it only loads function definitions.
Function Tests: I create a separate script to test and optimize the performance of the functions defined in step 2. I clearly define what the output from the functions should be, so this step serves as a kind of documentation (think unit testing).
Main: I load the objects saved in step 1. If the tables are too big to fit in RAM, I can filter the tables with a SQL query, keeping with the database thinking. I then filter, merge and aggregate the tables by calling the functions defined in step 2. The tables are passed as arguments to the functions I defined. The output of the functions are data structures in a form suitable for plotting, modeling and analysis. Obviously, I may have a few extra line by line steps where it makes little sense to create a new function.
This workflow allows me to do lightning-fast exploration at the Main.R step, because I have built clear, generalizable, and optimized functions. The main difference from the LCFD model is that I do not perform line-by-line filtering, merging or aggregating; I assume that I may want to filter, merge, or aggregate the data in different ways as part of exploration. Additionally, I don't want to pollute my global environment with a lengthy line-by-line script; as Spacedman points out, functions help with this.
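A skeleton of that layout, with file, object and column names as assumptions:
# 01_load_clean.R -- build the normalized "database" for the project and save it
trades   <- read.csv("raw/trades.csv")
accounts <- read.csv("raw/accounts.csv")
save(trades, accounts, file = "data/clean.RData")

# 02_functions.R -- reusable filtering / merging / aggregation helpers
merge_trades <- function(trades, accounts, year) {
  merged <- merge(trades, accounts, by = "account_id")
  merged[merged$year == year, ]
}

# main.R -- fast exploration built on the two files above
load("data/clean.RData")
source("02_functions.R")
trades_2020 <- merge_trades(trades, accounts, year = 2020)
summary(trades_2020)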
