Dymos: how to use a previous trajectory solution as an initial guess? - openmdao

I am running my trajectory problem multiple times in a row while varying a parameter to generate plots and compare to other things. I think I can make it run faster by just using the previous solution as a guess.
Would I do something like
p['traj.phase_1.states:v'] = prev_p.get_val('traj.phase_1.states:v')
Also, is there a single function to load the file "dymos_simulation.db" into memory?

The dymos.run_problem function is intended to be the mechanism that makes this simple.
There is currently a PR that addresses some shortcomings; it is expected to be merged sometime today and included in dymos 0.18.0 within the next day or two. In the meantime, you can test against the PR's source branch if you like:
https://github.com/OpenMDAO/dymos/pull/510
First, you can simulate out the initial guess of the controls (this is not recommended if you're likely to hit a singularity in the ODE during your simulation).
dymos.run_problem(p, run_driver=False, simulate=True)
That will generate the file 'dymos_simulation.db'. Then you can run
dymos.run_problem(p, run_driver=True, simulate=True, restart='dymos_simulation.db')
It will use the simulated guess as the initial guess for the solution. This should adequately satisfy the collocation constraints and give the optimizer an easier path to the solution.

Related

run_driver() / run_problem() "converged" feedback

I occasionally don't get convergence on my problem. My problem is set up as a Dymos problem, and I am using IPOPT as my optimizer. If I am only running the problem once, I can check IPOPT.out for the converged string, and that's fine.
I often want to run parameter sweeps, where I vary boundary conditions and problem options. I use Ray (https://www.ray.io/), a Python library for running parallel processes, to do these sweeps. I turn off all the file I/O that I can for this, as otherwise the multiple processes interfere with each other when writing to file.
However, it's then difficult to know whether a particular process/case converged. For this reason, having run_problem() return information on convergence would be useful. It doesn't seem to do that, so is there some other way to get convergence info that does not involve reading a file?
I do realize there is the whole DOE driver system that is set up for OpenMDAO. However, the learning curve looked rather steep. I got parallel processing working with Ray in a matter of hours, and it works quite well except for this one issue.
prob.driver.fail should be False if the optimization was successful, and it doesn't need to be read from a file. However, given the various levels of success that optimizers report, this might not be completely accurate. For instance, "solved to acceptable tolerance" vs. "optimal solution found" is a little difficult to capture in a simple boolean output, and we should probably find a better way to report the optimizer's success.

Saving Data in Dymos Changes Optimisation and Simulation Results

I had a similar issue as expressed in this question. I followed Rob Flack's answer but had issues. If anyone could help me out, I would appreciate it.
I used the code suggested in the answer but had an issue: it changed the simulation results. I added a line to the script for the min_time_climb example that goes like this:
phase.add_timeseries_output('aero.mach', units=None, shape=(1,), output_name="recorded_mach")
I used the name "recorded_mach" so as not to override anything else Dymos may or may not have been recording. The issue is that the default altitude (h) vs. time graph actually changed, both the discrete points and the simulation curve. I ended up recording 4 variables with commands similar to the one shown above, and that somehow made the simulation track the discrete optimisation points on the graph better. When I recorded another 4 variables on top of that, it made it track worse. I find this very strange, because I don't see why recording the simulation should change its output.
Have you ever come across this? Any insight you could provide into the issue would be greatly appreciated.
Notes:
I have somewhat modified the example to fit a different situation (different thrust and fuel-burn data, different lift and drag polars, different height and speed goals) before implementing the code described above. However, it was still working fine.
Without some kind of example to look at, I can only make an educated guess. So please take my answer with a grain of salt.
Some optimization problems have very ill-conditioned Jacobians and/or KKT matrices (which you as a user would not normally see, but which can be problematic nonetheless). There are many potential causes for this ill conditioning, but some common ones are very large derivatives (i.e. approaching infinity) or very large ranges in magnitude between different derivatives. Another common cause is the introduction of a saddle point, where you have an infinite number of answers that are all equally good. Sometimes you can fix the problem with scaling; other times you need to re-work the problem formulation.
Ill conditioning has two bad effects on the optimizer. First, it makes it very hard for the numerics inside to compute the inverses that are needed to compute step sizes. The optimizer will get an answer, but it may be highly subject to numerical noise. Second, it may prevent certain approximations (like BFGS) from performing well in the first place.
In these cases, small changes in execution order or extra steps (e.g. case recording) can cause the optimizer to take a different path. If you're finding that the path ultimately leads one case to work and another to fail, then you might have a marginally stable problem where you got lucky one time and not the other.
Look carefully for anything singular-like in your Jacobian. Zero rows or columns? A constraint that happens to be satisfied but still has a zero row is a problem that comes up in Dymos cases when you forget to add additional degrees of freedom as you add constraints. Saddle points can also arise if you're not careful with your objective.

Fastest way to reduce dimensionality for multi-classification in R

What I currently have:
I have a data frame with one factor column called "Class", which contains 160 different classes. I have 1200 variables, each one an integer, with no individual cell exceeding the value of 1000 (if that helps). About 1/4 of the cells are zero. The total dataset contains 60,000 rows. I have already used the nearZeroVar function and the findCorrelation function to get it down to this number of variables. In my particular dataset some individual variables may appear unimportant by themselves, but are likely to be predictive when combined with two other variables.
What I have tried:
First I tried just creating a random forest model, planning to use the varImp output to filter out the useless variables, but I gave up after letting it run for days. Then I tried using fscaret, but that ran overnight on an 8-core machine with 64 GB of RAM (same as the previous attempt) and didn't finish. Then I tried:
Feature Selection using Genetic Algorithms. That ran overnight and didn't finish either. I was trying to make principal component analysis work, but for some reason couldn't. I have never been able to successfully do PCA within caret, which could be both my problem and my solution here. I can follow all the "toy" demo examples on the web, but I still think I am missing something in my case.
What I need:
I need some way to quickly reduce the dimensionality of my dataset so I can make it usable for creating a model. Maybe a good place to start would be an example of using PCA with a dataset like mine using Caret. Of course, I'm happy to hear any other ideas that might get me out of the quicksand I am in right now.
I have done only some toy examples too.
Still, here are some ideas that do not fit into a comment.
All your attributes seem to be numeric. Maybe running the Naive Bayes algorithm on your dataset will give some reasonable classifications? It assumes all attributes are independent of each other, but experience shows (and many scholars say) that Naive Bayes results are often still useful despite that strong assumption.
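As a minimal sketch of that suggestion (the e1071 package, the data frame name df, and the 80/20 hold-out split are my assumptions, not part of the original advice):
library(e1071)
set.seed(1)
idx <- sample(nrow(df), 0.8 * nrow(df))          # simple hold-out split
nb <- naiveBayes(Class ~ ., data = df[idx, ])    # all other columns treated as predictors
pred <- predict(nb, newdata = df[-idx, ])
mean(pred == df$Class[-idx])                     # rough hold-out accuracy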
If you absolutely MUST do attribute selection, e.g. as part of an assignment:
Did you try to process your dataset with the free GUI-based data-mining tool Weka? There is an "attribute selection" tab where you have several algorithms (or algorithm-combinations) for removing irrelevant attributes at your disposal. That is an art, and the results are not so easy to interpret, though.
Read this pdf as an introduction and see this video for a walk-through and an introduction to the theoretical approach.
The videos assume familiarity with Weka, but maybe they still help.
There is an RWeka interface but it's a bit laborious to install, so working with the Weka GUI might be easier.
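Since the question explicitly asked for a PCA example with caret, here is a minimal sketch of that route; it is not part of the answer above, and the object names train_x / train_y and the 95% variance threshold are assumptions:
library(caret)
# train_x: data frame of the ~1200 integer predictors; train_y: the 160-level factor "Class"
pp <- preProcess(train_x, method = c("center", "scale", "pca"), thresh = 0.95)
train_pca <- predict(pp, train_x)                 # rotated data with columns PC1, PC2, ...
fit <- train(x = train_pca, y = train_y, method = "rf")   # any classifier can go here
# For new data, apply the SAME preProcess object before predicting:
# predict(fit, predict(pp, new_x))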

Parallel computing for TraMineR

I have a large dataset with more than 250,000 observations, and I would like to use the TraMineR package for my analysis. In particular, I would like to use the commands seqtree and seqdist, which work fine when I, for example, use a subsample of 10,000 observations. The limit my computer can manage is around 20,000 observations.
I would like to use all the observations, and I do have access to a supercomputer that should be able to do just that. However, this doesn't help much, as the process runs on a single core only. Hence my question: is it possible to apply parallel computing techniques to the above-mentioned commands? Or are there other ways to speed up the process? Any help would be appreciated!
The internal seqdist function is written in C++ and has numerous optimizations. For this reason, if you want to parallelize seqdist, you need to do it in C++. The loop is located in the source file "distancefunctions.cpp" and you need to look at the two loops located around line 300 in function "cstringdistance" (Sorry but all comments are in French). Unfortunately, the second important optimization is that the memory is shared between all computations. For this reason, I think that parallelization would be very complicated.
Apart from selecting a sample, you should consider the following optimizations:
aggregation of identical sequences (see here: Problem with big data (?) during computation of sequence distances using TraMineR)
If relevant, you can try to reduce the time granularity. Distance computation time grows quadratically with sequence length. See https://stats.stackexchange.com/questions/43601/modifying-the-time-granularity-of-a-state-sequence
Reducing time granularity may also increase the number of identical sequences and, hence, the impact of the first optimization.
There is a hidden option in seqdist to use an optimized version of the optimal matching algorithm. It is still in the testing phase (that's why it is hidden), but it should replace the current algorithm in a future version. To use it, set method="OMopt" instead of method="OM". Depending on your sequences, it may reduce computation time. A small sketch combining this with the first point follows below.
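To make the first and last points concrete, here is a hedged sketch that aggregates identical sequences before computing distances and then calls seqdist with the hidden method. Using wcAggregateCases from the WeightedCluster package is my choice, not something specified in the answer, and the column names and cost settings are assumptions to adapt to your data:
library(TraMineR)
library(WeightedCluster)                          # provides wcAggregateCases
seq_cols <- paste0("t", 1:36)                     # assumed sequence columns in data frame df
agg <- wcAggregateCases(df[, seq_cols])           # group identical sequences
uniq <- df[agg$aggIndex, seq_cols]                # one row per unique sequence
seq_obj <- seqdef(uniq, weights = agg$aggWeights) # weighted state-sequence object
costs <- seqsubm(seq_obj, method = "TRATE")       # substitution costs from transition rates
d <- seqdist(seq_obj, method = "OMopt", sm = costs, indel = 1)   # hidden optimized OM
# agg$disaggIndex maps each original row back to its unique representative if needed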

writing functions vs. line-by-line interpretation in an R workflow

Much has been written here about developing a workflow in R for statistical projects. The most popular workflow seems to be Josh Reich's LCFD model, with a main.R containing code:
source('load.R')
source('clean.R')
source('func.R')
source('do.R')
so that a single source('main.R') runs the entire project.
Q: Is there a reason to prefer this workflow to one in which the line-by-line interpretive work done in load.R, clean.R, and do.R is replaced by functions which are called by main.R?
I can't find the link now, but I had read somewhere on SO that when programming in R one must get over the desire to write everything in terms of function calls---that R was MEANT to be written in this line-by-line interpretive form.
Q: Really? Why?
I've been frustrated with the LCFD approach and am going to probably write everything in terms of function calls. But before doing this, I'd like to hear from the good folks of SO as to whether this is a good idea or not.
EDIT: The project I'm working on right now is to (1) read in a set of financial data, (2) clean it (quite involved), (3) Estimate some quantity associated with the data using my estimator (4) Estimate that same quantity using traditional estimators (5) Report results. My programs should be written in such a way that it's a cinch to do the work (1) for different empirical data sets, (2) for simulation data, or (3) using different estimators. ALSO, it should follow literate programming and reproducible research guidelines so that it's simple for a newcomer to the code to run the program, understand what's going on, and how to tweak it.
I think that any temporary stuff created in source'd files won't get cleaned up. If I do:
x=matrix(runif(big^2),big,big)
z=sum(x)
and source that as a file, x hangs around although I don't need it. But if I do:
ff=function(big){
x = matrix(runif(big^2),big,big)
z=sum(x)
return(z)
}
and instead of source, do z=ff(big) in my script, the x matrix goes out of scope and so gets cleaned up.
Functions enable neat little re-usable encapsulations and don't pollute outside themselves. In general, they don't have side effects. Your line-by-line scripts could be using global variables and names tied to the data set in current use, which makes them hard to re-use.
I sometimes work line-by-line, but as soon as I get more than about five lines I see that what I have really needs making into a proper reusable function, and more often than not I do end up re-using it.
I don't think there is a single answer. The best thing to do is keep the relative merits in mind and then pick an approach for that situation.
1) Functions. The advantage of not using functions is that all your variables are left in the workspace and you can examine them at the end. That may help you figure out what is going on if you have problems.
On the other hand, the advantage of well-designed functions is that you can unit test them. That is, you can test them apart from the rest of the code, making them easier to test. Also, when you use a function, modulo certain lower-level constructs, you know that the results of one function won't affect the others unless they are passed out, and this may limit the damage that one function's erroneous processing can do to another's. You can use the debug facility in R to debug your functions, and being able to single-step through them is an advantage.
2) LCFD. Whether you should use a decomposition of load/clean/func/do at all, regardless of whether it's done via source or functions, is a second question. The problem with this decomposition is that you need to run one step just to be able to test out the next, so you can't really test them independently. From that viewpoint it's not the ideal structure.
On the other hand, it does have the advantage that you may be able to replace the load step independently of the other steps if you want to try it on different data and can replace the other steps independently of the load and clean steps if you want to try different processing.
3) No. of files. There may be a third question implicit in what you are asking: whether everything should be in one source file or several. The advantage of putting things in different source files is that you don't have to look at irrelevant items. In particular, if you have routines that are not being used or are not relevant to the current function you are looking at, they won't interrupt the flow, since you can arrange for them to be in other files.
On the other hand, there may be an advantage in putting everything in one file from the viewpoint of (a) deployment, i.e. you can just send someone that single file, and (b) editing convenience, as you can put the entire program in a single editor session, which, for example, facilitates searching since you can search the entire program using the editor's functions and don't have to determine which file a routine is in. Also, successive undo commands will allow you to move backward across all units of your program, and a single save will save the current state of all modules since there is only one. And (c) speed: if you are working over a slow network, it may be faster to keep a single file on your local machine and just write it out occasionally, rather than having to go back and forth to the slow remote.
Note: One other thing to think about is that using packages may be superior for your needs relative to sourcing files in the first place.
No one has mentioned an important consideration when writing functions: there's not much point in writing them unless you're repeating some action again and again. In some parts of an analysis, you'll be doing one-off operations, so there's not much point in writing a function for them. If you have to repeat something more than a few times, it's worth investing the time and effort to write a re-usable function.
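As a purely illustrative example of that threshold (the data frames and the recoding rule below are made up, not from the answer): a cleaning step you find yourself pasting into several scripts is a natural candidate for a function.
recode_missing <- function(df, sentinel = -99) {
  # replace a sentinel value with NA in every numeric column
  num_cols <- vapply(df, is.numeric, logical(1))
  df[num_cols] <- lapply(df[num_cols], function(x) replace(x, x == sentinel, NA))
  df
}
df.2009 <- recode_missing(df.2009)   # reused across yearly data sets
df.2010 <- recode_missing(df.2010)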
Workflow:
I use something very similar:
Base.r: pulls primary data, calls on other files (items 2 through 5)
Functions.r: loads functions
Plot Options.r: loads a number of general plot options I use frequently
Lists.r: loads lists, I have a lot of them because company names, statements and the like change over time
Recodes.r: most of the work is done in this file, essentially it's data cleaning and sorting
No analysis has been done up to this point. This is just for data cleaning and sorting.
At the end of Recodes.r I save the environment to be reloaded into my actual analysis.
save(list=ls(), file="Cleaned.Rdata")
With the cleaning done and the functions and plot options ready, I start getting into my analysis. Again, I continue to break it up into smaller files that are focused on topics or themes, like: demographics, client requests, correlations, correspondence analysis, plots, etc. I almost always run the first 5 automatically to get my environment set up, and then I run the others on a line-by-line basis to ensure accuracy and to explore.
At the beginning of every file I load the cleaned data environment and prosper.
load("Cleaned.Rdata")
Object Nomenclature:
I don't use lists, but I do use a nomenclature for my objects.
df.YYYY # Data for a certain year
demo.describe.YYYY ## Demographic data for a certain year
po.describe ## Plot option
list.describe.YYYY ## lists
f.describe ## Functions
Using a friendly mnemonic to replace "describe" in the above.
Commenting
I've been trying to get myself into the habit of using comment(x) which I've found incredibly useful. Comments in the code are helpful but oftentimes not enough.
Cleaning Up
Again, here, I always try to use the same object(s) for easy cleanup: tmp, tmp1, tmp2, tmp3, for example, and I make sure to remove them at the end.
Functions
There has been some commentary in other posts about only writing a function for something if you're going to use it more than once. I'd like to adjust this to say: if you think there's a possibility that you may EVER use it again, you should throw it into a function. I can't even count the number of times I wished I had written a function for a process I created on a line-by-line basis.
Also, BEFORE I change a function, I throw it into a file called Deprecated Functions.r, again, protecting against the "how the hell did I do that" effect.
I often divide up my code similarly to this (though I usually put Load and Clean in one file), but I never just source all the files to run the entire project; to me that defeats the purpose of dividing them up.
Like the comment from Sharpie, I think your workflow should depend a lot on the kind of work you're doing. I do mostly exploratory work, and in that context, keeping the data input (load and clean) separate from the analysis (functions and do) means that I don't have to reload and reclean when I come back the next day; I can instead save the data set after cleaning and then import it again.
I have little experience doing repetitive munging of daily data sets, but I imagine that I would find a different workflow helpful; as Hadley answers, if you're only doing something once (as I am when I load/clean my data), it may not be helpful to write a function. But if you're doing it over and over again (as it seems you would be) it might be much more helpful.
In short, I've found dividing up the code helpful for exploratory analyses, but would probably do something different for repetitive analyses, just like you're thinking about.
I've been pondering workflow tradeoffs for some time.
Here is what I do for any project involving data analysis:
Load and Clean: Create clean versions of the raw datasets for the project, as if I were building a local relational database. Thus, I structure the tables in third normal form where possible. I perform basic munging, but I do not merge or filter tables at this step; again, I'm simply creating a normalized database for a given project. I put this step in its own file, and I save the objects to disk at the end using save.
Functions: I create a function script with functions for data filtering, merging and aggregation tasks. This is the most intellectually challenging part of the workflow as I'm forced to think about how to create proper abstractions so that the functions are reusable. The functions need to generalize so that I can flexibly merge and aggregate data from the load and clean step. As in the LCFD model, this script has no side effects as it only loads function definitions.
Function Tests: I create a separate script to test and optimize the performance of the functions defined in step 2. I clearly define what the output from the functions should be, so this step serves as a kind of documentation (think unit testing).
Main: I load the objects saved in step 1. If the tables are too big to fit in RAM, I can filter them with a SQL query, keeping with the database thinking. I then filter, merge, and aggregate the tables by calling the functions defined in step 2. The tables are passed as arguments to the functions I defined. The output of the functions is a set of data structures in a form suitable for plotting, modeling, and analysis. Obviously, I may have a few extra line-by-line steps where it makes little sense to create a new function.
This workflow allows me to do lightning-fast exploration at the Main.R step, because I have built clear, generalizable, and optimized functions. The main difference from the LCFD model is that I do not perform line-by-line filtering, merging, or aggregating; I assume that I may want to filter, merge, or aggregate the data in different ways as part of exploration. Additionally, I don't want to pollute my global environment with a lengthy line-by-line script; as Spacedman points out, functions help with this.
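To make that concrete, here is a rough sketch of what such a Main.R could look like; the file names, function names (merge_panels, summarise_by_year) and objects are illustrative assumptions, not taken from the answer:
# Main.R -- illustrative skeleton only
source("functions.R")             # step 2: function definitions only, no side effects
load("cleaned_tables.Rdata")      # step 1 output: cleaned, normalized tables saved with save()
panel <- merge_panels(sales, customers, key = "customer_id")      # hypothetical helper
by_year <- summarise_by_year(panel, value = "revenue")            # hypothetical helper
plot(by_year$year, by_year$total, type = "b")                     # result is ready for plotting/modeling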
