IDF and CO versions of Sellar in OpenMDAO 1.7.3 - openmdao

Does anyone have the IDF and CO versions of the Sellar Problem compatible with OpenMDAO 1.7.3? I could only find the MDF and SAND versions in the "examples" directory of the Git repository.

I too struggled with this, and came to the conclusion that the SubProblem component provides the basis for the IDF and CO MDAO architectures. I have implemented CO this way, with legacy codes each representing a discipline. An added benefit of this implementation is the ability to modularize the disciplines, so you can quickly configure a problem with just the disciplines you need and little new effort. Note: do not confuse subgroups with subproblems; the two are entirely unrelated, though both are likely to be employed within a problem.
The subproblem architecture as implemented in my project does require initializing guesses for each subproblem to converge, whereas SAND might not. In my project, modularity is more beneficial than full automation. I can envision passing control information down through the problem-subproblem-subsubproblem hierarchy to automate the configuration, and perhaps the component "state" could be subverted to that use? Hope this stimulates your imagination to complete your project.
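To make this concrete, here is a minimal sketch of the pattern: a discipline-level problem with its own optimizer, wrapped as a SubProblem component inside a top-level problem, loosely shaped like Sellar discipline 1. This is not a full CO implementation; the discipline equation is simplified, the local objective merely stands in for the CO discrepancy term, and the SubProblem signature and variable naming are from my memory of the 1.7.x docs, so treat them as assumptions to verify.

```python
# Minimal sketch (OpenMDAO 1.7.x style) of one discipline optimization wrapped
# as a SubProblem -- a skeleton for CO, not a complete implementation.
from openmdao.api import Problem, Group, IndepVarComp, ExecComp, \
    ScipyOptimizer, SubProblem

# Discipline-level problem: a simplified stand-in for Sellar discipline 1,
# with its own local optimizer, as CO requires.
sub = Problem(root=Group())
sub.root.add('pz', IndepVarComp('z', 0.0), promotes=['z'])
sub.root.add('px', IndepVarComp('x', 1.0), promotes=['x'])
sub.root.add('d1', ExecComp('y1 = z**2 + x'), promotes=['z', 'x', 'y1'])

sub.driver = ScipyOptimizer()
sub.driver.options['optimizer'] = 'SLSQP'
sub.driver.add_desvar('x', lower=0.0, upper=10.0)
sub.driver.add_objective('y1')   # in real CO this would be the discrepancy J1

# Top-level problem: the system optimizer sees the entire discipline
# optimization as a single component.
top = Problem(root=Group())
top.root.add('pz_top', IndepVarComp('z', 0.0))
top.root.add('disc1', SubProblem(sub, params=['z'], unknowns=['y1']))
top.root.connect('pz_top.z', 'disc1.z')

top.setup()
top.run()
print(top['disc1.y1'])
```

Each discipline gets its own subproblem like this; the top-level driver then owns the shared targets as design variables and enforces the compatibility constraints.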

Related

Can meteor.js web framework support a social networking architecture effectively?

So I'm new to node.js, JavaScript frameworks, and meteor.com. I'm trying to learn how to build social networks, and I'm naive/struggling to understand why Meteor.js (meteor.com) wouldn't be able to do all the great things you now see Twitter, Facebook, and Instagram doing.
There's the Comet technology between client and server, the authentication configs, asynchronous coding for scaling and performance, and it's built on top of node.js.
I'm trying to learn more about long polling, Comet, and GridFS (i.e. how files are stored), and in general things like replica sets and sharding to help with performance (especially since Red Hat has this OpenShift platform that we can build our own private clouds with).
I have some computer science background, but it seems like magic, so what am I missing? If you could all think of a few buzzwords that make a social network tick that Meteor.js doesn't support, what would they be?
I hear things about parallelism and concurrency (web workers fixes that in part, no?) and websockets, which high-level languages like Python or Java may be better at supporting. There's only one way to learn the answers, and that's by doing, but I thought someone could sway me one way or the other via this thread. Thanks!
This question encompasses a really broad idea, and focusing on Meteor alone won't settle it. Here are a few points to consider:
I don't think this framework would be a good starting point to learn long polling, GridFS, etc. Meteor aims less to be a single framework and more an ecosystem of packages; you can certainly roll your own aforementioned strategies. For dynamic updates, however, Meteor uses its own Distributed Data Protocol (DDP), supported/implemented by (surprise) a good bunch of core packages such as Spark. See the sketch after these points for what DDP looks like on the wire.
Parallel processing and concurrency may be better done in other languages, but why not with Meteor too? Since Meteor is largely based on node.js, and node.js is really good at that sort of thing and plays very well with other languages, you could integrate smoothly. Meteor doesn't require you to rely on it exclusively, any more than other languages do. It's all in the general engineering/planning for your project. There is already a lot of really good stuff out there built on Meteor, so join in! Don't be afraid. It all boils down to planning (and the courage/perseverance to pull it off, of course).
Right now, we cannot tell whether Meteor would be incapable of the usual great stuff done by gigantic websites. Sure, we can do live updates, (its own kind of) publish/subscribe patterns, and powerful stuff that boosts development (look at the seven core concepts of Meteor to best understand this). It is not impossible to replicate what is already out there, really. We can only say this with uncertainty at the moment, mainly because... (see next point)
The framework is so young! It's still at 0.6.x at the time of writing. Please take time to look at the Meteor Roadmap to see how things are going in terms of broader support for persistence/databases, performance considerations, and the official DDP specification.
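To give a feel for DDP, here is a rough sketch of the opening handshake from Python, using the websocket-client package. The /websocket endpoint and the message shapes follow the DDP spec as I understand it; the protocol version string has changed across Meteor releases, and the publication name below is made up.

```python
# Rough sketch of a raw DDP handshake against a local Meteor app.
# Requires: pip install websocket-client
# Assumptions: the app listens on localhost:3000, speaks DDP at /websocket,
# and accepts protocol version "1" or "pre1" (this varies by Meteor release).
import json
from websocket import create_connection

ws = create_connection("ws://localhost:3000/websocket")
ws.send(json.dumps({"msg": "connect", "version": "1", "support": ["1", "pre1"]}))
print(ws.recv())  # typically a server_id line first...
print(ws.recv())  # ...then something like {"msg":"connected","session":"..."}

# Subscribe to a (hypothetical) publication and watch "added" messages arrive.
ws.send(json.dumps({"msg": "sub", "id": "1", "name": "allPosts"}))
print(ws.recv())
ws.close()
```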
I hope I have answered your enquiry (and more, I hope). I'm really excited about Meteor myself; it could easily be the next big thing. We have a couple of production projects using Meteor as well, so you're getting direct insight from someone who has done quite a bit of hacking (and tons of research and first-hand experience) in Meteor. Not that I'm saying I'm an expert or anything; it's just that Meteor is so much fun to work with, and I'm totally not kidding you.
Hope this helps!
P.S.: Fair warning, though: resources and documentation are really sparse at this point. I try to contribute to the community as much as I can about it (one of my starting points is here, on SO).

Rhapsody for DO-178 avionics environment?

Has anyone successfully used Rhapsody in a DO-178 avionics environment? That is, working with the FAA/DER process to provide artifacts to them and have them approved. Since it is my understanding that Rhapsody isn't a certifiable MDD tool, I was curious if there were other mitigating factors.
If you were successful, what steps did you take in order to accomplish this?
Thanks for any feedback and insights.
I have used Rhapsody on a project that was developed in accordance with (but not certified to) DO-178B Level D. The requirements were managed in DOORS and linked into Rhapsody using the Rhapsody Gateway tool, which worked reasonably well. This was important, as traceability is a key part of 178B.
The software was modelled in Rhapsody and the code was then generated manually. Manual code generation was chosen because auto-generation would have required Rhapsody to be qualified as a development tool to comply with 178B. I don't know if IBM provides any 178B certification for Rhapsody.
Verification of the software against requirements was performed using a bespoke test tool, and for this we had to perform some significant testing of the tool in order to qualify it as a verification tool.
Your question is quite hard to answer as you don't include any information on what level of 178B you are working to, what tools you are using/planning to use (other than Rhapsody), or whether you are intending to auto generate code, etc.
Hope this is of some help.
I have experience using Rhapsody C++ on a DO-178B Level A/B compliant project.
The auto-generated code was verified in accordance with the coverage requirements for the applicable level, including MC/DC coverage. Since the generated code was fully verified with rigorous static/dynamic tests and manual reviews, as if it were hand coded, qualification of the Rhapsody tool was not mandatory.
We put a lot of effort into customizing Rhapsody's code generation properties to generate only the code that was needed, such as constructors/destructors and getters/setters, and to avoid library functions that are non-deterministic or that use dynamic memory allocation.
We were able to fully utilize round-trip engineering, so the Rhapsody model files, not the code, are version-controlled, since the model contains all the code.
Rhapsody UML is worth considering for developing reusable and portable software architectures.
Rhapsody is being used in our Level A/C/D projects with ARINC 653. Since the output of Rhapsody's auto code generators is itself verified, qualifying Rhapsody is not necessary.
Rhapsody also gives advantages in traceability and in generating or modifying test scripts by updating just the "Tags" field, so the entire test script, and the traces within it, need not be modified.

Branching and merging in TFS 2010

What is the best branching and merging strategy for a small development team making a lot of concurrent development on the same project?
We need to be able to easily apply a hotfix to a production release while other developers are working.
I was looking at the TFS Branching Guidance from CodePlex and can't seem to figure out which option is best for us.
Thanks
Without knowing your organization or how your team develops, it is hard (maybe impossible) to make a recommendation.
In our organization, the majority of our development is organized around releases, so we did a "branch on release" approach. That works great for us. We also do bug fixes, so we've implemented a "branch on feature" approach off of the production line for bug-fixes.
If you have different people all working on different features that might make it to production at different times, a "branch on feature" approach might work.
If you are all working on the same development line, a single "development" branch might work for you.
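For the hotfix scenario in the question, a branch-per-release layout comes down to a handful of tf.exe operations. A rough sketch, where the command names are standard TFS but the server paths are made up:

```
rem Create a release branch from Main at ship time
tf branch $/MyProject/Main $/MyProject/Releases/1.0
tf checkin /comment:"Branch for release 1.0"

rem Apply the hotfix on the release branch, ship it, then merge it back
tf merge /recursive $/MyProject/Releases/1.0 $/MyProject/Main
tf checkin /comment:"Merge 1.0 hotfix back into Main"
```

Development continues on Main (or on feature branches off it) while the fix goes out from the release branch.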
It took us months to finalize our branching strategy (for 14+ team projects, around 80 developers, and multiple applications). I don't expect that it will take quite as long for a smaller organization, but definitely spend some quality time thinking about this, and consider bringing in some outside expertise to give you guidance.
In addition to Robaticus's answer: you need to figure out why you want to do the parallel development.
In my opinion it is all about what you want to isolate.
If you do parallel development because multiple features are being developed, it depends on whether the features need to be released in the same package (version / release / give it a name) and whether you want to keep the different features from integrating with each other during development. The latter matters when the features interfere with each other and you only want to spend time on the integration when you merge the features.
If you need isolation for any of these reasons, you have multiple branches within one release, which you merge before you do the integration test.
If you do parallel development because you want to support multiple releases, then you need to know how many releases you must support (how many releases rolled out in production you need to support, how many releases you have in pre-production, etc.). In this case, the advice is to have a branch per release that you need to support.
It sounds like you need this kind of branching strategy for your organization.
It also depends on the type of files you are branching. If you have SSIS or SSRS files (or any other XML-based files), binary files, or any files that are not plain text (in contrast to C#), then it is not easy to merge the files from two different branches. You then need some manual involvement to actually merge these files!
So as Robaticus already said, we need more information on your specific scenario to give more detailed guidance.

Does participating in a System Verification Test team help you become a better programmer?

I have been developing applications for 9 years now, mainly in Java. Now I am asked to participate in the SVT team for the next release. Overall this means installing complex system setups and running specific user scenarios on these setups, as well as doing long runs and load runs.
Overall I am positive about it, as I will learn something new. But I am also afraid of losing some grip on and knowledge of programming, because I won't be doing much of it then.
I know that programming in side projects, such as helping with open source projects, is one alternative, but finding the time on top of family life and a full-time job is not that easy.
What do you think: does doing concrete testing work help you become a better software engineer?
Thanks in advance,
Michael
Testing isn't separate from programming.
You can still program automated systems, so you can have regression testing. From unit tests to really complex automated systems, the best tool I know is Selenium, which generates code you can use to build test scripts in most languages.
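For a flavor of what such scripts look like, here is a tiny hand-written example using Selenium's Python bindings (Selenium IDE can export scripts in much the same shape); the URL and form field names are made up:

```python
# A tiny regression-style UI check with Selenium WebDriver (Python bindings).
# The target URL and element names below are hypothetical.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("http://example.com/login")
    driver.find_element_by_name("username").send_keys("testuser")
    driver.find_element_by_name("password").send_keys("secret")
    driver.find_element_by_name("submit").click()
    # A crude assertion that the login round-trip worked.
    assert "Welcome" in driver.page_source, "login did not reach the welcome page"
finally:
    driver.quit()
```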
There are other tools for non-web apps. But I personally believe that testing is quite far from "stopping coding", unless you're just doing user point-of-view testing.
You can also do error injection, which will have you writing small singletons to inject faults into the memory of your application.
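As a toy illustration of the idea (in Python rather than an in-memory singleton; all names invented), a wrapper that randomly fails a fraction of calls lets you test how the application copes with injected errors:

```python
# Toy fault injector: wraps a real client object and raises on a configurable
# fraction of calls, simulating intermittent failures.
import random

class FlakyClient:
    def __init__(self, real_client, failure_rate=0.1):
        self.real_client = real_client
        self.failure_rate = failure_rate

    def send(self, payload):
        if random.random() < self.failure_rate:
            raise IOError("injected failure")   # the injected error
        return self.real_client.send(payload)
```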
So you can code while testing ;) and learn new stuff also.
Having been in a testing team, I think it really helps, because you'll learn to exploit code easily, which will pay off when you build your own API or app at a later date.
I would say it depends on your skill and temperament. Programming knowledge will serve you well while testing. At the same time, I know that testing needs a different approach and mindset and is on a completely different career track. You can always keep up your programming skills by writing code for a project you like (even if you have to make one up).

What are your experiences with Windows Workflow Foundation?

I am evaluating WF for use in line of business applications on the web, and I would love to hear some recent first-hand accounts of this technology.
My main interest here is in improving the maintainability of projects and maybe in increasing developer productivity when working on complex processes that change frequently.
I really like the idea of WF, however it seems to be relatively unknown and many older comments I've come across mention that it's overwhelmingly complex once you get into it.
If it's overdesigned to the point that it's unusable (or a bad tradeoff) for a small to medium-sized project, that's something that I need to know.
Of course, it has been out since late 2006, so perhaps it has matured. If that's the case, that's another piece of information that would be very helpful!
Thanks in advance!
Windows Workflow Foundation is a very capable product but still very much in its 1st version :-(
The main reasons for use include:
Visually modeling business requirements.
Separating your business logic from the business rules and externalizing rules as XML files.
Separating your business flow from your application by externalizing your workflows as XML files.
Creating long running processes with the automatic ability to react if nothing has happened for some extended period of time. For example an invoice not being paid.
Automatic persistence of long running workflows to keep resource usage down and allow a process and/or machine to restart.
Automatic tracking of workflows, which helps with business requirements.
WF comes as a library/framework so most of the time you need to write the host that instantiates the WF runtime. That said, using WCF hosted in IIS is a viable solution and saves a lot of work. However the WCF/WF coupling is less than perfect and needs some serious work. See here http://msmvps.com/blogs/theproblemsolver/archive/2008/08/06/using-a-transactionscopeactivity-with-a-wcf-receiveactivity.aspx for more details. Expect quite a few changes/enhancements in the next version.
WF (and WCF) are pretty central to a lot of the new stuff coming out of Microsoft. You can expect some interesting announcements during the PDC.
BTW keeping multiple versions of a workflow running takes a bit of work but that is mostly standard .NET. I just did a series of blog posts on the subject starting here: http://msmvps.com/blogs/theproblemsolver/archive/2008/09/10/versioning-long-running-workfows.aspx
About visually modeling business requirements.
In theory, this works quite well, with a clean separation of intent and implementation. In practice, however, you will drop quite a few extra activities onto a workflow purely for technical reasons, and that rather defeats the purpose, as you have to tell a business analyst to ignore half the shapes and lines.
Related question: When to use Windows Workflow Foundation? My answer there:
You may need WF only if any of the following is true:
You have a long-running process.
You have a process that changes frequently.
You want a visual model of the process.
For more details, see Paul Andrew's post: What to use Windows Workflow Foundation for?
Please do not confuse or relate WF with visual programming of any kind. It is wrong and can lead to very bad architecture/design decisions.
So, if you have such requirements, then WF is a good candidate. Of course it is relatively complex, but note that the problems it is trying to solve are also complex (and sometimes very complex). IMHO, it is very complex, for example, to dehydrate/rehydrate objects that have event handlers attached (with events that can be triggered while the object is not in memory).
I cannot judge what you mean by "small to medium-sized project", but in general I would say that if your project has at least two requirements from the above list, then you can consider WF as a solution.
We've used WF in a large-ish SharePoint application and I can say it's OK. It has lots of power and flexibility and, as Kevin mentions, once you grok the underlying concepts of workflows, you can do pretty much anything you want with it.
On the other hand, it has some really serious issues, like lack of versioning, which can really hurt your application in the future. We've been forced to deploy up to 3 parallel versions of the same workflow named xxx-v1, xxx-v2 and xxx-v3 to keep older instances running and have new instances use the updated versions. A real pain in the ass. Oh, and there are also some really non-intuitive concepts in there (correlation tokens, wtf??)
We had a project at work that I was involved in that used Workflows.
The idea (from management) was that we programmers would write the Workflow Activities along with the "engine" and framework, and non-programmers would take care of the rest by compiling their own workflows into DLLs, which the engine would automatically load.
Management was sold on this idea of non-programmers using Workflow to help develop software, and it was pretty much a complete waste of time. The problem we were trying to solve with this project was relatively complex, and we knew from the very beginning that the software would have to be modified almost constantly (its calculations depended on other companies and governments).
The end result was that we were unable to make the Workflow modules generic enough for anyone else to use. So the programmers were the ones who were forced to work with the Workflows, and all the Workflows did was get in our way.
I've been using Workflow 4.0 for the last few months and, although mostly impressed, I've found it extremely hard to learn.
For the most recent version (which comes with .NET 4.0 RC), there is next to no documentation on the web, in books, or in training courses; I've only found articles relating to the now-defunct 3.0 version. Even the MSDN documentation is thin on the ground.
The workflow designer is by no means as intuitive as it should be, so learning is very hard. I've had to rely on answers from a single person on StackOverflow (thanks, by the way, Maurice!) and I would be stuffed without his help.
So in summary, I think it has potential, but you would be quite mad to learn it yet. Wait for more training, documentation, and books; otherwise you will be going into it blind!
Last year we completed a working application with WF; it is now the backbone of an unbelievably huge system that a very big bank uses for its mortgage process. The process has many steps, from the customer's application through to the approval of credit.
Although it was a success, there were so many problems and crises along the way, and it wouldn't be worth the trouble for any smaller-sized project.
I consider MS WF a low-level workflow library rather than a fully fledged enterprise workflow product such as K2. It will enable you to build a workflow-enabled application, but it is not in itself a workflow application. My experience with it in this capacity has been positive, although we have had to build a lot of our own infrastructure around it (a pub/sub framework, a workflow lifetime manager, etc.). A lot of the documentation out there is fairly simplistic and does not cover building an enterprise workflow application on top of MS WF.
Hard to learn. Quite flexible. Not to be confused with a visual tool for end users; it's for programmers only. Not sure if I like the dependency property approach.
It really depends on what you want to do with it. I've only used it a little, but compared to more mature products like MetaStorm (I know it's technically a BPM, but there is still a workflow component), Process Choreographer, and IBM MQ Workflow, there's no comparison. It's just not mature enough. On the other hand, it's free where the others are not, and it can probably get the job done. I don't know if I would place a multi-million dollar operation on it, but with smaller ones, I'd give it another shot. The real hurdle you are going to face is the change in thought process it requires. If you don't have developers who have worked with state systems before, that can be a real hurdle.
Brian, I can't reply to your comment, but anyway: by versioning I mean making changes to the underlying code of a workflow without breaking already-running instances, and gracefully applying updates to existing workflows. I'm not sure about 'stock' WF, but at least in a SharePoint environment there's no concept of workflow versions, so new versions have to be deployed as completely different workflows, which becomes a maintenance nightmare.
This has nothing to do with 'rehydration'. Rehydration is the process by which you bring a 'dormant' workflow back to activity after some event or change in state, and it is handled transparently by the workflow runtime.
WF is integrated into SharePoint (WSS 3.0), and I have created quite a few workflows for various SharePoint websites, so I can speak to my experience of WF in SharePoint. Compared with other workflow frameworks, WF scores well. It's stable (I haven't experienced any mysterious errors), workflows are fairly easy to design (thanks to the workflow designer in Visual Studio), and you can use not only sequential but also state-machine workflows.
It's not perfect, of course, and a developer will definitely need some time to understand the concepts (e.g. the Activity Model); but it's definitely usable, even for "small tasks".
I've never tried WF myself, but I remember reading this article about it by Leon Bambrick, in which he basically says the whole genre of software development tools is nonsense. It might help you decide one way or the other.
