Rhapsody for DO-178 avionics environment?

Has anyone successfully used Rhapsody in a DO-178 avionics environment? That is, working through the FAA/DER process to provide artifacts and have them approved. Since it is my understanding that Rhapsody isn't a certifiable MDD tool, I was curious whether there were other mitigating factors.
If you were successful, what steps did you take in order to accomplish this?
Thanks for any feedback and insights.

I have used Rhapsody on a project that was developed in accordance with (but not certified to) DO-178B Level D. The requirements were managed in DOORS and linked into Rhapsody using the Rhapsody Gateway tool, which worked reasonably well. This was important, as traceability is a key part of 178B.
The software was modelled in Rhapsody and the code was then generated manually. Manual code generation was chosen because auto-generation of the code would have required Rhapsody to be qualified as a development tool to comply with 178B. I don't know whether IBM provides any 178B tool qualification support for Rhapsody.
Verification of the software against requirements was performed using a bespoke test tool, and for this we had to perform some significant testing of the tool in order to qualify it as a verification tool.
Your question is quite hard to answer as you don't include any information on what level of 178B you are working to, what tools you are using/planning to use (other than Rhapsody), or whether you are intending to auto generate code, etc.
Hope this is of some help.

I have experience using Rhapsody C++ on a DO-178B Level A/B compliant project.
Auto-generated code is verified in accordance with the coverage requirements, including MC/DC coverage, appropriate to the level. Since the generated code is fully verified with rigorous static/dynamic tests and manual reviews, as if it were hand-coded, Rhapsody tool qualification was not mandatory.
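For readers unfamiliar with MC/DC: each condition in a decision must be shown to independently affect the decision's outcome, which for n conditions requires at least n + 1 test vectors. A worked illustration (my own sketch in Java, not from the project above):

    public class McdcExample {
        // Decision with three conditions: d = a && (b || c)
        static boolean decision(boolean a, boolean b, boolean c) {
            return a && (b || c);
        }

        // A minimal MC/DC test set needs n + 1 = 4 vectors. Each pair below
        // differs in exactly one condition and flips the outcome, showing
        // that the condition independently affects the decision:
        //   a: (T,T,F) -> T   vs   (F,T,F) -> F
        //   b: (T,T,F) -> T   vs   (T,F,F) -> F
        //   c: (T,F,T) -> T   vs   (T,F,F) -> F
        public static void main(String[] args) {
            System.out.println(decision(true,  true,  false));  // T
            System.out.println(decision(false, true,  false));  // F
            System.out.println(decision(true,  false, false));  // F
            System.out.println(decision(true,  false, true));   // T
        }
    }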
We put considerable effort into customizing Rhapsody's code generation properties to generate only the needed code, such as ctors/dtors and getters/setters, and to avoid library functions that are non-deterministic or use dynamic memory allocation.
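As a rough illustration of the "generate only what you need" idea (the project generated C++; this sketch is in Java purely for illustration, with invented names), the generated classes were restricted to roughly this shape:

    // Hypothetical shape of restricted generated code: constructor and
    // accessors only, storage preallocated once, nothing allocated afterwards.
    public final class ActuatorStatus {
        private static final int MAX_CHANNELS = 8;          // fixed capacity

        private int mode;
        private final int[] channelValues = new int[MAX_CHANNELS];

        public ActuatorStatus() { this.mode = 0; }          // generated ctor

        public int getMode() { return mode; }               // generated accessors:
        public void setMode(int m) { this.mode = m; }       // no reflection, no
                                                            // growing collections
        public int getChannelValue(int i) { return channelValues[i]; }
        public void setChannelValue(int i, int v) { channelValues[i] = v; }
    }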
We were able to fully utilize round-trip engineering, so the Rhapsody model files, not the code, are version-controlled, since the model contains all the code.
Rhapsody UML is worth considering for developing reusable and portable software architectures.

Rhapsody is being used in our Level A/C/D project with ARINC 653. Since the output of Rhapsody's auto code generators is being verified, qualifying Rhapsody is not necessary. Rhapsody also gives advantages in traceability and in generating or modifying test scripts by updating just the "Tags" field, so the entire test script, and the traces within it, need not be modified.

Related

What is the best workflow engine for ASP.NET Core project

We are evaluating and looking for a workflow engine that supports .NET Core, and I'd really appreciate the community's input. I would like to hear suggestions based on your implementation stories.
My main evaluation criteria, so far, are below:
open source and OEM friendly license
production installations (success stories are a great help)
technical support available
open standards support - BPMN
dynamic creation/assembly of the workflow based on input
embeddable
Currently I am evaluating Elsa, Workflow Core, Argo, and Airflow. Elsa seems like a good candidate, but I've never used it.
Do you have any successful deployments of the Elsa workflow engine?
Full disclosure: I am the project lead of Elsa, but I will try and be as objective as I can.
Elsa does not currently support BPMN, so if this is a hard requirement then Elsa might not be suitable for your project. At least not until it implements BPMN in the future.
As for technical support, there is no official paid support available as of yet, but the community is very friendly & helpful, though still relatively small.
Dynamic creation based on input is possible, since you can programmatically define workflows. But you cannot update a workflow while it executes (which would be more or less like being able to update your C# program's statements as the program runs). I'm not sure whether that is what you are looking for.
Other than that, Elsa is OEM friendly, runs in production successfully at several companies that I know of and is embeddable.

ARM Templates are still the preferred deployment mechanism?

We're a little aghast at how time-consuming it is to develop syntactically correct ARM templates from scratch.
The Portal helps, but pushes out templates that aren't development-ready (it's pretty hard to find a bug when all the templates use 'name' for the resource name, versus something more verbose like 'microsoftStorageAccountResourceName', 'microsoftStorageAccountResourceLocation', 'microsoftStorageAccountResourceTags', etc.).
We understand that there are many ways to deploy, but if at all possible we'd like some assurance that ARM is the current preferred way and will continue to be the preferred primary means of scripting deployments via VSTS, or whether it is sliding towards a different, maybe more programmatic, approach (e.g. PowerShell, CLI, other).
We're asking because it looks like we will have to invest significant effort to create a resource library for this organisation (to decrease the need for every project to become proficient at ARM deployment), and we would prefer to build it on the approach developers will favour over the coming years, for maintainability.
Thanks for any insight on which approach to recommend as the best investment.
Templates are going to be around for the foreseeable future... it really depends on whether you want to orchestrate the deployment yourself (imperative deployments using CLI, PS, SDK) or you want ARM to orchestrate the deployment (via templates). Happy to chat offline if you want to discuss more - email bmoore at microsoft.
Writing this now, one year after the original post: the answer to 'Are ARM Templates still the preferred deployment mechanism?' probably depends on who you ask. "Preferred" by Microsoft according to their product strategy may mean something different from preferred by the actual users who, well, feel the pain of vendor strategy decisions. I had started with an Azure automation book that used PS scripting only; I was then led (maybe misled?) to the ARM template deployment model, mainly by the Microsoft web documentation, but found that those templates need so much rework that writing a PS script, or even writing an ARM template from scratch, seems a more efficient way to go. In fact, I am confused at the moment about what the 'best practice' is, i.e. what method other developers actually use. Is there a community-established opinion on this matter, now in August 2019? Or is it all VSTS / 3rd-party IDEs nowadays?

Suggest a suitable Automated Testing Tool for my project

We are in search of an automated testing tool for our project. As we are in the testing department, we prefer a tool with less programming in it. Please suggest some tools for us. Until now we have been testing our application manually.
Our project is being developed in Java.
Is there any freeware tool that I could use or is it better to go for a paid tool?
Thanks in Advance.
Less programming? You'll need something like JUnit to write unit tests if you want to do serious regression testing, but unit tests require you to write some code.
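For a sense of scale, a basic JUnit 4 regression test is only a few lines (Calculator here is an invented stand-in for your own class under test):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CalculatorTest {
        // Invented stand-in for the class you want to put under test.
        static class Calculator {
            int add(int a, int b) { return a + b; }
        }

        @Test
        public void addingTwoNumbersReturnsTheirSum() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }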
Here's a big list of open-source testing tools; some of them may offer what you want: http://java-source.net/open-source/testing-tools/junit
For example, T2 claims to be a random testing tool. As such, it is fully automatic, but one must keep in mind that the code coverage of random testing is in general very limited; it should be used as a complement to other testing methods. T2 checks for internal errors, runtime exceptions, method specifications, and class invariants.
Not sure if you mean a CI tool or not, but we use Hudson at Zappos and it works pretty well.
http://hudson-ci.org/
..and there's also CruiseControl: http://cruisecontrol.sourceforge.net/
If you're not talking about CI, maybe you mean QA testing - in which case you should take a look at something like Selenium (for web apps):
http://seleniumhq.org/
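To give an idea of what a Selenium test involves, here is a minimal WebDriver sketch in Java (the URL and element ids are placeholders, not from any real app):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();   // drives a real browser
            try {
                driver.get("http://your-app.example/login");
                driver.findElement(By.id("username")).sendKeys("tester");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("login")).click();
                // A real test would assert on the resulting page here.
            } finally {
                driver.quit();
            }
        }
    }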
If you're doing GUI testing, I'm not really familiar with that area, but I've heard about WinRunner and Rational:
http://en.wikipedia.org/wiki/HP_WinRunner
http://www-01.ibm.com/software/rational/offerings/quality/
..though neither are really free tools. Something like AutoIT might help you move widgets around, but it lacks the reporting parts:
http://www.autoitscript.com/autoit3/index.shtml
There could be two answers to your question:
Besides Selenium, though it has ample advantages, I am reading about another tool which uses the same API as Selenium. The only API change I have seen so far is that it reduces the complexity of the functions, making it easier and simpler for a user who is learning.
The tool is called 'Helium', and its functions and code are 50% (and more) less complex than Selenium's.
The only problem with this tool is that it is paid: you can use it for learning purposes and for implementing a not-so-big-scale project, but after some time it is going to cost you.
I have implemented some code with Helium. Please let me know if you face any issues initially or are thinking of implementing it.
The other option: you can use Selenium Builder (http://khyatisehgal.wordpress.com/2014/05/26/selenium-builder-exporting-and-execution/), which is an advanced form of Selenium IDE. It exports your commands in different languages and works more effectively and efficiently than Selenium IDE does (http://khyatisehgal.wordpress.com/2014/05/25/selenium-builder/), so you can import the scripts into the Eclipse IDE and just execute them as is.
Please let me know if you have any doubts about either tool.

Fitnesse vs any other subsystem testing tool [closed]

We are currently using FitNesse for subsystem testing.
We are having a lot of issues with the tool, a few of which are:
Development time for writing fixtures is more than for writing the actual code
Issues around checking in the DLLs so that QA can test them
Issues running FitNesse for a project which uses NHibernate
Limited help online
We are planning to use some other tool to do the testing
A few options we know of are:
SoapUI
StoryTeller
I am not sure whether we will have similar problems with these tools.
It would be great to know if someone has experience using these tools and could guide us.
In our project we have adopted TDD, so we have NUnit for unit testing.
It would be great if anyone is aware of tools/ideas which could extend NUnit for subsystem testing as well.
Component testing tools are all about calling functions. Your tests cause functions to be called in "fixtures" that then call into the SUT. Any tool based on this premise will encounter the problems you reference above.
However, most of those problems are manageable. For example, you should not be writing lots of fixtures; if you are, something is wrong. Secondly, your fixtures ought to be little more than wiring code to call the APIs in your application. If your fixtures are doing significant work, then something is wrong.
In most FitNesse environments the number of fixtures is rather small. For example, there are over two hundred acceptance tests for FitNesse itself, but the number of fixtures is on the order of a dozen, and they are all relatively simple.
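To make "little more than wiring code" concrete, a SLIM decision-table fixture can be a plain class that just forwards to the application under test. A hypothetical sketch in Java (FitNesse's native fixture language; fitSharp plays the same role for .NET):

    // FitNesse calls setOrderTotal() for each table row, then discountPercent().
    public class DiscountFixture {
        private int orderTotal;

        public void setOrderTotal(int orderTotal) {
            this.orderTotal = orderTotal;
        }

        public int discountPercent() {
            // One call into the application under test; no logic in the fixture.
            return new PricingService().discountFor(orderTotal);
        }
    }

    // Invented stand-in for the real application code.
    class PricingService {
        int discountFor(int total) { return total > 100 ? 10 : 0; }
    }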
Get help on the fitnesse@yahoogroups.com list. The folks there are usually very responsive to questions.
If you can communicate with your software using text, then I have had success on past projects rolling my own framework using Expect.
The framework I cooked up stored tests as XML files, using a simple xUnit-style markup. The XML files were then transformed into executable tests using a stylesheet. I ended up transforming the tests into Tcl/Expect, but you could transform them into anything. In fact, if you wanted, you could transform them into multiple languages, depending on your needs.
Several people have kindly reminded me (in the same way you remind your poor doddering grandfather about the drool on his chin) that we are in the 21st century when they inquire why I would choose Tcl over some more modern language. As it turns out, for the purposes of this kind of testing, I haven't yet found a better choice. The Tcl language still kicks butt in this area. Trust me, I didn't wake up one day and say to myself "self, what I need is a test framework implemented in a scripting language everyone will hate!"
Believe it or not, I really was looking for a tool, any tool, that had the following characteristics:
Cross platform. This was non-negotiable. We do a lot of cross platform development and we already use WAY too many tools that don't support cross platform development.
Simple syntax. Say what you want about Tcl, but the syntax is very regular. I knew that some native code would probably creep even into the XML files (and originally it was Tcl only, no XML) and I wanted the syntax to be comprehensible to a non-programmer. This simplicity is a core strength of Tcl. As it turns out, it also made transforming the XML easier too.
Free. My favorite price ;-)
Writing tests as simple XML files allowed non-programmers to write customer acceptance level tests - no programming required.
Easily extended.
I did not set out to home-grow this to the extent I have. Initially, I looked at established test frameworks like DejaGnu and android. Mostly they had way too many features. They were so feature-laden that I didn't think they would be easy for a project to start using without a lot of up-front training. Looking at DejaGnu got me interested in Tcl in general, and after a brief look at tcltest, I almost gave up. Both DejaGnu and tcltest assume you are an advanced Tcl scripter, which I didn't think anyone at my company ever would be. In addition, I wanted the test framework (if possible) to support an xUnit type of test framework, and neither of these tools did.
Eventually I found TclTkUnit, a Tcl based testing framework that is designed along xUnit lines. It was only a short leap of logic to realize I could run TclTkUnit in Expect instead of tclsh and get everything I needed.
As it ended up getting used more, I added another stylesheet to render the XML files nicely in a web browser. The test framework generated its own documentation.
On another project we needed a very basic sim / stim environment to emulate a person throwing switches and pushing buttons on a piece of hardware we didn't have. It only took a few hours to hack the test framework to function as a simulator. Creating the framework took some work, but we felt that it did pay benefits in the long run. I really believe that these types of unforeseen consequences of creating your own tools are why people in the agile community, and XP in particular, have always been such strong advocates.
We have adopted a Fitnesse-based but practically-code-free approach using GenericFixture (google for Anubhava to find his wordpress site) for Fitnesse.
What this allows us to do is create "executable test narratives" using a language that is friendly to the business side (as opposed to the technical side). This language, which is very easily defined in GenericFixture, practically without coding, is called a DSL (domain-specific language). So we can write our test narratives using e.g. medical terms, or even in a language other than English. Basically, what we get is our use cases transformed into executable narratives.
We are starting to use it in a large project (15 people for 2 years) and so far it seems to have a good future.
It easily allows test-driven development or test creation after development (the traditional approach).
It is wiki-based (FitNesse) and its versioning and refactoring functionality has so far proven sufficient.
I can give more info if anyone is interested.
best regards,
Aristotelis.
We use unit-testing frameworks like NUnit to drive our subsystem tests as well - the tests don't care how they are run. It doesn't have FitNesse's document-based approach, though.

When should one use a project reference opposed to a binary reference?

My company has a common code library which consists of many class library projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references.
Some arguments in favor of project references:
Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.
Project references would assist in keeping up with common component changes committed to the source control system as changes would be easily identifiable without the active solution.
Some arguments in favor of binary references:
Binary references would simplify solutions and make for faster solution loading times.
Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.
Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.
Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone.
Binary references will ensure that concurrent development on the class library project has no impact on the consuming application, as a stable version of the binary will be referenced rather than an in-flux version. It would be the project lead's decision whether or not to incorporate a newer release of the component if necessary.
What is your policy/preference when it comes to using project or binary references?
It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.
However, one thing we've looked into is to reference the binary files, to gain all the advantages you note, but have the binaries built by a common build system where the source code is in a common location, accessible from all developer machines (at least if they're sitting on the network at work), so that any debugging can in fact dive into library code, if necessary.
However, on the same note, we've also tagged a lot of the base classes with appropriate attributes in order to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would only be vastly outsized by code from the base libraries. This way when you hit the Step Into debugging shortcut key on a library class, you resurface into the next piece of code at your current level, instead of having to wade through tons of library code.
Basically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.
Also, if I load the global solution file that contains all the projects and basically just everything, ReSharper 4 seems to have some kind of coronary problem, as Visual Studio practically comes to a standstill.
In my opinion the greatest problem with using project references is that it does not provide consumers with a common baseline for their development. I am assuming that the libraries are changing. If that's the case, building them and ensuring that they are versioned will give you an easily reproducible environment.
Not doing this will mean that your code will mysteriously break when the referenced project changes. But only on some machines.
I tend to treat common libraries like this as 3rd-party resources. This allows the library to have its own build processes, QA testing, etc. When QA (or whoever) "blesses" a release of the library, it's copied to a central location available to all developers. It's then up to each project to decide which version of the library to consume, by copying the binaries to a project folder and using binary references in the projects.
One thing that is important is to create debug symbol (pdb) files with each build of the library and make those available as well. The other option is to actually create a local symbol store on your network and have each developer add that symbol store to their VS configuration. This would allow you to debug through the code and still have the benefits of using binary references.
As for the benefits you mention for project references, I don't agree with your second point. To me, it's important that the consuming projects explicitly know which version of the common library they are consuming and for them to take a deliberate step to upgrade that version. This is the best way to guarantee that you don't accidentally pick up changes to the library that haven't been completed or tested.
When you don't want it in your solution, or there is potential to split your solution, send all library output to a common bin directory and reference it there.
I have done this in order to allow developers to open a tight solution that has only the Domain, test and Web projects. Our Windows services, Silverlight stuff, and web control libraries are in separate solutions that include the projects you need when looking at those, but NAnt can build it all.
I believe your question is actually about when projects go together in the same solution; the reason being that projects in the same solution should have project references to each other, and projects in different solutions should have binary references to each other.
I tend to think solutions should contain projects that are developed closely together. Such as your API assemblies and your implementations of those APIs.
Closeness is relative, however. A designer for an application, by definition, is closely related to the app, however you wouldn't want to have the designer and the application within the same solution (if they are at all complex, that is). You'd probably want to develop the designer against a branch of the program that is merged at intervals further spaced apart than the normal daily integration.
I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion
In short, I separate by concept.
