How to upgrade the REDHAWK SDR framework to support the SCA 4.1 specification (Software Communications Architecture) - IDL

I am going through the source code of the REDHAWK SDR framework and want to upgrade it to support the SCA 4.1 specification. It is already partially compliant with SCA 2.2.2.
I have the IDL descriptions for the SCA 4.1 interfaces and compiled them with the omniORB IDL compiler using the C++ mapping. The skeleton and stub code is generated properly. Now I want to understand how to do the following:
Where to place the generated skeleton and stub code in the REDHAWK source code.
Where to place the server and client code based on these skeletons and stubs in the REDHAWK source code.
I also want to upgrade to logging as specified in SCA 4.1, i.e. the OMG Lightweight Log Service (https://www.omg.org/spec/LtLOG/1.1/PDF). Here again I have generated the skeleton and stub code but do not know how to proceed further.
Am I missing something (or a lot)?
Any pointers will be helpful. Please ask for any information I have not included, as I am still in the learning stage.

REDHAWK is an extension of SCA 2.2.2 rather than an implementation of it. Because of this, the IDL is not a complete one-to-one mapping of behavior, so even when switching the interfaces, you'll run into issues with mismatches in the underlying behavior. Also, REDHAWK extended the XML profile to include things like complex basic types and sequences as members of structures, which are not part of SCA 4.1.
You're also suggesting switching logging from log4cxx/log4j to the CORBA Lightweight Log Service (CosLwLog). Logging is embedded in the base classes of pretty much the entire code base, and replacing it will be a substantial challenge.
As a guide for the level of effort that you're considering, take a look at: https://github.com/Geontech/sca-jtnc. That project modified REDHAWK to implement the SCA 4.1 spec for the Python sandbox, code generators, a subset of device/component base classes, and a converter to transform REDHAWK projects into SCA 4.1 projects. It had to import multiple interfaces from REDHAWK and it does not include any of the system services (like the Domain Manager) or any updates to the IDE. That project can give you a working starting point to get you moving in the right direction and it should also provide you with some insight into the level of effort needed for the change.

It is fairly easy to write a very minimal implementation of the CosLwLog log service that supports just write_records and write_record. I chose to write one as a front end to log4cxx. This allows an application to make the standard CosLwLog calls and have the entries print to the same log file used by the REDHAWK core framework (or be redirected, as log4cxx allows). I use log4cxx calls directly in my platform devices and services, but this way an SCA application can use only the standard CosLwLog calls. Of course, it is much more work to support the more complex features of CosLwLog, but those are less often needed by an SCA application.
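For illustration only, a rough sketch of such a front end could look like the code below. It assumes the omniORB-generated header (called CosLwLog.hh here) provides a POA_CosLwLog::LogProducer skeleton and a ProducerLogRecord with producerName and logData string members; the actual module, interface, and member names depend on the IDL you compiled, so check your generated headers. The LogStatus operations that LogProducer inherits are omitted and would also need at least stub implementations before the servant can be instantiated and registered with the POA.

#include <log4cxx/logger.h>
#include "CosLwLog.hh"   // generated by omniidl from the LtLOG IDL (header name assumed)

// Minimal servant that forwards lightweight-log records to log4cxx.
class LwLogToLog4cxx : public virtual POA_CosLwLog::LogProducer
{
public:
    LwLogToLog4cxx() : _logger(log4cxx::Logger::getLogger("CosLwLog")) {}

    void write_record(const CosLwLog::ProducerLogRecord& rec)
    {
        // Every record is mapped to INFO for simplicity; a real front end
        // would translate rec.level onto the matching log4cxx level.
        // (.in() yields const char* in the omniORB string-member mapping.)
        LOG4CXX_INFO(_logger, rec.producerName.in() << ": " << rec.logData.in());
    }

    void write_records(const CosLwLog::ProducerLogRecordSequence& recs)
    {
        for (CORBA::ULong i = 0; i < recs.length(); ++i)
            write_record(recs[i]);
    }

    // NOTE: the LogStatus operations inherited by LogProducer still need
    // (stub) implementations before this class is concrete.

private:
    log4cxx::LoggerPtr _logger;
};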

Related

What reflection mechanisms are available in C++/WinRT?

I remember C++ had some runtime type information (RTTI) added sometime after Bjarne Stroustrup's original The C++ Programming Language, but I never had call to use it.
I am familiar with some of the COM and CLR reflection APIs, including ITypeInfo and System.Reflection. Would any of these work against types declared in a compiled C++/WinRT app?
This question addressed a similar question 5 years back for C++/CX; have there been changes?
C++/WinRT doesn't add to the native reflection capabilities of C++. However, the xlang metadata reader APIs can be used to inspect Windows Runtime metadata files (.winmd) that describe WinRT types. You can see the metadata reader library here (and there are examples of usage in the various tools in this repo):
https://github.com/Microsoft/xlang/blob/master/src/library/meta_reader.h
You can use that in conjunction with the Windows function RoGetMetaDataFile to locate the metadata for a type at runtime.
https://learn.microsoft.com/en-us/windows/desktop/api/rometadataresolution/nf-rometadataresolution-rogetmetadatafile
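For illustration only (not part of the answer above), a minimal sketch of resolving a type's .winmd path at runtime with RoGetMetaDataFile might look like the following. It assumes a desktop C++ project linked against an umbrella library such as WindowsApp.lib; treat the exact includes and link settings as assumptions to verify against the documentation.

#include <windows.h>
#include <winstring.h>             // WindowsCreateString, WindowsGetStringRawBuffer
#include <roapi.h>                 // RoInitialize / RoUninitialize
#include <rometadataresolution.h>  // RoGetMetaDataFile
#include <cstdio>

int main()
{
    RoInitialize(RO_INIT_MULTITHREADED);

    HSTRING typeName = nullptr;
    WindowsCreateString(L"Windows.Foundation.Uri", 22, &typeName);

    // Ask only for the path of the .winmd describing the type; a metadata
    // importer and type-def token could also be requested instead of the
    // two trailing nullptr arguments.
    HSTRING metadataFile = nullptr;
    HRESULT hr = RoGetMetaDataFile(typeName, nullptr, &metadataFile, nullptr, nullptr);
    if (SUCCEEDED(hr))
    {
        wprintf(L"metadata file: %s\n", WindowsGetStringRawBuffer(metadataFile, nullptr));
        WindowsDeleteString(metadataFile);
    }

    WindowsDeleteString(typeName);
    RoUninitialize();
    return 0;
}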
Note that C++/WinRT itself does not use the winmd file at runtime, and as such, code built with C++/WinRT does not require the winmd to be available at runtime. If the winmd isn't present, you won't be able to rely on it for type information.
If the metadata file is supplied for a type written in C++/WinRT, the .NET runtime can use the winmd to reflect over the projected types in much the same way that it can reflect over types written using the .NET runtime.
C++/WinRT does not provide any support at this time for dynamic invocation of types. This is an infrequent but recurring ask and is on our backlog.
Thanks,
Ben

Typescript in VS 2015 ASP.NET 4 MVC 5 - what are the working combinations of settings and choices?

I am adding a TypeScript project to a VS 2015 MVC 5 project (that makes it ASP.NET 4, not ASP.NET 5 or 6: only the MVC is version 5). This is the sole target for all aspects of my question; I cannot use generalized or theoretical guidance on TypeScript, Node.js, module loaders, etc.
The problem is simpler with ASP.NET Core, but that's not what I'm facing. All the usual sources of examples and guidance avoid ASP.NET 4 MVC 5 or provide scraps, because it is hard. And no one will state exactly how hard, or what precisely the obstacles are.
Worse, the TypeScript documentation is like open source documentation in general: you can only get one issue, one step deep. This produces a research workflow consisting of endless issue-tree recursion.
I understand opinions; I even have one. But I'm looking for the experiential answer: what is the one combination that has proven to work for a production team?
So here are the specific items that need to be addressed and made to work within the confines of a working, medium-sized ASP.NET 4 MVC 5 LOB app:
Visual Studio's version of TypeScript. This is an installation issue (using Node most simply), and the Tools / Options - TypeScript settings have to match.
Browser-style testing (typically a manual TDD workflow) or Node.js testing (automated). This has to be chosen up front to prevent more issue-tree recursion. We are going with browser-based... PhantomJS using Wallaby.js.
npm @types/library-name: supposed to fill a node_modules folder with both library-name and library-name.d.ts based only on a package.json with @types references. But it actually requires the package.json to hold a reference to both @types/library-name and library-name to work in my VS 2015 ENT v3 and ASP.NET 4 MVC 5 project. And all the versions specified then require manual correction, and even then the version look-up process is a little suspicious. This @types process may not be the way to go with ASP.NET 4 MVC 5, but I can't tell what the correct alternative might be. @types is currently the only recommended option for TypeScript.
Which version of ECMAScript: ES6 is apparently too far ahead. ES2015 is likely, but this appears (maybe) to have relationships to several of the other issues. Supposedly these designations are the same, but there are two places they can be set. I've chosen ES2015 in Tools / Options / TypeScript. But getting any of these (now 3) settings wrong could be a problem.
Module system: CommonJS is for Node and automated testing, and VS development testing is automated only for server-side, and VS UI tests are a manual process. So AMD with RequireJS is probably an option for VS, but it adds its own workflow, maintenance, and considerations that are really hard to get right in ASP.NET. Using ASP.NET bundling and triple-slash references (dependable) might work, but after you have put the libraries in node_modules, you would want to use the full path into node_modules in the file name slug in an import statement. This is all very clumsy and involves the most guesswork. But solving this whole item might be the 'key' for the overall question.
There are probably a lot of other, smaller issues. But someone who has done this will have solved all the mentioned items and the others as well.
What I'm looking for is all the settings across all these issues, in detail, based on a working TypeScript app in an ASP.NET 4 MVC 5 implementation with browser-based unit/behavior tests in VS 2015. Those who have done it will understand.
Thanks very much for your consideration.
What you're missing is separation of concerns: in spite of the initial benefit of such starter templates, they start to cause incidental dependencies and complicate the mental model. It's much easier to have your front end in a separate project.
Regardless:
Visual Studio's version of TypeScript.
Always use the very latest available. This controls the version of TypeScript which powers the IDE. You will probably end up compiling in a separate process or in the browser during development. Again, you will want to use the latest, but it will likely be installed with a different package manager.
Browser-style testing (typically a manual TDD workflow) or Node.js
testing (automated). This has to be chosen up front to prevent more
issue-tree recursion.
Firstly, I definitely agree with the importance of choosing up front, but it is still possible, just unpleasant, to add tests to an existing project.
TDD workflows involve automated testing as they rely on rapid feedback. This is orthogonal to whether you run your tests in the browser or using NodeJS.
You should use whichever approach makes the most sense for your application and that may be a mix of both.
Since you are writing a frontend JavaScript application you will likely want to run some tests in the browser. However, as Uncle Bob (Robert C. Martin) has stated, views should be dumb and require little testing. My interpretation of this is that we should not spend too much time testing things like Angular or React components to ensure that they render correctly, and instead focus on testing behavioral elements of the system such as services and plain old functions.
That said, you may well want to run tests of your client side services against an actual browser runtime, as opposed to just Node.js, and that is reasonable.
There are a number of testing libraries to help you with this. I do not have a specific recommendation besides to say that you should find a reliable test runner, and a simple assertion library. Tried and true testing libraries like QUnit and Tape are examples of solid options.
One last note: it is important not to confuse the concept of integration testing with running tests in a web browser; it's perfectly valid to run TDD-style tests, which implies unit tests, in a web browser.
npm @types/library-name: supposed to fill a node_modules folder with
both library-name and library-name.d.ts, but requires the package.json
to hold a reference to both @types/library-name and
library-name to work in my VS 2015 ENT v3 and ASP.NET 4 MVC 5 project.
Simply put, this goes back to decoupling your front end from your back end. Visual Studio and certainly ASP.NET have nothing to do with versioning your types packages.
If a package comes with its own type declarations then you don't need to install an auxiliary types package otherwise you do.
Either way, install JavaScript and TypeScript dependencies using a JavaScript-oriented package manager (such as npm, JSPM, or Yarn).
Do not use NuGet for these!
As you suggest, there are versioning issues; this is currently a difficult problem in TypeScript. However, once again, it has nothing to do with ASP.NET or Visual Studio.
Which version of ECMAScript: es6 is apparently too far ahead. es2015
is likely, but this appears to have (maybe) relationships to several
of the other issues.
ES6 is the same as ES2015, the latter being the name under which the former was ultimately released. ECMAScript now follows a yearly cadence, roughly, with ES2017 just around the corner.
The nice thing about having a transpiler such as TypeScript is that you can use the latest features from ES2017 and still target ES5 for emit, and you'll be fine.
Module system: CommonJS is for NodeJS and automated testing, and VS
development testing is automated only for server-side, and VS UI tests
are a manual process. So AMD/UMD require JS is probably the option for
VS, but it adds its own workflow and maintenance and considerations.
Using triple-slash references (dependable) might work, but after you
have put your/their libraries in node-modules you would want to use
the full path into node-modules in the file name slug in an import
statement. (solving this whole item might be the 'key' for the overall
question).
This is a very complex subject and probably the only one of your questions that you really need to spend a lot of time considering. As I said earlier using NodeJS or not is orthogonal to automated testing. But if you're targeting NodeJS natively with your test code then you will need to use CommonJS output.
For the actual application code, the choice has nothing at all to do with whether or not you are using Visual Studio; I'm sorry for reiterating this, but it really is important that you separate these ideas.
The question of which module format to use for your front end application code is a very important and contentious one.
Triple /// references are not a module format but rather a way of declaring the dependencies between global variables that are declared and referenced across multiple files.
They do not scale well, working acceptably when you have only a handful of files.
Triple /// references should not be used. They are not a modularity mechanism and their use is completely different from using any of the module systems/module formats you mention, including CommonJS.
Never combine them with a module system, which is what you would have to do in order to run your tests under NodeJS or load your app with RequireJS or anything else.
RequireJS is an excellent option, which would imply AMD modules as you say. RequireJS does not require any use of triple-slash references. In fact, they should be avoided like the plague when using this format or any other module format!
I recommend strongly against using UMD modules. Isomorphic JavaScript is a problematic idea, and it offers you no benefits since you are creating a browser application with a .NET backend.
Many developers actually do use CommonJS modules in a browser. This requires bundling them continuously, using tools such as Webpack. This approach has advantages and disadvantages. The primary advantages are the ability to lean on existing NodeJS JavaScript server-side tools, such as npm, by way of Webpack or Browserify. This may not sound like a big advantage but the amount of rich tooling available for CommonJS modules is nothing to scoff at, making it a strong option.
Consider using the System module format and the SystemJS loader via jspm to both manage your packages and load your code. With this approach, you gain the advantages of RequireJS and are able to run your tests under NodeJS and the browser using jspm run, without needing to switch target formats or bundle your code just to test it. There's also no need to bundle your code during development, although this is supported. More importantly, you gain the advantage of writing future-compatible code, as it offers the only module format and loader which correctly models the semantics that ES Modules will eventually have when implemented natively in browsers. JSPM has first class support for TypeScript, Babel, and Traceur.
For posterity here is the description of the System module format taken from the link above:
System.register can be considered as a new module format designed to support the exact semantics of ES6 modules within ES5. It is a format that was developed out of collaboration and is supported as a module output in Traceur (as instantiate), Babel and TypeScript (as system). All dynamic binding and circular reference behaviors supported by ES6 modules are supported by this format. In this way it acts as a safe and comprehensive target format for the polyfill path into ES6 modules.
Disclaimer:
I am a member of the JSPM GitHub organization, playing a role in maintaining the registry and have made very minor contributions to the jspm cli.

How can I make an SBT build for multi-projects and multi-platforms?

I'm starting on a medium project with many independent components that can run either on Android or the JVM and I'm wondering how to break it into SBT projects so that the dependencies behave nicely. Here's what I've got so far:
core/ for platform agnostic core code, must not break on either platform, this includes interfaces for component launchers
android-core/ for implementations of the core interfaces that depend on android libraries (note, this project depends on sbt-android)
jvm-core/ for implementations of the core interfaces that depend on libraries that don't play well with or depend on android
So far so good, but now it's time to consume the core projects in the individual components. My requirements are:
Each component should compile to a separate Android app (perhaps sharing an aar library?), and the apps can be individually installed (still via sbt, a la the android:install task)
There is a wrapper build so that all project builds can be done from the same place.
It is so easily extensible that fresh grad students can correctly add components (bonus points if adding a component needs no change whatsoever to the build).
If a component depends on a platform specific library it does not prevent other components from being compiled agnostically.
Some of the questions I have are:
Should each component have an sbt project? (I'm inclined to think so, so that students could add dependencies on libraries that don't run on both platforms, but I'm open to being wrong.)
If so, will each component's project require an sbt build?
If so, how can I bootstrap the component builds to require minimum skill from the component author?
Later I'm going to be adding code generation to generate message classes from descriptions (think protobuf/thrift), which will want to run as a first pass before the components get compiled. I'm assuming this can be done, but do you have a link that explains how?
If two components each compile against the messages of each other will that create impossible circular dependencies?
Basically I'm looking for wisdom and experience; the nitty-gritty code I'm sure I can hack my way through once I know what terms to search the docs for and roughly how the whole thing wants to hang together. Thanks for your help!

OSGi for non-java 3PPs

We are building a product that uses the Apache Hadoop and HBase frameworks for handling some of our big data requirements. We are also using Oracle for our reporting requirements. We are keen to go the OSGi way of bundling our software to take advantage of the remote deployment, service management, and loosely coupled packaging features that OSGi containers offer.
We have a few doubts in this area:
When it comes to our own Java apps, we now know how to create OSGi bundles out of them and deploy them to OSGi containers. But how do we handle Java-based 3PPs that have a clustered architecture, for example HBase/Hadoop? We saw that Fuse Fabric has created a Hadoop bundle (actually only HDFS, not MapReduce), but in general how do you go about creating bundles for 3PPs?
How do we handle non-Java 3PPs, for example Oracle? Should we create an OSGi bundle for it and deploy it over OSGi, or should we install these 3PPs outside of OSGi and write some monitoring scripts that are triggered over OSGi to track the status of these 3PPs? What are the best practices in this area?
Are all bundles launched over an OSGi container (like Karaf) run within the same single JVM of the container? Some of our applications and 3PPs are huge, and we may run into heap/GC issues if all of them are run inside a single JVM. What are the best practices here?
Thanks & Regards
Skanda
Creating bundles from non-OSGi libraries can be as simple as repackaging them with an appropriate manifest (there are tools for that, see below), but it can also become very difficult. OSGi has a special class-loading model, and many Java EE libraries that do dynamic class-loading don't play well with that.
I am not sure what you mean here. Theoretically OSGi supports loading native libraries using the Bundle-NativeCode manifest-header, but I have no experience with that.
Normally all bundles are run in the same virtual machine. However, Karaf supports clustering through Cellar; I don't know about other containers, though.
Tools for wrapping 3rd-party libraries
In general you can use bnd for this (the tool of choice when it comes to automated generation of OSGi bundle manifests). PAX-URL offers the wrap protocol handler, which is present by default in Karaf. Using that, wrapping a library can be as simple as this (e.g. from the Karaf command line, or in a feature descriptor):
wrap:file:path/to/library
The case of Oracle and most other database libs is simple. You can use the wrap protocol of Pax URL. Under the covers it uses bnd with default options. I have a tutorial for using databases with Apache Karaf.
In general, making bundles out of third-party libs can range from easy to quite complicated. It mainly depends on how many dirty classloading tricks the lib uses. Before you try to bundle stuff yourself, you should check whether ready-made bundles exist. Most libs today either come directly as bundles or are already available as bundles from some source. For example, the ServiceMix project creates a lot of bundles. You can ask on the user list there if something is available.

Qt & OpenGL: How do I force OpenGL 2.1?

I'm developing an application that makes use of Qt and OpenGL, using Qt Creator and QGLWidget subclassing.
My application has a user base that has a higher than average proportion of older hardware, which is why I need it to run on machines with graphics cards supporting OpenGL 2.1 only - or, in other words, I cannot rely on anything newer than 2.1 being present.
I am worried about unknowingly using OpenGL functionality that was introduced after 2.1. Is there any way I can configure OpenGL to "only" support 2.1, so that I would get a runtime error if I do something I shouldn't be doing? Or, failing that, what is the best practice to ensure compatibility?
The only thing you need to worry about is not creating an OpenGL 3 core profile context, and only using functions found in the OpenGL 2.1 specification.
Since creating an OpenGL 3 core context requires you to jump through some hoops, you won't run into problems there. The system may give you something newer than OpenGL 2.1, but as long as you don't use any functionality not found in the 2.1 specification document you're fine. You will have to use the extension mechanism to actually get at the functionality on Windows; OpenGL 2.1 is technically a list of extensions promoted to official functionality, so carefully read the appendix of the specification, where the functionality that was formerly an extension is explicitly mentioned.
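There is no switch that makes the driver reject post-2.1 calls, but as a practical sanity check you can verify at startup that the context actually reports at least OpenGL 2.1 and bail out otherwise. A minimal sketch using the Qt 4-era QGLWidget/QGLFormat API (assumed here because the question mentions QGLWidget subclassing):

#include <QGLWidget>
#include <QGLFormat>
#include <QMessageBox>

class MyGLWidget : public QGLWidget
{
protected:
    void initializeGL()
    {
        // openGLVersionFlags() reports every version the current context supports;
        // this only confirms the 2.1 baseline, it does not forbid newer functions.
        if (!(QGLFormat::openGLVersionFlags() & QGLFormat::OpenGL_Version_2_1)) {
            QMessageBox::critical(0, "OpenGL",
                                  "This machine does not support OpenGL 2.1.");
            return;
        }
        // Alternatively, inspect the raw version string reported by the driver.
        qDebug("GL_VERSION: %s",
               reinterpret_cast<const char*>(glGetString(GL_VERSION)));
    }
};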
