Is DataNucleus the successor to Kodo JDO? - jdo

I have inherited a 15+ year old JEE application which uses a long-since unsupported persistence layer called Kodo by Solarmetric (v4). Solarmetric was bought out by BEA, which was then bought by Oracle. Support for this persistence layer stopped long ago, and I am relying on 15+ year old technology to power the entire application.
I am looking to change the persistence implementation. From what I have been able to deduce, Kodo is based on the JDO spec (but I am not entirely certain which version).
To replace the technology with Hibernate or a pure JPA solution would be nightmarish - too much of the logic baked into the application relies on the JDO entity Id.
Instead, I'm looking to see if I can more easily upgrade/replace with a more current JDO implementation, such as DataNucleus.
Does anyone have any experience/success stories in upgrading such an old technology to something more recent? Is DataNucleus backwards compatible with something as old and unsupported as Kodo? Has the JDO spec changed significantly enough since 2005 that an implementation written against the 2005 spec would require a large rewrite to support the 2018 implementations?

DataNucleus is an independent (open source) implementation of JDO (and JPA too, for that matter). It started life as TJDO, then became JPOX (and became the reference implementation for JDO 2.0), before changing its name to DataNucleus in 2008. It is still the reference implementation for JDO (JDO 2.0, 2.1, 2.2, 3.0, 3.1, and 3.2).
It currently implements JDO 3.2, which is way more advanced than anything Kodo ever supported (they did JDO 2.0, before Oracle shafted anybody who used it by abandoning it). People have successfully upgraded JDO applications from other JDO providers to DataNucleus, but the answer to that question depends on whether you have used Kodo's vendor extensions. Of course DataNucleus is also open source (unlike Kodo), so you are protected from being held to ransom by companies and can contribute fixes if you have problems.
JDO has been expanded significantly since JDO 2.0 (what you use), adding annotations, type-safe queries, many more query methods, and other features. All JDO releases are intended to be backwards compatible, from what I remember. Go look at the Apache JDO website and the DataNucleus docs to see what has changed in JDO.
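To give a flavour of what that looks like in practice, here is a minimal sketch using a hypothetical Customer class: annotation-based metadata (added in JDO 2.1+, where Kodo-era code typically used package.jdo XML files) plus a parameterised JDOQL query. The setParameters/executeList calls are JDO 3.2 additions; the older execute(...) style still works on earlier API levels.

```java
import java.util.List;

import javax.jdo.PersistenceManager;
import javax.jdo.Query;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

// Hypothetical entity: annotations replace (or complement) the old package.jdo XML metadata.
@PersistenceCapable
public class Customer {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.NATIVE)
    private Long id;

    @Persistent
    private String name;

    // JDOQL query using the JDO 3.2 fluent additions (setParameters/executeList).
    public static List<Customer> byName(PersistenceManager pm, String name) {
        Query<Customer> query = pm.newQuery(Customer.class, "this.name == :name");
        return query.setParameters(name).executeList();
    }
}
```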

I did not work with Kodo, but I have worked with other JDO implementations and with DataNucleus. All I can say is that I expect it will be possible to port the code to DataNucleus. JDO is generally supposed to be backwards compatible, and what changes is mainly configuration, not code. I would strongly recommend not trying to move to other standards, as JDO is much broader and more flexible than JPA or Hibernate - so it would not only be easier to port but also easier for further development.
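To illustrate the "mostly configuration" point, here is a minimal, hypothetical bootstrap showing how a JDO provider is typically selected. The property names come from the JDO spec, and the DataNucleus factory class name is what its current documentation uses, so verify both against the versions you actually pick.

```java
import java.util.Properties;

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

public class PmfBootstrap {

    public static PersistenceManagerFactory createFactory() {
        Properties props = new Properties();
        // Swapping providers is largely a matter of changing this class name...
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
        // ...and the connection settings (placeholder values below).
        props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:postgresql://localhost/mydb");
        props.setProperty("javax.jdo.option.ConnectionUserName", "user");
        props.setProperty("javax.jdo.option.ConnectionPassword", "secret");

        return JDOHelper.getPersistenceManagerFactory(props);
    }
}
```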

Related

Axon Framework - Does it support ActiveMQ integration

Does Axon 4.0 support ActiveMQ integration?
I understand that it has an AMQP extension; however, it seems to be based on RabbitMQ (com.rabbitmq » amqp-client).
Not able to find any examples either.
The Axon-AMQP module supports AMQP 0.9. The 1.0 specification isn't supported yet.
The fact that Axon uses the RabbitMQ client doesn't make it dependent on RabbitMQ; it's just an implementation that speaks AMQP 0.9.
It should be fairly easy to integrate Axon with other messaging systems, taking the AMQP module as an example.
There might already be community-built modules out there. It’s worth doing a quick search on Google/Github to see if there is anything that suits your needs.
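If you do end up writing your own bridge, here is a very rough sketch of the idea - forwarding published Axon events to an ActiveMQ queue over plain JMS. It assumes Axon 4's EventBus.subscribe hook and the standard javax.jms API with the ActiveMQ client; treat the exact class and method names as assumptions to check against the current Axon and ActiveMQ docs.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.axonframework.eventhandling.EventBus;
import org.axonframework.eventhandling.EventMessage;

// Hypothetical bridge: forward every event published on the Axon EventBus to an ActiveMQ queue.
public class ActiveMqEventForwarder {

    public void attach(EventBus eventBus) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("axon.events"));

        // EventBus.subscribe receives batches of events as they are published.
        eventBus.subscribe(events -> {
            for (EventMessage<?> event : events) {
                try {
                    // Naive payload serialization, purely for illustration.
                    producer.send(session.createTextMessage(event.getPayload().toString()));
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });
    }
}
```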

SQLite library for .net Core/Standard: MS EF or sqlite.org?

I'm having trouble getting an overview of the different SQLite libraries to be used with .NET Core and/or Standard.
It seems there are primarily two:
MS: Microsoft.EntityFrameworkCore.Sqlite
sqlite.org: System.Data.SQLite
Is the MS library completely independent of the sqlite.org's libraries? And if so, which one is recommended to use?
I prefer simplicity... it seems I just need two DLLs if using sqlite.org's.
There are two main SQLite packages for .NET Core/Standard. They are independent and use separate native binaries.
Microsoft.Data.Sqlite
System.Data.SQLite
The former is provided by Microsoft, the latter by SQLite.org. I prefer the Microsoft one, but unless you're looking for specific functionality (SQLite.org's supports encryption, Microsoft's supports fts5, etc.) either one will probably work fine. They both implement System.Data.Common, so the APIs are almost identical. SQLite.org's can load arbitrary extensions, which Microsoft's cannot (though with the latest release it was looking like SQLite.org's couldn't load fts5, which I know worked with previous releases).
I would recommend using Entity Framework Core or another similar third-party database abstraction package unless you absolutely can't use modelling for some reason (EFCore still lets you run the occasional low level query if you need to). It's quicker to develop, and easier to maintain the code.
Microsoft.EntityFrameworkCore.Sqlite provides support for Sqlite in EFCore, using Microsoft.Data.Sqlite. So you would want to use that in this case. The internet says you can also use System.Data.SQLite but it looks like Microsoft.EntityFrameworkCore.Sqlite still tries to load the underlying native binary from Microsoft.Data.Sqlite for some purpose, though it does appear to use System.Data.SQLite for the actual database operations. Not sure exactly what's going on there.
Microsoft.EntityFrameworkCore.Sqlite is for the Entity Framework Core ORM - more complete, but heavier.
System.Data.SQLite.Core can be used with the Dapper ORM, but I use Microsoft.Data.Sqlite.Core with SQLitePCLRaw.bundle_e_sqlite3 and Dapper; I think I ran into problems with System.Data.SQLite.Core.
Using either without an ORM is hard, and I don't recommend it.

Binary serialization in .NET Core

I am working on a .NET Core project and I am trying to convert my List<T> to a byte[].
Using the .NET Framework, we could have achieved the same by using BinaryFormatter, but at the time of writing this question it looks like Microsoft does not yet support it in .NET Core and no upcoming releases seem to do that.
Can anybody tell how to perform this serialization in .NET Core?
Also, is binary serialization platform-dependent, and has it been deprecated in .NET Core for that reason?
You can use Binaron.Serializer - https://github.com/zachsaw/Binaron.Serializer
There's no need to decorate your class with any attributes.
Disclaimer: I'm the author of Binaron.Serializer.
You can use MessagePack. The package was chosen as Package of the Week on the .NET blog.
Nuget command:
Install-Package MessagePack
You can also take a look into their source code and see how it is implemented in .NET Core.
.NET Core 2.1 now includes a BinaryFormatter you can use for this.
You can find more details in this answer.
BinaryFormatter is being obsoleted in the upcoming .NET versions due to its security flaws.
It is basically safe only if both serialization and deserialization happen in the same process (which is not the case in most scenarios), so it has been decided to remove it from future versions.
Though the obsoletion document says that in .NET 8 the complete binary serialization infrastructure will be removed I still hope this can be somewhat influenced. I recently opened an issue to discuss the possible ways of making binary serialization (and any polymorphic serialization) safe: https://github.com/dotnet/runtime/issues/50909
But as the other answers also illustrate, there are many custom binary serializers you can choose from. ZachSaw's Binaron and MessagePack are equally popular, and I also made my own binary serializer public a few years ago (NuGet). It tries to address the security aspects along with good performance (meaning both speed and size).
But frankly, a vulnerable binary serializer should never be used when communicating between remote entities (including file and database sources). And even the speed of the slower text-based serializers will still be much faster than any network communication, so their speed can hardly be the real bottleneck.
For payload size and performance you can try BOIS, which focuses on packed data size and aims to provide the most compact packing so far. It also supports .NET Core:
https://github.com/salarcode/Bois

Alfresco Community Enterprise Feature Comparison

I've seen this question but the answers are simply not good enough. I've searched the web and could not find a clear listing of the main differences.
I am particularly surprised to see contradictions in the above link, that holds only 4 short answers.
So the question is, beyond support, what are (all) the differences between Alfresco Community and Enterprise editions (for the current versions of course)?
Are there functional or technical features that available in the Enterprise edition, that are not in the community edition?
I find it strange that it's so difficult to get a clear list. Looking at the forums to find this answer is not a serious option from a business perspective.
Until now, I found this link to be useful, but it's from 2009.
In particular, I find the platform support interesting, with the community edition supporting only the LAMP-style stack:
Linux
MySQL
Tomcat
OpenLDAP
Firefox
And the enterprise edition supporting:
Windows
SQL Server
WebLogic, WebSphere
AD/Kerberos
IE and Safari
Apparently, these features are only available in the enterprise edition:
JMX monitoring
Runtime administration: What's that exactly? And what's in the community edition then?
Runtime indexing consistency check and update: What's in the community edition then?
High performance and availability: How is that implemented and what's in the community edition then?
Storage policies
Open source and proprietary technology stack support: which ones exactly? Which ones are supported in the community edition?
If anyone could guide me towards serious documentation about these differences, that would be great.
I also went through the wiki but could not find an answer to my questions in there.
Differences between Enterprise and Community vary in detail from version to version and are mainly visible to administrators. We see or maintain both flavors of Alfresco in midsize to very large environments, and I would say it's more or less a question of taste and budget which edition is the best decision for you. Excellent skills in infrastructure and Java are highly advisable for both editions to run Alfresco in production.
The technical differences are not so dramatic that you can't provide very similar functionality for the users - so if you're actually facing this decision you should focus on a good technical partner, the support services, and maybe the fact that you only get official patches with an Enterprise subscription, not with Community. BTW, Alfresco Enterprise is not open source, but this is not a real point of interest for most end users. You can access the code as a subscription customer, but it is not publicly available/accessible.
The main differences in features are already named more or less:
Administration
Enterprise has more views and settings in the admin web GUI. In Community you can access most configuration only from the command line. This may sound like a restriction, but in real life administrators prefer the command line and scripting automation.
Enterprise lets you change some Alfresco settings at runtime (most settings still require a restart). Some can be changed in the GUI and more via the JMX interface. You're also able to stop and start subsystems such as the CIFS protocol server; we use this feature to switch a system into read-only mode. This is what is meant by "runtime administration". Community requires a restart of the service for most configuration changes. It is possible to work around this with advanced scripting like Groovy or by implementing modules.
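As a rough illustration of what talking to that JMX interface from code looks like, here is a small sketch using the standard javax.management remote API. The service URL, credentials, MBean name and attribute below are hypothetical placeholders - take the real values from your alfresco-global.properties and browse the available beans with JConsole first.

```java
import java.util.HashMap;
import java.util.Map;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Hypothetical JMX client: connect to a remote Alfresco JMX endpoint and read a bean attribute.
public class AlfrescoJmxClient {

    public static void main(String[] args) throws Exception {
        // Placeholder service URL - check the JMX settings of your own installation.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://alfresco-host:50500/alfresco/jmxrmi");

        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"admin", "secret"}); // placeholder credentials

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Placeholder MBean name and attribute - browse the real ones with JConsole.
            ObjectName name = new ObjectName("Alfresco:Name=SysAdmin");
            System.out.println(mbs.getAttribute(name, "server.allowWrite"));
        }
    }
}
```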
Indexing
Runtime indexing consistency check and update is not a self-healing feature, as you might expect. You will have to learn (at least for now) that you have to recreate the Alfresco index from time to time even in Enterprise environments, and that it is better to focus on good strategies for speeding up recreation or setting up standby indexes than on hunting failed indexing transactions with the check-and-update methods. For major document model changes you need to recreate the index anyway.
High performance and availability
This is mainly the cluster and replication functionality, which is no longer available in Community. It's similar to MS clusters: it's a lot of work for very little extra availability, since some concepts are missing. The price is high in terms of complexity and can end up in a loss of robustness. Even with Enterprise support it's a hard job to keep an Alfresco cluster running - so you need very good arguments to go this way. But of course: it's possible and available!
High performance: there shouldn't be any difference, and if there is, I'm very curious about the explanation.
Technology stack
The main difference is the database support. In Community you can only choose between MySQL and Postgres (no Oracle or MS SQL for Community). All other technologies are independent of Enterprise or Community (AD, Kerberos, OS, browser, ...).
Java container: I believe over 95% of all Alfresco installations run in Tomcat. That's the configuration which is documented, tested and scales. Using WebLogic or WebSphere gives you no added value except new challenges - quite the contrary: you have to solve most issues by yourself and can't benefit from others' experience.
Storage policies: I'm not quite sure and should check in 4.2.x whether the Content Store Selector / storage policies are still available in Community, but they were there in the 3.x versions.
[Edit]: storage policies have been removed in Community 4.2.x:
NoSuchBeanDefinitionException: No bean named 'storeSelectorContentStoreBase' is defined
If there is a real need for this functionality, someone may re-enable that feature by coding a module for Community.
Regards
This page explains the difference between the editions:
https://wiki.alfresco.com/wiki/Enterprise_Edition
This page is the canonical, comprehensive list of the differences.
If you are considering an Enterprise Subscription and you have a question that isn't answered by what you can find on that page, you should talk to your account rep.
Well, regarding JMX monitoring:
Runtime administration: Alfresco Enterprise allows you to perform certain actions on Alfresco subsystems without restarting the server. This lets you be very fast during debugging/development and also when making changes in a production environment. You can also access the JMX interface, which supports JMX Remoting.
There is no consistency check or update until you restart the server (during startup you have to validate/check/rebuild your indexes). There is an option in alfresco-global.properties (or the original repository.properties config file) for that. If you have some inconsistencies in the Alfresco Community index, you're gonna have a bad time xD.
Alfresco Enterprise has a specific license for clustering your architecture; the Community edition doesn't support those setups. Replicating and clustering Alfresco is one of the main improvements in performance/scalability/availability you could achieve.
The storage policies allow you to use Content Store selectors in Alfresco Enterprise. You can manage a primary and a secondary file store, and map/connect these stores in your architecture. The Community Edition allows you only to use one content store at a time.
These include everything inside Alfresco (Spring Framework, Apache Lucene/Solr, Tomcat, and so on), because with the Enterprise license you also get full support for everything inside the Alfresco package. The difference is that Community is based on daily builds, supported by the community, and therefore not guaranteed. Enterprise support helps you resolve many problems you might encounter during development and in production environments, not only Alfresco-related ones, but also some configuration issues on supported platforms (Windows/Linux), your web application servers, and so on.
Hope it helps.

Are there any efforts building an object-CMIS-model-mapper / other higher level abstraction under way?

Building an object-oriented application on top of CMIS can feel just about as low-level as using raw SQL. For SQL databases, we have OR mappers such as Hibernate or libraries such as iBATIS in the Java world to provide us with basic CRUD functionality for writing an application.
Of course there is no spec-based API analogous to JDBC (on which the higher level relational "tools" rely) for CMIS, but I guess that does not make a significant difference addressing the issue.
Are there any efforts making the life of CMIS-App developers a little more convenient ?
Have a look at http://chemistry.apache.org/java/opencmis.html. It is mainly developed for Java but is available (at different levels of stability) for Python, .NET and PHP.
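To give a sense of the abstraction level OpenCMIS offers (still query- and property-centric rather than a true object mapper), here is a small sketch using the standard OpenCMIS client API; the URL and credentials are hypothetical placeholders, so check the details against the current Apache Chemistry docs.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.QueryResult;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CmisQueryExample {

    public static void main(String[] args) {
        // Hypothetical connection parameters for an AtomPub-capable CMIS repository.
        Map<String, String> params = new HashMap<>();
        params.put(SessionParameter.ATOMPUB_URL, "http://localhost:8080/cmis/atom");
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        params.put(SessionParameter.USER, "admin");
        params.put(SessionParameter.PASSWORD, "secret");

        SessionFactory factory = SessionFactoryImpl.newInstance();
        // Connect to the first repository the server exposes.
        Session session = factory.getRepositories(params).get(0).createSession();

        // CMIS QL query; results come back as generic property rows.
        ItemIterable<QueryResult> results =
                session.query("SELECT cmis:name FROM cmis:document", false);
        for (QueryResult hit : results) {
            System.out.println(hit.getPropertyValueByQueryName("cmis:name"));
        }
    }
}
```

Even with this, mapping the QueryResult rows onto your own domain objects is still left to you, which is exactly the gap the question is asking about.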
