How to set up URIs for a classification that has new editions

I have a classification that gets a new edition at regular intervals. Concepts are added, the structure may change, or properties such as classification codes may change, but most concepts stay and should keep stable URIs. I want to know the best way to construct URIs to deal with these issues.
My setup tries to conform to the rule that good base URIs and concept URIs don't contain version or datetime information. The concepts themselves can always be referred to by the same identifier, unless there is a change in meaning; in that case a new concept has to be created. There can be new concepts or changes to the properties of an existing concept. This is how I set it up.
concept URIs: <http://example.org/myschema/concept/1>
concept scheme URI: <http://example.org/myschema/2018>
concept scheme URI, next version: <http://example.org/myschema/2020>
@base <http://example.org/myschema/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
Example of a concept:
<http://example.org/myschema/concept/1> a skos:Concept ;
    skos:inScheme <http://example.org/myschema/2018> ;
    skos:prefLabel "nameless dog" ;
    skos:notation "AB251" .
In the next version the classification code has changed:
<http://example.org/myschema/concept/1> a skos:Concept ;
    skos:inScheme <http://example.org/myschema/2020> ;
    skos:prefLabel "nameless dog" ;
    skos:notation "AB254" ;
    skos:changeNote "The classification code changed from AB251 in 2018 to AB254 in 2020 due to the addition of new concepts." ;
    dcterms:modified "2020-08-19"^^xsd:date .
I want to know what happens when someone has referenced http://example.org/myschema/concept/1 together with its prefLabel, and possibly even the 2018 classification code, once the 2020 version is live. The 2018 version also remains live. Maybe this should depend on my server setup, so that I always return the latest version when none is specified. I would like to keep giving access to the older classification version, but without creating confusion.
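For illustration, the "always return the latest version when none is specified" idea could be handled at the server: requests for the version-free concept URI default to the newest edition, while an explicit parameter still selects an older one. Below is a minimal sketch using the plain Java Servlet API; the ConceptServlet class, the edition request parameter, and the loadDescription lookup are hypothetical placeholders rather than part of the setup above.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ConceptServlet extends HttpServlet {

    // Assumed to be kept up to date whenever a new edition is published.
    private static final String LATEST_EDITION = "2020";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String conceptId = req.getPathInfo();          // e.g. "/1" for .../concept/1
        String edition = req.getParameter("edition");  // hypothetical parameter
        if (edition == null) {
            edition = LATEST_EDITION;                   // default to the newest edition
        }
        resp.setContentType("text/turtle");
        resp.getWriter().write(loadDescription(conceptId, edition));
    }

    // Hypothetical lookup: return the concept's description as it stands in the
    // requested edition, e.g. by querying a triple store for the triples whose
    // skos:inScheme points at that edition's scheme URI.
    private String loadDescription(String conceptId, String edition) {
        return "<http://example.org/myschema/concept" + conceptId + "> a skos:Concept .";
    }
}

This keeps the concept URI itself version-free while the older edition stays reachable through an explicit request.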

Related

AEM 6.2 replication issue for node containing invalid JCR names

I am working on an AEM 5.6 to 6.2 upgrade project. There are some nodes in the AEM 5.6 environment whose names contain invalid characters per the JCR naming convention (for example, rte[2] is one node name that doesn't follow it), but somehow we were able to replicate those nodes in the 5.6 environment. After upgrading to AEM 6.2, JCR seems to be more restrictive and won't allow nodes with invalid characters to replicate.
We are getting the following error in AEM 6.2:
com.day.cq.replication.ReplicationException: Repository error during node import: Cannot create a new node using a name including an index
Is there any way we can configure AEM 6.2 to stop checking JCR node names? Or is there any other solution?
JCR 2 does not allow [ as a valid character, so you won't get an easy workaround for this. It's one of the limitations, just like same-named siblings.
My recommendation is to modify these nodes before the upgrade/migration to 6.2. This can be complicated and costly for the business, but 6.2 won't allow them.
As background, [ was allowed in older versions due to twisted support for the grammar syntax for same-named siblings.
I am assuming these are all content nodes, as nothing out of the box in AEM 5.x follows this naming convention.
Some ways to fix it:
Write a custom servlet to query and rename the paths across all references. You will have to test your content for these renames.
Use Groovy console (https://github.com/OlsonDigital/aem-groovy-console) to rename the nodes.
In either case, you will need to modify the nodes before the migration; since the structure is not Oak-compliant, you cannot use crx2oak commit hooks either. This can be done with both an in-place upgrade and a side-by-side migration. It is similar to the problem with same-named siblings, which must also be corrected before the migration.
Some efficiency techniques that might help:
Write queries to find invalid node names on top-level nodes like /content/mysite-a, /content/mysite-b, etc. Don't run root-level queries on /content, as they might degrade to a traversal and halt execution.
Ensure that all references are updated in the same commit. If you are using a custom servlet, call session.save() only after updating all the node names and their corresponding references (a rough sketch of the core rename logic follows below).
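For the custom-servlet route, the core of such a rename pass could look roughly like the sketch below, written against the standard JCR API. The InvalidNameFixer class, the query shape, and the renaming scheme (turning "rte[2]" into "rte_2") are assumptions to adapt, not a ready-made solution, and paths that already contain '[' may need extra escaping before being handed to session.move().

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class InvalidNameFixer {

    // Find nodes under rootPath whose names contain '[' and rename them,
    // saving everything in a single commit at the end.
    public void fixNames(Session session, String rootPath) throws RepositoryException {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(
                "SELECT * FROM [nt:base] AS n WHERE ISDESCENDANTNODE(n, '" + rootPath + "') "
                + "AND NAME(n) LIKE '%[%'",
                Query.JCR_SQL2);
        NodeIterator nodes = query.execute().getNodes();

        while (nodes.hasNext()) {
            Node node = nodes.nextNode();
            String newName = node.getName().replace("[", "_").replace("]", "");
            session.move(node.getPath(), node.getParent().getPath() + "/" + newName);
        }

        // Update any content that references the renamed paths here as well,
        // then persist the whole batch in one commit.
        session.save();
    }
}

Run it per site root (for example /content/mysite-a), as suggested above, rather than against /content.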
As I mentioned in the comment, this replication failure is caused by the Oak name-validation restriction shown in the code snippet below:
// handle index
if (oakName.contains("[")) {
    throw new RepositoryException("Cannot create a new node using a name including an index");
}
I feel you can't escape this constraint, as it is required by the repository to maintain consistency.
You can find the offending nodes (names containing '[') with a query along these lines:
SELECT [jcr:path] FROM [nt:base] WHERE ISDESCENDANTNODE('/content/path/') AND [jcr:path] like '%\['
To modify the JCR/CRX nodes you can use curl or the SlingPostServlet (a small example follows the links below).
Some helpful posts are below.
https://github.com/paulrohrbeck/aem-links/blob/master/curl_cheatsheet.md
http://sling.apache.org/site/manipulating-content-the-slingpostservlet-servletspost.html
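As a rough illustration of the SlingPostServlet route, the documented :operation=move and :dest parameters can be POSTed to the node that needs renaming. The sketch below uses plain HttpURLConnection; the host, credentials, and paths are placeholders, and note that the '[' in the source path has to be URL-encoded.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SlingMoveExample {
    public static void main(String[] args) throws Exception {
        // Placeholder instance, credentials and paths; adjust for your environment.
        URL url = new URL("http://localhost:4502/content/path/rte%5B2%5D");
        String form = ":operation=move&:dest="
                + URLEncoder.encode("/content/path/rte_2", "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(form.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Move returned HTTP " + conn.getResponseCode());
    }
}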
Can you try migrating with a tool like oak-upgrade and let us know if you are still facing this issue?
The tool is robust, and it gives you the flexibility to configure specific sub-trees for migration.

OpenJPA 2.0: How to map an Oracle sys.XMLTYPE column to String

I made the change in persistence.xml.
I also changed the column definition (columnDefinition="XDB.XMLType") for the XML fields.
I checked the OpenJPA site (http://openjpa.208410.n2.nabble.com/Oracle-XMLType-fetch-problems-td6208344.html) and the IBM site (http://www.ibm.com/support/knowledgecenter/SS7J6S_7.5.0/com.ibm.wsadapters.jca.jdbc.doc/env/doc/rjdb_problemsolutions.html).
My environment is OpenJPA 2.0 and WAS 7.
It is throwing this exception:
org.apache.openjpa.persistence.PersistenceException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.XMLTYPE", line 169
Please suggest how I can handle sys.XMLTYPE data without changing OpenJPA 2.0, as it is part of IBM WebSphere Application Server V7.0. I am migrating my application from DB2 to Oracle in the same environment.
Writing XML data can be tricky sometimes! Getting the correct drivers and things defined properly can have its challenges. I can't say exactly what you need to do given the lack of info on your domain model and such, but let me give you some general things to look for. First, there is an XML test in the OpenJPA test framework if you want to use it as a reference. It can be seen publicly here:
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/jdbc/oracle/
Or, another test using an "XMLValueHandler" (likely this is beyond the scope of what you are looking for):
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/persistence/xmlmapping/query/
Second (stating the obvious), I assume you have a column in Oracle defined as "XMLTYPE". Also, I see you are using the SYS schema. I'm sure you are aware, but this is a system/admin schema; just for sanity's sake you might want to first get things running using a non-system/admin schema, so we don't get hung up on permission issues with your OpenJPA client.
Next, you need the following definition:
@Lob
@Basic
@Column(name = "ANXMLCOLUMN", columnDefinition = "XMLCOLUMN XMLType")
private String anXMLString;
The @Lob will, I think, be necessary if you are using data greater than 4000 chars (this was mentioned in one of the comments). To start, I'd use a very small set of data (a couple of characters); once that works, then experiment with > 4k.
Next, make sure to use the correct JDBC driver. The last time I experimented with an XMLType I used the Oracle JDBC 11.2.0.2 driver.
Finally, you might need to use the property "openjpa.jdbc.DBDictionary" with the value "oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)". Again, experiment with this AND look at the OpenJPA documentation on these properties to determine whether they are necessary in your scenario. I think supportsSetClob=true is only necessary for older versions (pre-2.2.x) of OpenJPA. You might also need to use the property "openjpa.jdbc.SchemaFactory" with the value "native". I would suggest you first try without either of these two properties; if that doesn't help, then experiment with them. I know this is vague, but I don't know what your DDL or domain model looks like, so I have to keep it vague.
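If it helps, the two properties mentioned above can also be supplied programmatically when the EntityManagerFactory is created, instead of editing persistence.xml. A minimal sketch follows; "myPersistenceUnit" is a placeholder for your persistence unit name, and whether these particular values apply depends on your OpenJPA version, as noted above.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OpenJpaBootstrap {
    public static EntityManagerFactory createFactory() {
        Map<String, Object> props = new HashMap<String, Object>();
        // Oracle dictionary tweaks discussed above; verify against the OpenJPA
        // documentation for your exact version before relying on them.
        props.put("openjpa.jdbc.DBDictionary",
                "oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)");
        props.put("openjpa.jdbc.SchemaFactory", "native");
        // Placeholder persistence unit name.
        return Persistence.createEntityManagerFactory("myPersistenceUnit", props);
    }
}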
Thanks,
Heath Thomann

Setting Vendor Number in X++ General Journal

I'm posting to a general journal in AX 2012. I'm able to create the journal and enter most line information from X++ code. Now I'm having an issue with some vendor numbers.
When I use this code it works most of the time, but there are a few vendor numbers that cannot be found in this table, which of course throws an error. The weird thing is that if I actually go into AX and type in the vendor number, it is accepted, because it has been set up in the system; through this code, however, it does not work (for some vendors).
I'm just wondering whether there is something wrong with how the vendor number is set up, perhaps not linked with something, or whether there is another way to properly set this parameter.
It looks like parmLedgerDimension uses a RecId for vendors and only pulls it from DimensionAttributeValueCombination.
Any ideas?
DimensionAttributeValueCombination davc;
select firstonly RecId from davc where davc.DisplayValue == account; // account could be "010-000001"
journalTrans.parmLedgerDimension(davc.RecId);
Try using
DimensionStorage::getDynamicAccount(account, LedgerJournalACType::Vend);
https://msdn.microsoft.com/en-us/library/dimensionstorage.getdynamicaccount.aspx
Edit: the first line I posted was to get the default dimension. I'll leave it here in case you need it:
LedgerJournalEngine::getAccountDefaultDimension(account, curext(), LedgerJournalACType::Vend);
https://msdn.microsoft.com/en-us/library/ledgerjournalengine.getaccountdefaultdimension.aspx

Get AST from .Net assembly without source code (IL code)

I'd like to analyze .NET assemblies so as to be language-independent of C#, VB.NET, or whatever else was used.
I know Roslyn and NRefactory, but they only seem to work at the C# source-code level?
There is also the "Common Compiler Infrastructure: Code Model and AST API" project on CodePlex, which claims to "support a hierarchical object model that represents code blocks in a language-independent structured form"; that sounds exactly like what I am looking for.
However, I am unable to find any useful documentation or code that actually does this.
Any advice on how to achieve this?
Could Mono.Cecil maybe do something like this?
You can do this, and there is also one (although tiny) example of it in the source of ILSpy.
// AstBuilder and DecompilerContext come from the ILSpy / ICSharpCode.Decompiler code base
var assembly = AssemblyDefinition.ReadAssembly("path/to/assembly.dll");
var astBuilder = new AstBuilder(new DecompilerContext(assembly.MainModule));
astBuilder.AddAssembly(assembly);
astBuilder.SyntaxTree...
The CCI Code Model is somewhere between an IL disassembler and a full C# decompiler: it gives your code some structure (e.g. if statements and expressions), but it also contains some low-level stack operations like push and pop.
CCI contains a sample that shows this: PeToText.
For example, to get Code Model for the first method of the Program type (in the global namespace), you could use code like this:
string fileName = "whatever.exe";
using (var host = new PeReader.DefaultHost())
{
    var module = (IModule)host.LoadUnitFrom(fileName);
    var type = (ITypeDefinition)module.UnitNamespaceRoot.Members
        .Single(m => m.Name.Value == "Program");
    var method = (IMethodDefinition)type.Members.First();
    var methodBody = new SourceMethodBody(method.Body, host, null, null);
}
To demonstrate, if you decompile the above code and show it using PeToText, you're going to get:
Microsoft.Cci.ITypeDefinition local_3;
Microsoft.Cci.ILToCodeModel.SourceMethodBody local_5;
string local_0 = "C:\\code\\tmp\\nuget tmp 2015\\bin\\Debug\\nuget tmp 2015.exe";
Microsoft.Cci.PeReader.DefaultHost local_1 = new Microsoft.Cci.PeReader.DefaultHost();
try
{
push (Microsoft.Cci.IModule)local_1.LoadUnitFrom(local_0).UnitNamespaceRoot.Members;
push Program.<>c.<>9__0_0;
if (dup == default(System.Func<Microsoft.Cci.INamespaceMember, bool>))
{
pop;
push Program.<>c.<>9.<Main0>b__0_0;
Program.<>c.<>9__0_0 = dup;
}
local_3 = (Microsoft.Cci.ITypeDefinition)System.Linq.Enumerable.Single<Microsoft.Cci.INamespaceMember>(pop, pop);
local_5 = new Microsoft.Cci.ILToCodeModel.SourceMethodBody((Microsoft.Cci.IMethodDefinition)System.Linq.Enumerable.First<Microsoft.Cci.ITypeDefinitionMember>(local_3.Members).Body, local_1, (Microsoft.Cci.ISourceLocationProvider)null, (Microsoft.Cci.ILocalScopeProvider)null, 0);
}
finally
{
if (local_1 != default(Microsoft.Cci.PeReader.DefaultHost))
{
local_1.Dispose();
}
}
Of note are all those push, pop and dup statements and the lambda caching condition.
As far as I know, it's not possible to build an AST from a binary (without sources), since the AST itself is generated by the parser from sources as part of the compilation process.
Mono.Cecil won't help, because with it you can only modify opcodes/metadata, not analyze the assembly.
But since it's .NET, you can dump the IL code from the DLL with the help of ildasm. Then you can pass the generated sources to any parser with a CIL grammar hooked up and get an AST from that parser. The problem is that, as far as I know, there is only one publicly available CIL grammar, so you don't really have a choice. And ECMA-335 is big enough that it's a bad idea to write your own grammar.
So I can suggest only one solution:
Pass the assembly to ildasm.exe to get the CIL.
Pass the CIL to an ANTLR v3 parser with this CIL grammar wired up (note that it's a little outdated: the grammar was created in 2004 and the latest CIL specification is from 2006, but CIL doesn't really change that much).
After that you can freely access the AST generated by ANTLR.
Note that you will need ANTLR v3, not v4, since the grammar is written for the 3rd version, and it is hardly possible to port it to v4 without good knowledge of ANTLR syntax.
You can also try looking into the new Microsoft RyuJIT compiler sources on GitHub (part of CoreCLR). I'm not sure it will help, but in theory it must contain a CIL grammar and parser implementation, since it works with CIL code. However, it's written in C++, has an enormous code base, and lacks documentation since it's in an active development stage, so it may be easier to stick with ANTLR.
If you treat the .net binary file as a stream of bytes, you ought to be able to "parse" it just fine.
You simply write a grammar whose tokens are essentially bytes. You can certainly build a classical lexer/parser with almost any set of lexer/parser tools by defining the lexer to read single bytes as tokens.
You can then build the AST using standard AST-building machinery for the parsing engine (on your own for YACC, automatically with ANTLR4).
What you will discover, of course, is that "parsing" isn't enough; you'll still need to build symbol tables, and carry out control and data flow analyses if you are going to do serious analysis of the corresponding code. See my essay on LifeAfterParsing.
You will also likely have to take into account "distinguished" functions that provide key runtime facilities to the particular programming languages that actually generated the CIL code. And these will make your analyzers language-dependent. Yes, you still get to share the part of the analysis that works on generic CIL.

Are there solutions for streamlining the update of legacy code in multiple places?

I'm working in some old code that was originally designed to handle two different kinds of files. I was recently tasked with adding a new kind of file to this code. Most of my problems were solved by filling out an extensive XML file with a new entry that handled everything from what the lists are named to how the file type is written in plural lower case. But this ended up being insufficient, as there were maybe 50 different places in 24 different code files where I had to update hardcoded switch statements that only branched for the original two file types.
Unfortunately there is no consistency in this: there are methods that operate half off the XML file and half off hardcoded values. Some of the files that look like they would operate off the XML file don't, and some where I would expect to have to update hardcode don't need it. So the only way to find the majority of these is to test the whole system while only part of it is operational, find the one step to fix (when I'm lucky enough that error logging actually tells me what is going on), and then run the whole thing again. This wastes time testing the parts of the code that are already confirmed to work, time better spent testing the new parts I have to add on top of it all.
It's a hassle and a half, and with my luck I can expect to have to add yet another new kind of file in the near future.
Are there any solutions out there that can aid in this kind of endeavour? Something into which I can feed some parameters of the current features, which documents the points in a whole code project that actually need to be updated, and which I can run the next time I need to add a new feature to the code. It needn't even be fully automated; something that helps me navigate straight to the specific points in everything and maybe even records what kinds of parameters need to be loaded.
I doubt it matters specifically, but the code comprises ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files. It's all currently in a couple of big Visual Studio 2008 projects.
It's not exactly what you are describing, but if you can introduce a seam into the code and lay down some interfaces you can break out and mock, a suite of unit/integration tests would go a long way toward helping you modify old code you may not fully understand.
I completely agree with the comment about using Michael Feathers' book to learn how to wedge new tests into legacy code. I'd also strongly recommend Refactoring by Martin Fowler. What it sounds like you need to do for your code is the "Replace conditionals with polymorphism" refactoring.
I imagine your code today looks somewhat like this:
if (filetype == 23)
{
    type23parser.parse(file);
}
else if (filetype == 69)
{
    filestore = type69reader.read(file);
    File newfile = convertFSto23(filestore);
    type23parser.parse(newfile);
}
What you want to do is abstract away all the "if (type == foo)" kinds of logic into strategy patterns that are created by a factory.
class FileRules
{
private:
    FileReaderRules *pReader;
    FileParserRules *pParser;
public:
    FileRules() : pReader(NULL), pParser(NULL) {}
    void read(File* inFile) { pReader->read(inFile); }
    void parse(File* inFile) { pParser->parse(inFile); }
};

class FileRulesFactory
{
public:
    FileRules* GetRules(int inputFiletype, int parserType)
    {
        // (Sketch: assume the factory fills in the new FileRules' reader and parser.)
        switch (inputFiletype)
        {
        case 23:
            pReader = new ASCIIReader;
            break;
        case 69:
            pReader = new EBCDICReader;
            break;
        }
        switch (parserType)
        ... etc ...
then your main line of code looks like this:
FileRules* rules = FileRulesFactory().GetRules(filetype, parsertype);
rules->read(file);
rules->parse(file);
Pull off this refactoring, and adding a new set of file types, parsers, readers, etc., becomes as simple as writing one exclusive to your new type.
Of course, go read the book. I vastly oversimplified it here, and probably got stuff wrong, but you should get the general idea of how to approach it from this. I can also recommend another book, Head First Design Patterns, which has a great section on the factory patterns (if you like those "Head First" kinds of books).
