Migrate AX 2012 R3 CU12 model to Dynamics 365 error: no metadata to upgrade - axapta

I'm upgrading a custom product/model from AX 2012 R3 CU12 to Dynamics 365 using the Lifecycle Services code upgrade service.
When uploading the compressed model store, I hit the following error:
“Customer models contain no metadata to upgrade. Please make sure the Descriptor folder has valid contents.”
I’ve tried searching online for more information or an explanation of the error, but I can’t find a solution.

I exported the model store again from AX 2012 R3 CU12 and uploaded the new (compressed) file. The process completed successfully, so I assume something must have gone wrong with the first export of the model store.
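For reference, a rough sketch of the export and compression steps from the Microsoft Dynamics AX 2012 Management Shell (the file paths below are placeholders, and Compress-Archive assumes PowerShell 5 or later; any zip tool will do):
# Export the model store from the AX 2012 R3 environment (path is an example)
Export-AXModelStore -File "C:\Temp\MyModelStore.axmodelstore"
# Compress the exported file before uploading it to the LCS code upgrade service
Compress-Archive -Path "C:\Temp\MyModelStore.axmodelstore" -DestinationPath "C:\Temp\MyModelStore.zip"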

Related

Error in secure load of data source. An error occurred while decrypting data

I know a similar question has already been asked here, but since the given answer doesn't help me and my problem is somewhat different, I'm asking a new question.
I create data sources through admin panel -> Configure -> Datasources and it works fine. But if I restart the server, all of the created datasources go missing from the datasources list.
When I start the ESB server, I can see in the logs that those datasources could not be loaded properly, because the ESB server is unable to decrypt sensitive data that it encrypted earlier:
DataSourceRepository Error in updating data source [remove:false] at path '/repository/components/org.wso2.carbon.ndatasource/myDatasource
Error in updating data source 'myDatasource' from registry
[remove:false]: Error in secure load of data source
meta info: An error occurred while decrypting data
Although myDatasource is missing from the datasources list, I can still see it in the registry through admin panel -> Registry -> /_system/config/repository/components/org.wso2.carbon.ndatasource/myDatasource
I have the same issue. This seems to be some error introduced in 6.4.0.
6.3.0 does not exhibit this behaviour.
As a workaround, if you define the datasources in /conf/datasources/master-datasources.xml, then the datasources load correctly at server startup. However, this is not an ideal solution, as they cannot then be edited through the web console.
Alternatively, you can download the registry entry, edit the password element to remove 'Encrypted=true', and change the password to be unencrypted. Then upload the edited file as a new registry entry alongside the old one.
Neither method is feasible for production, though, as they leave the passwords unencrypted.
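For the first workaround, a datasource entry in /conf/datasources/master-datasources.xml looks roughly like this (the name, JDBC URL, credentials and driver below are placeholders):
<datasource>
    <name>myDatasource</name>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/mydb</url>
            <username>dbuser</username>
            <password>dbpassword</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>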
Out of interest, are you running this on Windows? I also discovered on EI 6.4.0 that the Ciphertool utility will not run due to a malformed path. I suspect this might be related, but I cannot find where the malformed path is coming from; it seems to repeat the {carbon.home} element within the path:
C:\Program Files\WSO2\Enterprise Integrator\6.4.0\bin>ciphertool
Using CARBON_HOME: C:\Program Files\WSO2\Enterprise Integrator\6.4.0
Using JAVA_HOME: C:\WSO2\Enterprise Integrator\6.3.0\jdk\jdk1.8.0_192
Exception in thread "main" java.nio.file.InvalidPathException: Illegal char <:> at index 51: C:\Program Files\WSO2\Enterprise Integrator\6.4.0\C:\Program Files\WSO2\Enterprise Integrator\6.4.0\/repository/resources/security/wso2carbon.jks
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at org.wso2.ciphertool.utils.Utils.getConfigFilePath(Utils.java:98)
at org.wso2.ciphertool.utils.Utils.setSystemProperties(Utils.java:289)
at org.wso2.ciphertool.CipherTool.initialize(CipherTool.java:93)
at org.wso2.ciphertool.CipherTool.main(CipherTool.java:52)

Import and explore Oracle .dmp file

I have a .dmp file (an export from an Oracle DB) that I need to get data from. I have installed Oracle 11g Express Edition successfully, and with the SYSTEM user and the imp command I imported the file (with some warnings), but after hours of searching the Internet, I still have no idea how to explore the database created from that file.
Can someone explain to me how to do that, or at least point me to some relevant documentation, please? This is the first time I have used Oracle.
Thanks!
Update: I already tried Oracle SQL Developer. I use the SYSTEM user that was created automatically when I installed XE.
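In case it helps, here is roughly how you can look around once connected as SYSTEM in SQL Developer or SQL*Plus (IMPORTED_SCHEMA below is a placeholder; imp normally creates the objects under the schema they were exported from):
-- List schemas, newest first, to spot the one the import created
SELECT username, created FROM dba_users ORDER BY created DESC;
-- List the tables owned by that schema, then query them as usual
SELECT table_name FROM dba_tables WHERE owner = 'IMPORTED_SCHEMA' ORDER BY table_name;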

Sonar 5.3 force issues report generation on publish analysis mode

I've been working with Sonar 4.5 and would like to migrate to version 5.3. During testing I've noticed that the Issues Report is not generated when the analysis mode is publish (it needs to be preview), yet we need to store the analysis in the DB on each Sonar run.
My question is: is there a way to force it to generate the Issues Report even when running in publish mode? (I suspect this limitation is related to the fact that publish mode takes longer to finish.)
PS: I need the Issues Report in order to extract metrics from it and publish them on TeamCity.
This is not possible anymore. If you want to retrieve some information from SonarQube, you should use the standard public web services (WS) to achieve this.
Once the analysis has been sent to the server, you can check <work_dir>/report-task.txt to find out which URL to call to know when the report has been processed (the URL is given by the ceTaskUrl property in this file).
Once the call to this WS returns the SUCCESS status, you can query the /api/resources WS (using the key of the project given by the projectKey property) to get any information you want about the project that has just been analysed.
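As a rough sketch (the server URL, task id and project key below are made-up examples of the values found in report-task.txt):
# Poll the Compute Engine task until the status is SUCCESS (URL comes from the ceTaskUrl property)
curl -u admin:admin "http://localhost:9000/api/ce/task?id=AVAn5RKqYwETbXvgas-I"
# Then query the project (key from the projectKey property) for the metrics you need
curl -u admin:admin "http://localhost:9000/api/resources?resource=my:project&metrics=ncloc,violations&format=json"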

Publishing Databases using DacPac in Visual Studio 2013

I need some clarification on when to register a Database as a Data Tier Application (DAC). I've looked at all the guides but am stuck on a few points.
The database is NOT registered
Build Database Project to produce DacPac
Publish the Database Project
Check "Register as a Data Tier Application"
Check "Block publish when database has drifted from registered version"
First time round, this works. It registers the database and succeeds.
However, on subsequent publishes it fails, saying the DB has drifted and noting two users which have not changed.
Am I following the correct process? i.e. setting the Publish script to re-register each time?
What is the best practice for making changes? By changing the relevant .sql files in the Database Project and then building? The guides talk a lot about being able to version the DB using the DacPac, but it's not clear how. Should I rename each DacPac and commit it to TFS?
My next step is to publish the Database as part of the overall ASP.Net Solution. When I try to do that (it works fine when the DB publish is not included), it comes up with the following error
Web deployment task failed. (The SQL provider cannot run with dacpac option because of a missing dependency. Please make sure that DacFx is installed. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_DACFX_NEEDED_FOR_SQL_PROVIDER.)
However, I have all the required elements installed on the publishing machine. Do they need to be on the SQL Server or IIS VMs?
Any guidance would be much appreciated.
If you want to deploy your changes to a database using a dacpac you would need to register the database as a DAC. This basically creates a snapshot of the database at that point in time. You do this before making a change to create the initial snapshot and then after a deployment.
The reason you do this is to detect drift. Let's say you do a deployment and someone then makes a change directly in that database, for instance changing the logic of a stored procedure; you would want to know about that change before making a subsequent deployment. If you deploy your dacpac and ignore this change, it will revert their change to what's in the dacpac model. This is where drift occurs. You can generate an XML report of what has drifted through the SDK.
You can enable a setting to block deployment if drift occurs, so that you can retrofit those direct database changes into your source code. You would then need to re-register the database as a DAC to create a new snapshot.
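If you deploy with sqlpackage.exe rather than the Visual Studio publish dialog, the two checkboxes correspond roughly to these publish properties (the server, database and file names below are placeholders):
sqlpackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" /TargetServerName:"MYSQLSERVER" /TargetDatabaseName:"MyDatabase" /p:RegisterDataTierApplication=True /p:BlockWhenDriftDetected=True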
Am I following the correct process? i.e. setting the Publish script to re-register each time? Yes
What is the best practice for making changes? By changing the relevant .sql files in the Database Project and then building? Yes
The guides talk a lot about being able to version the DB using the DacPac but its not clear how. Should I rename each DacPac and commit it to TFS? You can set a version within the database. Have a look at the properties of the database project. You shouldn't rename the dacpac.
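On the versioning point, the DAC version is set in the database project's properties (Project Settings); in the .sqlproj file it ends up as something like:
<PropertyGroup>
    <DacVersion>1.2.0.0</DacVersion>
</PropertyGroup>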
About the ASP.Net publish, I would need a bit more detail around the project structure and environment setup.

Searching through ELMAH error log files (perhaps in the 1000's)

We have a folder of ELMAH error logs in XML format. There will be millions of these files, and each file might be up to 50 KB in size. We need to be able to search the files (e.g. what errors occurred, what system failed, etc.). Is there an open source system that will index the files and perhaps help us search through them using keywords? I have looked at Lucene.Net, but it seems that I would have to code the application myself.
Please advise.
If you need to have the logs in a folder in XML, elmah-loganalyzer might be of use.
You can also use Microsoft's Log Parser to perform "sql like" queries over the xml files:
LogParser -i:XML "SELECT * FROM *.xml WHERE detail like '%something%'"
EDIT:
You could use a combination of Nutch+Solr or Logstash+Elasticsearch as an indexing solution (see the Logstash sketch after the links below):
http://wiki.apache.org/nutch/NutchTutorial
http://lucene.apache.org/solr/tutorial.html
http://blog.building-blocks.com/building-a-search-engine-with-nutch-and-solr-in-10-minutes
http://www.logstash.net/
http://www.elasticsearch.org/tutorials/using-elasticsearch-for-logs/
http://www.javacodegeeks.com/2013/02/your-logs-are-your-data-logstash-elasticsearch.html
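For the Logstash+Elasticsearch route, a minimal Logstash configuration could look something like the following (the paths, index name and field handling are assumptions; ELMAH files are multi-line XML, so in practice you would also need a multiline codec or a custom parser):
input {
  file {
    path => "C:/elmah/errors/*.xml"
    start_position => "beginning"
  }
}
filter {
  xml {
    source => "message"
    target => "error"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "elmah-errors"
  }
}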
We are a couple of developers building the website http://elmah.io. elmah.io indexes all your errors (in Elasticsearch) and makes it possible to do funky searches, group errors, hide errors, time-filter errors, and more. We are currently in beta, but you will get a link to the beta site if you sign up at http://elmah.io.
Unfortunately, elmah.io doesn't import your existing error logs. We will open source an implementation of the ELMAH ErrorLog type, which indexes your errors in your own Elasticsearch (watch https://github.com/elmahio for the project). Again, this error logger will not index your existing error logs, but you could implement a parser which runs through your XML files and indexes everything using our open source error logger. Alternatively, you could import the errors directly into elmah.io through our API, if you don't want to implement a new UI on top of Elasticsearch.
