How to create a database configuration as a library in Mule 4?

I have a big monolithic Oracle DB. I may end up creating around 20 system APIs that get various data from this DB. So instead of configuring the DB connection in all 20 system APIs, I would like to create the DB configuration once and package it as a jar file, so that every system API can add it to its POM and use it for the connection.
Is that possible, or is there a better approach to handle it?

One method, if all the applications run on the same server, is to create a domain and share the configuration by placing it in the domain. This is usually the recommended approach. The method is documented at https://docs.mulesoft.com/mule-runtime/4.3/shared-resources
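For illustration, a shared database configuration in a domain could look roughly like the sketch below; the connection attributes and property placeholders are assumptions, not something from the question:

    <?xml version="1.0" encoding="UTF-8"?>
    <domain:mule-domain
            xmlns:domain="http://www.mulesoft.org/schema/mule/ee/domain"
            xmlns:db="http://www.mulesoft.org/schema/mule/db"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="
                http://www.mulesoft.org/schema/mule/ee/domain http://www.mulesoft.org/schema/mule/ee/domain/current/mule-domain-ee.xsd
                http://www.mulesoft.org/schema/mule/db http://www.mulesoft.org/schema/mule/db/current/mule-db.xsd">

        <!-- Shared Oracle connection; every app deployed to this domain can
             reference it by name instead of defining its own -->
        <db:config name="Oracle_DB_Config">
            <db:oracle-connection host="${db.host}" port="${db.port}"
                                  user="${db.user}" password="${db.password}"
                                  serviceName="${db.serviceName}"/>
        </db:config>
    </domain:mule-domain>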
If that's not possible (for example, CloudHub doesn't support domains) or not desired, then you have to package the configuration in a jar by following the instructions in this KB article: https://help.mulesoft.com/s/article/How-to-add-a-call-to-an-external-flow-in-Mule-4. Note that while the article title mentions flows, the method works with both configurations and flows.
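The rough shape of that approach, sketched with placeholder names: package the shared configuration file in a plain jar, declare it as a Maven dependency, and import it from each application's configuration.

    <!-- In each system API's pom.xml: depend on the jar that carries the
         shared configuration file (coordinates are placeholders) -->
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>shared-db-config</artifactId>
        <version>1.0.0</version>
    </dependency>

    <!-- In each system API's Mule configuration: import the shared file
         that the jar puts on the classpath -->
    <import file="shared-oracle-config.xml"/>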

Related

applicationHost.xdt in App Service Plan instances

The Issue
I am currently in the process of integrating a pre-rendering service for SEO optimization; however, we use an Azure App Service Plan that scales up or down when necessary.
One of the steps for setting up the proper configuration requires placing an applicationHost.xdt file in the /site/ directory, which is one level above the /site/wwwroot directory where the application itself gets deployed to.
What steps should I take in order to have the applicationHost.xdt file persist to new instances spawned by the scaling process?
Steps I have taken to solve the issue
So far I have been Googling a lot, but haven't succeeded in finding much documentation on using an applicationHost.xdt file in combination with an Azure App Service Plan.
I am able to upload the file to an instance manually, but I assume that when we then scale up to more instances, the manually uploaded file will not be present on the new instance(s).
Etcetera
We are using Prerender.io as pre-rendering service.
Should an easier-to-set-up and similarly priced service be available, we would be open to suggestions, as we are in an exploratory phase regarding pre-rendering.
This shouldn't be a problem, because all files under an Azure web app are shared between all of its instances. You can verify this in the Kudu wiki: Persisted files. In my test, all instances kept the file.
As for uploading the applicationHost.xdt, you don't have to do it manually: there is an IIS Manager site extension that lets you very easily create XDT files, and it provides some sample XDTs for you.
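For reference, an applicationHost.xdt is just an XML document transform that App Service applies to applicationHost.config at site startup. A minimal sketch follows; the environment variable is purely illustrative (Prerender.io's own docs define the exact transform you need), while %XDT_SITENAME% is a token App Service expands to the site's name:

    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <system.applicationHost>
        <applicationPools>
          <add name="%XDT_SITENAME%" xdt:Locator="Match(name)">
            <environmentVariables xdt:Transform="InsertIfMissing">
              <!-- Placeholder variable; replace with your service's settings -->
              <add name="PRERENDER_TOKEN" value="your-token-here"
                   xdt:Transform="InsertIfMissing" xdt:Locator="Match(name)" />
            </environmentVariables>
          </add>
        </applicationPools>
      </system.applicationHost>
    </configuration>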

JavaFX app - unique id for each distribution

I have a JavaFX app which is available for download on my site. I am looking for a way to remotely and uniquely identify each downloaded copy of the application. Is it possible to store an id (for example, in a txt file) in the JavaFX app's package immediately before download?
Thanks for any suggestions
Each time you distribute it, you could try signing and timestamping the jar file for distribution. That way you can ensure that the file is not tampered with and validate its signature and timestamp either locally or in a callback to a service you provide, if necessary.
Also consider java-webstart cited here.
Yes, signing and Web Start technologies can be used together if desired. The two technologies can be used separately or together, so you can choose what is appropriate for your app. See the javapackager documentation for more details on the packaging process for Web Start (go through the documentation and refer to the sections that reference JNLP). Be aware that Web Start currently only works with the Oracle JDK (as far as I know).
For your purposes, you would create a script that executes on each download request to generate a unique id or timestamp (or get a timestamp from a timestamp service) and adds that to the package before signing it and offering it for download. You could then record the download instance UUID and timestamp, together with the referring IP address or user id (if you have a login system on your website), in a server-side database to track who downloaded what at what time.
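A minimal sketch of that flow in Java, assuming a pre-built template jar and an existing jarsigner keystore; the keystore path, alias, TSA URL, and file names are all placeholders:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.FileSystem;
    import java.nio.file.FileSystems;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.UUID;

    // Hypothetical download-handler step: copy the template jar, stamp a
    // per-download UUID into it as a resource, then sign the stamped copy so
    // the stamp cannot be altered without breaking the signature.
    public class StampAndSign {
        public static Path stampAndSign(Path templateJar, Path workDir)
                throws IOException, InterruptedException {
            String id = UUID.randomUUID().toString();
            Path stamped = workDir.resolve("app-" + id + ".jar");
            Files.copy(templateJar, stamped);

            // Open the copy as a zip file system and write the id into it.
            try (FileSystem zip = FileSystems.newFileSystem(stamped, (ClassLoader) null)) {
                Files.write(zip.getPath("/download-id.txt"),
                            id.getBytes(StandardCharsets.UTF_8));
            }

            // Sign and timestamp the stamped jar; the keystore password is
            // read from the STOREPASS environment variable.
            new ProcessBuilder("jarsigner",
                    "-keystore", "keystore.jks", "-storepass:env", "STOREPASS",
                    "-tsa", "http://timestamp.digicert.com",
                    stamped.toString(), "myalias")
                    .inheritIO().start().waitFor();
            return stamped;
        }
    }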
If using Web Start, you use a JNLP deployment as mentioned in the linked documentation. There are options for packaging the JNLP so it interacts with some JavaScript on a webpage, which can reduce network traffic and speed up the download and usage process. Sophisticated deployment mechanisms can dynamically generate that download package along with the download page and the JavaScript calls that embed the JNLP data. Details or samples of such systems are outside the scope of what I can provide here.

Transfer content from one Alfresco instance to another (same version) on another server

What would be the best way to transfer repository content from one Alfresco instance (Enterprise Edition) to another instance running on a different server? Currently we copy the entire Alfresco database and the file system under alf_data, but that requires downtime on the servers.
I need a mechanism without downtime, where the repository data is copied from one instance to another. Is there any way this is possible?
In addition to Heiko's solution, you might be interested in:
The out-of-the-box replication service, which wouldn't be good for replicating your entire repo, but can be used for replicating a handful of nodes from one server to another.
A solution from Parashift which allows one- and two-way replication of nodes between servers.
An Alfresco presentation on using Apache Camel and Apache Kafka to replicate nodes between servers. This is available through Alfresco's professional services organization, but it may make it into the product at some point. Or you could use it as inspiration to write your own solution.
What is your intention? A standby system, a real copy, an external private cloud with a subset of data?
If you just need a 100% clone, you can script backup & restore without downtime on the source server. Downtime is limited to the DB and index restore on the target system. Your script shouldn't copy live data from the Solr index - use the backup created by the Solr backup job instead. Depending on the database you use, online DB backup shouldn't be an issue.
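For example, the built-in Solr backup job can be scheduled through alfresco-global.properties, and a clone script would then copy the backup directory instead of the live index. The values below are illustrative:

    # alfresco-global.properties - schedule the Solr backup job and keep its
    # output where the clone script can pick it up (values are examples)
    solr.backup.alfresco.cronExpression=0 0 2 * * ?
    solr.backup.alfresco.remoteBackupLocation=/opt/alfresco/alf_data/solrBackup/alfresco
    solr.backup.alfresco.numberToKeep=3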
Our Alfresco Virtual Appliance has preconfigured scripts and jobs for this task to start an additional Alfresco instance from snapshot backups without copying the contentstore (we call this the Alfresco Time Machine).
If your aim is an external private cloud server or a road-warrior solution, ecm4u has a commercial Alfresco module to sync a subset of modified nodes very efficiently, including metadata/types/aspects (the list of types and aspects needs to be defined). This sync provides a REST interface for automation as well as manual execution from Alfresco's admin console. We support mixing Alfresco versions and editions. At the moment this sync is implemented as a unidirectional sync, but it could be extended to a bidirectional sync.
I recently did this task of installing two Alfresco instances on my local machine, running on two different ports.
While performing some tasks, I realized that the two instances having the same repository ID was creating issues.
I was able to change the repository ID of one of them with the following steps:
Update alfresco-global.properties:
db.name=<new DB name>
(Alfresco will create the database named in db.name while initializing)
and restart the server.
If you are still facing issues, try deleting the Solr indexes under the alf_data folder.

Are connection strings safe in config.json?

I am starting to play around with MVC 6 and I am wondering: with the new config.json structure, are my connection strings safe in the config.json file?
Also, I was watching a tutorial video and saw that the person only put their connection strings in their config.dev.json file, not in config.json. This would mean the application will not have the connection strings in production, correct? He must have meant to put them in both.
Thanks a lot for the help!
I think the Working with Multiple Environments document sums it up pretty well.
Basically, you can farm secret settings such as connection strings out into separate files. These files are then ignored by your source control system, and every developer has to create the file manually on their system (it might help to add some documentation on how to set up a project from a fresh clone of source control).
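As a sketch of what that layering looks like in code (file names are illustrative, and the exact builder API varies between the beta-era framework and released ASP.NET Core; this follows the released shape):

    // Startup.cs - later sources override earlier ones, so the per-environment
    // file (kept out of source control) and environment variables win over
    // the checked-in config.json
    public Startup(IHostingEnvironment env)
    {
        Configuration = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("config.json")
            .AddJsonFile($"config.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables()
            .Build();
    }

    public IConfigurationRoot Configuration { get; }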
For production, the build will include the production settings. Typically, these are provided by a build server, where they are locked away from developers. I'm not sure whether that is fully automatic with MVC Core or whether you have to add some kind of build step to do it, but that is how it is normally done.
If you are worried about storing connection strings in the production environment securely, you can extend the framework with your own configuration provider.

Does there exist an OpenStack API implementation backed by jclouds?

I am trying to find out whether there exists an OpenStack REST API implementation backed by jclouds. I am willing to pay for someone to produce such a thing as an open source project.
SwiftProxy offers an OpenStack Swift implementation backed by Apache jclouds:
https://github.com/bouncestorage/swiftproxy
It backs onto multiple jclouds storage backends, including the local file system and many object stores.
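To illustrate what a jclouds storage backend means in practice, here is a small standalone jclouds snippet using the local filesystem provider; the base directory and container name are arbitrary:

    import java.util.Properties;
    import org.jclouds.ContextBuilder;
    import org.jclouds.blobstore.BlobStore;
    import org.jclouds.blobstore.BlobStoreContext;

    public class FilesystemBlobStoreDemo {
        public static void main(String[] args) {
            // Back the BlobStore with a local directory instead of a cloud provider
            Properties overrides = new Properties();
            overrides.setProperty("jclouds.filesystem.basedir", "/tmp/blobstore");

            BlobStoreContext context = ContextBuilder.newBuilder("filesystem")
                    .overrides(overrides)
                    .buildView(BlobStoreContext.class);
            try {
                BlobStore store = context.getBlobStore();
                // A container here corresponds to a Swift container in SwiftProxy
                store.createContainerInLocation(null, "demo");
            } finally {
                context.close();
            }
        }
    }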
