Where does ActiveMQ Artemis Console store address and queue definitions?

I created the broker folder inside /var/lib on Ubuntu 18.04. Inside /var/lib/[broker]/etc there is the broker.xml file that you can use to define addresses and queues. However, I used the administration console to create an address with a couple of queues, and this file does not update. In fact, no files inside the broker directory or the Artemis home update.
So where is the administration console storing the definitions?
Also is it a better practice to create the addresses and queue in the broker.xml file instead of through the console?

Definitions for addresses and queues created at runtime are stored in binary form in the broker's journal, specifically in the "bindings" journal which is separate from where messages are stored. In your configuration the bindings journal would be in /var/lib/[broker]/data/bindings by default.
As far as best practices go, it really depends on the use case. Some users like to have the address & queue definitions in broker.xml. The broker.xml can be updated at runtime and the broker will deploy newly configured addresses & queues. However, other users don't like to manually edit broker.xml and would rather use the management API instead, either through the web console or through another management interface (e.g. HTTP via Jolokia, JMX, management messages, etc.). Still others don't manage addresses or queues at all but simply allow the broker to auto-create the resources their applications need.
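For reference, if you do choose to define them in broker.xml, a minimal address/queue definition might look like the following (a sketch only; the names are placeholders and the exact schema depends on your Artemis version):

    <addresses>
       <address name="orders">
          <anycast>
             <queue name="orders.queue"/>
          </anycast>
       </address>
    </addresses>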

Related

Does BizTalk Server support exchanging large files over Azure File Shares when 3rd Party system is using the REST API?

"Starting with BizTalk Server 2016, you can connect to an Azure file
share using the File adapter. The Azure storage account must be
mounted on your BizTalk Server."
source: https://learn.microsoft.com/en-us/biztalk/core/configure-the-file-adapter
So at first glance, this would appear to be a supported thing to do. And until recently, we had been using Azure File Shares with BizTalk Server with no problems. However, we are now looking to exchange larger files (approx. 2 MB). BizTalk Server is consuming the files without any errors, but the files contain only NUL bytes. (The message in the tracking database is the correct size but is filled with NUL bytes.)
The systems writing the files (Azure Logic Apps, Azure Storage Explorer) are seeing the following error:
{
  "status": 409,
  "message": "The specified resource may be in use by an SMB client.\r\nclientRequestId: 4e0085f6-4464-41b5-b529-6373fg9affb0"
}
If we try uploading the file to the mounted drive using Windows Explorer (thus using the SMB protocol), the file is picked up without problems by BizTalk Server.
As such, I suspect the BizTalk Server File adapter is not supported when the system writing or consuming the file is using the REST API rather than the SMB protocol.
So my questions are:
Is this a caveat to BizTalk Server support of Azure File Share that is documented somewhere?
Is there anything we can do to make this work?
Or do we just have to use a different way of exchanging files?
We have unsuccessfully investigated/tried the following:
I cannot see any settings in the Azure File Storage connector (as used by Logic Apps) that would ensure files are locked until they are fully written.
We tried using the File adapter's advanced property "rename files while reading", but this did not solve the problem.
Look at the SFTP-SSH connector. It supports message chunking for files up to 1 GB in total size and provides a Rename file action, which renames a file on the SFTP server.
With an ISE environment you could potentially leverage a total file size of up to 5 GB.
Here is the solution we have implemented with justifications for this choice.
Chosen Option: We stuck with Azure File Shares and implemented the signal file pattern
The Logic App of the integrated system writes a signal file to the same folder where the message file is created. The signal file has the same filename but with a .done extension, e.g. myfile.json.done.
In the BizTalk solution, a custom pipeline component has been written to retrieve the related message file for the signal file (a sketch of the core logic follows below).
Note: one concern is that the Azure Files connector is still in preview.
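For illustration only, the core lookup logic of such a component might look roughly like this C# sketch (the BizTalk IComponent/IBaseMessage plumbing and error handling are omitted, and the .done naming convention follows the pattern above; the class and method names are hypothetical):

    using System;
    using System.IO;

    // Sketch of the core logic only: given the received "*.done" signal file path,
    // locate and open the related message file that was written alongside it.
    public static class SignalFileResolver
    {
        public static Stream OpenRelatedMessageFile(string signalFilePath)
        {
            if (!signalFilePath.EndsWith(".done", StringComparison.OrdinalIgnoreCase))
            {
                throw new ArgumentException("Expected a *.done signal file.", "signalFilePath");
            }

            // "myfile.json.done" -> "myfile.json" in the same share folder.
            string messageFilePath = signalFilePath.Substring(0, signalFilePath.Length - ".done".Length);

            if (!File.Exists(messageFilePath))
            {
                throw new FileNotFoundException("Related message file not found.", messageFilePath);
            }

            // The pipeline component would substitute this stream as the message body
            // and typically clean up both files after successful processing.
            return new FileStream(messageFilePath, FileMode.Open, FileAccess.Read, FileShare.Read);
        }
    }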
Discounted Option 1: Logic Apps use the BizTalk Server connector
Whilst this would work, I was keen to keep a layer of separation between the system and BizTalk. This allows BizTalk applications to be deployed without downtime of the endpoints exposed to the system.
Restricts the load levelling (throttling) capabilities of BizTalk Server. Note: we have a custom file adapter to restrict the rate that files are picked up.
This option also requires setup of the “On-Premise Data Gateway”.
Discounted Option 2: Use of File System connector
Logic Apps writes the file in chunks of 2MB and then releases the lock on the file. This enables BizTalk to pick up the file instantly. When the connector tries to write the next chunk of 2MB, the file is not available anymore and hence fails with a 400 status error "The requested action could not be completed. Check your request parameters to make sure the path //test.json' exists on your file system.”
File sizes are limited to 20MB.
Required setup of the On-Premise Data Gateway. Note: We also considered this a good time to introduce an Integration Service Environment (ISE) to host Logic Apps within the vNET. The thinking is that this would keep file exchanges between the system and BizTalk within the network. However, currently there is no ISE-specific connector for the File System.
Discounted Option 3: Use of SFTP connector
Our expectation is that Logic Apps using FTP would experience similar chunking issues while the file is being written.
The Azure SFTP connector has no rename action.
We were keen to avoid use of this ageing protocol.
We were keen to avoid extra infrastructure and software needed to support SFTP.
Discounted Option 4: Logic Apps Renames the File once written
There is no rename action in the File Storage REST API or the File connector, only a Copy action. Our concern with Copy is that the file still needs time to be written, so the same chunking problem remains.
Discounted Option 5: Logic Apps use of Service Bus Connector
The maximum size of a message is 1MB.
Discounted Option 6: Using Azure File Sync to Mirror files to another location.
The File Sync only happens once every 24 hours, so it was not suitable for our integration needs. Microsoft are planning to build change notifications into Azure File Shares to address this.
Microsoft have just announced "Azure Service Bus premium tier namespaces now support sending and receiving message payloads up to 100 MB (up from 1MB previously)."
https://azure.microsoft.com/en-us/updates/azure-service-bus-large-message-support-reaches-general-availability/
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-premium-messaging#large-messages-support

WSO2 clustering in a distributed deployment

I am trying to understand the clustering concept of WSO2. My basic understanding of a cluster is that there are 2 or more servers with the same function, with a VIP or load balancer in front. So I would like to know which of the WSO2 components can be clustered. I am trying to achieve the configuration shown in this diagram.
Image of the configuration I am trying to achieve:
Is this configuration achievable or not?
Can we cluster 2 Publisher nodes and 2 store nodes or not?
And how do we cluster the Key Manager? Do we use the same settings as for the Identity Manager?
Should we use a port offset when running 2 components on the same server? And if yes, how do we make sure that the components are using the ports specified by the port offset?
Should we create a separate external database for each Carbon DB datasource entry in the master-datasources.xml file, or can we keep using the local H2 database for this? I have created the following databases; let me know whether I am correct in doing this or not. WSO2 databases I created:
I made several copies of the WSO2 binaries, as shown in the image, and copied them to the servers where I want to run 2 components on the same server. Is this the correct way of running 2 components on the same server?
For load balancing, which components should we load balance and which ports should be used?
That configuration is achievable, but the Analytics servers are best run on separate servers as they use a lot of resources.
Yes, you can.
Yes, you need a port offset. If you're on Linux, you can use the netstat -pln command and filter by the server's PID to confirm which ports each component is actually listening on.
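For example (a sketch, assuming the default Carbon configuration layout), a second component on the same machine can have every port shifted by 1 by setting the offset in that node's repository/conf/carbon.xml:

    <Ports>
        <!-- Shifts every default port, e.g. 9443 -> 9444 and 9763 -> 9764 -->
        <Offset>1</Offset>
    </Ports>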
Every server needs a local database and other databases are shared as mentioned in https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0
Having copies is one way of doing that. Another way is letting a single server act as multiple components. For example, you can run publisher and store components together. You can see the recommended patterns in https://docs.wso2.com/display/AM210/Deployment+Patterns.
Except for the Traffic Manager, you can load balance every other component. For the Traffic Manager, you can use fail-over. Here are the ports you need to load balance:
Servlet port - 9443 (https) / 9763 (http), for the admin console and admin services
NIO port - 8243 (https) / 8280 (http), for API calls at the gateway

BizTalk: Is it possible to have multiple hosts where each host performs its own sending, receiving and processing function?

By reading documents on MSDN, I realized that it is recommended to create separate hosts by functionality (sending hosts, receiving hosts and processing hosts). And if there is only one host on the BizTalk server, that host performs all of the receiving, sending, and processing functionality.
My question is: is it possible to have multiple hosts where each host performs its own sending, receiving and processing function without affecting the others?
This is for multiple developers working on the same project, because our current situation doesn't allow us to have a full set of SQL Server databases and servers for each developer, or to use VMs.
Thanks a lot!
Multiple hosts are not a solution for letting multiple developers work on a single server. A single send/receive adapter can only be assigned to one host.
You will also run into other problems: as all the configuration settings are shared in a single database, a change from one developer will affect the others.
This same question was asked and answered at MSDN. What you are trying to do is not supported and will not work. There is no way around this.
You must deploy the same application code to each computer in a BizTalk Group.
Sharing a BizTalk computer for development work is not a workable or productive solution and will have a definite negative effect on productivity.
You are correct, the best way to handle DEV is a VM with the entire stack. This is the issue you must address in your environment.

BizTalk Archiving Pipeline Component Consideration

In my scenario, I have a Pipeline that (1) decrypts and then (2) disassembles a flat file on a receive port.
My requirement is to capture the file, and put it on a local fileshare, between (1) and (2).
My initial approach was to introduce an Archive component between these, but I have run into issues with it. The Archiving component uses direct access to storage to dump the file. This is essentially poor methodology; per BizTalk principles, this is the function of a send port/send adapter. So if, for example, the Archiving destination is an FTP host, the Archiving component is useless.
Hence two ideas come to mind:
A) Somehow configure the archiving component to use a Send Port(if that's even possible)
B) Abandon the idea of the archiving component and just use BizTalk's native functionality as follows:
-Receive the file using a decrypt-only pipeline
-Send the file to temporary local storage using a Send Port
-Subscribe to the receive port to send the file to an archive
-Pick up the file from local storage using a Disassemble pipeline (second receive port)
-Use an orchestration to process the file from the second receive port.
Are there any issues with Option B)?
If NOT, then what's the point of even using an archive component?
Other options also include
C) Have an archive send port and a loop-back send port subscribe to the receive port; the loop-back send port would have the flat file disassembler on its receive side.
D) Have an archive send port and an Orchestration that subscribe to the receive port, and call the disassemble pipeline in the Orchestration.
We've used both these scenarios for different solutions.
If you are using native BizTalk functionality, setting up send ports that subscribe to the message type for archiving is sufficient.
If you are using the BizTalk ESB Toolkit, it is very difficult to split the message for archiving since you are executing in the pipeline context. Using an orchestration in your itinerary will allow you to split the message, but that of course requires the itinerary to leave the pipeline and drop the message on the message box. If all you need is simple message archiving, this solution may be overkill.
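As a rough illustration of Option D above, a receive pipeline can be invoked from an orchestration expression shape via XLANGPipelineManager. This is a sketch only; the pipeline type (MyProject.Pipelines.FFReceivePipeline) and the message/variable names (msgIn, msgOut, rcvPipeOutput) are hypothetical placeholders:

    // XLANG/s expression shape (requires a reference to Microsoft.XLANGs.Pipeline.dll).
    // rcvPipeOutput is an orchestration variable of type
    // Microsoft.XLANGs.Pipeline.ReceivePipelineOutputMessages.
    rcvPipeOutput = Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteReceivePipeline(
        typeof(MyProject.Pipelines.FFReceivePipeline), msgIn);

    // Loop over the messages produced by the flat-file disassembler.
    while (rcvPipeOutput.MoveNext())
    {
        msgOut = null;
        rcvPipeOutput.GetCurrent(msgOut);
        // Construct and send msgOut as needed.
    }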
You can use a custom pipeline component such as the one below. It is a pipeline component that can be reused, works in a BizTalk ESB Toolkit scenario (very handy if you want the original message, because it is later transformed), archives to file or SQL, and works in both inbound and outbound pipeline scenarios.
BizTalk Archiving - SQL and File
You will only be responsible for the maintenance of the old/unwanted messages to avoid bloat.

How to associate each user visiting a web app with a separate, temporary process allocated by a service?

I want to develop a web application using ASP.NET running on IIS.
If a user submits a MAXIMA input command, the code behind will ask a custom windows service to create a new distinct temporary process executing an external assembly.
More precisely, there is only one windows service serving for all users, but each user will be associated with a distinct, temporary process running an external assembly.
The windows service contains a single socket listening on a certain port and a list of asynchronous sockets for communication. Each socket of the list will communicate with a distinct, temporary process running an external assembly which works as a client socket.
Note: I use a process rather than an application domain because the external assembly is a batch file (not a managed assembly).
My questions are:
How to call windows service from code behind?
How to associate each user with a distinct, temporary process?
How to improve scalability if there are more and more users working simultaneously?
If the Maxima input command entered by a user causes a long-running process, what is the wise way to notify the user about the progress?
The following link provide you with more detail about my project: https://sourceforge.net/projects/aspmaxima/forums/forum/1190702/topic/3786806
Thank you in advance.
You should not be using codebehind in an MVC app.
Scalability while interoperating with unmanaged code is hard. The only sane way to do this is to decompose the problem.
When you launch an unmanaged app, it already has its own process.
Multiple task flows in a service called from a web app, with monitoring? You're describing Windows Server AppFabric. Host your service with AppFabric, and you won't have to write all of this yourself.
Regarding scalability, when you're dealing with unmanaged processes, you're going to have to limit the number which can start concurrently. Trial and error will be necessary to determine the optimum on specific hardware.
You can only monitor an unmanaged task's progress if that app specifically provides for it.
Launching arbitrary unmanaged code from a service is dangerous, because the launched app, by default, inherits the service's (typically raised) permissions. Consider using specific, limited credentials for the launched app instead of the default.
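To make the throttling and credentials points concrete, here is a minimal C# sketch (not tied to the project above; the batch file name and the limit of 4 are hypothetical placeholders) of a service-side helper that caps how many external processes run at once:

    using System;
    using System.Diagnostics;
    using System.Threading;
    using System.Threading.Tasks;

    // Sketch: throttle how many external Maxima/batch processes the Windows
    // service launches concurrently. Tune the limit by trial and error.
    public static class JobRunner
    {
        private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(4);

        public static async Task<int> RunAsync(string inputFile)
        {
            await Throttle.WaitAsync();
            try
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "cmd.exe",
                    Arguments = "/c run_maxima.bat \"" + inputFile + "\"",
                    UseShellExecute = false,
                    CreateNoWindow = true
                    // ProcessStartInfo also exposes UserName/Domain/Password if the
                    // child process should run under specific, limited credentials.
                };

                using (var process = Process.Start(psi))
                {
                    // WaitForExit blocks, so run it on the thread pool to keep
                    // the service's worker threads responsive.
                    await Task.Run(() => process.WaitForExit());
                    return process.ExitCode;
                }
            }
            finally
            {
                Throttle.Release();
            }
        }
    }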
