FusionAuth entering silent mode - MariaDB

We are running FusionAuth with MariaDB in a cluster with 3 nodes. FusionAuth starts on 2 of the nodes but enters silent mode on the third. Is there a way to increase the number of attempts, or otherwise get this running on all three nodes without one entering silent mode?

FusionAuth will enter silent mode if the proper environment variables are defined and it still needs to complete configuration.
If silent mode is triggered on only one of the 3 nodes, perhaps that node does not have the correct JDBC connection string, or has a network issue of some kind that keeps it from reaching the search and database services on startup.
https://fusionauth.io/docs/v1/tech/installation-guide/docker
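For reference, a sketch of the database-related environment variables a Docker deployment typically sets on each node (the hosts and credentials below are placeholders to adapt; variable names should be checked against the linked guide for your version):

# docker-compose environment section (sketch; placeholder hosts/credentials)
DATABASE_URL: jdbc:mariadb://db.example.internal:3306/fusionauth
DATABASE_USERNAME: fusionauth
DATABASE_PASSWORD: change-me
SEARCH_SERVERS: http://search.example.internal:9200
FUSIONAUTH_APP_SILENT_MODE: "true"

Comparing these values across the three nodes, and checking that the odd node out can actually reach the database and search hosts from inside the container, is usually the quickest way to find the difference.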

Related

How to configure ports used for communication between R and RStudio?

First, I'm not an R/RStudio user at all. I'm a Windows admin tasked with configuring R and RStudio in a multi-user Citrix environment. To identify users across the multiple sessions, we use the Palo Alto Terminal Server agent, which allocates a range of ports to each user and uses them to identify that user. That in turn is used to give each user limited, specific access to resources.
The problem is that the TS Agent also intercepts the localhost connection that is created when you start RStudio (the rsession process), so RStudio cannot connect to R. One possible solution would be to control the ports used when this local session is started.
I have searched the Internet extensively but have been unable to find whether or how you can change the ports that are used. I have found various config files, but none that seems to let me fix a single port or a port range.
Any insight into how to pin down the ports used by the rsession process so I can better control them? Or, to look at the problem another way: do you know the port range R and RStudio use when they communicate through rsession? Then I could simply avoid that range in the TS Agent.
I have only skimmed the RStudio source code, but it seems that the port is assigned randomly:
https://github.com/rstudio/rstudio/blob/bcc8655ba3676e3155d80296c421362881340a0f/src/node/desktop/src/main/application.ts#L226
However, there also appears to be a startup parameter --www-port that sets the port:
https://github.com/rstudio/rstudio/blob/bcc8655ba3676e3155d80296c421362881340a0f/src/node/desktop/src/main/session-launcher.ts#L592
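An untested sketch based on that second link: if the flag can be supplied manually, pinning the session to a known port might look like the following. Whether RStudio Desktop exposes a supported way to pass this through to rsession is an assumption worth verifying.

# Assumption: rsession honors --www-port when launched manually, as in session-launcher.ts.
# Pin the R session to a known port so the TS Agent port range can exclude it.
rsession --www-port=8788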

Is it possible to specify a connection timeout for MaxScale connections from applications?

I have set up a two-node MariaDB Galera cluster on Ubuntu systems. A simple application connects to a database through MaxScale, and it works fine. But when the cluster node currently in use, say node 1, fails, the application gets errors such as 1927 or 1045. On receiving such an error, the application tries to connect to the database again, and it keeps failing until failover from node 1 to node 2 is complete and MaxScale hands out a connection to node 2. This retry period ranges from 20 to 50 seconds in my cluster environment.
My question is whether there is any MaxScale connection timeout parameter I can use to set the connection timeout to a value such as 50 seconds, so that the application tries just once for a new connection instead of retrying many times. (I used the connectTimeout parameter in the JDBC URL for the database, but it was not effective for my application, and I think that is expected.)
MaxScale is most likely sending those errors because no master server is available. This cannot be prevented in MaxScale 2.2, so client-side re-connection is required.
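Since the client has to ride out the failover window itself, a re-connection loop in the application is the usual approach. A minimal Java/JDBC sketch, assuming MariaDB Connector/J is on the classpath; the URL, credentials, 5-second connectTimeout, and 60-second retry budget are placeholders to adapt:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class RetryConnect {
    // Placeholder URL: point it at the MaxScale listener, not at a cluster node.
    static final String URL =
        "jdbc:mariadb://maxscale.example.internal:4006/mydb?connectTimeout=5000";

    // Retry until a connection succeeds or the retry budget is exhausted.
    static Connection connectWithRetry(long budgetMillis)
            throws SQLException, InterruptedException {
        long deadline = System.currentTimeMillis() + budgetMillis;
        SQLException last = null;
        do {
            try {
                return DriverManager.getConnection(URL, "appuser", "secret");
            } catch (SQLException e) {
                last = e;            // e.g. error 1927/1045 while failover is in progress
                Thread.sleep(2000);  // back off before the next attempt
            }
        } while (System.currentTimeMillis() < deadline);
        throw last;                  // budget exhausted; surface the last error
    }

    public static void main(String[] args) throws Exception {
        // A 60-second budget comfortably covers the 20-50 second failover window.
        try (Connection c = connectWithRetry(60_000)) {
            System.out.println("connected");
        }
    }
}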
In MaxScale 2.3, a new feature will be available that allows behavior similar to what you describe (see MXS-1501).
If you are performing read-only requests, it might be beneficial to enable master_failure_mode=error_on_write. This allows read-only requests to keep working even when no master server is available.
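For reference, master_failure_mode is a readwritesplit router parameter, so it goes in the service section of maxscale.cnf; a sketch with placeholder service, server, and credential names:

[Read-Write-Service]
type=service
router=readwritesplit
servers=node1,node2
user=maxscale
password=maxscale-pw
master_failure_mode=error_on_write   # reads keep working when no master is available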

How to fix an FQDN mismatch for an Intel AMT system with Intel SCS 10

I have two systems on my domain and have configured Intel AMT with SCS. However, I needed to change the host name on both systems, and afterwards the SCS database is not updated correctly, even after a maintenance task. The DB still shows the old FQDNs, and discovery reports a mismatch error. How do I resolve this?
The source used to configure the FQDN setting (hostname.suffix) in the Intel AMT device is defined in the configuration profile. The profile includes several options that you can use to define how the FQDN of the device will be constructed.
When changes are made to the host computer or the network environment, the basis on which the FQDN setting was constructed might change. These changes can include changing the hard disk, replacing the operating system, or re-assigning the computer to a different user. If the FQDN setting in the Intel AMT device is not updated with these changes, problems can occur.
Intel SCS includes options that you can use to detect and fix these “mismatches”.
Intel AMT configuration is bound to your platform's hardware. Since the records in the SCS DB are currently in a mismatch state with the information on your AMT host, you will need to perform the following procedure to fix the mismatch:
Download ACUConfig.exe to your AMT host and run the following command on that platform, replacing the values in angle brackets with the values for your environment:
ACUConfig.exe SystemDiscovery /ReportToRCS /AdminPassword <password> RCSAddress <RCSAddress>
Under the Monitoring > Views tab you will see the system that was detected to have a mismatch. To reconcile the records in the DB, you will need to perform one more action.
Create a Job. In the Job Definition window, select these options:
From the drop-down list in the Filter section, select Host FQDN Mismatch.
From the Operation drop-down list, select Fix host FQDN mismatch.
Now all that is left is to run the job via its context menu. You can monitor the AMT host's log for more details, and you will also see the record in the Mismatch view get cleared.
Good Luck!

CloudStack: No suitable hosts found under this Cluster

When I try to start an instance from a template, I get the following error messages:
2013-11-10 19:44:28,716 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] (Job-Executor-5:job-19 = [ d070b5ba-f342-4252-9137-4d2c1b19eca6 ]) No suitable hosts found under this Cluster: 2
2013-11-10 19:44:28,718 DEBUG [cloud.deploy.DeploymentPlanningManagerImpl] (Job-Executor-5:job-19 = [ d070b5ba-f342-4252-9137-4d2c1b19eca6 ]) Could not find suitable Deployment Destination for this VM under any clusters, returning.
2013-11-10 19:44:28,718 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-5:job-19 = [ d070b5ba-f342-4252-9137-4d2c1b19eca6 ]) Searching all possible resources under this Zone: 1
2013-11-10 19:44:28,718 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-5:job-19 = [ d070b5ba-f342-4252-9137-4d2c1b19eca6 ]) Listing clusters in order of aggregate capacity, that have (atleast one host with) enough CPU and RAM capacity under this Zone: 1
I am confused because I already have a host in cluster 2.
Can anyone give me some suggestions? Any reply will be appreciated!
You need to take a closer look at the log file to understand why CloudStack is unable to place a VM on the host. That information will appear above the entries you have provided. There are a lot of issues that can cause this problem.
For example, this blog entry walks through a configuration problem with XenServer.
Another common issue arises when using local storage: you need to create a new compute offering that uses local storage disks, because the default compute offerings do not support local storage (a sketch of creating such an offering follows below).
Updated: changed answer to take into account that your sample is from the log file.
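If local storage is the culprit, the offering can also be created via the API; a sketch using CloudMonkey, where the name and sizing values are placeholders and storagetype=local is the part that matters:

create serviceoffering name=local-small displaytext="Local 1x1GHz 512MB" cpunumber=1 cpuspeed=1000 memory=512 storagetype=local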
Most probably you are running out of capacity, or some other error has caused CloudStack to add your host to the "avoid list". It does that when the management server hits an error while deploying an instance; from then on, until the problem is resolved, the host and its cluster are part of the avoid list and are skipped in subsequent deployments.
You need to find the exact reason by monitoring the management server logs. Log in to your management server and go to the folder /var/log/cloudstack/management/.
Now run the command tail -f management-server.log
This gives you a continuous view of the management server log, so you can see exactly what is happening at the moment.
Now perform the operation in the UI (e.g. try to add an instance) and watch the running logs.
Abort the command when you spot an exception in the log, and read the log statements just above the exception.
Also, as standard practice, get into the habit of monitoring the management server logs and the agent logs (on the host: /var/log/cloudstack/agent/agent.log).
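Putting those steps together (job-19 is the job ID from the excerpt above; substitute whatever ID your failed deployment logs):

tail -f /var/log/cloudstack/management/management-server.log
# then, after reproducing the failure, pull every line for that job to find
# the error logged above the "No suitable hosts" message:
grep "job-19" /var/log/cloudstack/management/management-server.log | less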

Erlang: starting a remote node programmatically

I am aware that nodes can be started from the shell. What I am looking for is a way to start a remote node from within a module. I have searched, but have found nothing.
Any help is appreciated.
There's a pool(3) facility:
pool can be used to run a set of Erlang nodes as a pool of computational processors. It is organized as a master and a set of slave nodes.
pool:start/1,2 starts a new pool. The file .hosts.erlang is read to find host names where the pool nodes can be started. The slave nodes are started with slave:start/2,3, passing along Name and, if provided, Args. Name is used as the first part of the node names, Args is used to specify command line arguments.
With pool you get a load distribution facility for free.
The master node may be started this way:
erl -sname poolmaster -rsh ssh
The -rsh flag specifies an alternative to rsh for starting a slave node on a remote host; here we use SSH. Make sure your box has working SSH keys and that you can authenticate to the remote hosts using those keys.
If there are no hosts in the file .hosts.erlang, then no slave nodes are started, and you can use slave:start/2,3 to start slave nodes manually, passing arguments if needed.
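For example, a minimal .hosts.erlang listing two hosts where pool may start slaves (hostnames are placeholders; each entry is a quoted atom terminated by a period):

'host1.example.com'.
'host2.example.com'.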
You could, for example, start a remote node that keeps its Mnesia data in a given directory:
%% H is the remote host name, Name the node's short name, and
%% M the Mnesia directory, e.g. M = "/var/data/mnesia".
Arg = "-mnesia dir '\"" ++ M ++ "\"'",
slave:start(H, Name, Arg).
Ensure epmd(1) is up and running on the remote boxes in order to start Erlang nodes.
Hope that helps.
A bit more low-level than pool is the slave(3) module; pool builds upon the functionality in slave.
Use slave:start to start a new slave node. You should probably also specify -rsh ssh on the command line.
So use pool if you need the kind of functionality it offers; if you need something different, you can build it yourself out of slave.
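A minimal sketch of driving slave directly from a module (the host name, node name, and code path are placeholders; assumes SSH keys are in place and the master was started with -rsh ssh as above):

%% From a function on the master node:
{ok, Node} = slave:start("remote-host", worker, "-pa /path/to/ebin"),
rpc:call(Node, erlang, node, []).   %% -> 'worker@remote-host'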
