It is not clear from the JFrog documentation whether an HA deployment of Artifactory can be made using the embedded Derby database, or whether it must always use an external DB.
Artifactory requires an external database when installed in HA mode. This requirement is specified in the requirements section of the HA installation and setup document.
The reason for the requirement is that all cluster nodes share the same database, and this cannot be done with the embedded Derby.
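As a minimal sketch of what that shared-DB requirement implies, every node in the cluster ends up pointing at the same external database with identical settings, along these lines (db-host and the credentials are placeholders; the file name and exact keys vary by Artifactory version, e.g. storage.properties, db.properties, or system.yaml):

    # Shared database settings, identical on every cluster node (illustrative values)
    type=mysql
    driver=com.mysql.jdbc.Driver
    url=jdbc:mysql://db-host:3306/artifactory?characterEncoding=UTF-8
    username=artifactory
    password=password

The embedded Derby database is in-process and file-based, so two nodes cannot open it concurrently, which is why it is ruled out for HA.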
I have deployed API Manager 4.0.0 all-in-one on 2 VMs. I am using MySQL as the DB, which is on a separate VM, and I am sharing the databases as mentioned in the document. Now I am trying to cluster these 2 nodes as described in this document. There are a few things that are not clear to me from it.
Which node is the manager and which is the worker, or are they both managers or workers? What is the basic difference between a manager and a worker?
If I use NFS to share resources between the nodes, on which node do we set up NFS?
(I set up NFS on a different VM, and both nodes are mounted to the NFS server; is that right?)
What happens under the hood when you publish an API in version 4.0.0? I understand that when an API is published, it gets deployed on the API Gateway and the API lifecycle state changes to PUBLISHED. Which artifacts are persisted in the DB (and where), and which artifacts are persisted to the filesystem? My understanding is that they are located in the <APIM_HOME>/repository/deployment/server/synapse-configs/default directory as XMLs, but I don't see anything change in that directory. Where are they?
What do step 1 and step 9 mean? Why do we need them?
We had this manager/worker concept in the older versions, but in APIM v4 that concept is gone. Both nodes accept requests.
In APIM v4 there is a built-in artifact synchronizer enabled by default, so you don't need NFS for API artifacts and rate-limiting policies. But if you are using tenants and user stores, then you do need NFS. In that case, you can mount both nodes to the NFS server.
Before APIM v4 we had this filesystem-based artifact approach, but in the latest version artifacts are loaded into memory. When you create an API, the node publishes an event to itself and to the other node, and both nodes then load the API from the database into memory.
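For reference, that event-based synchronization is wired up in each node's deployment.toml roughly as follows. node1/node2 are placeholder hostnames, and the exact keys should be verified against the APIM 4.0.0 distributed-deployment docs for your setup:

    # deployment.toml -- event hub wiring between the two nodes (illustrative)
    [apim.event_hub]
    enable = true
    service_url = "https://node1:9443/services/"
    event_listening_endpoints = ["tcp://node1:5672", "tcp://node2:5672"]

Each node both publishes to and listens on these endpoints, which is how the second node learns that an API was created or published on the first.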
Step 1: You can't use the default shipped keystores in production. You have to change those.
Step 9: This is for distributed caching.
I am developing an application in Ember. After production I will host it locally on a machine using the Mongoose web server. Now I need to store data for the application (JSON is most welcome), even in a file-based database. I need a serverless local database that works without internet. I cannot install Node.js or any other database software, because no provision for installation has been made on the machine where it will be hosted. The database or file must be directly accessible from JS. I have looked at sqlite3, but I found no useful JS library to connect to it without Node.js. How could I achieve this? Please guide me.
Thanks in advance
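For what it's worth, the "no install, directly accessible from JS" requirement can be illustrated with the browser's built-in localStorage API. This is only a sketch, not a full Ember data layer, and the appData key is a made-up name:

    // A zero-install JSON store: localStorage ships with every modern browser,
    // so nothing has to be installed on the hosting machine.
    var STORE_KEY = 'appData'; // hypothetical key name

    function saveData(obj) {
        localStorage.setItem(STORE_KEY, JSON.stringify(obj));
    }

    function loadData() {
        var raw = localStorage.getItem(STORE_KEY);
        return raw ? JSON.parse(raw) : {}; // empty store on first run
    }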
I'm on a W7 32-bit platform, following the installation instructions at http://book.cakephp.org/3.0/en/installation.html and using the built-in PHP server. The default web app reports:
Database driver Cake\Database\Driver\Mysql cannot be used due to a missing PHP extension or unmet dependency
At this early stage in my CakePHP career, I do not know whether I can rebuild this default app, or how to do so. I have configured PHP to use PDO SQLite rather than MySQL, and used it to write an app which successfully interrogates a SQLite database via PDO.
Fixed it. I was looking for the configuration file in the wrong place. The right place is <my_app>\config\app.php; just change the driver name to that of the SQLite driver and it all works.
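For anyone landing here, the relevant Datasources entry in config/app.php ends up looking roughly like this (the database path is an illustrative value):

    // config/app.php -- default datasource switched to SQLite (illustrative)
    'Datasources' => [
        'default' => [
            'className' => 'Cake\Database\Connection',
            'driver' => 'Cake\Database\Driver\Sqlite',
            'database' => ROOT . DS . 'my_app.sqlite',
        ],
    ],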
I don't understand how to access the bundled database. Do I need to go to a certain URL to open an interface and log in? Or is it done from cmd?
Must I do the configuration first, as described in the Alfresco Community help pages?
Thank you so much!
If Alfresco is installed on your local desktop, you should be able to connect easily with pgAdmin. You can download it separately, or find it in the Alfresco bundle under the postgresql directory.
If Alfresco is installed on a remote server, then you will need to expose the PostgreSQL port (5432 by default) through any firewalls that may exist. Then you will need to configure PostgreSQL to allow remote connections.
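Concretely, allowing remote connections usually comes down to two edits on the server (a sketch; replace 192.0.2.0/24 with the network your pgAdmin machine sits on, and restart PostgreSQL afterwards):

    # postgresql.conf -- listen on all interfaces, not just localhost
    listen_addresses = '*'

    # pg_hba.conf -- permit password-authenticated clients from your network
    host    all    all    192.0.2.0/24    md5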
I am new to Artifactory. I have Artifactory installed on my local machine, deployed both on a standard Tomcat web container and on a WebLogic web container. I want to know how Artifactory stores the artifacts: would they be in the web container, or stored on my local machine?
Also, is it possible to connect the storage? Meaning, if I deploy an artifact on my local machine with the WebLogic server, is it possible to configure Artifactory so that when I use the Tomcat container I can still access the artifact deployed while I was using the WebLogic server?
Artifactory stores the actual binaries on disk (the recommended default) and metadata about the binaries in a JDBC-compliant database (Derby by default, but you can use other supported databases: http://wiki.jfrog.org/confluence/display/RTF/Changing+the+Default+Storage)
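In other words, the binaries live on the machine's filesystem under the Artifactory home directory, not inside Tomcat or WebLogic. In the Artifactory versions of that era the layout is roughly as follows (directory names are illustrative and vary by version):

    $ARTIFACTORY_HOME/
        data/
            filestore/   # the binaries themselves, stored by checksum
            derby/       # embedded Derby metadata DB, unless an external DB is configured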
Usually, you need only one Artifactory instance. Even though you could technically configure multiple instances of Artifactory to use the same directory for artifacts and the same connection to the metadata database, this setup will likely corrupt both the artifact storage and the metadata database through concurrent writes.
DO NOT DO IT.
Artifactory stores data in a JDBC-compliant database; I believe it's Derby by default, but you can use MySQL, etc.: http://wiki.jfrog.org/confluence/display/RTF20/Running+Artifactory+on+MySQL
Usually, you need only one Artifactory instance. Even though it should work across multiple containers if you share the data through the same database, I would advise you to use a single instance.