I am new to Artifactory. I have Artifactory installed on my local machine, deployed on both a standard Tomcat web container and a WebLogic web container. I want to know how Artifactory stores the artifacts. Would they be stored in the web container, or on my local machine?
Also, is it possible to share the storage? That is, if I deploy an artifact on my local machine while using the WebLogic server, can Artifactory be configured so that I can still access that artifact when I switch to the Tomcat container?
Artifactory stores the actual binaries on disk (the recommended default) and the metadata about those binaries in a JDBC-compliant database (Derby by default, but other supported databases can be used: http://wiki.jfrog.org/confluence/display/RTF/Changing+the+Default+Storage).
Usually, you need only one Artifactory instance. Even though technically you could configure multiple Artifactory instances to use the same artifact directory and the same connection to the metadata database, such a setup would most likely corrupt both the artifact storage and the metadata database through concurrent writes.
DO NOT DO IT.
Artifactory stores its data in a JDBC-compliant database; I believe it's Derby by default, but you can use MySQL, etc.: http://wiki.jfrog.org/confluence/display/RTF20/Running+Artifactory+on+MySQL
Usually, you need only one Artifactory instance. Even though it should work across multiple containers if you share the data through the same database, I would advise you to use a single instance.
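If you do move the metadata store to MySQL as described on that page, it can be worth sanity-checking the JDBC URL and credentials you intend to give Artifactory with a plain JDBC connection first. This is only a generic sketch: the database name, user, and password below are placeholders, and it assumes the MySQL JDBC driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class ArtifactoryDbCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials: use the same values you plan to configure in Artifactory.
        String url = "jdbc:mysql://localhost:3306/artifactory?characterEncoding=UTF-8";
        try (Connection c = DriverManager.getConnection(url, "artifactory", "secret")) {
            System.out.println("Connected to: " + c.getMetaData().getDatabaseProductVersion());
        }
    }
}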
I have deployed API Manager 4.0.0 all-in-one on 2 VMs. I am using MySQL as the database, which runs on a separate VM, and I am sharing the databases as mentioned in the document. Now I am trying to cluster these 2 nodes as described in this document. There are a few things in this document that are not clear to me.
Which node is the manager and which is the worker, or are they both managers or workers? What is the basic difference between a manager and a worker?
If I use NFS to share resources between all the nodes, on which node do we set up NFS?
(I set up NFS on a different VM, and both nodes are mounted to the NFS server; is that right?)
What happens under the hood when you publish an API in version 4.0.0? I understand that when an API is published, it gets deployed on the API Gateway and its lifecycle state changes to PUBLISHED. Which artifacts are persisted in the DB (and where), and which are persisted to the filesystem? My understanding is that they are stored as XML files in the <APIM_HOME>/repository/deployment/server/synapse-configs/default directory, but I don't see anything change in that directory. Where are they?
What do step 1 and step 9 mean? Why do we need them?
We had this manager-worker concept in the older versions. But in APIM v4, we don't have that concept. Both nodes are accepting requests.
In APIM v4 there is an inbuilt artifact synchronizer by default, so you don't need NFS for API artifacts and rate-limiting policies. But if you are using tenants and user stores, then you need NFS. In that case, you can mount both nodes to the NFS server.
Before APIM v4, we had a file-system-based artifact approach. In the latest version the artifacts are loaded into memory instead. When you create an API, the node publishes an event to itself and to the other node, and both nodes then load the API from the database into memory (see the sketch after the step notes below).
Step 1: You can't use the default shipped keystores in production. You have to change those.
Step 9: This is for distributed caching.
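To make the in-memory model above a bit more concrete, here is a rough sketch of the publish-and-reload pattern described: one node persists the API and publishes an event to every node (including itself), and each node reacts by loading the API from the shared database into its own memory. The type names (EventBus, ApiStore, GatewayMemory) are hypothetical and are not WSO2 classes; this only illustrates the pattern, not APIM's actual implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical types for illustration only; APIM's real implementation differs.
interface EventBus { void publishToAllNodes(String eventType, String apiId); }
interface ApiStore { String loadApiDefinition(String apiId); }   // reads from the shared database

class GatewayMemory {
    private final Map<String, String> deployedApis = new ConcurrentHashMap<>();
    private final ApiStore store;

    GatewayMemory(ApiStore store) { this.store = store; }

    // Invoked on every node (including the publishing node) when an API-deploy event arrives.
    void onApiDeployedEvent(String apiId) {
        String definition = store.loadApiDefinition(apiId);  // pull the API from the shared DB
        deployedApis.put(apiId, definition);                  // keep it in memory, not on the file system
    }
}

class ApiPublisher {
    static void publishApi(String apiId, EventBus bus) {
        // The API is persisted/updated in the database first (omitted here), then all nodes are notified.
        bus.publishToAllNodes("API_DEPLOYED", apiId);
    }
}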
It is not clear from the JFrog documentation whether an HA deployment of Artifactory can be set up using the embedded Derby database, or whether it must always use an external DB.
Artifactory requires an external database when installed in HA mode. This requirement is specified in the requirements section of the HA installation and setup document.
The reason for this requirement is that all cluster nodes share the same database, and this cannot be done with embedded Derby.
I'm using Bonobo Git Server. I want to host this application on an Azure Website, but there is a disk limitation of ~10 GB (Basic plan), which is not enough to host the Git repositories. Is there any way to connect Azure Storage to my website and use it to host those repositories?
Currently the Bonobo application uses a local git.exe to perform the relevant operations on the repositories. I have no idea how to make this work with Azure Storage, or whether it is possible at all.
You won't be able to get git.exe to work with Azure Storage without mounting it as a normal file system (as with Azure Files), which won't work in Azure Web Apps anyway. You can upgrade to Premium and get 500 GB though, so depending on your scenario you may want to look into that.
I am using Spring MVC 3.2 with embedded database (H2) support for storing the real-time progress of tasks, queuing notifications, and some temporary logs. The only problem with this approach is that my data vanishes if the application is redeployed or the server restarts. This scenario is probably very rare in a production environment, but I still want to know whether using embedded databases in production is a good choice. Alternatively, is there any way to persist the embedded database state to disk so that the next time the server boots we can restore the database to a stored checkpoint?
Thank you.
Embedded databases are not meant for use in a production environment. They are a convenience for development, since you don't need an external database running as a dependency. With an embedded database, you can fire it up programmatically and optionally initialize it based on your needs.
The reason your changes are being lost on redeployment is that you are using the in-memory mode of HSQLDB instead of the in-process (standalone file) mode. You can use the standalone mode, which keeps the changes persistent (see also the Spring sketch after the quoted documentation below).
In-Process (Standalone) Mode
This mode runs the database engine as part of your application program in the same Java Virtual Machine. For most applications this mode can be faster, as the data is not converted and sent over the network. The main drawback is that it is not possible by default to connect to the database from outside your application. As a result you cannot check the contents of the database with external tools such as Database Manager while your application is running. In 1.8.0, you can run a server instance in a thread from the same virtual machine as your application and provide external access to your in-process database.
The recommended way of using the in-process mode in an application is to use an HSQLDB Server instance for the database while developing the application and then switch to In-Process mode for deployment.
An In-Process Mode database is started from JDBC, with the database file path specified in the connection URL. For example, if the database name is testdb and its files are located in the same directory as where the command to run your application was issued, the following code is used for the connection:
Connection c = DriverManager.getConnection("jdbc:hsqldb:file:testdb", "sa", "");
The database file path format can be specified using forward slashes in Windows hosts as well as Linux hosts. So relative paths or paths that refer to the same directory on the same drive can be identical. For example if your database path in Linux is /opt/db/testdb and you create an identical directory structure on the C: drive of a Windows host, you can use the same URL in both Windows and Linux:
Connection c = DriverManager.getConnection("jdbc:hsqldb:file:/opt/db/testdb", "sa", "");
When using relative paths, these paths will be taken relative to the directory in which the shell command to start the Java Virtual Machine was executed. Refer to Javadoc for jdbcConnection for more details.
HSQL documentation
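Applied to the Spring setup in the question (which uses H2 rather than HSQLDB, but the idea is the same), the change amounts to switching the JDBC URL from the in-memory form to the file-based form so the data survives restarts and redeployments. A minimal sketch, assuming Spring's DriverManagerDataSource and the H2 driver on the classpath; the ./data/appdb path is a placeholder and should point somewhere outside the exploded webapp so redeployments don't wipe it.

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class PersistentEmbeddedDbConfig {

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.h2.Driver");
        // In-memory (lost on restart):     jdbc:h2:mem:appdb
        // File-based (survives restarts):  jdbc:h2:file:./data/appdb
        ds.setUrl("jdbc:h2:file:./data/appdb");
        ds.setUsername("sa");
        ds.setPassword("");
        return ds;
    }
}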
I have a .NET solution with an MVC 4.5 web project, a C# server DLL, another web-service layer DLL, etc.
I want to deploy it on an Azure virtual machine at xx.cloudapp.net:8080.
There are many guides on how to deploy a new website on Azure, but since this solution contains a lot of DLLs, I need a virtual machine.
I didn't find any guide on how to do it. Can you please give me a link or something?
You don't have to use Virtual Machines to install DLLs - you can do this with Cloud Services (web/worker role) as well, via startup tasks. As long as these DLLs are easy to fetch (e.g. blob storage) and quick to install, you can take that route. Many do just that, since this allows you to work with stateless OS VMs (where you don't worry about maintaining the OS, or making copies of a VM when wanting to scale out to multiple instances).
That said: To install to a Virtual Machine, you'd typically copy files to your VM somehow (maybe fetching from a CI engine, possibly ftp'ing the files, whatever procedure you'd typically use with a Windows server). And you'd use RDP for gaining access to the desktop.
Once you have the VM set up just how you want it, you can then create an image of the VM and add it to your personal gallery, whereby you can then spin up additional VMs based on that image. Unlike Cloud Services, each Virtual Machine will then take on a life of its own (and live in its own VHD in its own blob), where you'd have to distribute both OS updates and app updates to each VM as the need arises.