How did my Artifactory generic and docker repos suddenly change type/version? - artifactory

We have been running Artifactory (currently version 6.9.0) in EC2 for months now with no problems. This was originally a licensed instance of Enterprise Artifactory that we let lapse (intentionally).
Last week we started getting a storage warning (we use cluster-s3 storage) that we were at 95% utilization (which disables uploads) so we started cleaning up old artifacts (i.e., binaries, Docker images) to get the storage down. We got it down for a while, but it crept back up -- high enough this time that we couldn't ssh in, so we rebooted the machine via the EC2 Console.
It came right back with no obvious problems. Then we deleted a generic repository that someone had set up as a backup of another system (300GB), which brought back plenty of space.
Today, a number of our builds started failing because the step to push the artifact to Artifactory failed. Upon further investigation, a number of our "generic" repositories are now appearing (and behaving) as "Docker" repositories. Further, a number of our v1 Docker repositories are now reporting as v2 Docker repos and blocking standard pushes from v1 clients.
The docs are pretty clear that we can't change the repo type, and I'm not seeing a way to migrate back to v1 from v2 Docker repos. I'm currently exporting one of the repos to see if we can import it as the right type.
Any idea what happened here? Did something get corrupted in the database? What can I even start to check?
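For reference, one place to start checking is what Artifactory itself currently stores for each repository's configuration, via the repository configuration REST API. A minimal sketch, assuming an admin account and placeholder host/repository names (the exact field names, such as the package type and the Docker API version setting, may vary slightly between Artifactory versions):

# Dump the stored configuration of a single repository; for a Docker repo this
# should include the package type and the Docker API version it enforces.
curl -s -u admin:password "https://artifactory.example.com/artifactory/api/repositories/my-generic-local"

# List all repositories with their types in one call, for a quick sweep.
curl -s -u admin:password "https://artifactory.example.com/artifactory/api/repositories"

Comparing that output against an older configuration export or backup should show whether the stored repository definitions actually changed or whether only the UI is misreporting them.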

Related

Artifactory pro v7.30.x fails to start (multiple versions and installation methods)

I am evaluating a self-hosted Artifactory installation on a trial license. I followed the official installation instructions for the Docker container and the Linux archive file. Neither of these installation options works. The Artifactory service fails to start.
I have opened an issue to track the problem: https://www.jfrog.com/jira/browse/RTFACT-27182
TL;DR: A component fails, a nasty stack trace appears in the logs, and eventually the services stop.
It would seem that there is a bug in artifactory. I have traced this back to multiple versions and this issue spans multiple years.
The problem appears to be that artifactory cannot get past the bootstrapping/initialization phase when started with artifactoryctl. At a certain point (around 2-5 minutes in) all the services stop and a pid file is left over, which is bad.
The workaround I have found is that the service can pass this initialization phase only after multiple start/stops (3 to be exact). In other words, we call artifactoryctl start, wait for all failures, then artifactoryctl stop and repeat two more times. On the fourth and final start, we will see the service come online (in about 150 - 190s). From then on, the service will start correctly with one call to artifactoryctl start.
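For what it's worth, that cycle can be scripted. A rough sketch of the workaround described above, assuming artifactoryctl is on the PATH; the sleep times and the pid-file path are guesses and will need adjusting for your install:

#!/bin/bash
# Brute-force Artifactory past its initialization phase: three failed
# start/stop cycles, then the start that actually comes up.
PID_FILE=/path/to/artifactory.pid    # wherever the stale pid file is left behind

for i in 1 2 3; do
    artifactoryctl start || true     # this attempt is expected to fail
    sleep 300                        # give the services the 2-5 minutes they take to die
    artifactoryctl stop || true
    [ -f "$PID_FILE" ] && rm -f "$PID_FILE"   # clean up the leftover pid file
done

artifactoryctl start                 # fourth start; has been coming up in ~150-190s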
I have not yet looked at the systemd unit file. My guess is that it has (or could be made to have) a number of retries to work around this issue, and perhaps the issue does not exist when using the service wrapper.
I have also not yet looked again at the Docker container, which appears to be failing for the same reason. A workaround off the top of my head would be to modify the entrypoint script. If you were to docker exec into the container and try the workaround above, it would likely terminate the root process and kill the container.

Artifactory UI monitoring service status showing "online (0 of 0 nodes)" after migrating JFrog platform to new virtual machine

I have an existing JFrog/Artifactory Pro 7.27.10 RPM-based install (on a CentOS 8 VM) that I recently migrated to a new VM (CentOS Stream 8) running JFrog/Artifactory Pro 7.31.13 (also installed via RPM).
After copying my existing master.key file from the original JFrog install to the relevant directory, I started up the Artifactory Pro service on the new VM and proceeded to migrate my data using the "Simple migration with downtime" process described in this JFrog whitepaper. Everything worked fine: Artifactory is running as expected on the new VM and all my data appears good. I moved my frontend proxy DNS aliases over to the new VM and shut down the proxy on the old VM.
One problem I am now noticing is that in the Artifactory admin UI, the Monitoring > Service Status now doesn't appear to report my Artifactory/JFrog platform microservice status any more. It does show Artifactory with the correct backend IP address (running on port 8082) but then the "Status" shows "Online" with (0 of 0 nodes) and the ">" fold down arrow shows nothing when clicked. I went back to my old Artifactory instance and checked and it was still showing the single node with all of the individual JFrog platform service statuses properly.
My guess is that I missed something in the migration process, and/or something else needs to be configured for the services to show up on the monitoring page, but I'm at a loss as to what that is or even where to look for it. I looked through the system.full-template.yaml but nothing seems obvious there. And while the Artifactory docs are usually fairly comprehensive, the page about monitoring doesn't give much insight into how this is configured or what to do if it's missing. I'm also not sure whether the initial startup of Artifactory on the new VM before I migrated my data affected how the monitoring was configured, such that it no longer works with the imported data (unfortunately I didn't check the monitoring UI on the new VM before the data migration, so I can't say whether it was initially working).
A couple of other details which may be relevant:
when migrating my VM, I kept the same (FQDN) hostname, but the IP address was different
I used the same frontend (nginx) proxy configurations on both the old/new VMs though I'm not sure if this is relevant here or not.
With the exception of going from CentOS 8 to Stream 8, the VM configurations themselves should be nearly identical as I create them from a kickstart (which was only updated for the new stream repo paths). Again not sure whether this is relevant at all here.
Any ideas on where I should be looking to figure out how to fix this?
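Not a fix, but a couple of command-line checks may help narrow down whether this is just a display problem or a real topology problem. A rough sketch, assuming a default RPM layout under /opt/jfrog/artifactory; the router health endpoint path is from memory and may differ between 7.x versions:

# Basic liveness of Artifactory through the router on port 8082.
curl -s http://localhost:8082/artifactory/api/system/ping

# Ask the router which services it knows about on this node (endpoint path may vary by version).
curl -s http://localhost:8082/router/api/v1/system/health

# The consolidated log often shows services failing to register after a migration.
tail -n 100 /opt/jfrog/artifactory/var/log/console.log

If the router reports the microservices as healthy but the UI still shows 0 of 0 nodes, that would suggest the problem is in how the node is registered rather than in the services themselves.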

Adobe AEM 6.1 Start up failed after Service Pack 2 upgrade

We require help.
We are applying Service Pack 2 to AEM 6.1 and the SP2 deployment failed with Error 500. Unfortunately we then started AEM and startup failed due to missing bundles or incorrect bundle dependencies. We restored AEM from a backup which we took the night before, but while starting we received the error below.
Any help to recover the instance is appreciated.
* [FelixStartLevel] org.apache.jackrabbit.oak.plugins.nodetype.TypeEditorProvider Node type change for cq:PollConfig requires repository scan: org.apache.jackrabbit.oak.plugins.nodetype.NodeTypeDefDiff[
nodeTypeName=cq:PollConfig,
mixinFlagDiff=NONE,
supertypesDiff=NONE,
propertyDifferences=[
org.apache.jackrabbit.oak.plugins.nodetype.NodeTypeDefDiff$PropDefDiff[itemName=target, type=TRIVIAL, operation=MODIFIED],
org.apache.jackrabbit.oak.plugins.nodetype.NodeTypeDefDiff$PropDefDiff[itemName=source, type=MAJOR, operation=ADDED]
],
In general, whenever a node type definition is changed, Oak triggers a scan to ensure the consistency of the repository. The AEM 6.1 SP2 upgrade includes a few node type definition changes, and depending on the size of your repository the full scan (which is basically an in-depth node traversal) can take a substantial amount of time (several hours).
The message you are seeing in the log is an INFO message and not an error. If you keep monitoring your logs, you will eventually see something like:
[FelixStartLevel] org.apache.jackrabbit.oak.plugins.nodetype.TypeEditorProvider Node type changes: [cq:PollConfig, cq:PollConfig]; repository scan took <time duration>ms
The startup is slow, but it has not failed unless you see errors from the repository scan or other causes. Post the error if you see one and it can be looked at.
You can also try running the consistency checks via oak-run tool to verify the repository state after the upgrade. Detailed instructions can be found below:
https://github.com/apache/jackrabbit-oak/tree/trunk/oak-run
Note that this tool can work on offline repositories so you don't need a running instance to check for issues.
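For reference, a rough sketch of running such a check on a stopped instance with a TarMK (SegmentMK) repository; the mode options differ between oak-run versions (AEM 6.1 ships Oak 1.2.x), so treat the flags and paths as placeholders and check the oak-run README for your version:

# Stop AEM first, then point oak-run at the offline segment store.
java -Xmx4g -jar oak-run-1.2.x.jar check --path /path/to/crx-quickstart/repository/segmentstore

The oak-run jar version should match the Oak version of your AEM instance as closely as possible.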
It is also recommended to install critical hotfixes and service packs using the crx-quickstart/install folder. Leave the packages in this folder after installation; they won't be reinstalled on subsequent startups, but deleting some packages may uninstall them.
Hope this helps.

How Can I run two versions of Sonatype Nexus on the same machine?

So I just started working on a project, and my task is to upgrade Sonatype Nexus 1.9.x running on CentOS 6 to 2.11.x. The old version is currently deployed via a WAR file. The goal is to get the new version deployed without breaking builds when devs try to build their projects.
My plan of attack is to download Nexus, make the current Nexus that is deployed via Tomcat run on a different port, make the new Nexus run on the current port, then proxy the old Nexus.
I'm running into a couple of problems though. The old Nexus uses Java 1.6. If I update Java to 1.8, would this break the currently running Nexus?
Would I be able to run two versions of Nexus on the same VM? If so, how would I do that and minimize the chance of messing something up?
Thanks everyone. I'm just starting out and this is all very new to me.
Since your Nexus install is very old, you have to consider your options:
You could upgrade the existing instance. 1.9 is VERY old so you have to upgrade in multiple steps. First to 2.0, then 2.7 and then 2.11. This is necessary due to data storage changes for configuration and removed upgrade steps.
You could just set up a new server from scratch with the same configuration in terms of repositories and other things, and simply rsync the repositories over to the new storage. You really only have to do this for hosted repositories, since the proxy repositories will hopefully still be online and will just download whatever is requested anew.
If your setup is not too complex I would personally go with option 2. It gives you a chance to revisit things and clean up your setup.
For that setup, the steps are roughly:
Install Java 8 in parallel to Java 6
Install Nexus 2.11 from the bundle so it runs with Eclipse Jetty. Do NOT try to run on Tomcat.
Configure it to run on port 9081 or some other port that does not conflict with your original setup (see the port sketch after these steps), and do all the other config, including creating the repositories as desired as well as the security setup.
Now you should be able to have both servers running.
Create a script that rsyncs the repositories (located in sonatype-work/nexus/storage) and run it with the new server offline
Start the new Nexus in parallel and run a number of tests against it.
Once you have confirmed everything is working, plan a specific time for the cutover and do this:
Disable any deployment to Nexus (CI servers, tell people, switch hosted repositories to read only)
Run the rsync script one last time
Turn the old Nexus server off
Configure the new server to use the port of the old one
Start the new one up
You are done. Everything should be good now so the last step is to delete the old Nexus and Tomcat setup.
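For the port change mentioned in the steps above, a small sketch of what it looks like in the bundle install, assuming the stock conf/nexus.properties layout of the Nexus 2.x tar.gz (the directory is a placeholder):

# The new Nexus bundle reads its listen port from conf/nexus.properties.
cd /opt/nexus-2.11.x    # adjust to your unpacked bundle directory
sed -i 's/^application-port=.*/application-port=9081/' conf/nexus.properties

# At cutover time, switch it to the port the old server was using, e.g. 8081.
sed -i 's/^application-port=.*/application-port=8081/' conf/nexus.properties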
There are various variations of this process, of course. Here are some tips for the rsync.
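A minimal sketch of such an rsync script, assuming the old server is reachable over SSH and both sides use the default sonatype-work layout (hostnames, users, and paths are placeholders):

#!/bin/bash
# Pull the hosted repository storage from the old Nexus box onto the new one.
# --archive preserves permissions and timestamps, --delete removes artifacts
# that were deleted on the source, --compress helps over the wire.
rsync --archive --delete --compress \
    nexus@old-nexus.example.com:/srv/sonatype-work/nexus/storage/ \
    /srv/sonatype-work/nexus/storage/

Run it with --dry-run first to preview what will change, then for real, and once more during the cutover window while deployments are disabled.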
Also feel free to ping us on the mailing list or chat for further help and check out the comprehensive documentation as well.

openstack how to prevent losing vms

I am using "devstack" to play with the openstack in my desktop.
I had configured several vms in my instance. What happened was couple of days ago there was a power failure which caused my desktop to power down(I didnt have a UPS) attached to it. This resulted in my losing all the vms since i didnt unstack.
One of the solution to prevent this from happening next time is using a UPS. Are there any other solutions that I can use to back the vms so that even if there is a power loss the vms will run if i just restart and do ./stack.sh
Create snapshot of VM
Instance snapshots are uploaded to Glance which will store them in /var/lib/glance/images on the controller node.
Back up this folder.
When data loss occurs, just restore this folder and launch a new instance by booting from an image: select the snapshot and click Launch.
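A rough sketch of the same flow from the command line, assuming the unified openstack client is available in the DevStack environment (instance and path names are placeholders):

# Snapshot a running instance; the snapshot is uploaded to Glance as an image.
openstack server image create --name my-vm-snapshot my-vm

# Back up the Glance image store directory mentioned above.
sudo tar -czf /backup/glance-images.tar.gz /var/lib/glance/images

# After a crash and a fresh ./stack.sh, restore the folder and boot a new
# instance from the snapshot image (dashboard or CLI).
sudo tar -xzf /backup/glance-images.tar.gz -C /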
DevStack is a developer environment; it is not meant to recover from power losses.
You should consider using another all-in-one OpenStack installer which supports restarting the OpenStack services without losing state. For instance, you can use Red Hat's Packstack - https://openstack.redhat.com/Quickstart
