Based on the documentation, we can use the Upgrade Agent to upgrade Nexus 2 to Nexus 3, but now I am wondering whether it can also be used for data migration. My use case: I already have a Nexus 3 instance with data in it, and for another project we are using Nexus 2, whose data we now want to move to that Nexus 3 instance. I am just wondering whether migrating this way could cause configuration issues or overwrite blobs in Nexus 3.
Has anyone tried it for migrating data from one instance to an existing instance that already has data in it?
In the end I had to come up with this solution, as I am using Nexus OSS.
First, download the target repository from Nexus 2:
wget --user user --password pass --recursive --no-parent http://NEXUS2-URL/nexus/content/repositories/maven-releases/
Then I used these scripts to import them:
https://github.com/AlexLiue/nexus-repository-import-scripts (they support NuGet, Maven, and npm).
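For what it's worth, those scripts essentially boil down to walking the downloaded tree and re-uploading each file with an HTTP PUT. A minimal sketch of the same idea in plain shell, assuming a Nexus 3 hosted Maven repository named maven-releases on localhost:8081 and default admin credentials (all of these are placeholders to adjust):

# cd into the downloaded repository (wget creates a directory named after the host)
cd NEXUS2-URL/nexus/content/repositories/maven-releases/
# skip checksums and the index pages wget leaves behind,
# then PUT each file at the same relative path in Nexus 3
find . -type f ! -name '*.md5' ! -name '*.sha1' ! -name 'index.html*' |
while read -r f; do
  curl -fsS -u admin:admin123 --upload-file "$f" \
    "http://localhost:8081/repository/maven-releases/${f#./}"
done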
I would check the repository import feature; maybe that solves your problem.
I migrated from Artifactory 4.0.2 to the latest 7.x using method 1 suggested by https://jfrog.com/knowledge-base/what-is-the-best-way-to-migrate-a-large-artifactory-instance-with-minimal-downtime/.
Browsing the new instance I see all the definitions of my repositories as well as all the metadata; however, I don't see any artifact content (e.g. jar files).
I copied all the filestore content from the old instance to the new one.
What did I miss?
Thanks
From the scenario you've described, it appears to be an issue with the storage (filestore) configuration/connection.
What do we need to check next?
If you changed the filestore path as part of the migration, update the path in the storage configuration file (binarystore.xml) on the Artifactory instance. Say you moved the data from one mount to another to use it with the new Artifactory 7.x; this step tells Artifactory where exactly it needs to look for the binaries.
In Artifactory 7.x the file lives under the $JFROG_HOME/etc/artifactory/ directory.
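For reference, a minimal sketch of a binarystore.xml that points the file-system provider at a relocated filestore; the directory below is a placeholder for wherever you actually moved the data:

<config version="v1">
    <chain template="file-system"/>
    <provider id="file-system" type="file-system">
        <!-- placeholder: set this to the new filestore location -->
        <fileStoreDir>/mnt/new-mount/filestore</fileStoreDir>
    </provider>
</config>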
I'm using JFrog Artifactory OSS in a Docker container.
I want to download the latest version of an artifact, but it seems that this is not possible in the OSS version.
Does anybody know a way to download the latest version of the artifact?
You are right, the Latest Version API endpoint works only in Artifactory Pro.
Working with Maven repositories, you can use the SNAPSHOT support to get Artifactory to return you the latest artifact.
Set the Maven Snapshot Version Behavior in the repository settings to Unique and deploy your artifacts with a -SNAPSHOT suffix. Artifactory will assign a unique version to those files internally, but you will always be able to retrieve the latest one using the -SNAPSHOT suffix.
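For example, with that behavior set, requesting the non-unique -SNAPSHOT file name over plain HTTP resolves to the newest deployed snapshot. The host, repository name, and coordinates below are placeholders:

# resolves to the latest unique snapshot, e.g. myapp-1.0-20180101.120000-3.jar
curl -O -u user:password \
  "http://ARTIFACTORY-HOST/artifactory/libs-snapshot-local/com/example/myapp/1.0-SNAPSHOT/myapp-1.0-SNAPSHOT.jar"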
Thanks a lot for your fast answer. I forgot to tell you my whole workflow.
I have a Jenkins server building, testing, and deploying my stuff.
I want to build a Spring Boot jar file with Maven and deploy it to my repository (I use JFrog). This works perfectly. As a next step I will create a Docker image containing this jar file, so in the image there must be a command to download the executable jar from JFrog. For this reason I have to know the latest version of the jar file.
I hope you can understand it; this is my first question in English.
Thanks a lot for helping me!
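In that case the Docker build can fetch the jar the same way. A rough sketch of a Dockerfile, where the base image, URL, and coordinates are placeholder assumptions; note that ADD with a URL only works if the repository allows anonymous reads, otherwise download with curl and credentials in a RUN step:

FROM openjdk:8-jre
# the -SNAPSHOT file name resolves to the latest deployed snapshot (see above)
ADD http://ARTIFACTORY-HOST/artifactory/libs-snapshot-local/com/example/myapp/1.0-SNAPSHOT/myapp-1.0-SNAPSHOT.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]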
Is it possible to add Presto interpreter to Zeppelin on AWS EMR 4.3 and if so, could someone please post the instructions? I have Presto-Sandbox and Zeppelin-Sandbox running on EMR.
There's no official Presto interpreter for Zeppelin, and the conclusion of the Jira ticket raised about it is that one isn't necessary because you can just use the JDBC interpreter:
https://issues.apache.org/jira/browse/ZEPPELIN-27
I'm running a later EMR with Presto & Zeppelin, and the default set of interpreters doesn't include jdbc, but it can be installed by SSHing to the master node and running:
sudo /usr/lib/zeppelin/bin/install-interpreter.sh --name jdbc
Even better is to use that as a bootstrap script.
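A minimal version of such a script, just wrapping the command above; note that on some EMR releases the applications are installed after bootstrap actions run, so you may need to submit this as a step instead:

#!/bin/bash
# install the jdbc interpreter into Zeppelin on the master node
sudo /usr/lib/zeppelin/bin/install-interpreter.sh --name jdbc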
Then you can add a new interpreter in Zeppelin.
Click the login-name drop down in the top right of Zeppelin
Click Interpreter
Click +Create
Give it a name like presto; this means you will use %presto as the directive on the first line of a paragraph in Zeppelin, or you can set it as the default interpreter.
The settings you need here are:
default.driver com.facebook.presto.jdbc.PrestoDriver
default.url jdbc:presto://<YOUR EMR CLUSTER MASTER DNS>:8889
default.user hadoop
Note that there's no password provided, because the EMR environment should be using IAM roles, PPK keys, etc. for authentication.
You will also need a dependency for the Presto JDBC driver jar. There are multiple ways to add dependencies in Zeppelin, but one easy way is via a Maven groupId:artifactId:version reference in the interpreter settings under Dependencies,
e.g.
under artifact
com.facebook.presto:presto-jdbc:0.170
Note the version 0.170 corresponds to the version of Presto currently deployed on EMR, which will change in the future. You can see in the AWS EMR settings which version is being deployed to your cluster.
You can also get Zeppelin to connect directly to a catalog, or a catalog & schema by appending them to the default.url setting
As per the Presto docs for the JDBC driver
https://prestodb.io/docs/current/installation/jdbc.html
For example, using Presto with a Hive metastore that has a database called datakeep:
jdbc:presto://<YOUR EMR CLUSTER MASTER DNS>:8889/hive
OR
jdbc:presto://<YOUR EMR CLUSTER MASTER DNS>:8889/hive/datakeep
UPDATE Feb 2018
EMR 5.11.1 ships Presto 0.187, and there is an issue in the way the Zeppelin interpreter passes properties to the Presto driver, causing an error along the lines of Unrecognized connection property 'url'.
Currently the only solutions appear to be using an older driver version in the artifact reference, or manually uploading a patched Presto driver.
See https://github.com/prestodb/presto/issues/9254 and https://issues.apache.org/jira/browse/ZEPPELIN-2891
In my case, using a reference to an older driver (apparently it must be older than 0.180), e.g. com.facebook.presto:presto-jdbc:0.179, did not work: Zeppelin gave me an error about not being able to download dependencies. Probably something to do with Zeppelin's local Maven repo not containing it; I'm not sure, and I gave up on that approach.
I can confirm that patching the driver works.
(Assuming you have java & maven installed)
Clone the presto github repo
Checkout the release tag e.g. git checkout 0.187
Make the edits as per that patch https://groups.google.com/group/presto-users/attach/1231343dbdd09/presto-jdbc.diff?part=0.1&authuser=0
Build the jar using mvn clean package
Copy the jar to the Zeppelin machine, somewhere the zeppelin user has permission to read it.
In the interpreter, under the Dependencies - Artifacts section, instead of a maven reference use the absolute path to that jar file.
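Condensed into shell commands, assuming the diff from the mailing-list link above has been saved locally as presto-jdbc.diff (paths are placeholders):

git clone https://github.com/prestodb/presto.git && cd presto
git checkout 0.187
# apply the patch; depending on how the diff was generated you may
# need patch -p1 < /path/to/presto-jdbc.diff instead
git apply /path/to/presto-jdbc.diff
cd presto-jdbc && mvn clean package -DskipTests
# copy the resulting jar somewhere the zeppelin user can read it
cp target/presto-jdbc-0.187.jar /var/lib/zeppelin/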
There also appears to be an issue passing the user to the Presto driver, so just add it to the default.url JDBC connection string as a URL parameter, e.g.
jdbc:presto://<YOUR EMR CLUSTER MASTER DNS>:8889?user=hadoop
Up and running. Meanwhile, it might be worth considering Athena as an alternative to Presto, given that it's serverless and effectively just a fork of Presto. It is limited to external Hive tables only, and they must be created in Athena's own catalog (or now in the AWS Glue catalog, which is also restricted to external tables).
Chris Kang has a good post on doing that in spark-shell: http://theckang.com/2016/spark-with-presto/. I don't see why you wouldn't be able to do the same in Zeppelin. Another helpful post is on making sure you have the right Java version on EMR: http://queirozf.com/entries/update-java-to-jdk-8-on-amazon-elastic-mapreduce. The current Presto version as of writing only runs on Java 8. I hope this sets you in the right direction.
I love using RStudio for its built-in integration with version control systems. However, with RStudio on Windows, is there a way to change the Git protocol from HTTP to SSH (or vice versa) for a project already under version control, without first having to delete and recreate the project?
I might be missing something, but I originally cloned my repo using HTTP, which I subsequently found to be a massive pain because every time I want to push project changes to GitHub I have to re-enter my username and password. So I removed the project from version control (Project -> Project Options -> Git/SVN -> Version Control System: none) and then tried to re-add version control, hoping to use SSH, but it only allows you to go back to the original protocol you selected when creating the project in the first place.
The only way I have found to change the protocol is to delete the project and then create a new one from GitHub using the correct SSH parameters. I'd really like to be able to change a project's version control protocol from HTTP to SSH without deleting and re-cloning first.
Is this possible?
Take a look at git config and Git's configuration in general. You can configure several remotes, which is what makes the "distributed" aspect of Git work.
You can also try just copying the whole repository (or just editing .git/config; keep a backup!) and checking what happens in your specific case when you change the configuration. Whether a given protocol works depends on a lot of things that aren't under Git's control, like firewall configurations en route and the setup at the other end.
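Concretely, the usual way to switch an existing clone from HTTP to SSH is to repoint the remote; the repository path below is a placeholder:

# see what the remote currently points at
git remote -v
# switch origin to the SSH URL
git remote set-url origin git@github.com:youruser/yourrepo.git

After that, pushes and pulls from RStudio should use the new URL, since RStudio simply calls the git configured for the project directory.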
I want to migrate my 20-year-old Maven repo to Artifactory. What are the possible options for migration, starting with the easiest or quickest way to migrate and start using Artifactory?
One way I could think of was a remote repository in Artifactory that proxies my existing repo.
The solution you've already suggested (passive proxying) is seamless and useful for weeding out artifacts that are no longer needed or used, but it will also take you more time to migrate away from the old instance.
For a more direct approach, Artifactory has a repository import facility that allows you to import content into Artifactory from a physical location on the installed machine.
A little less convenient, but also an option, is to write a script that crawls the content directory of the old repo and deploys every artifact with an HTTP PUT request, for example:
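A minimal sketch of such a script, where the content path, host, repository name, and credentials are placeholders for your setup:

#!/bin/bash
# walk the old repo's content directory and PUT each artifact at the
# same relative path in Artifactory; checksum files are skipped because
# the server recalculates them on deploy
cd /data/old-maven-repo
find . -type f ! -name '*.md5' ! -name '*.sha1' |
while read -r f; do
  curl -fsS -u admin:password --upload-file "$f" \
    "http://ARTIFACTORY-HOST/artifactory/libs-release-local/${f#./}"
done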