I want to move a set of artifacts from one Nexus to another (download and later upload). I can only download the artifacts one by one. Is there any way to download an entire folder? Is there any other kind of operation, like export/import?
Thanks!
EDIT:
I have access to the Nexus storage directory (sonatype-work\nexus\storage) on the server, and I have retrieved all the artifacts from there. I didn't find any way to do it from the web client.
Nexus stores the artifacts on disk in standard Maven 2/3 repository layout, so you can just directly copy the artifacts from one storage directory to the other using whatever means you like.
After you're finished, schedule a repair index task against the destination repository so that searching for artifacts from the web UI works. Note that your builds will work immediately after you copy the artifacts; indexes are not used by Maven builds.
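For example, a direct filesystem copy between two default sonatype-work layouts could look like this (the paths and the repository name "releases" are placeholders for your own setup):

    # Copy the on-disk Maven repository layout from the source Nexus
    # instance's storage directory to the destination's.
    rsync -av --progress \
        /opt/nexus-source/sonatype-work/nexus/storage/releases/ \
        /opt/nexus-dest/sonatype-work/nexus/storage/releases/

The trailing slashes matter to rsync: they copy the contents of the source directory into the destination rather than nesting a new subdirectory inside it.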
My company has a nexus installation that grew to a rather huge size on disk, thus I am in the process of cleaning it up. While doing so, I found a directory called storage_full in the sonatype-work/nexus directory, adjacent to the storage folder, which is not listed in the nexus directories documentation. Google finds nothing either. The folder is rather large and seems to be similar to storage in structure.
Questions:
What is the use of this directory?
Can I delete it, or clean it up? I'd rather not use trial and error to find out if it's important as any downtime of the service is unacceptable.
The Nexus version is 2.11.2-03, but AFAIK it has been repeatedly updated and has been running in the same work directory since about 2011 - could this folder be left over from an older version?
Nexus has never had a directory called "storage_full" as part of its setup, so I'm not sure what this is. Check to see if the local storage location of any of your repositories has been overridden to point to that location.
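In Nexus 2.x, per-repository local storage overrides end up in sonatype-work/nexus/conf/nexus.xml, so a quick check (assuming the default work directory location) is:

    # Look for any repository whose local storage points at the mystery folder.
    grep -n "storage_full" sonatype-work/nexus/conf/nexus.xml

If that turns up nothing, the directory is probably not referenced by the running configuration, but take a backup before deleting it anyway.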
I want to create a master repository on our server, from which I can clone a local version onto my computer.
I am using RStudio v0.98.994.
So far, this is what I have tried doing:
Create a folder for the master repository to live in. I do this using 'New Project' in RStudio and tell it to make a Git repository.
I can then open up another new project, located on my C drive, and use RStudio to clone by telling it to open an existing project, setting the URL to the location of the master project.
However, when I then make changes and commit to my local repository (which works fine), I cannot push to the master repository; I get an error exactly as described in this question: git push fails: `refusing to update checked out branch: refs/heads/master`
So it appears that RStudio creates non-bare repositories?
Now I thought, well okay, I will use Git Bash to initialise the repository and then connect to it within RStudio.
I do so, but then cannot find a way to use that repository in RStudio.
I am very new to Git, so it is entirely probable that this is one of those 'read the instructions' questions, in which case I am very sorry - could someone possibly point me towards some guidance for this situation? I have spent the better part of a day googling around this error and haven't yet managed to pull together the pieces :( I also apologise; this doesn't feel like a very reproducible question.
It sounds like you are using Windows Git, with a setup on a local Windows machine (C: drive) and a server of some kind mounted as the S: drive. There are a few things you should be aware of when doing this.
Shared Repositories
If you intend for multiple people to share the same repository, you want to initialize a shared repository. See the --shared option of git-init for more details. Note that I'm not sure how having your repository on a Windows machine affects the sharing options. If you are just trying to keep your repository in two places, that makes things a lot easier.
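For illustration, creating a shared repository looks like this (the directory name is made up, and --shared=group relies on Unix group permissions, so it may not translate directly to a Windows share):

    # Create a bare repository that members of the owning group can push to.
    git init --bare --shared=group team-project.git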
Bare Repositories
Separate from the discussion of sharing is the discussion of bare repositories. If you don't intend to ever work with files in the server (i.e. it's just going to be a place to push changes so they are safely stored), you could initialize a bare repository. A bare repository contains the database structure of Git, but does not have the actual files in the directory.
A standard Git repository is a directory with a hidden folder in it named .git. This .git folder contains all the various data structures that Git uses to track changes. A bare repository is essentially a folder containing only the contents of .git.
The good thing about a bare repository is that no one can work in the repository itself (since there is no working directory, just the database). This means that no one could log into S: and edit the repository themselves. Instead, they would have to clone the repository, then push their changes back to the origin. The GitGuys have a good article about why this is ideal.
Note that shared repos and bare repos are neither dependent nor mutually exclusive. As a general practice, if you have a "server repo" from which you pull and to which you push, it should be bare, regardless of whether the project is shared.
A Non-Shared Workflow
Since it's not clear whether you are sharing, and you're on a Windows environment, which I don't know much about from a sharing standpoint, I'm going to give you a simple example. Using git-bash, you should be able to change directories to wherever on S: you keep your repositories. Then use git init with the bare option, as described in the link above, to initialize a bare repository. Navigate to where you want your repository to live on C:, and then do git clone to get a working copy.
Add a README file or something else so you can do your initial commit, and then commit and do git push origin master to push your changes to the S: repository. Once all that is done, THEN initialize the RStudio Git project. RStudio should defer to your existing configuration, and things should hopefully work.
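Put together, the whole sequence looks roughly like this in git-bash (the drive paths and names here are examples, not your actual layout):

    # 1. Create the bare "server" repository on S:
    cd /s/repos
    git init --bare myproject.git

    # 2. Clone a working copy onto C:
    cd /c/projects
    git clone /s/repos/myproject.git
    cd myproject

    # 3. Make an initial commit and push it back to S:
    echo "My project" > README.md
    git add README.md
    git commit -m "Initial commit"
    git push origin master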
I'm going to start using grunt-rev & grunt-usemin with grunt-watch for my web development needs (a RESTful Web App specifically).
I have a local development machine which will run grunt-watch to attach revision identifiers to my JS files. I git commit and git push my tree to a Git repo, and then have the production server git pull the changes from the repo to serve them to web visitors.
The problem is that I don't want my git repo to store different filenames (due to grunt-rev) on each commit. That would be bad, because then I wouldn't be able to do git diff between commits without having my screen get flooded with the contents of files that appear and disappear, and also it could (sometimes) take up a lot more storage than if it only stored the small diffs of the files.
The only solution I see is to add the build directory containing the versioned filenames to my .gitignore, so as not to store those files (with their constantly changing filenames) in Git. But wouldn't that mean I would have to run grunt-watch on my production server as well, in order to produce the build directory with the versioned filenames there too? That gets complicated: a new process has to run on the remote server, with its own small chance of error in processing the files. Not the solution I was hoping for.
Do you people have another solution? What would you suggest I do?
What I do to solve this is remove previous "build" files before committing and deploying a new file. There is no need to keep older generated files, because you can always rebuild them from the source files (which are in Git).
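As a sketch, assuming a clean task is configured via something like grunt-contrib-clean and your build task is named build (both task names are illustrative), the deploy step becomes:

    # Wipe stale revved files, rebuild, and commit only the current output.
    grunt clean && grunt build
    git add -A
    git commit -m "Rebuild revved assets"

Because grunt-rev derives each filename from the file's content hash, unchanged files keep the same name across builds, so only genuinely changed assets show up in the diff.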
Problem description:
I have multiple alfresco installations (development, testing, production) of one project.
I need to copy files under Data Dictionary folder (Scripts, Templates, Web Scripts) from one to another in one direction (development -> testing -> production).
Current solution:
I copy files manually via WebDAV, which is annoying and unreliable (I can forget to copy some).
Desired solution:
I'd like to have a tool that copies changed files at my command, once they are ready for the next step. I had an idea that it could internally use a Git repository with branches for each installation, fetching the files from development and pushing them to testing and production. This way (with Git) it could also support reverting changes.
It looks like a quite common problem, but I wasn't able to find anything about it on Google, so I'm asking here. Does such a tool exist, or is there a better way of managing multiple repositories?
If you have brand new installations of your development/testing/production Alfresco instances, you could simply migrate the alf_data dir content, which by default contains the database, indexes, content store, and backup files. If needed, you could migrate the "shared" folder too, or at least some files from it, such as Alfresco customizations (custom scripts or similar). Here is the link that helps with the migration steps:
http://wiki.alfresco.com/wiki/System_Migration
Otherwise, if you only need to move a folder from the Data Dictionary, or a set of documents, you could use an ACP (Alfresco Content Package) to achieve that. Here is the wiki page for doing this: http://wiki.alfresco.com/wiki/Export_and_Import
You could do this via FTP. When you want to deploy new changes, you can use a manual client like FileZilla to download the changes from dev, then upload them to test.
But you can also automate FTP, so that a scheduled job checks whether there are new files on, say, dev and pushes them to test.
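For example, with lftp (the host names, credentials, and paths below are placeholders), a cron job could mirror the folder down from dev and back up to test:

    # Pull the current state from dev...
    lftp -u user,password -e "mirror /data-dictionary ./data-dictionary; quit" ftp://dev.example.com
    # ...then push it up to test.
    lftp -u user,password -e "mirror --reverse ./data-dictionary /data-dictionary; quit" ftp://test.example.com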
If you use Git for source control, you could also do this via git-ftp. Keep a copy of the Data Dictionary in your source folder, then add some sort of pre-commit check that sees whether you changed any of those files; if you did, it pushes the change to dev and test on commit.
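A rough sketch of the git-ftp side (the URL, user, and path are placeholders); git-ftp remembers the last uploaded commit and only transfers files changed since then:

    # One-time setup: record the target and upload everything once.
    git config git-ftp.url "ftp://test.example.com/alfresco/data-dictionary"
    git config git-ftp.user "ftpuser"
    git ftp init

    # Afterwards, upload only what changed since the last push.
    git ftp push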
I think the Alfresco Replication service is suitable for you.
http://wiki.alfresco.com/wiki/Alfresco_Community_3.4.a#Replication
We upload artifacts to Nexus through the file protocol with the Maven deploy plugin. Sometimes those artifacts do not appear directly in the Nexus web interface; I have to do 'expire cache' and refresh the page. Moreover, this causes builds dependent on such an artifact to fail.
I guess this is because we deploy through the file protocol. Is there a way to prevent this? I saw the 'Not Found Cache TTL' setting in the Nexus interface but am not sure I understand the doc. If I set it to zero, will this work?
Thanks
PW
Deploying directly to the file system should only be used in extreme cases, such as bulk manipulations or imports. To make Nexus fully recognize the changes on disk, you need to expire the cache and then may have to rebuild the metadata. Both of these can be triggered from the repository screen. If you want the artifacts to be searchable, you also have to fire off the indexer task.
All of those things happen automatically when you deploy via http/https directly to Nexus, which is the way it is intended to be used.
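For a one-off upload over HTTP, the deploy plugin's deploy-file goal can be used like this (the URL and repositoryId are placeholders; the repositoryId must match a <server> entry with credentials in your settings.xml):

    mvn deploy:deploy-file \
        -Dfile=target/myapp-1.0.jar \
        -DgroupId=com.example \
        -DartifactId=myapp \
        -Dversion=1.0 \
        -Dpackaging=jar \
        -DrepositoryId=nexus-releases \
        -Durl=http://nexus.example.com/nexus/content/repositories/releases

For regular builds, pointing distributionManagement at the same HTTP URL gives you the same behavior from a plain mvn deploy.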