We have a bunch of jars on the Artifactory server (the free version).
How can we download all the jars from it in a single HTTP request?
Do we need to tar all the jars into a single tar file in order to download them efficiently?
Thanks
Sincerely
Since you are the one who generates the files, you have two options:
As you said, generate the tar before uploading it. You'll still be able to search and browse the files inside it (a sketch of this approach follows the list).
Write an afterDownloadError user plugin. Each time a user requests a URL ending in .tar, create the tar from the needed files and serve it.
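For the first option, here is a minimal sketch using Artifactory's deploy-by-PUT REST call; the host, repository name and target path below are made up, so adjust them (and the credentials) to your setup:

# bundle all jars into one archive (run where the jars live)
$ tar cf all-jars.tar *.jar

# deploy the bundle to Artifactory with a plain HTTP PUT
$ curl -u deployer:password -T all-jars.tar \
    "http://artifactory.example.com:8081/artifactory/libs-release-local/bundles/all-jars.tar"

# consumers can then fetch everything back in a single HTTP request
$ curl -O "http://artifactory.example.com:8081/artifactory/libs-release-local/bundles/all-jars.tar"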
When MSBuild packages an ASP.Net website, it creates a .zip file and a set of xml files. Using msdeploy, I can push that zip file and a set of parameter replacements to an IIS server.
Now, here's the conundrum: given that Web Package zip file, is there a way to unpack it and do the token replacement cleanly without deploying to IIS? Just unpack it to a folder and apply the SetParameters.xml so that it would be ready to robocopy to a file system location?
MSDeploy's parameters are opaque, to say the least. I've tried deploying using the file and folder providers, and that doesn't work.
I could roll my own - I'd have to reverse-engineer the token replacement logic and deal with how the zip file contains the entire folder tree right down to the drive root (instead of being relative paths from the project folder, which is what it will use when it deploys). All of this is doable... but very annoying and I was hoping there was a better way.
Thanks in advance.
Following up on this: In the end I had to do it in powershell. The code is tied into a whole bunch of other deployment code that's company-specific, so I can't paste it here, but in short:
Unzip the .zip into a temporary location.
Recurse the directories until you find the packagetmp - this is the actual meat of the zip.
Delete the target web app folder on the webserver (carefully - as many sanity checks as possible need to happen before this step)
Copy the contents of packagetmp to the target folder-path on the web-server.
Use XPath operations in PowerShell to do the config transforms, providing connection strings and logging endpoints.
Restart the app pool (do not skip this step; you get subtle failures if you do).
Clean-up (delete the temporary folder).
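For illustration only, here is a rough command-line outline of those steps; every path, share and app pool name below is invented, and the XPath config-transform step (5) is left out because that part was company-specific:

:: 1. unpack the package (tar.exe ships with recent Windows; any unzip tool works)
mkdir C:\Temp\unpack
tar -xf MyApp.zip -C C:\Temp\unpack

:: 2. locate the PackageTmp folder - the actual meat of the zip
dir /s /b /ad C:\Temp\unpack | findstr /i packagetmp

:: 3-4. mirror PackageTmp to the target folder on the web server
::      (/MIR deletes anything in the target that is not in the source, so sanity-check first)
robocopy "C:\Temp\unpack\Content\C_C\build\MyApp\obj\Release\Package\PackageTmp" "\\webserver\d$\sites\MyApp" /MIR

:: 6. recycle the app pool so the new binaries are picked up
%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"MyAppPool"

:: 7. clean up the temporary folder
rmdir /s /q C:\Temp\unpack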
In my Chef recipes I use remote_file to access files on an HTTP server. I would like to use the Chef server's nginx to serve those files.
How do I configure Chef's nginx to serve files from a specific folder over HTTP?
This is not a feature of Chef. You can use the files/ folder in a cookbook and the cookbook_file resource to serve files directly from the Chef Server, but it is very limited and you should really use your own server to manage large files or complex downloads.
We have many machines in the LAN to be configured with Chef, and there are many remote_file resources in our recipes. The problem was that downloading from the internet takes a lot of time, even when the same file is already available on another machine in the LAN.
What I did is:
Installed Nexus for binary file storage.
Monkeypatched the remote_file resource using a library in a cookbook, so that it gets loaded every time the cookbook is loaded and overrides the original resource behaviour.
Now every time the remote_file resource has to download a file, it first asks Nexus. If the file is available in Nexus, it downloads it from there.
If there is no such file there, Chef downloads the file from the original source and then uploads it to Nexus as well, for the other nodes.
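The monkeypatch itself is Chef/Ruby code and tied to our cookbooks, but the download logic it adds is roughly the following (the Nexus URL, repository and file names here are made up):

# hypothetical Nexus raw repository and artifact
NEXUS="http://nexus.example.local:8081/repository/lan-cache"
FILE="jdk-8u202-linux-x64.tar.gz"

# ask Nexus first; fall back to the original source and populate the cache
if curl -sfI "$NEXUS/$FILE" >/dev/null; then
  curl -sfO "$NEXUS/$FILE"                                    # LAN hit - fast
else
  curl -sfLO "https://download.example.com/$FILE"             # miss - fetch from the internet
  curl -sf -u uploader:password -T "$FILE" "$NEXUS/$FILE"     # cache it for the other nodes
fi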
I have a couple of development machines that I code my changes on and one production server where I have deployed my Symfony application. Currently my deployment process is tedious and consists of the following workflow:
Determine the files changed in the last commit:
svn log -v -r HEAD
FTP those files to the server as the regular user
As root, manually copy those files to their destination and, if required because the file is new, change the owner to the apache user
The local user does not have access to the apache directories which is why I must use root. I'm always worried that something will go wrong either due to a forgotten file during the FTP or the copy to the apache src directory.
I was thinking that instead I should FTP the entire Symfony app/ and src/ directories along with composer.json to the server as the regular user then come up with a script using rsync to sync all of the files.
New workflow would be:
FTP app/ src/ composer.json to the server in the local user's project directory
Run the sync script to sync the files (a sketch is below the list)
Clear the cache
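A minimal sketch of what the sync script in step 2 could look like, assuming the uploaded files land in /home/deployer/myapp-staging and the live project lives in /var/www/myapp (both invented paths), run as root on the server:

#!/bin/sh
# sync the uploaded app/ and src/ into the live project, then fix ownership
rsync -av --delete --exclude=cache/ --exclude=logs/ /home/deployer/myapp-staging/app/ /var/www/myapp/app/
rsync -av --delete /home/deployer/myapp-staging/src/ /var/www/myapp/src/
cp /home/deployer/myapp-staging/composer.json /var/www/myapp/composer.json
chown -R apache:apache /var/www/myapp

# step 3: clear the Symfony cache for the production environment
php /var/www/myapp/app/console cache:clear --env=prod --no-debug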
Is this a good solution or is there something better for Symfony projects?
This question is similar and gives an example of the rsync approach, but the pros and cons of this method are not discussed. Ideally I'd like the method that is the most reliable and easiest to set up, preferably without the need to install new software.
Basically any automated solution would be better than rsync or FTP. There are multiple things to do, as you have mentioned: copy files, clear the cache, run migrations, generate assets; the list goes on.
Here you will find a list of potential solutions:
http://symfony.com/doc/current/cookbook/deployment/tools.html#using-build-scripts-and-other-tools
From my experience with Symfony I can recommend capifony. It takes a while to understand it, but it pays off.
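For reference, the basic capifony workflow looks roughly like this (server details go into the generated deploy.rb; treat it as an outline rather than a complete recipe):

$ gem install capifony           # install the tool
$ cd /path/to/project
$ capifony .                     # generate the Capfile and app/config/deploy.rb
$ cap deploy:setup               # create the releases/shared directories on the server
$ cap deploy                     # push a new release and switch the current symlink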
I want to move files from a local folder to a remote URL on a schedule, using WebDAV.
I found this URL useful, but it shows a script that transfers only a single file; instead, I want to transfer all files from a local folder to a remote URL through WinSCP using the WebDAV protocol:
http://winscp.net/eng/docs/guide_automation
Any pointers for this would be helpful.
Use the following WinSCP script:
open http://user:password@example.com/
put d:\path\* /home/user/
close
Read about file masks.
If you really need to move files (as opposite to copy), add the -delete switch to the put command:
put -delete d:\path\* /home/user/
For scheduling, see the WinSCP guide to scheduling file transfers.
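On Windows, one common setup (sketched here with made-up paths and a made-up schedule) is to save the script above to a file and let the Task Scheduler run it through winscp.com:

rem save the commands above as c:\scripts\upload.txt, then register a daily task
schtasks /create /tn "WebDAV upload" /sc DAILY /st 02:00 ^
    /tr "\"C:\Program Files (x86)\WinSCP\winscp.com\" /script=c:\scripts\upload.txt /log=c:\scripts\upload.log"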
I created a Rackspace account earlier today to serve my OpenCart images from the Rackspace CDN.
I have created a container where I will upload over 500,000 images, but I would prefer to upload them as a compressed file, which feels more flexible.
If I upload all the images in a compressed file, how do I extract it once it is in the container, and what compression formats would work?
The answer may depend on how you are attempting to upload your file/files. Since this was not specified, I will answer your question using the CLI from a *nix environment.
Answer to your question (using curl)
Using curl, you can upload a compressed file and have it extracted using the extract-archive feature (it accepts tar, tar.gz, and tar.bz2 archives).
$ tar cf archive.tar directory_to_be_archived
$ curl -i -XPUT -H'x-auth-token: AUTH_TOKEN' https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaa-aaa-aaa-aaa?extract-archive=tar -T ./archive.tar
You can find the documentation for this feature here: http://docs.rackspace.com/files/api/v1/cf-devguide/content/Extract_Archive-d1e2338.html
Recommended solution (using Swiftly)
Uploading and extracting that many objects using the above method might take a long time to complete. Additionally if there is a network interruption during that time, you will have to start over from the beginning.
I would recommend instead using a tool like Swiftly, which will allow you to upload your files concurrently. This way, if there is a problem during the upload, you don't have to re-upload objects that have already been successfully uploaded.
An example of how to do this is as follows:
$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
--auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
--concurrency=10 put container_name -i images/
If there is a network interruption while uploading, or you have to stop/restart uploading your files, you can add the "--different" option after the 'put' in the above command. This will tell Swiftly to HEAD the object first and only upload if the time or size of the local file does not match its corresponding object, skipping objects that have already been uploaded.
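For example, the resumed upload would then look like this:

$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
  --auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
  --concurrency=10 put --different container_name -i images/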
Swiftly can be found on github here: https://github.com/gholt/swiftly
There are other clients that possibly do the same things, but I know Swiftly works, so I recommend it.