I'm looking for an efficient way to remove a large set of artifacts, spread across various locations, from Artifactory (but retrievable with a search query).
I've tried using the JFrog CLI 'rt del' command (along with an AQL file) to search and then remove the results, and this works. However, the removal rate is pretty slow for our instance -- around one artifact per second. I will need to remove several hundred thousand artifacts, and at that rate it will take far too long, so I am looking for a batch removal mechanism which executes entirely server-side.
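For concreteness, the kind of spec and command I'm talking about looks roughly like this (the repository name and cutoff date are placeholders, not our real values):

    delete-spec.json:
    {
      "files": [
        {
          "aql": {
            "items.find": {
              "repo": "my-repo-local",
              "created": { "$lt": "2020-01-01T00:00:00.000Z" }
            }
          }
        }
      ]
    }

    jfrog rt del --spec delete-spec.json --quiet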
I noticed the Artifactory UI supports a 'search stash' feature where a search can be performed, then saved off, and the results can be acted upon (including a deletion action). Is this available via the REST API? This seems like it would be a good match for this use case.
Alternatively, is there a way to perform a search by creation date in the UI? If so, I could presumably use the search stash feature and perform the removal on the search stash.
The last option I can think of is to write a custom plugin to do this work, but I'm hoping there is an easier way, as it seems like a semi-common case.
Thanks in advance!
Deleting from the search stash would delete the artifacts from the stash results but won't delete them from Artifactory (as per my understanding).
There is a Groovy plugin available that will clean up your artifacts depending on a few conditions (link below):
Groovy Clean Up
I found Artifactory AQL quite helpful for searching and deleting artifacts.
I also wrote a custom clean-up script that in turn used AQL to delete artifacts matching a repository regex, and it also checks the artifacts' promotion status.
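As a rough sketch of that approach (the server URL, repository pattern, date, and credentials below are placeholders), the search half is a single AQL call and the delete half is one DELETE per result path:

    # Search: POST the AQL query to Artifactory's AQL endpoint
    curl -u user:password \
         -H "Content-Type: text/plain" \
         -X POST "https://artifactory.example.com/artifactory/api/search/aql" \
         -d 'items.find({"repo":{"$match":"my-repo-*"},"created":{"$lt":"2020-01-01"}})'

    # Delete: for each repo/path/name in the results, issue a DELETE, e.g.
    curl -u user:password -X DELETE \
         "https://artifactory.example.com/artifactory/my-repo-local/some/path/artifact.jar"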
I'm setting up my first Ghost blog right now and trying to figure out: what is the best way to routinely back up my content using an automated script?
I see many posts online about how to manually back up the contents, and some scripts to do this. However, they all involve stopping the Node/Ghost processes while running a script to back up the database. Is there any way to do an automated backup without stopping Ghost?
Unfortunately, right now you cannot automate the backup without shutting down Ghost, because it isn't safe to copy the database while it could be being read from or written to. A script that stops Ghost, makes a copy, and starts it back up would cause only about a second of downtime, so if you have quieter periods, you could probably get away with it. If you can't, then just do the manual export for all the post data, and have a script back up all your images.
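A minimal sketch of that kind of script, assuming an SQLite-backed Ghost install under /var/www/ghost and a service named ghost (adjust the paths and the stop/start lines to however you actually run Ghost -- forever, pm2, an init script, etc.):

    #!/bin/sh
    # Stop Ghost, copy the database, restart, then copy images (which can be copied live).
    GHOST_DIR=/var/www/ghost                      # illustrative install location
    BACKUP_DIR=/var/backups/ghost/$(date +%F)     # dated backup folder

    mkdir -p "$BACKUP_DIR"
    systemctl stop ghost                          # the ~1 second of downtime starts here
    cp "$GHOST_DIR"/content/data/*.db "$BACKUP_DIR"/
    systemctl start ghost
    cp -r "$GHOST_DIR"/content/images "$BACKUP_DIR"/images

Run it from cron during a quiet period and the downtime is barely noticeable.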
I'm using Iron:router and meteor-node-csv. After uploading a file, I want to read each row of it and insert it into a collection, but when I process the file the whole web page freezes (buttons don't respond). A while ago this same code worked 100% fine; after some upgrades it started behaving like this.
So, after a lot of testing, a colleague found that waitOn is a factor here.
If I process the file and try to navigate to a route that has a subscription to any other collection, it freezes.
If I process the file and try to navigate to a route without a subscription, it works perfectly.
If your file is big, then reactivity is causing this issue. There are variations of this in a few Stack Overflow questions. My recommendation, described here, is to disable reactivity while processing data. You can do that using a "guard" around reactive elements, e.g., using a session variable as described in the linked answer.
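A minimal sketch of that kind of guard using a Session variable (the template, collection, and flag names are illustrative, and processCsvFile stands in for your existing import code):

    // Client: skip reactive queries while the import is running.
    Session.setDefault('importingCsv', false);

    Template.itemList.helpers({
      items: function () {
        // While the flag is set, return nothing so Blaze doesn't re-render
        // on every insert the import performs.
        if (Session.get('importingCsv')) {
          return [];
        }
        return Items.find();
      }
    });

    // Around the import itself:
    Session.set('importingCsv', true);
    processCsvFile(file, function () {
      Session.set('importingCsv', false);   // turn reactivity back on when done
    });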
I'm using Alfresco through CMIS.
On one of our environments, we have an issue.
We want to create a folder and put some docs in it.
This works fine in all our environments except one.
In this one, we can create the folder.
But when we do a search to find the folder, the folder isn't found.
After that, I can find it with the Share GUI.
I have no error message in the Share app.
Does anyone have an idea what the issue could be?
Promoting a comment to an answer...
When using Alfresco with SOLR, you need to be aware that the SOLR index isn't quite real-time. Close to real time, sure, but it's asynchronous, so there's always a lag. (It's an eventually consistent index, not a fully real-time one.)
There's a lot of information on the Alfresco and SOLR Wiki, including the way you can query what the current lag is.
If the lag is very low (eg a lightly loaded system), you can find that SOLR will catch up almost instantly, and newly created items will show instantly in the search results. However, it's more normal to expect to have to wait a little bit, especially on more loaded systems.
If no new results are showing up even after several minutes, you'll want to follow the instructions on the wiki or the SOLR Monitoring and Troubleshooting docs to work out why and fix it.
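If your code needs the folder right after creating it, the usual workaround is to either look it up directly (e.g. by path, which doesn't rely on the search index) or poll the query for a short while. A rough OpenCMIS sketch, assuming a Java client and an already-connected session (names are illustrative):

    import org.apache.chemistry.opencmis.client.api.ItemIterable;
    import org.apache.chemistry.opencmis.client.api.QueryResult;
    import org.apache.chemistry.opencmis.client.api.Session;

    public class WaitForFolder {
        // Poll until the newly created folder becomes searchable, or give up.
        static boolean waitForFolder(Session session, String name) throws InterruptedException {
            String query = "SELECT * FROM cmis:folder WHERE cmis:name = '" + name + "'";
            for (int attempt = 0; attempt < 30; attempt++) {
                ItemIterable<QueryResult> results = session.query(query, false);
                if (results.iterator().hasNext()) {
                    return true;          // SOLR has caught up
                }
                Thread.sleep(2000);       // wait a couple of seconds and retry
            }
            return false;                 // still not indexed -- time to check the SOLR lag
        }
    }

If you already know the full path, session.getObjectByPath("/Some/Path/myNewFolder") is usually the better option, since it's a direct lookup rather than a search.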
Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
1. Get source and place in a local directory, including necessary DLLs needed for the project.
2. Get config files and rename as needed (we're storing them in a special subdirectory that isn't part of the actual application, and they are named according to use).
3. Build using Visual Studio.
4. Precompile using the command line, copying into what will be a "build" directory.
5. Copy to destination.
6. Get any necessary additional resources - mostly things like documents, images, and reports that are associated with the project (and put them into the directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm only going to copy changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in earlier steps.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions? I'll note that we're not currently using a Deployment Project; I presume it would remove some of the steps necessary in this build (like the web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, otherwise it can feel overwhelming.
First get your code compiling in one step using an automated build tool (e.g., NAnt or MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
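At its simplest, that first step can literally be one command in a small script checked in next to the solution (the solution name and configuration are placeholders):

    rem build.cmd -- the whole "one-step build", kept in source control with the project
    msbuild MySolution.sln /t:Rebuild /p:Configuration=Release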
Figure out how you want your automated build to be triggered, whether that's hooking it up to CruiseControl or running a nightly build using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice for this, because they include a lot of tools you can use to make this step easier. CruiseControl is free, and TeamCity is free up to a point, beyond which you might have to pay depending on how big the project is.
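If you go the Scheduled Tasks route rather than CruiseControl/TeamCity, the nightly trigger can be as simple as this (the path and time are placeholders):

    schtasks /create /tn "Nightly Build" /tr "C:\build\build.cmd" /sc daily /st 02:00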
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, etc.
Hope this helps.
I have a set of Powershell scripts that do all of this for me.
Script 1: Build - this one is simple; it is mostly handled by a call to msbuild, and it also creates my database scripts.
Script 2: Package - This one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - This is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as a part of packaging)
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
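The check itself is only a few lines at the top of the deploy script; something like this (the machine names are obviously illustrative):

    # Deploy.ps1 -- refuse to run on a machine that isn't on the expected list
    $allowedMachines = @('WEB01', 'WEB02', 'WEB03')
    if ($allowedMachines -notcontains $env:COMPUTERNAME) {
        Write-Error "Not deploying: $env:COMPUTERNAME is not an expected target machine."
        exit 1
    }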
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
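To make that concrete, the checked-in web.config and the machine-local override look something like this (the key names and values are just examples):

    <!-- web.config (in source control) -->
    <appSettings file="Local.config">
      <add key="ApiUrl" value="http://localhost/api" />   <!-- dev default -->
    </appSettings>

    <!-- Local.config (already sitting on the production machine, not checked in) -->
    <appSettings>
      <add key="ApiUrl" value="https://prod.example.com/api" />
    </appSettings>

The file= attribute merges the local keys over the checked-in ones; configSource, used for other sections, instead replaces the entire section with the contents of the referenced file.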
We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building Visual Studio solutions can be done by just specifying them in the main XML file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites), you can just create a new .NET class deriving from Task that overrides the Execute function, and then reference it from your build XML file.
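A skeleton of such a task looks roughly like this (the task name and properties are made up for illustration):

    using Microsoft.Build.Framework;
    using Microsoft.Build.Utilities;

    // A custom build step: derive from Task and override Execute.
    public class DeployWebSite : Task
    {
        [Required]
        public string SourceDir { get; set; }

        public string TargetDir { get; set; }

        public override bool Execute()
        {
            Log.LogMessage("Deploying from {0} to {1}", SourceDir, TargetDir);
            // ... copy files, run unit tests, build the install package, etc. ...
            return true;   // returning false fails the build
        }
    }

You then point the build file at the compiled assembly with a <UsingTask TaskName="DeployWebSite" AssemblyFile="BuildTasks.dll" /> element and call <DeployWebSite SourceDir="..." TargetDir="..." /> inside a target.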
There is a pretty good introduction here:
introduction
I've only worked on a couple of .Net projects (I've done mostly Java) but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE, it ends up making it a real pain to set up build servers down the road since you have to go do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so, nothing fancy but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development so our staging process also builds install packages for testing and eventually shipping to customers.
I suggest you break it down to individual steps because there will be times when you want to rebuild but not get latest, or maybe just need to re-stage. Our scripts can also handle building from different branches so consider that also with whatever solution you develop.
Finally we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email with any problems or if it completed successfully.
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out / gets the latest "main" build script and then launches it.
I say this because I see teams just running the latest version of the build script on the server, but either never putting it in source control or only checking it in on a sporadic basis. If you make the build process "get" from source control, it will force you to keep the latest and greatest build script in there.
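In practice the part that lives on the build server can be tiny; a sketch in batch (the SourceSafe line is only an example of a "get latest" -- substitute your VCS's own command, and note ss.exe needs its working folder set up beforehand):

    rem bootstrap.cmd -- the only script kept on the build server itself
    rem Pull the latest build scripts from source control, then hand off to them.
    ss Get "$/MyProject/Build" -R
    call C:\builds\MyProject\Build\build.cmd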
Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run both on Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs that. In the process, the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
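For the curious, the self-calling trick is roughly this shape (csproj2make.sh is a hypothetical "string mangler" that turns the <Compile Include="..."/> items into a variable list, and the recipe lines must start with real tabs):

    # Makefile (sketch): derive a second makefile from the .csproj, then run it.
    sources.mk: MyApp.csproj
    	./csproj2make.sh $< > $@

    build: sources.mk
    	$(MAKE) -f sources.mk all    # make re-invoking itself on the generated file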
We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT