Need to clean up Nexus images and keep only the last 10 images; however, in the new version there is no option for this

My organisation recently updated Nexus to OSS 3.39. In the older version we had an option to clean up old images and keep the last 5 or 10 images.
There is no such option in the updated version; it only allows me to clean up images based on their age in days, which is not exactly my requirement.
These are the options in the latest version:
Is there any way to clean up and keep only the last N images?
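The closest workaround I can think of is scripting against the Nexus 3 REST API, roughly like the sketch below (the server URL, repository name and credentials are placeholders, the lastModified ordering is an assumption, and only the first page of results is handled; a real script would also follow the continuationToken), but I would prefer a built-in option:

NEXUS=https://nexus.example.com
REPO=docker-hosted
# keep the 10 newest components per image name, delete the rest
curl -s -u admin:admin123 "$NEXUS/service/rest/v1/components?repository=$REPO" \
  | jq -r '.items | group_by(.name) | .[] | sort_by(.assets[0].lastModified) | reverse | .[10:] | .[].id' \
  | while read id; do
      curl -s -u admin:admin123 -X DELETE "$NEXUS/service/rest/v1/components/$id"
    done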

Related

OpenEdge 10.2B - impact on migration from Solaris 10 to Solaris 11

We have an application set up on OpenEdge 10.1C on Solaris 10. We are planning a migration to 10.2B and from Solaris 10 to Solaris 11.
Do we need to re-compile all programs due to the OE and OS version change?
Do we need a dump and load instead of a DB refresh?
No, you are not required to re-compile. Nor is it a requirement to dump & reload the db.
For a point-release upgrade:
shutdown the db
truncate the bi file; if you are extra paranoid, make a backup first
apply the upgrade (or change the link pointing to the upgraded install directory)
run "proutil dbname -C updatevsts"
restart the db
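In command form that might look something like the sketch below (sports2000 is just a placeholder database name and the paths are illustrative; adjust for your environment):

proshut sports2000 -by                       # shut down the database
probkup sports2000 /backup/sports2000.bck    # optional offline backup
proutil sports2000 -C truncate bi            # truncate the before-image file
ln -sfn /usr/oe102b /usr/dlc                 # or repoint the install link, see the layout below
proutil sports2000 -C updatevsts             # refresh the virtual system tables
proserve sports2000                          # restart the database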
On UNIX it is very common to have the old version and new version installed simultaneously. You can manage this very easily by using a naming scheme similar to:
/usr/oe101c # the 10.1c install directory
/usr/oe102b # the 10.2b install directory
/usr/dlc # a link to whichever one you want to be "live"
(You can also use the $DLC variable to redirect sessions, perhaps for testing purposes. Many times people use a simple shell script to set the DLC, PATH & PROPATH to dynamically switch between versions.)
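A minimal version of such a switch script might look something like this (the install paths follow the naming scheme above; the PROPATH entries are only an illustration):

#!/bin/sh
# usage: . ./setoe 101c|102b   (source it so the variables stick in your shell)
case "$1" in
  101c) DLC=/usr/oe101c ;;
  102b) DLC=/usr/oe102b ;;
  *)    echo "usage: . ./setoe 101c|102b"; return 1 ;;
esac
export DLC
export PATH=$DLC/bin:$PATH
export PROPATH=.,/app/src,$DLC/tty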
You are not required to re-compile. But once you have migrated to 10.2b and you are sure that you are not going to revert it is a "best practice" to re-compile. That way your code can start to take advantage of new features.
Having said all of that - 10.2B is very old. You really ought to be upgrading to OpenEdge 11.something (11.7.3 is current at the moment.)
If you can compile code there is no reason to stick with version 10. Upgrading to oe11 will, however, require a recompile.
You can also upgrade from 10 to 11 without dumping and re-loading. You can just add "proutil dbname -C conv1011" to the steps above.
(Dumping and re-loading may have benefits but that is a different conversation. Migration between versions or platforms is often a convenient time to do it.)
If you are going to ignore me and stick with version 10 at least go with 10.2b service pack 08. That is the very last release of OpenEdge 10 and it has numerous bug fixes and enhancements that vanilla 10.2b lacks.
Changing Solaris versions does not matter with regards to needing to recompile or dump & load.
You should, of course, still test everything and not just take my word for it.

Are there any issues when updating from PhpStorm-2016.1.2 to PhpStorm-2017.1?

I have decided to update my development PC to PhpStorm-2017.1, but before I update I want to know whether there are any potential issues that could hinder my work; I do not want to end up wasting 1-2 days re-configuring.
Will my current license work on the new version?
Will my project settings integrate with the update? (Symfony)
Will my plugins settings be kept? (Symfony)
Any other thing I need to figure out?
Answers to your questions:
1) Yes, the newly installed version will automatically pick up your current license.
2) When you update, only the software is updated. The configuration files are not touched and settings are brought across as they were on the previous version.
3) Same as answer #2.
4) Not really. Just download the latest version from their website and install it as you would normally.
Upgrading to PhpStorm 2017.1 was smooth for me (once they released some later fixes for things like the REST tool, etc.).
As for your plugins, it will depend on which plugins they are and whether there are any backwards-compatibility breaks. Look up the plugin documentation and check whether there is a version for 2017.1.
For what it's worth, the Symfony plugin works fine.
You can try the official control panel from JetBrains: https://www.jetbrains.com/toolbox/app/
Manage product updates with ease
The pace of technologies and software updates is ever-accelerating. Stay up-to-date without compromising your productivity with the Toolbox App: easily maintain several versions of the same tool, install updates, and roll them back instantly if needed.
It could be useful to patch instead of doing a complete update:
Faster updates
When updating, Toolbox App downloads and applies a patch (or even a set of patches) instead of the full package download, thus saving you time & bandwidth.
Official response from JetBrains:
It's hard to tell whether your plugins will work with 2017.1 since there are always some changes in the API that may affect some of your plugins. So it's easier to just install 2017.1 and see how it goes. The installation won't break your existing PhpStorm 2016.2 and its settings.
I believe there have been no changes to the license server, so if you have rights to 2017.1 there shouldn't be any problems.
P.S. Thank you everyone for your responses. I will be going with the official answer.

How to handle merging of branches that are not in a sequence in combination with Flyway

I just encountered the following situation:
The test-server is currently at Flyway version 1 (V1). The test-server is automatically updated (including Flyway scripts) whenever anything is pushed to the develop branch.
A developer decides to start working on a new feature on branch feature/123. This developer creates a database script (Flyway compatible) called V2__cool_feature.sql. In the meantime, another developer also starts working on a feature branch called feature/456. This developer is also in need of an update script, and names it V3__another_cool_feature.sql, because the developer knows that V2 is already used on another branch. This feature/456 branch is finished and merged, so the current scripts on the develop branch are V1 & V3. This works well and the V3 script is executed, leaving Flyway's schema_version at version 3.
The other feature branch feature/123 is also merged, which means that the develop branch contains the scripts V1, V2 & V3.
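So after both merges the migration folder on develop contains something like the following (the V1 file name here is just illustrative, the other two are the ones named above):

db/migration/V1__init.sql
db/migration/V2__cool_feature.sql
db/migration/V3__another_cool_feature.sql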
Now this is where I'm having trouble with Flyway:
The build, including Flyway, is executed and it leaves the following message:
[INFO] Database: jdbc:mysql://example.com:3306/my_schema (MySQL 5.5)
[INFO] Successfully validated 2 migrations (execution time 00:00.019s)
[INFO] Current version of schema `my_schema`: 3
[INFO] Schema `my_schema` is up to date. No migration necessary.
What I want to happen is that the V2 script is executed, and I'm not sure how to do so.
I hope I explained my problem well, if not, please leave a comment.
Ugh, I'm not a smart guy. Putting a bit more effort into my Googling lands me on the Flyway documentation, which describes exactly my problem:
Option: outOfOrder
Required: No
Default: false
Description: Allows migrations to be run "out of order".
If you already have versions 1 and 3 applied, and now a version 2 is found, it will be applied too instead of being ignored. (Even the same versioning as in my question is used >.< )
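How you switch it on depends on how Flyway is invoked; for the command-line client, the Maven plugin (as in the log above) or a flyway.conf file it is something like this (a sketch, not copied from my actual build):

flyway -outOfOrder=true migrate
mvn flyway:migrate -Dflyway.outOfOrder=true
# or permanently in flyway.conf:
flyway.outOfOrder=true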

Upgrading from Plone 3.3.6 to Plone 4.2.1 ... is blobstorage in use?

Plone Experts:
I inherited a Plone site running Plone 3.3.1. It has a Data.fs size of about 1 GB. It seemed reasonable to try to take advantage of newer features and, in particular, of blobstorage by upgrading to Plone 4.
Thus far I've successfully upgraded from Plone 3.3.1 to Plone 3.3.6 including appropriate data migration for our production usage.
Next, on a RH Linux development server, I did a fresh UnifiedInstall of Plone 4.2.1, which went smoothly. We have virtually no third-party or add-on packages, so this should be a comparatively "vanilla" installation.
Then, I copied in the Data.fs from the Plone 3.3.6 install and did the portal migration step to upgrade from 3.3.6 to 4.2.1.
That also seemed to go smoothly and I can see that I've got many files now in var/blobstorage that seem to be consuming something like 750 MB of space. Great, I thought!
However, the size of Data.fs still seems to be very close to 1 GB.
So, did the portal migration step create blobstorage but I failed to do something that allows my site to begin to actually use the blobstorage? Or is there something that I need to do to "trim" Data.fs so that it no longer contains the content that has been moved to blobstorage? (Note: I did do a pack of Data.fs but with no significant reduction in the file size) Is there a log file that I can examine that would tell me if I'm using the content in blobstorage?
Thanks for your consideration,
John
Note: as is likely obvious from my question, I'm a Plone neophyte. I'm working on Martin Aspeli's Professional Plone 4 Development book, but haven't found the answer to my questions either in there or in searches of various fora.
The default zeopack configuration only trims object revisions more than a day old. If you just ran the migration, those revisions are likely not old enough to be packed yet. You can either customize the recipe to retain a different number of days (0), or customize the zeopack script directly, and then retry packing.
http://pypi.python.org/pypi/plone.recipe.zeoserver
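For example, with plone.recipe.zeoserver the retention is controlled by the pack-days option; a hedged buildout.cfg sketch (the part name and the rest of its options depend on your buildout):

[zeoserver]
recipe = plone.recipe.zeoserver
pack-days = 0

Then re-run bin/buildout and retry bin/zeopack.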

How to test a WordPress plugin through the automatic upgrade feature

Last week I released a version of a WordPress plugin that works if the user does a fresh install; however, if they already had the plugin and upgraded it using WordPress's automatic upgrade feature, problems occurred and some of the database elements were erased. So I had to revert immediately.
I was wondering if there is a way to test the plugin through the automatic upgrade functionality beforehand, instead of having to release it and hope you get it right the first time.
I would set up a dev/local site running the previous version of your plugin. Then, copy over the latest changes (overwriting all of the files) and test things out.
In essence, that is the same thing that happens during an automated upgrade.
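A hedged sketch of that workflow, assuming WP-CLI is available on the dev site and my-plugin is a placeholder slug and version:

wp plugin install my-plugin --version=1.0.0 --activate    # the version users currently have
wp db export before-upgrade.sql                           # snapshot the tables for comparison
rsync -a --delete /path/to/new-release/ wp-content/plugins/my-plugin/
# now load the site so any activation/upgrade routines run, then compare the database state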
