Does anyone know the difference between CentOS 5.6 and CentOS 6.4?

I am upgrading from CentOS 5.6 to CentOS 6.4. Can anyone give me the main points of difference, or a link to a website that lists them?

The entire release history is at http://wiki.centos.org/Manuals/ReleaseNotes
You will need to read and combine the release notes for 5.7 through 5.10 and 6.0 through 6.4, and you may also need to take a look at the bug tracker.

Going from 5 to 6 is a major upgrade, so the list of differences is long; whole web pages are dedicated to the changes involved. Search for "centos migrate 5 to 6" and you will get plenty of hits. Note that there is no supported in-place upgrade path from 5 to 6, so plan on a clean install plus data migration.
Red Hat's Migration Planning Guide at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/pdf/Migration_Planning_Guide/Red_Hat_Enterprise_Linux-6-Migration_Planning_Guide-en-US.pdf points out the differences and what you need to plan for.

Related

Why is the list of infected versions incorrect?

My application code was recently scanned by JFrog XRay, which reported that the Bouncy Castle BKS version 1 keystore in use has a high-severity vulnerability. The version in use by my application is 1.61 (i.e. "source version = 1.61"). XRay reports that the infected versions of this library are <= 1.46 and >= 1.49, which is why XRay flagged it. That means only versions between 1.46 and 1.49 are not infected and everything else is, with 1.61 falling outside that safe range. That cannot be correct.

The NVD entry (https://nvd.nist.gov/vuln/detail/CVE-2018-5382) states that all versions up to (but excluding) 1.47 are infected, meaning the version in use (1.61) should not be on the infected list as XRay claims. There is a direct conflict between what XRay states and what the NVD states.
I have only limited contact with the administrator of the XRay vulnerability database. I've asked them to check certain things, but to no avail.
I'm hoping someone can help me understand what the problem could be so I can relay that information to the XRay administrator.
I am part of the JXRay (XRay vulnerability database) maintenance team at JFrog.
Looking at the references from NVD, in the vulnerability note released by US-CERT (https://www.kb.cert.org/vuls/id/306792/), they write that the problem is in the "BKS keystore format version 1 (BKS-V1)". That format is supported in all versions before 1.47, and support for it was brought back in 1.49 and onwards. That is why versions 1.49 and later are possibly affected (depending on the keystore format used).
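In other words, the affected set is a union of two ranges rather than a single upper bound. A minimal sketch of that logic (the parsing below assumes "1.x" version strings and only illustrates the advisory's ranges, not XRay's actual matcher):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "1.x" Bouncy Castle version string.
func minor(v string) int {
	parts := strings.SplitN(v, ".", 2)
	m, _ := strconv.Atoi(parts[1])
	return m
}

// affected reports whether a version falls in the ranges described above for
// CVE-2018-5382: <= 1.46 (BKS-V1 era) or >= 1.49 (BKS-V1 support
// reintroduced, so possibly affected depending on the format used).
func affected(v string) bool {
	m := minor(v)
	return m <= 46 || m >= 49
}

func main() {
	for _, v := range []string{"1.46", "1.47", "1.48", "1.49", "1.61"} {
		fmt.Printf("%s affected: %v\n", v, affected(v))
	}
}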
Please feel free to contact us with further questions through JFrog's support.

Artifactory upgrade from 5.4 to 6.3.3

I have Artifactory 5.4 running and would like to upgrade it to the latest available version, 6.3.3. Do I need to upgrade to some interim version first, or can I go straight to 6.3.3?
You should be able to upgrade from version 5 to 6 without the need for an interim version.
Before doing so, however, it is strongly suggested that you
Do a complete system export. If at any time you decide to roll back to the older version, you can use the export to reproduce the system in its entirety. (A sketch of scripting the export follows below.)
Back up your database.
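If you want to script the export step, Artifactory's REST API has a system export endpoint. A minimal sketch in Go, assuming admin credentials and an export path writable by the Artifactory server (the URL, credentials, and path below are placeholders; check the REST API documentation for the full set of export options):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder export settings; includeMetadata keeps artifact metadata
	// in the export so the system can be reproduced in its entirety.
	body := []byte(`{"exportPath": "/var/backup/artifactory", "includeMetadata": true}`)

	// POST /api/export/system triggers a full system export.
	req, err := http.NewRequest("POST",
		"http://localhost:8081/artifactory/api/export/system",
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("admin", "password") // placeholder admin credentials
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("export request status:", resp.Status)
}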

Not getting networking metrics from scollector on CentOS 7.1

We have bosun running on CentOS 6.4 and many nodes on that OS as well. We added some CentOS 7.1 nodes, and while we get basic metrics like os.cpu, no network-related metrics appear.
I recompiled the latest scollector on 7.1 and pushed that out, but it didn't help. Do I need to recompile bosun on 6.4 as well, or is it all backwards compatible?
Thanks
Ken
Right now the interfaces it will collect from are restricted by a regular expression:
https://github.com/bosun-monitor/bosun/blob/master/cmd/scollector/collectors/ifstat_linux.go
var ifstatRE = regexp.MustCompile(`\s+(eth\d+|em\d+_\d+/\d+|em\d+_\d+|em\d+|` +
	`bond\d+|team\d+|` +
	`p\d+p\d+_\d+/\d+|p\d+p\d+_\d+|p\d+p\d+):(.*)`)
The problem is that, because of aggregation, we need to be clear whether an interface is a physical interface, a tunnel, a team/bond, etc. We don't want to accidentally pick up virtual interfaces in os.net.bytes, as that messes up aggregation.
There are a couple of PRs open, but they need to address the categorization issue and have not yet. So the immediate workarounds are: edit the code to match what you need, work on a PR that makes the pattern configurable with a category, or rename your interfaces.
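This is very likely what is biting you on 7.1: CentOS 7 defaults to systemd's "predictable" interface names (enp0s3, ens192, and so on), which that pattern does not match. A quick way to check your own interface names against it (the /proc/net/dev-style sample lines below are just examples):

package main

import (
	"fmt"
	"regexp"
)

// The same pattern scollector uses in ifstat_linux.go.
var ifstatRE = regexp.MustCompile(`\s+(eth\d+|em\d+_\d+/\d+|em\d+_\d+|em\d+|` +
	`bond\d+|team\d+|` +
	`p\d+p\d+_\d+/\d+|p\d+p\d+_\d+|p\d+p\d+):(.*)`)

func main() {
	// eth0 is a legacy name; enp0s3 is a typical CentOS 7 predictable name.
	for _, line := range []string{
		"  eth0: 1234 0 0 0 0 0 0 0 5678 0 0 0 0 0 0 0",
		"  enp0s3: 1234 0 0 0 0 0 0 0 5678 0 0 0 0 0 0 0",
	} {
		fmt.Printf("%q matches: %v\n", line, ifstatRE.MatchString(line))
	}
}

Running this prints true for eth0 and false for enp0s3, which would explain why the 7.1 nodes report nothing. Booting with net.ifnames=0 on the kernel command line to restore ethX-style names is one way to do the rename workaround mentioned above.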

How do I connect via ODBC to SQL Anywhere 12?

I'm using SQL Anywhere 5.0 to create an ODBC connection for a third-party software house's application.
It ran on Windows XP without any errors or problems, but on 8 April 2014 Microsoft ended support for Windows XP, so our company migrated all client PCs to Windows 7 x64.
Now the application always hangs and closes itself.
I would like suggestions on what I can do. I'm trying to test under SQL Anywhere 12.0, but I cannot configure ODBC so that the application opens.
I have also attached a picture (ODBC from rtdsk50.exe).
The start command is "C:\Starlims8\SqlAny50\Win32\rtdsk50.exe -d -c4000".
Please advise; thank you in advance.
According to the image, you are using the version 5 runtime engine. That software is almost 20 years old, so I am not surprised you are having trouble getting it to run on Windows 7.
It doesn't look like you are providing a database file on the start line in the ODBC configuration. Are you providing it somewhere else? What does the actual connect string look like in the application?
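For reference, a SQL Anywhere connect string usually looks something like this (every name and path here is a placeholder):

UID=dba;PWD=sql;ENG=starlims;DBF=C:\Starlims8\data\starlims.db

UID and PWD are the database user and password, ENG names the engine to connect to, and DBF points at the database file to start.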
Version 12 will probably not work, since your application was written for version 5. However, to even have a chance, you will have to rebuild the version 5 database in version 12 format and then replace the SQL Anywhere version 5 ODBC data source with a version 12 one of the same name.
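The usual rebuild route is an unload/reload. A rough sketch, assuming the version 5 tools are still available and using placeholder paths (the exact dbunload switches vary between versions, so check dbunload -? first):

rem Unload schema and data with the version 5 tools
dbunload -c "UID=dba;PWD=sql;DBF=C:\Starlims8\data\app5.db" C:\unload

rem Create an empty version 12 database
dbinit C:\Starlims8\data\app12.db

rem Reload the unloaded schema and data into the new database
dbisql -c "UID=dba;PWD=sql;DBF=C:\Starlims8\data\app12.db" C:\unload\reload.sql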
Is this an application that you or your company wrote, or is it a third-party app you purchased? Is it a 16-bit or 32-bit application?
Also, I would suggest setting the application up to run in compatibility mode, as well as running it as an administrator.

Upgrading from Plone 3.3.6 to Plone 4.2.1 ... is blobstorage in use?

Plone Experts:
I inherited a Plone site running Plone 3.3.1. It has a Data.fs of about 1 GB. It seemed reasonable to try to take advantage of newer features, in particular blobstorage, by upgrading to Plone 4.
Thus far I've successfully upgraded from Plone 3.3.1 to Plone 3.3.6 including appropriate data migration for our production usage.
Next, on a RH Linux development server, I did a fresh UnifiedInstaller installation of Plone 4.2.1, which went smoothly. We have virtually no third-party or add-on packages, so this should be a comparatively "vanilla" installation.
Then, I copied in the Data.fs from the Plone 3.3.6 install and did the portal migration step to upgrade from 3.3.6 to 4.2.1.
That also seemed to go smoothly, and I can see that I now have many files in var/blobstorage that seem to be consuming something like 750 MB of space. Great, I thought!
However, the size of Data.fs still seems to be very close to 1 GB.
So, did the portal migration step create blobstorage, but I failed to do something that allows my site to actually begin using it? Or is there something I need to do to "trim" Data.fs so that it no longer contains the content that has been moved to blobstorage? (Note: I did pack Data.fs, but with no significant reduction in file size.) Is there a log file I can examine that would tell me whether I'm using the content in blobstorage?
Thanks for your consideration,
John
Note: as is likely obvious from my question, I'm a Plone neophyte. I'm working through Martin Aspeli's Professional Plone 4 Development book, but haven't found the answer to my questions either there or in searches of various fora.
The default zeopack configuration only trims objects more than a day old. If you just ran the migration, all of those objects are likely not going to be packed. You can either customize the recipe to retain a different number of days (0), or customize the zeopack script directly, and then retry packing.
http://pypi.python.org/pypi/plone.recipe.zeoserver
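A minimal sketch of the recipe route, assuming your buildout uses plone.recipe.zeoserver and that its pack-days option is available in your version (the part name here is a placeholder; check the recipe's documentation):

[zeoserver]
recipe = plone.recipe.zeoserver
# Keep 0 days of history when packing, so the hours-old
# pre-migration object revisions get trimmed as well.
pack-days = 0

After re-running buildout, run bin/zeopack again; Data.fs should shrink once the pre-migration revisions (which still hold the content now living in blobstorage) are removed.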
