When I run composer updates I'll occasionally get messages that packages are abandoned and that I should use a different one instead, like "Package webflo/drupal-core-require-dev is abandoned, you should avoid using it. Use drupal/core-dev instead." I don't have experience with Composer, so I'm curious what is seen as the best practice for replacing outdated packages.
Where do these messages come from? I'm unsure if the source is always reliable.
I think the best practice is quite clear from the message: "you should avoid using it". How and when to do this is not as clear. Abandoned packages will not receive updates, but Composer cannot tell you how difficult the transition to the recommended alternative will be. It might be that all you have to do is replace the package because it was only renamed, or you might have to modify your code as well.
In your case, webflo/drupal-core-require-dev only contains a composer.json, and its required packages match what the alternative drupal/core-dev provides. That means replacing the package should be as easy as changing the name in your composer.json and then running composer update drupal/core-dev.
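A minimal sketch of the swap (the version constraint below is an assumption; match it to your Drupal core version):

    # Option 1: edit composer.json by hand, replacing the
    # "webflo/drupal-core-require-dev" entry with "drupal/core-dev",
    # then update only that package:
    composer update drupal/core-dev --with-dependencies

    # Option 2: let Composer rewrite composer.json for you:
    composer remove --dev webflo/drupal-core-require-dev
    composer require --dev "drupal/core-dev:^8.8"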
For packages where the answer is not as straightforward, you have to rely on automated or manual tests to see if everything still works. Static code analysis tools might help as well. You will have to set them up before you make the change, so that you can see how their output differs and fix any new issues that come up.
You should make the switch to the new dependency as early as possible. Leaving the old one in will likely cause more work when you eventually replace it, and it might pose a security risk (if it is outdated and insecure). I understand that this is not always possible; using something like roave/security-advisories to tell you when there are known security issues in a package might help if you have to postpone the switch, by giving you some sense of security.
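roave/security-advisories is a metapackage whose conflict rules make Composer refuse to resolve dependency versions with known vulnerabilities. A minimal sketch (newer versions of the package use the dev-latest branch; older setups used dev-master):

    # Installation or updates now fail whenever a required package version
    # has a known security advisory:
    composer require --dev roave/security-advisories:dev-latest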
The title more or less says it all.
From what digging I've managed to do, the answer to whether this is possible seems to be "no", but the whole GNU build tools platform is something I know very little about, so I could easily be missing some detail.
Context
I'm trying to set up an as-hermetic-as-I-can build of something that needs gnutls, and I'm trying to minimize the list of things that need to be installed prior to the build (i.e. don't use the thing, or build the thing from source). One attribute of the build process is that the only accessible build artifacts from gnutls will be the absolute minimum needed to call and link it (more or less the .a files and the includes directory). So even if documentation is generated, it won't ever be accessible. As such, a dependency on a doc generation tool is something I strongly want to avoid.
What I've dug up
It seems, based on this merge-request comment, that gnutls requires gtkdocize to be installed regardless of whether it will be used. The context there seems to imply that this is by design, to make the gnutls development process (as opposed to the use of gnutls) less error-prone. (I'm really hoping I'm misreading things or that something has changed since then.)
This seems like an odd choice, as it takes a problem experienced only by developers and solves it by requiring anyone who wants to build the library (even as a transitive dependency) to take on that dependency, even if they will actively suppress its use. (This seems particularly odd given that it expands the potential attack surface of a security project.)
I need an old copy of the software Postnuke. I'm aware it's outdated and discontinued, but I need to run it locally to access and convert a site that used to run on this software.
I managed to find it on SourceForge (the 0.76 version), but it keeps hanging during installation, and at the data-insertion step (around 80%) I'm getting errors that don't seem fixable to me.
If any of the devs are around, I'd really appreciate any assistance they could give me on getting the "Set Login" stage of the installer working, specifically the start_postnuke() function: it's missing the language and other variables from the PNconfig variable, which is preventing it from installing.
I'm aware this is tagged as a Zikula question, but it's the only way I can find to try to contact those who I assume are the developers of Postnuke.
You are right: Postnuke is dead. It died so long ago that nobody has any expertise anymore. I doubt very much that installing the software is possible, or truly necessary. You must have a database with the info you are trying to access; simply access it with whatever tools you are most comfortable with, and pull and modify the data as needed. (FYI, I'm a former Postnuke dev and current Zikula dev. I've used PN since 0.62, so I know what I'm talking about.)
If you really want to give it a go at getting a working installation, I would recommend using the same server stack components that were "modern" at the time 0.76 was released: Apache, PHP, MySQL. It will probably work then.
Since that time, a lot of PHP functions have been made obsolete, and even the syntax has changed (e.g. the array shorthand notation).
But if you use a stack that's contemporary to that version, it should work.
We have 2 solutions.
SolutionA is an internal solution containing reusable code that we share across our products.
For the sake of the question, it has only two projects, NugetProjectA and NugetProjectB, the latter of which has a project reference to NugetProjectA.
SolutionB is a solution that references SolutionA's packages via NuGet.
The workflow that troubles us is:
add a new method in NugetProjectA
add a new method in NugetProjectB that uses the method from the previous step
publish a new version of NugetProjectB
update the NuGet reference in a project of SolutionB
execute the newly added NugetProjectB method in that project
Since we didn't publish an updated version of NugetProjectA, the last step will fail.
This seems to be an easy problem to solve. But imagine this with many more projects in SolutionB and many more in SolutionA.
Your question is vague and open-ended, so there isn't a simple, concise answer. Honestly it could easily be a multi-post blog series or even a short book. I'll try to give some general suggestions, but I'm not going to go into a lot of details to avoid making this answer too long.
Mono-repo vs multiple repos
Just as with the microservice craze from a few years ago, you should first ask yourself if you need it. Having all your source code in a single repository and solution might make it feel "legacy", and it sure seems nicer to have a 3-minute build of a component rather than a 30-minute build of the whole system when checking in a 1-line bug fix. But is the benefit of a shorter CI build worth the problems you listed in your question?
At the other extreme, Google famously runs almost everything out of a single repository, but they have teams of people doing nothing but managing their mono-repo and build system, because a large number of developers working in a single repo have a different set of problems. Those are engineers who are not working on customer applications, but if their work makes other teams more productive, it can be worthwhile.
You need to figure out what's best for you, but given the problem you described in the question, maybe you have too many repos/solutions and your processes could be faster with fewer mistakes if you consolidate a little.
Automation
The next best thing is to just use good engineering practices. Automate as much as possible to reduce the risk of human mistakes, including automating processes that validate that manual processes are followed correctly.
Have a CI or CD pipeline push packages, so you can't forget to push dependencies, like in your example.
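As a sketch, the push step can be as simple as packing the whole solution and pushing everything it produced (the solution name and feed URL are placeholders):

    # Pack every project in the solution, then push all resulting packages,
    # so no dependency can be forgotten:
    dotnet pack MySolution.sln -c Release -o ./nupkgs
    dotnet nuget push "./nupkgs/*.nupkg" \
        --source https://my-feed.example/v3/index.json \
        --api-key "$NUGET_API_KEY"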
There are tools that will automatically generate a version number from the git history based on the number of commits since the versioning config file was checked in. This prevents you from forgetting to increment the package version when you check in a change to the package.
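Nerdbank.GitVersioning is one such tool (named here as an example, not a recommendation); it reads a version.json committed to the repository and appends the commit height:

    # With version.json at the repo root, builds produce versions like
    # 1.2.X, where X is the number of commits since the version field
    # last changed:
    cat > version.json <<'EOF'
    {
      "version": "1.2"
    }
    EOF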
Have tests to make sure the package works before pushing it. These could be as simple as making sure that dependencies exist, but you could also go further and run test programs that use the package, to ensure that backwards compatibility isn't broken and that new features actually work as expected. Run these tests before pushing the package, so that if they fail, you don't push a bad package.
You can have a pipeline that automatically creates pull requests on solutions that use a package whenever a new version of that package is published. You can even merge them automatically, but then you'll want really good tests to avoid a buggy package cascading into a huge mess, particularly if you deploy automatically on successful builds.
Be creative and think about other ways you can automate your processes to make things easier for yourself and your team.
But be pragmatic about what you automate. There's no point spending a week full time to automate something that only takes you 5 minutes once a month to do manually. But if the manual process sometimes goes wrong and causes hours or days of effort to fix, then that makes automating the process more worthwhile.
Use modern features
The .NET ecosystem has changed a lot in the 2-3 years since .NET Core was announced. Packing with MSBuild (either dotnet pack or msbuild -t:pack) is now easier than writing .nuspec files and making sure you do the right things to get project dependencies packed as NuGet dependencies, get all files in the right places, and so on. If your class library uses SDK-style projects, there's nothing extra to do. If your project is a traditional project, you'll need to use PackageReference for your NuGet dependencies (or specify <ProjectStyle>PackageReference</ProjectStyle> as an MSBuild property in your project file), and then reference the NuGet.Build.Tasks.Pack package.
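To illustrate, once a project is set up as described, packing is a single command either way:

    # SDK-style project: nothing extra to configure:
    dotnet pack -c Release

    # Traditional project that references NuGet.Build.Tasks.Pack:
    msbuild -t:pack -p:Configuration=Release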
Version the application, not each package
Like the mono-repo vs multiple repos point, consider versioning all packages in the application with a single version number, rather than versioning each package individually. Yes, this means you'll sometimes (or maybe often) publish a new version of a package that has no code changes compared to the previous version, but it simplifies the release considerably. Coupled with packing with MSBuild as in the section above, you can create a Directory.Build.props file in your repository root and set the <Version> property to your app version, and all projects in the repo will have the same version. So, when you're ready to release, bump the version in a single file and every project and every NuGet package will have the same version.
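A minimal Directory.Build.props along those lines (the version number is, of course, a placeholder):

    # Directory.Build.props is picked up automatically by MSBuild for every
    # project in the directories below it:
    cat > Directory.Build.props <<'EOF'
    <Project>
      <PropertyGroup>
        <Version>2.1.0</Version>
      </PropertyGroup>
    </Project>
    EOF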
Summary
In an ideal world, each component would be reusable in different applications, live in a separate source code repository, and have each package individually versioned using semantic versioning. But in the real world this adds a lot of development-time complexity. Your customers may be happier to get bug fixes and new features more quickly, even if the version numbers of packages are less meaningful. So, make data-driven decisions. If you're frequently having dependency version problems, reduce your dependencies so there are fewer things that can go wrong.
Don't get me wrong, there are many good reasons to have multiple projects, multiple solutions, multiple repositories. Just make sure that the reason you're doing them is because it helps your team/company be more productive, not for idealistic reasons that are slowing you down or causing bugs.
I am working as a Data Scientist for a small start up and we are using R as part of our platform for analysis, dashboards etc. Therefore, I need to ensure that we maintain security with each package we use and load.
I have looked around and done extensive searching and have come across the following links:
This is the official RStudio blog security update page.
This blog post shows how you can implement rJava to help with those packages that require it, though it does state that '...the integrity & safety of the R package ecosystem is still in the “trust me, everything’s 👍!!”'
This post gives some good advice for package security, but basically boils down to: if you get it from CRAN or another trusted source then it should be ok.
The CVE site lists vulnerabilities, though the last one was back in 2017.
However, all the above links essentially say the same thing: "if it's from CRAN (or similar), then it is probably fine". This might indeed be the case, but I was hoping for something a bit more rigorous. Has anyone else come across this issue with production R deployments?
If someone could direct me to where I might find more information on checking for security updates, breaches, and changes for R packages, or on how to go about testing the security myself, I would be very grateful.
Thanks!
Is there a standard R community resource for keeping up to date on known bugs or bug fixes for packages? My current approach is rather manual. (NB: I'm restricting this to CRAN - see Note 1.)
My use case is basically bug surveillance and the management of package updates. I've been averaging a couple of bug discoveries each month for a while (which I duly report to the authors ;-)). Since a lot of my work is done with virtual machines, I tend to update the VM images when I have a good handle on the bug status of the necessary packages. When a bunch of bugs are fixed, I can remove my workarounds, which is great, and I update the images. When I discover an outbreak of bugs, I don't create a new image.
Here are the sources I'm currently using:
NEWS files: Many, but not all, packages have NEWS files. These are certainly a helpful place to start.
Package home page: Some packages do not have a NEWS file on CRAN, but separately post a change log on the author's site.
R project-hosted mailing lists
Google Groups for packages
Personal communication with package authors
Bug tracking for packages (e.g. a developer may use Bugzilla)
It's one thing to be the first to discover a bug (I grant that bugs happen to all of us); it's another to belatedly discover a bug that is either already known or, better yet, already fixed. Both slow down my own coding, but better bug surveillance (maybe we need a cdc4R package :)) would significantly reduce the impact. Without a standard update-alerting system (e.g. an extension to update.packages() that reports which packages could be updated, with links to information on what's changed), it's the user's job to seek out this information.
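To make the idea concrete, here is a rough sketch of such a report using nothing but base R (illustrative only; note that news() can only show the NEWS shipped with the version you already have installed, not the changelog of the pending update):

    # List installed packages with newer versions on the repositories,
    # then print whatever NEWS each one ships locally:
    Rscript -e '
      up <- old.packages()
      if (!is.null(up)) {
        print(up[, c("Installed", "ReposVer")])
        for (p in rownames(up)) {
          db <- tryCatch(news(package = p), error = function(e) NULL)
          if (!is.null(db)) print(db)
        }
      }
    '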
As such a user, trying to seek out this information, is there some standard resource I've overlooked in the list above? For instance, is there an R mailing list where it's common for developers to post their changes and bug fixes? Or is there a site that aggregates such posts, posts tests (CRAN posts R CMD CHECK output, it seems), or that gives some other feedback?
A few additional notes on other resources, for others' benefit:
I see that CRANberries has a terse diff summary for packages, which is new to me. (I am inspired to consider grepping for "bug" or "fix" in the diff output.)
bug.report() in R is a good way to send a message to R Core or the email address of a package maintainer.
Several testing packages worth considering are testthat, RUnit, and svUnit.
My personal "quick test" is to simply use digest to verify that results match, without having to test equality of very large objects.
Note 1: I'm tagging this cran because it's impossible to manage the universe of all R packages. An individual package author can distribute a package wherever they like, use whatever mailing list or bug-tracking system they like, etc. However, that's outside the "mainstream" for R. Were I to release a package and alert users to changes, bugs, and bugfixes, I'd go with CRAN + NEWS + Bugzilla + Google Groups + R-Forge (and/or RForge), etc. But is there another standard reporting mechanism that is missing from this list?
In some sense, this note also serves to ask if there's a mechanism that developers are encouraged to use. I suspect there is no standard, as packages by R Core members seem to do many different things regarding bug and change reporting.
Note 2: I'm also adding administration (though something else may be more apropos), since this also relates to administering R. For reproducibility, administration of packages is quite important; when there are multiple users or more moving pieces, keeping aware of bugs and fixes becomes an administrative task, as well as an important consideration for development that depends on the external packages. If another tag, e.g. system-administration is more appropriate, I'm open to a change.
Not a complete answer but here are some thoughts.
In the case of data.table, we track bugs (and feature requests) on R-Forge, here. I imagine you could query R-Forge's tracker (programmatically) for all packages hosted there, to add to your list anyway. That web tracker is where bug.report(package="data.table") points (not just the maintainer's email address).
Also, anyone can subscribe to any <pkgname>-commits@lists.r-forge.r-project.org mailing list to receive a unified diff and commit message (at the time of commit) for each project on R-Forge. I'm not aware of a general mailing list spanning every commit to every R-Forge project, though.
At the top of ?data.table there is a link to up-to-the-minute NEWS. This is how we communicate to users what is in the latest version (and in development) should they upgrade. That link updates in real time; i.e., "up to the minute" is meant literally. But they do have to check there!