My workplace's web cache sometimes caches bad versions of jars from Maven Central. To get the correct version, I have to explicitly request the file with caching bypassed (e.g. with wget --no-cache).
However, there does not appear to be an obvious way to tell sbt/Ivy not to use the web cache... is this possible, and if so, how do I do it?
This is happening too often for manual intervention to scale.
Put this in ~/.sbt/0.13/proxy.sbt (replace 127.0.0.1 with your proxy's address):
val WorkaroundShittyProxy = {
  // Flip the JVM-wide default so that future URLConnections (including the ones Ivy opens)
  // do not use caches; with the stock JDK HTTP handler this results in no-cache request headers.
  new java.net.URL("http://127.0.0.1/").openConnection().setDefaultUseCaches(false)
}
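To sanity-check that the setting took effect, you could run something like this from the sbt console (the URL is an arbitrary example; openConnection() does not send a request):

// should print false once proxy.sbt has been loaded
println(new java.net.URL("https://repo1.maven.org/maven2/").openConnection().getDefaultUseCaches)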
So, I am trying to set up a CI/CD pipeline with the s4sdk. I successfully completed all the steps described in this blog. Everything seems to be running smoothly, however my build is failing with the following error message:
The following artifacts could not be resolved: com.sap.xs2.security:security-commons:jar:0.28.6, com.sap.xs2.security:java-container-security:jar:0.28.6, com.sap.xs2.security:java-container-security-api:jar:0.28.6, com.sap.security.nw.sso.linuxx86_64.opt:sapjwt.linuxx86_64:jar:1.1.19: Could not find artifact com.sap.xs2.security:security-commons:jar:0.28.6 in s4sdk-mirror (http://s4sdk-nexus:8081/repository/mvn-proxy/)
Now, this error message makes sense to me, since I remember downloading these artifacts from the SAP download center, so they are not available on Maven Central.
I think this error can be resolved by manually uploading those artifacts to the Nexus server, but I don't know how. According to the Nexus documentation, there is a web UI reachable under http://<cx-server-ip>:8081, but it is somehow not responding.
I can confirm with docker ps that both the Jenkins and Nexus containers are running and that the Nexus container is listening on TCP 8081. I am also able to reach the Jenkins frontend to configure and run my pipeline.
What am I missing? Is uploading the missing artifacts to the Nexus the right approach? Any help is appreciated.
The Nexus container you see acts as a download cache and is by design not accessible from the outside, to prevent accidental changes to it. Also, its life cycle is controlled by the cx-server script, so even if you installed packages there manually, they would be gone once you upgrade Jenkins.
I think the best way to handle this would be to set up another Nexus instance where you install the required packages, and to configure the pipeline to use that as described here (mvn_repository_url). This Nexus needs to be configured as a mirror for Maven Central. We don't have specific docs on how to do that, but this post describes a similar setup.
In this setup, you might want to disable the download cache, as it is redundant (set cache_enabled to false).
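For illustration only, the relevant server.cfg entries could look roughly like this, assuming the usual key=value form of that file; the repository URL is a placeholder for your own Nexus instance:

# point the pipeline at your own Nexus mirror and disable the redundant sidecar cache
mvn_repository_url="http://your-nexus.example.com:8081/repository/maven-public/"
cache_enabled=false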
I hope this helps.
Kind regards
Florian
The sidecar Nexus acts as a read-only cache for Maven and npm artifacts on the host (and agents) where cx-server is running. By default it looks up artifacts from Maven Central and the default npm registry. In the current implementation, the cache is deleted entirely after stopping cx-server, so all of its internal state is lost.
If you want to use custom sources, you can set them in server.cfg via mvn_repository_url and npm_registry_url. This is documented in the operations guide, which you can find here: https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/doc/operations/operations-guide.md
In your case, you have to specify a Maven repository which includes the dependencies in question.
So far I have found a bundle that uses memcache as a translation source, but I haven't found anything on how to move the translation cache from disk storage to a service or directly into memcache.
I have also looked at the framework's options, but I haven't found anything useful (or I'm too stupid to use Google ^^).
I need to move the cache files to memcache for deployment reasons.
I have multiple application servers.
Storing the translation cache etc. on disk is slow and painful when I deploy software (the PHP processes on the production app servers need to be restarted). It would make my life easier if that data were stored in memcache, as I could simply flush memcache to reset the translations.
Did anyone ever try this?
What first comes to mind is to write a console command that would use one Loader (for example, \Symfony\Component\Translation\Loader\XliffFileLoader) together with a Dumper (something implementing \Symfony\Component\Translation\Dumper\DumperInterface, like a MemcacheDumper from that bundle).
In your command you would load translations from one source with the loader (in the form of a \Symfony\Component\Translation\MessageCatalogue) and then dump them into the other.
Our CloudBees Jenkins SBT builds spend a lot of time re-downloading a considerable number of third-party jars any time we get a clean VM. If we could download the jars once and never again into a shared cache, that would speed things up wonderfully.
It would seem our WebDAV repo would fit the bill. The only issue I can think of is SBT's lock file, which should prevent contention between multiple builds, though I'm not sure whether that works on a shared drive (this suggests maybe not). Might there be other issues that could trip us up?
An alternative might be to use our CloudBees Artifactory server as a proxy for third-party jars and then mount Artifactory via WebDAV, though that sounds more complicated, and this suggests Ivy might still copy files from WebDAV into its cache (which is still better than downloading them into the cache).
Thanks.
I heard people are saving the resulting artifacts of SBT builds to a Maven repo (I think this may help: https://cloudbees.zendesk.com/entries/20836643-sbt-publish-to-repositories).
Note that the realm of the credentials must exactly match the realm of the server (https://groups.google.com/forum/?fromgroups=#!searchin/simple-build-tool/cloudbees/simple-build-tool/ovoxXM8fe7A/dAFQhdpcIvkJ).
Also be sure to create the target folder before uploading the jars: AFAIK, WebDAV requires explicit directory creation.
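For reference, a minimal build.sbt sketch of the publishing side, assuming sbt 0.13-style settings; the realm string, host, user, and URL are placeholders to be replaced with your repository's actual values:

// the realm string must match the server's authentication realm exactly (see the note above)
credentials += Credentials("Example Realm", "repo.example.com", "ci-user", "ci-password")
// publish to, and resolve from, the shared repository
publishTo := Some("company-repo" at "https://repo.example.com/releases/")
resolvers += "company-repo" at "https://repo.example.com/releases/"

Credentials can also be loaded from a file, e.g. Credentials(Path.userHome / ".ivy2" / ".credentials"), which keeps passwords out of the build definition.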
We upload artifacts to Nexus through the file protocol with the Maven deploy plugin. Sometimes those artifacts do not appear directly in the Nexus web interface; I have to do 'expire cache' and refresh the page. Moreover, this causes builds dependent on these artifacts to fail.
I guess this is because we deploy through the file protocol. Is there a way to prevent this? I saw the 'Not Found Cache TTL' setting in the Nexus interface, but I'm not sure I understand the doc. If I set it to zero, will this work?
Thanks
PW
Deploying directly to the file system should only be used in extreme cases such as bulk manipulations or imports. In order to make Nexus fully recognize the changes on disk, you would need to expire the cache, and then you may also have to rebuild the metadata. Both of these can be triggered from the repository screen. If you want the artifacts to be searchable, you would have to fire off the indexer task as well.
All of those things happen automatically when you deploy via http/https directly to Nexus, which is the way it is intended to be used.
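As an illustration of the http route, a single artifact can be pushed with the deploy plugin's deploy-file goal; the URL, repository id, and coordinates below are placeholders, and the repository id must match a <server> entry with credentials in your settings.xml:

mvn deploy:deploy-file \
  -Dfile=target/my-lib-1.0.0.jar \
  -DgroupId=com.example -DartifactId=my-lib -Dversion=1.0.0 -Dpackaging=jar \
  -Durl=http://nexus.example.com:8081/repository/releases/ \
  -DrepositoryId=example-releases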
Background
I develop a web application that lives on an embedded device. In order to make dev times sane, frontend development is done using apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and it requires updating those scripts whenever we add a new dynamic resource.
Problem
I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device.
Ideally, I want a software package that will do this for me (for Windows, or buildable on cygwin). I can deal with forcing apache to do it with PHP, but I'm unsure how to configure it to make that happen. I've looked at squid and privoxy, but neither of them seems to do what I want.
Any ideas? I'd rather not have to roll my own.
Varnish is now available in cygwin, see:
Installation instructions: http://varnish-cache.org/trac/wiki/VarnishOnCygwinWindows
I think what you want is varnish.
Now that I've looked at varnish, I understand that what I actually want is a special case of a reverse proxy, and that squid can be configured to do what I need. (With the added bonus of having it available as a cygwin package.)
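For anyone who prefers to stay with Apache instead, the "serve the static file if it exists, otherwise proxy to the device" behaviour can be sketched with mod_rewrite and mod_proxy; the document root and device address below are placeholders:

# assumes mod_rewrite, mod_proxy and mod_proxy_http are loaded
DocumentRoot "/srv/static-docs"
RewriteEngine On
# if the requested path does not resolve to an existing file, proxy the request to the embedded device
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule ^/(.*)$ http://192.168.0.10/$1 [P]
ProxyPassReverse / http://192.168.0.10/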