Why is Artifactory downloading massive amounts of data from Fastly?

We have Artifactory deployed locally. Our network team reports that Artifactory has begun downloading large amounts of data from Fastly, to the extent that it is having a major impact on our network. They report,
a ton of data is pulled from 199.232.192.209 - SKYCA-3 and that's Fastly
It happened yesterday between 3:30 and 9:30, and started again today at 10:00.
Can anyone tell us why Artifactory is doing this, and how can we control or stop it?

This is most probably the result of a build (or another automated process) hitting a remote repository in Artifactory, requesting artifacts that are not cached and thereby generating outgoing requests to the proxied external repository.
It is common for public repositories to use services such as Fastly to serve requests.
The best way to troubleshoot this is to look at the Artifactory request log and see who generated this load of requests and which repository was used.
In addition, the Artifactory log file should contain logging for such outgoing requests, for example:
2021-05-20 21:11:45,306 [http-nio-8081-exec-2] [INFO ] (o.a.r.HttpRepo :443) - jcenter downloading https://jcenter.bintray.com/org/cfg4j/cfg4j/3.3.2/cfg4j-3.3.2.jar 36.17 KB
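To narrow it down, here is a minimal sketch (assuming a default Artifactory 7.x directory layout; the log paths and the jcenter repository name are taken from the example above and may differ in your installation):
# Watch outgoing remote-repository downloads as they happen (path assumed)
tail -f $JFROG_HOME/artifactory/var/log/artifactory-service.log | grep downloading
# See which clients and paths generated the load (request log path assumed)
grep jcenter $JFROG_HOME/artifactory/var/log/artifactory-request.log | tail -n 100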

Related

JFrog Artifactory high availability and maintenance

We are using a JFrog Artifactory self-hosted instance with a license for our project, and many customers use it for their package and binary management.
Since this is hosted in our private self-hosted environment on a Linux platform, we regularly need a maintenance window at least 2 times a month to apply patches to our servers. So we are considering high availability for our currently running JFrog instance, which should resolve this downtime during maintenance. We are also looking for guidance on some management scenarios below and couldn't find anything helpful in the docs.
How can the JFrog server instance's service status be monitored, along with auto-restart if the service is in a failed state after a server reboot?
Is there any way to set and display a notification message to customers regarding scheduled maintenance?
How can we enable high availability for JFrog Artifactory and Xray?
Here are some workarounds you can follow to mitigate the situation.
To monitor the health of the JFrog services, you can use the REST API below:
curl -u <username>:<password> -XGET http://<Art_IP>:8046/router/api/v1/topology/health -H 'Content-Type: application/json'
If you are looking for a more lightweight check, you can use:
curl -u <username>:<password> -XGET http://<Art_IP>:8081/artifactory/api/system/ping
By default, the systemd service scripts check the availability of the services and restart them when they detect a failure. The same applies after a system restart.
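As a quick way to confirm this on your host (a hedged sketch; the unit name artifactory.service matches the default package installation and may differ in yours):
# Check that the service starts on boot and what its restart policy is
systemctl is-enabled artifactory.service
systemctl show artifactory.service -p Restart
# Inspect the current state, e.g. after a reboot
systemctl status artifactory.service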
There is no option for a pop-up message; however, you can set a custom message as a banner in Artifactory. Navigate to Administration -> General settings -> Custom Message.
When you add another node, Artifactory/Xray becomes a cluster that balances the load (or acts as a failover); however, it is the responsibility of the load balancer/reverse proxy to route traffic between the cluster nodes according to the availability of the backend nodes.
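As a hedged illustration of that availability check, the load balancer (or a simple script on the proxy host) can probe each node with the same ping endpoint shown above; the node hostnames here are placeholders:
# Probe each cluster node and print its HTTP status (hostnames are assumptions)
for node in art-node1 art-node2; do
  curl -s -o /dev/null -w "$node %{http_code}\n" http://$node:8081/artifactory/api/system/ping
done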

How to set up, in a Dockerfile, an nginx in front of the same container (Google Cloud Run)? I tried making a python server (odoo) handle HTTP/2

Thanks in advance. My question is: how do I set up, in a Dockerfile, an nginx in front of a container? I saw other questions [5], and it seems that the only way to allow HTTP/2 on odoo in Cloud Run is to create an nginx container, as sidecars are not allowed in Cloud Run. But I also read that it can be done with supervisord. Has anyone been able to do that, to handle HTTP/2 and so increase the Cloud Run max request quota?
I wanted to try this: in the entrypoint.sh, write a command to install nginx, and then set its configuration to be used as a proxy that allows HTTP/2. But I'm asking here because I'm not sure it will work, as I read in [2] that nginx won't work with a python server.
The whole story: I'm deploying an odoo CE on Google Cloud Run + Cloud SQL. I configured one odoo as a demo with 5 blog posts, and when I tried to download it, it said the request was too large. I imagined this was because of the 32 MB Cloud Run request quota [1], as the backup size was 52 MB. Then I saw that the quota for HTTP/2 connections was unlimited, so I activated the HTTP/2 option in Cloud Run. Next, when I accessed the service, an error about a "connection failure" appeared.
To fix that I thought of two ways. One was upgrading the odoo HTTP server to one that can handle HTTP/2, like Quark; this first way seems impossible to me, because it might force me to rewrite many pieces of odoo. The second option was running an nginx in front of the odoo container (which runs a Python web server on Werkzeug). I read on the web that nginx can upgrade connections to HTTP/2, but I also read that Cloud Run runs its own internal load balancer [2]. So, my question: would it be possible to run, in the same odoo container, an nginx that exposes this service on Cloud Run?
References:
[1] https://cloud.google.com/run/quotas
[2] Cloud Run needs NGINX or not?
[3] https://linuxize.com/post/configure-odoo-with-nginx-as-a-reverse-proxy/
[4] https://github.com/odoo/docker/tree/master/14.0
[5] How to setup nginx in front of node in docker for Cloud Run?
Has anyone been able to do that, to handle HTTP/2 and so increase the Cloud Run max request quota?
Handling HTTP/2 does not help you increase your maximum-requests-per-container limit on Cloud Run.
HTTP/2 only helps you reuse TCP connections to send concurrent requests over a single connection, but Cloud Run does not really count connections, so you are not on the right track here. HTTP/2 won't help you.
Cloud Run today already supports 1,000 container instances (with 250 concurrent requests per instance in preview), so that's 250,000 simultaneous requests for your service. If you need more, contact support.
But, I ask you here as I'm not sure if it'll work, as I read in [2] that nginx won't work with a python server.
Sounds incorrect.
If you configure a multi-process container, you can run Python behind nginx on Cloud Run. But, as you said, Cloud Run does not need nginx.
Overall you don't need HTTP/2 in this scenario.
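If you do want to try the multi-process approach anyway, here is a minimal sketch of such an entrypoint.sh (assuming nginx is already installed in the image, its config listens on Cloud Run's $PORT and proxies to 127.0.0.1:8069, and the odoo flag below matches your setup; this is not a tested configuration):
#!/bin/sh
set -e
# Start nginx in the background (its default config daemonizes it); it terminates HTTP/2 and proxies to odoo
nginx
# Run odoo in the foreground on the local port nginx proxies to, so the container keeps running and receives signals
exec odoo --http-port=8069 "$@"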

How to limit/disable uploads on the network

I wish to make a secure environment and block uploading to any destination on the Internet. How can I achieve that using pfSense?
Is pfSense the right tool for this?
I tried limiting the upload to 8 bits per second, and now I cannot download either (downloads also got limited).
Could Squid be a good solution for what I'm looking for?
P.S. I still want to be able to download files via git, HTTP, HTTPS, and SSH; for example, yarn install and composer install should work.
The goal is to block uploading files to destinations beyond the pfSense.
In short, you can't do it with stock pfSense.
You'll need a firewall that can inspect SSL and SSH.
You can run a Squid proxy on pfSense, and it can do ssl-bump, which can be used to inspect HTTPS traffic. With Squid you can block file uploads for HTTP (and HTTPS with ssl-bump).
If you want to inspect SSH and limit file uploads via SSH, you'll need a Palo Alto, a Fortigate, or another next-gen firewall that can inspect SSH.
tl;dr: You can't! But you can use trickle.
Explanation
Every time we create a TCP session we upload data to the Internet, whether it's the 3-way handshake, an HTTP request, or posting a file to a server; you cannot create a session without being able to upload data to the Internet. What you can do is limit the bandwidth per application.
Workaround 1
You can use trickle.
sudo apt-get install trickle
You can limit upload/download for a specific app by running:
trickle -u <upload limit in KB/s> -d <download limit in KB/s> <application>
This way you can limit HTTP and other applications while still being able to use git; see the example below.
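For example (a hedged illustration; firefox here is just a placeholder for any dynamically linked application, since trickle only works for those):
# Cap this one process at 10 KB/s upload while still allowing 2 MB/s download
trickle -u 10 -d 2048 firefox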
Workaround 2
Another way is to deny all applications access to the Internet and allow only specific applications by exception.
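A hedged sketch of that idea on a Linux client (not on pfSense itself), using the iptables owner match; the allow-net group name and the repository URL are placeholders:
# Create a group whose members are allowed to reach the Internet
sudo groupadd allow-net
# Allow loopback and traffic from allow-net processes, drop all other outbound traffic
sudo iptables -A OUTPUT -o lo -j ACCEPT
sudo iptables -A OUTPUT -m owner --gid-owner allow-net -j ACCEPT
sudo iptables -A OUTPUT -j DROP
# Run an allowed application under that group
sg allow-net -c "git clone https://example.com/repo.git"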

Flutter pub_hosted_url mirror sonatype nexus

I'm working in a heavily secured company; the only possible way to reach pub.dev/packages is through a Sonatype Nexus proxy, so I've set PUB_HOSTED_URL to point at the Nexus server. Fetching versions works perfectly, but after that pub tries to download packages from the archive URL https://pub.dartlang.org, which is not reachable. The temporary solution I made is a simple server that redirects all requests to Nexus and rewrites the responses, replacing every https://pub.dartlang.org with the Nexus URL. Is there any better solution to handle the Nexus proxy?
I think you can reverse the order: have Nexus as the front door, and your simple server as a middle tier that bridges Nexus and pub.dartlang.org.
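The client side would then still look the same; a minimal hedged sketch (the Nexus hostname and repository path are assumptions):
# Point pub/flutter at the front-door proxy instead of pub.dev
export PUB_HOSTED_URL=https://nexus.example.com/repository/pub-proxy/
flutter pub get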

Graphite webapp only shows metrics from first of 4 caches

I have a Graphite relay and webapp installed on one server, which is supposed to be communicating with 4 carbon caches (and their respective webapps) on 4 other servers. I've validated that the relay is working by observing that different whisper files are being updated on the different carbon-cache servers.
However, the webapp is only showing metrics that are stored on the first carbon-cache server in the list, and I'm not sure what else to look at.
The webapps on the carbon-cache servers are set up to listen on port 81, and I have the following in local_settings.py on the relay server (the one I'm pointing my browser at):
CLUSTER_SERVERS = ["graphite-storage1.mydomain.com:81", "graphite-storage2.mydomain.com:81", "graphite-storage3.mydomain.com:81", "graphite-storage4.mydomain.com:81", ]
However, at one point I did have all metrics on all servers; I've migrated from a single instance to this federated cluster, and I've since removed the whisper files that weren't active on each carbon-cache server. I've restarted all carbon-caches, the carbon-relay, and the webapp server several times. Is there somewhere the metrics -> carbon-cache mapping is getting cached? Have I missed a setting somewhere?
