Time quota with squid proxy: missing ext_time_quota_acl helper in squid?

My goal is to get an authenticating proxy with a time quota working (without needing a captive portal) on pfSense/squid. For this I appear to need the ext_time_quota_acl helper, apparently introduced in squid 3.3.
The squid backend package version reported by pfSense is 4.12, but ext_time_quota_acl is missing. Searching turns up only a few hits, among them Ubuntu packages for squid 4.4 that provide this helper. What is going on here? Or has the helper been superseded by similar functionality that I can't find?
EDIT: in the meantime I found out that the configure options for the pfSense squid package do in fact not include ext_time_quota_acl, i.e. it was compiled without this particular external helper. So I tried to compile from source in a FreeBSD VM, but have not been successful yet due to an apparent incompatibility with db.h (which appears to be Berkeley DB, of which there are many versions). It might actually be easier to write my own helper that just parses the squid log, matches timestamps to users, and keeps track of accesses (a toy sketch of the helper protocol is below).
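For reference, the external ACL helper protocol itself is simple: squid writes one request per line on the helper's stdin (with whatever format is configured in squid.conf, e.g. just the %LOGIN username) and expects OK or ERR back on stdout. A toy sketch in Go, with the actual quota bookkeeping stubbed out:

// toy external ACL helper: reads one username per line from squid
// and answers OK (allow) or ERR (deny); quota tracking is a stub
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		user := scanner.Text()
		_ = user // stub: look up this user's accumulated online time here
		// print ERR instead once the user's time budget is exhausted
		fmt.Println("OK")
	}
}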

The pfSense squid package (and it appears many packages in Linux distros too) is compiled with an (arbitrary?) selection of external acl helpers, and the time_quota helper is often missing.
The only solution appears to be to compile from source with the helper enabled, which for pfSense is non-trivial for FreeBSD noobs.
What worked in a FreeBSD VM: building squid with additional/modified configure arguments (the original arguments were obtained from the package info on the pfSense installation); the relevant switch is sketched below.
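For illustration, the switch that matters appears to be --enable-external-acl-helpers; the helper list here is just an example, and the remaining arguments should be taken verbatim from the pfSense package info:

./configure \
  --enable-external-acl-helpers="kerberos_ldap_group,LDAP_group,time_quota" \
  ...   # keep all of the package's original configure arguments here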
Then the corresponding binary (squid-4.12/src/acl/external/time_quota/ext_time_quota_acl) can be copied over to the pfSense box.
The configure arguments for squid need some tweaking to get the time_quota external acl to compile; the working arguments are here, and the squid.conf wiring is sketched below.
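Once the helper binary is in place, hooking it up in squid.conf looks roughly like this (paths and the quota file are examples based on my reading of the squid documentation, not the exact pfSense layout):

external_acl_type time_quota ttl=60 children-max=1 %LOGIN /usr/local/libexec/squid/ext_time_quota_acl /usr/local/etc/squid/time_quota.conf
acl noquota external time_quota
http_access deny noquota

The quota file then lists one user and time budget per line, e.g. two hours per day:

alice 2h/1d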

Related

SSL certificates when installing packages in R 4.2

I have a problem with a custom local CRAN mirror (JFrog) under Windows on a machine without access to the internet:
in R 4.1.2, I have no problems accessing the mirror via https;
in R 4.2.0, I get errors that the index in PACKAGES cannot be accessed via https.
After browsing the web and SO, I saw several similar problems but none quite the same (see, e.g., here). I got around the issue by adding the following to my .Rprofile
options(repos = c(CRAN = "internalrepo"),
download.file.method = "curl",
download.file.extra = "-k -L")
to bypass SSL certificate checking.
This works for me on a computer with access to the local network only, but it seems a bad idea on computers connected to the internet using a custom CRAN mirror.
Also, I would like to understand the cause of the problem. Did something change in the way that R handles SSL-certificates or did we break something in the installation of R 4.2?
sessionInfo() and Sys.getenv() do not show much difference between R 4.1.2 and R 4.2.0, but in R 4.2.0 I saw that there is an additional environment variable
CURL_CA_BUNDLE that points to etc/curl-ca-bundle.crt under the R installation directory.
EDIT 2022-12-14: I believe I tracked down the issue - it's SSL revocation checks that fail when a system only has access to a local network. A new flag was added in R 4.2.1+ that can be set to TRUE, so that SSL revocation checks are given a 'best-effort' attempt at contact and then bypassed: https://bugs.r-project.org/show_bug.cgi?id=18379. If a machine has internet access, revocation checks will still occur, which, I think, is probably the best we can hope for.
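For anyone else hitting this: the flag in question is, as far as I can tell, the R_LIBCURL_SSL_REVOKE_BEST_EFFORT environment variable (name per the bug report above), which can be set in .Renviron on the affected Windows machines:

# ~/.Renviron - make schannel certificate revocation checks best-effort
R_LIBCURL_SSL_REVOKE_BEST_EFFORT=TRUE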
ORIGINAL: Just letting you know #clemenskuehn, we see the same thing - a local mirror working fine in 4.1.2 over HTTPS on some restricted data systems that aren't allowed internet access; then we updated to 4.2.1 on our Windows and Linux boxes, and only the Windows machines (Linux works fine) can no longer use the local mirror:
"warning: unable to access index for repository https://mirror.oursite.com/cran/src/contrib:
cannot open URL 'https://mirror.oursite.com/cran/src/contrib/PACKAGES"
So it's not just you. Did you open a tracker with the R developers? Your workaround works for us, so we might put it into production as a stopgap, but it would definitely be good if we didn't have to allow insecure connections, even though these systems can't get outbound access anyway.

Unable to access internet within "R" on cmd behind proxy

I have been using R on the command line (bash). I am unable to access the internet (to download any packages). I have set the proxy system-wide and tested it with wget, which works. The install.packages() command, however, does not.
Per some user's advice, I also tried setting the proxy in the .Rprofile file; that didn't help either (what I tried is sketched below). Please advise.
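For reference, this is roughly what I tried (proxy host and port are placeholders):

# system-wide, in the shell profile - this made wget work:
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
# and the equivalent in ~/.Rprofile:
# Sys.setenv(http_proxy = "http://proxy.example.com:8080")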
I recently ran into the same issue on my work machine. Our Firm uses Cylance as its antivirus software. Cylance was quarantining the file "internet.dll" that R uses to access the Internet. Fortunately, however, it only does so in the 64-bit version of R. For me, there were two solutions:
First, I was able to download packages directly from the 32-bit version of R (outside of RStudio). This works fine. The downloaded packages will run in 64-bit RStudio.
The longer-term solution was to submit an IT service request to release this file from quarantine (that is, to "whitelist a blocked entity"). At my Firm this was promptly done, as there is (obviously) nothing unsafe about this R file.

Unable to install packages on macOS

When I try to install a package in R on macOS I get the following error, whether using the GUI menu or install.packages():
Warning: unable to access index for repository https://cran.uni-muenster.de/bin/macosx/el-capitan/contrib/3.5:
cannot open URL 'https://cran.uni-muenster.de/bin/macosx/el-capitan/contrib/3.5/PACKAGES'
There are many Q&A on this site relating to this issue, and none of the answers provided there worked for me.
I tried disabling my firewall, changed all possible settings in the R preferences, checked in my browser whether the package was online and available (it was), and used different options and mirrors, both http and https, in install.packages(), all to no avail.
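In case it helps diagnose, the same index can also be requested outside of R with curl, which distinguishes a per-application block from a general network problem (URL taken from the warning above):

curl -sI https://cran.uni-muenster.de/bin/macosx/el-capitan/contrib/3.5/PACKAGES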
If you use Little Snitch, check the rules.
I use Little Snitch in quiet mode and have never defined any rules, and yet Little Snitch had a rule that blocked R from accessing the internet. Maybe Little Snitch installs with a certain set of base rules or creates rules for certain types of software by default. My bit torrent client (qbittorrent) and Cisco's VPN client, which I use to access my university network from home, were both blocked out of the box as well.
I deleted that rule and now package installation works fine.

Running a Go webserver behind Phusion Passenger

Phusion Passenger has a great ecosystem for running webapps behind a webserver. I have experience with it from Ruby and Node.js apps. Now I have rewritten a webservice in Go, and it's time to deploy it. It seems natural to put Passenger+Nginx in front of the Go webserver (using net/http). Searching around, it seems that nobody has tried this or asked about it anywhere...
I can't seem to find a configuration option to attach a custom binary instead of passenger_ruby/passenger_node etc.
Can (should?) I use Phusion Passenger to run my binary created using go build?
No, you can't. Passenger doesn't actually use HTTP internally; it uses a custom protocol (like FastCGI or SCGI, but incompatible with both) to communicate with your app, and it requires its own support code in the application for management and request dispatching. They don't provide such support code for Go.
This is actually possible now: Passenger 6 has added generic language support. You can find the tutorial here: https://www.phusionpassenger.com/docs/advanced_guides/gls/go.html
Basically:
Compile your Go program and put the binary somewhere convenient. The application needs to accept configuration for which port to listen on (a minimal example is below).
passenger start --app-start-command 'env PORT=$PORT ./main', assuming main is your program name.
Passenger tells the application what port to start on so that Passenger itself can own ports 80/443.
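For completeness, a minimal sketch of a Go program that accepts the port this way (the handler is just a placeholder):

// main.go - listens on the port Passenger hands over via $PORT
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	port := os.Getenv("PORT") // set by Passenger via --app-start-command
	if port == "" {
		port = "8080" // fallback for running outside Passenger
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from Go behind Passenger")
	})
	log.Fatal(http.ListenAndServe(":"+port, nil))
}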

Saltstack: network.ip_addrs is not available

I've run into an issue with Saltstack version 2014.7.0, where I cannot get network information from Salt.
If I run:
salt-call network.ip_addrs
I get:
Function network.ip_addrs is not available
This only seems to happen on some of my hosts. It seems to affect almost all of the functions in salt.modules.network, but everything else works as expected.
I suspect there's something in my environment to blame. I am running salt within a CentOS 7 docker container. I followed these instructions to get systemd running under Docker, and it seems to be functioning just fine, so I don't think that's the issue, but I wouldn't be surprised if it's related. I'm using Docker as a development environment, but I will be using these formulas to orchestrate virtual machines in production.
Has anyone encountered the network module not being loaded properly? Is there something that needs to be available for that module to be accessible?
I have other mechanisms to get the IP address, but none that are as easy to work with in other salt formulas.
It turns out my problem was that I had my own custom module called "network", which was obscuring the upstream network module.
I'm pretty sure this was working at some point in the past, so I'm wondering whether a more recent version of salt changed this to conflict at the module level instead of merging functions from same-named modules, but I suppose it's possible that it never worked.
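In case someone else hits this, the shadowing is easy to confirm and undo from the minion (the _modules path assumes the common default of /srv/salt for the file roots):

ls /srv/salt/_modules/              # a network.py here shadows the builtin module
salt-call sys.doc network.ip_addrs  # prints nothing if the function isn't loaded
salt-call saltutil.sync_modules     # re-sync custom modules after renaming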
