When getting Go modules through a local Artifactory Go proxy, "go get" fails during module checksum verification.
At work we can't download directly from the Internet but must go through a local proxy based on Artifactory. I have set GOPROXY (GOPROXY=https://repo.mycompany.se/artifactory/api/go/gocenter) to a proxy set up in our local Artifactory. When I run "go get" the download itself seems to go fine, but the checksum verification fails because go tries to contact sum.golang.org directly instead of getting the checksum through the proxy.
C:\Users\x\go\src\hello2>go get rsc.io/quote@v1.5.2
go: finding rsc.io v1.5.2
go: downloading rsc.io/quote v1.5.2
verifying rsc.io/quote@v1.5.2: rsc.io/quote@v1.5.2: Get https://sum.golang.org/lookup/rsc.io/quote@v1.5.2: dial tcp: lookup sum.golang.org: no such host
C:\Users\x\go\src\hello2>
Does Artifactory support getting the checksum through the local proxy, and if so, how do you set it up? I have read a blog post about support when using GoCenter directly, but I can't find any information about using Artifactory.
I'm using Go 1.13 and we are using Artifactory 6.12.2.
Artifactory 6.12.2, when used as a Go proxy, currently does not support checksum verification when there is no access to sum.golang.org. A feature request, RTFACT-20405 (Artifactory to support go client checksum verification when sum.golang.org is not accessible), has been filed for this.
In the meantime, refer to 'go help module-private' and the documentation on the GONOSUMDB environment variable. An excerpt from that documentation:
"If GOSUMDB is set to "off", or if "go get" is invoked with the -insecure flag, the checksum database is not consulted, and all unrecognized modules are accepted, at the cost of giving up the security guarantee of verified repeatable downloads for all modules. A better way to bypass the checksum database for specific modules is to use the GOPRIVATE or GONOSUMDB environment variables. See 'go help module-private' for details"
Artifactory 6.16 has GOSUMDB support - https://www.jfrog.com/confluence/display/RTF/Release+Notes
I'm trying to use the Ruby gRPC client to connect to a Go gRPC server. The server uses TLS credentials with self-signed certificates. I have trusted the certificate on my system (Ubuntu 20.04) but still get: Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
The only way this works is by manually setting GRPC::Core::ChannelCredentials.new(File.read(cert_path)) when initializing the client. Another workaround is setting :this_channel_is_insecure, but this only works if I remove the TLS credentials on the server altogether (which I do not want).
Is there any way to get the gRPC client to work with the system certs?
I assume the gem is using roots.pem; trying to override that using GRPC::Core::ChannelCredentials.set_default_roots_pem results in "Could not load any root certificate".
Also, I have not found any parameter that would let me skip certificate verification.
The default root location can be overridden using the GRPC_DEFAULT_SSL_ROOTS_FILE_PATH environment variable pointing to a file on the file system containing the roots. Setting GRPC::Core::ChannelCredentials.new(File.read(cert_path)) also seems fine to me.
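For example, a minimal sketch on Ubuntu 20.04 (assuming the system CA bundle lives at its usual path, and that client.rb stands in for your actual gRPC client script):
# Point the gRPC Ruby runtime at the system CA bundle before starting the client
export GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/etc/ssl/certs/ca-certificates.crt
ruby client.rb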
In Ruby, the feature to skip cert verification in TLS is most likely not supported. We have the corresponding feature in the underlying core, but it may not be plumbed through to Ruby yet (at least not that I am aware of). If you need it, feel free to open a feature request on the gRPC GitHub page.
Thank you!
For the past few days, I've been trying to configure FreeRADIUS to authenticate WiFi clients against OpenLDAP (without TLS - plain bind on port 389).
I tried several guides and did not get the result I was looking for.
On localhost, radtest works and I receive an Access-Accept.
The user is found in LDAP and authentication is accepted.
When I try to authenticate via WiFi (Windows 10), I can't connect.
The configuration I am currently using is this:
https://gitlab.com/ae-dir/client-examples/-/blob/master/freeradius/radiusd.conf
Does anyone have experience integrating FreeRADIUS with OpenLDAP?
I need WiFi clients to connect with their LDAP credentials.
You have not explained which authentication method you are trying to use, and that detail is important. However, a simple recipe for making FreeRADIUS + LDAP authentication work with Windows 10, Ubuntu and Android in EAP-TTLS mode is as follows:
Make sure the RADIUS server has access to the LDAP server. Also make sure that the clients (access points) have access to the RADIUS server. Check firewall issues and the FreeRADIUS client configuration (for Debian 10 the file is /etc/freeradius/3.0/clients.conf).
For the authentication test (assuming the previous step has already been verified), there are two useful tools: radtest (part of the freeradius-utils package), which does not support EAP-TTLS authentication, and eapol_test, which is part of the wpa_supplicant package and does support EAP-TTLS; a sketch of an eapol_test run is shown after these steps.
Follow the EAP/TTLS configuration steps and how to use the eapol_test tool on this link.
Make sure you generate new certificates (don't use the snakeoil certificates at all) and don't forget to change the certificate settings in /etc/freeradius/3.0/mods-enabled/eap. The link from the previous step does not cover this.
Run freeradius in full debug mode to find any errors (i.e. freeradius -X).
Don't forget to check the password and protocol compatibility list.
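As a rough sketch of the eapol_test step mentioned above (the identity, password, server address and shared secret below are placeholders; the secret must match the one configured in clients.conf):
# Minimal EAP-TTLS profile for eapol_test (wpa_supplicant configuration syntax)
cat > eapol_ttls.conf <<'EOF'
network={
    key_mgmt=WPA-EAP
    eap=TTLS
    identity="testuser"
    password="testpassword"
    phase2="auth=PAP"
}
EOF
# -a is the RADIUS server address, -s the shared secret from clients.conf
eapol_test -c eapol_ttls.conf -a 127.0.0.1 -s testing123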
I want to set up a secure environment and block uploading to any destination on the Internet. How can I achieve that using pfSense?
Is pfSense the right tool for this?
I tried limiting the upload to 8 bits per second, but now I can't download either (it also gets limited).
Could Squid be a good solution for what I'm looking for?
P.S. I still want to be able to download files via git, HTTP, HTTPS and SSH; for example, "yarn install" and "composer install" should still work.
The goal is to block uploading files to anywhere outside of the pfSense network.
In short, you can't do it with stock pfSense.
You'll need a firewall which can inspect SSL and SSH.
You can run the Squid proxy on pfSense, which supports SSL bumping and can therefore inspect HTTPS traffic. With Squid you can block file uploads for HTTP (and, with SSL bumping, for HTTPS).
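A hedged sketch of the Squid side (squid.conf directives; whether this alone is enough depends on the rest of your policy) would be to deny upload-style HTTP methods while leaving GET alone:
# squid.conf sketch: deny upload-style HTTP methods, keep plain downloads working
acl upload_methods method PUT POST
http_access deny upload_methods
Note that some download tools (for example git's smart HTTP protocol) also use POST, so test and whitelist the exceptions you need.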
If you want to inspect SSH and limit file uploads over SSH,
you'll need a Palo Alto, a Fortigate or another next-gen firewall which can inspect SSH.
tl;dr: You can't! But you can use trickle
Explanation
Every time we create a TCP session we upload data to the Internet, whether it's a 3-way handshake, an HTTP request or posting a file to a server, so you cannot create a session without being able to upload data to the Internet. What you can do is limit the bandwidth per application.
Workaround 1
You can use trickle.
sudo apt-get install trickle
You can limit upload/download for a specific app by running
trickle -u (upload limit in KB/s) -d (download limit in KB/s) application
This way you can limit HTTP and other applications while still being able to use git.
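For example (the limits, command and URL below are placeholders):
# Cap an upload-heavy command at 8 KB/s up and 1024 KB/s down
trickle -u 8 -d 1024 curl -T backup.tar.gz https://example.com/upload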
Workaround 2
Another way is to deny all applications access to the Internet and allow only specific applications by exception.
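One hedged way to do that on a Linux host (not on pfSense itself) is iptables' owner match, which only works in the OUTPUT chain; the user name here is a placeholder:
# Allow outbound traffic only for processes running as the "downloader" user
sudo iptables -A OUTPUT -o lo -j ACCEPT
sudo iptables -A OUTPUT -m owner --uid-owner downloader -j ACCEPT
sudo iptables -A OUTPUT -j DROP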
We have updated our Artifactory from 5.2.0 to 5.10.2.
Since then, all remote repository connection tests fail with proxy error 407.
The proxy is correctly set in the Admin section, and the proxy has been assigned to all repositories.
The login/password are valid.
There's no error in the logs except this one:
20180419151212|30|REQUEST|172.22.50.135|usertst|POST|/ui/admin/repositories/testremote|HTTP/1.1|400|1610
Unfortunately I can't bypass the proxy.
It used to work, so I don't understand why it stopped working after the update.
This is a bug that has been reported. You may want to vote & watch the JIRA for updates.
I am building a custom slash command for Slack. When a Slack user types a command, e.g. /uptime, an HTTP POST message is sent to the server URL.
The tutorials I've read all include installing a tool such as ngrok, pagekite, or localtunnel to generate a URL for the local machine.
Since I am working with a server, can I not just open a port and have Slack connect directly to that hostname and port? How can I do this?
Doing some research, I came across opening a port with nc and then listening with curl, but I don't understand how to put it all together.
Yes, if you are running your script for handling the POST requests from Slack on a server with a URL that can be reached from the Internet, you do not need a local tunnel like ngrok.
If you are starting from scratch, I can recommend using a standard Apache + PHP [+ MySQL] stack and having a PHP script interpret and react to the POST request. Of course other scripting languages (e.g. Python) work just as well.
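To check that your endpoint is reachable before wiring it into Slack, you can simulate the slash-command POST yourself; the URL and field values below are placeholders (Slack sends form-encoded fields such as command, text and response_url):
# Simulate Slack's form-encoded POST against your own endpoint
curl -X POST "https://your-server.example.com/slack/uptime" \
     -d "command=/uptime" \
     -d "text=" \
     -d "user_name=testuser"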