What gets exposed when checking "Allow Anonymous Downloads" - artifactory

I'm setting up a private Cargo repository using Artifactory. Publishing works fine with all credentials set up, but installing another crate that depends on a crate published to Artifactory fails with a 401 authentication error. Based on the documentation, I enabled "Allow Anonymous Downloads" and it worked. But I want to be careful about what I expose. The documentation says:
Authentication: Allow Anonymous Downloads
The Cargo client does not send any authentication header when running
install and search commands. Select the "Allow anonymous download and
search" to block anonymous requests but still allow anonymous Cargo
client downloads and performing search, to grant anonymous access
specifically to those endpoints for the specific repository.
Quoted from: https://www.jfrog.com/confluence/display/JFROG/Cargo+Package+Registry
The way it's worded seems a bit ambiguous as to what exactly is allowed. It still blocks anonymous requests, but allows anonymous Cargo clients? What exactly is the distinction here? I tried to download packages using a Cargo client without credentials, and it was blocked from installing packages (as I would like it to be).
So this leaves me a bit concerned. The naming seems to suggest people may download packages from my Artifactory repository without authentication, which I don't want. Cursory testing suggests they can't, but I'm not fully convinced as to what is being blocked and what isn't. I would appreciate it if somebody could clarify this.
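For context, my client configuration looks roughly like this (the registry name, URL, and token format are placeholders, not my real values):

```toml
# .cargo/config.toml — registry name and index URL are placeholders
[registries.artifactory]
index = "https://mycompany.jfrog.io/artifactory/api/cargo/cargo-local/index"

# Credentials live separately in ~/.cargo/credentials.toml:
# [registries.artifactory]
# token = "<identity token>"
```

Publishing with this token works; it's the dependency downloads that come back 401 until anonymous downloads are enabled.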

Related

Given the HTTPS URL of a git repository, is it possible to retrieve the SSH URL?

In a Git repo, if I have the HTTPS URL saved, is it possible to retrieve its SSH URL without having to manually log in and copy it?
In the general case, no, this is not possible. Git does not require that a repository be accessible by multiple methods and does not provide a way to automatically discover all URLs for a repository, even if a repository is accessible by multiple methods. The user must intrinsically know this, and can map from one to the other by using config options of the form url.*.insteadOf if a particular protocol is unsuitable (see git-config(1)).
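That `url.*.insteadOf` mapping can be sketched like this (rewriting HTTPS GitHub URLs to SSH is just an example; any base-URL pair works):

```ini
# ~/.gitconfig — rewrite HTTPS GitHub remotes to SSH on the fly
[url "git@github.com:"]
    insteadOf = https://github.com/
```

With this in place, a clone of https://github.com/foo/bar.git transparently uses git@github.com:foo/bar.git instead.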
For GitHub specifically, yes, it is. A repository that has the HTTPS URL of https://github.com/foo/bar.git will also be accessible at git@github.com:foo/bar.git or ssh://git@github.com/foo/bar.git (among others). This is not necessarily true for GitHub Enterprise Server instances because administrators may restrict the protocols that are used. It is also not true for Subversion access, which is only over HTTPS.

Is it possible to disable web access for anonymous users with Artifactory?

I've been trying to find the answer, but after reading a bunch of documentation I think it's not possible, though it would be a nice feature. The problem is that I want anyone to be able to access the cached repositories, but I don't want them to access the web user interface.
The only way I have figured out is tweaking the nginx configuration to allow access only to certain endpoints, like the raw repository view. However, it has some problems which I've not totally resolved.
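The kind of nginx restriction I mean looks roughly like this (the paths are illustrative only; Artifactory's real endpoint layout may differ, which is part of what I haven't fully resolved):

```nginx
# Allow only repository/API endpoints; block the web UI and everything else.
location /artifactory/api/          { proxy_pass http://localhost:8081; }
location /artifactory/libs-release/ { proxy_pass http://localhost:8081; }
location /                          { return 403; }
```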
You could set up a SAML SSO redirect that forces a user to log in; if they fail, they are not redirected back to the Artifactory instance. That's the only way I know of that won't let users even look at the front page of Artifactory.
They would get caught on step 3 after an automatic redirect: SAML SSO Login Process
The obvious downside with this is that you need to have a SAML SSO setup in the first place.

How can I detect if my chrome packaged app is installed?

I am the owner of a chrome app which is currently a hosted app on https://mydomain.com. I would like to add push messaging to it, so it will have to become a packaged app.
However, I don't want to lose the ability to prompt users on the website to install the app if they don't already have it installed.
There are many ways I've come up with to test this, but none of them seem satisfactory:
chrome.app.isInstalled, the method I currently use, is unavailable for packaged apps.
Inserting a DOM element is a recommended practice, but only available for extensions; content_scripts is disallowed for packaged apps.
Setting a cookie could work, but the cookies permission is disallowed for packaged apps.
Setting a cookie using a webview might be possible, but webviews are sandboxed, and do not share cookies with chrome.
Detecting a file in the app might work, but the web_accessible_resources permission is disallowed for packaged apps.
Specifying url handlers seems like it might work, but it looks like they only work for URLs in the address bar (i.e. they don't seem to handle requests).
Setting externally_connectable works, but it requires a permissions dialog saying that the app would like to "communicate with cooperating websites". The permission is this vague even if I specify https://mydomain.com. I would like to avoid this since people tend not to update apps when permissions change.
Does anyone know of a way to determine whether my packaged app is installed if I own both the app and https://mydomain.com?
url_handlers or externally_connectable is the way to go. You've understandably ruled out the last option because of the extra permission warning (which would disable the app until the user approves the new permission).
url_handlers does offer a solution without requiring extra permissions:
On the server side, if the user doesn't have any cookies, redirect to some other URL on your server. E.g. http://example.com/landing/ -> http://example.com/landing/?noapp.
If the app is not installed, the redirect will be followed. On that landing page, use history.replaceState(null, null, '/landing/'); to change the URL back to the original URL.
If the app is installed, the chrome.app.runtime.onLaunched event will be triggered, and the redirect is not followed. On your website, use setTimeout to check whether or not the page was unloaded.
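For reference, the url_handlers manifest entry this flow relies on looks roughly like this (the handler name, URL pattern, and title are placeholders):

```json
{
  "url_handlers": {
    "landing": {
      "matches": ["http://example.com/landing/*"],
      "title": "Open landing page in app"
    }
  }
}
```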
If the previous method doesn't suit you for some reason, there is one more (fragile) alternative: set up an API endpoint on your server and use CORS. Because your app does not have the permission to access this resource, the AJAX request automatically gets an unforgeable request header (Origin: chrome-extension://.../...). You can detect the presence of this header and mark the app as installed for that specific IP address. If you poll at the right frequency, you will have an up-to-date IP-to-app mapping.
This doesn't work for multiple computers behind a NAT though. And I (as a user) would be concerned about my privacy if you kept pinging home...
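The server-side Origin check from that fallback approach can be sketched as follows (framework-agnostic; `headers` is assumed to be a dict-like mapping of request headers):

```python
def is_app_request(headers):
    """Return True when the request's Origin header indicates it was sent
    by a Chrome packaged app; chrome-extension:// origins cannot be forged
    by ordinary web pages."""
    origin = headers.get("Origin", "")
    return origin.startswith("chrome-extension://")
```

A server would call this on each hit to the API endpoint and record the client IP when it returns True.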

authClient.login problems

I'm having a similar problem as was discussed in this question:
authClient.login returning error with "Unauthorized request origin"
I can't find anything on the firebase site that directly addresses this problem so I have 2 questions about the "unauthorized request origin":
1.) If I'm testing my program on my own computer (as in, it's just a file on my computer), what exactly am I supposed to add to the Auth panel? I tried following the advice offered in the link above, but had no luck.
2.) My eventual plan is to create an app using Firebase and its login system. Is this going to be a problem when users try to log in? Is there something that I need to allow so that any user will be able to log in to the system?
With the release of Firebase Simple Login, which contains a number of OAuth-based authentication methods (Facebook, Twitter, GitHub, etc.), we included the idea of 'Authorized Origins'. Without this restriction, malicious sites could pretend to be your application and attempt to access your users' Facebook, Twitter, etc. data on your behalf.
By restricting the domains for these requests to ones that you control and have verified, we can protect your users' data. Once you have configured your application domains, your users will be able to log in seamlessly and securely from the domains you defined.
To fix this error, log into Firebase Forge (by entering your Firebase URL into your browser), and navigate to the 'Auth' panel on the left.
For testing locally, you'll need to run at least a barebones webserver on your machine, rather than loading your test files via file://. The easiest way to run a barebones server on your local machine is to cd to the directory of your files and run python -m SimpleHTTPServer (or python3 -m http.server on Python 3), which will allow you to access your content via http://127.0.0.1:8000/....
For your users, configure the domains that you'll be using to host your application. This can be any number of specific subdomains (such as a.b.www.domain.com) or high-level domains which will act as a wildcard (domain.com will allow requests from *.domain.com).
You can configure multiple application domains or IPs here, comma-delimited.
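The matching rule described above can be illustrated with a small sketch (this is my reading of the documented behavior, not Firebase's actual matcher):

```python
def origin_allowed(origin_host, configured_domain):
    # A configured high-level domain acts as a wildcard: "domain.com"
    # allows domain.com itself and any subdomain such as
    # "a.b.www.domain.com" — but not unrelated hosts like "evildomain.com".
    return (origin_host == configured_domain
            or origin_host.endswith("." + configured_domain))
```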
See https://www.firebase.com/docs/security/simple-login-overview.html for additional documentation about application configuration for Simple Login.
I hope that helps! Feel free to ping me directly if you have further questions.

How do I configure IIS so that the user's domain credentials are used when connecting to SQL server?

We've recently released the latest version of our intranet application, which now uses windows authentication as standard, and needs to be able to connect to a configured SQL server with the end-user's domain credentials.
Lately we've found that on a couple of customer deployments, although IIS can see the user's domain credentials, it will not pass these on to SQL server. Instead, it seems to use the anonymous account. This is in spite of following all the correct steps (changing the directory security to Win Auth, updating Web.Config to use Win Auth and denying anonymous users).
I've been doing a lot of reading that suggests we need to make sure that Kerberos is in place, but I'm not sure (a) how valid this is (i.e. is it really a requirement?) or (b) how to go about investigating if it's set up or how to go about setting it up.
We're in a situation where we need to be able to either configure IIS or the application to work for the customer, or explain to the customer exactly what they need to do to get it working.
We've managed to reproduce this on our internal network with a test SQL server and a developer's IIS box, so we're going to mess around with this set up and see if we can come up with a solution, but if anyone has any bright ideas, I'd be most happy to hear them!
I'd especially like to hear people's thoughts or advice in terms of Kerberos. Is this a requirement, and if it is, how do I outline to customers how it should be configured?
Oh, and I've also seen a couple of people mention the 'classic one-hop rule' for domains and passing windows credentials around, but I don't know how much weight this actually holds?
Thanks!
Matt
This is called the Double-Hop Problem: Windows prohibits forwarding a user's credentials to a third party. It occurs when the user browses from one machine to a site on a second machine (the first hop), which then forwards the credentials to a third machine (the second hop).
The problem will not appear if you host IIS and SQL Server on the same machine.
There are a lot more technical details published on this in How to use the System.DirectoryServices namespace in ASP.NET, which explains the double-hop issue and primary and secondary tokens.
To run your application under the user's Active Directory or Windows credentials, ensure these:
the IIS application is set to NOT allow anonymous access
the IIS application uses Integrated Windows authentication
your connection string should have Integrated Security=SSPI to ensure the user's Windows/AD credentials are passed to SQL Server.
i.e. Data Source=myServerAddress;Initial Catalog=myDataBase;Integrated Security=SSPI;
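In Web.config terms, the first two items (plus the impersonation setting that makes the worker process actually use the caller's token) look roughly like this — a sketch; exact element placement depends on your application:

```xml
<system.web>
  <!-- Use the caller's Windows/AD identity -->
  <authentication mode="Windows" />
  <!-- Run requests under the authenticated user's token -->
  <identity impersonate="true" />
  <authorization>
    <!-- "?" denies anonymous users -->
    <deny users="?" />
  </authorization>
</system.web>
```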
You state you're not sure "how to go about investigating if it's set up or how to go about setting it up".
For this I'd heartily recommend a tool called DelegConfig. It's a very handy app that can tell you if Kerberos is set up properly.
Unzip it into a directory and configure a virtual directory in IIS to point to it. Browse to the main page, tell it which backend server you want to allow access to (e.g. UNC, SQL, HTTP, etc.), and it will tell you whether it's set up correctly and explain why.
It even has the ability to reconfigure Kerberos to fix the issue if you so desire (although I've not used this; I'd rather reconfigure it myself so that I understand what I've done in future).
I realise this comes too late for your particular problem, but thought it worth sharing for others who follow, especially for the tool's ability to explain why delegation is or isn't working. I've found it invaluable.
