Simple token-based authentication in FastAPI that does not require users?

I have created a very simple API with FastAPI to access text files on the filesystem.
It runs in a Docker container that only has access to a volume, not the host filesystem, and is reachable from other containers over a private Docker network.
The whole app is simple, just 112 lines, and should just store some test files for other containers to access.
I would like to add simple token-based authentication to the API. I have found many examples, but they assume that users are accessing the API, and hence require setting up a user DB, etc.
In my case there is only machine-to-machine communication with other apps in Docker containers.
Is there any way to add simple token-based authentication without the overhead of users? Maybe set a single token for all access that can be read from an environment variable at the start of the container.
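One common way to do this (a minimal sketch, not taken from the question; the header name, environment variable, and route below are assumptions) is a FastAPI dependency that compares an API-key header against a token read from the environment:

import os
import secrets

from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import APIKeyHeader

# Hypothetical environment variable and header name, for illustration only
API_TOKEN = os.environ["API_TOKEN"]
api_key_header = APIKeyHeader(name="X-API-Key")

app = FastAPI()

def verify_token(token: str = Security(api_key_header)) -> None:
    # Constant-time comparison against the single shared token
    if not secrets.compare_digest(token, API_TOKEN):
        raise HTTPException(status_code=401, detail="Invalid or missing token")

@app.get("/files/{name}", dependencies=[Depends(verify_token)])
def read_file(name: str):
    # Placeholder handler; the real app would read the file from the volume
    return {"name": name}

The container would then be started with API_TOKEN set, for example via the environment section of a docker-compose file, and the other containers would send the same value in the X-API-Key header.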

Related

OpenIdDict Multiple authorization servers can't decode the same access token

I'm working with a client that has a very strange network setup. Basically, they have multiple small segmented networks with their own clusters of servers because of several acquisitions, mergers, etc. It's a nightmare.
I've set up the authorization servers correctly and they're all running the same code, but when I take my laptop from one location to another, I get logged out and have to log back in again. A lot of the employees travel between sites, so getting logged out all the time is causing some grumbles.
Each instance of the APIs and authorization servers is able to use the same database, but each site has its own authorization and resource server.
What I've noticed is this:
If I stay in one place, my access/refresh token setup works great with no problems
If I travel to another site, the new site's authorization server doesn't seem to be able to validate the access token and logs me out
One site with a load balancer will also log me out randomly, just as if I were traveling between sites.
The app is built on .NET Core 2.2 and OpenIDDict 2.0. For budgetary reasons, upgrading either is not an option.
Is there any way to configure a shared certificate or key so that all of the servers can decode the access tokens? Basically, multiple authorization servers able to decode the access tokens generated by any of the other authorization servers?
I was able to figure it out. The access tokens are protected with the ASP.NET Core Data Protection key ring, so every server needs to share the same application name and keys to be able to unprotect tokens issued by the others. I had to change this:
services.AddDataProtection()
.PersistKeysToDbContext<DbContext>();
to this:
services.AddDataProtection()
// Same application name on every server so they share one key ring
.SetApplicationName("appName")
// Protect the shared keys with a certificate available to all servers
.ProtectKeysWithCertificate(MyX509Certificate2)
.PersistKeysToDbContext<DbContext>();

Access Key Vault for a Service Fabric application using Azure Active Directory

I have an application that runs in a Service Fabric (SF) cluster and I want to access Key Vault from it.
The cluster hosts a number of applications and I want to give access to a Key Vault for my application without giving access to the other applications. By default an application runs under the same user as the SF cluster, but each application has its own unique name; mine has the name fabric:/application1.
My question is: is it possible to create an Active Directory application account for fabric:/application1 and grant it access to the Key Vault?
I know it is possible to use the RunAs options in the SF manifest, but that requires storing an encrypted password in the manifest/source code, and I want to avoid this if possible.
AFAIK, the only way to have this flexibility is to use a ClientID & Secret or Service Principal certificates, with each application managing its own credentials.
A Service Principal certificate is already integrated with AD, but does not require the application, the user, or the host to be part of the domain; the only requirement is to set up a user in AD and grant it the permissions on Key Vault.
There are other solutions using AD integration, like Managed Identities for Azure resources (formerly Managed Service Identity), but I am not sure whether you can restrict access per application as you described, because MI adds this as a service on the node, so technically other applications would have access as well; it is worth a try to validate whether you can restrict this.
If you want to try this approach, you can use Microsoft.Azure.Services.AppAuthentication for implicit authentication of the services running in your cluster, where the nodes are set up with the Managed Identities extension as described here.
When you use Microsoft.Azure.Services.AppAuthentication, the token acquisition step is handled by the library and you won't have to change much in your Key Vault auth logic.
When you run your code on an Azure App Service or an Azure VM with a managed identity enabled, the library automatically uses the managed identity. No code changes are required.
The following docs describe other options you can use for KeyVault Authentication.
PS: I've done other Key Vault integrations using client secrets and certificates and they are secure enough. With certificates you can store them in the managed store or with the application. I would recommend MI only if it is a requirement for your solution.
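For comparison, here is a minimal sketch of the same managed-identity flow using the Python Azure SDK rather than the .NET Microsoft.Azure.Services.AppAuthentication library mentioned above (the azure-identity and azure-keyvault-secrets packages, vault URL, and secret name are assumptions for illustration):

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Assumes the node/VM has a managed identity that has been granted access to the vault
credential = ManagedIdentityCredential()

# Hypothetical vault URL and secret name
client = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)
secret = client.get_secret("my-secret-name")
print(secret.value)

The point is the same as in the quote above: token acquisition is handled by the credential object, so the Key Vault code itself does not change between environments.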

Storing Azure Vault Client ID and Client Secret

I am using .NET Core 2.0 and ASP.NET Core 2.0 for application development. The "test" application is a .NET Core Console application. The core code I am writing is a class library that will only be put to use once it has been properly tested. I chose this approach since I won't be putting it to use for a while (it's replacing older ASP.NET code).
Anyway, since I have to work with a LOT of API keys for various services, I decided to use Microsoft Azure Key Vault for storing the keys. I have this all set up and understand how this works. The test application uses a test Azure account, so it's not critical. And since this is replacing legacy code and it's in its infancy, I am the sole developer.
Basically, I'm running into this issue. There's not too much information on Azure Key Vault from what I can see. A lot of examples store the Client ID and Secret in a plain-text JSON file (for example: https://www.humankode.com/asp-net-core/how-to-store-secrets-in-azure-key-vault-using-net-core). I really don't understand how this can be secure. If someone were to get those keys, they could easily access the information stored in Azure, right?
Microsoft's MSDN has a PowerShell command that grants access (I lost the original link; this is the closest I can find: https://www.red-gate.com/simple-talk/cloud/platform-as-a-service/setting-up-and-configuring-an-azure-key-vault/). My development operating system is Windows 10 and my primary server operating system is Debian.
How would I approach this?
Yes, you are right: the plain-text config file should be used only during development, not for production purposes. In general, the available options depend on where and how you host the app.
If you have an Azure Web App, you have at least the following built-in options (from the documentation):
Add the ClientId and ClientSecret values to the AppSettings in the Azure portal. By doing this, the actual values will not be in the web.config but protected via the portal, where you have separate access control capabilities. These values will be substituted for the values that you entered in your web.config; make sure that the names are the same.
Authenticate the Azure AD application using a Client ID and a Certificate instead of a Client ID and Client Secret. The following are the steps to use a certificate in an Azure Web App:
Get or Create a Certificate
Associate the Certificate with an Azure AD application
Add code to your Web App to use the Certificate
Add a Certificate to your Web App
You may also find an approach that uses environment variables to store credentials. This may be OK only if you can guarantee that it's not possible to take a snapshot of the environment variables on the prod machine. Look into Environment Variables Considered Harmful for Your Secrets for more details.
One last thing: there is also a technique based on the idea that you store/pass only the ClientSecret value, while the ClientId is constructed from details of the machine/container where the app is hosted (e.g. the Docker container ID). I have found an example for HashiCorp Vault and an app hosted on AWS, but the general idea is the same: Secret management with Vault
In addition to the first answer, in the context of running applications on an Azure VM, instead of using a client_secret to authenticate you can use client certificate authentication, as explained in this documentation: Authenticate with a Certificate instead of a Client Secret.
The flow is as follows:
The application authenticates to AAD by proving that it has the private key of the certificate (which is basically stored in CNG if you are using Windows).
The application gets back the access_token and then uses it to access the Key Vault.
The developer does not need to know the private key value of the certificate in order for their app to be successfully authenticated. Instead, they only need to know the location of the imported pfx (a container for the private key and its certificate) in the certificate store.
At least on Windows, you as the secret administrator can convert the private key and the certificate into password-protected pfx format and then deploy it into the Windows certificate store. This way no one can know the private key unless they know the password of the pfx file.
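As a rough illustration of this certificate flow (the answer above is .NET-focused; the Python azure-identity package, file path, and identifiers below are assumptions, and note that azure-identity loads the pfx from a file rather than from the Windows certificate store):

from azure.identity import CertificateCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical tenant, application, and certificate details, for illustration only
credential = CertificateCredential(
    tenant_id="<tenant-id>",
    client_id="<app-client-id>",
    certificate_path="certs/app-auth.pfx",  # pfx containing the certificate and private key
    password="<pfx-password>",              # only the secret administrator knows this
)

client = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)
print(client.get_secret("my-secret-name").value)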
The other approach, specific to Azure compute, is to use Azure Managed Service Identity. Using Azure MSI, Azure will automatically assign an identity / Service Principal to your resources, such as a VM, and you can fire requests at a specific endpoint that is only accessible by your resource to get the access_token. But be aware that Azure MSI is still in public preview, so please review the known issues before using it.
The flow for how Azure Resource Manager assigns a Service Principal identity to your VM is:
When you enable MSI in a VM, Azure will create a service principal in your AAD.
Azure will then deploy a new MSI VM extension to your VM. This provides an endpoint at http://localhost:50432/oauth2/token to be used to get the access_token for the service principal.
You can then use the access_token to access the resources such as Key Vault which authorize the service principal access.
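A rough sketch of those last two steps in Python (the endpoint is the one described above; the Metadata header, form-encoded resource parameter, vault URL, secret name, and API version are assumptions about the legacy MSI VM extension and the Key Vault REST API, not something taken from this answer):

import requests

# Step 1: ask the local MSI extension endpoint for an access token scoped to Key Vault
token_response = requests.post(
    "http://localhost:50432/oauth2/token",
    data={"resource": "https://vault.azure.net"},
    headers={"Metadata": "true"},
)
access_token = token_response.json()["access_token"]

# Step 2: call the Key Vault REST API with the token (hypothetical vault and secret names)
secret_response = requests.get(
    "https://my-vault.vault.azure.net/secrets/my-secret-name?api-version=2016-10-01",
    headers={"Authorization": "Bearer " + access_token},
)
print(secret_response.json()["value"])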

authClient.login problems

I'm having a similar problem as was discussed in this question:
authClient.login returning error with "Unauthorized request origin"
I can't find anything on the firebase site that directly addresses this problem so I have 2 questions about the "unauthorized request origin":
1.) If I'm testing my program through my own computer (as in, it's just a file on my computer), what exactly am I supposed to add to the Auth panel? I tried following the advice offered in the link above but no luck.
2.) My eventual plan is to create an app using Firebase and its login system. Is this going to be a problem when users try to log in? Is there going to be something that I need to allow so that any user will be allowed to log in to the system?
With the release of Firebase Simple Login, which contains a number of OAuth-based authentication methods (Facebook, Twitter, GitHub, etc.), we included the idea of 'Authorized Origins'. Without this restriction, malicious sites could pretend to be your application and attempt to access your users' Facebook, Twitter, etc. data on your behalf.
By restricting the domains for these requests to ones that you control and have verified, we can protect your users' data. Once you have configured your application domains, your users will be able to log in seamlessly and securely from the domains you defined.
To fix this error, log into Firebase Forge (by entering your Firebase URL into your browser), and navigate to the 'Auth' panel on the left.
For testing locally, you'll need to run at least a barebones webserver on your machine, rather than loading your test files via file://. The easiest way to run a barebones server on your local machine is to cd to the directory of your files and run python -m SimpleHTTPServer, which will allow you to access your content via http://127.0.0.1:8000/....
For your users, configure the domains that you'll be using to host your application. This can be any number of specific subdomains (such as a.b.www.domain.com) or high-level domains which will act as a wildcard (domain.com will allow requests from *.domain.com).
You can configure multiple application domains or IPs here, comma-delimited.
See https://www.firebase.com/docs/security/simple-login-overview.html for additional documentation about application configuration for Simple Login.
I hope that helps! Feel free to ping me directly if you have further questions.

Can Amazon IAM be used as an authentication method for hosts?

Is it possible to use IAM to manage user accounts for EC2-hosted Unix hosts by way of a PAM module, similar to LDAP, NIS, etc.?
My goal is to have a means to centralize host authentication on our EC2 hosts without the overhead of setting up a single sign on solution.
AWS IAM is meant to handle access to AWS resources. You can create new users but the basic authentication which EC2 instances get is via key pairs, which are not the same as IAM users.
You might be able to create a system of your own which manages IAM users and also generates a private and public key for them to be used inside the instances being created (maybe even re-using the keys you get when creating a new user in IAM).
All in all, it's not really meant to be used that way, as far as I understand.
Since you mentioned LDAP, you can use this project:
https://github.com/denismo/aws-iam-ldap-bridge
to sync an LDAP server with IAM.
