How to access Firebase from http://localhost:3000

I'm developing a React app, and I get this error when trying to sign in to Firebase using their email auth provider.
Failed to load https://www.googleapis.com/identitytoolkit/v3/relyingparty/verifyPassword?key=.....:
Response to preflight request doesn't pass access control check: The
'Access-Control-Allow-Origin' header has a value 'https://localhost:3000'
that is not equal to the supplied origin. Origin 'http://localhost:3000'
is therefore not allowed access.
(Notice the https in the 'Access-Control-Allow-Origin' value versus the http in the supplied origin.)
It looks like they changed Access-Control-Allow-Origin from * to the https version of whatever domain you're calling from?
Does this mean I now need to configure my React app to run as https://localhost:3000?

You want to create a .env file in the root of your project and set HTTPS=true. This will launch your app using a self-signed certificate.
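For example, the .env would contain just this line:
HTTPS=true
npm start will then serve the app at https://localhost:3000 with a self-signed certificate, which the browser will warn about the first time.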
Take a look at the advanced configuration options of create-react-app here:
https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#advanced-configuration
If you need more control over the certificate and do not want to eject, take a look at react-app-rewired (https://github.com/timarney/react-app-rewired). You can configure the devServer to use a custom certificate using the Extended Configuration Options here (https://github.com/timarney/react-app-rewired#extended-configuration-options), as sketched below.
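A minimal sketch of such a config-overrides.js, following the devServer override pattern from the react-app-rewired docs; the certificate paths are placeholders, and it assumes a webpack-dev-server version that accepts an https option with key/cert:

// config-overrides.js
const fs = require('fs');

module.exports = {
  // react-app-rewired hands us the factory that builds CRA's default devServer config
  devServer: function (configFunction) {
    return function (proxy, allowedHost) {
      const config = configFunction(proxy, allowedHost);
      // Serve the dev server over HTTPS with your own certificate
      config.https = {
        key: fs.readFileSync('./certs/localhost.key'),  // placeholder path
        cert: fs.readFileSync('./certs/localhost.crt'), // placeholder path
      };
      return config;
    };
  },
};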

Related

RequestHeaderSectionTooLarge: Your request header section exceeds the maximum allowed size

We are using AWS Amplify for our Next.js web app and keep receiving this error whenever I try to load the application once deployed to Amplify. Locally there is no issue.
I am using Amplify's default Auth configuration, with basic email and password auth. It looks like it could be related to the Amplify cookie being set in the header, but I cannot find any documentation within AWS to prevent this or reduce the amount of information passed with the header. Any help would be appreciated.
I have faced the same issue and was able to solve it. Here's how -
Identify the CloudFront Distribution ID for your app. You can find it in the Deploy logs of your app build console.
Search & open that particular CF Distribution and go to the Behaviours tab.
Select the Default behaviour (5th one in my case) and hit Edit.
Scroll down to the Cache key and origin requests section.
Here you will find settings to control what's included in the headers of the request that goes to the server. In my case, I didn't need any Cookies so I chose None, and it solved the issue for me.
In your case, you can do the same or pick exactly which information needs to be included in the headers.
Check to see if there are any unnecessary cookies for that domain.
I was getting this error (on a site I don't own). I took a look at the request headers and found a very large number of cookies (several dozen) for the site's domain. I cleaned up the cookies which seemed non-critical and the error went away.
As the error implies, the size of the entire request header section is above 8192 bytes. Request headers include the accept headers, the user agent, the cookies, etc. and all combined can get rather large. Large headers look malicious to some WAFs. I once had a single user having trouble with our site. Turns out they were a polyglot and had configured their browser to accept several dozen languages causing their accept-language header to be suspiciously long, and the WAF refused to proxy the request.
I faced the same issue using Next.js, Amplify and an external Auth provider.
The problem is that the AWS S3 service has a maximum allowed request header size of 8192 bytes, so whenever you try to access the statically generated pages of Next.js it returns that error. This has already been asked here
In my case, I was using an external Auth provider and was able to solve the issue by configuring the cookies only for the '/api/' path. That way the Auth cookies are sent only to the Next.js API endpoints, so your request header is lighter whenever you request the static pages.
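A rough sketch of how that can look in a Next.js API route (pages router); the route, cookie name, value and flags here are placeholders, and the key point is the Path=/api attribute:

// pages/api/login.js - hypothetical login endpoint
export default function handler(req, res) {
  // ...authenticate against the external Auth provider first...
  const token = 'opaque-session-token'; // placeholder value
  // Scope the auth cookie to /api so it is not sent with requests for the
  // statically generated pages, keeping those request headers small.
  res.setHeader(
    'Set-Cookie',
    `auth-token=${token}; Path=/api; HttpOnly; Secure; SameSite=Lax`
  );
  res.status(200).json({ ok: true });
}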

Configuring Keycloak OIDC with an nginx (OpenResty) reverse-proxy

I am experimenting with a two-service docker-compose recipe, largely based on
the following GitHub project:
https://github.com/rongfengliang/keycloak-openresty-openidc
After streamlining, my configuration looks something like the following fork
commit:
https://github.com/Tythos/keycloak-openresty-openidc
My current issue is, the authorization endpoint ("../openid-connect/auth") uses
the internal origin ("http://keycloak-svc:"). Obviously, if users are
redirected to this URL, their browsers will need to cite the external origin
("http://localhost:"). I thought the PROXY_ADDRESS_FORWARDING variable for the
Keycloak service would fix this, but I'm wondering if I need to do something
like a rewrite on-the-fly in the nginx/openresty configuration.
To replicate, from the project root:
docker-compose build
docker-compose up --force-recreate --remove-orphans
Then browse to "http://localhost:8090" to start the OIDC flow. Once you encounter the
aforementioned origin issue, you can circumvent it by replacing "keycloak-svc" with
"localhost", which will forward you to the correct login interface. Once there,
though, you will need to add a user
to proceed. To add a user, browse to "http://localhost:8080" in a separate tab
and follow these steps before returning to the original tab and entering the
credentials:
Under Users > Add user:
username = "testuser"
email = "{{whatever}}"
email verified = ON
Groups > add "restybox-group"
After user created:
Go to "Credentials" tab
Set the password to "mypassword"
Temporary = OFF
Authorization Servers such as Keycloak have a base / internet URL when running behind a reverse proxy. You don't need to do anything dynamic in the reverse proxy - have a look at the frontend URL configuration (an example follows below).
Out of interest, I just answered a similar question here that may help you to understand the general pattern. Aim for good URLs (not localhost) and a discovery endpoint that returns internet URLs rather than internal URLs.
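For instance, if the compose file uses the WildFly-based jboss/keycloak image, the frontend URL can be set with an environment variable on the keycloak-svc service (the value is an assumption based on the ports above; newer Quarkus-based images use the KC_HOSTNAME/KC_HOSTNAME_URL options instead):
KEYCLOAK_FRONTEND_URL=http://localhost:8080/auth
With that set, the discovery document and the ../openid-connect/auth endpoint are advertised with the external origin rather than http://keycloak-svc.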

Embedding login with Microsoft in an iFrame?

Integration:
Load an SPA in InContact inside of an iFrame.
We have our ADFS set up on Microsoft Azure.
We have an SPA that initiates the SSO flow for ADFS with Azure from the backend.
How: the backend responsible for rendering index.html for the SPA first redirects to the IDP metadata_url, which, once successful, redirects to our login with the token in the URL.
What I've tried:
- Removing the X-Frame-Options header, so our SPA can load into the InContact iFrame. This enables me to load our SPA into the iFrame.
- I tried the dirty approach of setting the X-Frame-Options header to use allow-from (yes, it is deprecated), but with that there are issues with PowerShell.
Using the command, as mentioned here:
PS /home/dhruv> Set-AdfsResponseHeaders -SetHeaderName "X-Frame-Options" -SetHeaderValue "allow-from https://*.mpulsemobile.com"
Its response is:
Set-AdfsResponseHeaders: The term 'Set-AdfsResponseHeaders' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Tried posting to Microsoft forums here, but that gets me into a redirect loop!
So my question is: Is there another easy way to set the X-Frame-Options header of the IDP URL (login.microsoftonline.com//) to allow iFrame embedding, or a better way to approach this?
X-Frame-Options is not deprecated for ADFS, and can be used as stated: "in certain rare cases you may trust a specific application that requires iFrame capable interactive AD FS login page. The 'X-Frame-Options' header is used for this purpose".
Ensure you're running the cmdlet from the AD FS server and, if it's 2016, that KB4493473 and KB4507459 have been applied. Finally, check that the ADFS module has been loaded:
(Get-Module ADFS) -ne $null (should output True)
If not manually load it:
Import-Module ADFS (should not throw an error)
If that fails, ensure the files are available and let us know:
Test-Path C:\Windows\system32\WindowsPowerShell\v1.0\Modules\adfs\adfs.psd1 (should return True)

Facebook PHP SDK fails to obtain token if redirect URL has the code parameter

I am attempting OAuth authentication using the Facebook PHP SDK v5.6.1.
When the browser returns to the redirect URL, I am unable to exchange the authorization code for an access token. Instead I get a redirect_uri_mismatch error:
Invalid redirect: https://.../ callback does not match one of the registered values.
(The text may not be exact because it had to be translated)
I debugged the Facebook SDK and found that the cause of this error is the code parameter passed back on the request URL. Normally the SDK infers the redirect URL from the PHP request, but when I manually supply the redirect URL to the SDK without the code parameter then the token exchange succeeds.
My goal is to upgrade the SDK from an older version with a minimum of code changes, so I would like to avoid manually supplying the redirect URL if possible.
Inside the getAccessToken SDK method, the SDK takes care to remove the state parameter from the URL, but does nothing about removing the code parameter, which evidently needs to be removed.
In my app settings for Facebook Login I have strict mode switched off.
What else should I do to make the request URL functional as the redirect URL?
Something must be off because I don't see anyone else having an issue with this.
I carefully compared the mismatched URLs and found one to be erroneous. The protocol said https but it also had port 80 specified. It turned out my Apache reverse proxy headers were misconfigured; setting them correctly fixed it:
RequestHeader set X-Forwarded-Proto 'https'
RequestHeader set X-Forwarded-Host 'hostname'
RequestHeader set X-Forwarded-Port '443'

Detect and rewrite HTTP Basic user/password headers into custom headers with Nginx/Lua

I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to be growing, we're looking to use 3scale to handle authentication, rate limiting and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers which they do.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak in your Nginx configuration files. As you rightly point out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the base64 string of api_key+api_secret combined (see the worked example below).
This way the changes you will need to do to the configuration files will be fewer and simpler.
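For example (api_key and api_secret are placeholder values): a client calling
curl -u api_key:api_secret http://api.example.com/...
sends the header
Authorization: Basic YXBpX2tleTphcGlfc2VjcmV0
where YXBpX2tleTphcGlfc2VjcmV0 is base64("api_key:api_secret"). That base64 value (the part after "Basic ") is what you would import into 3scale as the application's user_key.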
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire function named "get_auth_params" with the one in this gist: https://gist.github.com/vdel26/9050170
I hope this approach suits your needs. You can also contact support at support@3scale.net if you need more help.
