Realm SyncUser.authenticate failed with Google's clientID and Facebook - realm

I'm using Google for authentication, like the following:
let credential = Credential.google(token: "<SOME-HASH-HERE>.apps.googleusercontent.com")
SyncUser.authenticate(with: credential, server: serverURL, timeout: 60) { [weak self] user, error in
    guard nil == error else {
        print("error while authenticating: \(error!)")
        return
    }
    …
}
It fails with a 400 error. After some debugging I found more information about the problem, but I'm still not sure what is wrong. The response looks like this:
{
    "invalid_params":[
        {
            "name":"provider",
            "reason":"Unknown provider!"
        }
    ],
    "status":400,
    "type":"https://realm.io/docs/object-server/problems/invalid-parameters",
    "title":"Your request parameters did not validate!",
    "code":601
}
Here is the request body:
{
    "provider":"google",
    "app_id":"com.blabla.bla-bla-bla",
    "data":"<SOME-HASH-HERE>.apps.googleusercontent.com"
}
I took the authentication code from the example in the official documentation, and I'm using the latest Realm framework.
I also checked authentication using Facebook, but it gives the same error.
I checked the configuration.yml file on the server, uncommented the google and facebook providers, put in the required details, and restarted the system. It didn't help.
Does anyone else experience the same problem?
PS: configuration.yml (only the part with providers):
# Realm Object Server Configuration
#
# For each possible setting, the commented out values are the default values
# unless another default is mentioned explicitly.
#
# Paths specified in this file can be either absolute or relative.
# Relative paths are relative to the current working directory.
providers:
  ## Providers of authentication tokens. Each provider has a configuration
  ## object associated with it. If a provider is included here and its
  ## configuration is valid, it will be enabled.
  ## Possible providers: cloudkit, debug, facebook, realm, password
  ## Providers 'realm' and 'password' are always enabled:
  ## - The 'realm' provider is used to derive access tokens from a refresh token.
  ## - The 'password' provider is required for the dashboard to work. It supports
  ##   authentication through username/password and uses a PBKDF2 implementation.
  ## This enables authentication via a Google Sign-In access token for a
  ## specific app.
  google:
    ## The client ID as retrieved when setting up the app in the Google
    ## Developer Console.
    clientId: '<SOME-HASH-HERE>.apps.googleusercontent.com'
  ## This enables authentication via a Facebook access token for a specific app.
  ## This provider needs no configuration (uncommenting the next line enables it).
  facebook: {}
After I made changes in that file, I ran
sudo service realm-object-server restart
And just to be sure, I also rebooted the system.

Unfortunately, there is a bug in the sample configuration.yml file shipped with Realm Object Server which I suspect you're hitting. The providers: section in the configuration file should live under the auth: section (instead of inside the network: section where it lives in the shipped file). The fix is to simply move the relevant providers configuration to live under the auth: key.
We have a fix ready for this bug which will be part of the next release of Realm Object Server.
Here's a sample snippet showing the complete auth: section with the fix:
# Realm Object Server Configuration
#
# For each possible setting, the commented out values are the default values
# unless another default is mentioned explicitly.
#
# Paths specified in this file can be either absolute or relative.
# Relative paths are relative to the current working directory.
auth:
  ## The path to the public and private keys (in PEM format) that will be used
  ## to validate identity tokens sent by clients.
  ## These configuration options are MANDATORY.
  public_key_path: /etc/realm/token-signature.pub
  private_key_path: /etc/realm/token-signature.key

  providers:
    ## Providers of authentication tokens. Each provider has a configuration
    ## object associated with it. If a provider is included here and its
    ## configuration is valid, it will be enabled.
    ## Possible providers: cloudkit, debug, facebook, realm, password
    ## Providers 'realm' and 'password' are always enabled:
    ## - The 'realm' provider is used to derive access tokens from a refresh token.
    ## - The 'password' provider is required for the dashboard to work. It supports
    ##   authentication through username/password and uses a PBKDF2 implementation.
    ## This enables authentication via a Google Sign-In access token for a
    ## specific app.
    google:
      ## The client ID as retrieved when setting up the app in the Google
      ## Developer Console.
      clientId: '<SOME-HASH-HERE>.apps.googleusercontent.com'
    ## This enables authentication via a Facebook access token for a specific app.
    ## This provider needs no configuration (uncommenting the next line enables it).
    facebook: {}

Related

Firebase 3rd-party AuthProvider (Google/Facebook/etc) login with chrome extension manifest v3

Manifest version 3 for Chrome extensions has been killing me lately. I've been able to navigate around it so far, but this one has really stumped me. I'm trying to use Firebase authentication for a Chrome extension, specifically with 3rd-party auth providers such as Google and Facebook. I've set up the Firebase configuration for Login with Google, created a login section in the options page of the Chrome extension, and set up the Firebase SDK.
Now, there are two login options when using an auth provider, signInWithRedirect and signInWithPopup. I've tried both of these and both have failed for different reasons. signInWithRedirect seems like a complete dead end as it redirects to the auth provider, and when it attempts to redirect back to the chrome-extension://.../options.html page, it just redirects to "about:blank#blocked" instead.
When attempting to use signInWithPopup, I instead get
Refused to load the script 'https://apis.google.com/js/api.js?onload=__iframefcb776751' because it violates the following Content Security Policy directive: "script-src 'self'". Note that 'script-src-elem' was not explicitly set, so 'script-src' is used as a fallback.
In v2, you could simply add https://apis.google.com to the content_security_policy in the manifest. But in v3, the docs say
"In addition, MV3 disallows certain CSP modifications for extension_pages that were permitted in MV2. The script-src, object-src, and worker-src directives may only have the following values:"
self
none
Any localhost source, (http://localhost, http://127.0.0.1, or any port on those domains)
So is there seriously no way for a Google Chrome extension to authenticate with a Google auth provider through Google's Firebase? The only workaround I can think of is to create some hosted site that does the authentication, have the Chrome extension inject a content script, and have the hosted site pass the auth details back to the Chrome extension through an event or something. Seems like a huge hack though and possibly subject to security flaws. Anyone else have ideas??
Although it was mentioned in the comments that this works with the Google auth provider using chrome.identity, sadly there was no code example, so I had to figure out how to do it myself.
Here is how I did it following this tutorial:
(It also mentions a solution for non-Google auth providers that I didn't try)
Identity Permission
First you need permission to use the chrome identity API. You get it by adding this to your manifest.json:
{
  ...
  "permissions": [
    "identity"
  ],
  ...
}
Consistent Application ID
Your application ID needs to stay consistent during development to use the OAuth process. To accomplish that, you need to copy the key field from an installed version of your manifest.json.
To get a suitable key value, first install your extension from a .crx file (you may need to upload your extension or package it manually). Then, in your user data directory (on macOS it is ~/Library/Application\ Support/Google/Chrome), look in the file Default/Extensions/EXTENSION_ID/EXTENSION_VERSION/manifest.json. You will see the key value filled in there.
{
  ...
  "key": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAgFbIrnF3oWbqomZh8CHzkTE9MxD/4tVmCTJ3JYSzYhtVnX7tVAbXZRRPuYLavIFaS15tojlRNRhfOdvyTXew+RaSJjOIzdo30byBU3C4mJAtRtSjb+U9fAsJxStVpXvdQrYNNFCCx/85T6oJX3qDsYexFCs/9doGqzhCc5RvN+W4jbQlfz7n+TiT8TtPBKrQWGLYjbEdNpPnvnorJBMys/yob82cglpqbWI36sTSGwQxjgQbp3b4mnQ2R0gzOcY41cMOw8JqSl6aXdYfHBTLxCy+gz9RCQYNUhDewxE1DeoEgAh21956oKJ8Sn7FacyMyNcnWvNhlMzPtr/0RUK7nQIDAQAB",
  ...
}
Copy this line to your source manifest.json.
Register your Extension with Google Cloud APIs
You need to register your app in the Google APIs Console to get the client ID:
Search for the API you want to use and make sure it is activated in your project. In my case, the Cloud Firestore API.
Go to the API Access navigation menu item and click on the Create an OAuth 2.0 client ID... blue button.
Select Chrome Application and enter your application ID (same ID displayed in the extensions management page).
Put this client ID in your manifest.json. You only need the userinfo.email scope.
{
  ...
  "oauth2": {
    "client_id": "171239695530-3mbapmkhai2m0qjb2jgjp097c7jmmhc3.apps.googleusercontent.com",
    "scopes": [
      "https://www.googleapis.com/auth/userinfo.email"
    ]
  }
  ...
}
Get and Use the Google Auth Token
chrome.identity.getAuthToken({ 'interactive': true }, function(token) {
  // console.log("token: " + token);
  let credential = firebase.auth.GoogleAuthProvider.credential(null, token);
  firebase.auth().signInWithCredential(credential)
    .then((result) => {
      // console.log("Login successful!");
      DoWhatYouWantWithTheUserObject(result.user);
    })
    .catch((error) => {
      console.error(error);
    });
});
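The snippet above uses the namespaced (v8/compat) Firebase API. If your extension uses the modular (v9+) SDK instead, a roughly equivalent sketch looks like the following; this is an untested adaptation that assumes the Firebase app has already been initialized elsewhere, and the chrome.identity part stays the same:
import { getAuth, GoogleAuthProvider, signInWithCredential } from "firebase/auth";

chrome.identity.getAuthToken({ interactive: true }, async (token) => {
  // Wrap the Chrome identity token in a Firebase credential, as above.
  const credential = GoogleAuthProvider.credential(null, token);
  try {
    const result = await signInWithCredential(getAuth(), credential);
    console.log("Signed in as", result.user.uid);
  } catch (error) {
    console.error(error);
  }
});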
Have fun with your Firebase Service...

How to verify a HS256 signed JWT Token created with Keycloak authentication provider on jwt.io

I am trying to verify an HS256 JWT token generated with a locally running Keycloak authentication provider on https://jwt.io.
The Keycloak instance is running on my local machine inside a Docker container. I have applied almost the same steps as described in this answer (which, by contrast, applies the RS algorithm instead and works as described): https://stackoverflow.com/a/55002225/1534753
My validation procedure is very simple:
1.) Request the token (with Postman) from my local Docker Keycloak instance with:
POST http://localhost:8080/auth/realms/dev/protocol/openid-connect/token
2.) Copy the token contents into jwt.io's "Encoded" section
3.) Verify that the header and payload are as expected and correct
4.) Copy the client secret from my Keycloak instance's admin dashboard
5.) Paste the secret into the "VERIFY SIGNATURE" section on jwt.io; the "Encoded" token section then changes, resulting in an invalid signature and an invalid (i.e. different) token.
My core question is: what am I missing here? Why does the token change when I apply the expected secret? Am I applying the right secret, the one from the client? If I understand the JWT infrastructure and standard correctly, it should stay the same if the secret (with the expected algorithm applied) is valid. My suspicion is that something about the way Keycloak creates the JWT is specific. I have not touched the HS256 algorithm provider in Keycloak; everything is the default from the Docker installation guide for Keycloak. The settings related to the token and algorithm are set up to use HS256, and the algorithm is specified as expected in the JWT's header section, which can be verified once the encoded token is pasted into jwt.io.
I need this to work because I am trying to apply the same JWT validation process inside a .NET Core web API application. I have encountered this whole issue there, i.e. inside System.IdentityModel.Tokens.Jwt, where the JwtSecurityTokenHandler.ValidateSignature method reports an invalid signature and ultimately throws an exception.
On a side note, I am requesting the token with Postman and its Authorization feature.
One more side note: I have a user "John" who belongs to my "Demo" realm; I use this user to request an access token from Keycloak.
To get the secret used for signing/verifying HS256 tokens, try using the following SQL:
SELECT value FROM component_config CC INNER JOIN component C ON(CC.component_id = C.id) WHERE C.realm_id = '<realm-id-here>' and provider_id = 'hmac-generated' AND CC.name = 'secret';
If you use the resulting secret to verify the tokens, the signature should match. I'm not sure if this secret is available through the UI; probably not.
Source: https://keycloak.discourse.group/t/invalid-signature-with-hs256-token/3228/3
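If you want to cross-check the signature programmatically rather than on jwt.io, here is a minimal sketch using the jsonwebtoken npm package (my own choice of library, not something from the original answer). Note that, depending on the Keycloak version, the value stored in component_config may be base64-encoded, in which case it has to be decoded before being used as the HMAC key:
const jwt = require("jsonwebtoken");

// Placeholders to fill in yourself: the secret returned by the SQL query above
// and an access token freshly issued by the openid-connect/token endpoint.
// If the stored secret turns out to be base64-encoded, decode it first, e.g.
// Buffer.from(secretFromDb, "base64").
const secret = "<secret from component_config>";
const token = "<access token copied from Postman>";

try {
  const claims = jwt.verify(token, secret, { algorithms: ["HS256"] });
  console.log("Signature OK:", claims);
} catch (err) {
  console.error("Verification failed:", err.message);
}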
You can try using Keycloak Gatekeeper.
If you want to verify the token that way, you need to change the Client Authenticator to "Signed JWT with client secret"; otherwise, you can use the "Gatekeeper" option. You can read more about it here.

CAS authentication failing with Drupal but working as standalone

I am trying to set up a CAS server locally, and I have Drupal running locally as well. I am using MongoDB for the CAS ticket registry and user authentication. For the CAS service registry I am using a file-based JSON service registry.
My Service registry:
{
  "#class": "org.apereo.cas.services.RegexRegisteredService",
  "id": 3,
  "serviceId": "http(s)?:\\/\\/relo.local(:\\d{4,5})?(\\/.*)?$",
  "name": "relo.local",
  "evaluationOrder": 10,
  "accessStrategy": {
    "#class": "org.apereo.cas.services.DefaultRegisteredServiceAccessStrategy",
    "enabled": true,
    "ssoEnabled": true
  },
  "attributeReleasePolicy": {
    "#class": "org.apereo.cas.services.ReturnAllAttributeReleasePolicy"
  }
}
In MongoDB I created a collection called accounts in which I have created some dummy user records like this:
/* 1 */
{
    "_id" : ObjectId("5c24f234e51c56a02af5873f"),
    "username" : "casuser",
    "password" : "casuser",
    "firstname" : "wohn",
    "lastname" : "smith",
    "mail" : "casuser#test.com"
}
/* 2 */
{
    "_id" : ObjectId("5c24f24de51c56a02af58757"),
    "username" : "wasuser",
    "password" : "wasuser",
    "firstname" : "wohn",
    "lastname" : "smith",
    "mail" : "wasuser#test.com"
}
For the ticket registry, I do not need to create any collection; CAS takes care of creating the ticket registry collection and inserting a record into it when I try to log in.
Drupal is using the cas module, which uses the phpCAS library to connect to the CAS server.
Drupal version: 7.34
phpCAS version: 1.3.0
CAS Version: 6.1.0-RC1-SNAPSHOT
CAS provides its own login screen. After all this configuration, I am able to log in to the CAS server with the casuser and wasuser accounts.
My cas.properties file:
# Required CAS settings
cas.server.name=https://localhost:8443
cas.server.prefix=${cas.server.name}/cas
# Log4j config file location
logging.config: file:/etc/cas/config/log4j2.xml
# Control log levels via properties
logging.level.org.apereo.cas=DEBUG
# Restrict admin endpoints (like /status) to localhost
# cas.adminPagesSecurity.ip=127\.0\.0\.1
# Authenticate if any handler succeeds
cas.authn.policy.any.tryAll=false
# Disable authentication with a static list of credentials
# If below line is commented then you can use default
# username/password:casuser/Mellon
cas.authn.accept.users=ram::ram,shyam::shyam
# Ticket Grant Cookie (TGC) encryption key
cas.tgc.crypto.encryption.key=<my key>
# Ticket Grant Cookie (TGC) Signing key
cas.tgc.crypto.signing.key=<my key>
# Webflow encryption key
cas.webflow.crypto.encryption.key=<my key>
# Webflow signing key
cas.webflow.crypto.signing.key=<my key>
# Embedded Tomcat settings
server.servlet.context-path=/cas
server.port=8443
server.ssl.keyStore=file:/etc/cas/thekeystore
server.ssl.keyStorePassword=changeit
server.ssl.keyPassword=changeit
# JSON Service Registry
cas.serviceRegistry.json.location=file:/etc/cas/config/services-staging
# MongoDb Ticket registry
cas.ticket.registry.mongo.host=localhost
cas.ticket.registry.mongo.port=27017
cas.ticket.registry.mongo.userId=casDbAdmin
cas.ticket.registry.mongo.password=admin
cas.ticket.registry.mongo.databaseName=casdb
cas.ticket.registry.mongo.authenticationDatabaseName=casdb
# MongoDb Authentication
cas.authn.mongo.host=localhost
cas.authn.mongo.port=27017
cas.authn.mongo.userId=casDbAdmin
cas.authn.mongo.password=admin
cas.authn.mongo.databaseName=casdb
cas.authn.mongo.authenticationDatabaseName=casdb
cas.authn.mongo.usernameAttribute=username
cas.authn.mongo.attributes=
cas.authn.mongo.passwordAttribute=password
cas.authn.mongo.collection=accounts
# Authentication Policy
cas.authn.policy.requiredHandlerAuthenticationPolicyEnabled=true
# Default attributes.
cas.authn.attributeRepository.defaultAttributesToRelease=firstname,lastname,mail
# Spring Webflow
cas.webflow.autoconfigure=true
cas.webflow.alwaysPauseRedirect=false
cas.webflow.refresh=true
cas.webflow.redirectSameState=false
cas.webflow.session.lockTimeout=30
cas.webflow.session.compress=false
cas.webflow.session.maxConversations=5
cas.webflow.session.storage=true
I have configured Drupal to use the local CAS server for authentication. When I try to access Drupal, it redirects me to the CAS login screen. After entering credentials in the login form and submitting, it fails and shows me the error below, which I am not able to figure out. I am not very good with Java. CAS embeds Spring Webflow, and I think the error is related to the webflow. During the authentication process, CAS performs something called principal resolution and attribute resolution, which decides which authentication handler to use and how many attributes to attach to the response.
Error:
2019-01-24 14:35:32,348 INFO [org.apereo.inspektr.audit.support.Slf4jLoggingAuditTrailManager] - <Audit trail record BEGIN
=============================================================
WHO: casuser
WHAT: TGT-5-*****b6YD2V8OBQ4X-jet
ACTION: TICKET_GRANTING_TICKET_CREATED
APPLICATION: CAS
WHEN: Thu Jan 24 14:35:32 IST 2019
CLIENT IP ADDRESS: 0:0:0:0:0:0:0:1
SERVER IP ADDRESS: 0:0:0:0:0:0:0:1
=============================================================
>
2019-01-24 14:35:32,353 ERROR [org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/cas].[dispatcherServlet]] - <Servlet.service() for servlet [dispatcherServlet] in context with path [/cas] threw exception [Request processing failed; nested exception is org.springframework.webflow.execution.ActionExecutionException: Exception thrown executing org.apereo.cas.web.flow.GenerateServiceTicketAction#147375b3 in state 'generateServiceTicket' of flow 'login' -- action execution attributes were 'map[[empty]]'] with root cause>
java.lang.NullPointerException: null
Full Error Stack Trace: https://pastebin.com/vEvcvFte
Any help is appreciated. I have been struggling with this error for days and am not able to figure out the issue. Please help.
You may want to consider force-updating your SNAPSHOT. While your logs don't show the commit id, it's possible you are running an old version with a bug that has since been fixed. If you examine the readme file of the project, you will find instructions on how to update the snapshot version via Gradle.
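For what it's worth, a generic way to force Gradle to re-resolve snapshot dependencies is shown below; the CAS overlay's readme may define its own task for this, so treat it as an assumption rather than the project's documented procedure:
./gradlew clean build --refresh-dependencies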

How do I automatically authorize all endpoints with Swagger UI?

I have an entire API deployed and accessible with Swagger UI. It uses Basic Auth over HTTPS, and one can easily hit the Authorize button and enter credentials and things work great with the nice Try it out! feature.
However, I would like to make a public sandboxed version of the API with a shared username and password, that is always authenticated; that is, no one should ever have to bring up the authorization dialog to enter credentials.
I tried to enter an authorization based on the answer from another Stack Overflow question by putting the following code inside a script element on the HTML page:
window.swaggerUi.load();
swaggerUi.api.clientAuthorizations.add("key",
  new SwaggerClient.ApiKeyAuthorization(
    "Authorization", "Basic dXNlcm5hbWU6cGFzc3dvcmQ=", "header"));
However, when I hit the Try it out! button the authorization is not used.
What would be the proper way to go about globally setting the auth header on all endpoints, so that no user has to enter the credentials manually?
(I know that might sound like a weird question, but like I mention, it is a public username/password.)
If you use Swagger UI v.3.13.0 or later, you can use the following methods to authorize the endpoints automatically:
preauthorizeBasic – for Basic auth
preauthorizeApiKey – for API keys and OpenAPI 3.x Bearer auth
To use these methods, the corresponding security schemes must be defined in your API definition. For example:
openapi: 3.0.0
...
components:
  securitySchemes:
    basicAuth:
      type: http
      scheme: basic
    api_key:
      type: apiKey
      in: header
      name: X-Api-Key
    bearerAuth:
      type: http
      scheme: bearer

security:
  - basicAuth: []
  - api_key: []
  - bearerAuth: []
Call preauthorizeNNN from the onComplete handler, like so:
// index.html
const ui = SwaggerUIBundle({
  url: "https://my.api.com/swagger.yaml",
  ...
  onComplete: function() {
    // Default basic auth
    ui.preauthorizeBasic("basicAuth", "username", "password");
    // Default API key
    ui.preauthorizeApiKey("api_key", "abcde12345");
    // Default Bearer token
    ui.preauthorizeApiKey("bearerAuth", "your_bearer_token");
  }
})
In this example, "basicAuth", "api_key", and "bearerAuth" are the key names of the security schemes as specified in the API definition.
I found a solution, using PasswordAuthorization instead of ApiKeyAuthorization.
The correct thing to do is to add the following line into the onComplete handler:
swaggerUi.api.clientAuthorizations.add("basicAuth",
  new SwaggerClient.PasswordAuthorization(
    "8939927d-4b8a-4a69-81e4-8290a83fd2e7",
    "fbb7a689-2bb7-4f26-8697-d15c27ec9d86"));
swaggerUi is passed to the callback, so this is the value to use. Also, make sure the name of your auth object matches the name in the YAML file.

Freeradius no authentication method found

I have an Asterisk server with a FreeRADIUS server on the same machine, and I am trying to use RADIUS to authenticate whether a user can make a call or not, but I am getting the following error when a call is made:
ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
Failed to authenticate the user.
Is there something that I am missing in one of the RADIUS configuration files that I have to add?
The issue is that no module in the authorize section of your virtual server has taken responsibility for processing the request.
You should remove the contents of the authorize section, and list the following modules:
authorize {
    pap
    chap
    mschap
    digest
    eap
}
You should then run the server in debug mode (radiusd -X) to see which module is taking responsibility for the request (you'll see one return ok or updated where the others return noop). We'll call this the auth module.
Once you've figured out which module will take responsibility for the request, you'll need to provide a suitably hashed password.
Here are the password hashes that will work with the different modules.
pap - any
chap - Cleartext-Password, CHAP-Password
mschap - Cleartext-Password, NT-Password
digest - Cleartext-Password, Digest-HA1
eap - Depends on inner method (respond to this answer and I can give further guidance).
For testing you can put the password in a flat file local to the server. The module which deals with these flat files is the files module.
To add entries to the users file, first truncate /etc/raddb/users (alter for your installation).
Then add the following entry to the top:
<username> <password attr> := <password>
With values in <> replaced with the real values.
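For example, a hypothetical test entry that the pap module would accept (Cleartext-Password also works with the chap, mschap, and digest modules, per the list above) could look like:
bob    Cleartext-Password := "s3cretpass"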
Remove the unused modules in authorize, and add the files module at the top.
authorize {
    files
    <auth module>
}
Then remove all the modules from authenticate and add the <auth module>:
authenticate {
    <auth module>
}
That should get you up and running. If no module takes responsibility for the request, please post the list of attributes in the request from the top of the debug output, and I'll help you identify it.
You need to configure your RADIUS server to add the missing headers.
You can enable full debugging on the RADIUS server; it will show you all the packets the server receives.
FreeRADIUS allows adding any header to a packet at any stage; see the documentation.
