FreeRADIUS no authentication method found - Asterisk

I have an Asterisk server and a FreeRADIUS server on the same machine, and I am trying to use RADIUS to check whether a user is allowed to make a call, but I am getting the following error when calling:
ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
Failed to authenticate the user.
Is there something I am missing that I have to add in one of the RADIUS configuration files?

The issue is that no module in the authorize section of your virtual server has taken responsibility for processing the request.
You should remove the contents of the authorize section, and list the following modules:
authorize {
pap
chap
mschap
digest
eap
}
You should then run the server in debug mode (radiusd -X) to see which module takes responsibility for the request (you'll see one return ok or updated where the others return noop). We'll call this the auth module.
Once you've figured out which module will take responsibility for the request you'll need to provide a suitably hashed password.
Here are the password attributes that will work with the different modules.
pap - any
chap - Cleartext-Password, CHAP-Password
mschap - Cleartext-Password, NT-Password
digest - Cleartext-Password, Digest-HA1
eap - Depends on inner method (respond to this answer and I can give further guidance).
For testing you can put the password in a flat file local to the server. The module which deals with these flat files is the files module.
To add entries to the users file, first truncate /etc/raddb/users (adjust the path for your installation).
Then add the following entry to the top:
<username> <password attr> := <password>
With the values in <> replaced by the real values.
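For example, a minimal test entry (the username and password here are purely illustrative) would be:
alice Cleartext-Password := "testing123"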
Remove the unused modules in authorize, and add the files module at the top.
authorize {
files
<auth module>
}
Then remove all the modules from authenticate and add the <auth module>
authenticate {
<auth module>
}
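For instance, if pap turned out to be the module that took responsibility for the request (purely as an illustration; substitute whichever module the debug output showed), the two sections would end up looking like this:
authorize {
files
pap
}
authenticate {
pap
}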
That should get you up and running. If no module takes responsibility for the request, please post the list of attributes in the request from the top of the debug output, and I'll help you identify it.

You need to configure your RADIUS server to add the missing attributes.
You can enable full debugging on the RADIUS server; it will show you every packet the server receives.
FreeRADIUS lets you add any attribute to a packet at any processing stage; see the documentation.
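For example, a minimal unlang sketch for adding an attribute in the authorize section of your virtual server (the attribute and value here are just placeholders):
authorize {
update request {
NAS-Identifier := "asterisk"
}
# ... existing modules ...
}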

Keycloak starts with a new realm and some client configurations

I am trying to use Keycloak as the authentication service in my design. In my case, when Keycloak starts, I need one more realm besides the default master realm. Let's say the new realm is called "demo".
So when Keycloak starts, it should have two realms (master and demo).
In addition, in the demo realm I need to configure the default client "admin-cli" to enable "Full Scope Allowed", and also add some built-in mappers to this client.
Given this, I wonder whether I can use something like an initialization file that Keycloak loads when starting?
Or do I need to use the Keycloak client APIs to perform these operations (e.g., the Java Keycloak admin client)?
Thanks in advance.
You can try the following:
Create the Realm;
Set all the options that you want;
Go to Manage > Export;
Switch Export groups and roles to ON;
Switch Export clients to ON;
Export.
That will export a .json file with the configurations.
Then you can test it by deleting your demo realm and:
Go to Add Realm;
Choose the .json file that was exported;
Click Create.
Check whether the configurations you changed are still present in the demo realm; if they are, it means you can use this file to import the realm from. Otherwise, for the options that were not persisted, you will have to create them via the Admin REST API.
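If you would rather have the import happen automatically at startup instead of through the admin console, the Keycloak Docker images can import a realm file on boot. A rough sketch for the legacy jboss/keycloak image (file name, mount path, and credentials are placeholders; newer Quarkus-based images instead use bin/kc.sh start --import-realm with the exported file placed in /opt/keycloak/data/import):
# start Keycloak and import demo-realm.json on boot (legacy jboss/keycloak image)
docker run -d \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  -e KEYCLOAK_IMPORT=/tmp/demo-realm.json \
  -v $(pwd)/demo-realm.json:/tmp/demo-realm.json \
  -p 8080:8080 \
  jboss/keycloak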

How to verify an HS256-signed JWT token created with the Keycloak authentication provider on jwt.io

I am trying to verify an HS256 JWT token generated with a locally running Keycloak authentication provider on https://jwt.io.
The Keycloak instance is running on my local machine inside a Docker container. I have applied almost the same steps as described in this answer (which, in contrast, uses the RS256 algorithm and works as described): https://stackoverflow.com/a/55002225/1534753
My validation procedure is very simple:
1.) Request the token (with Postman) from my local Docker Keycloak instance with:
POST requesting http://localhost:8080/auth/realms/dev/protocol/openid-connect/token
2.) Copy the token contents inside the jwt.io's "Encoded" section
3.) I verify that the header and payload are as expected and correct
4.) I copy the client secret from my Keycloak instance's admin dashboard; you can see the reference in the image below:
5.) I paste the secret into the "VERIFY SIGNATURE" section on jwt.io, and the "Encoded" token section changes, resulting in an invalid signature and an invalid (i.e. different) token.
My core question is: what am I missing here? Why does the token change when I apply the expected secret? Am I applying the right secret, the one from the client? If I understand the JWT infrastructure and standard correctly, the token should stay the same if the secret (with the expected algorithm applied) is valid. My suspicion is that something about JWT creation in Keycloak is specific. I have not touched the HS256 algorithm provider in Keycloak; everything is used as default from the Docker installation guide. The settings related to the token and algorithm are set up to use HS256, and the algorithm is specified correctly in the JWT's header section, which can be verified after the encoded token is pasted into jwt.io.
I need this to work, as I am trying to apply the same JWT validation process inside a .NET Core web API application. I have run into this whole issue there, i.e. inside System.IdentityModel.Tokens.Jwt and the JwtSecurityTokenHandler.ValidateSignature method, which reports an invalid signature and finally throws an exception.
On a side note, I am requesting the token with Postman and its Authorization feature; the configuration can be seen in the image below:
One more side note: I have a user "John" who belongs to my "Demo" realm. I use this user to request an access token from Keycloak.
To get the secret used for signing/verifying HS256 tokens, try using the following SQL:
SELECT value FROM component_config CC INNER JOIN component C ON(CC.component_id = C.id) WHERE C.realm_id = '<realm-id-here>' and provider_id = 'hmac-generated' AND CC.name = 'secret';
If you use the resulting secret to verify the tokens, the signature should match. I’m not sure if this secret is available through the UI, probably not.
Source: https://keycloak.discourse.group/t/invalid-signature-with-hs256-token/3228/3
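As a sanity check outside of jwt.io, you can also recompute the HS256 signature yourself from a shell. This is only a sketch: it assumes the secret is used as the raw bytes of the string you paste in (if the secret you retrieved is base64-encoded, decode it first), and the token value is a placeholder:
# the first two dot-separated parts of the JWT form the signing input
TOKEN="<paste.your.token.here>"
SIGNING_INPUT=$(printf '%s' "$TOKEN" | cut -d. -f1,2)
SECRET='<paste-the-secret-here>'
# recompute HMAC-SHA256 over the signing input and base64url-encode it
printf '%s' "$SIGNING_INPUT" | openssl dgst -sha256 -hmac "$SECRET" -binary | openssl base64 -A | tr '+/' '-_' | tr -d '='
# compare the output with the third part of the token
printf '%s' "$TOKEN" | cut -d. -f3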
You can try using Keycloak Gatekeeper.
If you want to verify the token that way, you need to change the Client Authenticator to "Signed JWT with client secret"; otherwise you can use the Gatekeeper option. Here you can read more about it.

Presto custom PasswordAuthenticator plugin for coordinator authentication is not triggered

I created a presto custom password authenticator plugin (internal) by making a copy of the LDAP plugin and modifying it. You can see that code here: https://github.com/prestodb/presto/tree/master/presto-password-authenticators/src/main/java/com/facebook/presto/password.
I created copies of the Authenticator, AuthenticatorFactory, and the config, and modified them to basically just take a user/password from the config and to only allow that user in. I also put the new class in the PasswordAuthenticatorPlugin registration code.
I can see the plugin loading when presto is started, but it doesn't appear to do anything despite no errors being present. What am I missing?
Note: I had already found a solution to this; I'm just recording it on SO since I originally came here and found no help.
To make a custom password plugin work, you need HTTPS enabled for communication with the coordinator. You can see this requirement at the bottom of the documentation:
https://prestodb.github.io/docs/current/develop/password-authenticator.html
Additionally, the coordinator must be configured to use password authentication and have HTTPS enabled.
So, the steps to make it work are:
Make sure your main config.properties has "http-server.authentication.type=PASSWORD".
Make sure you add a password-authenticator.properties file next to config.properties, with content like the sample in the link above (see the sketch just after these steps). Make sure the name matches the string your authenticator factory returns, and add your own configuration properties instead (user name and password).
Set up a JKS store or a real certificate (some instructions here from Presto for JKS: https://prestodb.github.io/docs/current/security/tls.html).
Add SSL config to your config.properties.
http-server.https.enabled=true
http-server.https.port=8443
http-server.https.keystore.path=/etc/presto-keystore/keystore.jks
http-server.https.keystore.key=password123
Set up your JDBC driver to use the same key store.
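For reference, a minimal password-authenticator.properties might look something like the following; the authenticator name and the two property keys are whatever your custom AuthenticatorFactory declares, so treat them as placeholders:
password-authenticator.name=internal
internal.user=admin
internal.password=change-me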
I also wrote up a blog post with a bit more detail, in case any of that doesn't make sense. After doing all this, you should find that Presto requires a password and enforces your plugin.
https://coding-stream-of-consciousness.com/2019/06/18/presto-custom-password-authentication-plugin-internal/

AWS API Gateway as service proxy for S3 upload

I have been reading about creating an API which can be used to upload objects directly to S3. I have followed the guides from Amazon with little success.
I am currently getting the following error:
{"message":"Missing Authentication Token"}
My API call configuration:
The role ARN assigned is not in the image, but has been set up and assigned.
The "Missing Authentication Token" error can be interpreted as either
Enabling AWS_IAM authentication for your method and making a request to it without signing it with SigV4, or
Hitting a non-existent path in your API.
For 1, if you use the generated SDK the signing is done for you.
For 2, if you're making raw http requests make sure you're making requests to /<stage>/s3/{key}
BTW, the path override for S3 PUTs needs to be {bucket}/{key}, not just {key}. You may need to create a two-level hierarchy with bucket as the parent, or just hard-code the bucket name in the path override if it will always be the same. See: http://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html
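For a quick check of point 2, a raw request to a deployed stage would look roughly like this (stage name, API id, region, and key are placeholders; if AWS_IAM auth is enabled on the method, the request must also carry SigV4 headers, e.g. via Postman's AWS Signature helper or a tool such as awscurl):
# PUT an object through the API Gateway S3 proxy (unsigned; only works if AWS_IAM auth is off)
curl -X PUT --data-binary @myfile.txt "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/s3/myfile.txt"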

OpenAM J2EE agent installation bringing down Tomcat

OpenAM version 12, agent versions 3.5 and 3.3, Tomcat version 7
I have tried to follow the link https://forums.alfresco.com/forum/installation-upgrades-configuration-integration/authentication-ldap-sso/sso-openam-06052012 to set up my J2EE agent. Let me paste the steps after the question (see at the end),
but I am getting the error described in this question:
Not able to configure J2ee agent on adding my customized data store for users
I have installed and uninstalled version 3.5 multiple times and have also tried the previous version.
There is a nice discussion on this topic at http://database.developer-works.com/article/16009911/%22Cannot+obtain+Application+SSO+token%22+error
but it did not help me much.
I am using LDAP, so I have used an LDAP realm, and subjects are showing up OK. I am also observing that the Policies tab has changed quite a bit from how it is described in the blogs.
Now, with this roadblock, I am not sure how to proceed, as the error gives me no clue what to do. I even added a file named AMConfig.properties to the classpath with the agent's username and password, and also tried the OpenAM admin's username and password as suggested in the discussion mentioned above, but that did not help either.
The issue is that Tomcat now does not start, giving an error that the AMConfig.properties properties are needed.
I know the OpenAM realm setup is good, as I am able to log in via this realm to another application (Liferay) where I just have to give the URL to use the OpenAM integration. After uninstalling the agent, Tomcat starts without any error and I am able to log in to the application.
-------------------Step copied from 1st link(modified)--------------------------
1. Configure your OpenAM agent (tried both 3.5 and 3.3 version on tomcat 7)
a. Log into OpenAM as the admin user and navigate to Access Control -> (Your Realm) - in my case an LDAP realm (another application uses it without issue)
b. Select Policies -> New Policy
c. Enter Share as the policy name and then create 2 new URL Policy agent rules
d. 1st Resource Name = http://<host>:<port>/share/*
e. 2nd Resource Name = http://alfresco.domain.com:8080/share/*?*
f. Add subjects - already part of the LDAP realm
g. Now select Agents -> J2EE - > (your J2EE agent)
h. Select the Application tab
i. Login Processing -> Login Form URI - add /share/page/dologin
j. Logout Processing -> Application Logout URL - add Map Key = share - Corresponding Map Value = /share/page/dologout
k. Not Enforced URI Processing - Add 2 entries - /share and /share/
l. Profile Attributes Processing - Select HTTP_HEADER and add Map Key = uid - Corresponding Map Value = SsoUserHeader (This is what I called my header in the alfresco-global.properties file - see below)
# Auth chain
authentication.chain=external1:external,alfrescoNtlm1:alfrescoNtlm
alfresco.authentication.allowGuestLogin=true
# SSO settings
external.authentication.enabled=true
external.authentication.defaultAdministratorUserNames=admin
external.authentication.proxyUserName=
external.authentication.proxyHeader=SsoUserHeader
NOTE: It does not seem possible to configure SSO where guest login has been disabled; there are web scripts on the Alfresco repository that need guest login.
That concludes the setup for Alfresco and OpenAM
For Share you need to have the following section uncommented in your share-config-custom.xml
<config evaluator="string-compare" condition="Remote">
   <remote>
      <ssl-config>
         <keystore-path>alfresco/web-extension/alfresco-system.p12</keystore-path>
         <keystore-type>pkcs12</keystore-type>
         <keystore-password>alfresco-system</keystore-password>
      </ssl-config>
      <connector>
         <id>alfrescoCookie</id>
         <name>Alfresco Connector</name>
         <description>Connects to an Alfresco instance using cookie-based authentication</description>
         <class>org.alfresco.web.site.servlet.SlingshotAlfrescoConnector</class>
      </connector>
      <connector>
         <id>alfrescoHeader</id>
         <name>Alfresco Connector</name>
         <description>Connects to an Alfresco instance using header and cookie-based authentication</description>
         <class>org.alfresco.web.site.servlet.SlingshotAlfrescoConnector</class>
         <userHeader>SsoUserHeader</userHeader>
      </connector>
      <endpoint>
         <id>alfresco</id>
         <name>Alfresco - user access</name>
         <description>Access to Alfresco Repository WebScripts that require user authentication</description>
         <connector-id>alfrescoHeader</connector-id>
         <endpoint-url>http://alfreso.domain.com:8080/alfresco/wcs</endpoint-url>
         <identity>user</identity>
         <external-auth>true</external-auth>
      </endpoint>
   </remote>
</config>
Notice I am not using the SSL cert, in my alfrescoHeader connector I have used SsoUserHeader (as set up in OpenAM), and the endpoint uses the alfrescoHeader connector.
Now you need to add the OpenAM filter to the Share web.xml file
Add the following filter just before the Share SSO authentication support filter
<filter>
   <filter-name>Agent</filter-name>
   <filter-class>com.sun.identity.agents.filter.AmAgentFilter</filter-class>
</filter>
Add the following filter mapping to the filter-mapping section
<filter-mapping>
   <filter-name>Agent</filter-name>
   <url-pattern>/*</url-pattern>
   <dispatcher>REQUEST</dispatcher>
   <dispatcher>INCLUDE</dispatcher>
   <dispatcher>FORWARD</dispatcher>
   <dispatcher>ERROR</dispatcher>
</filter-mapping>
----- End ----------
The error message is a bit misleading: "Cannot obtain application SSO token" in general means that the agent was unable to authenticate itself. When you install the agent, it asks for a profile name and a password file; those values need to correspond to the agent profile configured within OpenAM.
To test whether those agent credentials are correct, you could simply try to authenticate as the agent by making the following request:
curl -d "username=profilename&password=password&uri=realm=/%26module=Application" http://aldaris.sch.bme.hu:8080/openam/identity/authenticate
In the above command the realm value needs to be the same as the value for the "com.sun.identity.agents.config.organization.name" property defined in OpenSSOAgentBootstrap.properties (under the agent's install directory).
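For example, the relevant line in OpenSSOAgentBootstrap.properties typically looks like the following (the value shown here is just the default top-level realm; yours may differ):
com.sun.identity.agents.config.organization.name=/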
A bad username/password combination is only one of the possible root causes for this exception, though. It is also possible that during startup the agent was unable to connect to OpenAM to authenticate itself. In those cases the problem could be:
network error, firewall issues preventing the agent from contacting OpenAM
SSL trust issues: the agent's JVM does not trust the certificate of OpenAM's container (only a problem if you've installed the agent by providing OpenAM's HTTPS URL and the certificate is self-signed or simply not trusted by the JVM)
