Integrating Keycloak with Alfresco

I use Alfresco in my Spring Boot application and want to use the authentication subsystem (identity-service). I have added the configuration to alfresco-global.properties, but it is not connecting to Keycloak. There are no errors in the log, although I can see the Authentication subsystem starting:
[ restartedMain] a.i.IdentityServiceDeploymentFactoryBean : Keycloak JWKS URL: http://localhost:8011/auth/realms/alfresco-dbp/protocol/openid-connect/certs
2019-10-24 16:41:34.956 INFO --- [ restartedMain] a.i.IdentityServiceDeploymentFactoryBean : Keycloak Realm: alfresco-dbp
2019-10-24 16:41:34.956 INFO --- [ restartedMain] a.i.IdentityServiceDeploymentFactoryBean : Keycloak Client ID: alfresco-client
2019-10-24 16:41:34.958 INFO --- [ restartedMain] o.a.r.m.s.ChildApplicationContextFactory : Startup of 'Authentication' subsystem, ID: [Authentication, managed, identity-service1] complete

Alfresco provides the Identity Service for SSO. I have integrated Alfresco 6.2.2 with Identity Service 1.3 and it is working fine.
If you want to run a standalone Keycloak instead, it works with the same configuration. I have tested Keycloak 9.0.3 and Keycloak 11.0.0 (the version underlying Identity Service 1.3), both integrated with Alfresco and working correctly.
The configuration in the Alfresco files remains the same.
Below are the alfresco-global.properties settings; Keycloak is assumed to be running on port 8081:
authentication.chain=identity-service-1:identity-service,alfrescoNtlm-1:alfrescoNtlm
identity-service.auth-server-url=http://localhost:8081/auth
identity-service.enable-basic-auth=true
identity-service.realm=sharerealm
identity-service.resource=alfresco
csrf.filter.referer=http://localhost:8080
csrf.filter.origin=http://localhost:8080/*
aims.enabled=true
aims.realm=sharerealm
aims.resource=alfresco
aims.authServerUrl=http://localhost:8081/auth
aims.publicClient=true
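The question above shows the subsystem starting even though Keycloak was never reached, so before restarting Alfresco it can be worth confirming that the auth server and realm in these properties actually respond. A minimal Python sketch (assuming you have the requests library available; the URL and realm are taken from the properties above):
import requests

# OpenID Connect discovery document for the realm configured in alfresco-global.properties above.
discovery_url = "http://localhost:8081/auth/realms/sharerealm/.well-known/openid-configuration"

resp = requests.get(discovery_url, timeout=5)
resp.raise_for_status()  # a 404 here usually means the realm name or auth server URL is wrong

oidc = resp.json()
print("issuer:        ", oidc["issuer"])
print("token endpoint:", oidc["token_endpoint"])
print("JWKS URI:      ", oidc["jwks_uri"])  # the same kind of URL Alfresco prints at startup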
Entry in share-config-custom.xml file:
<!-- AIMS -->
<config evaluator="string-compare" condition="AIMS">
    <enabled>true</enabled>
    <realm>sharerealm</realm>
    <resource>alfresco</resource>
    <authServerUrl>http://localhost:8081/auth</authServerUrl>
    <sslRequired>none</sslRequired>
    <publicClient>true</publicClient>
    <autodetectBearerOnly>true</autodetectBearerOnly>
    <alwaysRefreshToken>true</alwaysRefreshToken>
    <principalAttribute>email</principalAttribute>
    <enableBasicAuth>true</enableBasicAuth>
</config>
If you are working with ADF or ADW (Alfresco Digital Workspace), add the following entries to app.config.json:
"$schema": "../node_modules/#alfresco/adf-core/app.config.schema.json",
"ecmHost": "http://localhost:8080",
"aosHost": "http://localhost:8080/alfresco/aos",
"baseShareUrl": "http://localhost:8080/adw",
"providers": "ECM",
"authType": "OAUTH",
"oauth2": {
"host": "http://localhost:8081/auth/realms/sharerealm",
"clientId": "alfresco",
"scope": "openid",
"secret": "",
"implicitFlow": true,
"silentLogin": true,
"publicUrls": [
"**/preview/s/*",
"**/settings"
],
"redirectSilentIframeUri": "http://localhost:8080/adw/assets/silent-refresh.html",
"redirectUri": "/adw",
"redirectUriLogout": "/adw/#/login"
},

Related

"gatsby-source-graphql" doesn't link to Drupal 8

module.exports = {
  siteMetadata: {
    title: `Gatsby Default Starter`,
    description: `Kick off your next, great Gatsby project with this default starter. This barebones starter ships with the main Gatsby configuration files you might need.`,
    author: `@gatsbyjs`,
  },
  plugins: [
    `gatsby-plugin-react-helmet`,
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `images`,
        path: `${__dirname}/src/images`,
      },
    },
    `gatsby-transformer-sharp`,
    `gatsby-plugin-sharp`,
    {
      resolve: `gatsby-plugin-manifest`,
      options: {
        name: `gatsby-starter-default`,
        short_name: `starter`,
        start_url: `/`,
        background_color: `#663399`,
        theme_color: `#663399`,
        display: `minimal-ui`,
        icon: `src/images/gatsby-icon.png`, // This path is relative to the root of the site.
      },
    },
    {
      resolve: "gatsby-source-graphql",
      options: {
        // Arbitrary name for the remote schema Query type
        typeName: "DRUPAL",
        // Field under which the remote schema will be accessible. You'll use this in your Gatsby query
        fieldName: "drupal",
        // Url to query from
        url: "https://intl-pgs-rsm-growth-platform.pantheonsite.io/graphql",
      },
    },
  ],
}
That's my gatsby-config.js file.
When I run gatsby clean && gatsby develop, or just gatsby build, I get:
success createSchemaCustomization - 0.005s
ERROR #11321 PLUGIN
"gatsby-source-graphql" threw an error while running the sourceNodes lifecycle:
Unexpected token < in JSON at position 0
ServerParseError: Unexpected token < in JSON at position 0
- JSON.parse
- index.js:35
[test-gatsby]/[apollo-link-http-common]/lib/index.js:35:25
- next_tick.js:68 process._tickCallback
internal/process/next_tick.js:68:7
not finished source and transform nodes - 0.506s
My Drupal 8 site is new, with the graphql module installed, and the Gatsby site is brand new too.
I started getting this issue on Monday after everything had been working fine for a while. There were no code changes on my Gatsby side, but all of a sudden I'm no longer able to get Drupal data.
There seem to be few examples of using the "gatsby-source-graphql" plugin, so if anyone can help, please do.
In my case, it was a permissions issue in Drupal. I solved it by going to admin → people → permissions and granting all GraphQL-related permissions to the anonymous user.
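If you want to check whether anonymous access is what is failing, a small diagnostic sketch (Python with requests, which I am assuming you have installed; the endpoint is the one from the question) shows whether /graphql answers with JSON or with an HTML error page, which is exactly what produces the "Unexpected token <" message:
import requests

# POST a trivial query with no credentials, the same way gatsby-source-graphql does
# when no headers are configured in gatsby-config.js.
resp = requests.post(
    "https://intl-pgs-rsm-growth-platform.pantheonsite.io/graphql",
    json={"query": "{ __typename }"},
    timeout=10,
)

print(resp.status_code, resp.headers.get("Content-Type"))
# An HTML content type (or a 403) means Drupal is refusing anonymous access,
# which Gatsby then reports as "Unexpected token < in JSON at position 0".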
The Drupal GraphQL module requires authentication using tokens. You can use Simple OAuth or JWT. I used Simple OAuth and resolved my issue with the following steps:
1. Install the Simple OAuth module with Composer so it pulls in its dependencies ($ composer require drupal/simple_oauth), then enable the module.
2. Create a user role for your third-party app and assign content-viewing permissions to that role.
3. In admin/config/people/simple_oauth, add a token expiration time, generate keys using the button provided (make sure they are generated outside of the Drupal web root), and add the paths to the public and private key files.
4. In admin/config/services/consumer, add a new consumer (or use the default consumer). Add a secret password in the Secret field, select the new role you created under Scopes, and save the config page.
5. Make a POST request to https://intl-pgs-rsm-growth-platform.pantheonsite.io/oauth/token (using curl or Postman) to generate a token, with the following body fields:
grant_type: password
client_id: The client ID generated in admin/config/services/consumer
client_secret: The secret you entered in step 4
username: A user in your Drupal site that has the role you created
password: The password assigned to that account
6. The POST request should return the access token if successful. Add your token to your app's .env.development file and update your gatsby-config.js file with the Authorization header (more info about setting up environment variables in your Gatsby app can be found here):
{
  resolve: "gatsby-source-graphql",
  options: {
    typeName: "DRUPAL",
    fieldName: "drupal",
    url: "https://intl-pgs-rsm-growth-platform.pantheonsite.io/graphql",
    headers: {
      "Authorization": `Bearer ${process.env.GATSBY_API_TOKEN}`
    },
  },
},
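For reference, the token request in step 5 could be scripted like this (a hedged Python sketch using requests; the client ID, secret, and user credentials are placeholders for the values you created above, not real ones):
import requests

# Password-grant token request against Simple OAuth (all credential values below are placeholders).
resp = requests.post(
    "https://intl-pgs-rsm-growth-platform.pantheonsite.io/oauth/token",
    data={
        "grant_type": "password",
        "client_id": "YOUR-CONSUMER-UUID",        # from admin/config/services/consumer
        "client_secret": "YOUR-CONSUMER-SECRET",   # the secret entered in step 4
        "username": "api-user",                    # a Drupal user holding the new role
        "password": "api-user-password",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["access_token"])  # store this as GATSBY_API_TOKEN in .env.development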
Video tutorials of the Simple OAuth setup steps can be found here.
More information on this setup process can be found here.

Error with Gatsby plugin Gatsby-Source-Wordpress

I'm trying to set up my first Gatsby + WordPress site, following this tutorial.
I got the site running, but at the point where I should pull data from WP I get stuck. I added the gatsby-source-wordpress plugin, and after restarting the site it throws this error:
success open and validate gatsby-configs - 0.102 s
success load plugins - 0.631 s
success onPreInit - 0.019 s
success initialize cache - 0.053 s
success copy gatsby files - 0.161 s
success onPreBootstrap - 0.040 s
info Creating GraphQL type definition for File
Path: /wp-json
The server response was "404 Not Found"
ERROR #11321 PLUGIN
"gatsby-source-wordpress" threw an error while running the sourceNodes lifecycle:
Cannot read property 'data' of undefined
TypeError: Cannot read property 'data' of undefined
- fetch.js:141 fetch
[gatsby-wordpress]/[gatsby-source-wordpress]/fetch.js:141:21
- next_tick.js:68 process._tickCallback
internal/process/next_tick.js:68:7
warn The gatsby-source-wordpress plugin has generated no Gatsby nodes. Do you need it?
success source and transform nodes - 0.327 s
success building schema - 0.404 s
success createPages - 0.019 s
success createPagesStatefully - 0.090 s
success onPreExtractQueries - 0.022 s
success update schema - 0.079 s
success extract queries from components - 0.595 s
success write out requires - 0.103 s
success write out redirect data - 0.032 s
success Build manifest and related icons - 0.263 s
success onPostBootstrap - 0.308 s
⠀
info bootstrap finished - 6.617 s
⠀
success run static queries - 0.105 s — 3/3 36.11 queries/second
success run page queries - 0.044 s — 5/5 230.97 queries/second
DONE Compiled successfully in 4851ms 10:46:42 AM
⠀
You can now view gatsby-starter-default in the browser.
⠀
http://localhost:8000/
⠀
View GraphiQL, an in-browser IDE, to explore your site's data and schema
⠀
http://localhost:8000/___graphql
⠀
Note that the development build is not optimized.
To create a production build, use npm run build
⠀
ℹ 「wdm」:
ℹ 「wdm」: Compiled successfully.
I run WP locally with MAMP and I'm able to see JSON data at http://localhost:8888/GatsbyWP/wp-json/.
Here's my gatsby-config.js file:
module.exports = {
  siteMetadata: {
    title: `Gatsby wordpress test`,
    description: `Testing...`,
    author: `@gatsbyjs`,
  },
  plugins: [
    `gatsby-plugin-react-helmet`,
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `images`,
        path: `${__dirname}/src/images`,
      },
    },
    `gatsby-transformer-sharp`,
    `gatsby-plugin-sharp`,
    {
      resolve: `gatsby-plugin-manifest`,
      options: {
        name: `gatsby-starter-default`,
        short_name: `starter`,
        start_url: `/`,
        background_color: `#663399`,
        theme_color: `#663399`,
        display: `minimal-ui`,
        icon: `src/images/gatsby-icon.png`,
      },
    },
    {
      resolve: "gatsby-source-wordpress",
      options: {
        baseUrl: `localhost:8888`,
        protocol: `http`,
        hostingWPCOM: false,
        useACF: true,
      },
    },
    `gatsby-plugin-sitemap`,
  ],
}
I'm stuck and don't have any clue what to do now. I found that other people have had a similar issue, but I didn't find any good answers or direction on where to start figuring out my problem.
Thanks in advance!
The options of gatsby-source-wordpress require the base URL of the WordPress site without the trailing slash and without the protocol. This is required.
Example: 'gatsbyjsexamplewordpress.wordpress.com' or 'www.example-site.com'
Since your WordPress install lives in the /GatsbyWP subdirectory, that subdirectory has to be part of baseUrl:
module.exports = {
  siteMetadata: {
    title: `Gatsby wordpress test`,
    description: `Testing...`,
    author: `@gatsbyjs`,
  },
  plugins: [
    {
      resolve: "gatsby-source-wordpress",
      options: {
        baseUrl: `localhost:8888/GatsbyWP`,
        protocol: `http`,
        hostingWPCOM: false,
        useACF: true,
      },
    },
  ],
}
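A quick way to sanity-check the baseUrl (including the /GatsbyWP subdirectory) is to fetch the REST index it resolves to and confirm the response is JSON rather than a 404. A small Python sketch for the MAMP setup above, assuming the requests library:
import requests

# gatsby-source-wordpress builds its routes from <protocol>://<baseUrl>/wp-json,
# so this is the URL the plugin has to be able to reach.
resp = requests.get("http://localhost:8888/GatsbyWP/wp-json/", timeout=5)

print(resp.status_code, resp.headers.get("Content-Type"))
resp.raise_for_status()
print(len(resp.json().get("routes", {})), "routes exposed")  # should be non-empty for a healthy REST API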
When sourcing from hosting services like GoDaddy, the axios/Node clients can be picky and reject the HTTPS certificate. I have seen this rectified in two ways:
1) Adding a third-party GoDaddy CA bundle, for example from https://ssl-ccp.godaddy.com/repository?origin=CALLISTO (if you have your own server, that's fine), and passing it to axios:
const https = require('https');
const fs = require('fs');
const axios = require('axios');

// Trust the downloaded CA bundle for HTTPS requests
const agent = new https.Agent({
  ca: fs.readFileSync('ca.pem')
});

axios.get(url, { httpsAgent: agent });
// or
const instance = axios.create({ httpsAgent: agent });
instance.get(url);
or, for Node itself:
export NODE_EXTRA_CA_CERTS=[your CA certificate file path]
2) But what if you are on Netlify, or hosting through GitLab or somewhere else? What worked for me was changing my Gatsby config protocol to http. This let me source from my site just fine, and since all assets are served over https anyway, everything still worked even when I deployed to my https site. This stumped me for days; I hope it helps someone.
{
  resolve: `gatsby-source-wordpress`,
  options: {
    /*
     * The base URL of the WordPress site without the trailing slash and the protocol. This is required.
     * Example : 'dev-gatbsyjswp.pantheonsite.io' or 'www.example-site.com'
     */
    baseUrl: `example.com`,
    // The protocol. This can be http or https.
    protocol: `http`,
    // Indicates whether the site is hosted on wordpress.com.
    // If false, then the assumption is made that the site is self hosted.
    // If true, then the plugin will source its content on wordpress.com using the JSON REST API V2.
    // If your site is hosted on wordpress.org, then set this to false.
    hostingWPCOM: false,
  },
},
You need to make sure you query the post types you pull in, and also make sure there is data there to be consumed. Here's what your config should look like:
{
  resolve: `gatsby-source-wordpress`,
  options: {
    baseUrl: process.env.API_URL,
    protocol: process.env.API_PROTOCOL,
    hostingWPCOM: false,
    useACF: true,
    includedRoutes: [
      "**/categories",
      "**/posts",
      "**/pages",
      "**/media",
      "**/tags",
      "**/taxonomies",
      "**/users",
      "**/menus",
      "**/portfolio",
      "**/services",
      "**/qualifications",
      "**/gallery",
      "**/logo",
      "**/location",
    ],
  },
},
This may be of help to someone in case all the recommended solutions above do not solve your problem. It may be that you are using a later version of Gatsby, and small changes were made to the way the gatsby-source-wordpress plugin works. This page was helpful; check it out too:
https://www.gatsbyjs.com/docs/how-to/sourcing-data/sourcing-from-wordpress/

Openstack API Authentication

OpenStack noob here. I have set up an Ubuntu VM with DevStack and am trying to authenticate with Keystone to obtain a token for subsequent OpenStack API calls. The identity endpoint shown on the "API Access" page in Horizon is http://<DEVSTACK_IP>/identity.
When I POST the JSON payload below to this endpoint, I get the error get_version_v3() got an unexpected keyword argument 'auth'.
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {
                        "name": "Default"
                    },
                    "password": "AdminPassword"
                }
            }
        }
    }
}
Based on the OpenStack docs, I should be hitting http://<DEVSTACK_IP>/v3/auth/tokens to obtain a token, but when I hit that endpoint I get 404 Not Found.
I'm currently using Postman to test this, but will eventually be doing it programmatically.
Does anybody have experience authenticating against the OpenStack API who can help?
Not sure whether you want to do it the Python way, but if you do, here is one approach:
from keystoneauth1.identity import v3
from keystoneauth1 import session

v3_auth = v3.Password(auth_url=V3_AUTH_URL,
                      username=USERNAME,
                      password=PASSWORD,
                      project_name=PROJECT_NAME,
                      project_domain_name="default",
                      user_domain_name="default")
v3_ses = session.Session(auth=v3_auth)
auth_token = v3_ses.get_token()
And your V3_AUTH_URL should be http://<DEVSTACK_IP>:5000/v3, since Keystone uses port 5000 by default.
If you have a multi-domain DevStack you can change the domains; otherwise they should be default.
Just in case you don't have the client library installed: pip install python-keystoneclient
Here is a good doc for you to read about it:
https://docs.openstack.org/keystoneauth/latest/using-sessions.html
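If you would rather keep exercising the raw API the way you were doing in Postman, the same credentials can be posted straight to the tokens endpoint. A sketch with requests (assuming the default DevStack Keystone port 5000 and the payload from the question; <DEVSTACK_IP> is still a placeholder):
import requests

# Raw Identity v3 password authentication; the token is returned in a response header.
payload = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"name": "Default"},
                    "password": "AdminPassword",
                }
            },
        }
    }
}

resp = requests.post("http://<DEVSTACK_IP>:5000/v3/auth/tokens", json=payload, timeout=10)
resp.raise_for_status()
print(resp.headers["X-Subject-Token"])  # send this as X-Auth-Token on subsequent API calls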
HTH

Sonata Media CDN Rackspace

I have the following problem with Sonata Media: I'm trying to use the Rackspace CDN for uploading images.
My config file looks like this, based on the current documentation:
cdn:
    server:
        path: %cdn_url%
filesystem:
    local:
        directory: %kernel.root_dir%/../web/uploads/media
        create: false
    rackspace:
        url: %rackspace.opencloud.host%
        secret:
            username: %rackspace.opencloud.username%
            apiKey: %rackspace.opencloud.api_key%
        region: LON
        containerName: projectName
        create_container: false
    replicate:
        master: sonata.media.adapter.filesystem.opencloud
        slave: sonata.media.adapter.filesystem.local
And on providers config:
providers:
    image:
        filesystem: sonata.media.filesystem.replicate
        cdn: sonata.media.cdn.server
        resizer: sonata.media.resizer.square
        allowed_extensions: ['jpg', 'png', 'gif', 'jpeg']
        allowed_mime_types: ['image/pjpeg', 'image/jpeg', 'image/png', 'image/x-png', 'image/gif']
The problem (and how I discovered this bug) is that if Rackspace is down or an incorrect username/password is provided, every page of the app returns this response:
Client error response [status code] 401 [reason phrase] Unauthorized [url] https://lon.auth.api.rackspacecloud.com/v2.0/tokens
This is because Gaufrette OpenCloud tries to create the connection on kernel load.
The quickest temporary fix was to create a compiler pass and, if the authenticate method returns false, replace argument 0 of the replicate definition with the local filesystem adapter.
My questions are:
How can I avoid creating the Rackspace connection on kernel load?
In case Rackspace is down, how can I swap between Rackspace and another adapter (local or another FTP server)?
Thank you in advance, and if the information provided is not sufficient, please leave a comment.
Apparently there is a solution for lazy loading implemented in Gaufrette: https://github.com/KnpLabs/KnpGaufretteBundle/issues/72
All I had to do was add:
sonata.media.adapter.open_stack:
    class: OpenCloud\Rackspace
    arguments: [ %rackspace.opencloud.host%, { username: %rackspace.opencloud.username%, apiKey: %rackspace.opencloud.api_key% } ]
sonata.media.adapter.object_store_factory:
    class: Gaufrette\Adapter\OpenStackCloudFiles\ObjectStoreFactory
    arguments: [ "@sonata.media.adapter.open_stack", "LON", "" ]
sonata.media.adapter.filesystem.lazyopencloud:
    class: Gaufrette\Adapter\LazyOpenCloud
    arguments: [ "@sonata.media.adapter.object_store_factory", %rackspace.opencloud.container_name% ]
And change replicate master to sonata.media.adapter.filesystem.lazyopencloud
Hope it helps :)

Spring Profile values getting mixed

I'm using Spring Boot 1.4.3.RELEASE and Gradle 2.13 to develop an API. In my local environment the 'local' profile works perfectly, but I'm running into an issue when deploying the code to a Linux/Unix server.
Here is my application.yml with two profiles, dev and local:
---
spring:
    profiles:
        active: dev
server:
    context-path: /api/v1
    port: 8080
build:
    version: 1.0.0
cache:
    storage:
        path: '/home/user/content/storage/'
---
spring:
    profiles:
        active: local
server:
    context-path: /content-delivery/api/v1
    port: 8081
build:
    version: 1.0.0
cache:
    storage:
        path: '/Yogen/api/code/cachedData'
The command I'm using to run the war file is:
jdk1.8.0_112/bin/java -jar -Dspring.profiles.active=dev content-delivery.jar
When I ran the war with only one profile it worked fine, but once I added the second profile the values get mixed up, causing the error below:
01:56:16.557 INFO ContentDeliveryApplication - The following profiles are active: dev
............
02:00:48.182 INFO PropertyPlaceholderConfigurer - Loading properties file from file [/home/542596/content-api/resources/app-dev.properties]
02:00:51.939 INFO PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.ws.config.annotation.DelegatingWsConfiguration' of type [class org.springframework.ws.config.annotation.DelegatingWsConfiguration$$EnhancerBySpringCGLIB$$290c5e2d] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
02:00:55.736 INFO AnnotationActionEndpointMapping - Supporting [WS-Addressing August 2004, WS-Addressing 1.0]
02:01:25.559 INFO PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$a08e9c2b] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
02:01:56.522 INFO TomcatEmbeddedServletContainer - Tomcat initialized with port(s): 8081 (http)
02:01:57.669 INFO StandardService - Starting service Tomcat
02:01:57.877 INFO StandardEngine - Starting Servlet Engine: Apache Tomcat/8.5.6
02:02:11.228 INFO [/content-delivery/api/v1] - Initializing Spring embedded WebApplicationContext
02:02:11.228 INFO ContextLoader - Root WebApplicationContext: initialization completed in 353263 ms
02:02:19.412 INFO ServletRegistrationBean - Mapping servlet: 'dispatcherServlet' to [/]
02:02:19.517 INFO ServletRegistrationBean - Mapping servlet: 'messageDispatcherServlet' to [/services/*]
02:02:19.829 INFO FilterRegistrationBean - Mapping filter: 'metricsFilter' to: [/*]
02:02:19.829 INFO FilterRegistrationBean - Mapping filter: 'characterEncodingFilter' to: [/*]
02:02:19.829 INFO FilterRegistrationBean - Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
02:02:19.829 INFO FilterRegistrationBean - Mapping filter: 'httpPutFormContentFilter' to: [/*]
02:02:19.829 INFO FilterRegistrationBean - Mapping filter: 'requestContextFilter' to: [/*]
02:02:19.829 INFO FilterRegistrationBean - Mapping filter: 'webRequestLoggingFilter' to: [/*]
02:02:19.829 INFO FilterRegistrationBean - Mapping filter: 'applicationContextIdFilter' to: [/*]
02:02:46.283 ERROR EhcacheManager - Initialize failed.
02:02:46.284 WARN AnnotationConfigEmbeddedWebApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jsonObjectCacheManager': Invocation of init method failed; nested exception is org.ehcache.StateTransitionException: Directory couldn't be created: /Yogen/api/code/cachedData/cachedData
02:02:46.285 INFO StandardService - Stopping service Tomcat
02:02:47.528 INFO AutoConfigurationReportLoggingInitializer -
Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
02:02:48.103 ERROR SpringApplication - Application startup failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jsonObjectCacheManager': Invocation of init method failed; nested exception is org.ehcache.StateTransitionException: Directory couldn't be created: /Yogen/api/code/cachedData/cachedData
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:137) ~[spring-beans-4.3.5.RELEASE.jar!/:4.3.5.RELEASE]
The PropertyPlaceholderConfigurer has been configured as follows:
@Configuration
public class PropertyConfiguration {

    @Bean
    @Profile("dev")
    public static PropertyPlaceholderConfigurer developmentPropertyPlaceholderConfigurer() {
        PropertyPlaceholderConfigurer configurer = new PropertyPlaceholderConfigurer();
        configurer.setLocation(new FileSystemResource("/home/542596/content-api/resources/app-dev.properties"));
        configurer.setIgnoreUnresolvablePlaceholders(true);
        return configurer;
    }
}
The console shows that the correct properties file is being read and that the active profile is 'dev', but the port, context-path, and other values are being taken from the 'local' profile instead of the 'dev' profile.
What am I missing? Thanks.
Your profiles are getting overridden.
It is better to bootstrap your application using bootstrap.yml and have two application files, as follows:
application-dev.yml
application-local.yml
Set the profile in bootstrap.yml to decide which property file to load:
spring:
    profiles:
        active: dev # choose which application properties to load, dev or local
Like @Barath stated, you cannot separate profile properties within a single application.yml file. Create a yml file for each profile and name it as follows:
application-{profile}.yml
I would not use bootstrap.yml to specify which profile to load, as you'll want to externalize this if possible, so passing it to the JVM via -Dspring.profiles.active=dev is preferable.
There is also another option, similar to @Maxwell's answer, instead of passing the profile through a VM property: you can have the following files:
application.yml
application-dev.yml
application-local.yml
And define the profile in application.yml:
spring:
    profiles:
        active: dev
This ensures loading of application.yml as well as application-dev.yml.
