Solr fields marked 'stored=false' in the schema (Solr 6.6) still appearing in the query response
Did you reindex the documents after changing the schema? Also, be aware that for SolrCloud the changed schema has to be uploaded to ZooKeeper: if you are using SolrCloud, upconfig your schema/configset into ZooKeeper, restart (or reload) Solr, and then try again.
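As a rough sketch, assuming a standard Solr 6.6 install with the configset under /path/to/configset/conf and ZooKeeper on localhost:2181 (the config name, collection name and paths below are placeholders):
# Upload the changed configset to ZooKeeper
bin/solr zk upconfig -n my_config -d /path/to/configset/conf -z localhost:2181
# Reload the collection so it picks up the new schema
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=my_collection"
# Then reindex the affected documents so the stored=false change actually takes effect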
I'm working through the App Engine + Go tutorial, which connects with Firebase: https://cloud.google.com/appengine/docs/standard/go/building-app/. The code is available at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/appengine/gophers/gophers-6 and, aside from my Firebase keys, is identical.
I have it working locally just fine under dev_appserver.py, and it queries the Vision API and adds labels. However, after I deploy to appengine I get an index error on datastore. If I go to the Firebase console, I see the collection (Post) and the field (Posted) which is a timestamp.
If I change this line: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/appengine/gophers/gophers-6/main.go#L193 to remove the Order("-Posted"), then everything works (it's important to note that any Order call causes the error), except that the test records I've posted come back in random order.
The error message when running in appengine is: "Getting posts: API error 4 (datastore_v3: NEED_INDEX): no matching index found."
I've attempted to create a composite index and to test locally with --require_indexes=true, but neither has helped me debug the issue.
Edit: I've moved this over to use Firebase's Datastore libraries directly, instead of the GCP updates. I never solved this particular issue, but was able to move forward with my app actually working :)
By default the local development server automatically creates the composite indexes needed for the actual queries invoked in your app. From Creating indexes using the development server:
The development web server (dev_appserver.py) automatically adds
items to this file when the application tries to execute a query that
needs an index that does not have an appropriate entry in the
configuration file.
In the development server, if you exercise every query that your app
will make, the development server will generate a complete list of
entries in the index.yaml file.
When the development web server adds a generated index definition to
index.yaml, it does so below the following line, inserting it if
necessary:
# AUTOGENERATED
The development web server considers all index definitions below this
line to be automatic, and it might update existing definitions below
this line as the application makes queries.
But you also need to deploy the generated index configurations to the Datastore and let the Datastore update its indexing information (i.e. wait for the indexes to reach the Serving state) so that the respective queries don't hit the NEED_INDEX error. From Updating indexes:
You upload your index.yaml configuration file to Cloud Datastore
with the gcloud command. If the index.yaml file defines any
indexes that don't exist in Cloud Datastore, those new indexes are
built.
It can take a while for Cloud Datastore to create all the indexes and
therefore, those indexes won't be immediately available to App Engine.
If your app is already configured to receive traffic, then exceptions
can occur for queries that require an index that is still in the
process of being built.
To avoid exceptions, you must allow time for all the indexes to build.
For more information and examples about creating indexes, see
Deploying a Go App.
To upload your index configuration to Cloud Datastore, run the
following command from the directory where your index.yaml is located:
gcloud datastore create-indexes index.yaml
For information, see the gcloud datastore reference.
You can use the GCP Console to check the status of your indexes.
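If you prefer the command line, a reasonably recent Cloud SDK can also list the indexes and their current state:
gcloud datastore indexes list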
I am using the REST API of the Firebase Realtime Database from an App Engine Standard project with Java. I am able to successfully put data under different locations; however, I don't know how to ensure atomic updates across different paths.
To put some data separately at a specific location I am doing:
requestFactory.buildPutRequest("dbUrl/path1/17/", new ByteArrayContent("application/json", json1.getBytes())).execute();
requestFactory.buildPutRequest("dbUrl/path2/1733455/", new ByteArrayContent("application/json", json2.getBytes())).execute();
Now, to ensure that when saving /path1/17/ a /path2/1733455/ is also saved, I've been looking into multi-path updates and batched updates (https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes, only available in Cloud Firestore?). However, I did not find whether this feature is available for the REST API of the Firebase Realtime Database as well, or only through the Firebase Admin SDK.
The example here shows how to do a multi path update at two locations under the "users" node.
curl -X PATCH -d '{
"alanisawesome/nickname": "Alan The Machine",
"gracehopper/nickname": "Amazing Grace"
}' \
'https://docs-examples.firebaseio.com/rest/saving-data/users.json'
But I don't have a common upper node for path1 and path2.
I tried setting the URL to the database URL without any nodes (https://db.firebaseio.com.json) and adding the nodes in the JSON object sent, but I get an error: nodename nor servname provided, or not known.
I think this would be possible with the Admin SDK, according to this blog post: https://firebase.googleblog.com/2015/09/introducing-multi-location-updates-and_86.html
Any ideas if these atomic writes can be achieved with the REST API?
Thank you!
If the updates are going to a single database, there is always a common path.
In your case you'll run the PATCH command against the root of the database:
curl -X PATCH -d '{
"path1/17": json1,
"path2/1733455": json2
}' 'https://yourdatabase.firebaseio.com/.json'
The key difference with your URL seems to be the / before .json. Without that you're trying to connect to a domain on the json TLD, which doesn't exist (yet) as far as I know.
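Translated to the Java HTTP client used in the question, the same multi-path write would look roughly like this (the database URL is a placeholder, json1/json2 are the JSON strings you already build, and the transport is assumed to support PATCH requests):
// One atomic PATCH against the database root; the keys are full paths from the root
String multiPathJson = "{"
    + "\"path1/17\": " + json1 + ","
    + "\"path2/1733455\": " + json2
    + "}";
requestFactory.buildPatchRequest(
        new GenericUrl("https://yourdatabase.firebaseio.com/.json"),
        new ByteArrayContent("application/json", multiPathJson.getBytes()))
    .execute();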
Note that the documentation link you provide for Batched Updates is for Cloud Firestore, which is a completely separate database from the Firebase Realtime Database.
For a given database/table, I want to get the list of groups that have been granted access to it in Sentry.
There does not appear to be a Sentry SHOW command for this purpose in the documentation.
This blog post suggests that you can instead query the Sentry database directly (assuming you are using the Sentry service, not policy files).
However, at present there is no command to show the group to role
mapping. The only way to do this is by connecting to the Sentry
database and deriving this information from the tables in the
database.
If you're using CDH you can determine which node in the cluster is
running the Sentry database using Cloudera Manager, navigating to
Clusters > Sentry, then clicking Sentry Server and then
Configuration. Here you will find the type of database being used
(e.g. MySQL, PostgreSQL, Oracle), the server the database is running
on, its port, the database name and user.
You will need the Sentry database password - the blog post gives a suggestion for retrieving it if you do not know it.
An example query for a PostgreSQL database is given:
SELECT "SENTRY_ROLE"."ROLE_NAME","SENTRY_GROUP"."GROUP_NAME"
FROM "SENTRY_ROLE_GROUP_MAP"
JOIN "SENTRY_ROLE" ON "SENTRY_ROLE"."ROLE_ID"="SENTRY_ROLE_GROUP_MAP"."ROLE_ID"
JOIN "SENTRY_GROUP" ON "SENTRY_GROUP"."GROUP_ID"="SENTRY_ROLE_GROUP_MAP"."GROUP_ID";
However, I have not tried this query myself.
This should work for MySQL:
SELECT R.ROLE_NAME, G.GROUP_NAME
FROM SENTRY_ROLE_GROUP_MAP RGM
JOIN SENTRY_ROLE R ON R.ROLE_ID=RGM.ROLE_ID
JOIN SENTRY_GROUP G ON G.GROUP_ID=RGM.GROUP_ID;
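If you also need to restrict this to a specific database/table, the privilege tables can be joined in as well. The table and column names below are assumptions based on the usual Sentry DB store schema and may differ between Sentry versions, so treat this as a sketch:
-- Groups granted access to a given database/table (schema names are assumptions)
SELECT DISTINCT G.GROUP_NAME, R.ROLE_NAME, P.DB_NAME, P.TABLE_NAME, P.ACTION
FROM SENTRY_DB_PRIVILEGE P
JOIN SENTRY_ROLE_DB_PRIVILEGE_MAP RPM ON RPM.DB_PRIVILEGE_ID = P.DB_PRIVILEGE_ID
JOIN SENTRY_ROLE R ON R.ROLE_ID = RPM.ROLE_ID
JOIN SENTRY_ROLE_GROUP_MAP RGM ON RGM.ROLE_ID = R.ROLE_ID
JOIN SENTRY_GROUP G ON G.GROUP_ID = RGM.GROUP_ID
WHERE P.DB_NAME = 'mydb' AND P.TABLE_NAME = 'mytable';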
We are looking for the simplest way to send Alfresco's audit log to Elasticsearch.
I think using the audit query that Alfresco supplies to retrieve the audit log would be the simplest way (since the audit log data is hard to read directly in the DB).
Since this query returns its results as JSON, I'd like to fetch the query output directly with Fluentd and send it to Elasticsearch.
I roughly understand that it would be output to Elasticsearch, but I wonder whether I can run the 'curl command' for the query directly from Fluentd.
Otherwise, if you have another simple idea for getting Alfresco's audit log, kindly let me know.
I am not sure whether I understood it fully, but based on your last statement I am giving this answer.
To retrieve audit entries from the Alfresco repository, you can directly use Alfresco's REST APIs, which allow you to access them.
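For example, the audit query web script can be called with curl; the audit application name (alfresco-access), host and credentials below are examples and depend on your audit configuration:
curl -u admin:admin "http://localhost:8080/alfresco/service/api/audit/query/alfresco-access?verbose=true&limit=100"
The response is JSON, so it should be fairly straightforward to pull it in with Fluentd and forward it to Elasticsearch.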
I am trying to get the ACLs attached to a document in the Alfresco repository. I believe ACLs are stored in the Solr index along with the content.
I did some research and found out that CMIS provides ACLService as below.
AclService aclService = session.getBinding().getAclService();
But on the Alfresco repository side of things, there is no obvious equivalent.
Does anybody have an idea how to get the ACL for a document?
Regards.
Permissions are stored in the DB but also indexed into SOLR to filter search results by permissions without DB access.
I guess you're looking for the PermissionService bean (interface org.alfresco.service.cmr.security.PermissionService):
Get all the AccessPermissions that are set for anyone for the given node:
public Set<AccessPermission> getAllSetPermissions(NodeRef nodeRef);
Get all the AccessPermissions that are granted/denied to the current authentication for the given node:
public Set<AccessPermission> getPermissions(NodeRef nodeRef);
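A minimal usage sketch, assuming the PermissionService has been obtained from the ServiceRegistry and nodeRef points at the document:
// Print every permission explicitly set on the node: authority, permission, allowed/denied
Set<AccessPermission> permissions = permissionService.getAllSetPermissions(nodeRef);
for (AccessPermission ap : permissions) {
    System.out.println(ap.getAuthority() + " -> " + ap.getPermission()
            + " (" + ap.getAccessStatus() + ")");
}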