I probably have multiple newbie questions, but I am unsure how to work with Telepat based on the documentation alone.
While creating an app, we are expected to give a key; however, the field name is keys. Is there a reason for this? I assume a key has to be unique, but the documentation does not mention whether that is the case, or what error to expect if the rule is violated.
Referring to http://docs.telepat.io/api.html#api-Admin-AdminCreateContext, Admin Create does not seem to require authentication, even when called from the API. It is also missing the response on success. Just a 200 may be sufficient, but...
There is no way to get App ID. What am I missing?
First of all, what version of Telepat are you using? Changes to the infrastructure happen often. The latest version is 0.2.5 (although I'd download from the develop branch, since improvements and bug fixes land on a day-by-day basis).
You can add multiple API keys for an application and distribute them in whatever way you want. At the moment, the system does not object if you add a key that already exists.
That may be because of an old Telepat build; I can't go into detail on this.
admin/app/create returns the application object, including its ID. /admin/apps also returns a list of all the applications you have.
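Since admin/app/create returns the application object, a small sketch of pulling the ID out of that response may help. Note that the exact response shape, including the "content" wrapper and the key names below, is an assumption for illustration; check what your Telepat version actually returns:

```python
import json

def extract_app_id(create_response_body):
    """Pull the application ID out of the object returned by admin/app/create.

    The response shape here is an assumption; adjust the key names to
    whatever your Telepat version actually returns.
    """
    app = json.loads(create_response_body)
    # Some responses wrap the payload in a "content" field.
    if "content" in app:
        app = app["content"]
    return app.get("id")

# Hypothetical response body, for illustration only:
sample = '{"content": {"id": "20", "name": "my-app", "keys": ["3406870085495689e34d878f09faf52c"]}}'
app_id = extract_app_id(sample)
```

Once you have the ID this way, you don't need a separate lookup; /admin/apps is only needed when you've lost track of existing applications.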
I am using solr-5.4.0 in my production environment (SolrCloud mode) and am trying to automate the reload/restart process of the Solr collections based on certain specific conditions.
I noticed that on Solr reload the thread count increases a lot, thereby resulting in increased latencies. So I read about the reload process and learned that while reloading:
1) Solr creates a new core internally and then assigns this core the same name as the old core. Is this correct?
2) If the above is true, does Solr actually create a new index internally on reload?
3) If so, then restart sounds much better than reload; or is there a better way to upload new configs to Solr?
4) Can you point me to any docs that can give me more details about this?
Any help would be appreciated. Thank you.
If I understand correctly, you want to restart/reload a collection in production (SolrCloud mode) and are asking for the best approach.
If possible, could you please provide more details about what requires you to reload/restart the collection in production?
I'm assuming the reason is either to refresh a shared resource (for example, to pick up updated synonyms, or an added or deleted stop word) or to update the Solr config set.
Here are a few points to consider:
If you want to update the shared resources –
Upload the resources through the Solr API, then reload the collection through the Collections API (https://lucene.apache.org/solr/guide/6_6/collections-api.html#CollectionsAPI-Input.1).
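The reload step is a single Collections API call. A small sketch of building that request (the base URL and collection name are placeholders for your own deployment):

```python
from urllib.parse import urlencode

def collection_reload_url(solr_base, collection):
    """Build the Collections API RELOAD request for a SolrCloud collection.

    `solr_base` is assumed to look like "http://localhost:8983/solr";
    adjust host and port to match your cluster.
    """
    params = urlencode({"action": "RELOAD", "name": collection, "wt": "json"})
    return f"{solr_base}/admin/collections?{params}"

url = collection_reload_url("http://localhost:8983/solr", "mycollection")
```

RELOAD applies to every replica of the collection, which is why it is preferred over restarting nodes just to pick up a changed resource.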
If you want to update the config set –
When running in SolrCloud mode, changes made to the schema on one node will propagate to all replicas in the collection. You can pass the updateTimeoutSecs parameter with your request to set the number of seconds to wait until all replicas confirm they applied the schema updates. (I got this information from solr-5.4.0, and it's similar to what we have in Solr 6.6 here: https://lucene.apache.org/solr/guide/6_6/schema-api.html)
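For example, a Schema API request that adds a field and waits for all replicas to confirm might be built like this (the field name and type below are only examples, and the host is a placeholder):

```python
import json

def add_field_request(collection, field_name, field_type, timeout_secs=30):
    """Build a Schema API add-field request with updateTimeoutSecs set,
    so the call waits until all replicas confirm the schema update.

    Host, field name, and field type are illustrative placeholders.
    """
    url = (f"http://localhost:8983/solr/{collection}/schema"
           f"?updateTimeoutSecs={timeout_secs}")
    body = json.dumps({"add-field": {"name": field_name,
                                     "type": field_type,
                                     "stored": True}})
    return url, body

url, body = add_field_request("mycollection", "price", "pfloat")
```

POST the body to the URL with Content-Type: application/json; if a replica doesn't confirm within the timeout, the request reports the failure instead of silently proceeding.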
1) Solr creates a new core internally and then assigns this core the same name as the old core. Is this correct?
I'm not sure about that. Can you please share a reference?
2) If the above is true, does Solr create a new index internally on reload?
I'm not sure about that. Can you please share a reference?
3) If so then restart sounds much better than reload, or is there any better way to upload new configs on solr?
I don't agree, because a reload is effectively part of a restart; as I understand it, a restart involves additional work related to caching and syncing.
4) Can you point me to any docs that can give me more details about this?
Here is a link to the reference guide: https://archive.apache.org/dist/lucene/solr/ref-guide/apache-solr-ref-guide-5.4.pdf
In one requirement, I need to query a document that has just been created. If I use Lucene search, indexing takes a few seconds, so the document may not appear in the search results.
The query would be executed from an Alfresco webscript or from a scheduler that runs every 5 seconds.
Right now I am doing it by using NodeService and finding the child by name, which is not an efficient way to do it. I am using the Java API.
Is there any other way to do it?
Thank you
You don't mention what version of Alfresco you are using, but it looks like you are using Solr.
If you just created the document, the recommendation is to keep the reference to it, so you don't have to search for it again.
However, sometimes it is not possible to have the document reference. For example, client1 is not aware that client2 just created a document. If you are using Alfresco version 4.2 or later, you can probably enable Transactional Metadata Queries (TMQ), which allows you to perform searches against the database, so there is no Solr latency. Please review the whole section, because you need to comply with four conditions to use TMQ:
Enable the TMQ patch, so the node properties tables get indexed in the database.
Enable searches using the database, whenever possible (TRANSACTION_IF_POSSIBLE).
Make sure that you use the correct query language (CMIS, AFTS, db-lucene, etc.)
Your query must be supported by TMQ.
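TMQ handles exact-match predicates, which is exactly the "find the document I just created by name" case. A sketch of composing such a CMIS query (the folder ID and name are placeholders, and you would run the resulting string through your usual search call with database consistency enabled):

```python
def name_query(folder_id, name):
    """Build a CMIS query for a document with an exact name inside a folder.

    Exact-match predicates like these are the kind TMQ can answer straight
    from the database, avoiding Solr indexing latency. The folder ID and
    name are placeholders. Quotes are escaped per CMIS-QL string rules.
    """
    escaped = name.replace("\\", "\\\\").replace("'", "\\'")
    return ("SELECT * FROM cmis:document "
            f"WHERE cmis:name = '{escaped}' AND IN_FOLDER('{folder_id}')")

query = name_query("workspace://SpacesStore/abc123", "report.pdf")
```

If the query falls outside what TMQ supports (for example, full-text predicates), Alfresco falls back to Solr and you are back to waiting for the index.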
What's the best way to change the Firebase data model while you have multiple versions of your iOS app in production?
Since there's no 'application server' layer in the middle, any change to the database model could break older versions of the app.
Performance-related example of the problem:
In version 1.0 I was naively keeping everything related to a post under '/posts/'. Now, in version 2.0, I want to take Firebase's recommendation and add a '/user-posts' path to quickly list all posts for a given user.
People using version 1.0 of the iOS app are not writing any data to '/user-posts', since that path didn't exist yet. People using version 2.0 therefore don't see any posts created by people using the old version of the app.
In theory I could create a server somewhere that listens for changes on '/posts/' and adds them to '/user-posts' as well. That seems hard to maintain over time, though, if you have a lot of different versions of your app.
New Feature Example of the problem:
Let's say in version 1.0 of your mobile app you write new blog posts to '/posts/'. Now in version 2.0 of your app you introduce a Teams feature, and all posts need to be in '/team/team-id/posts'.
People who haven't upgraded to version 2.0 will still be writing to '/posts'. Those posts won't be visible to people using version 2.0 who are reading from '/team/team-id/posts'.
I realize you could keep both paths simultaneously (and index /posts based on team ID), but over time this seems hard to maintain.
Traditional solutions:
If I were using something like Django or Express, I'd do a database migration and then update the server-side endpoints for creating blog posts.
With Firebase, those changes to the database come from the clients instead. I could in theory add an application-server tier to my architecture, but that doesn't seem to be recommended: https://firebase.googleblog.com/2013/03/where-does-firebase-fit-in-your-app.html
I would suggest you use Firebase Remote Config to show an alert via UIAlertController, or a different screen, if an update is available. You could force the user to update to the current version; then you won't have problems later, because no posts can be created with the old code.
To answer your question:
I would develop a separate app, add it to the same Firebase project, and let it convert all the old data to the new data model. You would do this once, after releasing the new version; the old user data gets converted to the new data model and everything works smoothly. You could also have a property like databaseVersion on every object.
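The core of such a one-time migration is a fan-out: deriving the '/user-posts' index from a snapshot of '/posts'. A minimal sketch of that transform over plain dictionaries (the "uid" field name is an assumption about the v1 data model; the actual read/write against Firebase would wrap this):

```python
def build_user_posts_index(posts):
    """One-time fan-out: derive the /user-posts index from a /posts snapshot.

    `posts` mirrors the data under /posts: {post_id: {"uid": ..., ...}}.
    The "uid" field name is an assumption about the v1 data model.
    """
    user_posts = {}
    for post_id, post in posts.items():
        uid = post.get("uid")
        if uid is None:
            continue  # skip legacy entries with no author recorded
        # Store post IDs as boolean flags, the usual Firebase index shape.
        user_posts.setdefault(uid, {})[post_id] = True
    return user_posts

snapshot = {"p1": {"uid": "u1"}, "p2": {"uid": "u1"}, "p3": {"uid": "u2"}}
index = build_user_posts_index(snapshot)
```

Because the transform is idempotent, you can safely re-run it if the migration app is interrupted partway through.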
To prevent future problems, you could keep a general property named app-version in your Firebase Realtime Database. Before every post, the app checks whether there is a newer version. If not, the user can add the post; if there is, you could show a message/alert via UIAlertController.
I'm working with Azure's offline-sync API.
(It's REALLY GREAT so far, but since it's still new-ish, it doesn't have comprehensive documentation, only tutorials. We need to craft dependable integration tests, and we're finding that tricky, because for that we need to rely on published behavior in official docs... or dig into the source, which is liable to change at any time.)
The samples do this:
var store = new MobileServiceSQLiteStore("localstore.db");
The comments mention "initializes local store".
I assume the local sync database is a "throw-away" asset, as it can be recreated at will.
Is the expected behavior that it will create the local SQLite file if it does not exist, or it will recreate the file each time the mobile app starts and that call is made?
The tutorials are augmented by the HOWTO documentation (available under Mobile > Develop, in the same area as the tutorials), the GitHub Wiki, and the github.io pages for the SDK.
The local store is created if it doesn't exist, and new fields are added to tables if they are needed. It's sometimes good to delete the database - for example, if you reduce the field count in your mobile app (the process only adds fields). If you do this, the database will be re-created when the app is next restarted.
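The behavior described above is additive-only schema evolution: new fields get added, removed fields stay. A language-neutral sketch of that merge rule, just to make the behavior concrete (this is an illustration of the described behavior, not the SDK's actual implementation):

```python
def merge_table_schema(existing_fields, app_fields):
    """Illustrate the additive behavior described above: fields new to the
    app's table definition are added to the local store's schema, but
    fields that disappeared from the app definition are never dropped.

    Both arguments map field names to column types, e.g. {"id": "TEXT"}.
    """
    merged = dict(existing_fields)
    for name, col_type in app_fields.items():
        # setdefault only adds missing fields; existing ones are untouched.
        merged.setdefault(name, col_type)
    return merged

merged = merge_table_schema({"id": "TEXT", "legacy": "TEXT"},
                            {"id": "TEXT", "rating": "REAL"})
```

This is why deleting the local database is the only way to shed obsolete columns: the merge never removes anything, so the file keeps growing until it's recreated.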
I'm using Alfresco through CMIS.
In one of our environments, we have an issue.
We want to create a folder and put some docs in it.
This works fine in all our environments except one.
In this one, we can create the folder.
But when we do a search to find the folder, the folder isn't found.
After that, I can find it with the Share GUI.
I have no error message in the Share app.
Does anyone have an idea of what the issue could be?
Promoting a comment to an answer...
When using Alfresco with SOLR, you need to be aware that the SOLR index isn't quite real-time. Close to real time, sure, but it's asynchronous, so there's always a lag. (It's an eventually consistent index, not a fully real-time one.)
There's a lot of information on the Alfresco and SOLR Wiki, including the way you can query what the current lag is.
If the lag is very low (eg a lightly loaded system), you can find that SOLR will catch up almost instantly, and newly created items will show instantly in the search results. However, it's more normal to expect to have to wait a little bit, especially on more loaded systems.
If no new results are showing up even after several minutes, you'll want to follow the instructions on the wiki or the SOLR Monitoring and Troubleshooting docs to work out why and fix it.
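Given that lag, one pragmatic pattern when you must find a just-created item is to poll the search a few times before giving up. A minimal sketch (the search_fn callable is a placeholder for whatever wraps your actual search call):

```python
import time

def search_with_retry(search_fn, query, attempts=5, delay_secs=1.0):
    """Poll an eventually consistent index until results appear.

    `search_fn` is any callable taking a query and returning a list of
    hits; here it stands in for your SOLR-backed search call. Returns
    the first non-empty result, or [] after all attempts.
    """
    for attempt in range(attempts):
        hits = search_fn(query)
        if hits:
            return hits
        if attempt < attempts - 1:
            time.sleep(delay_secs)  # give the async index time to catch up
    return []
```

On a lightly loaded system the first attempt usually succeeds; on a busy one the retries absorb the indexing lag instead of surfacing a spurious "not found".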