I have heard a rumor that "solr-alfresco" is a technical query language and is not officially supported; only "fts-alfresco" and "cmis-alfresco" are supported and recommended. That is, "fts-alfresco" is the high-level language and, depending on the query and system configuration, it can use either "solr-alfresco" or "db-afts".
But our complex system relies on the "solr-alfresco" query language. Could you tell me whether it is here to stay?
The query languages in Alfresco behave very differently. "fts-alfresco", "cmis-alfresco" and "solr-cmis" hit the maximum limit of 1000 results (the limit can be overridden in the properties) and will not return more than that limit, even with pagination. "solr-alfresco" and "solr-fts-alfresco" allow us to fetch all documents using pagination.
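For what it's worth, paging itself is easy to script against the public Search REST API that ships with 5.2. Below is a minimal Python sketch; the endpoint URL, credentials and query are placeholder assumptions, and note that at the REST level the language is named "afts" rather than selected through the Java constants listed below.

import requests

# Placeholder endpoint and credentials for a local Alfresco 5.2 install.
BASE = "http://localhost:8080/alfresco/api/-default-/public/search/versions/1/search"
AUTH = ("admin", "admin")

def fetch_all(query, page_size=100):
    """Page through an AFTS query until no more entries come back."""
    skip = 0
    while True:
        body = {
            "query": {"language": "afts", "query": query},
            "paging": {"maxItems": page_size, "skipCount": skip},
        }
        entries = requests.post(BASE, json=body, auth=AUTH).json()["list"]["entries"]
        if not entries:
            break
        for e in entries:
            yield e["entry"]
        skip += page_size

# Illustrative query only.
for node in fetch_all("TYPE:'cm:content'"):
    print(node["name"])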
The following are the officially fully supported languages of the search service:
LANGUAGE_CMIS_ALFRESCO
LANGUAGE_CMIS_STRICT
LANGUAGE_FTS_ALFRESCO
LANGUAGE_LUCENE
LANGUAGE_SOLR_ALFRESCO
LANGUAGE_SOLR_CMIS
LANGUAGE_SOLR_FTS_ALFRESCO
LANGUAGE_XPATH
https://docs.alfresco.com/5.2/references/dev-services-search.html
Whenever I call a function that has (enforce-guard some-guard) from X-Wallet or Zelcore, it always fails with the error Keyset failure (keys-all).
I have no issues when doing this from Chainweaver.
How can I fix this?
This is an issue if you are also providing capabilities with your request.
To fix this, you will need to put enforce-guard within a capability too.
So you will need to do something like
(defcap VERIFY_GUARD (some-guard:guard)
(enforce-guard some-guard)
)
And wherever you would call enforce-guard, you will then need to do
(with-capability (VERIFY_GUARD some-guard)
; Guarded code here
)
Why does this happen?
Chainweaver allows you to select unrestricted signing keys, which provides a key/guard for enforce-guard to work with.
However, X-Wallet and Zelcore don't provide this if capabilities are present on the request (otherwise they do).
It is probably better practice to put enforce-guard inside capabilities anyway, and to use require-capability in places where you expect the guard to pass.
Below is the example. I understand the score is 0 for Translate, but is it not supposed to at least detect the language, especially when the Detect API works as expected for the same text?
Detect API
POST https://api.cognitive.microsofttranslator.com/detect?api-version=3.0
[{"Text":"ಬಾ ಇಲ್ಲಿಗೆ"}]
Response:
[{"language":"Knda","score":1.0,"isTranslationSupported":false,"isTransliterationSupported":true}]
Translate API
POST https://api.cognitive.microsofttranslator.com/translate?to=en&api-version=3.0
[
{"Text":"ಬಾ ಇಲ್ಲಿಗೆ"}
]
Response:
[{"detectedLanguage":{"language":"en","score":0.0},"translations":[{"text":"ಬಾ ಇಲ್ಲಿಗೆ","to":"en"}]}]
If the Detect API is able to return the language properly, Translate should at least return the detected language properly; as it stands, the result looks completely wrong.
Translate only detects among the languages it can translate, and falls back to English by default. Detect also recognizes languages and scripts that Translator doesn't translate.
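A practical workaround, then, is to call Detect first and only call Translate when the detected language is actually translatable. Here is a minimal Python sketch, assuming a Translator resource whose key and region are placeholders:

import requests, uuid

ENDPOINT = "https://api.cognitive.microsofttranslator.com"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-key>",        # placeholder
    "Ocp-Apim-Subscription-Region": "<your-region>",  # placeholder
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}

body = [{"Text": "ಬಾ ಇಲ್ಲಿಗೆ"}]

# Step 1: ask Detect, which also recognizes untranslatable languages/scripts.
detected = requests.post(ENDPOINT + "/detect?api-version=3.0",
                         headers=HEADERS, json=body).json()[0]

# Step 2: only translate when Detect says translation is supported,
# passing the detected code explicitly instead of relying on auto-detect.
if detected["isTranslationSupported"]:
    result = requests.post(
        ENDPOINT + "/translate?api-version=3.0&to=en&from=" + detected["language"],
        headers=HEADERS, json=body).json()
    print(result[0]["translations"][0]["text"])
else:
    print("Not translatable as detected:", detected["language"])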
I want to have the same WordPress users in two different databases
For example, if a user registers on SiteA, then he can log in to SiteB, and vice versa.
I also want to create the same cookie for both after login.
mywebsite.com/ (SiteA_DB)
mywebsite.com/blog/ (SiteB_DB)
I've never done this before and maybe WordPress has hooks to achieve this, but I prefer using MySQL for such a trick.
You could try ..
.. using 'federated storage' (https://stackoverflow.com/a/24532395/10362812). This is my favorite, because you don't even have to share a database or even the MySQL server. The downside is that it doesn't work with the DB cache and uses an additional connection.
.. creating a 'view' (https://stackoverflow.com/a/1890165/10362812). This should be possible when using the database name in the query itself, and it would be the simplest solution if it works. Downside: the two tables have to share the same MySQL server and have to be assigned to the same user, as far as I know.
-- **Back up your database before trying!** --
DROP TABLE `second_database`.`wp_users`;
DROP TABLE `second_database`.`wp_usermeta`;
CREATE VIEW `second_database`.`wp_users` AS SELECT * FROM `first_database`.`wp_users`;
CREATE VIEW `second_database`.`wp_usermeta` AS SELECT * FROM `first_database`.`wp_usermeta`;
This should work, according to: Creating view across different databases
.. creating a 'shadow copy' (https://stackoverflow.com/a/1890166/10362812). This works with caching and is a standalone table. Downsides: the same as for the second solution, plus a bit of setup, and I think it might be the worst option in terms of performance.
These were answers to this question: How do I create a table alias in MySQL
I merged them together for you and made them fit your use-case.
Please also note that solutions 1 and 2 will replace your current user tables on "second_database", because you write directly into "first_database" when querying the federated storage or the view. This can lead to problems with user-role plugins. If you use one of them, you should take care of syncing the plugin options too, in case it uses different tables or 'wp_options' values.
Let me know if this works; I have to do a similar task next week. While researching, I found the linked answers.
EDIT: I was missing the point of "cookie-sharing" in my answer. Your example shows a blog on the same domain, so you should be able to change the way WordPress sets its cookies to make them domain-wide. What I did once for two different domains was to hook into the backend (is_admin) and add a JavaScript snippet that made a POST request to SiteB, receiving a token which is stored but marked as 'invalid' on SiteB. This token was then passed back to my plugin on SiteA, which checked whether the user is logged in and (in my case) has admin rights (current_user_can()), and if so, it sent this token back to SiteB, which marked the token as valid for login. (Make sure only SiteA can tell SiteB to make this token valid!) Once a user shows up with this token in a cookie on SiteB, the user is logged in automatically in the background. I also made this bidirectional. I am sorry that I can't share the code; I don't have access to it anymore.
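To make the handshake above concrete, here is a hypothetical Python sketch of the token flow; plain dicts stand in for the two sites and every name is made up for illustration, since the real thing would be two WordPress plugins talking over HTTPS.

import secrets

siteb_tokens = {}  # token -> {"valid": bool, "user": str or None}

def siteb_issue_token():
    """Step 1: SiteB hands out a fresh token, stored as invalid."""
    token = secrets.token_urlsafe(32)
    siteb_tokens[token] = {"valid": False, "user": None}
    return token

def sitea_confirm(token, user, is_logged_in, is_admin):
    """Step 2: SiteA checks the user (logged in, current_user_can)
    and only then tells SiteB to trust the token."""
    if is_logged_in and is_admin:
        siteb_mark_valid(token, user)  # only SiteA may be allowed to call this!

def siteb_mark_valid(token, user):
    siteb_tokens[token].update(valid=True, user=user)

def siteb_login_from_cookie(token):
    """Step 3: a browser arrives at SiteB with the token in a cookie."""
    entry = siteb_tokens.get(token)
    return entry["user"] if entry and entry["valid"] else None

# Walk through the flow once.
t = siteb_issue_token()
sitea_confirm(t, "alice", is_logged_in=True, is_admin=True)
print(siteb_login_from_cookie(t))  # -> alice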
Greetings, Eric!
Ektron 801 SP1
I am using the following code to fetch some smart form content. Can I pre-sort (using OrderByField?) before I fetch 20 rows? I am sorting memberslist, but that is after the fact and kinda useless. What am I missing?
Criteria<ContentProperty> criteria1 = new Criteria<ContentProperty>();
criteria1.AddFilter(ContentProperty.XmlConfigurationId, CriteriaFilterOperator.EqualTo, MEMBERS_ID);
criteria1.PagingInfo = new PagingInfo(20);
List<ContentType<member>> memberslist = contentTypeManager.GetList(criteria1);
I have good news and bad news for you.
First, the good news. You can sort by Content Properties with the Criteria object before you pull the 20 items. You'll want to use the OrderByField and OrderByDirection properties of the criteria.
criteria.OrderByField = ContentProperty.DateCreated;
criteria.OrderByDirection = EkEnumeration.OrderByDirection.Descending;
The bad news comes when trying to order items based on fields within the Smart Form. You might be able to do so using the IndexSearch API, but since Ektron 8.0* still relies on Microsoft's Indexing Service, I'm not a fan of that approach and don't have any code to share. If you choose to go that route, the premise is to use search to return the content IDs in the correct order, then use the criteria, as you are, to get items with those IDs.
What you can do with just the API is use Microsoft LINQ to sort the data after it's loaded, but in order to get the right results in the right order you have to load all items first (and ideally cache them to minimize performance impact). I'm using one of my content types as an example, but you should get the idea.
var membersList = new List<SlideBannerType>(); // load all items here first, e.g. via contentTypeManager.GetList() with no paging
var sortedList = membersList.OrderBy(s => s.EnableAlternateText);
var firstpage = sortedList.Take(20);
var nextpage = sortedList.Skip(20).Take(20);
It's not ideal, but it does work very well for smaller data sets (in the hundreds, perhaps thousands, but not tens of thousands).
The second bit of good news, though, is that Ektron uses Microsoft Search Server for versions 8.5 and up. This has a much, much more robust API and performs fantastically (in terms of both speed and reliability). The premise stays the same as with IndexSearch: use Search to get the IDs in the right order, and then ContentManager (or ContentTypeManager) to get the items. I've used this approach several times, albeit not with Smart Forms specifically. Your best result would come from upgrading to 8.6 and Microsoft Search Server and using the two APIs together to get each page of data. In doing so, it would actually be almost trivial at that point to mix in advanced search and filter options as well with the new search APIs.
I started building an app that will automatically download my Delicious bookmarks and save them to a database, so that I can view them on my own website in my favoured format.
I am forced to use OAuth, as I use a Yahoo ID to log in to Delicious. The problem is I am stuck at the point where OAuth requires a user to manually go and authenticate.
Is there any code or are there guidelines available anywhere that I can follow? All I want is a way to automatically save my bookmarks to my database.
Any help is appreciated. I can work in Java, .NET and PHP. Thanks.
Delicious provides an API for this already:
https://api.del.icio.us/v1/posts/all?
Returns all posts. Please use sparingly. Call the update function to see if you need to fetch this at all.
Arguments
&tag={TAG}
(optional) Filter by this tag.
&start={#}
(optional) Start returning posts this many results into the set.
&results={#}
(optional) Return this many results.
&fromdt={CCYY-MM-DDThh:mm:ssZ}
(optional) Filter for posts on this date or later
&todt={CCYY-MM-DDThh:mm:ssZ}
(optional) Filter for posts on this date or earlier
&meta=yes
(optional) Include change detection signatures on each item in a 'meta' attribute. Clients wishing to maintain a synchronized local store of bookmarks should retain the value of this attribute - its value will change when any significant field of the bookmark changes.
Example
$ curl https://user:passwd@api.del.icio.us/v1/posts/all
<posts tag="" user="user">
<post href="http://www.weather.com/" description="weather.com"
hash="6cfedbe75f413c56b6ce79e6fa102aba" tag="weather reference"
time="2005-11-29T20:30:47Z" />
...
<post href="http://www.nytimes.com/"
description="The New York Times - Breaking News, World News & Multimedia"
extended="requires login" hash="ca1e6357399774951eed4628d69eb84b"
tag="news media" time="2005-11-29T20:30:05Z" />
</posts>
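If you'd rather script it than shell out to curl, here is a minimal Python sketch of the same call; the credentials are placeholders, and per the note below it only works for old-style (pre-Yahoo) Delicious accounts.

import requests
import xml.etree.ElementTree as ET

# Placeholder old-style Delicious credentials (HTTP Basic auth).
resp = requests.get("https://api.del.icio.us/v1/posts/all",
                    auth=("user", "passwd"))
resp.raise_for_status()

# Pull out the fields you would persist in your own database.
for post in ET.fromstring(resp.content).iter("post"):
    print(post.get("href"), post.get("description"), post.get("tag"))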
There are also public and private RSS feeds for bookmarks, so if you can read and parse XML you don't necessarily need to use the API.
Note however that if you registered with Delicious after December, and therefore use your Yahoo account, the above will not work and you'll need to use OAuth.
There are a number of full examples on the Delicious support site, see for example: http://support.delicious.com/forum/comments.php?DiscussionID=3698