How to write an OSLC query where clause in Maximo Anywhere to evaluate somevalue < now()

I'm configuring Work Execution. The Work Order History query that is called when retrieving past work orders for assets or locations is open-ended. Consequently, several thousand rows are retrieved each time and the application times out. I can attach a where clause (see below) to limit it to records with actfinish after a specific date. However, what I want to do is something like this:
spi_wm:actfinish>now()-30
<!-- WorkOrder History Asset Resource -->
<resource id="workOrderHistoryAssetLoc" class="application.business.WorkOrderObject" defaultOrderBy="wonum asc" describedBy="http://jazz.net/ns/ism/work/smarter_physical_infrastructure#WorkOrder" name="workOrderHistoryAssetLoc" pageSize="50" providedBy="/oslc/sp/WorkManagement">
    <attributes id="workOrderHistoryAsset_attributes1">
        <attribute describedByProperty="dcterms:identifier" id="workOrderHistoryAsset_identifier_dctermsidentifier1" index="true" name="identifier"/>
        <attribute describedByProperty="oslc:shortTitle" id="workOrderHistoryAsset_wonum_oslcshortTitle1" index="true" name="wonum"/>
        <attribute describedByProperty="dcterms:title" id="workOrderHistoryAsset_description_dctermstitle1" index="true" method="descriptionChanged" name="description"/>
        <attribute describedByProperty="spi:status" id="workOrderHistoryAsset_status_spistatus" index="true" method="statusChanged" name="status"/>
        <localAttribute dataType="string" id="workOrderHistoryAsset_statusdesc_string" name="statusdesc"/>
    </attributes>
    <queryBases id="workOrderHistoryAsset_queryBasesh">
        <queryBase defaultForSearch="true" id="workOrderHistoryAsset_queryBase_searchAllWorkOrdersh" name="searchAllWorkOrdersAsset" queryUri="/oslc/os/oslcwodetail?savedQuery=getWithComplexQuery"/>
        <!-- TODO AWH 20170130 - add where clause to this query -->
    </queryBases>
    <whereClause clause="spi:status in ['COMP','CLOSE'] and spi_wm:actfinish>'2016-10-10T09:50:00-04:00'" id="workOrderHistoryAssetLoc_whereClause"/>
</resource>
I see elsewhere that there are formulas in the app.xml, but I don't know what operators or language are available to accomplish something like this. I was hoping the whereClause attribute could use a resolverClass and resolverFunction so that I could replace a named parameter with a value derived from a JavaScript function... no dice. Any help would be appreciated!

It looks like you are attempting to set the where clause in the app.xml. While I think this could work, it would probably be a million times easier to do the following:
1. Duplicate the resource, then comment out the original.
2. Create a saved query in Maximo with the where clause you need:
   a. spi:status in ['COMP','CLOSE'] and spi_wm:actfinish>'2016-10-10T09:50:00-04:00'
3. Name the saved query "ANYWHERE_WOHIST" or something like that.
4. Modify the duplicate resource to point to your new saved query:
<queryBase defaultForSearch="true" id="workOrderHistoryAsset_queryBase_searchAllWorkOrdersh" name="searchAllWorkOrdersAsset" queryUri="/oslc/os/oslcwodetail?savedQuery=ANYWHERE_WOHIST"/>
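To reproduce the rolling 30-day window from the original question, the saved query's where clause can use the database's own date arithmetic. A minimal sketch, hedged because the exact syntax depends on your backend database (Oracle and SQL Server variants shown):
-- Oracle:
status in ('COMP','CLOSE') and actfinish > (sysdate - 30)
-- SQL Server:
status in ('COMP','CLOSE') and actfinish > dateadd(day, -30, getdate())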
Also, this allows the query where clause to be managed in the backend, so when your users decide they want to see something else here, you can manage the query within Maximo. We're nearing the end of our project with Anywhere, so feel free to reach out if you'd like to swap war stories.

Related

Check if record already exists when doing a buffer-copy

I have a piece of code that does a BUFFER-COPY, but is there any way to check, before doing the buffer copy, whether the record already exists? I do not want to check 'unique keys' in my data dictionary.
This is the code I have at this moment:
CREATE QUERY hQuery.
hQuery:SET-BUFFERS(hBuffer).
hQuery:QUERY-PREPARE("FOR EACH " + hBuffer:NAME + " NO-LOCK").
hQuery:QUERY-OPEN().
hQuery:GET-FIRST().
DO WHILE NOT hQuery:QUERY-OFF-END:
    DO TRANSACTION ON ERROR UNDO:
        hDBBuffer:BUFFER-CREATE().
        hDBBuffer:BUFFER-COPY(hBuffer) NO-ERROR.
    END.
    hQuery:GET-NEXT().
END.
hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.
It is unclear what you are trying to accomplish and why you don't want to check unique keys "in my data dictionary" or even what you mean by that.
Your example code is very sketchy and incomplete, maybe someone else can figure out what you are trying to do and why, but I am at a loss to divine the purpose behind it. The use of handles and dynamic queries is especially puzzling. There doesn't seem to be a reason for that or any need to do that.
Nonetheless, if I were coding a routine to copy a buffer, couldn't look up unique indexes in the dictionary, and wanted to proactively avoid potential collisions, I might write something like this:
define temp-table oLine like orderLine.

for each orderLine no-lock:
    find oLine of orderLine no-error.
    if not available( oLine ) then create oLine.
    buffer-copy orderLine to oLine.
end.
(Using static coding to keep the example simple.)
(I wouldn't really use OF - it is on my personal forbidden list, I think it is terrible from a documentation and maintenance perspective.)
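For reference, the same find without OF spells the join out explicitly. A sketch, assuming orderNum and lineNum form orderLine's unique key (true of the Progress demo database, but verify against your own schema):
/* equivalent lookup without OF - join fields written out explicitly */
find oLine
     where oLine.orderNum = orderLine.orderNum
       and oLine.lineNum  = orderLine.lineNum
     no-error.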
I believe, as Tom mentioned in his reply, it'd be most appropriate to have another dynamic query directed at hDBBuffer, using the BUFFER-FIELDs and BUFFER-VALUEs from hBuffer, and to check NUM-RESULTS after you use QUERY-OPEN. Then delete the query to free memory.
But yes, you would be looking for the metadata unique keys to achieve that. I understand you don't want to do it, but it's REALLY the best way, can't stress it enough.
Now if you would really like to check for the existence of ALL the record data, look into the BUFFER-COMPARE method. You could create a second dynamic query, then cycle through its records, using BUFFER-COMPARE to match the entire record you're looking at against the one you're assessing whether to create, or listing the fields you wish to include or exclude. This approach performs far worse, though, so please keep that in mind.
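A minimal sketch of that existence check, assuming hBuffer and hDBBuffer are valid buffer handles and a hypothetical unique key field custKey (substitute your own key fields). PRESELECT builds the result list at open time, which makes NUM-RESULTS reliable immediately after QUERY-OPEN:
DEFINE VARIABLE hCheck AS HANDLE NO-UNDO.

CREATE QUERY hCheck.
hCheck:SET-BUFFERS(hDBBuffer).
/* custKey is a placeholder for your unique key field(s) */
hCheck:QUERY-PREPARE("PRESELECT EACH " + hDBBuffer:NAME + " NO-LOCK WHERE custKey = "
                     + QUOTER(hBuffer:BUFFER-FIELD("custKey"):BUFFER-VALUE)).
hCheck:QUERY-OPEN().

IF hCheck:NUM-RESULTS = 0 THEN
DO TRANSACTION ON ERROR UNDO, LEAVE:
    hDBBuffer:BUFFER-CREATE().
    hDBBuffer:BUFFER-COPY(hBuffer) NO-ERROR.
END.

hCheck:QUERY-CLOSE().
DELETE OBJECT hCheck.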

MarkLogic 8 query performance degraded after inserting a large number of XML files into my database

I inserted 200,000 XML documents (approximately 1 GB total) into my database through an MLCP command. Now I want to run the search query below against that database (which has the default index setup from the Admin API) to get all documents.
xquery version "1.0-ml";
import module namespace search = "http://marklogic.com/appservices/search"
    at "/MarkLogic/appservices/search/search.xqy";

let $options :=
    <options xmlns="http://marklogic.com/appservices/search">
        <search-option>unfiltered</search-option>
        <term>
            <term-option>case-insensitive</term-option>
        </term>
        <constraint name="Title">
            <range collation="http://marklogic.com/collation/" facet="true">
                <element ns="http://learning.com" name="title"/>
            </range>
        </constraint>
        <constraint name="Keywords">
            <range collation="http://marklogic.com/collation/" facet="true">
                <element ns="http://learning.com" name="subjectKeyword"/>
            </range>
        </constraint>
        <constraint name="Subjects">
            <range collation="http://marklogic.com/collation/" facet="true">
                <element ns="http://learning.com" name="subjectHeading"/>
            </range>
        </constraint>
        <return-results>true</return-results>
        <return-query>true</return-query>
    </options>
let $result := search:search("**", $options, 1, 20)
return $result
Range indexes:
<range-element-index>
    <scalar-type>string</scalar-type>
    <namespace-uri>http://learning.com</namespace-uri>
    <localname>title</localname>
    <collation>http://marklogic.com/collation/</collation>
    <range-value-positions>false</range-value-positions>
    <invalid-values>ignore</invalid-values>
</range-element-index>
<range-element-index>
    <scalar-type>string</scalar-type>
    <namespace-uri>http://learning.com</namespace-uri>
    <localname>subjectKeyword</localname>
    <collation>http://marklogic.com/collation/</collation>
    <range-value-positions>false</range-value-positions>
    <invalid-values>ignore</invalid-values>
</range-element-index>
<range-element-index>
    <scalar-type>string</scalar-type>
    <namespace-uri>http://learning.com</namespace-uri>
    <localname>subjectHeading</localname>
    <collation>http://marklogic.com/collation/</collation>
    <range-value-positions>false</range-value-positions>
    <invalid-values>ignore</invalid-values>
</range-element-index>
In each XML document, the subjectKeyword and title values look like this:
<lmm:subjectKeyword>anatomy, biology, illustration, cross, section, digestive, human, circulatory, body, small, neck, head, ear, torso, veins, teaching, model, deep, descending, heart, brain, muscles, lungs, diaphragm, c</lmm:subjectKeyword>
<lmm:title>CORTY_EQ07-014.eps</lmm:title>
But it takes a lot of time, and Query Console ends up saying "Too many elements to render" or "Parser Error: Cannot parse result. File Size too large".
I'd also add that if you want to fetch all documents (which I wouldn't recommend on a non-trivial database), doing it directly rather than as a wildcarded search is going to be more efficient: fn:doc() (or, as Geert suggests, paginated: fn:doc()[1 to 20]).
First of all, don't try to get all documents at once. It would mean MarkLogic has to go to disk for every document, process and serialize it, and, last but not least, the client side needs to receive and display it all too. The latter is probably the bottleneck here. This is typically why user applications show search results 10 or 20 at a time. In other words: use pagination.
I also recommend running unfiltered for better performance.
HTH!
Pagination is definitely key here, and I'm curious about your facets. From your example, I'm imagining "Title" is almost always unique across your 200k documents. And the lmm:subjectKeyword element seems like it needs a little post-processing to make it more useful as a facet - it's a string of comma-delimited values, which means subjectKeyword will almost always be unique too (I recommend putting each of these values into a separate element, that would be much more useful as a facet). And I'm guessing subjectHeading is mostly unique too.
Facets are generally useful when you have a bounded set of values - e.g. for laptops, bounded sets include manufacturer, monitor size, and buckets for price range. Once you get into hundreds of values, the utility of a facet decreases for a user - how many users really want to sort through hundreds or thousands of values to find what they want? And in your case, we're probably talking about tens of thousands of unique values, if not 200k unique values (particularly for "Title"). And - when you have that many unique values, facet resolution time is going to take longer.
So before exploring the facet resolution time - what problem are you trying to solve with these 3 facets?
Without knowing anything more, I'd post-process that subjectKeyword element into many elements, each with a single keyword in it, and then put a facet on that element. Ideally, you have dozens of keywords, maybe hundreds, and resolving that facet should be very fast.
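A minimal sketch of that post-processing, assuming the lmm prefix binds to http://learning.com (as the range indexes suggest) and hypothetical lmm:keywords/lmm:keyword wrapper elements; for 200k documents you would run this in batches rather than in one transaction:
xquery version "1.0-ml";
declare namespace lmm = "http://learning.com";

(: Split each comma-delimited subjectKeyword into one element per value :)
for $kw in fn:doc()//lmm:subjectKeyword
return xdmp:node-replace($kw,
    <lmm:keywords>{
        for $k in fn:tokenize(fn:string($kw), ",")
        return <lmm:keyword>{ fn:normalize-space($k) }</lmm:keyword>
    }</lmm:keywords>)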

element-attribute-range-query fetches results but element-attribute-value-query does not fetch any results

I want to fetch the documents that have a particular element attribute value.
I tried cts:element-attribute-value-query but didn't get any results. However, I am able to get the same element attribute value using cts:element-attribute-range-query.
Here is the sample snippet used:
declare namespace tit = "title";

(: The range query finds the document... :)
let $r-query := cts:element-attribute-range-query(xs:QName("tit:title"), xs:QName("name"), "=",
    "SampleTitle",
    ("collation=http://marklogic.com/collation/codepoint"))
(: ...but the equivalent value query returns nothing :)
let $v-query := cts:element-attribute-value-query(xs:QName("tit:title"), xs:QName("name"),
    "SampleTitle",
    ())
return cts:search(fn:doc(), $v-query)
The problem with the range query is that it needs a range index, and I have hundreds of databases across multiple hosts, so I would need to create range indexes on each one.
What could be the problem with the attribute value query?
I found the issue after a bit of research.
The result document is actually a French-language document. It has a structure like this sample:
<doc xml:lang="fr:CA" xmlns:tit="title">
<tit:title name="SampleTitle"/>
</doc>
cts:element-attribute-value-query is a language-dependent query. To get the French-language results, the language needs to be specified in the options, as follows:
cts:element-attribute-value-query(xs:QName("tit:title"),xs:QName("name"), "SampleTitle",("lang=fr"))
cts:element-attribute-range-query, however, doesn't require the language option.
Thanks for the effort.

Retrieve query for time range (FetchXML/QueryExpression)

(Sorry for bad English)
I have an application that uses the MS CRM 2011 web service to retrieve the latest changes to CRM entities. The application syncs those changes to a Windows Mobile device.
The sync operation runs periodically, every 20 minutes. In each sync operation I want to retrieve the changes that occurred since the previous update by checking each entity's 'modifiedon' field.
The problem is that CRM queries don't use the time portion of the DateTime object, so all changes from the start of the day of the passed DateTime parameter are returned.
I checked both FetchXML and QueryExpression; there is no difference.
Is there any way to create a query to run against the CRM web service that returns records modified from a specified date and time?
Sample (my FetchXML):
<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
    <entity name='new_brand'>
        <attribute name='new_brandname' />
        <attribute name='new_pdanumber' />
        <filter type='and'>
            <condition attribute='modifiedon' operator='on-or-after' value='2012/11/12 23:59'/>
        </filter>
    </entity>
</fetch>
In the code above I want all entities modified from 2012/11/12 23:00, but CRM returns all records modified from 2012/11/12 00:00.
I have the same problem now with an older Dynamics CRM 2011 organization. The on-or-after comparison doesn't compare the time, only the date. Try using greater-or-equal ('ge' in FetchXML) instead.
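For example, a sketch of the same filter using ge; the exact datetime value format accepted can vary, and the stored value is compared as UTC:
<filter type='and'>
    <condition attribute='modifiedon' operator='ge' value='2012-11-12 23:00:00'/>
</filter>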
Your code is looking for records changed on or after 23:59, so your documented results sound correct.
In any case, the time part is used, but I suspect you are seeing the result of user time versus universal time. If the user's time zone offset != 0, then midnight as selected by the user in the UI will be different from the value stored in the database, which is the UTC equivalent.
We've added a new field (called ModifiedOnTick) to the entity, and record in it the time difference in milliseconds from a fixed date (2011-01-01).
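A sketch of that tick computation; the epoch and the ModifiedOnTick field come from the answer above, while the surrounding plugin wiring is assumed:
// Hypothetical: milliseconds between the fixed epoch (2011-01-01, UTC)
// and the record's modification time, stored in ModifiedOnTick.
DateTime epoch = new DateTime(2011, 1, 1, 0, 0, 0, DateTimeKind.Utc);
DateTime modifiedOn = DateTime.UtcNow; // in a plugin, read the entity's modifiedon instead
long modifiedOnTick = (long)(modifiedOn - epoch).TotalMilliseconds;
// Sync queries can then filter numerically, e.g. new_modifiedontick ge <last sync tick>.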

Drupal 6 & Views 2 - DISTINCT field

I'm using the Feeds module to import lots of Feed Item nodes. Due to a malformed feed file, I'm getting lots of duplicates. I'm using a View to display these nodes, and need to be able to add a DISTINCT filter on the "Node: Post Date" field, so I only get 1 result for each post-date.
I will also look into tackling the problem at the source, so to speak (I don't want to have all those duplicates in the first place), but this is an interesting issue in itself - I can't find a way to add a DISTINCT filter on a field other than the Node ID (which has its own option in the View's Basic Settings box).
I found a great article on a good way to alter the SQL queries that are generated by Views before they get executed: http://echodittolabs.org/blog/2010/06/group-views. I used this to append a GROUP BY clause to the end of the query (in a really nice, clean, and versatile way).
As an aside, I also found a way to tackle the issue of importing lots of duplicate feed items, the details of which are here: http://drupal.org/node/661314#comment-3667228. It adopts quite an extreme approach (deleting all items before each update), but this is the only solution for some nasty malformed feeds.
I was holding out for some undiscovered feature of Views that let you do this, but I don't think there is one - maybe in the next version ;)
There are two options to solve this:
1. Apply this patch, OR
2. Implement hook_views_query_alter() and paste the following (a fuller sketch follows below):
$query->distinct = 1;
$query->no_distinct = 'views_groupby';
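A minimal sketch of that hook in a custom Drupal 6 module; the module name mymodule and the view name feed_items are hypothetical placeholders:
<?php
/**
 * Implements hook_views_query_alter() (Drupal 6 / Views 2).
 */
function mymodule_views_query_alter(&$view, &$query) {
  // Only alter the view that shows the imported feed items.
  if ($view->name == 'feed_items') {
    $query->distinct = 1;
    $query->no_distinct = 'views_groupby';
  }
}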
I guess you have two options: either put some logic in the view template file to skip the duplicate items, or implement hook_views_query_alter() to change the query used by the view, adding the DISTINCT clause.
We found this issue in a Drupal 6.x view - 7 of 150 items were duplicated once or twice, with no idea why, and the issue only appeared for anonymous users. Luckily, Views 6.x-2.16 provides a 'distinct' setting under the basic settings; I set it to Yes and got rid of the duplicates.
