App does not have access to all the data in JSON Store - maximo-anywhere

I am facing an issue in the Work Execution application: the page size on the resource is defined as 1000, so only 1000 records are loaded into the app cache.
This is fine from a performance point of view, but when I search [using the search bar on top] the app should look through the whole data set, not just those 1000 records.
What am I missing?

I see from the email you sent me that the resource you're attempting to search is the Location resource, and that you're searching it within the Location lookup. The issue where only the first page of data was searched was fixed under APAR IV83948; the fix is available out of the box in 7.6.1 or through an iFix on 7.6.

Related

How can I retrieve a GitLab issue by its id?

I am trying to track a GitLab issue regardless of which project it resides in.
An issue is normally created in GitLab within the context of a project. If it is moved to another project, the original issue is closed and a new issue is created; the original issue tracks the new location using moved_to_id. The problem is I have no idea how to follow this moved_to_id value using the GitLab API v4. GitLab does not honour the typical REST-like behaviour where you can retrieve an entity by its ID.
For example, if I call https://gitlab.com/api/v4/issues/ I'll get a list of issues as objects. These objects have a set of fields: title, description, state, ..., id and iid. The iid is the user-friendly id of an issue within a project. But what is the id and how is it useful? I can't retrieve an issue using this id - at least not in the ways I expected...
Suppose an issue exists in https://gitlab.com/api/v4/issues/ with id == 29564819:
https://gitlab.com/api/v4/issues/29564819 returns a 404.
https://gitlab.com/api/v4/issues/29564819/ returns a 404.
https://gitlab.com/api/v4/issues/29564819?scope=all returns a 404.
https://gitlab.com/api/v4/issues/?id=29564819 returns all the issues (the parameter has no effect).
Can I retrieve an issue without a project? Do I have to resort to using labels?
If a user is not a member of a private project, a GET request on that project results in a 404 status code.
Only administrators can retrieve an issue by its id.
The preferred way to authenticate is with a personal access token.
GET /issues/:id
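For example, assuming an administrator's personal access token (the token value below is a placeholder), the request from the question would become:
GET https://gitlab.com/api/v4/issues/29564819
PRIVATE-TOKEN: <your_access_token>
Without admin rights the endpoint still returns a 404, which matches the behaviour you observed.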

How to programmatically get events from Sterling File Gateway?

We have Sterling File Gateway with its UI and everything, and we also have Control Center where we see the file transfers from SFG. I am trying to find out how I can subscribe to events from File Gateway [SFG] programmatically. The documentation is not clear on whether there is a way to do this.
The database tables FG_EVENT and FG_EVENTATTR contain the details about File Gateway events.
Example SQL query:
select * from fg_event t1, fg_eventattr t2
where t1.event_key = t2.event_key and event_code = 'FG_0422'
You can add different criteria to the query to filter on filename, type of delivery, date, etc., and run it from any SQL client that can reach the database.
Sterling Control Center can monitor the following events:
•Arrived File events - every Sterling File Gateway Arrived File status code is recorded as a successful (FG_0411 - Arrived File Routed) or a failed (FG_0455 - Arrived File Failed) file transfer
•Route events
•Delivery events
More information is available in the IBM Control Center documentation.
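Putting the two together, here is a sketch of a query that lists both successful and failed Arrived File transfers using the codes above (same tables and join as the earlier example; verify the column names against your schema):
select *
from fg_event t1, fg_eventattr t2
where t1.event_key = t2.event_key
and event_code in ('FG_0411', 'FG_0455')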
There is also another way to invoke business processes by certain events:
Edit the listenerStartup.properties and listenerStartup.properties.in files to include the line:
Listener.Class.xx=com.sterlingcommerce.server1.dmi.visibility.event.XpathBPLauncherEventListener
Where xx is the next available number according to how many listeners are already enabled in the file.
Edit the visibility.properties and visibility.properties.in files to add the necessary information to configure the listener to launch the proper business processes based on the correct events. The pattern for registering the events with the listener is:
bp_event_trigger.X=eventPreFilter,xPathExpression,bpname,userId
There is an example on this page:
https://www.ibm.com/support/knowledgecenter/SS3JSW_5.2.0/com.ibm.help.aft.doc/SI_AFT_InternalEvent.html
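For instance, a hypothetical entry following that pattern could launch a business process named MyNotifyBP as the admin user whenever an FG_0411 (Arrived File Routed) event fires; the pre-filter and XPath values here are illustrative only, so follow the linked example for the exact syntax:
bp_event_trigger.1=FG_0411,//event[code='FG_0411'],MyNotifyBP,admin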

Simulated CMIS Atom API doesn't load the information properly

I was asked to simulate a CMIS Atom API for my company's content management system using our API, but I'm stuck on what seems to be something simple. I'm trying to run the CMIS TCK, but for some reason the values from the responses don't make it into the next request, so I think I'm missing something.
The first request I get is to getRepositories
/cmisatom/getRepositories
Then I get the request to get a specific repository
/cmisatom/getRepositories?repositoryId=c9ad76c6-d121-4a32-bb14-e5d43bf91ee6
which tells me that the data from the first request was parsed properly.
The third request is where things get weird. I get the request for the id:
/cmisatom/c9ad76c6-d121-4a32-bb14-e5d43bf91ee6/id?id=&filter=&includeAllowableActions=&includeACL=&includePolicyIds=&includeRelationships=&renditionFilter=
but no information was loaded for the id, the filter, or anything else. I'm matching my responses against an Alfresco CMIS Atom server that I have running locally, and the responses are identical except for the jsession. Can you share any guidance on this?
The steps go like this:
The service document is the first thing to fetch - your example refers to it as "/cmisatom/getRepositories". It lists the data for all repositories, and it also includes the repository URL templates such as OBJECT_BY_ID, TYPE_BY_ID, etc. For navigation, listing folders, and so on, your link "/cmisatom/getRepositories?id=c9ad76c6-d121-4a32-bb14-e5d43bf91ee6" is not what gets used.
The third link you're referring to is the OBJECT_BY_ID URL template - the client has to substitute the object id and populate the other params itself before making the request.
The object id for the first such request is again a value you obtain from the service document; it is called the ROOT FOLDER ID.
Use the root folder id to fill in the OBJECT_BY_ID template and fetch the root folder details - from there you get the children and proceed further.
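For example, once the root folder id (call it <rootFolderId>; the value comes from your service document) is substituted into the template from your third request, the call would look something like this (the parameter values are illustrative defaults):
/cmisatom/c9ad76c6-d121-4a32-bb14-e5d43bf91ee6/id?id=<rootFolderId>&filter=&includeAllowableActions=false&includeACL=false&includePolicyIds=false&includeRelationships=none&renditionFilter=cmis:none
If the id parameter arrives empty, as in your trace, the client most likely could not find the URL templates or the root folder id in your service document.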
You can refer further to Apache Chemistry In Memory repository - https://chemistry.apache.org/java/developing/repositories/dev-repositories-inmemory.html - it is an open source implementation which can help you dig deep.
And this is the spec: http://docs.oasis-open.org/cmis/CMIS/v1.1/CMIS-v1.1.html

Is it OK to change from full recovery to simple recovery in SQL Server?

I have an old database - a users membership/role database that was set up automatically by an ASP.NET 2 application years ago.
The SQL Server version currently running is SQL Server 10.5.1617.
The users database log file is huge (the ldf file is approx 400 times the size of the mdf file).
The recovery model is currently set to "Full". I understand what that is - and I don't need point-in-time restoration.
If I simply changed the recovery model to "Simple" from within SQL Server Management Studio and clicked OK to save the change, would I be risking my current database in any way? Or is SQL Server fine with making changes like this to live databases? And would the log file automatically shrink itself?
Thanks for your advice,
Mark
You should be fine; the transactions have been committed. The log file is waiting to be backed up and thereby released. Changing to Simple Recovery means that you cannot take transaction log backups (and so lose point-in-time restores), but data will be committed to the db in the same way as before; log records are simply discarded once SQL Server has finished writing the transaction.
To answer both of your questions:
Changing the recovery model on a live database is safe. You shouldn't incur any downtime, blocking, etc.
The log file won't shrink itself. You may also find that, once you've set the recovery model to simple, the file isn't shrinkable right away. If you're unable to shrink it, take a look at DBCC LOGINFO, specifically the 'status' column. Each row in the output of that command represents one virtual log file (vlf). The shrink command can only clear a contiguous block of inactive (i.e. status = 0) vlfs at the end of the file. TL;DR - If you've got rows with status = 2 at the bottom, wait until you don't and then shrink.
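For reference, the whole change can be scripted. A minimal sketch, assuming the database is named aspnetdb and the log file's logical name is aspnetdb_log (both names are assumptions - check sys.database_files for the real ones):
USE [aspnetdb];
ALTER DATABASE [aspnetdb] SET RECOVERY SIMPLE;
-- inspect vlf status; rows with status = 2 are still active
DBCC LOGINFO;
-- shrink the log to roughly 100 MB once the trailing vlfs are inactive
DBCC SHRINKFILE (aspnetdb_log, 100);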

Timeout when uploading images

I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, and multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There don't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the event viewer I see the following information relating to the save action:
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager database is 25GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes but does not reckon the Transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".
After taking a look at this procedure it looks like the following statement could be the root cause
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID set to -1, and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the id will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED Primary Key. Maybe the solution would be to change this to a CLUSTERED Primary Key? However, I assume it is NON-CLUSTERED for a reason.
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me)
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a binaries table that big you will want to make sure you have a proper database setup, with data files separated from your log files (on different physical partitions/disks) and possibly even multiple data files on multiple physical partitions to take advantage of performance gains.
Next to that you will want to ensure that standard database maintenance is performed daily/hourly. Things like backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server a transaction log of more than 1GB slows the database down drastically; you should always try to keep it below that size through timely backup/truncate). Updating statistics and rebuilding indexes are also things you should not forget to do on a regular basis.
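As a sketch of that routine maintenance in T-SQL, assuming the Content Manager database is named Tridion_cm and D:\Backups exists (both are assumptions - adjust to your environment):
USE [Tridion_cm];
-- back up (and thereby truncate) the transaction log
BACKUP LOG [Tridion_cm] TO DISK = N'D:\Backups\Tridion_cm_log.trn';
-- refresh the statistics behind the badly chosen query plan on BINARIES
UPDATE STATISTICS BINARIES WITH FULLSCAN;
-- rebuild the indexes on BINARIES
ALTER INDEX ALL ON BINARIES REBUILD;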