I want to re-trigger all the failed schedules using a Java JAR file on the CMS.
Just for testing, I wrote the program below, which I expected to re-trigger a certain schedule instance that had completed successfully.
Please help me find where I went wrong: the program reports success when I run it on the CMS, but the schedule doesn't get triggered.
public class Schedule_CRNA implements IProgramBase {
    public void run(IEnterpriseSession enterprisesession, IInfoStore infostore, String str[]) throws SDKException {
        //System.out.println("Connected to " + enterprisesession.getCMSName() + " CMS");
        //System.out.println("Using the credentials of " + enterprisesession.getUserInfo().getUserName());
        IInfoObjects oInfoObjects = infostore.query("SELECT * FROM CI_INFOOBJECTS WHERE SI_INSTANCE=1 AND SI_SCHEDULE_STATUS=1 AND SI_ID=9411899");
        for (int x = 0; x < oInfoObjects.size(); x++) {
            IInfoObject oI = (IInfoObject) oInfoObjects.get(x);
            IInfoObjects oScheds = infostore.query("SELECT * FROM CI_INFOOBJECTS, CI_APPOBJECTS WHERE SI_ID = " + oI.getID());
            IInfoObject oSched = (IInfoObject) oScheds.get(0);
            Integer iOwner = (Integer) oI.properties().getProperty("SI_OWNERID").getValue();
            oSched.getSchedulingInfo().setScheduleOnBehalfOf(iOwner);
            oSched.getSchedulingInfo().setRightNow(true);
            oSched.getSchedulingInfo().setType(CeScheduleType.ONCE);
            infostore.schedule(oScheds);
            oI.deleteNow();
        }
    }
}
It seems you missed putting your retrieved schedule object into a collection.
The last part of your snippet should be:
oSched.getSchedulingInfo().setScheduleOnBehalfOf(iOwner);
oSched.getSchedulingInfo().setRightNow(true);
oSched.getSchedulingInfo().setType(CeScheduleType.ONCE);
IInfoObjects objectsToSchedule = infostore.newInfoObjectCollection();
objectsToSchedule.add(oSched); // add the object you configured, not oI
infostore.schedule(objectsToSchedule);
oI.deleteNow();
You cannot schedule a report directly, only through a collection. Full sample is here.
Also, the way your code deletes the object from the repository and reschedules it again with each iteration seems odd, but that is up to your requirements.
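For reference, here is the whole run method with that fix folded in. This is a minimal, untested sketch: the imports assume the usual com.crystaldecisions.sdk packages, and the comment about SI_SCHEDULE_STATUS=3 for failed instances is based on the documented schedule status values, not on anything in your snippet.
import com.crystaldecisions.sdk.exception.SDKException;
import com.crystaldecisions.sdk.framework.IEnterpriseSession;
import com.crystaldecisions.sdk.occa.infostore.CeScheduleType;
import com.crystaldecisions.sdk.occa.infostore.IInfoObject;
import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
import com.crystaldecisions.sdk.occa.infostore.IInfoStore;
import com.crystaldecisions.sdk.plugin.desktop.program.IProgramBase;

public class Schedule_CRNA implements IProgramBase {
    public void run(IEnterpriseSession enterprisesession, IInfoStore infostore, String[] args)
            throws SDKException {
        // SI_SCHEDULE_STATUS=1 matches successful instances; for the real
        // "re-trigger all failed schedules" case, query SI_SCHEDULE_STATUS=3.
        IInfoObjects instances = infostore.query(
                "SELECT * FROM CI_INFOOBJECTS WHERE SI_INSTANCE=1"
                + " AND SI_SCHEDULE_STATUS=1 AND SI_ID=9411899");
        for (int x = 0; x < instances.size(); x++) {
            // No need to re-query by SI_ID: the object is already in hand.
            IInfoObject instance = (IInfoObject) instances.get(x);
            Integer owner = (Integer) instance.properties().getProperty("SI_OWNERID").getValue();
            instance.getSchedulingInfo().setScheduleOnBehalfOf(owner);
            instance.getSchedulingInfo().setRightNow(true);
            instance.getSchedulingInfo().setType(CeScheduleType.ONCE);
            // Scheduling goes through a collection, not the object itself.
            IInfoObjects toSchedule = infostore.newInfoObjectCollection();
            toSchedule.add(instance);
            infostore.schedule(toSchedule);
            // Optional, as in your original code: remove the old instance.
            instance.deleteNow();
        }
    }
}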
I'm getting the error code SQLITE_BUSY when trying to write to a table after selecting from it. The SELECT statement and its result set are properly closed prior to my INSERT.
If I remove the SELECT part, the INSERT works fine. And this is what I don't get: according to the documentation, SQLITE_BUSY should mean that a different process or connection is blocking the database, which is definitely not the case here.
There's no SQLite manager running. jdbcConn is also the only connection to the database I have, and there are no parallel threads running either.
Here's my code:
try {
    if (!jdbcConn.isClosed()) {
        ArrayList<String> variablesToAdd = new ArrayList<String>();
        String sql = "SELECT * FROM VARIABLES WHERE Name = ?";
        try (PreparedStatement stmt = jdbcConn.prepareStatement(sql)) {
            for (InVariable variable : this.variables.values()) {
                stmt.setString(1, variable.getName());
                try (ResultSet rs = stmt.executeQuery()) {
                    if (!rs.next()) {
                        variablesToAdd.add(variable.getName());
                    }
                }
            }
        }
        if (variablesToAdd.size() > 0) {
            String sqlInsert = "INSERT INTO VARIABLES(Name, Var_Value) VALUES(?, '')";
            try (PreparedStatement stmtInsert = jdbcConn.prepareStatement(sqlInsert)) {
                for (String name : variablesToAdd) {
                    stmtInsert.setString(1, name);
                    int affectedRows = stmtInsert.executeUpdate();
                    if (affectedRows == 0) {
                        LogManager.getLogger().error("Error while trying to add missing database variable '" + name + "'.");
                    }
                }
            }
            jdbcConn.commit();
        }
    }
}
catch (Exception e) {
    LogManager.getLogger().error("Error creating potentially missing database variables.", e);
}
This crashes on int affectedRows = stmtInsert.executeUpdate();. Now, if I remove the first block (and manually add a value to the variablesToAdd list), the value inserts fine into the database.
Am I missing something? Am I not closing the ResultSet and PreparedStatement properly? Maybe I've just gone blind to my mistake from looking at it for too long.
Edit: Executing the SELECT in a separate thread also does the trick, but that can't be the solution. Am I trying to insert into the database too fast after closing the previous statements?
Edit 2: I came across busy_timeout, which promises to make updates/queries wait for a specified amount of time before returning with SQLITE_BUSY. I tried setting the busy timeout like so:
if (jdbcConn.prepareStatement("PRAGMA busy_timeout = 30000").execute()) {
    jdbcConn.commit();
}
The executeUpdate() call still immediately returns with SQLITE_BUSY.
I'm dumb.
I was so thrown off by the fact that removing the SELECT statement worked (still not sure why that was, probably just timing) that I missed a different thread using the same file.
I made both threads use the same java.sql.Connection and everything works fine now.
Thank you for pushing me in the right direction, @GordThompson. I wasn't aware of the jdbc:sqlite::memory: option, which led to me finding the issue.
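For anyone who lands here later: the fix amounted to creating the Connection once and handing it to both threads, serializing access to it. Below is a minimal sketch of that idea, assuming the Xerial sqlite-jdbc driver; the class and method names are just illustrative.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative wrapper: one shared Connection for the whole process, so two
// threads never hold separate write handles on the same SQLite file.
public class SharedSqliteConnection {
    private final Connection conn;

    public SharedSqliteConnection(String dbPath) throws SQLException {
        // e.g. "jdbc:sqlite:C:/path/to/db.sqlite"; "jdbc:sqlite::memory:" is
        // handy for ruling out file-level locking while testing.
        conn = DriverManager.getConnection("jdbc:sqlite:" + dbPath);
    }

    // Every writer funnels through here, serialized on the connection object.
    public int insertVariable(String name) throws SQLException {
        synchronized (conn) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO VARIABLES(Name, Var_Value) VALUES(?, '')")) {
                ps.setString(1, name);
                return ps.executeUpdate();
            }
        }
    }
}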
The data to be inserted has just two TEXT columns whose individual lengths don't even exceed 256 characters.
I initially used executeSimpleSQL, since I didn't need to get any results.
It worked smoothly for simultaneous inserts of up to 20K rows, i.e. no lag or freezing observed in the background.
However, with 0.1 million rows I could see horrible freezing during insertion.
So I tried these two approaches:
Insert in chunks of 500 records. This didn't work well, since even for 20K records it showed visible freezing. I didn't even try it with 0.1 million.
So I decided to go async and used executeAsync along with Bind etc. This also shows visible freezing for just 20K records. Here the whole array is inserted at once, not in chunks:
var dirs = Cc["@mozilla.org/file/directory_service;1"].getService(Ci.nsIProperties);
var dbFile = dirs.get("ProfD", Ci.nsIFile);
var dbService = Cc["@mozilla.org/storage/service;1"].getService(Ci.mozIStorageService);
dbFile.append('mydatabase.sqlite');
var connectDB = dbService.openDatabase(dbFile);
let insertStatement = connectDB.createStatement(
    'INSERT INTO my_table (my_col_a, my_col_b) VALUES (:myColumnA, :myColumnB)');
var arraybind = insertStatement.newBindingParamsArray();
for (let i = 0; i < my_data_array.length; i++) {
    let params = arraybind.newBindingParams();
    // Individual elements of the array are CSV
    my_data_arrayTC = my_data_array[i].split(',');
    params.bindByName("myColumnA", my_data_arrayTC[0]);
    params.bindByName("myColumnB", my_data_arrayTC[1]);
    arraybind.addParams(params);
}
insertStatement.bindParameters(arraybind);
insertStatement.executeAsync({
    handleResult: function(aResult) {
        console.log('Results are out');
    },
    handleError: function(aError) {
        console.log("Error: " + aError.message);
    },
    handleCompletion: function(aReason) {
        if (aReason != Components.interfaces.mozIStorageStatementCallback.REASON_FINISHED)
            console.log("Query canceled or aborted!");
        console.log('We are done inserting');
    }
});
connectDB.asyncClose(function() {
    console.log('[INFO][Write Database] Async - plus domain data');
});
Also, I seem to get the async callbacks only after a long time; usually executeSimpleSQL is way faster than this. If I use the SQLite Manager Tool extension to open the DB immediately, this is what I get (as expected):
SQLiteManager: Error in opening file mydatabase.sqlite - either the file is encrypted or corrupt
Exception Name: NS_ERROR_STORAGE_BUSY
Exception Message: Component returned failure code: 0x80630001 (NS_ERROR_STORAGE_BUSY) [mozIStorageService.openUnsharedDatabase]
My primary objective is to dump data as big as 0.1 million+ rows and then later perform reads when needed.
Currently:
I have an ASP.NET project which utilises several ASHX handlers.
It ran barely acceptably with 50,000 daily login API calls (and each login brings a varying number of other API calls),
but now it takes >1 min with 150,000 daily login API calls.
To troubleshoot the problem, here is what I have done on one of the APIs with a sproc call that only retrieves data:
1) I ran it in another brand-new instance on my local machine: it runs in less than 100 ms, while the current web instance takes >1 min.
2) I ran the sproc alone in SSMS: it takes <1 sec, while the current web instance takes >1 min.
3) I did local parameter assignment in that particular sproc to prevent parameter sniffing, then rebuilt and re-executed it: same result.
4) The part that serializes the JSON (Newtonsoft) uses a static instance and is also very fast, less than 1 sec.
I ran http://tools.pingdom.com/fpt/ and it is indeed very slow, but I am not sure why.
I tried to use DebugView (https://technet.microsoft.com/en-us/library/bb896647.aspx), since it can write plenty of log output without file locking, but I am not sure how to make it work for a web application.
Is there any way or tool to capture the independent begin_request and end_request times externally and accurately? (I tried to record the time span in Global.asax, but sometimes got null parameters.) It would be best if anyone has an idea how to pinpoint the part that causes this.
Sample timespan test:
private string single_detail(HttpContext context)
{
    PlayDetailApiResult o = new PlayDetailApiResult
    {
        ...
    };
    string json = generateJsonFromObj(o); // init default
    try
    {
        {
            long current = 0;
            DateTime dtStart = DateTime.Now;
            // call sp
            int ret = plmainsql.getPracticeQualificationTimeLeft(...);
            DateTime dtEnd = DateTime.Now;
            TimeSpan ts = dtEnd - dtStart;
            LoggingHelper.GetInstance().Debug("single_detail getPracticeQualificationTimeLeft's duration(ms): " + ts.TotalMilliseconds);
            if (ret != 0)
            {
                o.Code = ...;
            }
        }
        DateTime dtStart_1 = DateTime.Now;
        json = generateJsonFromObj(o);
        DateTime dtEnd_1 = DateTime.Now;
        TimeSpan ts_1 = dtEnd_1 - dtStart_1;
        LoggingHelper.GetInstance().Debug("single_detail generateJsonFromObj's duration(ms): " + ts_1.TotalMilliseconds);
    }
    catch (Exception ex)
    {
        LoggingHelper.GetInstance().Debug("single_detail : " + ex.Message + "|" + ex.StackTrace);
    }
    finally
    {
    }
    return json;
}
Most of the results look like this:
2015-03-04 18:45:29,263 [163] DEBUG LoggingHelper single_detail getPracticeQualificationTimeLeft's duration(ms): 5.8594
2015-03-04 18:45:29,264 [163] DEBUG LoggingHelper single_detail generateJsonFromObj's duration(ms): 0
Things I have noted:
1) The computer's processor and memory usage: memory is at 70%, which I think is still not at its peak.
2) Checking whether Gzip compression is enabled: my JSON is consumed on mobile (iOS/Android/WM); enabling it needs some code changes and testing to prevent any compress/decompress issues, and compression also increases CPU usage at the same time.
You can also try MiniProfiler. It is very easy to use and displays the execution time for every step or function you want to monitor.
Check out http://miniprofiler.com/
I'm trying to tune my query, but I have no idea what to change:
A screenshot of both tables: http://abload.de/image.php?img=1plkyg.jpg
The relation is: one UserPM (a private message) has one Sender (User, SenderID -> User.UserID) and one Recipient (User, RecipientID -> User.UserID), and one User has X UserPMs as Recipient and X UserPMs as Sender.
The initial load takes around 200 ms; it takes only the first 20 rows and displays them. After they are displayed, a JavaScript PageMethod calls the GetAllPMsAsReciepient method and loads the rest of the data.
This GetAllPMsAsReciepient method takes around 4.5 to 5.0 seconds per run on around 250 rows.
My code:
public static List<UserPM> GetAllPMsAsReciepient(Guid userID)
{
    using (RPGDataContext dc = new RPGDataContext())
    {
        DateTime dt = DateTime.Now;
        DataLoadOptions options = new DataLoadOptions();
        //options.LoadWith<UserPM>(a => a.User);
        options.LoadWith<UserPM>(a => a.User1);
        dc.LoadOptions = options;
        List<UserPM> pm = (
            from a in dc.UserPMs
            where a.RecieverID == userID
                && !a.IsDeletedRec
            orderby a.Timestamp descending
            select a
        ).ToList();
        TimeSpan ts = DateTime.Now - dt;
        System.Diagnostics.Debug.WriteLine(ts.Seconds + "." + ts.Milliseconds);
        return pm;
    }
}
I have no idea how to tune this query. I mean, 250 PMs are nothing at all; inboxes on other websites hold around 5,000 or so and don't need a single second to load...
I tried to set an index on Timestamp to reduce the ORDER BY time, but nothing has happened so far.
Any ideas here?
EDIT
I tried to reproduce it in LINQPad:
Without the DataLoadOptions, the query needs 300 ms in LINQPad; with the DataLoadOptions, around 1 second.
So, that means:
I could save around 60% of the time if I could avoid loading the User table within this query, but how?
And why does LINQPad need only 1 second on the same connection, from the same computer, where my code needs 4.5-5.0 seconds?
Here is the execution plan: http://abload.de/image.php?img=54rjwq.jpg
Here is the SQL LINQPad gives me:
SELECT [t0].[PMID], [t0].[Text], [t0].[RecieverID], [t0].[SenderID], [t0].[Title], [t0].[Timestamp], [t0].[IsDeletedRec], [t0].[IsRead], [t0].[IsDeletedSender], [t0].[IsAnswered], [t1].[UserID], [t1].[Username], [t1].[Password], [t1].[Email], [t1].[RegisterDate], [t1].[LastLogin], [t1].[RegisterIP], [t1].[RefreshPing], [t1].[Admin], [t1].[IsDeleted], [t1].[DeletedFrom], [t1].[IsBanned], [t1].[BannedReason], [t1].[BannedFrom], [t1].[BannedAt], [t1].[NowPlay], [t1].[AcceptAGB], [t1].[AcceptRules], [t1].[MainProfile], [t1].[SetShowHTMLEditorInRPGPosts], [t1].[Age], [t1].[SetIsAgePublic], [t1].[City], [t1].[SetIsCityShown], [t1].[Verified], [t1].[Design], [t1].[SetRPGCountPublic], [t1].[SetLastLoginPublic], [t1].[SetRegisterDatePublic], [t1].[SetGBActive], [t1].[Gender], [t1].[IsGenderVisible], [t1].[OnlinelistHidden], [t1].[Birthday], [t1].[SetIsMenuHideable], [t1].[SetColorButtons], [t1].[SetIsAboutMePublic], [t1].[Name], [t1].[SetIsNamePublic], [t1].[ContactAnimexx], [t1].[ContactRPGLand], [t1].[ContactSkype], [t1].[ContactICQ], [t1].[ContactDeviantArt], [t1].[ContactFacebook], [t1].[ContactTwitter], [t1].[ContactTumblr], [t1].[IsContactAnimexxPublic], [t1].[IsContactRPGLandPublic], [t1].[IsContactSkypePublic], [t1].[IsContactICQPublic], [t1].[IsContactDeviantArtPublic], [t1].[IsContactFacebookPublic], [t1].[IsContactTwitterPublic], [t1].[IsContactTumblrPublic], [t1].[IsAdult], [t1].[IsShoutboxVisible], [t1].[Notification], [t1].[ShowTutorial], [t1].[MainProfilePreview], [t1].[SetSound], [t1].[EmailNotification], [t1].[UsernameOld], [t1].[UsernameChangeDate]
FROM [UserPM] AS [t0]
INNER JOIN [User] AS [t1] ON [t1].[UserID] = [t0].[RecieverID]
WHERE ([t0].[RecieverID] = @p0) AND (NOT ([t0].[IsDeletedRec] = 1))
ORDER BY [t0].[Timestamp] DESC
If you want to get rid of the LoadWith, you can select your fields explicitly:
public static List<Tuple<UserPM, User>> GetAllPMsAsReciepient(Guid userID)
{
    using (var dataContext = new RPGDataContext())
    {
        return (
            from a in dataContext.UserPMs
            where a.RecieverID == userID
                && !a.IsDeletedRec
            orderby a.Timestamp descending
            select Tuple.Create(a, a.User1)
        ).ToList();
    }
}
I found a solution:
First, it seems that something is not okay with the DataLoadOptions; second, it's not clever to load a table with 30 columns when you only need one.
To solve this, I made a view which covers all the necessary fields and, of course, the join.
It reduces the time from 5.0 seconds to 230 ms!