Currently I am working on an app that uses SQLite. I have a scenario in which there are graphs for each day of the week. The user can check the graph of any day, and there may or may not be data available for that day.
The issue is that the first time everything goes well, but if the user taps again on a day that actually has data available, the app stops responding; it gets stuck. It doesn't crash or throw any exception.
private async void getGraphData(int p)
{
    var checkCount = p;
    Color currentAccentColorHex = (Color)Application.Current.Resources["PhoneAccentColor"];
    BabySleepChart.Foreground = new SolidColorBrush(ConvertStringToColor(currentAccentColorHex.ToString()));
    Conn = new SQLiteAsyncConnection(DB_NAME);
    ObservableCollection<Graph> BabydailySleep = new ObservableCollection<Graph>();
    var sundayData = await Conn.QueryAsync<BabySleep>("SELECT * FROM BabySleep WHERE today = ?", Convert.ToDateTime(DateTime.Now.AddDays(-p).Date.ToString("d")));
    var count = sundayData.Any() ? sundayData.Count : 0;
    if (count == 0)
    {
        MessageBox.Show("No data for " + DateTime.Now.AddDays(-p).ToString("dddddd"), "Data Not Found", MessageBoxButton.OK);
    }
    else
    {
        // graphs (amCharts) are displayed here
    }
}
In the else branch I display the graphs; I am using amCharts. (On the second call, control never gets past the query.)
Please help. How can I solve this issue?
Regards.
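One thing that might be worth trying, assuming the hang comes from opening a new SQLiteAsyncConnection on every tap and never closing it: reuse a single connection and ignore taps while a query is still in flight. This is only a sketch; the _conn field and _isLoading flag are hypothetical additions, not from the original code.
// Hypothetical sketch: one shared connection plus a re-entrancy guard.
private SQLiteAsyncConnection _conn;   // created once, not on every call
private bool _isLoading;               // hypothetical guard flag

private async void getGraphData(int p)
{
    if (_isLoading) return;            // ignore taps while a query is running
    _isLoading = true;
    try
    {
        _conn = _conn ?? new SQLiteAsyncConnection(DB_NAME);
        var queryDate = Convert.ToDateTime(DateTime.Now.AddDays(-p).Date.ToString("d"));
        var rows = await _conn.QueryAsync<BabySleep>(
            "SELECT * FROM BabySleep WHERE today = ?", queryDate);
        if (rows.Count == 0)
        {
            MessageBox.Show("No data for " + DateTime.Now.AddDays(-p).ToString("dddd"),
                            "Data Not Found", MessageBoxButton.OK);
        }
        else
        {
            // bind rows to the amCharts control here
        }
    }
    finally
    {
        _isLoading = false;            // always release the guard, even if the query throws
    }
}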
I want to re-trigger all the failed schedules using a Java JAR file on the CMS.
Just for testing, I wrote the program below, which I assumed would re-trigger a certain schedule that had completed successfully.
Please help me find where I went wrong: the program reports success when I run it on the CMS, but the schedule doesn't get triggered.
public class Schedule_CRNA implements IProgramBase {
    public void run(IEnterpriseSession enterprisesession, IInfoStore infostore, String str[]) throws SDKException {
        //System.out.println("Connected to " + enterprisesession.getCMSName() + "CMS");
        //System.out.println("Using the credentials of " + enterprisesession.getUserInfo().getUserName());
        IInfoObjects oInfoObjects = infostore.query("SELECT * from CI_INFOOBJECTS WHERE si_instance=1 and si_schedule_status=1 and SI_ID=9411899");
        for (int x = 0; x < oInfoObjects.size(); x++) {
            IInfoObject oI = (IInfoObject) oInfoObjects.get(x);
            IInfoObjects oScheds = infostore.query("select * from ci_infoobjects,ci_appobjects where si_id = " + oI.getID());
            IInfoObject oSched = (IInfoObject) oScheds.get(0);
            Integer iOwner = (Integer) oI.properties().getProperty("SI_OWNERID").getValue();
            oSched.getSchedulingInfo().setScheduleOnBehalfOf(iOwner);
            oSched.getSchedulingInfo().setRightNow(true);
            oSched.getSchedulingInfo().setType(CeScheduleType.ONCE);
            infostore.schedule(oScheds);
            oI.deleteNow();
        }
    }
}
It seems you missed putting your retrieved scheduled object into a collection.
The last part of your snippet should be:
oSched.getSchedulingInfo().setScheduleOnBehalfOf(iOwner);
oSched.getSchedulingInfo().setRightNow(true);
oSched.getSchedulingInfo().setType(CeScheduleType.ONCE);
IInfoObjects objectsToSchedule = infostore.newInfoObjectCollection();
objectsToSchedule.add(oI);
infostore.schedule(objectsToSchedule);
oI.deleteNow();
You cannot schedule a report directly, only through a collection. A full sample is here.
Also, your code deleting the object from the repository and then rescheduling it again on each iteration seems odd, but that is up to your requirements.
This works, but it takes 15 seconds for a spreadsheet with 60 items.
function addToModel(name, birth, age) {
  var newRecord = app.models.ImportData.newRecord();
  newRecord['PRESIDENT'] = name;
  newRecord['BIRTH_PLACE'] = birth;
  newRecord['AGE_ELECTED'] = age;
  app.saveRecords([newRecord]);
}

function getSpreadsheet() {
  var sh = SpreadsheetApp.openById("zzz");
  var ss = sh.getSheetByName("Sheet1");
  var data = ss.getDataRange().getValues();
  // THIS WAS WAY ONE, TAKES 15 SECONDS
  for (var i = 1; i < data.length; i++) {
    addToModel(data[i][1], data[i][2], data[i][3].toString());
  } // for loop
}
But I noticed that the method is saveRecords, not saveRecord, and as with anything in Google Apps Script, the fewer calls the better. So I tried the following, but it doesn't work:
// SAME SPREADSHEET INFO
var result = [];
for (var i = 0; i < data.length; i++) {
  var newRecord = app.models.ImportData.newRecord();
  newRecord['PRESIDENT'] = data[i][1];
  newRecord['BIRTH_PLACE'] = data[i][2];
  newRecord['AGE_ELECTED'] = data[i][3].toString();
  result.push(newRecord);
} // for loop
app.saveRecords([result]);
Expected result: new records in my table, created much faster than with the first version. Actual result: "Cannot read property 'key' from undefined", triggered from the last line (saveRecords). I tried both app.saveRecords(result) and app.saveRecords([result]); same problem both times.
Note: this example is from an App Maker university tutorial that no longer works because of the changes in App Maker v2.
I think it is model.newRecord() that takes time for each new item created, while the time spent in app.saveRecords() is negligible.
Could you please confirm whether you are an EU user? We EU users are facing the same issue (link) due to the server location. If that is the case, please help star that issue and add more information to help Google solve it. Thanks.
I was setting up a Storm cluster to calculate real-time trending and other statistics, but I am having problems introducing the "recovery" feature into this project, i.e. having the offset last read by the kafka-spout remembered (the source code for kafka-spout comes from https://github.com/apache/incubator-storm/tree/master/external/storm-kafka). I start my kafka-spout this way:
BrokerHosts zkHost = new ZkHosts("localhost:2181");
SpoutConfig kafkaConfig = new SpoutConfig(zkHost, "test", "", "test");
kafkaConfig.forceFromStart = false;
KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("test" + "spout", kafkaSpout, ESConfig.spoutParallelism);
The default settings should already do this, but it is not happening in my case. Every time I start the project, the PartitionManager looks for the stored offsets and finds nothing:
2014-06-25 11:57:08 INFO PartitionManager:73 - Read partition information from: /storm/partition_1 --> null
2014-06-25 11:57:08 INFO PartitionManager:86 - No partition information found, using configuration to determine offset
Then it starts reading from the latest possible offset, which is okay if my project never fails, but not exactly what I wanted.
I also looked a bit more into the PartitionManager class, which uses the ZkState class to write the offsets. Here are the relevant snippets:
PartitionManager
public void commit() {
    long lastCompletedOffset = lastCompletedOffset();
    if (_committedTo != lastCompletedOffset) {
        LOG.debug("Writing last completed offset (" + lastCompletedOffset + ") to ZK for " + _partition + " for topology: " + _topologyInstanceId);
        Map<Object, Object> data = (Map<Object, Object>) ImmutableMap.builder()
                .put("topology", ImmutableMap.of("id", _topologyInstanceId,
                        "name", _stormConf.get(Config.TOPOLOGY_NAME)))
                .put("offset", lastCompletedOffset)
                .put("partition", _partition.partition)
                .put("broker", ImmutableMap.of("host", _partition.host.host,
                        "port", _partition.host.port))
                .put("topic", _spoutConfig.topic).build();
        _state.writeJSON(committedPath(), data);
        _committedTo = lastCompletedOffset;
        LOG.debug("Wrote last completed offset (" + lastCompletedOffset + ") to ZK for " + _partition + " for topology: " + _topologyInstanceId);
    } else {
        LOG.debug("No new offset for " + _partition + " for topology: " + _topologyInstanceId);
    }
}
ZkState
public void writeBytes(String path, byte[] bytes) {
    try {
        if (_curator.checkExists().forPath(path) == null) {
            _curator.create()
                    .creatingParentsIfNeeded()
                    .withMode(CreateMode.PERSISTENT)
                    .forPath(path, bytes);
        } else {
            _curator.setData().forPath(path, bytes);
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
I can see that for the first message the writeBytes method enters the if block and tries to create the path, and that for the second message it goes into the else block, which seems fine. But when I start the project again, the same message as above shows up and no partition information is found.
I had the same problem. It turned out I was running in local mode, which uses an in-memory ZooKeeper rather than the ZooKeeper that Kafka is using.
To make sure that KafkaSpout doesn't use Storm's ZooKeeper for the ZkState that stores the offset, you need to set SpoutConfig.zkServers, SpoutConfig.zkPort, and SpoutConfig.zkRoot in addition to the ZkHosts. For example:
import org.apache.zookeeper.client.ConnectStringParser;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;
import storm.kafka.KeyValueSchemeAsMultiScheme;
...
final ConnectStringParser connectStringParser = new ConnectStringParser(zkConnectStr);
final List<InetSocketAddress> serverInetAddresses = connectStringParser.getServerAddresses();
final List<String> serverAddresses = new ArrayList<>(serverInetAddresses.size());
final Integer zkPort = serverInetAddresses.get(0).getPort();
for (InetSocketAddress serverInetAddress : serverInetAddresses) {
serverAddresses.add(serverInetAddress.getHostName());
}
final ZkHosts zkHosts = new ZkHosts(zkConnectStr);
zkHosts.brokerZkPath = kafkaZnode + zkHosts.brokerZkPath;
final SpoutConfig spoutConfig = new SpoutConfig(zkHosts, inputTopic, kafkaZnode, kafkaConsumerGroup);
spoutConfig.scheme = new KeyValueSchemeAsMultiScheme(inputKafkaKeyValueScheme);
spoutConfig.zkServers = serverAddresses;
spoutConfig.zkPort = zkPort;
spoutConfig.zkRoot = kafkaZnode;
I think you are hitting this bug:
https://community.hortonworks.com/questions/66524/closedchannelexception-kafka-spout-cannot-read-kaf.html
The comment from the colleague above fixed my issue. I also added some newer libraries.
I'm using the fullcalendar resourceviews fork version 1.6.1.6...
I used an older version which had the resources on the top and the times on the left axis.
But now it is different: the times are on the top and the resources are on the left axis, which doesn't work as well for me. Is there a way to change it?
I need the newer version of it because of the refetchResources function.
I modified the resource object (using the Ike Lin fullCalendar fork) and added an array that holds the day number, start time and end time, e.g. 0 -> 09:00 -> 12:00, 1 -> 10:00 -> 15:30, ...
Then I changed fullcalendar.js:
function updateCells() {
    var i;
    var headCell;
    var bodyCell;
    var date;
    var d;
    var maxd;
    var today = clearTime(new Date());
    for (i = 0; i < colCnt; i++) {
        date = resourceDate(i);
        headCell = dayHeadCells.eq(i);
        if (resources[i].anwesenheit[date.getDay() - 1] != null) {
            var von = resources[i].anwesenheit[date.getDay() - 1].von;
            var _von = von.substring(0, 5);
            var bis = resources[i].anwesenheit[date.getDay() - 1].bis;
            var _bis = bis.substring(0, 5);
            headCell.html(resources[i].name + "<p style='font-weight: normal; font-size: 11px;'>" + _von + " - " + _bis + " Uhr</p>");
        } else {
            headCell.html(resources[i].name);
        }
        headCell.attr("id", resources[i].id);
        bodyCell = dayBodyCells.eq(i);
        if (+date == +today) {
            bodyCell.addClass(tm + '-state-highlight fc-today');
        } else {
            bodyCell.removeClass(tm + '-state-highlight fc-today');
        }
        setDayID(headCell.add(bodyCell), date);
    }
}
This shows the working hours of each resource right under the resource's name.
I also added a server-side check to the select callback that verifies whether the resource is available. If it is, the event is created; otherwise it isn't and I get an error message.
Now I can work with it. It's not exactly what I wanted, but it's usable: the times under each resource name are updated on every day change, so I have an overview of when a resource is available and when it is not.
I am trying to query free/busy data from Google Calendar. I simply provide a start date/time and an end date/time; all I want to know is whether this time frame is available or not. When I run the query below, I get the "responseOBJ" response object, which doesn't seem to include what I need: it only contains start and end times, with no flag such as "IsBusy" or "IsAvailable".
https://developers.google.com/google-apps/calendar/v3/reference/freebusy/query
#region Free_busy_request_NOT_WORKING
FreeBusyRequest requestobj = new FreeBusyRequest();
FreeBusyRequestItem c = new FreeBusyRequestItem();
c.Id = "calendarresource#domain.com";
requestobj.Items = new List<FreeBusyRequestItem>();
requestobj.Items.Add(c);
requestobj.TimeMin = DateTime.Now.AddDays(1);
requestobj.TimeMax = DateTime.Now.AddDays(2);
FreebusyResource.QueryRequest TestRequest = calendarService.Freebusy.Query(requestobj);
// var TestRequest = calendarService.Freebusy.
// FreeBusyResponse responseOBJ = TestRequest.Execute();
var responseOBJ = TestRequest.Execute();
#endregion
The Calendar API will only ever provide ordered busy blocks in the response, never available blocks; everything outside the busy blocks is available. Do you have at least one event on the calendar with the given ID in the time window?
Also, the account you are using needs at least free/busy access to the resource to be able to retrieve availability.
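For example, here is a rough sketch (not from either answer) of turning the busy blocks into free windows, assuming they come back ordered and non-overlapping as described above; the helper name GetFreeWindows and its parameters are made up for illustration:
// Sketch only. Requires: using System; using System.Collections.Generic; using Google.Apis.Calendar.v3.Data;
// Derives free windows as the complement of the busy blocks returned for one calendar id.
static List<Tuple<DateTime, DateTime>> GetFreeWindows(
    FreeBusyResponse response, string calendarId, DateTime windowStart, DateTime windowEnd)
{
    var free = new List<Tuple<DateTime, DateTime>>();
    var cursor = windowStart;
    foreach (var busy in response.Calendars[calendarId].Busy)
    {
        if (busy.Start.HasValue && busy.Start.Value > cursor)
            free.Add(Tuple.Create(cursor, busy.Start.Value)); // gap before this busy block
        if (busy.End.HasValue && busy.End.Value > cursor)
            cursor = busy.End.Value;                          // move past the busy block
    }
    if (cursor < windowEnd)
        free.Add(Tuple.Create(cursor, windowEnd));            // free tail of the window
    return free;
}
Called with the responseOBJ from the question, its c.Id, and the same TimeMin/TimeMax values, an empty busy list yields a single free window covering the whole range, which answers the "is this time frame available" question.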
I know this question is old; however, I think it would be beneficial to see an example. You need to actually grab the busy information from your response. Below is a snippet from my own code (minus the call) showing how to handle the response. You will need to use your c.Id as the key to search through the response:
FreebusyResource.QueryRequest testRequest = service.Freebusy.Query(busyRequest);
var responseObject = testRequest.Execute();

bool checkBusy;
bool containsKey;
if (responseObject.Calendars.ContainsKey("**INSERT YOUR KEY HERE**"))
{
    containsKey = true;
    if (containsKey)
    {
        // Had to deconstruct the API response by WriteLine(). Being busy returns a count of 1,
        // while being free returns a count of 0. These are properties of the dictionary and a
        // list in the responseObject (dictionary returned by the API POST).
        if (responseObject.Calendars["**YOUR KEY HERE**"].Busy.Count == 0)
        {
            checkBusy = false;
            //WriteLine(checkBusy);
        }
        else
        {
            checkBusy = true;
            //WriteLine(checkBusy);
        }
        if (checkBusy == true)
        {
            var busyStart = responseObject.Calendars["**YOUR KEY HERE**"].Busy[0].Start;
            var busyEnd = responseObject.Calendars["**YOUR KEY HERE**"].Busy[0].End;
            //WriteLine(busyStart);
            //WriteLine(busyEnd);
            //Read();
            string isBusyString = "Between " + busyStart + " and " + busyEnd + " your trainer is busy";
            richTextBox1.Text = isBusyString;
        }
        else
        {
            string isFreeString = "Between " + startDate + " and " + endDate + " your trainer is free";
            richTextBox1.Text += isFreeString;
        }
    }
    else
    {
        richTextBox1.Clear();
        MessageBox.Show("CalendarAPIv3 has failed, please contact support\nregarding missing <key>", "ERROR!");
    }
}
My suggestion would be to break your responses down by writing them to the console; then you can "deconstruct" them. That is how I figured out where to look within the response. As noted above, you will only receive information for the busy blocks. I used the date and time selected by my client's search to show the "free" times.
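For example, a quick way to dump the response for inspection might look like this (only a sketch, reusing the responseObject from the snippet above):
// Print every calendar key and its busy periods so you can see where the data you need lives.
foreach (var entry in responseObject.Calendars)
{
    Console.WriteLine("Calendar: " + entry.Key + " (busy blocks: " + entry.Value.Busy.Count + ")");
    foreach (var period in entry.Value.Busy)
    {
        Console.WriteLine("  busy from " + period.Start + " to " + period.End);
    }
}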
EDIT:
You'll need to check whether your key exists before attempting TryGetValue or searching with a KeyValuePair.
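A small sketch of that guard, assuming the same responseObject as above and using the calendar id from the question as the key:
// Sketch: guard the lookup so a missing calendar id doesn't surface as a confusing error later.
// Requires: using Google.Apis.Calendar.v3.Data;
FreeBusyCalendar calendarInfo;
if (responseObject.Calendars.TryGetValue("calendarresource#domain.com", out calendarInfo))
{
    bool isBusy = calendarInfo.Busy.Count > 0;
    Console.WriteLine(isBusy ? "Busy in the requested window" : "Free in the requested window");
}
else
{
    Console.WriteLine("No free/busy information was returned for that calendar id");
}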