I've been using FullCalendar for a while now; once it's finished I will definitely open-source it somewhere, because it's a shame there isn't an example implementation out there. Anyway, I have three questions before everything works properly:
I can successfully create a new item, but when I go to another week (in the week view) and back, the item appears twice. One instance is from the database, and the other one from FullCalendar, I guess. I call ".fullCalendar('renderEvent', ev, true);" after successfully storing an item. How can I prevent this?
Furthermore, I can't set the time format on the left-hand axis. It's still in am/pm, and whatever I try with timeFormat (currently "timeFormat: 'H:mm'"), it doesn't listen. I really can't easily read the am/pm format (just like about half of the world), so it's quite annoying...
And I was wondering: how do I turn the start and end dates of a newly created item into a timestamp? That string is the only thing that goes over the wire, and I'm parsing it by hand in PHP at the moment. I would love to use $.fullCalendar.parseDate() on the client side, but I couldn't get it working. This is probably easy, I know.
Thanks a lot!!!
Niels
For your first question: use renderEvent without the true ('stick') flag. When refetching happens, the rendered event will be wiped away, which is what you want.
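For example (a sketch against the FullCalendar 1.x API; '#calendar' and 'ev' stand in for your own selector and the event object you just saved):

$('#calendar').fullCalendar('renderEvent', ev); // no 'stick' flag

// or skip client-side rendering entirely and just refetch from the server:
$('#calendar').fullCalendar('refetchEvents');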
Second question: timeFormat should work, so could you file a bug report on the issue tracker (with a full demo)?
I'm not really sure I understand your third question.
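If you just mean turning the Date objects FullCalendar hands you into Unix timestamps for PHP, a sketch like this might be all you need ('save.php' and the parameter names are placeholders):

// JavaScript dates count milliseconds; PHP timestamps count seconds.
function toUnixTimestamp(date) {
    return Math.round(date.getTime() / 1000);
}

// e.g. with the 'start' and 'end' Date objects from the select callback:
$.post('save.php', {
    start: toUnixTimestamp(start),
    end: toUnixTimestamp(end)
});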
I'm working with UiPath to automate some processes in Microsoft Dynamics AX 2012. When I use UiPath to indicate a button to press or a field to type into, UiPath freezes and chews on it for 3-6 minutes before it completes. It works, but it takes a ridiculous amount of time to build a process, as this happens on every click. There is no problem when the process runs from Orchestrator - it is only during development, and only in AX. In all other programs it takes a split second.
Does anyone know what causes AX to be this slow and how to fix it?
I have attached a video here, where you can see the issue: Link to video showing performance issues
Thank you in advance
This probably happens because the UiPath activity tries to load the whole table (including what is not directly visible) into memory. To work around it, filter your table so that only a few rows are visible before you use Indicate on screen to select the specific element.
Note that a similar behavior might occur during run time if there is a lot of data to capture.
The UI framework needed to be switched: press F4 to switch to AA (Active Accessibility) before hovering over AX.
I am creating a bespoke SCORM course. All the data that I save and restore works fine. When I finish the course, I set cmi.completion_status to 'completed' and cmi.success_status to 'passed', close the course, and everything looks great in the LMS (cloud.scorm.com).
The problem starts when I try to reopen the course after completing it. For some reason the LMS resets all the values that were stored in the database, so it looks like the course was never launched before.
Any ideas why this is happening and how I can prevent it? When the course starts, I have to make sure we don't lose the learner's progress.
You need to set "cmi.exit" to "suspend" before terminating. That way the LMS knows you want to come back to the same data, rather than completing this attempt and having a new attempt replace it the next time the course starts.
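A minimal sketch of the exit sequence, assuming a SCORM 2004 runtime and that you've already located the API object (the usual window/parent search for API_1484_11 is omitted here):

var api = window.API_1484_11; // however your wrapper discovers the API

function exitCourse(passed) {
    if (passed) {
        api.SetValue("cmi.completion_status", "completed");
        api.SetValue("cmi.success_status", "passed"); // vocabulary is lowercase
    }
    // Suspend the attempt so stored cmi data survives a relaunch.
    api.SetValue("cmi.exit", "suspend");
    api.Commit("");
    api.Terminate("");
}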
I'm fairly new to XProc and XPath, but I've been asked to solve the following problem:
Step 2 receives data from step 1 via the secondary port. Step 2 contains a p:for-each, which saves a document into a folder for each element that passes through it. (Part A)
These documents (let's say the for-each produces 6 of them) end up in the same directory, are listed by p:directory-list, and are eventually collected into one single document containing the full path of every document the for-each created. (Part B)
So far, so good.
The problem is that Part A seems to be too slow: Part B already tries to read the data Part A stores while the directory is still empty. In other words, I have a performance/synchronization problem.
And now comes the question:
Is it possible to make the pipeline wait and continue as soon as a certain event occurs?
Here's what I'm imagining:
Part B waits as long as necessary, until the directory Part A stores its data in is no longer empty. I read something about dbxml:breakpoint, but unfortunately I couldn't find more than the name and a short description of what it seems to do:
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
It would be awesome if you know more about it and could give an example of how it's used. A workaround or another way to solve this problem would help just as well.
UPDATE:
After searching Google for half an eternity, I found SMIL, whose timesheets seem to do the trick. Does anyone have experience combining XML/XProc with SMIL?
Back towards the end of 2009 I proposed the concept of 'Orchestrating XProc with SMIL' in a blog post on the O'Reilly Network: http://broadcast.oreilly.com/2009/09/xproc-and-smil-orchestrating-p.html
However, I'm not sure that this (XProc + time) is the solution to your problem. It's not entirely clear to me from your description what's happening. Are you saying that you're trying to write something to disk and then read it back in a subsequent step? You need to keep material in the pipeline in order to ensure you can connect outputs to subsequent inputs.
I'm having a bit of a time trying to get Dojo grids (1.5) to play nice. Specifically, I've spent about two weeks of work trying to implement a grid that collapses our result-set data into rows that can be expanded. Data comes in as a full set in JSON format, using ItemFileReadStore as the store. Any subsequent sorts or pagings are handled by GETting new JSON from the application, passing new query parameters in the URL.
The nested data is only two layers deep: a top layer that is always displayed, and an array of child data with a structure identical to the top layer's. Each node has a unique ID and a cluster ID; on a parent node the two match.
I was initially very excited by TreeGrid, but I couldn't see how to format it to do what I needed - namely, eliminate the 'summary row' and one extra row full of null cells (???) that I just couldn't figure out how to remove unless I narrowed the query to a single cluster. I studied the test examples, built many test pages myself, and tried to understand the forest model, which as far as I could tell was unnecessary... I found very little documentation, and sources online hinted that TreeGrid might not be reliable...
So I decided to try implementing the expandable/collapsible rows in DataGrid.
I flattened the JSON data and added an attribute to mark top-level nodes ('alwaysShow' = true). I built my grid programmatically and applied grid.filter() to pull only those top-level nodes. I extended the ItemFileReadStore _FetchItems "filter" method to allow OR queries instead of AND, and also to allow keys to point to arrays. When a top-level node (a small +/- icon in the cell) is clicked, the parent node's cluster ID is added to the filter's allowed[] array and the filter is updated, allowing nodes with that cluster_id value to be displayed.
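Roughly, the toggle logic amounts to something like this (a simplified sketch, not my exact code; it assumes the customized store filter described above, and the names are illustrative):

// Query object understood by the customized ItemFileReadStore filter:
// 'alwaysShow' marks top-level nodes, 'allowed' lists expanded cluster IDs.
var query = { alwaysShow: true, allowed: [] };

function toggleCluster(clusterId) {
    var i = dojo.indexOf(query.allowed, clusterId);
    if (i === -1) {
        query.allowed.push(clusterId);  // expand: show this cluster's children
    } else {
        query.allowed.splice(i, 1);     // collapse: hide them again
    }
    grid.filter(query, true);           // re-apply the filter and re-render
}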
This worked fine on my small test set of five records (although I'd say a little sluggish...), but now I am pulling ~900 rows back from the application, and on expanding large clusters (~80 rows) I see a very long flash of blue and white during the filter updates. I've spent most of my day trying to step through it in Firebug to find where it happens, but the Dojo logic is spread out everywhere. It seems to happen before the call to defaultUpdate in _Grid.js.
It's so bad that I am considering trying again with TreeGrid. I'm also considering just doing this by hand... I'm kicking myself for spending so much time trying to get Dojo to work in the first place. I would also consider a commercial 'JSON to table with collapsible rows' library if anyone has recommendations...
Any suggestions or insights? Familiarity with the flashing problem, or ideas on how I could adapt TreeGrid to my needs? I'm aware this is a bit of a rant... Many thanks for any help.
-robbie
EDIT: I eventually gave up trying to get Dojo to do what I needed and coded it myself in less than a day. Not the best use of three weeks...
EDIT:
I just found a solution that works for me. I added the following CSS:
.dojoxGridSummaryRow {
    visibility: collapse;
}
Basically, the summary rows are probably still created, but they are neither visible nor taken into account in the table layout. That's good enough for me. Hope this solves your issue too.
This won't help, but just to let you know:
"- but I couldn't see how I could format it to do what I needed - namely eliminate the 'summary row'"
is the very exact same thing I'm trying to achieve, and I did not find a solution even though it looks like a very simple feature... I'll let you know if I find one...
I'm using a jQuery plugin (http://docs.jquery.com/Plugins/Autocomplete) to add auto-completion to a "city" text field. The component calls an ASP.NET page that simply loads an array of all possible city values (>8000) and iterates over it, returning the values that start with the text the user has entered so far.
The thing is, it's pretty slow in real use. It lags behind what the user types to the extent that, most of the time, the user probably won't even notice it's there.
So, my question is, how can I speed it up?
I had thought an array would be a better way to go than putting the data in a database and hitting that multiple times. Do others agree that hard-coding this information is the way to go, given that it's not at all volatile and speed of return is everything?
If so, what would you look at to improve performance? Should I cache the data at application start and access it from memory? Would I be better off with multiple arrays, each containing the values starting with a particular letter, so I can go straight to the relevant one and iterate over a much smaller array? Or am I missing a more obvious approach?
Thanks in advance.
The code you posted looks pretty normal and should return quickly. My guess is that you're using the plugin's default "delay" value, which is 400ms. Try something lower, like 100ms (or even less), and see if it feels better:
$('#textbox').autocomplete('your_url_here', {
    delay: 100
});
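You could also reduce how often the server is hit at all. Assuming this is the plugin from the docs you linked, it also has minChars and cacheLength options (worth verifying against your version):

$('#textbox').autocomplete('your_url_here', {
    delay: 100,
    minChars: 2,    // don't query until at least 2 characters are typed
    cacheLength: 10 // cache previous responses on the client
});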
Well, you probably don't want to hard-code any data if you can avoid it; having it in a database makes changes much easier to manage. And there's no reason you'd have to query the database on each request: you could cache the data from SQL in memory, in multiple lists, as you touched on yourself. That should be quite speedy, unless there's some additional network-latency problem. You might want to post some of your code so people can give more specific guidance.
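The multiple-lists idea would look something like this; sketched in JavaScript for brevity, since we haven't seen your server code yet, but the shape is the same in any language:

// Built once at startup: bucket the city names by lowercased first letter.
var buckets = {};
for (var i = 0; i < cities.length; i++) {      // 'cities' is your full list
    var key = cities[i].charAt(0).toLowerCase();
    (buckets[key] = buckets[key] || []).push(cities[i]);
}

// Per request: scan only the one relevant bucket.
function lookup(q) {
    q = q.toLowerCase();
    var bucket = buckets[q.charAt(0)] || [], matches = [];
    for (var j = 0; j < bucket.length; j++) {
        if (bucket[j].toLowerCase().indexOf(q) === 0) {
            matches.push(bucket[j]);
        }
    }
    return matches;
}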
Thanks for the reply. (Very basic) code is as follows:
Dim q As String = LCase(Server.UrlDecode(Request.QueryString("q")))
Dim arrCities() As String = {"Abano Terme", "Abbadia Cerreto", ... "Zungri"}
For Each s As String In arrCities
    If s.ToLower.StartsWith(q) Then
        Response.Write(s & vbCrLf)
    End If
Next
Line 2 is basically a huge array declared right in the code. It isn't pretty, but the data is non-volatile, so I thought I could get away with it, and I assumed it would be faster than pulling it from the database each time. But maybe SQL with some caching, if you can offer specific advice?