I'm not totally clear on what flow check does vs. just flow. Running flow seems to start a server and check all the code. Subsequent runs of flow are faster because the server is already running. flow check also seems to do a full code check, but without the server.
Is there more to it than that?
Yep, you seem to get it!
flow server does a full check of your code from scratch. Once it is done, it watches for changes, and incrementally checks your code as it changes.
flow start basically runs flow server in the background.
flow check is basically the same thing as flow server, except as soon as it's done with the initial full check, it prints the errors and exits.
flow status talks to the running flow server and asks for the current errors. If no server is running, it calls flow start to start a new server.
flow (with no explicit command) is an alias for flow status
Another distinction is that flow check will ignore any currently running server.
I learned this after finding that running flow and flow check seemingly at random yielded two different results. The truth is, I had run flow, added flow-typed definitions, and then run flow check.
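For reference, a quick sketch of how those commands are typically used from the command line (flow stop is included on the assumption that the standard Flow CLI is in use):

# One-off full check; ignores any running server, prints the errors and exits.
flow check

# Start a background server (full check first, then incremental re-checks), then query it.
flow start
flow status    # `flow` with no command is an alias for this

# Shut the background server down when you're done.
flow stop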
I am trying to use Axon's TrackingEventProcessor to replay our events.
While resetting the token, I am getting an UnableToClaimTokenException, since the service runs in a distributed setup.
Is there any way to solve this issue without using Axon Server?
Axon Server has nothing to do with the occurrence of the UnableToClaimTokenException; Axon Server just greatly simplifies the process of initiating a replay.
As the exception and javadoc of the TrackingEventProcessor state, you will need to stop all instances of a given TrackingEventProcessor prior to initiating a replay.
Thus, in a distributed setup, you will have to stop each and every instance of the given TrackingEventProcessor before you can actually call resetTokens on one of them.
Without Axon Server, that means you will have to create your own endpoints or CLI within your application to stop the given processors. To simplify this, you would essentially want a centralized dashboard which shows all occurrences of a given TrackingEventProcessor.
That is exactly what Axon Server provides, which is why it simplifies the process tremendously.
Regardless, it's definitely doable to create this yourself.
Thus, when it comes to triggering a replay, you will first shut down each instance of the TEP prior to a reset.
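For illustration, here is a minimal sketch of what such an endpoint's handler could call on each node. It assumes Axon 4 (package names below) with an injectable EventProcessingConfiguration; the processor name "my-processor" and the class itself are just examples:

import org.axonframework.config.EventProcessingConfiguration;
import org.axonframework.eventhandling.TrackingEventProcessor;

// Sketch only: the processor name and the way the configuration is obtained are assumptions.
public class ReplayTrigger {

    private final EventProcessingConfiguration processingConfiguration;

    public ReplayTrigger(EventProcessingConfiguration processingConfiguration) {
        this.processingConfiguration = processingConfiguration;
    }

    /** Call this on EVERY instance first. */
    public void stopProcessor() {
        processingConfiguration.eventProcessor("my-processor", TrackingEventProcessor.class)
                               .ifPresent(TrackingEventProcessor::shutDown);
    }

    /** Call this on ONE instance, once all instances are confirmed stopped. */
    public void resetAndRestart() {
        processingConfiguration.eventProcessor("my-processor", TrackingEventProcessor.class)
                               .ifPresent(processor -> {
                                   processor.resetTokens();
                                   processor.start();
                               });
    }
}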
I'm running into serious productivity issues when debugging flows. I can only assume at this point that this is due to a lack of knowledge on my part, particularly of effective techniques for debugging flows.

The problems arise when I have one flow which needs to "wait" for the consumption of a specific state. What seems to happen is that the waiting flow starts and waits for the consumption of the specified state, but despite this being implemented as a listenable future with an associated callback (at this point I'm simply calling getOrThrow on the future returned from 'whenConsumed'), the flows just hang and I see hundreds of Artemis send/write messages in the console window.

If I stop the debug session, delete the node build directory, redeploy the nodes and start again, the flows restart and I can return to the point of failure. However, if I simply stop and detach the debugger from the node and attempt to run the calling test (which calls the flow via RPC), nothing seems to happen. It's almost as if the flow code (probably incorrect at this point) leaves the state machine/messaging layer stuck in some kind of stateful loop which is only resolved by wiping the node build directories and redeploying. Simply restarting the node results in the flow no longer executing at all.

This is a real productivity killer, so I'm writing this question in the hope, and on the assumption, that I've missed an obvious trick in how to effectively test/debug flows in a way that avoids repeatedly redeploying the nodes.
It would be great if someone could explain how to effectively debug flows, especially flows which depend on vault updates and thus wait on a vault update event. I have considered using a subflow, but I believe this would not quite provide the functionality required, namely to have a flow triggered when an identified state is consumed by a node. Or maybe it would? Perhaps this issue is due to not using a subFlow? I look forward to your thoughts!
Not sure about your specific use case. But in general,
I would do as much unit testing as possible before physically running the nodes, to see whether the flow works.
Corda provides three levels of unit testing: the transaction/ledger DSL, the mock network and the driver DSL. If these are used well, most if not all bugs in the flows should be resolved by the time you get to runnodes; running the actual nodes mostly just reveals configuration issues.
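As an illustration of the mock network level, here is a rough sketch of a flow test. It assumes the Corda 4 testing DSL (package names below); MyFlow and the CorDapp package "com.example.flows" are placeholders:

import net.corda.core.utilities.getOrThrow
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetworkParameters
import net.corda.testing.node.TestCordapp

fun mockNetworkSketch() {
    // Spin up an in-memory network with the CorDapp under test installed on every node.
    val network = MockNetwork(MockNetworkParameters(
        cordappsForAllNodes = listOf(TestCordapp.findCordapp("com.example.flows"))
    ))
    try {
        val nodeA = network.createPartyNode()
        val nodeB = network.createPartyNode()

        // Start the flow under test, then pump the in-memory message queues deterministically.
        val future = nodeA.startFlow(MyFlow(nodeB.info.legalIdentities.first()))
        network.runNetwork()
        val result = future.getOrThrow() // fails fast here instead of hanging like a real node
        println(result)
    } finally {
        network.stopNodes()
    }
}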
I have a web app. I am having a problem with how long Meteor.logout() and Meteor.call() take. When I call Meteor.logout(), it takes about 30-40 seconds, and the same goes for Meteor.call(). About 200-250 clients use this system at the same time.
If a client has about 100-200 items on their app screen, the delay is very long; with 10-20 items it is a little better. Each of these items receives new data every 5-10 seconds, at different times from one another; in other words, it is a live screen.
I don't get this problem when I run the same code against the same database on a different port that only I am using.
I can't figure it out. What could be the reason? I need your ideas and help.
The logout function waits for a callback from the server; there is something wrong with the way you have configured your server.
Run the same code on another machine, it should not happen.
You can use this.unblock() in every method and publication.
By default, Meteor processes requests from a client one by one; it queues incoming requests while one is being processed.
This may be because some functions that do heavier work take more time, so all other requests to the server have to wait until they finish.
Simply place this.unblock() at the start of every method and publication and it will stop them from blocking your other requests.
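A minimal sketch of what that looks like on the server (the Items collection and the method/publication names are just examples, and this.unblock() inside publications requires a reasonably recent Meteor release):

// Server-side sketch; collection, method and publication names are placeholders.
Meteor.methods({
  'items.update'(itemId, fields) {
    this.unblock(); // let this client's next request start without waiting for this one
    return Items.update(itemId, { $set: fields });
  }
});

Meteor.publish('items.forScreen', function (screenId) {
  this.unblock(); // same idea for publications
  return Items.find({ screenId });
});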
Thanks
I solved my problem.
While the collection update process runs on one side, the Meteor publish process runs on the other side, and as the number of clients increases, the server becomes unresponsive. I solved it by enabling the MongoDB oplog (oplog tailing) feature.
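For reference, oplog tailing is usually enabled by running MongoDB as a replica set and pointing Meteor at its local database via an environment variable; the host, port and database names below are placeholders:

# Placeholders for host, port and app database name.
export MONGO_URL="mongodb://db-host:27017/myapp"
export MONGO_OPLOG_URL="mongodb://db-host:27017/local"
meteor run   # or however the app is normally started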
Thank you for your interest.
There could be multiple reasons.
There could be unsubscriptions from collections, which mean the client and server exchange the list of ids that are being unsubscribed.
You may have a reactive UI which suddenly gets overwhelmed by the amount of data being transferred and has to update itself (for example, the Angular digest cycle always runs after a Meteor sub/unsub).
The Chrome inspector's Network tab (WebSocket frames) is your best tool to understand how soon the Meteor logout fires and whether any messages are passed back and forth before the server returns the result of the logout request.
You may also use this.unblock() in your publications. This way your subscriptions run in parallel and don't block each other.
So I'm pretty new to using the Coldfusion solr search (just moved from a CF8 Mac OS X server to a Linux CF9 server), and I'm wondering what the best way to handle automatically updating the collections is. I know scheduled tasks are meant for this but I haven't been able to find any examples online.
I currently have a scheduled task set up to update all of the collections weekly by getting the list of collections and using the cfindex tag in a loop to run the refresh command. This is pretty processing-intensive, though, and takes about ten minutes to update the four collections I have set up so far. It works when I run it in the browser, but I get the error "The request has exceeded the allowable time limit Tag: CFLOOP" when I run the task from the scheduled task administration page.
Is there a better way to handle updating the collections? Would it be better if I made a task to update each collection individually?
Here's my update code.
<cfsetting requesttimeout="1800">
<cfcollection action="list" name="collections" engine="solr">
<cfloop query="collections">
<cfindex collection="#name#" action="refresh" extensions=".pdf, .html, .htm, .cfml, .cfm" type="path" key="/home/#name#/public_html/" recurse="yes">
</cfloop>
In earlier versions of ColdFusion there was a URL parameter that could be passed on any HTTP request to change the server's timeout for the requested page. You might have guessed from the scheduled task configuration that there's an HTTP request running your task, so it functions just like any other page. In those earlier versions you would have just added &requesttimeout=900 to the URL and that gave the server 15 minutes to process that task.
In later versions they realized that this URL parameter was a security risk but they needed a way to allow developers to declare that an individual HTTP request should still be allowed to take longer than the default page timeout set in the ColdFusion Administrator. So they moved it from the URL parameter to the <cfsetting> tag.
<cfsetting requesttimeout="900" />
You need to put the cfsetting tag at the top of the page, rather than putting it inside your loop, because it's resetting the total allowable time from the beginning of the request, not just since the last cfsetting tag. Ben Nadel wrote a blog article about that here: http://www.bennadel.com/blog/626-CFSetting-RequestTimeout-Updates-Timeouts-It-Does-Not-Set-Them.htm
I'm not sure if there's an upper limit to the request timeout. I do know that in the past, when I've had a really long-running task like that, the server has gradually slowed down, in some cases until it crashed. I'm not sure I would expect reindexing Solr collections to degrade performance that badly; I think my tasks were doing some other things that were hogging memory at the time. In any case, if you run into that issue, you may need to divide the work into separate tasks, one per collection, and make sure there's enough time between the tasks for each one to complete before the next one starts.
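If you do split it up, here is a sketch of what a per-collection task template might look like, with the collection name passed on the task URL (the parameter name is just an example, and the paths are taken from your existing code):

<cfsetting requesttimeout="1800">
<cfparam name="url.collection" type="string">
<!--- Refresh a single collection per scheduled-task request --->
<cfindex collection="#url.collection#"
         action="refresh"
         extensions=".pdf, .html, .htm, .cfml, .cfm"
         type="path"
         key="/home/#url.collection#/public_html/"
         recurse="yes">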
EDIT: Oops! I don't know how I missed the cfsetting tag in the original question. D'oh! In any event, when you execute a scheduled task via the CF Administrator, it performs a cfhttp request to execute the task. This is the way scheduled tasks are normally executed, and I suspect it's so the task can execute inside your own application scope, but the effect is that there are two separate requests executing. I don't think there's a cfsetting tag in the CFIDE page, but I suspect a person could add one if they wanted to allow that page longer to wait for the task to complete.
EDIT: Okay, if you wanted to add the cfsetting in the CFIDE, you would first have to decrypt the template and then add your one line of code... which might void your warranty on the server, but is probably not dangerous. ;) For decrypting the template see: Can I get the source of a hacked Coldfusion template? - and the template to edit is /CFIDE/administrator/scheduler/scheduletasks.cfm.
Here's the scenario...
We have an internal website that is running the latest version of the ODAC (Oracle Client). It opens database connections, runs a stored procedure or packaged method, then disconnects. Connection pooling is turned on, and we are currently under version 11g in both our development and test environments, but under 10gR2 in our production environment. This happens on Production.
A few days ago, a process began firing off an ORA-2020 error. The process is called from a webpage on our internal website. The user simply sets a date, hits a button, and a job is started on another system that is separate from the website. The call itself, however, uses a database link to run a function.
We've scoured the SQL to find that it only uses that one database link. And since these links are on a per-session basis and the user isn't exceeding the default limit of 4, how is it possible that we are getting an ORA-2020 error?
We have run a number of tests to try to push past the default limit of 4. ODAC, from what I recall, runs a commit after each connection, and I can't seem to produce any errors by using 4 DB links and then running a piece of SQL with 1 DB link directly afterwards. The only way I can bring up this error is if I run a query with 4 DB links, then a function or piece of dynamic SQL with a database link within it. But that doesn't seem to be our problem, as this issue is sporadic; it isn't ALWAYS happening.
Questions
Is it possible that connection pooling is allowing User B to use User A's connection after the initial process was run, thus adding to the open links number if User B runs a SQL statement with more database links?
Is this a scenario where we should up our limit past 4? What are the disadvantages of increasing the number?
Do I need to explicitly close open database links before disconnecting from the database? Oracle documentation seems to suggest it should automatically happen, but "on occasion"... doesn't.
Firstly, the simple solution: I'd double check that in the production database the number of default links is actually 4.
select *
from v$system_parameter
where name = 'OPEN_LINKS'
Assuming you're not going to get off that lightly:
Is it possible that connection pooling is allowing User B to use User A's connection after the initial process was run, thus adding to the open links number if User B runs a SQL statement with more database links?
You say that you explicitly close the session, which, according to the documentation, should mean that all links associated with that session are closed. Other than that I confess complete ignorance on this point.
Is this a scenario where we should up our limit past 4? What are the disadvantages of increasing the number?
There aren't any disadvantages that I can think of. Tom Kyte suggested, albeit a long time ago, that each open database link uses about 500k of PGA memory. If you don't have that to spare then this will obviously cause a problem, but it should be more than fine for most situations.
There are, however, unintended consequences: imagine that you up this number to 100. Somebody codes something that continually opens links and pulls a lot of data through all of them (select * from my_massive_table or similar). Instead of 4 sessions doing this you have 100, all attempting to transfer hundreds of gigabytes simultaneously. Your network dies under the strain...
There's probably more but you get the picture.
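For reference, here is a sketch of how the limit would typically be raised; OPEN_LINKS is a static parameter, so the change only takes effect after an instance restart:

-- OPEN_LINKS is static: set it in the spfile and restart the instance.
alter system set open_links = 10 scope = spfile;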
Do I need to explicitly close open database links before disconnecting from the database? Oracle documentation seems to suggest it should automatically happen, but "on occasion"... doesn't.
As you've noted, the best answer is "probably not", which isn't much help. You don't mention exactly how you're terminating the session, but if you're killing it rather than closing it gracefully then the answer is definitely yes.
Using a database link spawns a child process on the remote server. Because your server is no longer in absolute charge of this process, there are a myriad of things that could cause it to become orphaned or otherwise fail to close when the parent process terminates. By no means does this happen all the time, but it can and does.
I would do two things.
In your process, if an exception is encountered, e-mail the results of the following query to yourself.
select *
from v$dblink
At a minimum you will know which database links are open in the session, which gives you some way of tracing them.
Follow the documentation's advice; specifically the following:
"You may have occasion to close the link manually. For example, close
links when:
The network connection established by a link is used infrequently in an application.
The user session must be terminated."
The first case seems to fit your situation exactly. Your process doesn't seem to be time-sensitive, so what have you got to lose? The syntax is:
alter session close database link <linkname>
We ended up increasing the OPEN_LINKS limit, but we never did find the root cause.