I'd like to set up a scripted input in Splunk that does a curl against the Graphite render URL API. I imagine I could configure this input to run every minute and retrieve the last minute's worth of events.
My concern with this is that some events might be missed, or duplicated.
Has anybody done something similar to this? How could I keep track of the events from Graphite that I have already read?
If you write a modular input you can use data checkpoints. See the docs for more info: http://docs.splunk.com/Documentation/Splunk/6.2.1/AdvancedDev/ModInputsCheckpoint
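A rough sketch of the checkpoint idea for a Python modular input (the checkpoint directory comes from the input configuration Splunk passes to the script; the file name and one-minute fallback here are just illustrative):

import json
import os
import time

def load_checkpoint(checkpoint_dir):
    # Read the end of the last window we successfully indexed.
    path = os.path.join(checkpoint_dir, "graphite_last_ts")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["last_ts"]
    return int(time.time()) - 60  # first run: start one minute back

def save_checkpoint(checkpoint_dir, last_ts):
    # Persist the window end so a restart resumes here instead of
    # missing or re-reading events.
    path = os.path.join(checkpoint_dir, "graphite_last_ts")
    with open(path, "w") as f:
        json.dump({"last_ts": last_ts}, f)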
My concern with this is that some events might be missed, or duplicated.
Yes, events can go missing, in two cases:
If you're pushing your Graphite server to its limits, there is a lag between the point where a datapoint is received and when it is flushed to disk. With large queues, I have seen this go up to 20 minutes (IO is the constraint here).
For example, in the case above where there's a 20-minute lag and I am storing data at 1-minute granularity, the latest 20 datapoints will be NULL against their timestamps. Of course, they will soon fill in with the next flush.
Know that these delays are indeterminate. So if you have a zero-lag deployment, go for this approach.
The latest datapoint may or may not be NULL at any given point, because of Graphite's flushing behaviour, even if nothing is throttling. You can use something like &from=-21m&to=-1m to make sure you never encounter this. Note: your monitoring now lags by a minute. :)
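A rough sketch of that lagged-window query as a script Splunk could run (Python; the host and metric name are made up, and note the render API's end parameter is spelled "until" and minutes are spelled "min"):

import requests

resp = requests.get(
    "http://graphite.example.com/render",
    params={
        "target": "stats.some.metric",  # hypothetical metric
        "from": "-21min",
        "until": "-1min",               # stop a minute back to skip the unflushed edge
        "format": "json",
    },
)
for series in resp.json():
    for value, timestamp in series["datapoints"]:
        if value is not None:           # NULLs are points not flushed yet
            print(timestamp, series["target"], value)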
All said, graphite is a great monitoring tool if your requirements aren't realtime.
I have been scouring the internet for days for a solution to this problem.
That is, how do you handle aggregation when there is no network connection? I have a task management app that aggregates metadata about user tasks. For example, a task can contain tags, which are aggregated and shown to the user in a daily dashboard. This would be easy if the user were always online, since I could use a transaction or a Cloud Function to aggregate, but when the user is offline, the aggregation will appear to be incorrect until the user restores their network connection.
Aggregation queries are explained here:
https://firebase.google.com/docs/firestore/solutions/aggregation
Which states a limitation:
Offline support - Client-side transactions will fail when the user's device is offline, which means you need to handle this case in your app and retry at the appropriate time.
However, there has yet to be any example or documentation on how to 'handle this case'. How would I go about addressing this problem?
Some thoughts:
I could cache the item if a transaction fails. This item will be aggregated on top of the stored aggregation. However, going down this line would mean that I can't take advantage of Firestore's offline mode, because I'm using my own cache on every write while offline anyway.
I could aggregate on demand. That is, never store the aggregation. This is going to be very read-heavy depending on how many tasks a user has. Furthermore, if the aggregation needs to be shared as insights to other users, this option will not work, because other users do not have access to the tasks.
I'm at a loss and any help would be appreciated, thanks!
After a lot of research and trial and error I found a solution that can address this problem gracefully.
FieldValue.increment to the rescue.
What FieldValue.increment does is bypass the use of a transaction while respecting Firestore's default offline cache behaviour. It requires using set or update on the field directly. The drawback is the inability to use withConverter on the collection for type safety. I'm willing to live with that drawback considering how useful FieldValue.increment is.
I've done multiple tests and can confirm that the values can be incremented/decremented multiple times locally while offline. This offline value is reflected in a get or snapshot call to the cache. When the network connection is restored, the values are updated on the server.
The value itself is not stored in the cache; it simply stores the "difference" in the FieldValue sentinel for when it is time to update the server.
This method only works for incrementing and decrementing values. Storing averages is not possible this way, because the true total number of items is not known at the time of calculation while offline.
Instead, the total number of items is stored alongside the total value. The average is then calculated when and as needed. This way the average will always be accurate from a local perspective when offline, and it will also be accurate online once the total value and count have been synced.
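A rough sketch of that total + count pattern (shown with the Python google-cloud-firestore library just to illustrate the shape of the writes; the offline queueing described above is client-SDK behaviour, and the collection and field names are made up):

from google.cloud import firestore

db = firestore.Client()
stats = db.collection("task_stats").document("user_123")

def record_task(duration_minutes):
    # One increment per field, no transaction needed, so the client SDKs can
    # queue this write offline and apply it once the connection is restored.
    stats.set(
        {
            "totalDuration": firestore.Increment(duration_minutes),
            "taskCount": firestore.Increment(1),
        },
        merge=True,
    )

def average_duration():
    # The average is derived on demand from the two synced counters.
    snap = stats.get().to_dict() or {}
    count = snap.get("taskCount", 0)
    return snap.get("totalDuration", 0) / count if count else 0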
In my turn-based online game I have an in-game timer that ticks down from 24 hours to 0; when it reaches 0 for either player, that player has lost.
When a player makes their turn they write something like this to the database:
action: "not important"
timeStamp: 1670000000
What I want is for either of the two players to be able to get into the ongoing game at any time, read "timeStamp" and set the clock accordingly, showing how much time is left since the last action.
When writing to the database I am using ServerValue.TIMESTAMP (Android). I am aware of the ability to estimate the server time using the serverTimeOffset described here:
https://firebase.google.com/docs/database/android/offline-capabilities#server-timestamps
But I feel it's not always accurate when testing, so I wanted to explore if there is any other way to do this. What I really want is to get the actual server timestamp when reading the node:
timeLeft = actionTimeStamp - currentServerTime + 24h
Is this possible to do in ONE call? I am using RTDB, but I am open to moving to Firestore if it is possible there somehow.
There's no way to get the server timestamp without writing it to the database, but you can of course have each client write it and then immediately read it back.
That said, it shouldn't make much of a difference compared to using the initial start time that was written together with the serverTimeOffset value.
For a working example, have a look at How to implement a distributed countdown timer in Firebase
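For what it's worth, the remaining-time arithmetic is small once you have an offset estimate. A rough sketch (Python just for brevity; times in milliseconds, with the offset taken from .info/serverTimeOffset in a client SDK as described in the docs the question links to):

import time

TWENTY_FOUR_HOURS_MS = 24 * 60 * 60 * 1000

def time_left_ms(action_timestamp_ms, server_time_offset_ms=0):
    # Estimated server "now": local clock plus the offset Firebase reports.
    estimated_server_now_ms = int(time.time() * 1000) + server_time_offset_ms
    remaining = action_timestamp_ms + TWENTY_FOUR_HOURS_MS - estimated_server_now_ms
    return max(remaining, 0)  # clamp so an expired timer reads 0, not negative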
I have set up TTL on a DynamoDB table and enabled a stream. According to the AWS docs it can take up to 48 hours before an item is removed. I have run some experiments and I am seeing about a 10-minute delay. I can live with this, but has anyone else had longer delays?
Yes, there are instances where item removal takes more than 10 minutes. In fact, the SLA from DynamoDB is 48 hours. The time needed for the actual removal depends on the activity level of the DynamoDB table.
A more pointed rephrasing of Allan's answer:
Even if no one has seen that delay (and the chances of finding it anecdotally through a Q&A site seem like a bad statistical test), Amazon says to expect the possibility of that much of a delay. This is for resource cleanup only, and most likely a breach of the 48-hour SLA would only get you a refund of storage costs.
Do not depend on the absence of a given item to trigger logic within your application (e.g., user session timeout).
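A common safeguard, in line with that advice, is to filter expired items out at read time rather than trusting that they are already gone. A rough boto3 sketch (the table name and TTL attribute are hypothetical; in practice you'd usually use a Query with a key condition rather than a Scan):

import time

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("sessions")

def get_live_items():
    now = int(time.time())  # TTL attributes are epoch seconds
    # Items past their TTL may still be present for hours; hide them here.
    resp = table.scan(FilterExpression=Attr("expires_at").gt(now))
    return resp["Items"]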
I need to keep certain data (in a grid) up to date
and was going to poll the server every 15 seconds or so to get the data and refresh the grid. However, it feels a bit dirty (the grid will show the loading icon every 15 seconds), which doesn't look great...
Another option is to check if there is new data, compare the new data with the current data, and only refresh the grid if there are any changes (I would have to do this client side though, because maintaining the current state of every logged-in user also seems like overkill).
I'm sure there are better solutions and would love to hear about them.
I heard about COMET, but it seems to be a bit of overkill.
BTW, I'm using ASP.NET MVC on the server side.
I'd like to hear what people have to say for or against continuous polling with JS.
Cheers
Sounds like COMET is indeed the solution you're looking for. In that scenario, you don't need to poll, nor do comparisons, as you can push out only the "relevant" changed data to your grid.
Check out WebSync, it's a nice comet server for .NET that'll let you do exactly what you've described.
Here's a demo using ExtJS and ASP.NET that pushes a continuous stream of stock ticker updates. The demo is a little more than you need, but the principle is identical.
Every time you get the response from the server, check if something has changed.
Do a request. Let the user know that you are working by showing a spinner; don't hide it. Schedule the next request in 15 seconds. When that request executes and nothing has changed, schedule the next one in 15 + 5 seconds; if nothing has changed again, schedule the next one in 15 + 5 + 5 seconds, and so on. As soon as a request finds that something has indeed changed, reset the interval to 15 seconds.
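Sketched in Python just to show the scheduling idea (in the browser this would be JavaScript with setTimeout); the 15-second base and 5-second decay follow the description above, and fetch_data stands in for your actual request:

import time

BASE_INTERVAL = 15  # seconds
DECAY_STEP = 5      # added after every poll that finds no changes

def poll_loop(fetch_data):
    interval = BASE_INTERVAL
    last_data = None
    while True:
        data = fetch_data()
        if data != last_data:
            last_data = data
            interval = BASE_INTERVAL   # something changed: reset the interval
            # ...refresh the grid here...
        else:
            interval += DECAY_STEP     # nothing changed: back off a little
        time.sleep(interval)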
Prototype can do this semi-automatically with Ajax.PeriodicalUpdater, but you'll probably need something more customized to your needs.
Anyway, just an idea.
As for continuous polling in general: it's bad only if you hit a different site (using a PHP "bridge" or something like that). If you're using your own resources, you just have to make sure you don't deplete them. Set decent intervals with a decay.
I suggest Comet is not overkill if "updates need to be constant." 15 seconds is very frequent; is your site visited by many? Your server may be consumed serving these requests while starving others.
I don't know what your server-side data source looks like, or what kind of data you're serving, but one solution is to serve your data with a timestamp, and send the timestamp of the last poll with every subsequent request.
Poll the server, sending the timestamp of when the service was last polled (eg: lastPollTime).
The server uses the timestamp to determine what data is new/updated and returns only that data (the delta), decreasing your transmission size and simplifying your client-side code.
It may be empty, it may be a few cells, it may be the entire grid, but the client always updates data that is returned to it because it is already known to be new.
The benefits of this method are that it simplifies your client side code (which is less code for the client to load), and decreases your transmission size for subsequent polls that have no new data for the user.
Also, this keeps server-side state simple, because you don't have to save a state for each individual user. You just have one state, the state of the current data, which is differentiated by access time.
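A rough sketch of that delta idea (Python just to show the shape; in the question's setup this would be an ASP.NET MVC action, and the row and field names here are made up):

import time

def get_updates(rows, last_poll_time):
    # Return only the rows changed since the client's last poll, plus the
    # server time the client should send back as lastPollTime next time.
    now = time.time()
    delta = [r for r in rows if r["updated_at"] > last_poll_time]
    return {"serverTime": now, "rows": delta}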
I think checking if there is any new data is a good option.
I would count the number of rows in the database and compare that with the number of rows in your (HTML) table. If they're not the same, get the difference in rows.
Say you have 12 table rows and there are 14 database rows when you check: get the latest 14 - 12 = 2 rows.
I've been playing with RSS feeds this week, and for my next trick I want to build one for our internal application log. We have a centralized database table that our myriad batch and intranet apps use for posting log messages. I want to create an RSS feed off of this table, but I'm not sure how to handle the volume- there could be hundreds of entries per day even on a normal day. An exceptional make-you-want-to-quit kind of day might see a few thousand. Any thoughts?
I would make the feed a static file (you can easily serve thousands of these), regenerated periodically. Then you have a much broader choice of how to generate it, because it doesn't have to run in under a second; it can even take minutes. And users still get perfect download speed and a reasonable update speed.
If you are building a system with notifications that must not be missed, then a pub-sub mechanism (using XMPP, one of the other protocols supported by ApacheMQ, or something similar) will be more suitable than a syndication mechanism. You need some measure of coupling between the system that is generating the notifications and the ones that are consuming them, to ensure that consumers don't miss notifications.
(You can do this using RSS or Atom as a transport format, but it's probably not a common use case; you'd need to vary the notifications shown based on the consumer and which notifications it has previously seen.)
I'd split up the feeds as much as possible and let users recombine them as desired. If I were doing it I'd probably think about using Django and the syndication framework.
Django's models could probably handle representing the data structure of the tables you care about.
You could have a URL pattern that catches everything, something like: r'^rss/(?P<feeds>[\w-]+(?:/[\w-]+)*)/$', capturing the whole feed path as one string and splitting it in the view (I can't test it right now, so it might not be perfect).
That way you could use URLs like:
http://feedserver/rss/batch-file-output/
http://feedserver/rss/support-tickets/
http://feedserver/rss/batch-file-output/support-tickets/ (both of the first two combined into one)
Then in the view:
def get_batch_file_messages():
    # Grab all the recent batch file messages here.
    # Maybe cache the result and only regenerate every so often.
    return []

# Other feed functions here.

feed_mapping = {'batch-file-output': get_batch_file_messages}

def rss(request, feeds):
    items_to_display = []
    for feed in feeds.strip('/').split('/'):
        items_to_display += feed_mapping[feed]()
    # Process items_to_display and return the feed here.
Having individual, chainable feeds means that users can subscribe to one feed at a time, or merge the ones they care about into one larger feed. Whatever's easier for them to read, they can do.
Without knowing your application, I can't offer specific advice.
That said, it's common in these sorts of systems to have a level of severity. You could have a query string parameter that you tack onto the end of the URL to specify the severity. If set to "DEBUG" you would see every event, no matter how trivial. If set to "FATAL" you'd only see the events that are "System Failure" in magnitude.
If there are still too many events, you may want to sub-divide your events into some sort of category system. Again, I would make this a query string parameter.
You can then have multiple RSS feeds for the various categories and severities. This should allow you to tune the level of alerts you get to an acceptable level.
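A rough sketch of that query-string filtering (Python, to match the Django suggestion earlier in the thread; the severity names, event shape, and parameter names are hypothetical):

SEVERITY_ORDER = ["DEBUG", "INFO", "WARN", "ERROR", "FATAL"]

def filter_events(events, request):
    # e.g. /rss/app-log/?severity=ERROR&category=batch
    min_severity = request.GET.get("severity", "DEBUG")
    category = request.GET.get("category")
    min_rank = SEVERITY_ORDER.index(min_severity) if min_severity in SEVERITY_ORDER else 0
    return [
        e for e in events
        if SEVERITY_ORDER.index(e["severity"]) >= min_rank
        and (category is None or e["category"] == category)
    ]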
In this case, it's more of a manager's dashboard: how much work was put into support today, is there anything pressing in the log right now, and for when we first arrive in the morning as a measure of what went wrong with batch jobs overnight.
Okay, I've decided how I'm going to handle this. I'm using the timestamp field on each row and grouping by day. It takes a little bit of SQL-fu to make it happen, since of course there's a full timestamp there and I need to be semi-intelligent about how I pick the log message to show from within the group, but it's not too bad. Further, I'm building it to let you select which application to monitor, and then showing every message (max 50) from a specific day.
That gets me down to something reasonable.
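For reference, a rough sketch of the per-day query described above (the table and column names are hypothetical, and date() plus the ? placeholders are SQLite-style, so adjust for your database):

SQL = """
    SELECT logged_at, message
    FROM app_log
    WHERE app_name = ?
      AND date(logged_at) = ?
    ORDER BY logged_at DESC
    LIMIT 50
"""

def messages_for_day(conn, app_name, day):
    # conn is any DB-API connection; day is an ISO date string like '2009-03-05'
    cur = conn.cursor()
    cur.execute(SQL, (app_name, day))
    return cur.fetchall()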
I'm still hoping for a good answer to the more generic question: "How do you syndicate many important messages, where missing a message could be a problem?"