I have a problem with the Queue Time parameter:
1. A hit is sent.
2. An offline action begins (a phone call starts).
3. During the conversation, the visitor comes to the site from another source, so a new session starts.
4. The phone call ends and I send an event with the qt parameter set to the phone call length:
http://www.google-analytics.com/collect?v=1&t=event&tid=UA-74639540-1&cid=1141561565.1464086029&uip=195.138.65.154&ea=call&ec=PROPER&el=119&qt=127
5. The event appears in GA as if it were initiated by the last session, whereas with qt I expect it to be placed in the session that was active at the moment the phone call started.
What is the problem? Is it possible to post an event to a session that is not the most recent one but ended less than 4 hours ago, as the "Queue Time" behaviour suggests? Do I need to specify some additional parameter?
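One thing worth checking: the Measurement Protocol reference defines qt as the time delta, in milliseconds, between when the hit occurred and when it was reported, and hits with a queue time greater than four hours may not be processed. So qt=127 in the URL above means 127 milliseconds, not 127 seconds. A minimal sketch of resending the same hit with qt converted from seconds (values copied from the URL above; the requests library is assumed):

```python
import requests

# Same hit as the URL above, resent with qt in milliseconds, as the
# Measurement Protocol reference defines it (qt=127 would mean 127 ms).
call_length_seconds = 119
payload = {
    "v": 1,
    "t": "event",
    "tid": "UA-74639540-1",
    "cid": "1141561565.1464086029",
    "uip": "195.138.65.154",
    "ec": "PROPER",
    "ea": "call",
    "el": "119",
    "qt": call_length_seconds * 1000,  # 119 s -> 119000 ms
}
resp = requests.post("http://www.google-analytics.com/collect", data=payload)
print(resp.status_code)  # /collect returns 200 even for bad hits; POST to /debug/collect to validate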
I have an application that should notify the user based on an interval pattern like:
Event
> Pushes
Pattern: immediately - 3 days - 7 days - 12 days
If the user acts on an event, the pushes for that event should stop. There can be multiple events of the same type, each of which should send a push when it occurs.
I also do not want to bother the user: if someone has 5 events, I should not send five times as many pushes, but instead combine the pushes due within the next day (or some other interval) into a single push, for example "Reminder: you have 5 events".
So for now I have settled on this kind of solution: when an event occurs, insert into the database all pushes for that event that should be sent later, each with its send datetime. If the user takes action, the pushes for that event are marked as redundant. Before sending, analyze the interval, for example take all pushes due in the next 24 hours, send one, and mark all the others as already sent. A rough sketch of this approach follows.
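A minimal sketch of that design, assuming SQLite; the schema, table, and column names are hypothetical:

```python
import sqlite3
from datetime import datetime, timedelta

# A sketch of the table-based approach described above.
db = sqlite3.connect("pushes.db")
db.execute("""CREATE TABLE IF NOT EXISTS pushes (
    id INTEGER PRIMARY KEY,
    user_id INTEGER,
    event_id INTEGER,
    send_at TEXT,
    status TEXT DEFAULT 'pending')""")  # status: pending | sent | redundant

# The interval pattern from the question: immediately, 3, 7, 12 days.
OFFSETS = [timedelta(0), timedelta(days=3), timedelta(days=7), timedelta(days=12)]

def schedule_pushes(user_id, event_id, occurred_at):
    """On event creation, insert the whole push pattern up front."""
    for offset in OFFSETS:
        db.execute("INSERT INTO pushes (user_id, event_id, send_at) VALUES (?, ?, ?)",
                   (user_id, event_id, (occurred_at + offset).isoformat()))
    db.commit()

def cancel_pushes(event_id):
    """The user acted on the event: remaining pushes become redundant."""
    db.execute("UPDATE pushes SET status = 'redundant' "
               "WHERE event_id = ? AND status = 'pending'", (event_id,))
    db.commit()

def send_due_pushes(user_id, window=timedelta(hours=24)):
    """Coalesce everything due within the window into one push."""
    horizon = (datetime.now() + window).isoformat()
    due = db.execute("SELECT id, event_id FROM pushes WHERE user_id = ? "
                     "AND status = 'pending' AND send_at <= ?",
                     (user_id, horizon)).fetchall()
    if due:
        events = {row[1] for row in due}
        print(f"push to user {user_id}: reminder, you have {len(events)} events")
        marks = ",".join("?" * len(due))
        db.execute(f"UPDATE pushes SET status = 'sent' WHERE id IN ({marks})",
                   [row[0] for row in due])
        db.commit()
```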
Is this OK, or do better solutions exist?
I have experience building the same kind of application as you. What I do is:
CMS -> Redis -> worker
CMS: used for creating the push notification content, including the time when that content should be sent.
Redis: used for storing the delayed job data.
Worker: a PHP application that pulls the delayed job data from Redis. I use Laravel here, and I take advantage of Laravel's delayed queue dispatching.
Previously I tried using a database and the SQS message broker as queue drivers. Why did I switch to Redis? First, the database was too costly, because my queue traffic is very heavy. SQS was better than the database, but SQS cannot hold delayed jobs that are weeks old. So my final choice was Redis. Of course, you could use another service such as RabbitMQ.
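For illustration, the core idea behind a Redis-backed delayed queue is a sorted set scored by the unix timestamp at which each job becomes due. A minimal sketch in Python with redis-py; the key and payload names are illustrative, not Laravel's internals:

```python
import json
import time
import redis

# Delayed jobs live in a sorted set scored by their due timestamp.
r = redis.Redis()

def dispatch_later(job, delay_seconds):
    r.zadd("queue:delayed", {json.dumps(job): time.time() + delay_seconds})

def pop_due_jobs():
    now = time.time()
    due = r.zrangebyscore("queue:delayed", 0, now)
    # Note: read + delete is not atomic here; Laravel does this step in a Lua script.
    if due:
        r.zremrangebyscore("queue:delayed", 0, now)
    return [json.loads(raw) for raw in due]

# Usage: schedule a push for 3 days out, then poll from the worker loop.
dispatch_later({"user_id": 42, "message": "reminder"}, delay_seconds=3 * 24 * 3600)
for job in pop_due_jobs():
    print("send push:", job)
```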
I've had Application Insights set up on my ASP.NET project for a couple months with no issues. I use Custom Events for logging certain events.
Recently, I tried to add a Custom Event after a user has authenticated, in order to track login behavior. My custom event DOES log to the Application Insights debug session. I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar with this have any (application) insights? I couldn't help myself ;)
There are some things to check:
Are you logging to one resource (iKey) and searching on another? A lot of people send data to one resource in dev/debug and a different resource in release/prod environments, so make sure you're sending to the place you expect, and searching in the place you expect.
Is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. There could be something wrong with the data you're sending, or you might be getting capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200, and the response will generally tell you why any rejected items weren't accepted.
If the data is being sent successfully and is going where you expect it to go, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see if there are any current issues with the service. One way to rule out item 2 is to replay a hit by hand against the ingestion endpoint, as sketched below.
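A sketch of that manual test in Python; the envelope follows the public schema the SDKs send, and the iKey below is a placeholder for your instrumentation key:

```python
import datetime
import requests

# Hand-build one custom event envelope and post it to the ingestion endpoint.
envelope = {
    "name": "Microsoft.ApplicationInsights.Event",
    "time": datetime.datetime.utcnow().isoformat() + "Z",
    "iKey": "00000000-0000-0000-0000-000000000000",  # placeholder instrumentation key
    "data": {
        "baseType": "EventData",
        "baseData": {"ver": 2, "name": "ManualTestEvent"},
    },
}
resp = requests.post("https://dc.services.visualstudio.com/v2/track", json=envelope)
# 200 with itemsAccepted == itemsReceived means ingestion took the event;
# other responses include per-item errors explaining any rejection.
print(resp.status_code, resp.text)
```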
I am confused, however, by what you mean when you say:
"However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears."
What do you mean by "it just disappears"? If you see it in the output window, then the SDK saw it, and it will get sent, which rules out the above three items. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the VS search tools that show data sent by the AI SDK during debug, that tool currently caps at the most recent 250 items that have occurred during the debug session.
I use Google Analytics to track user behaviour in an application. What I do is this:
Send a start session
Send a custom event "Lifetime start"
If there is an error: send a fatal exception
Send a custom event "Lifetime stop"
Send a stop session
Now when I look at the statistics, I see that a certain percentage of users had an exception over the last 30 days. However, all users have had sessions without exceptions! This is almost impossible, since I know there are users for whom the application crashes every time.
Is it possible that the fatal exception I submit terminates the session, so that even users for whom the app crashes every time get a second (short) session containing only the "Lifetime stop" custom event? (That would explain my statistics.)
I ran a test using Universal Analytics via the web (so this was not done in an app), but the results should be consistent with your setup.
I started a session and sent a "Before Exception" event which showed up in my real-time event report. I then waited a few seconds and successfully sent a fatal exception (which there is not a real-time report for). Without refreshing I then sent an "After Exception" event which came through fine in my real-time report.
In the following screenshot (User Explorer), you can see that the two "Exception Test" events I described are both in the same session.
I would think that whatever fatal crash you're seeing is what is preventing additional data from appearing in Google Analytics, not that Google Analytics is ending the session at the time of the fatal exception. If it were ending the session, you'd still see events for "Lifetime stop", but it sounds like you aren't seeing those events at all.
The only things that end a GA session:
Session timeout (default: 30 minutes)
End of day
A campaign source change (UTM/AdWords/referral)
Manually ending the session as you described
You may need to gather some more context to actually get to the bottom of this (maybe a remote server log?), but from the information provided (and if I'm understanding it correctly) I'm leaning towards the crash preventing the rest of the code from running.
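For anyone who wants to reproduce the test above without a browser, the same three hits can be sent via the Measurement Protocol. A sketch with placeholder tid/cid; exf=1 flags the exception hit as fatal:

```python
import requests

# Reproduce the "event, fatal exception, event" test from a script.
ENDPOINT = "http://www.google-analytics.com/collect"
base = {"v": 1, "tid": "UA-XXXXXX-1", "cid": "test-client-1"}  # placeholders

hits = [
    {"t": "event", "ec": "Exception Test", "ea": "Before Exception"},
    {"t": "exception", "exd": "SimulatedCrash", "exf": 1},
    {"t": "event", "ec": "Exception Test", "ea": "After Exception"},
]
for hit in hits:
    requests.post(ENDPOINT, data={**base, **hit})
# If the fatal exception ended the session, the two events would land in
# different sessions in User Explorer; in my test they did not.
```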
The title sums it up. The same question is here. I'm posting on SO to see if I can get any help. I also made an almost-minimal project to demonstrate the problem I'm facing, so the links that follow point to the pieces of code being mentioned.
Nothing fancy in what I'm currently doing:
My watchface is notified that the Bluetooth connection with the phone is up, using .pebble_app_connection_handler.
In that Bluetooth callback, I send a message to the phone using app_message_outbox_send() (only when the BT connection is up, of course).
My Android app has a BroadcastReceiver that listens to these messages and calls an IntentService.
This IntentService calculates the data, pushes it to the watch and sets itself to run again after some time.
What I expect:
When the BT connection comes up, the connection handler is called.
app_message_outbox_send() returns a value indicating whether initiating the message hit any errors. Normally this is APP_MSG_OK, but it can be APP_MSG_BUSY, and I'm perfectly aware of that.
The app message callbacks (app_message_register_inbox_received() and friends) are called to indicate whether the asynchronous process of sending the message to the phone actually worked. This is stated in the docs.
What I'm seeing:
The expected steps happen when the watchface is loaded, because I trigger the update manually. However, when the update is triggered by a BT connection event, expected steps 1 and 2 happen, but step 3 does not.
This is particularly aggravating when I get APP_MSG_OK in step 2, because then I can reasonably expect that everything on the watch side went OK and that something will arrive in the app message callbacks. Basically, the docs tell me to wait for a call that never arrives.
This happens 100% of the time.
Thank you for any help. I have another solution that works, using the watch to keep track of the update interval, but I believe this approach lets me save more battery by making use of recent Android features.
From the documentation:
To also be notified of connection events relating to any PebbleKit
companion apps associated with this watchapp, also assign a handler to
the pebblekit_connection_handler field. This will be called when the
status of the connection to the PebbleKit app changes.
Maybe that is what you need.
I didn't see a situation quite like mine, so here goes:
Scenario highlights: the user wants a system that includes custom SMS alerts. One component of the functionality is a way to identify a starting point based on user input, then send SMS messages with a personalized message at pre-defined intervals after the trigger. I've never used Twilio before and am noodling around with the implementation.
First-pass solution: using my Twilio account, I designated the .aspx page that receives the inbound triggering alert/SMS via GET. The receiving page declares and instantiates my SMSAlerter object within page load, which responds immediately with a first SMS and kicks off a System.Timers.Timer. Elementary, and functional up to a point.
Problem: the alerts continue to be sent only if the timer interval is a short time span. I tested it at a one-minute interval and it was successful. When I went to 10 minutes, the immediate SMS is sent and the first message 10 minutes later is sent, but nothing after that.
My observation: since there is no interaction with the resource after the inbound text, the session times out at the default 20 minutes. Increasing the session timeout doesn't work, and even if it did, it would not seem right, since the interval will be on the order of hours, not minutes.
Using the Cache to store each new SMSAlerter might be the way to go. For any SMSAlerter that is created, the schedule is used for roughly 12 hours and is replaced with a new SMSAlerter object when the same user notifies the system the following day. Is there a better way? Am I over- or under-simplifying? I am not anticipating heavy traffic now (tens of users), but the user is thinking big.
Thank you for any comments and suggestions. I didn't include the code, because the question is about design, not syntax.
I think your timer is going out of scope about 20 minutes after the original request, which kills it. I have a feeling that if you kept refreshing the .aspx page this wouldn't happen, but obviously that doesn't help much.
You could launch a new thread that owns the System.Timers.Timer object so it stays alive and doesn't go out of scope when there are no follow-up requests to the server. To be honest, though, this isn't a great idea, although it might help with understanding the issue.
Ultimately, you'll need some sort of continuously running service, as you don't want to depend on the app pool for this. I'd suggest a Windows Service running in the background to handle it, which will be suitable as a long-term solution; a rough sketch of the idea follows.
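The shape of that service, sketched in Python for brevity (a C# Windows Service would follow the same loop); the alerts table and send_sms() are hypothetical placeholders for the real Twilio call:

```python
import sqlite3
import time
from datetime import datetime

def send_sms(phone, message):
    print(f"SMS to {phone}: {message}")  # the Twilio REST API call would go here

def run_service(poll_seconds=60):
    db = sqlite3.connect("alerts.db")
    db.execute("""CREATE TABLE IF NOT EXISTS alerts (
        id INTEGER PRIMARY KEY, phone TEXT, message TEXT,
        due_at TEXT, sent INTEGER DEFAULT 0)""")
    while True:  # lives independently of web requests, sessions, or app pools
        now = datetime.now().isoformat()
        rows = db.execute("SELECT id, phone, message FROM alerts "
                          "WHERE sent = 0 AND due_at <= ?", (now,)).fetchall()
        for alert_id, phone, message in rows:
            send_sms(phone, message)
            db.execute("UPDATE alerts SET sent = 1 WHERE id = ?", (alert_id,))
        db.commit()
        time.sleep(poll_seconds)
```

With this in place, the receiving .aspx page only inserts one row per scheduled alert instead of holding a live timer in the worker process.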
Hope this helps!
(Edited slightly to make the Windows Service aspect clearer.)