The HERE Android SDK contains lots of log messages, but they go through a wrapper around the Android Log class that, by default, does not actually log them. Is there a way to make it log these messages?
I'm able to set a breakpoint in a decompiled class, bg.class, in its static init block and set a = null, which enables logging, but it seems like there must be a better and less tedious way to do that.
Sorry, that is not supported :)
What would you like to achieve by turning on the logs? Generally our logs contain lots of trace messages that are not useful for 3rd party developers.
I've had Application Insights set up on my ASP.NET project for a couple months with no issues. I use Custom Events for logging certain events.
Recently, I tried to add a Custom Event after a user has authenticated in order to track login behavior. My custom event DOES log to the Application Insights debug session; I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar have any (application) insights? I couldn't help myself ;)
There are some things to check:
Are you logging to one resource (iKey) and searching on another? A lot of people send data to one resource in dev/debug and a different resource in release/prod environments, so make sure you're sending to the place you expect and searching the place you expect.
Is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. It could be that there's something wrong with the data you're sending, or that you're getting capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200, and the response will generally tell you why any items were rejected.
If the data is getting sent successfully and is going where you expect it to go, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see if there are any current issues with the service.
I am confused, however, by what you mean when you say
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
What do you mean by "it just disappears"? If you see it in the output window, then the SDK saw it and it will get sent, barring any of the above 3 items. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the VS search tools that show data sent by the AI SDK during debug, that tool currently has a cap of the most recent 250 items that have occurred during the debug session.
An interface has a requirement that we do not include an Expect: 100-continue header. (The documentation assumes I will be using C# or PHP code to talk to it, and includes the code to not send the header.)
I quickly googled around a bit and found many topics on how to disable this when not using BizTalk, and multiple posts that lead me to believe BizTalk sends an Expect: 100-continue by default as well (BizTalk Data Services: Extended to bring management functions through IUpdatable, and Adding Custom HTTP Headers to messages sent via HTTP Adapter). I have had trouble finding anyone trying to disable it.
Since I have found the C# code to disable it, would a solution be to create a custom pipeline component that disables this?
This is not something I would worry about. I don't recall ever seeing Expect: 100-continue in any trace to or from BizTalk using WCF.
I will say that it is very strange that they would have a dependency on not seeing this. Either way, if WCF is sending it, you should be able to remove it with a Behavior.
You'll have to set this all up to even see if it's a problem. Here's where I say just try it and see what happens.
I am wondering if there is a good way to do automated system testing of a Chromecast receiver application.
If you open the application URL in a Chrome browser, the cast_receiver library cannot find the websocket connection on:
ws://localhost:8008/v2/ipc
Since this handles the communication between the app and the Chromecast hardware, I am thinking of something like a Node.js websocket server that can talk to the Chromecast receiver app. Is there such a system, or does anyone know if there are plans for Google to release something for this kind of testing?
Also, would there be other problems related to the differences between the Chromecast browser and the Chrome browser? As I understand it, the Chromecast browser is just a subset of Chrome, which makes me think it should work.
No, there is no easy way to do this.
DISCLAIMER: I haven't tried any of what I'm about to suggest. It's also probably a terrible idea, as Google could change the protocol at any time and in any fashion they desire, since it isn't a public thing.
BIG DISCLAIMER: You may be in violation of the ToS by doing this as Section 3.2 (Developer Policies) states that you "may not ... develop a standalone technology ... any functionality of any Google Cast Receiver". Possibly, you'd be making a standalone piece of technology that replicated the IPC functionality. But I don't know. I'm not a lawyer.
If you want to go and do this, I'd suggest making a copy of the Google Cast Receiver SDK (www.gstatic.com/cast/sdk/libs/receiver/2.0.0/cast_receiver.js as of April 28, 2015) and altering it so that it logs out the messages that are being sent and received.
Luckily, it appears that we have logging messages to help us find the relevant code.
The receiving method has the string "Received message". I would guess that "a.message" is what is being received.
The sending method has the string "IPC message sent". I would guess that "a" is what is being sent.
Once you've instrumented your copy of the code, you need to publish it somewhere that your receiver app can see it and then you need to edit your receiver app to point to your new and improved SDK. Please please please make sure that you do this on a non-published app for testing purposes only.
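As an alternative sketch (untested, and everything beyond the /v2/ipc path is my own naming): instead of hand-editing the minified copy, you could load a small script ahead of cast_receiver.js that wraps the page's WebSocket, so anything sent to or received from the IPC endpoint is captured without touching the SDK internals. In TypeScript it might look roughly like this:

// Runs before the cast_receiver.js script tag in the receiver page; names are made up.
const ipcLog: Array<{ direction: "in" | "out"; payload: unknown; ts: number }> = [];
(window as any).__ipcLog = ipcLog; // inspect this array later from the debugger

const NativeWebSocket = window.WebSocket;

function LoggingWebSocket(url: string, protocols?: string | string[]) {
  const ws = protocols !== undefined
    ? new NativeWebSocket(url, protocols)
    : new NativeWebSocket(url);
  if (url.indexOf("/v2/ipc") !== -1) {
    const nativeSend = ws.send.bind(ws);
    ws.send = (data: any) => {
      ipcLog.push({ direction: "out", payload: data, ts: Date.now() });
      nativeSend(data);
    };
    ws.addEventListener("message", (event) => {
      ipcLog.push({ direction: "in", payload: event.data, ts: Date.now() });
    });
  }
  return ws; // returning an object makes `new LoggingWebSocket(...)` yield the real socket
}

// The SDK may read the static constants (OPEN, CLOSED, ...), so copy them over.
(LoggingWebSocket as any).OPEN = NativeWebSocket.OPEN;
(LoggingWebSocket as any).CLOSED = NativeWebSocket.CLOSED;
(window as any).WebSocket = LoggingWebSocket;

This only sees what actually crosses the websocket, but it gets you roughly the same record of IPC traffic as editing the copy.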
Once that is done, you need to find some way to get your messages out of the code and into something that you can access. You have a few options.
Fiddle around with the code more and figure out how to get the Chromecast to log out the data you want;
Store the information in an array and read it using the debugger;
Open your own socket (or websocket) and send that data to a server that you control.
From here, you can run your app, interact with it, and then have a complete record of the IPC messages that were sent and received. Armed with this, you can create your own Fake-IPC server that listens for specific messages and spits out the stuff that is in your log.
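For what it's worth, a minimal sketch of such a fake IPC endpoint, assuming Node.js with the third-party ws package (the ws://localhost:8008/v2/ipc address comes from the question above, the canned replies are placeholders you would fill in from your captured log, and the protocol itself is private and can change underneath you):

// fake-ipc.ts: pretend to be the platform side of ws://localhost:8008/v2/ipc.
import { WebSocketServer } from "ws";

const server = new WebSocketServer({ port: 8008, path: "/v2/ipc" });

// Request -> response pairs captured earlier with the instrumented SDK (placeholders).
const cannedReplies: Record<string, string> = {
  // '{"namespace":"...","data":"..."}': '{"namespace":"...","data":"..."}',
};

server.on("connection", (socket) => {
  console.log("cast_receiver connected");
  socket.on("message", (raw) => {
    const message = raw.toString();
    console.log("IPC in:", message);
    const reply = cannedReplies[message];
    if (reply !== undefined) {
      console.log("IPC out:", reply);
      socket.send(reply);
    }
  });
});

Matching on exact message strings is as naive as it sounds, so expect to grow this into something smarter, but it is enough to open the receiver URL in desktop Chrome and see whether the SDK will talk to it.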
I'm a nascent coder creating a simple iOS app. I'm experimenting with coding push notifications for the first time, and I have a simple question regarding the Parse Installation object and a scenario where multiple users log in on the same device (let's say a loaner iPad at a library).
Based on the Parse documentation I've seen, when a user subscribes to a channel - let's say "The Giants" - it saves this info on the Installation Object. But if the user logs out and another user logs in, does Parse assume that we are to erase the previous channels? Should channels therefore be saved to the User class first, and only saved to Installation when a user logs in? And similarly how do we handle advanced targeting where I want to query Installation for a specific User objectId? Is the best practice to always leave the last user logged in listed as 'owner'/'user'?
If you find the library example impractical, also consider something like signing into your Spotify account on a friend's device in order to play a private playlist at a party. I know these are less common scenarios, but I want to make sure I know how to handle them.
I'm new to Push Notifications so I may be missing something fundamental here, but if any experienced developer can lend some advice as to how they handle this scenario, it would be greatly appreciated.
Store a reference to the PFUser when you save the installation. Add a field @"owner" and tag the PFUser to it.
After a user logs in, if they are not associated with the current installation, send an alert asking if they'd like to receive pushes on this device. If that's the case, resave and update the current installation. Otherwise leave it as is.
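To illustrate the advanced-targeting part of the question, here is a hedged sketch against the Parse JavaScript SDK (written in TypeScript; pushToUser and the import are my own naming, the "owner" field is the one suggested above, and querying installations generally requires the master key):

import Parse from "parse/node"; // in Cloud Code, Parse is already a global

// Push only to the installations whose "owner" pointer is a given _User.
function pushToUser(userId: string, message: string) {
  const owner = new Parse.User();
  owner.id = userId; // unfetched pointer to the target user's objectId

  const query = new Parse.Query(Parse.Installation);
  query.equalTo("owner", owner);

  return Parse.Push.send(
    { where: query, data: { alert: message } },
    { useMasterKey: true } // installation queries need the master key
  );
}

On the client side, the matching piece is just what's described above: set the current PFUser on the current installation under "owner" and save it whenever a new user confirms they want pushes on that device.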
This is a tricky area, let me know what you come up with.
It's pretty rare that people will sign into a service using someone else's phone, so I don't think it's a huge issue if you want to just "see what happens" and work it out if there's demand.
I have 3 iOS apps using a single Parse application which supports push notifications for all 3 apps. I have the NDEBUG flag set on the project for the Release configuration. I use #ifndef NDEBUG to set a boolean value that I save on the current installation. This makes it easy to identify which installations I can use for testing push notifications. I also use the appIdentifier value to filter to the application I am testing.
I also set other values as needed but these values are a good start.
// "debug" reflects the NDEBUG flag described above (YES in debug builds, NO in release)
if (debug) {
    [currentInstallation setObject:[NSNumber numberWithBool:YES] forKey:@"debug"];
}
else {
    [currentInstallation setObject:[NSNumber numberWithBool:NO] forKey:@"debug"];
}
I have a Flex 3 app which I want to instrument to report errors generated by the app to a server via a simple HTTPService call.
My idea is to wrap all the methods in try ... catch blocks that pass the Error object to a reportError() function (which then fires off the HTTP request and pops up a dialog), but is there a better way?
I have implemented a system such as the one you suggest, wrapping all of my methods in try/catch and sending the stack trace to a service that emails me the errors. I created a basic format for the error report that logs which method the error occurred in; I noticed that I sometimes get null stack traces, so I wanted to log that information for those situations.
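The shape of that wrap-and-report pattern is simple. Here it is as a TypeScript sketch rather than ActionScript, just to show the idea (reportError, the endpoint URL, and the payload fields are all illustrative; the Flex version would send via HTTPService):

// Illustrative only: report an error plus some context to a logging endpoint.
function reportError(error: Error, context: string): void {
  const payload = {
    context, // which method the error occurred in
    message: error.message,
    stack: error.stack ?? null, // stack traces can come back null, as noted above
    timestamp: new Date().toISOString(),
  };
  // Fire-and-forget; the endpoint URL is a placeholder.
  fetch("https://example.com/log-error", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  }).catch(() => {
    // Never let the error reporter itself throw.
  });
}

function doSomethingRisky(): void {
  try {
    // ...the real method body you want to instrument...
    throw new Error("demo failure");
  } catch (e) {
    reportError(e instanceof Error ? e : new Error(String(e)), "doSomethingRisky");
  }
}

The tedious part is exactly what the question says: every method you care about needs its own try/catch wrapper.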
It GREATLY improved my application. I tracked down a (large) handful of errors and released a much cleaner build to my users. Now I don't ever get the emails.
The better way IMO is something like this.
I've no idea how good this particular project is (aside from its spooky GPL license), but I don't see why logging in ActionScript should be any different from J2EE, C++, or, say, Python. Yes, it has some sandbox security issues, but I think if that is solved, you could log to some centralized log server.
Unfortunately, there really isn't -- errors don't bubble up in such a way as to be trappable at a global level, so the only real way you have to catch errors is to try and catch them all manually. (The community's been pretty vocal in asking for a global exception-handling feature for a while, but it's not there yet.)