Google Analytics SDK for Android v2 does not report any crashes or exceptions; I have tried EasyTracker and ExceptionReporter, but neither works - google-analytics

With EasyTracker:
<bool name="ga_reportUncaughtExceptions">true</bool>
With ExceptionReporter:
UncaughtExceptionHandler myHandler = new ExceptionReporter(
    myTracker,                                    // Currently used Tracker.
    GoogleAnalytics.getInstance(),                // GoogleAnalytics singleton.
    Thread.getDefaultUncaughtExceptionHandler()); // Current default uncaught exception handler.
Thread.setDefaultUncaughtExceptionHandler(myHandler); // Make myHandler the new default uncaught exception handler.
I receive other data, such as events and page views, but no crash or exception data at all. I have deliberately added some uncaught exceptions and crashes to my program.

If you look in the API doc:
ExceptionReporter(Tracker tracker, ServiceManager serviceManager, java.lang.Thread.UncaughtExceptionHandler originalHandler)
GoogleAnalytics does not implement the ServiceManager interface... luckily, GAServiceManager does. Thus, if you change:
GoogleAnalytics.getInstance()
to:
GAServiceManager.getInstance()
It works for me
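For reference, the delegation pattern that ExceptionReporter implements — record the crash, then hand off to whatever handler was installed before — can be sketched with plain JDK classes. The TrackingHandler class below and its lastReport field are invented stand-ins for the GA-specific pieces; this is an illustration of the chaining, not the SDK's actual code.

```java
// Minimal stand-in for ExceptionReporter: records the crash, then
// delegates to the handler that was installed before it.
class TrackingHandler implements Thread.UncaughtExceptionHandler {
    private final Thread.UncaughtExceptionHandler previous;
    volatile String lastReport; // stands in for the analytics hit

    TrackingHandler(Thread.UncaughtExceptionHandler previous) {
        this.previous = previous;
    }

    @Override
    public void uncaughtException(Thread t, Throwable e) {
        lastReport = e.getClass().getSimpleName() + ": " + e.getMessage();
        if (previous != null) {
            previous.uncaughtException(t, e); // preserve the old behavior
        }
    }
}

public class Main {
    public static void main(String[] args) throws InterruptedException {
        TrackingHandler handler =
            new TrackingHandler(Thread.getDefaultUncaughtExceptionHandler());
        Thread.setDefaultUncaughtExceptionHandler(handler);

        Thread crasher = new Thread(() -> {
            throw new IllegalStateException("boom");
        });
        crasher.start();
        crasher.join(); // the handler runs on the crashing thread before it dies

        System.out.println(handler.lastReport);
    }
}
```

The key point this mirrors is the third constructor argument in the snippet above: passing the previous default handler keeps the original crash behavior intact after reporting.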

Related

BackoffExceptions are logged at error level when using RetryTopicConfiguration

I am a happy user of the recently added RetryTopicConfiguration; there is, however, a small issue that is bothering me.
The setup I use looks like:
@Bean
public RetryTopicConfiguration retryTopicConfiguration(
        KafkaTemplate<String, String> template,
        @Value("${kafka.topic.in}") String topicToInclude,
        @Value("${spring.application.name}") String appName) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .fixedBackOff(5000L)
            .maxAttempts(3)
            .retryTopicSuffix("-" + appName + ".retry")
            .suffixTopicsWithIndexValues()
            .dltSuffix("-" + appName + ".dlq")
            .includeTopic(topicToInclude)
            .dltHandlerMethod(KAFKA_EVENT_LISTENER, "handleDltEvent")
            .create(template);
}
When a listener throws an exception that triggers a retry, the DefaultErrorHandler will log a KafkaBackoffException at error level.
For a similar problem it was suggested to use a ListenerContainerFactoryConfigurer, yet this does not remove all error logs; I still see the following in my logs:
2022-04-02 17:34:33.340 ERROR 8054 --- [e.retry-0-0-C-1] o.s.kafka.listener.DefaultErrorHandler : Recovery of record (topic-spring-kafka-logging-issue.retry-0-0@0) failed
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.listener.KafkaBackoffException: Partition 0 from topic topic-spring-kafka-logging-issue.retry-0 is not ready for consumption, backing off for approx. 4468 millis.
Can the log-level be changed, without adding a custom ErrorHandler?
Spring-Boot version: 2.6.6
Spring-Kafka version: 2.8.4
JDK version: 11
Sample project: here
Thanks for such a complete question. This is a known issue in Spring for Apache Kafka 2.8.4, due to the new feature that combines blocking and non-blocking exceptions, and it has been fixed for 2.8.5.
The workaround is to clear the blocking exceptions mechanism such as:
@Bean(name = RetryTopicInternalBeanNames.LISTENER_CONTAINER_FACTORY_CONFIGURER_NAME)
public ListenerContainerFactoryConfigurer lcfc(KafkaConsumerBackoffManager kafkaConsumerBackoffManager,
        DeadLetterPublishingRecovererFactory deadLetterPublishingRecovererFactory,
        @Qualifier(RetryTopicInternalBeanNames
                .INTERNAL_BACKOFF_CLOCK_BEAN_NAME) Clock clock) {
    ListenerContainerFactoryConfigurer lcfc = new ListenerContainerFactoryConfigurer(
            kafkaConsumerBackoffManager, deadLetterPublishingRecovererFactory, clock);
    lcfc.setBlockingRetriesBackOff(new FixedBackOff(0, 0));
    lcfc.setErrorHandlerCustomizer(eh -> ((DefaultErrorHandler) eh).setClassifications(Collections.emptyMap(), true));
    return lcfc;
}
Please let me know if that works for you.
Thanks.
EDIT:
This workaround disables only blocking retries, which since 2.8.4 can be used alongside non-blocking ones, as per the link in the original answer. The exception classification for the non-blocking retries is in the DefaultDestinationTopicResolver class, and you can set FATAL exceptions as documented here.
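The classification idea referred to here — a map from exception type to retryable-or-fatal that the error handling consults before deciding between a retry topic and the DLT — can be sketched independently of Spring. The class and method names below are invented for the illustration; they mirror the concept, not Spring Kafka's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative classifier: decides whether a failed record should be
// retried on a retry topic or sent straight to the dead-letter topic.
class ExceptionClassifier {
    private final Map<Class<? extends Throwable>, Boolean> classifications = new HashMap<>();
    private final boolean defaultRetryable;

    ExceptionClassifier(boolean defaultRetryable) {
        this.defaultRetryable = defaultRetryable;
    }

    void addFatal(Class<? extends Throwable> type) {
        classifications.put(type, false); // never retry: go to the DLT
    }

    boolean isRetryable(Throwable t) {
        return classifications.getOrDefault(t.getClass(), defaultRetryable);
    }
}

public class Main {
    public static void main(String[] args) {
        ExceptionClassifier classifier = new ExceptionClassifier(true);
        // A deserialization-style failure will never succeed on retry.
        classifier.addFatal(IllegalArgumentException.class);

        System.out.println(classifier.isRetryable(new RuntimeException("transient")));
        System.out.println(classifier.isRetryable(new IllegalArgumentException("bad payload")));
    }
}
```

Marking an exception FATAL in this sense is what routes it past the retry topics entirely, which is why clearing the classifications map in the workaround above silences the blocking-retry path.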
EDIT: Alternatively, you can use the Spring Kafka 2.8.5-SNAPSHOT version by adding the Spring Snapshot repository such as:
repositories {
    maven {
        url 'https://repo.spring.io/snapshot'
    }
}
dependencies {
    implementation 'org.springframework.kafka:spring-kafka:2.8.5-SNAPSHOT'
}
You can also downgrade to Spring Kafka 2.8.3.
As Gary Russell pointed out, if your application is already in production you should not use the SNAPSHOT version; 2.8.5 is due out in a couple of weeks.
EDIT 2: Glad to hear you’re happy about the feature!

LoadBalancingChannel exception on DocumentClient.Dispose()

We have hit an issue where, after using CosmosDb, an exception occurs if we try to dispose of the DocumentClient shortly afterwards. Waiting a few seconds before disposing causes no exceptions. We have confirmed that we use await with every asynchronous call.
Pseudo-code:
using (DocumentClient documentClient = new DocumentClient(...params)) {
    IOrderedQueryable<T> query = documentClient.CreateDocumentQuery<T>(...params);
    IList<T> documents;
    using (IDocumentQuery<T> documentQuery = query.AsDocumentQuery()) {
        documents = (await documentQuery.ExecuteNextAsync<T>()).ToList();
    }
    // Processing...
}
The exception states:
LoadBalancingChannel rntbd://[ip].documents.azure.com:[port]/ in use
The API that makes the call successfully returns before DocumentClient.Dispose is called (all of the documents are correctly returned).
Has anyone seen this exception before? A search revealed no hits.
It can happen if the DocumentClient is disposed while requests are still pending. This was addressed in SDK version 2.2.2; please upgrade to the latest SDK version.
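The underlying hazard — tearing a client down while asynchronous work is still in flight — is not Cosmos-specific. A plain Java sketch of the safe pattern (drain pending work, then shut down) using an ExecutorService as a stand-in for the SDK's connection pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        // Stands in for the client's internal channel/thread pool.
        ExecutorService client = Executors.newFixedThreadPool(2);

        Future<String> pending = client.submit(() -> {
            Thread.sleep(100); // simulated in-flight request
            return "document";
        });

        // Unsafe: calling client.shutdownNow() here could interrupt the
        // in-flight call — the analogue of disposing while requests are pending.
        // Safe: wait for the pending work first, then shut down in order.
        String result = pending.get(); // the analogue of awaiting every call
        client.shutdown();
        client.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println(result + " / terminated=" + client.isTerminated());
    }
}
```

The question's report that waiting a few seconds avoids the exception fits this picture: the delay lets background work drain before disposal, which the fixed SDK handles internally.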

How to use the proxy for API call

I'm using System.Net.Http.HttpClient to call some API.
It works correctly in UWP.
It fails in WASM with the error: "Operation is not supported on this platform."
The stack trace shows that it is System.Net.WebProxy.CreateDefaultProxy() that fails.
What is the most universal way to do API call?
Currently, the best way to handle this is to set the default handler to Uno's WasmHttpHandler, as follows:
var httpMessageHandler = Type
    .GetType("System.Net.Http.HttpClient, System.Net.Http")
    .GetField("GetHttpMessageHandler",
        BindingFlags.Static |
        BindingFlags.NonPublic
    );
httpMessageHandler.SetValue(
    null,
    (Func<HttpMessageHandler>)(() => new Uno.UI.Wasm.WasmHttpHandler())
);
Note that this does not override the default HttpHandler behavior, which means that if you use it explicitly, you'll get the same error.
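The trick the snippet relies on — locating a private static factory field by name and swapping in your own delegate via reflection — can be shown in plain Java. The Config class and its handlerFactory field below are invented for the sketch; they stand in for HttpClient and its non-public handler factory.

```java
import java.lang.reflect.Field;
import java.util.function.Supplier;

// Invented stand-in for a class holding a private static handler factory.
class Config {
    private static Supplier<String> handlerFactory = () -> "DefaultHandler";

    static String createHandler() {
        return handlerFactory.get();
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        // Locate the private static field by name, as the C# snippet does
        // with GetField(..., BindingFlags.Static | BindingFlags.NonPublic).
        Field field = Config.class.getDeclaredField("handlerFactory");
        field.setAccessible(true);

        // Swap in our own factory. The target instance is null because
        // the field is static — the analogue of SetValue(null, ...).
        field.set(null, (Supplier<String>) () -> "WasmHandler");

        System.out.println(Config.createHandler());
    }
}
```

As with the C# version, the swap only affects callers that go through the factory field; anything that constructs the original handler directly is unaffected, which is why the explicit-use caveat above still applies.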

Why is my signalR proxy info not generated correctly?

OK, so I'm using the latest version of SignalR (2.10.0 at this moment in time) and I am adding a new hub to my application. I don't want to use it yet, nor do I intend to use it in its own right; I simply want to use this hub to template some basic CRUD stuff so I don't have to rewrite it all the time.
Adding this hub to my project causes SignalR to fail and not correctly generate the proxy code on the client.
Can anyone explain why this would happen?
Here's my code ...
Existing hub:
public class NotificationHub : Hub { ... }
New hub:
public class Hub<T> : Hub { ... }
Client side code that breaks by adding new hub ...
$(function () {
    var notifications = $.connection.notificationHub;
    notifications.client.success = function (message) // <-- JS exception here
    { ... };
});
The JS exception reads "Cannot read property 'client' of undefined".
EDIT:
Looking closer at the details in Chrome's dev tools, I can see that the client-side error is due to: "GET http://localhost/signalr/hubs 500 (Internal Server Error)"
The question is, how do I fix this, given that attaching the debugger does not let me handle or see where this error occurs?
I'm guessing this is in the SignalR code somewhere.
Is there an issue with generics in SignalR?
Ah, OK ...
Since I never have any intention of actually instantiating Hub<T>, only its subtypes, I made it abstract.
This appears to resolve the problem.
public abstract class Hub<T> : Hub { ... }
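A likely explanation for why abstract helps: the hub discovery presumably registers every concrete Hub subclass it finds and tries to generate a client proxy for each, and an open generic hub breaks that generation; marking it abstract excludes it from the scan. That filtering idea can be sketched in Java reflection terms — all class names below are invented, and this mirrors the concept, not SignalR's actual hub locator.

```java
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

class Hub {}                                  // stands in for SignalR's Hub base class
class NotificationHub extends Hub {}          // concrete: gets a proxy
abstract class GenericHub<T> extends Hub {}   // abstract open generic: skipped

public class Main {
    // Sketch of a discovery pass: keep only concrete, non-generic hubs.
    static List<String> discoverHubs(Class<?>... candidates) {
        List<String> proxies = new ArrayList<>();
        for (Class<?> c : candidates) {
            boolean concrete = !Modifier.isAbstract(c.getModifiers());
            boolean nonGeneric = c.getTypeParameters().length == 0;
            if (Hub.class.isAssignableFrom(c) && concrete && nonGeneric) {
                proxies.add(c.getSimpleName());
            }
        }
        return proxies;
    }

    public static void main(String[] args) {
        System.out.println(discoverHubs(NotificationHub.class, GenericHub.class));
    }
}
```

Under this reading, the original non-abstract Hub<T> was being picked up by discovery, which is what turned /signalr/hubs into a 500 for every hub, including the previously working NotificationHub.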

Flex/AS3: why do I fail to receive the READY event of a module?

The following is my code snippet:
var depTree:IModuleInfo = ModuleManager.getModule('modules/depTree.swf');
if (!depTree.loaded) {
    depTree.addEventListener(ModuleEvent.ERROR, onModuleError);
    depTree.addEventListener(ModuleEvent.PROGRESS, onModuleProgress);
    depTree.addEventListener(ModuleEvent.SETUP, onModuleSetup);
    depTree.addEventListener(ModuleEvent.READY, onDepTreeModuleReady);
    depTree.load();
}
private function onDepTreeModuleReady(event:ModuleEvent):void {
    logger.debug("depTree module was ready");
    var moduleInfo:IModuleInfo = event.currentTarget as IModuleInfo;
    Panel(component).addChild(moduleInfo.factory.create() as Module);
}
When I run my application, I get the "[SWF] modules/depTree.swf - 336,967 bytes after decompression" message, so I'm sure the depTree module was loaded; depTree.ready is also true.
But the onDepTreeModuleReady function never seems to be invoked; I don't get the debug message inside it, and the application's UI does not change.
It's an SDK bug; move to Flex SDK 3.4.
