How to solve the error `Exception in thread "main" java.lang.NoSuchMethodError` in NebulaGraph Exchange? - nebula-graph

When I submit the task in NebulaGraph Exchange, I get the following error:
Exception in thread "main" java.lang.NoSuchMethodError

Most such errors are caused by JAR package conflicts or version conflicts. Check whether the versions of the services reporting the error match those used by Exchange, especially Spark, Scala, and Hive.
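When two versions of the same library end up on the classpath, it can help to check which JAR a suspect class is actually loaded from. A minimal sketch (the class name to pass in is whatever appears in the NoSuchMethodError; `java.lang.String` below is just a stand-in):

```java
// Print the location a class was loaded from. Run it with the same
// classpath the failing job uses to see which JAR "wins" for a
// conflicting class.
public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        // Replace with the class named in the NoSuchMethodError.
        String className = args.length > 0 ? args[0] : "java.lang.String";
        Class<?> c = Class.forName(className);
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println(className + " loaded from: "
                + (src == null ? "bootstrap class loader" : src.getLocation()));
    }
}
```

Classes from the JDK itself report no code source because they come from the bootstrap class loader; application classes report the JAR they were resolved from.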

Related

Am I missing something when using MassTransit and AmazonSQS in a large project?

I'm using MassTransit in a project with AmazonSQS, and since I updated the packages to the latest version, 7.3, I'm getting this exception:
---> Amazon.SimpleNotificationService.AmazonSimpleNotificationServiceException: Rate exceeded
---> Amazon.Runtime.Internal.HttpErrorResponseException: Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown.
Sometimes the exception comes from SQS. The thing is, when I was working with version 6 I didn't have these exceptions.
This solution has three projects:
Two web applications (which produce the messages)
A BackgroundService (which receives and processes the messages)
I designed this system using the CQRS pattern with several commands, and for that reason it creates 100 topics. I don't know whether I need to consider limits on either the AWS or the MassTransit side.
Can someone help me? Thanks
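The usual remedy for transient "Rate exceeded" throttling is to retry the failing call with exponential backoff and jitter. The pattern can be sketched generically; this is a standalone illustration, not MassTransit's or the AWS SDK's own retry API, and the attempt count and base delay are arbitrary:

```java
import java.util.Random;
import java.util.concurrent.Callable;

// Generic retry with exponential backoff and jitter -- the standard
// remedy for transient throttling errors such as "Rate exceeded".
public class Backoff {
    public static <T> T withRetries(Callable<T> op, int maxAttempts,
                                    long baseDelayMs) throws Exception {
        Random rng = new Random();
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // give up after the last attempt
                }
                // Exponential backoff: base * 2^(attempt-1), plus jitter.
                long delay = baseDelayMs * (1L << (attempt - 1));
                Thread.sleep(delay + rng.nextInt((int) baseDelayMs));
            }
        }
    }
}
```

In practice the messaging library or the AWS SDK usually offers configurable retry behavior, which is preferable to hand-rolling it; the sketch only shows the shape of the technique.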

A call to PerformanceCounterCategory.Exists() throws System.PlatformNotSupportedException in docker

For some reason, at runtime, I would get exceptions like:
System.AggregateException: One or more errors occurred. (Performance Counters are not supported on this platform.) ---> System.PlatformNotSupportedException: Performance Counters are not supported on this platform.
at System.Diagnostics.PerformanceCounterCategory.Exists(String categoryName)
But according to the System.Diagnostics.PerformanceCounterCategory documentation, the library is available for .NET Core 2.2.
The documentation can be found here: https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.performancecountercategory.exists?view=netcore-2.2

Google Dataflow writing insufficient data to datastore

One of my batch jobs failed tonight with a runtime exception. It writes data to Datastore, like 200 other jobs that ran tonight. This one failed with a very long list of causes; the root of it should be this:
Caused by: com.google.datastore.v1.client.DatastoreException: I/O error, code=UNAVAILABLE
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:126)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:95)
at com.google.datastore.v1.client.Datastore.commit(Datastore.java:84)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.flushBatch(DatastoreV1.java:925)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.processElement(DatastoreV1.java:892)
Caused by: java.io.IOException: insufficient data written
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.close(HttpURLConnection.java:3501)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:81)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:87)
at com.google.datastore.v1.client.Datastore.commit(Datastore.java:84)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.flushBatch(DatastoreV1.java:925)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.processElement(DatastoreV1.java:892)
at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:139)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:188)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
at com.google.cloud.dataflow.sdk.runners.
How can this happen? It's very similar to all the other jobs I run. I am using Dataflow version 1.9.0 and the standard DatastoreIO.v1().write....
The jobIds with this error message:
2017-08-29_17_05_19-6961364220840664744
2017-08-29_16_40_46-15665765683196208095
Is it possible to retrieve the errors/logs of a job from an outside application (not the Cloud Console), so that jobs can be restarted automatically when they would usually succeed but fail for temporary reasons such as quota issues?
Thanks in advance
This is most likely because DatastoreIO is trying to write more mutations in one RPC call than the Datastore RPC size limit allows. This is data-dependent; presumably the data for this job differs somewhat from the data for the other jobs. In any case, this issue was fixed in 2.1.0, so updating your SDK should help.
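The fix amounts to splitting a commit into batches so that no single RPC exceeds the size limit. The idea can be sketched generically; the size callback and cap below are illustrative assumptions, not the SDK's actual constants:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToIntFunction;

// Split a list of mutations into batches whose accumulated size stays
// under a per-RPC cap, so no single commit exceeds the limit.
public class SizeBoundedBatcher {
    public static <T> List<List<T>> batch(List<T> items,
                                          ToIntFunction<T> sizeOf,
                                          int maxBatchBytes) {
        List<List<T>> batches = new ArrayList<>();
        List<T> current = new ArrayList<>();
        int currentBytes = 0;
        for (T item : items) {
            int size = sizeOf.applyAsInt(item);
            // Start a new batch if adding this item would exceed the cap.
            if (!current.isEmpty() && currentBytes + size > maxBatchBytes) {
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(item);
            currentBytes += size;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

An oversized single item still forms its own batch here; a real implementation would also have to decide how to handle an individual mutation that exceeds the cap on its own.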

Connection Managers don't work once I change the Protection Level

I'm currently using SSIS 2008, and my package's ProtectionLevel is set to EncryptSensitiveWithUserKey.
I recently tried changing it to DontSaveSensitive,
but my package fails after making that change with the following error:
SSIS package "ImportModerators.dtsx" starting.
Information: 0x4004300A at Data Flow Task, SSIS.Pipeline: Validation phase is beginning.
Error: 0xC020801C at Data Flow Task, Add new Moderators [422]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "StagingDatabase" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
Error: 0xC0047017 at Data Flow Task, SSIS.Pipeline: component "Add new Moderators" (422) failed validation and returned error code 0xC020801C.
Error: 0xC004700C at Data Flow Task, SSIS.Pipeline: One or more component failed validation.
Error: 0xC0024107 at Data Flow Task: There were errors during task validation.
SSIS package "ImportModerators.dtsx" finished: Success.
If I've changed the protection level (and I'm the one who created the package), shouldn't this package be able to run?
If the connection does not use Windows authentication, you will need to set up package configurations for each connection so SSIS has somewhere to pull credentials from.

LogisticsEntityContactInfoView (table) has no valid runnable code in method 'entityType'

Error executing code: LogisticsEntityContactInfoView (table) has no valid runable code in method 'entityType'.
Stack trace
(S)\Data Dictionary\Views\LogisticsEntityContactInfoView\Methods\entityType
(S)\Classes\xApplication\dbSynchronize
(S)\Classes\Application\dbSynchronize - line 22
I encounter this error when I run the Synchronize database task.
How can I solve this error?
Thanks,
Eric
Have you fully compiled your AX 2012 installation?
If you installed a model other than the Foundation models, you must complete this task. If you do not complete this task, you will encounter errors when you run the Synchronize database task.
And remember....
Depending on your hardware, compilation can take an hour or more. It is critical to let compilation run until it is complete.