Node.js Application Insights string property suddenly limited to 1024 - azure-application-insights

For more than 1.5 years I have used trackEvent in the Application Insights Node.js package with a property carrying various payload text.
Until now, I was able to send large payloads (I have sent strings longer than 53k characters many times).
Suddenly, starting on the 18th of October 2018, all strings longer than 1024 chars are truncated.
I looked into the GitHub repo and the new releases and I can't tell whether this is expected or not.
AI "version": "1.0.6"
I suspect that version 1.0.6 introduced this new limitation?
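For reference, the call looks roughly like this (a minimal sketch; the instrumentation key and event name are placeholders):
const appInsights = require("applicationinsights");
// Placeholder instrumentation key.
appInsights.setup("<instrumentation-key>").start();
// A property value well above 1024 characters; until now it arrived
// intact, but with 1.0.6 it shows up truncated to 1024 characters.
const payload = "x".repeat(53000);
appInsights.defaultClient.trackEvent({
    name: "myEvent",
    properties: { payload: payload }
});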

Yes, this is new in 1.0.6 and appears to be a bug.
Context:
The Application Insights SDK for Node had always intended to truncate custom dimensions to 1024 chars, but this logic was broken in some cases. As part of the changes in 1.0.6 to support nested objects in custom dimensions, the truncation logic was fixed.
We perform this truncation because your telemetry has a chance of being dropped entirely by the Application Insights backend if custom dimensions are longer than the limit specified in the schema. However, the limit in the schema is 8192 rather than 1024.
I've opened a bug to track fixing this: https://github.com/Microsoft/ApplicationInsights-node.js/issues/444
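Until that fix ships, one possible workaround (just a sketch, not an official recommendation) is to split long values across several custom properties, each under the 1024-character limit, and reassemble them at query time:
// Split a long string across payload_0, payload_1, ... properties so
// each chunk stays under the SDK's 1024-character truncation limit.
function trackLargeEvent(client, name, value, chunkSize) {
    chunkSize = chunkSize || 1024;
    const properties = {};
    for (let i = 0; i * chunkSize < value.length; i++) {
        properties["payload_" + i] = value.substr(i * chunkSize, chunkSize);
    }
    client.trackEvent({ name: name, properties: properties });
}
trackLargeEvent(appInsights.defaultClient, "myEvent", "x".repeat(53000));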

Related

How to connect an Adobe Captivate xAPI course with YetAnalytics or an LRS (Learning Record Store)?

I am trying to connect my Adobe Captivate xAPI course to the LRS (YetAnalytics). I have very little information as to what I should add in this code in tc-config.js in the course files:
// Pre-configured LRSes that should receive data, added to what is included
// in the URL and/or passed to the constructor function.
//
// An array of objects where each object may have the following properties:
//
// endpoint: (including trailing slash '/')
// auth:
// allowFail: (boolean, default true)
// version: (string, defaults to high version supported by TinCanJS)
//
TC_RECORD_STORES = [
    {
        endpoint: "",
        auth: "",
        allowFail: true,
        version: ""
    }
];
Generally you should avoid using that functionality. That code is leveraged by an underlying library in Captivate (Rustici Driver) for packages with a tincan.xml file. That package will be launched with an LRS endpoint and authentication credential, which is where it will send the statements that it generates. Generally it is a much better idea to send all statements to that configured LRS and then figure out a way to get those statements either forwarded from or pulled from that LRS into your additional LRS(s).
This is for two main reasons. First, using this functionality forces you to hard-code a credential into the package, which makes it insecure and indistinguishable during requests; this is generally just bad. Second, there is little to no error handling around calls that leverage this functionality: if you set allowFail to false, exceptions will go uncaptured and the content will likely behave in strange ways (or break completely); if you set allowFail to true, you will have no recourse when a call fails and you potentially will not know that you've lost data.
(Unfortunately, I know this because I implemented the functionality originally a very long time ago before fully understanding all of the ramifications.)
But just so I've answered your actual question, if you wish to not heed my advice, then the values that should go there will be passed through to the constructor for a TinCan.LRS object which is documented here: http://rusticisoftware.github.io/TinCanJS/doc/api/latest/classes/TinCan.LRS.html
The auth is the trickiest one: it should be a full Authorization header value as needed to connect to the LRS, very often a Basic Auth header.
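If you do go that route, a Basic Auth header value can be built like this (just a sketch; the endpoint, key, and secret are placeholders for the values issued by your LRS):
// The key and secret are placeholders, not real credentials.
var key = "<lrs-key>";
var secret = "<lrs-secret>";
TC_RECORD_STORES = [
    {
        endpoint: "https://lrs.example.com/xapi/", // trailing slash required
        auth: "Basic " + btoa(key + ":" + secret), // full Authorization header value
        allowFail: true,
        version: "1.0.1"
    }
];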

XSD for Decoding an X12 830 00200 in Logic Apps

I am looking for the XSD to use to support a Decoding action in Logic Apps for the X12 830 00200. This was approved by ANSI in 1986 (pre-ASC), but is still widely used by Ford. I understand the same XSD would be used in a BizTalk Server solution. Does anyone have one to share?
I have tried the download item MicrosoftEdiXSDTemplates.zip as part of Microsoft Azure BizTalk Services SDK Setup:
https://www.microsoft.com/en-us/download/details.aspx?id=39087
However that only goes back to 00204, which I tried unsuccessfully adapting.
I would rather not do this as a Flat File Decode, as I want all X12 830 processing in my Logic Apps solution to have a consistent, Agreement-based configuration.
I have sample EDI, drawn from the real-world.
I will be using Ford's specs for the v002001FORD 830O to validate any schema I obtain or create: https://www.gsec.ford.com/GEC/edispecs/830.pdf
** UPDATE **
Thanks all for the help. It turns out that on the MS side, the Kusto log analytics trace of my run-time activity shows explicit duplicate schema references in my Agreement, while my run-time exception from Logic Apps does not clearly indicate that a duplicate schema issue is present: 'The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement.' So, there was nothing wrong with my schema. I just had to tweak my Agreement configuration. I am reporting this to MS and hope the schema validation in the Agreement and/or the exception reporting will be improved.
To me a broader issue is that the X12 schemas provided are the ASC-issued ones: 02000, 03000, 04000, etc., the same ones prevented from being shared on GitHub due to copyright issues. The reason I believe I keep running into older, ANSI-issued specs, still in use by Ford, Toyota, etc. despite their age, is that the same copyright issues encourage OEMs' continued use of those specs. For that reason, it would be a big help to the community if MS provided the XSDs for the ANSI-issued X12 specs as is done for the ASC-issued ones. For each ASC-issued spec, such as 04000, there are many documents: 830, 856, etc. This multiplies out to scores if not hundreds of hand-crafted XSDs one may need to produce (as in our case) to implement broad X12 support in Logic Apps.
The process with outlier EDI Schemas is to find the closest one and modify it to support the version you need.
What do you mean by 'unsuccessfully adapting'? This is not an uncommon thing.
Since the spec is so old, one thing I would very much consider is bumping the interchanges up to a 'current' :) version, even just 00204. I'm not sure the specific value 00200 will work with BizTalk EDI.
You would use a custom Pipeline Component for the incoming and should be able to use the EDI.Override properties on outbound.

Using the Nexus3 API, how do I get a list of artifacts in a repository?

We are migrating from Nexus Repository Manager 2.1.4 to Nexus 3.1.0-04. With version 2 we have been able to use the API to get a list of artifacts by repository; however, we are struggling to find a way to do this with the Nexus 3 API.
Having read https://books.sonatype.com/nexus-book/reference3/scripting.html chapter 16, we have been able to get artifact information for a specific blob using a Groovy script like:
import org.sonatype.nexus.blobstore.api.BlobId

// Look up a single blob by id in the "default" blob store and
// return its headers and metrics.
def properties = blobStore.blobStoreManager.get("default").get(new BlobId("7f6379d32f8dd78f98b5b181166703b6")).getProperties()
return [headers: properties.headers, metrics: properties.metrics]
However we can't find a way to iterate over the contents of a blob store. We can get a blob store object:
blobStore.blobStoreManager.get("default")
however the API does not appear to give us a way to get a list of all blobs within that store. We need to get a list of the blobIDs within a blob store.
Is there a way to do this via the Nexus 3 API?
One of our internal team members put this together. It doesn't use the blobStore but, I believe, accomplishes what you are trying to do (and a bit more): https://gist.github.com/kellyrob99/2d1483828c5de0e41732327ded3ab224
For some background, think of a blobStore as just where we store the bits, with no information about them. OrientDB has Component/Asset records and stores all the info about them. You'll generally want to use that instead of the blobStore for Asset information as a result.
Once your migration is done, it may be worth investigating an upgrade of your Nexus version.
That way, you will be able to use the new (still in beta) API for Nexus. It's available by default on version 3.3.0 and later: http://localhost:8082/swagger-ui/
Basically, you retrieve the JSON output from this URL: http://localhost:8082/service/siesta/rest/beta/assets?repositoryId=YOURREPO
Only 10 records will be displayed at a time, and you will have to use the continuationToken provided to request the next 10 records for your repository by calling: http://localhost:8082/service/siesta/rest/beta/assets?continuationToken=46525652a978be9a87aa345bdb627d12&repositoryId=YOURREPO
More information here: http://blog.sonatype.com/nexus-repository-new-beta-rest-api-for-content
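Putting that together, here is a rough sketch of paging through every asset in a repository from Node.js (the host, port, and repository name come from the URLs above; I'm assuming the response is JSON with items and continuationToken fields, per the blog post):
const http = require("http");

// Fetch a URL and parse the JSON body.
function getJson(url) {
    return new Promise(function (resolve, reject) {
        http.get(url, function (res) {
            let body = "";
            res.on("data", function (chunk) { body += chunk; });
            res.on("end", function () { resolve(JSON.parse(body)); });
        }).on("error", reject);
    });
}

// Page through all assets of a repository, 10 records at a time,
// following the continuationToken until it comes back empty.
async function listAssets(repository) {
    const base = "http://localhost:8082/service/siesta/rest/beta/assets";
    let assets = [];
    let token = null;
    do {
        let url = base + "?repositoryId=" + repository;
        if (token) {
            url += "&continuationToken=" + token;
        }
        const page = await getJson(url);
        assets = assets.concat(page.items);
        token = page.continuationToken; // null/absent on the last page
    } while (token);
    return assets;
}

listAssets("YOURREPO").then(function (assets) {
    console.log("Found " + assets.length + " assets");
});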

KryoException: Buffer too small

I'm using Gremlin 3.0.2 together with Titan 1.0.0.
The request I send to the Gremlin Server will return a list of nodes and their properties. Effectively, it's a list of items like the following:
[coverurl:[https://lh3.googleusercontent.com/RYb-duneinq8ClWVLVKknkIx1jOKm64LjreziFApEnkKME8j9tHNDRi9NMA6PK4PTXO7], appname:[Slack], pkgid:[com.Slack]]
In one case, a request will return 38 items like the one above and everything is fine. In another case, the list would contain 56 of these items and I get the following exception:
WARN org.apache.tinkerpop.gremlin.driver.MessageSerializer - Response [PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)] could not be deserialized by org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0.
ERROR org.apache.tinkerpop.gremlin.driver.Handler$GremlinResponseHandler - Could not process the response
io.netty.handler.codec.DecoderException: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: org.apache.tinkerpop.shaded.kryo.KryoException: Buffer too small: capacity: 0, required: 1
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
This problem was discussed here last year, but for different versions of Titan and for writing data to Titan, rather than reading as is the case here.
I don't see any programmatic way to adapt the buffer size of the (de)serializer, so what is the preferred way to deal with this problem? Also, setting some limit (which? where?) to some higher value can only be a temporary solution, since I never know how much data a request will return.
Anyway, the amount of data I receive is fairly small (probably a little more than 8500 bytes). I'm surprised that this exception is thrown at all.
Titan 1.0 is based on TinkerPop 3.0.1... are you building Titan on your own?
TINKERPOP-817 introduces a fix that allows a bufferSize parameter to be configured. As Stephen mentioned in the comments:
the kryo buffer size was defaulted to 4096 and would thus throw that
"Buffer too small" exception
The fix went into TinkerPop 3.0.2 and is documented here.
In order to use this, you'll need to upgrade your Titan Server to run with TinkerPop 3.0.2, and it would be best to recompile from source after modifying the tinkerpop.version in the Titan pom.xml. Find the Titan build directions here. Alternatively, you could consider building the titan11 branch for the latest available fixes and TinkerPop 3.1.1 (Hadoop 2 support!).
Next, you will need to configure the bufferSize on the appropriate serializer in the gremlin-server.yaml configuration. I do not think you can fix this problem with a client configuration only.
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { bufferSize: 8192, useMapperFromGraph: graph }} # application/vnd.gremlin-v1.0+gryo
Since you have a Java client and you're expecting to work directly with the Vertex objects, perhaps you could consider doing a direct connection to Titan and avoid this serialization completely.

OData error: "A value without a type name was found and no expected type is available." when calling Azure Active Directory Graph API

Let's see if you experts have a clue of what's going on here.
Context
We have a web application running on Azure Web Sites. This WebApp uses OWIN + OpenID Connect to authenticate users against an Azure Active Directory tenant. Also the application uses the Azure AD Graph API to collect some data of the directory.
We based our code on this sample project provided in GitHub: https://github.com/AzureADSamples/WebApp-GraphAPI-DotNet
Issue
The WebApp was working perfectly some hours ago (authenticating to the AD and fetching data from the directory), but then the weirdest thing happened to us. Today we found that we could still authenticate against the AD but the Graph API was throwing errors almost randomly.
We traced the error down to a specific request, when trying to get a specific user by ObjectId in a synchronous way:
Claim claimObject = ClaimsPrincipal.Current.FindFirst(Helper.Constants.ADTenant.ObjectIdClaimType);
string userObjectID = claimObject == null ? string.Empty : claimObject.Value;
ActiveDirectoryClient client = AuthenticationHelper.GetActiveDirectoryClient();
List<IUser> users = client.Users.Where(u => u.ObjectId == userObjectID).ExecuteAsync()
.Result.CurrentPage.ToList();
The thing is the last line throws an exception regarding the OData model:
"A value without a type name was found and no expected type is available. When the model is specified, each value in the payload must have a type which can be either specified in the payload, explicitly by the caller or implicitly inferred from the parent value."
We started slicing the last line of code into pieces as follows:
IReadOnlyQueryableSet<IUser> queryUsers = client.Users.Where(u => u.ObjectId == userObjectID);
IPagedCollection<IUser> pagedUserCollection = queryUsers.ExecuteAsync().Result;
List<IUser> users = pagedUserCollection.CurrentPage.ToList();
And found that the exception was thrown by this line:
IPagedCollection<IUser> pagedUserCollection = queryUsers.ExecuteAsync().Result;
The weirdest thing is that this line was executing fine yesterday and today it started failing without explanation.
Does anyone know what we are doing wrong? Why did it start failing today?
Remarks
We are using api-version=2013-11-8. We kept the Azure AD Graph API Client Library on version 1.0, as in the sample on GitHub.
Folks,
First of all - many apologies for introducing this problem. The underlying problem is that an entity (the User entity in this case) was updated on the service side, with a new collection (AlternativeSignInNamesInfo). Typically, adding new entities, properties, collections and complex types should not cause a breaking change for the client library. However, due to an issue in ODatalib, unknown collections are not simply ignored.
I totally agree with the sentiment on this, and we absolutely do NOT want to have apps that take a dependency on the Graph Client Library be subject to ANY outages. We are working with the ODatalib team to get this issue rectified, so that this is no longer a problem with our Graph client library moving forward.
In the meantime we are in the process of rolling back our Graph service, so that 2.0.5 should start to work again. Version 2.0.6 should also work - as long as you don't try and post to the new collection on the User object (AlternativeSignInNamesInfo).
UPDATE: The Graph service has been rolled back. I’ve also verified that getting a user through Graph Client Library 2.0.5 AND 2.0.6 both work.
Hope this helps and again sorry for any issues caused here.
I had the same problem just now! I have an application which has been working for a couple of weeks and hasn't been changed. I've got it working by upgrading "Microsoft.Azure.ActiveDirectory.GraphClient" from version 2.0.5 to 2.0.6.
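For anyone else hitting this, the upgrade is just a NuGet package update, e.g. from the Package Manager Console:
Install-Package Microsoft.Azure.ActiveDirectory.GraphClient -Version 2.0.6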
Yes, upgrading the Graph client NuGet package to the latest 2.0.6 fixed this problem. I had a similar panic this morning too. It's an unbelievable fact that Microsoft rolled out a new version of the DLL which breaks applications running on the previous version!
My team had a similar experience. After installing 2.0.6 our code started working again. It took the entire day to first discover, then fix, and then test the solution.
