From the documentation of 'Upload usage event' (https://westus.dev.cognitive.microsoft.com/docs/services/Recommendations.V4.0/operations/577ec1847270320f24da25b1), it appears we can tie the event to the build ID.
However, from the Cognitive Services Recommendations API 'Upload usage event' behavior, it seems like I need to create a new build for the event to be taken into account.
Is that still true? If that's the case, what is the purpose of sending the build id in the usage event?
The usage events need to be tied to a particular model, not a particular build.
You will notice in the URL the location where the modelId is passed:
https://westus.api.cognitive.microsoft.com/recommendations/v4.0/models/{modelId}/usage/events
That just means that the next build you create for that model will take the newly uploaded usage events into consideration. So, yes -- you do need to create a build after changing catalog and usage-related information for it to be taken into account.
The only reason the buildId is passed as part of the usage event information is that it helps to get conversion metrics for a particular build (i.e., since you would be notifying the system when a purchase actually occurs).
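As a minimal illustration, an upload might look like the C# sketch below. The endpoint shape comes from the URL above, but the body field names (userId, buildId, events, eventType, itemId, timestamp) are approximations of the linked documentation, and the key, model ID and values are placeholders; the API itself has since been deprecated.

// Sketch only: uploads one usage event carrying a buildId for conversion metrics.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class UploadUsageEvent
{
    static async Task Main()
    {
        var client = new HttpClient();
        // Standard Cognitive Services subscription key header.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-subscription-key>");

        string modelId = "<your-model-id>";
        string url = "https://westus.api.cognitive.microsoft.com/recommendations/v4.0/models/"
                     + modelId + "/usage/events";

        // buildId only attributes the event (e.g. a purchase) to the build that
        // produced the recommendation, so conversion metrics can be computed per build.
        string body = "{ \"userId\": \"user-123\", \"buildId\": 1688563, \"events\": ["
                    + "{ \"eventType\": \"Purchase\", \"itemId\": \"item-456\","
                    + "  \"timestamp\": \"2017-08-04T10:00:00\" } ] }";

        HttpResponseMessage response =
            await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}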
NOTE: the Recommendations API will be deprecated on Feb. 15, 2018, so I would advise you to use the Recommendations Solutions Template (http://aka.ms/recopcs) instead.
Thanks!
Luis Cabrera | Program Manager | Recommendations Team
I have a .NET Core app deployed on Azure with Application Insights enabled.
Sometimes the Azure Application Insights end-to-end transaction details do not display all telemetry.
Here it only logs the error and not the request, or maybe the request is logged but the two do not display together (it is difficult to tell because many people use the app).
It should be like:
Sometimes the request is logged but with no error log.
What could be the reason for this happening? Do I need to look into an Application Insights-specific setup or feature?
Edit:
As suggested by people here, I tried disabling the sampling feature, but it still does not work. Here is the open question as well.
This usually happens due to sampling. By default, adaptive sampling is enabled in ApplicationInsights.config, which basically means that only a certain percentage of each telemetry item type (Event, Request, Dependency, Exception, etc.) is sent to Application Insights. In your example, probably one part of the end-to-end transaction got sent to the server and another part got sampled out. If you want, you can turn off sampling for specific types, or completely remove the AdaptiveSamplingTelemetryProcessor from the config, which disables sampling entirely. Bear in mind that this leads to higher ingestion traffic and higher costs.
You can also configure sampling in the code itself, if you prefer.
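For an ASP.NET Core app, a minimal sketch of turning adaptive sampling off in code might look like the following; it assumes the Microsoft.ApplicationInsights.AspNetCore package and a conventional Startup class, so adjust it to your own setup.

// Startup.cs -- sketch: disable adaptive sampling so no telemetry items are dropped.
// Note: sending everything increases ingestion volume and cost.
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        var aiOptions = new ApplicationInsightsServiceOptions
        {
            EnableAdaptiveSampling = false   // send every Request, Exception, Dependency, etc.
        };
        services.AddApplicationInsightsTelemetry(aiOptions);
    }
}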
Please find here a good overview of how sampling works and can be configured.
This may be related to the SDK/agent you use:
When using the 2.x SDK, you have to track the events yourself and send the telemetry to Application Insights.
When using auto-instrumentation with the 3.x agent, the agent automatically collects the traffic, logs, etc., and you have to pay attention to the sampling settings in the applicationinsights.json file, where you can filter the events.
If you are using Java, these are the supported logging libraries:
- java.util.logging
- Log4j, which includes MDC properties
- SLF4J/Logback, which includes MDC properties
I have been trying out Axon recently and have read lots of material. From what I understand, event sourcing says that system state is rebuilt from the event store, while CQRS keeps an updated view model that can be queried, since the command side is not queryable.
I would like to implement rebuilding of state myself each time the UI requests some information. I've implemented the event processor and seen its replay capabilities. However, I can't find any evidence that Axon allows replays triggered on user demand, so I'm asking here whether it is possible to trigger a replay myself to build a DAO required by the UI, or whether Axon only supports the CQRS way of doing it.
When Axon does a replay (when the token is deleted), does it read from the snapshot table (if implemented) and then from the event table, or does it always start from the beginning of time?
Dropping the token manually is currently the way to go if you want your query side to be rebuilt.
So, if you'd like to trigger a replay, you'd have to add a component which does that for you.
This might be a tricky process as well, since you'd probably not want any other parties trying to access your view(s) while they're rebuilding.
Take note that for this you do need to put your event handling components in a TrackingEventProcessor; otherwise it will have no record, in the form of a TrackingToken, of how many of the events it has already handled.
The snapshot table is meant for your aggregates, to ease the loading process. When you've set up Axon to use the (Caching)EventSourcingRepository, it will first load an aggregate based on a snapshot and then source the remaining events.
So, to answer your second question: when you drop the token, it will read from the domain event entry table from the beginning of time.
Hope this helps!
Update
For those looking for replay support in Axon Framework, it is recommended to look at this page of the Reference Guide. It describes the replay support constructed for Streaming Event Processors (like the TrackingEventProcessor).
I need to make a tool to search XML data from the BizTalk MessageBox.
How do I search all XML data related to, let's say, a common node called Employee ID across all the data stored in the BizTalk MessageBox?
The BizTalk Message Box (BizTalkMsgBoxDb database) is a transient store for messages as they pass through BizTalk. Once a message has finished processing, it will be removed from the Message Box.
You probably want to research Business Activity Monitoring (BAM), which will allow you to capture message data as messages flow through BizTalk; message data can be exposed through its generic web-based portal. BAM is a big product in its own right and I would suggest that you invest time in researching all of the available features to find the one that suits your particular scenario. There are many, many resources available; however, you might start by taking a look at Business Activity Monitoring. There is also a very good book specifically on BAM: Pro BAM in BizTalk Server 2009.
Alternatively, take a look at using the built-in BizTalk Administration Console tools for querying the Tracking database (BizTalkDTADb) which will hold messages for later reference based on your pre-defined configuration options. See Using BizTalk Document Tracking.
Finally, you could consider rolling your own message tracking solution, writing message contents to a SQL Database table, as messages are received in a pipeline for example.
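As a rough illustration of that last option, the sketch below shows what the Execute method of a custom receive pipeline component might do. The XPath, connection string, and TrackedMessages table are hypothetical, and a real pipeline component also implements IBaseComponent, IComponentUI and IPersistPropertyBag.

// Sketch only: extracts a hypothetical EmployeeID element and stores it with the message body.
using System;
using System.Data.SqlClient;
using System.IO;
using System.Xml;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class EmployeeIdTrackingComponent
{
    public IBaseMessage Execute(IPipelineContext context, IBaseMessage msg)
    {
        // Load the message body (simplified -- production code should use a
        // seekable/virtual stream so the body can still flow through BizTalk).
        Stream body = msg.BodyPart.GetOriginalDataStream();
        var doc = new XmlDocument();
        doc.Load(body);

        // Hypothetical XPath; adjust to your schema and namespaces.
        XmlNode node = doc.SelectSingleNode("//*[local-name()='EmployeeID']");
        string employeeId = node == null ? null : node.InnerText;

        using (var conn = new SqlConnection("Server=.;Database=MessageTracking;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "INSERT INTO TrackedMessages (EmployeeId, MessageBody) VALUES (@id, @body)", conn);
            cmd.Parameters.AddWithValue("@id", (object)employeeId ?? DBNull.Value);
            cmd.Parameters.AddWithValue("@body", doc.OuterXml);
            cmd.ExecuteNonQuery();
        }

        return msg;
    }
}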
Check out the BizTalk Message Decompressor on CodePlex! I've been using this tool for a couple of years with excellent results. Since you're hitting the messagebox directly, you should be very careful and be very familiar with the queries that you choose to execute.
As noted in a previous poster's answer, BAM and the integrated HAT queries in the admin console are the official, safest, and Microsoft-prescribed approaches.
I have a use case where I need to add information about the user that created the current publish transaction (more than just their user name; I also need group memberships and some other details) and pass it on to a deployer extension.
When publishing, this is relatively easy to do with the following code:
engine.PublishingContext.RenderedItem.AddInstruction(
InstructionScope.Global, instruction);
As you may notice, this method "AddInstruction" is only available on a "RenderedItem", but unpublish actions do not render items, and therefore I cannot use the same technique.
Short of hacking the package manifest in the file system when generating it (for instance, in a custom resolver), how would you tackle this requirement?
Do you have more info on what you need to do with this information in the Deployer? Would it be an option to capture the unpublish action after it happens with an event handler, and then create a second publish action which sends the message to the Deployer with the additional information? (I know that means two round trips, but I can't think of another approach at this point.) Unpublish actions have been a bit tricky ever since R4; back in R3 we actually had code which was executed by templates in the unpublish phase (although it was all Perl back then).
I wonder whether this is a missing extensibility point. After all, I can see why you would want to transmit extra data with an unpublish. So firstly, I'd suggest an enhancement request to have some functionality added to support this use case.
Getting to the point of your question... how to implement something without hacking the package. Perhaps you could make the information available through another mechanism. For example, you could write a web service that runs on the content manager and which serves the data when queried for a given publish transaction ID.
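As a very rough sketch of that idea (all names here are hypothetical, and how you capture the user details in the first place is up to you, e.g. an event handler at publish/unpublish time), a plain HTTP handler hosted on the Content Manager could serve the details keyed by publish transaction ID, and the deployer extension could call it over HTTP.

// Sketch of a handler hosted on the Content Manager. LookupUserDetails is a
// hypothetical helper that reads whatever store you populate with the user's
// name, group memberships, etc. when the transaction is created.
using System.Web;

public class PublishUserInfoHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string transactionId = context.Request.QueryString["transactionId"];
        string json = LookupUserDetails(transactionId);

        context.Response.ContentType = "application/json";
        context.Response.Write(json);
    }

    private string LookupUserDetails(string transactionId)
    {
        // Placeholder: query your own table/cache keyed by the transaction ID.
        return "{ \"user\": \"unknown\", \"groups\": [] }";
    }
}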
We are building an AbleCommerce 7 web store and trying to integrate it with an existing point-of-sale (POS) system. The product inventory will be shared between a physical store and the web store, so we will need to periodically update the quantity on hand for each product to keep the POS and the web store as close to in sync as possible and avoid overselling product in either location. The POS system does have a scheduled export that runs every hour.
My question is: has anyone had any experience synchronizing data with an AbleCommerce 7 web store, and would you have any advice on an approach?
Here are the approaches that we are currently considering:
1. Grab the exported product data from the POS system and determine which products need to be updated. Make calls to a custom-built web service residing on the server with AbleCommerce to call the AbleCommerce APIs and update the web store appropriately.
2. AbleCommerce does have a Data Port utility that can import/export web store data via the AbleCommerce XML format. This would provide all of the merging logic, but there doesn't appear to be a way to programmatically kick off the merge process. The utility is a compiled Windows application; there is no command-line interface that we are aware of. The Data Port utility calls an ASHX handler on the server.
3. Take an approach similar to #1 above, but attempt to use the Data Port ASHX handler to update the products instead of using our own custom web service. Currently there is no documentation for interfacing with the ASHX handler that we are aware of.
Thanks,
Brian
We've set this up between AbleCommerce and an MAS system. We entered the products into the AbleCommerce system and then created a process to push the inventory, price, and cost information from the MAS system into the ProductVariants table.
The one issue we ran into is that no records exist in the ProductVariants table until you make a change to the variants data. So, we had to write a Stored Procedure to automatically populate the ProductVariants table so that we could do the sync.
I've done this with POS software. It wasn't AbleCommerce, but retail sales and POS software is generic enough (no vendor wants to tell prospects that "you need to operate differently") that it might work.
Sales -> Inventory
Figure out how to tap into the Data Port for near-real-time sales info. I fed this to a message-queue-by-DBMS-table mechanism that was polled and flushed every 30 seconds to update inventory. There are several threads here that discuss MQ via DBMS tables.
Inventory -> Sales
Usually there is a little more slack here - otherwise you get into interesting issues about QC inspection failures, in-transit, quantity validation at receiving, etc. But however it's done, you will have a mechanism for events occurring as new on-hand inventory becomes available. Just do the reverse of the first process. A QOH change event causes a message to be queued for a near-real-time polling app to update the POS.
I actually used a single queue table in MSSQL with a column for messagetype and XML for the message payload.
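As a very rough sketch of that queue-table approach, a small poller might look like the following. The table and column names (MessageQueue, MessageType, Payload, Processed) and the connection string are made up, and the payload parsing and the actual AbleCommerce update are only indicated in comments.

// Sketch of the "message queue in a DBMS table" poller: every 30 seconds,
// read pending messages and apply quantity-on-hand changes.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;

class InventoryQueuePoller
{
    const string ConnectionString = "Server=.;Database=Store;Integrated Security=true";

    static void Main()
    {
        while (true)
        {
            ProcessPendingMessages();
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    static void ProcessPendingMessages()
    {
        var pending = new List<Tuple<int, string, string>>();

        using (var conn = new SqlConnection(ConnectionString))
        {
            conn.Open();
            var select = new SqlCommand(
                "SELECT Id, MessageType, Payload FROM MessageQueue WHERE Processed = 0", conn);
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    // MessageType distinguishes e.g. a sale (decrement QOH) from a receipt
                    // (increment QOH); Payload is the XML body with the SKU and quantity delta.
                    pending.Add(Tuple.Create(reader.GetInt32(0), reader.GetString(1), reader.GetString(2)));
                }
            }

            foreach (var message in pending)
            {
                // Parse message.Item3 (the XML payload) and update the product's quantity on hand,
                // e.g. via the store's API or its ProductVariants table, then mark the row handled.
                var update = new SqlCommand("UPDATE MessageQueue SET Processed = 1 WHERE Id = @id", conn);
                update.Parameters.AddWithValue("@id", message.Item1);
                update.ExecuteNonQuery();
            }
        }
    }
}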
It ends up being simpler than the description might sound. Let me know if you want info offline.