Able Commerce POS Data Merge

We are building an AbleCommerce 7 web store and trying to integrate it with an existing point-of-sale (POS) system. The product inventory will be shared between a physical store and the web store, so we will need to periodically update the quantity on hand for each product to keep the POS and the web store as close to in sync as possible and avoid overselling product in either location. The POS system does have a scheduled export that runs every hour.
My question is: has anyone had any experience synchronizing data with an AbleCommerce 7 web store, and would you have any advice on an approach?
Here are the approaches that we are currently considering:
1. Grab exported product data from the POS system and determine which products need to be updated. Make calls to a custom-built web service residing on the server with AbleCommerce to call AbleCommerce APIs and update the web store appropriately (see the sketch after this list).
2. AbleCommerce does have a Data Port utility that can import/export web store data via the AbleCommerce XML format. This would provide all of the merging logic, but there doesn't appear to be a way to programmatically kick off the merge process. Their utility is a compiled Windows application, and there is no command-line interface that we are aware of. The Data Port utility calls an ASHX handler on the server.
3. Take an approach similar to #1 above, but attempt to use the Data Port ASHX handler to update the products instead of using our own custom web service. Currently there is no documentation for interfacing with the ASHX handler that we are aware of.
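For illustration, approach #1 might be as small as a single web method like the sketch below. Everything here is a placeholder (the service name, the UpdateQuantity method, and the ProductVariants table/column names), not actual AbleCommerce 7 APIs; in practice the body would call whatever inventory API AbleCommerce exposes rather than raw SQL.

```csharp
// Hypothetical ASMX-style web service for approach #1: the POS-side job
// posts (sku, qty) pairs here after parsing the hourly export.
// NOTE: all names below are assumptions for illustration.
using System.Data.SqlClient;
using System.Web.Services;

[WebService(Namespace = "http://example.com/inventorysync/")]
public class InventorySyncService : WebService
{
    [WebMethod]
    public int UpdateQuantity(string sku, int quantityOnHand)
    {
        using (var conn = new SqlConnection("...AbleCommerce db connection string..."))
        using (var cmd = new SqlCommand(
            "UPDATE ProductVariants SET InStock = @qty WHERE Sku = @sku", conn))
        {
            cmd.Parameters.AddWithValue("@qty", quantityOnHand);
            cmd.Parameters.AddWithValue("@sku", sku);
            conn.Open();
            return cmd.ExecuteNonQuery(); // rows affected (0 = unknown SKU)
        }
    }
}
```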
Thanks,
Brian

We've set this up between AbleCommerce and a MAS system. We entered the products into the AbleCommerce system and then created a process to push the inventory, price, and cost information from the MAS system into the ProductVariants table.
The one issue we ran into is that no records exist in the ProductVariants table until you make a change to the variant's data. So we had to write a stored procedure to automatically populate the ProductVariants table so that we could do the sync.
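A rough C# equivalent of what that stored procedure has to do is sketched below: insert a default row for every product that has no variant record yet. The column names (ProductId, Sku, InStock) are assumptions; check the actual AbleCommerce schema before using anything like this.

```csharp
// Sketch: create missing ProductVariants rows so the sync has a target row
// for every product. Table/column names are illustrative assumptions.
using System.Data.SqlClient;

public static class VariantSeeder
{
    const string Sql = @"
        INSERT INTO ProductVariants (ProductId, Sku, InStock)
        SELECT p.ProductId, p.Sku, 0
        FROM Products p
        WHERE NOT EXISTS (SELECT 1 FROM ProductVariants v
                          WHERE v.ProductId = p.ProductId);";

    public static int EnsureVariantRows(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(Sql, conn))
        {
            conn.Open();
            return cmd.ExecuteNonQuery(); // number of variant rows created
        }
    }
}
```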

I've done this with POS software. It wasn't AbleCommerce, but retail sales and POS software is generic enough (no vendor wants to tell prospects that "you need to operate differently") that it might work.
Sales -> Inventory
Figure out how to tap into the Data Port for near-real-time sales info. I fed this to a Message-Queue-By-DBMS-Table mechanism that was polled and flushed every 30 seconds to update inventory. There are several threads here that discuss MQ via dbms tables.
Inventory -> Sales
Usually there is a little more slack here - otherwise you get into interesting issues about QC inspection failures, in-transit, quantity validation at receiving, etc. But however it's done, you will have a mechanism for events occurring as new on-hand inventory becomes available. Just do the reverse of the first process. A QOH change event causes a message to be queued for a near-real-time polling app to update the POS.
I actually used a single queue table in MSSQL with a column for messagetype and XML for the message payload.
It ends up being simpler than the description might sound. Let me know if you want info offline.
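To make the single-queue-table idea concrete, here is a minimal sketch of the polling side, assuming a table like MessageQueue(Id, MessageType, Payload XML, ProcessedAt). The table name, columns, and batch size are my assumptions; the 30-second poll interval is from the description above.

```csharp
// Message-queue-by-DBMS-table poller: atomically claim unprocessed rows
// and dispatch them by message type. All names are illustrative.
using System;
using System.Data.SqlClient;
using System.Threading;

class QueuePoller
{
    static void Main()
    {
        var connStr = "...queue db connection string...";
        while (true)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(@"
                UPDATE TOP (10) MessageQueue WITH (ROWLOCK, READPAST)
                SET ProcessedAt = GETUTCDATE()
                OUTPUT inserted.MessageType,
                       CAST(inserted.Payload AS NVARCHAR(MAX))
                WHERE ProcessedAt IS NULL;", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Dispatch(reader.GetString(0), reader.GetString(1));
            }
            Thread.Sleep(TimeSpan.FromSeconds(30)); // poll/flush interval
        }
    }

    static void Dispatch(string messageType, string xmlPayload)
    {
        // e.g. "SALE"       -> decrement web-store quantity on hand
        //      "QOH_CHANGE" -> push new quantity back to the POS
    }
}
```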

Related

.Net core microservices query from many services

I have a .NET Core 5 microservices project. The client has a search module which will query data from many objects, and these objects live in many services:
The first microservice is for products; it has a table product {productid, productname}.
The second microservice is for vendors; it has a table account {vendorId, vendorName}.
The third microservice is for purchases; it has a table purchase {title, productid (this id comes from the product table in the first microservice), accountid (this id comes from the account table in the second microservice)}.
Now: the user wants to search for purchases where the product name is like "clothes" and the vendor name is like "x".
How can I do this query with the microservices pattern?
Taking a monolithic system and slapping service interfaces on top of each table will just make your life hell, as you will begin to implement the database logic yourself - like in this question, where you are trying to recreate database joins.
Putting aside that your partitioning into services doesn't sound right (or needed) for this case: considering that purchases describe events that happened (even if they are later deleted), they can capture a state of the product or the vendor. So you can augment the data in the purchases to represent the product and vendor data that were correct at the time of the purchase. This will isolate purchase queries to the purchase service and has the added benefit of preserving history as products and vendors evolve over time.
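For illustration, the augmented purchase record might look like the sketch below: the purchase service stores a snapshot of the product and vendor fields at purchase time, so the search never needs a cross-service join. All names here are illustrative.

```csharp
using System;

// Denormalized purchase: snapshots of product/vendor data taken at the
// time of purchase live inside the purchase service's own store.
public class Purchase
{
    public Guid   PurchaseId  { get; set; }
    public string Title       { get; set; }

    // Snapshot of the product as it was when the purchase happened
    public int    ProductId   { get; set; }
    public string ProductName { get; set; }

    // Snapshot of the vendor as it was when the purchase happened
    public int    VendorId    { get; set; }
    public string VendorName  { get; set; }
}

// The search then stays entirely inside the purchase service, e.g.:
// var results = db.Purchases
//     .Where(p => p.ProductName.Contains("clothes")
//              && p.VendorName.Contains("x"))
//     .ToList();
```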
Considering this case, you would have to build at least three queue managers in your message queue service integration: for Purchase, Vendor, and Products.
Then set up the request and reply queues.
The request and reply data can be in either JSON or XML, however you like.
You need to create a listener which is responsible for listening to the reply queues; you can create something like SignalR streaming to listen continuously.
Once all the integration is completed, you can directly inject the result into your client application.
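A minimal sketch of the "listener streaming replies to the client" part, using an ASP.NET Core SignalR streaming hub: IReplyQueueReader is a hypothetical abstraction over whatever MQ client you use, and the queue name is illustrative.

```csharp
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical wrapper around your message queue client.
public interface IReplyQueueReader
{
    // Returns the next reply message (JSON or XML), or null if none yet.
    string TryDequeue(string queueName);
}

public class SearchHub : Hub
{
    private readonly IReplyQueueReader _replies;
    public SearchHub(IReplyQueueReader replies) => _replies = replies;

    // JS client: connection.stream("StreamReplies", "purchase-replies")
    public async IAsyncEnumerable<string> StreamReplies(
        string queueName,
        [EnumeratorCancellation] CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            var msg = _replies.TryDequeue(queueName);
            if (msg != null)
                yield return msg;                         // push to client
            else
                await Task.Delay(200, cancellationToken); // idle backoff
        }
    }
}
```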

Do I need to create a new SQLite database every time an application is updated?

I have a Xamarin Forms application I would like to develop. It will have a SQLite database and I wish to make this available on iOS and Android. The database will be populated with data from a SQL Server database on the cloud with initial seed data. I'm thinking this will be about 500 rows of data with each row about 1Kb.
What I don't understand is when and how to populate this. Should I put the data into a CSV file and have it populate the database when the application is installed, or when it first starts? What's the normal way to populate seed data, other than a huge number of insert statements inline in the code?
Any help or advice on how this is normally done (I'm thinking most people do it the same way) would be much appreciated.
Thanks
Let's break the problem down.
Is the initial data that you wish to use in your app going to change over time?
If you include any pre-populated data (a SQLite, Realm, or CSV-based file, ...) and the data that you are including goes stale and you have to update it on a routine basis, you will need to publish an application update (.apk/.ipa) so your new user installs receive the updated data (more on this below).
Note: This assumes that your current users get the updated data by actually running your app, and that it handles local data updates on a routine basis (background service, push notifications, data polling, etc.).
Is this a Line of Business (LoB) application published via Ad-Hoc, private Store, and/or iOS Enterprise publishing?
If you control the user base, then having to force an update install so your users get your new/updated pre-populated data might be an acceptable approach, but it's not a great user experience if they are forced to update the application all the time... but it works...
Is this application going to be distributed via the public Apple and Google App Stores?
This is where you need to be very careful on what pre-populated data you include within your application.
If the data goes stale and you need to push an updated app version to the Stores for your new installs, beware that it could be days (or weeks, or even a month+) before that new app is in the store.
The Play Store usually takes less than 24 hours to publish app updates, and while the Apple Store can be the same, do not bet on it.
We routinely see 48-72 hour delays, and apps randomly get rejected, so it can take a week or more to get an updated app into the Apple Store. We have had rejections delay an app update for over a month, gone into the appeal process, and even removed already-existing features to get re-published.
Note: Every app update to the Apple store resets your user reviews... :-(
Bottom line: you want to publish to the Stores when you are bug fixing and/or adding features, not to update some "static" data that is stored within your app bundle...
What does this data cost your end-user and you?
Negative costs to you as an app developer are bad reviews and uninstalls. Look at how this "data" affects the end-user's access to your application and how they react. Longer download time: usually acceptable. Longer initial app startup times: less acceptable... etc.
What markets will your app be used in? Network speeds and the cost of data transfer in many markets across the world are slow and costly...
What really is the true size of the data?
I "pre-populate" a Realm data instance with thousand of rows with 5MB of JSON data in under a second. SQLite takes longer, but it is still not bad. The data itself is stored in a zip and accessed as a static file (https-based get) and at a 80% compression factor, the 1MB of compressed data is pulled from a server (AWS S3) in under one second using LTE cellular data speeds and uncompressing it as stream while deserializing the JSON on-fly to update the Realm instance adds another second...
So, the user impact is very small and I "hide" this initial pre-populate update via a first-time welcome screen and some text that the user hopefully reads before getting to the first "real" app screen...
Note: This does assume that the user will have network data access the first time they open the app... In many markets around the world, this is not true, so factor this into your app design.
I also architect the app so its data can be updated on background threads during launch (the initial one or not), so the user does not stand there watching a spinning busy indicator; they can at least interact with the data that they do have.
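A minimal sketch of that download-and-populate flow, using gzip + JSON + sqlite-net on a background-friendly async path. The URL, the SeedItem type, and the gzip choice (the answer above used a zip) are my assumptions for illustration.

```csharp
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;
using SQLite;

public class SeedItem
{
    [PrimaryKey] public int Id { get; set; }
    public string Name { get; set; }
}

public static class SeedLoader
{
    public static async Task PopulateAsync(string dbPath)
    {
        using (var http = new HttpClient())
        using (var raw = await http.GetStreamAsync("https://example.com/seed.json.gz"))
        using (var gzip = new GZipStream(raw, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip))
        {
            // Decompress as a stream, deserialize, then bulk-insert in one txn.
            var items = JsonConvert.DeserializeObject<List<SeedItem>>(
                await reader.ReadToEndAsync());

            using (var db = new SQLiteConnection(dbPath))
            {
                db.CreateTable<SeedItem>();
                db.RunInTransaction(() => db.InsertAll(items));
            }
        }
    }
}
```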
So should you include any pre-populated data in your app bundle?
Sure, when that data is absolutely required to get the user up and running as fast as possible and to enhance the user experience. Games are a great example of this, bundling hundreds of megabytes or even gigabytes (via .obb) of levels, media files, etc. into the app so the user does not experience a 10+ minute wait upon opening the app the first time.
Now, while this does mean the initial download time for the install was longer because that data was bundled within the app, the overall user experience was better: users accept download/install times and view them as a carrier/phone/service plan issue, versus the time it takes your app to reach a functional screen on first open.
So what should you do?
Personally, I look at this issue on a case-by-case basis. I look at the data, and if it is not going to change, only get added to, and possibly be pruned over time, I include it as a pre-populated SQLite or Realm store. Why make the user wait for web requests and database updates, and pay the additional network data usage and associated costs? If the data is going to go stale, do not bundle it in your app.
As for the mechanics of installing pre-populated data:
See my answer on this SO Question about "Bundle prebuilt Realm files"
You don't have to create your SQLite database every time the app is updated.
Actually, SQLiteOpenHelper provides the following two methods:
OnCreate(): you should implement this method and create your SQLite database, populating it with data from the server. It is called when the database is created for the first time.
OnUpgrade(): you should implement this method if you want to modify the database (add a new table or a column to a table) or populate additional data when the database version changes.
The database is preserved between app updates, so you don't need to create it each time.
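For illustration, a minimal Xamarin.Android sketch of those two callbacks (the table names and the version constant are made up):

```csharp
using Android.Content;
using Android.Database.Sqlite;

public class AppDbHelper : SQLiteOpenHelper
{
    const string DbName = "app.db";
    const int DbVersion = 2; // bump this to trigger OnUpgrade after an update

    public AppDbHelper(Context context)
        : base(context, DbName, null, DbVersion) { }

    public override void OnCreate(SQLiteDatabase db)
    {
        // First run only: create the schema and load seed data here.
        db.ExecSQL("CREATE TABLE Item (Id INTEGER PRIMARY KEY, Name TEXT)");
        db.ExecSQL("INSERT INTO Item (Name) VALUES ('seed row')");
    }

    public override void OnUpgrade(SQLiteDatabase db, int oldVersion, int newVersion)
    {
        // Runs after an app update when DbVersion increased; migrate in place.
        if (oldVersion < 2)
            db.ExecSQL("ALTER TABLE Item ADD COLUMN Price REAL DEFAULT 0");
    }
}
```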
Check the following examples, which explain how to use a SQLite database with Xamarin:
Using Sqlite in a Xamarin.Android Application Developed using Visual Studio
and
An Introduction to Xamarin.Forms and SQLite

Access BizTalk Orchestration Instances from database

Can I access persisted data of running orchestration Instances from the BizTalk database?
My BizTalk application deals with long-running processes and hundreds of orchestration instances can be running at a time.
I would like to access the data persisted by these orchestration instances and display it on my application's UI. The data would give an insight into how many instances are running and what state each of them is in.
EDIT :
Let me try to be a little more specific.
My BizTalk application gets ticket requests (messages) from a source, and after checking some business rules they are assigned to different departments of the company. The tickets can hop between the inboxes of different departments as each department completes its processing.
Now, the BizTalk orchestration instances maintain all the information about which department owns a particular ticket at a given time. I want to read this orchestration information and generate an inbox for each department at runtime. I could definitely do this by pushing the information to a separate database and populating the UI from there, BUT since all this useful information is already available in the form of orchestration instances, I would like to utilize it and avoid any syncing issues.
Does it make any sense?
The answer to your specific question is NO.
BAM exists for this purpose exactly.
Yes, it is doable, though your question is a little confusing. You can't get the data persisted for your orchestration instance; however, you can get the number of running or dehydrated instances using various options like WMI or the ExplorerOM library. As a starting point, you can look at some samples provided as part of the BizTalk installation under the SDK\Samples\Admin folder. You should also look at the MSBTS_ServiceInstance WMI class to get the service instances; there is a sample at http://msdn.microsoft.com/en-us/library/aa561219.aspx. You can also use PowerShell to perform the same operation.
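A small sketch of the WMI route, querying the MSBTS_ServiceInstance class mentioned above (requires a reference to System.Management and rights on the BizTalk server; the ServiceClass filter value is my assumption for orchestrations, so verify it against the class documentation):

```csharp
using System;
using System.Management;

class InstanceMonitor
{
    static void Main()
    {
        var scope = new ManagementScope(@"root\MicrosoftBizTalkServer");
        var query = new ObjectQuery(
            "SELECT * FROM MSBTS_ServiceInstance WHERE ServiceClass = 1");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject instance in searcher.Get())
            {
                // ServiceStatus codes (see the MSBTS_ServiceInstance docs),
                // e.g. 2 = Active, 8 = Dehydrated, 4 = Suspended (resumable).
                Console.WriteLine("{0} status={1}",
                    instance["InstanceID"], instance["ServiceStatus"]);
            }
        }
    }
}
```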

Searching BizTalk MessageBox?

I need to make a tool to search XML data in the BizTalk MessageBox.
How do I search all XML data related to, let's say, a common node called Employee ID across all the data stored in the BizTalk MessageBox?
The BizTalk Message Box (BizTalkMsgBoxDb database) is a transient store for messages as they pass through BizTalk. Once a message has finished processing, it will be removed from the Message Box.
You probably want to research Business Activity Monitoring (BAM) which will allow you to capture message data as messages flow through BizTalk; message data can be exposed through its generic web-based portal. BAM is a big product in its own right and I would suggest that you invest time in researching all of the available features to find the one that suits your particular scenario. There are many, many resources available, however you might start by taking look at Business Activity Monitoring. There is also a very good book specifically on BAM: Pro BAM in BizTalk Server 2009
Alternatively, take a look at using the built-in BizTalk Administration Console tools for querying the Tracking database (BizTalkDTADb) which will hold messages for later reference based on your pre-defined configuration options. See Using BizTalk Document Tracking.
Finally, you could consider rolling your own message tracking solution, writing message contents to a SQL database table as messages are received in a pipeline, for example.
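A heavily simplified sketch of that roll-your-own idea: a pipeline component that copies the message body into a SQL table. A real component must also implement IBaseComponent, IPersistPropertyBag, and IComponentUI; error handling is omitted, and the table/column names are illustrative.

```csharp
using System.Data.SqlClient;
using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class TrackingComponent : IComponent
{
    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Read the body, then rewind so downstream components still see it.
        Stream body = pInMsg.BodyPart.GetOriginalDataStream();
        string content = new StreamReader(body).ReadToEnd();
        if (body.CanSeek) body.Position = 0;

        using (var conn = new SqlConnection("...tracking db connection string..."))
        using (var cmd = new SqlCommand(
            "INSERT INTO MessageLog (ReceivedAt, Body) VALUES (GETUTCDATE(), @body)",
            conn))
        {
            cmd.Parameters.AddWithValue("@body", content);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        return pInMsg; // pass the message through unchanged
    }
}
```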
Check out the BizTalk Message Decompressor on CodePlex! I've been using this tool for a couple of years with excellent results. Since you're hitting the MessageBox directly, you should be very careful and very familiar with the queries that you choose to execute.
As noted in a previous poster's answer, BAM and the integrated HAT queries in the admin console are the official, safest, Microsoft-prescribed answers.

Pattern for long running tasks invoked through ASP.NET

I need to invoke a long running task from an ASP.NET page, and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, but this involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered if it might offer a solution?
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service *or* Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that covers explicitly how to integrate WF (Workflow Foundation) and WCF. It's too much to post here, obviously. I think your question deserves a longer answer than can readily be given in full on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process using MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement - we needed to know if a message was not received immediately so that we could call the store, rather than just letting the store get it when their PC was able to pick up the message from the queue. However, it did work very well.
The article I linked to above is a good place to start.
Our current design, the one that we went live with, does exactly what you asked about, using a Windows service.
We have a web page to enter messages and pick distribution lists; these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store-level PCs have a Windows service that hosts a WCF client that listens for messages (the AlertListener).
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee # and clicks a button to acknowledge that they've received it. (This is a critical business requirement for us, because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages and their status. A rough sketch of the AlertSender loop follows.
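This sketch is my reconstruction, not the poster's actual code: IAlertListener, the endpoint address, and the table/column names are all illustrative. It shows the poll-then-push shape of the AlertSender.

```csharp
using System;
using System.Data.SqlClient;
using System.ServiceModel;
using System.Threading;

[ServiceContract]
public interface IAlertListener
{
    [OperationContract]
    void ShowAlert(int alertId, string message); // display + play alert sound
}

class AlertSender
{
    static void Main()
    {
        // One store endpoint shown; real code iterates the distribution list.
        var factory = new ChannelFactory<IAlertListener>(
            new BasicHttpBinding(),
            new EndpointAddress("http://store-pc:8000/alerts"));

        while (true)
        {
            using (var conn = new SqlConnection("...alerts db connection string..."))
            using (var cmd = new SqlCommand(
                "SELECT AlertId, Body FROM Alerts WHERE Status = 'Pending'", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                    {
                        IAlertListener listener = factory.CreateChannel();
                        listener.ShowAlert(reader.GetInt32(0), reader.GetString(1));
                        // ...then UPDATE Alerts SET Status = 'Sent' WHERE AlertId = @id,
                        // and close/abort the channel in real code.
                    }
            }
            Thread.Sleep(TimeSpan.FromSeconds(10)); // poll interval
        }
    }
}
```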
You could have the back-end import process write status records to the database as it completes sections of the task, and the web app could simply poll the database at arbitrary intervals, updating a progress bar or ticking off tasks as they're completed - whatever is appropriate in the UI.
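A minimal sketch of the polling endpoint for that approach: an ASHX handler the page calls via AJAX, returning the latest status row as JSON. The table, columns, and jobId parameter are illustrative assumptions (and real code should JSON-encode the message text).

```csharp
using System.Data.SqlClient;
using System.Web;

public class ProgressHandler : IHttpHandler
{
    public bool IsReusable => true;

    public void ProcessRequest(HttpContext context)
    {
        string jobId = context.Request.QueryString["jobId"];
        using (var conn = new SqlConnection("...app db connection string..."))
        using (var cmd = new SqlCommand(
            "SELECT PercentDone, LastMessage FROM ImportJobs WHERE JobId = @id",
            conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                context.Response.ContentType = "application/json";
                if (reader.Read())
                    context.Response.Write(
                        "{\"percent\":" + reader.GetInt32(0) +
                        ",\"message\":\"" + reader.GetString(1) + "\"}");
                else
                    context.Response.Write("{\"percent\":0,\"message\":\"queued\"}");
            }
        }
    }
}
```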
