What are the different ways to get access to data from Progress procedures in the form of JSON, other than creating PASOE instances, ODBC, or JDBC? If I want to build an API that communicates with a Progress 4GL DB, what options are available besides the ones I mentioned above? To give an example, I have a front-end application built on JavaScript/Angular/ASP.NET Core, and I want to make calls to a Progress DB; how do I achieve that? It would be helpful to know about any recent technologies that can be integrated to communicate with a Progress 4GL DB.
If you run the old-school AppServer or WebSpeed, you can set up a web service that way.
For .NET (or Java) you can check the Open Client: https://docs.progress.com/bundle/openedge-open-clients/page/Introduction-to-Open-Clients.html
You could also develop some kind of server operating on a socket, but I think sticking to tested techniques such as the ones you mention not wanting to use is the winning bet. Whatever you save in money by not licensing PAS (if money is the issue) will be lost in time spent developing that server.
I'm writing a REST API using ASP.NET Core and Microsoft SQL Server. One of my requirements is that clients will POST certain data to this API, and the API will have to transform/process the data in some way before it is used or read. It turns out this processing is costly, so I'm thinking of doing it asynchronously in the background, without blocking the POST request. I'm considering doing the processing:
In a scheduled SQL job
Using a separate Windows service running in the background that reads from the DB, does the processing, and writes back to it. I presume it'll be slower than the SQL job, but the code will be more readable.
Using Hangfire. Never used it. Not sure how well it works.
What are the best options for this? Are there any best practices around this kind of thing?
Boilerplate (a minimal sketch follows this list):
Store the data somewhere (RDBMS, NoSQL, etc.)
Respond to the user that the data has been scheduled for processing
Run a worker or pool of workers for job processing
Store the result somewhere
Notify the client that the background job is complete (this could be just a GET /jobs/{id} endpoint the client can poll)
Show the result
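Here is a minimal ASP.NET Core sketch of the store-and-respond and status-check steps above. The `JobRequest`, `Job`, and `IJobStore` types are hypothetical placeholders for whatever persistence you choose:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record JobRequest(string Payload);
public record Job(Guid Id, string State, string? Result);

// Hypothetical persistence abstraction (RDBMS, NoSQL, etc.).
public interface IJobStore
{
    Task<Guid> SaveAsync(JobRequest request);
    Task<Job?> FindAsync(Guid id);
}

[ApiController]
[Route("jobs")]
public class JobsController : ControllerBase
{
    private readonly IJobStore _store;
    public JobsController(IJobStore store) => _store = store;

    [HttpPost]
    public async Task<IActionResult> Submit([FromBody] JobRequest request)
    {
        // Store the raw data and respond immediately with 202 Accepted.
        var jobId = await _store.SaveAsync(request);
        return AcceptedAtAction(nameof(Status), new { id = jobId }, new { id = jobId });
    }

    [HttpGet("{id}")]
    public async Task<IActionResult> Status(Guid id)
    {
        // The client polls here until a worker has stored the result.
        var job = await _store.FindAsync(id);
        if (job is null) return NotFound();
        return Ok(job);
    }
}
```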
You can use your own daemon, process, or script. If that's not enough and you need more features, use Hangfire, which looks solid.
I have been using Hangfire in production for almost 3 years, and yes, it is a great option: a retry policy out of the box, a UI dashboard. But there are extra options, such as the following (a rough Hangfire sketch follows this list):
Serverless (Azure Functions, AWS Lambda)
AWS SQS or Azure Queue Storage + hosted services (see the docs)
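For reference, a rough sketch of Hangfire wiring in ASP.NET Core. The connection-string name, `Processor`, and `ProcessData` are hypothetical; `AddHangfire`, `AddHangfireServer`, and `BackgroundJob.Enqueue` are Hangfire's standard entry points:

```csharp
using Hangfire;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Persist job state in SQL Server; the "Default" connection name is an assumption.
builder.Services.AddHangfire(cfg =>
    cfg.UseSqlServerStorage(builder.Configuration.GetConnectionString("Default")));
builder.Services.AddHangfireServer(); // runs the background workers in-process

var app = builder.Build();

app.MapPost("/jobs", (JobRequest request) =>
{
    // Enqueue the costly transformation; Hangfire persists it and retries on failure.
    var jobId = BackgroundJob.Enqueue(() => Processor.ProcessData(request.Payload));
    return Results.Accepted($"/jobs/{jobId}", new { jobId });
});

app.Run();

public record JobRequest(string Payload);

public static class Processor
{
    // Hypothetical costly transform; runs on a Hangfire worker thread.
    public static void ProcessData(string payload) { }
}
```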
Another option I've found is to implement IHostedService, a built-in interface in ASP.NET Core. See this page for details.
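A rough sketch of that approach, using the `BackgroundService` base class that implements `IHostedService` (the DB-polling details are hypothetical):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class ProcessingService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Hypothetical: read pending rows from the DB, transform them,
            // and write the results back.
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

// Registration: builder.Services.AddHostedService<ProcessingService>();
```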
I have a store manager dashboard that makes use of the Microsoft Dynamics AX database. To avoid writing a lot of code, I plan on using the CRT (Commerce Runtime), which would give me some form of abstraction and also save me the time of writing a lot of code with other integration methods like AIF and the .NET Business Connector.
But my concern is that the description of the CRT says it makes use of the CRT channel database.
Will it have the amount of data that the AX database has, and is it the right way to go forward when you have to make use of the Dynamics AX database (the central DB) that holds all the data?
See this overview of the Commerce Runtime Architecture.
If the dashboard can use the services of the CRT, then use that.
The CRT database is not the AX database; it contains a subset of the AX data and is asynchronously updated using a one- or two-way sync, depending on the data.
You will have to decide whether this is okay for your application.
I am in the beginning stages of writing an AngularJS client that talks to a RESTful ASP.NET Web API server, and I am trying to integrate Breeze. I have full control over both client and server code, but the one non-negotiable is that I have to connect to a DBISAM database that I'm sharing with a legacy Windows desktop app, so I cannot take advantage of the Entity Framework that most Breeze server examples use. I've successfully retrieved data by setting up a controller similar to the one in the NoDB example and am now trying to figure out the best way to get real data from my database. Also, I am able to get data from the DB using the ODBC connection, but I'm just not sure where that fits in with the Breeze way of doing things.
Given all of that, here are my specific questions:
Are there any Breeze server examples showing how to retrieve/save data using a database connected via ODBC that I have somehow overlooked?
Will I need to create an adapter to make this work? And if so, is the MongoDB adapter the closest thing to use as an example of what that code should look like?
Without Entity Framework, is it still better to return the metadata from the server, or should I create it on the client instead?
I understood the documentation to say that it is easier to have the server be Breeze "aware", but is that still true when needing to use ODBC? Perhaps I should just use Breeze on the client side instead, similar to the Edmunds example?
Thanks for any help with figuring out the best way to proceed!
I need to evaluate the number of concurrent users our website can handle. The website is a single-page application built on the .NET Framework, with Durandal.js on the front end. We use SignalR (hubs) for real-time communication between server and client.
The only option I see is "browser testing": each test would run a browser instance (or use PhantomJS, etc.) to keep a real-time connection with the server, as in real usage. Are there any other options besides tests that use a browser instance to emulate user behaviour? What is the best way to emulate a load of, e.g., 1000 concurrent users?
I've found several cloud services that support this kind of load testing, e.g. Load Impact and BlazeMeter. It would be great if someone could share their experience of using such tools.
SignalR provides a tool called Crank, which can be used to test how many connections a given machine can handle.
More info: http://www.asp.net/signalr/overview/performance/signalr-connection-density-testing-with-crank
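From memory of the linked article, an invocation looks roughly like this (the flag names are an assumption; check Crank's own help output before relying on them):

```
crank /Url:http://your-server/signalr /Connections:1000
```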
Make your own script to create virtual users! That is the most effective way to recreate real-world load/stress. Use the Akka actor model (for creating the virtual users) with the Java SignalR client. (If you want, you can use the Gatling tool as a framework and attach your script, written in Java or Scala, to Gatling's virtual users.)
Make the script dynamic by storing user info (authentication tokens or user credentials) in an XML document.
Please comment with questions; I can guide you end to end, as I have built and deployed such a tool.
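The answer above suggests the Java client with Akka; as a rough illustration of the same idea in C#, here is a sketch using the classic Microsoft.AspNet.SignalR.Client package (the URL, hub name, and hub method are assumptions):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class VirtualUsers
{
    static async Task Main()
    {
        // Spin up 1000 virtual users, each holding a live hub connection.
        var users = Enumerable.Range(0, 1000).Select(async i =>
        {
            var connection = new HubConnection("http://your-server/"); // assumed URL
            var hub = connection.CreateHubProxy("ChatHub");            // assumed hub name
            await connection.Start();
            // A realistic script would now replay recorded user behaviour.
            await hub.Invoke("Send", $"user-{i}", "hello");            // assumed hub method
        });
        await Task.WhenAll(users);
        Console.WriteLine("All virtual users connected.");
    }
}
```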
I am soon to embark on a medium-scale project. Although this isn't a very high priority on my large list of things to do, I have been thinking about how I could effectively handle data concurrency.
I will be using a stateless EJB backend for my Flex application.
Ideally, I am looking for a simple method to deal with data concurrency, e.g. if data is saved in one interface it is refreshed in another, or the user is warned that the data has changed before a new version of the data is saved.
Does anyone have any ideas? I am at a loss at the moment. As I mentioned, it's not a high priority, but I would feel a lot better if I had some mechanism to improve the process.
If you are planning on using AMF channels for communication, you can use the long-polling feature to effectively give your application "push message" type support. Both the BlazeDS and GraniteDS data services support this capability, for exactly the reasons you mentioned.
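For reference, a rough sketch of what a long-polling AMF channel definition looks like in BlazeDS's services-config.xml (the ids, URL pattern, and timeout values here are assumptions; consult the BlazeDS documentation for your version):

```xml
<!-- Hypothetical long-polling channel; ids and URLs are placeholders. -->
<channel-definition id="my-longpolling-amf" class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amflongpolling"
              class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <polling-enabled>true</polling-enabled>
        <polling-interval-seconds>0</polling-interval-seconds>
        <!-- Hold each poll open for up to 60s so the server can push. -->
        <wait-interval-millis>60000</wait-interval-millis>
    </properties>
</channel-definition>
```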
Version control systems store a user_id and datetime for every revision. You can use the same method: the client app gets the current datetime for the requested data and saves it; when it sends changed data, it includes the saved datetime; the server checks the datetime of the last revision against the received datetime and replies to the app accordingly.
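The backend here is EJB, but the idea is language-agnostic; a rough sketch in C# (all type and member names are hypothetical):

```csharp
using System;

// Hypothetical record carrying the datetime of its last saved revision.
public record Item(int Id, string Data, DateTime LastModifiedUtc);

public class SaveConflictException : Exception { }

public static class OptimisticSave
{
    // clientReadUtc is the datetime the client saved when it fetched the data.
    public static void Save(Item current, Item incoming, DateTime clientReadUtc)
    {
        // Someone else saved after this client last read: reject the write so
        // the client can refresh and warn the user instead of overwriting.
        if (current.LastModifiedUtc > clientReadUtc)
            throw new SaveConflictException();

        // ...otherwise persist `incoming` with a fresh LastModifiedUtc...
    }
}
```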
The second method is using broadcast messages from the server to the clients, but I don't think it's applicable in your case. That method is usually put into practice on a LAN (an environment with a stable connection).