Sylius Grid Bundle over API Response - Symfony

I have several projects (local / Amazon EC2) for various purposes.
The one on EC2 holds the main DB and is accessible through an API, exposing most of my Resources.
I need to display grids on ALL of my instances to interact with these resources.
That's easy on EC2, but I can't find a way to map SyliusGridBundle to an API response.
Is it even possible?

Azure CosmosDB scaling globally

I've seen the videos about Azure CosmosDB scaling around the world by clicking on a map, which is neat. But in those demos, they connect directly to the database from a client.
It's my understanding that allowing a client (like a WPF desktop program) to directly access a database is a bad idea. It should be behind a web API that we control access to.
So for scaling globally, I don't really need the database in lots of areas where users are; I need it in the same datacenter that's hosting the API.
Is this correct?
There is an interesting article in the docs referring to multi-master database architecture that will be useful.
Basically, if you are going to expose a Web API and clients will connect to it instead of the database, you want the Web API as close to the database as possible (that's where global replication comes into play).
To transparently connect the client to the closest API, you can use Traffic Manager's Geographic routing.
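For illustration, a minimal sketch of keeping the API close to the data, using the Cosmos DB .NET SDK (Microsoft.Azure.Cosmos v3; the endpoint, key, and region below are placeholders): each regional deployment of the Web API pins its client to the nearest replica, while Traffic Manager sends users to the nearest API.

using Microsoft.Azure.Cosmos;

// Each regional Web API instance sets ApplicationRegion so the SDK
// serves reads from the closest replica of the globally distributed account.
var client = new CosmosClient(
    accountEndpoint: "https://your-account.documents.azure.com:443/", // placeholder
    authKeyOrResourceToken: "<your-primary-key>",                     // placeholder
    clientOptions: new CosmosClientOptions
    {
        // The region this particular API deployment runs in.
        ApplicationRegion = Regions.WestEurope
    });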

How to access a file from another application's directory on Bluemix?

This is my problem scenario:
1. Create 2 apps.
2. App1 continuously pulls tweets and stores the JSON file in its /data folder.
3. App2 picks up the latest file from the /data folder of App1 and uses it.
I have used R and its corresponding buildpack to deploy the app on Bluemix.
How do I access /data/file1 in App1 from App2? That is, can I do something like this in the App2 source file:
read.csv("App1/data/Filename.csv")
Will Bluemix understand what the App1 folder points to?
Bluemix is a Platform as a Service. This essentially means that there is no filesystem in the traditional sense. Yes, your application "lives" in a file structure on a type of VM, but if you were to restage or redeploy your application at any time, changes to the filesystem would be lost.
The "right" way to handle this data that you have is to store it in a NoSQL database and point each app to this DB. Bluemix offers several options, depending on your needs.
MongoDB is probably one of the easier and more straightforward DBs to use and understand. Cloudant is also very good and robust, but has a slightly higher learning curve.
Once you have this DB set up, you could poll it for new records periodically, or better yet, look into using WebSockets for pushing notifications from one app to the other in real-time.
Either way, click the Catalog link in the Bluemix main navigation and search for either of these services to provision and bind them to your app. You'll then need to reference them via the VCAP_SERVICES environment object, which you can learn more about here.
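For example, a rough sketch of reading those credentials (in C# for illustration, though the apps in the question are in R; "cloudantNoSQLDB" was the usual Bluemix label for a bound Cloudant instance):

using System;
using System.Text.Json;

// VCAP_SERVICES is a JSON object keyed by service label; each value is an
// array of bound service instances, each with a "credentials" object.
string raw = Environment.GetEnvironmentVariable("VCAP_SERVICES")
             ?? throw new InvalidOperationException("VCAP_SERVICES not set");

using JsonDocument doc = JsonDocument.Parse(raw);

JsonElement creds = doc.RootElement
    .GetProperty("cloudantNoSQLDB")[0]   // first bound Cloudant instance
    .GetProperty("credentials");

string url = creds.GetProperty("url").GetString()!;
Console.WriteLine($"Cloudant URL: {url}");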
You can't access files from another app on Bluemix. You should use a database service like Cloudant to store your JSON. Bind the same service to both apps.
Using something like Cloudant or the Object Storage service would be a great way to share data between two apps. You can even bind the same service to 2 apps.
Another solution would be to create a microservice that acts as your persistence layer and stores your data for you. You could then create an API on top of it that both of your apps could call.
As stated above, storing information on disk is not a good idea for a cloud app. Go check out http://12factor.net; it describes the no-no's for writing a true cloud-based app.

Deploying Multiple Web Roles and Worker Roles on a Single Azure Cloud Service

This may not be new, but I hope someone can put me on the right track, as it's a bit confusing during Azure deployment. I'm in the process of planning a deployment on Azure. This is what I have:
1. A public-facing ASP.NET MVC app (web role) + a WCF service (web role) accessible only to this ASP.NET app + a WCF service (worker role), again accessible to 1. over a message queue
2. A custom STS, i.e. an ASP.NET MVC app (web role) acting as an identity provider (for 1., which is a relying party) + a WCF service (web role) to expose some of the STS functionality to RPs such as 1.
3. SQL Azure: accessed by 1 and 2
Note: 1. will eventually grow to become a portal, with multiple WCF services hosted on web and worker roles for both internal and external access.
My question is: if 1. is going to be the app exposed to the public, and 2. is for 1. to federate security (internally), how should I plan my deployment on Azure, keeping in mind that 1. will require scale-out sometime later along with the two WCF services? Do I publish to one cloud service, or how?
My understanding is that a cloud service is a logical container for n web/worker roles.
But when you have 2 web roles, as in this case (both ASP.NET apps), which one becomes the default one?
Best Regards
Satish
By default, all web roles in the solution are public. You can change this by going into the service definition and removing HTTP endpoints if you wish; you can also define internal HTTP endpoints that are only available within the cloud service, with nothing exposed to the load balancer. The advantage of having all web roles in the same project is that it's easy to dynamically inspect the RoleEnvironment from each web role -- in other words, all roles in a solution are "aware" of the other roles and their available ports. It's also easy to deploy one package.
All roles share the same DNS name (.cloudapp.net) (though you could use host headers to differentiate), but they are typically exposed on different ports via the load balancer on your .cloudapp.net service. You can see this when the service is running in the cloud: there are links in the portal pointing to each role that has a public HTTP endpoint, specifying the port. The one on port 80 (as defined by the external HTTP endpoint) is the "default" site.
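As a rough sketch of that role awareness (assuming the classic Microsoft.WindowsAzure.ServiceRuntime API), any role can enumerate the other roles in the deployment and their endpoints:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

// Walk every role, instance, and endpoint visible in this deployment,
// e.g. "WebRole1/WebRole1_IN_0: HttpIn -> 10.0.0.4:80".
foreach (var role in RoleEnvironment.Roles.Values)
{
    foreach (var instance in role.Instances)
    {
        foreach (var endpoint in instance.InstanceEndpoints)
        {
            Console.WriteLine(
                $"{role.Name}/{instance.Id}: {endpoint.Key} -> {endpoint.Value.IPEndpoint}");
        }
    }
}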
You could also create multiple cloud projects and deploy them separately. In this case, each would have its own DNS name, and each is managed separately. Whether this is a good thing or not depends on how tightly coupled the applications are, and on whether you will typically deploy the entire solution or just update individual roles within it. But there's no cost or scalability difference.
If you are going to frequently redeploy only one of the roles, I'd favor breaking them out.
To deploy multiple web roles under the same cloud service, have a look at this:
Best practice for Deployment of Web site into a cloud service
Multiple worker roles will be trickier to implement:
Run multiple WorkerRoles per instance

How to sync data between a company's internal database and an externally hosted application's database

My organisation (a small non-profit) currently has an internal production .NET system with a SQL Server database. The customers (all local to our area) submit requests manually, which our office staff then input into the system.
We are now gearing up for online public access, so that customers will be able to see the status of their existing requests online and, in future, also create new requests online. A new ASP.NET application will be developed for this.
We are trying to decide whether to host this application on-site on our servers (with direct access to the existing database) or to use an external hosting provider.
Hosting externally would mean keeping a copy of the Requests database on the hosting provider's server. What would be the recommended way to keep the requests data synced in real time between the hosted database and our existing production database?
Trying to sync back and forth between two in-use databases will be a constant headache. The question I would have to ask you is: if you have the means to host the application on-site, why wouldn't you go that route?
If you have a good reason not to host on-site but you do have some web infrastructure available to you, you may want to consider creating a web service that provides access to your database via a set of well-defined methods. Or, on the flip side, you could make the remotely hosted database that backs your website your production database and use a web service to access it from your office system.
In either case, providing access to a single database will be much easier than trying to keep two different ones constantly and flawlessly in sync.
If a web service is not practical (or you have concerns about availability), you may want to consider a queuing system for synchronization. Any change to the DB (local or hosted) is also added to a message queue. Each side monitors the queue for changes that need to be made and then applies them. This accounts for one of the databases being unavailable at any given time.
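A bare-bones sketch of that queuing idea (the names are illustrative, and in production the queue would be something durable such as MSMQ or Azure Service Bus rather than an in-memory collection):

using System;
using System.Collections.Concurrent;

// One message per database change, keyed by the request it belongs to.
public record ChangeMessage(Guid RequestId, string Operation, string PayloadJson);

public static class SyncWorker
{
    // Stand-in for a durable queue shared between the two sites.
    public static readonly ConcurrentQueue<ChangeMessage> Queue = new();

    // Each side drains the queue and applies changes to its own database.
    // Keying on RequestId lets the apply step be idempotent, so replaying
    // a message after an outage does not duplicate data.
    public static void Drain(Action<ChangeMessage> applyToLocalDb)
    {
        while (Queue.TryDequeue(out var change))
            applyToLocalDb(change);
    }
}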
That being said, I agree with @LeviBotelho: syncing two DBs is a nightmare and should probably be avoided if you can. If you must, you can also look into SQL Server replication.
Ultimately the data is the same: customer-submitted data. Currently it is entered by customers through you; eventually it will be entered directly by them. I see no need for two different databases with the same data. The replication errors alone, when they pop up (and they will), will be a headache for your team for nothing.

SaaS: one web app to one database VS. many web apps to many databases

I am planning to develop a fairly small SaaS service. Every business client will have an associated database (same schema among clients' databases, different data). In addition, they will have a unique domain pointing to the web app, and here I see these 2 options:
1. The domains will point to a unique web app, which will change the connection string to the proper client's database depending on the domain. (That is, I will need to deploy one web app only.)
2. The domains will point to their own web app, which is really the same web app replicated for every client but with the proper connection string to the client's database. (That is, I will need to deploy many web apps.)
This is for an ASP.NET 4.0 MVC 3.0 web app that will run on IIS 7.0. It will be fairly small, but I do require it to be scalable. Should I go with 1 or 2?
This MSDN article is a great resource that goes into detail about the advantages of three patterns:
Separated DB. Each app instance has its own DB instance. Easier, but can be difficult to administer from a database infrastructure standpoint.
Separated schema. Each app instance shares a DB but is partitioned via schemas. Requires a little more coding work, and mitigates some of the challenges of a totally separate DB, but still has difficulties if you need individual site backup/restore and things like that.
Shared schema. Your app is responsible for partitioning the data based on the app instance. This requires the most work, but is most flexible in terms of management of the data.
In terms of how your app handles it, the DB design will probably determine that. I have in the past done both the separated DB and the shared schema approaches. In the separated DB approach, I usually separate the app instances as well. In the shared schema approach, it's the same app with logic to modify what data is available based on login and/or hostname.
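For option 1 from the question, a minimal sketch of that hostname-based logic (ASP.NET on .NET Framework; the tenant map and connection strings are illustrative -- in practice they would come from configuration or a master database):

using System.Collections.Generic;
using System.Web;

public static class TenantConnections
{
    // Map each client's domain to its database. Illustrative values only.
    private static readonly Dictionary<string, string> Map =
        new Dictionary<string, string>
        {
            ["clienta.example.com"] = "Server=...;Database=ClientA;...",
            ["clientb.example.com"] = "Server=...;Database=ClientB;..."
        };

    // Pick the connection string for the current request's host name.
    public static string ForRequest(HttpRequestBase request)
    {
        string host = request.Url.Host.ToLowerInvariant();
        if (!Map.TryGetValue(host, out string cs))
            throw new KeyNotFoundException("Unknown tenant host: " + host);
        return cs;
    }
}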
I'm not sure this is the answer you're looking for, but there is a third option:
Using a multi-tenant database design. A single database which supports all clients. Your tables would contain composite primary keys.
Scale out when you need to. If your service is small, I wouldn't see any benefit to multiple databases except for assured data security - meaning you'll only bring back query results for the correct client. The costs will be much higher running multiple databases if you're planning on hosting with a cloud service.
If Salesforce can host their SaaS using a multi-tenant design, I would at least consider this a viable option for a small service.
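A small sketch of what that composite-key partitioning looks like in practice (table and column names are illustrative): every table carries the client's ID as part of its primary key, and every query filters on it.

using System.Data.SqlClient;

public static class TenantQueries
{
    // With a composite PK on (TenantId, OrderId), all clients share one
    // Orders table while their rows stay cleanly separated.
    public static SqlCommand OrdersForTenant(SqlConnection conn, int tenantId)
    {
        var cmd = new SqlCommand(
            "SELECT OrderId, Total FROM Orders WHERE TenantId = @tenantId",
            conn);
        cmd.Parameters.AddWithValue("@tenantId", tenantId);
        return cmd;
    }
}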
