AWS DynamoDB + multiple Titan servers: Is this setup possible?

I want to create a web application that uses a graph database hosted on Amazon Web Services (AWS). As far as I understand, to use a graph database with AWS DynamoDB as the storage backend, you need to run a Titan server. Such a server can be set up on an EC2 instance.
Now, to remain scalable, I will eventually want to deploy multiple such instances behind (a couple of) load balancers. The question that arises is:
Can multiple Titan DB instances work with the same, shared storage backend (such as DynamoDB)?

Yes. Titan Server is a Gremlin Server, which is based on Netty. You configure it with a graph properties file that points to your storage backend (DynamoDB) and, optionally, an indexing backend. As long as every Titan Server uses the same graph properties file, the architecture you describe should work.
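For illustration, here is a minimal sketch of such a graph properties file, assuming the DynamoDB storage backend plugin for Titan. The exact property keys depend on the plugin version, so treat the values below as placeholders rather than a definitive configuration:

    # Storage backend class from the DynamoDB adapter (assumed name;
    # check the plugin's documentation for your version)
    storage.backend=com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager
    # Region-specific DynamoDB endpoint (placeholder)
    storage.dynamodb.client.endpoint=https://dynamodb.us-east-1.amazonaws.com
    # Optional indexing backend, e.g. Elasticsearch
    index.search.backend=elasticsearch
    index.search.hostname=127.0.0.1

Deploy this same file with each Titan Server instance behind the load balancer, so every instance resolves to the same DynamoDB tables.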

Related

Flask SQLAlchemy Database with AWS Elastic Beanstalk - waste of time?

I have successfully deployed a Flask application to AWS Elastic Beanstalk. The application uses an SQLAlchemy database, and I am using Flask-Security to handle login/registration, etc. I am using Flask-Migrate to handle database migrations.
The problem here is that whenever I use git aws.push it will push my local database to AWS and overwrite the live one. I guess what I'd like to do is only ever "pull" the live one from AWS EB, and only push in rare circumstances.
Will I be able to access the SQLAlchemy database which I have pushed to AWS? Or, is this not possible? Perhaps there is some combination of .gitignore and .elasticbeanstalk settings which could work?
I am using SQLite.
Yes. Your database should not be in version control; it should live on persistent storage (most likely an Elastic Block Store (EBS) volume), and you should handle schema changes (migrations) using something like Flask-Migrate.
The AWS help article on EBS should get you started, but at a high level, what you are going to do is:
Create an EBS volume
Attach the volume to a running instance
Mount the volume on the instance
Expose the volume to other instances using a Network File System (NFS)
Ensure that when Elastic Beanstalk launches new EC2 instances, they mount the NFS share
Alternatively, you can:
Wait until Elastic File System (EFS) is out of preview (or request preview access) and, once Elastic Beanstalk supports EFS, mount it on all of your EB-launched instances.
Switch to the Relational Database Service (RDS) (or run your own database server on EC2) and run an instance of (PostgreSQL|MySQL|Whatever you choose) locally for testing.
The key is hosting your database outside of your Elastic Beanstalk environment. Otherwise, as the load increases, different instances of your Flask app will each write to their own local DB, and there won't be a single "master" database containing all the commits.
The easiest solution is using the AWS Relational Database Service (RDS) to host your DB as an outside service. A good tutorial that walks through this exact scenario:
Deploying a Flask Application on AWS using Elastic Beanstalk and RDS
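To give a rough sketch of the application side once the database lives in RDS: when an RDS instance is attached to an Elastic Beanstalk environment, EB exposes the connection details as environment variables, so the Flask config can build its connection URI from them instead of pointing at a local SQLite file. A minimal sketch (the driver prefix and fallback path are assumptions):

    import os

    from flask import Flask
    from flask_sqlalchemy import SQLAlchemy
    from flask_migrate import Migrate

    app = Flask(__name__)

    # Elastic Beanstalk injects these variables when an RDS instance
    # is attached to the environment; fall back to SQLite locally.
    if "RDS_HOSTNAME" in os.environ:
        app.config["SQLALCHEMY_DATABASE_URI"] = (
            "mysql+pymysql://{user}:{pw}@{host}:{port}/{db}".format(
                user=os.environ["RDS_USERNAME"],
                pw=os.environ["RDS_PASSWORD"],
                host=os.environ["RDS_HOSTNAME"],
                port=os.environ["RDS_PORT"],
                db=os.environ["RDS_DB_NAME"],
            )
        )
    else:
        app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///local.db"

    db = SQLAlchemy(app)
    migrate = Migrate(app, db)  # schema changes handled via Flask-Migrate

With this split, the live database never travels through git aws.push; schema changes reach production as migrations rather than as a copied database file.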
SQLAlchemy/Flask/AWS is definitely not a waste of time! Good luck.

Transaction, multiple databases, different servers: MSDTC or any other option

I am working on a client-server application that uses LINQ to SQL for database operations. One database is on the server, multiple clients connect to it, and database records are synced as well. When I try to manage a transaction spanning multiple DataContexts and multiple databases, I get an error that the MSDTC service is not configured. Before I go and configure it, I want to ask: is MSDTC the only option? It will not be available in shared hosting, so what type of hosting should I be looking for?
Is there anything else I should check before configuring MSDTC? Is the service required only on the server, or must it be configured on each client machine as well?

Can I setup two Fuseki instances using the same database directory?

I need a public, read-only instance for querying data on port 3030 and a private, read-write instance for adding and updating data on port 3031. Both instances are accessible only through a web server, using distinct domains on port 80. The private instance also requires HTTP user/password authentication.
My question is about concurrency: does Fuseki support concurrent access to a directory database from two server instances?
I found the answer in the Fuseki documentation:
Multiple applications, running in multiple JVMs, using the same file databases is not supported. There must be a single JVM controlling the database directory and files.
Use Fuseki to provide a database server for multiple applications. Fuseki supports SPARQL Query, SPARQL Update and the SPARQL Graph Store protocol.
So the answer is that multiple server instances sharing the same database directory are not supported; a single Fuseki server must own the directory and serve all clients.
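Since a single JVM must own the database directory, the usual pattern is to run one Fuseki server and have both the public and the private application talk to it over the SPARQL protocol, enforcing read-only versus read-write access at the endpoint or reverse-proxy level. A minimal sketch using Python's requests library (the dataset name, port, and credentials are placeholders):

    import requests

    FUSEKI = "http://localhost:3030/ds"  # the single server owning the database directory

    # Public, read-only path: SPARQL query
    resp = requests.post(
        FUSEKI + "/query",
        data={"query": "SELECT * WHERE { ?s ?p ?o } LIMIT 10"},
        headers={"Accept": "application/sparql-results+json"},
    )
    print(resp.json())

    # Private, read-write path: SPARQL update, behind authentication
    resp = requests.post(
        FUSEKI + "/update",
        data={"update": "INSERT DATA { <urn:ex:s> <urn:ex:p> <urn:ex:o> }"},
        auth=("admin", "secret"),  # hypothetical credentials
    )
    resp.raise_for_status()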

MySQL Databases with Amazon

So I've recently started working with a client that has a Wordpress site run totally off Amazon Cloud Services. As part of some new work I'm doing for them, I need to directly access the database.
I looked at the most recent bill, and the charges I see from Amazon are for:
Amazon Elastic Compute Cloud
Amazon CloudFront
AWS Data Transfer
Amazon Simple Storage Service
Amazon SimpleDB (however the charge is $0.00)
I see no charges for RDS or for another database service they provide, however, the wp-config lists localhost as the connection info.
So my question is: is there a way to set up a MySQL database on one of these services? I'm thinking no, and it's possible the database is on another account?
Any help is appreciated!
Yes, on Elastic Compute Cloud (EC2). AWS does offer RDS as a managed MySQL option, but you don't have to host it that way. Since wp-config lists localhost, it appears MySQL is currently running on the same EC2 instance that WordPress is installed on.

Azure Cloud Services vs VMs for Existing Asp.Net website

I have seen variations of this question but couldn't find any that dealt with our particular scenario.
We have an existing ASP.NET website that connects to a SQL Server database.
The database has CLR user-defined types, hence it can only be hosted on SQL Server in an Azure VM, since Azure SQL Database doesn't support those types.
We initially wanted to use a vm for the database and cloud service for the front-end, but then some issues arose:
We use StateServer for storing session state, but Azure doesn't support it. We would need to configure either Table storage, SQL Database, or a worker role dedicated to state management (a new worker role is an added cost). Table storage wouldn't be ideal due to performance. The other two options are preferable, but they introduce cost or app-reconfiguration disadvantages.
We use SimpleMembership for user management. We would need to migrate the membership tables from SQL Server on our VM to Azure SQL Database. This is an inconvenience, as we want to keep all our tables in the same database, and splitting the two may require some code changes.
We are looking for a quick solution to have this app live as soon as possible, and at manageable cost. We are desperately trying to avoid re-factoring our code just to accommodate hosting part of the app in Azure Cloud services.
Questions:
Should we just go the VM route for hosting everything?
Is there any cost benefit in leveraging a VM instance (for sql server) and a Cloud Service instance (for the front-end)?
It seems to me every added "background process" to a Cloud Service will require a new worker role. For example, if we wanted to enable smtp for email services, this would require a new role, and hence more cost. Is this correct?
To run SQL Server with CLR etc, you'll need to run SQL Server in a Virtual Machine.
For the web tier, there are advantages to Cloud Services (web roles), as they are stateless - very easy to scale out/in without worrying about OS setup. And app setup is done through startup scripts upon bootup. If you can host your session content appropriately, the stateless model will be simpler to scale and maintain. However: If you have any type of complex installations to perform that take a while (or manual intervention), then a Virtual Machine may indeed be the better route, since you can build the VM out, and then create a master image from that VM. You'll still have OS and app maintenance issues to contend with, just as you would in an on-premises environment.
Let me correct you on your 3rd bullet regarding background processes. A cloud service's web role (or worker role) instances are merely Windows Server VM's with some scaffolding code for startup and process monitoring. You don't need a separate role for each. Feel free to run your entire app on a single web role and scale out; you'll just be scaling at a very coarse-grain level.
Some things to consider...
If you want to be cheap, you can have a single machine serve both web and worker duties by sharing the same code and adding a RoleEntryPoint, instead of paying for a separate worker role. Here is a post that shows how to do exactly what you are trying to do with sending email:
http://blog.maartenballiauw.be/post/2012/11/12/Sending-e-mail-from-Windows-Azure.aspx
Session management is painfully slow when backed by SQL Azure; I would use the Azure Cache if you can. It is fast.
SQL Server on a VM is going to cause problems for you, because you will also need to create a virtual network between it and any cloud services. This is really stupid, but if you deploy a cloud service AND a VM, they communicate over the PUBLIC LOAD BALANCER, causing a potential security concern and network latency. So first you need to put them on a virtual network (that is an extra cost)... then you also need to host a DNS server to resolve the SQL Server VM's address. Yes, this is really stupid, unless you are OK with your web/worker roles communicating with your SQL Server over the internet :)
EDIT: changed "public internet" to "public load balancer" (and noted latency)
EDIT: The above information is 100% correct contrary to the comment by David below. Please read the guidance from Microsoft here:
http://msdn.microsoft.com/library/windowsazure/dn133152.aspx#scenario
DIRECTLY FROM MICROSOFT GUIDANCE speaking about cross Cloud Service communication (VM->web/worker roles):
"We recommend that you implement the first option as the connection process would not need to go through the public Internet. Therefore, it would provide a better network performance."
As of today (8/29/2013) Azure VMs and Worker/Web Roles are deployed into DIFFERENT "Cloud Services". Therefore communication between them needs to be secured via a Virtual Network that exposes private IP addresses between the instances.
To follow up on David's point below about adding an ACL: you are still sending packets over the internet using TDS (the SQL Server protocol). That traffic can be encrypted, but no sane architect or enterprise/security governance would "allow" this scenario in a production environment.
