Use an existing Gateway with Azure Machine Learning? - azure-machine-learning-studio

We want to access an on-premises SQL database with an existing gateway. Is that possible in AML? The tool only seems to allow creating new gateways.

Confirmed that this is not possible; AML only allows the use of AML-created gateways.

Related

Linked Servers in SQL Database Managed Instance

In a SQL Server Managed Instance I have 2 databases (for security reasons both databases have different logins). I need the ability for one database to look into the other. On a local SQL Server I was able to create a linked server to achieve this, but this doesn't seem to work with a Managed Instance.
Can someone give some hints on how to achieve this?
Managed Instance supports linked servers (unless they use MSDTC for distributed writes). Make sure that you add logins for the remote server:
-- The linked server itself must exist first; the server name and target here are placeholders:
EXEC master.dbo.sp_addlinkedserver @server=N'PEER', @srvproduct=N'', @provider=N'SQLNCLI', @datasrc=N'$(linkedServerTarget)';
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'PEER', @useself=N'False', @locallogin=NULL,
@rmtuser=N'$(linkedServerUsername)', @rmtpassword='$(linkedServerPassword)';
If it still doesn't work, post the exact error message. The cause might be a Network Security Group blocking the port, VNets that are not peered, etc.

AML Studio: Register multiple gateways on the same server

I am struggling to find a way to register multiple gateways. I have a local instance of my SQL Server and have created a gateway to access it from the AML Studio workspace. It works fine, but now I would like to access the same SQL Server instance from another workspace. So the question is: how do I register a new gateway without removing the previous one?
I followed this documentation.
Does the following explanation mean that there is no way to do that?
You can create and set up multiple gateways in Studio for each workspace. For example, you may have a gateway that you want to connect to your test data sources during development, and a different gateway for your production data sources. Azure Machine Learning gives you the flexibility to set up multiple gateways depending upon your corporate environment. Currently you can’t share a gateway between workspaces and only one gateway can be installed on a single computer.
It is quite limiting, as connecting to the same server from multiple workspaces can sometimes be crucial.
Well, I finally found a way to bypass this limitation. From this documentation I found that:
The IR does not need to be on the same machine as the data source. But staying closer to the data source reduces the time for the gateway to connect to the data source. We recommend that you install the IR on a machine that's different from the one that hosts the on-premises data source so that the gateway and data source don't compete for resources.
So the logic is pretty simple: you grant another machine on the VPN access to your local server and install your gateway there. Important: I had set up the firewall rules on the server beforehand, to be able to establish the connection remotely.
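As a sanity check, a quick Python sketch like this can confirm the server is reachable from the second machine before you install the gateway there (the hostname is a placeholder, and port 1433 assumes the default SQL Server port):

import socket

# Placeholder hostname; 1433 assumes the default SQL Server port.
with socket.create_connection(('sql-server.corp.example', 1433), timeout=5):
    print('SQL Server port is reachable from this machine')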

Is it possible to associate an Elastic Network Interface with an OpsWorks instance?

I'd like to have the instances I create in AWS OpsWorks use preconfigured Elastic Network Interfaces so that I have some predictable internal IP addresses to use when I configure all my services. I'm not seeing a way to do this in the UI yet; has anyone come up with a slick way to do this via a recipe or custom JSON?
You can associate Elastic IPs with OpsWorks instances through the OpsWorks API. Boto, a Python library for the AWS API, will allow you to do exactly what you are looking for.
The specific method to do that is:
associate_elastic_ip(elastic_ip, instance_id=None)
Boto docs
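For example, a minimal sketch with boto 2.x (the Elastic IP and instance ID below are placeholders; credentials are read from your environment or ~/.boto):

import boto

# Connect to the OpsWorks API (boto 2.x picks up AWS credentials
# from the environment or ~/.boto).
conn = boto.connect_opsworks()

# Placeholder values -- substitute your own Elastic IP and OpsWorks instance ID.
conn.associate_elastic_ip('203.0.113.10', instance_id='2f92a8fb-0000-0000-0000-000000000000')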

Azure Cloud Services vs VMs for an existing ASP.NET website

I have seen variations of this question but couldn't find any that dealt with our particular scenario.
We have an existing ASP.NET website that links to a SQL Server database.
The database has CLR user-defined types, hence it can only be hosted in an Azure VM, since Cloud Services don't support said types.
We initially wanted to use a VM for the database and a Cloud Service for the front end, but then some issues arose:
We use StateServer for storing session state, but Azure doesn't support it. We would need to configure either Table storage, SQL Database, or a worker role dedicated to state management (a new worker role is an added cost). Table storage wouldn't be ideal due to performance. The other two options are preferable, but they introduce cost or app-reconfiguration disadvantages.
We use SimpleMembership for user management. We would need to migrate the membership tables from SQL Server on our VM instance to Azure SQL Database. This is an inconvenience, as we want to keep all our tables in the same database, and splitting up the two may require making some code changes.
We are looking for a quick solution to get this app live as soon as possible, at a manageable cost. We are desperately trying to avoid refactoring our code just to accommodate hosting part of the app in Azure Cloud Services.
Questions:
Should we just go the VM route for hosting everything?
Is there any cost benefit in leveraging a VM instance (for SQL Server) and a Cloud Service instance (for the front end)?
It seems to me every added "background process" to a Cloud Service will require a new worker role. For example, if we wanted to enable smtp for email services, this would require a new role, and hence more cost. Is this correct?
To run SQL Server with CLR etc., you'll need to run SQL Server in a Virtual Machine.
For the web tier, there are advantages to Cloud Services (web roles), as they are stateless - very easy to scale out/in without worrying about OS setup. And app setup is done through startup scripts upon bootup. If you can host your session content appropriately, the stateless model will be simpler to scale and maintain. However: If you have any type of complex installations to perform that take a while (or manual intervention), then a Virtual Machine may indeed be the better route, since you can build the VM out, and then create a master image from that VM. You'll still have OS and app maintenance issues to contend with, just as you would in an on-premises environment.
Let me correct you on your 3rd bullet regarding background processes. A cloud service's web role (or worker role) instances are merely Windows Server VMs with some scaffolding code for startup and process monitoring. You don't need a separate role for each background process. Feel free to run your entire app on a single web role and scale out; you'll just be scaling at a very coarse-grained level.
Some things to consider...
If you want to be cheap, you can have your web/worker role share the same code on a single machine by adding the RoleEntryPoint. Here is a post that actually shows how to do what you are trying to do with sending email:
http://blog.maartenballiauw.be/post/2012/11/12/Sending-e-mail-from-Windows-Azure.aspx
Session management is painfully slow in SQL Azure DB; I would use the Azure Cache if you can, as it is fast.
SQL Server with VMs is going to cause problems for you, because you will also need to create a virtual network between that and any cloud services. This is really stupid, but if you deploy a cloud service AND a VM they communicate over the PUBLIC LOAD BALANCER, creating a potential security concern and network latency. So first you need to set up a virtual network between them (that is an extra cost), and then you also need to host a DNS server to address the SQL Server VM. Yes, this is really stupid, unless you are OK with your web/worker roles communicating with your SQL Server over the internet :)
EDIT: changed "public internet" to "public load balancer" (and noted latency)
EDIT: The above information is 100% correct contrary to the comment by David below. Please read the guidance from Microsoft here:
http://msdn.microsoft.com/library/windowsazure/dn133152.aspx#scenario
DIRECTLY FROM MICROSOFT GUIDANCE speaking about cross Cloud Service communication (VM->web/worker roles):
"We recommend that you implement the first option as the connection process would not need to go through the public Internet. Therefore, it would provide a better network performance."
As of today (8/29/2013) Azure VMs and Worker/Web Roles are deployed into DIFFERENT "Cloud Services". Therefore communication between them needs to be secured via a Virtual Network that exposes private IP addresses between the instances.
To follow up on David's point below about adding an ACL: you are still sending packets over the internet using TDS (the SQL Server protocol). That can be encrypted, but no sane architect or enterprise/security governance would allow this scenario in a production environment.

Configure OpenStack nova with remote Bind Server

How can we configure OpenStack to use and dynamically update a remote BIND DNS server?
This is not currently supported. There is a DNS driver layer, but the only driver at the moment is for LDAP-backed PowerDNS. I have code for dynamic DNS updates (https://review.openstack.org/#/c/25194/), but I have had trouble getting it landed because we need to fix eventlet monkey patching first.
So, it's in progress, but you probably won't see it until Havana is released.
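For context, a dynamic DNS update against BIND (RFC 2136), which is what such a driver would send, looks roughly like this with the dnspython library (the zone, TSIG key, record, and server address are all placeholders):

import dns.query
import dns.tsigkeyring
import dns.update

# Placeholder TSIG key shared with the BIND server.
keyring = dns.tsigkeyring.from_text({'update-key': 'c2VjcmV0LXBsYWNlaG9sZGVy'})

# Build an update for the zone and replace one A record (placeholder values).
update = dns.update.Update('example.org', keyring=keyring)
update.replace('host1', 300, 'A', '10.0.0.5')

# Send the update to the placeholder BIND server address.
response = dns.query.tcp(update, '192.0.2.53')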
OpenStack relies on dnsmasq internally.
I am not aware of any way to integrate an external BIND server. Or plans to do that. Or even a reason to do that.
Check out Designate (https://docs.openstack.org/developer/designate/)
This could be what you are looking for:
Designate provides DNSaaS services for OpenStack:
- REST API for domain & record management
- Multi-tenant support
- Integrated with Keystone for authentication
- Framework in place to integrate with Nova and Neutron notifications (for auto-generated records)
- Support for PowerDNS and Bind9 out of the box
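As a rough illustration of the REST API (the endpoint URL and token are placeholders, and this assumes Designate's v2 zones interface):

import requests

# Placeholder endpoint and Keystone token -- substitute your own.
DESIGNATE_URL = 'http://designate.example.com:9001'
TOKEN = 'keystone-token-placeholder'

# Create a zone through the v2 REST API (domain & record management).
resp = requests.post(
    DESIGNATE_URL + '/v2/zones',
    headers={'X-Auth-Token': TOKEN},
    json={'name': 'example.org.', 'email': 'admin@example.org'},
)
resp.raise_for_status()
print('Created zone:', resp.json()['id'])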
