Is it possible to deploy an Azure Face API trained model to IoT Edge, like Custom Vision?
If it is, how can I do that?
Updating this topic...
You can now download a Docker image with the Face API and run it on-premises.
Here you can find the documentation for testing this feature, which is currently in public preview.
Here you can see the list of all the Azure Cognitive Services that are available as Docker containers.
This new feature mainly targets enterprises that:
Are not willing or able to load all their data into the cloud for processing or storage;
Are subject to regulatory requirements on handling customer data;
Have data that they aren’t comfortable sharing and processing in a cloud, regardless of security;
Have weak bandwidth or disconnected environments with high latency and TPS issues.
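For reference, here is a hedged sketch of how running the container looks. The image name and settings follow the public-preview documentation at the time of writing and may change; the endpoint URI and API key are placeholders you get from your Azure subscription:

```
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
  containerpreview.azurecr.io/microsoft/cognitive-services-face \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```

The container still phones home to the Billing endpoint for metering, so "on-premises" here means the data stays local, not that the container runs fully disconnected.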
Model export is not a feature supported by the Face API.
I'm trying to enable instance termination protection using Terraform, but I did not see any argument for OpenStack like the 'disable_api_termination' flag I found for AWS.
I think you need a different mechanism to manage that. Terraform doesn't have an option to disable termination for OpenStack the way it does for AWS; those options are tailored after the provider APIs, and I'm guessing OpenStack just doesn't have anything similar to this behavior.
To prevent some confusion, I want to mention that Terraform's lifecycle block, documented here, won't be of much help in this regard:
https://www.terraform.io/docs/configuration/resources.html#prevent_destroy
It will prevent you from destroying the resource using 'terraform destroy' and the like, but it won't do much in terms of protection on the OpenStack side itself.
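For illustration, this is what that lifecycle block looks like on an OpenStack instance (a minimal sketch; the resource and attribute values other than prevent_destroy are placeholders):

```
resource "openstack_compute_instance_v2" "app" {
  name = "app-server"
  # ...

  lifecycle {
    # Blocks destruction initiated through Terraform itself; the instance
    # can still be deleted via the OpenStack API or dashboard.
    prevent_destroy = true
  }
}
```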
I would rather think about solving this problem at the architectural layer. Think about how you call the OpenStack API and how you manage your services; around those steps you can probably place an additional layer or step that manages the lifecycle and keeps mistakes to a minimum. Your process can protect you better than any tool.
I'm currently fighting my way through Event Hubs and EventProcessorHost. All the guidance I have found so far suggests running an EventProcessor in an Azure Cloud Service worker role. Since those are very slow to deploy and update, I was wondering: is there any Azure service that lets me run an EventProcessor in a more agile environment?
So far my rough architecture looks like this
Device > IoT Hub > Stream Analytics Job > Event Hub > [MyEventProcessor] > SignalR > Clients...
Or maybe there is another way of getting from Stream Analytics to firing SignalR messages?
Any recommendations are highly appreciated.
Thanks, Philipp
You may use an Azure Web App with SignalR enabled and merge your pipeline "steps" [MyEventProcessor] and SignalR into one.
I have done that a few times: I started from the simple SignalR chat demo and added the Event Hub receiver functionality to the SignalR processing. That article is close to what I mean in terms of approach.
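As a rough, hedged sketch of that merge, using the classic Microsoft.ServiceBus.Messaging and ASP.NET SignalR 2 APIs (the hub class EventsHub and the client callback newEvent are placeholder names, not anything from your pipeline):

```csharp
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Microsoft.ServiceBus.Messaging;

public class EventsHub : Hub { }

// The [MyEventProcessor] step from your diagram: each Event Hub message
// is pushed straight to the connected SignalR clients.
public class MyEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) { return Task.FromResult(0); }

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<EventsHub>();
        foreach (var message in messages)
        {
            var payload = Encoding.UTF8.GetString(message.GetBytes());
            hub.Clients.All.newEvent(payload); // 'newEvent' is whatever your clients listen for
        }
        await context.CheckpointAsync(); // record progress in blob storage
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason) { return Task.FromResult(0); }
}
```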
You may take a look at Azure WebJobs as well. Basically, a WebJob can work as a background service running your logic, and the WebJobs SDK has support for Event Hubs.
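With the WebJobs SDK that support looks roughly like this (a sketch, assuming the Event Hubs extension is wired up in your JobHostConfiguration; "myeventhub" is a placeholder name):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.ServiceBus.Messaging;

public class Functions
{
    // The SDK invokes this for each event arriving on the hub,
    // managing leases and checkpoints for you.
    public static void ProcessEvent([EventHubTrigger("myeventhub")] EventData message)
    {
        // handle the event, e.g. hand it to SignalR as above
    }
}
```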
You can run an EventProcessorHost in any Azure thing that will run arbitrary C# code and keep running. Where you should run it ends up depending on how much you want to spend and what you need. Azure Container Service may be the fancy new deployment system, but its minimum cost may not be suitable for you. I'm running my binaries that read data from Event Hubs on normal Azure Virtual Machines, with our deployment system in charge of managing them.
If your front-end processes that use SignalR to talk to clients stick around for a while, you could just make each one of them its own logical consumer (consumer group) and have it consume the entire stream. Even if they don't stick around (i.e. you're using an Azure hosting option that turns the process off when idle), you could write your receiver to just start at the end of the stream (as opposed to reprocessing older data), if that's what your scenario requires.
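Starting at the end of the stream is just a host option. A hedged sketch using the classic Microsoft.ServiceBus.Messaging API (hub name and connection strings are placeholders; MyEventProcessor stands for your IEventProcessor implementation):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public class Receiver
{
    public static async Task RunAsync(string eventHubConnStr, string storageConnStr)
    {
        var host = new EventProcessorHost(
            Environment.MachineName,                // host name
            "myeventhub",                           // Event Hub path
            EventHubConsumerGroup.DefaultGroupName, // or a dedicated consumer group per front end
            eventHubConnStr,
            storageConnStr);

        var options = new EventProcessorOptions
        {
            // Only consulted when no checkpoint exists yet: begin with events
            // enqueued from now on instead of the start of retention.
            InitialOffsetProvider = partitionId => DateTime.UtcNow
        };

        await host.RegisterEventProcessorAsync<MyEventProcessor>(options);
    }
}
```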
I am starting to port an old single-tenant desktop application to the cloud and would like to hear your recommendations about databases for my cloud-based multi-tenant application.
My basic requirements are simple:
For each tenant, its data is separate from any other tenant's data. I can easily back up, restore, or export the data for one single tenant without affecting other tenants.
I don't really want to care about multi-tenancy in the business logic code. It should look like a single-tenant application behind the security layer, with no tenant IDs passed around, etc.
Easy to query using some mature technology like LINQ.
Availability and scalability, of course: easy to set up replicas, failover, and scaling up and down, etc.
I have done some investigation into multi-tenant application development. I have noticed that SQL databases from Azure and AWS are both very expensive (the cost of just the SQL database instance is close to the license fee of the original application), so I definitely can't use a separate SQL database instance per tenant.
Now I'm reading the book Developing Multi-tenant Applications for the Cloud, 3rd Edition, which uses the Azure Storage Service to implement multi-tenancy. I haven't finished the book yet, but it seems you still have to handle the multi-tenancy yourself, and the sample code is already out of date.
I have seen lots of SO questions comparing Azure Table Storage with MongoDB. MongoDB is very new to me, and I'm not sure whether it could easily fulfill my requirements.
I have seen RavenDB as well; it supports multi-tenancy out of the box, but I haven't seen good sample code for using it in Azure app development.
Hope to hear some good advice from the awesome SO folks.
I would opt for RavenDB over MongoDB. Even though Raven is a newcomer to the game, it supports most of the features that traditional SQL databases support.
The volume of data you are dealing with is also a key decision point, as is the amount of traffic you are expecting.
Also keep in mind operational costs and development effort. HA and DR scenarios can be problematic when you use Raven or Mongo, because you have to host them yourself. Azure Storage, by contrast, protects you to a great extent by default, maintaining three copies of your data.
So I would suggest you weigh the trade-offs carefully and choose based on your business needs, cost optimization, and development and operational effort.
Having a single instance of your application for each tenant is a very expensive way to implement an application. However, I realise that if an application was developed with a single tenant in mind, the cost of changing over can be high.
First, can we start with why you have a desktop application connecting to a database at another location? The latency can really slow down an application. Ideally you would want a locally installed database that syncs with the cloud DB, or appropriate caching added to your application.
However, the DB would still need to differentiate the clients.
Why do you need this to go to a cloud database? Is it for backup purposes, to avoid installing a DB locally on a client's machine, to access the same data from many machines, or something else?
Unless your application is extremely large, I would recommend rewriting it as a multi-tenant application against one SQL Azure database. The architecture chosen at the beginning of the project no longer suits your requirements, and as you expand you will run into further issues.
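A hedged sketch of what that shared-database approach can look like (Entity Framework with a TenantId discriminator column; all names here are illustrative, not from your app). The tenant filter lives in one place, so the business logic still reads like a single-tenant app and stays LINQ-queryable:

```csharp
using System;
using System.Data.Entity;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public Guid TenantId { get; set; }   // discriminator column on every table
    public decimal Total { get; set; }
}

public class TenantContext : DbContext
{
    private readonly Guid _tenantId;

    // The tenant ID is resolved once, from the security layer / authenticated user.
    public TenantContext(Guid tenantId) { _tenantId = tenantId; }

    public DbSet<Order> AllOrders { get; set; }

    // Business code queries this and never sees tenant IDs.
    public IQueryable<Order> Orders
    {
        get { return AllOrders.Where(o => o.TenantId == _tenantId); }
    }
}
```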
Our company is thinking about moving to the cloud. Would we still be able to meet all our current requirements (below)? We want to be able to scale easily in the future without high costs.
5 ASP.NET 4.0 websites running (using SQL databases, see below)
SQL Server 2008 Express (8 databases on it)
2 scheduler services running (sending nightly reports via email, e.g. new orders in the DB)
MongoDB and Memcached are also installed on the server
Currently the websites are on a separate server from the database server for security reasons.
We were thinking about Windows Azure and Amazon Web Services (AWS) as providers; which would best fit our requirements?
Are there any other factors we need to consider?
Re: SQL Databases: on Windows Azure this maps to SQL Azure. Costs start at $5/month for up to a 100 MB instance, go all the way up to 150 GB, and go beyond that with Federations.
Re: 5 ASP.net 4.0 websites running: these map naturally to Windows Azure Web Roles. The "small" instance is $0.12/hour/instance, and you'll usually want two instances (to avoid a single point of failure in a few scenarios). Depending on your load, you may be able to put all 5 sites on the same instances. If you have very low-usage sites, consider the $0.05/hour/instance "extra small" instance.
Re: Currently the websites are on a separate server from the database server for security reasons: of course this is also doable.
Re: 2 Scheduler services running: Running Windows Services is no problem.
Re: send nightly reports via email e.g. new orders in db: No problem; this is not baked into Windows Azure directly, but there are many simple ways to do it (even for free, such as via SendGrid).
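As a minimal sketch of the SendGrid route - plain SMTP from a worker or scheduled task; the credentials and addresses are placeholders:

```csharp
using System.Net;
using System.Net.Mail;

public static class NightlyReport
{
    public static void Send(string body)
    {
        // SendGrid's standard SMTP endpoint; credentials come from your SendGrid account.
        using (var smtp = new SmtpClient("smtp.sendgrid.net", 587))
        {
            smtp.EnableSsl = true;
            smtp.Credentials = new NetworkCredential("sendgrid-username", "sendgrid-password");
            smtp.Send("reports@example.com", "team@example.com", "Nightly report", body);
        }
    }
}
```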
Re: We want to be able to easily scale in the future without high costs: you will need to do the math regarding your actual costs, but Windows Azure can surely scale.
Re: MongoDB and Memcached are also installed on server: These can both be run on Azure. Check out https://github.com/mongodb/mongo for MongoDB. The Azure Caching service is also available (managed for you).
Re: We were thinking about Azure and Amazon as providers, which would best fit our requirements: These are functionally very similar (in capability and cost), with a few noteworthy differences.
Windows Azure is Platform as a Service, meaning that you don't need to worry about Virtual Machines, but rather about applications. In other words, you upload your (basically) zipped app package to the cloud for execution. With Amazon, you will be dealing with the Virtual Machine yourself. In Azure, you get a copy of Windows Server 2008 which is managed for you, but you can also do admin things to it if you need to. This is far less of an advantage if your app is an old, messy install that isn't really clean (though such an app may not be a good high-value cloud candidate anyway).
Windows Azure has an emulator that works great: F5 right from Visual Studio to work with the storage system, VMs, and the most popular features.
Re: Are there any other factors we need to consider: Yes. With any cloud application, you need to be prepared to scale out (not up) and to deal with transient retries (you may need to retry an operation against a cloud service - any cloud service). The benefits are much better (and more cost-effective) scalability and higher reliability (when you run across nodes, you don't have a single point of failure). Be sure to understand when/where storage on a VM is persistent vs. ephemeral. There are more considerations, but these are the primary ones.
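Transient retries need not be fancy. A hedged sketch of the retry-with-backoff idea (in practice you'd likely use a ready-made library, such as the transient fault handling library mentioned elsewhere in this thread; the helper name and defaults here are made up):

```csharp
using System;
using System.Threading.Tasks;

public static class Transient
{
    // Retries a cloud operation a few times with exponential backoff.
    public static async Task<T> WithRetriesAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts) throw; // give up after the last attempt
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```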
You may want to check out the Windows Azure Pricing calculator.
Good luck! And welcome to the cloud.
With the exception of the scaling question and the 2 physical servers, you can move this functionality into a hosted environment and you will technically be in "the cloud". This could be a dedicated server or VPS (Virtual Private Server), or even a shared server if you are small.
Those can allow for growth over time; you just need to upgrade what you have with the provider.
You could also use a colo server with a hosting provider, which basically means you put your hardware in a hosting provider's rack and use their electricity and bandwidth. They charge based on bandwidth usage.
Since you are using SQL Server 2008 Express, remember that each database is limited to 4 GB (10 GB from SQL Server 2008 R2 Express onwards). That will limit your growth at some point, and would entail an upgrade from Express to full SQL Server if you don't want to re-engineer anything.
Have you considered AppHarbor? It has Memcached, MongoDB, SQL Server and so on, and is quicker to deploy to than Azure. I like Azure, but there is quite a learning curve, and I have found the connection to SQL Azure to be pretty bad - which means re-engineering your DAL to use something like the transient fault handling library, a bit of a faff for existing projects.
AppHarbor does not have blob storage, so if you are uploading files you will also need Azure Blob Storage, Amazon S3, or some equivalent.
Hope this helps.
I'm not an expert, but since ASP.NET is a Microsoft product it should be easier to migrate to Azure, although from what I have heard AWS shouldn't be difficult either. Another thing you may want to consider is cost: last time I checked, AWS was significantly less costly unless you already pay for MSDN subscriptions.
None of the requirements you list is an issue to deploy on Windows Azure; you can find a lot of information on the internet on how to do this.
Keep in mind that if you want to deploy your services to Windows Azure, you'll need to review your applications' code to fix session state, output caching, and so forth in your web applications.
Since you want to scale out and the sites sit behind a non-sticky round-robin load balancer, you will run into issues with session state if it is saved on the machine itself. You'll need to move session state to SQL Azure or to Windows Azure table storage, for example.
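For SQL-backed session state the change is mostly configuration. A hedged web.config sketch (the connection string is a placeholder, and the session-state database has to be provisioned separately):

```xml
<system.web>
  <!-- Store session state in a shared SQL database so any load-balanced
       instance can serve the request. -->
  <sessionState mode="SQLServer"
                allowCustomSqlDatabase="true"
                sqlConnectionString="Server=tcp:yourserver.database.windows.net;Database=SessionState;User ID=...;Password=..."
                timeout="20" />
</system.web>
```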
Installing MongoDB and Memcached on Azure is not an issue; you'll find a lot of information on how to do it, but it will require some work to set up your role and the scripting.
codingoutloud has given a very detailed answer. I would add two key considerations to think about when moving any application to Azure (or, indeed, to many other cloud providers).
Local state
With normal Azure, they reserve the right to shut down any one instance of a role at any time in order to move or upgrade it. This means you always need at least two instances of any one role, and they will be transparently load balanced. If your websites are currently running on individual servers, they may rely on session state or files in local directories, etc. There are ways around this (putting session state in SQL, using the cookie provider for temp data, using a shared drive for files, etc.), or you can bypass a lot of the benefits of Azure and use their "virtual server" concept, which means you don't get the scale benefits, etc.
But, sites that rely heavily on local state may be challenging to move to the cloud.
Time Zones
All Azure servers run on UTC time. If you are used to running on dedicated servers serving users from a single time zone, then chances are you use things like DateTime.Now, which won't really correspond to what the user wants.
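The usual fix is to compute and store in UTC and convert at the display edge. A minimal sketch (the time zone ID would come from a per-user setting; the one in the comment is purely illustrative):

```csharp
using System;

public static class UserTime
{
    // Convert a UTC timestamp to the user's local wall-clock time for display.
    public static DateTime ForUser(DateTime utc, string timeZoneId /* e.g. "W. Europe Standard Time" */)
    {
        var zone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        return TimeZoneInfo.ConvertTimeFromUtc(utc, zone);
    }
}
```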
I don't see any of the above as limitations of Azure; I find them very useful in forcing you to build global and scalable solutions from the start. However, when porting an existing application, they may be quite a challenge to adapt to, even though there are workarounds.
As also mentioned elsewhere, there is a learning curve to Azure, and somehow the documentation - plentiful as it is - just doesn't quite seem to help. Once you "get it", though, I find Azure really nice, and there are a bunch of subtle features that will help you build scalable solutions, like the whole queuing infrastructure, blob storage, and table storage. In some ways the learning is hampered by having too much choice.
Good luck!