Containerized Azure TTS: support for listing TTS voices - microsoft-cognitive

Will the containerized Azure TTS add support for listing voices, to determine what is installed on a host? Matching the Azure voice list would be useful operationally as well as for development and demos. Using "docker images" is a temporary workaround.
curl http://localhost:5000/cognitiveservices/voices/list

Which container do you use?
In the current container design, each container includes only one voice.
So in your deployment environment, you could keep a list to track which voices are installed.
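For example, a minimal sketch of that tracking approach on the host (the grep pattern is illustrative and assumes the usual mcr.microsoft.com text-to-speech image naming; each matching image/tag corresponds to one installed voice):
docker images | grep text-to-speech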
Does this help?

Related

Can we run Azure speech containers on OpenShift?

Can we run Microsoft speech containers on OpenShift? What should we take into account when trying this?
Br, Ville
Yes, you can. OpenShift can run any Docker container, so running Azure Cognitive Services containers works just fine:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-container-howto?tabs=stt%2Ccsharp%2Csimple-format
Carlos is correct. However, I should mention that OpenShift adds some security requirements for the containers running on the platform. Some of those concerns were not properly addressed until the 2.2.0 release of the speech containers.
Thus, I recommend you start with the latest containers, and definitely raise issues if you hit any.
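For reference, the containers start the same way on OpenShift as in the linked how-to; a sketch for speech-to-text (the resource values are illustrative, and you fill in your own billing endpoint and API key):
docker run --rm -it -p 5000:5000 --memory 4g --cpus 4 \
  mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
  Eula=accept Billing={ENDPOINT_URI} ApiKey={API_KEY}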

What are the differences between NGINX Plus and the NGINX community edition? Some doubts about the use of NGINX in WSO2 EI cluster creation

I am absolutely new to NGINX and I have the following doubts about this product.
I have to create a WSO2 EI cluster, and the official documentation says to use NGINX as the load balancer:
https://docs.wso2.com/display/EI650/Clustering+the+ESB+Profile#ClusteringtheESBProfile-Configuringtheloadbalancer
In the official documentation it is specified that:
Follow the steps below to configure NGINX Plus version 1.7.11 or NGINX
community version 1.9.2 as the load balancer.
So the first doubt: what is the difference between NGINX Plus and NGINX Community? Is the first the paid version and the second a free version?
If my assumption is correct, what are the limitations of using the community edition?
Another doubt: going to the NGINX website:
https://www.nginx.com/solutions/adc/
it seems to me that it offers different products (from load balancer to web server and other things). Is it a single product doing multiple jobs, or is it composed of different modules that have to be installed separately?
Another doubt: do the hardware requirements of the VM where I install it change based on the amount of traffic the load balancer has to handle?
Thank you
Is the first the paid version and the second a free version?
Basically, yes - plus additional features.
NGINX Plus also supports out-of-the-box sticky sessions (needed for an HA setup of the Carbon console), active service health checks, and more. I needed the two mentioned.
In theory you could build (compile) additional add-on modules (e.g. for sticky sessions and health checks) with the community edition too, but it doesn't always work as smoothly as I expected. (You may also want to consider Apache httpd.)
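If you go that route, a rough sketch of such a custom build (the module, version, and paths are only examples):
wget http://nginx.org/download/nginx-1.9.2.tar.gz
tar xzf nginx-1.9.2.tar.gz
cd nginx-1.9.2
./configure --add-module=/path/to/nginx-sticky-module-ng
make && sudo make install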
It may be worth having support at hand, mainly for critical deployments. I prefer that over clients calling me on weekends to check my custom builds.
Is it a single product doing multiple jobs, or is it composed of different modules that have to be installed separately?
NGINX offers more products (APIM, WAF, ...); as far as I know, they are all NGINX Plus with additional modules. But for load balancing you may be OK with the basic web server (as the load balancer) plus keepalived.
Another doubt: ...do the hardware requirements of the VM change?
NGINX can handle A LOT of traffic even on modest infrastructure, much more than WSO2 EI itself. IMHO, NGINX won't be your bottleneck unless you do something special (WAF) or unwise (logging payloads).

Where does the web server come into play in an OpenStack - Cloud Foundry stack?

I work for a small web startup. They have decided to use OpenStack as the IaaS and, on top of it, Cloud Foundry as the PaaS. I am trying to learn about this technology stack, but I am really confused even after going through the documentation and related materials on the web.
What do I want?
I have a website that currently runs on a RHEL system (an AWS instance), with nginx as the web server. I want to shift this to an OpenStack-Cloud Foundry stack because the company's management has decided to do so. They also want me to evaluate whether I can put Docker to use anywhere.
From my understanding, OpenStack (IaaS) will provide me with everything related to hardware and software infrastructure needs, and Cloud Foundry will help me on the development front.
Now, where does nginx (or any web server) come into the picture? Is it part of OpenStack, or is it part of Cloud Foundry?
On my AWS RHEL system, do I just install OpenStack and Cloud Foundry, and then push my app without bothering at all about what happens beneath? I am really confused; please help out.
And is there anywhere I can use Docker in this setup?
You would generally not deploy OpenStack on top of AWS. OpenStack is similar to AWS in that it provides a service for you to create and destroy virtual machine instances, manage networking between and around your VMs, attach and detach block devices to instances, etc. In other words, both are services for managing "infrastructure", where "infrastructure" here means a virtualized datacenter: at its core, a bunch of hardware running hypervisors that allows you to regard the datacenter as a pool of virtual machines that can be spun up and down on demand, rather than a bunch of "static" physical machines.
AWS is an Infrastructure-as-a-Service provided by Amazon, so you don't have to install AWS yourself, you can just start using it to provision VM instances within Amazon's datacenters. OpenStack is software you install yourself (or pay a vendor to manage for you) on hardware you own or pay for yourself, and once installed OpenStack provides a similar service/interface to AWS.
With a Platform-as-a-Service, you concern yourself more with your application code, and "just pushing it", and don't have to concern yourself as much with what's happening on the underlying machine. You don't have to worry as much about the underlying OS, making sure you have the right runtime and code dependencies of your application, generally don't have to care about the webserver that's serving your code, etc. And you get many more higher level features, e.g. easy ability to scale vertically or horizontally, dynamic routing, automatic log aggregation, automatic health management, etc.
As far as how nginx fits in, it depends on how you're using nginx and what kind of application you have. Cloud Foundry has a couple of ways of dealing with applications.
One is the buildpack model, where you simply push your source code to the platform, and it will automatically detect the appropriate runtime and dependencies for your application. For instance, if your application is a Ruby application, it will detect this and by default run the application using the WEBrick server. However, you can choose other Ruby web servers, such as Phusion Passenger. [1]
If your application is primarily serving static content, it will use nginx as the webserver. [2]
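For instance, a minimal sketch of pushing a static site with the staticfile buildpack (the app name is illustrative):
cf push my-static-site -b staticfile_buildpack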
Another is using Docker. You can deploy applications based on Docker images on Cloud Foundry, in which case you could have a container running both nginx and your application inside it, or not, depending on whether you still need nginx. Pushing a Docker application is as simple as:
cf push trainingwebapp --docker-image training/webapp -c 'python app.py'
Here, this uses the sample Hello World web app from the Docker documentation. [3]
[1] https://docs.cloudfoundry.org/buildpacks/ruby/ruby-prod-server.html
[2] https://docs.cloudfoundry.org/buildpacks/staticfile/index.html
[3] https://docs.docker.com/engine/userguide/containers/usingdocker/

Run multiple instances of IBM BPM

I have the IBM Business Process Manager Advanced 7.5 installed.
Question:
Is it possible to install and run a newer version, IBM BPM 8.5, on the same machine?
I worry about port conflicts (for example, port 9043 for the IBM console).
Maybe I should ask how to change the default port configuration?
Please help.
Technically it is possible; however, I suggest you do not do this, as IBM BPM requires a lot of system resources to run, and installing two versions of IBM BPM can make the system much slower.
However, I have seen multiple instances of the same IBM BPM version running on a single cluster on a server VM. This is stable in practice and has been in use for a considerable time.
PS: I have administered a huge IBM BPM infrastructure containing 80+ IBM BPM servers.
As Gas already commented, in theory this is possible. But you have to be aware that IBM BPM not only uses the specified ports for web access, it also uses ports for internal communication. In my opinion, this is not an easy task to get right.
On the other hand, the system requirements for IBM BPM are quite demanding; if you want to run both instances in parallel, your server will need to be capable. WebSphere is kind of greedy and not really designed to share its resources ;)
Yes, you can run multiple versions of BPM on the same system. The primary concerns are going to be port conflicts and OS resources. Use BPMConfig to create a new profile and installation on different ports. On my lab machines with VMs, I do all the BPM installs with the default ports and only have one running at a time. If I need two, I just spin up a new VM from the base template and go from there.
By default, port conflicts are addressed by the WebSphere Application Server code. If needed, you can specify "initialPortAssignment" for the Dmgr, nodes, and cluster members while creating the environment using the BPMConfig command. You can even specify exact port numbers using the sample configuration properties file:
https://www.ibm.com/support/knowledgecenter/en/SSFPJS_8.6.0/com.ibm.wbpm.ref.doc/topics/samplecfgprops.html
You can also provide WebSphere options like "-startingPort starting_port | -portsFile ports_file_path | -defaultPorts" for the Dmgr (bpm.dmgr.profileOptions=) and nodes (bpm.de.node.#.profileOptions=) in the BPMConfig properties file. For cluster members, there is only an option to indicate the starting port.
Ref: https://www.ibm.com/support/knowledgecenter/cs/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/rxml_manageprofiles.html
I would not advise changing the port numbers once you start using the BPM environment.
As indicated by others, make sure you have enough resources if you are planning to run both environments at the same time.
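As a rough sketch, creating a second deployment environment on non-conflicting ports might look like this (the property names follow the sample file linked above; the port values and path are illustrative):
# In the BPMConfig properties file:
#   bpm.dmgr.profileOptions=-startingPort 10200
#   bpm.de.node.1.profileOptions=-startingPort 10400
BPMConfig -create -de /opt/ibm/config/my_environment.properties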
Yes; I am running two versions for evaluation. Port conflicts can be handled using the server console (WebSphere Integrated Solutions Console) or the BPMConfig utility.

What's the best way to simulate a complex production web development environment?

I want to create a modestly scalable development environment for an in-development web service.
Ideally, there would be an nginx web server with HAProxy, a few database servers, websockets, the works.
I'd be going with Amazon cloud services for all of this hosting... but I'd rather not pay for CPU cycles when I'm just developing... much less develop on a remote, cloud environment.
What's the best way to go about modeling a somewhat complex development environment locally that could - hopefully, at the press of a button - sync with a similarly architected Amazon cloud environment?
All I have is my MacBook Pro. I also have a fully built 1 GHz tower computer in the closet I could leverage if needed, and I wouldn't be opposed to buying more. But, ultimately, I'd like to have the ability to sync to production with minimal steps and reconfiguration.
Thanks!
Check out Vagrant and VirtualBox. That will get you local environments running nicely on your MacBook. Syncing to EC2 is going to be tougher. At the system level you'll want to use something like Puppet or Chef (both of which are nicely supported by Vagrant). Add to that a solid automated application deployment mechanism and you should be close. Be prepared to put some time into this; it's not likely to be a trivial undertaking.
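As a starting point, a minimal sketch of the local side (the box name is just an example):
vagrant init ubuntu/trusty64
vagrant up
vagrant ssh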
