Can't change Replication properties for Linux VM in Azure Site Recovery

We replicate some Windows and Linux VMs from Hyper-V to Azure.
As part of recovery preparation we need to change VM parameters, e.g. size, NSG, and the resource group to be recovered into (different from the ASR vault's resource group), via the Set-AzRecoveryServicesAsrReplicationProtectedItem cmdlet.
For Windows it works well, but for Linux we're getting "Failed to update the LicenseType for the physical or virtual machine."
We don't need to change anything related to licensing, just the resource group to be recovered into.
Why is this issue happening, and how can we work around it?

The solution is to explicitly specify the -LicenseType parameter with the value NoLicenseType.
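In practice that means adding the parameter to the update call. A minimal sketch, where the container, VM name, and target resource-group ID are all placeholders for your own values:

```powershell
# All names here are hypothetical; substitute your own container, VM, and RG.
$item = Get-AzRecoveryServicesAsrReplicationProtectedItem `
    -ProtectionContainer $container -FriendlyName "my-linux-vm"

Set-AzRecoveryServicesAsrReplicationProtectedItem `
    -InputObject $item `
    -RecoveryResourceGroupId "/subscriptions/<sub-id>/resourceGroups/<target-rg>" `
    -LicenseType NoLicenseType   # prevents the LicenseType update error on Linux
```

Without -LicenseType the cmdlet apparently tries to update the Windows-only Hybrid Benefit license setting as part of the same operation, which fails for Linux protected items.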

Problem communicating over a local area network (LAN) with ROS on WSL2

I am a developer of ROS projects. Recently I have been trying ROS (Melodic) on WSL2 (Windows Subsystem for Linux), and everything works great. But I ran into trouble when I wanted another PC on the same local area network (LAN) to communicate with it. Before setting environment variables like ROS_MASTER_URI and ROS_IP, I knew that since WSL2 runs on Hyper-V, the IP shown inside WSL2 is not the one on the real LAN. I have to run a command like the one below so that everyone on the LAN can reach a specific host:PORT on WSL2:
netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$wsl_ip
But here comes a new question:
The nodes which use TCPROS to communicate with each other have a random PORT every time I launch the file.
How can I handle this kind of problem?
Or is there any information online that I can look at?
Thank you.
The root problem is described in WSL issue #4150. To quote from that thread:

    WSL 2 seems to NAT its virtual network, instead of making it bridged to the host NIC.
Option 1 - Port forwarding script on login
Note: From @kraego's comment (and the edited question, which I'm just seeing based on the comment), this is probably not a good option for ROS, since the port numbers are randomly assigned. This makes port forwarding something that would have to be done dynamically.
There are a number of workarounds described in that issue, for which you've already figured out the first part (the port forwarding). The primary technique seems to be to create a PowerShell script to detect the IP address and create the port forwarding rules that runs upon Windows login. This particular comment near the top of the thread seems to be the canonical go-to answer, although many people have posted their tweaks or alternatives throughout the very long thread.
One downside - I believe the script that is mentioned there needs to be run at logon since the WSL subsystem seems to only want to run when a user is logged in. I've found that attempting to run a WSL service or instance through Windows OpenSSH results in that instance/service shutting down soon after the SSH session is closed, unless the user is already logged into Windows with a WSL instance opened.
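As a rough sketch of what such a logon script does (the port list below is an assumption; the canonical script in the issue thread is more robust about interface detection):

```powershell
# Detect the current WSL2 address (assumes a single address on the WSL NIC).
$wslIp = (wsl hostname -I).Trim().Split(" ")[0]

# Forward a fixed set of ports from the Windows host into WSL2.
$ports = @(11311)   # e.g. a pinned ROS master port; extend as needed
foreach ($port in $ports) {
    netsh interface portproxy delete v4tov4 listenport=$port listenaddress=0.0.0.0
    netsh interface portproxy add    v4tov4 listenport=$port listenaddress=0.0.0.0 `
        connectport=$port connectaddress=$wslIp
}
```

This only helps for ports you know in advance, which is exactly why randomly assigned TCPROS ports make Option 1 awkward.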
Option 2 - WSL1
I would also propose that, assuming it fits your workflow and if the ROS works on it (it may not, given the device access you need, but not sure), you can simply use WSL1 instead of WSL2 to avoid this. You can try this out by:
Back up your existing distro (from PowerShell or cmd): wsl --export <DistroName> <FileName>
Import the backup into a new WSL1 instance: wsl --import <NewDistroName> <InstallLocation> <FileNameOfBackup> --version 1
It's possible to simply change versions in place, but I tend to like to have a backup anyway before doing it, and as long as you are backing up, you may as well leave the original in place.
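Put together, assuming a distro named Ubuntu-20.04 (all names and paths below are examples):

```powershell
wsl --export Ubuntu-20.04 C:\backups\ubuntu.tar
wsl --import Ubuntu-WSL1 C:\wsl\ubuntu-wsl1 C:\backups\ubuntu.tar --version 1
wsl -d Ubuntu-WSL1   # start the new WSL1 copy; the original stays untouched
```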

Debugging poor I/O performance on OpenStack block device (OpenStack kolla:queen)

I have an OpenStack VM that is getting really poor performance on its root disk - less than 50MB/s writes. My setup is 10 GbE, OpenStack deployed using kolla, the Queens release, with storage on Ceph. I'm trying to follow the path through the infrastructure to identify where the performance bottleneck is, but getting lost along the way:
nova show lets me see which hypervisor (an Ubuntu 16.04 machine) the VM is running on but once I'm on the hypervisor I don't know what to look at. Where else can I look?
Thank you!
My advice is to check the performance between the host (hypervisor) and Ceph first. If you are able to create a Ceph block device, you can map it with the rbd command, create a filesystem on it, and mount it. Then you can measure the device I/O performance with tools such as iostat or sar (from the sysstat package), iotop, dstat, or vmstat.
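A sketch of that workflow on the hypervisor (the pool and image names are assumptions, and this needs the Ceph client tools plus root):

```shell
# Create, map, and mount a throwaway RBD image in a hypothetical 'volumes' pool.
rbd create volumes/iotest --size 10240     # 10 GiB test image
rbd map volumes/iotest                     # prints the device, e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/iotest && mount /dev/rbd0 /mnt/iotest

# Simple sequential-write check to compare against the in-guest ~50 MB/s.
dd if=/dev/zero of=/mnt/iotest/bench bs=1M count=1024 conv=fsync

# In another terminal, watch device-level latency and utilization meanwhile.
iostat -x 1 rbd0
```

If the hypervisor-side RBD throughput is similarly low, the bottleneck is in Ceph or the network rather than in QEMU/libvirt; if it is much faster, look at the VM's disk bus, cache mode, and any I/O throttling on the flavor or volume.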

How many servers are needed to install an OpenStack or CloudStack cluster?

If we don't use a simulator or DevStack but build a real production cluster, how many hosts (or nodes) will we need?
CloudStack: 2 (management servers and DBs) + 2 (hypervisors) + 1 storage (if you do not have a storage device, you may need a server for NFS or iSCSI).
Total: 5 servers for a minimal environment with load balancing and HA.
OpenStack: It depends on the components you have chosen. All components can be installed on a single server, but you need one more server for load balancing and HA.
Total: 2 servers for a minimal environment with load balancing and HA.
When planning a cloud platform, the total resource = ManagementServer*2 + Hypervisor*N + Storage (server or storage device).
The number of hypervisors is driven by the total CPUs and memory of the VMs you plan to run.
The storage capacity is the total volume size you want to allocate for all VMs.
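As a back-of-envelope illustration of sizing the hypervisor count (all workload numbers below are hypothetical):

```shell
# Plan: 40 VMs with 2 vCPUs and 8 GB RAM each; hosts have 16 cores
# (assuming a 4:1 CPU overcommit ratio) and 256 GB RAM. The hypervisor
# count is the larger of the CPU-driven and memory-driven counts, rounded up.
vms=40; vcpu=2; overcommit=4; cores=16; vm_gb=8; host_gb=256
cpu_hosts=$(( (vms*vcpu + overcommit*cores - 1) / (overcommit*cores) ))
mem_hosts=$(( (vms*vm_gb + host_gb - 1) / host_gb ))
hypervisors=$(( cpu_hosts > mem_hosts ? cpu_hosts : mem_hosts ))
echo "hypervisors needed: $hypervisors"
```

With these example numbers both dimensions land on 2 hosts; add the two management servers and the storage box on top of that, per the formula above.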
For CloudStack, unlike OpenStack, you can use just one physical machine or server for the installation of both the management server and the agent (for execution of VMs), and yes, the database and NFS shares can be set up on the same machine too (assuming you need it for testing purposes).
You can follow the quick installation guide of Cloudstack here: http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.11/qig.html
I have personally installed using the above documentation and can assure you the above works fine with CentOS 7.4 too. For more complex setup and architecture you can find more documentation here: http://docs.cloudstack.apache.org. Just be sure to have some free IPs available ;)

Is there any reason to encrypt a VM? (already have FileVault encryption enabled on my Mac)

I currently have FileVault enabled for encryption on my Mac. Is there any need to encrypt a virtual machine that is installed on my Mac? Or does FileVault cover that?
I am running macOS Sierra 10.12 (host), and using VirtualBox for my VMs. The guest OS is Ubuntu 14.04. If someone got access to my Mac, which is protected by FileVault, would they have any way of accessing my Ubuntu VM?
As @manannan mentioned, it is not easy to give a proper answer to your question given the little context you provided. But I'll give it a try - feel free to edit your question and I'll update the answer.
With File Vault activated on your Mac, every file that is written to the encrypted disk will be encrypted. A virtual machine consists of one or more virtual disks which are stored on the Mac's file system. Hence they are encrypted on your Mac and can't be read by someone who disassembles your Mac and tries to read your hard disk.
If you export the virtual disk file (for example onto a non-encrypted USB flash drive), the virtual machine is no longer encrypted - so for a safe transfer to another user, you have to come up with your own encryption.
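For that export case, one simple option is to symmetrically encrypt the image before transfer. A sketch with openssl, where the file name and passphrase are placeholders (the echo line just stands in for a real exported disk):

```shell
# Stand-in for an exported VirtualBox disk image (hypothetical name).
echo "disk image placeholder" > ubuntu-vm.vdi

# Symmetric AES-256 encryption; share the passphrase out of band.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in ubuntu-vm.vdi -out ubuntu-vm.vdi.enc -pass pass:change-me

# The recipient decrypts with:
# openssl enc -d -aes-256-cbc -pbkdf2 \
#     -in ubuntu-vm.vdi.enc -out ubuntu-vm.vdi -pass pass:change-me
```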
Having the virtual machine encrypt the disk "again" would enable you to safely export and transfer your disk image to another user but would also add overhead to using the machine, as the virtualized OS would have to take care of encryption on top of File Vault.
For a customer project which handled very sensitive data, we once created a virtual machine inside a TrueCrypt-encrypted folder on the Mac. That way you basically get the same effect as having the virtualized OS encrypt its hard disks - but I think that TrueCrypt is no longer actively maintained, and I don't know any alternative product.
Hope this helps.

BYON with physical machines and a global SLA: how to ensure that applications are not all installed on the same machine

I have a scenario like this:
I have applications A, B, C, D, ... and physical machines M, N, O, P, Q, ...
I use BYON to manage the physical machines. Because the physical machines are "strong", I want to deploy several applications on each, so I set the SLA to global. At this point I have a question: when application A is deployed on machine M and I then deploy applications B, C, D, ..., will applications A, B, C, D, ... be installed on machine M only, rather than on machines N, O, P, Q, ...? (In that case, machine M's load would be very high.)
Does this problem exist, and if so, how can I resolve it? Thank you very much!
It's possible to limit the number of services on a specific machine by specifying the memory required for each service. As part of the global isolation SLA you can set the amount of memory required by each service, so when there isn't enough memory left on the machine, the next one will be used.
The syntax is:
isolationSLA {
    global {
        instanceCpuCores 0
        instanceMemoryMB 128 // each instance needs 128MB allocated for it on the VM
        useManagement true // enables installing services on the management server; defaults to false
    }
}
Please note that the above code also allows services to be installed on the management machine itself, which you can set to false.
A more detailed explanation is available here, under "Isolation SLA".
