BizTalk Server - Unable to assign logon user/password for host instance (via PowerShell)

I am trying to create a BizTalk host instance using the PowerShell function CreateBizTalkHostInstance as found here (all of them are the same):
sandroasp / BizTalk-Server-Resources/PowerShell-scripts/adm-bts2020-Configure-Host-Host-Instances-and-Handlers/ConfigureBizTalkServerEnvAccordingHostAndHostInstancesBestPractices_BTS2020.ps1 (github.com)
BizTalk Server Best Practices: Create and Configure BizTalk Server Host and Host Instances (https://social.technet.microsoft.com/)
# Function to Create Host Instance
function CreateBizTalkHostInstance([string]$hostName, [string]$serverName, [string]$username, [string]$password)
{
    try
    {
        # Map the host to the server
        [System.Management.ManagementObject]$objServerHost = ([WmiClass]"root/MicrosoftBizTalkServer:MSBTS_ServerHost").CreateInstance()
        $objServerHost["HostName"] = $hostName
        $objServerHost["ServerName"] = $serverName
        $objServerHost.Map()

        # Create the host instance and install it under the given credentials
        [System.Management.ManagementObject]$objHostInstance = ([WmiClass]"root/MicrosoftBizTalkServer:MSBTS_HostInstance").CreateInstance()
        $name = "Microsoft BizTalk Server " + $hostName + " " + $serverName
        $objHostInstance["Name"] = $name
        Write-Host "$username : $password"
        $objHostInstance.Install($username, $password, $true)

        Write-Host "HostInstance $hostName was mapped and installed successfully. Mapping created between Host: $hostName and Server: $serverName" -Fore DarkGreen
        $LogFile.writeline("HostInstance $hostName was mapped and installed successfully. Mapping created between Host: $hostName and Server: $serverName")
    }
    catch [System.Management.Automation.RuntimeException]
    {
        if ($_.Exception.Message.Contains("Another object with the same key properties already exists.") -eq $true)
        {
            Write-Host "$hostName host instance can't be created because another object with the same key properties already exists." -Fore DarkRed
            $LogFile.writeline("$hostName host instance can't be created because another object with the same key properties already exists.")
        }
        else
        {
            Write-Error "$hostName host instance on server $serverName could not be created: $($_.Exception.ToString())"
            $LogFile.writeline("$hostName host instance on server $serverName could not be created: $($_.Exception.ToString())")
        }
    }
}
It creates the host instance successfully; however, it doesn't assign the passed username/password for the logon, and it shows the instance as not configured.
Error: TEST10_PX host instance on server could not be created: Exception calling "Install" : "Provided credentials are not valid. Verify logon and password.".Exception.ToString()
Did anyone come across this error and know how to resolve it? (The user has SQL database access, and I can manually assign the credentials and start the service.)
The expectation is to get the instance configured with the provided username/password and to start the service.

I had the same issue on BizTalk 2020. The problem was a missing new fourth parameter, 'IsGmsaAccount', as mentioned in the GitHub script from Sandro Pereira that you referenced in your question.
The script in your question is the old one, in case you are also working on BizTalk 2020.
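For illustration, a minimal sketch of the changed call, assuming (as in the BizTalk 2020 script referenced above) that IsGmsaAccount is passed as the fourth argument to Install:
# BizTalk 2020: Install takes a fourth argument, IsGmsaAccount.
# Pass $false for a regular domain account with a password, $true for a gMSA.
$objHostInstance.Install($username, $password, $true, $false)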

Related

Error "The parameter KeyVault Certificate has an invalid value" with App Service Certificate

I have created in my Azure Key Vault a secret containing an SSL certificate converted from .pfx to a base64 string. Now I am trying to use it to create a certificate linked to an App Service using a Bicep file.
resource kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' = {
  name: 'mykeyvault'
  location: resourceGroup().location
  properties: {
    tenantId: tenantId
    sku: {
      name: 'standard'
      family: 'A'
    }
    enabledForTemplateDeployment: true
    accessPolicies: [...]
  }
}

resource sslCertificateSecret 'Microsoft.KeyVault/vaults/secrets@2021-06-01-preview' = {
  name: '${kv.name}/sslcert'
  properties: {
    attributes: {
      enabled: true
    }
    value: <base64_string_ssl>
    contentType: 'application/x-pkcs12'
  }
}

resource appServicePlan 'Microsoft.Web/serverfarms@2021-01-15' = {
  name: 'myServiceplan'
  location: resourceGroup().location
  kind: 'linux'
  properties: {
    reserved: true
  }
  sku: {
    name: 'B1'
  }
}

resource sslCertificate 'Microsoft.Web/certificates@2021-01-15' = {
  name: 'myCertificate'
  location: resourceGroup().location
  properties: {
    keyVaultId: <my_keyvaultId>
    keyVaultSecretName: <my_keyvaultCertificateSecretName>
    serverFarmId: appServicePlan.id
  }
}
I also tried importing the certificate manually into the key vault and re-exporting it to make sure the base64 string was correct, and it seemed fine.
However, I am getting the error "The parameter KeyVault Certificate has an invalid value."
Do you have an idea of what I am missing?
Azure Key Vault is a solution for secure storage of confidential information.
There are two ways to authenticate a web application against Key Vault. The better approach is to authenticate the web application using a certificate, and that certificate can itself be deployed directly from Key Vault. This means neither the confidential information nor the keys to the vault are ever disclosed.
Please check the below steps:
Click the link below for the steps to create a certificate linked to an App Service from Key Vault.
Loading the access certificate for your application into KeyVault
Check the file formats of the certificates, which are a major building block when importing certificates.
PEM and PFX are the certificate formats supported by the Azure Key Vault resource.
• The .pem file format consists of one or more X509 certificate files.
• A server certificate (issued for your domain), a matching private key, and an optional intermediate CA can all be stored in a single file using the .pfx archive file format.
The first step is to convert any certificates used by the App Service to (and label them as) application/x-pkcs12. It might be possible to resolve the issue by reimporting the certificate from a .pfx file with the --password parameter (az keyvault certificate import), and then importing it from the key vault to the web app. You could use this blog as a resource.
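A rough PowerShell equivalent of that reimport, assuming the AzureRM Key Vault cmdlets used later in this answer; the vault name, certificate name, and file path below are placeholders:
# Reimport the .pfx (together with its password) as a Key Vault certificate
$pfxPassword = ConvertTo-SecureString -String 'PFX_PASSWORD' -AsPlainText -Force
Import-AzureKeyVaultCertificate -VaultName 'mykeyvault' -Name 'sslcert' -FilePath 'F:\KeyVault\PrivateCertificate.pfx' -Password $pfxPassword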
Also, check whether the certificate and the Key Vault are still in their original resource group.
References:
Azure Key Vault Import Certificates (provided by Microsoft) and the GitHub source for Deploying an Azure Web App Certificate using Key Vault
If you missed the certificate policy on upload, and if you are generating new certificates anyway, try generating them in the key vault itself:
# Log in and pick the vault and certificate names
$credential = Get-Credential
Login-AzureRmAccount -Credential $credential
$vaultName = 'my-vault-full-of-keys'
$certificateName = 'my-new-cert'

# Create a self-signed certificate policy and have Key Vault generate the certificate
$policy = New-AzureKeyVaultCertificatePolicy -SubjectName "CN=mememe.me" -IssuerName Self -ValidityInMonths 120
Add-AzureKeyVaultCertificate -VaultName $vaultName -Name $certificateName -CertificatePolicy $policy
"The parameter KeyVault Certificate has an invalid value"
Please check that you have granted the resource provider permission to access the key vault.
Use PowerShell to allow the 'Microsoft.Web' resource provider to access the Azure Key Vault directly:
Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId AZURE_SUBSCRIPTION_ID
# abfa0a7c-a6b6-4736-8310-5855508787cd is the service principal of the 'Microsoft.Web' (App Service) resource provider
Set-AzureRmKeyVaultAccessPolicy -VaultName KEY_VAULT_NAME -ServicePrincipalName abfa0a7c-a6b6-4736-8310-5855508787cd -PermissionsToSecrets get
Sometimes the problem lies in how the certificate was uploaded to the Key Vault: if using PowerShell, give the full path instead of a relative path to the certificate when uploading.
$pfxFilePath = "PFX_CERTIFICATE_FILE_PATH" # Change this path
Example:
$pfxFilePath = "F:\KeyVault\PrivateCertificate.pfx"
$pwd = "[2+)t^BgfYZ2C0WAu__gw["
$flag = [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable
$collection = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$collection.Import($pfxFilePath, $pwd, $flag)
$pkcs12ContentType = [System.Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12
$clearBytes = $collection.Export($pkcs12ContentType)
$fileContentEncoded = [System.Convert]::ToBase64String($clearBytes)
$secret = ConvertTo-SecureString -String $fileContentEncoded -AsPlainText -Force
$secretContentType = 'application/x-pkcs12'
Set-AzureKeyVaultSecret -VaultName akurmitestvault -Name keyVaultCert -SecretValue $Secret -ContentType $secretContentType # Change the Key Vault name and secret name

FTP_INCORRECT_HOST_KEY in N/SFTP Module

While creating the connection from NetSuite to SFTP using the N/SFTP module, I'm facing an error that states:
"FTP_INCORRECT_HOST_KEY","message":"Provided host key does not match
remote server's fingerprint."
I have tried checking with my server team, but no luck. Can anyone suggest how to resolve this, or how I can get an authorized fingerprint host key from the server?
I have tried the SuiteScript 2.0 module (N/sftp) with the help of the tool mentioned below.
https://ursuscode.com/netsuite-tips/suitescript-2-0-sftp-tool/
/**
 * @NApiVersion 2.x
 * @NScriptType ScheduledScript
 */
define(['N/sftp', 'N/file', 'N/runtime'], function(sftp, file, runtime) {
    function execute(context)
    {
        var myPwdGuid = "Encrypted password by GUID";
        var myHostKey = "Some long Host key around 380 characters";

        // establish connection to remote SFTP server
        var connection = sftp.createConnection({
            username: 'fuel_integration',
            passwordGuid: myPwdGuid, // references var myPwdGuid
            url: '59.165.215.45', // Example IP
            directory: '/sftproot/TaleoSync',
            restrictToScriptIds: runtime.getCurrentScript().id,
            restrictToCurrentUser: false,
            hostKey: myHostKey // references var myHostKey
        });

        // download the file from the remote server
        var downloadedFile = connection.download({
            directory: '/sftproot/TaleoSync',
            filename: 'Fuel Funnel Report_without filter.csv'
        });
        downloadedFile.folder = ;
        downloadedFile.save();
        context.response.write(' Downloaded "Fuel Funnel Report_without filter" to fileCabinet');
    }
    return {
        execute: execute
    };
});
I expect to create a connection between SFTP and NetSuite, download a file from the SFTP server, and place it in the NetSuite File Cabinet.
A couple of things:
restrictToScriptIds : runtime.getCurrentScript().id,
restrictToCurrentUser : false,
are not part of the createConnection signature. Those should have been used when you created a Suitelet to vault your credential.
However, the host key complaint may be dealt with by using ssh-keyscan from a Linux box:
ssh-keyscan 59.165.215.45
It should reply with the server name, then ssh-rsa, then a long base64 string. Copy that string so it ends up in myHostKey, and set the hostKeyType to RSA.
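If you are working from Windows instead, a minimal PowerShell sketch of the same idea, assuming the OpenSSH client (which provides ssh-keyscan) is installed; the IP is the example one from the question:
# Ask the server for its RSA host key and keep only the base64 part for hostKey
$line = ssh-keyscan -t rsa 59.165.215.45 | Select-Object -First 1
$hostKey = ($line -split ' ')[2]   # line format: "<host> ssh-rsa <base64 key>"
$hostKey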

Terraform Provisioner "local-exec" not working as expected | VPC Peering Connection Accept issue

I'm unable to get the peering auto-accepted through the workaround mentioned in the link (Why am I getting a permissions error when attempting to auto_accept vpc peering in Terraform?) via the provisioner option.
See my Terraform code below. Can someone help me out?
provider "aws" {
region = "us-east-1"
profile = "default"
}
provider "aws" {
region = "us-east-1"
profile = "peer"
alias = "peer"
}
data "aws_caller_identity" "peer" {
provider = "aws.peer"
}
resource "aws_vpc_peering_connection" "service-peer" {
vpc_id = "vpc-123a56789bc"
peer_vpc_id = "vpc-YYYYYY"
peer_owner_id = "012345678901"
peer_region = "us-east-1"
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
Output I'm getting:
Error: Error applying plan:
1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: 1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: Unable to modify peering options. The VPC Peering Connection "pcx-08ebd316c82acacd9" is not active. Please set `auto_accept` attribute to `true`, or activate VPC Peering Connection manually.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure
Whereas I'm able to run the AWS CLI command successfully from a Linux shell, outside the Terraform template. Let me know if I'm missing something in the Terraform script.
Try moving your "local-exec" out into a separate null_resource and add a depends_on link to your VPC peering connection:
resource "null_resource" "peering-provision" {
depends_on = ["aws_vpc_peering_connection.service-peer"]
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
As Koe said, it may be better to use the auto_accept option.

How to create a VM with multiple NICs with Terraform on Openstack

I'm trying to use Terraform to deploy some machines on an OpenStack cloud.
I have no problem creating networks, subnets, keys, security groups and rules, floating IPs, and network ports (with security groups attached), but when I try to create compute instances with two NICs (the network ports created before), I get a syntax error with no hint on how to resolve it.
Could you help me, please?
My code is:
resource "openstack_compute_instance_v2" "RNGPR-REBOND-01" {
name = "RNGPR-REBOND-01"
flavor_name = "${var.MyFlavor}"
image_id = "${var.MyImage}"
key_pair = "${var.CODOB}-keypair"
network {
port = "${openstack_networking_port_v2.RNGPR-REBOND-01-eth0.id}"
access_network = true
}
network {
port = "${openstack_networking_port_v2.RNGPR-REBOND-01-eth1.id}"
}
floating_ip = "${openstack_compute_floatingip_v2.FloatingIp-RNGPR-REBOND-01.address}"
}
resource "openstack_compute_instance_v2" "RNGPR-LB-01" {
name = "RNGPR-LB-01"
flavor_name = "${var.MyFlavor}"
image_id = "${var.MyImage}"
key_pair = "${var.CODOB}-keypair"
network {
port = "${openstack_networking_port_v2.RNGPR-LB-01-eth0.id}"
}
network {
port = "${openstack_networking_port_v2.RNGPR-LB-01-eth1.id}"
}
floating_ip = "${openstack_compute_floatingip_v2.FloatingIp-RNGPR-LB-01.address}"
}
And the syntax error is:
Error applying plan:
2 error(s) occurred:
* openstack_compute_instance_v2.RNGPR-REBOND-01: Error creating OpenStack server: Invalid request due to incorrect syntax or missing required parameters.
* openstack_compute_instance_v2.RNGPR-LB-01: Error creating OpenStack server: Invalid request due to incorrect syntax or missing required parameters.
From my experience, these error messages aren't very helpful.
I would first set TF_LOG=DEBUG and OS_DEBUG=1 wherever you are running Terraform. This will print error messages that are actually beneficial.
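If you happen to run Terraform from PowerShell, a minimal sketch of setting those variables for the current session (use export on Linux/macOS):
# Enable verbose Terraform and OpenStack provider logging for this session
$env:TF_LOG   = "DEBUG"
$env:OS_DEBUG = "1"
terraform apply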
One time I was trying to create a server with a key pair that my user didn't have access to in OpenStack. I was receiving the same error and didn't figure it out until debugging was enabled.

Export Biztalk Dynamic Send Port Handler Name

When I export bindings for a Dynamic Send Port, no handler name is shown in the binding file. So is there an alternative method for that?
One suggestion by Stephen F March was to use a PowerShell script to set these.
From How to configure Send Handler for BizTalk 2013 Dynamic Send Port on deployment?
param
(
    [string] $bizTalkDbServer = ".",
    [string] $bizTalkDbName = "BizTalkMgmtDb",
    [string] $fileHostInstance = "SendingHost",
    [string] $sendPortName = "sm_dynamic_sp_test"
)

# Connect to the BizTalk management database through ExplorerOM
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.BizTalk.ExplorerOM") | Out-Null
$catalog = New-Object Microsoft.BizTalk.ExplorerOM.BtsCatalogExplorer
$catalog.ConnectionString = "SERVER=$bizTalkDbServer;DATABASE=$bizTalkDbName;Integrated Security=SSPI"

foreach ($sp in $catalog.SendPorts)
{
    if ($sp.Name -eq $sendPortName)
    {
        "Found send port $($sp.Name), analyzing send handler"
        foreach ($sh in $sp.DynamicSendHandlers)
        {
            if ($sh.SendHandler.TransportType.Name -eq "FILE")
            {
                if ($sh.SendHandler.Host.Name -ne $fileHostInstance)
                {
                    "Changing $($sh.Name) send handler to '$fileHostInstance' from '$($sh.SendHandler.Host.Name)'"
                    $sp.SetSendHandler("FILE", $fileHostInstance)
                }
                else
                {
                    "Send handler for $($sp.Name) is already '$fileHostInstance', ignoring..."
                }
            }
        }
    }
}
$catalog.SaveChanges()
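For example, assuming the script above is saved as Set-DynamicSendPortHandler.ps1 (a file name chosen here purely for illustration), it could be run like this:
.\Set-DynamicSendPortHandler.ps1 -bizTalkDbServer "." -bizTalkDbName "BizTalkMgmtDb" -fileHostInstance "SendingHost" -sendPortName "sm_dynamic_sp_test"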
Sandro Pereira also just posted a blog about it called BizTalk DevOps: How to configure Default Dynamic Send Port Handlers with PowerShell
For BizTalk 2013 there is nothing out of the box; you need to use PowerShell as listed above.
For BizTalk 2016 with CU8 (and above only) you will be able to get the host details in the binding file when you export.
For BizTalk 2020 use CU2; CU1 has an issue with this.
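On those versions the export itself can also be scripted; a minimal sketch using BTSTask (the application name and destination path are placeholders):
BTSTask ExportBindings /Destination:"C:\Bindings\MyApp.BindingInfo.xml" /ApplicationName:"MyApplication"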
