Microsoft VPN application triggering: connection OK, but browser doesn't use VPN

My code, following the documentation:
Set-VpnConnection -Name fr1-ovpn-tcp.pointtoserver.com -SplitTunneling $True
Set-VpnConnection -Name fr1-ovpn-tcp.pointtoserver.com -IdleDisconnectSeconds 10
Add-VpnConnectionTriggerApplication -Name fr1-ovpn-tcp.pointtoserver.com -ApplicationID "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"
Get-VpnConnection -Name fr1-ovpn-tcp.pointtoserver.com
Result:
Name : fr1-ovpn-tcp.pointtoserver.com
ServerAddress : fr1-ovpn-tcp.pointtoserver.com
AllUserConnection : False
Guid : {54F2D82B-1205-4998-86BB-956E0D18BC66}
TunnelType : Automatic
AuthenticationMethod : {Eap}
EncryptionLevel : Optional
L2tpIPsecAuth : Certificate
UseWinlogonCredential : False
EapConfigXmlStream : #document
ConnectionStatus : Disconnected
RememberCredential : True
SplitTunneling : True
DnsSuffix :
IdleDisconnectSeconds : 10
Get-VpnConnectionTrigger -ConnectionName fr1-ovpn-tcp.pointtoserver.com
Result:
ConnectionName : fr1-ovpn-tcp.pointtoserver.com
ApplicationID : {C:\Program Files (x86)\Google\Chrome\Application\chrome.exe}
Everything seems OK: when I start the browser (any browser behaves the same), the connection is established automatically. But when I use
https://ipcim.com/en/?p=where
to check the browser's apparent location, I get my home country's IP address and not, in this example, an IP address from France.
When I use the VPN as usual, the IP address shows that I am in France.
What am I missing?

You specified -SplitTunneling $True, so only traffic destined for the tunnel's own network goes through the VPN; all other traffic, including your browsing, goes out via your regular internet connection. To send the browser's traffic through the VPN, disable split tunneling (Set-VpnConnection -Name <connection_name> -SplitTunneling $False) or add routes for the destinations you want tunneled.

Azure PowerShell - Problem with assigning a private IP address to a NIC

Here is my script:
$my_vnet = Get-AzVirtualNetwork -Name <vnet_name>
$my_subnet = Get-AzVirtualNetworkSubnetConfig -Name <subnet_name> -VirtualNetwork $my_vnet
# Fetch the existing NIC first
$my_nic = Get-AzNetworkInterface -Name <nic_name> -ResourceGroupName <rg_name>
Add-AzNetworkInterfaceIpConfig -Name ext-ipconfig6 -NetworkInterface $my_nic -Subnet $my_subnet -PrivateIpAddress 10.0.0.6
There is no error when running the script. If I use the following command to check, I do see the IP object created.
Get-AzNetworkInterfaceIpConfig -Name ext-ipconfig6 -NetworkInterface $my_nic
...
{
"Name": "ext-ipconfig6",
"PrivateIpAddress": "10.0.0.6",
"PrivateIpAllocationMethod": "Static",
"Subnet": {
"Id": "blabla"
},
"Primary": false
}
However, I don't see it created on the portal.
Comparing with others created in the portal, I see other properties like Etag, Id, ProvisioningState, etc. Where did I go wrong?
Thanks!
You are just creating the network interface IP configuration, not applying it to the existing network interface itself.
I tested the same script and it reproduced the behavior you describe.
To fix this, pipe the Add-AzNetworkInterfaceIpConfig command into Set-AzNetworkInterface, like below:
$vnet = Get-AzVirtualNetwork -Name "ansuman-vnet" -ResourceGroupName "ansumantest"
$nic= Get-AzNetworkInterface -Name "ansumannic01" -ResourceGroupName "ansumantest"
Add-AzNetworkInterfaceIpConfig -Name "IPConfig2" -NetworkInterface $nic -Subnet $vnet.Subnets[0] -PrivateIpAddress "10.0.0.7" | Set-AzNetworkInterface

Unable to verify the first certificate - kibana/elasticsearch

I'm trying to configure X-Pack for Elasticsearch/Kibana. I've activated the trial license for Elasticsearch, configured X-Pack for Kibana/Elasticsearch, and generated ca.crt, node1-elk.crt, and node1-elk.key, as well as kibana.key and kibana.crt. If I test with curl against Elasticsearch using the kibana user and password plus the ca.crt, it works like a charm. But if I try to access Kibana from the GUI, it says "Server is not ready yet", and the logs show "unable to verify the first certificate":
{"type":"log","@timestamp":"2021-11-16T04:41:09-05:00","tags":["error","savedobjects-service"],"pid":13250,"message":"Unable to retrieve version information from Elasticsearch nodes. unable to verify the first certificate"}
My configuration:
kibana.yml
server.name: "my-kibana"
server.host: "0.0.0.0"
elasticsearch.hosts: ["https://0.0.0.0:9200"]
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key
server.ssl.certificateAuthorities: ["/etc/kibana/certs/ca.crt"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "kibana"
elasticsearch.yml
node.name: node1
network.host: 0.0.0.0
discovery.seed_hosts: [ "0.0.0.0" ]
cluster.initial_master_nodes: ["node1"]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/certs/node1.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/node1.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.crt" ]
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/node1.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/node1.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.crt" ]
curl testing:
[root@localhost kibana]# curl -XGET https://0.0.0.0:9200/_cat/nodes?v -u kibana_system:kibana --cacert /etc/elasticsearch/certs/ca.crt
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.100.102 23 97 3 0.00 0.02 0.08 cdfhilmrstw * node1
I don't know what more to do here:
[root@localhost kibana]# curl -XGET https://0.0.0.0:9200/_license -u kibana_system:kibana --cacert /etc/elasticsearch/certs/ca.crt
{
"license" : {
"status" : "active",
"uid" : "872f0ad0-723e-43c8-b346-f43e2707d3de",
"type" : "trial",
"issue_date" : "2021-11-08T18:26:15.422Z",
"issue_date_in_millis" : 1636395975422,
"expiry_date" : "2021-12-08T18:26:15.422Z",
"expiry_date_in_millis" : 1638987975422,
"max_nodes" : 1000,
"issued_to" : "elasticsearch",
"issuer" : "elasticsearch",
"start_date_in_millis" : -1
}
}
Thank you for your help
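One detail worth double-checking: in kibana.yml, the CA that Kibana uses to verify Elasticsearch's HTTPS certificate is configured under the elasticsearch.ssl.* settings, which are separate from the server.ssl.* block shown above. A minimal sketch (file paths taken from the question; the host name is an assumption and should match a name in the node certificate):

```yaml
# kibana.yml - elasticsearch.ssl.* governs Kibana's outgoing connection to
# Elasticsearch; server.ssl.* only covers Kibana's own HTTPS endpoint.
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca.crt"]
# Use a host name that appears in the node certificate's SAN/CN:
elasticsearch.hosts: ["https://node1:9200"]
```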

Unable to connect Corda node to Postgres with SSL

My Postgres DB in GCP (Google Cloud Platform) only accepts connections over SSL.
I tried the below inside my node.conf without any success:
dataSourceProperties {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
dataSource.url = "jdbc:postgresql://db-private-ip:5432/my_node"
dataSource.ssl = true
dataSource.sslMode = verify-ca
dataSource.sslRootCert = "/opt/corda/db-certs/server-ca.pem"
dataSource.sslCert = "/opt/corda/db-certs/client-cert.pem"
dataSource.sslKey = "/opt/corda/db-certs/client-key.pem"
dataSource.user = my_node_db_user
dataSource.password = my_pass
}
I'm sure that the keys (sslMode, sslRootCert, sslCert, and sslKey) are accepted in node.conf (even though they are not mentioned anywhere in the Corda docs), because the logs showed no errors saying those keys were unrecognized.
I get this error when I try to start the node:
[ERROR] 21:58:48+0000 [main] pool.HikariPool. - HikariPool-1 - Exception during pool initialization. [errorCode=zmhrwq, moreInformationAt=https://errors.corda.net/OS/4.3/zmhrwq]
[ERROR] 21:58:48+0000 [main] internal.NodeStartupLogging. - Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database.: Could not connect to the database. Please check your JDBC connection URL, or the connectivity to the database. [errorCode=18t70u2, moreInformationAt=https://errors.corda.net/OS/4.3/18t70u2]
I tried adding ?ssl=true to the end of the data source URL as suggested in (Azure Postgres Database requires SSL Connection from Corda) but that didn't fix the problem.
Also, with the same values, I'm able to use the psql client to connect from my VM to the DB:
psql "sslmode=verify-ca sslrootcert=server-ca.pem sslcert=client-cert.pem sslkey=client-key.pem hostaddr=db-private-ip user=some-user dbname=some-pass"
It turns out the JDBC driver cannot read the key from a PEM file; it has to be converted to a DER file using:
openssl pkcs8 -topk8 -inform PEM -in client-key.pem -outform DER -nocrypt -out client-key.der
chmod 400 client-key.der
chown corda:corda client-key.der
More details here: https://github.com/pgjdbc/pgjdbc/issues/1364
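As a local sanity check, the PEM-to-DER conversion can be verified with openssl alone; the throwaway key generated below stands in for the real client-key.pem:

```shell
# Generate a throwaway RSA key standing in for client-key.pem
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out client-key.pem
# Convert it to the unencrypted PKCS#8 DER form the JDBC driver expects
openssl pkcs8 -topk8 -inform PEM -in client-key.pem -outform DER -nocrypt -out client-key.der
# Confirm the DER file parses back as a valid PKCS#8 key
openssl pkcs8 -inform DER -nocrypt -in client-key.der -out /dev/null && echo "DER key OK"
```

If the last command prints DER key OK, the file is a well-formed unencrypted PKCS#8 key of the kind pgjdbc can load.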
So the correct config should look like this:
dataSourceProperties {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
dataSource.url = "jdbc:postgresql://db-private-ip:5432/db-name"
dataSource.ssl = true
dataSource.sslMode = verify-ca
dataSource.sslRootCert = "/opt/corda/db-certs/server-ca.pem"
dataSource.sslCert = "/opt/corda/db-certs/client-cert.pem"
dataSource.sslKey = "/opt/corda/db-certs/client-key.der"
dataSource.user = db-user-name
dataSource.password = db-user-pass
}

HTTP service not starting with error 1009

I was trying to print a document for one of my games, but the page viewer couldn't see the printer, so I checked the Print Spooler service:
C:\WINDOWS\system32>sc qc spooler
[SC] QueryServiceConfig SUCCESS
SERVICE_NAME: spooler
TYPE : 110 WIN32_OWN_PROCESS (interactive)
START_TYPE : 2 AUTO_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : C:\WINDOWS\System32\spoolsv.exe
LOAD_ORDER_GROUP : SpoolerGroup
TAG : 0
DISPLAY_NAME : Print Spooler
DEPENDENCIES : RPCSS
: http
SERVICE_START_NAME : LocalSystem
C:\WINDOWS\system32>sc query spooler
SERVICE_NAME: spooler
TYPE : 110 WIN32_OWN_PROCESS (interactive)
STATE : 1 STOPPED
WIN32_EXIT_CODE : 1068 (0x42c)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
C:\WINDOWS\system32>
I tried to start it, and this happened:
C:\WINDOWS\system32>net start spooler
System error 1068 has occurred.
The dependency service or group failed to start.
C:\WINDOWS\system32>
OK, so I checked the dependencies:
C:\WINDOWS\system32>sc qc rpcss
[SC] QueryServiceConfig SUCCESS
SERVICE_NAME: rpcss
TYPE : 20 WIN32_SHARE_PROCESS
START_TYPE : 2 AUTO_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : C:\WINDOWS\system32\svchost.exe -k rpcss
LOAD_ORDER_GROUP : COM Infrastructure
TAG : 0
DISPLAY_NAME : Remote Procedure Call (RPC)
DEPENDENCIES : RpcEptMapper
: DcomLaunch
SERVICE_START_NAME : NT AUTHORITY\NetworkService
C:\WINDOWS\system32>sc query rpcss
SERVICE_NAME: rpcss
TYPE : 20 WIN32_SHARE_PROCESS
STATE : 4 RUNNING
(NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
C:\WINDOWS\system32>
OK, RPCSS is good; on to the next one:
C:\WINDOWS\system32>sc qc http && sc query http
[SC] QueryServiceConfig SUCCESS
SERVICE_NAME: http
TYPE : 1 KERNEL_DRIVER
START_TYPE : 3 DEMAND_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : system32\drivers\HTTP.sys
LOAD_ORDER_GROUP :
TAG : 0
DISPLAY_NAME : HTTP Service
DEPENDENCIES :
SERVICE_START_NAME :
SERVICE_NAME: http
TYPE : 1 KERNEL_DRIVER
STATE : 1 STOPPED
WIN32_EXIT_CODE : 1009 (0x3f1)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
C:\WINDOWS\system32>
OK, seeing it stopped, I tried to start it again:
C:\WINDOWS\system32>net start http
System error 1009 has occurred.
The configuration registry database is corrupt.
C:\WINDOWS\system32>
So I ran SFC to try to fix this, but...
C:\WINDOWS\system32>sfc /scannow
Beginning system scan. This process will take some time.
Beginning verification phase of system scan.
Verification 100% complete.
Windows Resource Protection did not find any integrity violations.
C:\WINDOWS\system32>
A fat lot of help that is; it can't even fix something so obviously wrong.
So this is where I ask the community for help; I don't know what to do past this point. Help is very much appreciated.
In my case, I had a sub-key under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslBindingInfo that was missing information. Normally each sub-key, such as 0.0.0.0:40015, has values like "AppId", "DefaultFlags", etc.
I had one sub-key with no values under it. I deleted that "empty" key and HTTP was able to start up.

How can I specify server-side encryption of Amazon S3 objects with PowerShell?

Would someone explain how to enable Amazon S3 server-side encryption in a PowerShell script? I'm using the sample code below, but when I check encryption in the AWS Console or CloudBerry S3 Explorer Pro, the encryption type is still set to 'none'. Using AWS / CloudBerry to do this manually after files are uploaded isn't feasible because the script is to be deployed to 200+ servers, each with its own bucket in S3. Here's a snippet of code from the script:
$TestFile="testfile.7z"
$S3ObjectKey = "mytestfile.7z"
#Create Amazon PutObjectRequest.
$AmazonS3 = [Amazon.AWSClientFactory]::CreateAmazonS3Client($S3AccessKeyID,$S3SecretKeyID)
$S3PutRequest = New-Object Amazon.S3.Model.PutObjectRequest
$S3PutRequest.BucketName = $S3BucketName
$S3PutRequest.Key = $S3ObjectKey
$S3PutRequest.FilePath = $TestFile
$S3Response = $AmazonS3.PutObject($S3PutRequest)
I've tried inserting the following without success (before the $S3Response line):
$S3PutRequest.ServerSideEncryption
When the above is added, I get this message in the output, but the file is still not tagged as encrypted on S3:
MemberType : Method
OverloadDefinitions : {Amazon.S3.Model.PutObjectRequest WithServerSideEncryptionMethod(Amazon.S3.Model.ServerSideEncryptionMethod encryption)}
TypeNameOfValue : System.Management.Automation.PSMethod
Value : Amazon.S3.Model.PutObjectRequest WithServerSideEncryptionMethod(Amazon.S3.Model.ServerSideEncryptionMethod encryption)
Name : WithServerSideEncryptionMethod
IsInstance : True
Can anyone tell me what I'm doing wrong? Many thanks in advance.
You should add:
$S3PutRequest.WithServerSideEncryptionMethod([Amazon.S3.Model.ServerSideEncryptionMethod]::AES256)
Or:
$S3PutRequest.ServerSideEncryptionMethod = [Amazon.S3.Model.ServerSideEncryptionMethod]::AES256
If you are using CloudBerry, it has its own PowerShell snap-in:
Add-PSSnapin CloudBerryLab.Explorer.PSSnapin
$s3 = Get-CloudS3Connection -Key XXXXXXX -Secret YYYYYYY
$destFolder = $s3 | Select-CloudFolder -path "mybucket"
$local = Get-CloudFilesystemConnection
$srcFolder = $local | Select-CloudFolder -path "c:\myzips"
$srcFolder | Copy-CloudItem $destFolder -filter "testfile.7z" -SSE
Notice the -SSE parameter in the Copy-CloudItem command.
Some helpful examples can be found on their blog: http://blog.cloudberrylab.com/search?q=powershell
