When I retrieve messages (with IMAP) the folder /home/user/mail is created, while my config files (Postfix and Dovecot) point to the /home/user/Maildir folder, and the Maildir folder contains the emails I want to get with IMAP!
I made a symbolic link between these folders, and now with Android I can retrieve messages stored in strange folders. With Thunderbird I cannot explore folders. I see that Dovecot creates an .imap folder where messages are finally stored, but not as expected.
Does somebody know what mistake I made?
Best Regards
It was due to a bad Dovecot config!
Lines added to /etc/dovecot/dovecot.conf:
ssl = yes
auth_mechanisms = plain login
mail_location = maildir:~/Maildir
passdb {
  args = dovecot
  driver = pam
}
protocols = imap
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
}
ssl_cert = &lt;/etc/ssl/certs/ssl-cert-snakeoil.pem
ssl_key = &lt;/etc/ssl/private/ssl-cert-snakeoil.key
userdb {
  driver = passwd
}
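The key line for the original problem is mail_location = maildir:~/Maildir, which points Dovecot at the same folder Postfix delivers to. After editing, a quick sanity check (assuming a systemd-based system):

doveconf -n | grep mail_location   # show the effective mail_location
sudo systemctl restart dovecot     # restart so the new config is picked up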
I am using the following code in my file:
require ABSPATH . 'wp-admin/includes/class-ftp.php';
$ftp = new ftp();
$ftp->Verbose = TRUE;
$ftp->LocalEcho = TRUE;
if (!$ftp->SetServer(self::$form_vars['host']))
{
    $ftp->quit();
    die("Setting server failed\n");
}
if (!$ftp->connect())
{
    die("Cannot connect\n");
}
if (!$ftp->login(self::$form_vars['user'], self::$form_vars['pass']))
{
    $ftp->quit(); // quit before die(), otherwise this call is never reached
    die("Login failed\n");
}
I am not able to log in; it says Login Failed.
The same FTP account works with FileZilla, but not using the default WordPress FTP class in this script.
It also gives the following error:
GET < 421-Sorry, cleartext sessions and weak ciphers are not accepted on this server.
421 Please reconnect using TLS security mechanisms.
Any solution?
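The 421 response says the server rejects cleartext sessions, while the bundled class-ftp.php speaks plain FTP only. As one possible direction (my sketch, not from the thread), PHP's built-in ftp_ssl_connect() performs an explicit-FTPS handshake; $host, $user and $pass stand in for the form values:

$conn = ftp_ssl_connect($host, 21); // explicit FTPS instead of cleartext FTP
if ($conn === false)
{
    die("Cannot connect\n");
}
if (!ftp_login($conn, $user, $pass)) // login now happens over TLS
{
    ftp_close($conn);
    die("Login failed\n");
}
ftp_pasv($conn, true); // passive mode is commonly required behind NAT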
I am trying to connect (a .NET client) to RabbitMQ. I enabled the peer verification option in the RabbitMQ config file.
_factory = new ConnectionFactory
{
    HostName = Endpoint,
    UserName = Username,
    Password = Password,
    Port = 5671,
    VirtualHost = "/",
    AutomaticRecoveryEnabled = true
};
sslOption = new SslOption
{
    Version = SslProtocols.Tls12,
    Enabled = true,
    AcceptablePolicyErrors = System.Net.Security.SslPolicyErrors.RemoteCertificateChainErrors
                           | System.Net.Security.SslPolicyErrors.RemoteCertificateNameMismatch,
    ServerName = "", // ?
    Certs = X509CertCollection
};
Below are my client certificate details, which I am passing through "X509CertCollection".
CertSubject: CN=myhostname, O=MyOrganizationName, C=US // myhostname is the name of my client host.
So, if I pass the "myhostname" value into sslOption.ServerName, it works. If I pass some garbage value, it still works.
As per the RabbitMQ documentation, these two values should match, i.e. the cert CN value and ServerName. What should the value of sslOption.ServerName be here, and why?
My bad, I found the reason. Posting it as it might help someone.
Reason: I set the policy "System.Net.Security.SslPolicyErrors.RemoteCertificateNameMismatch" in AcceptablePolicyErrors, which tells the client to tolerate any mismatch between ServerName and the certificate's name; that is why even a garbage value works.
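For contrast, a minimal sketch (my wording, not from the thread) of the strict setup: drop RemoteCertificateNameMismatch from AcceptablePolicyErrors, after which ServerName must match the name in the certificate the server presents:

// Namespaces: System.Net.Security (SslPolicyErrors),
// System.Security.Authentication (SslProtocols), RabbitMQ.Client (SslOption).
var sslOption = new SslOption
{
    Enabled = true,
    Version = SslProtocols.Tls12,
    // Checked against the certificate the server presents; "myhostname"
    // here is the CN from the question.
    ServerName = "myhostname",
    // RemoteCertificateNameMismatch is deliberately NOT listed, so a
    // name mismatch now fails the TLS handshake:
    AcceptablePolicyErrors = SslPolicyErrors.RemoteCertificateChainErrors
};
_factory.Ssl = sslOption; // attach to the ConnectionFactory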
Corda open source on Linux. Node RPC SSL enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.
You must follow the steps in this paragraph: https://docs.corda.net/clientrpc.html#wire-security, which I detail for you below.
When you enable RPC SSL, you must run this command once (you will be asked to supply 2 new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf, supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl = true
    ssl {
        keyStorePath = ${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword = password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now if you have a webserver, inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;
@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(Paths.get(trustStorePath),
            trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
The code above adds 2 extra properties that you must now supply when starting the webserver; for that, modify your clients module's build.gradle:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
            '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
            '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
            '--config.rpc.ssl.truststorepassword=password'
}
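The webserver can then be started with that task (assuming the Gradle wrapper is present in the project):

./gradlew runNodeServer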
If you're planning to connect to the node with a standalone shell, you must do something similar; however, it didn't work for me, and I reported the following bug: https://github.com/corda/corda/issues/5955
While creating a connection from NetSuite to SFTP using the N/sftp module, I'm facing an error that states:
"FTP_INCORRECT_HOST_KEY","message":"Provided host key does not match
remote server's fingerprint."
I have tried checking with my server team, but no hope. Can anyone suggest how to resolve this, or how I can get an authorized host key fingerprint from the server?
I have tried the SuiteScript 2.0 module (N/sftp) with the help of the tool mentioned below.
https://ursuscode.com/netsuite-tips/suitescript-2-0-sftp-tool/
/**
 * @NApiVersion 2.x
 * @NScriptType ScheduledScript
 */
define(['N/sftp', 'N/file', 'N/runtime'], function(sftp, file, runtime) {
    function execute(context) {
        var myPwdGuid = "Encrypted password by GUID";
        var myHostKey = "Some long Host key around 380 characters";
        // establish connection to the remote SFTP server
        var connection = sftp.createConnection({
            username: 'fuel_integration',
            passwordGuid: myPwdGuid, // references var myPwdGuid
            url: '59.165.215.45', // Example IP
            directory: '/sftproot/TaleoSync',
            restrictToScriptIds: runtime.getCurrentScript().id,
            restrictToCurrentUser: false,
            hostKey: myHostKey // references var myHostKey
        });
        // download the file from the remote server
        var downloadedFile = connection.download({
            directory: '/sftproot/TaleoSync',
            filename: 'Fuel Funnel Report_without filter.csv'
        });
        // placeholder: set to the internal ID of the target File Cabinet folder
        downloadedFile.folder = 123;
        downloadedFile.save();
        // scheduled scripts have no response object, so log instead
        log.audit('Downloaded "Fuel Funnel Report_without filter" to the File Cabinet');
    }
    return {
        execute: execute
    };
});
I expect to create a connection between the SFTP server and NetSuite, download a file from SFTP, and place it in the NetSuite File Cabinet.
A couple of things:
restrictToScriptIds : runtime.getCurrentScript().id,
restrictToCurrentUser :false,
are not part of the createConnection signature. Those should have been used when you created the Suitelet that vaults your credential.
However, the host key complaint may be dealt with by using ssh-keyscan from a Linux box.
ssh-keyscan 59.165.215.45
should reply with the server name, then ssh-rsa, then a long Base64 string. Copy that string into myHostKey and set the hostKeyType to RSA.
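Applied to the script in the question, the connection call would then look roughly like this (a sketch; the trimmed options and the 'rsa' value follow the advice above):

var connection = sftp.createConnection({
    username: 'fuel_integration',
    passwordGuid: myPwdGuid,
    url: '59.165.215.45',
    directory: '/sftproot/TaleoSync',
    hostKey: myHostKey,   // the Base64 string copied from ssh-keyscan
    hostKeyType: 'rsa'    // key type matching the ssh-rsa entry
});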
I am new to Terraform, but I have created an OpenStack compute instance like this:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
# Import SSH key pair into openstack project
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
# Create a new virtual machine
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
}
For maintainability and flexibility reasons I would like to add some "components" to the same instance; it could be anything, but here I have tried with a file provisioner and remote execution.
Indeed, when I add these arguments to my compute instance, I notice that my compute instance is not updated. For example:
provider "openstack" {
auth_url = "https://my-auth/v2.0/"
domain_name = "default"
alias = "alias"
user_name = "username"
tenant_name = "tenantname"
password = "pwd"
region = "region"
}
resource "openstack_compute_keypair_v2" "keypair" {
provider = "myprovider"
name = "keypair"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}
resource "openstack_compute_instance_v2" "compute_instance" {
name = "compute_instance" # Instance Name
provider = "myprovider" # Instance distr
image_name = "Centos 7" # Image name
flavor_name = "b2-7" # Machine type name
key_pair = "${openstack_compute_keypair_v2.keypair.name}"
network {
name = "Ext-Net"
}
# Add a provisionner file on the ressource
provisioner "file" {
source = "foo_scripts/bar-setup.sh"
destination = "/tmp/bar-setup.sh"
connection {
type = "ssh"
user = "user"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
# execute server setup file
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bar-setup.sh",
"sudo bash /tmp/bar-setup.sh",
]
connection {
type = "ssh"
user = "centos"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
Indeed, after adding the file provisioner to the resource, when I run terraform plan or terraform apply, nothing changes on my instance. I get info messages notifying me that:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
What is the right way to apply my changes to my compute instance?
Following the Terraform documentation:
Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction.
If you want the provisioners to run again, you should destroy (terraform destroy) and create (terraform apply) the resource again.
There's no way that Terraform can check the state of a local or a remote execution; it's not like there's an API call that can tell you what happened in your custom code, bar-setup.sh.
That would be like magic, or actual Magic.
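In practice, that recreate cycle can be limited to the one resource (a sketch; the resource address comes from the question's config):

terraform destroy -target=openstack_compute_instance_v2.compute_instance
terraform apply -target=openstack_compute_instance_v2.compute_instance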
Terraform is for managing the infrastructure and the config of the instance, not really for the content on the instance. Immutable content and recreating is the true path here: making a completely new instance. However, if it's your Hammer, there are ways.
If you taint the resource that you want to update, then the next time terraform runs, the resource will be recreated and its provisioners re-executed. But heed what I said about Hammers.
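For example (again using the resource address from the question):

terraform taint openstack_compute_instance_v2.compute_instance
terraform apply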
Alternatively, you could leverage your CM tool of choice to manage the content of your instance (Chef/Ansible), or create the images used by OpenStack (i.e. immutable) via a tool like Packer and update those. I'd do the latter.