New host key every day using MSFTP and WinSCP

I am transferring a file from one server to another using "Core FTP mini-sftp-server" on the source side and WinSCP on the destination side (both servers run Windows).
I log in to both machines using a local admin account, which is the same on both servers.
I have been doing this process manually:
Start MSFTP server on source
Start WinSCP on destination, connect to source and get the file.
Now I want to automate it, so I tried the following:
Start msftp from command line on source.
On the destination, in the winscp.exe console:
open login:password@IPAddress
get <file> <destination>
close
exit
The problem is that the first time I do this each day, WinSCP asks me to update the key on the destination side, saying:
"WARNING POTENTIAL SECURITY BREACH! The server's host key does not match the one WinSCP has in cache. This means that either the server administrator has changed the host key, the server presents different key under certain circumstances, or you have actually connected to another computer pretending to be the server."
I have to accept it manually (click Update) the first time; after that, the automation works for the subsequent copies.
Question:
How can I update the key from the command line while connecting to the server?
Can I prevent the source from generating a new key daily? And should I?

You should prevent the source server from generating a new key; there is absolutely no reason for it to do so. The server's public key identifies the server, so this identity should not change.

You lose all security by connecting to an SSH server that changes its public key every day.
Anyway, if that's your only option, recent versions of WinSCP can accept any host key automatically using the -hostkey=* switch of the open command:
open -hostkey=*
You lose all security by doing that, but you have none already, so it makes no difference.
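For reference, a complete scripted transfer might look like the sketch below, using winscp.com (the console interface shipped alongside winscp.exe). The address, credentials, and paths are placeholders; and if the server is ever given a stable key, pin its actual fingerprint via -hostkey="..." instead of *.
rem fetch.cmd - automated get, accepting whatever host key the server presents
winscp.com /command ^
    "open sftp://login:password@192.0.2.10/ -hostkey=*" ^
    "get /outgoing/report.txt C:\incoming\" ^
    "close" ^
    "exit"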

R studio server browser freezes upon login

I have been working in my RStudio session hosted on a Linux server and recently ran a piece of code that was taking way too long to execute, so I decided to kill it.
Here is the sequence of steps that I took - none of them helped me restore the health of my session.
1) Hit the stop button in RStudio and waited patiently.
2) SSHed into my Linux server and ran the following command to kill all processes running under my user ID:
killall -u myuserid
3) Removed the .RData, .Renviron, and .Rhistory files from my workspace.
4) Ran the following R command via the Linux server for garbage collection
gc(reset=TRUE)
5) Restarted the entire Linux server.
I am running out of ideas and would really appreciate any other suggestions before I take more drastic steps like revoking access and granting it again (not sure if that would be the right fix).
Note: The browser window freezes every time I log in, and it happens only for my RStudio session; the rest of the users on the same network have no issues.
I solved this problem (RStudio Server freezing). I think it was a network problem, since I never received any response from the call to "~~~~~~.cache.js". You can confirm that "~~~~~~~~~.cache.js" gets no response by watching the network requests before you click the log-in button.
Anyway, here is my way.
Reset your network with the following commands; run them in a cmd terminal in admin mode:
netsh winsock reset
netsh int ip reset
Reboot
The IP information may be erased, so if you are using a fixed IP address, re-enter the same settings as before.
That's all. Following these steps should recover the connection.
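If you do need to re-enter a static address, a command along these lines should work (the adapter name and all addresses below are placeholders for your own values):
rem restore a static IP: adapter name, address, netmask, default gateway
netsh interface ip set address "Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1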

Oracle 11g Express error: ORA-12505, TNS: listener does not currently know of SID given in connect descriptor

I am facing a problem with Oracle 11g Express hosted on Linux CentOS 6.4. The server was relocated from one place to another, and the relocation changed its IP address.
We changed the IP in the tnsname.ora and listener.ora files. After these changes, when we try to connect to the database instance from the server, we get the message "Connected to idle instance". If we try to connect from a client using SQL Developer, we get "Status : Failure - Test failed: Listener refused the connection with the following error: ORA-12505, TNS: listener does not currently know of SID given in connect descriptor".
We have restarted the server and the database (through the Start and Stop database options under the Oracle menu) multiple times, but we still get the same error.
Please help us resolve this issue.
The local connection via sqlplus user/passwd (as opposed to sqlplus user/passwd@TNSALIAS) is not affected by the IP address or by the contents of tnsnames.ora. It also does not require a listener at all; the listener could be stopped or not defined in listener.ora at all. In other words, you did something wrong here.
My guess is that you are misled into thinking that you've started the database when in fact it is not started. Check whether you have a process called ora_MYORACLE_pmon.
Also the file tnsname.ora is irrelevant; Oracle only checks tnsnames.ora.
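A quick way to check, sketched below for a shell on the server (the SID in the process name varies; for Express Edition it is typically XE):
# check whether the instance's pmon background process is running
ps -ef | grep ora_.*_pmon
# if it is missing, start the instance over a local, listener-independent connection
sqlplus / as sysdba <<EOF
startup
EOF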
We resolved this issue. We were missing the new server address in some files; they were still referring to the old address.
initXE.ora was missing the local listener parameter; once we added it, things started working.
This may not be a general solution, but it worked in our case.
This behavior is expected if the listener was originally configured with an ALTER SYSTEM command like ALTER SYSTEM SET LOCAL_LISTENER=''; and that command specified the SCOPE=MEMORY option, or if SCOPE was left at its default and the database was started with a pfile (either way, the setting does not survive a restart).
To fix it, reissue all ALTER SYSTEM commands from before the restart, or at least the one that sets LOCAL_LISTENER, and use SCOPE=BOTH.
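A sketch of that fix from SQL*Plus on the server, assuming the instance runs from an spfile (the listener address below is a placeholder; adjust host and port to your environment):
sqlplus / as sysdba <<EOF
-- persist the listener address in both memory and the spfile
ALTER SYSTEM SET LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.25)(PORT=1521))' SCOPE=BOTH;
-- ask the instance to register with the listener immediately
ALTER SYSTEM REGISTER;
EOF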

Can I suppress the MLSD with WinSCP .NET assembly?

I'm using the WinSCP .NET assembly. When I call the Session.PutFiles method, it sends the following series of commands:
TYPE A
PASV
MLSD
TYPE A
PASV
STOR myfile
Is there a way to tell it NOT to send the MLSD? (MLSD requests that the contents of the remote directory be sent back.) At the very least, I don't need this information, so it's just wasting bandwidth; I don't even know how I would access it -- maybe WinSCP is doing something with it internally? What worries me more, though, is that I was given very specific specs about the series of FTP commands I am supposed to send, which includes several non-standard commands; apparently the site at the other end runs a customized FTP server. So I don't want an extra command to screw things up.
In the latest version, with default transfer settings, WinSCP does not use the MLSD command.
It's used only with OverwriteMode.Resume or OverwriteMode.Append to retrieve attributes of the remote file.
Also, WinSCP issues the MLSD command once for every destination directory (not for each file).
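For illustration, a minimal PowerShell sketch of an upload with default transfer settings, which per the above should avoid MLSD entirely (the assembly path, host, credentials, and file paths are all placeholders):
# load the WinSCP .NET assembly (path is a placeholder)
Add-Type -Path "C:\path\to\WinSCPnet.dll"
$sessionOptions = New-Object WinSCP.SessionOptions -Property @{
    Protocol = [WinSCP.Protocol]::Ftp
    HostName = "ftp.example.com"
    UserName = "user"
    Password = "password"
}
$session = New-Object WinSCP.Session
try {
    $session.Open($sessionOptions)
    # default TransferOptions leave OverwriteMode at Overwrite,
    # so WinSCP does not need to query remote file attributes
    $transferOptions = New-Object WinSCP.TransferOptions
    $session.PutFiles("C:\outgoing\myfile", "/incoming/", $False, $transferOptions).Check()
}
finally {
    $session.Dispose()
}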

Azure Virtual Network Point-to-Site (ex. Azure Connect) autoconnect

While Azure Connect is being retired and Azure Virtual Network provides a similar feature with better speed, I've noticed a few drawbacks.
Azure Connect was capable of maintaining the connection automatically, without the user even having to log in. Azure Virtual Network, however, requires the user to interactively connect/reconnect to the VPN. This makes it quite unusable in a production environment. Are there any ways to overcome this obstacle?
To solve this problem you can use rasdial.
The first time I used rasdial, I ran into this problem:
This function is not supported on this system.
Don't be fooled by this message; it just means you didn't give the correct syntax.
rasdial "Your VPN name" /phonebook:"%userprofile%\AppData\Roaming\Microsoft\Network\Connections\Cm\Your-VPN\Your-VPN.pbk"
%userprofile% is the user profile you used to install the Azure VPN with.
Your-VPN is the name of the Azure VPN connection.
A simple method is to make a batch script:
SET VPN_NAME=azureVPN
:loop
rasdial %VPN_NAME% /PHONEBOOK:C:\Users\bas\AppData\Roaming\Microsoft\Network\Connections\Cm\%VPN_NAME%\%VPN_NAME%.pbk
timeout 10
goto loop
The result will be:
Connecting to test...
Verifying username and password...
Registering your computer on the network...
Successfully connected to test.
Command completed successfully.
after 10 seconds:
You are already connected to test.
Command completed successfully.
To make this script start when the computer starts, use the Task Scheduler.
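One way to register it (the task name and script path are placeholders):
rem run the reconnect loop at startup under the SYSTEM account
schtasks /create /tn "AzureVpnAutoConnect" /tr "C:\scripts\vpn-loop.cmd" /sc onstart /ru SYSTEM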
This works; you just need to go to that folder and take the phonebook's long file name from there. Also, the VPN name should be the same as the .pbk file name, without the extension.

SQL Server session in CLUSTER

Can anyone help me with this ...
I have a 3-node SQL Server cluster, let's say N1, N2 and N3. The name of the three-node cluster is SQLCLUS.
The application connects to the database using the name SQLCLUS in its connection string.
The application uses SQL Server session management. So I remote desktopped to N1 (which is active, while N2 and N3 are passive) and from the location
C:\Windows\Microsoft.NET\Framework64\v2.0.50727
I executed the following command
aspnet_regsql.exe -S SQLCLUS -E -ssadd -sstype p
The command executed successfully. I could then log in to SQLCLUS and see the ASPState database created with 2 tables.
I then tested the application which uses the SQL Server session, and it also works fine.
Now my question is ...
If there is a failover to node N2 or N3, will my application still work? I did not execute the above command (aspnet_regsql.exe) from N2.
Should I execute the command aspnet_regsql.exe -S SQLCLUS -E -ssadd -sstype p on N2 and N3 too?
What changes happen in SQL Server after executing the above command? I mean, is there any kind of service or settings change that can be seen?
Greatly appreciate any inputs regarding this.
Thanks in advance.
SQL Server failover clustering can be conceptually explained as a smoke-and-mirrors DNS hack. Thinking of clustering in familiar terms makes you realize how simple a technology it really is.
Simplified description of Sql Server Failover Clustering
Imagine you have two computers: SrvA and SrvB
You plug an external HD (F:) into SrvA, install SQL Server, and configure it to store its database files on F:\ (the executable is under C:\Program Files).
Unplug the HD, plug it into SrvB, install SQL Server, and configure it to store its database files on F:\ in the exact same location.
Now you create a DNS alias "MyDbServer" that points to SrvA, plug the external HD back into SrvA, and start SQL Server.
All is good until one day when the power supply fails on SrvA and the machine goes down.
To recover from this disaster you do the following:
Plug the external drive into SrvB
Start sql server on SrvB
Tweak the DNS entry for "MyDbServer" to point to SrvB.
You're now up and going on SrvB, and your client applications are blissfully unaware that SrvA failed because they only ever connected using the name "MyDbServer".
Failover Clustering in Reality
SrvA and SrvB are the cluster nodes.
The External HD is Shared SAN Storage.
The three-step recovery process is what happens during a cluster failover, and it is managed automatically by the Windows Failover Clustering service.
What kinds of tasks need to be run on each SQL node?
99.99% of the tasks that you perform in SQL Server are stored in the database files on shared storage and therefore move between nodes during a failover. This includes everything from creating logins, creating databases, INSERTs/UPDATEs/DELETEs on tables, and SQL Agent jobs to just about everything else you can think of. This also includes all of the tasks that the aspnet_regsql command performs (it does nothing special from a database perspective).
The remaining 0.01% of things that have to be done on each individual node (because they aren't stored on shared storage) are things like applying service packs (remember that the executable is on C:), certain registry settings (some SQL Server registry settings are "checkpointed" and fail over, some aren't), registering third-party COM DLLs (no one really does this anymore), and changing the service account that SQL Server runs under.
Try it for yourself
If you want to verify that aspnet_regsql doesn't need to be run on each node, then try failing over and verify that your app still works; one way to do so is sketched below. If you do run aspnet_regsql on each node and reference the clustered name (SQLCLUS), you will effectively be overwriting the database, so even if it doesn't error out, it will just wipe out your existing data.
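For example, a manual failover for testing can be triggered from PowerShell on one of the nodes (the resource group name below is a placeholder; the FailoverClusters module ships with the failover clustering feature):
# move the SQL Server resource group to another node to simulate a failover
Import-Module FailoverClusters
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node "N2"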
