Terraform OpenStack: attach pre-allocated Floating IPs to an instance

I have a use case where I need to re-use detached Floating IPs. Is there a way to do this in Terraform? I've tried:
data "openstack_networking_floatingip_v2" "fips" {
  status = "DOWN"
}
to get a list of detached IPs, but I get an error saying there is more than one Floating IP (which is true).
Is there a good way to get detached floating IPs as a data source in Terraform? The alternative is passing an array of available IPs via a wrapper script with the command outlined here: Reuse detached floating IPs in OpenStack

For anyone else that comes across this, here is how I solved it for now:
I used the 'external' data source to call the openstack CLI to retrieve a comma-separated list of available IPs. The openstack CLI command looks like this:
openstack floating ip list --status DOWN -f yaml -c "Floating IP Address"
To get the output into the format Terraform's external data source expects, I used a Python script. The script outputs a JSON object that looks like this: {"ips": "ip.1.2.3,ip.4.5.6,ip.7.8.9"}
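For reference, a minimal sketch of what such a script could look like (assumptions: PyYAML is installed and the openstack CLI is on the PATH):

#!/usr/bin/env python
# Sketch: call the openstack CLI and print the JSON object that
# Terraform's external data source expects.
import json
import subprocess

import yaml  # assumption: PyYAML is available

raw = subprocess.check_output([
    "openstack", "floating", "ip", "list",
    "--status", "DOWN", "-f", "yaml", "-c", "Floating IP Address",
])
rows = yaml.safe_load(raw) or []  # a list of {"Floating IP Address": ...} rows
ips = [row["Floating IP Address"] for row in rows]
print(json.dumps({"ips": ",".join(ips)}))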
The external data resource in terraform looks like this:
data "external" ips {
program = ["python", "<path-to-python-script>"]
}
From there I'm able to split the comma-separated string of IPs in Terraform and access the IPs as an array:
output "available_ips" {
  value = split(",", data.external.ips.result.ips)
}
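One of the free IPs can then be attached to an instance, for example (a sketch; the instance resource name "web" is hypothetical):

resource "openstack_compute_floatingip_associate_v2" "fip" {
  floating_ip = element(split(",", data.external.ips.result.ips), 0)
  instance_id = openstack_compute_instance_v2.web.id
}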
It's definitely not elegant; I wish the openstack_networking_floatingip_v2 data source supported this directly. I'll look into opening an issue to get it added.

How to get filtered list of files from SFTP server using SSHJ

I am using the SSHJ SFTP library to get a file list from an SFTP server.
The connection to the server is very slow and there are tens of thousands of files in the directory. Getting the file list often ends in various timeout / socket errors.
Is there a way to tell the client to retrieve the file list for only, e.g., ".zip" files, so that it would have a positive impact on performance? Pseudo command: sftpClient.ls("*.zip")
I know there is a method List<RemoteResourceInfo> net.schmizz.sshj.sftp.SFTPClient.ls(String path, RemoteResourceFilter filter) which will filter the list, but from what I understand, the filtering happens only on the client side, i.e. the client still receives the whole file list and only filters it afterwards.
Is there any way to achieve this so that the server would only return the names requested? Does the SFTP protocol even support this?
Indeed, the SFTP protocol has no way to request only the files matching some criteria; it does not matter what SFTP library you are using.
You would have to use another interface/API if you need a filtered list. If you have shell access, you might use the shell command ls *.zip.
Or build your own (REST?) API.
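For completeness, a sketch of the client-side filtering mentioned above (host, credentials, and path are hypothetical). Keep in mind this does not reduce what the server sends; the full listing is still transferred and only trimmed locally:

import java.util.List;
import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.sftp.RemoteResourceInfo;
import net.schmizz.sshj.sftp.SFTPClient;

public class ZipLister {
    public static void main(String[] args) throws Exception {
        SSHClient ssh = new SSHClient();
        ssh.loadKnownHosts();
        ssh.connect("sftp.example.com"); // hypothetical host
        try {
            ssh.authPassword("user", "password");
            try (SFTPClient sftp = ssh.newSFTPClient()) {
                // The filter runs locally, after the server has sent the
                // whole directory listing.
                List<RemoteResourceInfo> zips =
                        sftp.ls("/remote/dir", res -> res.getName().endsWith(".zip"));
                for (RemoteResourceInfo r : zips) {
                    System.out.println(r.getPath());
                }
            }
        } finally {
            ssh.disconnect();
        }
    }
}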

Send file via SCP in R

I would like to copy a file from my computer to a remote server via SCP using R.
I have found two functions that appear to partially satisfy this.
1.
Using ssh.utils
ssh.utils::cp.remote(path.src="~/myfile.txt",
                     remote.dest="username@remote",
                     remote.src="", path.dest="~/temp", verbose=TRUE)
I've noticed that with this method, if I need to enter a password (when the remote doesn't have my public key), the function produces an error.
2.
Using RCurl:
RCurl appears to have more robust functionality in its scp() function, but, from what I can tell, it only copies a file from a remote machine to my local machine. I would like to do the opposite.
Is there another way to use these functions or is there another function that would be able to copy a file from my local machine to a remote machine via SCP?
One approach to address the need to enter a password interactively is to use sshpass (see https://stackoverflow.com/a/13955428/6455166) in a call to system, e.g.
system('sshpass -p "password" scp ~/myfile.txt username@remote.com:/some/remote/path')
See the linked answer above for more details, including options to avoid embedding the password in the command.
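For example, a sketch that keeps the password out of the command itself by using sshpass's -e flag, which reads it from the SSHPASS environment variable (paths and host are hypothetical):

# Set SSHPASS outside R if possible; shown here only for illustration.
Sys.setenv(SSHPASS = "password")
system('sshpass -e scp ~/myfile.txt username@remote.com:/some/remote/path')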

Trouble with PUT/GET in Riak testing

I'm working on a little e2 application that needs to put/get data to/from Riak. I apparently have a misunderstanding about some things.
I can successfully put and get key-value pairs to a bucket using the Erlang client. However, when I try to curl the data out, I get nothing. In fact, doing:
curl -X GET "http://riak1:8098/riak?buckets=true"
returns '{"buckets":[]}'. Ditto for:
curl -X GET "http://192.168.29.11:8098/buckets?buckets=true"
In my application, though, I can do this:
8> Object = riakc_obj:new(<<"testbucket">>, <<"testkey">>, <<"testdata">>).
{riakc_obj,<<"testbucket">>,<<"testkey">>,undefined,[],
           undefined,<<"testdata">>}
9> riakc_pb_socket:put(Pid, Object).
ok
10> riakc_pb_socket:get(Pid, <<"testbucket">>, <<"testkey">>).
{ok,{riakc_obj,<<"testbucket">>,<<"testkey">>,
     <<107,206,97,96,96,96,204,96,202,5,82,28,202,156,255,126,
       238,185,94,62,53,131,41,...>>,
     [{{dict,2,16,16,8,80,48,
        {[],[],[],[],[],[],[],[],[],[],[],[],...},
        {{[],[],[],[],[],[],[],[],[],[],...}}},
       <<"testdata">>}],
     undefined,undefined}}
11> riakc_pb_socket:list_buckets(Pid).
{ok,[<<"tb1">>,<<"testbucket">>]}
So what is the difference between the two? How can I use curl (or any other client) to see the buckets and retrieve data?
(Note that this was also sent to the riak-user mailing list.)
I figured out what the problem was. I had another instance of Riak running on the machine where my application was running, and the app was defaulting to "localhost" when connecting to Riak, rather than connecting to my test cluster.
The data was written correctly, just to the wrong instance.
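In other words, pointing the client explicitly at the test cluster avoids the issue, e.g. (a sketch; 8087 is Riak's default protocol buffers port, which is an assumption about your cluster's configuration):

%% Connect explicitly to the test cluster instead of a "localhost" default.
{ok, Pid} = riakc_pb_socket:start_link("192.168.29.11", 8087),
{ok, Buckets} = riakc_pb_socket:list_buckets(Pid).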

Biztalk File send port with a variable path

Is it possible to make the send port change output location based on a promoted property?
We have an interface that needs to send files to a different location based on the client. We add clients on a regular basis, so adding a new send port each time (both in the administration console and the orchestration) would require a lot of maintenance, while the only thing that changes is the output directory.
The folders are like this ...
\\server\SO\client1\Out
\\server\SO\client2\Out
\\server\SO\client3\Out
I tried using the SourceFilename to create a file name like client1\Out\filename.xml, but this doesn't work.
Is there any way to do this with a single send port?
It is possible to set the OutboundTransportLocation property in context. This property contains the full path/name of the file that will be output by the file adapter. So in your case I guess you could do something along these lines (if it had to be done in a pipeline component):
message.Context.Write(
    OutboundTransportLocation.Name,
    OutboundTransportLocation.Namespace,
    string.Format(@"\\server\SO\{0}\Out", client));
Of course you can do a similar thing in your orchestration.
No need for a dynamic port...
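Fleshed out a little, a sketch of what that pipeline component code might look like (the promoted property identifying the client and its namespace are hypothetical, and the usual pipeline component boilerplate is omitted):

using System;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class ClientPathComponent
{
    // Hypothetical promoted property that identifies the client.
    private const string ClientProp = "Client";
    private const string ClientPropNs = "https://schemas.example.com/client";
    private const string BtsSystemPropsNs =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        string client = (string)pInMsg.Context.Read(ClientProp, ClientPropNs);

        // OutboundTransportLocation is the full destination path used by the
        // file adapter, so a file name is included here as well.
        pInMsg.Context.Write(
            "OutboundTransportLocation",
            BtsSystemPropsNs,
            string.Format(@"\\server\SO\{0}\Out\{1}.xml", client, Guid.NewGuid()));

        return pInMsg;
    }
}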

SaltStack : Identify environment with DNS record

I have multiple isolated environments to set up with SaltStack. I have created some base states and custom states for each environment. At the moment, the only way I can identify an environment is by requesting a TXT record from the DNS server.
Is there a way I can select the right environment in SaltStack? How can I put this information in a pillar or a grain?
Salt's dig module might help you here. You can use it to query information from DNS records. It needs the command-line dig tool to be installed.
From the command line:
salt-call dig.TXT google.com
to produce an output like this:
local:
    - "v=spf1 include:_spf.google.com ~all"
Use a salt state to put it into a grain:
# setupgrain.sls
mygrainname:
  grains.present:
    - value: {{ salt['dig.TXT']('google.com') }}
Once you have the information in a grain, you can target Salt minions on the grain value using matchers.
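For example, both from the CLI and in a top file (a sketch; the grain value "production" and the state name production_states are hypothetical):

# CLI: target minions whose grain matches
salt -G 'mygrainname:production' test.ping

# top.sls: assign environment-specific states by grain match
base:
  'mygrainname:production':
    - match: grain
    - production_states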
