Terraform: error adding public/private key pair and file not uploaded to the instance - terraform-provider-aws

resource "aws_instance" "dove-web" {
ami = var.AMIS[var.REGION]
instance_type = "t2.micro"
subnet_id = aws_subnet.dove-pub-1.id
key_name = aws_key_pair.rsa.key_name
vpc_security_group_ids = [aws_security_group.dove_stack_sg.id]
tags = {
Name = "cool"
}
provisioner "file" {
source = "web.sh"
destination = "/home/ubuntu/web.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/ubuntu/web.sh",
"sudo /home/ubuntu/web.sh"
]
}
# privat key
resource "tls_private_key" "rsa" {
algorithm = "RSA"
rsa_bits = 4096
}
# public key
resource "aws_key_pair" "rsa" {
key_name = var.PUB_key
public_key = tls_private_key.rsa.public_key_openssh
provisioner "local-exec" { # Generate "terraform-key-pair.pem" in current directory
command = <<-EOT
echo '${tls_private_key.rsa.private_key_pem}' > ./'${var.PUB_key}'.pem
chmod 400 ./'${var.PUB_key}'.pem
EOT
}
}
# store key localy
/*resource "local_file" "TF-key" {
content = tls_private_key.rsa.private_key_pem
filename = "tfkey"
}
*/
}
output "PublicIP" {
value = aws_instance.dove-web.public_ip
}
$ terraform validate
╷
│ Error: Reference to undeclared resource
│
│ on instance.tf line 6, in resource "aws_instance" "dove-web":
│ 6: key_name = aws_key_pair.rsa.key_name
│
│ A managed resource "aws_key_pair" "rsa" has not been declared in the root module.
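A likely fix, shown only as a sketch: Terraform does not allow resource blocks to be nested inside other resources, so tls_private_key and aws_key_pair have to be declared at the top level of the module, and the file/remote-exec provisioners need a connection block or the upload will fail. All names below come from the question; the ubuntu login user and self.public_ip are assumptions.

# Sketch: key resources declared at module level, not inside aws_instance
resource "tls_private_key" "rsa" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "rsa" {
  key_name   = var.PUB_key
  public_key = tls_private_key.rsa.public_key_openssh
}

resource "aws_instance" "dove-web" {
  ami                    = var.AMIS[var.REGION]
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.dove-pub-1.id
  key_name               = aws_key_pair.rsa.key_name
  vpc_security_group_ids = [aws_security_group.dove_stack_sg.id]

  # Without a connection block the file provisioner cannot reach the instance
  connection {
    type        = "ssh"
    user        = "ubuntu"                            # assumed AMI user
    private_key = tls_private_key.rsa.private_key_pem
    host        = self.public_ip                      # assumes the instance gets a public IP
  }

  provisioner "file" {
    source      = "web.sh"
    destination = "/home/ubuntu/web.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/ubuntu/web.sh",
      "sudo /home/ubuntu/web.sh"
    ]
  }
}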

Related

Apache Mina SFTP: Mount Remote Sub-Directory instead of Filesystem Root

I would like to use Apache SSHD to create an SFTP server and use SftpFileSystemProvider to mount a remote directory.
I successfully created the virtual file system with SftpFileSystemProvider following the documentation https://github.com/apache/mina-sshd/blob/master/docs/sftp.md#using-sftpfilesystemprovider-to-create-an-sftpfilesystem.
However, I'm stuck when mounting the remote directory, even with the documentation https://github.com/apache/mina-sshd/blob/master/docs/sftp.md#configuring-the-sftpfilesystemprovider. It keeps mounting the root directory instead of the target one.
I tried:
adding the target directory into the sftp uri (not working)
getting new filesystem from path (not working)
Here is a quick example.
object Main:
  class Events extends SftpEventListener

  class Auth extends PasswordAuthenticator {
    override def authenticate(username: String, password: String, session: ServerSession): Boolean = {
      true
    }
  }

  class FilesSystem extends VirtualFileSystemFactory {
    override def createFileSystem(session: SessionContext): FileSystem = {
      val uri = new URI("sftp://xxx:yyy@host/plop")
      // val uri = SftpFileSystemProvider.createFileSystemURI("host", 22, "xxx", "yyy")
      val fs = Try(FileSystems.newFileSystem(uri, Collections.emptyMap[String, Object](), new SftpFileSystemProvider().getClass().getClassLoader())) match {
        case Failure(exception) =>
          println("Failed to mount bucket")
          println(exception.getMessage)
          throw exception
        case Success(filesSystem) =>
          println("Bucket mounted")
          filesSystem
      }
      // fs.getPath("plop").getFileSystem
      fs
    }
  }

  private val fs = new FilesSystem()
  private val sftpSubSystem = new SftpSubsystemFactory.Builder().build()
  sftpSubSystem.addSftpEventListener(new Events())

  private val sshd: SshServer = SshServer.setUpDefaultServer()
  sshd.setPort(22)
  sshd.setHost("0.0.0.0")
  sshd.setSubsystemFactories(Collections.singletonList(sftpSubSystem))
  sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")))
  sshd.setShellFactory(new InteractiveProcessShellFactory())
  sshd.setCommandFactory(new ScpCommandFactory())
  sshd.setFileSystemFactory(fs)
  sshd.setPasswordAuthenticator(new Auth())
  sshd.setSessionHeartbeat(HeartbeatType.IGNORE, Duration.ofSeconds(30L))

  @main def m() = {
    sshd.start()
    while (sshd.isStarted) {
    }
  }
end Main
Am I missing something?
SSHD version 2.8.0, SFTP protocol version 3, Scala 3, Java 11.
I could be wrong, but I think that these two ...
sshd.setShellFactory(new InteractiveProcessShellFactory())
sshd.setCommandFactory(new ScpCommandFactory())
sshd.setFileSystemFactory(fs)
... are redundant and this ...
private val sftpSubSystem = new SftpSubsystemFactory.Builder().build()
... needs to be made aware of the virtual file system.

aws_wafv2_rule_group Rate based rule

I am struggling to create a rate-based WAFv2 rule group; I am getting the error below when running a plan.
I am not sure whether this feature is available in Terraform yet. Below are my code and the error.
I have also upgraded Terraform and the AWS provider:
Terraform v1.0.9; on windows_amd64; provider registry.terraform.io/hashicorp/aws v3.63.0
Terraform code:
resource "aws_wafv2_rule_group" "test-rulegroup-ratelimit" {
  name     = "test-rulegroup-ratelimit"
  scope    = "REGIONAL"
  capacity = 5

  rule {
    name     = "test-rulegroup-ratelimit"
    priority = 1

    action {
      count {}
    }

    statement {
      rate_based_statement {
        limit              = 9999
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = false
      metric_name                = "test-rulegroup-ratelimit"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "test-rulegroup-ratelimit"
    sampled_requests_enabled   = true
  }
}
Error:
│
│ on r_rulegroup.tf line 15, in resource "aws_wafv2_rule_group" "test-rulegroup-ratelimit":
│ 15: rate_based_statement {
│
│ Blocks of type "rate_based_statement" are not expected here.
I cannot find anything specific for WAFv2, like there is for classic WAF:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/waf_rate_based_rule
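For what it's worth (this note is not from the question): AWS WAFv2 only supports rate-based statements in web ACL rules, not inside rule groups, which is why the provider rejects the block. A sketch of the same rule defined directly on an aws_wafv2_web_acl, using made-up resource and metric names:
resource "aws_wafv2_web_acl" "ratelimit" {
  name  = "test-webacl-ratelimit" # hypothetical name
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "test-ratelimit"
    priority = 1

    action {
      count {}
    }

    statement {
      rate_based_statement {
        limit              = 9999
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = false
      metric_name                = "test-ratelimit"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "test-webacl-ratelimit"
    sampled_requests_enabled   = true
  }
}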

list of objects (blocks for network)

With openstack_compute_instance_v2, Terraform can attach existing networks, and I have 1 to n networks to attach. In my module:
...
variable "vm_network" {
  type = "list"
}

resource "openstack_compute_instance_v2" "singlevm" {
  name            = "${var.vm_name}"
  image_id        = "${var.vm_image}"
  key_pair        = "${var.vm_keypair}"
  security_groups = "${var.vm_sg}"
  flavor_name     = "${var.vm_size}"
  network         = "${var.vm_network}"
}
In my .tf file:
module "singlevm" {
  ...
  vm_network = {"name"="NETWORK1"}
  vm_network = {"name"="NETWORK2"}
}
Terraform returns an "expected object, got invalid" error.
What am I doing wrong here?
That's not how you specify a list in your .tf file that sources the module.
Instead you should have something more like:
variable "vm_network" { default = [ "NETWORK1", "NETWORK2" ] }
module "singlevm" {
...
vm_network = "${var.vm_network}"
}
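If the module is meant to create one network block per list entry, a sketch using newer Terraform syntax could look like the following (the dynamic block and the list(string) type are assumptions, not part of the original answer):
variable "vm_network" {
  type = list(string)
}

resource "openstack_compute_instance_v2" "singlevm" {
  # ... other arguments as above ...

  # one network block per entry in var.vm_network
  dynamic "network" {
    for_each = var.vm_network
    content {
      name = network.value
    }
  }
}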

How can I read a folder owned by root with Vala?

I'm trying to read the path /var/cache/apt/archives with the following permissions:
drwxr-xr-x 3 root root 90112 ago 2 14:36 archives
And I got the following error:
ERROR: Error opening directory '/var/cache/apt/archives/partial': Permission denied
Can somebody give me a hand with this?
The source code is the following:
using Gtk;
using GLib;

private int64[] get_folder_data (File file, string space = "", Cancellable? cancellable = null) throws Error
{
    FileEnumerator enumerator = file.enumerate_children (
        "standard::*",
        FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
        cancellable);

    int64 files = 0;
    int64 size = 0;
    int64[] data = new int64[2];
    FileInfo info = null;

    while (cancellable.is_cancelled () == false && ((info = enumerator.next_file (cancellable)) != null)) {
        if (info.get_file_type () == FileType.DIRECTORY) {
            File subdir = file.resolve_relative_path (info.get_name ());
            get_folder_data (subdir, space + " ", cancellable);
        } else {
            files += 1;               // sum files
            size += info.get_size (); // accumulate size
        }
    }

    if (cancellable.is_cancelled ()) {
        throw new IOError.CANCELLED ("Operation was cancelled");
    }

    data[0] = files;
    data[1] = size;
    stdout.printf ("APT CACHE FILES: %s\n", files.to_string ());
    stdout.printf ("APT CACHE SIZE: %s\n", size.to_string ());
    return data;
}

public static int main (string[] args) {
    Gtk.init (ref args);
    File APT_CACHE_PATH = File.new_for_path ("/var/cache/apt/archives");
    try {
        get_folder_data (APT_CACHE_PATH, "", new Cancellable ());
    } catch (Error e) {
        stdout.printf ("ERROR: %s\n", e.message);
    }
    Gtk.main ();
    return 0;
}
And the command I used for compile is the following:
valac --pkg gtk+-3.0 --pkg glib-2.0 --pkg gio-2.0 apt-cache.vala
If you run your app as a normal user, you have to exclude the "partial" dir; it has more restrictive permissions (0700):
drwx------ 2 _apt root 4096 Jul 29 11:36 /var/cache/apt/archives/partial
One way to exclude the partial dir is to just ignore any dir that is inaccessible:
int64[] data = new int64[2];
FileEnumerator enumerator = null;
try {
    enumerator = file.enumerate_children (
        "standard::*",
        FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
        cancellable);
}
catch (IOError e) {
    stderr.printf ("WARNING: Unable to get size of dir '%s': %s\n", file.get_path (), e.message);
    data[0] = 0;
    data[1] = 0;
    return data;
}
In addition it might be a good idea to always explicitly ignore the partial folder.
If you are planning to make your utility useful for the root user as well, you might even think of adding a command line option like "--include-partial-dir".
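A minimal sketch of that explicit skip, inside the enumeration loop (the names come from the question's code; matching the directory by the literal name "partial" is an assumption):
if (info.get_file_type () == FileType.DIRECTORY) {
    // Skip apt's partial download directory explicitly
    if (info.get_name () == "partial") {
        continue;
    }
    File subdir = file.resolve_relative_path (info.get_name ());
    get_folder_data (subdir, space + " ", cancellable);
}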
Also, the same thing can be done with simple bash commands, which is much easier than writing your own program:
du -sh /var/cache/apt/archives
find /var/cache/apt/archives -type f | wc -l
Note that du and find also warn about the inaccessible partial dir:
$ du -sh /var/cache/apt/archives
du: cannot read directory '/var/cache/apt/archives/partial': Permission denied
4.6G /var/cache/apt/archives
$ find /var/cache/apt/archives -type f | wc -l
find: '/var/cache/apt/archives/partial': Permission denied
3732

Using FileStatus to recurse through a directory

I have following directory structure,
Dir1
|___Dir2
|___Dir3
|___Dir4
|___File1.gz
|___File2.gz
|___File3.gz
The subdirectories are just nested and do not contain any files.
I am trying to use the following to recurse through a directory on HDFS. If it is a directory, I append /* to the path and call addInputPath.
args[0] = "path/to/Dir1"; // given at command line
FileStatus fs = new FileStatus();
Path q = new Path(args[0]);
FileInputFormat.addInputPath(job, q);
Path p = new Path(q.toString() + "/*");
fs.setPath(p);
while (fs.isDirectory())
{
    fs.setPath(new Path(p.toString() + "/*"));
    FileInputFormat.addInputPath(job, fs.getPath());
}
But the code doesn't seem to go into the while loop, and I get a "not a file" exception.
Where is the if statement you are referring to?
Anyway, you may have a look at these utility methods which add all files within a directory to a job's input:
Utils:
public static Path[] getRecursivePaths(FileSystem fs, String basePath)
        throws IOException, URISyntaxException {
    List<Path> result = new ArrayList<Path>();
    basePath = fs.getUri() + basePath;
    FileStatus[] listStatus = fs.globStatus(new Path(basePath + "/*"));
    for (FileStatus fstat : listStatus) {
        readSubDirectory(fstat, basePath, fs, result);
    }
    return (Path[]) result.toArray(new Path[result.size()]);
}

private static void readSubDirectory(FileStatus fileStatus, String basePath,
        FileSystem fs, List<Path> paths) throws IOException, URISyntaxException {
    if (!fileStatus.isDir()) {
        paths.add(fileStatus.getPath());
    }
    else {
        String subPath = fileStatus.getPath().toString();
        FileStatus[] listStatus = fs.globStatus(new Path(subPath + "/*"));
        if (listStatus.length == 0) {
            paths.add(fileStatus.getPath());
        }
        for (FileStatus fst : listStatus) {
            readSubDirectory(fst, subPath, fs, paths);
        }
    }
}
Use it in your job runner class:
...
Path[] inputPaths = Utils.getRecursivePaths(fs, inputPath);
FileInputFormat.setInputPaths(job, inputPaths);
...
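As a side note (an assumption on my part, not part of the original answer): newer Hadoop releases can let FileInputFormat recurse into subdirectories by itself, so the helper may not be needed:
// Sketch, assuming Hadoop 2.x+ with org.apache.hadoop.mapreduce.lib.input.FileInputFormat
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileInputFormat.setInputDirRecursive(job, true); // recurse into nested input dirs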
