Can I write to the container's local filesystem from a Google Cloud Function? AWS Lambda allows writing to /tmp:
Q: What if I need scratch space on disk for my AWS Lambda function?
Each Lambda function receives 500MB of non-persistent disk space in
its own /tmp directory.
Is there something equivalent in GCF?
Yes, the whole filesystem is writeable and mapped to memory. From https://cloud.google.com/functions/docs/concepts/exec#file_system:
The filesystem itself is entirely writeable (except for files used by the underlying OS) and is stored within the Cloud Functions instance's memory.
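For illustration, here is a minimal sketch in Python (assuming the Python runtime and the Functions Framework; the function name scratch_example is just a placeholder) that writes and reads a scratch file:

import os
import tempfile

import functions_framework


@functions_framework.http
def scratch_example(request):
    # /tmp is backed by the instance's memory, so files written here count
    # against the memory allocated to the function.
    scratch_path = os.path.join(tempfile.gettempdir(), "scratch.txt")
    with open(scratch_path, "w") as f:
        f.write("hello from scratch space")
    with open(scratch_path) as f:
        return f.read()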
I have a backup of a DynamoDB table and I want to download it to my local machine in order to restore it on a local DynamoDB instance. I couldn't find any documentation on this; every tool I found, like dynamodump, creates an on-demand backup and then downloads it. Can anyone help me?
Your best bet is to do an Export to S3 and then you’ll have direct access to the objects in S3. Hopefully that satisfies your need?
The backup you mention belongs to DynamoDB and is not directly accessible to you; its only purpose is to restore DynamoDB tables in the cloud.
You have two options:
1. Export to S3
As hunterhacker stated, you can do an export to S3 and then download the data from there.
2. Scan
A more cost-effective solution is to write a local script that runs a Scan (or, for a large amount of data, a parallel Scan) and stores the results locally, as sketched below.
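A minimal sketch of such a script with boto3 (assumed installed and configured with AWS credentials; the table name and output file are placeholders):

import json

import boto3

table = boto3.resource("dynamodb").Table("my-table")  # placeholder table name

# Scan returns data in pages; follow LastEvaluatedKey until the table is exhausted.
items = []
response = table.scan()
items.extend(response["Items"])
while "LastEvaluatedKey" in response:
    response = table.scan(ExclusiveStartKey=response["LastEvaluatedKey"])
    items.extend(response["Items"])

# Store everything locally; default=str converts DynamoDB Decimal values.
with open("table-dump.json", "w") as f:
    json.dump(items, f, default=str)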
I have a frontend component that consists of a chart and several different filters that allow users to filter by data type. However, the data being filtered is relatively large, so I do not want to load all of it into the webpage; instead I want a Firebase Cloud Function to handle the filtering. The issue is that users will usually do a lot of filtering while using this component, so it does not make sense for the Cloud Function to repeatedly download the necessary data. Is there a way to "attach" the Cloud Function to the call and have it update without having to re-retrieve the data, or, if that is not possible, to somehow cache the retrieved Firebase data somewhere accessible to the Cloud Function?
const functions = require("firebase-functions");

exports.handleChartData = functions.https.onCall((data, context) => {
  // Can I cache data here somehow,
  // or can I have this function read in updates from user-selected filters
  // without having to retrieve the data again?
});
You can write data to the local /tmp disk. Just be aware that:
There is no guarantee that the data will be there next time, as instances are spun up and down as needed. So you will need to check if the file exists on each call, and be ready to create it when it doesn't exist yet.
The /tmp disk space is a RAM disk, so any files written there will come out of the memory you've allocated for your Cloud Functions containers.
You can't reliably keep listeners alive across calls, so you won't be able to update the cache.
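As a rough illustration of that check-then-populate pattern, here is a sketch in Python for brevity (the same idea applies in the Node.js callable above; fetch_chart_data is a made-up placeholder for your actual database read):

import json
import os

CACHE_PATH = "/tmp/chart-data.json"


def fetch_chart_data():
    # Placeholder: in the real function this would read the chart data
    # from the Realtime Database (e.g. via the Admin SDK).
    return {"points": []}


def load_chart_data():
    # The instance, and therefore /tmp, can be recycled at any time, so
    # always check whether the cached file still exists before using it.
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    data = fetch_chart_data()
    with open(CACHE_PATH, "w") as f:
        json.dump(data, f)
    return data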
Also see:
Write temporary files from Google Cloud Function
the documentation on cleaning up temporary files
Firebase cloud function [ Error: memory limit exceeded. Function invocation was interrupted.] on youtube video upload
Does anybody know where OpenStack Swift stores the "rings"? Is there a distributed algorithm, or is it just one table somewhere on some of the storage nodes with information about all (!) the physical object locations? I can hardly believe the latter, because from my understanding of object storage it should scale to exabytes, and that would require an enormous number of entries in such a table.
This page could not help me: http://docs.openstack.org/developer/swift/overview_ring.html
Thanks in advance for your help!
Ring Builder
The rings are built and managed manually by a utility called the ring-builder. The ring-builder assigns partitions to devices and writes an optimized Python structure to a gzipped, serialized file on disk for shipping out to the servers. The server processes just check the modification time of the file occasionally and reload their in-memory copies of the ring structure as needed.
So the ring is stored on all servers.
If you were asking about the path of the ring.gz files, they are under /etc/swift by default.
These ring files can also be regenerated from the .builder files when a rebalance is run with the ring-builder.
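As an illustration, a small Python sketch (assuming the swift package is installed on a node and the rings are in the default /etc/swift) of how a server uses its local copy of the object ring to locate an object; the account, container, and object names are placeholders:

from swift.common.ring import Ring

# Load the local copy of the object ring (object.ring.gz under /etc/swift).
object_ring = Ring("/etc/swift", ring_name="object")

# The ring hashes the name onto a partition and maps it to a handful of
# devices; it never stores one row per object, which is why it scales.
partition, nodes = object_ring.get_nodes("AUTH_test", "my-container", "my-object")
for node in nodes:
    print(node["ip"], node["port"], node["device"])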
I am creating a CloudFormation script which will have an ELB. In the Auto Scaling launch configuration I want to add an encrypted EBS volume, but I couldn't find an encrypted property within BlockDeviceMapping. How can I attach an encrypted EBS volume to an EC2 instance through an Auto Scaling launch configuration?
For some strange reason there is no such property when using launch configurations, although it is there when using BlockDeviceMappings with plain EC2 instances. See:
launchconfig-blockdev vs ec2-blockdev
So you'll either have to use simple instances instead of autoscaling groups, or you can try this workaround:
SnapshotIds are accepted for launch configuration block devices too, and as stated here, "Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted."
Create a snapshot from an encrypted empty EBS volume and use it in the CloudFormation template. If your template should work in multiple regions then of course you'll have to create the snapshot in every region and use a Mapping in the template.
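A minimal sketch of preparing that seed snapshot with boto3 (assumed installed and authenticated; the region, availability zone, and size are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

# Create a small encrypted volume, snapshot it, then delete the volume.
volume = ec2.create_volume(AvailabilityZone="eu-west-1a", Size=1, Encrypted=True)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

snapshot = ec2.create_snapshot(VolumeId=volume["VolumeId"])
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
ec2.delete_volume(VolumeId=volume["VolumeId"])

# Reference this snapshot ID in the launch configuration's BlockDeviceMappings
# (via a region Mapping if the template targets multiple regions).
print(snapshot["SnapshotId"])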
As Marton says, there is no such property (unfortunately it often takes a while for CloudFormation to catch up with the main APIs).
Normally each encrypted volume you create will have a different key. However, when using the workaround mentioned (of using an encrypted snapshot) the resulting encrypted volumes will inherit the encryption key from the snapshot and all be the same.
From a cryptography point of view this is a bad idea as you potentially have multiple, different volumes and snapshots with the same key. If an attacker has access to all of these then he can potentially use differences to infer information about the key more easily.
An alternative is to write a script that creates and attaches a new encrypted volume at boot time of an instance. This is fairly easy to do. You'll need to give the instance permissions to create and attach volumes, and either have the AWS CLI tool installed or a library for your preferred scripting language. Once you have that, you can create a volume and attach it from the instance that is booting.
You can find a starting point for such a script here: https://github.com/guardian/machine-images/blob/master/packer/resources/features/ebs/add-encrypted.sh
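For reference, a sketch of the same idea in Python with boto3 (assumed installed, with an instance profile allowing ec2:CreateVolume and ec2:AttachVolume; the size and device name are placeholders, and IMDSv1 metadata access is shown for brevity):

import urllib.request

import boto3


def metadata(path):
    # Instance metadata tells the booting instance its ID and availability zone.
    url = "http://169.254.169.254/latest/meta-data/" + path
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()


instance_id = metadata("instance-id")
az = metadata("placement/availability-zone")
ec2 = boto3.client("ec2", region_name=az[:-1])

# Each volume created this way gets its own encryption key.
volume = ec2.create_volume(AvailabilityZone=az, Size=20, Encrypted=True)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"], InstanceId=instance_id, Device="/dev/xvdf"
)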
There is an AutoScaling EBS Block Device type which provides the "Encrypted" option:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-template.html
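The same flag is also exposed through the EC2 Auto Scaling API; as a rough boto3 illustration (not the CloudFormation template itself; the names, AMI, and instance type are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-launch-config",  # placeholder name
    ImageId="ami-12345678",                      # placeholder AMI
    InstanceType="t3.micro",
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvdf",
            # Encrypted here corresponds to the "Encrypted" property in the
            # AWS::AutoScaling::LaunchConfiguration BlockDevice type.
            "Ebs": {"VolumeSize": 20, "VolumeType": "gp2", "Encrypted": True},
        }
    ],
)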
Hope this helps!
AWS recently announced Default Encryption for New EBS Volumes. You can enable this per region via
EC2 Console > Settings > Always encrypt new EBS volumes
https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/
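The same region-level setting can also be flipped through the EC2 API; a minimal boto3 sketch (the region is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Opt this region in to default encryption and confirm the setting.
ec2.enable_ebs_encryption_by_default()
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])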
I am wondering whether there is any cloud storage service that can be used to read and write data just like a local disk.
Take R language as an example.
Read data from local disk:
dat01 <- read.table("E:/001.txt")
Read data from the cloud:
dat02 <- read.table("http://cloud.com/username/001.txt")
Write data to local disk:
write.table(x, file="E:/002.txt")
Write data to the cloud:
write.table(x, file="http://cloud.com/username/002.txt")
Other operations include copy, move, and delete.
I know about Dropbox and the R package rDrop. I tried these and got errors saying it failed to connect to the host, even though I can use Dropbox on my computer without any problem. I also read the rDrop manual, but it did not cover what I need.
You can use the Google Cloud Storage API for this, i.e. it provides a RESTful interface for programmatically accessing Google's cloud storage technology. Here's a reference guide for consuming the Cloud Storage API.
This assumes you are familiar with REST and web services.
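As a rough sketch with the official google-cloud-storage Python client (assumed installed and authenticated; the bucket and object names are placeholders), since the same operations are what an R client would call over the JSON/REST API:

from google.cloud import storage

bucket = storage.Client().bucket("my-bucket")  # placeholder bucket name

# Write: upload a local file as an object.
bucket.blob("username/002.txt").upload_from_filename("002.txt")

# Read: download an object to a local file.
bucket.blob("username/001.txt").download_to_filename("001.txt")

# Copy, move (copy then delete), and delete are single calls as well.
src = bucket.blob("username/001.txt")
bucket.copy_blob(src, bucket, "username/001-copy.txt")
src.delete()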