After a certain limit it should point to a different VM - nginx

I have a file system in which files are stored under a numeric ID (a SQL index), but my VM's disk is full and I can't shift my files to a different cloud or anything like that.
My file system URL will be
https://example.com/5374/randomstring.jpg
5374 is the file number, which is saved in the SQL DB, and a random string is generated for the file name.
What I'm planning to do is redirect with nginx. Right now I have files up to 56770 on one VM; if a user uploads, the new file will be saved on a different VM, and if a user wants to access 56771, nginx should point to that VM.

You will make your life easier by choosing the cutoff point yourself; it's not essential, but it will make the matching regular expression a lot more concise.
If you said 56000 and above was on VM2, then your regex is as simple as /(5[6-9][0-9]{3}|[6-9][0-9]{4}|[0-9]{6,})/, which covers 56000-59999, 60000-99999, and any ID with six or more digits.
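A minimal nginx sketch of that routing, assuming the new VM is reachable at a placeholder name like vm2.internal and the local files live under a placeholder root /var/www/files:

server {
    listen 80;
    server_name example.com;

    # IDs 56000-59999, 60000-99999, or with six or more digits live on VM2
    location ~ ^/(5[6-9][0-9]{3}|[6-9][0-9]{4}|[0-9]{6,})/ {
        proxy_pass http://vm2.internal;
    }

    # everything below the cutoff is still served from this VM
    location / {
        root /var/www/files;
    }
}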

Is it possible to set non-volatile wsrep_provider_options, pc.weight by command?

We are using MariaDB v10.5.15 with Galera-4 v26.4.11 clustering.
The cluster is in the weighted quorum mode so that the primary site has more votes than the non-primary. During a network outage, the primary site with more votes prevails and continues service without the other peer.
The system needs to undergo regular disaster recovery exercises, including switching the primary site to the other peer. So we need to change the weight assignments, for the design reasons explained above.
Within the MySQL client, we can set pc.weight dynamically with a command like set global wsrep_provider_options="pc.weight=2";. However, this command only changes the in-memory configuration, so if the host reboots, MariaDB will start again with the old value written in the configuration file /etc/my.cnf.d/server.cnf.
To make the new weight non-volatile, we need to edit the part of the configuration file shown below. That edit is error-prone because the file packs many other items into the same wsrep_provider_options line, with pc.weight in the middle.
[galera]
...
wsrep_provider_options="socket.ssl=true; socket.ssl_key=/etc/pki/galera/server-key.pem; socket.ssl_cert=/etc/pki/galera/server-cert.pem; socket.ssl_ca=/etc/pki/galera/ca-cert.pem; pc.weight=2"
...
I am wondering:
Is it possible to set the pc.weight non-volatile without editing the configuration files?
Otherwise, is it possible to separate pc.weight into another .cnf file while keeping the other items of wsrep_provider_options in the original file?
We highly appreciate your help and suggestions.
No, pc.weight cannot be made non-volatile without editing a configuration file.
If you put it into another configuration file and start the server with --defaults-extra-file=/path/to/other/file.cnf that would pick it up.
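For example, the extra file might look like the sketch below (the path is a placeholder). One caveat worth verifying on your version: wsrep_provider_options is a single string option and the last value read wins wholesale, so the extra file should repeat the full string with the new weight rather than pc.weight alone, or the SSL settings would be lost:

[galera]
# /path/to/other/file.cnf - full option string repeated, only pc.weight changed
wsrep_provider_options="socket.ssl=true; socket.ssl_key=/etc/pki/galera/server-key.pem; socket.ssl_cert=/etc/pki/galera/server-cert.pem; socket.ssl_ca=/etc/pki/galera/ca-cert.pem; pc.weight=2"

Then start the server with mysqld --defaults-extra-file=/path/to/other/file.cnf (or the mariadbd equivalent).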
Another option is to start another node, even an arbitrator node, on the secondary site during DR/DR testing. That node might need a weight greater than 2.
How does the primary site learn the weight of a node it hasn't seen before? I'm not sure, so be careful.
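If you go the arbitrator route, a hedged sketch of running garbd on the secondary site (the cluster name and node addresses are placeholders):

# join the cluster as a vote-only member with an explicit weight
garbd --address gcomm://10.0.0.1,10.0.0.2 \
      --group my_galera_cluster \
      --options "pc.weight=2"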

Where does OpenStack Swift store the rings?

Does anybody know where OpenStack Swift stores the "rings"? Is there a distributed algorithm, or is it just one table somewhere on some of the storage nodes with information about all (!) the physical object locations? I can't believe the latter, because from my understanding of object storage it should scale to exabytes, and that would need an enormous number of entries in such a table...
This page could not help me: http://docs.openstack.org/developer/swift/overview_ring.html
Thanks in advance for your help!
Ring Builder
The rings are built and managed manually by a utility called the ring-builder. The ring-builder assigns partitions to devices and writes an optimized Python structure to a gzipped, serialized file on disk for shipping out to the servers. The server processes just check the modification time of the file occasionally and reload their in-memory copies of the ring structure as needed.
So the ring is stored on all servers.
If you were asking about the path of the ring.gz files, they are under /etc/swift by default.
Also, these ring files can be regenerated from the .builder files when a rebalance is run.
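For completeness, a rough sketch of the ring-builder workflow that produces those files (the port, device name, and parameters are illustrative, not prescriptive):

# create an object ring: 2^10 partitions, 3 replicas, 1 hour between part moves
swift-ring-builder object.builder create 10 3 1
# add a device on a storage node (region 1, zone 1) with weight 100
swift-ring-builder object.builder add r1z1-10.0.0.1:6200/sdb1 100
# assign partitions and write object.ring.gz, then distribute it to all servers
swift-ring-builder object.builder rebalance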

AWS Auto Scaling Launch Configuration Encrypted EBS Cloud Formation Example

I am creating a CloudFormation script which will have an ELB. In the Auto Scaling launch configuration, I want to add an encrypted EBS volume, but couldn't find an encrypted property within BlockDeviceMapping. I need to encrypt the volume. How can I attach an encrypted EBS volume to an EC2 instance through an Auto Scaling launch configuration?
There is no such property for some strange reason when using launch configurations; however, it is there when using block device mappings with simple EC2 instances. See
launchconfig-blockdev vs ec2-blockdev
So you'll either have to use simple instances instead of autoscaling groups, or you can try this workaround:
SnapshotIds are accepted for launchconf blockdev too, and as stated here "Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted."
Create a snapshot from an encrypted empty EBS volume and use it in the CloudFormation template. If your template should work in multiple regions then of course you'll have to create the snapshot in every region and use a Mapping in the template.
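A hedged sketch of that workaround in a template; the snapshot ID, AMI, and sizes are placeholders for your own values:

LaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-12345678                        # placeholder AMI
    InstanceType: t2.micro
    BlockDeviceMappings:
      - DeviceName: /dev/xvdf
        Ebs:
          SnapshotId: snap-0123456789abcdef0     # snapshot of an encrypted volume
          VolumeSize: 100
          VolumeType: gp2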
As Marton says, there is no such property (unfortunately it often takes a while for CloudFormation to catch up with the main APIs).
Normally each encrypted volume you create will have a different key. However, when using the workaround mentioned (of using an encrypted snapshot) the resulting encrypted volumes will inherit the encryption key from the snapshot and all be the same.
From a cryptography point of view this is a bad idea as you potentially have multiple, different volumes and snapshots with the same key. If an attacker has access to all of these then he can potentially use differences to infer information about the key more easily.
An alternative is to write a script that creates and attaches a new encrypted volume at instance boot time. This is fairly easy to do. You'll need to give the instance permissions to create and attach volumes, and either have the AWS CLI tool installed or a library for your preferred scripting language. Once you have that you can, from the instance that is booting, create a volume and attach it.
You can find a starting point for such a script here: https://github.com/guardian/machine-images/blob/master/packer/resources/features/ebs/add-encrypted.sh
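For illustration, a minimal sketch of such a boot-time script, assuming the AWS CLI is installed and the instance profile allows ec2:CreateVolume and ec2:AttachVolume (size, type, and device name are placeholders):

#!/bin/bash
# discover where we are from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
REGION=${AZ%?}   # strip the trailing zone letter to get the region

# create a fresh encrypted volume; unlike the snapshot trick, each gets its own key
VOLUME_ID=$(aws ec2 create-volume --region "$REGION" --availability-zone "$AZ" \
    --size 100 --volume-type gp2 --encrypted \
    --query VolumeId --output text)

# wait until it is ready, then attach it to this instance
aws ec2 wait volume-available --region "$REGION" --volume-ids "$VOLUME_ID"
aws ec2 attach-volume --region "$REGION" --volume-id "$VOLUME_ID" \
    --instance-id "$INSTANCE_ID" --device /dev/xvdf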
There is an AutoScaling EBS Block Device type which provides the "Encrypted" option:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig-blockdev-template.html
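In template form that might look like this (device name and size are placeholders):

BlockDeviceMappings:
  - DeviceName: /dev/xvdf
    Ebs:
      Encrypted: true
      VolumeSize: 100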
Hope this helps!
AWS recently announced Default Encryption for New EBS Volumes. You can enable this per region via
EC2 Console > Settings > Always encrypt new EBS volumes
https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/

How to migrate Wordpress between Compute Engine instances

I have recently created a very small Google Compute Engine instance, naively thinking it's one of those easily scalable things Google people keep raving about.
I used the quick deployment feature of Wordpress and it all installed itself nicely, so I started configuring and adding data etc.
However, I then found out that I can't scale an existing instance (i.e. it won't allow me to change the instance type to a bigger one; I don't get why not, but there you go), so it looks like I need to find a way to migrate my WordPress installation to a new instance.
Will I simply be able to create a new instance and point it at the persistent disk my small instance currently uses, et voila, Bob's your uncle?
Or do I need to manually get the files and MySql data off the first instance and re-import into an empty new instance?
What's the easiest way?
Any advice or helpful links would be appreciated.
Thanks.
P.S.: By the way, should I try to use Google Cloud SQL instead of a local MySQL installation?
In order to upgrade your VM:
1. Access the VM's settings in the Developers Console (your project -> Compute -> Compute Engine -> VM instances -> click on the VM's name).
2. Scroll down to the "Disks" section, and un-check "Delete boot disk when instance is deleted".
3. Delete the VM in question. Take note that the disk, named after the instance, will remain.
4. Create a new VM, selecting "Existing disk" under Boot disk - Boot source. In the next box down, select the disk from point 3 above, as well as a bigger machine type.
The resulting new instance will use the existing disk from the old one, with improved hardware / performance.
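The same flow can be scripted with gcloud; a hedged sketch, with the instance name, zone, and machine type as placeholders:

# delete the instance but keep its boot disk
gcloud compute instances delete wordpress-vm --zone us-central1-a --keep-disks boot

# recreate it on a bigger machine type, booting from the surviving disk
gcloud compute instances create wordpress-vm \
    --zone us-central1-a \
    --machine-type n1-standard-2 \
    --disk name=wordpress-vm,boot=yes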
As for using Cloud SQL in lieu of a VM-installed database, it's perfectly feasible, and it allows you to adjust the Cloud SQL instance to match your actual use. A few considerations when setting up this kind of instance (a gcloud sketch follows):
limit the IPs allowed to connect to your Cloud SQL instance to your frontend's IP, and perhaps the workstation IP or subnet from which you maintain the database;
configure Cloud SQL to use SSL certificates.
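A hedged gcloud sketch of those two settings (the instance name, tier, and IP are placeholders):

# create the instance, only allowing the frontend's IP to connect
gcloud sql instances create wp-db \
    --tier db-n1-standard-1 \
    --authorized-networks 203.0.113.10/32

# require SSL certificates for all connections
gcloud sql instances patch wp-db --require-ssl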
Sammy's answer covers the important stuff; I just wanted to clarify how your files are arranged on the two disks that are attached to your instance:
The data disk contains /var/www/, which is all of the WordPress files. It's mounted on the instance at /wordpress.
The boot disk contains everything else, including the MySQL database that was created for the Wordpress installation.

Copy / Move One SAP Client From One System to another System

We have one SAP system in the US (let's call it TKIJVPL1); this system has an SAP client 241. We have another SAP system in Germany (let's call it Lockweiler).
We need to move this client 241 from our TKIJVPL1 server to this new server.
Can I simply use transaction SCC8? It says client export, but when I look at the options, it shows:
Source client: 241 (which is good),
Profile Name: SAP_ALL (which is also good, as I need all data),
but under Target System all that comes up is PL1 / QL1.
What is the easiest way to export one client from one system to another system in SAP system?
Or rather, I would export it to a hard drive, put these files on a DVD, and mail them to Germany. But I do not see a transaction to export to local disk.
I got it working.
You still need to use transaction SCC8; a client export can take 8-20+ hours. While it is exporting, you need to look at the folder /usr/sap/trans/data, where SAP will create some files with the system name as the extension. In my case it created files like RT03292.PL1 and a few others.
Now one needs to take these files to do an import...
Here are some good links pertaining to this:
http://basissap.blogspot.com/2008/05/what-is-client-copy.html
http://forums.sdn.sap.com/thread.jspa?threadID=1713405&tstart=0
Make sure you look in /usr/sap/trans/data for the data files (they usually start with R).
Also look in /usr/sap/trans/cofiles for the control files (they usually start with K).
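On the import side, a hypothetical sketch (the transport name PL1KT03292 follows from the RT03292.PL1 data file above; the target SID GER and the tp profile path are placeholders):

# copy the export files into the target system's transport directory first:
#   RT03292.PL1 -> /usr/sap/trans/data
#   KT03292.PL1 -> /usr/sap/trans/cofiles
# then queue and import the transport into the target client:
tp addtobuffer PL1KT03292 GER pf=/usr/sap/trans/bin/TP_DOMAIN_GER.PFL
tp import PL1KT03292 GER client=241 pf=/usr/sap/trans/bin/TP_DOMAIN_GER.PFL
# finish with post-import processing via transaction SCC7 in the target client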
