Undeleted partition - file-recovery

I had 4 partitions on my laptop: C (NTFS, 118 GB, for Win7), D (NTFS, 211 GB), E (NTFS, 236 GB), and a 26 GB ext4 partition.
Unfortunately I accidentally deleted all the partitions and then created a single 230 GB partition. Only afterwards did I realize my mistake.
I used Paragon software to recover my partitions, but unfortunately it could only find E (NTFS, 236 GB).
What should I do now to recover the D drive?

It's solved. First I deleted the newly created partition and ...
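For anyone in the same situation, a common free route (not necessarily what was used here) is TestDisk, which scans the disk for lost partitions and can rewrite the partition table; /dev/sda below is a placeholder:
# Run from a live USB/CD rather than the installed system
sudo testdisk /dev/sda
# In the menus: choose the partition table type (Intel/MSDOS or EFI GPT),
# then Analyse -> Quick Search (use Deeper Search if nothing shows up),
# select the lost partition(s), and Write to restore the table, then reboot.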

MariaDB on RAID 10 NVMe SSDs: slow read speeds

It would seem to me that we have a bottleneck we just can't seem to get over.
We have a setup with 4 NVMe drives in RAID 10, and we are using MariaDB 10.4. We have indexes. The workload we have will be IO-bound 99% of the time; there is no way around that fact.
What I have seen while watching the performance dashboard in MySQL Workbench is that both the SATA SSD and the NVMe SSD read at about 100 MB/s for the same data set. If I am searching through 200M rows (or pulling 200M), I would think that the InnoDB disk reads would go faster than 100 MB/s. These drives should be capable of reading 3 GB/s, so I would at least expect to see something like 500 MB/s. The reality is that I am seeing the exact same speed on the NVMe that I see on the SATA SSD.
So the question I have is: how do I get these disks to be fully utilized?
Here are the only config settings outside of replication:
sql_mode = 'NO_ENGINE_SUBSTITUTION'
max_allowed_packet = 256M
innodb_file_per_table
innodb_buffer_pool_size = 100G
innodb_log_file_size = 128M
innodb_write_io_threads = 16  # Not sure these two lines actually do anything
innodb_read_io_threads = 16
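As a sanity check (not part of the original post), a direct fio run shows what the array can sustain outside of MariaDB; the file path, size, and job count below are placeholders to adjust:
fio --name=randread --filename=/var/lib/mysql/fio.test --rw=randread \
    --bs=16k --size=8G --numjobs=4 --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting
# bs=16k matches the InnoDB page size; delete fio.test when finished.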
"IO bound, there is no way around that fact"
Unless you are very confident in the suitability of your indexes, this seems a little presumptuous. Assuming you're right, it would imply a 100% write workload, or a data size orders of magnitude larger than the available RAM with a uniform distribution of small accesses.
innodb_io_capacity imposes a default limit, and your hardware is capable of more.
Also, if you are reading from disk so frequently, your innodb_buffer_pool_size isn't sufficient.
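A rough sketch of the checks and changes implied above; the io_capacity values are illustrative guesses for NVMe, not tested recommendations:
# How often do reads miss the buffer pool? A high Innodb_buffer_pool_reads count
# relative to Innodb_buffer_pool_read_requests means the working set doesn't fit.
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
# Raise the background IO ceiling from its default of 200:
mysql -e "SET GLOBAL innodb_io_capacity = 2000;"
mysql -e "SET GLOBAL innodb_io_capacity_max = 4000;"
# Add the same settings to my.cnf so they survive a restart.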

Checking latency between client and host

A user is complaining that uploading a 1.2 GB file from Shanghai, China to our data center in Tokyo, Japan takes more than 1 hour. But when I try to upload the file from the US West it is faster and takes about 1 minute. I am thinking it might be a latency issue; the user also has a bandwidth of 16-17 Mbps.
How do I perform a latency test? I could ask the user to run a latency test against my servers and conclude whether it is a latency issue.
I know it's a rather generic question, but is there any way we can improve this upload performance?
Try using ping and compare the time column from your own results and your customer's.
Here is an example of ping results from 2 different servers.
The first one shows 17.3 ms:
64 bytes from 173.194.222.113: icmp_seq=1 ttl=49 time=17.3 ms
The second one, 9.35 ms:
64 bytes from 216.58.217.206: icmp_req=1 ttl=58 time=9.35 ms
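Beyond a plain ping, a couple of other checks can separate latency from throughput; the hostname below is a placeholder:
# A longer sample of round-trip times:
ping -c 20 upload.example.com
# Per-hop latency and packet loss along the path:
mtr -rwc 100 upload.example.com
# Achievable bandwidth, with an iperf3 server (iperf3 -s) running on your side:
iperf3 -c upload.example.com -t 30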

Why is DataStax OpsCenter eating so much CPU?

Environment:
machines: 2.1 GHz Xeon, 128 GB RAM, 32 cores
OS: CentOS 7.2.1511
Cassandra version: 2.1.15
OpsCenter version: 5.2.5
3 keyspaces: Opscenter (3 tables), OpsCenter (10 tables), and the application's keyspace (485 tables)
2 datacenters: 1 for Cassandra (5 machines) and another, DCOPS, to store the OpsCenter data (1 machine).
Right now the agents on the nodes consume on average ~1300% CPU (out of 3200% available), while the only data being transacted is ~1500 writes/s on the application keyspace.
Is there any relation between the number of tables and OpsCenter? Is it eating a lot of CPU because the agents are trying to write metrics for too many tables, or is it some kind of bug?
Note: the same behaviour occurred on the previous OpsCenter version, 5.2.4. That is why I first tried upgrading OpsCenter to the newest version available.
From the OpsCenter 5.2.5 release notes:
"Fixed an issue with high CPU usage by agents on some cluster topologies. (OPSC-6045)"
Any help/opinion much appreciated.
Thank you.
Observing the specific agent's PID with the awesome tool you provided, Chris, I noticed that heap utilisation was constantly above 90%, which triggered a lot of GC activity with huge GC pauses of almost 1 second. During those pauses I suspect the polling threads had to wait and block, hitting my CPU a lot. Anyway, I am not a specialist in this area.
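For reference, a plain JDK way to watch the same heap and GC behaviour (the agent PID below is a placeholder):
# Heap occupancy and GC counts/times, sampled every second:
jstat -gcutil <agent_pid> 1000
# Which agent threads are burning the CPU:
top -H -p <agent_pid>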
I decided to enlarge the agent's heap from the default 128 MB to a nicer value of 512 MB, and all the GC pressure went away; thread allocation is now behaving nicely.
Overall, CPU utilization for the OpsCenter agent dropped from 40-50% down to 1-2%. I can live with 1-2%, since I know for sure that CPU is being consumed by the JMX metrics.
So my advice is to edit the file:
datastax-agent-env.sh
and change the default 128 MB Xmx value to:
-Xmx512M
then save the file, restart the agent, and monitor for a while.
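On a package install that change looks roughly like this (the conf path and the exact default value may differ on your system):
sudo sed -i.bak 's/-Xmx128M/-Xmx512M/' /var/lib/datastax-agent/conf/datastax-agent-env.sh
sudo service datastax-agent restart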
http://s000.tinyupload.com/?file_id=71546243887469218237
Thank you again Chris.
Hope this will help other folks.

I can see Cassandra is CPU-bound for write-heavy workloads, but is it network-bound as well?

SETUP 1
3-node Cassandra cluster. Each node is on a different machine with 4 cores, 32 GB RAM, an 800 GB SSD, and 1 Gbit/s (= 125 MB/s) of network bandwidth.
2 cassandra-stress client machines with the exact same configuration as above.
Experiment 1: Ran one client on one machine, creating anywhere from 1 to 1000 threads with a consistency level of QUORUM. The max network throughput on a Cassandra node was around 8 MB/s, with CPU usage of 85-90 percent on both the Cassandra node and the client.
Experiment 2: Ran two clients on two different machines, creating anywhere from 1 to 1000 threads with a consistency level of QUORUM. The max network throughput on a Cassandra node was around 12 MB/s, with CPU usage of 90 percent on both the Cassandra node and both clients.
I did not see double the throughput even though my clients were running on two different machines, but I can understand that the Cassandra node is CPU-bound and that is probably why. That led me to Setup 2.
SETUP 2
3-node Cassandra cluster. Each node is on a different machine with 8 cores, 32 GB RAM, an 800 GB SSD, and 1 Gbit/s (= 125 MB/s) of network bandwidth.
2 cassandra-stress client machines with 4 cores, 32 GB RAM, an 800 GB SSD, and 1 Gbit/s (= 125 MB/s) of network bandwidth.
Experiment 3: Ran one client on one machine, creating anywhere from 1 to 1000 threads with a consistency level of QUORUM. The max network throughput on a Cassandra node was around 18 MB/s, with CPU usage of 65-70 percent on the Cassandra node and >90% on the client node.
Experiment 4: Ran two clients on two different machines, creating anywhere from 1 to 1000 threads with a consistency level of QUORUM. The max network throughput on a Cassandra node was around 22 MB/s, with CPU usage of <=75 percent on the Cassandra node and >90% on both client nodes.
So the question here is: with one client node I was able to push 18 MB/s of network throughput, yet with two client nodes running on two different machines I was only able to push a peak of 22 MB/s. Why is that the case, even though this time the CPU usage on the Cassandra node is only around 65-70 percent on an 8-core machine?
Note: I stopped Cassandra and ran a tool called iperf3 between two different EC2 machines, and I measured a network bandwidth of 118 MB/s. I am converting everything into bytes rather than bits to avoid any confusion.
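For context, the runs above were along these lines; the node address, operation count, and thread count are placeholders, not the exact commands used:
cassandra-stress write n=1000000 cl=QUORUM -rate threads=1000 -node 10.0.0.1
# Raw network baseline, measured separately:
iperf3 -s                      # on one machine
iperf3 -c 10.0.0.1 -t 30       # on the other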

Cached memory on a Unix machine continuously grows

On my Ubuntu 12 VPS I am running a full Bitcoin node. When I first start it up it uses around 700 MB of memory. If I come back 24 hours later, free -m will look something like this:
             total       used       free     shared    buffers     cached
Mem:          4002       3881        120          0         32       2635
-/+ buffers/cache:        1214       2787
Swap:          255          0        255
But then if I clear "cached" using
echo 3 > /proc/sys/vm/drop_caches
and then do free -m again:
             total       used       free     shared    buffers     cached
Mem:          4002       1260       2742          0          1         88
-/+ buffers/cache:        1170       2831
Swap:          255          0        255
You can see the cached column clears and I have way more free memory than it looked like before.
I have some questions:
What is this cached number?
My guess is that it's file data being cached in RAM for quicker access than re-reading it from disk.
Is it okay to let it grow and use all my free memory?
Will other processes that need memory be able to evict the cached memory?
If not, should I routinely clear it using the echo 3 command I showed earlier?
Linux tries to utilize system resources as efficiently as possible. It caches data to reduce the number of I/O operations, thereby speeding up the system. Metadata is stored in the buffers, and the actual file data is stored in the cache.
When you clear the cache, any data not yet flushed to disk could cause problems, so you should run
sync
before clearing the cache; that makes the system write pending data to secondary storage first and reduces the chance of errors.
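A minimal sketch of that sequence (normally unnecessary, since the kernel evicts cached pages on its own when applications need the memory):
sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # 1 = page cache, 2 = dentries/inodes, 3 = both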
