Extending a Logical Volume on RHEL7

I am using a RHEL7 box created by our in-house VM-provisioning system.
It creates logical volumes for the likes of /var, /home, swap, etc. using two pools of space. I was trying to follow the examples and descriptions of how to add some of that unallocated space to a volume from https://www.tecmint.com/extend-and-reduce-lvms-in-linux/, and am stuck getting resize2fs to behave as expected.
Using lvdisplay, I found the appropriate volume:
--- Logical volume ---
LV Path /dev/rootvg/lvvar
LV Name lvvar
VG Name rootvg
LV UUID WGkYI1-WG0S-uiXS-ziQQ-4Pbe-rv1H-0HyA2a
LV Write Access read/write
LV Creation host, time localhost, 2018-06-05 16:10:01 -0400
LV Status available
# open 1
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:5
I found the associated volume group with vgdisplay:
--- Volume group ---
VG Name rootvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 7
Open LV 7
Max PV 0
Cur PV 1
Act PV 1
VG Size <49.00 GiB
PE Size 4.00 MiB
Total PE 12543
Alloc PE / Size 5120 / 20.00 GiB
Free PE / Size 7423 / <29.00 GiB
VG UUID 5VkgVi-oZ56-KqMk-6vmf-ttNo-EMHG-quotwk
I decided to take 4 GiB from the free PEs and extended the space with:
lvextend -l +1024 /dev/rootvg/lvvar
which responded as expected:
Size of logical volume rootvg/lvvar changed from 2.00 GiB (512 extents) to 6.00 GiB (1536 extents).
Logical volume rootvg/lvvar successfully resized.
But when I try to use resize2fs - I get this:
# resize2fs /dev/mapper/rootvg-lvvar
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/rootvg-lvvar
I'm sure it's something dumb I'm missing - can anyone push me in the right direction here?

Use xfs_growfs instead. On RHEL 7 the default filesystem is XFS (you can confirm with df -Th /var), and resize2fs only works on ext2/ext3/ext4 filesystems, which is why it reports a bad magic number. xfs_growfs grows a mounted XFS filesystem to fill the enlarged logical volume:
xfs_growfs /dev/mapper/rootvg-lvvar

Fetching Toll costs

I'm trying to fetch toll cost data (the cost and TollCost groups from the JSON response). I'm making the call to https://route.api.here.com/routing/7.2/calculateroute.json with the following parameters:
alternatives: 0
currency: EUR
rollup: none,total,country
mode: fastest;truck;traffic:disabled
driver_cost: 0
vehicle_cost: 0.46
vehicleCostOnFerry: 0
routeAttributes: none,no,wp,lg,sc
jsonAttributes: 41
maneuvreAttributes: none
linkAttributes: none,sh
legAttributes: none,li
cost_optimize: 1
metricsystem: metric
truckRestrictionPenalty: soft
tollVehicleType: 3
trailerType: 2
trailersCount: 1
vehicleNumberAxles: 2
trailerNumberAxles: 3
hybrid: 0
emissionType: 6
fuelType: diesel
height: 4
length: 16.55
width: 2.55
trailerHeight: 4
vehicleWeight: 16
limitedWeight: 40
weightperaxle: 10
disabledEquipped: 0
passengersCount: 2
tiresCount: 12
commercial: 1
detail: 1
heightAbove1stAxle: 3.5
waypoint0: geo!stopOver!46.8036700000,19.3648579000;;null
waypoint1: geo!stopOver!48.1872046178,14.0647109247;;null
waypoint2: geo!stopOver!48.0690426000,16.3346156000;;null
Based on the documentation (https://developer.here.com/documentation/fleet-telematics/dev_guide/topics/calculation-considerations.html), it should be enough to add the tollVehicleType parameter.
For sure I'm missing something, but would be very grateful for any support. Thank you.
If you have a problem, don't throw 157 different arguments at an API; use only the minimum required ones to get a result, i.e. a minimal working, reproducible example. Then add the additional arguments.
Also try to rule out all other problematic factors. In this example I just pasted the GET request into the address bar of the browser and looked at the output,
so there is no possible interference of any kind from the programming language.
I just did this: registered for the HERE API, got the API key, looked at the API (I had never done anything with HERE before), and four minutes later I had the toll cost. :-)
Please replace the XXXXXXXXXXXXXXXXXXXXXX with your own key:
https://fleet.ls.hereapi.com/2/calculateroute.json?apiKey=XXXXXXXXXXXXXXXXXXXXXX
&mode=fastest;truck;traffic:disabled
&tollVehicleType=3
&waypoint0=50.10992,8.69030
&waypoint1=50.00658,8.29096
and look at what it returns at the end of about five pages of JSON (there is a lot of JSON before it):
"cost":{"totalCost":"4.95","currency":"EUR","details":{"driverCost":"0.0","vehicleCost":"0.0",
"tollCost":"4.95","optionalValue":"0.0"}},
"tollCost":{"onError":false}}],"warnings":[{"message":"No vehicle height specified, assuming 380 cm","code":1},{"message":"No vehicle weight specified, assuming 11000 kg","code":1},{"message":"No vehicle total weight specified, assuming 11000 kg","code":1},{"message":"No vehicle height above first axle specified, assuming 380 cm","code":1},{"message":"No vehicle total length specified, assuming 1000 cm","code":1}],"language":"en-us"}}
HERE are your toll costs :-)
(Note: keep in mind that there may well be no toll for a given vehicle type and country!
I also got a cost result with a car, but it was 0 because there is no toll for cars in Germany (only for trucks), which is where the example waypoints are located.)
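If you then want to issue the same minimal request from R rather than the browser, a rough sketch along these lines should work. httr and jsonlite are assumed, the key is a placeholder, and how you drill into the parsed list is up to you:
library(httr)
library(jsonlite)

# Placeholder key - replace with your own, as above
api_key <- "XXXXXXXXXXXXXXXXXXXXXX"

resp <- GET(
  "https://fleet.ls.hereapi.com/2/calculateroute.json",
  query = list(
    apiKey          = api_key,
    mode            = "fastest;truck;traffic:disabled",
    tollVehicleType = 3,
    waypoint0       = "50.10992,8.69030",
    waypoint1       = "50.00658,8.29096"
  )
)

# Parse the JSON and inspect it to locate the cost / tollCost blocks near the end
parsed <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
str(parsed, max.level = 3)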

Combining and Filtering Daily Data Sets in R

I am currently trying to find the most efficient way to use a large collection of daily transaction data sets. Here is an example of one day's data set:
Date Time Name Price Quantity
2005/01/03 012200 Dave 1.40 1
2005/01/03 012300 Anne 1.35 2
2005/01/03 015500 Steve 1.54 1
2005/01/03 021500 Dave 1.44 15
2005/01/03 022100 James 1.70 7
In the real data, there are ~40,000 rows per day, and each day is a separate comma-delimited .txt file. The data go from 2005 all the way to today. I am only interested in "Dave" and "Anne" (as well as 98 other names), but there are thousands of other people in the set. Some days may have multiple entries for a given person; some days may have none. Since there is a large amount of data, what would be the most efficient way of extracting and combining all of the data for "Anne," "Dave," and the other 98 individuals (ideally into 100 separate data sets)?
The two ways I can think of are:
1) Filtering each day to only "Dave" or "Anne" and then appending to one big data set.
2) Appending all days to one big data set and then filtering to "Dave" or "Anne."
Which method would give me the most efficient results? And is there a better method that I can't think of?
Thank you for the help!
Andy
I believe the question can be answered analytically.
Workflow
As @Frank pointed out, the method may depend on the processing requirements:
Is this a one-time exercise?
Then the feasibility of both methods can be further investigated.
Is this a repetitive task where each new day's transaction data has to be added?
Then method 2 might be less efficient if it processes the whole bunch of data anew at every repetition.
Memory requirement
R keeps all data in memory (unless one of the special "big memory" packages is used). So, one of the constraints is the available memory of the computer system used for this task.
As already pointed out in brittenb's comment there are 12 years of daily data files summing up to a total of 12 * 365 = 4380 files. Each file contains about 40 k rows.
The 5 rows sample data set provided in the question can be used to create a 40 k rows dummy file by replication:
library(data.table)
DT <- fread(
"Date Time Name Price Quantity
2005/01/03 012200 Dave 1.40 1
2005/01/03 012300 Anne 1.35 2
2005/01/03 015500 Steve 1.54 1
2005/01/03 021500 Dave 1.44 15
2005/01/03 022100 James 1.70 7 ",
colClasses = c(Time = "character")
)
DT40k <- rbindlist(replicate(8000L, DT, simplify = FALSE))
setkey(DT40k, Name)  # sort by Name; this produces the "sorted" attribute shown below
str(DT40k)
Classes ‘data.table’ and 'data.frame': 40000 obs. of 5 variables:
$ Date : chr "2005/01/03" "2005/01/03" "2005/01/03" "2005/01/03" ...
$ Time : chr "012300" "012300" "012300" "012300" ...
$ Name : chr "Anne" "Anne" "Anne" "Anne" ...
$ Price : num 1.35 1.35 1.35 1.35 1.35 1.35 1.35 1.35 1.35 1.35 ...
$ Quantity: int 2 2 2 2 2 2 2 2 2 2 ...
- attr(*, ".internal.selfref")=<externalptr>
- attr(*, "sorted")= chr "Name"
print(object.size(DT40k), units = "Mb")
1.4 Mb
For method 2, at least 5.9 Gb (4380 * 1.4 Mb) of memory is required to hold all rows (unfiltered) in one object.
If your computer system is limited in memory, then method 1 might be the way to go. The OP has mentioned that he is only interested in keeping the transaction data of just 100 names out of several thousand. So after filtering, the data volume might ultimately be reduced to 1% to 10% of the original volume, i.e., to 60 Mb to 600 Mb.
Speed
Disk I/O is usually the performance bottleneck. With the fast I/O functions included in the data.table package we can simulate the time needed for reading all 4380 files.
# write file with 40 k rows
fwrite(DT40k, "DT40k.csv")
# measure time to read the file
microbenchmark::microbenchmark(
fread = tmp <- fread("DT40k.csv", colClasses = c(Time = "character"))
)
Unit: milliseconds
expr min lq mean median uq max neval
fread 34.73596 35.43184 36.90111 36.05523 37.14814 52.167 100
So, reading all 4380 files should take less than 3 minutes.
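For completeness, a minimal sketch of method 1 (filter each file while reading, then combine) could look like the following, assuming the files live in one directory and share the layout above; the path, file pattern, and name vector are placeholders:
library(data.table)

# Hypothetical inputs: adjust the directory, pattern and names to your data
files <- list.files("daily_data", pattern = "\\.txt$", full.names = TRUE)
keep  <- c("Dave", "Anne")  # ... plus the other 98 names of interest

# Read each daily file, keep only rows for the selected names, then combine
filtered <- rbindlist(lapply(files, function(f) {
  fread(f, colClasses = c(Time = "character"))[Name %in% keep]
}))

# Optionally split into one data set per person, as asked in the question
per_person <- split(filtered, by = "Name")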
IMO, if storage space is not an issue, you should go with option 2. This gives you a lot more flexibility in the long run (say you want to add / remove names in the future).
It's always easier to trim the data than to regret not collecting it. The only reason I would go with option 1 is if storage or speed is a bottleneck in your workflow.

R memory issues for extremely large dataset

I need to perform regression analysis on a 3.5 GB mixed (numerical and categorical) dataset in CSV format, consisting of 1.8 million records and 1000 variables/columns, mainly containing 0s and 1s plus a few categorical and numeric values. (See the snapshot of the data below.)
I was initially supposed to perform clustering directly on this dataset, but I kept getting a lot of memory-related errors despite running it on a remote virtual machine (64-bit Windows Server 2012 R2) with 64 GB of RAM. So I thought of doing some factor analysis to find correlations between the variables, so that I can reduce the number of columns to 600 - 700 (as much as possible). Any other ideas are appreciated, as I am very naïve to data analysis.
I have tried various packages like ff, bigmemory, biganalytics, biglm, FactoMineR, Matrix, etc., but with no success. I have always encountered "cannot allocate vector of size ...", "reached maximum allocation of size 65535MB", or some other error.
Can you let me know of a solution to this? I feel memory should not be a problem, as 64 GB of RAM should suffice.
Snapshot of dataset:
SEX AGE Adm Adm LOS DRG DRG RW Total DC Disp Mortality AAADXRUP
M 17 PSY 291 887 0.8189 31185 PDFU 0 0
M 57 PSY ER 31 884 0.9529 54960.4 SNF 0 0
F 23 AC PH 3 775 0.5283 9497.7 HOM 0 0
F 74 AC PH 3 470 2.0866 23020.3 SNF 0 0
There are additional columns after Mortality mostly containing 0s or 1s

readPDF (tm package) in R

I tried to read an online PDF document in R. I used the readPDF function. My script goes like this:
safex <- readPDF(PdftotextOptions='-layout')(elem=list(uri='C:/Users/FCG/Desktop/NoteF7000.pdf'),language='en',id='id1')
R showed a message that the command being run has status 309. I tried different pdftotext options; however, I get the same message, and the text file created has no content.
Can anyone read this PDF?
readPDF has bugs and probably isn't worth bothering with (check out this well-documented struggle with it).
Assuming that...
you've got xpdf installed (see here for details)
your PATHs are all in order (see here for details of how to do that) and you've restarted your computer.
Then you might be better off avoiding readPDF and instead using this workaround:
system(paste('"C:/Program Files/xpdf/pdftotext.exe"',
'"C:/Users/FCG/Desktop/NoteF7000.pdf"'), wait=FALSE)
And then read the text file into R like so...
require(tm)
mycorpus <- Corpus(URISource("C:/Users/FCG/Desktop/NoteF7001.txt"))
And have a look to confirm that it went well:
inspect(mycorpus)
A corpus with 1 text document
The metadata consists of 2 tag-value pairs and a data frame
Available tags are:
create_date creator
Available variables in the data frame are:
MetaID
[[1]]
Market Notice
Number: Date F7001 08 May 2013
New IDX SSF (EWJG) The following new IDX SSF contract will be added to the list and will be available for trade today.
Summary Contract Specifications Contract Code Underlying Instrument Bloomberg Code ISIN Code EWJG EWJG IShares MSCI Japan Index Fund (US) EWJ US EQUITY US4642868487 1 (R1 per point)
Contract Size / Nominal
Expiry Dates & Times
10am New York Time; 14 Jun 2013 / 16 Sep 2013
Underlying Currency Quotations Minimum Price Movement (ZAR) Underlying Reference Price
USD/ZAR Bloomberg Code (USDZAR Currency) Price per underlying share to two decimals. R0.01 (0.01 in the share price)
4pm underlying spot level as captured by the JSE.
Currency Reference Price
The same method as the one utilized for the expiry of standard currency futures on standard quarterly SAFEX expiry dates.
JSE Limited Registration Number: 2005/022939/06 One Exchange Square, Gwen Lane, Sandown, South Africa. Private Bag X991174, Sandton, 2146, South Africa. Telephone: +27 11 520 7000, Facsimile: +27 11 520 8584, www.jse.co.za
Executive Director: NF Newton-King (CEO), A Takoordeen (CFO) Non-Executive Directors: HJ Borkum (Chairman), AD Botha, MR Johnston, DM Lawrence, A Mazwai, Dr. MA Matooane , NP Mnxasana, NS Nematswerani, N Nyembezi-Heita, N Payne Alternate Directors: JH Burke, LV Parsons
Member of the World Federation of Exchanges
Company Secretary: GC Clarke
Settlement Method
Cash Settled
-
Clearing House Fees -
On-screen IDX Futures Trading: o 1 BP for Taker (Aggressor) o Zero Booking Fees for Maker (Passive) o No Cap o Floor of 0.01 Reported IDX Futures Trades o 1.75 BP for both buyer and seller o No Cap o Floor of 0.01
Initial Margin Class Spread Margin V.S.R. Expiry Date
R 10.00 R 5.00 3.5 14/06/2013, 16/09/2013
The above instrument has been designated as "Foreign" by the South African Reserve Bank
Should you have any queries regarding IDX Single Stock Futures, please contact the IDX team on 011 520-7399 or idx@jse.co.za
Graham Smale Director: Bonds and Financial Derivatives Tel: +27 11 520 7831 Fax: +27 11 520 8831 E-mail: grahams@jse.co.za
Distributed by the Company Secretariat +27 11 520 7346
Page 2 of 2

Solaris 10 i386 vmstat giving more free than swap

How come, when running vmstat on Solaris 10 i386, I get more free memory than swap? Isn't free a portion of swap which is available?
$ vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 -- -- in sy cs us sy id
1 0 0 7727088 17137388 37 303 1 0 0 0 0 -0 4 0 0 7247 7414 8122 4 1 95
No. Free RAM represents the part of RAM that is immediately available to use, while free swap represents the part of virtual memory which is neither allocated nor reserved. Reserved memory doesn't physically use any storage (RAM or disk).
Have a look at the swap -s output for details.
