mariadb.service failed because a fatal signal was delivered - MariaDB

My database suddenly crashed. When I run the command service mariadb restart, I get the following error:
Job for mariadb.service failed because a fatal signal was delivered to the control process. See "systemctl status mariadb.service" and "journalctl -xe" for details.
After restarting my server, I ran journalctl -xe and got this output:
[root@mumbai algotradingtoolx]# journalctl -xe
Sep 13 13:28:32 mumbai mysqld[19463]: The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
Sep 13 13:28:32 mumbai mysqld[19463]: information that should help you find out what is causing the crash.
Sep 13 13:28:32 mumbai mysqld[19463]: Writing a core file...
Sep 13 13:28:32 mumbai mysqld[19463]: Working directory at /var/lib/mysql
Sep 13 13:28:32 mumbai mysqld[19463]: Resource Limits:
Sep 13 13:28:32 mumbai mysqld[19463]: Limit Soft Limit Hard Limit Units
Sep 13 13:28:32 mumbai mysqld[19463]: Max cpu time unlimited unlimited seconds
Sep 13 13:28:32 mumbai mysqld[19463]: Max file size unlimited unlimited bytes
Sep 13 13:28:32 mumbai mysqld[19463]: Max data size unlimited unlimited bytes
Sep 13 13:28:32 mumbai mysqld[19463]: Max stack size 8388608 unlimited bytes
Sep 13 13:28:32 mumbai mysqld[19463]: Max core file size 0 unlimited bytes
Sep 13 13:28:32 mumbai mysqld[19463]: Max resident set unlimited unlimited bytes
Sep 13 13:28:32 mumbai mysqld[19463]: Max processes 29854 29854 processes
Sep 13 13:28:32 mumbai mysqld[19463]: Max open files 32768 32768 files
Sep 13 13:28:32 mumbai mysqld[19463]: Max locked memory 65536 65536 bytes
Sep 13 13:28:32 mumbai mysqld[19463]: Max address space unlimited unlimited bytes
Sep 13 13:28:32 mumbai mysqld[19463]: Max file locks unlimited unlimited locks
Sep 13 13:28:32 mumbai mysqld[19463]: Max pending signals 29854 29854 signals
Sep 13 13:28:32 mumbai mysqld[19463]: Max msgqueue size 819200 819200 bytes
Sep 13 13:28:32 mumbai mysqld[19463]: Max nice priority 0 0
Sep 13 13:28:32 mumbai mysqld[19463]: Max realtime priority 0 0
Sep 13 13:28:32 mumbai mysqld[19463]: Max realtime timeout unlimited unlimited us
Sep 13 13:28:32 mumbai systemd[1]: mariadb.service: main process exited, code=killed, status=6/ABRT
Sep 13 13:28:32 mumbai systemd[1]: Failed to start MariaDB 10.2.40 database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Sep 13 13:28:32 mumbai systemd[1]: Unit mariadb.service entered failed state.
Sep 13 13:28:32 mumbai systemd[1]: mariadb.service failed.
Sep 13 13:28:32 mumbai systemd[1]: Started Plesk Web Socket Service.
-- Subject: Unit plesk-web-socket.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit plesk-web-socket.service has finished starting up.
--
-- The start-up result is done.
Sep 13 13:28:32 mumbai grafana-server[2048]: t=2021-09-13T13:28:32+0530 lvl=eror msg="Alert Rule Result Error" logger=alerting.evalContext ruleId=3 name="Mail server memory usage" error="request hand
Sep 13 13:28:32 mumbai grafana-server[2048]: t=2021-09-13T13:28:32+0530 lvl=eror msg="Alert Rule Result Error" logger=alerting.evalContext ruleId=2 name="nginx memory usage" error="request handler re
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: [2021-09-13 13:28:32.979] 19485:613f04a8eeea8 ERR [panel] DB query failed: SQLSTATE[HY000] [2002] Connection refused:
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: 0: /usr/local/psa/admin/plib/Db/Adapter/Pdo/Mysql.php:79
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: Db_Adapter_Pdo_Mysql->query(string 'SET sql_mode = ''')
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: 1: /usr/local/psa/admin/plib/CommonPanel/Application/Abstract.php:103
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: CommonPanel_Application_Abstract::initDbAdapter()
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: 2: /usr/local/psa/admin/plib/Session/Helper.php:176
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: Plesk\Session\Helper::initStorage()
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: 3: /usr/local/psa/admin/plib/CommonPanel/Application/Abstract.php:52
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: CommonPanel_Application_Abstract->run()
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: 4: /usr/local/psa/admin/plib/CommonPanel/Application/Abstract.php:34
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: CommonPanel_Application_Abstract::init()
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: 5: /usr/local/psa/admin/plib/pm/Bootstrap.php:16
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: pm_Bootstrap::init()
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: 6: /usr/local/psa/admin/plib/sdk.php:11
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: require_once(string '/usr/local/psa/admin/plib/sdk.php')
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: 7: /usr/local/psa/admin/plib/WebSocket/bin/ws-server.php:3
Sep 13 13:28:32 mumbai sw-engine-pleskrun[19485]: ERROR: Plesk\Exception\Database: DB query failed: SQLSTATE[HY000] [2002] Connection refused (Mysql.php:79)
Sep 13 13:28:32 mumbai systemd[1]: plesk-web-socket.service: main process exited, code=exited, status=1/FAILURE
Sep 13 13:28:32 mumbai systemd[1]: Unit plesk-web-socket.service entered failed state.
Sep 13 13:28:32 mumbai systemd[1]: plesk-web-socket.service failed.
If one of the experts could point me toward how to solve this issue, it would be really helpful. Thank you.
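As a first diagnostic pass, here is a sketch assuming default log locations (the paths may differ on your system). The status=6/ABRT in the journal output above means mysqld itself aborted, so the MariaDB error log usually says more than journalctl alone:
systemctl status mariadb.service
journalctl -u mariadb.service --since today
tail -n 100 /var/log/mariadb/mariadb.log   # default path on CentOS; check log_error in my.cnf if different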

Related

What is the syntax for a stop message when there is no data

I am importing some data into R and want the code to stop running if there is no file, or if there is no data in the file. I'm using base R and readxl. Can you please help with the syntax?
I've tried:
if (dim(Llatest) == NULL) {stop('STOP NO DATA')}
if (dim(Llatest)[1] == 0) + stop('STOP NO DATA')}
if (isTRUE(dim(Llatest) == NULL)) {stop('STOP NO DATA')}
Some data imported from Sep19import.xlsx
ID Code Received Actioned Decision
1 123 Jul 01 2019 Sep 02 2019 Hold
2 456 Jul 11 2019 Sep 13 2019 No action
3 789 Nov 26 2018 Sep 25 2019 Investigate
4 321 Sep 12 2019 Sep 12 2019 Await decision
5 654 Aug 30 2019 Sep 26 2019 Hold
6 987 Feb 22 2019 Sep 02 2019 Investigate
Obtain list of files for import
LFiles <- list.files(path = "C:/Projects/Sep/code", pattern = "*import.xlsx", full.names = TRUE)
***I wish to stop here if LFiles is empty
Identify the latest file
Llatest <- subset(LFiles, LFiles == max(LFiles))
Extract data from file
LMonthly <- read_excel(Llatest)
***I wish to stop here if LMonthly is empty
Error message received: no non-missing arguments, returning NA
I expect the output to be 'STOP NO DATA'.
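A minimal sketch of the two checks being asked for, assuming the LFiles and LMonthly objects from the snippet above. Comparing dim() against NULL won't work (it yields a zero-length logical, which if() cannot handle), so test length() on the file vector and nrow() on the imported data instead:
library(readxl)
LFiles <- list.files(path = "C:/Projects/Sep/code", pattern = "*import.xlsx", full.names = TRUE)
if (length(LFiles) == 0) stop('STOP NO DATA')    # no matching file found
Llatest <- max(LFiles)                           # latest file by name
LMonthly <- read_excel(Llatest)
if (nrow(LMonthly) == 0) stop('STOP NO DATA')    # file exists but holds no rows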

Processing multiple images with Magick (in R) with transformations

I need to automate some image transformations to do the following:
- read in 16,000+ images that are short and wide; sizing is not the same.
- rescale each image to 90 pixels high
- crop 90 pixels over the width of the image, so multiple 90x90 crops over 1 image - then do it all over again for the next image
- each 90x90 image needs to be saved as file-name_1.png, file-name_2.png and so on in sequential order
I've completed a test on 8 images, and using the magick package I was able to rescale and create multiple crops from each image manually. The problem comes when I try to process many at once: I can resize the images easily, but saving them fails.
# capture images, file paths in a list
img_list <- list.files("./orig_images", pattern = "\\.png$", full.names = TRUE)
# get all images in a list
all_images <- lapply(img_list, image_read)
# scale each image height - THIS DOESN'T WORK, GET NULL VALUE
scale_images <-
  for (i in 1:length(all_images)) {
    scale_images(all_images[[i]], "x90")
  }
# all images added into one
all_images_joined <- image_join(all_images)
# scale images - THIS WORKS to scale, but problems later
all_images_scaled <-
  image_scale(all_images_joined, "x90")
# Test whether a single file will be written or multiple files;
# only writes one file (even if I
for (i in 1:length(all_images_scaled)) {
  image_write(all_images_scaled[[i]], path = "filepath/new_cropimages/filename")
}
Ideally, I would scale the images with a for loop, so I could save each scaled image to a directory. This didn't work: I don't get an error, but when I check the contents of the variable it is NULL. The image_join function puts them all together and scales the height to 90 (width is also scaled proportionately), but I can't write the separate images to the directory. The next piece is to crop each image across its width and save the new images as file-name_1.png and so on: for every image, crop 90x90, move over 90 pixels, crop 90x90 again, and repeat. I chose magick because it made it easy to individually scale and crop, but I'm open to other ideas (or to learning how to make that package work). Thanks for any help.
Here are some images:
[Original Image, untransformed][1]
[Manual 90x90 crop][2]
[Another manual 90x90 crop, farther down the same image][3]
[1]: https://i.stack.imgur.com/8ptXv.png
[2]: https://i.stack.imgur.com/SF9pG.png
[3]: https://i.stack.imgur.com/NyKxS.png
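For reference, a minimal sketch of the per-image loop with the magick package, assuming the "./orig_images" and "filepath/new_cropimages" paths from the question. The key fixes are assigning the result of image_scale() back to a variable (a for loop itself returns NULL, which explains the empty variable above) and cropping with an explicit x-offset; the last tile of each image may come out narrower than 90 px:
library(magick)
img_list <- list.files("./orig_images", pattern = "\\.png$", full.names = TRUE)
for (f in img_list) {
  img  <- image_scale(image_read(f), "x90")   # height 90, width proportional
  w    <- image_info(img)$width
  base <- tools::file_path_sans_ext(basename(f))
  for (i in seq_len(ceiling(w / 90))) {
    # crop a 90x90 window starting at x-offset (i-1)*90
    tile <- image_crop(img, geometry_area(90, 90, (i - 1) * 90, 0))
    image_write(tile, file.path("filepath/new_cropimages",
                                sprintf("%s_%d.png", base, i)))
  }
}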
I don't speak R, but I hope to be able to help with the ImageMagick aspects and getting 16,000 images processed.
As you are on a Mac, you can install two very useful packages very easily with Homebrew:
brew install imagemagick
brew install parallel
So, your original sentence image is 1850x105 pixels; you can see that in Terminal like this:
magick identify sentence.png
sentence.png PNG 1850x105 1850x105+0+0 8-bit Gray 256c 51626B 0.000u 0:00.000
If you resize the height to 90px, leaving the width to follow proportionally, it will become 1586x90px:
magick sentence.png -resize x90 info:
sentence.png PNG 1586x90 1586x90+0+0 8-bit Gray 51626B 0.060u 0:00.006
So, if you resize and then crop into 90px wide chunks:
magick sentence.png -resize x90 -crop 90x chunk-%03d.png
you will get 18 chunks, each 90 px wide except the last, as follows:
-rw-r--r-- 1 mark staff 5648 6 Jun 08:07 chunk-000.png
-rw-r--r-- 1 mark staff 5319 6 Jun 08:07 chunk-001.png
-rw-r--r-- 1 mark staff 5870 6 Jun 08:07 chunk-002.png
-rw-r--r-- 1 mark staff 6164 6 Jun 08:07 chunk-003.png
-rw-r--r-- 1 mark staff 5001 6 Jun 08:07 chunk-004.png
-rw-r--r-- 1 mark staff 6420 6 Jun 08:07 chunk-005.png
-rw-r--r-- 1 mark staff 4726 6 Jun 08:07 chunk-006.png
-rw-r--r-- 1 mark staff 5559 6 Jun 08:07 chunk-007.png
-rw-r--r-- 1 mark staff 5053 6 Jun 08:07 chunk-008.png
-rw-r--r-- 1 mark staff 4413 6 Jun 08:07 chunk-009.png
-rw-r--r-- 1 mark staff 5960 6 Jun 08:07 chunk-010.png
-rw-r--r-- 1 mark staff 5392 6 Jun 08:07 chunk-011.png
-rw-r--r-- 1 mark staff 4280 6 Jun 08:07 chunk-012.png
-rw-r--r-- 1 mark staff 5681 6 Jun 08:07 chunk-013.png
-rw-r--r-- 1 mark staff 5395 6 Jun 08:07 chunk-014.png
-rw-r--r-- 1 mark staff 5065 6 Jun 08:07 chunk-015.png
-rw-r--r-- 1 mark staff 6322 6 Jun 08:07 chunk-016.png
-rw-r--r-- 1 mark staff 4848 6 Jun 08:07 chunk-017.png
Now, if you have 16,000 sentences to process, you can use GNU Parallel to get them all done in parallel and also get sensible names for all the files. Let's do a dry-run first so it doesn't actually do anything, but just shows you what it would do:
parallel --dry-run magick {} -resize x90 -crop 90x {.}-%03d.png ::: sentence*
Sample Output
magick sentence1.png -resize x90 -crop 90x sentence1-%03d.png
magick sentence2.png -resize x90 -crop 90x sentence2-%03d.png
magick sentence3.png -resize x90 -crop 90x sentence3-%03d.png
That looks good, so remove the --dry-run and run it again, and you get the following output for the three identical copies of your sentence that I made:
-rw-r--r-- 1 mark staff 5648 6 Jun 08:13 sentence1-000.png
-rw-r--r-- 1 mark staff 5319 6 Jun 08:13 sentence1-001.png
-rw-r--r-- 1 mark staff 5870 6 Jun 08:13 sentence1-002.png
-rw-r--r-- 1 mark staff 6164 6 Jun 08:13 sentence1-003.png
-rw-r--r-- 1 mark staff 5001 6 Jun 08:13 sentence1-004.png
-rw-r--r-- 1 mark staff 6420 6 Jun 08:13 sentence1-005.png
-rw-r--r-- 1 mark staff 4726 6 Jun 08:13 sentence1-006.png
-rw-r--r-- 1 mark staff 5559 6 Jun 08:13 sentence1-007.png
-rw-r--r-- 1 mark staff 5053 6 Jun 08:13 sentence1-008.png
-rw-r--r-- 1 mark staff 4413 6 Jun 08:13 sentence1-009.png
-rw-r--r-- 1 mark staff 5960 6 Jun 08:13 sentence1-010.png
-rw-r--r-- 1 mark staff 5392 6 Jun 08:13 sentence1-011.png
-rw-r--r-- 1 mark staff 4280 6 Jun 08:13 sentence1-012.png
-rw-r--r-- 1 mark staff 5681 6 Jun 08:13 sentence1-013.png
-rw-r--r-- 1 mark staff 5395 6 Jun 08:13 sentence1-014.png
-rw-r--r-- 1 mark staff 5065 6 Jun 08:13 sentence1-015.png
-rw-r--r-- 1 mark staff 6322 6 Jun 08:13 sentence1-016.png
-rw-r--r-- 1 mark staff 4848 6 Jun 08:13 sentence1-017.png
-rw-r--r-- 1 mark staff 5648 6 Jun 08:13 sentence2-000.png
-rw-r--r-- 1 mark staff 5319 6 Jun 08:13 sentence2-001.png
-rw-r--r-- 1 mark staff 5870 6 Jun 08:13 sentence2-002.png
-rw-r--r-- 1 mark staff 6164 6 Jun 08:13 sentence2-003.png
-rw-r--r-- 1 mark staff 5001 6 Jun 08:13 sentence2-004.png
-rw-r--r-- 1 mark staff 6420 6 Jun 08:13 sentence2-005.png
-rw-r--r-- 1 mark staff 4726 6 Jun 08:13 sentence2-006.png
-rw-r--r-- 1 mark staff 5559 6 Jun 08:13 sentence2-007.png
-rw-r--r-- 1 mark staff 5053 6 Jun 08:13 sentence2-008.png
-rw-r--r-- 1 mark staff 4413 6 Jun 08:13 sentence2-009.png
-rw-r--r-- 1 mark staff 5960 6 Jun 08:13 sentence2-010.png
-rw-r--r-- 1 mark staff 5392 6 Jun 08:13 sentence2-011.png
-rw-r--r-- 1 mark staff 4280 6 Jun 08:13 sentence2-012.png
-rw-r--r-- 1 mark staff 5681 6 Jun 08:13 sentence2-013.png
-rw-r--r-- 1 mark staff 5395 6 Jun 08:13 sentence2-014.png
-rw-r--r-- 1 mark staff 5065 6 Jun 08:13 sentence2-015.png
-rw-r--r-- 1 mark staff 6322 6 Jun 08:13 sentence2-016.png
-rw-r--r-- 1 mark staff 4848 6 Jun 08:13 sentence2-017.png
-rw-r--r-- 1 mark staff 5648 6 Jun 08:13 sentence3-000.png
-rw-r--r-- 1 mark staff 5319 6 Jun 08:13 sentence3-001.png
-rw-r--r-- 1 mark staff 5870 6 Jun 08:13 sentence3-002.png
-rw-r--r-- 1 mark staff 6164 6 Jun 08:13 sentence3-003.png
-rw-r--r-- 1 mark staff 5001 6 Jun 08:13 sentence3-004.png
-rw-r--r-- 1 mark staff 6420 6 Jun 08:13 sentence3-005.png
-rw-r--r-- 1 mark staff 4726 6 Jun 08:13 sentence3-006.png
-rw-r--r-- 1 mark staff 5559 6 Jun 08:13 sentence3-007.png
-rw-r--r-- 1 mark staff 5053 6 Jun 08:13 sentence3-008.png
-rw-r--r-- 1 mark staff 4413 6 Jun 08:13 sentence3-009.png
-rw-r--r-- 1 mark staff 5960 6 Jun 08:13 sentence3-010.png
-rw-r--r-- 1 mark staff 5392 6 Jun 08:13 sentence3-011.png
-rw-r--r-- 1 mark staff 4280 6 Jun 08:13 sentence3-012.png
-rw-r--r-- 1 mark staff 5681 6 Jun 08:13 sentence3-013.png
-rw-r--r-- 1 mark staff 5395 6 Jun 08:13 sentence3-014.png
-rw-r--r-- 1 mark staff 5065 6 Jun 08:13 sentence3-015.png
-rw-r--r-- 1 mark staff 6322 6 Jun 08:13 sentence3-016.png
-rw-r--r-- 1 mark staff 4848 6 Jun 08:13 sentence3-017.png
A word of explanation about the parameters to parallel:
{} refers to "the current file"
{.} refers to "the current file without its extension"
::: separates the parameters meant for parallel from those meant for your magick command
One note of warning: PNG images can "remember" where they came from, which can be useful or very annoying. If you look at the last chunk from above, you will see it is 56x90, but following that, it "remembers" that it came from a canvas of 1586x90 at offset 1530,0:
identify sentence3-017.png
sentence3-017.png PNG 56x90 1586x90+1530+0 8-bit Gray 256c 4848B 0.000u 0:00.000
This can sometimes upset subsequent processing, which is annoying, or sometimes be very useful in re-assembling images that have been chopped up! If you want to remove it, you need to repage, so the command above becomes:
magick input.png -resize x90 -crop 90x +repage output.png
Updated - to make better use of the tools in EBImage
ImageMagick is a great approach, but should you want to perform some content analysis on the images, here is a solution in R, which provides some pretty handy tools for this. Images are "nothing" but matrices, which R handles really well. The EBImage package reduces images to matrices very effectively and, for better or for worse, strips some of the metadata from each image. Here's an R solution with EBImage. Again, though, Mark's solution may be better for really big production runs.
The solution is structured around a large "for" loop. It would be prudent to add error checking at several steps. The code takes advantage of EBImage to manage both color and grayscale images.
Here, the final image is centered in an extended image by adding pixels of the desired background color. The extended image is then cropped into tiles. The logic determining the value for pad can be adjusted to simply crop the image, or to left-justify or right-justify it, if desired.
It starts by assuming you begin in the working directory, with the source files in ./source and the destination in ./dest. It also creates a new directory for each "tiled" image. That could be changed to have a single directory receive all the images, along with other protective coding. The images are assumed to be PNG files with an appropriate extension. The desired tile size (90), applied to both height and width, is stored in the variable size.
# EBImage needs to be available
if (!require(EBImage)) {
  source("https://bioconductor.org/biocLite.R")
  biocLite("EBImage")
  library(EBImage)
}
# From the working directory, select image files
size <- 90
bg.col <- "transparent" # or any other color specification for R
ff <- list.files("source", full = TRUE,
                 pattern = "png$", ignore.case = TRUE)
# Walk through all files with a 'for' loop
for (f in ff) {
  # Extract base name, even for names like "foo.bar.1.png"
  txt <- unlist(strsplit(basename(f), ".", fixed = TRUE))
  len <- length(txt)
  base <- ifelse(len == 1, txt[1], paste(txt[-len], collapse = "."))
  # Read one image and resize
  img <- readImage(f)
  img <- resize(img, h = size) # options allow for antialiasing
  # Determine number of tiles and padding needed
  nx <- ceiling(dim(img)[1]/size)
  newdm <- c(nx * size, size) # extend final image
  pad <- newdm[1] - dim(img)[1] # pixels needed to extend
  # Translate the image with the given background fill
  img <- translate(img, c(pad%/%2, 0), output.dim = newdm, bg.col = bg.col)
  # Split image into appropriately sized tiles with 'untile'
  img <- untile(img, c(nx, 1), lwd = 0) # see the help file
  # Create a new directory for each image
  dpath <- file.path("dest", trimws(base)) # Windows doesn't like " "
  if (!dir.create(dpath))
    stop("unable to create directory: ", dpath)
  # Create new image file names for each frame
  fn <- sprintf("%s_%03d.png", base, seq_len(nx))
  fpaths <- file.path(dpath, fn)
  # Save individual tiles (as PNG) and names of saved files
  saved <- mapply(writeImage, x = getFrames(img, type = "render"),
                  files = fpaths)
  # Check on the results from 'mapply'
  print(saved)
}

For loop results are only correct for the first iteration in R

My project:
I am looping through shapefiles in a folder and running some calculations to add new columns, with new values, to the output shapefile.
My problem:
The calculations are correct for the first iteration; however, those same values are then added as columns to every subsequent shapefile (rather than new calculations being done per iteration). Below is the code. The final columns resulting from this code are: final_year, final_month, final_day, final_date.
My code:
library(rgdal)
library(tidyverse)
library(magrittr)
library(dplyr)
input_path <- "/Users/JohnDoe/Desktop/Zone_Fixup/Z4/Z4_Split/"
output_path <- "/Users/JohnDoe/Desktop/Zone_Fixup/Z4/Z4_Split_Out/"
files <- list.files(input_path, pattern = "[.]shp$")
for (f in files) {
  ifile <- list.files(input_path, f)
  shp_paste <- paste(input_path, ifile, sep = "")
  tryCatch({shp0 <- readOGR(shp_paste, verbose = FALSE)}, error = function(e){print("Error1.")})
  # Order shapefile by filename
  shp1 <- as.data.frame(shp0)
  shp2 <- shp1[order(shp1$filename),]
  # Sort final dates by relative length values.
  # If it's increasing, it's day1; if it's decreasing, it's day3, etc.
  shp2$final_day1 <- ifelse(lag(shp2$Length1) < shp2$Length1, paste0(shp2$day1), paste0(shp2$day3))
  shp2$final_month1 <- ifelse(lag(shp2$Length1) < shp2$Length1, paste0(shp2$month1), paste0(shp2$month3))
  shp2$final_year1 <- ifelse(lag(shp2$Length1) < shp2$Length1, paste0(shp2$year1), paste0(shp2$year3))
  # Remove first NA value of each column
  if (is.na(shp2$final_day1[1])) {
    ex1 <- shp2$day1[1]
    ex2 <- as.character(ex1)
    ex3 <- as.numeric(ex2)
    shp2$final_day1[1] <- ex2
  }
  if (is.na(shp2$final_month1[1])) {
    ex4 <- shp2$month1[1]
    ex5 <- as.character(ex4)
    ex6 <- as.numeric(ex5)
    shp2$final_month1[1] <- ex5
  }
  if (is.na(shp2$final_year1[1])) {
    ex7 <- shp2$year1[1]
    ex8 <- as.character(ex7)
    ex9 <- as.numeric(ex8)
    shp2$final_year1[1] <- ex9
  }
  # Add final dates to shapefile as new columns
  shp0$final_year <- shp2$final_year1
  shp0$final_month <- shp2$final_month1
  shp0$final_day <- shp2$final_day1
  final_paste <- paste(shp0$final_year, "_", shp0$final_month, "_", shp0$final_day, sep = "")
  shp0$final_date <- final_paste
  # Create new shapefile for write out
  shp44 <- shp0
  # Write out shapefile
  ifile1 <- substring(ifile, 1, nchar(ifile) - 4)
  #tryCatch({writeOGR(shp44, output_path, layer = ifile1, driver = "ESRI Shapefile", overwrite_layer = TRUE)}, error = function(e){print("Error2.")})
  test1 <- head(shp44)
  print(test1)
}
My results:
Here are two head() tables. The first table is correct. The second table is not. Notice that the final_year, final_month, final_day, and final_date columns are identical in the two tables. NOTE: these columns are the last four in the table.
Table 1:
coordinates Length1 Bathy Vector filename zone year1 year2 year3 month1 month2 month3 day1 day2 day3 final_year final_month final_day final_date
1 (-477786.3, 1110917) 29577.64 -6.455580 0 Zone4_2000_02_05_2000_02_15_2000_02_24 Zone4 2000 2000 2000 02 02 02 05 15 24 1997 02 15 1997_02_15
2 (-477786.3, 1110917) 29577.64 -6.455580 0 Zone4_2000_02_24_2000_03_10_2000_03_17 Zone4 2000 2000 2000 02 03 03 24 10 17 1997 03 26 1997_03_26
3 (-477848.2, 1113468) 27025.88 -2.100153 0 Zone4_2000_03_24_2000_04_03_2000_04_10 Zone4 2000 2000 2000 03 04 04 24 03 10 1997 04 19 1997_04_19
4 (-477871, 1114406) 26087.98 -4.700025 0 Zone4_2006_03_10_2006_03_27_2006_04_03 Zone4 2006 2006 2006 03 03 04 10 27 03 1998 02 08 1998_02_08
5 (-477876.1, 1114616) 25877.25 -7.598877 0 Zone4_2008_03_06_2008_03_16_2008_03_25 Zone4 2008 2008 2008 03 03 03 06 16 25 1998 03 28 1998_03_28
6 (-477878.8, 1114730) 25764.14 -7.598877 0 Zone4_2008_03_30_2008_04_09_2008_04_23 Zone4 2008 2008 2008 03 04 04 30 09 23 1998 04 21 1998_04_21
Table 2:
coordinates Length1 Bathy Vector filename zone year1 year2 year3 month1 month2 month3 day1 day2 day3 final_year final_month final_day final_date
1 (-477813.5, 1110939) 29612.26 -6.455580 1 Zone4_2000_02_05_2000_02_15_2000_02_24 Zone4 2000 2000 2000 02 02 02 05 15 24 1997 02 15 1997_02_15
2 (-477813.5, 1110939) 29612.26 -6.455580 1 Zone4_2000_02_24_2000_03_10_2000_03_17 Zone4 2000 2000 2000 02 03 03 24 10 17 1997 03 26 1997_03_26
3 (-477883.4, 1113392) 27158.05 -2.100153 1 Zone4_2000_03_24_2000_04_03_2000_04_10 Zone4 2000 2000 2000 03 04 04 24 03 10 1997 04 19 1997_04_19
4 (-477909.9, 1114319) 26230.17 -4.700025 1 Zone4_2006_03_10_2006_03_27_2006_04_03 Zone4 2006 2006 2006 03 03 04 10 27 03 1998 02 08 1998_02_08
5 (-477916.7, 1114558) 25991.57 -7.598877 1 Zone4_2008_03_06_2008_03_16_2008_03_25 Zone4 2008 2008 2008 03 03 03 06 16 25 1998 03 28 1998_03_28
6 (-477920.1, 1114678) 25871.39 -7.598877 1 Zone4_2008_03_30_2008_04_09_2008_04_23 Zone4 2008 2008 2008 03 04 04 30 09 23 1998 04 21 1998_04_21
It looks like my code is taking the column values from the first iteration and adding them to shapefiles in subsequent iterations. How can my code be modified to run new calculations with each iteration, and add those unique values to their respective shapefiles?
Thank you
I think your problem may be with the start of your for loop.
files <- list.files(input_path, pattern = "[.]shp$") # keep this line to get your files
for (f in 1:length(files)) { # change this to iterate over the files one by one
  ifile <- list.files(input_path, f) # delete this line from your code
  shp_paste <- paste(input_path, files[f], sep = "") # use this line instead to address each shp file
Keep the rest of your code as it is and see if this helps.
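Putting those suggestions together, the top of the loop would look roughly like this (a sketch only; the rest of the loop body stays as in the question):
files <- list.files(input_path, pattern = "[.]shp$")
for (f in 1:length(files)) {
  ifile <- files[f]                          # bare file name, still needed for ifile1 later
  shp_paste <- paste(input_path, ifile, sep = "")
  tryCatch({shp0 <- readOGR(shp_paste, verbose = FALSE)},
           error = function(e){print("Error1.")})
  # ... rest of the original loop body unchanged ...
}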
Thank you for your help, everyone; I found the problem. It is a tad embarrassing: I wasn't sorting the filename column in ascending order before adding the new columns. The values in the new columns therefore seemed wrong because they weren't matched to the correct rows. A clumsy error on my part; thanks to all who offered advice.

How to retrieve text from span & p tags in R

I have the following link:
url2 = "https://timesofindia.indiatimes.com/topic/Adani"
From the above URL I want to extract the headline, the paragraph below it, and the date, into 3 different columns.
I am able to extract only one news headline and paragraph with the following code:
results_headline <- url2 %>%
  read_html() %>%
  html_nodes(xpath = '//*[@id="c_topic_list1_1"]/div[1]/ul/li[4]/div/a/span[1]')
results_para <- url2 %>%
  read_html() %>%
  html_nodes(xpath = '//*[@id="c_topic_list1_1"]/div[1]/ul/li[4]/div/a/p')
I want to extract all the headlines,paragraph and date on that page.
How can I do it in R?
Once again, you can simply use CSS selectors to extract the content:
library(rvest)
url2 = "https://timesofindia.indiatimes.com/topic/Adani"
titles <- url2 %>% read_html() %>% html_nodes("div > a > span.title") %>% html_text()
dates <- url2 %>% read_html() %>% html_nodes("div > a > span.meta") %>% html_text()
desc <- url2 %>% read_html() %>% html_nodes("div > a > p") %>% html_text()
data.frame(titles, dates, desc)
output:
> data.frame(titles,dates,desc)
titles dates
1 \nDRI drops Adani Group overvaluation case\n Oct 28
2 \nAdani Enterprises to demerge renewable energy biz\n Oct 7
3 \nAdani Enterprises' Q2 PAT falls 6% to Rs 59 cr\n Nov 13
4 \nAdani firm close to finalising RInfra power acquisition deal\n Nov 12
5 \nAdani group shares surge up to 9%\n Aug 28
6 \nAdani Transmission acquires RInfra WRSSS assets for Rs 1k cr\n Nov 1
7 \nVedanta, Adani may bid for Bunder diamond project in MP\n Oct 27
8 \nAdani Power coercing land from farmers: M K Stalin\n Oct 31
9 \nAdani Transmission acquires 2 SPVs from RVPN\n Aug 6
desc
1 Additional director general, DRI (adjudication), K V S Singh, has dropped all charges and summarily closed all proceedings in a speaking order.
2 New Delhi, Oct 7 () Adani Enterprises today announced plans to demerge its renewable energy business into associate company Adani Green Energy Ltd as part of simplifying overall business structure.
3 New Delhi, Nov 13 () Adani Enterprises, the flagship firm of Adani group, today said its profit after tax fell by 6.34 per cent to Rs 59 crore in the July-September quarter of 2017-18 compared to Rs 63 crore in the same quarter a year ago.
4 New Delhi, Nov 12 () Adani Transmission is likely to clinch a deal of Rs 13,000-14,000 crore with Reliance Infrastructure to acquire the latter's Mumbai power business much before the January 2018 deadline to mark its foray into power distribution business.
5 New Delhi, Aug 28 () Shares of Adani group of companies surged up to 9 per cent today as the mining giant will start work on its 16.5 billion dollar Carmichael coal project in Australia in October and is expected to ship the first consignment in March 2020. The stock jumped 9.
6 New Delhi, Nov 1 () Adani Transmission today said it has completed acquisition of operational transmission assets of WRSS Schemes of Reliance Infra for Rs 1,000 crore. In effect, its power-wheeling network crossed the 8,500 circuit km mark.
7 New Delhi, Oct 27 () Metals and mining major Vedanta Ltd and the Adani Group may bid for the Bunder diamond project in Madhya Pradesh from which global giant Rio Tinto exited this year, according to sources. "Vedanta may bid for the Bunder project," said a source on the condition of anonymity.
8
9
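Note the \n padding around the scraped titles in the output above; if that is unwanted, trimming the whitespace before building the data frame cleans it up:
titles <- trimws(titles)
dates <- trimws(dates)
desc <- trimws(desc)
data.frame(titles, dates, desc)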

Spot the erroneous row in a data frame

I have a data frame like this
head(data)
V1 V2 V3 V4 V5 V6 V7
1 458263182005000000 1941 2 14 -73.90 38.60 US009239
2 451063182005000002 1941 2 14 -74.00 36.90 US009239
3 447463182005000000 1941 2 14 -74.00 35.40 US009239
4 443863182105000000 1941 2 15 -74.00 34.00 US009239
5 436663182105000001 1941 2 15 -74.00 32.60 US009239
6 433063182105000000 1941 2 15 -73.80 31.70 US009239
but when I do
data <- read.table("data.dat",header=F,sep=";")
I get this error
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
could not allocate memory (2048 Mb) in C function 'R_AllocStringBuffer'
How can I determine in which row something is going wrong (e.g. the format is different)?
Many thanks
R says it cannot allocate memory, so check how large the dataset is and how much memory your computer has.
Despite this being an old question: I don't think R_AllocStringBuffer
has to do with the overall memory of your computer, which is also the opinion in this thread:
R could not allocate memory on ff procedure. How come?
Maybe check the delimiter ("," or ";"). A wrong separator seems to make read.table parse a huge string...
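As a minimal sketch of how to locate the offending row, assuming every well-formed row of data.dat contains the same number of ';' separators, you could count the separators per line and flag the lines that deviate from the most common count:
lines <- readLines("data.dat")
# Number of ';' separators on each line
n_sep <- lengths(regmatches(lines, gregexpr(";", lines, fixed = TRUE)))
# Line numbers whose separator count differs from the most common count
which(n_sep != as.integer(names(which.max(table(n_sep)))))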
