ISSUE: mpirun hangs and does not display any error message, even with I_MPI_DEBUG=100
Example: tried with any of the IMB-* benchmarks, or even a simple task such as displaying the hostname:
mpirun -n 2 hostname
It just hangs and never returns any output or error.
Any idea what I may need to check, or where to look for more info?
OS info:
Rocky Linux release 8.5 (Green Obsidian)
MPI version:
Intel(R) MPI Library for Linux* OS, Version 2019 Update 12
Copyright 2003-2021, Intel Corporation.
strace hangs at:
[pid 19786] sched_setaffinity(0, 8, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]) = 0
[pid 19786] nanosleep({tv_sec=0, tv_nsec=0}, 0x7ffc04672b50) = 0
[pid 19786] openat(AT_FDCWD, "/sys/devices/system/node/node0/cpulist", O_RDONLY) = 6
[pid 19786] fstat(6, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
[pid 19786] fstat(6, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
[pid 19786] lseek(6, 0, SEEK_SET) = 0
[pid 19786] lseek(6, 0, SEEK_SET) = 0
I had the very same issue with Rocky 8.6 on a server with an AMD EPYC 74F3 CPU. Unfortunately I don't know the root cause either, but a simple yum update solved the issue for me.
Best regards,
Sebastian
Related
I have tried to run a simple task using the Airflow BashOperator, but I keep getting stuck: my DAG never stops running; it stays green forever without success or failure. When I check the logs I see something like this. Thanks in advance for your time and answers.
airflow-scheduler_1 | [SQL: INSERT INTO task_fail (task_id, dag_id, execution_date, start_date, end_date, duration) VALUES (%(task_id)s, %(dag_id)s, %(execution_date)s, %(start_date)s, %(end_date)s, %(duration)s) RETURNING task_fail.id]
airflow-scheduler_1 | [parameters: {'task_id': 'first_task', 'dag_id': 'LocalInjestionDag', 'execution_date': datetime.datetime(2023, 1, 20, 8, 0, tzinfo=Timezone('UTC')), 'start_date': datetime.datetime(2023, 1, 23, 3, 35, 27, 332954, tzinfo=Timezone('UTC')), 'end_date': datetime.datetime(2023, 1, 23, 3, 35, 27, 710572, tzinfo=Timezone('UTC')), 'duration': 0}]
postgres_1 | 2023-01-23 03:55:59.712 UTC [4336] ERROR: column "execution_date" of relation "task_fail" does not exist at character 41
I have tried with execution_datetime, using xcom_push, creating functions with XCom, and changing to the PythonOperator, but everything still falls back to the same error.
My crawler isn't working properly and I can't find the solution. Here is the relevant part of my spider:
def parse(self, response):
    original_price = 0
    discounted_price = 0
    star = 0
    discounted_percent = 0
    try:
        for product in response.xpath("//ul[@class='c-listing__items js-plp-products-list']/li"):
            title = product.xpath(".//div/div[2]/div/div/a/text()").get()
            if product.xpath(".//div/div[2]/div[2]/div[1]/text()"):
                star = float(str(product.xpath(".//div/div[2]/div[2]/div[1]/text()").get()))
            if product.xpath(".//div/div[2]/div[3]/div/div/div[1]/span/text()"):
                discounted_percent = int(str(product.xpath(".//div/div[2]/div[3]/div/div/div[1]/span/text()").get().strip()).replace('٪', ''))
            if product.xpath(".//div/div[2]/div[3]/div/div/div/text()"):
                discounted_price = int(str(product.xpath(".//div/div[2]/div[3]/div/div/div/text()").get().strip()).replace(',', ''))
            if product.xpath(".//div/div[2]/div[3]/div/div/del/text()"):
                original_price = int(str(product.xpath(".//div/div[2]/div[3]/div/div/del/text()").get().strip()).replace(',', ''))
                discounted_amount = original_price - discounted_price
            else:
                original_price = print("not available")
                discounted_amount = print("not available")
            url = response.urljoin(product.xpath(".//div/div[2]/div/div/a/@href").get())
This is my log:
2020-10-21 16:49:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.digikala.com/search/category-book/> from <GET https://www.digikala.com/search/category-book>
2020-10-21 16:49:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digikala.com/search/category-book/> (referer: None)
2020-10-21 16:49:57 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.digikala.com/search/category-book/> (referer: None)
Traceback (most recent call last):
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\shima\anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\shima\projects\digi_allbooks\digi_allbooks\spiders\allbooks.py", line 31, in parse
    discounted_percent = int(str(product.xpath(".//div/div[2]/div[3]/div/div/div[1]/span/text()").get().strip()).replace('٪', ''))
ValueError: invalid literal for int() with base 10: 'تومان'
2020-10-21 16:49:57 [scrapy.core.engine] INFO: Closing spider (finished)
2020-10-21 16:49:57 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 939,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'downloader/response_bytes': 90506,
'downloader/response_count': 3,
'downloader/response_status_count/200': 2,
'downloader/response_status_count/301': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 10, 21, 13, 19, 57, 630044),
'log_count/DEBUG': 3,
'log_count/ERROR': 1,
'log_count/INFO': 9,
'log_count/WARNING': 1,
'response_received_count': 2,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'spider_exceptions/ValueError': 1,
'start_time': datetime.datetime(2020, 10, 21, 13, 19, 55, 914304)}
2020-10-21 16:49:57 [scrapy.core.engine] INFO: Spider closed (finished)
I guess it says a string was passed to the int() function, which raises the ValueError, but the XPath I'm using targets a number, not a string. I can't interpret the error correctly, so I can't find the solution. Can someone help me out, please?
In at least one of the iterations this line is scraping تومان instead of an integer:
discounted_percent = int(str(product.xpath(".//div/div[2]/div[3]/div/div/div[1]/span/text()").get().strip()).replace('٪', ''))
From a Google search it seems this is a monetary unit (the Iranian toman). You need to work on your XPaths, or have the spider ignore this value, as there is no discount on this item.
It seems this XPath may be a better option for your intention (I haven't checked all the items, though):
product.xpath(".//div[@class='c-price__discount-oval']/span/text()").get()
I use the following code to find out whether daylight saving time is in effect in Central Europe on the day given by the variables year, month and day.
timeString = paste(toString(year), formatC(month, width = 2, flag="0"), formatC(day, width = 2, flag="0"), "12", sep = "-")
time = strptime(timeString, format = "%Y-%m-%d-%H")
diff = as.numeric(as.POSIXct(time, tz="UTC") - as.POSIXct(time, tz="Europe/Prague"))
On my PC (Ubuntu 16.04), diff is 2 when daylight saving is active and 1 otherwise, but on a server with Debian 8.8 it is 1 in all cases. Do you know how to set up the server to behave like Ubuntu? Thanks.
Update: Changing the Debian time settings would also change the time used for crontab, which is undesirable. Reinstalling R with a new configuration seemed risky because a few R scripts run operationally every few minutes. So I chose an "ugly" solution in the form of an R function:
DaylightSaving = function(year, month, day) {
  # Implemented only for the years 2010-2030
  if (year < 2010 || year > 2030) {
    stop("The function is implemented now only for years 2010-2030")
  }
  # Day of the last Sunday in March (DST start) for each year 2010-2030
  dayStart = c(28, 27, 25, 31, 30, 29, 27, 26, 25, 31, 29, 28, 27, 26,
               31, 30, 29, 28, 26, 25, 31)
  # Day of the last Sunday in October (DST end) for each year 2010-2030
  dayEnd = c(31, 30, 28, 27, 26, 25, 30, 29, 28, 27, 25, 31, 30, 29,
             27, 26, 25, 31, 29, 28, 27)
  if (month < 3 || month > 10) {
    return(FALSE)
  } else if (month == 3 && day < dayStart[year - 2009]) {
    return(FALSE)
  } else if (month == 10 && day >= dayEnd[year - 2009]) {
    return(FALSE)
  }
  return(TRUE)
}
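For reference, the hard-coded tables above encode the EU rule that DST runs from the last Sunday of March to the last Sunday of October. A small Python sketch (my own illustration, not from the original post) that derives those days and could be used to verify or regenerate the tables:

import calendar

def last_sunday(year, month):
    # monthcalendar() returns the month as weeks (Monday-first by default);
    # take the largest non-zero entry in the Sunday column.
    return max(week[calendar.SUNDAY]
               for week in calendar.monthcalendar(year, month)
               if week[calendar.SUNDAY] != 0)

print(last_sunday(2010, 3))   # 28 -> matches dayStart[1]
print(last_sunday(2010, 10))  # 31 -> matches dayEnd[1]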
First of all, if you want to check whether daylight saving is in use, you can simply do:
# Make a test date
atime <- as.POSIXct("2017-05-23 13:25", tz = "Europe/Prague")
# Test for DST
as.POSIXlt(atime)$isdst > 0
The POSIXlt class is internally a list with an element isdst that is 0 if daylight saving time is not active, positive when it is, and negative when that information is not available. (see ?DateTimeClasses).
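For comparison, a rough Python equivalent of the same check (my own illustration, not part of the original answer; assumes Python 3.9+ for zoneinfo):

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

atime = datetime(2017, 5, 23, 13, 25, tzinfo=ZoneInfo("Europe/Prague"))
print(atime.dst() != timedelta(0))  # True: DST is active on this date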
I would also like to point out the following from the help pages on timezones:
Note that except where replaced, the operation of time zones is an OS
service, and even where replaced a third-party database is used and
can be updated (see the section on ‘Time zone names’). Incorrect
results will never be an R issue, so please ensure that you have the
courtesy not to blame R for them.
The problem isn't R, but your Debian installation disregarding daylight saving time. You can solve this by configuring R with the option --with-internal-tzcode, so it uses its own timezone database. But this is generally not necessary if your Debian's timezone system is set up correctly. More info on how to configure R can be found on the help page ?timezones and in the Installation and Administration manual - appendix B.
The best way to solve this is to make sure that your Debian installation handles daylight saving time correctly. You can start by checking whether you have a correct version of the tzdata package.
There's a similar question on unix.stackexchange.com :
https://unix.stackexchange.com/questions/274878/system-disregards-daylight-saving-time
Attempting to extend the expiration on a list of HITs per the API instructions:
for hit_id in expired_hit_list:
    response = client.update_expiration_for_hit(
        HITId=hit_id,
        ExpireAt=datetime(2017, 4, 9, 19, 9, 41, tzinfo=tzlocal())
    )
Getting this error:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-59-e0764e20a54b> in <module>()
2 response = client.update_expiration_for_hit(
3 HITId=hit_id,
----> 4 ExpireAt=datetime(2017, 4, 9, 19, 9, 41, tzinfo=tzlocal())
5 )
NameError: name 'datetime' is not defined
I also tried datetime.datetime and dateTime, and also just removing it:
ExpireAt=(2017, 4, 9, 19, 9, 41, tzinfo=tzlocal())
Nothing works. Suggestions?
It turned out to be an issue with my Python setup, nothing to do with boto3; the names were simply never imported:
from datetime import datetime
from dateutil.tz import tzlocal
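Putting it together, a minimal sketch of the corrected loop; the client setup and the HIT list are assumptions, since the question doesn't show them:

import boto3
from datetime import datetime
from dateutil.tz import tzlocal

client = boto3.client('mturk')  # assumed; the original client setup isn't shown
expired_hit_list = []           # fill with the HIT ids you want to extend

for hit_id in expired_hit_list:
    response = client.update_expiration_for_hit(
        HITId=hit_id,
        ExpireAt=datetime(2017, 4, 9, 19, 9, 41, tzinfo=tzlocal()),
    )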
Platform: WP 3.9.1 with WooCommerce 2.1.2
Using shortcodes to display multiple tabbed product carousels (on the home page), each with 4 columns and 8 products; I need about 6 product carousels/sliders.
Code:
[tabbed_section]
[tab title="A"] [products ids="1, 2, 3, 4"] [/tab] [tab title="B"] [products ids="5, 6, 7, 8"][/tab] [tab title="C"] [products ids="9, 10, 11, 12"] [/tab] [tab title="D"][products ids="13, 14, 15, 16"][/tab][tab title="E"][products ids="17, 18, 19, 20"][/tab]
[/tabbed_section]
Issue:
In this first block, under each tab, only three products are displayed in the first row and the fourth product is pushed to a second row. The carousel navigation is absent.
Code:
[tabbed_section]
[tab title="F"][products ids="21, 22, 23, 24"][/tab]
[/tabbed_section]
Issue:
This is the second block. It has the same issues as the first block above; in addition, after the first tab ("F"), it displays an exact copy of tabs "B", "C", "D" and "E".
The entire code is below:
[container]
[brands title="Shop by Brand"]
[custom_featured_products title="DEALS"]
[tabbed_section]
[tab title="A"] [products ids="1, 2, 3, 4"] [/tab] [tab title="B"] [products ids="5, 6, 7, 8"][/tab] [tab title="C"] [products ids="9, 10, 11, 12"] [/tab]
[/tabbed_section]
[tabbed_section]
[tab title="D"][products ids="13, 14, 15, 16"][/tab][tab title="E"][products ids="17, 18, 19, 20"][/tab][tab title="F"][products ids="21, 22, 23, 24"][/tab]
[/tabbed_section]
[recent_posts title="Latest Deals"]
[/container]