Get the latitude and longitude from a URL - web scraping

With XPath I was able to obtain the URL that contains the latitude and longitude, but I need these values separately, like this:
latitude = -34.552654847695510
longitude= -58.457549057672110
<div class="article-map" id="article-map">
<img id="static-map" src="//maps.google.com/maps/api/staticmap?center=-34.552654847695510,-58.457549057672110&zoom=16&markers=-34.552654847695510,-58.457549057672110&channel=ZP&size=780x456&sensor=true&scale=2&key=AIzaSyDuxqN04nAj6aHygffqUpehsbMFbxEZX90&signature=W-cOkT98ssMPpXbZbU3jil5xNes=" class="static-map">
</div>
response.xpath('//div[@id="article-map"]/img').extract()
['<img id="static-map" src="//maps.google.com/maps/api/staticmap?center=-34.552654847695510,-58.457549057672110&zoom=16&markers=-34.552654847695510,-58.457549057672110&channel=ZP&size=780x456&sensor=true&scale=2&key=AIzaSyDuxqN04nAj6aHygffqUpehsbMFbxEZX90&signature=W-cOkT98ssMPpXbZbU3jil5xNes=" class="static-map">']

Try this, for example:
response.css('#article-map img::attr(src)').re(r'markers=([-\d\.]+),([-\d\.]+)')
Or:
a. get the URL with response.css('#article-map img::attr(src)').get()
b. extract the markers or center parameter with url_query_parameter from w3lib.url, then apply a regexp.
But the first variant is much shorter and easier.
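For reference, a minimal sketch of unpacking the two captured groups into latitude and longitude (assuming a Scrapy response for the page shown above):

# .re() returns the captured groups as a flat list of strings
coords = response.css('#article-map img::attr(src)').re(r'markers=([-\d\.]+),([-\d\.]+)')
if coords:
    latitude, longitude = coords
    print('latitude =', latitude)    # latitude = -34.552654847695510
    print('longitude =', longitude)  # longitude = -58.457549057672110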

Using the urllib.parse module is handy and accurate:
from urllib.parse import urlparse, parse_qs
from scrapy.selector import Selector

# body holds the page HTML
img_url_string = Selector(text=body).xpath('//img[@id="static-map"]/@src').extract_first()
url_data = urlparse(img_url_string, scheme='https')
qs = url_data.query
parse_qs(qs)['center']
# output: ['-34.552654847695510,-58.457549057672110']
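From there, splitting the center value yields the latitude and longitude separately (a small follow-up sketch, not part of the original answer):

center = parse_qs(qs)['center'][0]
latitude, longitude = center.split(',')
print('latitude =', latitude)    # latitude = -34.552654847695510
print('longitude =', longitude)  # longitude = -58.457549057672110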

Related

Wrong page parsed BeautifulSoup?

I want to enter two values on this website https://hausratversicherung.friday.de/ and retrieve the value after submitting the form. I wrote the following code:
import requests, re
from robobrowser import RoboBrowser
br = RoboBrowser(parser='html.parser')
br.open("https://hausratversicherung.friday.de/")
form = br.get_form()
form['area'] = 100
form['postalCode'] = 44326
br.submit_form(form)
src = str(br.parsed())
start = '<div class="Typography-sc-3c3fuf-0 jEIicc" data-testid="totalPrice">'
end = ' €</div>'
result = re.search('%s(.*)%s' % (start, end), src).group(1)
print(result)
But the browser br is not opening the mentioned page or submitting these values.
The postal code 44326 isn't accepted by the server. For other postal codes you can query their API directly:
import json
import requests
area = 100
postalcode = 44309
url = 'https://fdy2-policycenter-production.k8s.blue.friday-prod.de/rest/friday/hc/price?area={area}&postalCode={postalcode}'
data = requests.get(url.format(area=area, postalcode=postalcode)).json()
# uncomment this to print all data:
# print(json.dumps(data, indent=4))
# print some info to screen:
print(data['basicCoverages']['coverages'][0]['insuredSum']['amount'])
print(data['basicCoverages']['coverages'][0]['price']['amount'])
Prints:
65000.0
7.81

How to import data from an HTML table on a website to Excel?

I would like to do some statistical analysis with Python on the live casino game called Crazy Time from Evolution Gaming. There is a website that has the data to do this: https://tracksino.com/crazytime. I want the data of the lowest table 'Spin History' to be imported into Excel. However, I do not know how this can be done. Could anyone give me an idea where to start?
Thanks in advance!
Try the below code:
import json
import requests
from urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import csv
import datetime

def scrap_history():
    csv_headers = []
    file_path = ''  # mention where on your system you want to save the file
    file_name = 'spin_history.csv'  # filename
    page_number = 1
    while True:
        # dynamic URL fetching data in chunks of 100
        url = 'https://api.tracksino.com/crazytime_history?filter=&sort_by=&sort_desc=false&page_num=' + str(page_number) + '&per_page=100&period=24hours'
        print('-' * 100)
        print('URL created : ', url)
        response = requests.get(url, verify=False)
        result = json.loads(response.text)  # loading data to convert to JSON
        history_data = result['data']
        print(history_data)
        if history_data != []:
            with open(file_path + file_name, 'a+') as history:
                # headers for the file
                csv_headers = ['Occured At', 'Slot Result', 'Spin Result', 'Total Winners', 'Total Payout']
                csvwriter = csv.DictWriter(history, delimiter=',', lineterminator='\n', fieldnames=csv_headers)
                if page_number == 1:
                    print('Writing CSV header now...')
                    csvwriter.writeheader()
                # write the extracted data into the csv file one row at a time
                for item in history_data:
                    value = datetime.datetime.fromtimestamp(item['when'])
                    occured_at = f'{value:%d-%B-%Y @ %H:%M:%S}'
                    csvwriter.writerow({'Occured At': occured_at,
                                        'Slot Result': item['slot_result'],
                                        'Spin Result': item['result'],
                                        'Total Winners': item['total_winners'],
                                        'Total Payout': item['total_payout'],
                                        })
                print('-' * 100)
            page_number += 1
            print(page_number)
            print('-' * 100)
        else:
            break
Explanation:
I implemented the script above with the requests library. The API URL https://api.tracksino.com/crazytime_history?filter=&sort_by=&sort_desc=false&page_num=1&per_page=50&period=24hours was extracted from the website itself (refer to the screenshot). The script builds a dynamic URL in which the page number changes on every iteration: first page_num=1, then page_num=2, and so on, until all the data has been extracted.
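Since the original question asks for the data in Excel, one option (not part of the original answer; it assumes pandas and openpyxl are installed) is to convert the resulting CSV into an .xlsx file:

import pandas as pd

# read the CSV produced by scrap_history() and write it as an Excel workbook
df = pd.read_csv('spin_history.csv')
df.to_excel('spin_history.xlsx', index=False)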

Sample Python code to replace a substring value in an xlsx cell

Sample code snippet tried:
for row in range(1, sheet.max_row + 1):
    for col in range(1, sheet.max_column + 1):
        temp = None
        cell_obj = sheet.cell(row=row, column=col)
        temp = re.search(r"requestor", str(cell_obj.value))
        if temp:
            if 'requestor' in cell_obj.value:
                cell_obj.value.replace('requestor', 'ABC')
I am trying to replace the value "Customer name: requestor " in an xlsx cell with "Customer name: ABC". How can this be achieved easily?
I found my answer in this post: https://www.edureka.co/community/42935/python-string-replace-not-working
replace() doesn't store the result in the same variable; it returns a new string that has to be assigned back. Hence the solution for the above:
mvar = None
for row in range(1, sheet.max_row + 1):
    for col in range(1, sheet.max_column + 1):
        temp = None
        cell_obj = sheet.cell(row=row, column=col)
        temp = re.search(r"requestor", str(cell_obj.value))
        if temp:
            if 'requestor' in cell_obj.value:
                mvar = cell_obj.value.replace('requestor', 'ABC')
                cell_obj.value = mvar
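To see why the assignment matters: strings in Python are immutable, so replace() returns a new object (a minimal illustration using the same phrase as the question):

s = "Customer name: requestor"
s.replace("requestor", "ABC")      # result is discarded, s is unchanged
print(s)                           # Customer name: requestor
s = s.replace("requestor", "ABC")  # assign the result back
print(s)                           # Customer name: ABC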
Just keep it simple. Instead of re and replace, search for the given value and overwrite the cell.
The example below also gives you the ability to change 'customer name' if needed:
import openpyxl

wb = openpyxl.load_workbook("example.xlsx")
sheet = wb["Sheet1"]
customer_name = "requestor"
replace_with = "ABC"
search_string = f"Customer name: {customer_name}"
replace_string = f"Customer name: {replace_with}"
for row in range(1, sheet.max_row + 1):
    for col in range(1, sheet.max_column + 1):
        cell_obj = sheet.cell(row=row, column=col)
        if cell_obj.value == search_string:
            cell_obj.value = replace_string
wb.save("example_copy.xlsx")  # remember that you need to save the results to the file

How to fix Python returning multiple lines in a .csv document instead of one?

I am trying to scrape data from a public forum for a school project, but every time I run the code, the resulting .csv file shows multiple rows for the text variable instead of just one.
from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as uReq

my_url = 'https://www.emimino.cz/diskuse/1ivf-repromeda-56566/'
uClient = uReq(my_url)
page_soup = soup(uClient.read(), "html.parser")
uClient.close()
containers = page_soup.findAll("div", {"class": "discussion_post"})
out_filename = "Repromeda.csv"
headers = "text,user_name,date \n"
f = open(out_filename, "w")
f.write(headers)
for container in containers:
    text1 = container.div.p
    text = text1.text
    user_container = container.findAll("span", {"class": "user_category"})
    user_id = user_container[0].text
    date_container = container.findAll("span", {"class": "date"})
    date = date_container[1].text
    print("text: " + text + "\n")
    print("user_id: " + user_id + "\n")
    print("date: " + date + "\n")
    # writes the dataset to file
    f.write(text.replace(",", "|") + ", " + user_id + ", " + date + "\n")
f.close()
Ideally I am trying to create a row for each data entry (i.e. text, user_id, date in one row), but instead I get multiple rows for one text entry and only one row for the user_id and date entries.
this is the actual output
this is the expected output
Just replace the newlines in the text with a space:
for container in containers:
    text1 = container.div.p
    text = text1.text.replace('\n', ' ')
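As a side note, the csv module handles commas and quoting for you, so the manual replace(",", "|") workaround isn't needed. A sketch using the same variable names as the code above:

import csv

with open(out_filename, "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "user_name", "date"])
    for container in containers:
        text = container.div.p.text.replace('\n', ' ').strip()
        user_id = container.findAll("span", {"class": "user_category"})[0].text.strip()
        date = container.findAll("span", {"class": "date"})[1].text.strip()
        # one row per post; commas inside the text are quoted automatically
        writer.writerow([text, user_id, date])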

Python ArcPy - Print Layer with highest field value

I have some Python code that goes through layers in my ArcGIS project and prints out the layer names and their corresponding highest value within the field "SUM_USER_VisitCount".
Output Picture
What I want the code to do is only print out the layer name and SUM_USER_VisitCount field value for the one layer with the absolute highest value.
Desired Output
I have been unable to figure out how to achieve this and can't find anything online either. Can someone help me achieve my desired output?
Sorry if the code layout is a little weird. It got messed up when I pasted it into the "code sample"
Here is my code:
import arcpy
import datetime
from datetime import timedelta
import time

# Document start time in order to calculate run time
time1 = time.clock()

# assign project and map frame
p = arcpy.mp.ArcGISProject(r'E:\arcGIS_Shared\Python\CumulativeHeatMaps.aprx')
m = p.listMaps('Map')[0]
Markets = [3000]

### Centers to loop through
CA_Centers = ['Castro', 'ColeValley', 'Excelsior', 'GlenPark',
              'LowerPacificHeights', 'Marina', 'NorthBeach', 'RedwoodCity', 'SanBruno',
              'DalyCity']

for Market in Markets:
    print(Market)
    for CA_Center in CA_Centers:
        Layers = m.listLayers("CumulativeSumWithin{0}_{1}_Jun2018".format(Market, CA_Center))
        fields = ['SUM_USER_VisitCount']
        for Layer in Layers:
            print(Layer)
            sqlClause = (None, 'ORDER BY ' + 'SUM_USER_VisitCount')  # + 'DESC'
            with arcpy.da.SearchCursor(in_table=Layer, field_names=fields,
                                       sql_clause=sqlClause) as searchCursor:
                print(max(searchCursor))
You can create a dictionary that stores the result from each query and then print out the highest one at the end.
results_dict = {}
for Market in Markets:
    print(Market)
    for CA_Center in CA_Centers:
        Layers = m.listLayers("CumulativeSumWithin{0}_{1}_Jun2018".format(Market, CA_Center))
        fields = ['SUM_USER_VisitCount']
        for Layer in Layers:
            print(Layer)
            sqlClause = (None, 'ORDER BY ' + 'SUM_USER_VisitCount')  # + 'DESC'
            with arcpy.da.SearchCursor(in_table=Layer, field_names=fields,
                                       sql_clause=sqlClause) as searchCursor:
                # store the highest value once; the cursor can only be iterated a single time
                highest = max(searchCursor)
                print(highest)
                results_dict[Layer] = highest

# get the key of the dictionary item with the highest value
highest_count_layer = max(results_dict, key=results_dict.get)
print(highest_count_layer)
print(results_dict[highest_count_layer])
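As an aside, the commented-out DESC in the original sql_clause hints at a variant that lets the workspace do the sorting. A hypothetical sketch (ORDER BY in sql_clause is only honored by database/geodatabase sources, not by all layer types):

for Layer in Layers:
    sqlClause = (None, 'ORDER BY SUM_USER_VisitCount DESC')
    with arcpy.da.SearchCursor(in_table=Layer, field_names=fields,
                               sql_clause=sqlClause) as searchCursor:
        results_dict[Layer] = next(searchCursor)  # first row holds the highest value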
