How to replicate the Column with data set in table - python-3.6

I want to convert the question columns into rows using Python pandas, like below:

import pandas as pd
from openpyxl import load_workbook
df = pd.read_excel(r'file', sheet_name='results')
d = {'Score.1':'Score','Score.2':'Score','Duration.1':'Duration','Duration.2':'Duration'}
melted=pd.melt(df, id_vars=['userid','Candidate','Score','Duration'], value_vars=['Question 1'],var_name='myVarname', value_name='myValname')
melted1=pd.melt(df, id_vars=['userid','Candidate','Score.1','Duration.1'], value_vars=['Question 2'],var_name='myVarname', value_name='myValname').rename(columns=d)
melted2=pd.melt(df, id_vars=['userid','Candidate','Score.2','Duration.2'], value_vars=['Question 3 '],var_name='myVarname', value_name='myValname').rename(columns=d)
......
melted2=pd.melt(df, id_vars=['userid','Candidate','Score.25','Duration.25'], value_vars=['Question 25 '],var_name='myVarname', value_name='myValname').rename(columns=d)
meltedfinal=[melted,melted1,melted2]
result = pd.concat(meltedfinal)
result.to_excel(r'file')
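Since the snippet above repeats the same melt for every Score.N/Duration.N pair, the frames can also be built in a loop. A minimal sketch, assuming 25 question columns named exactly as above, with 'Question N+1' paired with Score.N and Duration.N:
import pandas as pd
df = pd.read_excel(r'file', sheet_name='results')
frames = [pd.melt(df, id_vars=['userid', 'Candidate', 'Score', 'Duration'],
                  value_vars=['Question 1'], var_name='myVarname', value_name='myValname')]
for i in range(1, 25):  # Score.1/Duration.1 pair with 'Question 2', ..., Score.24/Duration.24 with 'Question 25'
    frames.append(
        pd.melt(df, id_vars=['userid', 'Candidate', f'Score.{i}', f'Duration.{i}'],
                value_vars=[f'Question {i + 1}'], var_name='myVarname', value_name='myValname')
        .rename(columns={f'Score.{i}': 'Score', f'Duration.{i}': 'Duration'})
    )
result = pd.concat(frames, ignore_index=True)
result.to_excel(r'file')  # same placeholder path as above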

Related

Pydeck HexLayer min and log scale

Let's consider this HexLayer example using pydeck in Streamlit:
import numpy as np
import pandas as pd
import pydeck as pdk
import streamlit as st
lat0=40.7
lon0=-74.1201062
n_points = 1000
lat = np.random.normal(loc=lat0, scale=0.02, size=n_points)
lon = np.random.normal(loc=lon0, scale=0.02, size=n_points)
data = pd.DataFrame({'lat': lat, 'lon': lon})
st.pydeck_chart(pdk.Deck(
    map_provider="mapbox",
    initial_view_state=pdk.ViewState(
        latitude=lat0,
        longitude=lon0,
        zoom=10,
    ),
    layers=[
        pdk.Layer(
            'HexagonLayer',
            data=data,
            get_position='[lon, lat]',
            radius=1000,
            coverage=0.6,
        ),
    ],
))
Here's the output:
Is there a way to only display the hexagonal bins with a count above a given threshold, say counts > 5?
Similarly, is it possible to set a logarithmic scale for the color/height of the hexagons?
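I don't know of an absolute-count cutoff or a built-in logarithmic scale for HexagonLayer, but deck.gl's HexagonLayer does expose percentile-based filtering, and pydeck forwards extra layer keywords to deck.gl. A minimal sketch, where lower_percentile is assumed to be passed through as deck.gl's lowerPercentile prop:
pdk.Layer(
    'HexagonLayer',
    data=data,
    get_position='[lon, lat]',
    radius=1000,
    coverage=0.6,
    # assumption: forwarded to deck.gl as lowerPercentile; hides hexagons whose aggregated
    # count falls below this percentile (a percentile cut-off, not an absolute counts > 5)
    lower_percentile=50,
)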

Get prediction of OLS fit from statsmodels

I am trying to get in-sample predictions from an OLS fit, as below:
import numpy as np
import pandas as pd
import statsmodels.api as sm
macrodata = sm.datasets.macrodata.load_pandas().data
macrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q')
mod = sm.OLS(macrodata['realgdp'], sm.add_constant(macrodata[['realdpi', 'realinv', 'tbilrate', 'unemp']])).fit()
mod.get_prediction(sm.add_constant(macrodata[['realdpi', 'realinv', 'tbilrate', 'unemp']])).summary_frame(0.95).head()
This is fine. But if I alter the positions of the regressors in mod.get_prediction, I get different estimates:
mod.get_prediction(sm.add_constant(macrodata[['tbilrate', 'unemp', 'realdpi', 'realinv']])).summary_frame(0.95).head()
This is surprising. Can't mod.get_prediction identify the regressors based on column names?
As noted in the comments, sm.OLS will convert your data frame into an array for fitting; likewise, for prediction it expects the predictors to be in the same order.
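A quick way to see this, as a small sketch reusing the mod fitted above: the design matrix stored on the fitted model is a plain NumPy array, so no column names are left to match on at prediction time.
print(type(mod.model.exog))  # <class 'numpy.ndarray'> - the DataFrame column names are gone
print(mod.model.exog[:2])    # constant plus the four regressors, identified only by position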
If you would like the column names to be used, you can use the formula interface; see the documentation for more details. Below I apply it to your example:
import statsmodels.api as sm
import statsmodels.formula.api as smf
macrodata = sm.datasets.macrodata.load_pandas().data
mod = smf.ols(formula='realgdp ~ realdpi + realinv + tbilrate + unemp', data=macrodata)
res = mod.fit()
In the order provided:
res.get_prediction(macrodata[['realdpi', 'realinv', 'tbilrate', 'unemp']]).summary_frame(0.95).head()
mean mean_se mean_ci_lower mean_ci_upper obs_ci_lower obs_ci_upper
0 2716.423418 14.608110 2715.506229 2717.340607 2710.782460 2722.064376
1 2802.820840 13.714821 2801.959737 2803.681943 2797.188729 2808.452951
2 2781.041564 12.615903 2780.249458 2781.833670 2775.419588 2786.663539
3 2786.894138 12.387428 2786.116377 2787.671899 2781.274166 2792.514110
4 2848.982580 13.394688 2848.141577 2849.823583 2843.353507 2854.611653
Results are the same if we flip the columns:
res.get_prediction(macrodata[['tbilrate', 'unemp', 'realdpi', 'realinv']]).summary_frame(0.95).head()
mean mean_se mean_ci_lower mean_ci_upper obs_ci_lower obs_ci_upper
0 2716.423418 14.608110 2715.506229 2717.340607 2710.782460 2722.064376
1 2802.820840 13.714821 2801.959737 2803.681943 2797.188729 2808.452951
2 2781.041564 12.615903 2780.249458 2781.833670 2775.419588 2786.663539
3 2786.894138 12.387428 2786.116377 2787.671899 2781.274166 2792.514110
4 2848.982580 13.394688 2848.141577 2849.823583 2843.353507 2854.611653

Plotly choropleth map in jupyter notebooks not showing color

Trying to make a choropleth map in Plotly using some data I have in a CSV file, but the resulting map is not showing any color.
Below is the code that I have written for this:
import json
import pandas as pd
import plotly.express as px
asean_country = json.load(open("aseancovidmap.geojson", "r"))
df = pd.read_csv("covidcases.csv")
# build a mapping from country name to the feature id (the "sform" property)
# before it is used to fill the iso-2 column below
id_map = {}
for feature in asean_country['features']:
    feature['id'] = feature['properties']['sform']
    id_map[feature['properties']['name']] = feature['id']
df["iso-2"] = df['Country'].apply(lambda x: id_map[x])
figure = px.choropleth(df, locations='iso-2', locationmode='country names',
                       geojson=asean_country, color='Ttlcases', scope='asia',
                       title='Total COVID 19 cases in ASEAN Countries as on 10/1/2022')
figure.show()
Clearly I don't have access to your files, so I have sourced geometry and COVID data; for reference, this is at the end of the answer.
The key changes I have made: don't loop over the geojson; define locations as a column in the dataframe and set featureidkey.
Clearly this is coloring the countries.
solution
import json
import pandas as pd
import plotly.express as px
# asean_country = json.load(open("aseancovidmap.geojson","r"))
asean_country = gdf_asean.rename(columns={"adm0_a3": "iso_a2"}).__geo_interface__
# df= pd.read_csv("covidcases.csv")
df = gdf_asean_cases.loc[:, ["iso_code", "adm0_a3", "total_cases", "date"]].rename(
    columns={"iso_code": "iso_a2", "total_cases": "Ttlcases"}
)
figure = px.choropleth(
    df,
    locations="iso_a2",
    featureidkey="properties.iso_a2",
    geojson=asean_country,
    color="Ttlcases",
    title="Total COVID 19 cases in ASEAN Countries as on 10/1/2022",
).update_geos(fitbounds="locations", visible=True).update_layout(margin={"t": 40, "b": 0, "l": 0, "r": 0})
figure.show()
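Applied to your own files, the same pattern should work without the loop. A sketch under two assumptions taken from the loop in your question, namely that each feature in aseancovidmap.geojson carries the two-letter code at properties.sform and that df['iso-2'] holds matching codes:
figure = px.choropleth(
    df,
    locations="iso-2",
    featureidkey="properties.sform",  # match df['iso-2'] against the sform property
    geojson=asean_country,
    color="Ttlcases",
    title="Total COVID 19 cases in ASEAN Countries as on 10/1/2022",
).update_geos(fitbounds="locations", visible=True)
figure.show()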
data sourcing
import requests, io
import geopandas as gpd
import pandas as pd
# get asia geometry
gdf = gpd.read_file(
    "https://gist.githubusercontent.com/hrbrmstr/94bdd47705d05a50f9cf/raw/0ccc6b926e1aa64448e239ac024f04e518d63954/asia.geojson"
)
# get countries that make up ASEAN
df = pd.read_html("https://en.wikipedia.org/wiki/List_of_ASEAN_countries_by_GDP")[1].loc[1:]
# no geometry for singapore.... just ASEAN geometry
gdf_asean = (
    gdf.loc[:, ["admin", "adm0_a3", "geometry"]]
    .merge(
        df.loc[:, ["Country", "Rank"]], left_on="admin", right_on="Country", how="right"
    )
)
# get COVID data
dfall = pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
# filter to last date in data
dfall["date"] = pd.to_datetime(dfall["date"])
dflatest = dfall.groupby(["iso_code"], as_index=False).last()
# merge geometry and COVID data
gdf_asean_cases = gdf_asean.merge(
    dflatest.loc[:, ["iso_code", "total_cases", "date"]], left_on="adm0_a3", right_on="iso_code"
)

rpy2 does not convert back to pandas

I have an R object that will not convert to Pandas, and the strange part is that it doesn't throw an error.
Updated with the code I'm using; sorry not to supply that up front, and for missing the request for two weeks!
Python code that calls an R script
import pandas as pd
import rpy2.robjects as ro
from rpy2.robjects.packages import importr
from rpy2.robjects import pandas2ri
import datetime
from rpy2.robjects.conversion import localconverter
def serial_date_to_string(srl_no):
    new_date = datetime.datetime(1970, 1, 1, 0, 0) + datetime.timedelta(srl_no - 1)
    return new_date.strftime("%Y-%m-%d")
jurisdiction='TX'
r=ro.r
r_df=r['source']('farrington.R')
with localconverter(ro.default_converter + pandas2ri.converter):
    pd_from_r_df = ro.conversion.rpy2py(r_df)
The issue is that pd_from_r_df returns an R object rather than a Pandas dataframe:
>>> pd_from_r_df
R object with classes: ('list',) mapped to:
[ListSexpVector, BoolSexpVector]
value: <class 'rpy2.rinterface.ListSexpVector'>
<rpy2.rinterface.ListSexpVector object at 0x7faa4c4eff08> [RTYPES.VECSXP]
visible: <class 'rpy2.rinterface.BoolSexpVector'>
<rpy2.rinterface.BoolSexpVector object at 0x7faa4c4e7948> [RTYPES.LGLSXP]
Here's the R script "farrington.R", which returns a surveillance time series that ro.conversion.rpy2py (as used above) isn't converting to a pandas dataframe:
library('surveillance')
library(readr)
library(tidyr)
library(dplyr)
w<-1
b<-3
nfreq<-52
steps_back<- 28
alpha<-0.05
counts <- read_csv("Weekly_counts_of_death_by_jurisdiction_and_cause_of_death.csv")
counts<-counts[,!colnames(counts) %in% c('Cause Subgroup','Time Period','Suppress','Note','Average Number of Deaths in Time Period','Difference from 2015-2019 to 2020','Percent Difference from 2015-2019 to 2020')]
wide_counts_by_cause<-pivot_wider(counts,names_from='Cause Group',values_from='Number of Deaths',values_fn=(`Cause Group`=sum))
wide_state <- filter(wide_counts_by_cause,`State Abbreviation`==jurisdiction)
wide_state <- filter(wide_state,Type=='Unweighted')
wide_state[is.na(wide_state)] <-0
important_columns=c('Alzheimer disease and dementia','Cerebrovascular diseases','Heart failure','Hypertensive dieases','Ischemic heart disease','Other diseases of the circulatory system','Malignant neoplasms','Diabetes','Renal failure','Sepsis','Chronic lower respiratory disease','Influenza and pneumonia','Other diseases of the respiratory system','Residual (all other natural causes)')
all_columns <- append(c('Year','Week'),important_columns)
selected_wide_state<-wide_state[, names(wide_state) %in% all_columns]
start<-c(as.numeric(min(selected_wide_state[,'Year'])),as.numeric(min(selected_wide_state[,'Week'])))
freq<-as.numeric(max(selected_wide_state[,'Week']))
sts <- new("sts",epoch=1:nrow(numeric_wide_state),start=start,freq=freq,observed=numeric_wide_state)
sts_4 <- aggregate(sts[,important_columns],nfreq=nfreq)
start_idx=end_idx-steps_back
cntrlFar <- list(range=start_idx:end_idx, w=w, b=b, alpha=alpha)
surveil_ts_4_far <- farrington(sts_4,control=cntrlFar)
far_df<-tidy.sts(surveil_ts_4_far)
far_df
(using the NCHS data here [from a couple months back] https://data.cdc.gov/NCHS/Weekly-counts-of-death-by-jurisdiction-and-cause-o/u6jv-9ijr/ )
In R, when calling source() by default on a script without named functions, the returned object is a list of two named components, $value and $visible, where:
$value is the last displayed or defined object, which in your case is the far_df data frame (in R, a data.frame is a class object extending the list type);
$visible is a boolean vector indicating whether the last object was displayed, which in your case is TRUE. This would be FALSE had you ended the script at far_df <- tidy.sts(surveil_ts_4_far).
In fact, your Python output confirms this, indicating a list of [ListSexpVector, BoolSexpVector].
Therefore, since you only want the first item, index the first item accordingly, by number or by name.
r_raw = ro.r['source']('farrington.R') # IN R: r_raw <- source('farrington.R')
r_df = r_raw[0] # IN R: r_df <- r_raw[[1]]
r_df = r_raw[r_raw.names.index('value')] # IN R: r_df <- r_raw$value
with localconverter(ro.default_converter + pandas2ri.converter):
    pd_from_r_df = ro.conversion.rpy2py(r_df)

Convert Pyspark dataframe to dictionary

I'm trying to convert a Pyspark dataframe into a dictionary.
Here's the sample CSV file -
Col0, Col1
-----------
A153534,BDBM40705
R440060,BDBM31728
P440245,BDBM50445050
I've come up with this code -
from rdkit import Chem
from pyspark import SparkContext
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)
df = spark.read.csv("gs://my-bucket/my_file.csv") # has two columns
# Creating list
to_list = map(lambda row: row.asDict(), df.collect())
#Creating dictionary
to_dict = {x['col0']: x for x in to_list }
This creates a dictionary like below -
{'A153534': {'col0': 'A153534', 'col1': 'BDBM40705'}, 'R440060': {'col0': 'R440060', 'col1': 'BDBM31728'}, 'P440245': {'col0': 'P440245', 'col1': 'BDBM50445050'}}
But I want a dictionary like this -
{'A153534': 'BDBM40705'}, {'R440060': 'BDBM31728'}, {'P440245': 'BDBM50445050'}
How can I do that?
I tried the rdd solution by Yolo but I'm getting an error. Can you please tell me what I am doing wrong?
py4j.protocol.Py4JError: An error occurred while calling
o80.isBarrier. Trace: py4j.Py4JException: Method isBarrier([]) does
not exist
at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
at py4j.Gateway.invoke(Gateway.java:274)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Here's a way of doing it using rdd:
df.rdd.map(lambda x: {x.Col0: x.Col1}).collect()
[{'A153534': 'BDBM40705'}, {'R440060': 'BDBM31728'}, {'P440245': 'BDBM50445050'}]
This could help you:
from pyspark.sql.functions import create_map, to_json
df = spark.read.csv('/FileStore/tables/Create_dict.txt', header=True)
df = df.withColumn('dict',to_json(create_map(df.Col0,df.Col1)))
df_list = [row['dict'] for row in df.select('dict').collect()]
df_list
Output is:
['{"A153534":"BDBM40705"}',
'{"R440060":"BDBM31728"}',
'{"P440245":"BDBM50445050"}']
