How to extract Bills Receivable data to Excel from Tally via ODBC?

I am new to extracting data from Tally ERP 9.0 via ODBC. I researched and found some help in Extracting Day Books / Transaction Data from Tally using ODBC.
Although that solution helps in some ways, it doesn't fully solve the problem. I want to get data in this format:
Date
Party's Ledger Name
Voucher No
Amount
This data should be filtered to only outstanding invoices (invoices which have not been paid by Sundry Debtors). The data should also include invoices from previous years: if a bill is outstanding, it should be included regardless of how many years back we have to go.
The above solution does address getting those 4 columns. However, it does not solve the issue described in the previous paragraph: I am getting all invoices, paid or unpaid, and only for the current year. Any help with this is appreciated.
Thanks in advance.

You need to create a collection as below to get the company's receivable bills:
[Collection: CMPRecevables]
Type : Bills
Is ODBC Table : Yes
Filter : IsReceivable
Fetch : Name, BillDate, Parent, ClosingBalance
Connect to this table using ODBC and run the query below:
select $Name, $BillDate, $Parent, $ClosingBalance from CMPRecevables
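A minimal Python sketch of pulling this table into a CSV for Excel might look like the following. The DSN name is an assumption: Tally's ODBC server usually registers a DSN such as TallyODBC64_9000 on its default port 9000, so adjust it to whatever your machine actually shows.

```python
# Sketch: pull the CMPRecevables ODBC table from Tally into a CSV for Excel.
# Assumes Tally ERP 9 is running with its ODBC server enabled and that the
# DSN name below matches the one registered on your machine.
import csv

QUERY = "select $Name, $BillDate, $Parent, $ClosingBalance from CMPRecevables"

def export_receivables(dsn="TallyODBC64_9000", out_path="receivables.csv"):
    import pyodbc  # pip install pyodbc; imported lazily so the sketch loads without it
    conn = pyodbc.connect(f"DSN={dsn}")
    cursor = conn.cursor()
    cursor.execute(QUERY)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "BillDate", "Parent", "ClosingBalance"])
        writer.writerows(cursor.fetchall())
    conn.close()
```

Open the resulting CSV in Excel; because the collection is filtered on IsReceivable, only outstanding bills should come through.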

Related

Export Receipts and Payment Details from Tally using Python

I am trying to export payment details in the report format below:
Date        Agst Ref    Party Name  Amount
11/25/2019  19-20/1256  ABC         4,145
Extracting Day Books / Transaction Data from Tally using ODBC
The answer in the above link solved 75% of my problem: I was able to extract Date, Party's Ledger Name and Amount. But I am struggling to export the 4th data point, "Agst Ref", for both payments and receipts.
I am fine with sending an XML export request if exporting via the ODBC route is not possible, in which case I need the XML code.
Please assist me; I have been stuck on this for a while.
Agst Ref is just $Reference, so add it to the Fetch list and select $Reference in your query. What do you get when you run the queries?

Firebase Cohorts in BigQuery

I am trying to replicate Firebase Cohorts using BigQuery. I tried the query from this post: Firebase exported to BigQuery: retention cohorts query, but the results I get don't make much sense.
I managed to get the users for period_lag 0, similar to what I can see in Firebase; however, the rest of the numbers don't look right:
Results:
One of the period_lag values is missing (I only see 0, 1 and 3, but no 2), and the user counts for each lag period don't look right either. I would expect to see something like this:
Firebase Cohort:
I'm pretty sure that the issue is in how I replaced the parameters in the original query with those from Firebase. Here are the bits that I have updated in the original query:
#standardSQL
WITH activities AS (
SELECT answers.user_dim.app_info.app_instance_id AS id,
FORMAT_DATE('%Y-%m', DATE(TIMESTAMP_MICROS(answers.user_dim.first_open_timestamp_micros))) AS period
FROM `dataset.app_events_*` AS answers
JOIN `dataset.app_events_*` AS questions
ON questions.user_dim.app_info.app_instance_id = answers.user_dim.app_info.app_instance_id
-- WHERE CONCAT('|', questions.tags, '|') LIKE '%|google-bigquery|%'
(...)
WHERE cohorts_size.cohort >= FORMAT_DATE('%Y-%m', DATE('2017-11-01'))
ORDER BY cohort, period_lag, period_label
So I'm using user_dim.first_open_timestamp_micros instead of create_date and user_dim.app_info.app_instance_id instead of id and parent_id. Any idea what I'm doing wrong?
I think there is a misunderstanding about how, and which, data to retrieve into the activities table. Let me state the differences between the case presented in the other StackOverflow question you linked and the case you are trying to reproduce:
In the other question, answers.creation_date refers to a date value that is not fixed and can take different values for a single user: the same user can post two different answers on two different dates, so you end up with two activities entries like {[ID:user1, date:2018-01], [ID:user1, date:2018-02], [ID:user2, date:2018-01]}.
In your question, answers.user_dim.first_open_timestamp_micros refers to a date value that is fixed in the past because, as stated in the documentation, that field holds "The time (in microseconds) at which the user first opened the app". That value is unique per user, so each user produces only one activities entry, like {[ID:user1, date:2018-01], [ID:user2, date:2018-02], [ID:user3, date:2018-01]}.
I think that is why you are not getting information about the lagged retention of users: you are not recording each time a user accesses the application, only the first time they did.
Instead of using answers.user_dim.first_open_timestamp_micros, you should look for another value among the ones available in the documentation link I shared before, possibly event_dim.date or event_dim.timestamp_micros. Take into account that these fields refer to an event and not to a user, so you will need some pre-processing first. For testing purposes, you can use some of the publicly available BigQuery exports for Firebase.
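To make the difference concrete, here is a small illustrative Python sketch (hypothetical data, not BigQuery): with a fixed first-open date, every activity lands in the user's own cohort period, so only lag 0 can ever appear, while per-event dates produce the full range of lags a retention table needs.

```python
# Hypothetical illustration of why a fixed first-open date yields only lag 0.
# A "period" is a month string; lag = months between activity period and cohort.

def month_index(period):
    # '2018-02' -> 2018*12 + 1, a linear month number for easy subtraction
    year, month = map(int, period.split("-"))
    return year * 12 + (month - 1)

def period_lags(activities, cohorts):
    """activities: {user: [periods active]}, cohorts: {user: cohort period}."""
    lags = set()
    for user, periods in activities.items():
        for p in periods:
            lags.add(month_index(p) - month_index(cohorts[user]))
    return sorted(lags)

cohorts = {"user1": "2018-01", "user2": "2018-01"}

# Using first_open as the activity date: one fixed entry per user -> only lag 0.
fixed = {"user1": ["2018-01"], "user2": ["2018-01"]}
print(period_lags(fixed, cohorts))   # [0]

# Using per-event dates: repeated visits -> lags 0, 1 and 2 all appear.
events = {"user1": ["2018-01", "2018-02"], "user2": ["2018-01", "2018-03"]}
print(period_lags(events, cohorts))  # [0, 1, 2]
```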
Finally, as a side note, it is pointless to JOIN a table with itself on the same key as done here, so your edited Standard SQL query would better be:
#standardSQL
WITH activities AS (
SELECT answers.user_dim.app_info.app_instance_id AS id,
FORMAT_DATE('%Y-%m', DATE(TIMESTAMP_MICROS(answers.user_dim.first_open_timestamp_micros))) AS period
FROM `dataset.app_events_*` AS answers
GROUP BY id, period

How do I create a running count of outcomes sequentially by date and unique to a specific person/ID?

I have a list of unique customers who have made transactions over a year (Jan–Dec). They have bought products using 3 different methods (card, cash, check). My goal is to build a multi-classification model to predict the method of payment.
To do this I am engineering some recency and frequency features into my training data, but I am having trouble with the following frequency count, because the only way I know how to do it is in Excel using the COUNTIFS and SUMIFS functions, which are prohibitively slow. If someone can help and/or suggest another solution, it would be very much appreciated.
I have a data set with 3 columns (Customer ID, Purchase Date, Payment Type), sorted by Purchase Date then Customer ID. How do I get a prior frequency count of payment type by date that excludes the current row's transaction and any future transactions (those whose date is greater than the row's Purchase Date)? Basically, I want a running count of each payment option for a given Customer ID over the date range before that row's purchase date. In my head I see it as "crawling" backwards through the transactions and counting. A simplified screenshot of the data frame is below, with the 3 prior-count columns I am looking to generate programmatically.
Screenshot
This gives you the answer as a list of CustomerID, PurchaseDate, PaymentMethod and the prior count:
SELECT CustomerID, PurchaseDate, PaymentMethod,
    (SELECT COUNT(CustomerID) FROM History T
     WHERE T.CustomerID = History.CustomerID
       AND T.PaymentMethod = History.PaymentMethod
       AND T.PurchaseDate < History.PurchaseDate) AS PriorCount
FROM History;
You can save this query and use it as the source for a crosstab query to get the columnar format you want.
Some notes:
I assumed "History" as the source table name; change the query above to use your actual source.
To use this as a query, open a new query in design view. Close the window that asks what tables the query is to be built on. Open the SQL view of the query design - like design view, but it shows the SQL instead of the normal design interface. Copy the above into the SQL view.
You should now be able to switch to datasheet view and see the results
When the query is working to your satisfaction, save it with any appropriate name
Open a new query in design view
When you get the list of tables to include, switch to the list of queries and include the query you just saved
Change the query type to crosstab and update the query as needed to select rows, columns and values - look up "access crosstab queries" if you need more help.
Another tip to see what is happening here: you can take the subquery (the part inside the parentheses above) and make just that statement into its own query, excluding the opening and closing parentheses. Then you can look at its design view to see what it does.
Save it with an appropriate name and put it into the query above in place of the parenthesized statement; then you can look at the design view of the whole thing.
Sometimes it's easier to visualize and learn from two queries strung together this way than to work with subqueries.
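If you want to prototype this outside Access, the same correlated subquery works unchanged in most SQL engines. Here is a sketch using Python's built-in sqlite3 with a tiny made-up History table:

```python
# Demonstrate the prior-count correlated subquery on an in-memory SQLite table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE History (CustomerID TEXT, PurchaseDate TEXT, PaymentMethod TEXT)")
conn.executemany(
    "INSERT INTO History VALUES (?, ?, ?)",
    [
        ("C1", "2020-01-05", "card"),
        ("C1", "2020-02-10", "card"),
        ("C1", "2020-03-15", "cash"),
        ("C1", "2020-04-20", "card"),
    ],
)

rows = conn.execute(
    """
    SELECT CustomerID, PurchaseDate, PaymentMethod,
           (SELECT COUNT(CustomerID) FROM History T
            WHERE T.CustomerID = History.CustomerID
              AND T.PaymentMethod = History.PaymentMethod
              AND T.PurchaseDate < History.PurchaseDate) AS PriorCount
    FROM History
    ORDER BY PurchaseDate
    """
).fetchall()

for row in rows:
    print(row)
# The third card purchase (2020-04-20) has PriorCount 2; the cash row has 0.
```

The ISO date strings compare correctly as text here; in Access you would use real Date fields instead.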

ODBC Microsoft Query BMC Remedy SLM Status Table

I want to use Microsoft Query to pull out stats on incident SLA status that can normally be seen in the SLM Status window. See the pictures below for reference.
However, I am struggling with finding the proper table to get the data from. What table is available to use as a ODBC data source for getting this information?
The data you are looking for is stored in the SLM:Measurement form. You'll want the following fields:
SVTTitle (SVT Title) (300411500)
GoalCategoryChar (Incident Response Time) (300426800)
GoalTimeHr (Hours) (300396000)
GoalTimeMin (Min) (300451200)
GoalSchedCost (Cost Per Min) (301489500)
SVTDueDate (Due Date/Time) (300364900)
MeasurementStatus (Progress) (300365100)
ApplicationUserFriendlyID (Incident ID) (301238500)
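As a sketch, a query over those fields from Microsoft Query or Python might look like the one below. The DSN and the use of the internal field names are assumptions: depending on how your AR System ODBC driver is configured, it may expose the display labels instead, so adjust to what the driver actually shows.

```python
# Sketch: pull SLM:Measurement rows over ODBC. The field names follow the list
# above; whether your AR System ODBC driver expects these internal names or the
# display labels depends on its configuration.
QUERY = (
    "SELECT SVTTitle, GoalCategoryChar, GoalTimeHr, GoalTimeMin, "
    "GoalSchedCost, SVTDueDate, MeasurementStatus, ApplicationUserFriendlyID "
    'FROM "SLM:Measurement"'
)

def fetch_slm_rows(dsn="AR System ODBC Data Source"):
    import pyodbc  # pip install pyodbc; imported lazily so the sketch loads without it
    conn = pyodbc.connect(f"DSN={dsn}")
    rows = conn.cursor().execute(QUERY).fetchall()
    conn.close()
    return rows
```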
From what I can tell, the "Next Target Date:" is calculated by Remedy when the SLM dialog is opened, using these active links:
SLM:IntegrationDialog:OnLoadSelectTimeBasedTab_SetNextDueDate-Incident
SLM:IntegrationDialog:OnLoadSelectTimeBasedTab_SetNextDueDate-Change
SLM:IntegrationDialog:OnLoadSelectTimeBasedTab_SetNextDueDate-Request
It isn't stored in the table.
Hope this helps!

Getting ga:socialNetwork dimension in BigQuery Export

I'm trying to retrieve site referral data from social networks via BigQuery Export.
I've gotten the referral path from such sites, but what I cannot seem to find is the neatly categorized field that is available in Google Analytics.
i.e. ga:socialNetwork
Anyone know where to find this data?
So far, I've looked here: https://support.google.com/analytics/answer/3437719?hl=en
(and, in our data, of course)
Cheers!
Although the ga:socialNetwork dimension isn't currently available via BigQuery Export, as you mentioned, you can get the referral path using trafficSource.source.
You can see the difference between these two fields by running this query (against the Core Reporting API, which has both fields). You can then use the result as a lookup table for your data.
If anyone is interested, here is my solution, based on Andy's answer:
SELECT
Week,
IF (SocialNetwork IS NULL, Medium, "social" ) AS Medium,
Referral_URL,
SocialNetwork,
Total_Sessions,
Avg_Time_On_Site_in_Mins,
Avg_Session_Page_Depth,
Bounce_Rate,
FROM (
SELECT
Week,
Medium,
Referral_HostName,
Referral_URL,
SocialNetworks.socialNetwork AS SocialNetwork,
Total_Sessions,
Avg_Time_On_Site_in_Mins,
Avg_Session_Page_Depth,
Bounce_Rate,
FROM
[zzzzzzz.ga_sessions_20141223] AS All_Sessions
LEFT JOIN EACH [GA_API.SocialNetworks] AS SocialNetworks
ON ALL_Sessions.Referral_HostName = SocialNetworks.Source
GROUP EACH BY Week, Medium, Full_URL, Referral_HostName, Referral_URL, SocialNetwork, Total_Sessions, Avg_Time_On_Site_in_Mins, Avg_Session_Page_Depth, Bounce_Rate,
ORDER BY Total_Sessions DESC )
GROUP EACH BY Week, Medium, Full_URL, Referral_URL, SocialNetwork, Total_Sessions, Avg_Time_On_Site_in_Mins, Avg_Session_Page_Depth, Bounce_Rate,
ORDER BY Total_Sessions DESC;
This is now available in BigQuery Export with the field name hits.social.socialNetwork.
The detailed documentation is here: https://support.google.com/analytics/answer/3437719?hl=en
Following this documentation I ran a sample query, which worked fine:
SELECT
COUNT(totals.visits),
hits.social.socialNetwork
FROM
[project:dataset.ga_sessions_20161101]
GROUP BY
hits.social.socialNetwork
ORDER BY
1 DESC
