Filter on directory and all subdirectories in Google Analytics

I am trying to set up a filter to include only data for a particular directory and all of its subdirectories.
For example, I would like to include data from the dogs directory and its subdirectories:
http://www.mysite.com/dogs/
http://www.mysite.com/dogs/index.htm
http://www.mysite.com/dogs/breeds.htm
http://www.mysite.com/dogs/puppies/
http://www.mysite.com/dogs/puppies/index.htm
http://www.mysite.com/dogs/puppies/care/
http://www.mysite.com/dogs/puppies/care/feeding.htm
When creating a new filter, I think I should select a predefined filter: Include only traffic to the subdirectories that begin with /dogs/, not case sensitive.
I am just a bit confused about whether it should be "that begin with" or "that are equal to". Am I right to use "that begin with" in my situation, and is my subdirectory value correct?
(I ask because it takes 24 hours for the data to update so getting it wrong costs a lot of time)

It turns out I was right. To filter on a directory and all of its subdirectories, the settings should be:
Predefined filter
Include Only
traffic to the subdirectories
that begin with
/dogs/
Not case sensitive
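
For anyone who wants to sanity-check the logic before burning a 24-hour wait: here is a minimal Python sketch (not GA code, just an illustration of the two match modes) showing why "that begin with" is the right choice. GA's predefined filters compare against the request URI, so the paths below stand in for the example URLs above.

paths = [
    "/dogs/",
    "/dogs/index.htm",
    "/dogs/breeds.htm",
    "/dogs/puppies/",
    "/dogs/puppies/index.htm",
    "/dogs/puppies/care/",
    "/dogs/puppies/care/feeding.htm",
]

for p in paths:
    equal_to = p.lower() == "/dogs/"              # "that are equal to"
    begins_with = p.lower().startswith("/dogs/")  # "that begin with"
    print(f"{p:35} equal_to={equal_to} begins_with={begins_with}")

# "that are equal to" keeps only /dogs/ itself;
# "that begin with" keeps the directory and every subdirectory under it.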

Related

I'm looking for a way to search for and delete files which end in a specific character

I have a large ebook library which I have somehow managed to duplicate. Each .pdf is stored within its own individual folder inside a directory.
The PDFs have been duplicated in their original locations, with identical file names except for a "2" at the end of each name.
Is there a way to automatically delete all files within a directory, and all subdirectories, whose file name ends with a "2"?
For example, in one directory there are two files:
"Tolle_2001_thepowerofnow.pdf"
"Tolle_2001_thepowerofnow2.pdf"
I would like to delete the second file, but there are thousands of these folders in my ebook directory.
Thanks for your help!
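
No answer was posted for this one, but a cautious way to do it would be a short script that deletes a "...2.pdf" only when the matching original exists next to it, with a dry run first. A sketch in Python (the library path is hypothetical; review the printed list before flipping DRY_RUN off):

from pathlib import Path

LIBRARY = Path("/path/to/ebook/library")  # hypothetical root; adjust to your library
DRY_RUN = True  # flip to False only after reviewing the printed list

for pdf in LIBRARY.rglob("*2.pdf"):
    # Treat the file as a duplicate only if the same name without the
    # trailing "2" exists alongside it, e.g. Tolle_2001_thepowerofnow2.pdf
    # next to Tolle_2001_thepowerofnow.pdf. This guards against deleting
    # files whose real name just happens to end in 2.
    original = pdf.with_name(pdf.stem[:-1] + pdf.suffix)
    if original.exists():
        print(f"duplicate: {pdf}")
        if not DRY_RUN:
            pdf.unlink()

Note that a file like volume2.pdf sitting next to an unrelated volume.pdf would still be flagged, which is why the dry run matters.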

Google Analytics - Find most popular site directories

I want to be able to track the most popular directories on my site. The site is split as follows:
www.sitename.com/directoryA
www.sitename.com/directoryB
www.sitename.com/directoryC
Each directory has thousands of pages recording views underneath it, e.g.:
www.sitename.com/directoryA/page1
www.sitename.com/directoryA/page2
www.sitename.com/directoryA/page3
What I would like to do is roll up the views from all the pages to create a table of my most popular directories (there are thousands of pages on the site).
How can I do this?
Create a new field in DataStudio with this regex:
REGEXP_EXTRACT(Page,'(/[^/]+)')
This way, you can use this new field to group your counters based on part of the URL.
If you need to analyze other levels, just adjust the regex, repeating /[^/]+:
1st level (/path1): REGEXP_EXTRACT(Page,'(/[^/]+)')
2nd level (/path1/path2): REGEXP_EXTRACT(Page,'(/[^/]+/[^/]+)')
3rd level (/path1/path2/path3): REGEXP_EXTRACT(Page,'(/[^/]+/[^/]+/[^/]+)')
...
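
If it helps to see what those patterns actually capture, here is a quick check in Python (Python's re is close enough to the RE2 engine Data Studio uses for patterns this simple); the sample path is hypothetical:

import re

page = "/directoryA/page1"

# The same patterns as the Data Studio fields above. re.match anchors at
# the start of the string, mirroring how REGEXP_EXTRACT returns the first match.
patterns = [r"(/[^/]+)", r"(/[^/]+/[^/]+)", r"(/[^/]+/[^/]+/[^/]+)"]
for level, pattern in enumerate(patterns, start=1):
    m = re.match(pattern, page)
    print(f"level {level}: {m.group(1) if m else None}")

# level 1: /directoryA
# level 2: /directoryA/page1
# level 3: None  (the sample path has only two segments)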

How to replace multiple URLs with one specific URL at once?

In the WordPress database we have these thumbs as URL links:
1st example:
http://thumb-v.xhcdn.com/t/170/1_6933170.jpg
2nd example:
https://thumb-v.xhcdn.com/t/325/200/3_8168325.jpg
Others are similar. Over 2,000 URLs like these exist in the database. However, the database also contains 15,000 URLs that are different from these, so I don't want to change all of them, just the ones with the structure above.
I'd like to change/replace these multiple urls with one specific url like this:
http://example.com/lockedthumb.jpg
How can I do this? With Excel? Or is there a useful WordPress plugin?
As you can see, multiple different URLs start with http://thumb-v.xhcdn.com/t/ (or the https version), but the rest of each URL has different numbered folders and .jpg file names.
You can try this in SQL:
UPDATE wp_posts SET post_content = REPLACE (post_content, 'http://thumb-v.xhcdn.com/t/170/1_6933170.jpg', 'http://example.com/lockedthumb.jpg');
You have to edit the table name and the field where the data you are searching is saved. Note that REPLACE only rewrites that one exact URL; a pattern-based approach for the whole family is sketched below.
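
To catch the whole thumb-v.xhcdn.com/t/... family in one pass, you need a pattern rather than an exact string. Here is a sketch of such a pattern in Python, useful if you are editing a database dump offline (back up first; on MySQL 8.0+ a similar pattern could be fed to its REGEXP_REPLACE function instead):

import re

# Matches both the http and https variants with any folder/file tail, e.g.
#   http://thumb-v.xhcdn.com/t/170/1_6933170.jpg
#   https://thumb-v.xhcdn.com/t/325/200/3_8168325.jpg
PATTERN = re.compile(r"https?://thumb-v\.xhcdn\.com/t/[\w/.-]+?\.jpg")
LOCKED = "http://example.com/lockedthumb.jpg"

sample = ('src="http://thumb-v.xhcdn.com/t/170/1_6933170.jpg" '
          'src="https://thumb-v.xhcdn.com/t/325/200/3_8168325.jpg"')
print(PATTERN.sub(LOCKED, sample))
# -> both thumbs replaced with http://example.com/lockedthumb.jpg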
Best regards

Using filters to track hits for multiple URLs

We have a large website that is split up into groups of organisations with a number of micro-sites. We would like to provide one organisation within a group with its own set of data, and I am having trouble getting the filtering to work.
I think my main problem is I have 2 include filters. According to the documentation:
"If you apply multiple Include Filters, the hit must match every applied Include Filter in order to save the hit."
Our website URLs go something like this: https://[host]/[group]/[site]/[params]. I would like to track the following, given that this client (id 9) is in group "foo":
https://mysite.com/foo/live/default.aspx?id=9
https://mysite.com/foo/live/?id=9
https://mysite.com/foo/reporting/9/*
so that any hits on those URLs would be captured for this particular client.
Our 2 current filters (type="Include") are as follows:
/foo/Reporting/9/
/foo/[^\?]*\?id=9
but these do not seem to track everything we think they should. Any help would be much appreciated.
By the time the first filter is done there is nothing left for the second filter to match: the first filter throws away everything that does not match (that is what Google means by "the hit must match every applied Include Filter").
I would suggest you first use an advanced filter to transform your URLs so they all follow the same pattern (i.e. grab the value from the query parameter and append it to the URL path), and then apply a single include filter. I'm pretty certain that would be easier than trying to include different URL structures (if you need help with the filters, holler away in the comments, but the example given in the advanced filters interface should give you a clue how this works).
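
To make that concrete, here is a sketch (plain Python rather than GA filter syntax, with the URL shapes taken from the question) of the rewrite the advanced filter would perform: pull the id out of the query string and append it to the path, so every shape collapses into one pattern a single include filter (e.g. ^/foo/(live|reporting)/9/) can match:

import re

def normalize(request_uri: str) -> str:
    """Rewrite /foo/live/default.aspx?id=9 and /foo/live/?id=9 to /foo/live/9/."""
    m = re.match(r"(/[^/]+/[^/]+/).*[?&]id=(\d+)", request_uri)
    if m:
        return f"{m.group(1)}{m.group(2)}/"
    return request_uri  # /foo/reporting/9/... already carries the id in the path

for uri in ["/foo/live/default.aspx?id=9",
            "/foo/live/?id=9",
            "/foo/reporting/9/somepage"]:
    print(normalize(uri))
# /foo/live/9/
# /foo/live/9/
# /foo/reporting/9/somepage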

Read a CSV file that has an indefinite number of columns each time and create a table based on the column names in the CSV file

I have a requirement to load a CSV file into the database using Oracle APEX or PL/SQL, but the problem is that the file will not always come with the same number of columns or the same column names.
I should create the table and upload the data dynamically, based on the file name and the data being uploaded.
For every file I need to create a new table dynamically and insert the data present in the CSV file.
For Example:
File1:
col1 col2 col3 col4 (NOTE: if I upload File 1, a table should be created dynamically based on the file name, with the same column names as the CSV headers and the data from the file.)
File 2:
col1 col2 col3 col4 col5
File 3:
col4 col2 col1 col3
Depending on the columns and the file name, I need to create a table for every file upload.
Can we load files like this or not?
If yes, please help me with this.
Regards,
Sachin.
((Where's the PL/SQL code in this solution!!??! Bear with me... the answer is buried in here somewhere... I introduce some considerations and assumptions you will need to think about before going into the task. In the end, you'll find that Oracle APEX actually has a built-in solution that satisfies exactly what you've specified... with some caveats.))
If you are working within the Oracle APEX platform, you will have some advantages. APEX version 4.2 and higher has a new page element called "Data Loading". The disadvantage, however, is that the definition of the upload target is fixed, not dynamic: you will need to know how your table is structured prior to loading the data.
One approach to overcome this is to build a generic, two-column table as your target, which will serve for all uploads. Column 1 will be your file name and column 2 will be a single CLOB, which will contain the entire data file's contents, including the header row. The "Data Loading" element will give the user the opportunity to verify and select this mapping convention in a couple of clicks.
At this point, it's mostly PL/SQL backend work doing the heavy lifting to parse and transform the uploaded data. As for the dynamic table creation, the Oracle package DBMS_SQL allows the execution of DDL commands, which could be the route to creating custom tables.
Alex Poole's comment is important as well: you will need to make some blanket assumption about the data types or have a provision to give more clues about what kind of data is contained. Assuming you can rely on a sample of existing data values is not good... what if all the values in your upload are null? I recommend perhaps a second column in the data input with a clue about the type of data for each column... just like the intended header names, maybe: AAAAA for a five-character column, # for a numeric, MM/DD/YYYY for a date with a specific mask.
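
To sketch the dynamic-DDL idea before moving to the easier route below: the snippet here is Python rather than PL/SQL, and it only generates the statement; in PL/SQL you would hand the resulting string to EXECUTE IMMEDIATE or DBMS_SQL.PARSE/EXECUTE. Every column is blindly typed VARCHAR2(4000), which is exactly the blanket assumption discussed above.

import csv
import re
from pathlib import Path

def clean(name: str) -> str:
    # Reduce a file or column name to a safe Oracle identifier.
    return re.sub(r"\W", "_", name).upper()

def ddl_from_csv(path: str) -> str:
    """Build a CREATE TABLE statement from a CSV file's name and header row."""
    with open(path, newline="") as f:
        header = next(csv.reader(f))
    table = clean(Path(path).stem)  # e.g. File1.csv -> FILE1
    cols = ", ".join(f"{clean(col)} VARCHAR2(4000)" for col in header)
    return f"CREATE TABLE {table} ({cols})"

# ddl_from_csv("File1.csv") might return:
#   CREATE TABLE FILE1 (COL1 VARCHAR2(4000), COL2 VARCHAR2(4000), ...)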
The easier route:
You will need to allow your end-user access to a developer-role account on a workspace of your APEX server. It is not as scary as you think. With careful instruction and some simple precautions, I have been able to make this work with even the most non-technical of users. The reason for this is that there is a more powerful upload tool found under the following menu item:
SQL Workshop --> Utilities --> Data Workshop
There is a choice under "Data Load" --> "Spreadsheet Data"
The data load tool will automatically do the following:
Accept a CSV formatted file through a browse function on your client machine
Upload the file and parse the first record for the column layout (names)
Allow the user to create a new table from the uploaded file, or to map to an existing one.
For new tables, each column's data type can be declared, along with a specific numeric/date mask if additional conversion of the uploaded data is necessary.
Delimiter type, optional enclosures (like double quotes), decimal conventions and currency types can also be declared prior to parsing the uploaded file.
Once the user has identified all these mappings and settings, the table is created with the uploaded data. Any errors in record upload are reported immediately afterwards with detailed feedback on the failed records.
A security consideration to note:
You probably do not want to give end users access to your APEX server's backend... but you CAN create a new workspace just for your end users, with a new database schema for receiving their uploads, maybe with some careful resource controls. Developer is the minimum role needed, but even if the end users see the other stuff, they won't have access to anything important from an isolated workspace.
I implemented the isolated-workspace approach on a 4.0/4.1 release of the APEX platform a few years back, and it worked nicely. Our end user had control over the staging and quality checking of her data inputs (Excel spreadsheet/CSV exports collected from a combination of sources). I suppose it may have been even better to cut her out of the picture entirely and focus on automating the export-review-upload process between our database and her other sources. In this case, though, the volume of data involved was not great (hundreds to thousands of records) and manual review and editing of the exported data before pushing it into the database was very important... so the human element still mattered here; it is something you'll want to think about now.
