I'm using the Search API to search a Bing Custom Search instance. I defined one URL (also known as a "slice" in the documentation) for my main site, plus additional URLs that are subparts of it, boosted at various levels.
In my results page, I'd like to add filter functionality with filter criteria that match site slices.
Is it a legitimate approach to reference both a parent domain and its subsites in the same search instance?
Is there a way to break up search results by slice with the API? That is, for each result, to know which slice it originates from, and ideally to have aggregated numbers (e.g. result count) per slice?
Can I query on a specific slice or do I have to create a dedicated search instance with only that slice in it?
All subsites related to a domain can be in the same instance. The pinned feature can be used to make a specific URL on the site appear as the result for a specific query. Please refer to the link below:
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-custom-search/tutorials/custom-search-web-page
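On the per-slice question: the API itself doesn't break results out by slice, but you can approximate querying a single slice by adding a `site:` operator to the query string. Below is a minimal sketch of building the request, assuming the v7 endpoint and `customconfig` parameter from the public docs; the config ID and subscription key are placeholders.

```python
import urllib.parse

# Endpoint and parameter names follow the Bing Custom Search v7 docs;
# the config ID below is a placeholder.
ENDPOINT = "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search"

def build_search_url(query, config_id, site=None):
    """Build a request URL; optionally restrict to one slice via site:."""
    if site:
        query = f"{query} site:{site}"
    params = urllib.parse.urlencode({"q": query, "customconfig": config_id})
    return f"{ENDPOINT}?{params}"

url = build_search_url("widgets", "1234567890", site="example.com/docs")
# Send it with the header {"Ocp-Apim-Subscription-Key": "<your key>"}
# using any HTTP client.
```

To get aggregated numbers per slice you would run one such query per slice and compare the estimated-match counts in each response, rather than creating a dedicated instance per slice.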
So, I'm working on trying to improve a Mailchimp RSS campaign that was created by one of my coworkers.
The email that gets sent out is a list of posts from different categories in our website.
So to do this the RSS campaign is made up of different FEEDBLOCKS – one FEEDBLOCK for each kind of category on the website. An example of one of the FEEDBLOCKs looks like this (which is pretty standard and basic, I guess):
*|FEEDBLOCK:http://website.com/specific-category/feed|*
*|FEEDITEMS:[$count=2]|*
*|FEEDITEM:TITLE|*
*|END:FEEDITEMS|*
*|END:FEEDBLOCK|*
What I want to fix is for each FEEDBLOCK to show only new posts from the past 7 days (the Mailchimp campaign goes out once a week). At the moment we do this manually, by changing the number in the *|FEEDITEMS:[$count=2]|* field: each week we count the number of new posts on the website and enter that count so the correct number of new posts is displayed in the email.
I'm pretty new to RSS feeds and Mailchimp, but from the basic coding I know, it seems there should be a way to do this automatically rather than manually changing the count number for every FEEDBLOCK before we send the email to our subscribers.
Can any of you give me advice on how to change the code we're using so the count number updates automatically?
Thanks in advance!
I'm not entirely sure this is how FEEDBLOCK works, but with the standard RSS merge tags available in Mailchimp, the schedule of the email determines how many items are pulled in. For example, I have an RSS feed scheduled for once a week; with no item-count parameter in the code, Mailchimp pulls in only the articles posted to my feed since the last time the campaign sent, i.e. in any given week that can be anywhere from 3 to 10.
I am trying to do the same thing. FEEDBLOCKS and FEEDITEMS do not act like RSSFEED and RSSITEMS. If you use FEEDBLOCKS the RSS dates entered in the "RSS Feed" step are not used!
This is what I received from MailChimp: "Any FEEDBLOCK tag in a sense will be its own feed so any dates set for the RSS campaign are ignored. FEEDBLOCK's are usually used for non-rss campaigns to display something from another feed. You can only control how many post are shown when using the FEEDBLOCK tag."
Unfortunately I, like you, have only the manual solution! I am not a coder, but I think the RSS feed itself has to be changed: build an RSS feed that limits entries to a certain date range, i.e. the previous 7 days, ending with the previous full day.
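As a sketch of that server-side fix: assuming the feed is standard RSS 2.0 with `pubDate` elements, the filtering is a few lines of stdlib Python (this is not Mailchimp code; it would run wherever the feed is generated).

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def keep_recent_items(rss_xml, days=7, now=None):
    """Return the RSS 2.0 feed with only the items whose pubDate
    falls within the last `days` days, dropping everything older."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    root = ET.fromstring(rss_xml)
    channel = root.find("channel")
    for item in list(channel.findall("item")):
        pub = item.findtext("pubDate")
        if pub is None or parsedate_to_datetime(pub) < cutoff:
            channel.remove(item)
    return ET.tostring(root, encoding="unicode")
```

Serve the filtered result at its own URL and point the FEEDBLOCK at that URL instead; *|FEEDITEMS:[$count=...]|* then only needs to be set to a safe upper bound rather than the exact weekly count.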
I want to create a Google Analytics segment for users who view at least a certain number of pages on our site. From what I can tell (please correct me if I'm wrong), this is easy if you don't care what kind of page they view: you create a filter for the segment that checks whether Unique Pageviews is greater than some value, such as 4. However, our site has a whole bunch of pages that I don't really care whether anyone reads (our "about" page, for example). So what I'm trying to do is create a segment of how many people view at least X pages of what we call "Learning Content" (basically two specific page types on our site). How can I segment the users who read a certain amount of learning content?
Two types of pages fit our definition of learning content. The first has a URL matching a regex that looks something like /learning_content_1/.* and the second matches /learning_content_2/.*. I've already created a content grouping for learning content that correctly identifies these two page types. However, I wasn't able to find any way to filter a segment based on how many unique pageviews (or even just pageviews) come from a specific content grouping. Is this even possible? If not, how might I work around it?
The research I've done so far: "Google Analytics: How to segment by many groups of pages" was somewhat helpful but didn't address the question of how to create an actual GA segment based on pageview information for a content grouping or content group.
The only way I can think of to handle this is by firing a specific custom event on those pages. Then you can create a segment that matches users who have that event category, with total events greater than 4.
It's a workaround, and it doesn't work cleanly if you are tracking other events (the total-events condition counts all of them), but maybe it works for you?
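If the pages are tracked with Universal Analytics, the same event can also be sent over HTTP via the Measurement Protocol rather than from page JavaScript. A sketch of building the hit payload, using the v1 parameter names (v, tid, cid, t, ec, ea) from the protocol reference; the tracking and client IDs below are placeholders:

```python
import urllib.parse

def build_event_hit(tracking_id, client_id, category, action):
    """Build a Measurement Protocol v1 event payload;
    POST it to https://www.google-analytics.com/collect."""
    return urllib.parse.urlencode({
        "v": "1",            # protocol version
        "tid": tracking_id,  # property ID, e.g. UA-XXXXX-Y
        "cid": client_id,    # anonymous client ID
        "t": "event",        # hit type
        "ec": category,      # event category (what the segment matches on)
        "ea": action,        # event action
    })

payload = build_event_hit("UA-12345-1", "555", "learning_content", "view")
```

The event category here is the value you would match in the segment's condition.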
I have a lot of product pages like this:
www.example.com/catalog001/item123
www.example.com/catalog002/item321
www.example.com/catalog002/item567
Every catalog and product (item) has its own numeric ID.
Product pages are similar; only the product image, price, and title differ.
I tried to use a regular expression to set up the original URL pattern in Analytics Experiments:
www.example.com/catalog(\d+)?/item(\d+)?
Is there any way to set up such an original URL pattern?
I'm not quite sure what you're asking. It sounds like you want to test many different product pages without setting up many different experiments, presumably to test two different product page layouts.
If so, you can use relative URLs in the experiments interface; there is no need for regular expressions. Create an experiment for one product page, select relative URLs for the variations, enter a query string (?foo=bar) or fragment identifier (#foo=bar) that triggers the variation page, and add the experiment code to all the originals. The test will then be enabled for all your product pages, not just the one URL you entered in the interface.
If you were after something else, I suggest you re-word the question to explain the actual problem rather than your attempted solution.
Have you used pipes.yahoo.com to quickly and easily do... anything? I've recently created a quick mashup of StackOverflow tags (via rss) so that I can browse through new questions in fields I like to follow.
This has been around for some time, but I've just recently revisited it and I'm completely impressed with its ease of use. It's almost to the point where I could set up a pipe and then give a client privileges to go in and edit feed sources... and I didn't have to write more than a few lines of code.
So, what other practical uses can you think of for pipes?
It's nice for aggregating feeds, yes, but the other handy thing to do is filtering them. A while back, I created a feed for Digg (before Digg fell into the Fark pit of despair). I didn't care about the overwhelming Apple and Ubuntu news, so I filtered those keywords out of Technology, which I then combined with the Science and World & Business feeds.
Anyway, you can do a lot more than just combine things. If you wanted to be smart about it, you could set up per-subfeed and whole-feed filters to give granular or over-arching filtering abilities as the news changes and you get bored with one topic or another.
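Outside Pipes, that filter-then-combine step is only a few lines. A sketch in Python, with made-up item dicts standing in for parsed feed entries:

```python
def drop_keywords(items, blocked):
    """Keep only items whose titles contain none of the blocked
    keywords (case-insensitive), like a Pipes Filter module set
    to block items that match."""
    blocked = [k.lower() for k in blocked]
    return [it for it in items
            if not any(k in it["title"].lower() for k in blocked)]

# Hypothetical entries from two feeds:
technology = [{"title": "New Apple keyboard"},
              {"title": "Kernel patch released"}]
science = [{"title": "Mars rover update"}]

# Filter the noisy feed, then combine it with the others.
combined = drop_keywords(technology, ["apple", "ubuntu"]) + science
```

Per-subfeed filtering is just a matter of calling the filter with a different keyword list for each source before concatenating.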
The one thing I have really used Y! Pipes for (rather than just playing around with it) is to clean up item titles, merge, and finally de-dupe the feeds I got from querying multiple blog search engines with the same search term. This is something I've done in several very different contexts, e.g. for my own ego surfing, and in another case for the planet site set up by some conference's organisers to keep an eye on their conference's buzz. Highly recommended.
You can do tons of things with pipes. For example for sites like digg or reddit, you can make one to bypass the site and go directly to the linked article (rewriting the RSS).
I also like to filter webcomics' feeds to keep just the comics, and then mix them all into one feed.
I've taken the liberty of copying your pipe and rearranging it a bit so that it's easier to add and remove tags:
Yahoo Pipe: StackOverflow Merge Tags
Tags are now listed in a string builder, so to add a tag you just have to hit the + button on the string builder and type in the tag preceded by a slash.
Well, Pipes is really fast and useful.
Other effective uses might be:
1) combine many feeds into one, then sort, filter and translate it.
2) geocode your favorite feeds and browse the items on an interactive map.
3) power widgets/badges on your web site.
4) grab the output of any pipe as RSS, JSON, KML, or other formats.
This is by no means a comprehensive list.
One of my favorite things to do with Yahoo! Pipes is to aggregate multiple craigslist feeds into a single feed. You can make a feed out of any category or search criteria on craigslist. I live in a university town and am always on the lookout for tickets to sporting events, for example. I have a half-dozen craigslist searches all being combined into a single feed via Yahoo! Pipes. This works a lot better for me than simply monitoring the entire "Tickets" category, since it filters out most of the tickets I am not interested in. Yes, this is another feed-aggregation example, but the craigslist usage is quite valuable because the aggregated feeds are themselves based on searches.
I've used Pipes to translate blogs into English. I would have liked to use it to fetch the full text for blogs which only provide a summary of the content in the feed, but unfortunately they don't provide any input which fetches the content from a parameterizable source :-(.
Just stumbled on this while looking for ways to connect Excel to Pipes. A bit necromancer-ish, but here goes.
One thing I've done, is take an HTML page (science data) which has links to tons of CSV files for a bunch of Army Corps measurement stations. Each station has a big table of datafiles, all organized individually by month and year. I use YQL to parse out and organize the links to the individual CSV files in a way that Pipes can read them. Then, I use that as input into a Pipe, which has a user input for "Station" and "Date."
Using this, I can go to the Pipes page, type in those values and get the values only for a specific station and date, rather than have to find the station on a website, find the year and month in a big table, click the link, open the CSV file, and find the values for a day within that month's worth of data. I can even change the pipe to specify the hour, and the parameter, and then get a single value returned.
Now, I wish I could figure out how to program Excel so that I can use "=yahoo_function(station, datetime)" to place that value automatically into a cell, given the values of other columns!
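One way to get there is to do the lookup in Python and expose it to Excel through an add-in such as xlwings; the lookup itself is just a filter over the downloaded CSV. A sketch, with hypothetical column names (the real Army Corps files will differ):

```python
import csv
import io

def station_value(csv_text, station, timestamp):
    """Return the measurement for one station at one timestamp,
    or None if no row matches. Column names are hypothetical."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["station"] == station and row["datetime"] == timestamp:
            return float(row["value"])
    return None

# Stand-in for a downloaded station datafile:
data = ("station,datetime,value\n"
        "STA01,2013-05-01 06:00,12.7\n"
        "STA01,2013-05-01 07:00,13.1\n")
reading = station_value(data, "STA01", "2013-05-01 07:00")
```

A worksheet function wrapping `station_value` would then behave like the wished-for "=yahoo_function(station, datetime)".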