Our business uses Adobe Scene7. One of the things we need to be able to do is share image URLs with a vendor for all of the products that have an image.
We have worked out the structure of the URL so we can predict the link, and then we ping the image URL to ensure it is valid and available for viewing.
As of late, we've run into a problem where many of the images are not rendering...
Most images:
http://s7d5.scene7.com/is/image/LuckyBrandJeans/7W64372_960_1
Some images:
https://s7d9.scene7.com/is/image/LuckyBrandJeans/7Q64372_960_1
The only difference appears to be that s7d5 becomes s7d9 on some images. What drives that?
How do we get a list of all of those URLs if we can't predict d9 vs. d5?
I'm not sure it matters. I think all you need is the filename. It looks like if you take the filename "7W64372_960_1" it works on both s7d5 and s7d9:
http://s7d5.scene7.com/is/image/LuckyBrandJeans/7W64372_960_1
http://s7d9.scene7.com/is/image/LuckyBrandJeans/7W64372_960_1
In fact, you can change it to s7d1, s7d2, s7d3, etc. and it still works.
So, I think if you were to build some sort of template, you could just pick whichever URL you wanted and append the filename on the end, like:
http://s7d5.scene7.com/is/image/LuckyBrandJeans/{{imageFileName}}
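
A minimal sketch of that idea (the base URL choice and the HEAD check are assumptions about your process, not anything Scene7-specific):

// Sketch: build the image URL from a fixed template and check that it actually resolves.
// The base URL choice and the HEAD request are assumptions about your process.
const BASE_URL = "http://s7d5.scene7.com/is/image/LuckyBrandJeans/";

function buildImageUrl(imageFileName: string): string {
  return BASE_URL + encodeURIComponent(imageFileName);
}

async function isImageAvailable(url: string): Promise<boolean> {
  const response = await fetch(url, { method: "HEAD" });
  return response.ok; // true when the image is valid and available for viewing
}

// Usage:
// const url = buildImageUrl("7W64372_960_1");
// if (await isImageAvailable(url)) { /* share the URL with the vendor */ }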
We have the same thing with our company.
One domain serves the images for the lower "sandbox" environment (d5) and the other serves the images to your live environment (d9).
I am looking to create a website that generates content depending on your city location. The best example I found was Craigslist. They generate a web domain like https://yourcity.craigslist.org/ when you either click on a city or it detects where you are. I was just wondering if I could get some help on how to build something like that.
The web pages are created using a template that doesn't change, populated with data that is selected from a database server, using your location to look up appropriate items.
The subdomain (your city) is usually defined in the DNS record, just like www. There would be an entry for chicago.craigslist.org, for example.
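
As a rough illustration (bare Node here, and findListingsByCity is a hypothetical placeholder for your own database lookup), the server-side part might look something like:

// Sketch: read the city from the subdomain and use it to look up content.
// findListingsByCity is a hypothetical placeholder, not a real library call.
import * as http from "http";

function cityFromHost(host: string): string | null {
  // "chicago.craigslist.org" -> "chicago"; ignore the bare domain and "www".
  const parts = host.split(":")[0].split(".");
  if (parts.length < 3 || parts[0] === "www") return null;
  return parts[0];
}

http.createServer((req, res) => {
  const city = cityFromHost(req.headers.host ?? "");
  // const listings = await findListingsByCity(city); // query your database here
  res.end(`Listings for: ${city ?? "nearest city"}`);
}).listen(3000);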
edit
If you're asking how they know where you are, they can take a guess based on your IP address; however, this isn't very reliable. Google does this as well when giving you search results that could be localized.
So yeah, it is expected that you type some stuff into Google to (try to) find your answer (a search like "detect city from javascript" will bring up a lot of results for your problem).
But yeah, you would use a service like https://ipstack.com/ to detect where the visitor lives; the accuracy varies depending on where they live. (The EU has some rules and regulations that make it a lot less accurate than if they were living in the US.)
Once you have a database with content - for example, Craigslist has a database of second-hand items sold by people from all over - then when someone connects to the site, you ask a service where the request came from and use some filter function based on that location to match the results.
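
A rough sketch of that flow, assuming ipstack's JSON lookup endpoint (the access key and the filter function are placeholders):

// Sketch: look up the visitor's city from their IP, then filter content by it.
// Assumes ipstack's JSON API; ACCESS_KEY and filterListingsByCity are placeholders.
const ACCESS_KEY = "YOUR_IPSTACK_KEY";

async function lookupCity(ip: string): Promise<string | null> {
  const res = await fetch(`http://api.ipstack.com/${ip}?access_key=${ACCESS_KEY}`);
  if (!res.ok) return null;
  const geo = (await res.json()) as { city?: string | null };
  return geo.city ?? null; // may be null when the IP can't be resolved precisely
}

// Usage:
// const city = await lookupCity(visitorIp);
// const results = filterListingsByCity(allListings, city); // your own filter function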
Good luck
Your IP address can be used to make an educated guess as to where you are, but it's not very accurate. Google does this as well when providing you with search results that might be localised. To know more about creating a website like Craigslist, follow the link here:
https://www.yarddiant.com/blog/classifieds/how-to-build-a-website-like-craigslist.html
I am using the imgix.com CDN for a test project and for some reason it keeps downloading the images instead of displaying them in the browser and applying the rules to them.
So if I type in myprefix.imgix.net/myimage.png it simply downloads it and if I type https://myprefix.imgix.net/myimage.png~text?txtsize=44&txt=470%C3%97480&w=450&h=480 nothing happens.
Has anyone come across this problem?
Thanks
These are two separate issues:
1) If you request an imgix URL without adding any query parameters, imgix will just act as a passthrough to your source. If your images are being treated as a download by the browser rather than as images to display, there must be something mis-configured at the source level. Not knowing anything about your source, I really can't offer any better advice here.
2) The myimage.png~text URL isn't working because you shouldn't be using ~text at all here. Take those five characters out of your URL and it should work as you expect.
Imgix's ~text endpoint is a way to request an image where the "base image" is text rather than a real image. In trying to combine a real base image (myimage.png, in your URL above) with this text-only endpoint (~text), you're making a request that imgix doesn't know how to handle.
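
For example, keeping the parameters from the URL above, the corrected request would look like this (same query parameters, just without the ~text):

https://myprefix.imgix.net/myimage.png?txtsize=44&txt=470%C3%97480&w=450&h=480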
If you've got further questions about your imgix integration, especially configuration questions that involve your specific account and settings, I'd encourage you to send them to support@imgix.com instead of Stack Overflow. While SO is a great place for one-off questions, writing into our support-ticket system will let us answer account-specific questions much more easily.
Once your Source has been configured and deployed, you can begin making image requests to imgix. These requests differ slightly for each imgix Source type, but they all have the same basic structure:
https://example.imgix.net/products/desk.jpg?w=600&exp=1
(imgix domain: example.imgix.net - path: /products/desk.jpg - query string: w=600&exp=1)
The hostname, or domain, of the imgix URL will have the form YOUR_SOURCE_NAME.imgix.net. In the above URL, the name of the Source is example, so the hostname takes the form of example.imgix.net. Different hostnames can be set in your Source by clicking Manage under the Domains header.
The path consists of any additional directory information required to locate your image within your image storage (e.g. if you have different subfolders for your images). In this example, /products/desk.jpg completes the full path to the image.
imgix’s parameters are added to the query string of the URL. In the above example, the query string begins with ?w=600 and the additional parameters are linked with ampersands. These parameters dictate how images are processed. In the above URL, w=600 specifies the width of the image and exp=1 adjusts the exposure setting.
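
As a rough sketch (using the example values above; nothing here is specific to any particular account), assembling such a URL programmatically could look like:

// Sketch: assemble an imgix render URL from a Source name, a path, and parameters.
// "example", "/products/desk.jpg", w=600 and exp=1 are the example values from above.
function buildImgixUrl(
  sourceName: string,
  path: string,
  params: Record<string, string | number>
): string {
  const query = Object.entries(params)
    .map(([key, value]) => `${key}=${encodeURIComponent(String(value))}`)
    .join("&");
  return `https://${sourceName}.imgix.net${path}?${query}`;
}

// buildImgixUrl("example", "/products/desk.jpg", { w: 600, exp: 1 })
// -> "https://example.imgix.net/products/desk.jpg?w=600&exp=1"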
I'm getting some very strange behavior in DTM. When our page loads (from a local instance of the website) we get the expected call going out with the proper dev report suite. When a custom link call is made from that page, for some reason DTM sends it with a production report suite. If I look in Adobe Analytics for the custom link name reported under the prod rsid, it does not show up there.
Any ideas on what is going on and how I can fix this issue?
This is my shot in the dark based on what you have said, and it is based on the assumption that your statements are accurate (e.g. you aren't seeing pink elephants, the request was indeed showing your prod rsid in the proper portion of the request URL, you did in fact check your prod rsid after an acceptable amount of time had passed, there are no segment or other filter shenanigans, etc.; in short, that you do know how to accurately perform the basic QA song and dance).
Under that assumption, the below is a scenario that can plausibly reproduce what you are describing. I could be partially right or totally off for your specific situation, but there's really no way for me to know for sure without having access to your DTM instance.
The Scenario
Long story short, it sounds like you have a blend of custom code and DTM automatic settings enabled, and DTM is overriding and/or ignoring your custom code for link tracking.
More specifically, it sounds to me like you have AA implemented as a tool in DTM, and in the config settings, you have your production and staging rsids specified in the text fields.
Then in the General section, you either do NOT have values specified for Tracking Server and Tracking Server Secure, or else they are set to the wrong values.
Then, in the Library Management section, you have either selected "Managed by Adobe" in which case DTM takes care of the library, or else you have selected "Custom" and you are adding the library yourself AND you have NOT checked "Set report suites using custom code below".
Then, somewhere in DTM (e.g. the Library Management > Custom code box, or a Customize Page Code code box) you have code that pops rsid stuff (e.g. s.account, s_account, dynamicAccountList stuff), and possibly also trackingServer and trackingServerSecure (see the sketch just after this list for the kind of code I mean).
Finally, you (like most other people, because DTM's double script include for staging vs. prod is.. dumb) just use the prod script include on your page, and either use the debug/staging mode or rely on whatever rsid routing logic you've set up to route to dev.
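
To be clear about what I mean by rsid-routing custom code, here is a generic sketch (the rsids, tracking servers, and hostname check are placeholders, not your actual setup):

// Generic sketch of the kind of rsid-routing custom code that often lives in a DTM code box.
// The rsids, tracking servers, and hostname check are placeholders.
declare const s: any; // in DTM the AppMeasurement object already exists; this just keeps the sketch self-contained

if (/^(dev|stage|localhost)/.test(window.location.hostname)) {
  s.account = "mycompanydevrsid";
} else {
  s.account = "mycompanyprodrsid";
}
s.trackingServer = "metrics.mycompany.com";
s.trackingServerSecure = "smetrics.mycompany.com";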
So.. when the page is first loaded, DTM loads the AA library and it sets variables and stuff based on what you specified in the tool config. During this time, it is also popping any custom code blocks you have in the tool config, which may or may not override what you have specified in the tool config fields, depending on what you enabled. Then after that, it pops stuff you have in page load rules (if any), etc..
But then comes the link click.. As I have mentioned in other posts on SO, DTM has this caveat (IMO bug) about how it references the AA object after the initial page load/AA request: basically, it doesn't. Instead, it makes use of internal methods (the main one being a .getS() method) to create a new instance of the AA object, based on whatever things you have configured in the tool config section. Well here's the rub.. it does NOT account for or execute any custom coding you have done in code boxes in the tool config section.
So that basically happens whenever an event based or direct call rule is triggered, and it effectively screws you. Why does DTM do this? I do not know. IMO Adobe needs to change this feature caveat bug. Either they should refactor DTM to execute the code boxes, OR they could, you know.. just reference the original AA object created, like any normal script would do..
But in any case..
So for example, my theory here is that page loads fine, points to dev rsid based on your setup. But then you click a link and an event triggers, and DTM makes a new AA object not caring about your custom code, so all it has to go on is what you have in the tool's config fields.
Since DTM doesn't actually have any rules around the prod vs. dev rsids you specify in those fields (you have to write custom code in the custom code boxes - that DTM ignores!), it just pops the prod rsid, because that's the script include you have on your page.
Then as far as not seeing the data actually show up in your prod rsid: again, since DTM ignores what you set in your custom code boxes, it's defaulting to what is specified in the trackingServer fields in the tool config, and my assumption here is they are either blank or wrong (you should be able to look at the request url to adobe to verify this). This theory is because you said the prod rsid is right, and you see a request being made. So the next culprit would be wrong tracking server specified.
So, that is my theory of what's going on. Maybe it's all right, maybe it's some right, hopefully it may point you in the right direction at least.
Edit:
If you can confirm that this is indeed how you have things set up, then you will naturally ask, "Okay, well what do I do about that?" As I have said in a lot of my other SO answers: basically, your only option is to uncheck all the settings that make DTM automate AA, keep the AA section disabled in all your rules, set whatever AA vars you want to set yourself, and make the s.t() or s.tl() call yourself in a 3rd party script code box, so that it continues to reference and pop based off the originally instantiated AA object.
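
For example, in an event based rule's 3rd party script box, that would look something like this (the variables, event, and link name are placeholders):

// Sketch: set AA variables and fire the link call yourself in a 3rd party script box,
// so it pops off the originally instantiated AA object. All values are placeholders.
declare const s: any; // the existing AppMeasurement object; declared only to keep the sketch self-contained

s.linkTrackVars = "prop1,eVar1,events";
s.linkTrackEvents = "event1";
s.prop1 = "some link id";
s.eVar1 = "some link id";
s.events = "event1";
s.tl(true, "o", "my custom link name"); // "o" = custom link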
Update
Based on your comments below, okay so yeah.. that sounds like what I described, and accounts for the prod rsid popping. As for the data not showing up in the report: if you are certain the tracking server is set correctly (the request URL looks good), then this isn't a DTM issue. Here are some other explanations for why the data wouldn't show up:
Are you sure the request is being sent to your prod rsid? I don't know what you are looking at to verify this, but this is where you should be looking, in the request URL to AA: "http://[trackingServer value]/b/ss/[s.account value]/1..."
The click request isn't making it to Omniture. Verify in a packet sniffer that the request is actually made and that you are getting a 200 OK or NS_BINDING_ABORTED response.
You aren't waiting long enough to check for the data. Even basic hit data and looking at "real time" reports takes a little bit of time to show up.
You have a segment/filter active that's not jiving with the data you are trying to look at. Make sure that you don't have anything applied. Or, if you are using those things to find your data (and aren't seeing it), ensure that you are correctly applying it.
You recently created the rsid and the "go live" date hasn't passed yet. Data will not show up in the report suite until up to 24 hours after the specified "go live" date.
You have a VISTA rule in place that's affecting data showing up. Some companies have a VISTA rule in place for a number of reasons and there are a million ways it could affect the data (e.g. routing to a different report suite). For shits and grins, check your dev (or other) rsids to see if your data showed up there. Even if that doesn't make sense, at least it's a step forward.
You have a bots / ip exclusion rule in place that's catching data from your location.
The data sent in from the link click isn't relevant to the report. For example, maybe you are looking at e.g. prop10 report and prop10 isn't actually sent in the click request.
I know a lot of these are basic things to check, and no doubt you've checked, but check again. Have someone else check for you to be sure. I'm not questioning your abilities here, but even the best of coders forget to cross their t's and dot their i's sometimes, and manage to miss obvious things. If you are sure about all of these, then contact Adobe ClientCare, because I really can't think of anything else that wouldn't involve an issue with Adobe's backend.
I ran into a similar problem with my implementation. Essentially what I did was set the s.account variable directly inside doPlugins, so it would be set on all tracking calls. I wrote up the specifics here as well: DTM Tracking Account
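
A stripped-down sketch of that approach (the rsids and the hostname check are placeholders for whatever routing logic you use):

// Sketch: force s.account inside doPlugins so it is applied to every s.t()/s.tl() call.
// The rsids and hostname check are placeholders.
declare const s: any; // the existing AppMeasurement object; declared only to keep the sketch self-contained

s.usePlugins = true;
s.doPlugins = function (s: any) {
  s.account = /^(dev|stage|localhost)/.test(window.location.hostname)
    ? "mycompanydevrsid"
    : "mycompanyprodrsid";
};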
We have uploaded tons of files via FTP to a Plone intranet we're deploying. This step does not set the titles of the files, so a file called "invoice_policy.odt" won't show up in a search for "invoice policy" (two words), because the index for ids is a field index.
Moreover, the default Plone lexicon does not split words on underscores, so setting the title to be just the id won't help either.
So, in order to improve our search recall, we have scripted (taken from several sources including some answers in StackOverflow) a quite simple normalization script: https://gist.github.com/3701401
However, after applying it to nearly 8000 files, I see that the titles have changed, but the files still appear in the navigation with the id "invoice_policy.odt"; I have to edit a file and then save it in order for it to appear with its title in the navigation.
I have uploaded three images to flickr to show the process:
Image 1. The (last) file in its folder.
Image 2. When I click the file you may see it has a title (normalized with our script)
Image 3. I clicked the title, then clicked Save and went back to its containing folder. Now it shows up properly.
Do I need to do (or undo) something in my script for it to work properly? Furthermore, although I (think I) enclosed each rename in its own transaction, I don't see any transactions in the Undo tab of the ZMI. I guess that's because it's not associated with a real request, is that so? Can I fix it?
Best regards,
Manuel.
You need to reindex the items, either one by one in your script, or in batch at the end. http://collective-docs.readthedocs.org/en/latest/searching_and_indexing/indexing.html will probably help.
I'm trying to figure out if I can get browsers to cache images with signed urls.
What I want is to generate a new signed url for every request (same image, but with an updated signature), but have the browser not re-download it every time.
So, assuming the cache-related headers are set correctly, and all of the URL is the same except for the query string, is there any way to make the browser cache it?
The urls would look something like:
http://example.s3.amazonaws.com/magic.jpg?AWSAccessKeyId=stuff&Signature=stuff&Expires=1276297463
http://example.s3.amazonaws.com/magic.jpg?AWSAccessKeyId=stuff&Signature=stuff&Expires=1276297500
We plan to set the ETags to be an md5sum, so will it at least figure out it's the same image at that point?
My other option is to keep track of when we last gave out a URL, then start giving out new ones slightly before the old ones expire, but I'd prefer not to deal with session info.
The browser will use the entire URL for caching purposes, including request parameters. So if you change a request parameter it will effectively be a new "key" in the cache and will always download a new copy of that image. This is a popular technique in the ad-serving world - you add a random number (or the current timestamp) to the end of the URL as a parameter to ensure the browser always goes back to the server to make a new request.
The only way you might get this to work is if you can make the URL static - i.e. by using Apache rewrite rules or a proxy of some sort.
I've been having exactly the same issue with S3 signed URLs. The only solution I came up with is to have the URLs expire on the same day. This is not ideal but at least it will provide caching for some time.
For example, all URLs signed during April I set to expire on May 10th; all URLs signed in June I set to expire on July 10th. This means the signed URLs will be identical for the whole month.
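
A rough sketch of that idea using the classic S3 query-string authentication scheme that produces URLs like the ones above (bucket, key, and credentials are placeholders; treat the signing details as an assumption and check them against the S3 docs for your setup):

// Sketch: sign S3 GET URLs with a fixed Expires timestamp so every URL generated
// during the period is byte-for-byte identical (and therefore cacheable).
// Bucket, key, and credentials are placeholders; the string-to-sign follows the old
// query-string authentication scheme and should be verified against the S3 docs.
import { createHmac } from "crypto";

function signedUrlWithFixedExpiry(
  bucket: string,
  key: string,
  accessKeyId: string,
  secretKey: string,
  expires: number // fixed epoch seconds, e.g. the 10th of next month
): string {
  const stringToSign = `GET\n\n\n${expires}\n/${bucket}/${key}`;
  const signature = createHmac("sha1", secretKey).update(stringToSign).digest("base64");
  return (
    `https://${bucket}.s3.amazonaws.com/${key}` +
    `?AWSAccessKeyId=${accessKeyId}` +
    `&Expires=${expires}` +
    `&Signature=${encodeURIComponent(signature)}`
  );
}

// Pin everything signed this month to expire on the 10th of next month:
const now = new Date();
const fixedExpiry = Math.floor(Date.UTC(now.getUTCFullYear(), now.getUTCMonth() + 1, 10) / 1000);
// signedUrlWithFixedExpiry("example", "magic.jpg", "AKIA...", "SECRET...", fixedExpiry);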
Just stumbled on this problem and found a way to solve it. Here's what you need to do:
Store the first URL string (in localStorage, for example);
When you receive the image URL the next time, just check whether their base URLs match (str1.split('?')[0] === str2.split('?')[0]);
If they do, use the first one as the img src attribute.
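
A minimal sketch of those steps (the storage key naming is arbitrary):

// Sketch: reuse a previously stored signed URL for the same object so the browser's
// cache key (the full URL) never changes. The storage key naming is arbitrary.
function cachedSignedUrl(freshUrl: string): string {
  const baseUrl = freshUrl.split("?")[0];
  const storageKey = "signed-url:" + baseUrl;
  const stored = localStorage.getItem(storageKey);
  // If we already have a signed URL for this object, keep using it as the img src.
  if (stored && stored.split("?")[0] === baseUrl) {
    return stored;
  }
  localStorage.setItem(storageKey, freshUrl);
  return freshUrl;
}

// Usage: imgElement.src = cachedSignedUrl(urlFromServer);
// (You may also want to check the Expires query parameter and store the fresh URL
// once the old one has expired.)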
Hope it helps someone.