We have successfully set up OpenAS2 (https://github.com/OpenAS2/OpenAs2App) to send messages between our partners.
We are receiving orders from our partner in the EDI (EDIFACT) format. Does anyone have a suggestion for how best to translate such an order and get it into our WooCommerce server as an order? Woo has an API to place orders:
https://woocommerce.github.io/woocommerce-rest-api-docs/#create-an-order but I'm not sure how to go from an EDIFACT order to a Woo order. Any suggestions?
What programming languages are in your tech stack? You need to convert the segments/elements into XML/JSON that the API requires. EDI is just text formatted a specific way. If you have Python, you might give BOTS a try. There are a lot of open source parsers for EDIFACT. Or, you could go the commercial translator route.
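To make that concrete, here is a hedged sketch in Python: it splits a simplified EDIFACT ORDERS message into segments and maps a few fields onto the WooCommerce "create order" payload. The field positions, the `lookup_product_id` helper, and the host/keys are all assumptions or placeholders; element layouts vary by partner, so check them against your partner's message implementation guide.

```python
import requests

# Hedged sketch: split a simplified EDIFACT ORDERS message into segments
# and build a WooCommerce "create order" payload from a few of them.
# Segment/element positions are assumptions; verify against your
# partner's implementation guide.

EDIFACT_SAMPLE = (
    "UNH+1+ORDERS:D:96A:UN'"
    "BGM+220+PO12345+9'"
    "LIN+1++4000862141404:EN'"
    "QTY+21:12'"
    "UNT+5+1'"
)

def lookup_product_id(ean: str) -> int:
    """Hypothetical helper: map the partner's EAN to a Woo product ID."""
    return {"4000862141404": 101}.get(ean, 0)

segments = [seg.split("+") for seg in EDIFACT_SAMPLE.split("'") if seg]

order = {"line_items": [], "meta_data": []}
for seg in segments:
    if seg[0] == "BGM":                               # document/order number
        order["meta_data"].append({"key": "po_number", "value": seg[2]})
    elif seg[0] == "LIN":                             # line item with EAN
        ean = seg[3].split(":")[0]
        order["line_items"].append(
            {"product_id": lookup_product_id(ean), "quantity": 0})
    elif seg[0] == "QTY" and order["line_items"]:     # quantity for last LIN
        order["line_items"][-1]["quantity"] = int(seg[1].split(":")[1])

# Create the order via the WooCommerce REST API (host/keys are placeholders)
resp = requests.post(
    "https://example.com/wp-json/wc/v3/orders",
    auth=("ck_your_key", "cs_your_secret"),
    json=order,
)
print(resp.status_code)
```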
I want to develop a job board website. I want a specific feature: job posts should be created automatically by parsing incoming emails.
I tried Zapier, but it only creates blog posts.
I also tried the Postie plugin, but Gmail didn't allow it.
I'm willing to use any one of these themes: Job Monster, WorkScout, or Superio. If you have any suggestions, please let me know.
Is there any way to parse the email data and create a new job post? Please help me resolve this issue.
This is not a paid task; I need help to learn these things.
There is a lot to unpack here.
The main problem you are going to encounter is that the emails you are parsing may not all be formatted the same way. To pull the info out of an email, you need to be able to define rules to extract it.
If, however, the emails are all formatted the same, you can use the "split" function in Zapier to pull the various bits of data out of the email. Once you have these, you can create a new post with your Zap (a rough sketch of the same idea in code follows below).
I would recommend looking for a WordPress plugin that allows you to create lists with custom post types. WPBakery does this, from memory. You can then set up a custom feed based on that post type.
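If you'd rather do it without Zapier, here is a hedged Python sketch, assuming the emails share one fixed plain-text layout and that a custom post type "job" is registered with show_in_rest enabled. The hostnames, field labels, and credentials are placeholders.

```python
import email
import imaplib
import re
import requests

# Hedged sketch: pull unread emails over IMAP, extract fields with
# fixed-label rules, and create posts via the WordPress REST API.

imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login("jobs@example.com", "app-password")  # Gmail needs an app password
imap.select("INBOX")

_, data = imap.search(None, "UNSEEN")
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    part = msg.get_payload(0) if msg.is_multipart() else msg
    body = part.get_payload(decode=True).decode(errors="replace")

    # Rule-based extraction only works while the emails keep these labels.
    title = re.search(r"Job Title:\s*(.+)", body)
    desc = re.search(r"Description:\s*(.+)", body, re.DOTALL)
    if not (title and desc):
        continue  # skip emails that don't match the expected format

    requests.post(
        "https://example.com/wp-json/wp/v2/job",  # custom post type route
        auth=("api_user", "application-password"),
        json={
            "title": title.group(1).strip(),
            "content": desc.group(1).strip(),
            "status": "publish",
        },
    )
```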
Hopefully this helps narrow down the process for you. Good Luck.
Is there any detailed information on Google Translate and the GDPR?
In my opinion, translating personal data with the Google Translate widget is a big issue here, especially if you run an online store and the user translates pages while checking out (i.e. the last checkout step, where all the personal data, including cart positions, billing information, and user contact information, is prefilled).
There is a way to exclude parts of the website from being translated (adding a "notranslate" class attribute), but I assume the data itself is sent to Google Translate's servers anyway?
Looking forward to an answer.
Best regards,
Andrea
Collect some statistics about your visitors and implement localization for the major languages they use.
This will probably prevent users from using Google Translate during your checkout process.
I am looking for an EDI X12 4010 204 SEF file. Please let me know where I can find this file.
Actually, I need the details of all the EDI 204 segments and their corresponding qualifiers. Please help me if you know where I can find this.
Thanks,
Nitin
A similar question arose from someone dealing with Amazon. I strongly suggest you reach out to your trading partner for their standards -- this is the normal practice in the EDI world, since EDI provides general guidelines but the actual implementation of even a single transaction, like the 850 (purchase order), varies widely from customer to customer.
You don't technically need a SEF file. In our case, we went straight down the in-house-development path by writing our own classes and programs to translate EDIFACT messages into XML and then capturing the data we needed in SQL Server tables.
First, you need to determine what data you want out of the EDI message. Next, you need to translate the message into a readable format such as XML (the SEF file is a map of the EDI, which is why you think you need it; it helps, but it's not mandatory). Once you have it translated, you can extract what you need. A sketch of the translation step follows below.
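Here is a minimal Python sketch of that "translate into XML first" step. It assumes the default EDIFACT separators; a real interchange can redefine them in its UNA segment, so check that before relying on these.

```python
import xml.etree.ElementTree as ET

# Minimal sketch: turn EDIFACT segments into a generic XML layout you
# can then query for the data you need. Separators are the EDIFACT
# defaults; a real interchange may redefine them in its UNA segment.

def edifact_to_xml(message: str) -> ET.Element:
    root = ET.Element("interchange")
    for raw in filter(None, message.split("'")):
        elements = raw.split("+")
        seg = ET.SubElement(root, elements[0])        # segment tag, e.g. BGM
        for i, element in enumerate(elements[1:], start=1):
            # composite elements keep their ":" component separators
            ET.SubElement(seg, f"e{i}").text = element
    return root

tree = edifact_to_xml("UNH+1+ORDERS:D:96A:UN'BGM+220+PO12345+9'UNT+3+1'")
print(ET.tostring(tree, encoding="unicode"))
```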
I strongly suggest you read and study this great effort in CodeProject...
https://www.codeproject.com/Articles/11278/EDIFACT-to-XML-to-Anything-You-Want
Remember, EDI files are really "message packets" and not files that one would convert directly into another format such as CSV or even XML. You first need to "map" the EDI message into a meaningful layout from which you can then pull out the data you need.
Think of EDI as being the precursor to BLOCKCHAIN. Yes, Bitcoin. Remember that!? Blockchain is the modern version of EDI messaging and Business-2-Business processing.
I've been experimenting with writing my own RSS reader. I can handle the "parse XML" bit. The thing I'm getting stuck on is "How do I fetch older posts?"
Most RSS feeds only list the 10-25 most recent items in their XML file. How do I get ALL the items in a feed, and not just the most recent ones?
The only solution I could find was using the "unofficial" Google Reader API, which would be something like
http://www.google.com/reader/atom/feed/http://fskrealityguide.blogspot.com/feeds/posts/default?n=1000
I don't want to make my application dependent on Google Reader.
Is there any better way? I noticed that on Blogger, I can do "?start-index=1&max-results=1000", and on WordPress I can do "?paged=5". Is there any general way to fetch an RSS feed so that it gives me everything, and not just the most recent items?
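For what it's worth, here is a minimal sketch of that WordPress-style pagination using the feedparser library. It only works on feeds that honor the ?paged= parameter, and the URL is a placeholder.

```python
import feedparser  # pip install feedparser

# Sketch of the ?paged= pagination mentioned above; only works on
# feeds that honor the parameter. The URL is a placeholder.

items, page = [], 1
while True:
    d = feedparser.parse(f"https://example.com/feed/?paged={page}")
    if not d.entries:
        break          # ran off the end of the archive
    items.extend(d.entries)
    page += 1

print(len(items), "items fetched")
```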
RSS/Atom feeds do not allow historic information to be retrieved. It is up to the publisher of the feed to provide it if they want to, as in the Blogger or WordPress examples you gave above.
The only reason Google Reader has more information is that it has remembered it from when the feed first came up.
There is some information on something like this discussed as an extension to the Atom protocol (Feed Paging and Archiving, RFC 5005), but I don't know if it is actually implemented anywhere.
As the other replies here mentioned, a feed may not provide archival data, but historical items may be available from another source.
Archive.org's Wayback Machine has an API to access historical content, including RSS feeds (if their bots have downloaded them). I've created the web tool Backfeed, which uses this API to regenerate a feed containing the concatenated historical items. If you'd like to discuss the implementation in detail, please get in touch.
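As a hedged sketch of the same approach (not Backfeed's actual implementation), the Wayback Machine's CDX API can list captures of a feed URL, which you can then fetch one by one. The feed URL below is just the example from the question.

```python
import requests

# Sketch: list historical captures of a feed via the Wayback Machine's
# CDX API, then fetch each snapshot for parsing and merging.

FEED = "http://fskrealityguide.blogspot.com/feeds/posts/default"

rows = requests.get(
    "http://web.archive.org/cdx/search/cdx",
    params={"url": FEED, "output": "json",
            "filter": "statuscode:200",
            "collapse": "digest"},      # skip byte-identical captures
).json()

captures = rows[1:]                     # rows[0] is the column header
for fields in captures:
    timestamp, original = fields[1], fields[2]
    snapshot = requests.get(f"http://web.archive.org/web/{timestamp}/{original}")
    # snapshot.text is the feed XML as archived at that moment; parse it
    # and merge the items, de-duplicating on <guid>/<id>.
    print(timestamp, len(snapshot.text))
```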
In my experience with RSS, the feed contains only the last X items, where X is a variable. Certain feeds may have the full list, but for bandwidth's sake most places likely limit it to just the last few items.
The likely reason Google Reader has the old info is that it stores it on its side for users to access later.
Further to what David Dean said, the RSS/Atom feeds will only contain what the publisher of the feed has up at that moment, and someone would need to be actively collecting this information in order to have any historical record. Basically, Google Reader was doing this for free, and when you interacted with it you could retrieve this stored information from Google's database servers.
Now that they have retired the service, to my knowledge you have two choices. You either have to start collecting this information yourself from your feeds of interest and store it as XML or some such, or you can pay for this data from one of the companies who sell this type of archived feed information.
I hope this information helps somebody.
Seán
Another potential solution that might not have been available when the question was originally asked, and which shouldn't require any specific service:
1. Find the URL of the RSS feed you want and use waybackpack to get the archived URLs for that feed.
2. Use FeedReader or a similar library to pull down the archived RSS feeds (sketched below).
3. Take the URLs from each feed and scrape them as you wish. If you're going way back in time, it's possible there might be some dead links.
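A rough sketch of steps 2 and 3 in Python, assuming waybackpack has already downloaded the snapshots into ./archive and using the feedparser library in place of FeedReader; the on-disk layout is an assumption, so check what waybackpack actually wrote.

```python
import glob
import os
import feedparser  # pip install feedparser

# Sketch: parse every archived snapshot and de-duplicate the items,
# since the same item usually appears in many consecutive captures.

seen = set()
for path in glob.glob("archive/**/*", recursive=True):
    if not os.path.isfile(path):
        continue
    parsed = feedparser.parse(path)
    for entry in parsed.entries:
        link = entry.get("link")
        if link in seen:
            continue                  # same item captured more than once
        seen.add(link)
        # step 3: fetch/scrape `link` here as needed; some may be dead
        print(entry.get("published", "?"), entry.get("title"), link)
```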
All the previous answers more or less rely on existing services still having a copy of that feed, or on the feed engine being able to provide older items dynamically.
There's another, admittedly proactive and rather theoretical, way to do it: let your feed reader use a caching proxy which semantically understands RSS and/or Atom feeds and caches them on a per-item basis, up to as many items as you configure.
If the feed reader doesn't poll feeds regularly, the proxy could fetch known feeds on its own schedule so as not to miss an item in highly volatile feeds, like the one from User Friendly, which has only one item and changes every day (or at least used to). So if the feed reader, e.g., crashed or lost its network connection while you were away for a few days, you might lose items from its cache. Having the proxy fetch those feeds regularly (e.g. from a data center instead of from home, or on a server instead of a laptop) lets you run the feed reader only now and then, without losing items which were posted after your feed reader last fetched the feed but rotated out again before it fetched the next time.
I call that concept a Semantic Feed Proxy, and I've implemented a proof of concept called sfp. It's not much more than a proof of concept, though, and I haven't developed it further. (So I'd be happy about hints to projects with similar ideas or purposes. :-)
Why does this problem exist?
Most RSS readers can only import feeds from a live URL, which makes things harder for sites that are not indexed by the Wayback Machine.
The reason Wayback Machine feeds can be imported is that the reader can regularly poll the server for updates according to its defined TTL configuration. The reader compares the current datetime with the pubDate or lastBuildDate keys of the feed's posts in the XML response. We can't hack the machine's datetime to work around this, because the current datetime is fetched live.
I've outlined an alternative solution without Wayback below. Unfortunately, I have not been able to find a universal solution for all feed sources.
Alternative Solution(s)
In my experience, NOT ALL feeds are partial, though. The XML doesn't have to specify a datetime for each post, and in that case the RSS reader doesn't have a datetime to filter the feed with. An example of this feed type can be found here.
This kind of reading experience is useful when chronological order is irrelevant and the content doesn't need to be sorted. The approach works well for sites where ALL of the content is valuable; the linked Essays of Paul Graham is a good example.
1. If the site has a generic, non-chronological feed option, subscribe to that RSS instead (the preferred option).
2. Download the linked timestamped .rss file, strip the datetimes, and host the file on your own server. Note, we can implement this via an AWS Lambda (sketched below):
   - Set up a server that fetches the live RSS.
   - Strip the pubDate tags from the XML file on fetch.
   - Host the modified RSS on your own server.
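A minimal Python sketch of the strip-and-rehost step, under the assumption that the feed is standard RSS 2.0; the feed URL is a placeholder.

```python
import requests
import xml.etree.ElementTree as ET

# Sketch: fetch the live feed, drop the timestamp tags, and write the
# modified XML out for self-hosting (e.g. from an AWS Lambda).

root = ET.fromstring(requests.get("https://example.com/feed.rss").content)
channel = root.find("channel")

for el in channel.findall("lastBuildDate"):
    channel.remove(el)
for item in channel.findall("item"):
    for el in item.findall("pubDate"):
        item.remove(el)

ET.ElementTree(root).write("feed-no-dates.xml", encoding="utf-8",
                           xml_declaration=True)
```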
Note
These are suboptimal solutions due to the loss of ordering; however, I wanted to provide a potential alternative to the Wayback Machine.
In addition, some of the existing answers require advanced system-design workarounds and more prework, and in some cases are outdated (Google Reader has been shut down). I hope this is helpful for those who really need a complete feed list. Constructing a new RSS feed from the original RSS file is not too hard.
I wondered if anyone can give an example of a professional use of RSS/Atom feeds in a company product. Does anyone use feeds for anything other than news updates?
For example, did you create a product that delivers results as RSS/Atom feeds? Like price listings, current inventory, or maybe dates of training lessons?
Or am I thinking about use cases for RSS/Atom feeds in the wrong way?
Edit: @abyx has a really good example of a somewhat unexpected use of RSS as a way to get debug information from program transactions. I like that idea. This is the type of use I was thinking of, besides publishing search results or recent changes (like MediaWiki).
Some of my team's new systems generate RSS feeds that the developers syndicate.
These feeds push out events that interest the developers at certain times, and the information is controlled using different loggers. Thus when debugging you can subscribe to the debugging feed; when you want to see completed transactions, you go to the transactions feed, etc.
This allows all the developers to get the information they want in a comfortable way, without having to mess around with configuration. If you don't want the information anymore, there's no need to remove yourself from a mailing list or edit a configuration file -- simply remove the feed and be done with it.
Very cool, and the idea was stolen from Pragmatic Project Automation.
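As an illustration of the idea (not the poster's actual system), here is a minimal Python sketch of a logging handler that republishes each log record as an item in an RSS file developers can subscribe to; the file path and feed metadata are placeholders.

```python
import logging
import xml.etree.ElementTree as ET
from email.utils import formatdate

# Sketch: a logging handler that appends each record to an RSS file,
# turning a logger into a subscribable "debug feed".

class RssHandler(logging.Handler):
    def __init__(self, path="debug-feed.xml", title="Debug feed"):
        super().__init__()
        self.path = path
        self.root = ET.Element("rss", version="2.0")
        self.channel = ET.SubElement(self.root, "channel")
        ET.SubElement(self.channel, "title").text = title
        ET.SubElement(self.channel, "link").text = "https://example.com/"
        ET.SubElement(self.channel, "description").text = "Developer events"

    def emit(self, record):
        item = ET.SubElement(self.channel, "item")
        ET.SubElement(item, "title").text = record.getMessage()
        ET.SubElement(item, "pubDate").text = formatdate()  # RFC 2822 date
        ET.ElementTree(self.root).write(self.path, encoding="utf-8",
                                        xml_declaration=True)

# One logger per feed: transactions, debugging, etc.
log = logging.getLogger("transactions")
log.addHandler(RssHandler("transactions-feed.xml", "Completed transactions"))
log.warning("Transaction 42 completed")   # appears as a new feed item
```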
Most digital libraries use RSS/Atom to expose their search results and data updates, following the OAI-PMH protocol.
With our internal TRAC server, I'm subscribed to the timeline view for each project that I work on. It's great for keeping track of checkins and bug tickets. This is pretty exclusive to a developer position though.
I also am subscribed to the recent changes for our installation of MediaWiki that we use for our intranet. That way it's easy to see if documents that I need have been changed, or if there's new policies etc.
Our website has a news page that I wrote an RSS feed for as well. While you mentioned that you weren't really interested in recent news, it is nice to keep up with our press releases.
I have seen RSS used to syndicate gas prices from a service for a specific zip code.
There are many examples. Here are a couple:
SharePoint provides RSS feeds from its lists.
Many faceted navigation products allow you to get an RSS feed based on a selected filter. For example, you can navigate to view 24" LCD Monitors on newegg.com and then get an RSS feed of that view.
Mantis bug tracker includes RSS feeds although I wish they were more configurable. Also we use MediaWiki for documentation which has all sorts of RSS Feeds including a per page watch, and recent changes.
I just added RSS feeds to the ticketing system I use at work (TicketDesk) and that feature should be in the next release of the product.
It's nice because it basically provides me with a custom search view of outstanding trouble tickets or work requests that comes to me, rather than me having to go to the application. It also allows users to get feeds of issues they may be interested in without requiring them to get emails on each update.
I'm looking at implementing an RSS feed for calls for service that our agency takes, to provide the administrators a quick and easy way to see what has been going on.
Atom feed documents and Atom entry documents are used as the representation format for RESTful web services that follow the Atom Publishing Protocol (AtomPub).
I personally have used syndication feeds to expose a subset of the Windows Event Log information so that I could subscribe and be notified of critical events on a server.
ImmobilienScout24
They use RSS feeds for updates on your search.