Using /get requestHandler in solrj search - solrcloud

I need to use the "/get" request handler from SolrJ code in Solr 4.10. I am using CloudSolrServer to run my Solr queries. Can someone tell me the exact way to invoke that request handler through SolrJ? Setting solrQuery.setRequestHandler("/get") with the query set as setQuery("id=1") or setQuery("id:1") does not bring back any results.
Using a curl command and running the query as
curl -i 'http://localhost:8983/solr/collection/get?id=1'
does bring back results, though.
I believe there is some specific way we need to pass the Solr document id when using the /get request handler. Any leads would be appreciated.
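For reference, the real-time /get handler takes an id (or ids) parameter rather than a q query string, which would explain why setting the query has no effect. A minimal SolrJ 4.10 sketch along those lines (the ZooKeeper address and collection name are placeholders):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RealTimeGetExample {
    public static void main(String[] args) throws Exception {
        CloudSolrServer server = new CloudSolrServer("localhost:2181"); // placeholder ZooKeeper host
        server.setDefaultCollection("collection");                      // placeholder collection name

        SolrQuery query = new SolrQuery();
        query.setRequestHandler("/get");
        query.set("id", "1"); // /get expects an id (or ids) parameter, not q

        QueryResponse rsp = server.query(query);
        // For a single id the handler returns the document under "doc"
        // rather than in the usual results list:
        System.out.println(rsp.getResponse().get("doc"));

        server.shutdown();
    }
}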

Related

Making an HTTP request with a blank user agent

I'm troubleshooting an issue that I think may be related to request filtering. Specifically, it seems every connection to a site made with a blank user agent string is shown a 403 error. I can generate other 403 errors on the server by doing things like trying to browse a directory with no default document while directory browsing is turned off. I can also generate a 403 error by using a tool like the Modify Headers extension for Google Chrome to set my user agent string to the Baidu spider string, which I know has been blocked.
What I can't seem to do is generate a request with a BLANK user agent string to try that. The extensions I've looked at require something in that field. Is there a tool or method I can use to make a GET or POST request to a website with a blank user agent string?
I recommend trying a CLI tool like cURL or a UI tool like Postman. You can carefully craft each header, parameter and value that you place in your HTTP request and fully trace the end-to-end request-response exchange.
This example, straight from the cURL docs on user agents, shows how you can play around with setting the user agent from the command line:
curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]
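For your specific case of a blank user agent, cURL can do that too: giving -H a header name with nothing after the colon removes the header entirely, while terminating the name with a semicolon sends the header with an empty value (example.com stands in for your site):
curl -H "User-Agent:" http://example.com/
curl -H "User-Agent;" http://example.com/
The first request sends no User-Agent header at all; the second sends "User-Agent:" with a blank value, which should let you reproduce what your filter is reacting to.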
In Postman it's just as easy: tinker with the headers and params as needed. You can also click the "code" link on the right-hand side and view the request as raw HTTP when you want to see exactly what will be sent.
You can also use a heap of other HTTP tools, such as Paw and Insomnia, all of which are quite well suited to the task at hand.
One last tip: in Chrome's developer tools, you can right-click a specific request in the Network tab and copy it as cURL. You can then paste the cURL command and modify it as needed. In Postman you can import a request by pasting from raw text, and Postman will interpret the cURL command for you, which is particularly handy.

Injecting an API key into Google Script's UrlFetchApp for an HTTP request

The Problem
Hi. I'm trying to use Google Script's "UrlFetchApp" function to pull data from an email marketing tool (Klaviyo). Klaviyo's documentation, linked here, provides an API endpoint via an HTTP request.
Unfortunately, Klaviyo requires a key parameter in order to access the data, and Google doesn't document if (or where) I can add a custom parameter (it should look something like "api_key=xxxxxxxxxxxxxxxx"). It's quite easy for me to pull the data into my terminal using the api_key parameter, but ideally I'd have it pulled via Google Scripts and added to a Google Sheet appropriately. If I can get the JSON into Google Scripts, I can work with the data to output it how I want.
KLAVIYO'S EXAMPLE REQUEST FOR TERMINAL
curl https://a.klaviyo.com/api/v1/metrics -G \
-d api_key=XX_XXXXXXXXXXXXXXX
THIS OUTPUTS CORRECT DATA IN JSON
Note: my ultimate end goal is to pipe the data into Google Data Studio on a recurring basis for reporting. I thought I'd get the data into a CSV for download/upload into Google Data Studio on a recurring basis. If I'm thinking about this the wrong way, let me know.
Regarding the -G flag, from the curl man pages (emphasis mine):
When used, this option will make all data specified with -d, --data, --data-binary or --data-urlencode to be used in an HTTP GET request instead of the POST request that otherwise would be used. The data will be appended to the URL with a '?' separator.
Given that the default HTTP method for UrlFetchApp.fetch() is "GET", your request will be very simple:
UrlFetchApp.fetch("https://a.klaviyo.com/api/v1/metrics?api_key=XX_XXXXXXXXXXXXXXX");
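If you then want to work with the JSON inside the script, a small sketch along these lines should do; the key is a placeholder, and logging the parsed object is just for illustration:

function fetchKlaviyoMetrics() {
  // placeholder API key, appended as an ordinary query-string parameter
  var url = "https://a.klaviyo.com/api/v1/metrics?api_key=XX_XXXXXXXXXXXXXXX";
  var response = UrlFetchApp.fetch(url);            // GET is the default method
  var data = JSON.parse(response.getContentText()); // parse the JSON body
  Logger.log(data);
  return data; // hand the object to whatever writes your sheet
}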

How to create article in Drupal 8 via JMeter?

I am trying to create a JMeter test case for article creation in Drupal 8. I am able to add steps for the other navigation. But when I click the Create Article button after entering some values in the form fields, JMeter gets an HTTP 200 response and the article is not created.
If I do the same steps in a browser, I get an HTTP 303 response and the article is created successfully.
I found this dynamic ID in the request headers of the POST request sent when clicking the Create Article button. I suspect this might be the reason the Drupal server is not accepting the request, because I am not sure how the dynamic ID "JJPKbuyIinQT5mQZ" is generated.
Is it generated by the browser? If yes, how do I do the same in JMeter?
Is it generated by the server? If yes, I don't see this token in a previous request, like form_token.
This dynamic ID is generated automatically by JMeter, provided you tick the Use multipart/form-data for POST box; it is the so-called multipart boundary.
Other things to consider:
Don't forget to add an HTTP Cookie Manager, otherwise you will not even be able to perform a login
Correlate form_build_id and form_token. You can do this using the CSS/JQuery Extractor (see the sketch after this list)
Correlate changed. You can generate a timestamp like 1532969982 using the __groovy() function like: ${__groovy(Math.round(System.currentTimeMillis() / 1000),)}
Correlate created[0][value][date]. You can do this using the __time() function like ${__time(yyyy-MM-dd,)} (lowercase yyyy is the SimpleDateFormat pattern for the calendar year; uppercase YYYY is the week-based year and can misbehave around New Year)
Correlate created[0][value][time]. You can do this using the same __time() function like ${__time(HH:mm:ss,)}
That's probably it; the other values should be fine to use as recorded.
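As a sketch of the form_build_id correlation mentioned above: a CSS/JQuery Extractor added as a child of the request that returns the article form might be configured like this (the reference name is arbitrary, and the selector assumes Drupal's standard hidden-input markup):

Reference Name: form_build_id
CSS Selector Expression: input[name=form_build_id]
Attribute: value
Match No.: 1

The extracted value is then referenced as ${form_build_id} in the Create Article POST request, and form_token is handled the same way.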

Extending Artifactory's PyPI repos with plugins

I am trying to migrate a legacy system to use Artifactory. However, I have two blockers:
the old scripts require PyPI's XML-RPC API, which Artifactory doesn't support
they also make use of upload_docs, which is not supported by Artifactory's PyPI implementation either
a smaller issue: the old scripts call register and expect a 200 instead of a 204 HTTP status code
Would it be possible for me to write a plugin to implement this?
Looking at https://www.jfrog.com/confluence/display/RTF/User+Plugins I couldn't find a callback for when POST /api/pypi/<index-name> is requested.
If I can make it work for the methods we actually use, just pretending it deployed the docs and responding with the correct status code, I will be happy enough.
As you say, there is no plugin hook for the Pypi API endpoints. It would be possible to use the altResponse endpoint to customize artifact downloads, but then you would be restricted to GET requests with no request body, which is also not a good option for you.
I think the most viable approach would be to define a custom executions endpoint. With this, you can specify the acceptable method, read the body, and set your own response code and body. The main shortcoming with this is that you can't customize the path (it's always /api/plugins/execute/[execution_name]), but this can be worked around.
Execution endpoints can take params in the following form:
/api/plugins/execute/[execution_name]?params=[param_name]=[param_val]
Say your plugin takes a param path, which represents the API path your old scripts are going to call. Then you can set your base URL to /api/plugins/execute/[execution_name]?params=path=/, so that the API path is appended to the param. Alternatively, you can use nginx or another reverse proxy to rewrite the original API path to this form.
(Since you'll be using XML-RPC, I don't suppose you'll need to worry about any of this path stuff, but I'm including it anyway for completeness.)
Some issues with this approach:
Execution endpoints only allow String responses, so sending binary data in the response body might be finicky. However, no such limitation exists on the request body.
If you need more than one request method, you'll need more than one execution endpoint. This means you'll need to use a reverse proxy to rewrite each method to a separate endpoint. Again, since XML-RPC just uses POST, this probably won't be an issue for you.
Execution endpoints can't customize response headers. Therefore, if your scripts expect a particular Content-Type or other header, you'll need to use a reverse proxy to insert it into the response.
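A minimal sketch of what such an execution endpoint could look like as a user plugin, using the executions DSL; the plugin name, param handling and canned response are illustrative, not a drop-in implementation:

import org.artifactory.resource.ResourceStreamHandle

executions {
    // reachable as POST /api/plugins/execute/pypiShim?params=path=/
    pypiShim(httpMethod: 'POST') { params, ResourceStreamHandle body ->
        def path = params['path'] ? params['path'][0] : '/' // API path appended by the reverse proxy
        def payload = body.inputStream.text                 // the XML-RPC request body
        // pretend the call succeeded
        status = 200
        message = 'OK' // a real shim would return a proper XML-RPC methodResponse here
    }
}

The status and message variables set the response code and body, which is exactly the "pretend it worked" behavior described above.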

Emacs Lisp - How do you transfer binary files via HTTP?

Recently, I have been experimenting with using Elisp to communicate with a local CouchDB database via HTTP requests. Sending and receiving JSON works nicely, but I hit a bit of a road-block when I tried to upload an attachment to a document. In the CouchDB tutorial they use this curl command to upload the attachment:
curl -vX PUT http://127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af/artwork.jpg?rev=2-2739352689 \
--data-binary @artwork.jpg -H "Content-Type:image/jpg"
Does anyone know how I would go about using the built-in url package to achieve this? I know that it is possible to do uploads using multipart MIME requests; there is a section in the emacs-request manual about it. But I have also read that CouchDB does not support multipart/form-data as part of its public API, even though Futon uses it under the hood.
I think you need to use url-retrieve and bind url-request-method to "PUT" around the call.
You will also need to bind url-request-data to your data:
(let ((url-request-method "PUT")
      (url-request-extra-headers '(("Content-Type" . "image/jpg")))
      (url-request-data (with-temp-buffer
                          (set-buffer-multibyte nil) ; keep the bytes raw
                          (insert-file-contents-literally "artwork.jpg")
                          (buffer-string))))
  (url-retrieve "http://127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af/artwork.jpg?rev=2-2739352689"
                (lambda (_status) (message "attachment uploaded"))))
Please see also
Creating a POST with url elisp package in emacs: utf-8 problem
How does emacs url package handle authentication?
You might also find it enlightening to read the sources.
