Getting a list of Salesforce reports with the 'unique name' and not just the user label - reflection

Getting a manipulable list of Salesforce reports is already a little convoluted, requiring a login to the site and then downloading /servlet/servlet.ReportList, an XML file containing a list of reports. For each report you get the folder name, the name (user label), the id and whether it is public. However, two fields are missing: the "unique" name and the description. The unique name is important here, as Salesforce allows any number of reports, even in the same folder, to have the same name/label. This means the only way to tell them apart is by the unique name.
Is there any way to get a list of reports that includes both the unique name and the id? (or failing that the description and the id?)

The Metadata API supports Reports.
Included are the "name" and "fullName" fields. The latter is the report's unique name (its fully qualified developer name, including the folder).
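If it helps to see the shape of such a call, below is a minimal, hedged C# sketch of a listMetadata request sent as raw SOAP; the instance URL, session id, folder name and API version are placeholders rather than anything from the question, and a WSDL-generated proxy or any other Metadata API client would work just as well. Each FileProperties entry in the response carries both fullName (the unique name, e.g. MyFolder/My_Report_Unique_Name) and id.

// Hedged sketch: list reports in one folder via the Metadata API's listMetadata
// operation over raw SOAP. All credentials and names below are placeholders.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ReportLister
{
    static async Task Main()
    {
        const string instanceUrl = "https://yourInstance.salesforce.com"; // placeholder
        const string sessionId   = "SESSION_ID_FROM_LOGIN";               // placeholder
        const string folder      = "MyReportFolder";                      // placeholder

        string envelope = $@"<soapenv:Envelope
    xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/""
    xmlns:met=""http://soap.sforce.com/2006/04/metadata"">
  <soapenv:Header>
    <met:SessionHeader><met:sessionId>{sessionId}</met:sessionId></met:SessionHeader>
  </soapenv:Header>
  <soapenv:Body>
    <met:listMetadata>
      <met:queries>
        <met:type>Report</met:type>
        <met:folder>{folder}</met:folder>
      </met:queries>
      <met:asOfVersion>39.0</met:asOfVersion>
    </met:listMetadata>
  </soapenv:Body>
</soapenv:Envelope>";

        using var http = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Post,
            $"{instanceUrl}/services/Soap/m/39.0")
        {
            Content = new StringContent(envelope, Encoding.UTF8, "text/xml")
        };
        request.Headers.Add("SOAPAction", "\"\"");

        HttpResponseMessage response = await http.SendAsync(request);
        // Each <result> (FileProperties) element contains <fullName> and <id>.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}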

Related

Ignore Turkish Characters On Firestore Query

I have a .NET app that uses Firestore as its database and runs a Firestore query to find some data. The problem is with data fields that include Turkish characters: if someone uses my app and searches without typing the Turkish characters, the query cannot find the data.
For example, my name is stored as "Ertuğrul"; if the user searches for "Ertugrul", the query cannot find it, but I need it to. Is there a way to do that?
My code that uses query is here:
QRef = DataBase.Collection("CollName").Document("DocName").Collection("CollName")
.WhereGreaterThanOrEqualTo("NameSurname", $"{NameSurname}")
.WhereLessThanOrEqualTo("NameSurname", $"{NameSurname}\uF7FF");
Firestore queries only match against the literal values stored in a field; there is no accent- or locale-insensitive matching. If you want to be able to search for "Ertuğrul" as well as for "Ertugrul", then besides the "NameSurname" field you should consider adding a new field called "NameSurnameWithoutSpecialCharacters" and store each name without those Turkish characters.
When a user searches, simply verify if the searched term contains "special" characters. If it does, search on the "NameSurname", otherwise search on the newly created field.
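A minimal sketch of how that could look with the .NET client, reusing the collection path from the question; the normalized field name, the Normalize/BuildQuery helpers and the character map are illustrative assumptions, not an official API:

// Hedged sketch: keep a normalized copy of the name so searches without
// Turkish characters still match. Field and method names are made up here.
using System.Collections.Generic;
using System.Text;
using Google.Cloud.Firestore;

static class TurkishSearch
{
    // Map Turkish letters to their plain ASCII counterparts.
    static readonly Dictionary<char, char> Map = new Dictionary<char, char>
    {
        ['ç'] = 'c', ['Ç'] = 'C', ['ğ'] = 'g', ['Ğ'] = 'G',
        ['ı'] = 'i', ['İ'] = 'I', ['ö'] = 'o', ['Ö'] = 'O',
        ['ş'] = 's', ['Ş'] = 'S', ['ü'] = 'u', ['Ü'] = 'U'
    };

    public static string Normalize(string text)
    {
        var sb = new StringBuilder(text.Length);
        foreach (char c in text)
            sb.Append(Map.TryGetValue(c, out char plain) ? plain : c);
        return sb.ToString();
    }

    public static Query BuildQuery(FirestoreDb db, string searchTerm)
    {
        // If the term contains Turkish characters, search the original field;
        // otherwise search the normalized copy written alongside it.
        string field = searchTerm == Normalize(searchTerm)
            ? "NameSurnameWithoutSpecialCharacters"
            : "NameSurname";

        return db.Collection("CollName").Document("DocName").Collection("CollName")
                 .WhereGreaterThanOrEqualTo(field, searchTerm)
                 .WhereLessThanOrEqualTo(field, searchTerm + "\uF7FF"); // same range bound as in the question
    }
}

When writing a document you would then store both NameSurname and its normalized copy, e.g. Normalize("Ertuğrul") == "Ertugrul".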

Which field to use to filter documents by ID in Firestore dashboard?

I am using the Firestore dashboard to browse through some documents in my collection. In one particular case, I am looking for a document in a collection called private, but when I enter "id" in "Filter by field" and specify the ID that I want it to match, the dashboard doesn't find anything.
All I want to do is find a specific document in a collection by its ID using the dashboard. Any idea how to do this? It seems such a mundane feature that I am surprised Firebase wouldn't have it?!
You can search for one specific document by clicking on the "table header", in your case, "Home > private > 0EU..."
The value you type there is taken as the name of a field to search for. "id" means the name of the field called literally "id". There is one special field name "__name__" which is taken by the Firestore SDK to mean the document ID in some cases (normally specified as FieldPath.documentId()), but apparently the console does not accept that.
What you have here is, in my opinion, a valid feature request for the Firebase console, and you can file that with Firebase support.
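For completeness, here is a small hedged sketch of doing the same lookup programmatically with the .NET SDK, where FieldPath.DocumentId is the __name__ pseudo-field mentioned above (the collection name matches the question; the document ID is a placeholder):

// Hedged sketch: find a document by its ID, either by direct lookup or via a
// query on FieldPath.DocumentId (i.e. __name__). "YOUR_DOCUMENT_ID" is a placeholder.
using System.Threading.Tasks;
using Google.Cloud.Firestore;

static class FindById
{
    public static async Task Run(FirestoreDb db)
    {
        // Direct lookup is simplest when you already know the ID.
        DocumentSnapshot doc = await db.Collection("private")
                                       .Document("YOUR_DOCUMENT_ID")
                                       .GetSnapshotAsync();

        // Equivalent query form, which is what FieldPath.documentId() maps to.
        QuerySnapshot byId = await db.Collection("private")
                                     .WhereEqualTo(FieldPath.DocumentId, "YOUR_DOCUMENT_ID")
                                     .GetSnapshotAsync();
    }
}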

Scraper to Database visualiser connector

I'm using Dexi.io to scrape some data that is output to Google Drive as a CSV, which gets parsed (through a Google Sheets script) and added to a native sheet (all automatically).
I'd like to push my data (automatically) to a "database Visualizer" of some sort (using knack.com currently) that allows me to display the data (in Table format) with some options to filter, sort and dig deeper; all protected by login creds that I manage.
I tried using Zapier to automate the Google Sheets to Knack integration, but Knack only has an option to "Create New Records" through Zapier and not "Update Records". (Updating records exists as an API endpoint)
I need help proceeding as I'm not a developer and am starting to hit the limits of my capabilities.
Could someone please recommend a tool (that integrates with Sheets, updates data periodically and lets me control the domain and login creds) or the optimal way to proceed with this? (I'd gladly hire a freelancer to help me build this out optimally)
Some more, potentially relevant, info: Dexi.io can output through FTP, Drive, Box or Amazon S3 (remember, not a dev :$)
kintone is a "database Visualizer" similar to Knack, and they have actions to update records.
https://zapier.com/zapbook/kintone/
There are two options to update records as there are two ways in which the unique key can be defined.
Each record in kintone has a "Record ID" associated with it - this is an autonumber made by kintone. You can specify this as the key to update, in which case you would use the "Update Record By Record ID" action.
If you would prefer to set your own unique key and use that as the key to update, you can define that unique key in your database (I guess the data you are scraping has its own ID for each record). In this case, you can place a "Text (single-line)" field in your database, open up its options and select "Prohibit duplicate values", which turns this field into a unique field, meaning that no two records can hold the same value.
Once you set that field up (and update your kintone App settings), you can select this field to be the unique key to update for the "Update Record By Update Key" Action (the "Update Key" in the action name is referring to the unique key that you just made).
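For reference, outside Zapier the same operation is a single call to kintone's REST API; here is a hedged C# sketch where the subdomain, app number, API token and field codes (scraped_id, price) are placeholders for whatever your app actually uses:

// Hedged sketch: update a kintone record by a custom unique key ("updateKey")
// instead of the Record ID. All identifiers below are placeholders.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class KintoneUpdate
{
    static async Task Main()
    {
        const string subdomain = "mycustomname";   // placeholder
        const string apiToken  = "YOUR_API_TOKEN"; // placeholder

        // "scraped_id" stands in for the unique "Text (single-line)" field described above.
        string body = @"{
          ""app"": 1,
          ""updateKey"": { ""field"": ""scraped_id"", ""value"": ""ABC-123"" },
          ""record"":    { ""price"": { ""value"": ""100"" } }
        }";

        using var http = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Put,
            $"https://{subdomain}.kintone.com/k/v1/record.json")
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        request.Headers.Add("X-Cybozu-API-Token", apiToken);

        HttpResponseMessage response = await http.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}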
And yes, kintone gives you control over login creds, and each login cred can have different view/add/edit permissions over each record you have in your app.
You can also set a custom subdomain name, but the domain name will have to be kintone.com, i.e. you can have a https://mycustomname.kintone.com sort of name.
Hope this helps.

What is the format of the LinkedIn id field r_basicprofile?

We make use of Sign In with LinkedIn in a pre-existing app. The app uses the id field returned as part of the user's profile; however, the app has restrictions on which character values can be present in the id.
What are the legal characters that LinkedIn will put in the id?
The description for id says
A unique identifying value for the member.
This value is linked to your specific application. Any attempts to use it with a different application will result in a "404 - Invalid member id" error.
Testing a small sample shows values like zHjkl_t-4D, _IcF7_r2b1 and -1ZM8mwCKM, which caused an issue because the field is restricted to starting with an alphanumeric character. I'd like to know the legal values so we can assess whether LinkedIn signups are suitable for future applications.
Member IDs are presented in a Base64-encoded format. Any character that shows up in the Base64 index table is valid; the '-' and '_' in your samples indicate the URL-safe alphabet (A-Z, a-z, 0-9, '-' and '_').
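If you need to validate incoming IDs, a hedged sketch along those lines (the pattern is inferred from the samples in the question, since LinkedIn does not document the alphabet):

// Hedged check: accept ids made only of URL-safe Base64 characters.
using System.Text.RegularExpressions;

static class LinkedInId
{
    static readonly Regex UrlSafeBase64 = new Regex(@"^[A-Za-z0-9_-]+$");

    public static bool LooksValid(string id) => UrlSafeBase64.IsMatch(id);
}

// e.g. LinkedInId.LooksValid("zHjkl_t-4D") == true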

Complex queries in Kibana, or querying for different values of a single field

I am new to Kibana. I have successfully installed Logstash, Elasticsearch and Kibana. All the links and documents I have read cover simple query syntax, such as searching for text, typing a phrase or using logical operators, but all of this is quite basic.
How can we query in more detail? For example, I have logs from my Magento store, and each log has a timestamp, a product ID and the action taken, i.e. whether the product was purchased, viewed or removed.
I imported these logs into Kibana via Logstash.
Now I want to query on the action field, not across different fields. The query "added" OR "removed" returns both the logs with the added action and the logs with the removed action, but "added" AND "removed" returns nothing, because both words are values of the same field and a single log cannot hold two values (product added and product removed) in its action field at once. What I need is to find out which products are added and removed the most and build a visualization of that.
Please suggest any tutorials for learning Kibana, e.g. how to configure it and how to write complex queries.
You can try to parse your logs in Logstash into multiple fields.
For your requirement, say, add a field "Action" and a field "Product".
In Kibana you can then add a Table with its terms set to the "Product" field.
So, when you search for "Added", the table will show all the products with the Added action.
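If you eventually need the "most added / most removed" numbers rather than a search, a terms aggregation against Elasticsearch gives them directly. A hedged C# sketch follows; the index name (magento-logs) and the field names (action, productId.keyword) are assumptions based on the log description above:

// Hedged sketch: count how often each product appears with the "added" action;
// run the same request with "removed" to get the other side.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ProductActionCounts
{
    static async Task Main()
    {
        string body = @"{
          ""size"": 0,
          ""query"": { ""match"": { ""action"": ""added"" } },
          ""aggs"": {
            ""top_added_products"": {
              ""terms"": { ""field"": ""productId.keyword"", ""size"": 10 }
            }
          }
        }";

        using var http = new HttpClient();
        HttpResponseMessage response = await http.PostAsync(
            "http://localhost:9200/magento-logs/_search",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // The buckets in the aggregation list the most frequently added products.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}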
I wanted to match two disparate search terms in the SAME field using logical operators. For example, a field called 'product_comments' has the value 'residential plumbing bathroom sink', and I want "residential" AND "sink" to match.
The documentation here: https://lucene.apache.org/core/2_9_4/queryparsersyntax.html#AND says this is possible, just as OP originally tried.
Using Kibana 5.1.1 I found that the logical operator is case-sensitive:
"residential" and "sink" matched documents containing the literal word 'and', but
"residential" AND "sink" worked as expected
