I'm a new user of Kibana 5.3.
I'm working on a search in the "Dev Tools" panel. When I'm done with the code of the search, how do I save it so that it becomes available later for visualizations etc.?
It saves in local storage, so as long as you don't clear your browser cache, you're OK. You can, however, append something like ?load_from=http://my-app/assets/kibana.json to your Kibana URL to specify a JSON file that you save your queries to. This way you can version them as well.
For example, I keep my queries in an assets folder for the project I'm working on and make sure I sync the two in case I have to blow away my cache. Note that a JSON linter will complain about the curl-esque statements if you copy and save them to that file.
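To illustrate, the file is just the raw Console text saved under a .json name. A hypothetical my-app/assets/kibana.json might contain something like this (the index name is made up):

GET /my-index/_search
{
  "query": { "match_all": {} }
}

This is exactly the curl-esque, not-quite-JSON content that makes a linter complain.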
I have created a media type that accepts XML files and saves them to a custom publicly accessible location on the server.
Ideally I would like the file to be overwritten when the exact same file is uploaded. This does not happen; instead it creates a new file and adds a number on the end. I have "Create new Revision" turned off.
To get around this issue I thought I could just delete the file via the CMS. The uploaded file has a status of "Permanent" and is used in 0 places. I know the cron job cleans up files for you, but when I run cron the file in question is still there. I figure it's because the file is set to permanent, but I don't see a way to flip this to temporary.
Any help is much appreciated.
There is a setting nested away in the file system settings which lets you configure whether orphaned files are removed or not. If drush isn't removing them despite there being no usages recorded, I'd check that this option isn't ticked.
The temporary and permanent status are used for storing temporary files during the upload/save process, so I wouldn't tinker with those too much.
If you fancy making the form yourself using the form API, then you can save the file programmatically using the FILE_EXISTS_REPLACE parameter.
https://api.drupal.org/api/drupal/core%21modules%21file%21file.module/function/file_save_data/8.5.x
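For illustration, a minimal sketch assuming Drupal 8.5 and a public:// destination; the destination URI and the way you obtain the XML data are placeholders for whatever your form or media pipeline provides:

<?php
// Sketch: save the uploaded XML, overwriting any existing file at the same
// URI instead of creating myfile_0.xml, myfile_1.xml, ...
$data = file_get_contents($uploaded_tmp_path); // hypothetical temp path from your form
$file = file_save_data($data, 'public://xml/myfile.xml', FILE_EXISTS_REPLACE);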
I'm running RStudio Server on an instance of Google Compute Engine. My R script creates a map file that I would like to include in a public web site.
The file gets created OK.
Separately, I've also created a bucket and can upload images to it, viewing them from a web browser with a URL like this: https://storage.googleapis.com/...
Still, I'm confused as to how to make the image created by the R script viewable by a browser. Does the image have to find its way over to a bucket? Or is it viewable where it is somehow?
There are many possible solutions, depending on what you want to implement and how much time you want to spend on it (and on whether you are the only one accessing the files, and whether they can be shared or are sensitive), so I will give you some hints:
The easiest one is to upload the file to a Google Storage bucket. You can then control who can access that link (a single user, a domain or everyone), and it can be accessed from a browser with a link like this:
https://storage.googleapis.com/namebucket/folder1/folder2/nome_file
There is no graphical interface; you will need to know the address to download the file (in the end, it is enough to know the name). You will need to create a small script that uploads each image to the bucket as it becomes available and makes it publicly accessible, or you can decide to make the bucket itself public.
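For example, the small script could be little more than two gsutil calls (the bucket and object names here reuse the hypothetical ones above):

# Upload the image, then grant read access to everyone.
gsutil cp map.png gs://namebucket/folder1/folder2/map.png
gsutil acl ch -u AllUsers:R gs://namebucket/folder1/folder2/map.png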
The second possible solution is to do the same, but also create a really simple HTML page: basically a list of links to the files in the bucket, where each time you upload a file to the bucket you update the HTML file. At least that solves the issue of knowing the names, and you can navigate it a bit.
<html><body>
<a href="https://storage.googleapis.com/namebucket/folder1/folder2/nome_file">This is a link</a>
</body></html>
If you need to expose the resources to more people, or you would like something nicer graphically, you will have to spend more time and build a decent frontend; there are thousands of different approaches you could follow.
P.S.
Documentation regarding uploading a file to a bucket.
Documentation regarding managing access to stored files.
Notice that the browser behaves differently depending on the extension of the file you share: a .txt or a .jpg is displayed, while an .exe is downloaded.
Each time I type something in the WebStorm IDE, Meteor rebuilds, even if I don't save the files. My computer quickly becomes noisy and hot.
I am looking for a way to prevent Meteor from watching WebStorm temp files.
Does this happen when debugging your code? If so, this must be a Live Edit issue. Try disabling the Live Edit plugin - does it help?
Look at the Synchronization section of the System Settings page in their documentation.
Here's what it says:
Synchronization

Synchronize files on frame or editor tab activation
If this check box is selected, all the files that were changed externally are reloaded from disk when you switch to WebStorm from a different application, or when you switch to their editor tab.

Save files on frame deactivation
If this check box is selected, all modified files are auto saved when you switch from WebStorm to a different application. Note that you cannot disable autosave completely by turning off this and the following option. See Saving and Reverting Changes.

Save files automatically if application is idle for N seconds
If this check box is selected, all modified files are auto saved at regular time intervals. See also Saving and Reverting Changes.

Use "safe write" (save changes to a temporary file first)
If this check box is selected, a changed file is first saved in a temporary file. If the save operation succeeds, the file being saved is replaced with the saved file. (Technically, the original file is deleted and the temporary file is renamed.) Also, the ownership of such a file changes. If this check box is not selected, the ownership of a file does not change, but all the advantages of safe write will be lost.
Try unchecking those checkboxes and see if that prevents some saves.
I had a similar problem, in that WebStorm was constantly saving edits to Jade files, even after typing just one character, which triggered a gulp watcher. Disabling the Live Edit plugin and unchecking these boxes worked for me. Since disabling Live Edit didn't work for you, maybe unchecking one or more of these boxes will fix it.
Check this Meteor-specific WebStorm help article. There is a detailed description of how to configure Live Edit and code updating, and how that works together with Meteor.
I'm trying to implement a kind of private 'cloud store' for my web application. Let me explain: I have 'reports' (files containing queries etc.) which can be installed on a client PC. Normally, we e-mail the files to end users whenever we create new ones, and they manually import them into the app (using a standard file upload in any browser).
Now we want to take it a step further and create a page which will pull a list of files from our site, e.g. www.me.com/reports. The app will go through the list, compare it to the reports installed, and display new ones, updated ones, etc. An end user could then just click a button and the files would be downloaded to the server and installed.
I'm trying to avoid writing any web server code; I'd prefer to just create a Windows-authenticated virtual directory that allows file listing (or something close to this). I'm thinking maybe some JavaScript that will silently download the file to the client, then upload it back to the intranet IIS server, all without user interaction. Is this even possible?
I'd like to get anyone's thoughts on how something like this could be implemented, and what pitfalls I should watch out for.
Thanks
JK
I've been messing around a bit with various solutions to what I would see as a fairly common problem, but I've not yet been able to solve it in a satisfactory way.
What I wish to achieve is some kind of functionality where a user can upload new files, or select existing files to reuse them.
What I've been using so far is a combination of the filefield, filefield_sources, imce and ckeditor modules. I guess ckeditor isn't really important for the solution, but I need to be able to embed images from the archive somehow, and this is done with IMCE. Since I do not want everything to be accessible from the file browser, I created a subdirectory and set full access to it in the IMCE settings; let's call it default/files/site.
This worked fine as long as all file handling was done through IMCE, but when I uploaded files directly from the filefield, my files ended up in the default/files root. So I set up folders for my fields, for example default/files/site/movies for a field that allowed the .flv format. This worked fine too, as long as I didn't try to access the files through IMCE. It appears the folders created by filefield are not accessible from IMCE?
I'm also in a position where I need to support large uploads (200MB+), but from my experience in other projects, allowing file uploads through FTP is usually a life-saver, but from what I understand IMCE won't support files not uploaded through Drupal in some way, since they are not present in the database (giving the message: The selected file could not be used because the file does not exist in the database.)
I'm aware that I don't really have a clear question to my problem, but somehow I need to figure this out pretty fast. How would I preferably solve this? I'm aware that I'm not the first to have this problem, but I have not yet been able to find a nice and stable solution. What am I missing?
Also check this thread (http://drupal.org/node/438940) and the reference to John Locke's work at: http://www.freelock.com/blog/john-locke/2010-02/using-file-field-imported-files-drupal-drush-rescue
Well, I'm not personally familiar with IMCE off the top of my head, but if you need files that have been uploaded via FTP to be added to the files table, then my impulse would be to write a small module which would allow the user to click a button and start off a batch process. (This is me assuming that you are using Drupal 6, as the batch API doesn't exist in 5.)
Said batch process would then iterate over all of the files in the appropriate directory, which I would assume you had uploaded the files to, use file_copy() (from Drupal's file API) to copy the files to default/files/site, and then would add said files to the files table, which is actually quite simple with drupal_write_record().
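A rough sketch of that per-file step might look like this (Drupal 6; file_copy(), file_directory_path() and drupal_write_record() are the real core functions, while the source path is a stand-in for whatever your directory scan returns, and the target directory is assumed to exist):

<?php
// Sketch: copy one FTP-uploaded file into default/files/site and register it.
$source = $ftp_path; // hypothetical: one entry from scanning the FTP upload directory
file_copy($source, file_directory_path() . '/site', FILE_EXISTS_REPLACE); // updates $source to the new path
$file = new stdClass();
$file->uid = 1; // hypothetical owner
$file->filename = basename($source);
$file->filepath = $source;
$file->filemime = file_get_mimetype($source);
$file->filesize = filesize($source);
$file->status = FILE_STATUS_PERMANENT;
$file->timestamp = time();
drupal_write_record('files', $file);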
It might not even need to use the batch api - it somewhat matters if you're just uploading 10-30 really big files, or 200-300 MB files.
For using the batch API, I'd look at http://drupal.org/node/180528 - this has a fairly basic example of how the batch API works, which basically consists of telling the API that you want to keep calling function_a, and then inside of function_a setting your progress in the context array until you're done, at which time the batch process finishes. Then you just have whoever uploads the files via FTP hit a button on the website to move and register the files.
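A skeleton of that shape might look like this (the function names are hypothetical; batch_set() and the $context array are the real Batch API pieces):

<?php
// Sketch: a Drupal 6 batch that re-runs one function until it reports done.
function mymodule_start_import() {
  batch_set(array(
    'operations' => array(array('mymodule_import_step', array())),
    'finished' => 'mymodule_import_finished', // hypothetical completion callback
  ));
}

function mymodule_import_step(&$context) {
  // Move/register a few files per pass, tracking position in $context['sandbox'];
  // set $context['finished'] to a fraction < 1 to be called again, or 1 when done.
  $context['finished'] = 1;
}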