I am trying to use TiddlyWiki (version 5.1.10) together with QWebView in PyQt (version 4.10.2).
I am able to load the TiddlyWiki page with QtCore.QUrl.fromLocalFile("C:\\path\\to\\tiddlywiki\\empty.html"), but unfortunately my changes aren't saved.
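For context, a minimal sketch of this kind of loading code (assuming PyQt4 with the QtWebKit module; the path is a placeholder):
Code:
import sys
from PyQt4 import QtCore, QtGui, QtWebKit

app = QtGui.QApplication(sys.argv)
view = QtWebKit.QWebView()
# Load the local TiddlyWiki file (the path is a placeholder)
view.load(QtCore.QUrl.fromLocalFile("C:\\path\\to\\tiddlywiki\\empty.html"))
view.show()
sys.exit(app.exec_())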
I already tried to place TiddlySaver.jar in the same directory as the TiddlyWiki file, but that doesn't change anything.
Does somebody know if it's possible to use TiddlyWiki together with QWebView?
As nobody has answered here, I want to quickly respond with my "solution". In the end, I couldn't get TiddlyWiki to work in PyQt4. I researched it a little bit and it seems that it isn't possible with QWebView (correct me if I'm wrong).
Therefore, I looked for alternatives to TiddlyWiki. My requirement was that it shouldn't require any installation; that's why I decided to use TiddlyWiki in the first place. After some googling, I found DokuWiki on a Stick. It's a pretty nice-looking wiki that doesn't require any installation. You can download the wiki with or without a web server (DokuWiki on a Stick provides MicroApache, a really small Apache web server binary that doesn't need an installation). The cool thing is that you can bundle the MicroApache web server with your PyQt application: when your application starts, you start the web server locally (with QProcess or something similar) and connect to the wiki.
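A minimal sketch of that idea, assuming PyQt4, a bundled MicroApache executable at a relative path like microapache/httpd.exe, and the server listening on port 8800 (the executable path, port, and URL are placeholders to adapt); in practice you may also want to wait until the server is up before loading the page:
Code:
import sys
from PyQt4 import QtCore, QtGui, QtWebKit

app = QtGui.QApplication(sys.argv)

# Start the bundled MicroApache web server (executable path is a placeholder)
server = QtCore.QProcess()
server.start("microapache/httpd.exe")

# Connect to the locally running DokuWiki (port and URL are placeholders)
view = QtWebKit.QWebView()
view.load(QtCore.QUrl("http://127.0.0.1:8800/doku.php"))
view.show()

# Stop the web server again when the application quits
app.aboutToQuit.connect(server.kill)

sys.exit(app.exec_())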
My problem is that I can't use RStudio at my workplace, as IT does not support it. I want to use R and RStudio, installed on my personal laptop, from my company laptop (using a modern browser behind the firewall). I am thinking of two options:
Should I build a Docker image for R and RStudio (I see base images are already available)? I am mostly interested in basic R and dplyr (plus haven, xporter, and reticulate).
Or should I use Binder? I am not a technical person and my programming skills are very limited; can anyone suggest a way?
What exactly is the difference between using the Docker option vs Binder?
I know I can use RStudio online and get my work done, but with the new paid account I am running out of project hours and it is sometimes very slow. Thanks in advance.
Here are some examples beyond the modern RStudio MyBinder example:
https://github.com/fomightez/pythonista_skewedf
https://github.com/fomightez/r_phylogenetics_worshop
https://github.com/fomightez/chapter7/tree/master/binder
The modern RStudio MyBinder example has been set up as a template on GitHub, so you can use it as the starting point for your own repository.
The first one is for a special use of a package not on conda. And I started that one from square one.
The other two were converted from content by others to aid in making them Binder-ready.
You essentially list everything you need from conda in the environment.yml along with the appropriate channels. If you need special stuff not on conda, you need the other configuration files included there.
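For example, an environment.yml covering the packages mentioned in the question might look roughly like this (illustrative only; the exact package names available on the conda channels should be verified):
Code:
# channels to pull packages from
channels:
  - conda-forge
  - defaults
# R itself plus the R packages, as packaged for conda
dependencies:
  - r-base
  - r-dplyr
  - r-haven
  - r-reticulate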
Getting everything working can take some iterations of adding things, letting the image get built, and testing that your libraries are available, although your situation, as you describe it, doesn't sound overly complex.
The Binder launch badges you see are just images wrapped in a URL that points the MyBinder federation site at your repository. Look at the URL and you should see the pattern: the part pointing at your repo, with ?urlpath=rstudio appended at the end to open RStudio. The form at the MyBinder.org site can help with this; however, it is often easier to just adapt the code of a working launch badge copied from elsewhere, since the form isn't set up at this time to make URLs that launch straight into RStudio.
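For example, for a repository hosted on GitHub, the launch URL and the badge markdown you would paste into a README typically follow this pattern (the user, repo, and branch names are placeholders):
Code:
https://mybinder.org/v2/gh/<github-user>/<repo>/<branch>?urlpath=rstudio

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/<github-user>/<repo>/<branch>?urlpath=rstudio)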
Download anything useful you create in a running session. The sessions time out after 10 minutes of inactivity, although an open RStudio session usually keeps them active.
Lack of persistence and limited memory, storage, and compute power can be drawbacks. The inherent reproducibility and portability are advantages.
MyBinder.org doesn't work with private repos. If you have code you don't want to share, you can upload it into the temporary session and use the repo only for specifying the environment. You could host a private BinderHub that does allow the use of private git repositories; however, that is probably overkill for your use case and exceeds your ability level at this time.
GitHub isn't the only place to host repositories that can be pointed at the MyBinder system. If you go to the MyBinder.org page and click where it says 'GitHub' on the left side of the top line of the form, you can see a list of the sources at which you can host a repository and point the system to build an image and launch a container with that specified image.
Building the image from a source repository takes some minutes the first time. Once the image is built on the service, though, launch is typically less than 30 seconds. Each time you make a change to the source repo, a build is necessary. Some changes don't cause the new build to take as long as the initial one, as some optimizing is done to only rebuild what is necessary after a change. Keep in mind there are several members of the federation around the world, and if your traffic gets sent to a member where the built image isn't yet available, it will be built from scratch there first.
The Holepunch project is out there to offer some help for users working in the R ecosystem; however, with the R-Conda system that is now integrated into MyBinder, it is pretty much as easy to do it the way I described. Last I knew, the Holepunch route makes a Dockerfile that isn't as easy to troubleshoot as the current R-Conda route. Dockerfiles are essentially a last-ditch configuration file that MyBinder can handle; the other configuration files are much easier and don't require knowing Dockerfile syntax. MyBinder aims to offer the ability to take advantage of Docker containers with a specified environment without users needing to know anything about Docker.
There is a Binder Help category for posting to get help at the Jupyter Discourse Forum. Some other examples of posts already there may help you troubleshoot.
Notice of a common pitfall
Most of the configuration files for making a repository Binder-ready are simply text and can be edited right in the GitHub browser interface, without needing git or even cloning the repo locally.
Last I knew, there are two exceptions to this. The postBuild and start configuration files have file permissions that allow them to be run as scripts, and these get altered so the files no longer work if you edit them via the GitHub browser interface. (This was my experience when I last tried; your mileage may vary or things may have changed by now.) To edit those, you need git available on a machine of your own: pull a copy from some other source, edit it locally, add it to your repo, and push it back up from your local computer.
(If this is a problem, you can post in the Binder help category of the Jupyter Discourse Forum, and you and I could coordinate: I fork your repo, edit those files to your specifications, and then make a pull request so you can update your repository with those changes.)
If you are using Jupyter notebooks extensively, then it may make sense to use Binder.
But if you simply want to use R and RStudio, then all you need is Docker. A good resource is:
https://github.com/rocker-org/rocker
I've got a Qt application running on a Red Hat 6.5 server and displayed on another Red Hat 6.5 server's X display, with Openbox as the window manager.
I want to automate GUI tests, so I chose LDTP (maybe not the best choice; I'm open to suggestions). LDTP works through the accessibility framework (assistive technology for disabled people).
My problem is that I can't manage to activate at-spi-registry under Openbox. When I go back to the GNOME desktop, I manage to do it and LDTP works fine, but that is not what I want.
Can anyone help me?
Thank you.
I finally managed to understand where the mistake was: at-spi-registry needs (in my case) a gnome-session to work properly. There may be a way for Openbox to simulate such a session, but I didn't figure it out, and even if it were possible it couldn't be done here, because I had to keep the testing environment identical to the production one.
So I switched to another GUI testing tool, named SikuliX. It works perfectly on my platform; it uses OpenCV for image recognition instead of the accessibility framework.
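For anyone curious, a SikuliX script is essentially Jython driven by screenshots; a minimal sketch looks something like this (the image file names are placeholders for screenshots you capture of your own UI):
Code:
# SikuliX (Jython) script: locate UI elements by screenshot and interact with them
wait("main_window.png", 10)       # wait up to 10 seconds for the application window
click("settings_button.png")      # click the widget matching this screenshot
type("some text")                 # type into whatever widget now has focus
assert exists("expected_result.png")   # check that the expected state appeared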
I'm trying to learn about the guts of Unix right now, mostly through experimentation. When I was first starting, I found myself looking through forum posts, copying and pasting bash code. When I broke something, I often had to do a fresh install because I couldn't remember what exactly I had changed where. Now, the simple solution is to record a log of all the system files I've changed and keep original copies of all the default files so I can revert if necessary. It would be great if there were a command-line tool which did this for me automatically. It would be even better if I could step back through changes. Basically, I'm looking to version control my entire OS.
Does anything like this exist? I would also accept alternative strategies for spelunking through Unix without causing permanent damage if you think I'm going about this wrong.
Using Debian, if it matters.
So I've recently set up a LEMP server and have managed to work my way through some of the configuration. I'm now at the point where I can begin writing PHP scripts and building basic pages. Looking at the php5-fpm wiki, there aren't any pages discussing changes I should expect as far as PHP scripts are concerned, only installation/configuration settings.
Is everything beyond the installation/configuration steps business as usual? From the point of view of a PHP developer, what changes should I expect or make? How can I best take advantage of the FPM version (in the PHP code, not module/system configuration)? I'm focused on comparing well-written PHP in both cases.
When I made the switch myself, I got to know a few quirks and perks of this kind of setup: APC file upload progress does not work out of the box (and you're better off using something else, such as nginx's upload progress module and/or the JS File API); some header names might have changed (prepended with HTTP_); and there is a new and very useful function called fastcgi_finish_request().
For more information, though, look around the PHP-FPM Manual.
The only major gotcha I can think of is that some functions in the pcntl extension, such as pcntl_fork, are not supported when running under FPM. (However, they're not supported under mod_php either, so this shouldn't come as too much of a surprise.)
We have a website that we are planning to distribute on a device. It is basically a big website with lots of pictures and information, already built using some Flash and JavaScript. I am thinking of using Ubuntu for this. My plan is to install Ubuntu (Server, maybe!) without a graphical environment (GNOME, KDE, etc.) and start a browser like Firefox using an X server. I have already tried this using
Code:
xinit firefox
It works and loads Firefox fine. I am also thinking of building a simple UI that will be launched at startup. This UI will have a button to start this website and maybe other programs.
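To give an idea of what I mean, here is a minimal sketch of such a launcher, written in PyQt just for illustration (the URL and the program name are placeholders):
Code:
import sys
from PyQt4 import QtCore, QtGui

app = QtGui.QApplication(sys.argv)
window = QtGui.QWidget()
layout = QtGui.QVBoxLayout(window)

# Button that opens the local website in the default browser (URL is a placeholder)
site_button = QtGui.QPushButton("Open website")
site_button.clicked.connect(
    lambda: QtGui.QDesktopServices.openUrl(QtCore.QUrl("http://localhost/")))
layout.addWidget(site_button)

# Button that launches some other local program (command is a placeholder)
other_button = QtGui.QPushButton("Other program")
other_button.clicked.connect(
    lambda: QtCore.QProcess.startDetached("some-other-program"))
layout.addWidget(other_button)

window.showFullScreen()
sys.exit(app.exec_())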
I hope I made myself clear.
I would like to know what you guys think about this. Does it sound feasible? Do you think it's a good idea to do it this way? Do you have any suggestions?
In terms of licensing, I don't understand it well. I know Ubuntu is licensed mainly under the GNU GPL and I know it is open source. I know that you are required to make any modifications available. However, I am not sure if that includes the source code for the website or any other proprietary application that I create and include. My understanding is that you only need to open-source changes made to the OS itself, but not any configuration done after it has been installed.
What about Qt, which is licensed under the GNU LGPL v2.1? Do I need to release the code for the UI I make, or only the code for any changes made to Qt itself?
Thanks in advance to anyone who reads this. I have read a lot on this but I am not so sure I got it right. I would like to know if I am at least on the right path.
Any help will be appreciated.
Ubuntu is GPL - if you make any changes to the Ubuntu (or rather Linux) kernel itself, then you have to offer those changes to anyone you distribute Ubuntu to - that has nothing to do with any applications or data you use on the operating system.
Qt is LGPL - you can use Qt to make any application you want without releasing anything about your application. You only have to release any modifications you make to the Qt source code yourself - which you are unlikely to do.
You don't need Qt for any of this, you can have a browser run full screen at startup in Ubuntu (or any other linux), and you can have a simple start page which will also start other local apps with just html - this may be a lot easier.
There are also "kiosk modes" for most browsers which limit what features and tool bars are present so you can prevent users quitting the browser or loading/saving other data.
Finally, check out Xubuntu - it's a version of Ubuntu with X but without GNOME or KDE.
IANAL, but with LGPL you can dynamically link to Qt and not be required to license your own sources under LGPL.
The general rule of thumb is that your end user should be able to take the code of the LGPLed component, make modifications to it, and have your proprietary code work with it. This also means you can link statically to LGPLed code if you provide at least the object files of your own code, so it can be relinked.
For Linux I suspect the answer is yes as well, but I can't say anything specific.