I opened a file through Python, then ran lsof on the Python process. The lsof output contains the following line:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 15855 inaflash 3w REG 0,25 0 4150810088 /home/inaflash/he.txt
The thing is, it shows 3w, which means the file is open for writing. But I actually opened the file as follows:
a = open('he.txt','r')
I read that w means the file is open for writing. Can anyone help me understand why it's w instead of r?
I tried the same code in Python 3 and my file is opened in read mode.
Are you sure the file shown by lsof is the same one you opened, and from the same Python process?
Maybe you forgot to close the file somewhere in your code after opening it in write mode.
Edit: Also tried in Python 2, same result (read mode)
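If you want to reproduce the check yourself, here is a minimal sketch (he.txt is the file from the question and must already exist): keep the file open, print the process ID, and run lsof -p <pid> from another terminal while the script sleeps. The FD column should end in r for a descriptor opened read-only and in w for one opened for writing.
import os
import time

f = open('he.txt', 'r')        # opened read-only, as in the question
print("pid:", os.getpid())     # inspect with: lsof -p <pid>
time.sleep(60)                 # keep the descriptor open while you look at it
f.close()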
I'm running a udev rule on my 3D printing server to automatically create easily identifiable symlinks to some attached microcontroller boards, which worked perfectly fine on Ubuntu 20.04.
The rule triggers on the USB vendor and product IDs and runs a Python script via the PROGRAM directive. The script connects to the microcontroller board and reads its init sequence to get the board's 'name'. It then outputs a string like "aaaaaaa b cccccc", and only the first block (containing the name) is used in the udev rule.
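For context, the script is essentially of this shape (a simplified, hypothetical sketch; the real script does more validation over the serial connection):
#!/usr/bin/python3
# Hypothetical sketch of serialUdev.py: read the board's name over the serial
# port and print "name serial device" on stdout, so the udev rule can pick up
# the first token via %c{1}. Baud rate and output format are assumptions.
import argparse
import serial  # pyserial

parser = argparse.ArgumentParser()
parser.add_argument("-s", dest="serial_no")
parser.add_argument("device")
args = parser.parse_args()

with serial.Serial(args.device, 115200, timeout=2) as port:
    name = port.readline().decode(errors="replace").strip() or "unknown"

print(f"{name} {args.serial_no or 'nodata'} {args.device}")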
However, it seems like the whole PROGRAM directive is not executed at all anymore since I updated my system to Ubuntu 22.04.1.
My udev rules file currently looks like this (while debugging; normally it contained only lines 1 and 3, and I added line 2 for testing because the hook in line 1 works and that script is executed):
KERNELS=="ttyUSB*", ENV{ID_VENDOR_ID}=="0403", ENV{ID_MODEL_ID}=="6001", ENV{ID_SERIAL_SHORT}!="AI046A0Q", ACTION=="add|remove", RUN="/bin/su me -c \"/opt/me/deviceReg.py -d %k -a %E{ACTION}\""
KERNELS=="ttyUSB*", ENV{ID_VENDOR_ID}=="0403", ENV{ID_MODEL_ID}=="6001", ENV{ID_SERIAL_SHORT}!="AI046A0Q", ACTION=="add|remove", PROGRAM="/opt/me/serialUdev.py -s %s{serial} /dev/%k", SYMLINK+="%c{1}", OWNER="me", GOTO="script_end"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}!="AI046A0Q", PROGRAM="/opt/me/serialUdev.py -s %s{serial} /dev/%k", SYMLINK+="%c{1}", OWNER="me", GOTO="script_end"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A9QXPRV7", SYMLINK+="tty_MainSwitch", GROUP="dialout", OWNER="me", GOTO="script_end"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A9QOIMJ6", SYMLINK+="tty_Cooler", GROUP="dialout", OWNER="me", GOTO="script_end"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A9PTMHGV", SYMLINK+="tty_CurrentTransformer", GROUP="dialout", OWNER="me", GOTO="script_end"
The Python scripts write to some logfiles, which clearly indicate that only lines 1 and 4, 5, or 6 are executed.
Is there anything in line 3 that isn't supported anymore in the latest udev version? As I said, line 3 worked perfectly before I updated the system.
The last 3 lines are my current workaround. They work fine, but that's not what I want to achieve with this whole naming system at all.
The Python script in lines 2 and 3 runs perfectly fine, whether called as a standard user or as root. It also delivers valid output if the -s input data does not match the microcontroller board, is missing, or is random garbage.
Does anyone have an idea why any rule with a PROGRAM statement is skipped?
Ok, I was able to solve the issue.
I set udev's log level to debug to see what's actually happening when the device is handled. The script actually IS invoked, but it immediately fails while importing the needed modules: the pyserial module could not be found.
The module is installed, but obviously in a way that it could not be imported.
However, I checked the Python script again and changed the first line from #!/usr/bin/env python3 to #!/usr/bin/python3, and now it works again.
So my problem actually wasn't related to udev at all; it was just my Python script's shebang.
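In case it helps anyone hitting the same symptom, here is a minimal diagnostic sketch (not part of the original rules): have the script report which interpreter it runs under and whether pyserial is importable before doing any real work. With udev's log level set to debug, output written to stderr should show up in udev's debug output.
#!/usr/bin/python3
# Diagnostic sketch: report which interpreter actually ran the script and
# whether pyserial can be imported from that interpreter's environment.
import sys

print("interpreter:", sys.executable, file=sys.stderr)
try:
    import serial  # pyserial
    print("pyserial:", serial.__version__, file=sys.stderr)
except ImportError as exc:
    print("pyserial import failed:", exc, file=sys.stderr)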
I have a RapidMiner Server running on a VM (Ubuntu) on top of my Win10 machine.
I have a process to read a .csv file and write its contents on a MySQL database on a MySQL Server which also runs on the same VM.
The problem is that the Read CSV operator does not seem to be able to find the file.
Scenario 1:
When I use ../data/myFile.csv as the file location in the Read CSV operator
and run the process on the Server, I get: Failed to execute initialization process: Error executing process /apps/myApp/process/task_read_csv_to_db: The file 'java.io.FileNotFoundException: /root/../data/myFile.csv (No such file or directory)' does not exist.
Scenario 2:
When I use /apps/myApp/data/myFile.csv as the file location in the Read CSV operator
and run the process on the Server, I get: Failed to execute initialization process: Error executing process /apps/myApp/process/task_read_csv_to_db: The file 'java.io.FileNotFoundException: /apps/myApp/data/myFile.csv (No such file or directory)' does not exist.
What is the right filepath that I should give to the Read CSV operator?
Just to update with the answer: following David's suggestion, I ended up storing the .csv file outside of /rapidminer-server-home/data/repository, since every remote repository seems to be represented by an integer instead of its original name, which makes the actual full path of the file unusable.
I would say the issue is that, depending on the location of the JobAgent that is executing your process, the relative path may vary.
Is /apps/myApp/data/myFile.csv the correct path to the file? If not, I would suggest using the absolute path to the file. Hope this helps.
Best,
David
So I am looking at my professor's code, which he handed out to give us an idea of how to implement >, <, and | support in our Unix shell. I ran his code and was amazed at what actually happened.
if( pid == 0 )
{
close(1); // close
fd = creat( "userlist", 0644 ); // then open
execlp( "who", "who", NULL ); // and run
perror( "execlp" );
exit(1);
}
This created a userlist file in the directory I was currently in, with the "who" data inside that file. I don't see where any connection between fd and execlp is being made. How did execlp manage to put the information into userlist? How did execlp even know userlist existed?
Read Advanced Linux Programming. It has several chapters related to the issue, and we cannot explain all of this in a few sentences. See also the Wikipedia pages on standard streams and processes.
First, all the system calls your program makes (see syscalls(2) for a list, and read the documentation of every individual system call that you are using) should be tested for failure. But assume they all succeed. After close(1); the file descriptor 1 (STDOUT_FILENO) is free, so creat("userlist", 0644) is likely to re-use it, hence fd is 1; you have redirected your stdout to the newly created userlist file.
Finally, you are calling execlp(3), which will call execve(2). When successful, your entire process is restarted with the new executable (so a fresh virtual address space is given to it), and its stdout is still the userlist file descriptor. In particular, unless execve fails, the perror call is never reached.
So your code does roughly what a shell running who > userlist does: it redirects stdout to userlist and runs the who command.
If you are coding a shell, use strace(1), notably with the -f option, to understand which system calls are made. Try also strace -f /bin/sh -c ls to look into the behavior of a shell. Study also the source code of existing free software shells (e.g. bash and sash).
See also this and the references I gave there.
execlp knows nothing. Before execing, stdout was closed and a file opened, so the new descriptor is the one corresponding to stdout (open always returns the lowest free descriptor). At that point the process has a "stdout" plugged into the file. Then exec is called, and this replaces the whole address space, but some properties remain, such as the descriptors, so now the code of who is executed with a stdout that corresponds to the file. This is the way redirections are managed by shells.
Remember that when you use printf (for example) you never specify what stdout exactly is: it can be a file, a terminal, etc.
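To make the mechanism concrete, here is a minimal sketch of the same sequence of calls in Python (not the professor's code, just the equivalent steps): close fd 1, open the file so that it reuses fd 1, then exec, and the new program inherits the redirected stdout.
import os

pid = os.fork()
if pid == 0:
    os.close(1)  # free file descriptor 1 (stdout)
    # os.open returns the lowest free descriptor, so this reuses fd 1
    fd = os.open("userlist", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    os.execlp("who", "who")  # who inherits fd 1, so its output lands in userlist
else:
    os.waitpid(pid, 0)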
Basile Starynkevitch correctly explained:
After close(1); the file descriptor 1 (STDOUT_FILENO) is free. So creat("userlist",0644) is likely to re-use it…
This is because, as Jean-Baptiste Yunès wrote, "open always returns the lowest free descriptor".
It should be stressed that the professor's code only likely works; it fails if file descriptor 0 is closed.
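A more robust variant (just a sketch, not from the answers above) uses dup2 to force the file onto descriptor 1 regardless of which descriptor open happened to return:
import os

fd = os.open("userlist", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.dup2(fd, 1)      # make fd 1 refer to userlist, whatever number open returned
if fd != 1:
    os.close(fd)    # don't leak the extra descriptor into the new program
os.execlp("who", "who")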
I am currently working on a project that aims to find out what the system is doing behind a series of user interactions on the Android UI. For example, if the user clicks the send button in Facebook Messenger and the measured response time for that action is 1.2 seconds, my goal is to figure out what those 1.2 seconds consist of. My friend suggested that I take a look at 'Systrace'.
However, when I tried systrace on my HTC One M8, I encountered some problems:
First, error opening /sys/kernel/debug/tracing/options/overwrite - no such file or directory. I solved this problem by building ftrace support into the kernel following http://opensourceforu.com/2010/11/kernel-tracing-with-ftrace-part-1/ and running mount -t debugfs none /sys/kernel/debug. Then I could find the tracing directory. I also set ro.debuggable=1 in the default.prop file within the ramdisk and flashed the boot.img to my phone.
Now I encounter another problem: when I run python systrace.py --time=10 -o mynewtrace.html sched gfx view wm, the following error (19) pops up: error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19). I don't know whether the way I built kernel support for systrace is incorrect or whether something is missing.
Could anyone help me out with this problem, please?
I think I have worked out the solution. My environment is Ubuntu 16.04 + HTC One M8. The steps are as follows:
Open a terminal and enter: $adb shell
(1) $su (2) $mount -t debugfs none /sys/kernel/debug. Now you should be able to see many directories under /sys/kernel/debug/. (You may cd into /sys/kernel/debug to confirm this.)
Open a new terminal and enter: dd if=/dev/block/platform/msm_sdcc.1/by-name/boot of=/sdcard/boot.img to generate the boot.img kernel image from your device.
Use AndroidImageKitchen to unpack the boot.img and find default.prop within the Ramdisk folder. Then change ro.debuggable=0 to ro.debuggable=1. Repack the boot.img and flash it to your device.
Once the device boots, enter adb root in a terminal; a message like restarting adbd as root may pop up. Disconnect the USB cable and connect it again.
cd to the systrace folder, e.g. ~/androidSDK/platform-tools/systrace and use:
python systrace.py --time=10 -o mynewtrace.html sched gfx view wm
Now you should be able to generate your own systrace files.
I need some Powershell advice.
I need to install an application's MSP update file on multiple Win08r2 servers. If I run these commands locally, within the target machine's PS window, it does exactly what I want it to:
$command = 'msiexec.exe /p "c:\test\My Application Update 01.msp" REBOOTPROMPT=S /qb!'
invoke-wmimethod -path win32_process -name create -argumentlist $command
The file being executed is located on the target machine
If I remotely connect to the machine and execute the two commands, it opens two x64 msiexec.exe processes and one msiexec.exe *32 process, and just sits there.
If I restart the server, it doesn't show that the update was installed, so I don't think it's a timing thing.
I've tried creating and remotely executing a PS1 file with the two lines, but that seems to do the same thing.
If anyone has advice on getting my MSP update installed remotely, I'd be all ears.
I think I've included all the information I have, but if something is missing, please ask questions, and I'll fill in any blanks.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
My process for this is:
Read a CSV for server name and Administrator password
Create a credential with the password
Create a new session using the machine name and credential
Create a temporary folder to hold my update MSP file
Call a PS1 file that downloads the update file to the target server
>>> Creates a new System.Net.WebClient object
>>> Uses that web client object to download from the source to the location on the target server
Call another PS1 file that applies the patch that was just downloaded (this is where I'm having issues)
>>> Set the variable shown above
>>> Execute the file specified in the variable
Close the session to the target server
Move to the next server in the CSV…
If I open a PS window and manually set the variable, then execute it (as shown above in the two lines of code), it works fine. If I create a PS1 file on the target server containing the same two lines of code, then right-click > 'Run with PowerShell', it works as expected/desired. If I remotely execute my code in PowerGUI, it returns a block of text that looks like this, then just sits there. RDP'd into the server, the installer never launches. My understanding of the "ReturnValue" value is that "0" means the command was successful.
PSComputerName : xx.xx.xx.xx
RunspaceId : bf6f4a39-2338-4996-b75b-bjf5ef01ecaa
PSShowComputerName : True
__GENUS : 2
__CLASS : __PARAMETERS
__SUPERCLASS :
__DYNASTY : __PARAMETERS
__RELPATH :
__PROPERTY_COUNT : 2
__DERIVATION : {}
__SERVER :
__NAMESPACE :
__PATH :
ProcessId : 4808
ReturnValue : 0
I even added a line of code between the variable and the execution that creates a text file on the desktop, just to verify I was getting into my ‘executeFile’ file, and that text file does get created. It seems that it’s just not remotely executing my MSP.
Thank you in advance for your assistance!
Catt11.
Here's the strategy I used to embed an MSP into a PowerShell script. It works perfectly for me.
$file = "z:\software\AcrobatUpdate.msp"
$silentArgs = "/passive"
$additionalInstallArgs = ""
Write-Debug "Running msiexec.exe /update $file $silentArgs"
$msiArgs = "/update `"$file`""
$msiArgs = "$msiArgs $silentArgs $additionalInstallArgs"
Start-Process -FilePath msiexec -ArgumentList $msiArgs -Wait
You don't need to use the variables if you don't want to; you could hard-code the values. I have this set up as a function to which I pass those arguments, but if this is more of a one-shot deal, it might be easier to hard-code them.
Hope that helps!
Using Start-Process for an MSP package is not good practice, because some update packages lock down the PowerShell libraries, so you must use a WMI call instead.