I have an i.MX6 embedded system running Angstrom Linux.
By adding the kernel parameter init=/bin/sh to the boot arguments, I now have root access to the system, and I want to add/change a user.
But the root filesystem is mounted read-only.
cat /etc/fstab
rootfs / squashfs defaults 0 0
proc /proc proc defaults 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
tmpfs /tmp tmpfs defaults 0 0
/dev/mmcblk0p1 /appconfig ext2 defaults 0 0
/dev/mmcblk0p2 /media/kernel_1 ext2 noauto 0 0
/dev/mmcblk0p3 /media/kernel_2 ext2 noauto 0 0
/dev/mmcblk0p5 /media/rootfs_b f2fs noauto 0 0
/dev/mmcblk0p6 /media/rootfs_1 squashfs noauto,nofail,ro 0 0
/dev/mmcblk0p7 /media/rootfs_2 squashfs noauto,nofail,ro 0 0
/dev/mmcblk0p8 /var f2fs defaults 0 0
There are two kernels; currently kernel_1 with rootfs_1 is running.
I tried mount -o remount,rw / but this isn't working.
The mount command isn't working either:
mount: failed to read mtab: No such file or directory
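A minimal sketch of the usual workaround for the mtab error in an init=/bin/sh shell, assuming it is caused by /proc not being mounted yet (note that squashfs is a read-only filesystem by design, so the remount itself may still refuse read-write):
mount -t proc proc /proc     # mount needs /proc/self/mounts when /etc/mtab is missing or a symlink
mount -o remount,rw /        # retry the remount; a squashfs root will likely still stay read-only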
[ 1.500600] mmc0: SDHCI controller on 2198000.usdhc [2198000.usdhc] using ADMA
[ 1.541992] ledtrig-cpu: registered to indicate activity on CPUs
[ 1.560080] NET: Registered protocol family 10
[ 1.569834] sit: IPv6 over IPv4 tunneling driver
[ 1.581656] NET: Registered protocol family 17
[ 1.587361] Key type dns_resolver registered
[ 1.596955] ThumbEE CPU extension supported.
[ 1.601296] Registering SWP/SWPB emulation handler
[ 1.607158] mmc0: MAN_BKOPS_EN bit is not set
[ 1.614429] registered taskstats version 1
[ 1.630641] mmc0: new DDR MMC card at address 0001
[ 1.638019] Key type encrypted registered
[ 1.645244] mmcblk0: mmc0:0001 P1XXXX 3.60 GiB
[ 1.652696] input: gpio_buttons@0 as /devices/soc0/gpio_buttons@0/input/input0
[ 1.662385] mmcblk0boot0: mmc0:0001 P1XXXX partition 1 2.00 MiB
[ 1.668707] snvs_rtc 20cc034.snvs-rtc-lp: setting system clock to 1970-01-01 00:04:09 UTC (249)
[ 1.677753] mmcblk0boot1: mmc0:0001 P1XXXX partition 2 2.00 MiB
[ 1.700312] mmcblk0: p1 p2 p3 p4 < p5 p6 p7 p8 >
[ 1.806836] VFS: Mounted root (squashfs filesystem) readonly on device 179:7.
[ 1.818583] devtmpfs: mounted
[ 1.821894] Freeing unused kernel memory: 348K (c069f000 - c06f6000)
So I want to remount /dev/mmcblk0p6 (current root) as read-write.
Any idea?
So I can make calls, but I show as offline in the console. I don't get any notice in the console when I register. Why is that? Thanks.
pjsip.conf:
[transport-udp]
type=transport
protocol=udp
bind=0.0.0.0
[7000]
type=endpoint
context=from-internal
disallow=all
allow=g729
transport=transport-udp
auth=7000
aors=7000
[7000]
type=auth
auth_type=userpass
password=7000
username=7000
[7000]
type=aor
qualify_timeout=4.0
qualify_frequency=50
max_contacts=1
Cmd: pjsip show endpoints
Endpoint: 7000 Unavailable 0 of inf
InAuth: 7000/7000
Aor: 7000 1
Transport: transport-udp udp 0 0 0.0.0.0:5060
Cmd: pjsip show endpoint 7000
Endpoint: 7000 Unavailable 0 of inf
InAuth: 7000/7000
Aor: 7000 1
Transport: transport-udp udp 0 0 0.0.0.0:5060
I had to add allow_subscribe=yes ;)
Endpoint: 7000 In use 1 of inf
InAuth: 7000/7000
Aor: 7000 1
Contact: 7000/sip:7000@127.0.0.1:62210;ob a891149c2b Avail 1.464
Transport: transport-udp udp 0 0 0.0.0.0:5060
Channel: PJSIP/7000-00000001/Playback Up 00:00:03
Exten: 999 CLCID: "" <>
You can make outbound calls by using the SIP "Authentication required" response and authenticating after that.
For incoming calls you need registration: when your device registers, Asterisk records the IP/port pair to use for incoming calls. That pair can be anything if your device has no public IP (NAT).
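For reference, a few Asterisk CLI commands that are commonly used to check whether a registration actually arrived and created a contact (a sketch; exact output varies by Asterisk version):
asterisk -rvvv                  # attach to the Asterisk console
pjsip set logger on             # (at the CLI) log SIP traffic, including incoming REGISTER requests
pjsip show contacts             # contacts created by registrations against your AORs
pjsip show aor 7000             # check max_contacts and the currently bound contact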
I am trying to copy files from my Salt master to the minion using salt-ssh with the file.managed state, but it is giving the error below.
[salt]# pwd
/srv/salt
[salt]# ls -l
total 20
-rw-r--r-- 1 root root 80 Sep 4 21:37 copy.sls
-rw-r--r-- 1 root root 47 Sep 3 04:52 lftp_install.sls
-rw-r--r-- 1 root root 44 Sep 3 04:52 lftp_remove.sls
-rw-r--r-- 1 root root 124 Sep 3 04:50 lftp.sls
-rw-r--r-- 1 root root 65 Sep 3 04:53 Service_check.sls
[salt]# cat copy.sls
add script:
  file.managed:
    - name: testSalt
    - source: /root/testSalt
    - dest: /tmp/testSalt
Entries from /etc/salt/master
file_roots:
  base:
    - /srv/salt
[salt]# salt-ssh 'KK' state.apply copy.sls
KK:
- No matching sls found for 'copy.sls' in env 'base'
When I run this in debug mode, it gives the following:
[DEBUG ] Could not find file from saltenv 'base', 'salt://copy/sls.sls'
[DEBUG ] Could not find file from saltenv 'base', 'salt://copy/sls/init.sls'
[DEBUG ] LazyLoaded nested.output
KK:
- No matching sls found for 'copy.sls' in env 'base'
I noticed that if I put the file to be copied into the same directory where copy.sls exists (in my case /srv/salt), then it works.
# cat copy.sls
add script:
  file.managed:
    - name: /opt/testSalt
    - source: salt://testSalt
    - user: root
    - mode: 644
salt-ssh '*' state.sls copy
KK:
----------
ID: add script
Function: file.managed
Name: /opt/testSalt
Result: True
Comment: File /opt/testSalt updated
Started: 07:19:44.332582
Duration: 20.366 ms
Changes:
----------
diff:
---
+++
@@ -1,1 +1,2 @@
salt copy test
+second commit
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
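For completeness, the first attempt failed because file.managed sources must live under a file_roots path and be referenced with a salt:// URL, and state.apply/state.sls take the SLS name without the .sls extension. A small sketch, using a hypothetical files/ subdirectory under /srv/salt:
mkdir -p /srv/salt/files
cp /root/testSalt /srv/salt/files/testSalt
# in copy.sls, reference the file as:  - source: salt://files/testSalt
salt-ssh 'KK' state.sls copy     # 'copy', not 'copy.sls'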
I just found DalekJS and tried out their "Getting Started" guide.
I use it together with Grunt, but I get this message after running "grunt dalek":
Running "dalek:dist" (dalek) task
Fatal error: connect ECONNREFUSED
I have this included in my Gruntfile.js:
dalek: {
options: {
browser: ['phantomjs']
},
dist: {
src: ['tests/test.js']
}
}
And my tests/test.js looks like this:
module.exports = {
'Page title is correct': function (test) {
test
.open('http://google.com')
.assert.title().is('Google', 'It has title')
.done();
}
};
If I run "dalek tests/test.js" I get this message:
ERROR: dalek-browser-phantomjs: Could not start Ghost Driver
Any ideas? I already tried removing dalek and phantomjs and installing them again.
You have an old process running; find it and kill it:
$ ps -lA | grep dalek
501 65879 225 4006 0 31 0 3053832 44328 - T 0 ttys002 0:00.46 node /usr/local/bin/dalek test.js
501 65881 65879 400a 0 33 0 777876 50968 - T 0 ttys002 0:00.75 .../node_modules/dalekjs/node_modules/dalek-browser-phantomjs/node_modules/phantomjs/lib/phantom/bin/phantomjs --webdriver 9001 --ignore-ssl-errors=true
501 66135 225 4006 0 31 0 2432784 496 - R+ 0 ttys002 0:00.00 grep dalek
$ kill -9 65881
$ kill -9 65879
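Alternatively, a quick way to clean up any stale DalekJS/PhantomJS processes in one go (a sketch; adjust the patterns to your setup):
pkill -f dalek        # kill any leftover dalek runner processes
pkill -f phantomjs    # kill the orphaned PhantomJS WebDriver still holding port 9001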
I'm running a Linux image (kernel 3.2.8) for the BeagleBoard-xM on QEMU 1.4.0 from the Ubuntu 13.04 distribution. My image was created using Buildroot's beagle_defconfig, and I added some packages to be able to debug a little.
QEMU invocation:
`$ sudo qemu-system-arm -M beaglexm -m 1024 -sd ./test.img -clock unix -serial stdio -device usb-mouse -device usb-kbd -serial pty -serial pty`
[sudo] password for emperador:
char device redirected to /dev/pts/3 (label serial1)
char device redirected to /dev/pts/4 (label serial2)
What I want to do is to communicate between guest and host across the 4 different ttyO serial ports present on the guest. QEMU offers facilities to redirect this traffic to devices on the host side. My problem goes like this:
At guest kernel boot I can see that my UARTs were enabled:
[ 2.682040] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 2.777947] omap_uart.0: ttyO0 at MMIO 0x4806a000 (irq = 72) is a OMAP UART0
[ 2.794967] omap_uart.1: ttyO1 at MMIO 0x4806c000 (irq = 73) is a OMAP UART1
[ 2.814942] omap_uart.2: ttyO2 at MMIO 0x49020000 (irq = 74) is a OMAP UART2
[ 2.966825] console [ttyO2] enabled
[ 2.984777] omap_uart.3: ttyO3 at MMIO 0x49042000 (irq = 80) is a OMAP UART3
In fact, when I look in /proc/tty/driver and cat OMAP-SERIAL, I see this:
serinfo:1.0 driver revision:
0: uart:OMAP UART0 mmio:0x4806A000 irq:72 tx:0 rx:0 CTS|DSR|CD
1: uart:OMAP UART1 mmio:0x4806C000 irq:73 tx:0 rx:0 CTS|DSR|CD
2: uart:OMAP UART2 mmio:0x49020000 irq:74 tx:268 rx:37 RTS|CTS|DTR|DSR|CD
3: uart:OMAP UART3 mmio:0x49042000 irq:80 tx:0 rx:0 CTS|DSR|CD
I know that ttyO2 is working because my console is redirected to it. The thing is that running setserial on any of the ttyO devices gives the following output:
[root@enu driver]# setserial -a /dev/ttyO0
/dev/ttyO0, Line 0, UART: undefined, Port: 0x0000, IRQ: 72
Baud_base: 3000000, close_delay: 50, divisor: 0
closing_wait: 3000
Flags: spd_normal
The same goes with ttyO2.
I tried to apply some settings to the ttyO devices with setserial, but I always get the same message:
[root@enu ~]# setserial /dev/ttyO0 uart 8250
setserial: can't set serial info: Invalid argument
[root@enu ~]# setserial /dev/ttyO0 port 0x4806a000
setserial: can't set serial info: Invalid argument
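As a side note, setserial's UART-type settings don't really apply to the OMAP driver (as noted at the end of this post); stty is the more usual way to inspect or change line parameters on these ports. A small sketch, not part of the original session:
stty -F /dev/ttyO0 -a                 # show current line settings
stty -F /dev/ttyO0 115200 raw -echo   # hypothetical: 115200 baud, raw mode, no local echo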
Looking at the guest's /proc/tty/drivers, this is what we see:
/dev/tty /dev/tty 5 0 system:/dev/tty
/dev/console /dev/console 5 1 system:console
/dev/ptmx /dev/ptmx 5 2 system
/dev/vc/0 /dev/vc/0 4 0 system:vtmaster
sdio_uart /dev/ttySDIO 249 0-7 serial
acm /dev/ttyACM 166 0-31 serial
ttyprintk /dev/ttyprintk 5 3 console
OMAP-SERIAL /dev/ttyO 253 0-3 serial
serial /dev/ttyS 4 64-95 serial
pty_slave /dev/pts 136 0-1048575 pty:slave
pty_master /dev/ptm 128 0-1048575 pty:master
unknown /dev/tty 4 1-63 console
Basically I want to establish serial communication between the guest and the host, but the serial ports on the guest side don't seem to be configured properly.
/sys/class/tty shows that the tty drivers have been linked to serial devices.
As I showed above, only the OMAP UARTs have been initialized and attached to ttyO*. Note that the console is redirected to ttyO2 by the kernel config, but because I added -serial stdio, the console ends up on the terminal that invoked QEMU.
If I start with -serial pty instead of -serial stdio, I can reach the console in minicom by opening the pty created on the host side. Still, nothing happens on the other ptys created on the host side for the other ports.
On the host side I open /dev/pts/3 and /dev/pts/4 with minicom or by running cat on them.
On the guest side:
When I do echo "test" > /dev/ttyO0 (or 1 or 3), nothing happens; but when I do it on ttyO2, "test" appears on the console terminal (which is expected).
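For reference, the round-trip test being attempted looks roughly like this (a sketch; the pts numbers are whatever QEMU printed at startup):
# host side: watch and feed the pty QEMU created for serial1
cat /dev/pts/3 &
echo "host to guest" > /dev/pts/3
# guest side: the same on the UART that should be wired to serial1
cat /dev/ttyO0 &
echo "guest to host" > /dev/ttyO0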
Now when using any of the ttyS devices:
echo "test" > /dev/ttyS0
I get
-bash: echo: write error: Input/output error
I did some research on this error and found that it could be many things. But one thing I noticed was that no device besides the serial driver has been assigned to ttyS, and looking at /proc/tty/driver/serial we see this:
serinfo:1.0 driver revision:
0: uart:unknown port:00000000 irq:0
1: uart:unknown port:00000000 irq:0
2: uart:unknown port:00000000 irq:0
3: uart:unknown port:00000000 irq:0
Also, setserial -a /dev/ttyS0 confirms this:
/dev/ttyS0, Line 0, UART: unknown, Port: 0x0000, IRQ: 0
Baud_base: 0, close_delay: 50, divisor: 0
closing_wait: 3000
Flags: spd_normal
I managed to do serial communication with multiple ports using a grml image on an x86 architecture, so it seems my host side is fine.
If anyone has ever made something like this work before on QEMU -M beaglexm or any other ARM machine, I would gladly take any details on the VM used, the QEMU version and distribution, as well as the kernel details and image configs used.
I found what my problem was: QEMU isn't mapping the serial chardevs of the extra -serial pty options.
After running this invocation:
sudo qemu-system-arm -M beaglexm -m 1024 -sd ./test.img -clock unix -serial stdio -device usb-mouse -device usb-kbd -serial pty -serial pty -monitor pty
char device redirected to /dev/pts/5 (label compat_monitor0)
char device redirected to /dev/pts/7 (label serial1)
char device redirected to /dev/pts/10 (label serial2)
We can see that two extra serial ports were created, with labels serial1 and serial2.
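Those labels can also be checked from the QEMU monitor that was redirected to a pty above (a sketch; /dev/pts/5 is just the number printed in this particular run):
screen /dev/pts/5        # attach to the compat_monitor0 pty
(qemu) info chardev      # lists serial0/serial1/serial2 and where each is redirected
(qemu) info qtree        # shows which chardev each emulated device is wired to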
But if I look at the tree info
(qemu) info qtree
dev: omap_uart, id "uart4"
revision = 82
mmio_size = 4096
baudrate = 812500
chardev = uart4
irq 3
mmio 0000000049042000/0000000000001000
dev: omap_uart, id "uart3"
revision = 82
mmio_size = 4096
baudrate = 812500
chardev = serial0
irq 3
mmio 0000000049020000/0000000000001000
dev: omap_uart, id "uart2"
revision = 82
mmio_size = 4096
baudrate = 812500
chardev = uart2
irq 3
mmio 000000004806c000/0000000000001000
dev: omap_uart, id "uart1"
revision = 82
mmio_size = 4096
baudrate = 812500
chardev = uart1
irq 3
mmio 000000004806a000/0000000000001000
We clearly see that only the label serial0 was attached to a UART (the one set up to be the console). The other labels (serial1 and serial2) are nowhere to be found.
With the working grml image that jofel was kind enough to point me to, we see this:
dev: i440FX-pcihost, id ""
irq 0
bus: pci.0
type PCI
dev: PIIX3, id ""
addr = 01.0
romfile = <null>
rombar = 1
multifunction = on
command_serr_enable = on
class ISA bridge, addr 00:01.0, pci id 8086:7000 (sub 1af4:1100)
bus: isa.0
type ISA
dev: isa-serial, id ""
index = 2
iobase = 0x3e8
irq = 4
chardev = serial2
wakeup = 0
isa irq 4
dev: isa-serial, id ""
index = 1
iobase = 0x2f8
irq = 3
chardev = serial1
wakeup = 0
isa irq 3
dev: isa-serial, id ""
index = 0
iobase = 0x3f8
irq = 4
chardev = serial0
wakeup = 0
isa irq 4
All three serial labels were attached to a chardev.
Now I just have to ask a new question about how to make QEMU link those labels to my BeagleBoard UARTs.
I would also like to add that I think setserial did not output any info about the ttyO ports because it doesn't support OMAP UARTs; setserial ? shows what devices are supported. In the case of the ttyS ports, I think it's because the tty driver is registered but no UART type other than the OMAP UARTs is emulated for the BeagleBoard in QEMU.
Thanks a lot to everyone who took a look at this question, and especially to jofel.
Hi,
I am running a bi-directional 'iperf' test on an interface using my driver.
Steps to reproduce: run bi-directional I/O on one interface (the other interface is not active); a consolidated sketch of the commands follows the list.
Run iperf -c -P 8 -t 100000 -I 10 on DUT
Run iperf -c with the same parameters from the peer almost immediately afterwards (after the first 10 s of the above iperf send are over)
With 'iperf -s -w 256K' on both
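Here is that consolidated sketch, with hypothetical addresses 192.168.1.10 (DUT) and 192.168.1.20 (peer) standing in for the real ones, and -i used as the report-interval flag:
# on both the DUT and the peer: start the server side
iperf -s -w 256K
# on the DUT: start sending toward the peer
iperf -c 192.168.1.20 -P 8 -t 100000 -i 10
# on the peer, just after the DUT's first 10 s report: send toward the DUT
iperf -c 192.168.1.10 -P 8 -t 100000 -i 10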
The crash is not happening in the driver as such, but in the 'iperf' context. Here is the stack trace:
PID: 8855 TASK: f7036550 CPU: 0 COMMAND: "iperf"
#0 [c074bed0] crash_kexec at c0443233
#1 [c074bf14] die at c04064d3
#2 [c074bf44] do_page_fault at c062134b
#3 [c074bf94] error_code (via page_fault) at c0405abb
EAX: f5888100 EBX: 00000000 ECX: 00100100 EDX: 00200200 EBP: 00000001
DS: 007b ESI: f5888000 ES: 007b EDI: cb614000
CS: 0060 EIP: c05c4e94 ERR: ffffffff EFLAGS: 00010046
#4 [c074bfc8] net_rx_action at c05c4e94
#5 [c074bfe4] __do_softirq at c042aa65
--- <soft IRQ> ---
#0 [f281ac4c] do_softirq at c04073e5
#1 [f281ac58] do_IRQ at c04074d9
#2 [f281ac70] common_interrupt at c0405975
EAX: 39383736 EBX: f281af4c ECX: 00000428 EDX: 31303938 EBP: f378b042
DS: 007b ESI: f378b1c2 ES: 007b EDI: 09fdb448
CS: 0060 EIP: c04f1c07 ERR: ffffffba EFLAGS: 00000202
#3 [f281aca4] __copy_to_user_ll at c04f1c07
#4 [f281acb0] memcpy_toiovec at c05bfecc
#5 [f281acc4] skb_copy_datagram_iovec at c05c059b
#6 [f281acf4] tcp_rcv_established at c05ef40a
#7 [f281ad20] tcp_v4_do_rcv at c05f48c5
#8 [f281ad54] tcp_prequeue_process at c05e6bdd
#9 [f281ad5c] tcp_recvmsg at c05e90e2
#10 [f281ad9c] sock_common_recvmsg at c05bb1c4
#11 [f281adc0] sock_recvmsg at c05b8dc6
#12 [f281aea0] sys_recvfrom at c05ba6ab
#13 [f281af64] sys_recv at c05ba727
#14 [f281af80] sys_socketcall at c05bab52
#15 [f281afb8] system_call at c0404f44
EAX: ffffffda EBX: 0000000a ECX: b6ba2340 EDX: 00014268
DS: 007b ESI: 00000000 ES: 007b EDI: 09fbe630
SS: 007b ESP: b6ba2328 EBP: b6ba2378
CS: 0073 EIP: 004ad410 ERR: 00000066 EFLAGS: 00000293
crash>
The EIP at the time of the crash is net_rx_action+0xdd/0x19ca. I have compiled the kernel-2.6.18-238 sources (the source version of the OS the DUT is running) and ran 'objdump -S ./net/core/dev.o > dev_o_dmp' for ./net/core/dev.c, which contains the definition of net_rx_action(). In the 'dev_o_dmp' file, net_rx_action() has lots of inlined definitions and therefore does not exactly mirror the flow of the source file. In such a scenario, is it safe to add 0xdd to the base address of net_rx_action (say 0x32FF) => 0x33DC? That is, would 0x33DC be the offending instruction giving rise to the 'kernel paging request' crash?
Any tips/recommendations on how to go about debugging this problem would be of great help.
Unfortunately, or fortunately depending on your perspective, with high levels of optimization it is possible for the compiler to create assembly code for which the debug format cannot produce a reasonable mapping from C source lines to assembly instructions. Which cases run into this problem depends on the compiler, optimization level, debug symbol format, debug symbol level, and the code itself.
You have to assume that line numbers gained via this technique could be wrong. That being said, I use this technique frequently in my own kernel work and I have not had any problems yet (knocks on wood). Just remember that if you are faced with something that just makes no sense, you could have a bad line number.
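For what it's worth, rather than adding the offset by hand, a common approach is to let the debug info resolve symbol+offset to a source line (a sketch, assuming the kernel was built with CONFIG_DEBUG_INFO and that the matching vmlinux is at hand):
# map the faulting EIP straight to file:line, following inlined frames
addr2line -e vmlinux -f -i c05c4e94
# or do the symbol+offset arithmetic inside gdb, which also understands inlining
gdb vmlinux
(gdb) list *(net_rx_action+0xdd)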