"bad interpreter: Input/output error"
Jeff
atrocity at wywh.com
Tue Mar 12 14:52:14 UTC 2024
Update almost two months later:
I never could get this to work via SMB/CIFS and just got used to typing
out the commands the long way.
Yesterday I updated my Raspberry Pi 5, which I set up on February 24
and which had been working fine. After updating and rebooting I had the
SAME %$#*! problem with that box, yet an Ubuntu Server box, also fully
up to date, is still behaving normally.
I've changed the mounts on both of the trouble boxes to be via nfs
rather than cifs, and that seems to have...well, I was going to say
"fixed" the problem, but it's probably more fair to say it's "obscured"
the problem. I had resisted using nfs because the same share also needs
to be accessed on a Windows box or three and I know it's not a good idea
to use nfs and cifs at the same time. But I'm the one and only user, so
I'm just going to have to assume (!) that I can get away with it.
On the one hand, it sure looks like there's some subtle issue with
TrueNAS + cifs shares accessed via Ubuntu/Debian/Pi...on the other hand,
why is NO ONE ELSE having this issue? All I know for sure is that two
different boxes have now had exactly the same issue: They worked great,
they got updated and rebooted, they stopped working.
At this point I'm just going to live with it and hope it doesn't get any
weirder.
Thank you for your past help. Hopefully this won't bite anyone else.
Jeff
On 1/16/24 13:46, Jeff wrote:
> I'm sorry that I disappeared! I just couldn't spend more time on this
> for a while. Another group response below:
>
> On 1/14/24 12:10, Little Girl wrote:
>
> > I'd start by making sure the misbehaving computer is backed up. Then,
> > I'd do what Colin suggested and look at the output of the dmesg
> > command (you may need to use sudo for that).
>
> dmesg doesn't show anything I find obvious, though I did just try a
> umount and mount to see what it would show. I got:
>
> 2024-01-16T13:10:11,749983-08:00 evict_inodes inode 00000000446bab82,
> i_count = 1, was skipped!
> 2024-01-16T13:10:11,749991-08:00 evict_inodes inode 0000000071a4b4b6,
> i_count = 1, was skipped!
> 2024-01-16T13:10:11,749992-08:00 evict_inodes inode 000000004c4e8a55,
> i_count = 1, was skipped!
> 2024-01-16T13:10:12,058486-08:00 CIFS: Attempting to mount
> //192.168.1.12/WD8TBNAS03
>
> I have no idea what "evict_inodes" means, but the drive mounted
> despite there being no further message after the "Attempting" one.
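>
> For the record, the test was roughly this, with the mountpoint being
> the one from my fstab:
>
>     sudo umount /mnt/WD8TBNAS03
>     sudo mount /mnt/WD8TBNAS03
>     sudo dmesg | tail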
>
> > This page looks like it's got some good, solid advice for a series of
> > steps one can take in addition to that to sleuth input/output errors:
> >
> >
> > https://unix.stackexchange.com/questions/542554/got-input-output-error-when-execute-any-commands
>
> Thank you for that. It's a little overwhelming for me at the moment,
> but I'll keep it on hand. I'm also still struggling with the idea that it
> could be hardware when I have no other symptoms at all. But maybe I'm
> just in denial.
>
> It feels a bit like something incorrect is cached somewhere in
> cifs-land, but I say that grasping at straws as I wallow in complete
> ignorance of how this all works under the hood.
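>
> If it is a cache, the bluntest things I can think of to try would be a
> remount or dropping the kernel's caches, something like this (untested
> guesswork on my part):
>
>     sudo umount /mnt/WD8TBNAS03 && sudo mount /mnt/WD8TBNAS03
>     echo 3 | sudo tee /proc/sys/vm/drop_caches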
>
> On 1/14/24 12:53, Jon LaBadie wrote:
>
> > Did you continue moving the script down the rest of the directory
> > path on the NAS?
>
> Yes, sorry, I should have made that more clear.
>
> > Note, this is not "executing" the script from the NAS. Python only
> > needs to read, not "execute" the script file.
>
> Ah, yes. Obvious in hindsight but not something I'd actually thought
> through at the time.
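>
> In other words, if I understand it now, the difference is something
> like this (path made up for illustration):
>
>     python3 /mnt/WD8TBNAS03/test.py   # the local interpreter only reads the file
>     /mnt/WD8TBNAS03/test.py           # the kernel has to exec the file off the share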
>
> Here's an odd thing I found, though: I suddenly wondered if I could
> execute from ANY network storage, so I copied the test script to a few
> different places and tried to run it. Some shares would allow it, some
> would not. Curiously (but probably not meaningfully), I could
> successfully execute it from a different share that's actually located
> on the same physical hard drive as the one that isn't working. So
> whatever is happening is not a global block on any execution of a file
> served up over the network.
>
> I then unmounted the offending drive, created a new mountpoint and
> attempted to re-mount it there, but again execution failed. Of course,
> in that case I'm still accessing the same share as defined by TrueNAS.
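>
> (What I did, roughly, with the new mountpoint name made up here:
>
>     sudo umount /mnt/WD8TBNAS03
>     sudo mkdir /mnt/nastest
>     sudo mount -t cifs //192.168.1.12/WD8TBNAS03 /mnt/nastest -o username=<user>,uid=1000,gid=1000
>     /mnt/nastest/test.sh
>
> ...and the last command failed the same way.)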
>
> On 1/14/24 13:45, Karl Auer wrote:
>
> > Just to be super-pedantic, what I meant was something like this script:
>
> > #!/bin/bash
> > echo "Boo!"
>
> > named e.g. test.sh and stored on your local disk in e.g. /tmp, flagged
> > executable. Can you then run /tmp/test.sh? I would expect so.
>
> Yes.
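>
> That is:
>
>     chmod +x /tmp/test.sh
>     /tmp/test.sh
>     Boo!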
>
> > If you then put that script on the NAS, can you run it from there? I
> > would expect not.
>
> Exactly right, I can't.
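>
> The failure is the error in the subject line, e.g. (path illustrative):
>
>     $ /mnt/WD8TBNAS03/test.sh
>     bash: /mnt/WD8TBNAS03/test.sh: /bin/bash: bad interpreter: Input/output error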
>
> > About all I can suggest is doing an extremely careful comparison of
> > the mount configurations on the working and non-working systems.
>
> They are identical. And pretty vanilla, for that matter.
>
> In the course of trying to figure this out I discovered the findmnt
> command, but the results are identical across systems:
>
> /mnt/WD8TBNAS03 //192.168.1.12/WD8TBNAS03 cifs
> rw,relatime,vers=3.0,cache=strict,username=<user>,uid=1000,noforceuid,gid=1000,noforcegid,addr=192.168.1.12,file_mode=0755,dir_mode=0755,iocharset=utf8,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=5
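>
> (For anyone wanting to compare, that came from:
>
>     findmnt /mnt/WD8TBNAS03
>
> which prints TARGET, SOURCE, FSTYPE and OPTIONS by default.)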
>
> > While you are at it, check the CIFS configuration on the NAS.
>
> I opened up multiple share definitions (i.e., working and not working)
> in multiple tabs in the TrueNAS web interface and didn't see any
> differences.
>
> > Also check what users are doing the mounting and whether they are the
> > right users, because that is something that could differ between the
> > three clients. Make sure your user is not getting squashed to nobody,
> > guest or something by the CIFS server.
>
> It's all just me, so there's no way anyone else is doing something funny.
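>
> (If I did want to rule out squashing, I think the test would be to
> create a file from the client and then check its owner on the TrueNAS
> side, since the uid=1000,gid=1000 mount options mask the real ownership
> on the client:
>
>     touch /mnt/WD8TBNAS03/owner-test.txt
>
> ...then ls -l the dataset on the NAS itself. Guesswork on my part.)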
>
> On the one hand, I want to blame the computer where this is happening
> because it came about after updating it and rebooting. On the other
> hand, I want to blame TrueNAS because it's what's doing the serving
> (and I now know the executability is inconsistent across shares), but
> I haven't made changes there in a long time and hadn't even rebooted
> in weeks before this happened. I also want to blame random file
> corruption somewhere, but the SSD that holds the OS is running zfs.
> Then again, it's not mirrored, so I suppose there's a tiny chance of
> something getting mangled.
>
> I'm wondering if it's possible to simply purge cifs and re-install it,
> but that seems...less than optimal.
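>
> (Presumably something like:
>
>     sudo apt purge cifs-utils
>     sudo apt install cifs-utils
>
> ...though I haven't tried it, and cifs-utils may not even be where the
> problem lives.)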
>