#StackBounty: #linux #command-line #bash #terminal #xargs How to batch delete the redis keys from local mache and through a jump machin…

Bounty: 50

I need to delete some keys in my redis cluster which can only be accessed from a jump machine deployed in the kubernetes cluster.

So if I know the key, I can delete it with the following command without problems:

➜ kubectl exec -it jump-machine -- /usr/local/bin/redis-cli -c -h redis-cluster-host DEL "the-key"
(integer) 1

But if I try to do it in batch, the output is 0, which means nothing was deleted:

➜ kubectl exec -it jump-machine -- /usr/local/bin/redis-cli -c -h redis-cluster-host --scan --pattern "*the-key-pattern*" | xargs -L 1 kubectl exec -it jump-machine -- /usr/local/bin/redis-cli -c -h redis-cluster-host -c DEL

Unable to use a TTY - input is not a terminal or the right kind of file
0
Unable to use a TTY - input is not a terminal or the right kind of file
0
Unable to use a TTY - input is not a terminal or the right kind of file
0
Unable to use a TTY - input is not a terminal or the right kind of file
0
Unable to use a TTY - input is not a terminal or the right kind of file
0
Unable to use a TTY - input is not a terminal or the right kind of file
0
Unable to use a TTY - input is not a terminal or the right kind of file
0

I’m quite new to xargs, and I can’t tell what is wrong.

I tried to debug it with the following command, and it prints all the keys without issue:

➜ kubectl exec -it jump-machine -- /usr/local/bin/redis-cli -c -h redis-cluster-host --scan --pattern "*the-key-pattern*" | xargs -L 1 echo

the-key-pattern-1
the-key-pattern-2
the-key-pattern-3
...

Hope someone can shed some light on it, thanks in advance!
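A plausible cause (an assumption, not verified against the asker's cluster) is the -t flag on the inner kubectl exec: it requests a TTY, but that command's stdin is the xargs pipe, not a terminal. Dropping -t (keeping -i) usually silences the warning. The pipeline mechanics can be sketched locally with mock functions standing in for the kubectl/redis-cli calls (key names below are hypothetical):

```shell
#!/bin/bash
# The real pipeline would look like:
#   kubectl exec jump-machine -- redis-cli -c -h redis-cluster-host \
#       --scan --pattern "*the-key-pattern*" \
#     | xargs -L 1 kubectl exec -i jump-machine -- \
#       redis-cli -c -h redis-cluster-host DEL
# Below, mocks replace the kubectl/redis-cli calls so the pipe + xargs
# mechanics can be demonstrated without a cluster.

scan_keys() {   # stands in for: kubectl exec ... --scan --pattern ...
  printf 'the-key-1\nthe-key-2\nthe-key-3\n'
}

del_key() {     # stands in for: kubectl exec -i ... DEL <key>
  echo "DEL $1 -> 1"
}
export -f del_key

# one DEL invocation per key, exactly as xargs -L 1 does in the question
scan_keys | xargs -L 1 bash -c 'del_key "$1"' _
```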


Get this bounty!!!

#StackBounty: #bash #autocomplete #options Bash how does skip-completed-text work?

Bounty: 50

I use the menu-complete bash function to cycle through completions when I press Tab, and I’m happy with it. But the following has happened to me too often.

Suppose I’m looking for the file longparthardtoremember.with.QQQQQQQ.extension in a directory which contains the files

longparthardtoremember.with.AAAAAAA.nice.long.extension
longparthardtoremember.with.BBBBBBB.very.nice.long.extension
...

If I Tab-complete $ long, the first filename will be inserted. At that point, I’d like to move to the middle of the filename, delete the AAAAAAA part, type B, and then Tab-complete again. If I do so, all the text after BBBBBBB is inserted as well, leading to a duplication, which I obviously don’t want.

With vi editing mode, I’m quite quick in dealing with this (I quickly move to the repeated part and delete it), but it is still annoying.

By pure chance I found the skip-completed-text option in bash’s man page. Isn’t this what I need? I’ve set it on, but I can’t see any difference in the behavior of mid-word Tab-completion. Have I misunderstood the man page?
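For reference, the settings described above can be combined in a minimal ~/.inputrc sketch like this (the two key bindings assume the Tab / Shift-Tab menu-complete setup mentioned in the first paragraph):

```
# ~/.inputrc — minimal sketch of the setup described above
set skip-completed-text on
"\t": menu-complete
"\e[Z": menu-complete-backward
```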



#StackBounty: #bash #shell-script #networking #linux-mint #process-management pause youtube-dl when network is disconnected and resume …

Bounty: 100

I am using Linux Mint 20.

I am using a vpn with a kill switch (protonvpn-cli ks --on).

So, if the vpn connection drops for some reason, the network gets disconnected.

When the network gets disconnected, my youtube-dl download stops permanently with the error

ERROR: Unable to download JSON metadata: <urlopen error [Errno -2] Name or service not known> (caused by URLError(gaierror(-2, 'Name or service not known')))

The issue is, I want youtube-dl to pause instead of exiting, and to resume when the connection is back.

I checked Retry when connection disconnect not working but I do not think it is relevant to my problem.

My config file looks like

--abort-on-error
--no-warnings
--console-title
--batch-file='batch-file.txt'
--socket-timeout 10
--retries 10
--continue
--fragment-retries 10 

As I use batch files, I do not want to start the process from the beginning. I just want to pause the youtube-dl process till I get connected again and then continue the process.

How can I do that?

Update 1:

So far, what I have found is, to pause a process we can do something like:

$ kill -STOP 16143

To resume a process we can do something like:

$ kill -CONT 16143

I am not sure, but I think we can tell whether my network is up by pinging:

#!/bin/bash
HOSTS="cyberciti.biz theos.in router"

COUNT=4

for myHost in $HOSTS
do
  count=$(ping -c "$COUNT" "$myHost" | grep 'received' | awk -F',' '{ print $2 }' | awk '{ print $1 }')
  if [ "${count:-0}" -eq 0 ]; then
    # 100% failed 
    echo "Host : $myHost is down (ping failed) at $(date)"
  fi
done  

However, it does not seem like an efficient solution.

Linux: execute a command when network connection is restored suggested using ifplugd or using /etc/network/if-up.d/.

There is another question and a blog post which mention using /etc/NetworkManager/dispatcher.d.

As I am using Linux Mint, I think any solution revolving around NetworkManager will be easier for me.
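The two kill commands above are indeed the standard pause/resume mechanism, and a NetworkManager dispatcher script could run exactly those on its "down" and "up" events (the dispatcher path and process-matching are assumptions, not tested here). A runnable sketch of the mechanism itself, with a sleep process standing in for youtube-dl:

```shell
#!/bin/bash
# Pause and resume a process with SIGSTOP/SIGCONT. A hypothetical dispatcher
# script at /etc/NetworkManager/dispatcher.d/90-youtubedl could run the same
# two kill commands, targeting the youtube-dl PID instead.
sleep 60 &            # stand-in for the youtube-dl process
pid=$!

kill -STOP "$pid"
sleep 0.2             # give the kernel a moment to update the state
echo "state after STOP: $(ps -o stat= -p "$pid")"   # contains "T" (stopped)

kill -CONT "$pid"
sleep 0.2
echo "state after CONT: $(ps -o stat= -p "$pid")"   # back to "S" (sleeping)

kill "$pid" 2>/dev/null   # clean up the stand-in process
```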




#StackBounty: #bash #wildcards #autocomplete bash inputrc autocompletion with wildcards

Bounty: 200

I’ve adapted my inputrc with the following:

# Use Tab to cycle through all the possible completions.
"\t": menu-complete
"\e[Z": menu-complete-backward

and when I have the following directory of server logs:

ATWIEUNXSRVFILE001
ATWIEWINSRVDOMA001
USLAXUNXSRVFILE001
USLAXWINSRVFILE001

I’d like autocompletion to cycle through all FILE servers, i.e.

$ analyze_logs *FILE*Tab

should cycle through

ATWIEUNXSRVFILE001
USLAXUNXSRVFILE001
USLAXWINSRVFILE001

(Where * obviously is some kind of wildcard/regex/anything, really…)

  • This has been bugging me for a few years already
  • I do have a few workarounds like
    • Alt+*
    • ls *FILE* > serverlist.txt
    • set show-all-if-ambiguous on
  • My google-fu seems to be abandoning me as I can’t find anything that does what I want.

If it helps, I’m definitely running bash (echo $0 gives /bin/bash), on an Arch derivative, but if possible I would like something portable across multiple *nix systems.
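This is not a readline answer, but the set of matches such a wildcard completion would have to cycle through can be checked with the compgen builtin, which expands a glob and lists the matches one per line (filenames reproduced from the example above):

```shell
#!/bin/bash
# List what *FILE* should expand to, using the example server logs.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch ATWIEUNXSRVFILE001 ATWIEWINSRVDOMA001 USLAXUNXSRVFILE001 USLAXWINSRVFILE001

compgen -G '*FILE*'    # prints the three FILE servers, one per line

cd / && rm -rf "$dir"
```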



#StackBounty: #bash #shell-script #zsh #rsync Detecting Directory Moves & Renames with Rsync

Bounty: 200

The command I am currently using to backup one HDD to another (locally, not remotely) is

rsync --info=PROGRESS2,BACKUP,DEL -ab --human-readable --inplace --delete-after --debug=NONE --log-file=/media/blueray/WDPurple/rsync.log --backup-dir=red_rsync_bak.$(date +"%d-%m-%y_%I-%M-%S%P") --log-file-format='%t %f %o %M' --exclude='lost+found' --exclude='.Trash-1000' /media/blueray/WDRed /media/blueray/WDPurple

If I use --delete-after, rsync considers moved directories as deleted in one place and created in another.

As a result, when I move directories in the source, it deletes them from the destination and then copies them from the source again. This often takes a long time, as I sometimes move large directories in the source.

I found a few solutions to this problem:

  1. Patch rsync.

  2. without patch.

  3. use BorgBackup or bup

  4. use --fuzzy --delay-updates --delete-delay

However, each has its own issues.

The patch was created long ago, and I am not sure whether it will have issues with modern rsync. Moreover, maintaining a patch is difficult for me.

Option 2 creates a mess on my HDD. Moreover, I use many more options and am not sure whether it would be safe.

As far as option 3 is concerned, I have invested a lot of time in rsync and do not want to move to a new tool now. Moreover, those tools have issues of their own.

Regarding option 4, renaming /test/10GBfile to /test/otherdir/10GBfile_newname with --fuzzy --delay-updates --delete-delay would still resend the data, since the file is not in the same directory. It has more issues as well, e.g. --delay-updates conflicts with --inplace.

So, the solution I am looking for is to use --itemize-changes with --dry-run to get the list of directories moved or renamed, then first run mv in the destination (it would be great if it prompted, e.g. "x will be moved to a/x in the destination, y will be moved to b/y in the destination, c/z will be moved to z in the destination. Do you want to continue?"), and then run the rsync command mentioned at the top. I am ready to treat directories of the same size as the same directory.

Suppose the directory tree looks like:

.
├── dest
│   ├── test
│   │   └── empty-asciidoc-document.adoc
│   ├── test2
│   │   └── empty-asciidoc-document.adoc
│   └── test3
│       └── empty-asciidoc-document.adoc
├── src
│   ├── grandpartest
│   │   └── partest
│   │       └── test
│   │           └── empty-asciidoc-document.adoc
│   ├── grandpartest2
│   │   └── partest2
│   │       └── test2
│   │           └── empty-asciidoc-document.adoc
│   └── grandpartest3
│       └── partest3
│           └── test3
│               └── empty-asciidoc-document.adoc

I noticed that if I move directories the --itemize-changes output looks like:

% rsync --dry-run -ai --inplace --delete-after /home/blueray/Downloads/src/ /home/blueray/Downloads/dest/
.d..t...... ./
cd+++++++++ grandpartest/
cd+++++++++ grandpartest/partest/
cd+++++++++ grandpartest/partest/test/
>f+++++++++ grandpartest/partest/test/empty-asciidoc-document.adoc
cd+++++++++ grandpartest2/
cd+++++++++ grandpartest2/partest2/
cd+++++++++ grandpartest2/partest2/test2/
>f+++++++++ grandpartest2/partest2/test2/empty-asciidoc-document.adoc
cd+++++++++ grandpartest3/
cd+++++++++ grandpartest3/partest3/
cd+++++++++ grandpartest3/partest3/test3/
>f+++++++++ grandpartest3/partest3/test3/empty-asciidoc-document.adoc
*deleting   test3/empty-asciidoc-document.adoc
*deleting   test3/
*deleting   test2/empty-asciidoc-document.adoc
*deleting   test2/
*deleting   test/empty-asciidoc-document.adoc
*deleting   test/

We can get the deleted directories using:

% echo "$dryrunoutput" | grep "\*deleting.*/$" | awk '{print $2}' | while read spo; do echo ${spo%?}; done
test3
test2
test

Added directories using:

% echo "$dryrunoutput" | grep "cd++.*/$" | awk '{print $2}' | while read spo; do echo ${spo%?}; done | while read spo; do echo ${spo##*/}; done
grandpartest
partest
test
grandpartest2
partest2
test2
grandpartest3
partest3
test3

Directories that were both added and deleted using:

$ sort  <(echo "$deletedirectories") <(echo "$addeddirectoriesvalue") | uniq -d
test
test2
test3

Directory size in bytes, to check that both are the same directory (more or less; this will work for me), using:

% /usr/bin/du -sb "/home/blueray/Documents/src/test2/test" | grep -oh "^\S*"
4096
% /usr/bin/du -sb "/home/blueray/Documents/dest/test" | grep -oh "^\S*"
4096
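The three extraction steps above can be condensed into a single pass over a captured dry run. This is only a sketch on inlined sample output (it assumes paths contain no spaces, and bash for the process substitution):

```shell
#!/bin/bash
# Derive deleted, added, and rename-candidate directory names from
# --itemize-changes output in one pass.
dryrunoutput='cd+++++++++ grandpartest/
cd+++++++++ grandpartest/partest/
cd+++++++++ grandpartest/partest/test/
*deleting   test/
*deleting   test2/'

# deleted directory names, trailing slash stripped
deleted=$(echo "$dryrunoutput" | awk '/^\*deleting .*\/$/ {sub(/\/$/, "", $2); print $2}')

# added directory basenames
added=$(echo "$dryrunoutput" | awk '/^cd\+\+.*\/$/ {sub(/\/$/, "", $2); n=split($2, p, "/"); print p[n]}')

# names on both lists are rename/move candidates
sort <(echo "$deleted") <(echo "$added") | uniq -d   # prints: test
```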

The script I came up with so far is:

#!/bin/bash

# sanity check : how many directories has similar size.

source="/media/blueray/WDRed/_working/_scripts/_rsync-test/src/"
destination="/media/blueray/WDRed/_working/_scripts/_rsync-test/dest/"
dryrunoutput=$(rsync --dry-run -ai --inplace --delete-after "$source" "$destination")
deletedirectories=$( echo "$dryrunoutput" | grep "\*deleting.*/$" | awk '{print $2}' | while read spo; do echo ${spo%?}; done )
addeddirectorieskey=$( echo "$dryrunoutput" | grep "cd++.*/$" | awk '{print $2}' | while read spo; do echo ${spo%?}; done )
addeddirectoriesvalue=$( echo "$dryrunoutput" | grep "cd++.*/$" | awk '{print $2}' | while read spo; do echo ${spo%?}; done | while read spo; do echo ${spo##*/}; done )

intersection=$( sort  <(echo "$deletedirectories") <(echo "$addeddirectoriesvalue") | uniq -d )

sourcesize=$(/usr/bin/du -sb "${source}test2/test" | grep -oh "^\S*")

destsize=$(/usr/bin/du -sb "${destination}test" | grep -oh "^\S*")

if [[ "$destsize" == "$sourcesize" ]]
then
  mv "${destination}test/" "$destination$addeddirectories"
fi

If you notice mv "${destination}test/" "$destination$addeddirectories", part of the path is hard-coded here. It has other issues as well: it only works for a single directory, and so on.

P.S. I know identical size does not mean two directories are the same, but in my case it will work. My directories are the main problem, not the files, so I am not worried about file rename or move detection; I am only interested in directory rename or move detection.



#StackBounty: #macos #command-line #bash #daemon #at MacOS – "at" command is not working

Bounty: 50

I’ve started the atrun daemon using the following command.

$ sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist

Added my username to /var/at/at.allow file.

$ cat /var/at/at.allow
myusername

And created a job using at command.

$ at now + 1 minute
touch /tmp/x.log
^D
job 1 at Fri Jan  1 09:56:00 2021

I can see the job scheduled using atq command.

But I can’t see the file /tmp/x.log created after the scheduled time. Is there anything I’m missing here, or is there a way to debug this issue?



#StackBounty: #c++ #bash #gcc Bash script – compile and run c++ code for coding competitions

Bounty: 50

This is a simple bash script that I use to compile and run single C++ files for coding competitions.

Features:

  • Detects if there is a corresponding .in file next to it, and if so uses that as stdin
    • e.g. if the file problem1.cpp and problem1.in are in the same directory, the script will redirect stdin from problem1.in
  • Compile each file to a temp directory so it doesn’t clutter the working directory
  • Configurable g++ warning flags as needed

Note: \033[32m and \033[0m are terminal color codes that make the text green

#!/bin/bash
# Compiles and runs .cpp code
# if there exists an .in file next to the .cpp file
# it will use that as input

if [ -z $1 ]; then
  echo -e "Please choose an input file"
  exit 1
fi

FILE="$1"
FILE_IN="${FILE%.*}.in"

clear
echo -e "\033[32mCompiling...\033[0m"
TMPFILE=$(mktemp /tmp/run-cpp.XXXXXXXXXX)
WARNING_FLAGS="-Wuninitialized -Wmaybe-uninitialized"
g++ $FILE -std=c++17 $WARNING_FLAGS -O3 -o $TMPFILE

if [ $? -eq 0 ]; then
  echo -e "\033[32mRunning...\033[0m"
  if [ -f $FILE_IN ]; then
    $TMPFILE < $FILE_IN
  else
    $TMPFILE
  fi
  ERROR=$?
fi
rm $TMPFILE 2> /dev/null
exit $ERROR

Usage:
./runcpp.sh /path/to/my-cpp-file.cpp
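One possible refinement (a sketch of an alternative, not a required change): an EXIT trap removes the temp binary even when the script is interrupted, so the explicit rm at the end and its 2> /dev/null are no longer needed.

```shell
#!/bin/bash
# EXIT trap fires on normal exit, on exit 1, and on Ctrl-C, so cleanup
# cannot be skipped by an early return path.
TMPFILE=$(mktemp /tmp/run-cpp.XXXXXXXXXX)
trap 'rm -f "$TMPFILE"' EXIT

[ -f "$TMPFILE" ] && echo "temp file created"
# ... compile to "$TMPFILE" and run it here ...
# when the script exits, the trap deletes the file
```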



#StackBounty: #command-line #bash #scripts #inotify Adding script bash variables to awk and inotify

Bounty: 50

I want to create a script that will log every access of a directory, or of any file in that directory, during a day. For that, I use inotifywait, but I don’t like the output even after formatting it; I also need the user that accessed/modified the file. And I want to print it in table format, something like this:

TIME               USER     FILE            EVENT
%mm:%HH PM/am      root     /home/root/x    Accesed(or anything the inotifywait gives)

And I tried something like this:

#!/bin/sh

watchedDir=$1
logFileName="$(date +'%d.%m.%Y').log"

iwait() {
    inotifywait -r -m --timefmt "%Y/%m/%d %H:%M:%S" --format "%T;%w%f;%e" $watchedDir >> "$PWD/.$logFileName.tmp"
}

write_to_file() {
    while true; do
    last_entry=$(tail -n 1 "$PWD/$logFileName.tmp")
    time=$(tail -n 1 "$PWD/$logFileName.tmp" | cut -f1 -d';')
    user=$(stat $last_entry --format="%U")
    file=$(tail -n 1 "$PWD/$logFileName.tmp" | cut -f2 -d';')
    event=$(tail -n 1 "$PWD/$logFileName.tmp" | cut -f3 -d';')

    awk -v time="$time" -v user="$user" -v file="$file" -v event="$event" 'BEGIN {printf("%s %8s %8s %8s \n" ,"Time", "User", "File", "Event")}
    {printf("%s %s %s %s\n", time, user, file, event)}' >> "$PWD/.$logFileName.tmp"
    done
}

if [ "$(realpath $watchedDir)" != "$PWD" ]
then
    iwait &
    write_to_file &
    wait
fi

I also found out that if I try to watch the current directory and also redirect the log file into the current directory, it floods the output… so I tried to get around that with that if.

How can I do something like that?
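As a starting point for the table layout, printf with fixed field widths produces the header-plus-rows format shown above; printing the header once, before the loop, avoids the BEGIN block repeating it for every event. The row data here is made up:

```shell
#!/bin/bash
# Fixed-width columns: one format string reused for header and rows.
row_fmt='%-20s %-10s %-30s %s\n'
printf "$row_fmt" "TIME" "USER" "FILE" "EVENT"
printf "$row_fmt" "2021/01/01 10:15:00" "root" "/home/root/x" "ACCESS"
```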

