#StackBounty: #linux #arch-linux #swap #arm #swap-file swapon failed: Invalid argument with ext4 swapfile and swap partition

Bounty: 50

I’ve tried enabling swap both on a swapfile (on ext4):

# file /mnt/usb/swapfile
/mnt/usb/swapfile: Linux/i386 swap file (new style), version 1 (4K pages), size 1023999 pages, no label, UUID=9dfaa27a-d72f-4dad-ac97-ffead7e29845
# swapon /mnt/usb/swapfile
swapon: /mnt/usb/swapfile: swapon failed: Invalid argument

and a swap partition:

# parted /dev/sda2 print
Model: Unknown (unknown)
Disk /dev/sda2: 2934MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2934MB  2934MB  linux-swap(v1)

# swapon /dev/sda2
swapon: /dev/sda2: swapon failed: Invalid argument

system info:

# uname -a
Linux alarm 3.10.18-24-ARCH #1 SMP Sun Sep 17 21:03:56 CEST 2017 armv7l GNU/Linux

and swapon version:

# swapon --version
swapon from util-linux 2.31.1

I don’t see anything relevant in the man page or online. Can anyone shed light on what the issue is?
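For anyone debugging a similar EINVAL from swapon, one common cause is a page-size mismatch between the page size mkswap formatted the area with and the page size of the running kernel (mkswap writes the "SWAPSPACE2" signature at offset page_size - 10). The probe below is a sketch; swap_header_page_size is a hypothetical helper, not part of any existing tool:

```python
import os

def swap_header_page_size(path, candidates=(4096, 8192, 16384, 65536)):
    """Guess the page size a swap area was formatted with by locating
    the "SWAPSPACE2" signature mkswap writes at offset page_size - 10.
    Returns the matching page size, or None if no signature is found."""
    with open(path, "rb") as f:
        data = f.read(max(candidates))
    for ps in candidates:
        if data[ps - 10:ps] == b"SWAPSPACE2":
            return ps
    return None

if __name__ == "__main__":
    # Compare against the running kernel's page size; if the two
    # values differ, swapon is expected to fail with EINVAL.
    print("kernel page size:", os.sysconf("SC_PAGE_SIZE"))
    # print("swapfile page size:", swap_header_page_size("/mnt/usb/swapfile"))
```

If the values disagree, re-running mkswap on the file or partition from the target system itself would regenerate a matching header.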


Get this bounty!!!

#StackBounty: #linux #init #exec How can I make a specific process exec a given executable with ptrace()?

Bounty: 100

I am trying to force the init process of an embedded Linux system to exec() my own init program (systemd) so that I can test an external filesystem before writing it to the system’s flash (and risking bricking the device). With GDB, I can run the command gdb app 1, then in that shell type call execl("/lib/systemd/systemd", "systemd", 0) (which works exactly as I need it to), but I do not have enough room to put GDB on the system’s flash.

I was wondering exactly what ptrace() calls GDB uses with its call command so that I can implement that in my own simple C program.

I tried using strace to figure out what ptrace() calls GDB used, but the resulting file was 172,031 lines long. I also tried looking through its source code, but there were too many files to find what I was looking for.

The device is running Linux kernel version 3.10.0, the configuration is available here: https://pastebin.com/rk0Zux62



#StackBounty: #linux #cluster-computing #torque Torque cannot communicate with host

Bounty: 50

I have been attempting to set up the Torque scheduler for a small cluster. I followed the steps to set up the scheduler from http://docs.adaptivecomputing.com/torque/archive/3-0-2/1.2configuring_torque_on_server.php

However, when I attempt

qterm -t quick

I get the following error

$ sudo qterm -t quick
Unable to communicate with Terra(192.168.1.25)
Cannot connect to specified server host 'Terra'.
qterm: could not connect to server '' (111) Connection refused 

but the server starts just fine. However, when I attempt to run a command that runs on multiple nodes, such as

qsub -l nodes=2:ppn=4 /home/user/scripts/someScript

it prints out something like

7.Terra

where Terra is the name of the head node, but is also a node in the cluster. This isn’t the problem. The problem is that the job does not run, nor does it produce any output anywhere :/

The torque server log: https://ptpb.pw/EaKo

The terra node log: https://ptpb.pw/9w5M

and the Marte log: https://ptpb.pw/o4PT

I can get it to run with a PBS script, but only with one node…

#!/bin/bash
#PBS -l pmem=1gb,nodes=1:ppn=4
#PBS -m abe
cd Documents/
wc -l largeTest.csv

Here is the output of qstat after submitting a job

Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
16.Terra                   testPerformance  justin                 0 R batch

the output of pbsnodes -a

Terra
 state = free
 power_state = Running
 np = 4
 properties = Tower
 ntype = cluster
 status = opsys=linux,uname=Linux Terra 4.17.14-arch1-1-ARCH #1 SMP PREEMPT Thu Aug 9 11:56:50 UTC 2018 x86_64,sessions=11525 22029,nsessions=2,nusers=1,idletime=57964,totmem=8111556kb,availmem=7539284kb,physmem=8111556kb,ncpus=4,loadave=0.00,gres=,netload=30570521372,state=free,varattr= ,cpuclock=Fixed,macaddr=e0:3f:49:44:72:20,version=6.1.1.1,rectime=1534937388,jobs=
 mom_service_port = 15002
 mom_manager_port = 15003
 gpus = 1

Marte
 state = free
 power_state = Running
 np = 4
 properties = NFSServer
 ntype = cluster
 status = opsys=linux,uname=Linux Marte 4.18.1-arch1-1-ARCH #1 SMP PREEMPT Wed Aug 15 21:11:55 UTC 2018 x86_64,sessions=366 556 563,nsessions=3,nusers=2,idletime=58140,totmem=7043404kb,availmem=6703808kb,physmem=7043404kb,ncpus=4,loadave=0.02,gres=,netload=36500663511,state=free,varattr= ,cpuclock=Fixed,macaddr=c8:5b:76:4a:65:91,version=6.1.1.1,rectime=1534937359,jobs=
 mom_service_port = 15002
 mom_manager_port = 15003

and the /var/spool/torque/server_priv/nodes

Terra np=4 gpus=1 Tower
Marte np=4 NFSServer

Edit: Here are the most recent logs as well

Mom Log for Node: https://ptpb.pw/DhKi

Mom Log for head node: https://ptpb.pw/MTlD

and the server log: https://ptpb.pw/HPkE



#StackBounty: #linux #gnome #fullscreen #virtual-desktop In Gnome how to have multiple full screen areas on one screen?

Bounty: 50

When working on big screens I often use GNOME’s “snap window to left/right border” by dragging a window to the left/right side of the screen. This results in a partially maximized window, which I find very convenient.

I wonder whether it’s possible either to configure this behavior to have more of those areas in which I can maximize a window, or to do whatever is needed to display a configurable number of virtual desktops on the same monitor.

Note: I already tried some tiling window managers like xmonad – but unfortunately most of them force all windows to be maximized (which is not always good) and you lose all the nice GNOME convenience.

So what I’m looking for is a way to have multiple ‘fullscreen’ windows on one display while keeping the default Gnome behavior.



#StackBounty: #python #linux #memory-leaks Python memory not being released on linux?

Bounty: 50

I am trying to load a large JSON object into memory and then perform some operations with the data. However, I am noticing a large increase in RAM after the JSON file is read – EVEN AFTER the object is out of scope.

Here is the code

import json
import objgraph
import gc
from memory_profiler import profile
@profile
def open_stuff():
    with open("bigjson.json", 'r') as jsonfile:
        d= jsonfile.read()
        jsonobj = json.loads(d)
        objgraph.show_most_common_types()
        del jsonobj
        del d
    print ('d')
    gc.collect()

open_stuff()

I tried running this script on Windows with Python 2.7.12 and on Debian 9 with Python 2.7.13, and I am seeing an issue with Python on Linux.

In Windows, when I run the script, it uses up a lot of RAM while the json object is being read and in scope (as expected), but it is released after the operation is done (as expected).

list                       3039184
dict                       413840
function                   2200
wrapper_descriptor         1199
builtin_function_or_method 819
method_descriptor          651
tuple                      617
weakref                    554
getset_descriptor          362
member_descriptor          250
d
Filename: testjson.py

Line #    Mem usage    Increment   Line Contents
================================================
     5     16.9 MiB     16.9 MiB   @profile
     6                             def open_stuff():
     7     16.9 MiB      0.0 MiB       with open("bigjson.json", 'r') as jsonfile:
     8    197.9 MiB    181.0 MiB           d= jsonfile.read()
     9   1393.4 MiB   1195.5 MiB           jsonobj = json.loads(d)
    10   1397.0 MiB      3.6 MiB           objgraph.show_most_common_types()
    11    402.8 MiB   -994.2 MiB           del jsonobj
    12    221.8 MiB   -181.0 MiB           del d
    13    221.8 MiB      0.0 MiB       print ('d')
    14     23.3 MiB   -198.5 MiB       gc.collect()

However, in the Linux environment, over 500MB of RAM is still used even though all references to the JSON object have been deleted.

list                       3039186
dict                       413836
function                   2336
wrapper_descriptor         1193
builtin_function_or_method 765
method_descriptor          651
tuple                      514
weakref                    480
property                   273
member_descriptor          250
d
Filename: testjson.py

Line #    Mem usage    Increment   Line Contents
================================================
     5     14.2 MiB     14.2 MiB   @profile
     6                             def open_stuff():
     7     14.2 MiB      0.0 MiB       with open("bigjson.json", 'r') as jsonfile:
     8    195.1 MiB    181.0 MiB           d= jsonfile.read()
     9   1466.4 MiB   1271.3 MiB           jsonobj = json.loads(d)
    10   1466.8 MiB      0.4 MiB           objgraph.show_most_common_types()
    11    694.8 MiB   -772.1 MiB           del jsonobj
    12    513.8 MiB   -181.0 MiB           del d
    13    513.8 MiB      0.0 MiB       print ('d')
    14    513.0 MiB     -0.8 MiB       gc.collect()

The same script run in Debian 9 with Python 3.5.3 uses less RAM but leaks a proportionate amount of RAM.

list                       3039266
dict                       414638
function                   3374
tuple                      1254
wrapper_descriptor         1076
weakref                    944
builtin_function_or_method 780
method_descriptor          780
getset_descriptor          477
type                       431
d
Filename: testjson.py

Line #    Mem usage    Increment   Line Contents
================================================
     5     17.2 MiB     17.2 MiB   @profile
     6                             def open_stuff():
     7     17.2 MiB      0.0 MiB       with open("bigjson.json", 'r') as jsonfile:
     8    198.3 MiB    181.1 MiB           d= jsonfile.read()
     9   1057.7 MiB    859.4 MiB           jsonobj = json.loads(d)
    10   1058.1 MiB      0.4 MiB           objgraph.show_most_common_types()
    11    537.5 MiB   -520.6 MiB           del jsonobj
    12    356.5 MiB   -181.0 MiB           del d
    13    356.5 MiB      0.0 MiB       print ('d')
    14    355.8 MiB     -0.8 MiB       gc.collect()

What is causing this issue?
Both are 64-bit builds of Python.

EDIT – calling that function several times in a row leads to even stranger data: the json.loads function uses less RAM each time it’s called; after the 3rd try the RAM usage stabilizes, but the earlier leaked RAM does not get released.

list                       3039189
dict                       413840
function                   2339
wrapper_descriptor         1193
builtin_function_or_method 765
method_descriptor          651
tuple                      517
weakref                    480
property                   273
member_descriptor          250
d
Filename: testjson.py

Line #    Mem usage    Increment   Line Contents
================================================
     5     14.5 MiB     14.5 MiB   @profile
     6                             def open_stuff():
     7     14.5 MiB      0.0 MiB       with open("bigjson.json", 'r') as jsonfile:
     8    195.4 MiB    180.9 MiB           d= jsonfile.read()
     9   1466.5 MiB   1271.1 MiB           jsonobj = json.loads(d)
    10   1466.9 MiB      0.4 MiB           objgraph.show_most_common_types()
    11    694.8 MiB   -772.1 MiB           del jsonobj
    12    513.9 MiB   -181.0 MiB           del d
    13    513.9 MiB      0.0 MiB       print ('d')
    14    513.1 MiB     -0.8 MiB       gc.collect()


list                       3039189
dict                       413842
function                   2339
wrapper_descriptor         1202
builtin_function_or_method 765
method_descriptor          651
tuple                      517
weakref                    482
property                   273
member_descriptor          253
d
Filename: testjson.py

Line #    Mem usage    Increment   Line Contents
================================================
     5    513.1 MiB    513.1 MiB   @profile
     6                             def open_stuff():
     7    513.1 MiB      0.0 MiB       with open("bigjson.json", 'r') as jsonfile:
     8    513.1 MiB      0.0 MiB           d= jsonfile.read()
     9   1466.8 MiB    953.7 MiB           jsonobj = json.loads(d)
    10   1493.3 MiB     26.6 MiB           objgraph.show_most_common_types()
    11    723.9 MiB   -769.4 MiB           del jsonobj
    12    723.9 MiB      0.0 MiB           del d
    13    723.9 MiB      0.0 MiB       print ('d')
    14    722.4 MiB     -1.5 MiB       gc.collect()


list                       3039189
dict                       413842
function                   2339
wrapper_descriptor         1202
builtin_function_or_method 765
method_descriptor          651
tuple                      517
weakref                    482
property                   273
member_descriptor          253
d
Filename: testjson.py

Line #    Mem usage    Increment   Line Contents
================================================
     5    722.4 MiB    722.4 MiB   @profile
     6                             def open_stuff():
     7    722.4 MiB      0.0 MiB       with open("bigjson.json", 'r') as jsonfile:
     8    722.4 MiB      0.0 MiB           d= jsonfile.read()
     9   1493.1 MiB    770.8 MiB           jsonobj = json.loads(d)
    10   1493.4 MiB      0.3 MiB           objgraph.show_most_common_types()
    11    724.4 MiB   -769.0 MiB           del jsonobj
    12    724.4 MiB      0.0 MiB           del d
    13    724.4 MiB      0.0 MiB       print ('d')
    14    722.9 MiB     -1.5 MiB       gc.collect()


Filename: testjson.py

Line #    Mem usage    Increment   Line Contents
================================================
    17     14.2 MiB     14.2 MiB   @profile
    18                             def wow():
    19    513.1 MiB    498.9 MiB       open_stuff()
    20    722.4 MiB    209.3 MiB       open_stuff()
    21    722.9 MiB      0.6 MiB       open_stuff()

EDIT 2: Someone suggested this is a duplicate of Why does my program’s memory not release?, but the amount of memory in question is far from the “small pages” discussed in the other question.
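One hedged explanation worth testing in cases like this is that glibc’s allocator keeps freed memory in its arenas rather than returning it to the kernel, so the process RSS stays high even though no Python objects are alive. malloc_trim asks glibc to hand free heap pages back to the OS; if RSS drops after calling it, the retained memory was allocator caching rather than a Python-level leak. A sketch, assuming a glibc-based Linux:

```python
import ctypes
import ctypes.util

# malloc_trim(0) asks glibc to return free heap pages to the OS; it
# returns 1 if any memory was released and 0 otherwise.  This is a
# glibc-specific call and will not exist on e.g. musl-based systems.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
released = libc.malloc_trim(0)
print("malloc_trim released memory:", bool(released))
```

Calling this right after the del/gc.collect() lines above and re-checking memory_profiler’s numbers would distinguish the two cases.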



#StackBounty: #c #linux #linux-kernel #mmap #memory-mapping Write operation on memory mapped IO gives segmentation fault

Bounty: 50

I’m accessing a UART by mapping its physical base address into user space. The read operation is successful, but the write operation gives a segmentation fault. Below is my code

#define     READ_REG32(reg)     ( *((volatile int *) (reg)) )
#define     WRITE_REG32(reg,value)     ( *((volatile int *) (reg)) = value )

static int Write_on_uart()
{
    void * map_base;
    FILE *f;
    int type,fd;

    fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd) {
        printf("Success to open /dev/mem fd=%08x\n", fd);
    }
    else {
        printf("Fail to open /dev/mem fd=%08x\n", fd);
    }
    map_base = mmap(0, ALLOC_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0x21E8000);

    type = READ_REG32(map_base + UCR1);
    printf("READ_REG32 successful\n");


    printf("Going to WRITE_REG32 register\n");

    WRITE_REG32(map_base + UTXD,'R'); /// Got segmentation fault
    printf("WRITE_REG32 successful\n");

    close(fd);
    munmap(map_base, ALLOC_SIZE);

    printf("reg32[%08x] = value[%08x] \n", map_base, type);

    type = (type & ( 1 << 27 )) >> 27 ;

    printf("reg32[%08x] = value[%08x] \n", map_base, type);

    return type;
}

The segmentation fault output is below:

, *pte=00000000, *ppte=00000000
[   50.354260] CPU: 0 PID: 401 Comm: raw_uart_access Not tainted 4.9.84-+gb2a7f2f #4
[   50.381000] Hardware name: Freescale i.MX6 UltraLite (Device Tree)
[   50.397017] task: 9229d140 task.stack: 9271a000
[   50.411233] PC is at 0x1057c
[   50.423459] LR is at 0x10574
[   50.435355] pc : [<0001057c>]    lr : [<00010574>]    psr: 200d0010
[   50.435355] sp : 7e8b4c80  ip : 00000000  fp : 7e8b4c9c
[   50.464647] r10: 76f61fac  r9 : 00000000  r8 : 00000000
[   50.478608] r7 : 00000000  r6 : 00010408  r5 : 00000000  r4 : 00010634
[   50.493738] r3 : ffffffff  r2 : 00000010  r1 : 76f60210  r0 : ffffffff
[   50.508645] Flags: nzCv  IRQs on  FIQs on  Mode USER_32  ISA ARM  Segment user
[   50.532259] Control: 10c5387d  Table: 9297806a  DAC: 00000055
[   50.546485] CPU: 0 PID: 401 Comm: raw_uart_access Not tainted 4.9.84-+gb2a7f2f #4
[   50.570395] Hardware name: Freescale i.MX6 UltraLite (Device Tree)
[   50.585015] Backtrace: 
[   50.595863] [<8010bd3c>] (dump_backtrace) from [<8010c014>] (show_stack+0x18/0x1c)
[   50.620055]  r7:00000017 r6:600d0113 r5:00000000 r4:80c1c9f0
[   50.634156] [<8010bffc>] (show_stack) from [<8042ed84>] (dump_stack+0x90/0xa4)
[   50.657979] [<8042ecf4>] (dump_stack) from [<80108a98>] (show_regs+0x14/0x18)
[   50.673778]  r7:00000017 r6:0000007f r5:0000000b r4:9229d140
[   50.688016] [<80108a84>] (show_regs) from [<801147e0>] (__do_user_fault+0xc4/0xc8)
[   50.712379] [<8011471c>] (__do_user_fault) from [<801149f0>] (do_page_fault+0x20c/0x3a4)
[   50.737755]  r8:0000007f r7:00000017 r6:9267dc40 r5:9229d140 r4:9271bfb0
[   50.753746] [<801147e4>] (do_page_fault) from [<8010134c>] (do_DataAbort+0x44/0xc0)
[   50.779830]  r10:76f61fac r9:00000000 r8:9271bfb0 r7:0000007f r6:801147e4 r5:00000017
[   50.806897]  r4:80c09db4
[   50.819029] [<80101308>] (do_DataAbort) from [<8010cee0>] (__dabt_usr+0x40/0x60)
[   50.845699] Exception stack(0x9271bfb0 to 0x9271bff8)
[   50.860521] bfa0:                                     ffffffff 76f60210 00000010 ffffffff
[   50.887867] bfc0: 00010634 00000000 00010408 00000000 00000000 00000000 76f61fac 7e8b4c9c
[   50.915238] bfe0: 00000000 7e8b4c80 00010574 0001057c 200d0010 ffffffff
[   50.931721]  r8:10c5387d r7:10c5387d r6:ffffffff r5:200d0010 r4:0001057c

Can anyone give me a hint?



#StackBounty: #linux #ubuntu #hard-drive #partitioning #data-recovery External Harddisk showing unknown, not initialised in Disk Manage…

Bounty: 100

I removed my 1 TB external hard disk (Seagate) directly from a Windows system and now it is not working anymore. I’m trying to fix it via Ubuntu now, and when I try to check it in Disks (the GNOME utility), it says no media.

I have tried to gather as much input as I can, by running some commands that I could find online in help forums.

sudo lshw -c disk

*-disk
description: SCSI Disk
product: JMS579
vendor: JMICRON
physical id: 0.0.0
bus info: scsi@4:0.0.0
logical name: /dev/sdb
configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512

sudo lshw -class disk -class storage

*-usb:1
description: Mass storage device
product: USB Mass Storage
vendor: JMicron
physical id: 4
bus info: usb@2:4
logical name: scsi4
version: 1.00
serial: 152D00579000
capabilities: usb-2.10 scsi emulated scsi-host
configuration: driver=usb-storage maxpower=34mA speed=480Mbit/s
*-disk
description: SCSI Disk
product: JMS579
vendor: JMICRON
physical id: 0.0.0
bus info: scsi@4:0.0.0
logical name: /dev/sdb
configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
sudo hdparm -I /dev/sdb

/dev/sdb:
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

ATA device, with non-removable media
Standards:
Likely used: 1
Configuration:
Logical max current
cylinders 0 0
heads 0 0
sectors/track 0 0
--
Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 0 MBytes
device size with M = 1000*1000: 0 MBytes 
cache/buffer size = unknown
Capabilities:
IORDY not likely
Cannot perform double-word IO
R/W multiple sector transfer: not supported
DMA: not supported
PIO: pio0 
sudo smartctl -a -d scsi /dev/sdb

=== START OF INFORMATION SECTION ===
Vendor: JMICRON
Product: JMS579
Compliance: SPC-4
Device type: disk
Local Time is: Fri Jun 22 23:07:23 2018 IST
device Test Unit Ready [unsupported scsi opcode]
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

fdisk -l

Fdisk doesn’t show any result for this disk, as it is not mounted anywhere.

sudo dmesg

[141307.332889] usb 2-4: USB disconnect, device number 5
[141310.499914] usb 2-4: new high-speed USB device number 7 using xhci_hcd
[141310.628540] usb 2-4: New USB device found, idVendor=152d, idProduct=0579
[141310.628544] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[141310.628547] usb 2-4: Product: USB Mass Storage
[141310.628549] usb 2-4: Manufacturer: JMicron
[141310.628551] usb 2-4: SerialNumber: 152D00579000
[141310.629107] usb-storage 2-4:1.0: USB Mass Storage device detected
[141310.629201] scsi host4: usb-storage 2-4:1.0
[141311.628514] scsi 4:0:0:0: Direct-Access     JMICRON  JMS579                PQ: 0 ANSI: 6
[141311.629170] sd 4:0:0:0: Attached scsi generic sg2 type 0
[141311.629942] sd 4:0:0:0: [sdb] Unit Not Ready
[141311.629953] sd 4:0:0:0: [sdb] Sense Key : Illegal Request [current] 
[141311.629960] sd 4:0:0:0: [sdb] Add. Sense: Invalid command operation code
[141311.632053] sd 4:0:0:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[141311.632064] sd 4:0:0:0: [sdb] Sense Key : Illegal Request [current] 
[141311.632072] sd 4:0:0:0: [sdb] Add. Sense: Invalid command operation code
[141311.632253] sd 4:0:0:0: [sdb] Write Protect is off
[141311.632261] sd 4:0:0:0: [sdb] Mode Sense: 00 00 00 00
[141311.632435] sd 4:0:0:0: [sdb] Asking for cache data failed
[141311.632441] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[141311.635917] sd 4:0:0:0: [sdb] Unit Not Ready
[141311.635927] sd 4:0:0:0: [sdb] Sense Key : Illegal Request [current] 
[141311.635935] sd 4:0:0:0: [sdb] Add. Sense: Invalid command operation code
[141311.639186] sd 4:0:0:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[141311.639197] sd 4:0:0:0: [sdb] Sense Key : Illegal Request [current] 
[141311.639205] sd 4:0:0:0: [sdb] Add. Sense: Invalid command operation code
[141311.639534] sd 4:0:0:0: [sdb] Attached SCSI disk
[141594.937486] EXT4-fs (sdb): unable to read superblock
[141594.937770] EXT4-fs (sdb): unable to read superblock
[141594.938048] EXT4-fs (sdb): unable to read superblock
[141594.938335] SQUASHFS error: squashfs_read_data failed to read block 0x0
[141594.938337] squashfs: SQUASHFS error: unable to read squashfs_super_block

no record for sdb in /proc/partitions

Here is the output of various gdisk commands that I tried:

sudo gdisk
GPT fdisk (gdisk) version 1.0.1

Type device filename, or press <Enter> to exit: /dev/sdb
Problem reading disk in BasicMBRData::ReadMBRData()!
Warning! Read error 22; strange behavior now likely!
Warning! Read error 22; strange behavior now likely!
Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by
typing 'q' if you don't want to convert your MBR partitions
to GPT format!
***************************************************************

Command (? for help): i 
no partitions

Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): Y

Command (? for help): p
Disk /dev/sdb: 0 sectors, 0 bytes
Logical sector size: 512 bytes
Disk identifier (GUID): ACBB4EFC-7AE9-4C9B-B804-DA09D936163D
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 18446744073709551582
Partitions will be aligned on 2048-sector boundaries
Total free space is 0 sectors (0 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name

Command (? for help): v

Problem: Disk is too small to hold all the data!
(Disk size is 0 sectors, needs to be 0 sectors.)
The 'e' option on the experts' menu may fix this problem.

Problem: GPT claims the disk is larger than it is! (Claimed last usable
sector is 18446744073709551582, but backup header is at
18446744073709551615 and disk size is 0 sectors.
The 'e' option on the experts' menu will probably fix this problem

Partition(s) in the protective MBR are too big for the disk! Creating a
fresh protective or hybrid MBR is recommended.

Identified 3 problems!

Command (? for help): x

Expert command (? for help): e
Relocating backup data structures to the end of the disk

Expert command (? for help): z
About to wipe out GPT on /dev/sdb. Proceed? (Y/N): Y
Warning! GPT main header not overwritten! Error is 28
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.

Expert command (? for help): p
Disk /dev/sdb: 0 sectors, 0 bytes
Logical sector size: 512 bytes
Disk identifier (GUID): 4B3EC7B7-2E9E-4933-885C-0CF09BFBE24C
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 18446744073709551582
Partitions will be aligned on 2048-sector boundaries
Total free space is 0 sectors (0 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name

Expert command (? for help): w
Caution! Secondary header was placed beyond the disk's limits! Moving the
header, but other problems may occur!
Warning! The claimed last usable sector is incorrect! Do you want to correct
this problem? (Y/N): Y
Have adjusted the second header and last usable sector value.

Partition(s) in the protective MBR are too big for the disk! Creating a
fresh protective or hybrid MBR is recommended.

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sdb.
Unable to save backup partition table! Perhaps the 'e' option on the experts'
menu will resolve this problem.
Warning! An error was reported when writing the partition table! This error
MIGHT be harmless, or the disk might be damaged! Checking it is advisable.

I also tried to fix it using the Windows system.
It shows Unknown, not initialized in Disk Management.
I tried Diskpart as well; here is the output for various commands under that:

clean: 
DiskPart succeeded in cleaning the disk

recover: 
Virtual Disk Service error:
The disk is not initialized

convert gpt: 
Virtual Disk Service error:
The system's information about the object may not be up to date

DiskPart has referenced an object which is not up-to-date.
Refresh the object by using the RESCAN command. 
If the problem persists exit DiskPart, then restart DiskPart or restart the computer.

rescan:
Please wait while DiskPart scans your configuration...
Diskpart has finished scanning your configuration. 

convert mbr: this one didn't work either.

I tried EaseUS as well; it couldn’t detect the drive.

Any help is highly appreciated, thanks in advance.



#StackBounty: #linux #luks #desktop Luks+Sleep: Login screen security?

Bounty: 50

Situation: a desktop Linux (e.g. Debian, Xfce desktop, LightDM login) with LUKS-encrypted partitions (as far as possible; e.g. the EFI files are not encrypted, of course).
The computer is in sleep mode (not hibernate, i.e. LUKS is unlocked and the key is in RAM).

Now a thief steals the computer and wants to find a way in.

  • Anything that involves turning it off will of course not help, because of the disk encryption.
  • Installing hardware keyloggers, replacing the bootloader/EFI with something malicious, and similar things won’t help, because the owner knows it is stolen and it can’t be trusted.
  • Elaborate attacks that e.g. read the keys directly from RAM through some means are outside of the thief’s capabilities and/or a risk that the owner accepts for being able to use sleep instead of shutdown.
  • That leaves the risk that the login screen (of LightDM) can be bypassed somehow, given the already running desktop behind it.

My question is: what things do I need to be aware of to prevent this?

The following points I already know:

  • Switching terminals (Ctrl+Alt+Fn).
    • If the GUI is started with startx, this allows getting a TTY where the user is already logged in. However, there is no such TTY when using LightDM.
    • There is also a GUI screen which just displays “This session is locked, will switch to login in a few seconds” (or a similar message). However, it doesn’t appear that there is an easy way to break out from that.
  • The X server has a DontZap config option which allows killing X with the shortcut Ctrl+Alt+Backspace. This might help on the “locked” screen or even in LightDM; however, it is disabled by default, so no problem.
  • There is another X shortcut, Ctrl+Alt+* (star) (config AllowClosedownGrabs), which kills all processes that hold a “lock” (whatever lock this means). This too is disabled by default.
  • The kernel SysRq shortcut F for the OOM killer. It can be disabled, and maybe the two GUIs are among the processes protected against it (I tried about 50 times and failed to kill LightDM, just not sure about the exact reason).

What other risks might there be in 2018?



#StackBounty: #linux #storage #rhel #volume Volume Management: How to move space from one partition to another?

Bounty: 50

I am setting up a Red Hat EC2 instance, and by default the software I am using created the following volumes on the two 500 GB EBS storage devices attached to the instance:

$ lvs
  LV        VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  storetmp  rootrhel  -wi-ao----   20.00g                                                    
  varlog    rootrhel  -wi-ao----  <20.00g                                                    
  store     storerhel -wi-ao---- <348.80g                                                    
  transient storerhel -wi-ao----  <87.20g 
$ df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/xvda2                       500G  1.4G  499G   1% /
devtmpfs                          16G     0   16G   0% /dev
tmpfs                             16G     0   16G   0% /dev/shm
tmpfs                             16G   17M   16G   1% /run
tmpfs                             16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/storerhel-store      349G   33M  349G   1% /store
/dev/mapper/storerhel-transient   88G   33M   88G   1% /transient
/dev/mapper/rootrhel-storetmp     20G   33M   20G   1% /storetmp
/dev/mapper/rootrhel-varlog       20G   35M   20G   1% /var/log
tmpfs                            3.2G     0  3.2G   0% /run/user/1000

I need my storetmp to be 100G. How can I move 80G of storage from store to storetmp?

It also seems that I may need to shift some space from xvdb3 to xvdb2:

# lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
xvda                    202:0    0   500G  0 disk 
├─xvda1                 202:1    0     1M  0 part 
└─xvda2                 202:2    0   500G  0 part /
xvdb                    202:16   0   500G  0 disk 
├─xvdb1                 202:17   0    24G  0 part [SWAP]
├─xvdb2                 202:18   0    40G  0 part 
│ ├─rootrhel-varlog     253:2    0    20G  0 lvm  /var/log
│ └─rootrhel-storetmp   253:3    0    20G  0 lvm  /storetmp
└─xvdb3                 202:19   0   436G  0 part 
  ├─storerhel-store     253:0    0 348.8G  0 lvm  /store
  └─storerhel-transient 253:1    0  87.2G  0 lvm  /transient
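For reference, note that store and storetmp live in different volume groups (storerhel vs rootrhel), so space cannot be moved between them with lvextend alone. The sequence below is only a sketch of how the shrink-then-grow step would look if both LVs shared one volume group; it is destructive if mistyped, assumes an ext4 filesystem (XFS cannot be shrunk), and should be run only with backups in place:

```shell
# 1. Shrink the big LV and its filesystem by 80G in one step
#    (--resizefs shrinks the filesystem before reducing the LV).
lvreduce --resizefs -L -80G /dev/storerhel/store

# 2. Grow the target LV and its filesystem into the freed extents
#    (this only works when both LVs share a volume group).
lvextend --resizefs -L +80G /dev/rootrhel/storetmp
```

Actually moving extents between the two VGs here would additionally require shrinking the storerhel physical volume (pvresize) and repartitioning xvdb, which is considerably riskier.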



#StackBounty: #linux #kernel #boot #linux-kernel #kernel-parameters Linux Modify/Add Kernel Command Line from InitramFS "UserSpace…

Bounty: 100

I am developing an embedded Linux device. I have successfully created an InitramFS CPIO archive that runs quickly after boot. Now, I want to change the initial kernel command line to include the “quiet” parameter so I can boot even faster.

However, once the splash screen is displayed in the InitramFS, I want to remove the quiet option for the kernel so the remainder of the boot is NOT quiet.

How can I achieve this? How can I reverse the initial “quiet” kernel command line option once I’ve reached the InitramFS?

Thanks.
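For reference: the quiet parameter works by lowering the kernel’s console loglevel, and that loglevel can be raised again at runtime, so the command line itself does not need to be rewritten. A sketch of one way to do it (the restore commands must run as root, e.g. from the initramfs script once the splash is up):

```shell
# The four fields are: current console loglevel, default message
# loglevel, minimum console loglevel, boot-time default loglevel.
# "quiet" leaves the first field lowered.
cat /proc/sys/kernel/printk

# Restore verbose console output after "quiet" (run as root); either
# of the following raises the console loglevel back to 7 (debug):
#   dmesg -n 7
#   echo 7 > /proc/sys/kernel/printk
```

This makes all subsequent kernel messages appear on the console again; messages suppressed earlier in boot remain retrievable via dmesg.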

