#StackBounty: #mysql #replication #mysql-5.6 #linux #master-slave-replication Replication doesn't resume once it fails to connect

Bounty: 50

We have MySQL replication running for our application. Right now the slave fails to resume replication if it cannot connect to the master for some time.

We get the error below:

[ERROR] Slave I/O: error connecting to master 'replication-user@master-ip:3306' - retry-time: 60  retries: 186, Error_code: 1045

This typically happens because of a power failure or network problems. Once the issue is fixed and the slave can connect to the master again, it tries to resume, but most of the time it fails with the error below:

[ERROR] Slave SQL: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master’s binary log is corrupted (you can check this by running ‘mysqlbinlog’ on the binary log), the slave’s relay log is corrupted (you can check this by running ‘mysqlbinlog’ on the relay log), a network problem, or a bug in the master’s or slave’s MySQL code. If you want to check the master’s binary log or slave’s relay log, you will be able to know their names by issuing ‘SHOW SLAVE STATUS’ on this slave. Error_code: 1594

I followed this question and solved the problem by resetting the slave to the last-read relay-bin file and position. The problem is that these connection failures have become very frequent, and we need to repeat the same fix pretty often.
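
For reference, this is roughly what that reset looks like on the slave, assuming the usual approach of re-pointing it at the master coordinates it has already executed (Relay_Master_Log_File and Exec_Master_Log_Pos from SHOW SLAVE STATUS); the file name and position below are placeholders:

# Stop replication and note the master coordinates the SQL thread last executed
mysql -e "STOP SLAVE; SHOW SLAVE STATUS\G" | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'

# Re-point the slave at those coordinates (CHANGE MASTER TO with a file/position also
# discards the existing, possibly corrupted, relay logs), then restart replication
mysql -e "CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456789; START SLAVE;"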

What could be the reason for this occurring over and over again? Is there a way to overcome this problem?

Note: We are using MySQL 5.6.30.


Get this bounty!!!

#StackBounty: #linux #virtualhost #asp.net #apache2 #mono Strange directory access in apache with multiple servicestack applications ho…

Bounty: 50

I have a web server (Linux, Ubuntu 16.04) running Apache. I use it to host several ASP.NET applications on Mono, developed using the ServiceStack framework. Here is my vhost configuration:

<VirtualHost *:443>
    ServerName myhost

    ServerAdmin me@myhost
    DocumentRoot /var/www/

    ErrorLog ${APACHE_LOG_DIR}/myhost-error.log
    CustomLog ${APACHE_LOG_DIR}/myhost-access.log combined

    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/myhost/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/myhost/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/myhost/fullchain.pem

    Header always set Strict-Transport-Security "max-age=15768000"

    <Directory /var/www>
       AllowOverride None
       deny from all
    </Directory>

    # Configure the myservice backend and frontend

    <Directory /var/www/myservice/backend>
       AllowOverride None
       Order allow,deny
       allow from all
    </Directory>

    Alias /myservice "/var/www/myservice/frontend"
    Alias /csc "/var/www/myservice/frontend"
    <Directory /var/www/myservice/frontend>
       AllowOverride None
       Order allow,deny
       allow from all
    </Directory>

    MonoMaxActiveRequests 150 
    MonoMaxWaitingRequests 150 
    MonoSetEnv MONO_THREADS_PER_CPU=100

    MonoServerPath "/usr/bin/mod-mono-server4"
    MonoServerPath backend "/usr/bin/mod-mono-server4"
    MonoApplications backend "/myservice/backend:/var/www/myservice/backend"
    KeepAliveTimeout 5
    Alias /myservice/backend "/var/www/myservice/backend"

    <Location /myservice/backend>
       Allow from all
       Order allow,deny
       MonoSetServerAlias backend
       SetHandler mono
    </Location>
    <Directory /var/www/myservice/backend>
       AllowOverride None
       Order allow,deny
       allow from all
    </Directory>

    # Configure the test sites for the myservice

    <Directory /var/www/test/myservice/backend>
       AllowOverride None
       Order allow,deny
       allow from all
    </Directory>

    Alias /test/myservice "/var/www/test/myservice/frontend"
    Alias /test/csc "/var/www/test/myservice/frontend"
    <Directory /var/www/test/myservice/frontend>
       AllowOverride None
       Order allow,deny
       allow from all
    </Directory>

    MonoServerPath test_backend "/usr/bin/mod-mono-server4"
    MonoApplications test_backend "/test/myservice/backend:/var/www/test/myservice/backend"

    <Location /test/myservice/backend>
       Allow from all
       Order allow,deny
       MonoSetServerAlias test_backend
       SetHandler mono
    </Location>


    # Configure WebDav access

    Alias /webdav "/var/www/webdav"
    <Location /webdav>
       Options Indexes
       DAV On
       AuthType Basic
       AuthName "webdav"
       AuthUserFile /etc/apache2/webdav.password
       Require valid-user
       Order allow,deny
       allow from all
    </Location>
</VirtualHost>

This works, more or less, but it still produces errors in the Apache logs:

==> /var/log/apache2/myhost-error.log <==
[Tue Jun 13 09:00:27.874100 2017] [access_compat:error] [pid 62595:tid 140403123173120] [client 1.2.3.4:53342] AH01797: client denied by server configuration: /var/www/items, referer: https://myhost/csc/

==> /var/log/apache2/myhost-access.log <==
1.2.3.4 - - [13/Jun/2017:09:00:27 +0200] "GET /myservice/backend/items/42 HTTP/1.1" 200 578 "https://myhost/csc/" "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; SD; rv:11.0) like Gecko"

So the client requests a valid backend route (/myservice/backend/items/42) via the frontend (myhost/csc) and gets a correct result from the service, but for some reason Apache first tries to access that item directly under the document root (/var/www/items).
Does anybody see where this error is coming from?
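
One thing that might help narrow it down is to check whether the page is also issuing a second, un-aliased request such as /items/42, which would map straight to /var/www/items under the DocumentRoot; a rough sketch, reusing the host and paths from the logs above (the /items/42 URL is only a guess):

# Watch the error log while reproducing both request shapes; the second URL is a
# hypothetical un-prefixed request that would make Apache look for /var/www/items.
sudo tail -f /var/log/apache2/myhost-error.log &
curl -ks -o /dev/null "https://myhost/myservice/backend/items/42"
curl -ks -o /dev/null -H "Referer: https://myhost/csc/" "https://myhost/items/42"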


Get this bounty!!!

#StackBounty: #linux #ubuntu #grub GRUB can't see what os-prober found

Bounty: 100

There is an HDD and an SSD.

During the Lubuntu installation on the HDD, the installer put GRUB on the HDD, and os-prober created a menu entry for the SSD’s Windows partition.

But GRUB can’t see it on boot:

Error: no such device: 5CD2C8C949DA73C

The menu entry is:

menuentry 'Windows 8 (loader) (on /dev/sdb1)' --class windows --class os $menuentry_id_option 'osprober-chain-5CD2C8C949DA73C' {
    insmod part_msdos
    insmod ntfs
    set root='hd1,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
        search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 5CD2C8C949DA73C
    else
        search --no-floppy --fs-uuid --set=root 5CD2C8C949DA73C
    fi
    parttool ${root} hidden-
    drivemap -s (hd0) ${root}
    chainloader +1
}
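
Since those search lines locate the Windows partition by filesystem UUID, a first sanity check is to compare the UUID the running system reports with the one os-prober wrote into grub.cfg (a sketch; /dev/sdb1 is taken from the menu entry title and may differ):

# UUID/volume serial as seen by the running system vs. the value GRUB searches for
sudo blkid /dev/sdb1
grep -n '5CD2C8C949DA73C' /boot/grub/grub.cfg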

Factors to consider:

  1. This is an MBR system.
  2. The SSD is inside a caddy.
  3. BIOS recognizes the SSD on POST as my secondary drive:
    Fixed Disk 0: HITACHI HTS.........300
    Fixed Disk 1: Samsung SSD 850 Evo 120GB
    
  4. Every OS and bootable utility including GRUB’s os-prober can see the SSD.
  5. BIOS does not show the SSD in the boot options (we’re not planning to boot from it).

Note: Workarounds like “making the SSD the boot drive” or “putting the SSD in the primary slot” are not acceptable, for various reasons.

UPDATE: I was asked about the “various reasons” in the comments. This is a ThinkPad E15. “Making the SSD the boot drive” is simply impossible, because the BIOS does not show the SSD in the boot options: this is one of the ThinkPads without ultra-bay support, and perhaps because of that it is not designed to boot from a secondary drive, so despite detecting the SSD on POST as a fixed disk, it does not list it in the boot options. It only tries booting it as a CD-ROM, and that doesn’t work. Also, one reason for not “putting the SSD in the primary slot” is that I want shock protection for my HDD, and again this ThinkPad does not support that for a secondary drive, according to this site.


Get this bounty!!!

#StackBounty: #linux #filesystems #data-recovery #ext3 Recovering a file on ext3

Bounty: 100

This is actually a CTF challenge: the Enigma 2017 practice at hackcenter.com.
We have to recover a deleted file on an ext3 image.
I am following this tutorial.

The inode is 1036.
istat gives Group 0

fsstat undelete.img
Group: 0:
  Inode Range: 1 - 1280
  ...
  Inode Table: 24 - 183
  ...

From this, the inode table spans 160 blocks (24–183) and each 1024-byte block holds 8 of the 128-byte inodes.
Inode 1036 is therefore in block 153 and is the 4th entry in that block.

This is confirmed by

debugfs -R 'imap <1036>' undelete.img 
debugfs 1.43.4 (31-Jan-2017)
Inode 1036 is part of block group 0
    located at block 153, offset 0x0180
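
For completeness, the arithmetic behind that block and offset, using the figures from fsstat above (1280 inodes in group 0, inode table at blocks 24–183, 128-byte inodes, 8 per 1024-byte block):

echo $(( 24 + (1036 - 1) / 8 ))                    # -> 153, block holding inode 1036
printf '0x%04x\n' $(( ((1036 - 1) % 8) * 128 ))    # -> 0x0180, offset inside that block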

jls undelete.img | grep 153$
46: Unallocated FS Block 2153
206:    Unallocated FS Block 153
214:    Unallocated FS Block 153
224:    Unallocated FS Block 153
680:    Unallocated FS Block 4153


jcat undelete.img 8 206 | dd bs=128 skip=3 count=1 | xxd
1+0 records in
1+0 records out
00000000: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000040: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000060: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000070: 0000 0000 0000 0000 0000 0000 0000 0000  ................
128 bytes copied, 0,00719467 s, 17,8 kB/s


jcat undelete.img 8 214 | dd bs=128 skip=3 count=1 | xxd
1+0 records in
1+0 records out
00000000: a481 0000 2000 0000 4d70 8b58 4d70 8b58  .... ...Mp.XMp.X
00000010: 4d70 8b58 0000 0000 0000 0100 0200 0000  Mp.X............
00000020: 0000 0000 0100 0000 ef08 0000 0000 0000  ................
00000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000040: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000060: 0000 0000 17ea 60e7 0000 0000 0000 0000  ......`.........
00000070: 0000 0000 0000 0000 0000 0000 0000 0000  ................
128 bytes copied, 0,00714798 s, 17,9 kB/s


jcat undelete.img 8 224 | dd bs=128 skip=3 count=1 | xxd
1+0 records in
1+0 records out
00000000: a481 0000 0000 0000 4d70 8b58 4d70 8b58  ........Mp.XMp.X
00000010: 4d70 8b58 4d70 8b58 0000 0000 0000 0000  Mp.XMp.X........
00000020: 0000 0000 0100 0000 0000 0000 0000 0000  ................
00000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000040: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000060: 0000 0000 17ea 60e7 0000 0000 0000 0000  ......`.........
00000070: 0000 0000 0000 0000 0000 0000 0000 0000  ................
128 bytes copied, 0,00556548 s, 23,0 kB/s

The only direct block pointer I can find is 0x8ef (2287) at offset 40. The block size (1024) was reported by fsstat. But

dd bs=1024 skip=2287 count=1 if=undelete.img | xxd

gives only zeros.

I do not know what is wrong.
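
As a cross-check on the pointer decoding: the twelve direct block pointers sit at bytes 40–87 of the 128-byte inode, so they can be printed as little-endian words straight from the journal copy in block 214 (a sketch; assumes an xxd new enough to support -e):

# Extract bytes 40-87 of the inode (the 12 direct block pointers) and print them as
# little-endian 32-bit values; the first word should come out as 000008ef.
jcat undelete.img 8 214 | dd bs=128 skip=3 count=1 2>/dev/null \
  | dd bs=1 skip=40 count=48 2>/dev/null | xxd -e -g 4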


Get this bounty!!!
