#StackBounty: #storage #sas #lsi #broadcom #storcli How to ensure SAS-3 wide port configuration is enabled?

Bounty: 250

I have:

  • A commodity Linux server, in which I have installed:
  • 1x Broadcom (LSI) 9500-8e HBA storage adapter, providing 2x mini-SAS-HD connectors, connected to:
  • 1x SAS-3-capable storage enclosure with 2x mini-SAS-HD input connectors, using:
  • 2x mini-SAS-HD cables running between the controller and the enclosure.

The enclosure and disks work just fine with a single cable attached, but they can optionally be connected with two. The enclosure's manual recommends using two cables:

[Image: excerpt from the enclosure manual recommending the dual-cable connection.]

The enclosure I have does not support high-availability (multi-path) mode, and my question is not about HA; it is about port configuration and bandwidth.

I’d like to know:

  • Main question: how do I know the controller and enclosure actually have all 8x 12 Gbit/s lanes active (as provided by the 2x mini-SAS-HD connectors, 4 lanes each), i.e. a wide port configuration?
  • Secondary question: how can I tell that both cables are connected? For monitoring purposes, I would like to detect a cable failure, similar to an Ethernet link status.

What I do know:

The Broadcom StorCLI reference manual suggests looking at the "load balance" mode status/configuration:

set loadbalancemode =[on|off] Enables (on) or disables (off) automatic load balancing between SAS phys or ports in a wide port configuration.

However, StorCLI reports neither on nor off; it reports "None" in the Policies Table:

Load Balance Mode = None
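
For what it's worth, here is a minimal sketch of how one might inspect and toggle that policy with StorCLI, assuming controller /c0; the set syntax is taken from the reference manual quoted above, and the binary may be named storcli or storcli64 depending on the package:

# Show the controller policies, including "Load Balance Mode"
storcli64 /c0 show all | grep -i "load balance"

# Try to enable load balancing between phys in a wide port
# (syntax as quoted from the StorCLI reference manual)
storcli64 /c0 set loadbalancemode=on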

Listing the drives in the enclosure does show two connectors, though, so that looks okay to me:

Drive /c0/e18/s0 Device attributes :
==================================
Manufacturer Id = HGST    
[...]
Device Speed = 12.0Gb/s
Link Speed = 12.0Gb/s
[...]
Connector Name = C0   & C1   

However, all of the disks, as well as the enclosure itself, appear in the kernel messages with just the C0 connector, e.g.: scsi 5:0:39:0: enclosure level(0x0001), connector name( C0 ). 😕
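
One way to cross-check this from the Linux side, independent of StorCLI, is the SAS transport class in sysfs: each sas_port exposes how many phys it aggregates, and each sas_phy exposes its negotiated link rate. This is only a sketch, assuming the mpt3sas driver registers these objects; exact path names vary by kernel and topology:

# Phys per SAS port: a x4 wide port per connector should report 4,
# and a single 8-wide port to the enclosure would report 8.
grep -H . /sys/class/sas_port/port-*/num_phys

# Negotiated link rate per phy: every active lane should report "12.0 Gbit";
# a failed cable typically shows up as missing phys or an "unknown" rate.
grep -H . /sys/class/sas_phy/phy-*/negotiated_linkrate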


Get this bounty!!!

#StackBounty: #postgresql #amazon-rds #postgresql-9.6 #aws #storage Postgres Database flagged as STORAGE FULL whereas there's 45 Gb…

Bounty: 50

I will start this post by stating that I am not very experienced in database administration; please accept my apologies for any unclear explanation you may find below.

We have a replica PostgreSQL instance hosted on AWS RDS which stopped replicating last week. The instance was flagged as "Storage-full".

However, when looking at the free storage space in CloudWatch, we realized there were still roughly 46 GB available on the 50 GB allocated instance. We increased the allocated space to 60 GB and everything went back to normal, but we knew the issue would come back, and it did.

The primary instance from which this one is replicated has autovacuum enabled. It is my understanding that any writes resulting from vacuuming on the primary are replayed on the replica, so this is probably not a vacuum issue.

Looking at CloudWatch metrics, there is no indication of any problem that might cause this.

It appears logs could be the issue here, but I don't know where to look to investigate this option.
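
As a hedged starting point (the instance identifier, host name and time range below are placeholders), one could compare the RDS TransactionLogsDiskUsage metric against the reported free storage, and check the primary for an inactive replication slot that keeps WAL around:

# Transaction-log (WAL) disk usage on the affected instance over one day
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name TransactionLogsDiskUsage \
  --dimensions Name=DBInstanceIdentifier,Value=my-replica \
  --start-time 2021-01-01T00:00:00Z --end-time 2021-01-02T00:00:00Z \
  --period 3600 --statistics Maximum

# On the primary: an inactive replication slot retains WAL indefinitely
psql -h my-primary.example.com -U postgres \
     -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"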

I will edit the question with any relevant information you might suggest in the comments.
Thanks for your help.


Get this bounty!!!
