#StackBounty: #csv #postgresql #r #sql How to select on CSV files like SQL in R?

Bounty: 50

I know the thread "How can I inner join two csv files in R", which offers a merge option that I do not want.
I have two CSV data files and I am wondering how to query them like SQL with R.
I really like PostgreSQL, so either it or R tools with similar syntax would work great here.
Two CSV files, joined on data_id (which is unique only in log.csv).

data.csv, where it is OK to have IDs not found in log.csv (e.g. 4)

data_id, event_value
1, 777
1, 666
2, 111
4, 123 
3, 324
1, 245

log.csv, where there are no duplicates in the ID column, but duplicates can occur in name

data_id, name
1, leo
2, leopold
3, lorem

Pseudocode (in partial PostgreSQL syntax)

  1. Let data_id=1
  2. Show name and event_value from data.csv and log.csv, respectively

Pseudocode as a partial PostgreSQL SELECT

SELECT name, event_value 
    FROM data, log
    WHERE data_id=1;

Expected output

leo, 777
leo, 666 
leo, 245

R approach

# the CSV files are comma separated and have a header row
file1 <- read.csv("data.csv", strip.white=TRUE)   # data_id, event_value
file2 <- read.csv("log.csv", strip.white=TRUE)    # data_id, name

# TODO here something like the SQL query
# http://stackoverflow.com/a/1307824/54964

Possible approaches (I think sqldf would be sufficient here):

  1. sqldf
  2. data.table
  3. dplyr
  4. PostgreSQL database

PostgreSQL thoughts

Schema

DROP TABLE IF EXISTS data, log;
CREATE TABLE data (
        data_id INTEGER NOT NULL,      -- duplicate IDs are allowed here, so no PRIMARY KEY
        event_value INTEGER NOT NULL
);
CREATE TABLE log (
        data_id SERIAL PRIMARY KEY,
        name TEXT NOT NULL
);
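
With that schema populated from the two CSV files (e.g. via COPY ... WITH (FORMAT csv, HEADER)), the pseudocode above becomes an ordinary inner join. A minimal sketch:

SELECT l.name, d.event_value
FROM   data d
JOIN   log  l ON l.data_id = d.data_id
WHERE  d.data_id = 1;
-- leo, 777
-- leo, 666
-- leo, 245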

R: 3.3.3
OS: Debian 8.7



#StackBounty: #postgresql #indexing #database-performance #postgresql-performance #postgresql-9.6 Slow nested loop left join with index…

Bounty: 100

I am really struggling to optimize my query…

So here is the query:

SELECT wins / (wins + COUNT(loosers.match_id) + 0.) winrate, wins + COUNT(loosers.match_id) matches, winners.winning_champion_one_id, winners.winning_champion_two_id, winners.winning_champion_three_id, winners.winning_champion_four_id, winners.winning_champion_five_id FROM
(
   SELECT COUNT(match_id) wins, winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id FROM matches
   WHERE
      157 IN (winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id)
   GROUP BY winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id
) winners
LEFT OUTER JOIN matches loosers ON
  winners.winning_champion_one_id = loosers.loosing_champion_one_id AND
  winners.winning_champion_two_id = loosers.loosing_champion_two_id AND
  winners.winning_champion_three_id = loosers.loosing_champion_three_id AND
  winners.winning_champion_four_id = loosers.loosing_champion_four_id AND
  winners.winning_champion_five_id = loosers.loosing_champion_five_id
GROUP BY winners.wins, winners.winning_champion_one_id, winners.winning_champion_two_id, winners.winning_champion_three_id, winners.winning_champion_four_id, winners.winning_champion_five_id
HAVING wins + COUNT(loosers.match_id) >= 20
ORDER BY winrate DESC, matches DESC
LIMIT 1;

And this is the output of EXPLAIN (BUFFERS, ANALYZE):

Limit  (cost=72808.80..72808.80 rows=1 width=58) (actual time=1478.749..1478.749 rows=1 loops=1)
  Buffers: shared hit=457002
  ->  Sort  (cost=72808.80..72837.64 rows=11535 width=58) (actual time=1478.747..1478.747 rows=1 loops=1)
"        Sort Key: ((((count(matches.match_id)))::numeric / ((((count(matches.match_id)) + count(loosers.match_id)))::numeric + '0'::numeric))) DESC, (((count(matches.match_id)) + count(loosers.match_id))) DESC"
        Sort Method: top-N heapsort  Memory: 25kB
        Buffers: shared hit=457002
        ->  HashAggregate  (cost=72462.75..72751.12 rows=11535 width=58) (actual time=1448.941..1478.643 rows=83 loops=1)
"              Group Key: (count(matches.match_id)), matches.winning_champion_one_id, matches.winning_champion_two_id, matches.winning_champion_three_id, matches.winning_champion_four_id, matches.winning_champion_five_id"
              Filter: (((count(matches.match_id)) + count(loosers.match_id)) >= 20)
              Rows Removed by Filter: 129131
              Buffers: shared hit=457002
              ->  Nested Loop Left Join  (cost=9857.76..69867.33 rows=115352 width=26) (actual time=288.086..1309.687 rows=146610 loops=1)
                    Buffers: shared hit=457002
                    ->  HashAggregate  (cost=9857.33..11010.85 rows=115352 width=18) (actual time=288.056..408.317 rows=129214 loops=1)
"                          Group Key: matches.winning_champion_one_id, matches.winning_champion_two_id, matches.winning_champion_three_id, matches.winning_champion_four_id, matches.winning_champion_five_id"
                          Buffers: shared hit=22174
                          ->  Bitmap Heap Scan on matches  (cost=1533.34..7455.69 rows=160109 width=18) (actual time=26.618..132.844 rows=161094 loops=1)
                                Recheck Cond: ((157 = winning_champion_one_id) OR (157 = winning_champion_two_id) OR (157 = winning_champion_three_id) OR (157 = winning_champion_four_id) OR (157 = winning_champion_five_id))
                                Heap Blocks: exact=21594
                                Buffers: shared hit=22174
                                ->  BitmapOr  (cost=1533.34..1533.34 rows=164260 width=0) (actual time=22.190..22.190 rows=0 loops=1)
                                      Buffers: shared hit=580
                                      ->  Bitmap Index Scan on matches_winning_champion_one_id_index  (cost=0.00..35.03 rows=4267 width=0) (actual time=0.045..0.045 rows=117 loops=1)
                                            Index Cond: (157 = winning_champion_one_id)
                                            Buffers: shared hit=3
                                      ->  Bitmap Index Scan on matches_winning_champion_two_id_index  (cost=0.00..47.22 rows=5772 width=0) (actual time=0.665..0.665 rows=3010 loops=1)
                                            Index Cond: (157 = winning_champion_two_id)
                                            Buffers: shared hit=13
                                      ->  Bitmap Index Scan on matches_winning_champion_three_id_index  (cost=0.00..185.53 rows=22840 width=0) (actual time=3.824..3.824 rows=23893 loops=1)
                                            Index Cond: (157 = winning_champion_three_id)
                                            Buffers: shared hit=89
                                      ->  Bitmap Index Scan on matches_winning_champion_four_id_index  (cost=0.00..537.26 rows=66257 width=0) (actual time=8.069..8.069 rows=67255 loops=1)
                                            Index Cond: (157 = winning_champion_four_id)
                                            Buffers: shared hit=244
                                      ->  Bitmap Index Scan on matches_winning_champion_five_id_index  (cost=0.00..528.17 rows=65125 width=0) (actual time=9.577..9.577 rows=67202 loops=1)
                                            Index Cond: (157 = winning_champion_five_id)
                                            Buffers: shared hit=231
                    ->  Index Scan using matches_loosing_champion_ids_index on matches loosers  (cost=0.43..0.49 rows=1 width=18) (actual time=0.006..0.006 rows=0 loops=129214)
                          Index Cond: ((matches.winning_champion_one_id = loosing_champion_one_id) AND (matches.winning_champion_two_id = loosing_champion_two_id) AND (matches.winning_champion_three_id = loosing_champion_three_id) AND (matches.winning_champion_four_id = loosing_champion_four_id) AND (matches.winning_champion_five_id = loosing_champion_five_id))
                          Buffers: shared hit=434828
Planning time: 0.584 ms
Execution time: 1479.779 ms

This is the DDL:

create table matches
(
    match_id bigint not null,
    winning_champion_one_id smallint,
    winning_champion_two_id smallint,
    winning_champion_three_id smallint,
    winning_champion_four_id smallint,
    winning_champion_five_id smallint,
    loosing_champion_one_id smallint,
    loosing_champion_two_id smallint,
    loosing_champion_three_id smallint,
    loosing_champion_four_id smallint,
    loosing_champion_five_id smallint,
    constraint matches_match_id_pk
        primary key (match_id)
)
;

create index matches_winning_champion_one_id_index
    on matches (winning_champion_one_id)
;

create index matches_winning_champion_two_id_index
    on matches (winning_champion_two_id)
;

create index matches_winning_champion_three_id_index
    on matches (winning_champion_three_id)
;

create index matches_winning_champion_four_id_index
    on matches (winning_champion_four_id)
;

create index matches_winning_champion_five_id_index
    on matches (winning_champion_five_id)
;

create index matches_loosing_champion_ids_index
    on matches (loosing_champion_one_id, loosing_champion_two_id, loosing_champion_three_id, loosing_champion_four_id, loosing_champion_five_id)
;

create index matches_loosing_champion_one_id_index
    on matches (loosing_champion_one_id)
;

create index matches_loosing_champion_two_id_index
    on matches (loosing_champion_two_id)
;

create index matches_loosing_champion_three_id_index
    on matches (loosing_champion_three_id)
;

create index matches_loosing_champion_four_id_index
    on matches (loosing_champion_four_id)
;

create index matches_loosing_champion_five_id_index
    on matches (loosing_champion_five_id)
;

There is probably something I am overlooking. Any help is appreciated.

The table can have 100m+ rows. At the moment it does have about 20m rows.

These are the only changes I made to postgresql.conf:

max_connections = 50
shared_buffers = 6GB
effective_cache_size = 18GB
work_mem = 125829kB
maintenance_work_mem = 1536MB
min_wal_size = 1GB
max_wal_size = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
max_parallel_workers_per_gather = 8
min_parallel_relation_size = 1

Here are the current table/index sizes for matches:

public.matches,2331648 rows
public.matches,197 MB
public.matches_riot_match_id_pk,153 MB
public.matches_loosing_champion_ids_index,136 MB
public.matches_loosing_champion_four_id_index,113 MB
public.matches_loosing_champion_five_id_index,113 MB
public.matches_winning_champion_one_id_index,113 MB
public.matches_winning_champion_five_id_index,113 MB
public.matches_winning_champion_three_id_index,112 MB
public.matches_loosing_champion_three_id_index,112 MB
public.matches_winning_champion_four_id_index,112 MB
public.matches_loosing_champion_one_id_index,112 MB
public.matches_winning_champion_two_id_index,112 MB
public.matches_loosing_champion_two_id_index,112 MB
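
For reference, most of the time above is spent in the nested loop's 129,214 probes of matches_loosing_champion_ids_index (434,828 of the 457,002 shared buffer hits). One shape worth trying, as an untested sketch rather than a verified fix, is to pre-aggregate the losing side as well and join the two aggregates, so each side is scanned and grouped only once:

SELECT w.wins / (w.wins + COALESCE(l.losses, 0) + 0.) AS winrate,
       w.wins + COALESCE(l.losses, 0)                 AS matches,
       w.winning_champion_one_id, w.winning_champion_two_id, w.winning_champion_three_id,
       w.winning_champion_four_id, w.winning_champion_five_id
FROM (
   SELECT COUNT(*) AS wins,
          winning_champion_one_id, winning_champion_two_id, winning_champion_three_id,
          winning_champion_four_id, winning_champion_five_id
   FROM   matches
   WHERE  157 IN (winning_champion_one_id, winning_champion_two_id, winning_champion_three_id,
                  winning_champion_four_id, winning_champion_five_id)
   GROUP  BY winning_champion_one_id, winning_champion_two_id, winning_champion_three_id,
             winning_champion_four_id, winning_champion_five_id
) w
LEFT JOIN (
   -- only losing combinations containing 157 can match a winning combination above
   SELECT COUNT(*) AS losses,
          loosing_champion_one_id, loosing_champion_two_id, loosing_champion_three_id,
          loosing_champion_four_id, loosing_champion_five_id
   FROM   matches
   WHERE  157 IN (loosing_champion_one_id, loosing_champion_two_id, loosing_champion_three_id,
                  loosing_champion_four_id, loosing_champion_five_id)
   GROUP  BY loosing_champion_one_id, loosing_champion_two_id, loosing_champion_three_id,
             loosing_champion_four_id, loosing_champion_five_id
) l ON  l.loosing_champion_one_id   = w.winning_champion_one_id
    AND l.loosing_champion_two_id   = w.winning_champion_two_id
    AND l.loosing_champion_three_id = w.winning_champion_three_id
    AND l.loosing_champion_four_id  = w.winning_champion_four_id
    AND l.loosing_champion_five_id  = w.winning_champion_five_id
WHERE  w.wins + COALESCE(l.losses, 0) >= 20
ORDER  BY winrate DESC, matches DESC
LIMIT  1;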



#StackBounty: #postgresql #pgadmin-4 How to set connection timeout for pgAdmin 4 (postgres 9.6)

Bounty: 100

I have read the article listed here:

How to set connection timeout value for pgAdmin?

many times, but I still have no idea where one sets the connection_timeout config parameter. I am connecting from localhost to localhost, so there should be no real problems with keepalives.

I am using pgAdmin 4 and PostgreSQL 9.6 on Windows 10.

I would like to know which of the following it is:

  • Is this something that has to be set on the server (e.g. in postgresql.conf), and if so, which value should be set? (a sketch of the server-side keepalive settings follows this list)
  • Or is this something in the client (and where is the client config and what has to be set)?
  • Or is this a bug in pgAdmin?
  • Or something else?
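
For reference, the server-side knobs that usually matter when idle connections get dropped are the TCP keepalive parameters. A sketch of setting them with ALTER SYSTEM (the values are only examples; a purely client-side timeout would instead be configured in the client's connection options):

-- run as a superuser; equivalent to editing postgresql.conf and reloading
ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- seconds of idle time before the first keepalive probe
ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- seconds between unanswered probes
ALTER SYSTEM SET tcp_keepalives_count = 5;      -- unanswered probes before the connection is closed
SELECT pg_reload_conf();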

I see others are also confused:

Is there a timeout option for remote access to PostgreSQL database?



#StackBounty: #postgresql #go go-pg efficient multi table/row insert

Bounty: 50

What is an efficient way to write this raw PostgreSQL query in Go? I’m using go-pg, but something using just the pg library should translate well to go-pg.

Tables:

companies:
  - id string, primary key
  - name string, not null

services:
  - id string, primary key
  - company_id string, not null, foreign key (company.id)
  - name string, not null

Query:

WITH company AS ( INSERT INTO companies(id, name) VALUES('1', 'acme') RETURNING id)
INSERT INTO services(id, company_id, name) VALUES
('1', (select company.id from company), 'cool service'),
('2', (select company.id from company), 'cooler service');

This is what I came up with. It’s very hacky and I feel there is a more “idiomatic” way of doing this.

type Company struct {
    ID string
    Name string
    Services []*Service
}

type Service struct {
    ID string
    CompanyID string
    Name string
}

c := &Company{
    ID: uuid.NewV4().String(),
    Name: "test comp",
}

s := []*Service{
    &Service{
        ID: uuid.NewV4().String(),
        Name: "test svc",
    },
}

c.Services = s

values := []interface{}{
    c.ID,
    c.Name,
}

q := `
    WITH company as (INSERT INTO companies(id, name) VALUES ($1, $2) RETURNING id) INSERT INTO services(id, company_id, name) VALUES
`

var i int = 3
for _, row := range c.Services {
    q += fmt.Sprintf("($%d, (select company.id from company), $%d),", i, i+1)
    values = append(values, row.ID, row.Name)
    i += 2
}

q = strings.TrimSuffix(q, ",")

stmt, err := DB.Prepare(q)
if err != nil {
    return err
}

if _, err := stmt.Exec(values...); err != nil {
    return err
} 

I’m somewhat new to using Go’s sql library and I know I have to be missing something. I saw the pg library allows for bulk imports, but it’s a bit confusing how I would do that in my case, where I need to populate two separate tables and the bulk import occurs on the second table.
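
For reference, if c.Services held two entries, the loop above would prepare SQL equivalent to the following; the service placeholders simply continue the numbering after the two company values:

WITH company AS (
    INSERT INTO companies(id, name) VALUES ($1, $2) RETURNING id
)
INSERT INTO services(id, company_id, name) VALUES
    ($3, (SELECT company.id FROM company), $4),
    ($5, (SELECT company.id FROM company), $6);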



#StackBounty: postgres 9.6 index only scan on functional index logically possible but not executed

Bounty: 50

I have read about functional indices and index-only scans in the docs / wiki published by Postgres.

I now have a query like:

SELECT (xpath('/document/uuid/text()', xmldata))[1]::text,
       (xpath('/document/title/text()', xmldata))[1]::text
FROM   xmltable
WHERE  (xpath('/document/uuid/text()', xmldata))[1]::text = 'some-uuid-xxxx-xxxx'

and an index:

CREATE INDEX idx_covering_index on xmltable using btree (
    ((xpath('/document/uuid/text()', xmldata))[1]::text),     
    ((xpath('/document/title/text()', xmldata))[1]::text)
)

This index is, logically speaking, a covering index and should enable an index-only scan, as all queried values (uuid and title) are contained in the index.

I happen to know that Postgres only recognizes a functional index as covering (and thus usable for an index-only scan) if the columns used in the function calls are also included in the index,

e.g.:

SELECT upper(column1) FROM table WHERE id > 10

1) cannot be covered by this index:

CREATE INDEX idx_covering_index on xmltable using btree (id, upper(column1));

2) but can be covered by this one:

CREATE INDEX idx_covering_index on xmltable using btree (column1, id, upper(column1));

thus leading to an index-only-scan.

If I now try this with my xml setup:

CREATE INDEX idx_covering_index on xmltable using btree (xmldata,
    ((xpath('/document/uuid/text()', xmldata))[1]::text),     
    ((xpath('/document/title/text()', xmldata))[1]::text)
)

I get an error:

data type xml has no default operator class for access method “btree”

Fair enough; unfortunately the usual text_ops and text_pattern_ops operator classes do not accept xml as input, so my index, although it would cover all queried values, cannot support index-only scans.

Can this be handled in a way providing the possibility for index-only-scans?
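
One workaround worth sketching (the column and index names below are made up): extract the two xpath values into plain text columns, which btree can both index and cover. The columns then have to be kept in sync with xmldata, e.g. by a trigger or at load time (not shown):

ALTER TABLE xmltable
    ADD COLUMN doc_uuid  text,
    ADD COLUMN doc_title text;

UPDATE xmltable
SET    doc_uuid  = (xpath('/document/uuid/text()',  xmldata))[1]::text,
       doc_title = (xpath('/document/title/text()', xmldata))[1]::text;

-- VACUUM afterwards so the visibility map allows index-only scans
CREATE INDEX idx_doc_uuid_title ON xmltable (doc_uuid, doc_title);

SELECT doc_uuid, doc_title
FROM   xmltable
WHERE  doc_uuid = 'some-uuid-xxxx-xxxx';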

@EDIT1:

I know that Postgres cannot use an index like 1) as a covering index, but it can use one like 2).

I also tried this with very simple tables to verify the behavior, and I vaguely remember having read about it, but I cannot for the life of me remember where.

create table test (
    id serial primary key,
    quote text
)



insert into test (quote) values ('I do not know any clever quotes');
insert into test (quote) values ('I am sorry');



CREATE INDEX idx_test_functional on test using btree ((regexp_replace(quote, '^I ', 'BillDoor ')));
set enable_seqscan = off;

analyze test;

explain select quote from test where regexp_replace(quote, '^I ', 'BillDoor ') = 'BillDoor do not know any clever quotes'

--> "Index Scan using idx_test_functional on test  (cost=0.13..8.15 rows=1 width=27)"

drop index idx_test_functional;
CREATE INDEX idx_test_functional on test using btree (quote, (regexp_replace(quote, '^I ', 'BillDoor ')));

analyze test;

explain select quote from test where regexp_replace(quote, '^I ', 'BillDoor ') = 'BillDoor do not know any clever quotes'

--> "Index Only Scan using idx_test_functional on test  (cost=0.13..12.17 rows=1 width=27)"

@EDIT2:

Full table definition of xmltable:

id serial primary key (clustered),
xmldata xml (only data used to filter queries)
history xml (never queried or read, just kept in case of legal inquiry)
fileinfo text (seldom queried, sometimes retrieved)
"timestamp" timestamp (mainly for legal inquiries too)

The table contains approx. 500,000 records; the xmldata is between 350 and 800 bytes in size, history is much larger but is seldom retrieved and never used in filters.

For the record, to be sure I got real results, I always ran ANALYZE xmltable after creating or dropping an index.

a full execution plan for the query:

explain analyze select (xpath('/document/uuid/text()', d.xmldata))[1]::text as uuid
from xmltable as d
where
(xpath('/document/uuid/text()', d.xmldata))[1]::text = 'some-uuid-xxxx-xxxx' and (xpath('/document/genre/text()', d.xmldata))[1]::text = 'bio'

covered by these indices:

create index idx_genre on xmltable using btree (((xpath('/document/genre/text()', xmldata))[1]::text));

create index idx_uuid on xmltable using btree (((xpath('/document/uuid/text()', xmldata))[1]::text)); 

create index idx_uuid_genre on xmltable using btree (((xpath('/document/uuid/text()', xmldata))[1]::text), ((xpath('/document/genre/text()', xmldata))[1]::text));

At first this leads to:

"Index Scan using idx_genre on xmldata d  (cost=0.42..6303.05 rows=18154 width=32)"
"  Index Cond: (((xpath('/document/genre/text()'::text, xmldata, '{}'::text[]))[1])::text = 'bio'::text)"
"  Filter: (((xpath('/document/uuid/text()'::text, xmldata, '{}'::text[]))[1])::text = 'some-uuid-xxxx-xxxx'::text)"

Fair enough, I thought; just for testing I will force it to use the (in my mind) covering index:

drop index idx_uuid;
drop index idx_genre;

and now I get:

"Bitmap Heap Scan on xmltable d  (cost=551.13..16025.51 rows=18216 width=32)"
"  Recheck Cond: ((((xpath('/document/genre/text()'::text, xmldata, '{}'::text[]))[1])::text = 'bio'::text) AND (((xpath('/document/uuid/text()'::text, xmldata, '{}'::text[]))[1])::text = 'some-uuid-xxxx-xxxx'::text))"
"  ->  Bitmap Index Scan on idx_uuid_genre  (cost=0.00..546.58 rows=18216 width=0)"
"        Index Cond: ((((xpath('/document/genre/text()'::text, xmldata, '{}'::text[]))[1])::text = 'bio'::text) AND (((xpath('/document/uuid/text()'::text, xmldata, '{}'::text[]))[1])::text = 'some-uuid-xxxx-xxxx'::text))"

I also tried switching the positions of uuid and genre in the index; same execution plan.



Best way to select random rows PostgreSQL

Given that you have a very large table with 500 million rows, you have to select some 1000 random rows out of it, and you want it to be fast.

Given the specifications:

  • You are assumed to have a numeric ID column (integer numbers) with only a few (or moderately few) gaps.
  • Ideally no or few write operations.
  • Your ID column should have been indexed! A primary key serves nicely. (a sketch follows this list)
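
If id is not already the primary key, a plain btree index on it is all the approach below needs (a sketch; the index name is made up):

CREATE INDEX big_id_idx ON big (id);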

The query below does not need a sequential scan of the big table, only an index scan.

First, get estimates for the main query:

SELECT count(*) AS ct              -- optional
     , min(id)  AS min_id
     , max(id)  AS max_id
     , max(id) - min(id) AS id_span
FROM   big;

The only possibly expensive part is the count(*) (for huge tables). You will get an estimate, available at almost no cost (detailed explanation here):

SELECT reltuples AS ct FROM pg_class WHERE oid = 'schema_name.big'::regclass;

As long as ct isn’t much smaller than id_span, the query will outperform most other approaches.

WITH params AS (
    SELECT 1       AS min_id           -- minimum id <= current min id
         , 5100000 AS id_span          -- rounded up. (max_id - min_id + buffer)
    )
SELECT *
FROM  (
    SELECT p.min_id + trunc(random() * p.id_span)::integer AS id
    FROM   params p
          ,generate_series(1, 1100) g  -- 1000 + buffer
    GROUP  BY 1                        -- trim duplicates
    ) r
JOIN   big USING (id)
LIMIT  1000;                           -- trim surplus
  • Generate random numbers in the id space. You have “few gaps”, so add 10 % (enough to easily cover the blanks) to the number of rows to retrieve.
  • Each id can be picked multiple times by chance (though very unlikely with a big id space), so group the generated numbers (or use DISTINCT).
  • Join the ids to the big table. This should be very fast with the index in place.
  • Finally trim surplus ids that have not been eaten by dupes and gaps. Every row has a completely equal chance to be picked.

Short version

You can simplify this query. The CTE in the query above is just for educational purposes:

SELECT *
FROM  (
    SELECT DISTINCT 1 + trunc(random() * 5100000)::integer AS id
    FROM   generate_series(1, 1100) g
    ) r
JOIN   big USING (id)
LIMIT  1000;

Refine with rCTE

Especially if you are not so sure about gaps and estimates.

WITH RECURSIVE random_pick AS (
   SELECT *
   FROM  (
      SELECT 1 + trunc(random() * 5100000)::int AS id
      FROM   generate_series(1, 1030)  -- 1000 + few percent - adapt to your needs
      LIMIT  1030                      -- hint for query planner
      ) r
   JOIN   big b USING (id)             -- eliminate miss

   UNION                               -- eliminate dupe
   SELECT b.*
   FROM  (
      SELECT 1 + trunc(random() * 5100000)::int AS id
      FROM   random_pick r             -- plus 3 percent - adapt to your needs
      LIMIT  999                       -- less than 1000, hint for query planner
      ) r
   JOIN   big b USING (id)             -- eliminate miss
   )
SELECT *
FROM   random_pick
LIMIT  1000;  -- actual limit

We can work with a smaller surplus in the base query. If there are too many gaps so we don’t find enough rows in the first iteration, the rCTE continues to iterate with the recursive term. We still need relatively few gaps in the ID space or the recursion may run dry before the limit is reached – or we have to start with a large enough buffer which defies the purpose of optimizing performance.

Duplicates are eliminated by the UNION in the rCTE.

The outer LIMIT makes the CTE stop as soon as we have enough rows.

This query is carefully drafted to use the available index, generate actually random rows and not stop until we fulfill the limit (unless the recursion runs dry). There are a number of pitfalls here if you are going to rewrite it.

Wrap into function

For repeated use with varying parameters:

CREATE OR REPLACE FUNCTION f_random_sample(_limit int = 1000, _gaps real = 1.03)
  RETURNS SETOF big AS
$func$
DECLARE
   _surplus  int := _limit * _gaps;
   _estimate int := (           -- get current estimate from system
      SELECT c.reltuples * _gaps
      FROM   pg_class c
      WHERE  c.oid = 'big'::regclass);
BEGIN

   RETURN QUERY
   WITH RECURSIVE random_pick AS (
      SELECT *
      FROM  (
         SELECT 1 + trunc(random() * _estimate)::int
         FROM   generate_series(1, _surplus) g
         LIMIT  _surplus           -- hint for query planner
         ) r (id)
      JOIN   big USING (id)        -- eliminate misses

      UNION                        -- eliminate dupes
      SELECT *
      FROM  (
         SELECT 1 + trunc(random() * _estimate)::int
         FROM   random_pick        -- just to make it recursive
         LIMIT  _limit             -- hint for query planner
         ) r (id)
      JOIN   big USING (id)        -- eliminate misses
   )
   SELECT *
   FROM   random_pick
   LIMIT  _limit;
END
$func$  LANGUAGE plpgsql VOLATILE ROWS 1000;

Call:

SELECT * FROM f_random_sample();
SELECT * FROM f_random_sample(500, 1.05);

You could even make this generic to work for any table: take the name of the PK column and the table as polymorphic types and use EXECUTE … But that’s beyond the scope of this post.

Possible alternative

If your requirements allow identical sets for repeated calls (and we are talking about repeated calls), I would consider a materialized view: execute the above query once and write the result to a table. Users get a quasi-random selection at lightning speed. Refresh your random pick at intervals or events of your choosing.
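
A minimal sketch of that materialized-view variant, reusing the function defined above (the view name is made up):

CREATE MATERIALIZED VIEW random_pick_mv AS
SELECT * FROM f_random_sample(1000);

-- whenever a fresh sample is wanted:
REFRESH MATERIALIZED VIEW random_pick_mv;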

Postgres 9.5 introduces TABLESAMPLE SYSTEM (n)

It’s very fast, but the result is not exactly random. The manual:

The SYSTEM method is significantly faster than the BERNOULLI method when small sampling percentages are specified, but it may return a less-random sample of the table as a result of clustering effects.

And the number of rows returned can vary wildly. For our example, to get roughly 1000 rows, try:

SELECT * FROM big TABLESAMPLE SYSTEM ((1000 * 100) / 5100000.0);

Where n is a percentage. The manual:

The BERNOULLI and SYSTEM sampling methods each accept a single argument which is the fraction of the table to sample, expressed as a percentage between 0 and 100. This argument can be any real-valued expression.

