Class: Google::Cloud::Bigtable::Table

Inherits:
Object
Includes:
MutationOperations, ReadOperations
Defined in:
lib/google/cloud/bigtable/table.rb,
lib/google/cloud/bigtable/table/list.rb,
lib/google/cloud/bigtable/table/cluster_state.rb,
lib/google/cloud/bigtable/table/column_family_map.rb

Overview

Table

A collection of user data indexed by row, column, and timestamp. Each table is served using the resources of its parent cluster.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table", perform_lookup: true)

table.column_families.each do |cf|
  p cf.name
  p cf.gc_rule
end

# Get column family by name
cf1 = table.column_families.find_by_name("cf1")

# Create column family
gc_rule = Google::Cloud::Bigtable::GcRule.max_versions(3)
cf2 = table.column_families.create("cf2", gc_rule)

# Delete table
table.delete

Defined Under Namespace

Classes: ClusterState, ColumnFamilyMap, List

Instance Attribute Summary

Instance Method Summary

Instance Attribute Details

#app_profile_id ⇒ String

Returns App profile id for request routing.

Returns:

  • (String)

    App profile id for request routing.



# File 'lib/google/cloud/bigtable/table.rb', line 68

def app_profile_id
  @app_profile_id
end

Instance Method Details

#check_and_mutate_row(key, predicate, on_match: nil, otherwise: nil) ⇒ Boolean Originally defined in module MutationOperations

Mutates a row atomically based on the output of a predicate Reader filter.

NOTE: Condition predicate filter is not supported.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

predicate_filter = Google::Cloud::Bigtable::RowFilter.key("user-10")
on_match_mutations = Google::Cloud::Bigtable::MutationEntry.new
on_match_mutations.set_cell(
  "cf-1",
  "field-1",
  "XYZ",
  timestamp: Time.now.to_i * 1_000_000 # Timestamp in microseconds.
).delete_from_column("cf2", "field02")

otherwise_mutations = Google::Cloud::Bigtable::MutationEntry.new
otherwise_mutations.delete_from_family("cf3")

response = table.check_and_mutate_row(
  "user01",
  predicate_filter,
  on_match: on_match_mutations,
  otherwise: otherwise_mutations
)

if response
  puts "All predicates matched"
end

Parameters:

  • key (String)

    Row key. The key of the row to which the conditional mutation should be applied.

  • predicate (SimpleFilter, ChainFilter, InterleaveFilter)

    Predicate filter. The filter to be applied to the contents of the specified row. Depending on whether or not any results are yielded, either +true_mutations+ or +false_mutations+ will be executed. If unset, checks that the row contains any values at all.

  • on_match (Google::Cloud::Bigtable::MutationEntry)

    Mutation entry to apply when the predicate filter matches. Changes to be atomically applied to the specified row if +predicate_filter+ yields at least one cell when applied to +row_key+. Entries are applied in order, meaning that earlier mutations can be masked by later ones. Must contain at least one entry if +false_mutations+ is empty, and at most 100000.

  • otherwise (Google::Cloud::Bigtable::MutationEntry)

    Mutation entry to apply when the predicate filter does not match. Changes to be atomically applied to the specified row if +predicate_filter+ does not yield any cells when applied to +row_key+. Entries are applied in order, meaning that earlier mutations can be masked by later ones. Must contain at least one entry if +true_mutations+ is empty, and at most 100000.

Returns:

  • (Boolean)

    Whether the predicate matched.

#check_consistency(token) ⇒ Boolean

Checks replication consistency based on a consistency token, that is, if replication has caught up based on the conditions specified in the token and the check request.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance("my-instance")
table = instance.table("my-table")

token = "l947XelENinaxJQP0nnrZJjHnAF7YrwW8HCJLotwrF"

if table.check_consistency(token)
  puts "Replication is consistent"
end

Parameters:

  • token (String)

    Consistency token

Returns:

  • (Boolean)

    Whether replication is consistent.



# File 'lib/google/cloud/bigtable/table.rb', line 464

def check_consistency token
  ensure_service!
  response = service.check_consistency(instance_id, name, token)
  response.consistent
end

#cluster_states ⇒ Array<Google::Cloud::Bigtable::Table::ClusterState>

Map from cluster ID to per-cluster table state. If it could not be determined whether or not the table has data in a particular cluster (for example, if its zone is unavailable), then there will be an entry for the cluster with UNKNOWN replication_status. Views: FULL



# File 'lib/google/cloud/bigtable/table.rb', line 135

def cluster_states
  check_view_and_load(:REPLICATION_VIEW)
  @grpc.cluster_states.map do |name, state_grpc|
    ClusterState.from_grpc(state_grpc, name)
  end
end

#column_families ⇒ Array<Google::Bigtable::ColumnFamily>

The column families configured for this table, mapped by column family ID. Column family data is available only in table view types SCHEMA_VIEW and FULL.

Returns:

  • (Array<Google::Bigtable::ColumnFamily>)


# File 'lib/google/cloud/bigtable/table.rb', line 148

def column_families
  check_view_and_load(:SCHEMA_VIEW)
  @grpc.column_families.map do |cf_name, cf_grpc|
    ColumnFamily.from_grpc(
      cf_grpc,
      service,
      name: cf_name,
      instance_id: instance_id,
      table_id: table_id
    )
  end
end

#column_family(name, gc_rule = nil) ⇒ Object

Creates a column family object to perform create, update, or delete operations.

Examples:

Create column family

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

# OR get table from Instance object.
instance = bigtable.instance("my-instance")
table = instance.table("my-table")

gc_rule = Google::Cloud::Bigtable::GcRule.max_versions(5)
column_family = table.column_family("cf1", gc_rule)
column_family.create

Update column family

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

gc_rule = Google::Cloud::Bigtable::GcRule.max_age(1800)
column_family = table.column_family("cf2", gc_rule)
column_family.save
# OR Using alias method update.
column_family.update

Delete column family

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

column_family = table.column_family("cf3")
column_family.delete

Parameters:

  • name (String)

    Name of the column family

  • gc_rule (Google::Cloud::Bigtable::GcRule) (defaults to: nil)

    Optional. A GC rule is required only for create and update operations.



# File 'lib/google/cloud/bigtable/table.rb', line 281

def column_family name, gc_rule = nil
  cf_grpc = Google::Bigtable::Admin::V2::ColumnFamily.new
  cf_grpc.gc_rule = gc_rule.to_grpc if gc_rule

  ColumnFamily.from_grpc(
    cf_grpc,
    service,
    name: name,
    instance_id: instance_id,
    table_id: table_id
  )
end

#delete ⇒ Boolean

Permanently deletes the table from an instance.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")
table.delete

Returns:

  • (Boolean)

    Returns true if the table was deleted.



# File 'lib/google/cloud/bigtable/table.rb', line 193

def delete
  ensure_service!
  service.delete_table(instance_id, name)
  true
end

#delete_all_rows(timeout: nil) ⇒ Boolean

Delete all rows

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance("my-instance")
table = instance.table("my-table")
table.delete_all_rows

# With timeout
table.delete_all_rows(timeout: 120) # 120 seconds.

Parameters:

  • timeout (Integer)

    Call timeout in seconds. Use in case of an insufficient deadline for DropRowRange; try again with a longer request deadline.

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigtable/table.rb', line 547

def delete_all_rows timeout: nil
  drop_row_range(delete_all_data: true, timeout: timeout)
end

#delete_rows_by_prefix(prefix, timeout: nil) ⇒ Boolean

Delete rows using row key prefix.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

table.delete_rows_by_prefix("user-100")

# With timeout
table.delete_rows_by_prefix("user-1", timeout: 120) # 120 seconds.

Parameters:

  • prefix (String)

    Row key prefix, e.g., "user".

  • timeout (Integer)

    Call timeout in seconds

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigtable/table.rb', line 568

def delete_rows_by_prefix prefix, timeout: nil
  drop_row_range(row_key_prefix: prefix, timeout: timeout)
end

#drop_row_range(row_key_prefix: nil, delete_all_data: nil, timeout: nil) ⇒ Boolean

Drop row range by row key prefix or delete all.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

# Delete rows using row key prefix.
table.drop_row_range(row_key_prefix: "user-100")

# Delete all data with timeout.
table.drop_row_range(delete_all_data: true, timeout: 120) # 120 seconds.

Parameters:

  • row_key_prefix (String)

    Row key prefix, e.g., "user".

  • delete_all_data (Boolean)

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigtable/table.rb', line 591

def drop_row_range \
    row_key_prefix: nil,
    delete_all_data: nil,
    timeout: nil
  ensure_service!
  service.drop_row_range(
    instance_id,
    name,
    row_key_prefix: row_key_prefix,
    delete_all_data_from_table: delete_all_data,
    timeout: timeout
  )
  true
end

#exists? ⇒ Boolean

Checks whether the table exists.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

if table.exists?
  p "Table exists."
else
  p "Table does not exist."
end

Using an instance object

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance("my-instance")
table = instance.table("my-table")

if table.exists?
  p "Table exists."
else
  p "Table does not exist."
end

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigtable/table.rb', line 231

def exists?
  !service.get_table(instance_id, name, view: :NAME_ONLY).nil?
rescue Google::Cloud::NotFoundError
  false
end
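The method above follows a common idiom: request the cheapest view of the resource (:NAME_ONLY) and translate a not-found error into false. A minimal pure-Ruby sketch of that pattern, with a hypothetical in-memory lookup standing in for the Bigtable admin service:

```ruby
# Sketch of the exists? idiom: attempt a cheap lookup and convert
# "not found" into false instead of letting the error propagate.
class NotFoundError < StandardError; end

# Hypothetical stand-in for the admin service; the real method calls
# service.get_table(instance_id, name, view: :NAME_ONLY).
TABLES = { "my-table" => { name: "my-table" } }.freeze

def get_table name
  TABLES.fetch(name) { raise NotFoundError, "table #{name} not found" }
end

def table_exists? name
  !get_table(name).nil?
rescue NotFoundError
  false
end

puts table_exists?("my-table") # true
puts table_exists?("no-table") # false
```

Rescuing only the not-found error (rather than StandardError) keeps genuine failures, such as permission errors, visible to the caller.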

#filter ⇒ Google::Cloud::Bigtable::RowFilter Originally defined in module ReadOperations

Get row filter

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

filter = table.filter.key("user-*")

Returns:

#generate_consistency_token ⇒ String

Generates a consistency token for a Table, which can be used in CheckConsistency to check whether mutations to the table that finished before this call started have been replicated. The tokens will be available for 90 days.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance("my-instance")
table = instance.table("my-table")

table.generate_consistency_token # "l947XelENinaxJQP0nnrZJjHnAF7YrwW8HCJLotwrF"

Returns:

  • (String)

    Generated consistency token



# File 'lib/google/cloud/bigtable/table.rb', line 438

def generate_consistency_token
  ensure_service!
  response = service.generate_consistency_token(instance_id, name)
  response.consistency_token
end

#granularity ⇒ Symbol

The granularity (e.g. MILLIS, MICROS) at which timestamps are stored in this table. Timestamps not matching the granularity will be rejected. If unspecified at creation time, the value will be set to MILLIS. Views: SCHEMA_VIEW, FULL

Returns:

  • (Symbol)


# File 'lib/google/cloud/bigtable/table.rb', line 168

def granularity
  check_view_and_load(:SCHEMA_VIEW)
  @grpc.granularity
end

#granularity_millis? ⇒ Boolean

The table keeps data versioned at a granularity of 1ms.

Returns:

  • (Boolean)


# File 'lib/google/cloud/bigtable/table.rb', line 177

def granularity_millis?
  granularity == :MILLIS
end
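Since Bigtable cell timestamps are expressed in microseconds, a table with :MILLIS granularity only accepts values that are multiples of 1,000. A small pure-Ruby sketch (the helper name is hypothetical) of deriving such a timestamp from a Time:

```ruby
# Bigtable timestamps are microsecond values; with :MILLIS granularity
# only multiples of 1000 microseconds are accepted, so floor the Time
# to the nearest millisecond.
def millis_granularity_timestamp time
  micros = time.to_i * 1_000_000 + time.usec
  micros - (micros % 1000) # floor to a multiple of 1000
end

t = Time.at(1_500_000_000, 123_456) # seconds, microseconds
puts millis_granularity_timestamp(t) # 1500000000123000
```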

#instance_id ⇒ String

The unique identifier for the instance.

Returns:

  • (String)


# File 'lib/google/cloud/bigtable/table.rb', line 89

def instance_id
  @grpc.name.split("/")[3]
end

#modify_column_families(modifications) ⇒ Google::Cloud::Bigtable::Table

Applies multiple column modifications. Performs a series of column family modifications on the specified table. Either all or none of the modifications will occur before this method returns, but data requests received prior to that point may see a table where only some modifications have taken effect.

Examples:

Apply multiple modifications

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

instance = bigtable.instance("my-instance")
table = instance.table("my-table")

modifications = []
modifications << Google::Cloud::Bigtable::ColumnFamily.create_modification(
  "cf1", Google::Cloud::Bigtable::GcRule.max_age(600)
)

modifications << Google::Cloud::Bigtable::ColumnFamily.update_modification(
  "cf2", Google::Cloud::Bigtable::GcRule.max_versions(5)
)

gc_rule_1 = Google::Cloud::Bigtable::GcRule.max_versions(3)
gc_rule_2 = Google::Cloud::Bigtable::GcRule.max_age(600)
modifications << Google::Cloud::Bigtable::ColumnFamily.update_modification(
  "cf3", Google::Cloud::Bigtable::GcRule.union(gc_rule_1, gc_rule_2)
)

max_age_gc_rule = Google::Cloud::Bigtable::GcRule.max_age(300)
modifications << Google::Cloud::Bigtable::ColumnFamily.update_modification(
  "cf4", Google::Cloud::Bigtable::GcRule.union(max_age_gc_rule)
)

modifications << Google::Cloud::Bigtable::ColumnFamily.drop_modification("cf5")

table = table.modify_column_families(modifications)

p table.column_families

Parameters:

  • modifications (Array<Google::Cloud::Bigtable::ColumnFamilyModification>)

    Modifications to be atomically applied to the specified table's families. Entries are applied in order, meaning that earlier modifications can be masked by later ones (in the case of repeated updates to the same family, for example).

Returns:



# File 'lib/google/cloud/bigtable/table.rb', line 341

def modify_column_families modifications
  ensure_service!
  self.class.modify_column_families(
    service,
    instance_id,
    table_id,
    modifications
  )
end

#mutate_row(entry) ⇒ Boolean Originally defined in module MutationOperations

Mutate row.

Mutates a row atomically. Cells already present in the row are left unchanged unless explicitly changed by the mutation. Changes are atomically applied to the specified row, in order, meaning that earlier mutations can be masked by later ones. The entry must contain at least one mutation and at most 100000.

Examples:

Single mutation on row.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

entry = table.new_mutation_entry("user-1")
entry.set_cell("cf1", "field1", "XYZ")
table.mutate_row(entry)

Multiple mutations on row.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

entry = table.new_mutation_entry("user-1")
entry.set_cell(
  "cf-1",
  "field-1",
  "XYZ",
  timestamp: Time.now.to_i * 1_000_000 # Timestamp in microseconds.
).delete_from_column("cf2", "field02")

table.mutate_row(entry)

Parameters:

Returns:

  • (Boolean)

#mutate_rows(entries) ⇒ Array<Google::Bigtable::V2::MutateRowsResponse::Entry> Originally defined in module MutationOperations

Mutates multiple rows in a batch. Each individual row is mutated atomically as in MutateRow, but the entire batch is not executed atomically.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

entries = []
entries << table.new_mutation_entry("row-1").set_cell("cf1", "field1", "XYZ")
entries << table.new_mutation_entry("row-2").set_cell("cf1", "field1", "ABC")
table.mutate_rows(entries)

Parameters:

  • entries (Array<Google::Cloud::Bigtable::MutationEntry>)

    The row keys and corresponding mutations to be applied in bulk. Each entry is applied as an atomic mutation, but the entries may be applied in arbitrary order (even between entries for the same row). At least one entry must be specified, and in total the entries can contain at most 100000 mutations.

Returns:

#name ⇒ String Also known as: table_id

The unique identifier for the table.

Returns:

  • (String)


# File 'lib/google/cloud/bigtable/table.rb', line 96

def name
  @grpc.name.split("/")[5]
end

#new_column_range(family) ⇒ Google::Cloud::Bigtable::ColumnRange Originally defined in module ReadOperations

Get new instance of ColumnRange.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

range = table.new_column_range("test-family")
range.from("abc")
range.to("xyz")

# OR
range = table.new_column_range("test-family").from("key-1").to("key-5")

With exclusive from range

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

range = table.new_column_range("test-family").from("key-1", inclusive: false).to("key-5")

Parameters:

  • family (String)

    Column family name

Returns:

#new_mutation_entry(row_key = nil) ⇒ Google::Cloud::Bigtable::MutationEntry Originally defined in module MutationOperations

Creates an instance of MutationEntry.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

entry = table.new_mutation_entry("row-key-1")

# Without row key
entry = table.new_mutation_entry

Parameters:

  • row_key (String) (defaults to: nil)

    Row key. Optional. The key of the row to which the mutation should be applied.

Returns:

#new_read_modify_write_rule(family, qualifier) ⇒ Google::Cloud::Bigtable::ReadModifyWriteRule Originally defined in module MutationOperations

Create instance of ReadModifyWriteRule to append or increment value of the cell qualifier.

Examples:

Create rule to append to qualifier value.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")
rule = table.new_read_modify_write_rule("cf", "qualifier-1")
rule.append("append-xyz")

Create rule to increment qualifier value.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")
rule = table.new_read_modify_write_rule("cf", "qualifier-1")
rule.increment(100)

Parameters:

  • family (String)

    The name of the family to which the read/modify/write should be applied.

  • qualifier (String)

    The qualifier of the column to which the read/modify/write should be applied.

Returns:

#new_row_range ⇒ Google::Cloud::Bigtable::RowRange Originally defined in module ReadOperations

Get new instance of RowRange.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

range = table.new_row_range
range.from("abc")
range.to("xyz")

# OR
range = table.new_row_range.from("key-1").to("key-5")

With exclusive from range

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

range = table.new_row_range.from("key-1", inclusive: false).to("key-5")

Returns:

#new_value_range ⇒ Google::Cloud::Bigtable::ValueRange Originally defined in module ReadOperations

Create new instance of ValueRange.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

range = table.new_value_range
range.from("abc")
range.to("xyz")

# OR
range = table.new_value_range.from("abc").to("xyz")

With exclusive from range

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

range = table.new_value_range.from("abc", inclusive: false).to("xyz")

Returns:

#path ⇒ String

The full path for the table resource. Values are of the form projects/<project_id>/instances/<instance_id>/tables/<table_id>.

Returns:

  • (String)


# File 'lib/google/cloud/bigtable/table.rb', line 105

def path
  @grpc.name
end

#project_id ⇒ String

The unique identifier for the project.

Returns:

  • (String)


# File 'lib/google/cloud/bigtable/table.rb', line 82

def project_id
  @grpc.name.split("/")[1]
end
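The project_id, instance_id, and name readers above all slice the same fully qualified resource path held in @grpc.name. A standalone sketch of that parsing (the sample path is illustrative):

```ruby
# Segment layout of a table resource path:
#   index 0: "projects"    index 1: <project_id>
#   index 2: "instances"   index 3: <instance_id>
#   index 4: "tables"      index 5: <table_id>
path = "projects/my-project/instances/my-instance/tables/my-table"

parts = path.split("/")
project_id  = parts[1]
instance_id = parts[3]
table_id    = parts[5]

puts [project_id, instance_id, table_id].inspect
# ["my-project", "my-instance", "my-table"]
```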

#read_modify_write_row(key, rules) ⇒ Google::Cloud::Bigtable::Row Originally defined in module MutationOperations

Modifies a row atomically on the server. The method reads the latest existing timestamp and value from the specified columns and writes a new entry based on pre-defined read/modify/write rules. The new value for the timestamp is the greater of the existing timestamp or the current server time. The method returns the new contents of all modified cells.

Examples:

Apply multiple modification rules.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

rule_1 = table.new_read_modify_write_rule("cf", "field01")
rule_1.append("append-xyz")

rule_2 = table.new_read_modify_write_rule("cf", "field01")
rule_2.increment(1)

row = table.read_modify_write_row("user01", [rule_1, rule_2])

puts row.cells

Apply a single modification rule.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

rule = table.new_read_modify_write_rule("cf", "field01").append("append-xyz")

row = table.read_modify_write_row("user01", rule)

puts row.cells

Parameters:

Returns:

#read_row(key, filter: nil) ⇒ Google::Cloud::Bigtable::Row Originally defined in module ReadOperations

Read single row by key

Examples:

Read row


require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

row = table.read_row("user-1")

Read row with filter


require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

filter = Google::Cloud::Bigtable::RowFilter.cells_per_row(3)

row = table.read_row("user-1", filter: filter)

Parameters:

Returns:

#read_rows(keys: nil, ranges: nil, filter: nil, limit: nil, &block) ⇒ Array<Google::Cloud::Bigtable::Row> | :yields: row Originally defined in module ReadOperations

Read rows

Streams back the contents of all requested rows in key order, optionally applying the same reader filter to each. If row keys, row ranges, and a filter are not specified, reads all rows.

See Google::Cloud::Bigtable::RowFilter for filter types.

Examples:

Read with Limit

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

table.read_rows(limit: 10).each do |row|
  puts row
end

Read using row keys

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

table.read_rows(keys: ["user-1", "user-2"]).each do |row|
  puts row
end

Read using row ranges

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

range = table.new_row_range.between("user-1", "user-100")

table.read_rows(ranges: range).each do |row|
  puts row
end

Read using filter


require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

filter = table.filter.key("user-*")
# OR
# filter = Google::Cloud::Bigtable::RowFilter.key("user-*")

table.read_rows(filter: filter).each do |row|
  puts row
end

Read using filter with limit


require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

filter = table.filter.key("user-*")
# OR
# filter = Google::Cloud::Bigtable::RowFilter.key("user-*")

table.read_rows(filter: filter, limit: 10).each do |row|
  puts row
end

Parameters:

  • keys (Array<String>)

    List of row keys to be read. Optional.

  • ranges (Google::Cloud::Bigtable::RowRange | Array<Google::Cloud::Bigtable::RowRange>)

    Row ranges array or single range. Optional.

  • filter (SimpleFilter, ChainFilter, InterleaveFilter, ConditionFilter)

    The filter to apply to the contents of the specified row(s). If unset, reads the entries of each row. Optional.

  • limit (Integer)

    Limits the number of rows read. Optional. The read will terminate after committing to N rows' worth of results. The default (zero) is to return all results.

Returns:

#reload!(view: nil) ⇒ Google::Cloud::Bigtable::Table

Reload table information.

Parameters:

  • view (Symbol)

    Table view type. The default view type is :SCHEMA_VIEW. Valid view types are:

    • :NAME_ONLY - Only populates name
    • :SCHEMA_VIEW - Only populates name and fields related to the table's schema
    • :REPLICATION_VIEW - Only populates name and fields related to the table's replication state.
    • :FULL - Populates all fields

Returns:



# File 'lib/google/cloud/bigtable/table.rb', line 122

def reload! view: nil
  @view = view || :SCHEMA_VIEW
  @grpc = service.get_table(instance_id, name, view: view)
  self
end

#sample_row_keys ⇒ :yields: sample_row_key Originally defined in module ReadOperations

Read sample row keys.

Returns a sample of row keys in the table. The returned row keys will delimit contiguous sections of the table of approximately equal size, which can be used to break up the data for distributed tasks like mapreduces.

Examples:

require "google/cloud"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

table.sample_row_keys.each do |sample_row_key|
  p sample_row_key.key # user00116
  p sample_row_key.offset # 805306368
end

Yield Returns:

Returns:

  • (:yields: sample_row_key)

    Yield block for each processed SampleRowKey.

#wait_for_replication(timeout: 600, check_interval: 5) ⇒ Boolean

Waits for table replication to become consistent. Checks replication consistency by generating a consistency token and calling the +check_consistency+ API every +check_interval+ seconds (default 5). If the response is consistent, returns +true+; otherwise it retries. If the check has not returned +true+ within +timeout+ seconds (default 600), returns +false+.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("instance_id", "my-table", perform_lookup: true)

if table.wait_for_replication
  puts "Replication done"
end

# With custom timeout and interval
if table.wait_for_replication(timeout: 300, check_interval: 10)
  puts "Replication done"
end

Parameters:

  • timeout (Integer)

    Timeout in seconds. Default value is 600 seconds.

  • check_interval (Integer)

    Consistency check interval in seconds. Default is 5 seconds.

Returns:

  • (Boolean)

    Whether replication is consistent.



# File 'lib/google/cloud/bigtable/table.rb', line 499

def wait_for_replication timeout: 600, check_interval: 5
  if check_interval > timeout
    raise(
      InvalidArgumentError,
      "'check_interval' cannot be greater than timeout"
    )
  end
  token = generate_consistency_token
  status = false
  start_at = Time.now

  loop do
    status = check_consistency(token)

    break if status || (Time.now - start_at) >= timeout
    sleep(check_interval)
  end
  status
end
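The polling logic above can be exercised without a Bigtable service by injecting the consistency check as a block; this pure-Ruby sketch (the method name is hypothetical) mirrors the loop, timeout, and interval handling:

```ruby
# Generic polling loop mirroring wait_for_replication: yields to the
# injected check (check_consistency(token) in the real method) until it
# returns true or the timeout elapses.
def wait_until timeout: 600, check_interval: 5
  if check_interval > timeout
    raise ArgumentError, "'check_interval' cannot be greater than timeout"
  end

  status = false
  start_at = Time.now
  loop do
    status = yield
    break if status || (Time.now - start_at) >= timeout
    sleep check_interval
  end
  status
end

# Simulate a check that becomes consistent on the third attempt.
attempts = 0
done = wait_until(timeout: 5, check_interval: 0) do
  attempts += 1
  attempts >= 3
end
puts done     # true
puts attempts # 3
```

Note that the check always runs at least once, so a short timeout still gets one consistency probe before giving up.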