Class: Google::Cloud::Bigquery::Dataset
Inherits: Object

Defined in:
  lib/google/cloud/bigquery/dataset.rb
  lib/google/cloud/bigquery/dataset/list.rb
  lib/google/cloud/bigquery/dataset/access.rb
Overview
Dataset
Represents a Dataset. A dataset is a grouping mechanism that holds zero or more tables. Datasets are the lowest level unit of access control; you cannot control access at the table level. A dataset is contained within a specific project.
Direct Known Subclasses: Updater
Defined Under Namespace
Classes: Access, List, Updater
Attributes

- #access {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset::Access
  Retrieves the access rules for a Dataset.
- #api_url ⇒ String?
  A URL that can be used to access the dataset using the REST API.
- #created_at ⇒ Time?
  The time when this dataset was created.
- #dataset_id ⇒ String
  A unique ID for this dataset, without the project name.
- #default_expiration ⇒ Integer?
  The default lifetime of all tables in the dataset, in milliseconds.
- #default_expiration=(new_default_expiration) ⇒ Object
  Updates the default lifetime of all tables in the dataset, in milliseconds.
- #description ⇒ String?
  A user-friendly description of the dataset.
- #description=(new_description) ⇒ Object
  Updates the user-friendly description of the dataset.
- #etag ⇒ String?
  The ETag hash of the dataset.
- #labels ⇒ Hash<String, String>?
  A hash of user-provided labels associated with this dataset.
- #labels=(labels) ⇒ Object
  Updates the hash of user-provided labels associated with this dataset.
- #location ⇒ String?
  The geographic location where the dataset should reside.
- #modified_at ⇒ Time?
  The date when this dataset or any of its tables was last modified.
- #name ⇒ String?
  A descriptive name for the dataset.
- #name=(new_name) ⇒ Object
  Updates the descriptive name for the dataset.
- #project_id ⇒ String
  The ID of the project containing this dataset.
Lifecycle

- #delete(force: nil) ⇒ Boolean
  Permanently deletes the dataset.
Table

- #create_table(table_id, name: nil, description: nil) {|table| ... } ⇒ Google::Cloud::Bigquery::Table
  Creates a new table.
- #create_view(table_id, query, name: nil, description: nil, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Google::Cloud::Bigquery::Table
  Creates a new view table, which is a virtual table defined by the given SQL query.
- #table(table_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Table?
  Retrieves an existing table by ID.
- #tables(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Table>
  Retrieves the list of tables belonging to the dataset.
Data

- #exists? ⇒ Boolean
  Determines whether the dataset exists in the BigQuery service.
- #external(url, format: nil) {|ext| ... } ⇒ External::DataSource
  Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery.
- #insert(table_id, rows, skip_invalid: nil, ignore_unknown: nil, autocreate: nil) ⇒ Google::Cloud::Bigquery::InsertResponse
  Inserts data into the given table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.
- #insert_async(table_id, skip_invalid: nil, ignore_unknown: nil, max_bytes: 10000000, max_rows: 500, interval: 10, threads: 4) {|response| ... } ⇒ Table::AsyncInserter
  Creates an asynchronous inserter object used to insert rows in batches.
- #load(table_id, file, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil) {|schema| ... } ⇒ Boolean
  Loads data into the provided destination table using a synchronous method that blocks for a response.
- #load_job(table_id, file, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, dryrun: nil, schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil) {|schema| ... } ⇒ Google::Cloud::Bigquery::LoadJob
  Loads data into the provided destination table using an asynchronous method.
- #query(query, params: nil, external: nil, max: nil, cache: true, standard_sql: nil, legacy_sql: nil) ⇒ Google::Cloud::Bigquery::Data
  Queries data and waits for the results.
- #query_job(query, params: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil) ⇒ Google::Cloud::Bigquery::QueryJob
  Queries data by creating a query job.
- #reference? ⇒ Boolean
  Whether the dataset was created without retrieving the resource representation from the BigQuery service.
- #reload! ⇒ Google::Cloud::Bigquery::Dataset (also: #refresh!)
  Reloads the dataset with current data from the BigQuery service.
- #resource? ⇒ Boolean
  Whether the dataset was created with a resource representation from the BigQuery service.
- #resource_full? ⇒ Boolean
  Whether the dataset was created with a full resource representation from the BigQuery service.
- #resource_partial? ⇒ Boolean
  Whether the dataset was created with a partial resource representation from the BigQuery service by retrieval through Project#datasets.
Instance Method Details
#access {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset::Access
Retrieves the access rules for a Dataset. The rules can be updated when passing a block, see Access for all the methods available.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
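For example, a minimal sketch of reading the rules and then updating them in a block (the dataset name and e-mail addresses are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

# Read-only: the returned Access object is frozen.
access = dataset.access

# Update: changes staged in the block are saved when it returns.
dataset.access do |acl|
  acl.add_owner_group "owners@example.com"
  acl.remove_writer_user "writer@example.com"
end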
# File 'lib/google/cloud/bigquery/dataset.rb', line 384

def access
  ensure_full_data!
  reload! unless resource_full?
  access_builder = Access.from_gapi @gapi
  if block_given?
    yield access_builder
    if access_builder.changed?
      @gapi.update! access: access_builder.to_gapi
      patch_gapi! :access
    end
  end
  access_builder.freeze
end
#api_url ⇒ String?
A URL that can be used to access the dataset using the REST API.
# File 'lib/google/cloud/bigquery/dataset.rb', line 154

def api_url
  return nil if reference?
  ensure_full_data!
  @gapi.self_link
end
#create_table(table_id, name: nil, description: nil) {|table| ... } ⇒ Google::Cloud::Bigquery::Table
Creates a new table. If you are adapting existing code that was written for the REST API, you can pass the table's schema as a hash (see the example below).
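A minimal sketch that creates a table and defines its schema in the block (names and fields are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

table = dataset.create_table "my_table" do |t|
  t.name = "My Table"
  t.description = "A description of my table."
  t.schema do |schema|
    schema.string "first_name", mode: :required
    schema.integer "age", mode: :required
  end
end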
# File 'lib/google/cloud/bigquery/dataset.rb', line 492

def create_table table_id, name: nil, description: nil
  ensure_service!
  new_tb = Google::Apis::BigqueryV2::Table.new(
    table_reference: Google::Apis::BigqueryV2::TableReference.new(
      project_id: project_id, dataset_id: dataset_id,
      table_id: table_id
    )
  )
  updater = Table::Updater.new(new_tb).tap do |tb|
    tb.name = name unless name.nil?
    tb.description = description unless description.nil?
  end
  yield updater if block_given?
  gapi = service.insert_table dataset_id, updater.to_gapi
  Table.from_gapi gapi, service
end
#create_view(table_id, query, name: nil, description: nil, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Google::Cloud::Bigquery::Table
Creates a new view table, which is a virtual table defined by the given SQL query.
BigQuery's views are logical views, not materialized views, which means that the query that defines the view is re-executed every time the view is queried. Queries are billed according to the total amount of data in all table fields referenced directly or indirectly by the top-level query. (See Table#view? and Table#query.)
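A minimal sketch, assuming the referenced table already exists:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

view = dataset.create_view "my_view",
                           "SELECT name, age FROM `my_project.my_dataset.my_table`",
                           name: "My View", description: "This is my view"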
# File 'lib/google/cloud/bigquery/dataset.rb', line 567

def create_view table_id, query, name: nil, description: nil,
                standard_sql: nil, legacy_sql: nil, udfs: nil
  new_view_opts = {
    table_reference: Google::Apis::BigqueryV2::TableReference.new(
      project_id: project_id, dataset_id: dataset_id,
      table_id: table_id
    ),
    friendly_name: name,
    description: description,
    view: Google::Apis::BigqueryV2::ViewDefinition.new(
      query: query,
      use_legacy_sql: Convert.resolve_legacy_sql(standard_sql, legacy_sql),
      user_defined_function_resources: udfs_gapi(udfs)
    )
  }.delete_if { |_, v| v.nil? }
  new_view = Google::Apis::BigqueryV2::Table.new new_view_opts
  gapi = service.insert_table dataset_id, new_view
  Table.from_gapi gapi, service
end
#created_at ⇒ Time?
The time when this dataset was created.
# File 'lib/google/cloud/bigquery/dataset.rb', line 237

def created_at
  return nil if reference?
  ensure_full_data!
  begin
    ::Time.at(Integer(@gapi.creation_time) / 1000.0)
  rescue StandardError
    nil
  end
end
#dataset_id ⇒ String
A unique ID for this dataset, without the project name.
# File 'lib/google/cloud/bigquery/dataset.rb', line 74

def dataset_id
  return reference.dataset_id if reference?
  @gapi.dataset_reference.dataset_id
end
#default_expiration ⇒ Integer?
The default lifetime of all tables in the dataset, in milliseconds.
# File 'lib/google/cloud/bigquery/dataset.rb', line 200

def default_expiration
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.default_table_expiration_ms
  rescue StandardError
    nil
  end
end
#default_expiration=(new_default_expiration) ⇒ Object
Updates the default lifetime of all tables in the dataset, in milliseconds.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
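For example, a sketch giving newly created tables a one-day default lifetime (the value is in milliseconds):

dataset.default_expiration = 24 * 60 * 60 * 1000 # 24 hours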
# File 'lib/google/cloud/bigquery/dataset.rb', line 223

def default_expiration= new_default_expiration
  reload! unless resource_full?
  @gapi.update! default_table_expiration_ms: new_default_expiration
  patch_gapi! :default_table_expiration_ms
end
#delete(force: nil) ⇒ Boolean
Permanently deletes the dataset. The dataset must be empty before it can be deleted unless the force option is set to true.
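A sketch of deleting a dataset that may still contain tables:

dataset.delete force: true # deletes any contained tables as well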
# File 'lib/google/cloud/bigquery/dataset.rb', line 418

def delete force: nil
  ensure_service!
  service.delete_dataset dataset_id, force
  true
end
#description ⇒ String?
A user-friendly description of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 168

def description
  return nil if reference?
  ensure_full_data!
  @gapi.description
end
#description=(new_description) ⇒ Object
Updates the user-friendly description of the dataset.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 185

def description= new_description
  reload! unless resource_full?
  @gapi.update! description: new_description
  patch_gapi! :description
end
#etag ⇒ String?
The ETag hash of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 140

def etag
  return nil if reference?
  ensure_full_data!
  @gapi.etag
end
#exists? ⇒ Boolean
Determines whether the dataset exists in the BigQuery service. The result is cached locally.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1623

def exists?
  # Always true if we have a gapi object
  return true unless reference?
  # If we have a value, return it
  return @exists unless @exists.nil?
  ensure_gapi!
  @exists = true
rescue Google::Cloud::NotFoundError
  @exists = false
end
#external(url, format: nil) {|ext| ... } ⇒ External::DataSource
Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.
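For example, a sketch that queries a CSV file in Google Cloud Storage without loading it (the bucket path and the my_ext_table alias are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

csv_url = "gs://my-bucket/path/to/data.csv"
csv_table = dataset.external csv_url do |csv|
  csv.autodetect = true
  csv.skip_leading_rows = 1
end

data = dataset.query "SELECT * FROM my_ext_table",
                     external: { my_ext_table: csv_table }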
# File 'lib/google/cloud/bigquery/dataset.rb', line 1132

def external url, format: nil
  ext = External.from_urls url, format
  yield ext if block_given?
  ext
end
#insert(table_id, rows, skip_invalid: nil, ignore_unknown: nil, autocreate: nil) ⇒ Google::Cloud::Bigquery::InsertResponse
Inserts data into the given table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.
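A sketch inserting two rows whose keys match the destination table's schema fields:

rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob",   "age" => 22 }
]
response = dataset.insert "my_table", rows
puts "inserted" if response.success?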
# File 'lib/google/cloud/bigquery/dataset.rb', line 1814

def insert table_id, rows, skip_invalid: nil, ignore_unknown: nil,
           autocreate: nil
  if autocreate
    begin
      insert_data table_id, rows, skip_invalid: skip_invalid,
                                  ignore_unknown: ignore_unknown
    rescue Google::Cloud::NotFoundError
      sleep rand(1..60)
      begin
        create_table table_id do |tbl_updater|
          yield tbl_updater if block_given?
        end
      # rubocop:disable Lint/HandleExceptions
      rescue Google::Cloud::AlreadyExistsError
      end
      # rubocop:enable Lint/HandleExceptions
      sleep 60
      insert table_id, rows, skip_invalid: skip_invalid,
                             ignore_unknown: ignore_unknown,
                             autocreate: true
    end
  else
    insert_data table_id, rows, skip_invalid: skip_invalid,
                                ignore_unknown: ignore_unknown
  end
end
#insert_async(table_id, skip_invalid: nil, ignore_unknown: nil, max_bytes: 10000000, max_rows: 500, interval: 10, threads: 4) {|response| ... } ⇒ Table::AsyncInserter
Creates an asynchronous inserter object used to insert rows in batches.
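A sketch of batching rows through the inserter and flushing on shutdown; the result callback fields (error?, insert_count) follow the Table::AsyncInserter::Result docs:

inserter = dataset.insert_async "my_table" do |result|
  if result.error?
    # inspect result.error
  else
    # result.insert_count rows were inserted
  end
end

inserter.insert [{ "first_name" => "Alice", "age" => 21 }]

# Flush pending rows and stop the background threads.
inserter.stop.wait!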
# File 'lib/google/cloud/bigquery/dataset.rb', line 1889

def insert_async table_id, skip_invalid: nil, ignore_unknown: nil,
                 max_bytes: 10000000, max_rows: 500, interval: 10,
                 threads: 4, &block
  ensure_service!
  # Get table, don't use Dataset#table which handles NotFoundError
  gapi = service.get_table dataset_id, table_id
  table = Table.from_gapi gapi, service
  # Get the AsyncInserter from the table
  table.insert_async skip_invalid: skip_invalid,
                     ignore_unknown: ignore_unknown,
                     max_bytes: max_bytes, max_rows: max_rows,
                     interval: interval, threads: threads, &block
end
#labels ⇒ Hash<String, String>?
A hash of user-provided labels associated with this dataset. Labels are used to organize and group datasets. See Using Labels.
The returned hash is frozen and changes are not allowed. Use #labels= to replace the entire hash.
# File 'lib/google/cloud/bigquery/dataset.rb', line 302

def labels
  return nil if reference?
  m = @gapi.labels
  m = m.to_h if m.respond_to? :to_h
  m.dup.freeze
end
#labels=(labels) ⇒ Object
Updates the hash of user-provided labels associated with this dataset. Labels are used to organize and group datasets. See Using Labels.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
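A sketch replacing the entire labels hash (keys and values are placeholders):

dataset.labels = { "department" => "shipping", "env" => "test" }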
# File 'lib/google/cloud/bigquery/dataset.rb', line 338

def labels= labels
  reload! unless resource_full?
  @gapi.labels = labels
  patch_gapi! :labels
end
#load(table_id, file, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil) {|schema| ... } ⇒ Boolean
Loads data into the provided destination table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #load_job.
For the source of the data, you can pass a Google Cloud Storage file path or a google-cloud-storage File instance. Or, you can upload a file directly. See Loading Data with a POST Request.
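A minimal sketch loading a CSV file from Google Cloud Storage and defining the destination schema in the block (URL and field names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

gs_url = "gs://my-bucket/file-name.csv"
dataset.load "my_new_table", gs_url do |schema|
  schema.string "first_name", mode: :required
  schema.integer "age", mode: :required
end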
# File 'lib/google/cloud/bigquery/dataset.rb', line 1552

def load table_id, file, format: nil, create: nil, write: nil,
         projection_fields: nil, jagged_rows: nil,
         quoted_newlines: nil, encoding: nil, delimiter: nil,
         ignore_unknown: nil, max_bad_records: nil, quote: nil,
         skip_leading: nil, schema: nil, autodetect: nil,
         null_marker: nil
  yield (schema ||= Schema.from_gapi) if block_given?
  options = { format: format, create: create, write: write,
              projection_fields: projection_fields,
              jagged_rows: jagged_rows,
              quoted_newlines: quoted_newlines, encoding: encoding,
              delimiter: delimiter, ignore_unknown: ignore_unknown,
              max_bad_records: max_bad_records, quote: quote,
              skip_leading: skip_leading, schema: schema,
              autodetect: autodetect, null_marker: null_marker }
  job = load_job table_id, file, options
  job.wait_until_done!
  if job.failed?
    begin
      # raise to activate ruby exception cause handling
      raise job.gapi_error
    rescue StandardError => e
      # wrap Google::Apis::Error with Google::Cloud::Error
      raise Google::Cloud::Error.from_error(e)
    end
  end
  true
end
#load_job(table_id, file, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, dryrun: nil, schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil) {|schema| ... } ⇒ Google::Cloud::Bigquery::LoadJob
Loads data into the provided destination table using an asynchronous method. In this method, a LoadJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #load.
For the source of the data, you can pass a Google Cloud Storage file path or a google-cloud-storage File instance. Or, you can upload a file directly. See Loading Data with a POST Request.
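The asynchronous variant, sketched with the same placeholder source; the caller chooses when to block:

gs_url = "gs://my-bucket/file-name.csv"
load_job = dataset.load_job "my_new_table", gs_url

# Do other work, then block until the load finishes.
load_job.wait_until_done!
load_job.done? #=> true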
# File 'lib/google/cloud/bigquery/dataset.rb', line 1342

def load_job table_id, file, format: nil, create: nil, write: nil,
             projection_fields: nil, jagged_rows: nil,
             quoted_newlines: nil, encoding: nil, delimiter: nil,
             ignore_unknown: nil, max_bad_records: nil, quote: nil,
             skip_leading: nil, dryrun: nil, schema: nil, job_id: nil,
             prefix: nil, labels: nil, autodetect: nil,
             null_marker: nil
  ensure_service!
  if block_given?
    schema ||= Schema.from_gapi
    yield schema
  end
  schema_gapi = schema.to_gapi if schema
  options = { format: format, create: create, write: write,
              projection_fields: projection_fields,
              jagged_rows: jagged_rows,
              quoted_newlines: quoted_newlines, encoding: encoding,
              delimiter: delimiter, ignore_unknown: ignore_unknown,
              max_bad_records: max_bad_records, quote: quote,
              skip_leading: skip_leading, dryrun: dryrun,
              schema: schema_gapi, job_id: job_id, prefix: prefix,
              labels: labels, autodetect: autodetect,
              null_marker: null_marker }
  return load_storage(table_id, file, options) if storage_url? file
  return load_local(table_id, file, options) if local_file? file
  raise Google::Cloud::Error, "Don't know how to load #{file}"
end
#location ⇒ String?
The geographic location where the dataset should reside. Possible values include EU and US. The default value is US.
# File 'lib/google/cloud/bigquery/dataset.rb', line 274

def location
  return nil if reference?
  ensure_full_data!
  @gapi.location
end
#modified_at ⇒ Time?
The date when this dataset or any of its tables was last modified.
# File 'lib/google/cloud/bigquery/dataset.rb', line 255

def modified_at
  return nil if reference?
  ensure_full_data!
  begin
    ::Time.at(Integer(@gapi.last_modified_time) / 1000.0)
  rescue StandardError
    nil
  end
end
#name ⇒ String?
A descriptive name for the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 109

def name
  return nil if reference?
  @gapi.friendly_name
end
#name=(new_name) ⇒ Object
Updates the descriptive name for the dataset.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 126

def name= new_name
  reload! unless resource_full?
  @gapi.update! friendly_name: new_name
  patch_gapi! :friendly_name
end
#project_id ⇒ String
The ID of the project containing this dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 86

def project_id
  return reference.project_id if reference?
  @gapi.dataset_reference.project_id
end
#query(query, params: nil, external: nil, max: nil, cache: true, standard_sql: nil, legacy_sql: nil) ⇒ Google::Cloud::Bigquery::Data
Queries data and waits for the results. In this method, a QueryJob is created and its results are saved to a temporary table, then read from the table. Timeouts and transient errors are generally handled as needed to complete the query.
Sets the current dataset as the default dataset in the query. Useful for using unqualified table names.
When using standard SQL and passing arguments using params, Ruby types are mapped to BigQuery types as follows:

| BigQuery  | Ruby                           | Notes                                        |
|-----------|--------------------------------|----------------------------------------------|
| BOOL      | true/false                     |                                              |
| INT64     | Integer                        |                                              |
| FLOAT64   | Float                          |                                              |
| STRING    | String                         |                                              |
| DATETIME  | DateTime                       | DATETIME does not support time zone.         |
| DATE      | Date                           |                                              |
| TIMESTAMP | Time                           |                                              |
| TIME      | Google::Cloud::BigQuery::Time  |                                              |
| BYTES     | File, IO, StringIO, or similar |                                              |
| ARRAY     | Array                          | Nested arrays, nil values are not supported. |
| STRUCT    | Hash                           | Hash keys may be strings or symbols.         |
See Data Types for an overview of each BigQuery data type, including allowed values.
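For example, a sketch using a positional parameter (standard SQL; the table is assumed to exist in this dataset):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

data = dataset.query "SELECT name FROM my_table WHERE id = ?",
                     params: [1]
data.each do |row|
  puts row[:name]
end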
# File 'lib/google/cloud/bigquery/dataset.rb', line 1063

def query query, params: nil, external: nil, max: nil, cache: true,
          standard_sql: nil, legacy_sql: nil
  ensure_service!
  options = { params: params, external: external, cache: cache,
              legacy_sql: legacy_sql, standard_sql: standard_sql }
  job = query_job query, options
  job.wait_until_done!
  if job.failed?
    begin
      # raise to activate ruby exception cause handling
      raise job.gapi_error
    rescue StandardError => e
      # wrap Google::Apis::Error with Google::Cloud::Error
      raise Google::Cloud::Error.from_error(e)
    end
  end
  job.data max: max
end
#query_job(query, params: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil) ⇒ Google::Cloud::Bigquery::QueryJob
Queries data by creating a query job.
Sets the current dataset as the default dataset in the query. Useful for using unqualified table names.
When using standard SQL and passing arguments using params, Ruby types are mapped to BigQuery types as follows:

| BigQuery  | Ruby                           | Notes                                        |
|-----------|--------------------------------|----------------------------------------------|
| BOOL      | true/false                     |                                              |
| INT64     | Integer                        |                                              |
| FLOAT64   | Float                          |                                              |
| STRING    | String                         |                                              |
| DATETIME  | DateTime                       | DATETIME does not support time zone.         |
| DATE      | Date                           |                                              |
| TIMESTAMP | Time                           |                                              |
| TIME      | Google::Cloud::BigQuery::Time  |                                              |
| BYTES     | File, IO, StringIO, or similar |                                              |
| ARRAY     | Array                          | Nested arrays, nil values are not supported. |
| STRUCT    | Hash                           | Hash keys may be strings or symbols.         |
See Data Types for an overview of each BigQuery data type, including allowed values.
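A sketch that creates the job, blocks until it finishes, and reads the results (error handling elided):

job = dataset.query_job "SELECT name FROM my_table"
job.wait_until_done!
job.data.each { |row| puts row[:name] } unless job.failed?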
# File 'lib/google/cloud/bigquery/dataset.rb', line 892

def query_job query, params: nil, external: nil,
              priority: "INTERACTIVE", cache: true, table: nil,
              create: nil, write: nil, standard_sql: nil,
              legacy_sql: nil, large_results: nil, flatten: nil,
              maximum_billing_tier: nil, maximum_bytes_billed: nil,
              job_id: nil, prefix: nil, labels: nil, udfs: nil
  options = { priority: priority, cache: cache, table: table,
              create: create, write: write,
              large_results: large_results, flatten: flatten,
              legacy_sql: legacy_sql, standard_sql: standard_sql,
              maximum_billing_tier: maximum_billing_tier,
              maximum_bytes_billed: maximum_bytes_billed,
              params: params, external: external, labels: labels,
              job_id: job_id, prefix: prefix, udfs: udfs }
  options[:dataset] ||= self
  ensure_service!
  gapi = service.query_job query, options
  Job.from_gapi gapi, service
end
#reference? ⇒ Boolean
Whether the dataset was created without retrieving the resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1652

def reference?
  @gapi.nil?
end
#reload! ⇒ Google::Cloud::Bigquery::Dataset Also known as: refresh!
Reloads the dataset with current data from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1599

def reload!
  ensure_service!
  reloaded_gapi = service.get_dataset dataset_id
  @reference = nil
  @gapi = reloaded_gapi
  self
end
#resource? ⇒ Boolean
Whether the dataset was created with a resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1674

def resource?
  !@gapi.nil?
end
#resource_full? ⇒ Boolean
Whether the dataset was created with a full resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1721

def resource_full?
  @gapi.is_a? Google::Apis::BigqueryV2::Dataset
end
#resource_partial? ⇒ Boolean
Whether the dataset was created with a partial resource representation from the BigQuery service by retrieval through Project#datasets. See Datasets: list response for the contents of the partial representation. Accessing any attribute outside of the partial representation will result in loading the full representation.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1701

def resource_partial?
  @gapi.is_a? Google::Apis::BigqueryV2::DatasetList::Dataset
end
#table(table_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Table?
Retrieves an existing table by ID.
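A sketch of the lookup, which returns nil when the table does not exist; skip_lookup avoids the service call when only a reference is needed:

table = dataset.table "my_table"
puts table.table_id if table

ref = dataset.table "my_table", skip_lookup: true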
# File 'lib/google/cloud/bigquery/dataset.rb', line 620

def table table_id, skip_lookup: nil
  ensure_service!
  if skip_lookup
    return Table.new_reference project_id, dataset_id, table_id,
                               service
  end
  gapi = service.get_table dataset_id, table_id
  Table.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end
#tables(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Table>
Retrieves the list of tables belonging to the dataset.
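A sketch that pages through every table; Table::List#all retrieves additional pages as needed:

dataset.tables.all do |table|
  puts table.table_id
end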
# File 'lib/google/cloud/bigquery/dataset.rb', line 665

def tables token: nil, max: nil
  ensure_service!
  options = { token: token, max: max }
  gapi = service.list_tables dataset_id, options
  Table::List.from_gapi gapi, service, dataset_id, max
end