Class: Google::Cloud::Bigquery::Dataset
- Inherits: Object
- Defined in:
  lib/google/cloud/bigquery/dataset.rb,
  lib/google/cloud/bigquery/dataset/list.rb,
  lib/google/cloud/bigquery/dataset/access.rb
Overview
Dataset
Represents a Dataset. A dataset is a grouping mechanism that holds zero or more tables. Datasets are the lowest level unit of access control; you cannot control access at the table level. A dataset is contained within a specific project.
Direct Known Subclasses
Updater
Defined Under Namespace
Classes: Access, List, Updater
Attributes
- #access {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset::Access
  Retrieves the access rules for a Dataset.
- #api_url ⇒ String?
  A URL that can be used to access the dataset using the REST API.
- #created_at ⇒ Time?
  The time when this dataset was created.
- #dataset_id ⇒ String
  A unique ID for this dataset, without the project name.
- #default_expiration ⇒ Integer?
  The default lifetime of all tables in the dataset, in milliseconds.
- #default_expiration=(new_default_expiration) ⇒ Object
  Updates the default lifetime of all tables in the dataset, in milliseconds.
- #description ⇒ String?
  A user-friendly description of the dataset.
- #description=(new_description) ⇒ Object
  Updates the user-friendly description of the dataset.
- #etag ⇒ String?
  The ETag hash of the dataset.
- #labels ⇒ Hash<String, String>?
  A hash of user-provided labels associated with this dataset.
- #labels=(labels) ⇒ Object
  Updates the hash of user-provided labels associated with this dataset.
- #location ⇒ String?
  The geographic location where the dataset should reside.
- #modified_at ⇒ Time?
  The date when this dataset or any of its tables was last modified.
- #name ⇒ String?
  A descriptive name for the dataset.
- #name=(new_name) ⇒ Object
  Updates the descriptive name for the dataset.
- #project_id ⇒ String
  The ID of the project containing this dataset.
Lifecycle
- #delete(force: nil) ⇒ Boolean
  Permanently deletes the dataset.
Table
- #create_table(table_id, name: nil, description: nil) {|table| ... } ⇒ Google::Cloud::Bigquery::Table
  Creates a new table.
- #create_view(table_id, query, name: nil, description: nil, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Google::Cloud::Bigquery::Table
  Creates a new view table, which is a virtual table defined by the given SQL query.
- #table(table_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Table?
  Retrieves an existing table by ID.
- #tables(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Table>
  Retrieves the list of tables belonging to the dataset.
Data
- #exists? ⇒ Boolean
  Determines whether the dataset exists in the BigQuery service.
- #external(url, format: nil) {|ext| ... } ⇒ External::DataSource
  Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery.
- #insert(table_id, rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil, autocreate: nil) ⇒ Google::Cloud::Bigquery::InsertResponse
  Inserts data into the given table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.
- #insert_async(table_id, skip_invalid: nil, ignore_unknown: nil, max_bytes: 10000000, max_rows: 500, interval: 10, threads: 4) {|response| ... } ⇒ Table::AsyncInserter
  Creates an asynchronous inserter object used to insert rows in batches.
- #load(table_id, files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil) {|updater| ... } ⇒ Boolean
  Loads data into the provided destination table using a synchronous method that blocks for a response.
- #load_job(table_id, files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, dryrun: nil, schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil) {|updater| ... } ⇒ Google::Cloud::Bigquery::LoadJob
  Loads data into the provided destination table using an asynchronous method.
- #query(query, params: nil, external: nil, max: nil, cache: true, standard_sql: nil, legacy_sql: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::Data
  Queries data and waits for the results.
- #query_job(query, params: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::QueryJob
  Queries data by creating a query job.
- #reference? ⇒ Boolean
  Whether the dataset was created without retrieving the resource representation from the BigQuery service.
- #reload! ⇒ Google::Cloud::Bigquery::Dataset (also: #refresh!)
  Reloads the dataset with current data from the BigQuery service.
- #resource? ⇒ Boolean
  Whether the dataset was created with a resource representation from the BigQuery service.
- #resource_full? ⇒ Boolean
  Whether the dataset was created with a full resource representation from the BigQuery service.
- #resource_partial? ⇒ Boolean
  Whether the dataset was created with a partial resource representation from the BigQuery service by retrieval through Project#datasets.
Instance Method Details
#access {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset::Access
Retrieves the access rules for a Dataset. The rules can be updated when passing a block; see Access for all the methods available.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 384

def access
  ensure_full_data!
  reload! unless resource_full?
  access_builder = Access.from_gapi @gapi
  if block_given?
    yield access_builder
    if access_builder.changed?
      @gapi.update! access: access_builder.to_gapi
      patch_gapi! :access
    end
  end
  access_builder.freeze
end
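A short usage sketch may help; "my_dataset" and the email addresses below are placeholder values:

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset" # placeholder dataset ID

# Changes staged in the block are saved when the block returns.
dataset.access do |access|
  access.add_owner_group "owners@example.com"
  access.add_writer_user "writer@example.com"
  access.remove_writer_user "readers@example.com"
end
```

Without a block, the frozen Access object is returned for inspection only.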
#api_url ⇒ String?
A URL that can be used to access the dataset using the REST API.
# File 'lib/google/cloud/bigquery/dataset.rb', line 154

def api_url
  return nil if reference?
  ensure_full_data!
  @gapi.self_link
end
#create_table(table_id, name: nil, description: nil) {|table| ... } ⇒ Google::Cloud::Bigquery::Table
Creates a new table. If you are adapting existing code that was written for the REST API, you can pass the table's schema as a hash (see example).
# File 'lib/google/cloud/bigquery/dataset.rb', line 494

def create_table table_id, name: nil, description: nil
  ensure_service!
  new_tb = Google::Apis::BigqueryV2::Table.new(
    table_reference: Google::Apis::BigqueryV2::TableReference.new(
      project_id: project_id, dataset_id: dataset_id,
      table_id: table_id
    )
  )
  updater = Table::Updater.new(new_tb).tap do |tb|
    tb.name = name unless name.nil?
    tb.description = description unless description.nil?
  end

  yield updater if block_given?

  gapi = service.insert_table dataset_id, updater.to_gapi
  Table.from_gapi gapi, service
end
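For illustration, a sketch of creating a table and defining its schema through the yielded updater; the dataset ID, table ID, and field names are placeholders:

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset" # placeholder dataset ID

table = dataset.create_table "my_table" do |t|
  t.name = "My Table"
  t.description = "A description of my table."
  # Define the schema fields inside the schema block.
  t.schema do |schema|
    schema.string "first_name", mode: :required
    schema.integer "age", mode: :required
  end
end
```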
#create_view(table_id, query, name: nil, description: nil, standard_sql: nil, legacy_sql: nil, udfs: nil) ⇒ Google::Cloud::Bigquery::Table
Creates a new view table, which is a virtual table defined by the given SQL query.
BigQuery's views are logical views, not materialized views, which means that the query that defines the view is re-executed every time the view is queried. Queries are billed according to the total amount of data in all table fields referenced directly or indirectly by the top-level query. (See Table#view? and Table#query.)
# File 'lib/google/cloud/bigquery/dataset.rb', line 569

def create_view table_id, query, name: nil, description: nil,
                standard_sql: nil, legacy_sql: nil, udfs: nil
  new_view_opts = {
    table_reference: Google::Apis::BigqueryV2::TableReference.new(
      project_id: project_id, dataset_id: dataset_id,
      table_id: table_id
    ),
    friendly_name: name,
    description: description,
    view: Google::Apis::BigqueryV2::ViewDefinition.new(
      query: query,
      use_legacy_sql: Convert.resolve_legacy_sql(standard_sql,
                                                 legacy_sql),
      user_defined_function_resources: udfs_gapi(udfs)
    )
  }.delete_if { |_, v| v.nil? }
  new_view = Google::Apis::BigqueryV2::Table.new new_view_opts

  gapi = service.insert_table dataset_id, new_view
  Table.from_gapi gapi, service
end
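A minimal sketch, assuming a project my_project with an existing my_dataset.my_table:

```ruby
view = dataset.create_view "my_view",
                           "SELECT name, age FROM `my_project.my_dataset.my_table`",
                           name: "My View",
                           description: "This is my view"
view.view? #=> true
```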
#created_at ⇒ Time?
The time when this dataset was created.
# File 'lib/google/cloud/bigquery/dataset.rb', line 237

def created_at
  return nil if reference?
  ensure_full_data!
  begin
    ::Time.at(Integer(@gapi.creation_time) / 1000.0)
  rescue StandardError
    nil
  end
end
#dataset_id ⇒ String
A unique ID for this dataset, without the project name.
# File 'lib/google/cloud/bigquery/dataset.rb', line 74

def dataset_id
  return reference.dataset_id if reference?
  @gapi.dataset_reference.dataset_id
end
#default_expiration ⇒ Integer?
The default lifetime of all tables in the dataset, in milliseconds.
# File 'lib/google/cloud/bigquery/dataset.rb', line 200

def default_expiration
  return nil if reference?
  ensure_full_data!
  begin
    Integer @gapi.default_table_expiration_ms
  rescue StandardError
    nil
  end
end
#default_expiration=(new_default_expiration) ⇒ Object
Updates the default lifetime of all tables in the dataset, in milliseconds.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 223

def default_expiration= new_default_expiration
  reload! unless resource_full?
  @gapi.update! default_table_expiration_ms: new_default_expiration
  patch_gapi! :default_table_expiration_ms
end
#delete(force: nil) ⇒ Boolean
Permanently deletes the dataset. The dataset must be empty before it can be deleted unless the force option is set to true.
# File 'lib/google/cloud/bigquery/dataset.rb', line 418

def delete force: nil
  ensure_service!
  service.delete_dataset dataset_id, force
  true
end
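A sketch of deleting a dataset; "my_dataset" is a placeholder, and without force: true the call fails if the dataset still contains tables:

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset" # placeholder dataset ID

# force: true also deletes any tables the dataset contains.
dataset.delete force: true #=> true
```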
#description ⇒ String?
A user-friendly description of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 168

def description
  return nil if reference?
  ensure_full_data!
  @gapi.description
end
#description=(new_description) ⇒ Object
Updates the user-friendly description of the dataset.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 185

def description= new_description
  reload! unless resource_full?
  @gapi.update! description: new_description
  patch_gapi! :description
end
#etag ⇒ String?
The ETag hash of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 140

def etag
  return nil if reference?
  ensure_full_data!
  @gapi.etag
end
#exists? ⇒ Boolean
Determines whether the dataset exists in the BigQuery service. The result is cached locally.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1681

def exists?
  # Always true if we have a gapi object
  return true unless reference?
  # If we have a value, return it
  return @exists unless @exists.nil?
  ensure_gapi!
  @exists = true
rescue Google::Cloud::NotFoundError
  @exists = false
end
#external(url, format: nil) {|ext| ... } ⇒ External::DataSource
Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1148

def external url, format: nil
  ext = External.from_urls url, format
  yield ext if block_given?
  ext
end
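A sketch of querying a CSV file in Cloud Storage without loading it; the bucket path and the my_ext_table alias are placeholders:

```ruby
csv_url = "gs://bucket/path/to/data.csv" # placeholder URL
csv_table = dataset.external csv_url do |csv|
  csv.autodetect = true
  csv.skip_leading_rows = 1
end

# The external: option maps table names in the query to data sources.
data = dataset.query "SELECT * FROM my_ext_table",
                     external: { my_ext_table: csv_table }
```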
#insert(table_id, rows, insert_ids: nil, skip_invalid: nil, ignore_unknown: nil, autocreate: nil) ⇒ Google::Cloud::Bigquery::InsertResponse
Inserts data into the given table for near-immediate querying, without the need to complete a load operation before the data can appear in query results.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1878

def insert table_id, rows, insert_ids: nil, skip_invalid: nil,
           ignore_unknown: nil, autocreate: nil
  rows = [rows] if rows.is_a? Hash
  insert_ids = Array insert_ids
  if insert_ids.count > 0 && insert_ids.count != rows.count
    raise ArgumentError, "insert_ids must be the same size as rows"
  end
  if autocreate
    begin
      insert_data table_id, rows, skip_invalid: skip_invalid,
                                  ignore_unknown: ignore_unknown,
                                  insert_ids: insert_ids
    rescue Google::Cloud::NotFoundError
      sleep rand(1..60)
      begin
        create_table table_id do |tbl_updater|
          yield tbl_updater if block_given?
        end
      # rubocop:disable Lint/HandleExceptions
      rescue Google::Cloud::AlreadyExistsError
      end
      # rubocop:enable Lint/HandleExceptions

      sleep 60
      insert table_id, rows, skip_invalid: skip_invalid,
                             ignore_unknown: ignore_unknown,
                             autocreate: true,
                             insert_ids: insert_ids
    end
  else
    insert_data table_id, rows, skip_invalid: skip_invalid,
                                ignore_unknown: ignore_unknown,
                                insert_ids: insert_ids
  end
end
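A sketch of streaming a couple of rows; the table ID and row contents are placeholders:

```ruby
rows = [
  { "first_name" => "Alice", "age" => 21 },
  { "first_name" => "Bob",   "age" => 22 }
]
response = dataset.insert "my_table", rows
response.success? #=> true when every row was inserted
```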
#insert_async(table_id, skip_invalid: nil, ignore_unknown: nil, max_bytes: 10000000, max_rows: 500, interval: 10, threads: 4) {|response| ... } ⇒ Table::AsyncInserter
Creates an asynchronous inserter object used to insert rows in batches.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1962

def insert_async table_id, skip_invalid: nil, ignore_unknown: nil,
                 max_bytes: 10000000, max_rows: 500, interval: 10,
                 threads: 4, &block
  ensure_service!

  # Get table, don't use Dataset#table which handles NotFoundError
  gapi = service.get_table dataset_id, table_id
  table = Table.from_gapi gapi, service
  # Get the AsyncInserter from the table
  table.insert_async skip_invalid: skip_invalid,
                     ignore_unknown: ignore_unknown,
                     max_bytes: max_bytes, max_rows: max_rows,
                     interval: interval, threads: threads, &block
end
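A sketch of batched streaming; the table ID and rows are placeholders, and the block receives a result object for each flushed batch:

```ruby
inserter = dataset.insert_async "my_table" do |result|
  if result.error?
    puts result.error
  else
    puts "inserted #{result.insert_count} rows " \
         "with #{result.error_count} errors"
  end
end

inserter.insert [{ "first_name" => "Alice", "age" => 21 }]

# Flush pending rows and shut down the background threads.
inserter.stop.wait!
```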
#labels ⇒ Hash<String, String>?
A hash of user-provided labels associated with this dataset. Labels are used to organize and group datasets. See Using Labels.
The returned hash is frozen and changes are not allowed. Use #labels= to replace the entire hash.
# File 'lib/google/cloud/bigquery/dataset.rb', line 302

def labels
  return nil if reference?
  m = @gapi.labels
  m = m.to_h if m.respond_to? :to_h
  m.dup.freeze
end
#labels=(labels) ⇒ Object
Updates the hash of user-provided labels associated with this dataset. Labels are used to organize and group datasets. See Using Labels.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 338

def labels= labels
  reload! unless resource_full?
  @gapi.labels = labels
  patch_gapi! :labels
end
#load(table_id, files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil) {|updater| ... } ⇒ Boolean
Loads data into the provided destination table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #load_job.
For the source of the data, you can pass a Google Cloud Storage file path or a google-cloud-storage File instance. Or, you can upload a file directly. See Loading Data with a POST Request.
The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the dataset is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1621

def load table_id, files, format: nil, create: nil, write: nil,
         projection_fields: nil, jagged_rows: nil,
         quoted_newlines: nil, encoding: nil, delimiter: nil,
         ignore_unknown: nil, max_bad_records: nil, quote: nil,
         skip_leading: nil, schema: nil, autodetect: nil,
         null_marker: nil, &block
  job = load_job table_id, files,
                 format: format, create: create, write: write,
                 projection_fields: projection_fields,
                 jagged_rows: jagged_rows,
                 quoted_newlines: quoted_newlines,
                 encoding: encoding, delimiter: delimiter,
                 ignore_unknown: ignore_unknown,
                 max_bad_records: max_bad_records, quote: quote,
                 skip_leading: skip_leading, schema: schema,
                 autodetect: autodetect, null_marker: null_marker,
                 &block

  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
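A sketch of loading a CSV from Cloud Storage into a new table, defining the schema in the block; the bucket URL and field names are placeholders:

```ruby
gs_url = "gs://my-bucket/file-name.csv" # placeholder URL
dataset.load "my_new_table", gs_url do |schema|
  schema.string "first_name", mode: :required
  schema.integer "age", mode: :required
end #=> true once the load job succeeds
```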
#load_job(table_id, files, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, dryrun: nil, schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil) {|updater| ... } ⇒ Google::Cloud::Bigquery::LoadJob
Loads data into the provided destination table using an asynchronous method. In this method, a LoadJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #load.
For the source of the data, you can pass a Google Cloud Storage file path or a google-cloud-storage File instance. Or, you can upload a file directly. See Loading Data with a POST Request.
The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method. If the dataset is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1385

def load_job table_id, files, format: nil, create: nil, write: nil,
             projection_fields: nil, jagged_rows: nil,
             quoted_newlines: nil, encoding: nil, delimiter: nil,
             ignore_unknown: nil, max_bad_records: nil, quote: nil,
             skip_leading: nil, dryrun: nil, schema: nil,
             job_id: nil, prefix: nil, labels: nil, autodetect: nil,
             null_marker: nil
  ensure_service!

  updater = load_job_updater table_id,
                             format: format, create: create,
                             write: write,
                             projection_fields: projection_fields,
                             jagged_rows: jagged_rows,
                             quoted_newlines: quoted_newlines,
                             encoding: encoding,
                             delimiter: delimiter,
                             ignore_unknown: ignore_unknown,
                             max_bad_records: max_bad_records,
                             quote: quote,
                             skip_leading: skip_leading,
                             dryrun: dryrun, schema: schema,
                             job_id: job_id, prefix: prefix,
                             labels: labels, autodetect: autodetect,
                             null_marker: null_marker

  yield updater if block_given?

  load_local_or_uri files, updater
end
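The asynchronous variant returns the job immediately; a sketch under the same placeholder names as above:

```ruby
gs_url = "gs://my-bucket/file-name.csv" # placeholder URL
load_job = dataset.load_job "my_new_table", gs_url do |schema|
  schema.string "first_name", mode: :required
end

load_job.wait_until_done! # block until the job finishes
load_job.done? #=> true
```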
#location ⇒ String?
The geographic location where the dataset should reside. Possible values include EU and US. The default value is US.
# File 'lib/google/cloud/bigquery/dataset.rb', line 274

def location
  return nil if reference?
  ensure_full_data!
  @gapi.location
end
#modified_at ⇒ Time?
The date when this dataset or any of its tables was last modified.
# File 'lib/google/cloud/bigquery/dataset.rb', line 255

def modified_at
  return nil if reference?
  ensure_full_data!
  begin
    ::Time.at(Integer(@gapi.last_modified_time) / 1000.0)
  rescue StandardError
    nil
  end
end
#name ⇒ String?
A descriptive name for the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 109

def name
  return nil if reference?
  @gapi.friendly_name
end
#name=(new_name) ⇒ Object
Updates the descriptive name for the dataset.
If the dataset is not a full resource representation (see #resource_full?), the full representation will be retrieved before the update to comply with ETag-based optimistic concurrency control.
# File 'lib/google/cloud/bigquery/dataset.rb', line 126

def name= new_name
  reload! unless resource_full?
  @gapi.update! friendly_name: new_name
  patch_gapi! :friendly_name
end
#project_id ⇒ String
The ID of the project containing this dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 86

def project_id
  return reference.project_id if reference?
  @gapi.dataset_reference.project_id
end
#query(query, params: nil, external: nil, max: nil, cache: true, standard_sql: nil, legacy_sql: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::Data
Queries data and waits for the results. In this method, a QueryJob is created and its results are saved to a temporary table, then read from the table. Timeouts and transient errors are generally handled as needed to complete the query.
Sets the current dataset as the default dataset in the query. Useful for using unqualified table names.
When using standard SQL and passing arguments using params, Ruby types are mapped to BigQuery types as follows:
| BigQuery | Ruby | Notes |
|---|---|---|
| BOOL | true/false | |
| INT64 | Integer | |
| FLOAT64 | Float | |
| STRING | String | |
| DATETIME | DateTime | DATETIME does not support time zone. |
| DATE | Date | |
| TIMESTAMP | Time | |
| TIME | Google::Cloud::BigQuery::Time | |
| BYTES | File, IO, StringIO, or similar | |
| ARRAY | Array | Nested arrays and nil values are not supported. |
| STRUCT | Hash | Hash keys may be strings or symbols. |
See Data Types for an overview of each BigQuery data type, including allowed values.
The geographic location for the job ("US", "EU", etc.) can be set via QueryJob::Updater#location= in a block passed to this method. If the dataset is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1090

def query query, params: nil, external: nil, max: nil, cache: true,
          standard_sql: nil, legacy_sql: nil, &block
  job = query_job query, params: params, external: external,
                  cache: cache, standard_sql: standard_sql,
                  legacy_sql: legacy_sql, &block
  job.wait_until_done!
  ensure_job_succeeded! job

  job.data max: max
end
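A sketch showing both positional and named query parameters; my_table and its columns are placeholders:

```ruby
# Positional parameters use ? placeholders and an Array.
data = dataset.query "SELECT name FROM my_table WHERE id = ?",
                     params: [1]

# Named parameters use @name placeholders and a Hash.
data = dataset.query "SELECT name FROM my_table WHERE id = @id",
                     params: { id: 1 }

data.each do |row|
  puts row[:name]
end
```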
#query_job(query, params: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::QueryJob
Queries data by creating a query job.
Sets the current dataset as the default dataset in the query. Useful for using unqualified table names.
When using standard SQL and passing arguments using params, Ruby types are mapped to BigQuery types as follows:
| BigQuery | Ruby | Notes |
|---|---|---|
| BOOL | true/false | |
| INT64 | Integer | |
| FLOAT64 | Float | |
| STRING | String | |
| DATETIME | DateTime | DATETIME does not support time zone. |
| DATE | Date | |
| TIMESTAMP | Time | |
| TIME | Google::Cloud::BigQuery::Time | |
| BYTES | File, IO, StringIO, or similar | |
| ARRAY | Array | Nested arrays and nil values are not supported. |
| STRUCT | Hash | Hash keys may be strings or symbols. |
See Data Types for an overview of each BigQuery data type, including allowed values.
The geographic location for the job ("US", "EU", etc.) can be set via QueryJob::Updater#location= in a block passed to this method. If the dataset is a full resource representation (see #resource_full?), the location of the job will be automatically set to the location of the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 902

def query_job query, params: nil, external: nil,
              priority: "INTERACTIVE", cache: true, table: nil,
              create: nil, write: nil, standard_sql: nil,
              legacy_sql: nil, large_results: nil, flatten: nil,
              maximum_billing_tier: nil, maximum_bytes_billed: nil,
              job_id: nil, prefix: nil, labels: nil, udfs: nil
  ensure_service!
  options = { priority: priority, cache: cache, table: table,
              create: create, write: write,
              large_results: large_results, flatten: flatten,
              legacy_sql: legacy_sql, standard_sql: standard_sql,
              maximum_billing_tier: maximum_billing_tier,
              maximum_bytes_billed: maximum_bytes_billed,
              job_id: job_id, prefix: prefix, params: params,
              external: external, labels: labels, udfs: udfs }

  updater = QueryJob::Updater.from_options service, query, options
  updater.dataset = self
  updater.location = location if location # may be dataset reference

  yield updater if block_given?

  gapi = service.query_job updater.to_gapi
  Job.from_gapi gapi, service
end
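A sketch of running a job and inspecting the outcome; my_table is a placeholder:

```ruby
job = dataset.query_job "SELECT name FROM my_table"

job.wait_until_done!
if job.failed?
  puts job.error # inspect the failure
else
  job.data.each { |row| puts row[:name] }
end
```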
#reference? ⇒ Boolean
Whether the dataset was created without retrieving the resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1710

def reference?
  @gapi.nil?
end
#reload! ⇒ Google::Cloud::Bigquery::Dataset Also known as: refresh!
Reloads the dataset with current data from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1657

def reload!
  ensure_service!
  reloaded_gapi = service.get_dataset dataset_id
  @reference = nil
  @gapi = reloaded_gapi
  self
end
#resource? ⇒ Boolean
Whether the dataset was created with a resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1732

def resource?
  !@gapi.nil?
end
#resource_full? ⇒ Boolean
Whether the dataset was created with a full resource representation from the BigQuery service.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1779

def resource_full?
  @gapi.is_a? Google::Apis::BigqueryV2::Dataset
end
#resource_partial? ⇒ Boolean
Whether the dataset was created with a partial resource representation from the BigQuery service by retrieval through Project#datasets. See Datasets: list response for the contents of the partial representation. Accessing any attribute outside of the partial representation will result in loading the full representation.
# File 'lib/google/cloud/bigquery/dataset.rb', line 1759

def resource_partial?
  @gapi.is_a? Google::Apis::BigqueryV2::DatasetList::Dataset
end
#table(table_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Table?
Retrieves an existing table by ID.
# File 'lib/google/cloud/bigquery/dataset.rb', line 622

def table table_id, skip_lookup: nil
  ensure_service!
  if skip_lookup
    return Table.new_reference project_id, dataset_id, table_id,
                               service
  end
  gapi = service.get_table dataset_id, table_id
  Table.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end
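A sketch contrasting a normal lookup (one API call, nil if missing) with skip_lookup (no API call); "my_table" is a placeholder:

```ruby
table = dataset.table "my_table"
puts table.name unless table.nil?

# skip_lookup returns a reference object without an API round trip.
table_ref = dataset.table "my_table", skip_lookup: true
```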
#tables(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Table>
Retrieves the list of tables belonging to the dataset.
# File 'lib/google/cloud/bigquery/dataset.rb', line 667

def tables token: nil, max: nil
  ensure_service!
  options = { token: token, max: max }
  gapi = service.list_tables dataset_id, options
  Table::List.from_gapi gapi, service, dataset_id, max
end
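A sketch of iterating tables; Table::List#all handles pagination for you:

```ruby
dataset.tables.each do |table|
  puts table.name
end

# Retrieve all pages of results:
dataset.tables.all { |table| puts table.name }
```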