Class: Google::Cloud::Bigquery::Project

Inherits:
Object
Defined in:
lib/google/cloud/bigquery/project.rb

Overview

Project

Projects are top-level containers in Google Cloud Platform. They store information about billing and authorized users, and they contain BigQuery data. Each project has a friendly name and a unique ID.

Google::Cloud::Bigquery::Project is the main object for interacting with Google BigQuery. Dataset objects are created, accessed, and deleted by Google::Cloud::Bigquery::Project.

See Google::Cloud#bigquery

Examples:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

Instance Method Summary

Constructor Details

#initialize(service) ⇒ Project

Creates a new Project instance that wraps the given Service.

See Google::Cloud.bigquery



# File 'lib/google/cloud/bigquery/project.rb', line 57

def initialize service
  @service = service
end

Instance Method Details

#create_dataset(dataset_id, name: nil, description: nil, expiration: nil, location: nil) {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset

Creates a new dataset.

Examples:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset"

A name and description can be provided:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset",
                                  name: "My Dataset",
                                  description: "This is my Dataset"

Access rules can be provided with the access option:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset",
  access: [{"role"=>"WRITER", "userByEmail"=>"writers@example.com"}]

Or, configure access with a block: (See Dataset::Access)

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset" do |access|
  access.add_writer_user "writers@example.com"
end
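
A default table expiration and a geographic location can also be set at creation time (an illustrative sketch based on the expiration and location parameters documented below; the values shown are examples only):

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

# 3_600_000 ms (one hour) is the documented minimum expiration;
# "EU" is one of the documented location values.
dataset = bigquery.create_dataset "my_dataset",
                                  expiration: 3_600_000,
                                  location: "EU"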

Parameters:

  • dataset_id (String)

    A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

  • name (String)

    A descriptive name for the dataset.

  • description (String)

    A user-friendly description of the dataset.

  • expiration (Integer)

    The default lifetime of all tables in the dataset, in milliseconds. The minimum value is 3600000 milliseconds (one hour).

  • location (String)

    The geographic location where the dataset should reside. Possible values include EU and US. The default value is US.

Yields:

  • (access)

    a block for setting rules

Yield Parameters:

  • access (Google::Cloud::Bigquery::Dataset::Access)

    the object accepting access rules (see Dataset::Access)

Returns:

  • (Google::Cloud::Bigquery::Dataset)

# File 'lib/google/cloud/bigquery/project.rb', line 313

def create_dataset dataset_id, name: nil, description: nil,
                   expiration: nil, location: nil
  ensure_service!

  new_ds = Google::Apis::BigqueryV2::Dataset.new(
    dataset_reference: Google::Apis::BigqueryV2::DatasetReference.new(
      project_id: project, dataset_id: dataset_id))

  # Can set location only on creation, no Dataset#location method
  new_ds.update! location: location unless location.nil?

  updater = Dataset::Updater.new(new_ds).tap do |b|
    b.name = name unless name.nil?
    b.description = description unless description.nil?
    b.default_expiration = expiration unless expiration.nil?
  end

  if block_given?
    yield updater
    updater.check_for_mutated_access!
  end

  gapi = service.insert_dataset new_ds
  Dataset.from_gapi gapi, service
end

#dataset(dataset_id) ⇒ Google::Cloud::Bigquery::Dataset?

Retrieves an existing dataset by ID.

Examples:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

dataset = bigquery.dataset "my_dataset"
puts dataset.name
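
Because nil is returned when the dataset is not found, a lookup can be combined with creation (a minimal sketch using only the methods shown on this page):

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

# Fall back to creating the dataset when the lookup returns nil.
dataset = bigquery.dataset("my_dataset") ||
          bigquery.create_dataset("my_dataset")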

Parameters:

  • dataset_id (String)

    The ID of a dataset.

Returns:

  • (Google::Cloud::Bigquery::Dataset, nil)

    Returns nil if the dataset is not found.


# File 'lib/google/cloud/bigquery/project.rb', line 248

def dataset dataset_id
  ensure_service!
  gapi = service.get_dataset dataset_id
  Dataset.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end

#datasets(all: nil, token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Dataset>

Retrieves the list of datasets belonging to the project.

Examples:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

datasets = bigquery.datasets
datasets.each do |dataset|
  puts dataset.name
end

Retrieve hidden datasets with the all optional arg:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

all_datasets = bigquery.datasets all: true

Retrieve all datasets: (See Dataset::List#all)

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

datasets = bigquery.datasets
datasets.all do |dataset|
  puts dataset.name
end
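
Page size and page tokens can also be passed explicitly (a rough sketch based on the max and token parameters below; it assumes the returned list exposes the next page token as token):

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

# First page of up to 10 datasets; the list's page token (assumed to
# be exposed as #token here) is passed back to fetch the next page.
first_page  = bigquery.datasets max: 10
second_page = bigquery.datasets max: 10, token: first_page.token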

Parameters:

  • all (Boolean)

    Whether to list all datasets, including hidden ones. The default is false.

  • token (String)

    A previously-returned page token representing part of the larger set of results to view.

  • max (Integer)

    Maximum number of datasets to return.

Returns:

  • (Array<Google::Cloud::Bigquery::Dataset>)


# File 'lib/google/cloud/bigquery/project.rb', line 381

def datasets all: nil, token: nil, max: nil
  ensure_service!
  options = { all: all, token: token, max: max }
  gapi = service.list_datasets options
  Dataset::List.from_gapi gapi, service, all, max
end

#job(job_id) ⇒ Google::Cloud::Bigquery::Job?

Retrieves an existing job by ID.

Examples:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

job = bigquery.job "my_job"

Parameters:

  • job_id (String)

    The ID of a job.

Returns:

  • (Google::Cloud::Bigquery::Job, nil)

    Returns nil if the job is not found.


# File 'lib/google/cloud/bigquery/project.rb', line 404

def job job_id
  ensure_service!
  gapi = service.get_job job_id
  Job.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end

#jobs(all: nil, token: nil, max: nil, filter: nil) ⇒ Array<Google::Cloud::Bigquery::Job>

Retrieves the list of jobs belonging to the project.

Examples:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

jobs = bigquery.jobs
jobs.each do |job|
  # process job
end

Retrieve only running jobs using the filter optional arg:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

running_jobs = bigquery.jobs filter: "running"
running_jobs.each do |job|
  # process job
end

Retrieve all jobs: (See Job::List#all)

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

jobs = bigquery.jobs
jobs.all do |job|
  # process job
end
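
Jobs owned by all users in the project can be listed with the all option, optionally capped with max (an illustrative sketch using the parameters described below):

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

# Up to 25 of the most recent jobs from every user in the project.
project_jobs = bigquery.jobs all: true, max: 25
project_jobs.each do |job|
  # process job
end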

Parameters:

  • all (Boolean)

    Whether to display jobs owned by all users in the project. The default is false.

  • token (String)

    A previously-returned page token representing part of the larger set of results to view.

  • max (Integer)

    Maximum number of jobs to return.

  • filter (String)

    A filter for job state.

    Acceptable values are:

    • done - Finished jobs
    • pending - Pending jobs
    • running - Running jobs

Returns:

  • (Array<Google::Cloud::Bigquery::Job>)


# File 'lib/google/cloud/bigquery/project.rb', line 464

def jobs all: nil, token: nil, max: nil, filter: nil
  ensure_service!
  options = { all: all, token: token, max: max, filter: filter }
  gapi = service.list_jobs options
  Job::List.from_gapi gapi, service, all, max, filter
end

#project ⇒ Object

The ID of the BigQuery project connected to.

Examples:

require "google/cloud"

gcloud = Google::Cloud.new "my-todo-project",
                           "/path/to/keyfile.json"
bigquery = gcloud.bigquery

bigquery.project #=> "my-todo-project"


# File 'lib/google/cloud/bigquery/project.rb', line 73

def project
  service.project
end

#query(query, max: nil, timeout: 10000, dryrun: nil, cache: true, dataset: nil, project: nil) ⇒ Google::Cloud::Bigquery::QueryData

Queries data using the synchronous method.

Examples:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

data = bigquery.query "SELECT name FROM [my_proj:my_data.my_table]"
data.each do |row|
  puts row["name"]
end

Retrieve all rows: (See QueryData#all)

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

data = bigquery.query "SELECT name FROM [my_proj:my_data.my_table]"
data.all do |row|
  puts row["name"]
end
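
Unqualified table names can be resolved against a default dataset and project (a sketch based on the dataset and project parameters below; the identifiers are placeholders):

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

# "my_table" is resolved against the default dataset and project.
data = bigquery.query "SELECT name FROM my_table",
                      dataset: "my_data",
                      project: "my_proj"
data.each do |row|
  puts row["name"]
end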

Parameters:

  • query (String)

    A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

  • max (Integer)

    The maximum number of rows of data to return per page of results. Setting this flag to a small value such as 1000 and then paging through results might improve reliability when the query result set is large. In addition to this limit, responses are also limited to 10 MB. By default, there is no maximum row count, and only the byte limit applies.

  • timeout (Integer)

    How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with QueryData#complete? set to false. The default value is 10000 milliseconds (10 seconds).

  • dryrun (Boolean)

    If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false.

  • cache (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.

  • dataset (String)

    Specifies the default datasetId and projectId to assume for any unqualified table names in the query. If not set, all table names in the query string must be qualified in the format 'datasetId.tableId'.

  • project (String)

    Specifies the default projectId to assume for any unqualified table names in the query. Only used if dataset option is set.

Returns:

  • (Google::Cloud::Bigquery::QueryData)


# File 'lib/google/cloud/bigquery/project.rb', line 222

def query query, max: nil, timeout: 10000, dryrun: nil, cache: true,
          dataset: nil, project: nil
  ensure_service!
  options = { max: max, timeout: timeout, dryrun: dryrun, cache: cache,
              dataset: dataset, project: project }
  gapi = service.query query, options
  QueryData.from_gapi gapi, service
end

#query_job(query, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, large_results: nil, flatten: nil, dataset: nil) ⇒ Google::Cloud::Bigquery::QueryJob

Queries data using the asynchronous method.

Examples:

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

job = bigquery.query_job "SELECT name FROM " \
                         "[my_proj:my_data.my_table]"

job.wait_until_done!
if !job.failed?
  job.query_results.each do |row|
    puts row["name"]
  end
end
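
Query results can be written to a destination table with the table, create, and write options (an illustrative sketch using the parameters described below; the table name is a placeholder):

require "google/cloud"

gcloud = Google::Cloud.new
bigquery = gcloud.bigquery

dataset = bigquery.dataset "my_dataset"
destination = dataset.table "my_results_table" # placeholder table name

job = bigquery.query_job "SELECT name FROM [my_proj:my_data.my_table]",
                         table: destination,
                         create: "needed",  # create the table if it is missing
                         write: "truncate"  # overwrite any existing data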

Parameters:

  • query (String)

    A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

  • priority (String)

    Specifies a priority for the query. Possible values include INTERACTIVE and BATCH. The default value is INTERACTIVE.

  • cache (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.

  • table (Table)

    The destination table where the query results should be stored. If not present, a new table will be created to store the results.

  • create (String)

    Specifies whether the job is allowed to create new tables.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.

  • write (String)

    Specifies the action that occurs if the destination table already exists.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - A 'duplicate' error is returned in the job result if the table exists and contains data.

  • large_results (Boolean)

    If true, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires table parameter to be set.

  • flatten (Boolean)

    Flattens all nested and repeated fields in the query results. The default value is true. large_results parameter must be true if this is set to false.

  • dataset (Dataset, String)

    Specifies the default dataset to use for unqualified table names in the query.

Returns:

  • (Google::Cloud::Bigquery::QueryJob)


# File 'lib/google/cloud/bigquery/project.rb', line 149

def query_job query, priority: "INTERACTIVE", cache: true, table: nil,
              create: nil, write: nil, large_results: nil, flatten: nil,
              dataset: nil
  ensure_service!
  options = { priority: priority, cache: cache, table: table,
              create: create, write: write,
              large_results: large_results, flatten: flatten,
              dataset: dataset }
  gapi = service.query_job query, options
  Job.from_gapi gapi, service
end