Class: Google::Cloud::Bigquery::Project

Inherits:
Object
Defined in:
lib/google/cloud/bigquery/project.rb,
lib/google/cloud/bigquery/project/list.rb

Overview

Project

Projects are top-level containers in Google Cloud Platform. They store information about billing and authorized users, and they contain BigQuery data. Each project has a friendly name and a unique ID.

Google::Cloud::Bigquery::Project is the main object for interacting with Google BigQuery. Dataset objects are created, accessed, and deleted by Google::Cloud::Bigquery::Project.

See Google::Cloud#bigquery.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

Defined Under Namespace

Classes: List

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(service) ⇒ Project

Creates a new Project instance backed by the given service object.

See Google::Cloud.bigquery



# File 'lib/google/cloud/bigquery/project.rb', line 64

def initialize service
  @service = service
end

Instance Attribute Details

#name ⇒ String? (readonly)

The descriptive name of the project. Present only if the project was retrieved with #projects.

Returns:

  • (String, nil)

    the current value of name



# File 'lib/google/cloud/bigquery/project.rb', line 53

def name
  @name
end

#numeric_id ⇒ Integer? (readonly)

The numeric ID of the project. Present only if the project was retrieved with #projects.

Returns:

  • (Integer, nil)

    the current value of numeric_id



# File 'lib/google/cloud/bigquery/project.rb', line 53

def numeric_id
  @numeric_id
end

Instance Method Details

#create_dataset(dataset_id, name: nil, description: nil, expiration: nil, location: nil) {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset

Creates a new dataset.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.create_dataset "my_dataset"

A name and description can be provided:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.create_dataset "my_dataset",
                                  name: "My Dataset",
                                  description: "This is my Dataset"

Access rules can be provided with the access option:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.create_dataset "my_dataset",
  access: [{"role"=>"WRITER", "userByEmail"=>"writers@example.com"}]

Or, configure access with a block: (See Dataset::Access)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.create_dataset "my_dataset" do |access|
  access.add_writer_user "writers@example.com"
end
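
The expiration and location options can also be set; a brief sketch with illustrative values:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Tables in this dataset expire after one day (value in milliseconds),
# and the data is stored in the EU.
dataset = bigquery.create_dataset "my_dataset",
                                  expiration: 86_400_000,
                                  location: "EU"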

Parameters:

  • dataset_id (String)

    A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

  • name (String)

    A descriptive name for the dataset.

  • description (String)

    A user-friendly description of the dataset.

  • expiration (Integer)

    The default lifetime of all tables in the dataset, in milliseconds. The minimum value is 3600000 milliseconds (one hour).

  • location (String)

    The geographic location where the dataset should reside. Possible values include EU and US. The default value is US.

Yields:

  • (access)

    a block for setting rules

Yield Parameters:

  • access (Google::Cloud::Bigquery::Dataset::Access)

    the object accepting access rules

Returns:

  • (Google::Cloud::Bigquery::Dataset)

# File 'lib/google/cloud/bigquery/project.rb', line 313

def create_dataset dataset_id, name: nil, description: nil,
                   expiration: nil, location: nil
  ensure_service!

  new_ds = Google::Apis::BigqueryV2::Dataset.new(
    dataset_reference: Google::Apis::BigqueryV2::DatasetReference.new(
      project_id: project, dataset_id: dataset_id))

  # Can set location only on creation, no Dataset#location method
  new_ds.update! location: location unless location.nil?

  updater = Dataset::Updater.new(new_ds).tap do |b|
    b.name = name unless name.nil?
    b.description = description unless description.nil?
    b.default_expiration = expiration unless expiration.nil?
  end

  if block_given?
    yield updater
    updater.check_for_mutated_access!
  end

  gapi = service.insert_dataset new_ds
  Dataset.from_gapi gapi, service
end

#dataset(dataset_id) ⇒ Google::Cloud::Bigquery::Dataset?

Retrieves an existing dataset by ID.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"
puts dataset.name
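
Because nil is returned when the dataset is not found, a get-or-create pattern is a natural sketch (the dataset ID here is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Create the dataset only if it does not already exist.
dataset = bigquery.dataset "my_other_dataset"
dataset ||= bigquery.create_dataset "my_other_dataset"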

Parameters:

  • dataset_id (String)

    The ID of a dataset.

Returns:

  • (Google::Cloud::Bigquery::Dataset, nil)

    the dataset, or nil if the dataset does not exist

# File 'lib/google/cloud/bigquery/project.rb', line 252

def dataset dataset_id
  ensure_service!
  gapi = service.get_dataset dataset_id
  Dataset.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end

#datasets(all: nil, token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Dataset>

Retrieves the list of datasets belonging to the project.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

datasets = bigquery.datasets
datasets.each do |dataset|
  puts dataset.name
end

Retrieve hidden datasets with the all optional arg:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

all_datasets = bigquery.datasets all: true

Retrieve all datasets: (See Dataset::List#all)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

datasets = bigquery.datasets
datasets.all do |dataset|
  puts dataset.name
end
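
Pagination can be sketched with the max and token options (this assumes the returned list exposes the next page token via a token accessor):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Fetch the first page of at most 25 datasets, then request the
# next page with the returned token (token accessor assumed).
page = bigquery.datasets max: 25
next_page = bigquery.datasets max: 25, token: page.token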

Parameters:

  • all (Boolean)

    Whether to list all datasets, including hidden ones. The default is false.

  • token (String)

    A previously-returned page token representing part of the larger set of results to view.

  • max (Integer)

    Maximum number of datasets to return.

Returns:

  • (Array<Google::Cloud::Bigquery::Dataset>)

# File 'lib/google/cloud/bigquery/project.rb', line 378

def datasets all: nil, token: nil, max: nil
  ensure_service!
  options = { all: all, token: token, max: max }
  gapi = service.list_datasets options
  Dataset::List.from_gapi gapi, service, all, max
end

#job(job_id) ⇒ Google::Cloud::Bigquery::Job?

Retrieves an existing job by ID.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.job "my_job"
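
Because nil is returned when the job is not found, the result can be checked before use (a sketch; the job ID is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.job "my_job"
if job.nil?
  puts "Job not found"
elsif job.failed?
  puts "Job failed"
end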

Parameters:

  • job_id (String)

    The ID of a job.

Returns:

  • (Google::Cloud::Bigquery::Job, nil)

    the job, or nil if the job does not exist

# File 'lib/google/cloud/bigquery/project.rb', line 400

def job job_id
  ensure_service!
  gapi = service.get_job job_id
  Job.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end

#jobs(all: nil, token: nil, max: nil, filter: nil) ⇒ Array<Google::Cloud::Bigquery::Job>

Retrieves the list of jobs belonging to the project.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

jobs = bigquery.jobs
jobs.each do |job|
  # process job
end

Retrieve only running jobs using the filter optional arg:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

running_jobs = bigquery.jobs filter: "running"
running_jobs.each do |job|
  # process job
end

Retrieve all jobs: (See Job::List#all)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

jobs = bigquery.jobs
jobs.all do |job|
  # process job
end
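
The all and max options can be combined; a sketch with illustrative values:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Include jobs owned by all users in the project,
# returning at most 10 per page.
recent_jobs = bigquery.jobs all: true, max: 10
recent_jobs.each do |job|
  # process job
end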

Parameters:

  • all (Boolean)

    Whether to display jobs owned by all users in the project. The default is false.

  • token (String)

    A previously-returned page token representing part of the larger set of results to view.

  • max (Integer)

    Maximum number of jobs to return.

  • filter (String)

    A filter for job state.

    Acceptable values are:

    • done - Finished jobs
    • pending - Pending jobs
    • running - Running jobs

Returns:

  • (Array<Google::Cloud::Bigquery::Job>)

# File 'lib/google/cloud/bigquery/project.rb', line 457

def jobs all: nil, token: nil, max: nil, filter: nil
  ensure_service!
  options = { all: all, token: token, max: max, filter: filter }
  gapi = service.list_jobs options
  Job::List.from_gapi gapi, service, all, max, filter
end

#project ⇒ Object

The ID of the BigQuery project the client is connected to.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new(
  project: "my-todo-project",
  keyfile: "/path/to/keyfile.json"
)

bigquery.project #=> "my-todo-project"


# File 'lib/google/cloud/bigquery/project.rb', line 81

def project
  service.project
end

#projects(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Project>

Retrieves the list of all projects for which the currently authorized account has been granted any project role. The returned project instances share the same credentials as the project used to retrieve them, but lazily create a new API connection for interactions with the BigQuery service.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

projects = bigquery.projects
projects.each do |project|
  puts project.name
  project.datasets.all.each do |dataset|
    puts dataset.name
  end
end

Retrieve all projects: (See Google::Cloud::Bigquery::Project::List#all)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

projects = bigquery.projects

projects.all do |project|
  puts project.name
  project.datasets.all.each do |dataset|
    puts dataset.name
  end
end
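
The max option limits the page size; a brief sketch with an illustrative value:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Return at most 10 projects per page.
projects = bigquery.projects max: 10
projects.each do |project|
  puts project.name
end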

Parameters:

  • token (String)

    A previously-returned page token representing part of the larger set of results to view.

  • max (Integer)

    Maximum number of projects to return.

Returns:

  • (Array<Google::Cloud::Bigquery::Project>)

# File 'lib/google/cloud/bigquery/project.rb', line 505

def projects token: nil, max: nil
  ensure_service!
  options = { token: token, max: max }
  gapi = service.list_projects options
  Project::List.from_gapi gapi, service, max
end

#query(query, max: nil, timeout: 10000, dryrun: nil, cache: true, dataset: nil, project: nil) ⇒ Google::Cloud::Bigquery::QueryData

Queries data using the synchronous method.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

data = bigquery.query "SELECT name FROM [my_proj:my_data.my_table]"
data.each do |row|
  puts row["name"]
end

Retrieve all rows: (See QueryData#all)

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

data = bigquery.query "SELECT name FROM [my_proj:my_data.my_table]"
data.all do |row|
  puts row["name"]
end
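
The dataset, project, and timeout options can be sketched together (the table and dataset names here are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Unqualified table names resolve against the given default dataset
# and project; wait up to 30 seconds for the query to finish.
data = bigquery.query "SELECT name FROM my_table",
                      dataset: "my_data",
                      project: "my_proj",
                      timeout: 30_000
puts "Query still running" unless data.complete?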

Parameters:

  • query (String)

    A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

  • max (Integer)

    The maximum number of rows of data to return per page of results. Setting this flag to a small value such as 1000 and then paging through results might improve reliability when the query result set is large. In addition to this limit, responses are also limited to 10 MB. By default, there is no maximum row count, and only the byte limit applies.

  • timeout (Integer)

    How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with QueryData#complete? set to false. The default value is 10000 milliseconds (10 seconds).

  • dryrun (Boolean)

    If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false.

  • cache (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.

  • dataset (String)

    Specifies the default datasetId and projectId to assume for any unqualified table names in the query. If not set, all table names in the query string must be qualified in the format 'datasetId.tableId'.

  • project (String)

    Specifies the default projectId to assume for any unqualified table names in the query. Only used if dataset option is set.

Returns:

  • (Google::Cloud::Bigquery::QueryData)

# File 'lib/google/cloud/bigquery/project.rb', line 227

def query query, max: nil, timeout: 10000, dryrun: nil, cache: true,
          dataset: nil, project: nil
  ensure_service!
  options = { max: max, timeout: timeout, dryrun: dryrun, cache: cache,
              dataset: dataset, project: project }
  gapi = service.query query, options
  QueryData.from_gapi gapi, service
end

#query_job(query, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, large_results: nil, flatten: nil, dataset: nil) ⇒ Google::Cloud::Bigquery::QueryJob

Queries data using the asynchronous method.

Examples:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT name FROM " \
                         "[my_proj:my_data.my_table]"

job.wait_until_done!
if !job.failed?
  job.query_results.each do |row|
    puts row["name"]
  end
end
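
A destination table and write disposition can be combined with large_results; a sketch assuming the destination table already exists:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_data"
table = dataset.table "my_results"

# Replace the destination table's contents with the (possibly large)
# query results.
job = bigquery.query_job "SELECT name FROM [my_proj:my_data.my_table]",
                         table: table,
                         write: "truncate",
                         large_results: true
job.wait_until_done!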

Parameters:

  • query (String)

    A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

  • priority (String)

    Specifies a priority for the query. Possible values include INTERACTIVE and BATCH. The default value is INTERACTIVE.

  • cache (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.

  • table (Table)

    The destination table where the query results should be stored. If not present, a new table will be created to store the results.

  • create (String)

    Specifies whether the job is allowed to create new tables.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.

  • write (String)

    Specifies the action that occurs if the destination table already exists.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - A 'duplicate' error is returned in the job result if the table exists and contains data.

  • large_results (Boolean)

    If true, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires table parameter to be set.

  • flatten (Boolean)

    Flattens all nested and repeated fields in the query results. The default value is true. large_results parameter must be true if this is set to false.

  • dataset (Dataset, String)

    Specifies the default dataset to use for unqualified table names in the query.

Returns:

  • (Google::Cloud::Bigquery::QueryJob)

# File 'lib/google/cloud/bigquery/project.rb', line 156

def query_job query, priority: "INTERACTIVE", cache: true, table: nil,
              create: nil, write: nil, large_results: nil, flatten: nil,
              dataset: nil
  ensure_service!
  options = { priority: priority, cache: cache, table: table,
              create: create, write: write,
              large_results: large_results, flatten: flatten,
              dataset: dataset }
  gapi = service.query_job query, options
  Job.from_gapi gapi, service
end