Google Cloud BigQuery Connector
BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time.
Connections
Google Cloud BigQuery OAuth2
Authenticate requests to Google Cloud BigQuery using OAuth2.
This connection uses OAuth 2.0, a common authentication mechanism for integrations. Read about how OAuth 2.0 works here.
Input | Comments | Default |
---|---|---|
Scopes | A space-delimited list of OAuth scopes. See https://developers.google.com/identity/protocols/oauth2/scopes#bigquery for the available BigQuery scopes. | https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigquery.insertdata https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/cloud-platform.read-only https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/devstorage.read_only https://www.googleapis.com/auth/devstorage.read_write |
Client ID | The Google BigQuery app's Client Identifier. | |
Client Secret | The Google BigQuery app's Client Secret. |
Google Cloud BigQuery Private Key
Authenticate requests to Google Cloud BigQuery using values obtained from the Google Cloud Platform.
Input | Comments | Default |
---|---|---|
Client Email | The email address of the client you would like to connect. | |
Private Key | The private key of the client you would like to connect. |
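The Private Key connection maps directly onto a Google Cloud service account: the Client Email and Private Key inputs correspond to the client_email and private_key fields of a service account key. A minimal sketch of the equivalent authentication in Python, assuming the google-auth and google-cloud-bigquery libraries are available (the email, key, and project values are placeholders, not real credentials):

```python
from google.cloud import bigquery
from google.oauth2 import service_account

# Build credentials from the same two values the connection asks for.
# All values below are placeholders.
credentials = service_account.Credentials.from_service_account_info(
    {
        "type": "service_account",
        "client_email": "my-sa@my-project.iam.gserviceaccount.com",
        "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
        "token_uri": "https://oauth2.googleapis.com/token",
    },
    scopes=["https://www.googleapis.com/auth/bigquery"],
)

client = bigquery.Client(project="my-project", credentials=credentials)
print(client.project)  # confirm the client is bound to the expected project
```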
Triggers
PubSub Notification
PubSub Notification Trigger Settings
Actions
Cancel Job
Requests that a job be cancelled.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Job ID | Job ID of the requested job. | |
Location | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. |
Create Dataset
Creates a new empty dataset.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset Reference | A reference that identifies the dataset. | |
Kind | Output only. The resource type. | |
ETag | Output only. A hash of the resource. | |
ID | Output only. The fully-qualified unique name of the dataset in the format projectId:datasetId. The dataset name without the project name is given in the datasetId field. When creating a new dataset, leave this field blank, and instead specify the datasetId field. | |
Self Link | Output only. A URL that can be used to access the resource again. You can use this URL in Get or Update requests to the resource. | |
Friendly Name | Optional. A descriptive name for the dataset. | |
Description | Optional. A user-friendly description of the dataset. | |
Default Table Expiration (ms) | Optional. The default lifetime of all tables in the dataset, in milliseconds. The minimum lifetime value is 3600000 milliseconds (one hour). To clear an existing default expiration with a PATCH request, set to 0. Once this property is set, all newly-created tables in the dataset will have an expirationTime property set to the creation time plus the value in this property, and changing the value will only affect new tables, not existing ones. When the expirationTime for a given table is reached, that table will be deleted automatically. If a table's expirationTime is modified or removed before the table expires, or if you provide an explicit expirationTime when creating a table, that value takes precedence over the default expiration time indicated by this property. | |
Default Partition Expiration (ms) | This default partition expiration, expressed in milliseconds. When new time-partitioned tables are created in a dataset where this property is set, the table will inherit this value, propagated as the TimePartitioning.expirationMs property on the new table. If you set TimePartitioning.expirationMs explicitly when creating a table, the defaultPartitionExpirationMs of the containing dataset is ignored. When creating a partitioned table, if defaultPartitionExpirationMs is set, the defaultTableExpirationMs value is ignored and the table will not inherit a table expiration deadline. | |
Labels | The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information. | |
Access | Optional. An array of objects that define dataset access for one or more entities. You can set this property when inserting or updating a dataset in order to control who is allowed to access the data. If unspecified at dataset creation time, BigQuery adds default dataset access for the following entities: access.specialGroup: projectReaders; access.role: READER; access.specialGroup: projectWriters; access.role: WRITER; access.specialGroup: projectOwners; access.role: OWNER; access.userByEmail: [dataset creator email]; access.role: OWNER. | |
Creation Time | Output only. The time when this dataset was created, in milliseconds since the epoch. | |
Last Modified Time | Output only. The date when this dataset was last modified, in milliseconds since the epoch. | |
Location | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. | |
Default Encryption Configuration | The default encryption key for all tables in the dataset. Once this property is set, all newly-created partitioned tables in the dataset will have encryption key set to this value, unless table creation request (or query) overrides the key. | |
Satisfies PZS | Output only. Reserved for future use. | false |
Is Case Insensitive | Optional. TRUE if the dataset and its table names are case-insensitive, otherwise FALSE. By default, this is FALSE, which means the dataset and its table names are case-sensitive. This field does not affect routine references. | false |
Default Collation | Optional. Defines the default collation specification of future tables created in the dataset. If a table is created in this dataset without table-level default collation, then the table inherits the dataset default collation, which is applied to the string fields that do not have explicit collation specified. A change to this field affects only tables created afterwards, and does not alter the existing tables. The following values are supported: 'und:ci': undetermined locale, case insensitive. '': empty string. Default to case-sensitive behavior. | |
Default Rounding Mode | Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified. | |
Max Time Travel Hours | Optional. Defines the time travel window in hours. The value can be from 48 to 168 hours (2 to 7 days). The default value is 168 hours if this is not set. | |
Tags | Output only. Tags for the Dataset. | |
Storage Billing Model | Optional. Updates storageBillingModel for the dataset. |
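For reference, a sketch of the same create operation using the google-cloud-bigquery Python client (the connector calls the datasets.insert REST endpoint directly; the project and dataset names below are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Dataset Reference (projectId + datasetId) plus a few of the optional inputs above.
dataset = bigquery.Dataset("my-project.my_new_dataset")
dataset.location = "US"
dataset.friendly_name = "My new dataset"
dataset.description = "Datasets created by the integration"
dataset.default_table_expiration_ms = 3600000  # minimum allowed: one hour
dataset.labels = {"team": "data-eng"}

dataset = client.create_dataset(dataset, exists_ok=True)
print(f"Created {dataset.full_dataset_id}")
```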
Create Job
Starts a new asynchronous job.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Configuration | Required. Describes the job configuration. | |
Kind | Output only. The resource type. | |
ETag | Output only. A hash of the resource. | |
ID | Output only. Opaque ID field of the job. | |
Self Link | Output only. A URL that can be used to access the resource again. You can use this URL in Get or Update requests to the resource. | |
User Email | Output only. Email address of the user who ran the job. | |
Job Reference | Optional. Reference describing the unique-per-user name of the job. | |
Statistics | Output only. Information about the job, including starting time and ending time of the job. | |
Status | Output only. The status of this job. Examine this value when polling an asynchronous job to see if the job is complete. |
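The Configuration input determines which kind of job is started (query, load, copy, or extract). A sketch of starting an asynchronous load job with the Python client, under the assumption that the Cloud Storage URI, destination table, and CSV settings below are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# jobs.insert with a load configuration: ingest a CSV file from Cloud Storage.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/events.csv",
    "my-project.my_dataset.events",
    job_config=job_config,
)

print(load_job.job_id, load_job.state)  # job starts asynchronously (PENDING/RUNNING)
load_job.result()                       # block until the job finishes
```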
Create Routine
Creates a new routine in the dataset.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the requested dataset | |
Project ID | Project ID of the datasets to be listed | |
Routine Reference | Reference describing the ID of this routine. | |
Routine Type | The type of routine. One of ROUTINE_TYPE_UNSPECIFIED / SCALAR_FUNCTION / PROCEDURE / TABLE_VALUED_FUNCTION | |
Definition Body | Required. The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, '\n', y)), the definitionBody is concat(x, '\n', y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return '\n';\n', the definitionBody is return '\n';\n (note that both \n are replaced with linebreaks). | |
ETag | Output only. A hash of the resource. | |
Arguments | Input/output argument of a function or a stored procedure. | |
Return Table Type | Optional. Can be set only if routineType = 'TABLE_VALUED_FUNCTION'. If absent, the return table type is inferred from definitionBody at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time. | |
Return Type | Optional if language = 'SQL'; required otherwise. Cannot be set if routineType = 'TABLE_VALUED_FUNCTION'. If absent, the return type is inferred from definitionBody at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. | |
Creation Time | Output only. The time when this routine was created, in milliseconds since the epoch. | |
Last Modified Time | Output only. The date when this routine was last modified, in milliseconds since the epoch. | |
Language | Optional. Defaults to 'SQL' if remoteFunctionOptions field is absent, not set otherwise. One of LANGUAGE_UNSPECIFIED / SQL / JAVASCRIPT / PYTHON / JAVA / SCALA | |
Imported Libraries | Optional. If language = 'JAVASCRIPT', this field stores the path of the imported JAVASCRIPT libraries. | ["000xxx"] |
Description | Optional. The description of the routine, if defined. | |
Determinism Level | Optional. The determinism level of the JavaScript UDF, if defined. One of DETERMINISM_LEVEL_UNSPECIFIED / DETERMINISTIC / NOT_DETERMINISTIC | |
Remote Function Options | Optional. Remote function specific options. | |
Spark Options | Optional. Spark specific options. |
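The Definition Body input is easiest to understand next to the equivalent DDL: the expression inside the AS parentheses is exactly what ends up in definitionBody. A sketch that creates the JoinLines function from the description above via a query job with the Python client (project and dataset names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# CREATE FUNCTION ... AS (<expression>): the expression becomes the definitionBody.
ddl = r"""
CREATE OR REPLACE FUNCTION `my-project.my_dataset`.JoinLines(x STRING, y STRING)
AS (CONCAT(x, '\n', y))
"""
client.query(ddl).result()

# Fetch the routine back and inspect its type and body.
routine = client.get_routine("my-project.my_dataset.JoinLines")
print(routine.type_, routine.body)
```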
Create Table
Creates a new, empty table in the dataset.
Input | Comments | Default |
---|---|---|
Connection | ||
Kind | Output only. The resource type. | |
Table Reference | Reference describing the ID of this table. | |
Friendly Name | Optional. A descriptive name for this table. | |
Description | Optional. A user-friendly description of this table. | |
Labels | The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information. | |
Schema | Optional. Describes the schema of this table. | |
Time Partitioning | If specified, configures time-based partitioning for this table. | |
Range Partitioning | If specified, configures range partitioning for this table. | |
Clustering | Clustering specification for the table. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered. | |
Require Partition Filter | Optional. If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. | false |
Expiration Time | Optional. The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables. | |
View | Optional. The view definition. | |
Materialized View | Optional. The materialized view definition. | |
External Data Configuration | Optional. Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. | |
Encryption Configuration | Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the model data while stored in BigQuery storage. This field can be used with models.patch to update encryption key for an already encrypted model. | |
Default Collation | Optional. Defines the default collation specification of future tables created in the dataset. If a table is created in this dataset without table-level default collation, then the table inherits the dataset default collation, which is applied to the string fields that do not have explicit collation specified. A change to this field affects only tables created afterwards, and does not alter the existing tables. The following values are supported: 'und:ci': undetermined locale, case insensitive. '': empty string. Default to case-sensitive behavior. | |
Default Rounding Mode | Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified. | |
Max Staleness | Optional. The maximum staleness of data that could be returned when the table (or stale MV) is queried. Staleness encoded as a string encoding of sql IntervalValue type. | |
Dataset ID | Dataset ID of the table to update. | |
Project ID | Project ID of the table to update. |
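A sketch of the same call with the Python client, showing how the Schema, Time Partitioning, and Require Partition Filter inputs fit together (table and field names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

schema = [
    bigquery.SchemaField("name", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("created_at", "TIMESTAMP"),
]
table = bigquery.Table("my-project.my_dataset.events", schema=schema)

# Partition by day on created_at and require a partition filter in queries.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="created_at",
)
table.require_partition_filter = True

table = client.create_table(table, exists_ok=True)
print(f"Created {table.full_table_id}")
```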
Delete Dataset
Deletes the dataset specified by the datasetId value. Before you can delete a dataset, you must delete all its tables, either manually or by specifying deleteContents. Immediately after deletion, you can create another dataset with the same name.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset ID | Dataset ID of the requested dataset |
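Because a dataset must be empty before it can be deleted, the equivalent client call exposes a deleteContents-style flag. A short sketch with the Python client (names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# delete_contents=True removes the dataset's tables first, mirroring deleteContents.
client.delete_dataset(
    "my-project.my_old_dataset",
    delete_contents=True,
    not_found_ok=True,  # don't raise if the dataset is already gone
)
```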
Delete Job
Requests the deletion of the metadata of a job.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Job ID | Job ID of the requested job. | |
Location | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. |
Delete Model
Deletes the model specified by model ID from the dataset.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset ID | Dataset ID of the requested dataset | |
Model ID | Model ID of the requested model. |
Delete Routine
Deletes the routine specified by routine ID from the dataset.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the requested dataset | |
Project ID | Project ID of the datasets to be listed | |
Routine ID | Routine ID of the requested routine. |
Delete Table
Deletes the table specified by table ID from the dataset.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the table to delete. | |
Project ID | Project ID of the table to delete. | |
Table ID | Table ID of the table to delete. |
Get Dataset
Returns the dataset specified by datasetID.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset ID | Dataset ID of the requested dataset |
Get Job
Returns information about a specific job.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Job ID | Job ID of the requested job. | |
Location | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. |
Get Model
Gets the specified model resource by model ID.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset ID | Dataset ID of the requested dataset | |
Model ID | Model ID of the requested model. |
Get Policy
Gets the access control policy for a resource.
Input | Comments | Default |
---|---|---|
Connection | ||
Table ID | The resource for which the policy is being requested. See Resource names for the appropriate value for this field. | |
Options | OPTIONAL: A GetPolicyOptions object for specifying options to tables.getIamPolicy. |
Get Query Job Results
Receives the results of a query job.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Job ID | Job ID of the requested job. | |
Start Index | Zero-based index of the starting row. | |
Page Token | Page token, returned by a previous call, to request the next page of results | |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. | |
Timeout (ms) | Optional. Optional: Specifies the maximum amount of time, in milliseconds, that the client is willing to wait for the query to complete. By default, this limit is 10 seconds (10,000 milliseconds). If the query is complete, the jobComplete field in the response is true. If the query has not yet completed, jobComplete is false. You can request a longer timeout period in the timeoutMs field. However, the call is not guaranteed to wait for the specified timeout; it typically returns after around 200 seconds (200,000 milliseconds), even if the query is not complete. If jobComplete is false, you can continue to wait for the query to complete by calling the getQueryResults method until the jobComplete field in the getQueryResults response is true. | |
Location | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. |
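The jobComplete/timeout behavior described above is usually handled by polling: request results with a timeout and keep calling getQueryResults until the job reports completion. A sketch of the same pattern with the Python client (the job ID and location are placeholders, and the job is assumed to be a query job):

```python
import time

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

job = client.get_job("my-job-id", location="US")

# Poll until the job is complete, then page through the results.
while not job.done():
    time.sleep(1)

for row in job.result(page_size=500):  # pages are fetched via page tokens internally
    print(dict(row))
```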
Get Routine
Gets the specified routine resource by routine ID.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the requested dataset | |
Project ID | Project ID of the datasets to be listed | |
Read Mask | If set, only the Routine fields in the field mask are returned in the response. If unset, all Routine fields are returned. This is a comma-separated list of fully qualified names of fields. Example: 'user.displayName,photo'. | |
Routine ID | Routine ID of the requested routine. |
Get Service Account
Receives the service account for a project used for interactions with Google Cloud KMS
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed |
Get Table
Gets the specified table resource by table ID.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the requested table. | |
Project ID | Project ID of the requested table. | |
Table ID | Table ID of the requested table. | |
Selected Fields | List of table schema fields to return (comma-separated). If unspecified, all fields are returned. A fieldMask cannot be used here because the fields will automatically be converted from camelCase to snake_case and the conversion will fail if there are underscores. Since these are fields in BigQuery table schemas, underscores are allowed. | |
View | Optional. Specifies the view that determines which table information is returned. By default, basic table information and storage statistics (STORAGE_STATS) are returned. One of TABLE_METADATA_VIEW_UNSPECIFIED / BASIC / STORAGE_STATS / FULL |
List Datasets
Lists all datasets in the specified project to which the user has been granted the READER dataset role.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Page Token | Page token, returned by a previous call, to request the next page of results | |
All | Whether to list all datasets, including hidden ones | false |
Filter | An expression for filtering the results of the request by label. The syntax is 'labels. | |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. |
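Max Results and Page Token implement standard page-token pagination. A sketch of walking every page with the Python client (the project name and label filter are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# The iterator fetches subsequent pages using the returned page tokens.
iterator = client.list_datasets(
    project="my-project",
    include_all=False,              # "All" input: include hidden datasets or not
    filter="labels.team:data-eng",  # label filter, as in the Filter input
    page_size=50,
)

for page in iterator.pages:
    for dataset in page:
        print(dataset.dataset_id)
    print("next page token:", iterator.next_page_token)
```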
List Jobs
Lists all jobs that you started in the specified project.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Page Token | Page token, returned by a previous call, to request the next page of results | |
All Users | Whether to display jobs owned by all users in the project. Default False. | false |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. | |
Min Creation Time | Min value for job creation time, in milliseconds since the POSIX epoch. If set, only jobs created after or at this timestamp are returned. | |
Max Creation Time | Max value for job creation time, in milliseconds since the POSIX epoch. If set, only jobs created before or at this timestamp are returned. | |
Projection | Restrict information returned to a set of selected fields | |
State Filter | Filter for job state. Valid values of this enum field are: DONE, PENDING, RUNNING | ["000xxx"] |
Parent Job ID | If set, show only child jobs of the specified parent. Otherwise, show all top-level jobs. |
List Models
Lists all models in the specified dataset. Requires the READER dataset role. After retrieving the list of models, you can get information about a particular model by calling the models.get method.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset ID | Dataset ID of the requested dataset | |
Page Token | Page token, returned by a previous call, to request the next page of results | |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. |
List Projects
Lists projects to which the user has been granted any project role.
Input | Comments | Default |
---|---|---|
Connection | ||
Page Token | Page token, returned by a previous call, to request the next page of results | |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. |
List Routines
Lists all routines in the specified dataset.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset ID | Dataset ID of the requested dataset | |
Filter | An expression for filtering the results of the request by label. The syntax is 'labels. | |
Page Token | Page token, returned by a previous call, to request the next page of results | |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. | |
Read Mask | If set, only the Routine fields in the field mask are returned in the response. If unset, all Routine fields are returned. This is a comma-separated list of fully qualified names of fields. Example: 'user.displayName,photo'. |
List Table Data
Lists the content of a table in rows.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the requested dataset | |
Project ID | Project ID of the datasets to be listed | |
Table ID | Table ID of the requested table | |
Start Index | Zero-based index of the starting row. | |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. | |
Page Token | Page token, returned by a previous call, to request the next page of results | |
Selected Fields | Subset of fields to return, supports select into sub fields. Example: selectedFields = 'a,e.d.f'; |
List Tables
Lists all tables in the specified dataset.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the tables to list. | |
Project ID | Project ID of the tables to list. | |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. | |
Page Token | Page token, returned by a previous call, to request the next page of results |
Patch Table
Patch information in an existing table.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the table to patch. | |
Project ID | Project ID of the table to patch. | |
Table ID | Table ID of the table to patch. | |
Kind | Output only. The resource type. | |
Table Reference | Reference describing the ID of this table. | |
Friendly Name | Optional. A descriptive name for this table. | |
Description | Optional. A user-friendly description of this table. | |
Labels | The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information. | |
Schema | Optional. Describes the schema of this table. | |
Time Partitioning | If specified, configures time-based partitioning for this table. | |
Range Partitioning | If specified, configures range partitioning for this table. | |
Clustering | Clustering specification for the table. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered. | |
Require Partition Filter | Optional. If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. | false |
Expiration Time | Optional. The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables. | |
View | Optional. The view definition. | |
Materialized View | Optional. The materialized view definition. | |
External Data Configuration | Optional. Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. | |
Encryption Configuration | Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the model data while stored in BigQuery storage. This field can be used with models.patch to update encryption key for an already encrypted model. | |
Default Collation | Optional. Defines the default collation specification of future tables created in the dataset. If a table is created in this dataset without table-level default collation, then the table inherits the dataset default collation, which is applied to the string fields that do not have explicit collation specified. A change to this field affects only tables created afterwards, and does not alter the existing tables. The following values are supported: 'und:ci': undetermined locale, case insensitive. '': empty string. Default to case-sensitive behavior. | |
Default Rounding Mode | Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified. | |
Max Staleness | Optional. The maximum staleness of data that could be returned when the table (or stale MV) is queried. Staleness encoded as a string encoding of sql IntervalValue type. |
Query Job
Runs a BigQuery SQL query synchronously and returns query results if the query completes within a specified timeout.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Kind | Output only. The resource type. | |
Query | Required. A query string to execute, using Google Standard SQL or legacy SQL syntax. Example: 'SELECT COUNT(f1) FROM myProjectId.myDatasetId.myTableId'. | |
Max Results | The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection. | |
Default Dataset | Optional. Specifies the default datasetId and projectId to assume for any unqualified table names in the query. If not set, all table names in the query string must be qualified in the format 'datasetId.tableId'. | |
Timeout (ms) | Optional. Optional: Specifies the maximum amount of time, in milliseconds, that the client is willing to wait for the query to complete. By default, this limit is 10 seconds (10,000 milliseconds). If the query is complete, the jobComplete field in the response is true. If the query has not yet completed, jobComplete is false. You can request a longer timeout period in the timeoutMs field. However, the call is not guaranteed to wait for the specified timeout; it typically returns after around 200 seconds (200,000 milliseconds), even if the query is not complete. If jobComplete is false, you can continue to wait for the query to complete by calling the getQueryResults method until the jobComplete field in the getQueryResults response is true. | |
Dry Run | Optional. If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false. | false |
Use Query Cache | Optional. Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. | true |
Use Legacy SQL | Specifies whether to use BigQuery's legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery's GoogleSQL: https://cloud.google.com/bigquery/sql-reference/ When useLegacySql is set to false, the value of flattenResults is ignored; query will be run as if flattenResults is false. | true |
Parameter Mode | GoogleSQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query. | |
Query Parameters | Optional. An array of query parameters for a query. Reference to the Google docs for this input. https://cloud.google.com/bigquery/docs/reference/rest/v2/QueryParameter | |
Location | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. | |
Connection Properties | Optional. Connection properties which can modify the query behavior. | |
Labels | The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information. | |
Maximum Bytes Billed | Optional. Limits the bytes billed for this query. Queries with bytes billed above this limit will fail (without incurring a charge). If unspecified, the project default is used. | |
Request ID | Optional. A unique user provided identifier to ensure idempotent behavior for queries. Note that this is different from the jobId. It has the following properties: It is case-sensitive, limited to up to 36 ASCII characters. A UUID is recommended. Read only queries can ignore this token since they are nullipotent by definition. For the purposes of idempotency ensured by the requestId, a request is considered duplicate of another only if they have the same requestId and are actually duplicates. When determining whether a request is a duplicate of another request, all parameters in the request that may affect the result are considered. For example, query, connectionProperties, queryParameters, useLegacySql are parameters that affect the result and are considered when determining whether a request is a duplicate, but properties like timeoutMs don't affect the result and are thus not considered. Dry run query requests are never considered duplicate of another request. When a duplicate mutating query request is detected, it returns: a. the results of the mutation if it completes successfully within the timeout. b. the running operation if it is still in progress at the end of the timeout. Its lifetime is limited to 15 minutes. In other words, if two requests are sent with the same requestId, but more than 15 minutes apart, idempotency is not guaranteed. | |
Create Session | Optional. If true, creates a new session using a randomly generated sessionId. If false, runs query with an existing sessionId passed in ConnectionProperty, otherwise runs query in non-session mode. The session location will be set to QueryRequest.location if it is present, otherwise it's set to the default location based on existing routing logic. | false |
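A sketch of a synchronous, parameterized query with the Python client that exercises several of the inputs above (Use Legacy SQL off, a named parameter, a bytes-billed cap). The project name is a placeholder; the queried table is a BigQuery public dataset:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

job_config = bigquery.QueryJobConfig(
    use_legacy_sql=False,        # GoogleSQL, required for named parameters
    use_query_cache=True,
    maximum_bytes_billed=10**9,  # fail queries that would bill more than ~1 GB
    query_parameters=[
        bigquery.ScalarQueryParameter("min_count", "INT64", 250),
    ],
)

sql = """
SELECT word, word_count
FROM `bigquery-public-data.samples.shakespeare`
WHERE word_count >= @min_count
ORDER BY word_count DESC
"""

rows = client.query(sql, job_config=job_config, location="US").result(timeout=30)
for row in rows:
    print(row.word, row.word_count)
```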
Raw Request
Send raw HTTP request to Google Cloud BigQuery
Input | Comments | Default |
---|---|---|
Connection | ||
API Version | The API version to use. This is used to construct the base URL for the request. | v2 |
URL | Input the path only (/projects/{projectId}/jobs), The base URL is already included (https://bigquery.googleapis.com/bigquery/{version}). For example, to connect to https://bigquery.googleapis.com/bigquery/v2/projects/{projectId}/jobs, only /projects/{projectId}/jobs is entered in this field. | |
Method | The HTTP method to use. | |
Data | The HTTP body payload to send to the URL. | |
Form Data | The Form Data to be sent as a multipart form upload. | |
File Data | File Data to be sent as a multipart form upload. | |
File Data File Names | File names to apply to the file data inputs. Keys must match the file data keys above. | |
Query Parameter | A list of query parameters to send with the request. This is the portion at the end of the URL similar to ?key1=value1&key2=value2. | |
Header | A list of headers to send with the request. | |
Response Type | The type of data you expect in the response. You can request json, text, or binary data. | json |
Timeout | The maximum time that a client will await a response to its request | |
Debug Request | Enabling this flag will log out the current request. | false |
Retry Delay (ms) | The delay in milliseconds between retries. This is used when 'Use Exponential Backoff' is disabled. | 0 |
Retry On All Errors | If true, retries on all erroneous responses regardless of type. This is helpful when retrying after HTTP 429 or other 3xx or 4xx errors. Otherwise, only retries on HTTP 5xx and network errors. | false |
Max Retry Count | The maximum number of retries to attempt. Specify 0 for no retries. | 0 |
Use Exponential Backoff | Specifies whether to use a pre-defined exponential backoff strategy for retries. When enabled, 'Retry Delay (ms)' is ignored. | false |
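A sketch of what the Raw Request action does: append the URL input to the versioned base URL and send an authenticated HTTP request. This assumes the google-auth library for the bearer token; the path and query parameters below are placeholders:

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/bigquery"]
)
project_id = project_id or "my-project"  # fall back to a placeholder project
session = AuthorizedSession(credentials)

BASE = "https://bigquery.googleapis.com/bigquery/v2"  # API Version input -> {version}

# URL input: path only, e.g. /projects/{projectId}/jobs
response = session.get(
    f"{BASE}/projects/{project_id}/jobs",
    params={"maxResults": 10},  # Query Parameter input
    timeout=30,
)
response.raise_for_status()
print(response.json())
```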
Set Policy
Sets the access control policy on the specified resource.
Input | Comments | Default |
---|---|---|
Connection | ||
Table ID | The resource for which the policy is being requested. See Resource names for the appropriate value for this field. | |
Policy | The complete policy to be applied to the resource. The size of the policy is limited to a few 10s of KB. An empty policy is a valid policy but certain Google Cloud services (such as Projects) might reject them. | |
Update Mask | OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only the fields in the mask will be modified. If no mask is provided, the following default mask is used: paths: 'bindings, etag'. This is a comma-separated list of fully qualified names of fields. Example: 'user.displayName,photo'. |
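Policies are usually modified read-modify-write: fetch the current policy, adjust its bindings, then write it back. A sketch with the Python client (the table and member are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
table_ref = bigquery.TableReference.from_string("my-project.my_dataset.events")

# Read-modify-write: fetch the current policy, add a binding, write it back.
policy = client.get_iam_policy(table_ref)
policy.bindings.append(
    {"role": "roles/bigquery.dataViewer", "members": {"user:analyst@example.com"}}
)
client.set_iam_policy(table_ref, policy)
```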
Table Data Insert All
Streams data into BigQuery one record at a time without needing to run a load job.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the requested dataset | |
Project ID | Project ID of the datasets to be listed | |
Table ID | Table ID of the requested table | |
Kind | Output only. The resource type. | |
Skip Invalid Rows | Optional. Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist. | false |
Ignore Unknown Values | Optional. Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors. | false |
Template Suffix | Optional. If specified, treats the destination table as a base template, and inserts the rows into an instance table named '{destination}{templateSuffix}'. BigQuery will manage creation of the instance table, using the schema of the base template table. See https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables for considerations when working with template tables. | |
Rows | The rows to insert into the table. |
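A sketch of streaming rows with the Python client; the skip/ignore flags map onto the inputs above, and the table and rows are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

rows = [
    {"name": "alice", "created_at": "2024-01-01T00:00:00Z"},
    {"name": "bob", "created_at": "2024-01-02T00:00:00Z"},
]

# insert_rows_json wraps tabledata.insertAll; it returns per-row errors, not an exception.
errors = client.insert_rows_json(
    "my-project.my_dataset.events",
    rows,
    skip_invalid_rows=True,
    ignore_unknown_values=True,
)
if errors:
    print("some rows failed:", errors)
```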
Update Dataset
Updates information in an existing dataset. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset resource.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset ID | Dataset ID of the requested dataset | |
Dataset Reference | A reference that identifies the dataset. | |
Kind | Output only. The resource type. | |
ETag | Output only. A hash of the resource. | |
ID | Output only. The fully-qualified unique name of the dataset in the format projectId:datasetId. The dataset name without the project name is given in the datasetId field. When creating a new dataset, leave this field blank, and instead specify the datasetId field. | |
Self Link | Output only. A URL that can be used to access the resource again. You can use this URL in Get or Update requests to the resource. | |
Friendly Name | Optional. A descriptive name for the dataset. | |
Description | Optional. A user-friendly description of the dataset. | |
Default Table Expiration (ms) | Optional. The default lifetime of all tables in the dataset, in milliseconds. The minimum lifetime value is 3600000 milliseconds (one hour). To clear an existing default expiration with a PATCH request, set to 0. Once this property is set, all newly-created tables in the dataset will have an expirationTime property set to the creation time plus the value in this property, and changing the value will only affect new tables, not existing ones. When the expirationTime for a given table is reached, that table will be deleted automatically. If a table's expirationTime is modified or removed before the table expires, or if you provide an explicit expirationTime when creating a table, that value takes precedence over the default expiration time indicated by this property. | |
Default Partition Expiration (ms) | This default partition expiration, expressed in milliseconds. When new time-partitioned tables are created in a dataset where this property is set, the table will inherit this value, propagated as the TimePartitioning.expirationMs property on the new table. If you set TimePartitioning.expirationMs explicitly when creating a table, the defaultPartitionExpirationMs of the containing dataset is ignored. When creating a partitioned table, if defaultPartitionExpirationMs is set, the defaultTableExpirationMs value is ignored and the table will not inherit a table expiration deadline. | |
Labels | The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information. | |
Access | Optional. An array of objects that define dataset access for one or more entities. You can set this property when inserting or updating a dataset in order to control who is allowed to access the data. If unspecified at dataset creation time, BigQuery adds default dataset access for the following entities: access.specialGroup: projectReaders; access.role: READER; access.specialGroup: projectWriters; access.role: WRITER; access.specialGroup: projectOwners; access.role: OWNER; access.userByEmail: [dataset creator email]; access.role: OWNER. | |
Creation Time | Output only. The time when this dataset was created, in milliseconds since the epoch. | |
Last Modified Time | Output only. The date when this dataset was last modified, in milliseconds since the epoch. | |
Location | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. | |
Default Encryption Configuration | The default encryption key for all tables in the dataset. Once this property is set, all newly-created partitioned tables in the dataset will have encryption key set to this value, unless table creation request (or query) overrides the key. | |
Satisfies PZS | Output only. Reserved for future use. | false |
Is Case Insensitive | Optional. TRUE if the dataset and its table names are case-insensitive, otherwise FALSE. By default, this is FALSE, which means the dataset and its table names are case-sensitive. This field does not affect routine references. | false |
Default Collation | Optional. Defines the default collation specification of future tables created in the dataset. If a table is created in this dataset without table-level default collation, then the table inherits the dataset default collation, which is applied to the string fields that do not have explicit collation specified. A change to this field affects only tables created afterwards, and does not alter the existing tables. The following values are supported: 'und:ci': undetermined locale, case insensitive. '': empty string. Default to case-sensitive behavior. | |
Default Rounding Mode | Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified. | |
Max Time Travel Hours | Optional. Defines the time travel window in hours. The value can be from 48 to 168 hours (2 to 7 days). The default value is 168 hours if this is not set. | |
Tags | Output only. Tags for the Dataset. | |
Storage Billing Model | Optional. Updates storageBillingModel for the dataset. |
Update Model
Patch specific fields in the specified model.
Input | Comments | Default |
---|---|---|
Connection | ||
Project ID | Project ID of the datasets to be listed | |
Dataset ID | Dataset ID of the requested dataset | |
Model ID | Model ID of the requested model. | |
Model Reference | Unique identifier for this model. | |
ETag | Output only. A hash of the resource. | |
Creation Time | Output only. The time when this model was created, in milliseconds since the epoch. | |
Last Modified Time | Output only. The date when this model was last modified, in milliseconds since the epoch. | |
Description | Optional. A user-friendly description of the model. | |
Friendly Name | Optional. A descriptive name for the model. | |
Labels | The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information. | |
Expiration Time | Optional. The time when this model expires, in milliseconds since the epoch. If not present, the model will persist indefinitely. Expired models will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created models. | |
Location | The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. | |
Encryption Configuration | Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the model data while stored in BigQuery storage. This field can be used with models.patch to update encryption key for an already encrypted model. | |
Model Type | Output only. Type of the model resource. | |
Training Runs | Information for all training runs in increasing order of startTime. | |
Feature Columns | Output only. Input feature columns for the model inference. If the model is trained with TRANSFORM clause, these are the input of the TRANSFORM clause. | |
Label Columns | Output only. Label columns that were used to train this model. The output of the model will have a 'predicted_' prefix to these columns. | |
Hparam Search Spaces | Output only. Trials of a hyperparameter tuning model sorted by trialId. | |
Default Trial ID | Output only. The default trialId to use in TVFs when the trialId is not passed in. For single-objective hyperparameter tuning models, this is the best trial ID. For multi-objective hyperparameter tuning models, this is the smallest trial ID among all Pareto optimal trials. | |
Hparam Trials | Output only. Trials of a hyperparameter tuning model sorted by trialId. | |
Optimal Trial IDs | Output only. For single-objective hyperparameter tuning models, it only contains the best trial. For multi-objective hyperparameter tuning models, it contains all Pareto optimal trials sorted by trialId. | ["000xxx"] |
Update Routine
Updates information in an existing routine.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the requested dataset | |
Project ID | Project ID of the datasets to be listed | |
Routine Reference | Reference describing the ID of this routine. | |
Routine Type | The type of routine. One of ROUTINE_TYPE_UNSPECIFIED / SCALAR_FUNCTION / PROCEDURE / TABLE_VALUED_FUNCTION | |
Definition Body | Required. The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, '\n', y)), the definitionBody is concat(x, '\n', y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return '\n';\n', the definitionBody is return '\n';\n (note that both \n are replaced with linebreaks). | |
ETag | Output only. A hash of the resource. | |
Arguments | Input/output argument of a function or a stored procedure. | |
Return Table Type | Optional. Can be set only if routineType = 'TABLE_VALUED_FUNCTION'. If absent, the return table type is inferred from definitionBody at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time. | |
Return Type | Optional if language = 'SQL'; required otherwise. Cannot be set if routineType = 'TABLE_VALUED_FUNCTION'. If absent, the return type is inferred from definitionBody at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time. | |
Creation Time | Output only. The time when this routine was created, in milliseconds since the epoch. | |
Last Modified Time | Output only. The date when this routine was last modified, in milliseconds since the epoch. | |
Language | Optional. Defaults to 'SQL' if remoteFunctionOptions field is absent, not set otherwise. One of LANGUAGE_UNSPECIFIED / SQL / JAVASCRIPT / PYTHON / JAVA / SCALA | |
Imported Libraries | Optional. If language = 'JAVASCRIPT', this field stores the path of the imported JAVASCRIPT libraries. | ["000xxx"] |
Description | Optional. The description of the routine, if defined. | |
Determinism Level | Optional. The determinism level of the JavaScript UDF, if defined. One of DETERMINISM_LEVEL_UNSPECIFIED / DETERMINISTIC / NOT_DETERMINISTIC | |
Remote Function Options | Optional. Remote function specific options. | |
Spark Options | Optional. Spark specific options. |
Update Table
Updates information in an existing table.
Input | Comments | Default |
---|---|---|
Connection | ||
Dataset ID | Dataset ID of the table to update. | |
Project ID | Project ID of the table to update. | |
Table ID | Table ID of the table to update. | |
Kind | Output only. The resource type. | |
Table Reference | Reference describing the ID of this table. | |
Friendly Name | Optional. A descriptive name for this table. | |
Description | Optional. A user-friendly description of this table. | |
Labels | The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information. | |
Schema | Optional. Describes the schema of this table. | |
Time Partitioning | If specified, configures time-based partitioning for this table. | |
Range Partitioning | If specified, configures range partitioning for this table. | |
Clustering | Clustering specification for the table. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered. | |
Require Partition Filter | Optional. If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. | false |
Expiration Time | Optional. The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables. | |
View | Optional. The view definition. | |
Materialized View | Optional. The materialized view definition. | |
External Data Configuration | Optional. Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. | |
Encryption Configuration | Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the model data while stored in BigQuery storage. This field can be used with models.patch to update encryption key for an already encrypted model. | |
Default Collation | Optional. Defines the default collation specification of future tables created in the dataset. If a table is created in this dataset without table-level default collation, then the table inherits the dataset default collation, which is applied to the string fields that do not have explicit collation specified. A change to this field affects only tables created afterwards, and does not alter the existing tables. The following values are supported: 'und:ci': undetermined locale, case insensitive. '': empty string. Default to case-sensitive behavior. | |
Default Rounding Mode | Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified. | |
Max Staleness | Optional. The maximum staleness of data that could be returned when the table (or stale MV) is queried. Staleness encoded as a string encoding of sql IntervalValue type. |
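A sketch of the update flow with the Python client: fetch the table, change fields, and send the update with an explicit field list. Note that the client's update_table issues a patch-style update of the listed fields, whereas this action's underlying tables.update call replaces the whole resource; the table name and values are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

table = client.get_table("my-project.my_dataset.events")
table.description = "Raw event stream, partitioned by day"
table.labels = {"team": "data-eng"}

# Only the listed fields are overwritten on the server.
table = client.update_table(table, ["description", "labels"])
print(table.etag)
```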