class Google::Apis::BigqueryV2::JobConfigurationLoad
Attributes
- Optional. Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats. Corresponds to the JSON property `allowJaggedRows`. @return [Boolean]
- Optional. Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false. Corresponds to the JSON property `allowQuotedNewlines`. @return [Boolean]
- Optional. Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion. Corresponds to the JSON property `createDisposition`. @return [String]
- Required. The destination table to load the data into. Corresponds to the JSON property `destinationTable`. @return [Google::Apis::BigqueryV2::TableReference]
- Optional. The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties. Corresponds to the JSON property `encoding`. @return [String]
- Optional. The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF-8. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (','). Corresponds to the JSON property `fieldDelimiter`. @return [String]
- Optional. Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: trailing columns; JSON: named values that don't match any column names. Corresponds to the JSON property `ignoreUnknownValues`. @return [Boolean]
- Optional. The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. Corresponds to the JSON property `maxBadRecords`. @return [Fixnum]
- Experimental. If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result. Corresponds to the JSON property `projectionFields`. @return [Array<String>]
- Optional. The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true. Corresponds to the JSON property `quote`. @return [String]
- Optional. The schema for the destination table. The schema can be omitted if the destination table already exists, or if you're loading data from Google Cloud Datastore. Corresponds to the JSON property `schema`. @return [Google::Apis::BigqueryV2::TableSchema]
- Deprecated. The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT". Corresponds to the JSON property `schemaInline`. @return [String]
- Deprecated. The format of the schemaInline property. Corresponds to the JSON property `schemaInlineFormat`. @return [String]
- Optional. The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. Corresponds to the JSON property `skipLeadingRows`. @return [Fixnum]
- Optional. The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". The default value is CSV. Corresponds to the JSON property `sourceFormat`. @return [String]
- Required. The fully-qualified URIs that point to your data in Google Cloud Storage. Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Corresponds to the JSON property `sourceUris`. @return [Array<String>]
- Optional. Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result. The default value is WRITE_APPEND. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion. Corresponds to the JSON property `writeDisposition`. @return [String]
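Taken together, these properties describe one load job. The following is a minimal sketch, not a definitive recipe, of configuring a CSV load from Cloud Storage with this class; the project, dataset, table, and bucket names are hypothetical placeholders.

require 'google/apis/bigquery_v2'

# A minimal sketch of a CSV load configuration; all names below are hypothetical.
load_config = Google::Apis::BigqueryV2::JobConfigurationLoad.new(
  source_uris: ['gs://my-bucket/exports/part-*.csv'], # one '*' wildcard, after the bucket name
  source_format: 'CSV',
  destination_table: Google::Apis::BigqueryV2::TableReference.new(
    project_id: 'my-project',
    dataset_id: 'my_dataset',
    table_id: 'my_table'
  ),
  skip_leading_rows: 1,                   # skip the CSV header row
  field_delimiter: ',',
  create_disposition: 'CREATE_IF_NEEDED', # create the table if it does not exist
  write_disposition: 'WRITE_APPEND',      # append to existing table data
  max_bad_records: 0                      # require every record to be valid
)

In the wider client, an object like this is typically assigned to the `load` property of a Google::Apis::BigqueryV2::JobConfiguration before the job is submitted.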
Public Class Methods
# File generated/google/apis/bigquery_v2/classes.rb, line 1257
def initialize(**args)
  update!(**args)
end
Public Instance Methods
Update properties of this object
# File generated/google/apis/bigquery_v2/classes.rb, line 1262
def update!(**args)
  @allow_jagged_rows = args[:allow_jagged_rows] if args.key?(:allow_jagged_rows)
  @allow_quoted_newlines = args[:allow_quoted_newlines] if args.key?(:allow_quoted_newlines)
  @create_disposition = args[:create_disposition] if args.key?(:create_disposition)
  @destination_table = args[:destination_table] if args.key?(:destination_table)
  @encoding = args[:encoding] if args.key?(:encoding)
  @field_delimiter = args[:field_delimiter] if args.key?(:field_delimiter)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @projection_fields = args[:projection_fields] if args.key?(:projection_fields)
  @quote = args[:quote] if args.key?(:quote)
  @schema = args[:schema] if args.key?(:schema)
  @schema_inline = args[:schema_inline] if args.key?(:schema_inline)
  @schema_inline_format = args[:schema_inline_format] if args.key?(:schema_inline_format)
  @skip_leading_rows = args[:skip_leading_rows] if args.key?(:skip_leading_rows)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
  @write_disposition = args[:write_disposition] if args.key?(:write_disposition)
end
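Because update! only assigns the keys that are actually passed, it can be used to adjust an existing configuration in place. A brief sketch, reusing the hypothetical load_config from above:

# Only the keys passed here are updated; all other properties are left untouched.
load_config.update!(
  write_disposition: 'WRITE_TRUNCATE', # overwrite the table instead of appending
  allow_quoted_newlines: true          # needed if quoted CSV fields contain newlines
)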