curl --request POST \
  --url https://developer.synq.io/api/datawarehouse/v1/connection/{connection_id}/upload/{upload_id}/query-logs \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "queryLogs": [
    {
      "queryId": "<string>",
      "createdAt": "2023-11-07T05:31:56Z",
      "workspace": "<string>",
      "integrationId": "<string>",
      "connectionId": "<string>",
      "startedAt": "2023-11-07T05:31:56Z",
      "finishedAt": "2023-11-07T05:31:56Z",
      "sql": "<string>",
      "sqlHash": "<string>",
      "normalizedQueryHash": "<string>",
      "sqlDialect": "<string>",
      "queryType": "<string>",
      "status": "<string>",
      "dwhContext": {
        "instance": "<string>",
        "database": "<string>",
        "schema": "<string>",
        "warehouse": "<string>",
        "user": "<string>",
        "role": "<string>",
        "cluster": "<string>"
      },
      "sqlObfuscationMode": "SQL_OBFUSCATION_MODE_NONE",
      "hasCompleteNativeLineage": true,
      "isTruncated": true,
      "metadata": {},
      "nativeLineage": {
        "inputTables": [
          {
            "objectName": "<string>",
            "instanceName": "<string>",
            "databaseName": "<string>",
            "schemaName": "<string>"
          }
        ],
        "outputTables": [
          {
            "objectName": "<string>",
            "instanceName": "<string>",
            "databaseName": "<string>",
            "schemaName": "<string>"
          }
        ]
      }
    }
  ]
}
'

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
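The curl request above can also be assembled programmatically; a minimal Python sketch using only the standard library (the connection_id, upload_id, token, and query-log values are placeholders you must supply):

```python
import json

def build_request(connection_id: str, upload_id: str, token: str, query_logs: list) -> tuple:
    """Assemble the URL, headers, and JSON body for the query-logs upload."""
    url = (
        "https://developer.synq.io/api/datawarehouse/v1/"
        f"connection/{connection_id}/upload/{upload_id}/query-logs"
    )
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"queryLogs": query_logs})
    return url, headers, body

# Placeholders; substitute your own values.
url, headers, body = build_request("<connection_id>", "<upload_id>", "<token>", [
    {
        "queryId": "q-123",
        "createdAt": "2023-11-07T05:31:56Z",
        "sql": "SELECT 1",
        "status": "SUCCESS",
    }
])
# To send, e.g. with the third-party 'requests' package:
# requests.post(url, headers=headers, data=body)
```

The send itself is left commented out so the sketch stays dependency-free; any HTTP client that can POST a JSON body with a Bearer token will do.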
Query identifiers
A Timestamp represents a point in time independent of any time zone or local calendar, encoded as a count of seconds and fractions of seconds at nanosecond resolution. The count is relative to an epoch at UTC midnight on January 1, 1970, in the proleptic Gregorian calendar which extends the Gregorian calendar backwards to year one.
All minutes are 60 seconds long. Leap seconds are "smeared" so that no leap second table is needed for interpretation, using a 24-hour linear smear.
The range is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z. By restricting to that range, we ensure that we can convert to and from RFC 3339 date strings.
Example 1: Compute Timestamp from POSIX time().
Timestamp timestamp;
timestamp.set_seconds(time(NULL));
timestamp.set_nanos(0);

Example 2: Compute Timestamp from POSIX gettimeofday().
struct timeval tv;
gettimeofday(&tv, NULL);
Timestamp timestamp;
timestamp.set_seconds(tv.tv_sec);
timestamp.set_nanos(tv.tv_usec * 1000);

Example 3: Compute Timestamp from Win32 GetSystemTimeAsFileTime().
FILETIME ft;
GetSystemTimeAsFileTime(&ft);
UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
// A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
// is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
Timestamp timestamp;
timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));

Example 4: Compute Timestamp from Java System.currentTimeMillis().
long millis = System.currentTimeMillis();
Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
.setNanos((int) ((millis % 1000) * 1000000)).build();

Example 5: Compute Timestamp from Java Instant.now().
Instant now = Instant.now();
Timestamp timestamp =
Timestamp.newBuilder().setSeconds(now.getEpochSecond())
.setNanos(now.getNano()).build();

Example 6: Compute Timestamp from current time in Python.
timestamp = Timestamp()
timestamp.GetCurrentTime()

In JSON format, the Timestamp type is encoded as a string in the RFC 3339 format. That is, the format is "{year}-{month}-{day}T{hour}:{min}:{sec}[.{frac_sec}]Z" where {year} is always expressed using four digits while {month}, {day}, {hour}, {min}, and {sec} are zero-padded to two digits each. The fractional seconds, which can go up to 9 digits (i.e. up to 1 nanosecond resolution), are optional. The "Z" suffix indicates the timezone ("UTC"); the timezone is required. A proto3 JSON serializer should always use UTC (as indicated by "Z") when printing the Timestamp type and a proto3 JSON parser should be able to accept both UTC and other timezones (as indicated by an offset).
For example, "2017-01-15T01:30:15.01Z" encodes 15.01 seconds past 01:30 UTC on January 15, 2017.
In JavaScript, one can convert a Date object to this format using the standard toISOString() method. In Python, a standard datetime.datetime object can be converted to this format using strftime with the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one can use Joda Time's ISODateTimeFormat.dateTime() to obtain a formatter capable of generating timestamps in this format.
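As a concrete illustration of the Python conversion just described, using the timestamp from the RFC 3339 example above (note that %f always emits six fractional digits):

```python
from datetime import datetime, timezone

# The value from the example: 2017-01-15T01:30:15.01Z (10000 microseconds = .01 s)
dt = datetime(2017, 1, 15, 1, 30, 15, 10000, tzinfo=timezone.utc)

# Encode with the format spec given above.
encoded = dt.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
# encoded == "2017-01-15T01:30:15.010000Z"

# Round-trip back to a datetime to verify the encoding is lossless.
decoded = datetime.strptime(encoded, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)
```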
Workspace and integration identifiers (for multi-tenancy)
Empty for direct connections, populated for agent uploads
Query start time (optional, uses created_at if not set)
Query finish time (optional, uses created_at if not set)
Query content SQL text (may be obfuscated based on sql_obfuscation_mode)
SHA256 hash of original SQL for deduplication (computed during storage if not provided)
Hash of normalized query for lineage caching (empty if not available from platform)
SQL dialect (e.g., "snowflake", "bigquery", "clickhouse")
Platform-specific query type (e.g., "CREATE_TABLE_AS_SELECT", "SELECT")
Execution status: "SUCCESS", "FAILED", "CANCELED"
DWH execution context
Instance identifier (account, workspace_url, host, etc.)
Database/catalog name
Schema name
Warehouse identifier (Snowflake, Databricks)
User who executed the query
Role used for execution
Cluster identifier (Redshift, ClickHouse)
Obfuscation and parsing hints
Available values: SQL_OBFUSCATION_MODE_NONE, SQL_OBFUSCATION_MODE_REDACT_LITERALS
If true, native lineage is complete and SQL parsing can be skipped
If true, SQL was truncated by the warehouse
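For intuition about what SQL_OBFUSCATION_MODE_REDACT_LITERALS implies, here is a deliberately naive sketch of literal redaction. This is not SYNQ's actual obfuscation algorithm (which is not specified here); a real implementation would use a proper SQL parser rather than regular expressions:

```python
import re

def redact_literals(sql: str) -> str:
    """Illustrative only: replace string and numeric literals with '?'.
    A production implementation should use a real SQL parser."""
    sql = re.sub(r"'[^']*'", "?", sql)          # single-quoted string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)  # integer and decimal literals
    return sql

redacted = redact_literals("SELECT * FROM orders WHERE id = 42 AND region = 'eu-west'")
# redacted == "SELECT * FROM orders WHERE id = ? AND region = ?"
```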
Platform-specific metadata (arbitrary key-value pairs). Contains execution metrics, costs, etc., depending on the platform.
Value represents a dynamically typed value which can be either
null, a number, a string, a boolean, a recursive struct value, or a
list of values. A producer of value is expected to set one of these
variants. Absence of any variant indicates an error.
The JSON representation for Value is JSON value.
Native lineage from the platform (if available)
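Putting the fields above together, a client might assemble a queryLogs entry like this. The UTF-8/SHA-256 hashing convention for sqlHash is an assumption on our part (the API computes it during storage if omitted), and the dialect, query type, and metadata keys are illustrative values:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_query_log(query_id: str, sql: str, status: str = "SUCCESS") -> dict:
    """Build one queryLogs entry. The UTF-8/SHA-256 convention for sqlHash is
    an assumption; the service computes the hash if it is not provided."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {
        "queryId": query_id,
        "createdAt": now,
        "sql": sql,
        "sqlHash": hashlib.sha256(sql.encode("utf-8")).hexdigest(),
        "sqlDialect": "snowflake",          # illustrative dialect
        "queryType": "SELECT",              # illustrative platform query type
        "status": status,
        "sqlObfuscationMode": "SQL_OBFUSCATION_MODE_NONE",
        "hasCompleteNativeLineage": False,
        "isTruncated": False,
        "metadata": {"bytesScanned": 1024},  # arbitrary platform-specific keys
    }

entry = make_query_log("q-1", "SELECT 1")
payload = json.dumps({"queryLogs": [entry]})
```

Because the hash is computed from the SQL text alone, identical statements produce identical sqlHash values, which is what makes it usable for deduplication.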
Success
The response is of type IngestQueryLogsResponse (object).