Environment details
OS type and version: Linux
Java version: 1.8
google-cloud-spanner-jdbc version(s): 2.20.1
Steps to reproduce
I'm not at liberty to share the code. However, when attempting to use DataNucleus with a Cloud Spanner JDBC driver newer than 2.8.0, DataNucleus is unable to read the type info from the JDBC connection.
Code example
When DataNucleus loads a JDBC connection, it reads the type info from the connection; this type info is then immediately cast to a short in DataNucleus (1). With GoogleSQL and its support for JSON, that cast crashes when trying to convert 100011, the JSON vendor type code, to a short (2).
Caused by: com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcSqlExceptionImpl: OUT_OF_RANGE: Value out of range for short: 100011
at com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory.of(JdbcSqlExceptionFactory.java:247) ~[google-cloud-spanner-jdbc-2.20.1-single-jar-with-dependencies.jar:2.20.1]
at com.google.cloud.spanner.jdbc.AbstractJdbcWrapper.checkedCastToShort(AbstractJdbcWrapper.java:237) ~[google-cloud-spanner-jdbc-2.20.1-single-jar-with-dependencies.jar:2.20.1]
at com.google.cloud.spanner.jdbc.JdbcResultSet.getShort(JdbcResultSet.java:330) ~[google-cloud-spanner-jdbc-2.20.1-single-jar-with-dependencies.jar:2.20.1]
at org.datanucleus.store.rdbms.schema.SQLTypeInfo.<init>(SQLTypeInfo.java:133) ~[datanucleus-rdbms-4.1.1.jar:?]
... 59 more
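The failure can be reproduced in isolation. The sketch below is a simplified, hypothetical replica of the range check performed by the driver's checkedCastToShort (per the stack trace it lives in AbstractJdbcWrapper; the real implementation throws a JdbcSqlException, not IllegalArgumentException):

```java
public class CheckedCastDemo {
    // Simplified approximation of AbstractJdbcWrapper#checkedCastToShort;
    // names and exception type are illustrative, not the driver's exact code.
    public static short checkedCastToShort(long value) {
        if (value > Short.MAX_VALUE || value < Short.MIN_VALUE) {
            throw new IllegalArgumentException(
                "OUT_OF_RANGE: Value out of range for short: " + value);
        }
        return (short) value;
    }

    public static void main(String[] args) {
        // java.sql.Types constants fit comfortably in a short...
        System.out.println(checkedCastToShort(java.sql.Types.VARCHAR)); // prints 12
        // ...but Spanner's JSON vendor type code (100011) does not:
        try {
            checkedCastToShort(100011L);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```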
The question is: can we bring the 100,000 addition down to something below 32,000? Would that be a breaking change? This would keep compatibility with JDBC consumers that assume type codes always fit in a short.
@Noremac201 Thanks for the detailed and clear report.
The question is, can we either bring down the 100,000 addition to 32,000 ? Would that be a breaking change?
Yeah, I don't think we can safely do that. There may be applications out there that have these type codes hard-coded, and changing the actual type code in the driver could then lead to failures, as the driver checks that the type code an application uses is valid.
I do, however, think we can fix this in a different way:

1. The standard type code remains the current 100,000 + internal code.
2. In addition, we'll define a type code that fits in a short. This type code will be the unchecked value of (short) VENDOR_TYPE_CODE. For JSON, this will be -31061.
3. Both the short vendor code and the normal vendor code will be accepted.
4. We'll remove the checked cast specifically for the result set that is returned by DatabaseMetaData#getTypeInfo(). This means that anyone calling ResultSet#getShort("DATA_TYPE") will get the short vendor code, and can freely use that with the driver.
The ResultSet that is returned by DatabaseMetaData#getTypeInfo has a column at index 2 with the name DATA_TYPE. This field should contain one of the java.sql.Types constants, or a vendor-specific type code. The JDBC specification states that this column should be a `short` (although the constants in java.sql.Types are of type `int`).

Cloud Spanner (at the time of writing) does not support any int16 fields, so the type code is returned as an int64. The codes that Spanner uses for vendor-specific types exceed the max value of a `short`, and therefore calling `ResultSet#getShort(int)` on this column for any of the Spanner-specific types (e.g. JSON) resulted in an OUT_OF_RANGE exception.

This change fixes that by adding an additional vendor type code for these types that does fit in a `short`. This value is returned when `getShort(int)` is called on the ResultSet.
Fixes #1688
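For consumers that want to map a short-form code back to the full vendor code, one illustrative helper (not part of any library) could look like this, assuming the 100,000-offset scheme described above. Note that some java.sql.Types constants are themselves negative, so a real implementation would need extra logic to distinguish them; this sketch ignores that:

```java
public class TypeCodeNormalizer {
    // Assumed offset scheme from the thread: vendor codes are 100,000 + internal code.
    public static final int VENDOR_OFFSET = 100000;

    // Illustrative only: map either the full vendor code or its short form
    // back to a canonical int code.
    public static int normalize(int code) {
        if (code >= 0) {
            // Already the full code (or a non-negative java.sql.Types constant).
            return code;
        }
        // A negative code is assumed to be the short form: recover the unsigned
        // low 16 bits and re-apply the high bits lost in the narrowing cast.
        return (code & 0xFFFF) + (VENDOR_OFFSET & ~0xFFFF);
    }
}
```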