I have debugged the nanodbc method call.
I think the bug is in nanodbc: when the query executes successfully and Urho3D tries to bind a primitive data type supported by its Variant class, it calls this method:
// All other types are stored using their string representation in the Variant
colValues[i] = result.resultImpl_.get<nanodbc::string_type>((short)i).c_str();
From there we step into the method below, where nanodbc reads the row's data into the buffer:
template<>
inline void result::result_impl::get_ref_impl<string_type>(short column, string_type& result) const
{
    const bound_column& col = bound_columns_[column];
    const SQLULEN column_size = col.sqlsize_;
    switch(col.ctype_)
    {
        case SQL_C_CHAR:
        {
            if(col.blob_)
            {
                // Input is always std::string, while output may be std::string or std::wstring
                std::stringstream ss;
                char buff[1024] = {0};
                std::size_t buff_size = sizeof(buff);
                SQLLEN ValueLenOrInd;
                SQLRETURN rc;
                void* handle = native_statement_handle();
                do
                {
                    NANODBC_CALL_RC(
                        SQLGetData
                        , rc
                        , handle            // StatementHandle
                        , column + 1        // Col_or_Param_Num
                        , SQL_C_CHAR        // TargetType
                        , buff              // TargetValuePtr
                        , buff_size         // BufferLength
                        , &ValueLenOrInd);  // StrLen_or_IndPtr
                    if (ValueLenOrInd > 0)
                        ss << buff;
                } while(rc > 0);
                convert(ss.str(), result);
            }
            else
            {
                const char* s = col.pdata_ + rowset_position_ * col.clen_;
                const std::string::size_type str_size = std::strlen(s);
                result.assign(s, s + str_size);
            }
            return;
        }
....
....
....
I have checked the do { … } while(rc > 0); loop. I noticed that if the text is 1787 characters long, the first iteration fetches the first 1024 characters, leaving 715 characters to read.
However, it looks like the second iteration does not continue from character 1025 but starts reading from the beginning again. As a result we get the wrong value I described previously.
I suppose the bug is in how nanodbc reads the row's data into the buffer, or it is just a consequence of the query being executed incorrectly.
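
For reference, below is a minimal sketch (my own, not nanodbc's code or its fix) of how a chunked SQLGetData read is conventionally written according to the ODBC documentation: each successive call on the same column is supposed to continue where the previous one stopped, returning SQL_SUCCESS_WITH_INFO (SQLSTATE 01004) while data remains. The function name and parameters are illustrative only.

// A minimal sketch of the conventional ODBC chunked read (illustrative only,
// not nanodbc's implementation). Assumes a statement handle positioned on a row.
#include <sql.h>
#include <sqlext.h>
#include <cstddef>
#include <stdexcept>
#include <string>

std::string read_long_char_column(SQLHSTMT stmt, SQLUSMALLINT column_one_based)
{
    std::string out;
    char buff[1024];
    SQLLEN len_or_ind = 0;

    for (;;)
    {
        // Each call is supposed to return the *next* part of the column's data.
        SQLRETURN rc = SQLGetData(stmt, column_one_based, SQL_C_CHAR,
                                  buff, sizeof(buff), &len_or_ind);
        if (rc == SQL_NO_DATA)              // all parts already returned
            break;
        if (!SQL_SUCCEEDED(rc))             // SQL_ERROR / SQL_INVALID_HANDLE
            throw std::runtime_error("SQLGetData failed");
        if (len_or_ind == SQL_NULL_DATA)    // column value is NULL
            break;

        // len_or_ind reports the bytes still available *before* this call
        // (or SQL_NO_TOTAL); the chunk actually written is capped by the
        // buffer size minus the null terminator the driver appends.
        const std::size_t chunk =
            (len_or_ind == SQL_NO_TOTAL || len_or_ind >= static_cast<SQLLEN>(sizeof(buff)))
                ? sizeof(buff) - 1
                : static_cast<std::size_t>(len_or_ind);
        out.append(buff, chunk);

        if (rc == SQL_SUCCESS)              // no truncation: this was the last chunk
            break;
        // rc == SQL_SUCCESS_WITH_INFO (SQLSTATE 01004): more data remains, loop again.
    }
    return out;
}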
I will file a bug in the nanodbc project.
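
To go with the report, here is a minimal repro sketch that goes through the public nanodbc API only; the DSN, table, and column names below are placeholders, not from my actual setup.

// Minimal repro sketch using only the public nanodbc API; the connection
// string, table, and column names are placeholders. Older nanodbc releases
// install the header as <nanodbc.h>, newer ones as <nanodbc/nanodbc.h>.
#include <nanodbc.h>
#include <iostream>

int main()
{
    nanodbc::connection conn(NANODBC_TEXT("DSN=mydb;UID=user;PWD=secret"));
    nanodbc::result row = nanodbc::execute(
        conn, NANODBC_TEXT("SELECT long_text_column FROM some_table"));
    while (row.next())
    {
        // This ends up in result_impl::get_ref_impl<string_type>() shown above.
        nanodbc::string_type value = row.get<nanodbc::string_type>(0);
        std::cout << "read " << value.size() << " characters" << std::endl;
    }
    return 0;
}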