Yes, thanks for the input; this is exactly what I tried to do, but it yields:
Could not execute 'CREATE FUNCTION hexstr2int (IN i_hex VARCHAR(2000)) RETURNS o_result BIGINT LANGUAGE SQLSCRIPT SQL ...' in 1.603 seconds .
SAP DBTech JDBC: [7]: feature not supported: Scalar UDF does not support the type as its input/output argument: Varchar3
on the HANA version I currently use in the cloud. It seems that older HANA versions only support primitive data types for scalar UDFs (so VARCHAR is not supported).
Besides, my hope was that HANA provides a fast, internal data type conversion out of the box (or even a numeric hash function), but it seems that hope was in vain. I can imagine many use cases where I would want numeric hash-based partitioning. This restriction feels somewhat outdated given today's memory sizes, where spending 32 bytes to store a hash value is acceptable even if I only plan to have 8 partitions with uniform distribution.
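Just to make the intended logic concrete (outside of HANA, the conversion itself is trivial), here is a minimal Python sketch of the hex-string → BIGINT → partition mapping I'm after. The function names and the 15-digit truncation are my own illustration, not HANA SQLScript:

```python
NUM_PARTITIONS = 8  # the 8 uniform partitions mentioned above

def hexstr2int(hex_str: str) -> int:
    """Parse a hexadecimal string (e.g. an MD5 hash) as an integer."""
    return int(hex_str, 16)

def partition_for(hex_hash: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a hex hash onto one of num_partitions buckets.
    Only the low 15 hex digits are used so the value always fits a
    signed 64-bit BIGINT, which a HANA scalar UDF would also need."""
    return hexstr2int(hex_hash[-15:]) % num_partitions

print(partition_for("d41d8cd98f00b204e9800998ecf8427e"))  # → 6
```

Since an MD5-style hash is uniformly distributed, taking the modulo of its low bits keeps the partition distribution uniform as well; that is the whole point of wanting this as a cheap built-in conversion.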
Again, Lars, I really appreciate your input; in fact, my procedure doesn't fit my needs, but your UDF does. Unfortunately, it is currently not supported by the HANA cloud version.
Any clever tricks to solve this with HANA-internal type conversion?
Best regards,
Bodo