I'm not sure why or how you would go about "breaking the column down into multiple schemas" - but that's your call anyhow.
For SAP HANA, large columns (as in: many records) don't pose a problem.
Technically, there is a limit of 2 billion records per table, or per partition of a partitioned table.
Currently (SPS 8) you can have 1,000 partitions per table, which would allow you to store 2 trillion (1,000 x 2 billion) records in a single table.
Depending on the data distribution in each of your columns, the records can be compressed very efficiently - so the base operation (the column scan) can be really fast.
As you will see when using SAP HANA, it is more about what your queries look like than about how the data is actually stored.
So, if you ask me: don't try to split a column into multiple schemas. If you really have to do something, partition the table instead.
If that still doesn't let you store your data, you need to think in more detail about the data structures you use and the kind of processing you want to do.
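Just to illustrate what partitioning looks like in practice, here is a minimal sketch of creating a hash-partitioned column table (table and column names are made up; check the reference for your release):

```sql
-- Example only: a column table hash-partitioned into 4 parts on ORDER_ID.
-- HANA distributes records across the partitions based on the hash of the
-- partitioning column; each partition has its own 2-billion-record limit.
CREATE COLUMN TABLE orders (
    order_id   INTEGER,
    order_date DATE,
    amount     DECIMAL(15, 2)
) PARTITION BY HASH (order_id) PARTITIONS 4;
```

Range partitioning (e.g. by date) is also available and is often the better choice when old partitions should be dropped or archived as a whole.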
- Lars