You may want to revisit the reporting requirement. You are trying to produce 200 columns across 2 million (20 lakh) records and dump them into a user report. This is bound to run out of memory, considering that this is only the final output; the data set selected from the database might be even larger, since it still has to be processed to arrive at that final output.
Hence, you may consider splitting the report into multiple smaller reports (horizontally by column groups, or vertically by row subsets) to arrive at a manageable data set.
In my opinion, it is unwise to use HANA as a "data dump generator"; instead, educate the users to use HANA in a more effective way, e.g. by filtering and aggregating in the database rather than exporting raw rows.
But coming back to your issue: either split the data set into manageable chunks or increase the HANA memory.
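The chunked approach can be sketched with the standard Python DB-API 2.0 interface (which SAP's hdbcli driver also implements). The example below is a minimal sketch, assuming a client-side export script; it uses an in-memory SQLite database as a stand-in for HANA so it is runnable anywhere, and the `report` table and its columns are hypothetical.

```python
import sqlite3


def export_in_chunks(conn, query, chunk_size=50000):
    """Stream query results in fixed-size chunks via fetchmany()
    instead of materializing the whole result set in memory."""
    cur = conn.cursor()
    cur.execute(query)
    while True:
        rows = cur.fetchmany(chunk_size)
        if not rows:
            break
        yield rows


# Demo: in-memory SQLite standing in for a HANA connection.
# With hdbcli you would instead use dbapi.connect(address=..., port=...).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (id INTEGER, value TEXT)")
conn.executemany(
    "INSERT INTO report VALUES (?, ?)",
    [(i, "row%d" % i) for i in range(250)],
)

total = 0
for chunk in export_in_chunks(conn, "SELECT id, value FROM report",
                              chunk_size=100):
    total += len(chunk)  # here you would append each chunk to the report file

print(total)
```

Only one chunk is held in client memory at a time, so the export's memory footprint is bounded by `chunk_size` regardless of how many rows the query returns.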
Regards,
Ravi