We are running short on space on our Hadoop cluster, which is set up as follows:
1x 1TB HDD, mounted at / <- Ubuntu system partition
3x 1.5TB HDD, mounted at /data1 /data2 /data3 <- HDFS data volumes
The system partition is almost unused (97% free) and will not be used for anything unrelated to Hadoop.
Is it safe to add the system partition as an HDFS data dir in the DataNode configuration?
I'm afraid that Hadoop will fill up the partition and make the system unusable.
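For reference, what I have in mind is simply appending a directory on the system partition to `dfs.datanode.data.dir`. I've also seen the `dfs.datanode.du.reserved` property, which is supposed to reserve space per volume for non-HDFS use; a rough sketch of what the config might look like (`/hdfs-root` is a hypothetical directory on the system partition):

```xml
<!-- hdfs-site.xml (sketch) -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1,/data2,/data3,/hdfs-root</value>
</property>
<property>
  <!-- Reserve 500 GB per volume for non-HDFS use; the DataNode
       should stop writing to a volume once free space drops below this. -->
  <name>dfs.datanode.du.reserved</name>
  <value>536870912000</value>
</property>
```

I'm not sure whether `dfs.datanode.du.reserved` alone makes this safe enough in practice.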
The clean way would probably be to set up separate LVM volumes or to re-partition the disk, but I would rather avoid going that route.
Does Hadoop respect Unix quotas? E.g., if I add a directory from the system partition and restrict the Hadoop user via quota to use only, say, 0.5 TB, would that help?
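To make the quota idea concrete, the restriction I have in mind would look roughly like this (a sketch, assuming the root filesystem is ext4 mounted with the `usrquota` option and the DataNode runs as user `hdfs`; user name and limits are placeholders):

```
# Cap the hdfs user at ~0.5 TB (512 GiB) on the root filesystem.
# setquota takes block limits in 1 KiB blocks:
#   512 GiB = 512 * 1024 * 1024 KiB = 536870912
sudo setquota -u hdfs 536870912 536870912 0 0 /
```

What I don't know is whether the DataNode handles hitting such a quota gracefully, or whether it would just start failing writes in an uncontrolled way.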