Can you send us the Spark parameters with overhead, assuming you are
running with YARN?
 - 864GB
The parameter spark.yarn.executor.memoryOverhead is explained as below:

spark.yarn.executor.memoryOverhead = max(executorMemory * 0.10, 384)

The amount of off-heap memory (in megabytes) to be allocated per executor.
This is memory that accounts for things like VM overheads, interned
strings, other native overheads, etc. This tends to grow with the executor
size.
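As a rough sketch of that formula (using the default 10% factor and 384 MB floor from the Spark-on-YARN defaults; the function name here is illustrative, not a Spark API), the total container size YARN is asked for per executor can be estimated like this:

```python
# Sketch: per-executor YARN container size, assuming the default
# spark.yarn.executor.memoryOverhead = max(0.10 * executorMemory, 384) in MB.

def executor_container_mb(executor_memory_mb, overhead_factor=0.10, overhead_min_mb=384):
    """Return (overhead_mb, total_mb) for one executor."""
    overhead_mb = max(int(executor_memory_mb * overhead_factor), overhead_min_mb)
    return overhead_mb, executor_memory_mb + overhead_mb

# Example: --executor-memory 8g
overhead, total = executor_container_mb(8 * 1024)
print(overhead, total)  # 819 9011

# Small executors hit the 384 MB floor instead of the 10% factor
print(executor_container_mb(1024))  # (384, 1408)
```

Note the overhead comes on top of executor memory, so the YARN container request is larger than the value you pass to --executor-memory.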
Dr Mich Talebzadeh
LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
On Thu, 10 Oct 2019 at 21:39, Nimmi Cv <[EMAIL PROTECTED]> wrote: