In recent years, the task of sequence-to-sequence neural abstractive summarization has gained considerable attention. Many novel strategies have been used to improve the saliency, human readability, and consistency of these models, resulting in high-quality summaries. However, because the majority of these pretrained models were trained on news datasets, they carry an inherent bias. One such bias is that most of the generated summaries draw from the start or end of the text, much as a news story might be summarised.
Text documents are rich repositories of causal knowledge. While journal publications typically contain analytical explanations of observations based on scientific experiments conducted by researchers, analyst reports, news articles, and even consumer-generated text contain not only the viewpoints of their authors but often causal explanations for those viewpoints as well.