Creating new value by combining materials is an important activity in our society. From daily cooking to industrial manufacturing, procedural texts describe how to carry out such activities, allowing readers to reproduce the procedures.
As pointed out by previous studies on natural language understanding, one important property of procedural text is its context dependency, i.e., the merging operations over materials, which can be represented as a graph or tree structure.
This paper investigates the impact of explicitly introducing such a structure into the vision-and-language task of procedural text generation from an image sequence.
To this end, we propose (1) a new dataset that extends the definition of the merging tree structure to a vision-and-language setting, and (2) a novel structure-aware procedural text generation model that learns the context dependency efficiently.
Experimental results show that the proposed method improves the performance of conventional general-purpose methods.